subtree updates

meta-raspberrypi: b6a1645a97..c57b464b88:
  Lluis Campos (1):
        rpi-cmdline: do_compile: Use pure Python syntax to get `CMDLINE`

meta-openembedded: 2eb39477a7..a755af4fb5:
  Adrian Zaharia (1):
        lapack: add packageconfig for lapacke

  Akash Hadke (1):
        polkit: Add --shell /bin/nologin to polkitd user

  Alex Kiernan (3):
        ntpsec: Add UPSTREAM_CHECK_URI
        libgpiod: Detect ptest using PTEST_ENABLED
        ostree: Cleanup PACKAGECONFIGs

  Anuj Mittal (1):
        yasm: fix buildpaths warning

  Atanas Bunchev (1):
        python3-twitter: Upgrade 4.8.0 -> 4.10.1

  Bartosz Golaszewski (4):
        imagemagick: add PACKAGECONFIG for C++ bindings
        python3-matplotlib: don't use PYTHON_PN
        python3-matplotlib: add packaging to RDEPENDS
        python3-matplotlib: bump to 3.5.2

  Bruce Ashfield (3):
        vboxguestdrivers: fix build against 5.19 kernel / libc-headers
        zfs: update to v2.1.5
        vboxguestdrivers: make kernel shared directory dependency explicit

  Carsten Bäcker (1):
        spdlog: Fix CMake flag

  Changqing Li (3):
        fuse3: support ptest
        redis: fix do_patch fuzz warning
        dlt-daemon: fix dlt-system.service failed since buffer overflow

  Clément Péron (1):
        python: add Pydantic data validation package

  Devendra Tewari (1):
        android-tools: sleep more in android-gadget-start

  Ed Tanous (1):
        Add python-requests-unixsocket recipe

  Enguerrand de Ribaucourt (1):
        mdio-tools: add recipes

  Etienne Cordonnier (1):
        uutils-coreutils: add recipe

  Jagadeesh Krishnanjanappa (4):
        python3-asgiref: add recipe
        python3-django: make 3.2.x as default version
        python3-django: Add python3-asgiref runtime dependency
        python3-django: remove 2.2.x recipe

  Jan Luebbe (2):
        chrony: add support for config and source snippet includes
        gensio: upgrade 2.3.1 -> 2.5.2

  Jan Vermaete (1):
        makeself: added makeself as new recipe

  Jim Broadus (1):
        networkmanager: fix iptables and nft paths

  Jose Quaresma (2):
        wireguard-module: 1.0.20210219 -> 1.0.20220627
        wireguard-tools: Add a new package for wg-quick

  Julian Haller (2):
        pcsc-lite: upgrade 1.9.0 -> 1.9.8
        ccid: upgrade 1.4.33 -> 1.5.0

  Justin Bronder (1):
        lmdb: only set SONAME on the shared library

  Khem Raj (61):
        mariadb: Inherit pkgconfig
        mariadb: Add packageconfig for lz4 and enable it
        ibus: Swith to use main branch instead of master
        kronosnet: Upgrade to 1.24
        ostree: Upgrade to 2022.5 release
        sdbus-c++-libsystemd: Fix build with glibc 2.36
        xfstests: Upgrade to v2022.07.10
        autofs: Fix build with glibc 2.36
        audit: Upgrade to 3.0.8 and fix build with linux 5.17+
        pcp: Add to USERADD_PACKAGES instead of override
        mozjs: Use RUST_HOST_SYS and RUST_TARGET_SYS
        fluentbit: Fix build with clang
        audit: Fix build with musl
        fluentbit: Fix build with musl
        klibc: Upgrade to 2.0.10
        gnome-keyring,cunit,xfce4-panel: Do not inherit remove-libtool class here
        mpd: Update to 0.23.8
        openipmi: Enable largefile cflags
        proftpd: Always enable largefile support
        netperf: Always enable largefile support
        openipmi: Always enable largefile support
        unbound: Always enable largefile support
        sysbench: Always enable largefile support
        libmtp: Always enable largefile support
        toybox: Fix build with glibc 2.36+
        xfstests: Upgrade to 2022.07.31 release
        libmpd: Fix function returns and casts
        audit: Revert the tweak done in configure step in do_install
        mpd: Upgrade to 0.23.9
        fluentbit: Use CMAKE_C_STANDARD_LIBRARIES cmake var to pass libatomic
        fluentbit: Upgrade to 1.9.7 and fix build on x86
        klibc: Fix build with kernel 5.19 headers
        ntpsec: Add -D_GNU_SOURCE and fix building with devtool
        gd: Fix build with clang-15
        cpulimit: Define -D_GNU_SOURCE
        safec: Remove unused variable 'len'
        ncftp: Enable autoreconf
        ncftp: Fix TMPDIR path embedding into ncftpget
        libb64: Switch to github fork and upgrade to 2.0.0.1+git
        dhrystone: Disable warnings as errors with clang
        dibbler: Fix build with musl
        fio: Fix additional warnings seen with musl
        ssmtp: Fix null pointer assignments
        gst-editing-services: Add recipe
        rygel: Upgrade to 0.40.4
        libesmtp: Define _GNU_SOURCE
        python3-grpcio: Enable largefile support explicitly
        libteam: Include missing headers for strrchr and memcmp
        neon: Upgrade to 0.32.2
        satyr: Fix build on musl/clang
        libmusicbrainz: Avoid -Wnonnull warning
        aom: Upgrade to 3.4.0
        vorbis-tools: Fix build on musl
        dvb-apps: Use tarball for SRC_URI and fix build on musl
        python3-netifaces: Fix build with python3 and musl
        python3-pyephem: Fix build with python3 and musl
        samba: Fix warnings in configure tests for rpath checks
        lirc: Fix build on musl
        mongodb: Fix boost build with clang-15
        crda: Fix build with clang-15
        monkey: Fix build with musl

  Lei Maohui (2):
        dnf-plugin-tui: Fix somw issue in postinstall process.
        xrdp: Fix buildpaths warning.

  Leon Anavi (16):
        python3-nocasedict: Upgrade 1.0.3 -> 1.0.4
        python3-frozenlist: Upgrade 1.3.0 -> 1.3.1
        python3-networkx: Upgrade 2.8.4 -> 2.8.5
        python3-pyhamcrest: Upgrade 2.0.3 -> 2.0.4
        python3-aiohue: Upgrade 4.4.2 -> 4.5.0
        python3-pyperf: Upgrade 2.3.0 -> 2.4.1
        python3-eth-abi: Upgrade 3.0.0 -> 3.0.1
        python3-cytoolz: Upgrade 0.11.2 -> 0.12.0
        python3-yarl: Upgrade 1.7.2 -> 1.8.1
        python3-term: Upgrade 2.3 -> 2.4
        python3-coverage: Upgrade 6.4.1 -> 6.4.4
        python3-regex: Upgrade 2022.7.25 -> 2022.8.17
        python3-awesomeversion: Upgrade 22.6.0 -> 22.8.0
        python3-typed-ast: Upgrade 1.5.2 -> 1.5.4
        python3-prompt-toolkit: Upgrade 3.0.24 -> 3.0.30
        python3-prettytable: Upgrade 3.1.1 -> 3.3.0

  Markus Volk (6):
        libass: update to v1.16.0
        spdlog: update to v1.10.0
        waylandpp: add recipe
        wireplumber: update to v0.4.11
        pipewire: update to v0.3.56
        pipewire: improve runtime dependency settings

  Marta Rybczynska (1):
        polkit: update patches for musl compilation

  Matthias Klein (1):
        libftdi: update to 1.5

  Mike Crowe (1):
        yasm: Only depend on xmlto when docs are enabled

  Mike Petersen (1):
        sshpass: add recipe

  Mingli Yu (10):
        net-snmp: set ac_cv_path_PSPROG
        postgresql: Fix the buildpaths issue
        freeradius: Fix buildpaths issue
        openipmi: Fix buildpaths issue
        apache2: Fix the buildpaths issue
        frr: fix buildpaths issue
        nspr: fix buildpaths issue
        liblockfile: fix buildpaths issue
        freediameter: fix buildpaths issue
        postgresql: make sure pam conf installed when pam enabled

  Ovidiu Panait (1):
        net-snmp: upgrade 5.9.1 -> 5.9.3

  Paulo Neves (1):
        fluentbit Upgrade to 1.3.5 -> 1.9.6

  Philip Balister (2):
        python3-pybind11: Update to Version 2.10.0.
        Remove dead link and old information from the README.

  Potin Lai (7):
        libplist: add libplist_git.bb
        libimobiledevice-glue: SRCREV bump bc6c44b..d2ff796
        libimobiledevice: add libimobiledevice_git.bb
        libirecovery: SRCREV bump e190945..ab5b4d8
        libusbmuxd: add libusbmuxd_git.bb
        usbmuxd: add usbmuxd_git.bb
        idevicerestore: SRCREV bump 280575b..7d622d9

  Richard Purdie (1):
        lmdb: Don't inherit base

  Sam Van Den Berge (1):
        python3-jsonrpcserver: add patch to use importlib.resources instead of pkg_resources

  Saul Wold (10):
        libipc-signal-perl: Fix LICENSE string
        libdigest-hmac-perl: Fix LICENSE string
        libio-socket-ssl-perl: Fix LICENSE string
        libdigest-sha1-perl: Fix LICENSE string
        libmime-types-perl: Fix LICENSE string
        libauthen-sasl-perl: Fix LICENSE string
        libnet-ldap-perl: Fix LICENSE string
        libxml-libxml-perl: Fix LICENSE string
        libnet-telnet-perl: Fix LICENSE string
        libproc-waitstat-perl: Fix LICENSE string

  Sean Anderson (2):
        image_types_sparse: Pad source image to block size
        image_types_sparse: Generate "don't care" chunks

  Vyacheslav Yurkov (4):
        protobuf: correct ptest dependency
        protobuf: 3.19.4 -> 3.21.5 upgrade
        protobuf: change build system to cmake
        protobuf: disable protoc binary for target

  Wang Mingyu (60):
        cifs-utils: upgrade 6.15 -> 7.0
        geocode-glib: upgrade 3.26.3 -> 3.26.4
        gjs: upgrade 1.72.1 -> 1.72.2
        htpdate: upgrade 1.3.5 -> 1.3.6
        icewm: upgrade 2.9.8 -> 2.9.9
        ipc-run: upgrade 20200505.0 -> 20220807.0
        iwd: upgrade 1.28 -> 1.29
        ldns: upgrade 1.8.1 -> 1.8.2
        libadwaita: upgrade 1.1.3 -> 1.1.4
        libencode-perl: upgrade 3.18 -> 3.19
        libmime-charset-perl: upgrade 1.012.2 -> 1.013.1
        libtest-warn-perl: upgrade 0.36 -> 0.37
        nano: upgrade 6.3 -> 6.4
        nbdkit: upgrade 1.31.15 -> 1.32.1
        netdata: upgrade 1.35.1 -> 1.36.0
        fio: upgrade 3.30 -> 3.31
        nlohmann-json: upgrade 3.10.5 -> 3.11.2
        poco: upgrade 1.12.1 -> 1.12.2
        postgresql: upgrade 14.4 -> 14.5
        poppler: upgrade 22.07.0 -> 22.08.0
        smarty: upgrade 4.1.1 -> 4.2.0
        tracker: upgrade 3.3.2 -> 3.3.3
        uftp: upgrade 5.0 -> 5.0.1
        xdg-user-dirs: upgrade 0.17 -> 0.18
        python3-pycodestyle: upgrade 2.9.0 -> 2.9.1
        python3-pyzmq: upgrade 23.2.0 -> 23.2.1
        python3-setuptools-declarative-requirements: upgrade 1.2.0 -> 1.3.0
        python3-sqlalchemy: upgrade 1.4.39 -> 1.4.40
        python3-werkzeug: upgrade 2.2.1 -> 2.2.2
        python3-xmlschema: upgrade 2.0.1 -> 2.0.2
        python3-yappi: upgrade 1.3.5 -> 1.3.6
        ade: upgrade 0.1.1f -> 0.1.2
        babl: upgrade 0.1.92 -> 0.1.94
        ctags: upgrade 5.9.20220703.0 -> 5.9.20220821.0
        grilo-plugins: upgrade 0.3.14 -> 0.3.15
        ldns: upgrade 1.8.2 -> 1.8.3
        libcurses-perl: upgrade 1.38 -> 1.41
        mosquitto: upgrade 2.0.14 -> 2.0.15
        nbdkit: upgrade 1.32.1 -> 1.33.1
        netdata: upgrade 1.36.0 -> 1.36.1
        libsdl2-ttf: upgrade 2.20.0 -> 2.20.1
        xfstests: upgrade 2022.07.31 -> 2022.08.07
        php: upgrade 8.1.8 -> 8.1.9
        rdma-core: upgrade 41.0 -> 42.0
        spitools: upgrade 1.0.1 -> 1.0.2
        unbound: upgrade 1.16.1 -> 1.16.2
        zlog: upgrade 1.2.15 -> 1.2.16
        python3-hexbytes: upgrade 0.2.3 -> 0.3.0
        python3-pythonping: upgrade 1.1.2 -> 1.1.3
        python3-jsonrpcserver: Add dependence python3-typing-extensions
        feh: upgrade 3.9 -> 3.9.1
        gnome-bluetooth: upgrade 42.2 -> 42.3
        hunspell: upgrade 1.7.0 -> 1.7.1
        gtk4: upgrade 4.6.6 -> 4.6.7
        logwatch: upgrade 7.6 -> 7.7
        bdwgc: upgrade 8.2.0 -> 8.2.2
        tcpreplay: upgrade 4.4.1 -> 4.4.2
        tree: upgrade 2.0.2 -> 2.0.3
        xfsdump: upgrade 3.1.10 -> 3.1.11
        babl: upgrade 0.1.94 -> 0.1.96

  Wolfgang Meyer (1):
        libsdl2-ttf: upgrade 2.0.18 -> 2.20.0

  Xu Huan (18):
        python3-protobuf: upgrade 4.21.3 -> 4.21.4
        python3-pycodestyle: upgrade 2.8.0 -> 2.9.0
        python3-pyflakes: upgrade 2.4.0 -> 2.5.0
        python3-pythonping: upgrade 1.1.1 -> 1.1.2
        python3-regex: upgrade 2022.7.24 -> 2022.7.25
        python3-werkzeug: upgrade 2.2.0 -> 2.2.1
        python3-google-auth: upgrade 2.9.1 -> 2.10.0
        python3-humanize: upgrade 4.2.3 -> 4.3.0
        python3-hexbytes: upgrade 0.2.2 -> 0.2.3
        python3-imageio: upgrade 2.21.0 -> 2.21.1
        python3-nocaselist: upgrade 1.0.5 -> 1.0.6
        python3-protobuf: upgrade 4.21.4 -> 4.21.5
        python3-pycares: upgrade 4.2.1 -> 4.2.2
        python3-fastjsonschema: upgrade 2.16.1 -> 2.16.2
        python3-google-api-python-client: upgrade 2.56.0 -> 2.57.0
        python3-google-auth: upgrade 2.10.0 -> 2.11.0
        python3-grpcio-tools: upgrade 1.47.0 -> 1.48.0
        python3-grpcio: upgrade 1.47.0 -> 1.48.0

  Yi Zhao (5):
        strongswan: upgrade 5.9.6 -> 5.9.7
        libldb: upgrade 2.3.3 -> 2.3.4
        samba: upgrade 4.14.13 -> 4.14.14
        python3-jsonrpcserver: upgrade 5.0.7 -> 5.0.8
        samba: fix buildpaths issue

  wangmy (16):
        gedit: upgrade 42.1 -> 42.2
        libwacom: upgrade 2.3.0 -> 2.4.0
        htpdate: upgrade 1.3.4 -> 1.3.5
        nbdkit: upgrade 1.31.14 -> 1.31.15
        pure-ftpd: upgrade 1.0.50 -> 1.0.51
        avro-c: upgrade 1.11.0 -> 1.11.1
        debootstrap: upgrade 1.0.126 -> 1.0.127
        freerdp: upgrade 2.7.0 -> 2.8.0
        icewm: upgrade 2.9.7 -> 2.9.8
        libmxml: upgrade 3.3 -> 3.3.1
        poco: upgrade 1.12.0 -> 1.12.1
        xfontsel: upgrade 1.0.6 -> 1.1.0
        xmessage: upgrade 1.0.5 -> 1.0.6
        xrefresh: upgrade 1.0.6 -> 1.0.7
        zabbix: upgrade 6.0.5 -> 6.2.1
        xrdp: upgrade 0.9.18 -> 0.9.19

  zhengrq.fnst (4):
        python3-asttokens: upgrade 2.0.7 -> 2.0.8
        python3-charset-normalizer: upgrade 2.1.0 -> 2.1.1
        python3-eth-account: 0.6.1 -> 0.7.0
        python3-cantools: upgrade 37.1.0 -> 37.1.2

  zhengruoqin (12):
        python3-dominate: upgrade 2.6.0 -> 2.7.0
        python3-flask-login: upgrade 0.6.1 -> 0.6.2
        python3-google-api-python-client: upgrade 2.54.0 -> 2.55.0
        python3-haversine: upgrade 2.5.1 -> 2.6.0
        python3-imageio: upgrade 2.19.5 -> 2.21.0
        python3-autobahn: upgrade 22.6.1 -> 22.7.1
        python3-engineio: upgrade 4.3.3 -> 4.3.4
        python3-flask: upgrade 2.1.3 -> 2.2.2
        python3-gcovr: upgrade 5.1 -> 5.2
        python3-google-api-python-client: upgrade 2.55.0 -> 2.56.0
        python3-asttokens: upgrade 2.0.5 -> 2.0.7
        python3-zeroconf: upgrade 0.38.7 -> 0.39.0

meta-security: 2a2d650ee0..10fdc2b13a:
  Anton Antonov (2):
        Use CARGO_TARGET_SUBDIR in do_install
        parsec-service: Update oeqa tests

  Armin Kuster (8):
        python3-privacyidea: update to 3.7.3
        lkrg-module: update to 0.9.5
        apparmor: update to 3.0.6
        packagegroup-core-security: add space for appends
        cryptmount: Add new pkg
        packagegroup-core-security: add pkg to grp
        cyptmount: Fix mount.h conflicts seen with glibc 2.36+
        kas: update testimage inherit

  John Edward Broadbent (1):
        meta-security: Add recipe for Glome

  Mingli Yu (1):
        samhain-standalone: fix buildpaths issue

poky: fc59c28724..9b1db65e7d:
  Alejandro Hernandez Samaniego (1):
        baremetal-image.bbclass: Emulate image.bbclass to handle new classes scope

  Alex Stewart (1):
        maintainers: update opkg maintainer

  Alexander Kanavin (113):
        kmscube: address linux 5.19 fails
        rpm: update 4.17.0 -> 4.17.1
        go: update 1.18.4 -> 1.19
        bluez5: update 5.64 -> 5.65
        python3-pip: update 22.2.1 -> 22.2.2
        ffmpeg: update 5.0.1 -> 5.1
        iproute2: upgrade 5.18.0 -> 5.19.0
        harfbuzz: upgrade 4.4.1 -> 5.1.0
        libwpe: upgrade 1.12.0 -> 1.12.2
        bind: upgrade 9.18.4 -> 9.18.5
        diffoscope: upgrade 218 -> 220
        ell: upgrade 0.51 -> 0.52
        gnutls: upgrade 3.7.6 -> 3.7.7
        iso-codes: upgrade 4.10.0 -> 4.11.0
        kea: upgrade 2.0.2 -> 2.2.0
        kexec-tools: upgrade 2.0.24 -> 2.0.25
        libcap: upgrade 2.64 -> 2.65
        libevdev: upgrade 1.12.1 -> 1.13.0
        libnotify: upgrade 0.8.0 -> 0.8.1
        libwebp: upgrade 1.2.2 -> 1.2.3
        libxcvt: upgrade 0.1.1 -> 0.1.2
        mesa: upgrade 22.1.3 -> 22.1.5
        mobile-broadband-provider-info: upgrade 20220511 -> 20220725
        nettle: upgrade 3.8 -> 3.8.1
        piglit: upgrade to latest revision
        puzzles: upgrade to latest revision
        python3: upgrade 3.10.5 -> 3.10.6
        python3-dtschema: upgrade 2022.7 -> 2022.8
        python3-hypothesis: upgrade 6.50.1 -> 6.54.1
        python3-jsonschema: upgrade 4.9.0 -> 4.9.1
        python3-markdown: upgrade 3.3.7 -> 3.4.1
        python3-setuptools: upgrade 63.3.0 -> 63.4.1
        python3-sphinx: upgrade 5.0.2 -> 5.1.1
        python3-urllib3: upgrade 1.26.10 -> 1.26.11
        sqlite3: upgrade 3.39.1 -> 3.39.2
        sysklogd: upgrade 2.4.0 -> 2.4.2
        webkitgtk: upgrade 2.36.4 -> 2.36.5
        kernel-dev: working with kernel using devtool does not require building and installing eSDK
        sdk-manual: describe how to use extensible SDK functionality directly in a Yocto build
        dropbear: merge .inc into .bb
        rust: update 1.62.0 -> 1.62.1
        cmake: update 3.23.2 -> 3.24.0
        weston: upgrade 10.0.1 -> 10.0.2
        patchelf: update 0.14.5 -> 0.15.0
        patchelf: replace a rejected patch with an equivalent uninative.bbclass tweak
        weston: exclude pre-releases from version check
        tzdata: upgrade 2022a -> 2022b
        libcgroup: update 2.0.2 -> 3.0.0
        python3-setuptools-rust: update 1.4.1 -> 1.5.1
        shadow: update 4.11.1 -> 4.12.1
        slang: update 2.3.2 -> 2.3.3
        xz: update 5.2.5 -> 5.2.6
        gdk-pixbuf: update 2.42.8 -> 2.42.9
        xorgproto: update 2022.1 -> 2022.2
        boost-build-native: update 4.4.1 -> 1.80.0
        boost: update 1.79.0 -> 1.80.0
        vulkan-samples: update to latest revision
        epiphany: upgrade 42.3 -> 42.4
        git: upgrade 2.37.1 -> 2.37.2
        glib-networking: upgrade 2.72.1 -> 2.72.2
        gnu-efi: upgrade 3.0.14 -> 3.0.15
        gpgme: upgrade 1.17.1 -> 1.18.0
        libjpeg-turbo: upgrade 2.1.3 -> 2.1.4
        libwebp: upgrade 1.2.3 -> 1.2.4
        lighttpd: upgrade 1.4.65 -> 1.4.66
        mesa: upgrade 22.1.5 -> 22.1.6
        meson: upgrade 0.63.0 -> 0.63.1
        mpg123: upgrade 1.30.1 -> 1.30.2
        pango: upgrade 1.50.8 -> 1.50.9
        piglit: upgrade to latest revision
        pkgconf: upgrade 1.8.0 -> 1.9.2
        python3-dtschema: upgrade 2022.8 -> 2022.8.1
        python3-more-itertools: upgrade 8.13.0 -> 8.14.0
        python3-numpy: upgrade 1.23.1 -> 1.23.2
        python3-pbr: upgrade 5.9.0 -> 5.10.0
        python3-pyelftools: upgrade 0.28 -> 0.29
        python3-pytz: upgrade 2022.1 -> 2022.2.1
        strace: upgrade 5.18 -> 5.19
        sysklogd: upgrade 2.4.2 -> 2.4.4
        wireless-regdb: upgrade 2022.06.06 -> 2022.08.12
        wpebackend-fdo: upgrade 1.12.0 -> 1.12.1
        python3-hatchling: update 1.6.0 -> 1.8.0
        python3-setuptools: update 63.4.1 -> 65.0.2
        devtool: do not leave behind source trees in workspace/sources
        systemtap: add a patch to address a python 3.11 failure
        bitbake: bitbake-layers: initialize tinfoil before registering command line arguments
        scripts/oe-setup-builddir: add a check that TEMPLATECONF is valid
        bitbake-layers: add a command to save the active build configuration as a template into a layer
        bitbake-layers: add ability to save current layer repository configuration into a file
        scripts/oe-setup-layers: add a script that restores the layer configuration from a json file
        selftest/bblayers: add a test for creating a layer setup and using it to restore the layers
        selftest/bblayers: adjust the revision for the layer setup test
        perl: run builds from a pristine source tree
        meta-poky/conf: move default templates to conf/templates/default/
        syslinux: mark all pending patches as Inactive-Upstream
        shadow: correct the pam patch status
        mtd-utils: remove patch that adds -I option
        gstreamer1.0-plugins-bad: remove an unneeded patch
        ghostscript: remove unneeded patch
        ovmf: drop the force no-stack-protector patch
        python: submit CC to cc_basename patch upstream
        mc: submit perl warnings patch upstream
        sysvinit: send install.patch upstream
        valgrind: (re)send ppc instructions patch upstream
        gdk-pixbuf: submit fatal-loader.patch upstream
        libsdl2: follow upstream version is even rule
        python3-pip: submit reproducible.patch upstream
        python3-pip: remove unneeded reproducible.patch
        llvm: remove 0006-llvm-TargetLibraryInfo-Undefine-libc-functions-if-th.patch
        scripts/oe-setup-builddir: migrate build/conf/templateconf.cfg to new template locations
        meta/files/layers.schema.json: drop the layers property
        scripts/oe-setup-builddir: write to conf/templateconf.cfg after the build is set up
        scripts/oe-setup-builddir: make environment variable the highest priority source for TEMPLATECONF

  Alexandre Belloni (1):
        ruby: drop capstone support

  Andrei Gherzan (7):
        shadow: Enable subid support
        rootfspostcommands.py: Restructure sort_passwd and related functions
        rootfspostcommands.py: Cleanup subid backup files generated by shadow-utils
        selftest: Add module for testing rootfs postcommands
        rootfs-postcommands.bbclass: Follow function rename in rootfspostcommands.py
        shadow: Avoid nss warning/error with musl
        linux-yocto: Fix COMPATIBLE_MACHINE regex match

  Andrey Konovalov (2):
        mesa: add pipe-loader's libraries to libopencl-mesa package
        mesa: build clover with native LLVM codegen support for freedreno

  Anuj Mittal (1):
        poky.conf: add ubuntu-22.04 to tested distros

  Armin Kuster (1):
        system-requirements.rst: remove EOL and Centos7 hosts

  Aryaman Gupta (1):
        bitbake: runqueue: add memory pressure regulation

  Awais Belal (1):
        kernel-fitimage.bbclass: only package unique DTBs

  Beniamin Sandu (1):
        libpam: use /run instead of /var/run in systemd tmpfiles

  Bertrand Marquis (1):
        sysvinit-inittab/start_getty: Fix respawn too fast

  Bruce Ashfield (22):
        linux-yocto/5.15: update to v5.15.58
        linux-yocto/5.10: update to v5.10.134
        linux-yocto-rt/5.15: update to -rt48 (and fix -stable merge)
        linux-libc-headers: update to v5.19
        kernel-devsrc: support arm v5.19+ on target build
        kernel-devsrc: support powerpc on v5.19+
        lttng-modules: fix build against mips and v5.19 kernel
        linux-yocto: introduce v5.19 reference kernel recipes
        meta/conf: update preferred linux-yocto version to v5.19
        linux-yocto: drop v5.10 reference kernel recipes
        linux-yocto/5.15: update to v5.15.59
        linux-yocto/5.15: fix reproducibility issues
        linux-yocto/5.19: cfg: update x32 configuration fragment
        linux-yocto/5.19: fix reproducibility issues
        poky: update preferred version to v5.19
        poky: change preferred kernel version to 5.15 in poky-alt
        yocto-bsp: drop v5.10 bbappend and create 5.19 placeholder
        lttng-modules: replace mips compaction fix with upstream change
        linux-yocto/5.15: update to v5.15.60
        linux-yocto/5.19: update to v5.19.1
        linux-yocto/5.19: update to v5.19.3
        linux-yocto/5.15: update to v5.15.62

  Changqing Li (1):
        apt: fix nativesdk-apt build failure during the second time build

  Chen Qi (2):
        python3-hypothesis: revert back to 6.46.11
        python3-requests: add python3-compression dependency

  Drew Moseley (1):
        rng-tools: Replace obsolete "wants systemd-udev-settle"

  Enrico Scholz (2):
        npm.bbclass: fix typo in 'fund' config option
        npm.bbclass: fix architecture mapping

  Ernst Sjöstrand (1):
        cve-check: Don't use f-strings

  Jacob Kroon (1):
        python3-cython: Remove debug lines

  Jan Luebbe (2):
        openssh: sync local ssh_config + sshd_config files with upstream 8.7p1
        openssh: add support for config snippet includes to ssh and sshd

  JeongBong Seo (1):
        wic: add 'none' fstype for custom image

  Johannes Schneider (1):
        classes: rootfs-postcommands: autologin root on serial-getty

  Jon Mason (2):
        oeqa/parselogs: add qemuarmv5 arm-charlcd masking
        ref-manual: add numa to machine features

  Jose Quaresma (4):
        bitbake: build: prefix the tasks with a timestamp in the log task_order
        archiver.bbclass: some recipes that uses the kernelsrc bbclass uses the shared source
        linux-yocto: prepend the the value with a space when append to KERNEL_EXTRA_ARGS
        shaderc: upgrade 2022.1 -> 2022.2

  Joshua Watt (4):
        bitbake: siggen: Fix insufficent entropy in sigtask file names
        bitbake: utils: Pass lock argument in fileslocked
        classes: cve-check: Get shared database lock
        meta/files: add layer setup JSON schema and example

  Kai Kang (1):
        packagegroup-self-hosted: update for strace

  Kevin Hao (1):
        uboot-config.bbclass: Don't bail out early in multi configs

  Khem Raj (83):
        qemu: Fix build with glibc 2.36
        mtd-utils: Fix build with glibc 2.36
        stress-ng: Upgrade to 0.14.03
        bootchart2: Fix build with glibc 2.36+
        ltp: Fix sys/mount.h conflicts needed for glibc 2.36+ compile
        efivar: Fix build with glibc 2.36
        cracklib: Drop using register keyword
        util-linux: Define pidfd_* function signatures
        util-linux: Upgrade to 2.38.1
        tcp-wrappers: Fix implicit-function-declaration warnings
        perl-cross: Correct function signatures in configure_func.sh
        perl: Pass additional flags to enable lfs and gnu source
        sysvinit: Fix mount.h conflicts seen with glibc 2.36+
        glibc: Bump to 2.36
        glibc: Update patch status
        zip: Enable largefile support based on distro feature
        zip: Make configure checks to be more robust
        unzip: Fix configure tests to use modern C
        unzip: Enable largefile support when enabled in distro
        iproute2: Fix netns check during configure
        glibc: Bump to latest 2.36 branch
        gstreamer1.0-plugins-base: Include required system headers for isspace() and sscanf()
        musl: Upgrade to latest tip of trunk
        zip: Always enable LARGE_FILE_SUPPORT
        libmicrohttpd: Enable largefile support unconditionally
        unzip: Always enable largefile support
        default-distrovars: Remove largefile from defualt DISTRO_FEATURES
        zlib: Resolve CVE-2022-37434
        json-c: Fix function prototypes
        rsync: Backport fix to address CVE-2022-29154
        rsync: Upgrade to 3.2.5
        libtirpc: Backport fix for CVE-2021-46828
        libxml2: Ignore CVE-2016-3709
        tiff: Backport a patch for CVE-2022-34526
        libtirpc: Upgrade to 1.3.3
        perf: Add packageconfig for libbfd support and use disabled as default
        connman: Backports for security fixes
        systemd: Upgrade to 251.4 and fix build with binutils 2.39
        time: Add missing include for memset
        screen: Add missing include files in configure checks
        setserial: Fix build with clang
        expect: Fix implicit-function-declaration warnings
        spirv-tools: Remove default copy constructor in header
        boost: Compile out stdlib unary/binary_functions for c++11 and newer
        vulkan-samples: Qualify move as std::move
        apt: Do not use std::binary_function
        ltp: Fix sys/mount.h and linux/mount.h conflict
        rpm: Remove -Wimplicit-function-declaration warnings
        binutils: Upgrade to 2.39 release
        binutils-cross: Disable gprofng for when building cross binutils
        binutils: Package up gprofng
        binutils: Disable gprofng when using clang
        binutils-cross-canadian: Package up new gprofng.rc file
        autoconf: Fix strict prototype errors in generated tests
        rsync: Add missing prototypes to function declarations
        nfs-utils: Upgrade to 2.6.2
        webkitgtk: Upgrade to 2.36.6 minor update
        musl: Update to tip
        binutils: Disable gprofng on musl systems
        binutils: Upgrade to latest on 2.39 release branch
        cargo_common.bbclass: Add missing space in shell conditional code
        rng-tools: Remove depndencies on hwrng
        ccache: Update the patch status
        ccache: Fix build with gcc12 on musl
        alsa-plugins: Include missing string.h
        xinetd: Pass missing -D_GNU_SOURCE
        watchdog: Include needed system header for function decls
        libcgroup: Use GNU strerror_r only when its available
        pinentry: enable _XOPEN_SOURCE on musl for wchar usage in curses
        apr: Use correct strerror_r implementation based on libc type
        gcr: Define _GNU_SOURCE
        ltp: Adjust types to match create_fifo_thread return
        gcc: Upgrade to 12.2.0
        glibc: Update to latest on 2.36
        ltp: Remove -mfpmath=sse on x86-64 too
        apr: Cache configure tests which use AC_TRY_RUN
        rust: Fix build failure on riscv32
        ncurses: Fix configure tests for exit and mbstate_t
        rust-llvm: Update to matching LLVM_VERSION from rust-source
        librepo: Fix build on musl
        rsync: Turn on -pedantic-errors at the end of 'configure'
        ccache: Upgrade to 4.6.2
        xmlto: Update to use upstream tip of trunk

  Konrad Weihmann (1):
        python3: disable user site-pkg for native target

  Lee Chee Yang (1):
        migration guides: add release notes for 4.0.3

  Luca Ceresoli (1):
        libmnl: remove unneeded SRC_URI 'name' option

  Markus Volk (2):
        connman: add PACKAGECONFIG to support iwd
        packagegroup-base.bb: add a configure option to set the wireless-daemon

  Martin Jansa (5):
        glibc: revert one upstream change to work around broken DEBUG_BUILD build
        syslinux: Fix build with glibc-2.36
        syslinux: refresh patches with devtool
        glibc: fix new upstream build issue with DEBUG_BUILD build
        glibc: apply proposed patch from upstream instead of revert

  Mateusz Marciniec (2):
        util-linux: Remove --enable-raw from EXTRA_OECONF
        util-linux: Improve check for magic in configure.ac

  Michael Halstead (1):
        uninative: Upgrade to 3.7 to work with glibc 2.36

  Michael Opdenacker (1):
        dev-manual: use proper note directive

  Mingli Yu (1):
        bitbake: fetch: use BPN instead

  Neil Horman (1):
        bitbake: Fix npm to use https rather than http

  Paul Eggleton (1):
        relocate_sdk.py: ensure interpreter size error causes relocation to fail

  Pavel Zhukov (6):
        package_rpm: Do not replace square brackets in %files
        selftest: Add regression test for rpm filesnames
        parselogs: Ignore xf86OpenConsole error
        bitbake: gitsm: Error out if submodule refers to parent repo
        bitbake: tests: Add Timeout class
        bitbake: tests: Add test for possible gitsm deadlock

  Peter Bergin (3):
        rust-cross-canadian: rename shell variables for easier appends
        packagegroup-rust-cross-canadian: add native compiler environment
        oeqa/sdk: extend rust test to also use a build script

  Peter Marko (1):
        create-spdx: handle links to inaccessible locations

  Quentin Schulz (3):
        docs: conf.py: update yocto_git base URL
        docs: README: add TeX font package required for building PDF
        docs: ref-manual: system-requirements: add missing packages

  Randy MacLeod (1):
        rust: update from 1.62.1 to 1.63.0

  Rasmus Villemoes (1):
        bitbake.conf: set BB_DEFAULT_UMASK using ??=

  Richard Purdie (85):
        oeqa/selftest/sstate: Ensure tests are deterministic
        nativesdk: Clear TUNE_FEATURES
        populate_sdk_base: Disable rust SDK for MIPS n32
        selftest/reproducible: Exclude rust/rust-dbg for now until we can fix
        conf/distro/no-static-libs: Allow static musl for rust
        rust-target-config: Add mips n32 target information
        rust-common: Add CXXFLAGS
        rust-common: Drop export directive from wrappers
        rust-common: Rework wrappers to handle musl
        rust: Work around reproducibility issues
        rust: Switch to use RUST_XXX_SYS consistently
        rust.inc: Rename variables to make code clearer
        rust.inc: Fix cross build llvm-config handling
        rust/mesa: Drop obsolete YOCTO_ALTERNATE_MULTILIB_NAME
        rust-target-config: Show clear error when target isn't defined
        rust: Generate per recipe target configuration files
        rust-common/rust: Improve bootstrap BUILD_SYS handling
        cargo_common: Handle build SYS as well as HOST/TARGET
        rust-llvm: Enable nativesdk variant
        rust.inc: Fix for cross compilation configuration
        rust-common: Update to match cross targets
        rust-target-config: Make target workaround generic
        rust-common: Simplify libc handling
        cargo: Drop cross-canadian variant and fix/use nativesdk
        rust-common: Set rustlibdir to match target expectation
        rust-cross-canadian: Simplify and fix
        rust: Drop cross/crosssdk
        rust: Enable nativesdk and target builds + replace rust-tools-cross-canadian
        rust: Fix musl builds
        rust: Ensure buildpaths are handled in debug symbols correctly
        rust: Update README
        selftest/wic: Tweak test case to not depend on kernel size
        bitbake: runqueue: Ensure deferred tasks are sorted by multiconfig
        bitbake: runqueue: Improve deadlock warning messages
        bitbake: runqueue: Drop deadlock breaking force fail
        rust-common: Remove conflict with utils create_wrapper
        kern-devsrc: Drop auto.conf creation
        cargo: Work around host system library conflicts
        rust-cross-canadian: Use shell from SDK, not the host
        buildhistory: Only use image-artifact-names as an image class
        rust: Remove unneeded RUST_TARGETGENS settings
        meta-skeleton/hello-mod: Switch to SPDX-License-Identifier
        perf: Fix reproducibility issues with 5.19 onwards
        selftest/runtime_test/incompatible_lic: Use IMAGE_CLASSES for testimage
        testexport: Fix to work as an image class
        testexport: Use IMAGE_CLASSES for testimage
        selftest/runtime_test: Use testexport in IMAGE_CLASSES, not globally
        bitbake: BBHandler: Allow earlier exit for classes not found
        bitbake: BBHandler: Make inherit calls more directly
        bitbake: bitbake: Add copyright headers where missing
        bitbake: BBHandler/cooker: Implement recipe and global classes
        classes: Add copyright statements to files without one
        scripts: Add copyright statements to files without one
        classes: Add SPDX license identifiers
        lib: Add copyright statements to files without one
        insane: Update to allow for class layout changes
        classes: Update classes to match new bitbake class scope functionality
        recipetool: Update for class changes
        package: Switch debug source handling to use prefix map
        libgcc/gcc-runtime: Improve source reference handling
        bitbake.conf: Handle S and B separately for debug mapping
        python3-cython: Update code to match debug path changes
        gcc-cross: Fix relative links
        gcc: Resolve relative prefix-map filenames
        gcc: Add a patch to avoid hardcoded paths in libgcc on powerpc
        gcc: Update patch status to submitted for two patches
        valgrind: Disable drd/tests/std_thread2 ptest
        valgrind: Update to match debug file layout changes
        skeleton/service: Ensure debug path handling works as intended
        distrooverrides: Move back to classes whilst it's usage is clarified
        vim: Upgrade 9.0.0115 -> 9.0.0242
        icu: Drop binconfig support (icu-config)
        libtirpc: Mark CVE-2021-46828 as resolved
        bitbake: runqueue: Change pressure file warning to a note
        rust-target-config: Drop has-elf-tls option
        llvm: Add llvm-config wrapper to improve flags handling
        mesa: Rework llvm handling
        rust-target-config: Fix qemuppc target cpu option
        rust: Fix crossbeam-utils for arches without atomics
        pseudo: Update to include recent upstream minor fixes
        bitbake: Revert "fetch: use BPN instead"
        vim: Upgrade 9.0.0242 -> 9.0.0341
        gcc-multilib-config: Fix i686 toolchain relocation issues
        kernel: Always set CC and LD for the kernel build
        kernel: Use consistent make flags for menuconfig

  Robert Joslyn (1):
        curl: Update to 7.85.0

  Ross Burton (9):
        oeqa/qemurunner: add run_serial() comment
        oeqa/commands: add support for running cross tools to runCmd
        oeqa/selftest: rewrite gdbserver test
        libxml2: wrap xmllint to use the correct XML catalogues
        oeqa/selftest: add test for debuginfod
        libgcrypt: remove obsolete pkgconfig install
        libgcrypt: remove obsolete patch
        libgcrypt: rewrite ptest
        cve-check: close cursors as soon as possible

  Sakib Sajal (2):
        qemu: fix CVE-2021-3507
        qemu: fix CVE-2022-0216

  Shubham Kulkarni (1):
        sanity: add a comment to ensure CONNECTIVITY_CHECK_URIS is correct

  Simone Weiss (1):
        json-c: Add ptest for json-c

  Sundeep KOKKONDA (1):
        glibc : stable 2.35 branch updates

  Thomas Roos (1):
        oeqa devtool: Add tests to cover devtool handling of various git URL styles

  Tom Hochstein (1):
        piglit: Add PACKAGECONFIG for glx and opencl

  Tom Rini (1):
        qemux86-64: Allow higher tunes

  Ulrich Ölmann (1):
        scripts/runqemu.README: fix typos and trailing whitespaces

  William A. Kennington III (1):
        image_types: Set SOURCE_DATE_EPOCH for squashfs

  Yang Xu (1):
        insane.bbclass: Skip patches not in oe-core by full path

  Yogesh Tyagi (1):
        gdbserver : add selftest

  Yongxin Liu (1):
        grub2: fix several CVEs

  wangmy (19):
        msmtp: upgrade 1.8.20 -> 1.8.22
        bind: upgrade 9.18.5 -> 9.18.6
        btrfs-tools: upgrade 5.18.1 -> 5.19
        libdnf: upgrade 0.67.0 -> 0.68.0
        librepo: upgrade 1.14.3 -> 1.14.4
        pkgconf: upgrade 1.9.2 -> 1.9.3
        python3-pygments: upgrade 2.12.0 -> 2.13.0
        ethtool: upgrade 5.18 -> 5.19
        librsvg: upgrade 2.54.4 -> 2.54.5
        libtasn1: upgrade 4.18.0 -> 4.19.0
        liburcu: upgrade 0.13.1 -> 0.13.2
        libwpe: upgrade 1.12.2 -> 1.12.3
        lttng-tools: upgrade 2.13.7 -> 2.13.8
        lttng-ust: upgrade 2.13.3 -> 2.13.4
        libatomic-ops: upgrade 7.6.12 -> 7.6.14
        lz4: upgrade 1.9.3 -> 1.9.4
        python3-hatchling: upgrade 1.8.0 -> 1.8.1
        python3-urllib3: upgrade 1.26.11 -> 1.26.12
        repo: upgrade 2.28 -> 2.29.1

meta-arm: 20a629180c..52f07a4b0b:
  Anton Antonov (11):
        arm/optee-os: backport RWX permission error patch
        work around for too few arguments to function init_disassemble_info() error
        arm/optee-os: backport linker warning patches
        arm/tf-a-tests: work around RWX permission error on segment
        Recipes for Trusted Services dependencies.
        Recipes for Trusted Services Secure Partitions
        ARM-FFA kernel drivers and kernel configs for Trusted Services
        Trusted Services test/demo NWd tools
        psa-api-tests for Trusted Services
        Include Trusted Services SPs into optee-os image
        Define qemuarm64-secureboot-ts CI pipeline and include it into meta-arm

  Gowtham Suresh Kumar (2):
        arm-bsp/secure-partitions: fix SMM gateway bug for EFI GetVariable()
        arm-bsp/u-boot: drop EFI GetVariable() workarounds patches

  Jon Mason (11):
        arm-bsp/fvp-base-arm32: Update kernel patch for v5.19
        arm/qemuarm64-secureboot: remove tfa memory patch
        arm/linux-yocto: remove optee num pages kernel config variable
        arm-bsp/juno: drop scmi patch
        arm/qemuarm-secureboot: remove vmalloc from QB_KERNEL_CMDLINE_APPEND
        arm/fvp: use image-artifact-names as an image class
        atp/atp: drop package inherits
        arm/optee: Update to 3.18
        arm-bsp/fvp-base: set preferred kernel to 5.15
        arm/arm-bsp: Add yocto-kernel-cache bluetooth support
        arm-bsp/corstone1000: use compressed kernel image

  Khem Raj (2):
        gator-daemon: Define _GNU_SOURCE feature test macro
        optee-os: Add section attribute parameters when clang is used

  Peter Hoyes (3):
        docs: Update FVP_CONSOLES in runfvp documentation
        docs: Introduce meta-arm OEQA documentation
        arm/oeqa: Make linuxboot test case timeout configurable

  Richard Purdie (1):
        gem5/gem5-m5ops: Drop uneeded package inherit

  Ross Burton (2):
        arm/trusted-firmware-a: remove redundant patches
        arm/trusted-firmware-a: work around RWX permission error on segment

  Rui Miguel Silva (2):
        arm-bsp:corstone500: rebase u-boot patches on v2022.07
        arm-bsp/corstone1000: rebase u-boot patches on top v2022.07

  Vishnu Banavath (3):
        arm-bsp/trusted-firmware-a: Bump TF-A version for N1SDP
        arm-bsp/optee: add optee-os support for N1SDP target
        arm/optee: update optee-client to v3.18

Signed-off-by: Patrick Williams <patrick@stwcx.xyz>
Change-Id: I90aa0a94410dd208163af126566d22c77787abc2
diff --git a/meta-arm/.gitlab-ci.yml b/meta-arm/.gitlab-ci.yml
index 840a650..1fb21f6 100644
--- a/meta-arm/.gitlab-ci.yml
+++ b/meta-arm/.gitlab-ci.yml
@@ -171,6 +171,13 @@
         TCLIBC: [glibc, musl]
         TESTING: testimage
 
+qemuarm64-secureboot-ts:
+  extends: .build
+  parallel:
+    matrix:
+      - TCLIBC: [glibc, musl]
+        TESTING: testimage
+
 qemuarm64:
   extends: .build
   parallel:
diff --git a/meta-arm/ci/qemuarm64-secureboot-ts.yml b/meta-arm/ci/qemuarm64-secureboot-ts.yml
new file mode 100644
index 0000000..66a27c6
--- /dev/null
+++ b/meta-arm/ci/qemuarm64-secureboot-ts.yml
@@ -0,0 +1,28 @@
+# Build qemuarm64-secureboot machine with
+# Trusted Services secure partition included into optee-os image.
+#
+# Run Trusted Services OEQA tests.
+
+header:
+  version: 11
+  includes:
+    - ci/base.yml
+    - ci/meta-openembedded.yml
+
+machine: qemuarm64-secureboot
+
+local_conf_header:
+  failing_tests: |
+    # software IO TLB: Cannot allocate buffer
+    DEFAULT_TEST_SUITES:remove = "parselogs"
+  trusted_services: |
+    TEST_SUITES:append = " trusted_services"
+    # Include TS Crypto, Storage, ITS, Attestation and SMM-Gateway SPs into optee-os image
+    MACHINE_FEATURES:append = " arm-ffa ts-crypto ts-storage ts-its ts-attestation ts-smm-gateway"
+    # Include TS demo/test tools into image
+    IMAGE_INSTALL:append = " packagegroup-ts-tests"
+    # Include TS PSA Arch tests into image
+    IMAGE_INSTALL:append = " packagegroup-ts-tests-psa"
+
+target:
+  - core-image-base
diff --git a/meta-arm/documentation/oeqa-fvp.md b/meta-arm/documentation/oeqa-fvp.md
new file mode 100644
index 0000000..582dd38
--- /dev/null
+++ b/meta-arm/documentation/oeqa-fvp.md
@@ -0,0 +1,55 @@
+# OEQA on Arm FVPs
+
+OE-Core's [oeqa][OEQA] framework provides a method of performing runtime tests on machines using the `testimage` Yocto task. meta-arm has good support for writing test cases against [Arm FVPs][FVP], meaning the [runfvp][RUNFVP] boot configuration can be re-used.
+
+Tests can be configured to run automatically post-build by setting the variable `TESTIMAGE_AUTO="1"`, e.g. in your Kas file or local.conf.
+
+There are two main methods of testing, using different test "targets".
+
+## OEFVPTarget
+
+This runs test cases on a machine using SSH. It therefore requires that an SSH server is installed in the image.
+
+In test cases, the primary interface with the target is, e.g:
+```
+(status, output) = self.target.run('uname -a')
+```
+which runs a single command on the target (using `ssh -c`) and returns the status code and the output. It is therefore useful for running tests in a Linux environment.
+
+For examples of test cases, see meta/lib/oeqa/runtime/cases in OE-Core. The majority of test cases depend on `ssh.SSHTest.test_ssh`, which first validates that the SSH connection is functioning.
+
+Example machine configuration:
+```
+TEST_TARGET = "OEFVPTarget"
+TEST_SERVER_IP = "127.0.0.1"
+TEST_TARGET_IP = "127.0.0.1:8022"
+IMAGE_FEATURES:append = " ssh-server-dropbear"
+FVP_CONFIG[bp.virtio_net.hostbridge.userNetPorts] ?= "8022=22"
+```
+
+## OEFVPSerialTarget
+
+This runs tests against one or more serial consoles on the FVP. It is more flexible than OEFVPTarget, but it does not support the existing test cases in OE-Core. As it does not require an SSH server, it is suitable for machines with performance or memory limitations.
+
+Internally, this test target launches a [Pexpect][PEXPECT] instance for each entry in `FVP_CONSOLES`, which can then be referenced by its alias. The whole Pexpect API is exposed on the target, where the alias is always passed as the first argument, e.g.:
+```
+self.target.expect('default', r'root@.*\:~#', timeout=30)
+self.assertNotIn(b'ERROR:', self.target.before('tf-a'))
+```
+
+For an example of a full test case, see meta-arm/lib/oeqa/runtime/cases/linuxboot.py. This test case can be used to minimally verify that a machine boots to a Linux shell. The default timeout is 10 minutes, but this can be configured with the variable `TEST_FVP_LINUX_BOOT_TIMEOUT`, which expects a value in seconds.
+
+The SSH interface described above is also available on OEFVPSerialTarget to support writing hybrid test suites that use a combination of serial and SSH access. Note, however, that this test target does not guarantee that Linux has booted to a shell before any tests run, so the test cases in OE-Core are not supported.
+
+Example machine configuration:
+```
+TEST_TARGET="OEFVPSerialTarget"
+TEST_SUITES="linuxboot"
+FVP_CONSOLES[default] = "terminal_0"
+FVP_CONSOLES[tf-a] = "s_terminal_0"
+```
+
+[OEQA]: https://docs.yoctoproject.org/test-manual/intro.html
+[FVP]: https://developer.arm.com/tools-and-software/simulation-models/fixed-virtual-platforms
+[RUNFVP]: runfvp.md
+[PEXPECT]: https://pexpect.readthedocs.io/en/stable/overview.html
diff --git a/meta-arm/documentation/runfvp.md b/meta-arm/documentation/runfvp.md
index b9e007b..c792f4e 100644
--- a/meta-arm/documentation/runfvp.md
+++ b/meta-arm/documentation/runfvp.md
@@ -98,13 +98,16 @@
 FVP_TERMINALS[bp.terminal_3] = ""
 ```
 
-### `FVP_CONSOLE`
+### `FVP_CONSOLES`
 
-This specifies what serial port is used when `--console` is passed to runfvp. Note that this has to be the FVP identifier but without the board prefix, for example:
+This specifies which serial ports can be used in OEQA tests, along with an alias to be used in the test cases. Note that the values have to be the FVP identifier but without the board prefix, for example:
 ```
-FVP_CONSOLE = "terminal_0"
+FVP_CONSOLES[default] = "terminal_0"
+FVP_CONSOLES[tf-a] = "s_terminal_0"
 ```
 
+The 'default' console is also used when `--console` is passed to runfvp.
+
 ### `FVP_EXTRA_ARGS`
 
 Arbitrary extra arguments that are passed directly to the FVP.  For example:
diff --git a/meta-arm/meta-arm-bsp/conf/machine/corstone500.conf b/meta-arm/meta-arm-bsp/conf/machine/corstone500.conf
index 1d25471..adcfddd 100644
--- a/meta-arm/meta-arm-bsp/conf/machine/corstone500.conf
+++ b/meta-arm/meta-arm-bsp/conf/machine/corstone500.conf
@@ -26,7 +26,7 @@
 UBOOT_MACHINE = "corstone500_defconfig"
 UBOOT_IMAGE_ENTRYPOINT = "0x84000000"
 UBOOT_IMAGE_LOADADDRESS = "0x84000000"
-PREFERRED_VERSION_u-boot ?= "2022.04"
+PREFERRED_VERSION_u-boot ?= "2022.07"
 
 # making sure EXTRA_IMAGEDEPENDS will be used while creating the image
 WKS_FILE_DEPENDS:append = " ${EXTRA_IMAGEDEPENDS}"
diff --git a/meta-arm/meta-arm-bsp/conf/machine/fvp-base.conf b/meta-arm/meta-arm-bsp/conf/machine/fvp-base.conf
index 434812b..04ec112 100644
--- a/meta-arm/meta-arm-bsp/conf/machine/fvp-base.conf
+++ b/meta-arm/meta-arm-bsp/conf/machine/fvp-base.conf
@@ -10,6 +10,8 @@
 TUNE_FEATURES = "aarch64"
 
 PREFERRED_VERSION_u-boot ?= "2022.04"
+PREFERRED_VERSION_linux-yocto ?= "5.15%"
+PREFERRED_VERSION_linux-yocto-rt ?= "5.15%"
 
 # FVP u-boot configuration
 UBOOT_MACHINE = "vexpress_aemv8a_semi_defconfig"
diff --git a/meta-arm/meta-arm-bsp/conf/machine/include/corstone1000.inc b/meta-arm/meta-arm-bsp/conf/machine/include/corstone1000.inc
index 91dbbfd..ec48222 100644
--- a/meta-arm/meta-arm-bsp/conf/machine/include/corstone1000.inc
+++ b/meta-arm/meta-arm-bsp/conf/machine/include/corstone1000.inc
@@ -22,7 +22,7 @@
 RE_IMAGE_OFFSET = "0x1000"
 
 # u-boot
-PREFERRED_VERSION_u-boot ?= "2022.04"
+PREFERRED_VERSION_u-boot ?= "2022.07"
 EXTRA_IMAGEDEPENDS += "u-boot"
 
 UBOOT_CONFIG ??= "EFI"
@@ -46,7 +46,7 @@
 # Linux kernel
 PREFERRED_PROVIDER_virtual/kernel:forcevariable = "linux-yocto"
 PREFERRED_VERSION_linux-yocto = "5.15%"
-KERNEL_IMAGETYPE = "Image"
+KERNEL_IMAGETYPE = "Image.gz"
 
 INITRAMFS_IMAGE_BUNDLE ?= "1"
 
diff --git a/meta-arm/meta-arm-bsp/conf/machine/n1sdp.conf b/meta-arm/meta-arm-bsp/conf/machine/n1sdp.conf
index 5e87e61..5d1be82 100644
--- a/meta-arm/meta-arm-bsp/conf/machine/n1sdp.conf
+++ b/meta-arm/meta-arm-bsp/conf/machine/n1sdp.conf
@@ -30,6 +30,9 @@
 #UEFI EDK2 firmware
 EXTRA_IMAGEDEPENDS += "edk2-firmware"
 
+#optee
+PREFERRED_VERSION_optee-os ?= "3.18.%"
+
 #grub-efi
 EFI_PROVIDER ?= "grub-efi"
 MACHINE_FEATURES += "efi"
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/trusted-firmware-a/files/n1sdp/bl_size.patch b/meta-arm/meta-arm-bsp/recipes-bsp/trusted-firmware-a/files/n1sdp/bl_size.patch
deleted file mode 100644
index a5b3019..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/trusted-firmware-a/files/n1sdp/bl_size.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From 80b1efa92486a87f9e82dbf665ef612291148de8 Mon Sep 17 00:00:00 2001
-From: Adam Johnston <adam.johnston@arm.com>
-Date: Tue, 14 Jun 2022 11:19:30 +0000
-Subject: [PATCH] arm-bsp/trusted-firmware-a: N1SDP trusted boot
-
-Increase max size of BL2 on N1SDP by 4KB to enable trusted boot
-Decrease max size of BL1 on N1SDP by 8KB so BL1/BL2 fits above BL31 progbits
-
-Signed-off-by: Adam Johnston <adam.johnston@arm.com>
-Upstream-Status: Pending [Flagged to upstream]
-
----
- plat/arm/board/n1sdp/include/platform_def.h | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/plat/arm/board/n1sdp/include/platform_def.h b/plat/arm/board/n1sdp/include/platform_def.h
-index c9b81bafa..7468a31ed 100644
---- a/plat/arm/board/n1sdp/include/platform_def.h
-+++ b/plat/arm/board/n1sdp/include/platform_def.h
-@@ -91,7 +91,7 @@
-  * PLAT_ARM_MAX_BL1_RW_SIZE is calculated using the current BL1 RW debug size
-  * plus a little space for growth.
-  */
--#define PLAT_ARM_MAX_BL1_RW_SIZE	0xE000
-+#define PLAT_ARM_MAX_BL1_RW_SIZE	0xC000
- 
- /*
-  * PLAT_ARM_MAX_ROMLIB_RW_SIZE is define to use a full page
-@@ -110,7 +110,7 @@
-  * little space for growth.
-  */
- #if TRUSTED_BOARD_BOOT
--# define PLAT_ARM_MAX_BL2_SIZE		0x20000
-+# define PLAT_ARM_MAX_BL2_SIZE		0x21000
- #else
- # define PLAT_ARM_MAX_BL2_SIZE		0x14000
- #endif
--- 
-2.35.1
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/trusted-firmware-a/trusted-firmware-a-n1sdp.inc b/meta-arm/meta-arm-bsp/recipes-bsp/trusted-firmware-a/trusted-firmware-a-n1sdp.inc
index f8a0b8d..034dac3 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/trusted-firmware-a/trusted-firmware-a-n1sdp.inc
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/trusted-firmware-a/trusted-firmware-a-n1sdp.inc
@@ -1,5 +1,8 @@
 # N1SDP specific TFA support
 
+SRCREV_tfa  = "1d867c14cb41c1171d16fa7e395a4eaed3d572b2"
+PV .= "+git${SRCPV}"
+
 COMPATIBLE_MACHINE = "n1sdp"
 TFA_PLATFORM       = "n1sdp"
 TFA_BUILD_TARGET   = "all fip"
@@ -9,16 +12,23 @@
 TFA_UBOOT          = "0"
 TFA_UEFI           = "1"
 
-SRC_URI:append = " file://bl_size.patch"
-
 TFA_ROT_KEY= "plat/arm/board/common/rotpk/arm_rotprivk_rsa.pem"
 
+# Enabling Secure-EL1 Payload Dispatcher (SPD)
+TFA_SPD = "spmd"
+# Cortex-A35 supports Armv8.0-A (no S-EL2 execution state).
+# So, the SPD SPMC component should run at the S-EL1 execution state
+TFA_SPMD_SPM_AT_SEL2 = "0"
+
+# BL2 loads BL32 (optee). So, optee needs to be built first:
+DEPENDS += "optee-os"
+
 EXTRA_OEMAKE:append = "\
                     TRUSTED_BOARD_BOOT=1 \
-                    GENERATE_COT=1 \
-                    CREATE_KEYS=1 \
-                    ENABLE_PIE=0 \
-                    ARM_ROTPK_LOCATION="devel_rsa" \
-                    ROT_KEY="${TFA_ROT_KEY}" \
-                    BL33=${RECIPE_SYSROOT}/firmware/uefi.bin \
-                    "
+		    GENERATE_COT=1 \
+		    CREATE_KEYS=1 \
+		    ARM_ROTPK_LOCATION="devel_rsa" \
+		    ROT_KEY="${TFA_ROT_KEY}" \
+		    BL32=${RECIPE_SYSROOT}/lib/firmware/tee-pager_v2.bin \
+		    BL33=${RECIPE_SYSROOT}/firmware/uefi.bin \
+		    "
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0001-cmd-load-add-load-command-for-memory-mapped.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0001-cmd-load-add-load-command-for-memory-mapped.patch
index d0ae964..dc64e3c 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0001-cmd-load-add-load-command-for-memory-mapped.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0001-cmd-load-add-load-command-for-memory-mapped.patch
@@ -1,7 +1,7 @@
-From c047b15fba9da36616bc9a346d9fe4d76e7ca4b2 Mon Sep 17 00:00:00 2001
+From 7a1a84ea74fdd06a7f5f239f4c5f4b727d6cd232 Mon Sep 17 00:00:00 2001
 From: Rui Miguel Silva <rui.silva@linaro.org>
 Date: Thu, 24 Jun 2021 09:25:00 +0100
-Subject: [PATCH 01/27] cmd: load: add load command for memory mapped
+Subject: [PATCH 01/24] cmd: load: add load command for memory mapped
 
 cp.b is used a lot as a way to load binaries to memory and execute
 them, however we may need to integrate this with the efi subsystem to
@@ -26,10 +26,10 @@
  6 files changed, 78 insertions(+)
 
 diff --git a/README b/README
-index f51f392111f9..049bcd108980 100644
+index b7ab6e50708d..cd76f95e74c1 100644
 --- a/README
 +++ b/README
-@@ -2760,6 +2760,7 @@ rarpboot- boot image via network using RARP/TFTP protocol
+@@ -2578,6 +2578,7 @@ rarpboot- boot image via network using RARP/TFTP protocol
  diskboot- boot from IDE devicebootd   - boot default, i.e., run 'bootcmd'
  loads	- load S-Record file over serial line
  loadb	- load binary file over serial line (kermit mode)
@@ -38,10 +38,10 @@
  mm	- memory modify (auto-incrementing)
  nm	- memory modify (constant address)
 diff --git a/cmd/Kconfig b/cmd/Kconfig
-index 5e25e45fd288..ff50102a89c7 100644
+index 09193b61b95f..ba2f321ae989 100644
 --- a/cmd/Kconfig
 +++ b/cmd/Kconfig
-@@ -1076,6 +1076,12 @@ config CMD_LOADB
+@@ -1143,6 +1143,12 @@ config CMD_LOADB
  	help
  	  Load a binary file over serial line.
  
@@ -55,7 +55,7 @@
  	bool "loads"
  	default y
 diff --git a/cmd/bootefi.c b/cmd/bootefi.c
-index 53d9f0e0dcca..8d492ea9e70c 100644
+index 827fcd97dfd8..37ce659fa123 100644
 --- a/cmd/bootefi.c
 +++ b/cmd/bootefi.c
 @@ -34,6 +34,18 @@ static struct efi_device_path *bootefi_device_path;
@@ -141,10 +141,10 @@
 +);
 +#endif /* CONFIG_CMD_LOADM */
 diff --git a/include/efi_loader.h b/include/efi_loader.h
-index af36639ec6a7..126db279dd3e 100644
+index 11930fbea838..5b41985244e2 100644
 --- a/include/efi_loader.h
 +++ b/include/efi_loader.h
-@@ -585,6 +585,8 @@ efi_status_t efi_load_pe(struct efi_loaded_image_obj *handle,
+@@ -591,6 +591,8 @@ efi_status_t efi_load_pe(struct efi_loaded_image_obj *handle,
  void efi_save_gd(void);
  /* Call this to relocate the runtime section to an address space */
  void efi_runtime_relocate(ulong offset, struct efi_mem_desc *map);
@@ -154,10 +154,10 @@
  void efi_add_handle(efi_handle_t obj);
  /* Create handle */
 diff --git a/lib/efi_loader/efi_device_path.c b/lib/efi_loader/efi_device_path.c
-index 0542aaae16c7..d8dc59b2c95c 100644
+index 171661b89727..2493d7432613 100644
 --- a/lib/efi_loader/efi_device_path.c
 +++ b/lib/efi_loader/efi_device_path.c
-@@ -1138,6 +1138,8 @@ efi_status_t efi_dp_from_name(const char *dev, const char *devnr,
+@@ -1158,6 +1158,8 @@ efi_status_t efi_dp_from_name(const char *dev, const char *devnr,
  {
  	struct blk_desc *desc = NULL;
  	struct disk_partition fs_partition;
@@ -166,7 +166,7 @@
  	int part = 0;
  	char *filename;
  	char *s;
-@@ -1153,6 +1155,13 @@ efi_status_t efi_dp_from_name(const char *dev, const char *devnr,
+@@ -1173,6 +1175,13 @@ efi_status_t efi_dp_from_name(const char *dev, const char *devnr,
  	} else if (!strcmp(dev, "Uart")) {
  		if (device)
  			*device = efi_dp_from_uart();
@@ -181,5 +181,5 @@
  		part = blk_get_device_part_str(dev, devnr, &desc, &fs_partition,
  					       1);
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0002-arm-add-support-to-corstone1000-platform.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0002-arm-add-support-to-corstone1000-platform.patch
index e0caaa4..1863629 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0002-arm-add-support-to-corstone1000-platform.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0002-arm-add-support-to-corstone1000-platform.patch
@@ -1,7 +1,7 @@
-From 5cc889db3279ef4944ab64e36db3dbab1bf9ffa5 Mon Sep 17 00:00:00 2001
+From c9a9a467bb335047812004dd022dcadf9514101f Mon Sep 17 00:00:00 2001
 From: Rui Miguel Silva <rui.silva@linaro.org>
 Date: Tue, 15 Feb 2022 09:44:10 +0000
-Subject: [PATCH 02/27] arm: add support to corstone1000 platform
+Subject: [PATCH 02/24] arm: add support to corstone1000 platform
 
 Corstone1000 is a platform from arm, which includes pre
 verified Corstone SSE710 sub-system that combines Cortex-A and
@@ -27,10 +27,10 @@
  board/armltd/corstone1000/Kconfig        |  12 ++
  board/armltd/corstone1000/MAINTAINERS    |   7 +
  board/armltd/corstone1000/Makefile       |   7 +
- board/armltd/corstone1000/corstone1000.c | 121 ++++++++++++++++
+ board/armltd/corstone1000/corstone1000.c | 125 +++++++++++++++++
  configs/corstone1000_defconfig           |  80 +++++++++++
  include/configs/corstone1000.h           |  86 ++++++++++++
- 11 files changed, 548 insertions(+)
+ 11 files changed, 552 insertions(+)
  create mode 100644 arch/arm/dts/corstone1000-fvp.dts
  create mode 100644 arch/arm/dts/corstone1000-mps3.dts
  create mode 100644 arch/arm/dts/corstone1000.dtsi
@@ -42,12 +42,12 @@
  create mode 100644 include/configs/corstone1000.h
 
 diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
-index 4567c183fb84..d64051b533a7 100644
+index 9898c7d68e1b..2fc2b7d20f12 100644
 --- a/arch/arm/Kconfig
 +++ b/arch/arm/Kconfig
-@@ -1266,6 +1266,12 @@ config TARGET_VEXPRESS64_JUNO
- 	select USB
- 	imply OF_HAS_PRIOR_STAGE
+@@ -1347,6 +1347,12 @@ config ARCH_VEXPRESS64
+ 	select ENV_IS_IN_FLASH if MTD
+ 	imply DISTRO_DEFAULTS
  
 +config TARGET_CORSTONE1000
 +	bool "Support Corstone1000 Platform"
@@ -58,7 +58,7 @@
  config TARGET_TOTAL_COMPUTE
  	bool "Support Total Compute Platform"
  	select ARM64
-@@ -2198,6 +2204,8 @@ source "arch/arm/mach-nexell/Kconfig"
+@@ -2295,6 +2301,8 @@ source "arch/arm/mach-npcm/Kconfig"
  
  source "board/armltd/total_compute/Kconfig"
  
@@ -68,10 +68,10 @@
  source "board/bosch/guardian/Kconfig"
  source "board/Marvell/octeontx/Kconfig"
 diff --git a/arch/arm/dts/Makefile b/arch/arm/dts/Makefile
-index 644ba961a223..7de25d09c9fe 100644
+index a7e0d9f6c0e8..8c8f15b6a813 100644
 --- a/arch/arm/dts/Makefile
 +++ b/arch/arm/dts/Makefile
-@@ -1215,6 +1215,9 @@ dtb-$(CONFIG_TARGET_EA_LPC3250DEVKITV2) += lpc3250-ea3250.dtb
+@@ -1265,6 +1265,9 @@ dtb-$(CONFIG_TARGET_EA_LPC3250DEVKITV2) += lpc3250-ea3250.dtb
  
  dtb-$(CONFIG_ARCH_QEMU) += qemu-arm.dtb qemu-arm64.dtb
  
@@ -369,10 +369,10 @@
 +obj-y	:= corstone1000.o
 diff --git a/board/armltd/corstone1000/corstone1000.c b/board/armltd/corstone1000/corstone1000.c
 new file mode 100644
-index 000000000000..eff1739f0b02
+index 000000000000..2fa485ff3799
 --- /dev/null
 +++ b/board/armltd/corstone1000/corstone1000.c
-@@ -0,0 +1,121 @@
+@@ -0,0 +1,125 @@
 +// SPDX-License-Identifier: GPL-2.0+
 +/*
 + * (C) Copyright 2022 ARM Limited
@@ -452,6 +452,10 @@
 +
 +struct mm_region *mem_map = corstone1000_mem_map;
 +
++void set_dfu_alt_info(char *interface, char *devstr)
++{
++}
++
 +int board_init(void)
 +{
 +	return 0;
@@ -673,5 +677,5 @@
 +				"bootefi $kernel_addr_r $fdtcontroladdr;"
 +#endif
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0003-usb-common-move-urb-code-to-common.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0003-usb-common-move-urb-code-to-common.patch
index 9974915..1cebd07 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0003-usb-common-move-urb-code-to-common.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0003-usb-common-move-urb-code-to-common.patch
@@ -1,7 +1,7 @@
-From b8f4f76e135ff2061d0032292f5209ac911dba16 Mon Sep 17 00:00:00 2001
+From 61c5fe3758a0febdee33429f5be16f69279045cc Mon Sep 17 00:00:00 2001
 From: Rui Miguel Silva <rui.silva@linaro.org>
 Date: Mon, 28 Jun 2021 23:20:55 +0100
-Subject: [PATCH 03/27] usb: common: move urb code to common
+Subject: [PATCH 03/24] usb: common: move urb code to common
 
 Move urb code from musb only use to a more common scope, so other
 drivers in the future can use the handling of urb in usb.
@@ -493,5 +493,5 @@
  /*
   * Hub Status & Hub Change bit masks
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0004-usb-add-isp1760-family-driver.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0004-usb-add-isp1760-family-driver.patch
index 5940f06..1dd6f25 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0004-usb-add-isp1760-family-driver.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0004-usb-add-isp1760-family-driver.patch
@@ -1,7 +1,7 @@
-From 62a666a83766f5517e90464638455286bb33f433 Mon Sep 17 00:00:00 2001
+From 8abb9c6a342d750a3a3a66e674c3be6597fc9f66 Mon Sep 17 00:00:00 2001
 From: Rui Miguel Silva <rui.silva@linaro.org>
 Date: Mon, 28 Jun 2021 23:31:25 +0100
-Subject: [PATCH 04/27] usb: add isp1760 family driver
+Subject: [PATCH 04/24] usb: add isp1760 family driver
 
 ISP1760/61/63 are a family of usb controllers, blah, blah, more info
 here.
@@ -34,10 +34,10 @@
  create mode 100644 drivers/usb/isp1760/isp1760-uboot.h
 
 diff --git a/Makefile b/Makefile
-index ad83d60dc39d..2c857c1fe2cc 100644
+index 98867fbe06b4..67851020f5c1 100644
 --- a/Makefile
 +++ b/Makefile
-@@ -834,6 +834,7 @@ libs-y += drivers/usb/host/
+@@ -841,6 +841,7 @@ libs-y += drivers/usb/host/
  libs-y += drivers/usb/mtu3/
  libs-y += drivers/usb/musb/
  libs-y += drivers/usb/musb-new/
@@ -3801,5 +3801,5 @@
 +
 +#endif
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0005-corstone1000-enable-isp1763-usb-controller.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0005-corstone1000-enable-isp1763-usb-controller.patch
index 2ce03b6..c465455 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0005-corstone1000-enable-isp1763-usb-controller.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0005-corstone1000-enable-isp1763-usb-controller.patch
@@ -1,7 +1,7 @@
-From b7fb62e512e00a5fdbb4321bb0b4109261518481 Mon Sep 17 00:00:00 2001
+From 5031fea320bb4ccc1ce7470193d8f4402ae819c9 Mon Sep 17 00:00:00 2001
 From: Rui Miguel Silva <rui.silva@linaro.org>
 Date: Thu, 3 Mar 2022 16:52:02 +0000
-Subject: [PATCH 05/27] corstone1000: enable isp1763 usb controller
+Subject: [PATCH 05/24] corstone1000: enable isp1763 usb controller
 
 MPS3 board have a ISP1763 usb controller, add the
 correspondent mmio area and enable it to be used for mass
@@ -44,5 +44,5 @@
  				"boot_bank_flag=0x08002000\0"				\
  				"kernel_addr_bank_0=0x083EE000\0"			\
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0006-arm_ffa-introducing-Arm-FF-A-low-level-driver.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0006-arm_ffa-introducing-Arm-FF-A-low-level-driver.patch
index 8cc848f..617aaf7 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0006-arm_ffa-introducing-Arm-FF-A-low-level-driver.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0006-arm_ffa-introducing-Arm-FF-A-low-level-driver.patch
@@ -1,7 +1,7 @@
-From ede21dc1ca75132698cc81e88357c5789ccf67c7 Mon Sep 17 00:00:00 2001
+From 968c86e8a6ed3e9e6621f0ae44977b5b13d90bfd Mon Sep 17 00:00:00 2001
 From: Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
 Date: Tue, 16 Nov 2021 12:34:52 +0000
-Subject: [PATCH 06/27] arm_ffa: introducing Arm FF-A low-level driver
+Subject: [PATCH 06/24] arm_ffa: introducing Arm FF-A low-level driver
 
 This driver implements Arm Firmware Framework for Armv8-A on u-boot
 
@@ -58,10 +58,10 @@
  create mode 100644 lib/arm-ffa/arm_ffa_helper.c
 
 diff --git a/MAINTAINERS b/MAINTAINERS
-index 96582fc67777..14307e6da644 100644
+index 7f27ff4c20fc..d29d7e040764 100644
 --- a/MAINTAINERS
 +++ b/MAINTAINERS
-@@ -232,6 +232,14 @@ F:	board/CZ.NIC/
+@@ -244,6 +244,14 @@ F:	board/CZ.NIC/
  F:	configs/turris_*_defconfig
  F:	include/configs/turris_*.h
  
@@ -143,20 +143,20 @@
  	DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS, offsetof(struct arm_smccc_quirk, state));
  #endif
 diff --git a/common/board_r.c b/common/board_r.c
-index c24d9b4e220b..af20f38b104c 100644
+index 6f4aca2077d6..412a0ea9fac3 100644
 --- a/common/board_r.c
 +++ b/common/board_r.c
-@@ -61,6 +61,9 @@
- #include <wdt.h>
+@@ -62,6 +62,9 @@
  #include <asm-generic/gpio.h>
  #include <efi_loader.h>
+ #include <relocate.h>
 +#ifdef CONFIG_ARM_FFA_TRANSPORT
 +#include <arm_ffa_helper.h>
 +#endif
  
  DECLARE_GLOBAL_DATA_PTR;
  
-@@ -770,6 +773,9 @@ static init_fnc_t init_sequence_r[] = {
+@@ -779,6 +782,9 @@ static init_fnc_t init_sequence_r[] = {
  	INIT_FUNC_WATCHDOG_RESET
  	initr_net,
  #endif
@@ -180,10 +180,10 @@
  
  source "drivers/axi/Kconfig"
 diff --git a/drivers/Makefile b/drivers/Makefile
-index 4e7cf284405a..6671d2a604ab 100644
+index 67c8af74424e..1be687b55d13 100644
 --- a/drivers/Makefile
 +++ b/drivers/Makefile
-@@ -107,6 +107,7 @@ obj-y += iommu/
+@@ -109,6 +109,7 @@ obj-y += iommu/
  obj-y += smem/
  obj-y += thermal/
  obj-$(CONFIG_TEE) += tee/
@@ -2249,10 +2249,10 @@
 +int ffa_uuid_str_to_bin(const char *uuid_str, unsigned char *uuid_bin);
 +#endif
 diff --git a/include/dm/uclass-id.h b/include/dm/uclass-id.h
-index 0e26e1d13824..a1181b8f48e7 100644
+index 3ba69ad9a084..732441824557 100644
 --- a/include/dm/uclass-id.h
 +++ b/include/dm/uclass-id.h
-@@ -52,6 +52,7 @@ enum uclass_id {
+@@ -55,6 +55,7 @@ enum uclass_id {
  	UCLASS_EFI_MEDIA,	/* Devices provided by UEFI firmware */
  	UCLASS_ETH,		/* Ethernet device */
  	UCLASS_ETH_PHY,		/* Ethernet PHY device */
@@ -2320,10 +2320,10 @@
  
  #define arm_smccc_smc_quirk(...) __arm_smccc_smc(__VA_ARGS__)
 diff --git a/lib/Kconfig b/lib/Kconfig
-index 3c6fa99b1a6a..473821b882e2 100644
+index acc0ac081a44..65db65c3c4cd 100644
 --- a/lib/Kconfig
 +++ b/lib/Kconfig
-@@ -810,6 +810,7 @@ config SMBIOS_PARSER
+@@ -902,6 +902,7 @@ config SMBIOS_PARSER
  source lib/efi/Kconfig
  source lib/efi_loader/Kconfig
  source lib/optee/Kconfig
@@ -2332,7 +2332,7 @@
  config TEST_FDTDEC
  	bool "enable fdtdec test"
 diff --git a/lib/Makefile b/lib/Makefile
-index 11b03d1cbec8..8e6fad613067 100644
+index d9b1811f7506..4aa3e2ed2a7e 100644
 --- a/lib/Makefile
 +++ b/lib/Makefile
 @@ -9,6 +9,7 @@ obj-$(CONFIG_EFI) += efi/
@@ -2564,7 +2564,7 @@
 +	return 0;
 +}
 diff --git a/lib/efi_loader/efi_boottime.c b/lib/efi_loader/efi_boottime.c
-index 5bcb8253edba..cffa2c69d621 100644
+index 4da64b5d2962..c68d9ed4f0bd 100644
 --- a/lib/efi_loader/efi_boottime.c
 +++ b/lib/efi_loader/efi_boottime.c
 @@ -23,6 +23,10 @@
@@ -2578,7 +2578,7 @@
  DECLARE_GLOBAL_DATA_PTR;
  
  /* Task priority level */
-@@ -2114,6 +2118,10 @@ static efi_status_t EFIAPI efi_exit_boot_services(efi_handle_t image_handle,
+@@ -2113,6 +2117,10 @@ static efi_status_t EFIAPI efi_exit_boot_services(efi_handle_t image_handle,
  	struct efi_event *evt, *next_event;
  	efi_status_t ret = EFI_SUCCESS;
  
@@ -2589,7 +2589,7 @@
  	EFI_ENTRY("%p, %zx", image_handle, map_key);
  
  	/* Check that the caller has read the current memory map */
-@@ -2174,6 +2182,15 @@ static efi_status_t EFIAPI efi_exit_boot_services(efi_handle_t image_handle,
+@@ -2173,6 +2181,15 @@ static efi_status_t EFIAPI efi_exit_boot_services(efi_handle_t image_handle,
  		dm_remove_devices_flags(DM_REMOVE_ACTIVE_ALL);
  	}
  
@@ -2606,5 +2606,5 @@
  	efi_runtime_detach();
  
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0007-arm_ffa-introducing-armffa-command.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0007-arm_ffa-introducing-armffa-command.patch
index 83f0547..582bc3e 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0007-arm_ffa-introducing-armffa-command.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0007-arm_ffa-introducing-armffa-command.patch
@@ -1,7 +1,7 @@
-From 541b2b51dc77832ab5845cab4762d29976838d36 Mon Sep 17 00:00:00 2001
+From 58358f79d9f8abbdc8bcfc7d08bd0c7c4c90ec84 Mon Sep 17 00:00:00 2001
 From: Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
 Date: Tue, 16 Nov 2021 12:36:27 +0000
-Subject: [PATCH 07/27] arm_ffa: introducing armffa command
+Subject: [PATCH 07/24] arm_ffa: introducing armffa command
 
 A new armffa command is provided as an example of how to use
 the FF-A helper functions to communicate with secure world.
@@ -20,10 +20,10 @@
  create mode 100644 cmd/armffa.c
 
 diff --git a/MAINTAINERS b/MAINTAINERS
-index 14307e6da644..f3fd559da54a 100644
+index d29d7e040764..32fc267fcf13 100644
 --- a/MAINTAINERS
 +++ b/MAINTAINERS
-@@ -235,6 +235,7 @@ F:	include/configs/turris_*.h
+@@ -247,6 +247,7 @@ F:	include/configs/turris_*.h
  ARM FF-A
  M:	Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
  S:	Maintained
@@ -32,10 +32,10 @@
  F:	include/arm_ffa.h
  F:	include/arm_ffa_helper.h
 diff --git a/cmd/Kconfig b/cmd/Kconfig
-index ff50102a89c7..ff124bf4bad0 100644
+index ba2f321ae989..090e668125d5 100644
 --- a/cmd/Kconfig
 +++ b/cmd/Kconfig
-@@ -813,6 +813,16 @@ endmenu
+@@ -873,6 +873,16 @@ endmenu
  
  menu "Device access commands"
  
@@ -53,7 +53,7 @@
  	#depends on FLASH_CFI_DRIVER
  	bool "armflash"
 diff --git a/cmd/Makefile b/cmd/Makefile
-index 166c652d9825..770b846c44e0 100644
+index 5e43a1e022e8..e40f52f1e416 100644
 --- a/cmd/Makefile
 +++ b/cmd/Makefile
 @@ -12,6 +12,8 @@ obj-y += panic.o
@@ -338,5 +338,5 @@
 +	   "devlist\n"
 +	   "	 - displays the arm_ffa device info\n");
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0008-arm_ffa-introducing-MM-communication-with-FF-A.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0008-arm_ffa-introducing-MM-communication-with-FF-A.patch
index 9b1383e..3b82b41 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0008-arm_ffa-introducing-MM-communication-with-FF-A.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0008-arm_ffa-introducing-MM-communication-with-FF-A.patch
@@ -1,7 +1,7 @@
-From 2f09d4a2e87febd7365b9e18d669208ff2c35edc Mon Sep 17 00:00:00 2001
+From ee7c0aee66db53b2372a3b4245a8754dceee804d Mon Sep 17 00:00:00 2001
 From: Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
 Date: Wed, 13 Oct 2021 17:51:44 +0100
-Subject: [PATCH 08/27] arm_ffa: introducing MM communication with FF-A
+Subject: [PATCH 08/24] arm_ffa: introducing MM communication with FF-A
 
 This commit allows to perform MM communication using FF-A transport.
 
@@ -30,10 +30,10 @@
  2 files changed, 273 insertions(+), 6 deletions(-)
 
 diff --git a/lib/efi_loader/Kconfig b/lib/efi_loader/Kconfig
-index e5e35fe51f65..6827b821545e 100644
+index e3f2402d0e8e..37131237af3f 100644
 --- a/lib/efi_loader/Kconfig
 +++ b/lib/efi_loader/Kconfig
-@@ -56,13 +56,23 @@ config EFI_VARIABLE_FILE_STORE
+@@ -60,13 +60,23 @@ config EFI_VARIABLE_FILE_STORE
  	  stored as file /ubootefi.var on the EFI system partition.
  
  config EFI_MM_COMM_TEE
@@ -42,7 +42,7 @@
 +	bool "UEFI variables storage service via the trusted world"
 +	depends on OPTEE || ARM_FFA_TRANSPORT
  	help
-+	  The MM SP (also called partition) can be StandAlonneMM or smm-gateway.
++         the MM SP (also called partition) can be StandAlonneMM or smm-gateway.
 +	  When using the u-boot OP-TEE driver, StandAlonneMM is supported.
 +	  When using the u-boot FF-A  driver, StandAlonneMM and smm-gateway are supported.
 +
@@ -56,9 +56,9 @@
 +	  MM buffer. The data is copied by u-boot to thea shared buffer before issuing
 +	  the door bell event.
 +
- endchoice
- 
- config EFI_VARIABLES_PRESEED
+ config EFI_VARIABLE_NO_STORE
+ 	bool "Don't persist non-volatile UEFI variables"
+ 	help
 diff --git a/lib/efi_loader/efi_variable_tee.c b/lib/efi_loader/efi_variable_tee.c
 index dfef18435dfa..9cb8cfb9c779 100644
 --- a/lib/efi_loader/efi_variable_tee.c
@@ -379,5 +379,5 @@
  
  	/*
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0009-arm_ffa-introducing-test-module-for-UCLASS_FFA.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0009-arm_ffa-introducing-test-module-for-UCLASS_FFA.patch
index 4fb317b..ae4bb02 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0009-arm_ffa-introducing-test-module-for-UCLASS_FFA.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0009-arm_ffa-introducing-test-module-for-UCLASS_FFA.patch
@@ -1,7 +1,7 @@
-From c9a2c457648b732292482fae59a7fd61cefffd33 Mon Sep 17 00:00:00 2001
+From 6f998a5e94e2562b5876b88864876c8b03b88f5a Mon Sep 17 00:00:00 2001
 From: Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
 Date: Tue, 16 Nov 2021 12:38:48 +0000
-Subject: [PATCH 09/27] arm_ffa: introducing test module for UCLASS_FFA
+Subject: [PATCH 09/24] arm_ffa: introducing test module for UCLASS_FFA
 
 This is the test module for the UCLASS_FFA class.
 
@@ -17,10 +17,10 @@
  create mode 100644 test/dm/ffa.h
 
 diff --git a/MAINTAINERS b/MAINTAINERS
-index f3fd559da54a..6510f844fe09 100644
+index 32fc267fcf13..8209dc9319f1 100644
 --- a/MAINTAINERS
 +++ b/MAINTAINERS
-@@ -240,6 +240,7 @@ F:	drivers/arm-ffa/
+@@ -252,6 +252,7 @@ F:	drivers/arm-ffa/
  F:	include/arm_ffa.h
  F:	include/arm_ffa_helper.h
  F:	lib/arm-ffa/
@@ -29,7 +29,7 @@
  ARM FREESCALE IMX
  M:	Stefano Babic <sbabic@denx.de>
 diff --git a/test/dm/Makefile b/test/dm/Makefile
-index d46552fbf320..ddac250cdff0 100644
+index f0a7c97e3d17..09a3403d2f53 100644
 --- a/test/dm/Makefile
 +++ b/test/dm/Makefile
 @@ -79,6 +79,7 @@ obj-$(CONFIG_POWER_DOMAIN) += power-domain.o
@@ -128,5 +128,5 @@
 +
 +#endif /*__TEST_DM_FFA_H */
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0010-arm_ffa-corstone1000-enable-FF-A-and-MM-support.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0010-arm_ffa-corstone1000-enable-FF-A-and-MM-support.patch
index bc96fc4..8cab40c 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0010-arm_ffa-corstone1000-enable-FF-A-and-MM-support.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0010-arm_ffa-corstone1000-enable-FF-A-and-MM-support.patch
@@ -1,7 +1,7 @@
-From ce6598d255113458fd5c9d19bb7469b721e37f6f Mon Sep 17 00:00:00 2001
+From c0b01dff84d74f1b5aaff0d9b594e0aaec16c744 Mon Sep 17 00:00:00 2001
 From: Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
 Date: Tue, 2 Nov 2021 16:44:39 +0000
-Subject: [PATCH 10/27] arm_ffa: corstone1000: enable FF-A and MM support
+Subject: [PATCH 10/24] arm_ffa: corstone1000: enable FF-A and MM support
 
 This commit allows corstone1000 platform to perform
 MM communication between u-boot and the secure world
@@ -53,5 +53,5 @@
  #define CONFIG_SKIP_LOWLEVEL_INIT
  
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0011-efi-corstone1000-introduce-EFI-capsule-update.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0011-efi-corstone1000-introduce-EFI-capsule-update.patch
index cb24ec3..0a829c4 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0011-efi-corstone1000-introduce-EFI-capsule-update.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0011-efi-corstone1000-introduce-EFI-capsule-update.patch
@@ -1,7 +1,7 @@
-From e70d0128090158872847b82b82cdbcf0e2f13885 Mon Sep 17 00:00:00 2001
+From 652259af2f795a5d69c628ae7b1e79d33c234abd Mon Sep 17 00:00:00 2001
 From: Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
 Date: Thu, 11 Nov 2021 16:27:59 +0000
-Subject: [PATCH 11/27] efi: corstone1000: introduce EFI capsule update
+Subject: [PATCH 11/24] efi: corstone1000: introduce EFI capsule update
 
 This commit provides capsule update feature for Corstone1000.
 
@@ -58,10 +58,10 @@
  #define MM_SP_UUID_DATA	\
  	0xed, 0x32, 0xd5, 0x33,	\
 diff --git a/include/efi_loader.h b/include/efi_loader.h
-index 126db279dd3e..01b432e6184b 100644
+index 5b41985244e2..796419b69b40 100644
 --- a/include/efi_loader.h
 +++ b/include/efi_loader.h
-@@ -965,11 +965,11 @@ extern const struct efi_firmware_management_protocol efi_fmp_fit;
+@@ -984,11 +984,11 @@ extern const struct efi_firmware_management_protocol efi_fmp_fit;
  extern const struct efi_firmware_management_protocol efi_fmp_raw;
  
  /* Capsule update */
@@ -76,10 +76,10 @@
  		efi_uintn_t capsule_count,
  		u64 *maximum_capsule_size,
 diff --git a/lib/efi_loader/efi_boottime.c b/lib/efi_loader/efi_boottime.c
-index cffa2c69d621..5c77a40c3ebe 100644
+index c68d9ed4f0bd..f2b5c7834c01 100644
 --- a/lib/efi_loader/efi_boottime.c
 +++ b/lib/efi_loader/efi_boottime.c
-@@ -2096,6 +2096,44 @@ static void efi_exit_caches(void)
+@@ -2095,6 +2095,44 @@ static void efi_exit_caches(void)
  #endif
  }
  
@@ -124,7 +124,7 @@
  /**
   * efi_exit_boot_services() - stop all boot services
   * @image_handle: handle of the loaded image
-@@ -2209,6 +2247,15 @@ static efi_status_t EFIAPI efi_exit_boot_services(efi_handle_t image_handle,
+@@ -2208,6 +2246,15 @@ static efi_status_t EFIAPI efi_exit_boot_services(efi_handle_t image_handle,
  	/* Recalculate CRC32 */
  	efi_update_table_header_crc32(&systab.hdr);
  
@@ -141,10 +141,10 @@
  	efi_set_watchdog(0);
  	WATCHDOG_RESET();
 diff --git a/lib/efi_loader/efi_capsule.c b/lib/efi_loader/efi_capsule.c
-index f00440163d41..c100c1b95298 100644
+index a6b98f066a0b..a0689ba912fc 100644
 --- a/lib/efi_loader/efi_capsule.c
 +++ b/lib/efi_loader/efi_capsule.c
-@@ -24,6 +24,14 @@
+@@ -25,6 +25,14 @@
  #include <crypto/pkcs7_parser.h>
  #include <linux/err.h>
  
@@ -159,7 +159,7 @@
  DECLARE_GLOBAL_DATA_PTR;
  
  const efi_guid_t efi_guid_capsule_report = EFI_CAPSULE_REPORT_GUID;
-@@ -509,6 +517,89 @@ static efi_status_t efi_capsule_update_firmware(
+@@ -512,6 +520,89 @@ static efi_status_t efi_capsule_update_firmware(
  }
  #endif /* CONFIG_EFI_CAPSULE_FIRMWARE_MANAGEMENT */
  
@@ -249,7 +249,7 @@
  /**
   * efi_update_capsule() - process information from operating system
   * @capsule_header_array:	Array of virtual address pointers
-@@ -522,7 +613,7 @@ static efi_status_t efi_capsule_update_firmware(
+@@ -525,7 +616,7 @@ static efi_status_t efi_capsule_update_firmware(
   *
   * Return:			status code
   */
@@ -258,7 +258,7 @@
  		struct efi_capsule_header **capsule_header_array,
  		efi_uintn_t capsule_count,
  		u64 scatter_gather_list)
-@@ -539,6 +630,13 @@ efi_status_t EFIAPI efi_update_capsule(
+@@ -542,6 +633,13 @@ efi_status_t EFIAPI efi_update_capsule(
  		goto out;
  	}
  
@@ -272,7 +272,7 @@
  	ret = EFI_SUCCESS;
  	for (i = 0, capsule = *capsule_header_array; i < capsule_count;
  	     i++, capsule = *(++capsule_header_array)) {
-@@ -551,6 +649,39 @@ efi_status_t EFIAPI efi_update_capsule(
+@@ -554,6 +652,39 @@ efi_status_t EFIAPI efi_update_capsule(
  
  		log_debug("Capsule[%d] (guid:%pUs)\n",
  			  i, &capsule->capsule_guid);
@@ -312,7 +312,7 @@
  		if (!guidcmp(&capsule->capsule_guid,
  			     &efi_guid_firmware_management_capsule_id)) {
  			ret  = efi_capsule_update_firmware(capsule);
-@@ -589,7 +720,7 @@ out:
+@@ -592,7 +723,7 @@ out:
   *
   * Return:			status code
   */
@@ -322,7 +322,7 @@
  		efi_uintn_t capsule_count,
  		u64 *maximum_capsule_size,
 diff --git a/lib/efi_loader/efi_setup.c b/lib/efi_loader/efi_setup.c
-index eee54e48784f..989380d4f8cd 100644
+index 492ecf4cb15c..bfd4687e10b5 100644
 --- a/lib/efi_loader/efi_setup.c
 +++ b/lib/efi_loader/efi_setup.c
 @@ -16,6 +16,13 @@
@@ -355,5 +355,5 @@
  		ret = efi_set_variable_int(u"CapsuleMax",
  					   &efi_guid_capsule_report,
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0012-corstone1000-Update-FFA-shared-buffer-address.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0012-corstone1000-Update-FFA-shared-buffer-address.patch
index 60dc850..d1e13f6 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0012-corstone1000-Update-FFA-shared-buffer-address.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0012-corstone1000-Update-FFA-shared-buffer-address.patch
@@ -1,7 +1,7 @@
-From b2d752b4bbd5b2dc4cb22d2d652a261287505926 Mon Sep 17 00:00:00 2001
+From 1ff229c8e02bdd3c859d581787636cfdf674eec1 Mon Sep 17 00:00:00 2001
 From: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
 Date: Wed, 17 Nov 2021 15:28:06 +0000
-Subject: [PATCH 12/27] corstone1000: Update FFA shared buffer address
+Subject: [PATCH 12/24] corstone1000: Update FFA shared buffer address
 
 FFA shared buffer address changed to 0x02000000.
 
@@ -33,5 +33,5 @@
  #define CONFIG_SYS_INIT_SP_ADDR		(CONFIG_SYS_SDRAM_BASE + 0x03f00000)
  #define CONFIG_SKIP_LOWLEVEL_INIT
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0013-corstone1000-Make-sure-shared-buffer-contents-are-no.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0013-corstone1000-Make-sure-shared-buffer-contents-are-no.patch
index 2495538..2caeb58 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0013-corstone1000-Make-sure-shared-buffer-contents-are-no.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0013-corstone1000-Make-sure-shared-buffer-contents-are-no.patch
@@ -1,7 +1,7 @@
-From 67a755f74716068cfd44a8897c31151fe9ee4328 Mon Sep 17 00:00:00 2001
+From 370422921b2a3f4f7b73ce5b08820c24e82bba19 Mon Sep 17 00:00:00 2001
 From: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
 Date: Thu, 18 Nov 2021 16:42:59 +0000
-Subject: [PATCH 13/27] corstone1000: Make sure shared buffer contents are not
+Subject: [PATCH 13/24] corstone1000: Make sure shared buffer contents are not
  cached
 
 After updating the shared buffer, it is required to flush the cache
@@ -48,5 +48,5 @@
  
  	ffa_ret = ffa_notify_mm_sp();
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0014-arm-corstone1000-fix-unrecognized-filesystem-type.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0014-arm-corstone1000-fix-unrecognized-filesystem-type.patch
index fa201eb..3151741 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0014-arm-corstone1000-fix-unrecognized-filesystem-type.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0014-arm-corstone1000-fix-unrecognized-filesystem-type.patch
@@ -1,7 +1,7 @@
-From e2463e3ef52260b38131085c0901de8708d52693 Mon Sep 17 00:00:00 2001
+From 340ba3fbb0ea388578e30aede92695886f221eaf Mon Sep 17 00:00:00 2001
 From: Rui Miguel Silva <rui.silva@linaro.org>
 Date: Fri, 4 Mar 2022 15:56:09 +0000
-Subject: [PATCH 14/27] arm: corstone1000: fix unrecognized filesystem type
+Subject: [PATCH 14/24] arm: corstone1000: fix unrecognized filesystem type
 
 Some usb sticks are not recognized by usb, just add a
 delay before checking status.
@@ -12,10 +12,10 @@
  1 file changed, 3 insertions(+)
 
 diff --git a/common/usb_storage.c b/common/usb_storage.c
-index c9e2d7343ce2..ae72338323ba 100644
+index eaa31374ef73..79cf4297d4f4 100644
 --- a/common/usb_storage.c
 +++ b/common/usb_storage.c
-@@ -769,6 +769,9 @@ static int usb_stor_BBB_transport(struct scsi_cmd *srb, struct us_data *us)
+@@ -784,6 +784,9 @@ static int usb_stor_BBB_transport(struct scsi_cmd *srb, struct us_data *us)
  st:
  	retry = 0;
  again:
@@ -26,5 +26,5 @@
  	result = usb_bulk_msg(us->pusb_dev, pipein, csw, UMASS_BBB_CSW_SIZE,
  				&actlen, USB_CNTL_TIMEOUT*5);
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0015-efi_capsule-corstone1000-pass-interface-id-and-buffe.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0015-efi_capsule-corstone1000-pass-interface-id-and-buffe.patch
index 0d1912c..4e3f237 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0015-efi_capsule-corstone1000-pass-interface-id-and-buffe.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0015-efi_capsule-corstone1000-pass-interface-id-and-buffe.patch
@@ -1,7 +1,7 @@
-From 81bf9ed7e8e858cef13cfc3d1435c44445e523df Mon Sep 17 00:00:00 2001
+From 4c249de0915750b328e456c34f18546f92850afd Mon Sep 17 00:00:00 2001
 From: Vishnu Banavath <vishnu.banavath@arm.com>
 Date: Fri, 10 Dec 2021 20:03:35 +0000
-Subject: [PATCH 15/27] efi_capsule: corstone1000: pass interface id and buffer
+Subject: [PATCH 15/24] efi_capsule: corstone1000: pass interface id and buffer
  event id using register w4
 
 Initially the interface/event IDs are passed to the SP using register
@@ -39,10 +39,10 @@
  #define CORSTONE1000_CAPSULE_BUFFER_SIZE	(8192) /* 32 MB */
  
 diff --git a/lib/efi_loader/efi_capsule.c b/lib/efi_loader/efi_capsule.c
-index c100c1b95298..17d769803b2a 100644
+index a0689ba912fc..e08e97cf3fb7 100644
 --- a/lib/efi_loader/efi_capsule.c
 +++ b/lib/efi_loader/efi_capsule.c
-@@ -27,6 +27,8 @@
+@@ -28,6 +28,8 @@
  #ifdef CONFIG_TARGET_CORSTONE1000
  #include <arm_ffa_helper.h>
  #include <cpu_func.h>
@@ -51,7 +51,7 @@
  
  void *__efi_runtime_data corstone1000_capsule_buf; /* capsule shared buffer virtual address */
  efi_guid_t corstone1000_capsule_guid = EFI_CORSTONE1000_CAPSULE_ID_GUID;
-@@ -587,11 +589,12 @@ static int __efi_runtime efi_corstone1000_buffer_ready_event(u32 capsule_image_s
+@@ -590,11 +592,12 @@ static int __efi_runtime efi_corstone1000_buffer_ready_event(u32 capsule_image_s
  	func_data.data0 = &part_id;
  
  	/*
@@ -69,5 +69,5 @@
  	func_data.data1_size = sizeof(msg);
  	func_data.data1 = &msg;
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0016-efi_boottime-corstone1000-pass-interface-id-and-kern.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0016-efi_boottime-corstone1000-pass-interface-id-and-kern.patch
index f460fad..e134f23 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0016-efi_boottime-corstone1000-pass-interface-id-and-kern.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0016-efi_boottime-corstone1000-pass-interface-id-and-kern.patch
@@ -1,7 +1,7 @@
-From 10d0ffc26ddcecd83921c2b3b37cb4eff54a154f Mon Sep 17 00:00:00 2001
+From e5e1cf36cb7b77a5bb526f1744c0c77164374ca3 Mon Sep 17 00:00:00 2001
 From: Vishnu Banavath <vishnu.banavath@arm.com>
 Date: Fri, 10 Dec 2021 20:10:41 +0000
-Subject: [PATCH 16/27] efi_boottime: corstone1000: pass interface id and
+Subject: [PATCH 16/24] efi_boottime: corstone1000: pass interface id and
  kernel event id using register w4
 
 Initially the interface/event IDs are passed to the SP using register
@@ -21,7 +21,7 @@
  1 file changed, 10 insertions(+), 3 deletions(-)
 
 diff --git a/lib/efi_loader/efi_boottime.c b/lib/efi_loader/efi_boottime.c
-index 5c77a40c3ebe..b58a8c98fd05 100644
+index f2b5c7834c01..140d0f4f71da 100644
 --- a/lib/efi_loader/efi_boottime.c
 +++ b/lib/efi_loader/efi_boottime.c
 @@ -27,6 +27,11 @@
@@ -36,7 +36,7 @@
  DECLARE_GLOBAL_DATA_PTR;
  
  /* Task priority level */
-@@ -2121,10 +2126,12 @@ static int efi_corstone1000_kernel_started_event(void)
+@@ -2120,10 +2125,12 @@ static int efi_corstone1000_kernel_started_event(void)
  	func_data.data0 = &part_id;
  
  	/*
@@ -53,5 +53,5 @@
  	func_data.data1_size = sizeof(msg);
  	func_data.data1 = &msg;
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0017-efi_loader-corstone1000-remove-guid-check-from-corst.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0017-efi_loader-corstone1000-remove-guid-check-from-corst.patch
index fa6ab32..b5a1715 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0017-efi_loader-corstone1000-remove-guid-check-from-corst.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0017-efi_loader-corstone1000-remove-guid-check-from-corst.patch
@@ -1,7 +1,7 @@
-From c463798489e41725f8ba33debeedc7c4011cda38 Mon Sep 17 00:00:00 2001
+From 596cf4d04580b191d2f4f6082000534bdab13791 Mon Sep 17 00:00:00 2001
 From: Vishnu Banavath <vishnu.banavath@arm.com>
 Date: Sat, 11 Dec 2021 13:23:55 +0000
-Subject: [PATCH 17/27] efi_loader: corstone1000: remove guid check from
+Subject: [PATCH 17/24] efi_loader: corstone1000: remove guid check from
  corstone1000 config option
 
 Use generic fmp guid and no separte check is required for
@@ -14,10 +14,10 @@
  1 file changed, 1 insertion(+), 15 deletions(-)
 
 diff --git a/lib/efi_loader/efi_capsule.c b/lib/efi_loader/efi_capsule.c
-index 17d769803b2a..939040d2755e 100644
+index e08e97cf3fb7..891143c33909 100644
 --- a/lib/efi_loader/efi_capsule.c
 +++ b/lib/efi_loader/efi_capsule.c
-@@ -654,12 +654,6 @@ efi_status_t __efi_runtime EFIAPI efi_update_capsule(
+@@ -657,12 +657,6 @@ efi_status_t __efi_runtime EFIAPI efi_update_capsule(
  			  i, &capsule->capsule_guid);
  
  #if CONFIG_IS_ENABLED(TARGET_CORSTONE1000)
@@ -30,7 +30,7 @@
  		if (efi_size_in_pages(capsule->capsule_image_size) >
  		    CORSTONE1000_CAPSULE_BUFFER_SIZE) {
  			log_err("Corstone1000: Capsule data size exceeds the shared buffer size\n");
-@@ -685,15 +679,7 @@ efi_status_t __efi_runtime EFIAPI efi_update_capsule(
+@@ -688,15 +682,7 @@ efi_status_t __efi_runtime EFIAPI efi_update_capsule(
  		goto out;
  #endif
  
@@ -48,5 +48,5 @@
  			goto out;
  	}
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0018-arm_ffa-removing-the-cast-when-using-binary-OR-on-FI.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0018-arm_ffa-removing-the-cast-when-using-binary-OR-on-FI.patch
index 4ee10a0..f858a26 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0018-arm_ffa-removing-the-cast-when-using-binary-OR-on-FI.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0018-arm_ffa-removing-the-cast-when-using-binary-OR-on-FI.patch
@@ -1,7 +1,7 @@
-From 1cfca60850727448bdbfe720d98d9e0d4523f6aa Mon Sep 17 00:00:00 2001
+From 460406b46b51b6c585788001147a8961c95cc73c Mon Sep 17 00:00:00 2001
 From: Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
 Date: Sat, 11 Dec 2021 21:05:10 +0000
-Subject: [PATCH 18/27] arm_ffa: removing the cast when using binary OR on
+Subject: [PATCH 18/24] arm_ffa: removing the cast when using binary OR on
  FIELD_PREP macros
 
 When the GENMASK used is above 16-bits wide a u16 cast will cause
@@ -36,5 +36,5 @@
  /* The FF-A SMC function prototype definition */
  
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0019-Return-proper-error-code-when-rx-buffer-is-larger.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0019-Return-proper-error-code-when-rx-buffer-is-larger.patch
deleted file mode 100644
index 21a89a4..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0019-Return-proper-error-code-when-rx-buffer-is-larger.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From 7db27eeaba0fd5ddb1e49977bb7e342a1980aa3d Mon Sep 17 00:00:00 2001
-From: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
-Date: Sun, 12 Dec 2021 17:51:17 +0000
-Subject: [PATCH 19/27] Return proper error code when rx buffer is larger
-
-ffa_mm_communicate should return EFI_BUFFER_TOO_SMALL when
-the buffer received from the secure world is larger than the
-comm buffer as this value is forwarded by mm_communicate.
-
-Signed-off-by: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
-Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
----
- lib/efi_loader/efi_variable_tee.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/lib/efi_loader/efi_variable_tee.c b/lib/efi_loader/efi_variable_tee.c
-index b6be2b54a030..38655a9dbb7c 100644
---- a/lib/efi_loader/efi_variable_tee.c
-+++ b/lib/efi_loader/efi_variable_tee.c
-@@ -358,7 +358,7 @@ static efi_status_t __efi_runtime ffa_mm_communicate(void *comm_buf, ulong comm_
- 
- 		if (rx_data_size > comm_buf_size) {
- 			unmap_sysmem(virt_shared_buf);
--			return EFI_OUT_OF_RESOURCES;
-+			return EFI_BUFFER_TOO_SMALL;
- 		}
- 
- 		efi_memcpy_runtime(comm_buf, virt_shared_buf, rx_data_size);
--- 
-2.30.2
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0019-Use-correct-buffer-size.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0019-Use-correct-buffer-size.patch
new file mode 100644
index 0000000..af857f4
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0019-Use-correct-buffer-size.patch
@@ -0,0 +1,40 @@
+From 936c857add300f41bc58c300793a0e10b48ff69f Mon Sep 17 00:00:00 2001
+From: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
+Date: Mon, 13 Dec 2021 15:25:23 +0000
+Subject: [PATCH 19/24] Use correct buffer size
+
+The comm buffer created has additional 4 bytes length which
+needs to be trimmed. This change will reduce the size of the
+comm buffer to what is expected.
+
+Signed-off-by: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
+Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
+---
+ include/mm_communication.h | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/include/mm_communication.h b/include/mm_communication.h
+index e65fbde60d0a..bb9919095649 100644
+--- a/include/mm_communication.h
++++ b/include/mm_communication.h
+@@ -123,7 +123,7 @@ struct __packed efi_mm_communicate_header {
+  *
+  * Defined in EDK2 as SMM_VARIABLE_COMMUNICATE_HEADER.
+  */
+-struct smm_variable_communicate_header {
++struct __packed smm_variable_communicate_header {
+ 	efi_uintn_t  function;
+ 	efi_status_t ret_status;
+ 	u8           data[];
+@@ -145,7 +145,7 @@ struct smm_variable_communicate_header {
+  * Defined in EDK2 as SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE.
+  *
+  */
+-struct smm_variable_access {
++struct __packed smm_variable_access {
+ 	efi_guid_t  guid;
+ 	efi_uintn_t data_size;
+ 	efi_uintn_t name_size;
+-- 
+2.37.1
+
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0020-Use-correct-buffer-size.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0020-Use-correct-buffer-size.patch
deleted file mode 100644
index 54328a7..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0020-Use-correct-buffer-size.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From 9ad9ead58e8e9e4f9e7a283c916421b443b424ce Mon Sep 17 00:00:00 2001
-From: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
-Date: Mon, 13 Dec 2021 15:25:23 +0000
-Subject: [PATCH 20/27] Use correct buffer size
-
-The comm buffer created has additional 4 bytes length which
-needs to be trimmed. This change will reduce the size of the
-comm buffer to what is expected.
-
-Signed-off-by: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
-Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
----
- include/mm_communication.h | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/include/mm_communication.h b/include/mm_communication.h
-index e65fbde60d0a..bb9919095649 100644
---- a/include/mm_communication.h
-+++ b/include/mm_communication.h
-@@ -123,7 +123,7 @@ struct __packed efi_mm_communicate_header {
-  *
-  * Defined in EDK2 as SMM_VARIABLE_COMMUNICATE_HEADER.
-  */
--struct smm_variable_communicate_header {
-+struct __packed smm_variable_communicate_header {
- 	efi_uintn_t  function;
- 	efi_status_t ret_status;
- 	u8           data[];
-@@ -145,7 +145,7 @@ struct smm_variable_communicate_header {
-  * Defined in EDK2 as SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE.
-  *
-  */
--struct smm_variable_access {
-+struct __packed smm_variable_access {
- 	efi_guid_t  guid;
- 	efi_uintn_t data_size;
- 	efi_uintn_t name_size;
--- 
-2.30.2
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0020-efi_loader-populate-ESRT-table-if-EFI_ESRT-config-op.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0020-efi_loader-populate-ESRT-table-if-EFI_ESRT-config-op.patch
new file mode 100644
index 0000000..1204f2a
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0020-efi_loader-populate-ESRT-table-if-EFI_ESRT-config-op.patch
@@ -0,0 +1,36 @@
+From 5c57ef351882afebde479de430acf2c4f8fdefc8 Mon Sep 17 00:00:00 2001
+From: Vishnu Banavath <vishnu.banavath@arm.com>
+Date: Fri, 17 Dec 2021 19:49:02 +0000
+Subject: [PATCH 20/24] efi_loader: populate ESRT table if EFI_ESRT config
+ option is set
+
+This change is to call efi_esrt_populate function if CONFIG_EFI_ESRT
+is set. This will populte esrt table with firmware image info
+
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
+---
+ lib/efi_loader/efi_capsule.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/lib/efi_loader/efi_capsule.c b/lib/efi_loader/efi_capsule.c
+index 891143c33909..7db78f1f7648 100644
+--- a/lib/efi_loader/efi_capsule.c
++++ b/lib/efi_loader/efi_capsule.c
+@@ -679,6 +679,13 @@ efi_status_t __efi_runtime EFIAPI efi_update_capsule(
+ 			ret = EFI_SUCCESS;
+ 		}
+ 
++		if (IS_ENABLED(CONFIG_EFI_ESRT)) {
++			/* Rebuild the ESRT to reflect any updated FW images. */
++			ret = efi_esrt_populate();
++	               if (ret != EFI_SUCCESS)
++				log_warning("EFI Capsule: failed to update ESRT\n");
++	       }
++
+ 		goto out;
+ #endif
+ 
+-- 
+2.37.1
+
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0021-Update-comm_buf-when-EFI_BUFFER_TOO_SMALL.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0021-Update-comm_buf-when-EFI_BUFFER_TOO_SMALL.patch
deleted file mode 100644
index c7ac38f..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0021-Update-comm_buf-when-EFI_BUFFER_TOO_SMALL.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-From b81214dea7056c3877aa9eb775557dc4702660ec Mon Sep 17 00:00:00 2001
-From: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
-Date: Sun, 12 Dec 2021 17:58:08 +0000
-Subject: [PATCH 21/27] Update comm_buf when EFI_BUFFER_TOO_SMALL
-
-When the received buffer is larger than the comm buffer,
-the contents of the shared buffer which can fit in the
-comm buffer should be read before returning.
-
-Signed-off-by: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
-Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
----
- lib/efi_loader/efi_variable_tee.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/lib/efi_loader/efi_variable_tee.c b/lib/efi_loader/efi_variable_tee.c
-index 38655a9dbb7c..67743d1f8fce 100644
---- a/lib/efi_loader/efi_variable_tee.c
-+++ b/lib/efi_loader/efi_variable_tee.c
-@@ -357,6 +357,7 @@ static efi_status_t __efi_runtime ffa_mm_communicate(void *comm_buf, ulong comm_
- 			sizeof(size_t);
- 
- 		if (rx_data_size > comm_buf_size) {
-+			efi_memcpy_runtime(comm_buf, virt_shared_buf, comm_buf_size);
- 			unmap_sysmem(virt_shared_buf);
- 			return EFI_BUFFER_TOO_SMALL;
- 		}
--- 
-2.30.2
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0021-efi_firmware-add-get_image_info-for-corstone1000.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0021-efi_firmware-add-get_image_info-for-corstone1000.patch
new file mode 100644
index 0000000..8f86b65
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0021-efi_firmware-add-get_image_info-for-corstone1000.patch
@@ -0,0 +1,124 @@
+From fcd1dc670d83bd7e7528370d0d6f168bfb44054d Mon Sep 17 00:00:00 2001
+From: Vishnu Banavath <vishnu.banavath@arm.com>
+Date: Fri, 17 Dec 2021 19:50:25 +0000
+Subject: [PATCH 21/24] efi_firmware: add get_image_info for corstone1000
+
+This change is to populate get_image_info which eventually
+will be populated in ESRT table
+
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+
+%% original patch: 0047-efi_firmware-add-get_image_info-for-corstone1000.patch
+
+Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
+---
+ lib/efi_loader/efi_firmware.c | 71 ++++++++++++++++++++++++++++++++++-
+ 1 file changed, 70 insertions(+), 1 deletion(-)
+
+diff --git a/lib/efi_loader/efi_firmware.c b/lib/efi_loader/efi_firmware.c
+index 30cafd15caac..af43d4502f92 100644
+--- a/lib/efi_loader/efi_firmware.c
++++ b/lib/efi_loader/efi_firmware.c
+@@ -17,11 +17,69 @@
+ 
+ #define FMP_PAYLOAD_HDR_SIGNATURE	SIGNATURE_32('M', 'S', 'S', '1')
+ 
++#if CONFIG_IS_ENABLED(TARGET_CORSTONE1000)
++#define EFI_FIRMWARE_IMAGE_TYPE_UBOOT_RAW_GUID \
++	EFI_GUID(0xe2bb9c06, 0x70e9, 0x4b14, 0x97, 0xa3, \
++		 0x5a, 0x79, 0x13, 0x17, 0x6e, 0x3f)
++
++ const efi_guid_t efi_firmware_image_type_uboot_raw =
++				EFI_FIRMWARE_IMAGE_TYPE_UBOOT_RAW_GUID;
++
++static efi_status_t efi_corstone1000_img_info_get (
++	efi_uintn_t *image_info_size,
++	struct efi_firmware_image_descriptor *image_info,
++	u32 *descriptor_version,
++	u8 *descriptor_count,
++	efi_uintn_t *descriptor_size,
++	u32 *package_version,
++	u16 **package_version_name,
++	const efi_guid_t *image_type)
++{
++	int i = 0;
++
++	*image_info_size = sizeof(*image_info);
++	*descriptor_version = EFI_FIRMWARE_IMAGE_DESCRIPTOR_VERSION;
++	*descriptor_count = 1;//dfu_num;
++	*descriptor_size = sizeof(*image_info);
++	if (package_version)
++		*package_version = 0xffffffff; /* not supported */
++	if(package_version_name)
++		*package_version_name = NULL; /* not supported */
++
++	if(image_info == NULL) {
++		log_warning("image_info is null\n");
++		return EFI_BUFFER_TOO_SMALL;
++	}
++
++	image_info[i].image_index = i;
++	image_info[i].image_type_id = *image_type;
++	image_info[i].image_id = 0;
++	image_info[i].image_id_name = "wic";
++	image_info[i].version = 1;
++	image_info[i].version_name = NULL;
++	image_info[i].size = 0x1000;
++	image_info[i].attributes_supported = IMAGE_ATTRIBUTE_IMAGE_UPDATABLE |
++					     IMAGE_ATTRIBUTE_AUTHENTICATION_REQUIRED;
++	image_info[i].attributes_setting = IMAGE_ATTRIBUTE_IMAGE_UPDATABLE;
++	/* Check if the capsule authentication is enabled */
++	if (IS_ENABLED(CONFIG_EFI_CAPSULE_AUTHENTICATE))
++		image_info[0].attributes_setting |=
++			IMAGE_ATTRIBUTE_AUTHENTICATION_REQUIRED;
++	image_info[i].lowest_supported_image_version = 0;
++	image_info[i].last_attempt_version = 0;
++	image_info[i].last_attempt_status = LAST_ATTEMPT_STATUS_SUCCESS;
++	image_info[i].hardware_instance = 1;
++	image_info[i].dependencies = NULL;
++
++	return EFI_SUCCESS;
++}
++#endif
++
+ /**
+  * struct fmp_payload_header - EDK2 header for the FMP payload
+  *
+  * This structure describes the header which is preprended to the
+- * FMP payload by the edk2 capsule generation scripts.
++ * FMP payload by the edk1 capsule generation scripts.
+  *
+  * @signature:			Header signature used to identify the header
+  * @header_size:		Size of the structure
+@@ -285,10 +343,18 @@ efi_status_t EFIAPI efi_firmware_get_image_info(
+ 	     !descriptor_size || !package_version || !package_version_name))
+ 		return EFI_EXIT(EFI_INVALID_PARAMETER);
+ 
++#if CONFIG_IS_ENABLED(TARGET_CORSTONE1000)
++	ret = efi_corstone1000_img_info_get(image_info_size, image_info,
++			       descriptor_version, descriptor_count,
++			       descriptor_size,
++			       package_version, package_version_name,
++			       &efi_firmware_image_type_uboot_raw);
++#else
+ 	ret = efi_fill_image_desc_array(image_info_size, image_info,
+ 					descriptor_version, descriptor_count,
+ 					descriptor_size, package_version,
+ 					package_version_name);
++#endif
+ 
+ 	return EFI_EXIT(ret);
+ }
+@@ -401,6 +467,9 @@ efi_status_t EFIAPI efi_firmware_raw_set_image(
+ 	if (status != EFI_SUCCESS)
+ 		return EFI_EXIT(status);
+ 
++#if CONFIG_IS_ENABLED(TARGET_CORSTONE1000)
++	return EFI_EXIT(EFI_SUCCESS);
++#endif
+ 	if (dfu_write_by_alt(image_index - 1, (void *)image, image_size,
+ 			     NULL, NULL))
+ 		return EFI_EXIT(EFI_DEVICE_ERROR);
+-- 
+2.37.1
+
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0022-efi_loader-populate-ESRT-table-if-EFI_ESRT-config-op.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0022-efi_loader-populate-ESRT-table-if-EFI_ESRT-config-op.patch
deleted file mode 100644
index aaea20e..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0022-efi_loader-populate-ESRT-table-if-EFI_ESRT-config-op.patch
+++ /dev/null
@@ -1,36 +0,0 @@
-From 5fec641015f8f1ca80f55f05b5e1f67653321303 Mon Sep 17 00:00:00 2001
-From: Vishnu Banavath <vishnu.banavath@arm.com>
-Date: Fri, 17 Dec 2021 19:49:02 +0000
-Subject: [PATCH 22/27] efi_loader: populate ESRT table if EFI_ESRT config
- option is set
-
-This change is to call efi_esrt_populate function if CONFIG_EFI_ESRT
-is set. This will populte esrt table with firmware image info
-
-Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
-Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
----
- lib/efi_loader/efi_capsule.c | 7 +++++++
- 1 file changed, 7 insertions(+)
-
-diff --git a/lib/efi_loader/efi_capsule.c b/lib/efi_loader/efi_capsule.c
-index 939040d2755e..790d2ba8fe19 100644
---- a/lib/efi_loader/efi_capsule.c
-+++ b/lib/efi_loader/efi_capsule.c
-@@ -676,6 +676,13 @@ efi_status_t __efi_runtime EFIAPI efi_update_capsule(
- 			ret = EFI_SUCCESS;
- 		}
- 
-+		if (IS_ENABLED(CONFIG_EFI_ESRT)) {
-+			/* Rebuild the ESRT to reflect any updated FW images. */
-+			ret = efi_esrt_populate();
-+	               if (ret != EFI_SUCCESS)
-+				log_warning("EFI Capsule: failed to update ESRT\n");
-+	       }
-+
- 		goto out;
- #endif
- 
--- 
-2.30.2
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0022-efi_loader-send-bootcomplete-message-to-secure-encla.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0022-efi_loader-send-bootcomplete-message-to-secure-encla.patch
new file mode 100644
index 0000000..1dc0455
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0022-efi_loader-send-bootcomplete-message-to-secure-encla.patch
@@ -0,0 +1,191 @@
+From 902d5c499b6627a505986d298986a4ac430592b8 Mon Sep 17 00:00:00 2001
+From: Vishnu Banavath <vishnu.banavath@arm.com>
+Date: Wed, 5 Jan 2022 17:56:09 +0000
+Subject: [PATCH 22/24] efi_loader: send bootcomplete message to secure enclave
+
+On corstone1000 platform, Secure Enclave will be expecting
+an event from uboot when it performs capsule update. Previously,
+an event is sent at exitbootservice level. This will create a problem
+when user wants to interrupt at UEFI shell, hence, it is required
+to send an uboot efi initialized event at efi sub-system initialization
+stage.
+
+Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
+---
+ include/configs/corstone1000.h |  2 +-
+ lib/efi_loader/efi_boottime.c  | 49 ----------------------------------
+ lib/efi_loader/efi_firmware.c  |  2 +-
+ lib/efi_loader/efi_setup.c     | 48 +++++++++++++++++++++++++++++++++
+ 4 files changed, 50 insertions(+), 51 deletions(-)
+
+diff --git a/include/configs/corstone1000.h b/include/configs/corstone1000.h
+index a7445e61348b..06b605e43bdf 100644
+--- a/include/configs/corstone1000.h
++++ b/include/configs/corstone1000.h
+@@ -22,7 +22,7 @@
+ 
+ /* Notification events used with SE Proxy update service */
+ #define CORSTONE1000_BUFFER_READY_EVT		(0x1)
+-#define CORSTONE1000_KERNEL_STARTED_EVT		(0x2)
++#define CORSTONE1000_UBOOT_EFI_STARTED_EVT	(0x2)
+ 
+ #define PREP_SEPROXY_SVC_ID_MASK	GENMASK(31, 16)
+ #define PREP_SEPROXY_SVC_ID(x)	 (FIELD_PREP(PREP_SEPROXY_SVC_ID_MASK, (x)))
+diff --git a/lib/efi_loader/efi_boottime.c b/lib/efi_loader/efi_boottime.c
+index 140d0f4f71da..6b9f5cf272b8 100644
+--- a/lib/efi_loader/efi_boottime.c
++++ b/lib/efi_loader/efi_boottime.c
+@@ -2100,46 +2100,6 @@ static void efi_exit_caches(void)
+ #endif
+ }
+ 
+-#if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
+-/**
+- * efi_corstone1000_kernel_started_event - notifies SE Proxy FW update service
+- *
+- * This function notifies the SE Proxy update service that the kernel has already started
+- *
+- * Return:
+- *
+- * 0: on success, otherwise failure
+- */
+-static int efi_corstone1000_kernel_started_event(void)
+-{
+-	struct ffa_interface_data func_data = {0};
+-	struct ffa_send_direct_data msg = {0};
+-	u16 part_id = CORSTONE1000_SEPROXY_PART_ID;
+-
+-	log_debug("[%s]\n", __func__);
+-
+-	/*
+-	 * telling the driver which partition to use
+-	 */
+-	func_data.data0_size = sizeof(part_id);
+-	func_data.data0 = &part_id;
+-
+-	/*
+-	 * setting the kernel started  event arguments:
+-	 * setting capsule update interface ID(31:16)
+-	 * the kernel started event ID(15:0)
+-	 */
+-	msg.a4 = PREP_SEPROXY_SVC_ID(CORSTONE1000_SEPROXY_UPDATE_SVC_ID) |
+-	PREP_SEPROXY_EVT(CORSTONE1000_KERNEL_STARTED_EVT);
+-
+-	func_data.data1_size = sizeof(msg);
+-	func_data.data1 = &msg;
+-
+-	return ffa_helper_msg_send_direct_req(&func_data);
+-}
+-
+-#endif
+-
+ /**
+  * efi_exit_boot_services() - stop all boot services
+  * @image_handle: handle of the loaded image
+@@ -2253,15 +2213,6 @@ static efi_status_t EFIAPI efi_exit_boot_services(efi_handle_t image_handle,
+ 	/* Recalculate CRC32 */
+ 	efi_update_table_header_crc32(&systab.hdr);
+ 
+-#if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
+-	/* Notifying SE Proxy FW update service */
+-	ffa_ret = efi_corstone1000_kernel_started_event();
+-	if (ffa_ret)
+-		debug("[efi_boottime][ERROR]: Failure to notify SE Proxy FW update service\n");
+-	else
+-		debug("[efi_boottime][INFO]: SE Proxy FW update service notified\n");
+-#endif
+-
+ 	/* Give the payload some time to boot */
+ 	efi_set_watchdog(0);
+ 	WATCHDOG_RESET();
+diff --git a/lib/efi_loader/efi_firmware.c b/lib/efi_loader/efi_firmware.c
+index af43d4502f92..25f427b93669 100644
+--- a/lib/efi_loader/efi_firmware.c
++++ b/lib/efi_loader/efi_firmware.c
+@@ -47,7 +47,7 @@ static efi_status_t efi_corstone1000_img_info_get (
+ 		*package_version_name = NULL; /* not supported */
+ 
+ 	if(image_info == NULL) {
+-		log_warning("image_info is null\n");
++		log_info("image_info is null\n");
+ 		return EFI_BUFFER_TOO_SMALL;
+ 	}
+ 
+diff --git a/lib/efi_loader/efi_setup.c b/lib/efi_loader/efi_setup.c
+index bfd4687e10b5..a20128e9b582 100644
+--- a/lib/efi_loader/efi_setup.c
++++ b/lib/efi_loader/efi_setup.c
+@@ -17,6 +17,9 @@
+ efi_status_t efi_obj_list_initialized = OBJ_LIST_NOT_INITIALIZED;
+ 
+ #if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
++#include <linux/bitfield.h>
++#include <linux/bitops.h>
++#include <arm_ffa_helper.h>
+ /**
+  * efi_corstone1000_alloc_capsule_shared_buf - allocate capsule shared buffer
+  */
+@@ -126,6 +129,44 @@ static efi_status_t efi_init_secure_boot(void)
+ }
+ #endif /* CONFIG_EFI_SECURE_BOOT */
+ 
++#if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
++/**
++ * efi_corstone1000_uboot-efi_started_event - notifies SE Proxy FW update service
++ *
++ * This function notifies the SE Proxy update service that uboot efi has already started
++ *
++ * Return:
++ *
++ * 0: on success, otherwise failure
++ * */
++static int efi_corstone1000_uboot_efi_started_event(void)
++{
++	struct ffa_interface_data func_data = {0};
++	struct ffa_send_direct_data msg = {0};
++	u16 part_id = CORSTONE1000_SEPROXY_PART_ID;
++
++	log_debug("[%s]\n", __func__);
++
++	/*
++	 * telling the driver which partition to use
++	 */
++	func_data.data0_size = sizeof(part_id);
++	func_data.data0 = &part_id;
++	/*
++	 * setting the uboot efi subsystem started  event arguments:
++	 * setting capsule update interface ID(31:16)
++	 * the uboot efi subsystem started event ID(15:0)
++	 */
++	msg.a4 = PREP_SEPROXY_SVC_ID(CORSTONE1000_SEPROXY_UPDATE_SVC_ID) |
++			PREP_SEPROXY_EVT(CORSTONE1000_UBOOT_EFI_STARTED_EVT);
++
++	func_data.data1_size = sizeof(msg);
++	func_data.data1 = &msg;
++
++	return ffa_helper_msg_send_direct_req(&func_data);
++}
++#endif
++
+ /**
+  * efi_init_capsule - initialize capsule update state
+  *
+@@ -134,8 +175,15 @@ static efi_status_t efi_init_secure_boot(void)
+ static efi_status_t efi_init_capsule(void)
+ {
+ 	efi_status_t ret = EFI_SUCCESS;
++	int ffa_ret;
+ 
+ #if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
++	ffa_ret = efi_corstone1000_uboot_efi_started_event();
++	if (ffa_ret)
++		debug("[efi_boottime][ERROR]: Failure to notify SE Proxy FW update service\n");
++	else
++		debug("[efi_boottime][INFO]: SE Proxy FW update service notified\n");
++
+ 	ret = efi_corstone1000_alloc_capsule_shared_buf();
+ 	if (ret != EFI_SUCCESS) {
+ 		printf("EFI: Corstone-1000: cannot allocate caspsule shared buffer\n");
+-- 
+2.37.1
+
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0023-efi_firmware-add-get_image_info-for-corstone1000.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0023-efi_firmware-add-get_image_info-for-corstone1000.patch
deleted file mode 100644
index c86b658..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0023-efi_firmware-add-get_image_info-for-corstone1000.patch
+++ /dev/null
@@ -1,121 +0,0 @@
-From 34fadec4f659248a6020676f5894895977ccf79d Mon Sep 17 00:00:00 2001
-From: Vishnu Banavath <vishnu.banavath@arm.com>
-Date: Fri, 17 Dec 2021 19:50:25 +0000
-Subject: [PATCH 23/27] efi_firmware: add get_image_info for corstone1000
-
-This change is to populate get_image_info which eventually
-will be populated in ESRT table
-
-Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
-
-%% original patch: 0047-efi_firmware-add-get_image_info-for-corstone1000.patch
-
-Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
----
- lib/efi_loader/efi_firmware.c | 64 ++++++++++++++++++++++++++++++++++-
- 1 file changed, 63 insertions(+), 1 deletion(-)
-
-diff --git a/lib/efi_loader/efi_firmware.c b/lib/efi_loader/efi_firmware.c
-index a5ff32f121f4..9eb89849b28d 100644
---- a/lib/efi_loader/efi_firmware.c
-+++ b/lib/efi_loader/efi_firmware.c
-@@ -241,6 +241,7 @@ const efi_guid_t efi_firmware_image_type_uboot_fit =
-  *
-  * Return		status code
-  */
-+
- static
- efi_status_t EFIAPI efi_firmware_fit_get_image_info(
- 	struct efi_firmware_management_protocol *this,
-@@ -332,6 +333,56 @@ const struct efi_firmware_management_protocol efi_fmp_fit = {
- const efi_guid_t efi_firmware_image_type_uboot_raw =
- 	EFI_FIRMWARE_IMAGE_TYPE_UBOOT_RAW_GUID;
- 
-+#if CONFIG_IS_ENABLED(TARGET_CORSTONE1000)
-+static efi_status_t efi_corstone1000_img_info_get (
-+	efi_uintn_t *image_info_size,
-+	struct efi_firmware_image_descriptor *image_info,
-+	u32 *descriptor_version,
-+	u8 *descriptor_count,
-+	efi_uintn_t *descriptor_size,
-+	u32 *package_version,
-+	u16 **package_version_name,
-+	const efi_guid_t *image_type)
-+{
-+	int i = 0;
-+
-+	*image_info_size = sizeof(*image_info);
-+	*descriptor_version = EFI_FIRMWARE_IMAGE_DESCRIPTOR_VERSION;
-+	*descriptor_count = 1;//dfu_num;
-+	*descriptor_size = sizeof(*image_info);
-+	if (package_version)
-+		*package_version = 0xffffffff; /* not supported */
-+	if(package_version_name)
-+		*package_version_name = NULL; /* not supported */
-+
-+	if(image_info == NULL) {
-+		log_warning("image_info is null\n");
-+		return EFI_BUFFER_TOO_SMALL;
-+	}
-+
-+	image_info[i].image_index = i;
-+	image_info[i].image_type_id = *image_type;
-+	image_info[i].image_id = 0;
-+	image_info[i].image_id_name = "wic";
-+	image_info[i].version = 1;
-+	image_info[i].version_name = NULL;
-+	image_info[i].size = 0x1000;
-+	image_info[i].attributes_supported = IMAGE_ATTRIBUTE_IMAGE_UPDATABLE |
-+					     IMAGE_ATTRIBUTE_AUTHENTICATION_REQUIRED;
-+	image_info[i].attributes_setting = IMAGE_ATTRIBUTE_IMAGE_UPDATABLE;
-+	/* Check if the capsule authentication is enabled */
-+	if (IS_ENABLED(CONFIG_EFI_CAPSULE_AUTHENTICATE))
-+		image_info[0].attributes_setting |=
-+			IMAGE_ATTRIBUTE_AUTHENTICATION_REQUIRED;
-+	image_info[i].lowest_supported_image_version = 0;
-+	image_info[i].last_attempt_version = 0;
-+	image_info[i].last_attempt_status = LAST_ATTEMPT_STATUS_SUCCESS;
-+	image_info[i].hardware_instance = 1;
-+	image_info[i].dependencies = NULL;
-+
-+	return EFI_SUCCESS;
-+}
-+#endif
- /**
-  * efi_firmware_raw_get_image_info - return information about the current
- 				     firmware image
-@@ -376,12 +427,20 @@ efi_status_t EFIAPI efi_firmware_raw_get_image_info(
- 	     !descriptor_size || !package_version || !package_version_name))
- 		return EFI_EXIT(EFI_INVALID_PARAMETER);
- 
--	ret = efi_get_dfu_info(image_info_size, image_info,
-+#if CONFIG_IS_ENABLED(TARGET_CORSTONE1000)
-+	ret = efi_corstone1000_img_info_get(image_info_size, image_info,
- 			       descriptor_version, descriptor_count,
- 			       descriptor_size,
- 			       package_version, package_version_name,
- 			       &efi_firmware_image_type_uboot_raw);
-+#else
- 
-+	ret = efi_get_dfu_info(image_info_size, image_info,
-+			       descriptor_version, descriptor_count,
-+			       descriptor_size,
-+			       package_version, package_version_name,
-+			       &efi_firmware_image_type_uboot_raw);
-+#endif
- 	return EFI_EXIT(ret);
- }
- 
-@@ -462,6 +521,9 @@ efi_status_t EFIAPI efi_firmware_raw_set_image(
- 
- 	}
- 
-+#if CONFIG_IS_ENABLED(TARGET_CORSTONE1000)
-+	return EFI_EXIT(EFI_SUCCESS);
-+#endif
- 	if (dfu_write_by_alt(image_index - 1, (void *)image, image_size,
- 			     NULL, NULL))
- 		return EFI_EXIT(EFI_DEVICE_ERROR);
--- 
-2.30.2
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0023-efi_loader-fix-null-pointer-exception-with-get_image.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0023-efi_loader-fix-null-pointer-exception-with-get_image.patch
new file mode 100644
index 0000000..165fac5
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0023-efi_loader-fix-null-pointer-exception-with-get_image.patch
@@ -0,0 +1,62 @@
+From 383078dde2fbf509dc3d24505f6b328316aee030 Mon Sep 17 00:00:00 2001
+From: Vishnu Banavath <vishnu.banavath@arm.com>
+Date: Fri, 14 Jan 2022 15:24:18 +0000
+Subject: [PATCH 23/24] efi_loader: fix null pointer exception with
+ get_image_info
+
+The get_img_info API implemented for the corstone1000 target does not
+check the input attributes and, as a result, U-Boot crashes with a
+null pointer access. This change fixes the null pointer
+exception.
+
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
+---
+ lib/efi_loader/efi_firmware.c | 19 +++++++++++--------
+ 1 file changed, 11 insertions(+), 8 deletions(-)
+
+diff --git a/lib/efi_loader/efi_firmware.c b/lib/efi_loader/efi_firmware.c
+index 25f427b93669..28d9a19edb90 100644
+--- a/lib/efi_loader/efi_firmware.c
++++ b/lib/efi_loader/efi_firmware.c
+@@ -38,26 +38,29 @@ static efi_status_t efi_corstone1000_img_info_get (
+ 	int i = 0;
+ 
+ 	*image_info_size = sizeof(*image_info);
+-	*descriptor_version = EFI_FIRMWARE_IMAGE_DESCRIPTOR_VERSION;
+-	*descriptor_count = 1;//dfu_num;
+-	*descriptor_size = sizeof(*image_info);
++	if(descriptor_version)
++	    *descriptor_version = EFI_FIRMWARE_IMAGE_DESCRIPTOR_VERSION;
++	if(descriptor_count)
++	    *descriptor_count = 1;
++	if(descriptor_size)
++	    *descriptor_size = sizeof(*image_info);
+ 	if (package_version)
+ 		*package_version = 0xffffffff; /* not supported */
+ 	if(package_version_name)
+ 		*package_version_name = NULL; /* not supported */
+ 
+ 	if(image_info == NULL) {
+-		log_info("image_info is null\n");
++		log_debug("image_info is null\n");
+ 		return EFI_BUFFER_TOO_SMALL;
+ 	}
+ 
+-	image_info[i].image_index = i;
++	image_info[i].image_index = 1;
+ 	image_info[i].image_type_id = *image_type;
+ 	image_info[i].image_id = 0;
+-	image_info[i].image_id_name = "wic";
+-	image_info[i].version = 1;
++	image_info[i].image_id_name = L"wic image";
++	image_info[i].version = 0;
+ 	image_info[i].version_name = NULL;
+-	image_info[i].size = 0x1000;
++	image_info[i].size = 0;
+ 	image_info[i].attributes_supported = IMAGE_ATTRIBUTE_IMAGE_UPDATABLE |
+ 					     IMAGE_ATTRIBUTE_AUTHENTICATION_REQUIRED;
+ 	image_info[i].attributes_setting = IMAGE_ATTRIBUTE_IMAGE_UPDATABLE;
+-- 
+2.37.1
+
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0024-Comment-mm_communicate-failure-log.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0024-Comment-mm_communicate-failure-log.patch
deleted file mode 100644
index c6a1aed..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0024-Comment-mm_communicate-failure-log.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From c0c6e4c1166c4868afc36649b9ed98081a6966e1 Mon Sep 17 00:00:00 2001
-From: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
-Date: Fri, 24 Dec 2021 14:22:52 +0000
-Subject: [PATCH 24/27] Comment mm_communicate failure log
-
-When a getVariable() call is made with data size set to 0,
-mm_communicate should return EFI_BUFFER_TOO_SMALL. This is
-an expected behavior. There should not be any failure logs
-in this case. So the error log is commented here.
-
-Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
----
- lib/efi_loader/efi_variable_tee.c | 5 ++++-
- 1 file changed, 4 insertions(+), 1 deletion(-)
-
-diff --git a/lib/efi_loader/efi_variable_tee.c b/lib/efi_loader/efi_variable_tee.c
-index 67743d1f8fce..a34989efac83 100644
---- a/lib/efi_loader/efi_variable_tee.c
-+++ b/lib/efi_loader/efi_variable_tee.c
-@@ -411,7 +411,10 @@ static efi_status_t __efi_runtime mm_communicate(u8 *comm_buf, efi_uintn_t dsize
- 	ret = ffa_mm_communicate(comm_buf, dsize);
- 	#endif
- 	if (ret != EFI_SUCCESS) {
--		log_err("%s failed!\n", __func__);
-+		/* mm_communicate failure is logged even when getVariable() is called
-+		 * with data size set to 0. This is not expected so logging is commented.
-+		*/
-+		//log_err("%s failed!\n", __func__);
- 		return ret;
- 	}
- 
--- 
-2.30.2
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0024-arm-corstone1000-add-mmc-for-fvp.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0024-arm-corstone1000-add-mmc-for-fvp.patch
new file mode 100644
index 0000000..2b9ca78
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0024-arm-corstone1000-add-mmc-for-fvp.patch
@@ -0,0 +1,148 @@
+From cc3356c2a30b7aa85a25e9bc7b69a03537df3f27 Mon Sep 17 00:00:00 2001
+From: Rui Miguel Silva <rui.silva@linaro.org>
+Date: Tue, 5 Apr 2022 10:24:38 +0100
+Subject: [PATCH 24/24] arm:corstone1000: add mmc for fvp
+
+Enable MMC/SD card support for the corstone1000 FVP.
+
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
+---
+ arch/arm/dts/corstone1000-fvp.dts        | 28 +++++++++++++++
+ board/armltd/corstone1000/corstone1000.c | 46 ++++++++++++++++--------
+ configs/corstone1000_defconfig           |  8 ++++-
+ include/configs/corstone1000.h           |  4 ++-
+ 4 files changed, 69 insertions(+), 17 deletions(-)
+
+diff --git a/arch/arm/dts/corstone1000-fvp.dts b/arch/arm/dts/corstone1000-fvp.dts
+index 1fcc137a493c..26b0f1b3cea6 100644
+--- a/arch/arm/dts/corstone1000-fvp.dts
++++ b/arch/arm/dts/corstone1000-fvp.dts
+@@ -20,4 +20,32 @@
+ 		interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
+ 		reg-io-width = <2>;
+ 	};
++
++	vmmc_v3_3d: fixed_v3_3d {
++		compatible = "regulator-fixed";
++		regulator-name = "vmmc_supply";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++		regulator-always-on;
++	};
++
++	sdmmc0: mmc@40300000 {
++		compatible = "arm,pl18x", "arm,primecell";
++		reg = <0x40300000 0x1000>;
++		interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
++		max-frequency = <12000000>;
++		vmmc-supply = <&vmmc_v3_3d>;
++		clocks = <&smbclk>, <&refclk100mhz>;
++		clock-names = "smclk", "apb_pclk";
++	};
++
++	sdmmc1: mmc@50000000 {
++		compatible = "arm,pl18x", "arm,primecell";
++		reg = <0x50000000 0x10000>;
++		interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
++		max-frequency = <12000000>;
++		vmmc-supply = <&vmmc_v3_3d>;
++		clocks = <&smbclk>, <&refclk100mhz>;
++		clock-names = "smclk", "apb_pclk";
++	};
+ };
+diff --git a/board/armltd/corstone1000/corstone1000.c b/board/armltd/corstone1000/corstone1000.c
+index 2fa485ff3799..3d537d7a9052 100644
+--- a/board/armltd/corstone1000/corstone1000.c
++++ b/board/armltd/corstone1000/corstone1000.c
+@@ -46,22 +46,38 @@ static struct mm_region corstone1000_mem_map[] = {
+ 		.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+ 			 PTE_BLOCK_NON_SHARE |
+ 			 PTE_BLOCK_PXN | PTE_BLOCK_UXN
+-        }, {
+-                /* USB */
+-                .virt = 0x40200000UL,
+-                .phys = 0x40200000UL,
+-                .size = 0x00100000UL,
+-                .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+-                         PTE_BLOCK_NON_SHARE |
+-                         PTE_BLOCK_PXN | PTE_BLOCK_UXN
+ 	}, {
+-                 /* ethernet */
+-                .virt = 0x40100000UL,
+-                .phys = 0x40100000UL,
+-                .size = 0x00100000UL,
+-                .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+-                         PTE_BLOCK_NON_SHARE |
+-                         PTE_BLOCK_PXN | PTE_BLOCK_UXN
++		/* USB */
++		.virt = 0x40200000UL,
++			.phys = 0x40200000UL,
++			.size = 0x00100000UL,
++			.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
++				PTE_BLOCK_NON_SHARE |
++				PTE_BLOCK_PXN | PTE_BLOCK_UXN
++	}, {
++		/* MMC0 */
++		.virt = 0x40300000UL,
++		.phys = 0x40300000UL,
++		.size = 0x00100000UL,
++		.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
++				 PTE_BLOCK_NON_SHARE |
++				 PTE_BLOCK_PXN | PTE_BLOCK_UXN
++	}, {
++		/* ethernet */
++		.virt = 0x40100000UL,
++			.phys = 0x40100000UL,
++			.size = 0x00100000UL,
++			.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
++				PTE_BLOCK_NON_SHARE |
++				PTE_BLOCK_PXN | PTE_BLOCK_UXN
++	}, {
++		/* MMC1 */
++		.virt = 0x50000000UL,
++		.phys = 0x50000000UL,
++		.size = 0x00100000UL,
++		.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
++				 PTE_BLOCK_NON_SHARE |
++				 PTE_BLOCK_PXN | PTE_BLOCK_UXN
+ 	}, {
+ 		/* OCVM */
+ 		.virt = 0x80000000UL,
+diff --git a/configs/corstone1000_defconfig b/configs/corstone1000_defconfig
+index b042d4e49419..147c14c94865 100644
+--- a/configs/corstone1000_defconfig
++++ b/configs/corstone1000_defconfig
+@@ -38,7 +38,13 @@ CONFIG_CMD_EFIDEBUG=y
+ CONFIG_CMD_FAT=y
+ CONFIG_OF_CONTROL=y
+ CONFIG_REGMAP=y
+-# CONFIG_MMC is not set
++CONFIG_CLK=y
++CONFIG_CMD_MMC=y
++CONFIG_DM_MMC=y
++CONFIG_ARM_PL180_MMCI=y
++CONFIG_MMC_SDHCI_ADMA_HELPERS=y
++CONFIG_MMC_WRITE=y
++CONFIG_DM_GPIO=y
+ CONFIG_DM_SERIAL=y
+ CONFIG_USB=y
+ CONFIG_DM_USB=y
+diff --git a/include/configs/corstone1000.h b/include/configs/corstone1000.h
+index 06b605e43bdf..d9855bf91ebf 100644
+--- a/include/configs/corstone1000.h
++++ b/include/configs/corstone1000.h
+@@ -95,7 +95,9 @@
+ #define CONFIG_SYS_MAXARGS	64	/* max command args */
+ 
+ #define BOOT_TARGET_DEVICES(func) \
+-	func(USB, usb, 0)
++	func(USB, usb, 0) \
++	func(MMC, mmc, 0) \
++	func(MMC, mmc, 1)
+ 
+ #include <config_distro_bootcmd.h>
+ 
+-- 
+2.37.1
+
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0025-corstone1000-use-a-compressed-kernel.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0025-corstone1000-use-a-compressed-kernel.patch
new file mode 100644
index 0000000..4cc2498
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0025-corstone1000-use-a-compressed-kernel.patch
@@ -0,0 +1,32 @@
+From df70c467c5d100f1522b4521f48da4c51e43688c Mon Sep 17 00:00:00 2001
+From: Jon Mason <jon.mason@arm.com>
+Date: Thu, 25 Aug 2022 13:48:22 +0000
+Subject: [PATCH 25/25] corstone1000: use a compressed kernel
+
+The corstone1000 kernel has become too large to fit in the available
+storage.  Swtiching to a compressed kernel avoids the problem, but
+requires uncompressing it.  Add this decompression to the default boot
+instructions.
+
+Signed-off-by: Jon Mason <jon.mason@arm.com>
+---
+ include/configs/corstone1000.h | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/include/configs/corstone1000.h b/include/configs/corstone1000.h
+index d9855bf91e..d0cbc40121 100644
+--- a/include/configs/corstone1000.h
++++ b/include/configs/corstone1000.h
+@@ -126,7 +126,8 @@
+ #define CONFIG_BOOTCOMMAND								\
+ 				"run retrieve_kernel_load_addr;"			\
+ 				"echo Loading kernel from $kernel_addr to memory ... ;"	\
+-				"loadm $kernel_addr $kernel_addr_r 0xc00000;"		\
++				"unzip $kernel_addr 0x90000000;"                        \
++				"loadm 0x90000000 $kernel_addr_r 0xd00000;"		\
+ 				"usb start; usb reset;"					\
+ 				"run distro_bootcmd;"					\
+ 				"bootefi $kernel_addr_r $fdtcontroladdr;"
+-- 
+2.30.2
+
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0025-efi_loader-send-bootcomplete-message-to-secure-encla.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0025-efi_loader-send-bootcomplete-message-to-secure-encla.patch
deleted file mode 100644
index d5a0ec0..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0025-efi_loader-send-bootcomplete-message-to-secure-encla.patch
+++ /dev/null
@@ -1,191 +0,0 @@
-From af2defbfaffa4264052e30f269b91794068e4773 Mon Sep 17 00:00:00 2001
-From: Vishnu Banavath <vishnu.banavath@arm.com>
-Date: Wed, 5 Jan 2022 17:56:09 +0000
-Subject: [PATCH 25/27] efi_loader: send bootcomplete message to secure enclave
-
-On corstone1000 platform, Secure Enclave will be expecting
-an event from uboot when it performs capsule update. Previously,
-an event is sent at exitbootservice level. This will create a problem
-when user wants to interrupt at UEFI shell, hence, it is required
-to send an uboot efi initialized event at efi sub-system initialization
-stage.
-
-Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
----
- include/configs/corstone1000.h |  2 +-
- lib/efi_loader/efi_boottime.c  | 49 ----------------------------------
- lib/efi_loader/efi_firmware.c  |  2 +-
- lib/efi_loader/efi_setup.c     | 48 +++++++++++++++++++++++++++++++++
- 4 files changed, 50 insertions(+), 51 deletions(-)
-
-diff --git a/include/configs/corstone1000.h b/include/configs/corstone1000.h
-index a7445e61348b..06b605e43bdf 100644
---- a/include/configs/corstone1000.h
-+++ b/include/configs/corstone1000.h
-@@ -22,7 +22,7 @@
- 
- /* Notification events used with SE Proxy update service */
- #define CORSTONE1000_BUFFER_READY_EVT		(0x1)
--#define CORSTONE1000_KERNEL_STARTED_EVT		(0x2)
-+#define CORSTONE1000_UBOOT_EFI_STARTED_EVT	(0x2)
- 
- #define PREP_SEPROXY_SVC_ID_MASK	GENMASK(31, 16)
- #define PREP_SEPROXY_SVC_ID(x)	 (FIELD_PREP(PREP_SEPROXY_SVC_ID_MASK, (x)))
-diff --git a/lib/efi_loader/efi_boottime.c b/lib/efi_loader/efi_boottime.c
-index b58a8c98fd05..d0703060491b 100644
---- a/lib/efi_loader/efi_boottime.c
-+++ b/lib/efi_loader/efi_boottime.c
-@@ -2101,46 +2101,6 @@ static void efi_exit_caches(void)
- #endif
- }
- 
--#if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
--/**
-- * efi_corstone1000_kernel_started_event - notifies SE Proxy FW update service
-- *
-- * This function notifies the SE Proxy update service that the kernel has already started
-- *
-- * Return:
-- *
-- * 0: on success, otherwise failure
-- */
--static int efi_corstone1000_kernel_started_event(void)
--{
--	struct ffa_interface_data func_data = {0};
--	struct ffa_send_direct_data msg = {0};
--	u16 part_id = CORSTONE1000_SEPROXY_PART_ID;
--
--	log_debug("[%s]\n", __func__);
--
--	/*
--	 * telling the driver which partition to use
--	 */
--	func_data.data0_size = sizeof(part_id);
--	func_data.data0 = &part_id;
--
--	/*
--	 * setting the kernel started  event arguments:
--	 * setting capsule update interface ID(31:16)
--	 * the kernel started event ID(15:0)
--	 */
--	msg.a4 = PREP_SEPROXY_SVC_ID(CORSTONE1000_SEPROXY_UPDATE_SVC_ID) |
--	PREP_SEPROXY_EVT(CORSTONE1000_KERNEL_STARTED_EVT);
--
--	func_data.data1_size = sizeof(msg);
--	func_data.data1 = &msg;
--
--	return ffa_helper_msg_send_direct_req(&func_data);
--}
--
--#endif
--
- /**
-  * efi_exit_boot_services() - stop all boot services
-  * @image_handle: handle of the loaded image
-@@ -2254,15 +2214,6 @@ static efi_status_t EFIAPI efi_exit_boot_services(efi_handle_t image_handle,
- 	/* Recalculate CRC32 */
- 	efi_update_table_header_crc32(&systab.hdr);
- 
--#if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
--	/* Notifying SE Proxy FW update service */
--	ffa_ret = efi_corstone1000_kernel_started_event();
--	if (ffa_ret)
--		debug("[efi_boottime][ERROR]: Failure to notify SE Proxy FW update service\n");
--	else
--		debug("[efi_boottime][INFO]: SE Proxy FW update service notified\n");
--#endif
--
- 	/* Give the payload some time to boot */
- 	efi_set_watchdog(0);
- 	WATCHDOG_RESET();
-diff --git a/lib/efi_loader/efi_firmware.c b/lib/efi_loader/efi_firmware.c
-index 9eb89849b28d..477ad072070e 100644
---- a/lib/efi_loader/efi_firmware.c
-+++ b/lib/efi_loader/efi_firmware.c
-@@ -356,7 +356,7 @@ static efi_status_t efi_corstone1000_img_info_get (
- 		*package_version_name = NULL; /* not supported */
- 
- 	if(image_info == NULL) {
--		log_warning("image_info is null\n");
-+		log_info("image_info is null\n");
- 		return EFI_BUFFER_TOO_SMALL;
- 	}
- 
-diff --git a/lib/efi_loader/efi_setup.c b/lib/efi_loader/efi_setup.c
-index 989380d4f8cd..515a0bdf74ef 100644
---- a/lib/efi_loader/efi_setup.c
-+++ b/lib/efi_loader/efi_setup.c
-@@ -17,6 +17,9 @@
- efi_status_t efi_obj_list_initialized = OBJ_LIST_NOT_INITIALIZED;
- 
- #if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
-+#include <linux/bitfield.h>
-+#include <linux/bitops.h>
-+#include <arm_ffa_helper.h>
- /**
-  * efi_corstone1000_alloc_capsule_shared_buf - allocate capsule shared buffer
-  */
-@@ -126,6 +129,44 @@ static efi_status_t efi_init_secure_boot(void)
- }
- #endif /* CONFIG_EFI_SECURE_BOOT */
- 
-+#if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
-+/**
-+ * efi_corstone1000_uboot-efi_started_event - notifies SE Proxy FW update service
-+ *
-+ * This function notifies the SE Proxy update service that uboot efi has already started
-+ *
-+ * Return:
-+ *
-+ * 0: on success, otherwise failure
-+ * */
-+static int efi_corstone1000_uboot_efi_started_event(void)
-+{
-+	struct ffa_interface_data func_data = {0};
-+	struct ffa_send_direct_data msg = {0};
-+	u16 part_id = CORSTONE1000_SEPROXY_PART_ID;
-+
-+	log_debug("[%s]\n", __func__);
-+
-+	/*
-+	 * telling the driver which partition to use
-+	 */
-+	func_data.data0_size = sizeof(part_id);
-+	func_data.data0 = &part_id;
-+	/*
-+	 * setting the uboot efi subsystem started  event arguments:
-+	 * setting capsule update interface ID(31:16)
-+	 * the uboot efi subsystem started event ID(15:0)
-+	 */
-+	msg.a4 = PREP_SEPROXY_SVC_ID(CORSTONE1000_SEPROXY_UPDATE_SVC_ID) |
-+			PREP_SEPROXY_EVT(CORSTONE1000_UBOOT_EFI_STARTED_EVT);
-+
-+	func_data.data1_size = sizeof(msg);
-+	func_data.data1 = &msg;
-+
-+	return ffa_helper_msg_send_direct_req(&func_data);
-+}
-+#endif
-+
- /**
-  * efi_init_capsule - initialize capsule update state
-  *
-@@ -134,8 +175,15 @@ static efi_status_t efi_init_secure_boot(void)
- static efi_status_t efi_init_capsule(void)
- {
- 	efi_status_t ret = EFI_SUCCESS;
-+	int ffa_ret;
- 
- #if IS_ENABLED(CONFIG_TARGET_CORSTONE1000)
-+	ffa_ret = efi_corstone1000_uboot_efi_started_event();
-+	if (ffa_ret)
-+		debug("[efi_boottime][ERROR]: Failure to notify SE Proxy FW update service\n");
-+	else
-+		debug("[efi_boottime][INFO]: SE Proxy FW update service notified\n");
-+
- 	ret = efi_corstone1000_alloc_capsule_shared_buf();
- 	if (ret != EFI_SUCCESS) {
- 		printf("EFI: Corstone-1000: cannot allocate caspsule shared buffer\n");
--- 
-2.30.2
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0026-efi_loader-fix-null-pointer-exception-with-get_image.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0026-efi_loader-fix-null-pointer-exception-with-get_image.patch
deleted file mode 100644
index 532e872..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0026-efi_loader-fix-null-pointer-exception-with-get_image.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-From 2da8554ab732c59c7ca624ac4b16412fa9c2e39c Mon Sep 17 00:00:00 2001
-From: Vishnu Banavath <vishnu.banavath@arm.com>
-Date: Fri, 14 Jan 2022 15:24:18 +0000
-Subject: [PATCH 26/27] efi_loader: fix null pointer exception with
- get_image_info
-
-get_img_info API implemented for corstone1000 target does not
-check the input attributes and as a result uboot crash's with
-null pointer access. This change is to fix the null pointer
-exception.
-
-Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
-Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
----
- lib/efi_loader/efi_firmware.c | 19 +++++++++++--------
- 1 file changed, 11 insertions(+), 8 deletions(-)
-
-diff --git a/lib/efi_loader/efi_firmware.c b/lib/efi_loader/efi_firmware.c
-index 477ad072070e..f99c57fde576 100644
---- a/lib/efi_loader/efi_firmware.c
-+++ b/lib/efi_loader/efi_firmware.c
-@@ -347,26 +347,29 @@ static efi_status_t efi_corstone1000_img_info_get (
- 	int i = 0;
- 
- 	*image_info_size = sizeof(*image_info);
--	*descriptor_version = EFI_FIRMWARE_IMAGE_DESCRIPTOR_VERSION;
--	*descriptor_count = 1;//dfu_num;
--	*descriptor_size = sizeof(*image_info);
-+	if(descriptor_version)
-+	    *descriptor_version = EFI_FIRMWARE_IMAGE_DESCRIPTOR_VERSION;
-+	if(descriptor_count)
-+	    *descriptor_count = 1;
-+	if(descriptor_size)
-+	    *descriptor_size = sizeof(*image_info);
- 	if (package_version)
- 		*package_version = 0xffffffff; /* not supported */
- 	if(package_version_name)
- 		*package_version_name = NULL; /* not supported */
- 
- 	if(image_info == NULL) {
--		log_info("image_info is null\n");
-+		log_debug("image_info is null\n");
- 		return EFI_BUFFER_TOO_SMALL;
- 	}
- 
--	image_info[i].image_index = i;
-+	image_info[i].image_index = 1;
- 	image_info[i].image_type_id = *image_type;
- 	image_info[i].image_id = 0;
--	image_info[i].image_id_name = "wic";
--	image_info[i].version = 1;
-+	image_info[i].image_id_name = L"wic image";
-+	image_info[i].version = 0;
- 	image_info[i].version_name = NULL;
--	image_info[i].size = 0x1000;
-+	image_info[i].size = 0;
- 	image_info[i].attributes_supported = IMAGE_ATTRIBUTE_IMAGE_UPDATABLE |
- 					     IMAGE_ATTRIBUTE_AUTHENTICATION_REQUIRED;
- 	image_info[i].attributes_setting = IMAGE_ATTRIBUTE_IMAGE_UPDATABLE;
--- 
-2.30.2
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0027-arm-corstone1000-add-mmc-for-fvp.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0027-arm-corstone1000-add-mmc-for-fvp.patch
deleted file mode 100644
index bf95ed7..0000000
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone1000/0027-arm-corstone1000-add-mmc-for-fvp.patch
+++ /dev/null
@@ -1,148 +0,0 @@
-From cbf16548dc6dcc8eea97aa18c6ae17fb848e5c6c Mon Sep 17 00:00:00 2001
-From: Rui Miguel Silva <rui.silva@linaro.org>
-Date: Tue, 5 Apr 2022 10:24:38 +0100
-Subject: [PATCH 27/27] arm:corstone1000: add mmc for fvp
-
-Enable support mmc/sdcard for the corstone1000 FVP.
-
-Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
-Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
----
- arch/arm/dts/corstone1000-fvp.dts        | 28 +++++++++++++++
- board/armltd/corstone1000/corstone1000.c | 46 ++++++++++++++++--------
- configs/corstone1000_defconfig           |  8 ++++-
- include/configs/corstone1000.h           |  4 ++-
- 4 files changed, 69 insertions(+), 17 deletions(-)
-
-diff --git a/arch/arm/dts/corstone1000-fvp.dts b/arch/arm/dts/corstone1000-fvp.dts
-index 1fcc137a493c..26b0f1b3cea6 100644
---- a/arch/arm/dts/corstone1000-fvp.dts
-+++ b/arch/arm/dts/corstone1000-fvp.dts
-@@ -20,4 +20,32 @@
- 		interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
- 		reg-io-width = <2>;
- 	};
-+
-+	vmmc_v3_3d: fixed_v3_3d {
-+		compatible = "regulator-fixed";
-+		regulator-name = "vmmc_supply";
-+		regulator-min-microvolt = <3300000>;
-+		regulator-max-microvolt = <3300000>;
-+		regulator-always-on;
-+	};
-+
-+	sdmmc0: mmc@40300000 {
-+		compatible = "arm,pl18x", "arm,primecell";
-+		reg = <0x40300000 0x1000>;
-+		interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
-+		max-frequency = <12000000>;
-+		vmmc-supply = <&vmmc_v3_3d>;
-+		clocks = <&smbclk>, <&refclk100mhz>;
-+		clock-names = "smclk", "apb_pclk";
-+	};
-+
-+	sdmmc1: mmc@50000000 {
-+		compatible = "arm,pl18x", "arm,primecell";
-+		reg = <0x50000000 0x10000>;
-+		interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
-+		max-frequency = <12000000>;
-+		vmmc-supply = <&vmmc_v3_3d>;
-+		clocks = <&smbclk>, <&refclk100mhz>;
-+		clock-names = "smclk", "apb_pclk";
-+	};
- };
-diff --git a/board/armltd/corstone1000/corstone1000.c b/board/armltd/corstone1000/corstone1000.c
-index eff1739f0b02..936a6c9f8b89 100644
---- a/board/armltd/corstone1000/corstone1000.c
-+++ b/board/armltd/corstone1000/corstone1000.c
-@@ -46,22 +46,38 @@ static struct mm_region corstone1000_mem_map[] = {
- 		.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
- 			 PTE_BLOCK_NON_SHARE |
- 			 PTE_BLOCK_PXN | PTE_BLOCK_UXN
--        }, {
--                /* USB */
--                .virt = 0x40200000UL,
--                .phys = 0x40200000UL,
--                .size = 0x00100000UL,
--                .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
--                         PTE_BLOCK_NON_SHARE |
--                         PTE_BLOCK_PXN | PTE_BLOCK_UXN
- 	}, {
--                 /* ethernet */
--                .virt = 0x40100000UL,
--                .phys = 0x40100000UL,
--                .size = 0x00100000UL,
--                .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
--                         PTE_BLOCK_NON_SHARE |
--                         PTE_BLOCK_PXN | PTE_BLOCK_UXN
-+		/* USB */
-+		.virt = 0x40200000UL,
-+			.phys = 0x40200000UL,
-+			.size = 0x00100000UL,
-+			.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
-+				PTE_BLOCK_NON_SHARE |
-+				PTE_BLOCK_PXN | PTE_BLOCK_UXN
-+	}, {
-+		/* MMC0 */
-+		.virt = 0x40300000UL,
-+		.phys = 0x40300000UL,
-+		.size = 0x00100000UL,
-+		.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
-+				 PTE_BLOCK_NON_SHARE |
-+				 PTE_BLOCK_PXN | PTE_BLOCK_UXN
-+	}, {
-+		/* ethernet */
-+		.virt = 0x40100000UL,
-+			.phys = 0x40100000UL,
-+			.size = 0x00100000UL,
-+			.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
-+				PTE_BLOCK_NON_SHARE |
-+				PTE_BLOCK_PXN | PTE_BLOCK_UXN
-+	}, {
-+		/* MMC1 */
-+		.virt = 0x50000000UL,
-+		.phys = 0x50000000UL,
-+		.size = 0x00100000UL,
-+		.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
-+				 PTE_BLOCK_NON_SHARE |
-+				 PTE_BLOCK_PXN | PTE_BLOCK_UXN
- 	}, {
- 		/* OCVM */
- 		.virt = 0x80000000UL,
-diff --git a/configs/corstone1000_defconfig b/configs/corstone1000_defconfig
-index b042d4e49419..147c14c94865 100644
---- a/configs/corstone1000_defconfig
-+++ b/configs/corstone1000_defconfig
-@@ -38,7 +38,13 @@ CONFIG_CMD_EFIDEBUG=y
- CONFIG_CMD_FAT=y
- CONFIG_OF_CONTROL=y
- CONFIG_REGMAP=y
--# CONFIG_MMC is not set
-+CONFIG_CLK=y
-+CONFIG_CMD_MMC=y
-+CONFIG_DM_MMC=y
-+CONFIG_ARM_PL180_MMCI=y
-+CONFIG_MMC_SDHCI_ADMA_HELPERS=y
-+CONFIG_MMC_WRITE=y
-+CONFIG_DM_GPIO=y
- CONFIG_DM_SERIAL=y
- CONFIG_USB=y
- CONFIG_DM_USB=y
-diff --git a/include/configs/corstone1000.h b/include/configs/corstone1000.h
-index 06b605e43bdf..d9855bf91ebf 100644
---- a/include/configs/corstone1000.h
-+++ b/include/configs/corstone1000.h
-@@ -95,7 +95,9 @@
- #define CONFIG_SYS_MAXARGS	64	/* max command args */
- 
- #define BOOT_TARGET_DEVICES(func) \
--	func(USB, usb, 0)
-+	func(USB, usb, 0) \
-+	func(MMC, mmc, 0) \
-+	func(MMC, mmc, 1)
- 
- #include <config_distro_bootcmd.h>
- 
--- 
-2.30.2
-
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone500/0001-armv7-adding-generic-timer-access-through-MMIO.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone500/0001-armv7-adding-generic-timer-access-through-MMIO.patch
index 8a98f9d..9a29975 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone500/0001-armv7-adding-generic-timer-access-through-MMIO.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone500/0001-armv7-adding-generic-timer-access-through-MMIO.patch
@@ -1,4 +1,4 @@
-From fff63cfd7d9654dc9ed0c106f29d3a7ad01b0502 Mon Sep 17 00:00:00 2001
+From d75d794785419592ba49046165d19a6ec9488b2d Mon Sep 17 00:00:00 2001
 From: Rui Miguel Silva <rui.silva@linaro.org>
 Date: Wed, 18 Dec 2019 21:52:34 +0000
 Subject: [PATCH 1/2] armv7: adding generic timer access through MMIO
@@ -21,6 +21,8 @@
 Signed-off-by: Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
 
 %% original patch: 0001-armv7-adding-generic-timer-access-through-MMIO.patch
+
+Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
 ---
  arch/arm/cpu/armv7/Makefile     |  1 +
  arch/arm/cpu/armv7/mmio_timer.c | 75 +++++++++++++++++++++++++++++++++
@@ -29,7 +31,7 @@
  create mode 100644 arch/arm/cpu/armv7/mmio_timer.c
 
 diff --git a/arch/arm/cpu/armv7/Makefile b/arch/arm/cpu/armv7/Makefile
-index bfbd85ae64..1a0a24e531 100644
+index bfbd85ae64ef..1a0a24e53110 100644
 --- a/arch/arm/cpu/armv7/Makefile
 +++ b/arch/arm/cpu/armv7/Makefile
 @@ -28,6 +28,7 @@ obj-$(CONFIG_ARMV7_PSCI)	+= psci.o psci-common.o
@@ -42,7 +44,7 @@
  obj-y += s5p-common/
 diff --git a/arch/arm/cpu/armv7/mmio_timer.c b/arch/arm/cpu/armv7/mmio_timer.c
 new file mode 100644
-index 0000000000..edd806e06e
+index 000000000000..edd806e06e42
 --- /dev/null
 +++ b/arch/arm/cpu/armv7/mmio_timer.c
 @@ -0,0 +1,75 @@
@@ -122,17 +124,17 @@
 +	return gd->arch.timer_rate_hz;
 +}
 diff --git a/scripts/config_whitelist.txt b/scripts/config_whitelist.txt
-index a6bc234f51..8d5cd67ace 100644
+index c61df4fb1c9b..cfb1c68b6297 100644
 --- a/scripts/config_whitelist.txt
 +++ b/scripts/config_whitelist.txt
-@@ -1524,6 +1524,7 @@ CONFIG_SYS_MMC_U_BOOT_DST
+@@ -1253,6 +1253,7 @@ CONFIG_SYS_MMC_U_BOOT_DST
  CONFIG_SYS_MMC_U_BOOT_OFFS
  CONFIG_SYS_MMC_U_BOOT_SIZE
  CONFIG_SYS_MMC_U_BOOT_START
 +CONFIG_SYS_MMIO_TIMER
- CONFIG_SYS_MONITOR_BASE
  CONFIG_SYS_MONITOR_LEN
  CONFIG_SYS_MONITOR_SEC
+ CONFIG_SYS_MOR_VAL
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone500/0002-board-arm-add-corstone500-board.patch b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone500/0002-board-arm-add-corstone500-board.patch
index 29b2943..c389a64 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone500/0002-board-arm-add-corstone500-board.patch
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot/corstone500/0002-board-arm-add-corstone500-board.patch
@@ -1,4 +1,4 @@
-From 73c319a1096259652853fa2538a733a8ebea96a8 Mon Sep 17 00:00:00 2001
+From 3566cf4ab79ca78acd69cfb87e74587394e5aeb2 Mon Sep 17 00:00:00 2001
 From: Rui Miguel Silva <rui.silva@linaro.org>
 Date: Wed, 8 Jan 2020 09:48:11 +0000
 Subject: [PATCH 2/2] board: arm: add corstone500 board
@@ -13,6 +13,8 @@
 Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
 
 %% original patch: 0002-board-arm-add-corstone500-board.patch
+
+Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
 ---
  arch/arm/Kconfig                       |  10 +++
  board/armltd/corstone500/Kconfig       |  12 +++
@@ -28,10 +30,10 @@
  create mode 100644 include/configs/corstone500.h
 
 diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
-index 4567c183fb..66f99fdf4f 100644
+index 9898c7d68e1b..8c60ed39712e 100644
 --- a/arch/arm/Kconfig
 +++ b/arch/arm/Kconfig
-@@ -641,6 +641,15 @@ config ARCH_BCMSTB
+@@ -718,6 +718,15 @@ config ARCH_BCMSTB
  	  This enables support for Broadcom ARM-based set-top box
  	  chipsets, including the 7445 family of chips.
  
@@ -47,7 +49,7 @@
  config TARGET_VEXPRESS_CA9X4
  	bool "Support vexpress_ca9x4"
  	select CPU_V7A
-@@ -2202,6 +2211,7 @@ source "board/bosch/shc/Kconfig"
+@@ -2299,6 +2308,7 @@ source "board/bosch/shc/Kconfig"
  source "board/bosch/guardian/Kconfig"
  source "board/Marvell/octeontx/Kconfig"
  source "board/Marvell/octeontx2/Kconfig"
@@ -57,7 +59,7 @@
  source "board/cortina/presidio-asic/Kconfig"
 diff --git a/board/armltd/corstone500/Kconfig b/board/armltd/corstone500/Kconfig
 new file mode 100644
-index 0000000000..8e689bd1fd
+index 000000000000..8e689bd1fdc8
 --- /dev/null
 +++ b/board/armltd/corstone500/Kconfig
 @@ -0,0 +1,12 @@
@@ -75,7 +77,7 @@
 +endif
 diff --git a/board/armltd/corstone500/Makefile b/board/armltd/corstone500/Makefile
 new file mode 100644
-index 0000000000..6598fdd3ae
+index 000000000000..6598fdd3ae0d
 --- /dev/null
 +++ b/board/armltd/corstone500/Makefile
 @@ -0,0 +1,8 @@
@@ -89,7 +91,7 @@
 +obj-y := corstone500.o
 diff --git a/board/armltd/corstone500/corstone500.c b/board/armltd/corstone500/corstone500.c
 new file mode 100644
-index 0000000000..e878f5c6a5
+index 000000000000..e878f5c6a521
 --- /dev/null
 +++ b/board/armltd/corstone500/corstone500.c
 @@ -0,0 +1,48 @@
@@ -143,7 +145,7 @@
 +
 diff --git a/configs/corstone500_defconfig b/configs/corstone500_defconfig
 new file mode 100644
-index 0000000000..d3161a4b40
+index 000000000000..d3161a4b40d8
 --- /dev/null
 +++ b/configs/corstone500_defconfig
 @@ -0,0 +1,40 @@
@@ -189,7 +191,7 @@
 +CONFIG_OF_LIBFDT=y
 diff --git a/include/configs/corstone500.h b/include/configs/corstone500.h
 new file mode 100644
-index 0000000000..93c397d2f5
+index 000000000000..93c397d2f515
 --- /dev/null
 +++ b/include/configs/corstone500.h
 @@ -0,0 +1,109 @@
@@ -303,5 +305,5 @@
 +#define CONFIG_ENV_IS_IN_FLASH		1
 +#endif
 -- 
-2.30.2
+2.37.1
 
diff --git a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot_%.bbappend b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot_%.bbappend
index e254d41..a0a7284 100644
--- a/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot_%.bbappend
+++ b/meta-arm/meta-arm-bsp/recipes-bsp/u-boot/u-boot_%.bbappend
@@ -36,15 +36,13 @@
         file://0016-efi_boottime-corstone1000-pass-interface-id-and-kern.patch \
         file://0017-efi_loader-corstone1000-remove-guid-check-from-corst.patch \
         file://0018-arm_ffa-removing-the-cast-when-using-binary-OR-on-FI.patch \
-        file://0019-Return-proper-error-code-when-rx-buffer-is-larger.patch \
-        file://0020-Use-correct-buffer-size.patch \
-        file://0021-Update-comm_buf-when-EFI_BUFFER_TOO_SMALL.patch \
-        file://0022-efi_loader-populate-ESRT-table-if-EFI_ESRT-config-op.patch \
-        file://0023-efi_firmware-add-get_image_info-for-corstone1000.patch \
-        file://0024-Comment-mm_communicate-failure-log.patch \
-        file://0025-efi_loader-send-bootcomplete-message-to-secure-encla.patch \
-        file://0026-efi_loader-fix-null-pointer-exception-with-get_image.patch \
-        file://0027-arm-corstone1000-add-mmc-for-fvp.patch \
+        file://0019-Use-correct-buffer-size.patch \
+        file://0020-efi_loader-populate-ESRT-table-if-EFI_ESRT-config-op.patch \
+        file://0021-efi_firmware-add-get_image_info-for-corstone1000.patch \
+        file://0022-efi_loader-send-bootcomplete-message-to-secure-encla.patch \
+        file://0023-efi_loader-fix-null-pointer-exception-with-get_image.patch \
+        file://0024-arm-corstone1000-add-mmc-for-fvp.patch \
+        file://0025-corstone1000-use-a-compressed-kernel.patch \
       "
 
 #
diff --git a/meta-arm/meta-arm-bsp/recipes-kernel/linux/arm-platforms-kmeta/bsp/arm-platforms/juno.scc b/meta-arm/meta-arm-bsp/recipes-kernel/linux/arm-platforms-kmeta/bsp/arm-platforms/juno.scc
index 2980b39..240ecf5 100644
--- a/meta-arm/meta-arm-bsp/recipes-kernel/linux/arm-platforms-kmeta/bsp/arm-platforms/juno.scc
+++ b/meta-arm/meta-arm-bsp/recipes-kernel/linux/arm-platforms-kmeta/bsp/arm-platforms/juno.scc
@@ -1,5 +1,6 @@
 include features/input/input.scc
 include features/net/net.scc
+include features/bluetooth/bluetooth.scc
 include cfg/timer/no_hz.scc
 include cfg/usb-mass-storage.scc
 
diff --git a/meta-arm/meta-arm-bsp/recipes-kernel/linux/files/fvp-base-arm32/0001-ARM-vexpress-enable-GICv3.patch b/meta-arm/meta-arm-bsp/recipes-kernel/linux/files/fvp-base-arm32/0001-ARM-vexpress-enable-GICv3.patch
index d0a05c2..184a763 100644
--- a/meta-arm/meta-arm-bsp/recipes-kernel/linux/files/fvp-base-arm32/0001-ARM-vexpress-enable-GICv3.patch
+++ b/meta-arm/meta-arm-bsp/recipes-kernel/linux/files/fvp-base-arm32/0001-ARM-vexpress-enable-GICv3.patch
@@ -1,4 +1,4 @@
-From 5dbb6c4267b1e46ed08359be363d8bc9b6a79397 Mon Sep 17 00:00:00 2001
+From 9fe529a146f4528ec80a3d04588e387f3651dc22 Mon Sep 17 00:00:00 2001
 From: Ryan Harkin <ryan.harkin@linaro.org>
 Date: Wed, 16 Nov 2016 14:43:02 +0000
 Subject: [PATCH] ARM: vexpress: enable GICv3
@@ -11,21 +11,21 @@
 Signed-off-by: Ryan Harkin <ryan.harkin@linaro.org>
 Signed-off-by: Jon Medhurst <tixy@linaro.org>
 ---
- arch/arm/mach-vexpress/Kconfig | 1 +
+ arch/arm/mach-versatile/Kconfig | 1 +
  1 file changed, 1 insertion(+)
 
-diff --git a/arch/arm/mach-vexpress/Kconfig b/arch/arm/mach-vexpress/Kconfig
-index 7c728ebc0b33..ed579382d41f 100644
---- a/arch/arm/mach-vexpress/Kconfig
-+++ b/arch/arm/mach-vexpress/Kconfig
-@@ -4,6 +4,7 @@ menuconfig ARCH_VEXPRESS
- 	select ARCH_SUPPORTS_BIG_ENDIAN
+diff --git a/arch/arm/mach-versatile/Kconfig b/arch/arm/mach-versatile/Kconfig
+index 2ef226194c3a..3d54877fe339 100644
+--- a/arch/arm/mach-versatile/Kconfig
++++ b/arch/arm/mach-versatile/Kconfig
+@@ -251,6 +251,7 @@ menuconfig ARCH_VEXPRESS
+ 	depends on ARCH_MULTI_V7
  	select ARM_AMBA
  	select ARM_GIC
 +	select ARM_GIC_V3
  	select ARM_GLOBAL_TIMER
  	select ARM_TIMER_SP804
- 	select COMMON_CLK_VERSATILE
+ 	select GPIOLIB
 -- 
-2.17.1
+2.30.2
 
diff --git a/meta-arm/meta-arm-bsp/recipes-kernel/linux/files/juno/juno-dts-mhu-doorbell.patch b/meta-arm/meta-arm-bsp/recipes-kernel/linux/files/juno/juno-dts-mhu-doorbell.patch
deleted file mode 100644
index 81f641c..0000000
--- a/meta-arm/meta-arm-bsp/recipes-kernel/linux/files/juno/juno-dts-mhu-doorbell.patch
+++ /dev/null
@@ -1,616 +0,0 @@
-Add MHU doorbell support and SCMI device nodes to the Juno DeviceTree.
-
-Patch taken from https://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux.git/log/?h=scmi_dt_defconfig
-
-Upstream-Status: Pending
-Signed-off-by: Ross Burton <ross.burton@arm.com>
-
-From 821ffd8e5dc4d2fb2716d5fb912b343b932e1e77 Mon Sep 17 00:00:00 2001
-From: Sudeep Holla <sudeep.holla@arm.com>
-Date: Thu, 20 Apr 2017 11:58:01 +0100
-Subject: [PATCH] arm64: dts: juno: add mhu doorbell support and scmi device
- nodes
-
-Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
----
- arch/arm64/boot/dts/arm/juno-base.dtsi    | 139 ++++++++++++----------
- arch/arm64/boot/dts/arm/juno-cs-r1r2.dtsi |   6 +-
- arch/arm64/boot/dts/arm/juno-r1.dts       |  12 +-
- arch/arm64/boot/dts/arm/juno-r2.dts       |  12 +-
- arch/arm64/boot/dts/arm/juno.dts          |  12 +-
- 5 files changed, 96 insertions(+), 85 deletions(-)
-
-diff --git a/arch/arm64/boot/dts/arm/juno-base.dtsi b/arch/arm64/boot/dts/arm/juno-base.dtsi
-index 6288e104a089..36844f7d861e 100644
---- a/arch/arm64/boot/dts/arm/juno-base.dtsi
-+++ b/arch/arm64/boot/dts/arm/juno-base.dtsi
-@@ -23,11 +23,12 @@ frame@2a830000 {
- 	};
- 
- 	mailbox: mhu@2b1f0000 {
--		compatible = "arm,mhu", "arm,primecell";
-+		compatible = "arm,mhu-doorbell", "arm,primecell";
- 		reg = <0x0 0x2b1f0000 0x0 0x1000>;
- 		interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
- 			     <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
--		#mbox-cells = <1>;
-+		#mbox-cells = <2>;
-+		mbox-name = "ARM-MHU";
- 		clocks = <&soc_refclk100mhz>;
- 		clock-names = "apb_pclk";
- 	};
-@@ -39,7 +40,7 @@ smmu_gpu: iommu@2b400000 {
- 			     <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
- 		#iommu-cells = <1>;
- 		#global-interrupts = <1>;
--		power-domains = <&scpi_devpd 1>;
-+		power-domains = <&scmi_devpd 9>;
- 		dma-coherent;
- 		status = "disabled";
- 	};
-@@ -63,7 +64,7 @@ smmu_etr: iommu@2b600000 {
- 		#iommu-cells = <1>;
- 		#global-interrupts = <1>;
- 		dma-coherent;
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 	};
- 
- 	gic: interrupt-controller@2c010000 {
-@@ -123,7 +124,7 @@ etf@20010000 { /* etf0 */
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 
- 		in-ports {
- 			port {
-@@ -147,7 +148,7 @@ tpiu@20030000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		in-ports {
- 			port {
- 				tpiu_in_port: endpoint {
-@@ -164,7 +165,7 @@ main_funnel: funnel@20040000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 
- 		out-ports {
- 			port {
-@@ -201,7 +202,7 @@ etr@20070000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		arm,scatter-gather;
- 		in-ports {
- 			port {
-@@ -220,7 +221,7 @@ stm@20100000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				stm_out_port: endpoint {
-@@ -235,7 +236,7 @@ replicator@20120000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 
- 		out-ports {
- 			#address-cells = <1>;
-@@ -270,7 +271,7 @@ cpu_debug0: cpu-debug@22010000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 	};
- 
- 	etm0: etm@22040000 {
-@@ -279,7 +280,7 @@ etm0: etm@22040000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				cluster0_etm0_out_port: endpoint {
-@@ -295,7 +296,7 @@ funnel@220c0000 { /* cluster0 funnel */
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				cluster0_funnel_out_port: endpoint {
-@@ -330,7 +331,7 @@ cpu_debug1: cpu-debug@22110000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 	};
- 
- 	etm1: etm@22140000 {
-@@ -339,7 +340,7 @@ etm1: etm@22140000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				cluster0_etm1_out_port: endpoint {
-@@ -355,7 +356,7 @@ cpu_debug2: cpu-debug@23010000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 	};
- 
- 	etm2: etm@23040000 {
-@@ -364,7 +365,7 @@ etm2: etm@23040000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				cluster1_etm0_out_port: endpoint {
-@@ -380,7 +381,7 @@ funnel@230c0000 { /* cluster1 funnel */
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				cluster1_funnel_out_port: endpoint {
-@@ -427,7 +428,7 @@ cpu_debug3: cpu-debug@23110000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 	};
- 
- 	etm3: etm@23140000 {
-@@ -436,7 +437,7 @@ etm3: etm@23140000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				cluster1_etm1_out_port: endpoint {
-@@ -452,7 +453,7 @@ cpu_debug4: cpu-debug@23210000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 	};
- 
- 	etm4: etm@23240000 {
-@@ -461,7 +462,7 @@ etm4: etm@23240000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				cluster1_etm2_out_port: endpoint {
-@@ -477,7 +478,7 @@ cpu_debug5: cpu-debug@23310000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 	};
- 
- 	etm5: etm@23340000 {
-@@ -486,7 +487,7 @@ etm5: etm@23340000 {
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				cluster1_etm3_out_port: endpoint {
-@@ -503,8 +504,8 @@ gpu: gpu@2d000000 {
- 			     <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
- 			     <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
- 		interrupt-names = "job", "mmu", "gpu";
--		clocks = <&scpi_dvfs 2>;
--		power-domains = <&scpi_devpd 1>;
-+		clocks = <&scmi_dvfs 2>;
-+		power-domains = <&scmi_devpd 9>;
- 		dma-coherent;
- 		/* The SMMU is only really of interest to bare-metal hypervisors */
- 		/* iommus = <&smmu_gpu 0>; */
-@@ -519,14 +520,24 @@ sram: sram@2e000000 {
- 		#size-cells = <1>;
- 		ranges = <0 0x0 0x2e000000 0x8000>;
- 
--		cpu_scp_lpri: scp-sram@0 {
--			compatible = "arm,juno-scp-shmem";
--			reg = <0x0 0x200>;
-+		cpu_scp_lpri0: scp-sram@0 {
-+			compatible = "arm,scmi-shmem";
-+			reg = <0x0 0x80>;
- 		};
- 
--		cpu_scp_hpri: scp-sram@200 {
--			compatible = "arm,juno-scp-shmem";
--			reg = <0x200 0x200>;
-+		cpu_scp_lpri1: scp-sram@80 {
-+			compatible = "arm,scmi-shmem";
-+			reg = <0x80 0x80>;
-+		};
-+
-+		cpu_scp_hpri0: scp-sram@100 {
-+			compatible = "arm,scmi-shmem";
-+			reg = <0x100 0x80>;
-+		};
-+
-+		cpu_scp_hpri1: scp-sram@180 {
-+			compatible = "arm,scmi-shmem";
-+			reg = <0x180 0x80>;
- 		};
- 	};
- 
-@@ -558,37 +569,37 @@ pcie_ctlr: pcie@40000000 {
- 		iommu-map = <0x0 &smmu_pcie 0x0 0x1>;
- 	};
- 
--	scpi {
--		compatible = "arm,scpi";
--		mboxes = <&mailbox 1>;
--		shmem = <&cpu_scp_hpri>;
-+	firmware {
-+		scmi {
-+			compatible = "arm,scmi";
-+			mbox-names = "tx", "rx";
-+			mboxes = <&mailbox 0 0 &mailbox 0 1>;
-+			shmem = <&cpu_scp_lpri0 &cpu_scp_lpri1>;
-+			#address-cells = <1>;
-+			#size-cells = <0>;
- 
--		clocks {
--			compatible = "arm,scpi-clocks";
-+			scmi_devpd: protocol@11 {
-+				reg = <0x11>;
-+				#power-domain-cells = <1>;
-+			};
- 
--			scpi_dvfs: clocks-0 {
--				compatible = "arm,scpi-dvfs-clocks";
-+			scmi_dvfs: protocol@13 {
-+				reg = <0x13>;
- 				#clock-cells = <1>;
--				clock-indices = <0>, <1>, <2>;
--				clock-output-names = "atlclk", "aplclk","gpuclk";
-+				mbox-names = "tx", "rx";
-+				mboxes = <&mailbox 1 0 &mailbox 1 1>;
-+				shmem = <&cpu_scp_hpri0 &cpu_scp_hpri1>;
- 			};
--			scpi_clk: clocks-1 {
--				compatible = "arm,scpi-variable-clocks";
-+
-+			scmi_clk: protocol@14 {
-+				reg = <0x14>;
- 				#clock-cells = <1>;
--				clock-indices = <3>;
--				clock-output-names = "pxlclk";
- 			};
--		};
- 
--		scpi_devpd: power-controller {
--			compatible = "arm,scpi-power-domains";
--			num-domains = <2>;
--			#power-domain-cells = <1>;
--		};
--
--		scpi_sensors0: sensors {
--			compatible = "arm,scpi-sensors";
--			#thermal-sensor-cells = <1>;
-+			scmi_sensors0: protocol@15 {
-+				reg = <0x15>;
-+				#thermal-sensor-cells = <1>;
-+			};
- 		};
- 	};
- 
-@@ -596,40 +607,40 @@ thermal-zones {
- 		pmic {
- 			polling-delay = <1000>;
- 			polling-delay-passive = <100>;
--			thermal-sensors = <&scpi_sensors0 0>;
-+			thermal-sensors = <&scmi_sensors0 0>;
- 		};
- 
- 		soc {
- 			polling-delay = <1000>;
- 			polling-delay-passive = <100>;
--			thermal-sensors = <&scpi_sensors0 3>;
-+			thermal-sensors = <&scmi_sensors0 3>;
- 		};
- 
- 		big_cluster_thermal_zone: big-cluster {
- 			polling-delay = <1000>;
- 			polling-delay-passive = <100>;
--			thermal-sensors = <&scpi_sensors0 21>;
-+			thermal-sensors = <&scmi_sensors0 21>;
- 			status = "disabled";
- 		};
- 
- 		little_cluster_thermal_zone: little-cluster {
- 			polling-delay = <1000>;
- 			polling-delay-passive = <100>;
--			thermal-sensors = <&scpi_sensors0 22>;
-+			thermal-sensors = <&scmi_sensors0 22>;
- 			status = "disabled";
- 		};
- 
- 		gpu0_thermal_zone: gpu0 {
- 			polling-delay = <1000>;
- 			polling-delay-passive = <100>;
--			thermal-sensors = <&scpi_sensors0 23>;
-+			thermal-sensors = <&scmi_sensors0 23>;
- 			status = "disabled";
- 		};
- 
- 		gpu1_thermal_zone: gpu1 {
- 			polling-delay = <1000>;
- 			polling-delay-passive = <100>;
--			thermal-sensors = <&scpi_sensors0 24>;
-+			thermal-sensors = <&scmi_sensors0 24>;
- 			status = "disabled";
- 		};
- 	};
-@@ -705,7 +716,7 @@ hdlcd@7ff50000 {
- 		reg = <0 0x7ff50000 0 0x1000>;
- 		interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>;
- 		iommus = <&smmu_hdlcd1 0>;
--		clocks = <&scpi_clk 3>;
-+		clocks = <&scmi_clk 3>;
- 		clock-names = "pxlclk";
- 
- 		port {
-@@ -720,7 +731,7 @@ hdlcd@7ff60000 {
- 		reg = <0 0x7ff60000 0 0x1000>;
- 		interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
- 		iommus = <&smmu_hdlcd0 0>;
--		clocks = <&scpi_clk 3>;
-+		clocks = <&scmi_clk 3>;
- 		clock-names = "pxlclk";
- 
- 		port {
-diff --git a/arch/arm64/boot/dts/arm/juno-cs-r1r2.dtsi b/arch/arm64/boot/dts/arm/juno-cs-r1r2.dtsi
-index eda3d9e18af6..e6ecb0dfcbcd 100644
---- a/arch/arm64/boot/dts/arm/juno-cs-r1r2.dtsi
-+++ b/arch/arm64/boot/dts/arm/juno-cs-r1r2.dtsi
-@@ -6,7 +6,7 @@ funnel@20130000 { /* cssys1 */
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				csys1_funnel_out_port: endpoint {
-@@ -29,7 +29,7 @@ etf@20140000 { /* etf1 */
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		in-ports {
- 			port {
- 				etf1_in_port: endpoint {
-@@ -52,7 +52,7 @@ funnel@20150000 { /* cssys2 */
- 
- 		clocks = <&soc_smc50mhz>;
- 		clock-names = "apb_pclk";
--		power-domains = <&scpi_devpd 0>;
-+		power-domains = <&scmi_devpd 8>;
- 		out-ports {
- 			port {
- 				csys2_funnel_out_port: endpoint {
-diff --git a/arch/arm64/boot/dts/arm/juno-r1.dts b/arch/arm64/boot/dts/arm/juno-r1.dts
-index 0e24e29eb9b1..fee67943f4d5 100644
---- a/arch/arm64/boot/dts/arm/juno-r1.dts
-+++ b/arch/arm64/boot/dts/arm/juno-r1.dts
-@@ -96,7 +96,7 @@ A57_0: cpu@0 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <256>;
- 			next-level-cache = <&A57_L2>;
--			clocks = <&scpi_dvfs 0>;
-+			clocks = <&scmi_dvfs 0>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <1024>;
- 		};
-@@ -113,7 +113,7 @@ A57_1: cpu@1 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <256>;
- 			next-level-cache = <&A57_L2>;
--			clocks = <&scpi_dvfs 0>;
-+			clocks = <&scmi_dvfs 0>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <1024>;
- 		};
-@@ -130,7 +130,7 @@ A53_0: cpu@100 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <578>;
- 		};
-@@ -147,7 +147,7 @@ A53_1: cpu@101 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <578>;
- 		};
-@@ -164,7 +164,7 @@ A53_2: cpu@102 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <578>;
- 		};
-@@ -181,7 +181,7 @@ A53_3: cpu@103 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <578>;
- 		};
-diff --git a/arch/arm64/boot/dts/arm/juno-r2.dts b/arch/arm64/boot/dts/arm/juno-r2.dts
-index e609420ce3e4..7792626eb29e 100644
---- a/arch/arm64/boot/dts/arm/juno-r2.dts
-+++ b/arch/arm64/boot/dts/arm/juno-r2.dts
-@@ -96,7 +96,7 @@ A72_0: cpu@0 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <256>;
- 			next-level-cache = <&A72_L2>;
--			clocks = <&scpi_dvfs 0>;
-+			clocks = <&scmi_dvfs 0>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <1024>;
- 			dynamic-power-coefficient = <450>;
-@@ -114,7 +114,7 @@ A72_1: cpu@1 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <256>;
- 			next-level-cache = <&A72_L2>;
--			clocks = <&scpi_dvfs 0>;
-+			clocks = <&scmi_dvfs 0>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <1024>;
- 			dynamic-power-coefficient = <450>;
-@@ -132,7 +132,7 @@ A53_0: cpu@100 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <485>;
- 			dynamic-power-coefficient = <140>;
-@@ -150,7 +150,7 @@ A53_1: cpu@101 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <485>;
- 			dynamic-power-coefficient = <140>;
-@@ -168,7 +168,7 @@ A53_2: cpu@102 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <485>;
- 			dynamic-power-coefficient = <140>;
-@@ -186,7 +186,7 @@ A53_3: cpu@103 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <485>;
- 			dynamic-power-coefficient = <140>;
-diff --git a/arch/arm64/boot/dts/arm/juno.dts b/arch/arm64/boot/dts/arm/juno.dts
-index f00cffbd032c..a28316c65c1b 100644
---- a/arch/arm64/boot/dts/arm/juno.dts
-+++ b/arch/arm64/boot/dts/arm/juno.dts
-@@ -95,7 +95,7 @@ A57_0: cpu@0 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <256>;
- 			next-level-cache = <&A57_L2>;
--			clocks = <&scpi_dvfs 0>;
-+			clocks = <&scmi_dvfs 0>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <1024>;
- 			dynamic-power-coefficient = <530>;
-@@ -113,7 +113,7 @@ A57_1: cpu@1 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <256>;
- 			next-level-cache = <&A57_L2>;
--			clocks = <&scpi_dvfs 0>;
-+			clocks = <&scmi_dvfs 0>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <1024>;
- 			dynamic-power-coefficient = <530>;
-@@ -131,7 +131,7 @@ A53_0: cpu@100 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <578>;
- 			dynamic-power-coefficient = <140>;
-@@ -149,7 +149,7 @@ A53_1: cpu@101 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <578>;
- 			dynamic-power-coefficient = <140>;
-@@ -167,7 +167,7 @@ A53_2: cpu@102 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <578>;
- 			dynamic-power-coefficient = <140>;
-@@ -185,7 +185,7 @@ A53_3: cpu@103 {
- 			d-cache-line-size = <64>;
- 			d-cache-sets = <128>;
- 			next-level-cache = <&A53_L2>;
--			clocks = <&scpi_dvfs 1>;
-+			clocks = <&scmi_dvfs 1>;
- 			cpu-idle-states = <&CPU_SLEEP_0 &CLUSTER_SLEEP_0>;
- 			capacity-dmips-mhz = <578>;
- 			dynamic-power-coefficient = <140>;
--- 
-2.25.1
-
diff --git a/meta-arm/meta-arm-bsp/recipes-kernel/linux/linux-arm-platforms.inc b/meta-arm/meta-arm-bsp/recipes-kernel/linux/linux-arm-platforms.inc
index f05c5ff..7bb4a92 100644
--- a/meta-arm/meta-arm-bsp/recipes-kernel/linux/linux-arm-platforms.inc
+++ b/meta-arm/meta-arm-bsp/recipes-kernel/linux/linux-arm-platforms.inc
@@ -91,7 +91,6 @@
 KBUILD_DEFCONFIG:juno = "defconfig"
 KCONFIG_MODE:juno = "--alldefconfig"
 FILESEXTRAPATHS:prepend:juno := "${ARMBSPFILESPATHS}"
-SRC_URI:append:juno = " file://juno-dts-mhu-doorbell.patch"
 
 #
 # Musca B1/S2 can't run Linux
@@ -180,6 +179,7 @@
     file://0042-ANDROID-trusty-ffa-Add-support-for-FFA-memory-operat.patch \
     file://0043-ANDROID-trusty-ffa-Enable-FFA-transport-for-both-mem.patch \
     file://0044-ANDROID-trusty-Make-trusty-transports-configurable.patch \
+    file://init_disassemble_info-signature-changes-causes-compile-failures.patch \
     "
 KERNEL_FEATURES:append:tc = " bsp/arm-platforms/tc.scc"
 KERNEL_FEATURES:append:tc1 = " bsp/arm-platforms/tc-autofdo.scc"
diff --git a/meta-arm/meta-arm-bsp/recipes-kernel/linux/linux-arm64-ack-5.10/tc/init_disassemble_info-signature-changes-causes-compile-failures.patch b/meta-arm/meta-arm-bsp/recipes-kernel/linux/linux-arm64-ack-5.10/tc/init_disassemble_info-signature-changes-causes-compile-failures.patch
new file mode 100644
index 0000000..aa33bb7
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-kernel/linux/linux-arm64-ack-5.10/tc/init_disassemble_info-signature-changes-causes-compile-failures.patch
@@ -0,0 +1,99 @@
+From 1b2013986271de39360cf79e62ed9b7d2cc59f9b Mon Sep 17 00:00:00 2001
+From: Andres Freund <andres@anarazel.de>
+Date: Wed, 22 Jun 2022 11:19:18 -0700
+Subject: [PATCH] init_disassemble_info() signature changes causes compile
+ failures
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Hi,
+
+binutils changed the signature of init_disassemble_info(), which now causes
+perf and bpftool to fail to compile (e.g. on debian unstable).
+
+Relevant binutils commit: https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=60a3da00bd5407f07d64dff82a4dae98230dfaac
+
+util/annotate.c: In function ‘symbol__disassemble_bpf’:
+util/annotate.c:1765:9: error: too few arguments to function ‘init_disassemble_info’
+ 1765 |         init_disassemble_info(&info, s,
+      |         ^~~~~~~~~~~~~~~~~~~~~
+In file included from util/annotate.c:1718:
+/usr/include/dis-asm.h:472:13: note: declared here
+  472 | extern void init_disassemble_info (struct disassemble_info *dinfo, void *stream,
+      |             ^~~~~~~~~~~~~~~~~~~~~
+
+with equivalent failures in
+
+tools/bpf/bpf_jit_disasm.c
+tools/bpf/bpftool/jit_disasm.c
+
+The fix is easy enough, add a wrapper around fprintf() that conforms to the
+new signature.
+
+However I assume the necessary feature test and wrapper should only be added
+once? I don't know the kernel stuff well enough to choose the right structure
+here.
+
+Attached is my local fix for perf. Obviously would need work to be a real
+solution.
+
+Greetings,
+
+Andres Freund
+---
+
+binutils 2.39 changed the signature of init_disassemble_info(),
+which now causes perf and bpftool to fail to compile.
+
+Relevant binutils commit: [1]
+
+There is a proper fix in development upstream[2].
+This is a work-around for older kernels.
+
+[1] https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=60a3da00bd5407f07d64dff82a4dae98230dfaac
+[2] https://patchwork.kernel.org/project/netdevbpf/cover/20220801013834.156015-1-andres@anarazel.de/
+
+Upstream-Status: Pending
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+
+ tools/perf/util/annotate.c | 15 ++++++++++++++-
+ 1 file changed, 14 insertions(+), 1 deletion(-)
+
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 6c8575e182ed..466a3a68f5cd 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -1677,6 +1677,18 @@ static int dso__disassemble_filename(struct dso *dso, char *filename, size_t fil
+ #include <bfd.h>
+ #include <dis-asm.h>
+ 
++static int fprintf_styled(void *, enum disassembler_style, const char* fmt, ...)
++{
++  va_list args;
++  int r;
++
++  va_start(args, fmt);
++  r = vprintf(fmt, args);
++  va_end(args);
++
++  return r;
++}
++
+ static int symbol__disassemble_bpf(struct symbol *sym,
+ 				   struct annotate_args *args)
+ {
+@@ -1719,7 +1731,8 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+ 		goto out;
+ 	}
+ 	init_disassemble_info(&info, s,
+-			      (fprintf_ftype) fprintf);
++			      (fprintf_ftype) fprintf,
++			      fprintf_styled);
+ 
+ 	info.arch = bfd_get_arch(bfdf);
+ 	info.mach = bfd_get_mach(bfdf);
+-- 
+2.30.2
+
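For context, the patch above works around the binutils 2.39 API break by passing an extra styled-print callback to init_disassemble_info(). Below is a minimal standalone sketch of the same idea, assuming binutils >= 2.39 headers (<dis-asm.h>); unlike the patch, it forwards output to the stream argument instead of stdout. The helper names are illustrative; the binutils types and init_disassemble_info() itself are the real API.

#include <stdarg.h>
#include <stdio.h>
#include <dis-asm.h>

/* Styled-print callback with the shape binutils >= 2.39 expects:
 * int (*)(void *stream, enum disassembler_style, const char *fmt, ...) */
static int print_styled(void *stream, enum disassembler_style style,
                        const char *fmt, ...)
{
    va_list args;
    int ret;

    (void)style;                       /* styling information is dropped */
    va_start(args, fmt);
    ret = vfprintf(stream, fmt, args); /* write to the stream, not stdout */
    va_end(args);
    return ret;
}

static void setup_disassembler(struct disassemble_info *info, FILE *out)
{
    /* Four-argument form introduced with binutils 2.39 */
    init_disassemble_info(info, out, (fprintf_ftype)fprintf, print_styled);
}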
diff --git a/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0001-core-arm-add-MPIDR-affinity-shift-and-mask-for-32-bi.patch b/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0001-core-arm-add-MPIDR-affinity-shift-and-mask-for-32-bi.patch
new file mode 100644
index 0000000..f249e52
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0001-core-arm-add-MPIDR-affinity-shift-and-mask-for-32-bi.patch
@@ -0,0 +1,29 @@
+Upstream-Status: Pending [Not submitted to upstream yet]
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+
+From cf84c933bb7b8a95742d1e723950cb2cde2d5320 Mon Sep 17 00:00:00 2001
+From: Vishnu Banavath <vishnu.banavath@arm.com>
+Date: Wed, 20 Jul 2022 16:37:10 +0100
+Subject: [PATCH] core: arm: add MPIDR affinity shift and mask for 32-bit
+
+This change is to add MPIDR affinity shift and mask for
+32-bit
+
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+
+diff --git a/core/arch/arm/include/arm.h b/core/arch/arm/include/arm.h
+index f59478af..2f6f82e7 100644
+--- a/core/arch/arm/include/arm.h
++++ b/core/arch/arm/include/arm.h
+@@ -63,6 +63,8 @@
+ #define MPIDR_AFF1_MASK		(MPIDR_AFFLVL_MASK << MPIDR_AFF1_SHIFT)
+ #define MPIDR_AFF2_SHIFT	U(16)
+ #define MPIDR_AFF2_MASK		(MPIDR_AFFLVL_MASK << MPIDR_AFF2_SHIFT)
++#define MPIDR_AFF3_SHIFT	U(32)
++#define MPIDR_AFF3_MASK		(MPIDR_AFFLVL_MASK << MPIDR_AFF3_SHIFT)
+ 
+ #define MPIDR_MT_SHIFT		U(24)
+ #define MPIDR_MT_MASK		BIT(MPIDR_MT_SHIFT)
+-- 
+2.17.1
+
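The hunk above only adds the Aff3 shift and mask. As a quick illustration of how these constants are used, here is a small self-contained sketch that decomposes an MPIDR value into its four affinity fields. The constant values mirror the Arm architecture layout (Aff0..Aff2 in bits [23:0], Aff3 in bits [39:32]) and are restated here rather than taken from OP-TEE's arm.h.

#include <stdint.h>
#include <stdio.h>

#define MPIDR_AFFLVL_MASK  0xffULL
#define MPIDR_AFF0_SHIFT   0
#define MPIDR_AFF1_SHIFT   8
#define MPIDR_AFF2_SHIFT   16
#define MPIDR_AFF3_SHIFT   32   /* the field the patch adds a mask for */

static uint64_t mpidr_aff(uint64_t mpidr, unsigned int shift)
{
    return (mpidr >> shift) & MPIDR_AFFLVL_MASK;
}

int main(void)
{
    uint64_t mpidr = 0x0000000081000200ULL;   /* arbitrary example value */

    printf("aff0=%llu aff1=%llu aff2=%llu aff3=%llu\n",
           (unsigned long long)mpidr_aff(mpidr, MPIDR_AFF0_SHIFT),
           (unsigned long long)mpidr_aff(mpidr, MPIDR_AFF1_SHIFT),
           (unsigned long long)mpidr_aff(mpidr, MPIDR_AFF2_SHIFT),
           (unsigned long long)mpidr_aff(mpidr, MPIDR_AFF3_SHIFT));
    return 0;
}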
diff --git a/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0002-plat-n1sdp-add-N1SDP-platform-support.patch b/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0002-plat-n1sdp-add-N1SDP-platform-support.patch
new file mode 100644
index 0000000..db195ab
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0002-plat-n1sdp-add-N1SDP-platform-support.patch
@@ -0,0 +1,233 @@
+Upstream-Status: Pending [Not submitted to upstream yet]
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+
+From 22ba7c7789082dbc179921962cdcadece4499c89 Mon Sep 17 00:00:00 2001
+From: Vishnu Banavath <vishnu.banavath@arm.com>
+Date: Thu, 30 Jun 2022 18:36:26 +0100
+Subject: [PATCH] plat-n1sdp: add N1SDP platform support
+
+These changes are to add N1SDP platform to optee-os
+
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+
+diff --git a/core/arch/arm/plat-n1sdp/conf.mk b/core/arch/arm/plat-n1sdp/conf.mk
+new file mode 100644
+index 00000000..06b4975a
+--- /dev/null
++++ b/core/arch/arm/plat-n1sdp/conf.mk
+@@ -0,0 +1,41 @@
++include core/arch/arm/cpu/cortex-armv8-0.mk
++
++CFG_DEBUG_INFO = y
++CFG_TEE_CORE_LOG_LEVEL = 4
++
++# Workaround 808870: Unconditional VLDM instructions might cause an
++# alignment fault even though the address is aligned
++# Either hard float must be disabled for AArch32 or strict alignment checks
++# must be disabled
++ifeq ($(CFG_SCTLR_ALIGNMENT_CHECK),y)
++$(call force,CFG_TA_ARM32_NO_HARD_FLOAT_SUPPORT,y)
++else
++$(call force,CFG_SCTLR_ALIGNMENT_CHECK,n)
++endif
++
++CFG_ARM64_core ?= y
++
++CFG_ARM_GICV3 = y
++
++# ARM debugger needs this
++platform-cflags-debug-info = -gdwarf-4
++platform-aflags-debug-info = -gdwarf-4
++
++CFG_CORE_SEL1_SPMC	= y
++CFG_WITH_ARM_TRUSTED_FW	= y
++
++$(call force,CFG_GIC,y)
++$(call force,CFG_PL011,y)
++$(call force,CFG_SECURE_TIME_SOURCE_CNTPCT,y)
++
++CFG_CORE_HEAP_SIZE = 0x32000 # 200kb
++
++CFG_TEE_CORE_NB_CORE = 4
++CFG_TZDRAM_START ?= 0x08000000
++CFG_TZDRAM_SIZE  ?= 0x02008000
++
++CFG_SHMEM_START  ?= 0x83000000
++CFG_SHMEM_SIZE   ?= 0x00210000
++# DRAM1 is defined above 4G
++$(call force,CFG_CORE_LARGE_PHYS_ADDR,y)
++$(call force,CFG_CORE_ARM64_PA_BITS,36)
+diff --git a/core/arch/arm/plat-n1sdp/main.c b/core/arch/arm/plat-n1sdp/main.c
+new file mode 100644
+index 00000000..cfb7f19b
+--- /dev/null
++++ b/core/arch/arm/plat-n1sdp/main.c
+@@ -0,0 +1,63 @@
++// SPDX-License-Identifier: BSD-2-Clause
++/*
++ * Copyright (c) 2022, Arm Limited.
++ */
++
++#include <arm.h>
++#include <console.h>
++#include <drivers/gic.h>
++#include <drivers/pl011.h>
++#include <drivers/tpm2_mmio.h>
++#include <drivers/tpm2_ptp_fifo.h>
++#include <drivers/tzc400.h>
++#include <initcall.h>
++#include <keep.h>
++#include <kernel/boot.h>
++#include <kernel/interrupt.h>
++#include <kernel/misc.h>
++#include <kernel/notif.h>
++#include <kernel/panic.h>
++#include <kernel/spinlock.h>
++#include <kernel/tee_time.h>
++#include <mm/core_memprot.h>
++#include <mm/core_mmu.h>
++#include <platform_config.h>
++#include <sm/psci.h>
++#include <stdint.h>
++#include <string.h>
++#include <trace.h>
++
++static struct gic_data gic_data __nex_bss;
++static struct pl011_data console_data __nex_bss;
++
++register_phys_mem_pgdir(MEM_AREA_IO_SEC, CONSOLE_UART_BASE, PL011_REG_SIZE);
++
++register_ddr(DRAM0_BASE, DRAM0_SIZE);
++
++register_phys_mem_pgdir(MEM_AREA_IO_SEC, GICD_BASE, GIC_DIST_REG_SIZE);
++register_phys_mem_pgdir(MEM_AREA_IO_SEC, GICC_BASE, GIC_DIST_REG_SIZE);
++register_phys_mem_pgdir(MEM_AREA_IO_SEC, GICR_BASE, GIC_DIST_REG_SIZE);
++
++void main_init_gic(void)
++{
++	gic_init_base_addr(&gic_data, GICC_BASE,
++			   GICD_BASE);
++	itr_init(&gic_data.chip);
++}
++
++void main_secondary_init_gic(void)
++{
++	gic_cpu_init(&gic_data);
++}
++
++void itr_core_handler(void)
++{
++	gic_it_handle(&gic_data);
++}
++
++void console_init(void)
++{
++	pl011_init(&console_data, CONSOLE_UART_BASE, CONSOLE_UART_CLK_IN_HZ,
++		   CONSOLE_BAUDRATE);
++	register_serial_console(&console_data.chip);
++}
+diff --git a/core/arch/arm/plat-n1sdp/n1sdp_core_pos.S b/core/arch/arm/plat-n1sdp/n1sdp_core_pos.S
+new file mode 100644
+index 00000000..439d4e67
+--- /dev/null
++++ b/core/arch/arm/plat-n1sdp/n1sdp_core_pos.S
+@@ -0,0 +1,32 @@
++/* SPDX-License-Identifier: BSD-2-Clause */
++/*
++ * Copyright (c) 2022, Arm Limited
++ */
++
++#include <asm.S>
++#include <arm.h>
++#include "platform_config.h"
++
++FUNC get_core_pos_mpidr , :
++        mov     x4, x0
++
++        /*
++         * The MT bit in MPIDR is always set for n1sdp and the
++         * affinity level 0 corresponds to thread affinity level.
++         */
++
++        /* Extract individual affinity fields from MPIDR */
++        ubfx    x0, x4, #MPIDR_AFF0_SHIFT, #MPIDR_AFFINITY_BITS
++        ubfx    x1, x4, #MPIDR_AFF1_SHIFT, #MPIDR_AFFINITY_BITS
++        ubfx    x2, x4, #MPIDR_AFF2_SHIFT, #MPIDR_AFFINITY_BITS
++        ubfx    x3, x4, #MPIDR_AFF3_SHIFT, #MPIDR_AFFINITY_BITS
++
++        /* Compute linear position */
++        mov     x4, #N1SDP_MAX_CLUSTERS_PER_CHIP
++        madd    x2, x3, x4, x2
++        mov     x4, #N1SDP_MAX_CPUS_PER_CLUSTER
++        madd    x1, x2, x4, x1
++        mov     x4, #N1SDP_MAX_PE_PER_CPU
++        madd    x0, x1, x4, x0
++        ret
++END_FUNC get_core_pos_mpidr
+diff --git a/core/arch/arm/plat-n1sdp/platform_config.h b/core/arch/arm/plat-n1sdp/platform_config.h
+new file mode 100644
+index 00000000..81b99409
+--- /dev/null
++++ b/core/arch/arm/plat-n1sdp/platform_config.h
+@@ -0,0 +1,49 @@
++/* SPDX-License-Identifier: BSD-2-Clause */
++/*
++ * Copyright (c) 2022, Arm Limited
++ */
++
++#ifndef PLATFORM_CONFIG_H
++#define PLATFORM_CONFIG_H
++
++#include <mm/generic_ram_layout.h>
++#include <stdint.h>
++
++/* Make stacks aligned to data cache line length */
++#define STACK_ALIGNMENT		64
++
++ /* N1SDP topology related constants */
++#define N1SDP_MAX_CPUS_PER_CLUSTER              U(2)
++#define PLAT_ARM_CLUSTER_COUNT                  U(2)
++#define PLAT_N1SDP_CHIP_COUNT                   U(2)
++#define N1SDP_MAX_CLUSTERS_PER_CHIP             U(2)
++#define N1SDP_MAX_PE_PER_CPU                    U(1)
++
++#define PLATFORM_CORE_COUNT                     (PLAT_N1SDP_CHIP_COUNT *        \
++		                                PLAT_ARM_CLUSTER_COUNT *        \
++						N1SDP_MAX_CPUS_PER_CLUSTER *    \
++						N1SDP_MAX_PE_PER_CPU)
++
++#define GIC_BASE		0x2c010000
++
++#define UART1_BASE		0x1C0A0000
++#define UART1_CLK_IN_HZ		24000000  /*24MHz*/
++
++#define CONSOLE_UART_BASE	UART1_BASE
++#define CONSOLE_UART_CLK_IN_HZ	UART1_CLK_IN_HZ
++
++#define DRAM0_BASE		0x80000000
++#define DRAM0_SIZE		0x80000000
++
++#define GICD_BASE		0x30000000
++#define GICC_BASE		0x2C000000
++#define GICR_BASE		0x300C0000
++
++#ifndef UART_BAUDRATE
++#define UART_BAUDRATE		115200
++#endif
++#ifndef CONSOLE_BAUDRATE
++#define CONSOLE_BAUDRATE	UART_BAUDRATE
++#endif
++
++#endif /*PLATFORM_CONFIG_H*/
+diff --git a/core/arch/arm/plat-n1sdp/sub.mk b/core/arch/arm/plat-n1sdp/sub.mk
+new file mode 100644
+index 00000000..a0b49da1
+--- /dev/null
++++ b/core/arch/arm/plat-n1sdp/sub.mk
+@@ -0,0 +1,3 @@
++global-incdirs-y += .
++srcs-y += main.c
++srcs-y	+= n1sdp_core_pos.S
+-- 
+2.17.1
+
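For readability, the linear core-position calculation performed by get_core_pos_mpidr above can be expressed in C as follows. This is only an illustrative sketch; the topology constants are copied from the platform_config.h added by the patch.

#define N1SDP_MAX_CLUSTERS_PER_CHIP  2
#define N1SDP_MAX_CPUS_PER_CLUSTER   2
#define N1SDP_MAX_PE_PER_CPU         1

/* aff0 = thread, aff1 = CPU, aff2 = cluster, aff3 = chip, as extracted
 * from MPIDR by the assembly routine above. */
static unsigned int core_pos(unsigned int aff0, unsigned int aff1,
                             unsigned int aff2, unsigned int aff3)
{
    unsigned int cluster = aff3 * N1SDP_MAX_CLUSTERS_PER_CHIP + aff2;
    unsigned int cpu     = cluster * N1SDP_MAX_CPUS_PER_CLUSTER + aff1;

    return cpu * N1SDP_MAX_PE_PER_CPU + aff0;   /* same madd sequence, in C */
}

With two chips, two clusters per chip, two CPUs per cluster and one PE per CPU this yields positions 0..7, matching PLATFORM_CORE_COUNT in the patch.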
diff --git a/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0003-HACK-disable-instruction-cache-and-data-cache.patch b/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0003-HACK-disable-instruction-cache-and-data-cache.patch
new file mode 100644
index 0000000..e8f4cc4
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0003-HACK-disable-instruction-cache-and-data-cache.patch
@@ -0,0 +1,46 @@
+Upstream-Status: Pending [Not submitted to upstream yet]
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+
+From 0c3ce4c09cd7d2ff4cd2e62acab899dd88dc9514 Mon Sep 17 00:00:00 2001
+From: Vishnu Banavath <vishnu.banavath@arm.com>
+Date: Wed, 20 Jul 2022 16:45:59 +0100
+Subject: [PATCH] HACK: disable instruction cache and data cache.
+
+For some reason, n1sdp fails to boot with instruction cache and
+data cache enabled. This is a temporary change to disable I cache
+and D cache until a proper fix is found.
+
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+
+%% original patch: 0003-HACK-disable-instruction-cache-and-data-cache.patch
+
+diff --git a/core/arch/arm/kernel/entry_a64.S b/core/arch/arm/kernel/entry_a64.S
+index 875b6e69..594d6928 100644
+--- a/core/arch/arm/kernel/entry_a64.S
++++ b/core/arch/arm/kernel/entry_a64.S
+@@ -52,7 +52,7 @@
+ 
+ 	.macro set_sctlr_el1
+ 		mrs	x0, sctlr_el1
+-		orr	x0, x0, #SCTLR_I
++		bic	x0, x0, #SCTLR_I
+ 		orr	x0, x0, #SCTLR_SA
+ 		orr	x0, x0, #SCTLR_SPAN
+ #if defined(CFG_CORE_RWDATA_NOEXEC)
+@@ -490,11 +490,11 @@ LOCAL_FUNC enable_mmu , : , .identity_map
+ 	isb
+ 
+ 	/* Enable I and D cache */
+-	mrs	x1, sctlr_el1
++	/* mrs	x1, sctlr_el1
+ 	orr	x1, x1, #SCTLR_I
+ 	orr	x1, x1, #SCTLR_C
+ 	msr	sctlr_el1, x1
+-	isb
++	isb */
+ 
+ 	/* Adjust stack pointers and return address */
+ 	msr	spsel, #1
+-- 
+2.17.1
+
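Expressed in C, the two hunks above amount to clearing the I-cache enable bit where the original code set it, and skipping the later I/D-cache enable sequence. A rough sketch follows; the SCTLR bit positions are restated from the Armv8 SCTLR_EL1 layout, not taken from the OP-TEE sources.

#include <stdint.h>

#define SCTLR_C  (UINT64_C(1) << 2)    /* data cache enable */
#define SCTLR_I  (UINT64_C(1) << 12)   /* instruction cache enable */

static uint64_t sctlr_with_caches_disabled(uint64_t sctlr)
{
    sctlr &= ~SCTLR_I;   /* was: sctlr |= SCTLR_I (orr changed to bic) */
    /* the later "orr SCTLR_I / orr SCTLR_C" enable sequence is commented
     * out by the second hunk, so SCTLR_C is never set either */
    return sctlr;
}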
diff --git a/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0004-Handle-logging-syscall.patch b/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0004-Handle-logging-syscall.patch
new file mode 100644
index 0000000..356be9e
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-security/optee/files/optee-os/n1sdp/0004-Handle-logging-syscall.patch
@@ -0,0 +1,33 @@
+Upstream-Status: Pending [Not submitted to upstream yet]
+Signed-off-by: Vishnu Banavath <vishnu.banavath@arm.com>
+
+From b3fde6c2e1a950214f760ab9f194f3a6572292a8 Mon Sep 17 00:00:00 2001
+From: Balint Dobszay <balint.dobszay@arm.com>
+Date: Fri, 15 Jul 2022 13:45:54 +0200
+Subject: [PATCH] Handle logging syscall
+
+Signed-off-by: Balint Dobszay <balint.dobszay@arm.com>
+Change-Id: Ib8151cc9c66aea8bcc8fe8b1ecdc3f9f9c5f14e4
+
+%% original patch: 0004-Handle-logging-syscall.patch
+
+diff --git a/core/arch/arm/kernel/spmc_sp_handler.c b/core/arch/arm/kernel/spmc_sp_handler.c
+index e0fa0aa6..c7a45387 100644
+--- a/core/arch/arm/kernel/spmc_sp_handler.c
++++ b/core/arch/arm/kernel/spmc_sp_handler.c
+@@ -1004,6 +1004,12 @@ void spmc_sp_msg_handler(struct thread_smc_args *args,
+ 			ffa_mem_reclaim(args, caller_sp);
+ 			sp_enter(args, caller_sp);
+ 			break;
++		case 0xdeadbeef:
++			ts_push_current_session(&caller_sp->ts_sess);
++			IMSG("%s", (char *)args->a1);
++			ts_pop_current_session();
++			sp_enter(args, caller_sp);
++			break;
+ 		default:
+ 			EMSG("Unhandled FFA function ID %#"PRIx32,
+ 			     (uint32_t)args->a0);
+-- 
+2.17.1
+
diff --git a/meta-arm/meta-arm-bsp/recipes-security/optee/optee-os-n1sdp.inc b/meta-arm/meta-arm-bsp/recipes-security/optee/optee-os-n1sdp.inc
new file mode 100644
index 0000000..219f08b
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-security/optee/optee-os-n1sdp.inc
@@ -0,0 +1,22 @@
+# N1 SDP specific configuration for optee-os
+
+COMPATIBLE_MACHINE:n1sdp = "n1sdp"
+OPTEEMACHINE:n1sdp = "n1sdp"
+
+TS_INSTALL_PREFIX_PATH = "${RECIPE_SYSROOT}/firmware/sp/opteesp"
+
+FILESEXTRAPATHS:prepend := "${THISDIR}/files/optee-os/n1sdp:"
+SRC_URI:append = " \
+    file://0001-core-arm-add-MPIDR-affinity-shift-and-mask-for-32-bi.patch \
+    file://0002-plat-n1sdp-add-N1SDP-platform-support.patch \
+    file://0003-HACK-disable-instruction-cache-and-data-cache.patch \
+    file://0004-Handle-logging-syscall.patch \
+    "
+
+EXTRA_OEMAKE += " CFG_TEE_CORE_LOG_LEVEL=4"
+
+EXTRA_OEMAKE += " CFG_TEE_BENCHMARK=n"
+
+EXTRA_OEMAKE += " CFG_CORE_SEL1_SPMC=y CFG_CORE_FFA=y"
+
+EXTRA_OEMAKE += " CFG_WITH_SP=y"
diff --git a/meta-arm/meta-arm-bsp/recipes-security/optee/optee-os_3.18.0.bbappend b/meta-arm/meta-arm-bsp/recipes-security/optee/optee-os_3.18.0.bbappend
new file mode 100644
index 0000000..f80e09f
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-security/optee/optee-os_3.18.0.bbappend
@@ -0,0 +1,6 @@
+# Machine specific configurations
+
+MACHINE_OPTEE_OS_REQUIRE ?= ""
+MACHINE_OPTEE_OS_REQUIRE:n1sdp = "optee-os-n1sdp.inc"
+
+require ${MACHINE_OPTEE_OS_REQUIRE}
diff --git a/meta-arm/meta-arm-bsp/recipes-security/trusted-services/secure-partitions/corstone1000/0048-Fix-UEFI-get_variable-with-small-buffer.patch b/meta-arm/meta-arm-bsp/recipes-security/trusted-services/secure-partitions/corstone1000/0048-Fix-UEFI-get_variable-with-small-buffer.patch
new file mode 100644
index 0000000..e4573a5
--- /dev/null
+++ b/meta-arm/meta-arm-bsp/recipes-security/trusted-services/secure-partitions/corstone1000/0048-Fix-UEFI-get_variable-with-small-buffer.patch
@@ -0,0 +1,407 @@
+Upstream-Status: Pending 
+Signed-off-by: Gowtham Suresh Kumar <gowtham.sureshkumar@arm.com>
+
+From 2d975e5ec5df6f81d6c35fe927f72d49181142f8 Mon Sep 17 00:00:00 2001
+From: Julian Hall <julian.hall@arm.com>
+Date: Tue, 19 Jul 2022 12:43:30 +0100
+Subject: [PATCH] Fix UEFI get_variable with small buffer
+
+The handling of the UEFI get_variable operation was incorrect when
+a small or zero data length was specified by a requester. A zero
+length data length is a legitimate way to discover the size of a
+variable without actually retrieving its data. This change adds
+test cases that reproduce the problem and a fix.
+
+Signed-off-by: Julian Hall <julian.hall@arm.com>
+Change-Id: Iec087fbf9305746d1438888e871602ec0ce15824
+---
+ .../backend/test/variable_store_tests.cpp     | 60 ++++++++++++++++--
+ .../backend/uefi_variable_store.c             | 46 +++++++++++---
+ .../client/cpp/smm_variable_client.cpp        | 33 +++++-----
+ .../client/cpp/smm_variable_client.h          |  8 ++-
+ .../provider/smm_variable_provider.c          |  2 +-
+ .../service/smm_variable_service_tests.cpp    | 62 +++++++++++++++++++
+ 6 files changed, 179 insertions(+), 32 deletions(-)
+
+diff --git a/components/service/smm_variable/backend/test/variable_store_tests.cpp b/components/service/smm_variable/backend/test/variable_store_tests.cpp
+index 235642e6..98faf761 100644
+--- a/components/service/smm_variable/backend/test/variable_store_tests.cpp
++++ b/components/service/smm_variable/backend/test/variable_store_tests.cpp
+@@ -128,7 +128,8 @@ TEST_GROUP(UefiVariableStoreTests)
+ 
+ 	efi_status_t get_variable(
+ 		const std::wstring &name,
+-		std::string &data)
++		std::string &data,
++		size_t data_len_clamp = VARIABLE_BUFFER_SIZE)
+ 	{
+ 		std::vector<int16_t> var_name = to_variable_name(name);
+ 		size_t name_size = var_name.size() * sizeof(int16_t);
+@@ -144,21 +145,40 @@ TEST_GROUP(UefiVariableStoreTests)
+ 		access_variable->NameSize = name_size;
+ 		memcpy(access_variable->Name, var_name.data(), name_size);
+ 
+-		access_variable->DataSize = 0;
++		size_t max_data_len = (data_len_clamp == VARIABLE_BUFFER_SIZE) ?
++			VARIABLE_BUFFER_SIZE -
++				SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_DATA_OFFSET(access_variable) :
++			data_len_clamp;
++
++		access_variable->DataSize = max_data_len;
+ 
+ 		efi_status_t status = uefi_variable_store_get_variable(
+ 			&m_uefi_variable_store,
+ 			access_variable,
+-			VARIABLE_BUFFER_SIZE -
+-				SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_DATA_OFFSET(access_variable),
++			max_data_len,
+ 			&total_size);
+ 
++		data.clear();
++
+ 		if (status == EFI_SUCCESS) {
+ 
+ 			const char *data_start = (const char*)(msg_buffer +
+ 				SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_DATA_OFFSET(access_variable));
+ 
+ 			data = std::string(data_start, access_variable->DataSize);
++
++			UNSIGNED_LONGLONGS_EQUAL(
++				SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_TOTAL_SIZE(access_variable),
++				total_size);
++		}
++		else if (status == EFI_BUFFER_TOO_SMALL) {
++
++			/* String length set to reported variable length */
++			data.insert(0, access_variable->DataSize, '!');
++
++			UNSIGNED_LONGLONGS_EQUAL(
++				SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_DATA_OFFSET(access_variable),
++				total_size);
+ 		}
+ 
+ 		return status;
+@@ -336,6 +356,38 @@ TEST(UefiVariableStoreTests, persistentSetGet)
+ 	LONGS_EQUAL(0, input_data.compare(output_data));
+ }
+ 
++TEST(UefiVariableStoreTests, getWithSmallBuffer)
++{
++	efi_status_t status = EFI_SUCCESS;
++	std::wstring var_name = L"test_variable";
++	std::string input_data = "quick brown fox";
++	std::string output_data;
++
++	/* A get with a zero length buffer is a legitimate way to
++	 * discover the variable size. This test performs GetVariable
++	 * operations with various small buffer sizes. */
++	status = set_variable(var_name, input_data, 0);
++	UNSIGNED_LONGLONGS_EQUAL(EFI_SUCCESS, status);
++
++	/* First get the variable without a constrained buffer */
++	status = get_variable(var_name, output_data);
++	UNSIGNED_LONGLONGS_EQUAL(EFI_SUCCESS, status);
++
++	/* Expect got variable data to be the same as the set value */
++	UNSIGNED_LONGLONGS_EQUAL(input_data.size(), output_data.size());
++	LONGS_EQUAL(0, input_data.compare(output_data));
++
++	/* Now try with a zero length buffer */
++	status = get_variable(var_name, output_data, 0);
++	UNSIGNED_LONGLONGS_EQUAL(EFI_BUFFER_TOO_SMALL, status);
++	UNSIGNED_LONGLONGS_EQUAL(input_data.size(), output_data.size());
++
++	/* Try with a non-zero length but too small buffer */
++	status = get_variable(var_name, output_data, input_data.size() -1);
++	UNSIGNED_LONGLONGS_EQUAL(EFI_BUFFER_TOO_SMALL, status);
++	UNSIGNED_LONGLONGS_EQUAL(input_data.size(), output_data.size());
++}
++
+ TEST(UefiVariableStoreTests, removeVolatile)
+ {
+ 	efi_status_t status = EFI_SUCCESS;
+diff --git a/components/service/smm_variable/backend/uefi_variable_store.c b/components/service/smm_variable/backend/uefi_variable_store.c
+index e8771c21..90d648de 100644
+--- a/components/service/smm_variable/backend/uefi_variable_store.c
++++ b/components/service/smm_variable/backend/uefi_variable_store.c
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (c) 2021, Arm Limited. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited. All rights reserved.
+  *
+  * SPDX-License-Identifier: BSD-3-Clause
+  *
+@@ -294,7 +294,10 @@ efi_status_t uefi_variable_store_get_variable(
+ 
+ 			status = load_variable_data(context, info, var, max_data_len);
+ 			var->Attributes = info->metadata.attributes;
+-			*total_length = SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_TOTAL_SIZE(var);
++
++			*total_length = (status == EFI_SUCCESS) ?
++				SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_TOTAL_SIZE(var) :
++				SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_DATA_OFFSET(var);
+ 		}
+ 	}
+ 
+@@ -682,7 +685,6 @@ static efi_status_t load_variable_data(
+ {
+ 	EMSG("In func %s\n", __func__);
+ 	psa_status_t psa_status = PSA_SUCCESS;
+-	size_t data_len = 0;
+ 	uint8_t *data = (uint8_t*)var +
+ 		SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_DATA_OFFSET(var);
+ 
+@@ -692,17 +694,41 @@ static efi_status_t load_variable_data(
+ 
+ 	if (delegate_store->storage_backend) {
+ 
+-		psa_status = delegate_store->storage_backend->interface->get(
++		struct psa_storage_info_t storage_info;
++
++		psa_status = delegate_store->storage_backend->interface->get_info(
+ 			delegate_store->storage_backend->context,
+ 			context->owner_id,
+ 			info->metadata.uid,
+-			0,
+-			max_data_len,
+-			data,
+-			&data_len);
+-		EMSG("In func %s get status is %d\n", __func__, psa_status);
++			&storage_info);
++
++		if (psa_status == PSA_SUCCESS) {
+ 
+-		var->DataSize = data_len;
++			size_t get_limit = (var->DataSize < max_data_len) ?
++				var->DataSize :
++				max_data_len;
++
++			if (get_limit >= storage_info.size) {
++
++				size_t got_len = 0;
++
++				psa_status = delegate_store->storage_backend->interface->get(
++					delegate_store->storage_backend->context,
++					context->owner_id,
++					info->metadata.uid,
++					0,
++					max_data_len,
++					data,
++					&got_len);
++
++				var->DataSize = got_len;
++			}
++			else {
++
++				var->DataSize = storage_info.size;
++				psa_status = PSA_ERROR_BUFFER_TOO_SMALL;
++			}
++		}
+ 	}
+ 
+ 	return psa_to_efi_storage_status(psa_status);
+diff --git a/components/service/smm_variable/client/cpp/smm_variable_client.cpp b/components/service/smm_variable/client/cpp/smm_variable_client.cpp
+index 8438285b..b6b4ed90 100644
+--- a/components/service/smm_variable/client/cpp/smm_variable_client.cpp
++++ b/components/service/smm_variable/client/cpp/smm_variable_client.cpp
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  *
+  * SPDX-License-Identifier: BSD-3-Clause
+  */
+@@ -122,21 +122,22 @@ efi_status_t smm_variable_client::get_variable(
+ 		guid,
+ 		name,
+ 		data,
+-		0);
++		0,
++		MAX_VAR_DATA_SIZE);
+ }
+ 
+ efi_status_t smm_variable_client::get_variable(
+ 	const EFI_GUID &guid,
+ 	const std::wstring &name,
+ 	std::string &data,
+-	size_t override_name_size)
++	size_t override_name_size,
++	size_t max_data_size)
+ {
+ 	efi_status_t efi_status = EFI_NOT_READY;
+ 
+ 	std::vector<int16_t> var_name = to_variable_name(name);
+ 	size_t name_size = var_name.size() * sizeof(int16_t);
+-	size_t data_size = 0;
+-	size_t req_len = SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_SIZE(name_size, data_size);
++	size_t req_len = SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_SIZE(name_size, 0);
+ 
+ 	rpc_call_handle call_handle;
+ 	uint8_t *req_buf;
+@@ -154,7 +155,7 @@ efi_status_t smm_variable_client::get_variable(
+ 
+ 		access_var->Guid = guid;
+ 		access_var->NameSize = name_size;
+-		access_var->DataSize = data_size;
++		access_var->DataSize = max_data_size;
+ 
+ 		memcpy(access_var->Name, var_name.data(), name_size);
+ 
+@@ -168,26 +169,28 @@ efi_status_t smm_variable_client::get_variable(
+ 
+ 			efi_status = opstatus;
+ 
+-			if (efi_status == EFI_SUCCESS) {
+-
+-				efi_status = EFI_PROTOCOL_ERROR;
++			if (resp_len >= SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_NAME_OFFSET) {
+ 
+-				if (resp_len >= SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_NAME_OFFSET) {
++				access_var = (SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE*)resp_buf;
++				size_t data_size = access_var->DataSize;
+ 
+-					access_var = (SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE*)resp_buf;
++				if (resp_len >=
++					SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_TOTAL_SIZE(access_var)) {
+ 
+-					if (resp_len >=
+-						SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_TOTAL_SIZE(access_var)) {
++					if (efi_status == EFI_SUCCESS) {
+ 
+-						data_size = access_var->DataSize;
+ 						const char *data_start = (const char*)
+ 						&resp_buf[
+ 							SMM_VARIABLE_COMMUNICATE_ACCESS_VARIABLE_DATA_OFFSET(access_var)];
+ 
+ 						data.assign(data_start, data_size);
+-						efi_status = EFI_SUCCESS;
+ 					}
+ 				}
++				else if (efi_status == EFI_BUFFER_TOO_SMALL) {
++
++					data.clear();
++					data.insert(0, data_size, '!');
++				}
+ 			}
+ 		}
+ 		else {
+diff --git a/components/service/smm_variable/client/cpp/smm_variable_client.h b/components/service/smm_variable/client/cpp/smm_variable_client.h
+index c7973916..3d2371a8 100644
+--- a/components/service/smm_variable/client/cpp/smm_variable_client.h
++++ b/components/service/smm_variable/client/cpp/smm_variable_client.h
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  *
+  * SPDX-License-Identifier: BSD-3-Clause
+  */
+@@ -56,7 +56,8 @@ public:
+ 		const EFI_GUID &guid,
+ 		const std::wstring &name,
+ 		std::string &data,
+-		size_t override_name_size);
++		size_t override_name_size,
++		size_t max_data_size = MAX_VAR_DATA_SIZE);
+ 
+ 	/* Remove a variable */
+ 	efi_status_t remove_variable(
+@@ -113,6 +114,9 @@ public:
+ 
+ 
+ private:
++
++	static const size_t MAX_VAR_DATA_SIZE = 65536;
++
+ 	efi_status_t rpc_to_efi_status() const;
+ 
+ 	static std::vector<int16_t> to_variable_name(const std::wstring &string);
+diff --git a/components/service/smm_variable/provider/smm_variable_provider.c b/components/service/smm_variable/provider/smm_variable_provider.c
+index 1f362c17..95c4fdc9 100644
+--- a/components/service/smm_variable/provider/smm_variable_provider.c
++++ b/components/service/smm_variable/provider/smm_variable_provider.c
+@@ -165,7 +165,7 @@ static rpc_status_t get_variable_handler(void *context, struct call_req *req)
+ 		}
+ 		else {
+ 
+-			/* Reponse buffer not big enough */
++			/* Response buffer not big enough */
+ 			efi_status = EFI_BAD_BUFFER_SIZE;
+ 		}
+ 	}
+diff --git a/components/service/smm_variable/test/service/smm_variable_service_tests.cpp b/components/service/smm_variable/test/service/smm_variable_service_tests.cpp
+index 38c08ebe..989a3e63 100644
+--- a/components/service/smm_variable/test/service/smm_variable_service_tests.cpp
++++ b/components/service/smm_variable/test/service/smm_variable_service_tests.cpp
+@@ -284,6 +284,68 @@ TEST(SmmVariableServiceTests, setAndGetNv)
+ 	UNSIGNED_LONGLONGS_EQUAL(EFI_SUCCESS, efi_status);
+ }
+ 
++TEST(SmmVariableServiceTests, getVarSize)
++{
++	efi_status_t efi_status = EFI_SUCCESS;
++	std::wstring var_name = L"test_variable";
++	std::string set_data = "UEFI variable data string";
++	std::string get_data;
++
++	efi_status = m_client->set_variable(
++		m_common_guid,
++		var_name,
++		set_data,
++		0);
++
++	UNSIGNED_LONGLONGS_EQUAL(EFI_SUCCESS, efi_status);
++
++	/* Get with the data size set to zero. This is the standard way
++	 * to discover the variable size. */
++	efi_status = m_client->get_variable(
++		m_common_guid,
++		var_name,
++		get_data,
++		0, 0);
++
++	UNSIGNED_LONGLONGS_EQUAL(EFI_BUFFER_TOO_SMALL, efi_status);
++	UNSIGNED_LONGS_EQUAL(set_data.size(), get_data.size());
++
++	/* Expect remove to be permitted */
++	efi_status = m_client->remove_variable(m_common_guid, var_name);
++	UNSIGNED_LONGLONGS_EQUAL(EFI_SUCCESS, efi_status);
++}
++
++TEST(SmmVariableServiceTests, getVarSizeNv)
++{
++	efi_status_t efi_status = EFI_SUCCESS;
++	std::wstring var_name = L"test_variable";
++	std::string set_data = "UEFI variable data string";
++	std::string get_data;
++
++	efi_status = m_client->set_variable(
++		m_common_guid,
++		var_name,
++		set_data,
++		EFI_VARIABLE_NON_VOLATILE);
++
++	UNSIGNED_LONGLONGS_EQUAL(EFI_SUCCESS, efi_status);
++
++	/* Get with the data size set to zero. This is the standard way
++	 * to discover the variable size. */
++	efi_status = m_client->get_variable(
++		m_common_guid,
++		var_name,
++		get_data,
++		0, 0);
++
++	UNSIGNED_LONGLONGS_EQUAL(EFI_BUFFER_TOO_SMALL, efi_status);
++	UNSIGNED_LONGS_EQUAL(set_data.size(), get_data.size());
++
++	/* Expect remove to be permitted */
++	efi_status = m_client->remove_variable(m_common_guid, var_name);
++	UNSIGNED_LONGLONGS_EQUAL(EFI_SUCCESS, efi_status);
++}
++
+ TEST(SmmVariableServiceTests, enumerateStoreContents)
+ {
+ 	efi_status_t efi_status = EFI_SUCCESS;
+-- 
+2.17.1
+
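The point of the fix is to honour the caller-supplied DataSize and report the real size via EFI_BUFFER_TOO_SMALL, which enables the usual two-call pattern the new getVarSize tests exercise. Below is a self-contained sketch of that caller-side pattern; efi_get_variable() is a hypothetical stand-in with the same contract as the service under test, and the status values are illustrative.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef uint64_t efi_status_t;
#define EFI_SUCCESS           0
#define EFI_BUFFER_TOO_SMALL  5   /* illustrative value */

/* Stand-in for GetVariable: a fixed 15-byte "variable", enough to link. */
static const char stored[] = "quick brown fox";

static efi_status_t efi_get_variable(const char *name, void *data, size_t *size)
{
    (void)name;
    if (*size < sizeof(stored) - 1) {
        *size = sizeof(stored) - 1;       /* report the required size */
        return EFI_BUFFER_TOO_SMALL;
    }
    memcpy(data, stored, sizeof(stored) - 1);
    *size = sizeof(stored) - 1;
    return EFI_SUCCESS;
}

/* The two-call size-discovery pattern. */
static void *read_variable(const char *name, size_t *out_size)
{
    size_t size = 0;
    void *buf;

    /* 1st call: zero-length buffer, only to discover the size */
    if (efi_get_variable(name, NULL, &size) != EFI_BUFFER_TOO_SMALL)
        return NULL;

    buf = malloc(size);
    if (!buf)
        return NULL;

    /* 2nd call: buffer is now large enough */
    if (efi_get_variable(name, buf, &size) != EFI_SUCCESS) {
        free(buf);
        return NULL;
    }

    *out_size = size;
    return buf;
}

int main(void)
{
    size_t len = 0;
    char *val = read_variable("test_variable", &len);

    if (val) {
        printf("test_variable (%zu bytes)\n", len);   /* prints 15 bytes */
        free(val);
    }
    return 0;
}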
diff --git a/meta-arm/meta-arm-bsp/recipes-security/trusted-services/ts-corstone1000.inc b/meta-arm/meta-arm-bsp/recipes-security/trusted-services/ts-corstone1000.inc
index 88c46a7..b04863f 100644
--- a/meta-arm/meta-arm-bsp/recipes-security/trusted-services/ts-corstone1000.inc
+++ b/meta-arm/meta-arm-bsp/recipes-security/trusted-services/ts-corstone1000.inc
@@ -59,6 +59,7 @@
 		  file://0046-Fix-update-psa_set_key_usage_flags-definition-to-the.patch \
 		  file://0047-Fixes-in-AEAD-for-psa-arch-test-54-and-58.patch \
 		  file://0003-corstone1000-port-crypto-config.patch;patchdir=../psa-arch-tests \
+                  file://0048-Fix-UEFI-get_variable-with-small-buffer.patch \
                   "
 
 SRC_URI_MBEDTLS = "git://github.com/ARMmbed/mbedtls.git;protocol=https;branch=development;name=mbedtls;destsuffix=git/mbedtls"
diff --git a/meta-arm/meta-arm-bsp/wic/corstone1000-image.corstone1000.wks b/meta-arm/meta-arm-bsp/wic/corstone1000-image.corstone1000.wks
index c58d7d6..7625679 100644
--- a/meta-arm/meta-arm-bsp/wic/corstone1000-image.corstone1000.wks
+++ b/meta-arm/meta-arm-bsp/wic/corstone1000-image.corstone1000.wks
@@ -12,4 +12,4 @@
 part --source rawcopy --sourceparams="file=signed_fip-corstone1000.bin" --align 1 --no-table --fixed-size 2
 
 # Rawcopy of kernel with initramfs
-part --source rawcopy --sourceparams="file=Image-initramfs-${MACHINE}.bin" --no-table --fixed-size 12
+part --source rawcopy --sourceparams="file=Image.gz-initramfs-${MACHINE}.bin" --no-table --fixed-size 12
diff --git a/meta-arm/meta-arm/classes/fvpboot.bbclass b/meta-arm/meta-arm/classes/fvpboot.bbclass
index ec9d4f5..fbdfa96 100644
--- a/meta-arm/meta-arm/classes/fvpboot.bbclass
+++ b/meta-arm/meta-arm/classes/fvpboot.bbclass
@@ -26,7 +26,7 @@
 
 EXTRA_IMAGEDEPENDS += "${FVP_PROVIDER}"
 
-inherit image-artifact-names
+IMAGE_CLASSES += "image-artifact-names"
 
 IMAGE_POSTPROCESS_COMMAND += "do_write_fvpboot_conf;"
 python do_write_fvpboot_conf() {
diff --git a/meta-arm/meta-arm/conf/machine/qemuarm-secureboot.conf b/meta-arm/meta-arm/conf/machine/qemuarm-secureboot.conf
index a459f3f..e48d964 100644
--- a/meta-arm/meta-arm/conf/machine/qemuarm-secureboot.conf
+++ b/meta-arm/meta-arm/conf/machine/qemuarm-secureboot.conf
@@ -12,6 +12,7 @@
 QB_FSINFO = "wic:no-kernel-in-fs"
 QB_ROOTFS_OPT = ""
 QB_KERNEL_ROOT = "/dev/vda2"
+QB_KERNEL_CMDLINE_APPEND = ""
 
 IMAGE_FSTYPES += "wic wic.qcow2"
 
diff --git a/meta-arm/meta-arm/lib/oeqa/runtime/cases/linuxboot.py b/meta-arm/meta-arm/lib/oeqa/runtime/cases/linuxboot.py
index 8994405..99a8e78 100644
--- a/meta-arm/meta-arm/lib/oeqa/runtime/cases/linuxboot.py
+++ b/meta-arm/meta-arm/lib/oeqa/runtime/cases/linuxboot.py
@@ -12,7 +12,8 @@
 
     def setUp(self):
         self.console = self.target.DEFAULT_CONSOLE
+        self.timeout = int(self.td.get('TEST_FVP_LINUX_BOOT_TIMEOUT') or 10*60)
 
     def test_linux_boot(self):
         self.logger.info(f"{self.console}: Waiting for login prompt")
-        self.target.expect(self.console, r"login\:", timeout=10*60)
+        self.target.expect(self.console, r"login\:", self.timeout)
diff --git a/meta-arm/meta-arm/lib/oeqa/runtime/cases/trusted_services.py b/meta-arm/meta-arm/lib/oeqa/runtime/cases/trusted_services.py
new file mode 100644
index 0000000..a5f9376
--- /dev/null
+++ b/meta-arm/meta-arm/lib/oeqa/runtime/cases/trusted_services.py
@@ -0,0 +1,50 @@
+#
+
+from oeqa.runtime.case import OERuntimeTestCase
+from oeqa.core.decorator.depends import OETestDepends
+from oeqa.runtime.decorator.package import OEHasPackage
+
+class TrustedServicesTest(OERuntimeTestCase):
+
+    def run_test_tool(self, cmd, expected_status=0):
+        """ Run a test utility """
+
+        status, output = self.target.run(cmd)
+        self.assertEqual(status, expected_status, msg='\n'.join([cmd, output]))
+
+    @OEHasPackage(['ts-demo'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
+    def test_00_ts_demo(self):
+        self.run_test_tool('ts-demo')
+
+    @OEHasPackage(['ts-service-test'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
+    def test_01_ts_service_test(self):
+        self.run_test_tool('ts-service-test')
+
+    @OEHasPackage(['ts-uefi-test'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
+    def test_02_ts_uefi_test(self):
+        self.run_test_tool('uefi-test')
+
+    @OEHasPackage(['ts-psa-crypto-api-test'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
+    def test_03_psa_crypto_api_test(self):
+        # There are a few expected PSA Crypto tests failing
+        self.run_test_tool('psa-crypto-api-test', expected_status=46)
+
+    @OEHasPackage(['ts-psa-its-api-test'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
+    def test_04_psa_its_api_test(self):
+        self.run_test_tool('psa-its-api-test')
+
+    @OEHasPackage(['ts-psa-ps-api-test'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
+    def test_05_psa_ps_api_test(self):
+        # There are a few expected PSA Storage tests failing
+        self.run_test_tool('psa-ps-api-test', expected_status=46)
+
+    @OEHasPackage(['ts-psa-iat-api-test'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
+    def test_06_psa_iat_api_test(self):
+        self.run_test_tool('psa-iat-api-test')
diff --git a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/build-deps-upgrade-to-mbed-TLS-2.28.0.patch b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/build-deps-upgrade-to-mbed-TLS-2.28.0.patch
deleted file mode 100644
index 058423c..0000000
--- a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/build-deps-upgrade-to-mbed-TLS-2.28.0.patch
+++ /dev/null
@@ -1,72 +0,0 @@
-Upstream-Status: Backport
-Signed-off-by: Emekcan Aras <emekcan.aras@arm.com>
-
-From a93084be95634b66b917f1c8baf403067dc75c5d Mon Sep 17 00:00:00 2001
-From: Sandrine Bailleux <sandrine.bailleux@arm.com>
-Date: Thu, 21 Apr 2022 10:21:29 +0200
-Subject: [PATCH] build(deps): upgrade to mbed TLS 2.28.0
-
-Upgrade to the latest and greatest 2.x release of Mbed TLS library
-(i.e. v2.28.0) to take advantage of their bug fixes.
-
-Note that the Mbed TLS project published version 3.x some time
-ago. However, as this is a major release with API breakages, upgrading
-to 3.x might require some more involved changes in TF-A, which we are
-not ready to do. We shall upgrade to mbed TLS 3.x after the v2.7
-release of TF-A.
-
-Actually, the upgrade this time simply boils down to including the new
-source code module 'constant_time.c' into the firmware.
-
-To quote mbed TLS v2.28.0 release notes [1]:
-
-  The mbedcrypto library includes a new source code module
-  constant_time.c, containing various functions meant to resist timing
-  side channel attacks. This module does not have a separate
-  configuration option, and functions from this module will be
-  included in the build as required.
-
-As a matter of fact, if one is attempting to link TF-A against mbed
-TLS v2.28.0 without the present patch, one gets some linker errors
-due to missing symbols from this new module.
-
-Apart from this, none of the items listed in mbed TLS release
-notes [1] directly affect TF-A. Special note on the following one:
-
-  Fix a bug in mbedtls_gcm_starts() when the bit length of the iv
-  exceeds 2^32.
-
-In TF-A, we do use mbedtls_gcm_starts() when the firmware decryption
-feature is enabled with AES-GCM as the authenticated decryption
-algorithm (DECRYPTION_SUPPORT=aes_gcm). However, the iv_len variable
-which gets passed to mbedtls_gcm_starts() is an unsigned int, i.e. a
-32-bit value which by definition is always less than 2**32. Therefore,
-we are immune to this bug.
-
-With this upgrade, the size of BL1 and BL2 binaries does not appear to
-change on a standard sample test build (with trusted boot and measured
-boot enabled).
-
-[1] https://github.com/Mbed-TLS/mbedtls/releases/tag/v2.28.0
-
-Change-Id: Icd5dbf527395e9e22c8fd6b77427188bd7237fd6
-Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
----
- drivers/auth/mbedtls/mbedtls_common.mk | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/auth/mbedtls/mbedtls_common.mk b/drivers/auth/mbedtls/mbedtls_common.mk
-index 0a4775d00..3eb41617f 100644
---- a/drivers/auth/mbedtls/mbedtls_common.mk
-+++ b/drivers/auth/mbedtls/mbedtls_common.mk
-@@ -48,6 +48,7 @@ LIBMBEDTLS_SRCS		:= $(addprefix ${MBEDTLS_DIR}/library/,	\
- 					rsa_internal.c				\
- 					x509.c 					\
- 					x509_crt.c 				\
-+					constant_time.c 			\
- 					)
- 
- # The platform may define the variable 'TF_MBEDTLS_KEY_ALG' to select the key
--- 
-2.25.1
-
diff --git a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/rwx-segments.patch b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/rwx-segments.patch
new file mode 100644
index 0000000..a4518ec
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/rwx-segments.patch
@@ -0,0 +1,38 @@
+Binutils 2.39 now warns when a segment has RXW permissions[1]:
+
+aarch64-none-elf-ld.bfd: warning: bl31.elf has a LOAD segment with RWX
+permissions
+
+However, TF-A passes --fatal-warnings to LD, so this is a build failure.
+
+There is a ticket filed upstream[2], so until that is resolved just
+remove --fatal-warnings.
+
+[1] https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=ba951afb99912da01a6e8434126b8fac7aa75107
+[2] https://developer.trustedfirmware.org/T996
+
+Upstream-Status: Inappropriate
+Signed-off-by: Ross Burton <ross.burton@arm.com>
+
+diff --git a/Makefile b/Makefile
+index 3941f8698..13bbac348 100644
+--- a/Makefile
++++ b/Makefile
+@@ -418,7 +418,7 @@ TF_LDFLAGS		+=	$(TF_LDFLAGS_$(ARCH))
+ # LD = gcc (used when GCC LTO is enabled)
+ else ifneq ($(findstring gcc,$(notdir $(LD))),)
+ # Pass ld options with Wl or Xlinker switches
+-TF_LDFLAGS		+=	-Wl,--fatal-warnings -O1
++TF_LDFLAGS		+=	-O1
+ TF_LDFLAGS		+=	-Wl,--gc-sections
+ ifeq ($(ENABLE_LTO),1)
+ 	ifeq (${ARCH},aarch64)
+@@ -435,7 +435,7 @@ TF_LDFLAGS		+=	$(subst --,-Xlinker --,$(TF_LDFLAGS_$(ARCH)))
+ 
+ # LD = gcc-ld (ld) or llvm-ld (ld.lld) or other
+ else
+-TF_LDFLAGS		+=	--fatal-warnings -O1
++TF_LDFLAGS		+=	-O1
+ TF_LDFLAGS		+=	--gc-sections
+ # ld.lld doesn't recognize the errata flags,
+ # therefore don't add those in that case
diff --git a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/ssl.patch b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/ssl.patch
deleted file mode 100644
index cdabd1b..0000000
--- a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/ssl.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-fiptool: respect OPENSSL_DIR
-
-fiptool links to libcrypto, so as with the other tools it should respect
-OPENSSL_DIR for include/library paths.
-
-Upstream-Status: Submitted
-Signed-off-by: Ross Burton <ross.burton@arm.com>
-
-diff --git a/Makefile b/Makefile
-index ec6f88585..2d3b9fc26 100644
---- a/Makefile
-+++ b/Makefile
-@@ -1388,7 +1388,7 @@ fwu_fip: ${BUILD_PLAT}/${FWU_FIP_NAME}
- 
- ${FIPTOOL}: FORCE
- ifdef UNIX_MK
--	${Q}${MAKE} CPPFLAGS="-DVERSION='\"${VERSION_STRING}\"'" FIPTOOL=${FIPTOOL} --no-print-directory -C ${FIPTOOLPATH}
-+	${Q}${MAKE} CPPFLAGS="-DVERSION='\"${VERSION_STRING}\"'" FIPTOOL=${FIPTOOL} OPENSSL_DIR=${OPENSSL_DIR} --no-print-directory -C ${FIPTOOLPATH}
- else
- # Clear the MAKEFLAGS as we do not want
- # to pass the gnumake flags to nmake.
-diff --git a/tools/fiptool/Makefile b/tools/fiptool/Makefile
-index 11d2e7b0b..7c2a08379 100644
---- a/tools/fiptool/Makefile
-+++ b/tools/fiptool/Makefile
-@@ -12,6 +12,8 @@ FIPTOOL ?= fiptool${BIN_EXT}
- PROJECT := $(notdir ${FIPTOOL})
- OBJECTS := fiptool.o tbbr_config.o
- V ?= 0
-+OPENSSL_DIR := /usr
-+
- 
- override CPPFLAGS += -D_GNU_SOURCE -D_XOPEN_SOURCE=700
- HOSTCCFLAGS := -Wall -Werror -pedantic -std=c99
-@@ -20,7 +22,7 @@ ifeq (${DEBUG},1)
- else
-   HOSTCCFLAGS += -O2
- endif
--LDLIBS := -lcrypto
-+LDLIBS := -L${OPENSSL_DIR}/lib -lcrypto
- 
- ifeq (${V},0)
-   Q := @
-@@ -28,7 +30,7 @@ else
-   Q :=
- endif
- 
--INCLUDE_PATHS := -I../../include/tools_share
-+INCLUDE_PATHS := -I../../include/tools_share  -I${OPENSSL_DIR}/include
- 
- HOSTCC ?= gcc
- 
diff --git a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/tf-a-tests-no-warn-rwx-segments.patch b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/tf-a-tests-no-warn-rwx-segments.patch
new file mode 100644
index 0000000..5d02e35
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/files/tf-a-tests-no-warn-rwx-segments.patch
@@ -0,0 +1,26 @@
+Binutils 2.39 now warns when a segment has RXW permissions[1]:
+
+aarch64-poky-linux-musl-ld: tftf.elf has a LOAD segment with RWX permissions
+
+There is a ticket filed upstream[2], so until that is resolved just
+disable the warning
+
+[1] https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=ba951afb99912da01a6e8434126b8fac7aa75107
+[2] https://developer.trustedfirmware.org/T996
+
+Upstream-Status: Inappropriate
+Signed-off-by: Anton Antonov <anton.antonov@arm.com>
+
+diff --git a/Makefile b/Makefile
+index 6d0774e1..be3f84ce 100644
+--- a/Makefile
++++ b/Makefile
+@@ -238,7 +238,7 @@ TFTF_SOURCES		:= ${FRAMEWORK_SOURCES}	${TESTS_SOURCES} ${PLAT_SOURCES} ${LIBC_SR
+ TFTF_INCLUDES		+= ${PLAT_INCLUDES}
+ TFTF_CFLAGS		+= ${COMMON_CFLAGS}
+ TFTF_ASFLAGS		+= ${COMMON_ASFLAGS}
+-TFTF_LDFLAGS		+= ${COMMON_LDFLAGS}
++TFTF_LDFLAGS		+= ${COMMON_LDFLAGS} --no-warn-rwx-segments
+ TFTF_EXTRA_OBJS 	:=
+ 
+ ifneq (${BP_OPTION},none)
diff --git a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/tf-a-tests_2.7.0.bb b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/tf-a-tests_2.7.0.bb
index 0c06440..645b245 100644
--- a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/tf-a-tests_2.7.0.bb
+++ b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/tf-a-tests_2.7.0.bb
@@ -7,7 +7,8 @@
 
 COMPATIBLE_MACHINE ?= "invalid"
 
-SRC_URI = "git://git.trustedfirmware.org/TF-A/tf-a-tests.git;protocol=https;branch=master"
+SRC_URI = "git://git.trustedfirmware.org/TF-A/tf-a-tests.git;protocol=https;branch=master \
+          file://tf-a-tests-no-warn-rwx-segments.patch"
 SRCREV ?= "5f591f67738a1bbe6b262c53d9dad46ed8bbcd67"
 
 DEPENDS += "optee-os"
diff --git a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/trusted-firmware-a_%.bbappend b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/trusted-firmware-a_%.bbappend
index 8815510..6cf55d6 100644
--- a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/trusted-firmware-a_%.bbappend
+++ b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/trusted-firmware-a_%.bbappend
@@ -10,7 +10,14 @@
 TFA_PLATFORM:qemu-generic-arm64 = "qemu_sbsa"
 TFA_PLATFORM:qemuarm-secureboot = "qemu"
 
-TFA_SPD:qemuarm64-secureboot = "opteed"
+# Trusted Services secure partitions require arm-ffa machine feature.
+# Enabling Secure-EL1 Payload Dispatcher (SPD) in this case
+TFA_SPD:qemuarm64-secureboot = "${@bb.utils.contains('MACHINE_FEATURES', 'arm-ffa', 'spmd', 'opteed', d)}"
+# Configure tf-a accordingly to TS requirements if included
+EXTRA_OEMAKE:append:qemuarm64-secureboot = "${@bb.utils.contains('MACHINE_FEATURES', 'arm-ffa', ' CTX_INCLUDE_EL2_REGS=0 SPMC_OPTEE=1 ', '' , d)}"
+# Cortex-A57 supports Armv8.0 (no S-EL2 execution state).
+# The SPD SPMC component should run at the S-EL1 execution state.
+TFA_SPMD_SPM_AT_SEL2:qemuarm64-secureboot = "0"
 
 TFA_UBOOT:qemuarm64-secureboot = "1"
 TFA_UBOOT:qemuarm-secureboot = "1"
diff --git a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/trusted-firmware-a_2.7.0.bb b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/trusted-firmware-a_2.7.0.bb
index 537ec32..35817c0 100644
--- a/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/trusted-firmware-a_2.7.0.bb
+++ b/meta-arm/meta-arm/recipes-bsp/trusted-firmware-a/trusted-firmware-a_2.7.0.bb
@@ -3,6 +3,8 @@
 # TF-A v2.7
 SRCREV_tfa = "35f4c7295bafeb32c8bcbdfb6a3f2e74a57e732b"
 
+SRC_URI += "file://rwx-segments.patch"
+
 LIC_FILES_CHKSUM += "file://docs/license.rst;md5=b2c740efedc159745b9b31f88ff03dde"
 
 # mbed TLS v2.28.0
diff --git a/meta-arm/meta-arm/recipes-devtools/gator-daemon/gator-daemon/0001-daemon-mxml-Define-_GNU_SOURCE.patch b/meta-arm/meta-arm/recipes-devtools/gator-daemon/gator-daemon/0001-daemon-mxml-Define-_GNU_SOURCE.patch
new file mode 100644
index 0000000..d246043
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-devtools/gator-daemon/gator-daemon/0001-daemon-mxml-Define-_GNU_SOURCE.patch
@@ -0,0 +1,31 @@
+From 04e2e924c3ab8da41343277746804dbcd7bf520d Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 13 Aug 2022 16:49:52 -0700
+Subject: [PATCH] daemon/mxml: Define _GNU_SOURCE
+
+This file uses vasprintf(), which is declared only when the _GNU_SOURCE
+feature macro is defined.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ daemon/mxml/mxml-string.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/daemon/mxml/mxml-string.c b/daemon/mxml/mxml-string.c
+index 678aeb9..c9cd153 100644
+--- a/daemon/mxml/mxml-string.c
++++ b/daemon/mxml/mxml-string.c
+@@ -13,6 +13,8 @@
+  * Include necessary headers...
+  */
+ 
++#define _GNU_SOURCE
++
+ #include "config.h"
+ 
+ 
+-- 
+2.37.2
+
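As background to the one-line fix above: vasprintf() is a GNU extension, so <stdio.h> only declares it when _GNU_SOURCE is defined before the first include. A minimal standalone example of that call pattern is shown below; it is illustrative and not taken from the gator sources.

#define _GNU_SOURCE        /* must come before any #include */
#include <stdio.h>
#include <stdarg.h>
#include <stdlib.h>

static char *fmt_alloc(const char *fmt, ...)
{
    char *out = NULL;
    va_list ap;

    va_start(ap, fmt);
    if (vasprintf(&out, fmt, ap) < 0)   /* allocates 'out' on success */
        out = NULL;
    va_end(ap);
    return out;
}

int main(void)
{
    char *s = fmt_alloc("gator %s %d", "daemon", 7);

    if (s) {
        puts(s);     /* prints: gator daemon 7 */
        free(s);
    }
    return 0;
}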
diff --git a/meta-arm/meta-arm/recipes-devtools/gator-daemon/gator-daemon_7.7.0.bb b/meta-arm/meta-arm/recipes-devtools/gator-daemon/gator-daemon_7.7.0.bb
index 492d321..07b8abb 100644
--- a/meta-arm/meta-arm/recipes-devtools/gator-daemon/gator-daemon_7.7.0.bb
+++ b/meta-arm/meta-arm/recipes-devtools/gator-daemon/gator-daemon_7.7.0.bb
@@ -18,6 +18,7 @@
 SRCREV = "9d8d75fa08352470c51abc23fe3b314879bd8b78"
 SRC_URI = "git://github.com/ARM-software/gator.git;protocol=http;branch=main;protocol=https \
            file://0001-Sources.mk-Remove-xml-PmuXMLParser.h-header-from-GAT.patch;striplevel=2 \
+           file://0001-daemon-mxml-Define-_GNU_SOURCE.patch;striplevel=2 \
           "
 
 S = "${WORKDIR}/git/daemon"
diff --git a/meta-arm/meta-arm/recipes-kernel/arm-ffa-tee/arm-ffa-tee_1.1.1.bb b/meta-arm/meta-arm/recipes-kernel/arm-ffa-tee/arm-ffa-tee_1.1.1.bb
new file mode 100644
index 0000000..9e997de
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/arm-ffa-tee/arm-ffa-tee_1.1.1.bb
@@ -0,0 +1,22 @@
+SUMMARY = "A Linux kernel module providing user space access to Trusted Services"
+DESCRIPTION = "${SUMMARY}"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=05e355bbd617507216a836c56cf24983"
+
+inherit module
+
+SRC_URI = "git://gitlab.arm.com/linux-arm/linux-trusted-services;protocol=https;branch=main \
+           file://Makefile;subdir=git \
+          "
+S = "${WORKDIR}/git"
+
+# Tag tee-v1.1
+SRCREV = "3b543b7591505b715f332c972248a3ea41604d83"
+
+COMPATIBLE_HOST = "(arm|aarch64).*-linux"
+KERNEL_MODULE_AUTOLOAD += "arm-ffa-tee"
+
+do_install:append() {
+    install -d ${D}${includedir}
+    install -m 0644 ${S}/uapi/arm_ffa_tee.h ${D}${includedir}/
+}
diff --git a/meta-arm/meta-arm/recipes-kernel/arm-ffa-tee/files/Makefile b/meta-arm/meta-arm/recipes-kernel/arm-ffa-tee/files/Makefile
new file mode 100644
index 0000000..40a6e47
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/arm-ffa-tee/files/Makefile
@@ -0,0 +1,14 @@
+obj-m := arm-ffa-tee.o
+
+SRC := $(shell pwd)
+
+all:
+	$(MAKE) -C $(KERNEL_SRC) M=$(SRC)
+
+modules_install:
+	$(MAKE) -C $(KERNEL_SRC) M=$(SRC) modules_install
+
+clean:
+	rm -f *.o *~ core .depend .*.cmd *.ko *.mod.c
+	rm -f Module.markers Module.symvers modules.order
+	rm -rf .tmp_versions Modules.symvers
diff --git a/meta-arm/meta-arm/recipes-kernel/arm-ffa-user/arm-ffa-user_5.0.0.bb b/meta-arm/meta-arm/recipes-kernel/arm-ffa-user/arm-ffa-user_5.0.0.bb
new file mode 100644
index 0000000..8d86197
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/arm-ffa-user/arm-ffa-user_5.0.0.bb
@@ -0,0 +1,29 @@
+SUMMARY = "FF-A Debugfs Linux kernel module"
+DESCRIPTION = "This out-of-tree kernel module exposes FF-A operations to user space, \
+used for development purposes"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=05e355bbd617507216a836c56cf24983"
+
+inherit module
+
+SRC_URI = "git://gitlab.arm.com/linux-arm/linux-trusted-services;protocol=https;branch=debugfs \
+           file://Makefile;subdir=git \
+          "
+S = "${WORKDIR}/git"
+
+# Tag 5.0.0.
+SRCREV = "6ec4196a59db8204ed670ef3b78f24a8234b85a6"
+
+COMPATIBLE_HOST = "(arm|aarch64).*-linux"
+KERNEL_MODULE_AUTOLOAD += "arm-ffa-user"
+KERNEL_MODULE_PROBECONF += "arm-ffa-user"
+
+# This debugfs driver is used only by uefi-test for testing SmmGW SP
+# UUIDs = SMM Gateway SP
+FFA-USER-UUID-LIST ?= "ed32d533-99e6-4209-9cc0-2d72cdd998a7"
+module_conf_arm-ffa-user = "options arm-ffa-user uuid_str_list=${FFA-USER-UUID-LIST}"
+
+do_install:append() {
+    install -d ${D}${includedir}
+    install -m 0644 ${S}/arm_ffa_user.h ${D}${includedir}/
+}
diff --git a/meta-arm/meta-arm/recipes-kernel/arm-ffa-user/files/Makefile b/meta-arm/meta-arm/recipes-kernel/arm-ffa-user/files/Makefile
new file mode 100644
index 0000000..c54d1fc
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/arm-ffa-user/files/Makefile
@@ -0,0 +1,14 @@
+obj-m := arm-ffa-user.o
+
+SRC := $(shell pwd)
+
+all:
+	$(MAKE) -C $(KERNEL_SRC) M=$(SRC)
+
+modules_install:
+	$(MAKE) -C $(KERNEL_SRC) M=$(SRC) modules_install
+
+clean:
+	rm -f *.o *~ core .depend .*.cmd *.ko *.mod.c
+	rm -f Module.markers Module.symvers modules.order
+	rm -rf .tmp_versions Modules.symvers
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/arm-ffa-5.15.inc b/meta-arm/meta-arm/recipes-kernel/linux/arm-ffa-5.15.inc
new file mode 100644
index 0000000..bc66efb
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/linux/arm-ffa-5.15.inc
@@ -0,0 +1,5 @@
+# Include a backported kernel patch for the TEE driver
+
+SRC_URI:append = " \
+    file://Add-sec_world_id-to-struct-tee_shm.patch \
+    "
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/arm-ffa-transport.inc b/meta-arm/meta-arm/recipes-kernel/linux/arm-ffa-transport.inc
new file mode 100644
index 0000000..dec31dd
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/linux/arm-ffa-transport.inc
@@ -0,0 +1,6 @@
+FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
+
+# Enable ARM-FFA transport
+SRC_URI:append = " \
+    file://arm-ffa-transport.cfg \
+    "
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto-5.15/Add-sec_world_id-to-struct-tee_shm.patch b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto-5.15/Add-sec_world_id-to-struct-tee_shm.patch
new file mode 100644
index 0000000..8f54b30
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto-5.15/Add-sec_world_id-to-struct-tee_shm.patch
@@ -0,0 +1,44 @@
+From 9028b2463c1ea96f51c3ba53e2479346019ff6ad Mon Sep 17 00:00:00 2001
+From: Jens Wiklander <jens.wiklander@linaro.org>
+Date: Thu, 25 Mar 2021 15:08:44 +0100
+Subject: [PATCH] tee: add sec_world_id to struct tee_shm
+
+Adds sec_world_id to struct tee_shm which describes a shared memory
+object. sec_world_id can be used by a driver to store an id assigned by
+secure world.
+
+Reviewed-by: Sumit Garg <sumit.garg@linaro.org>
+Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
+
+Upstream-Status: Submitted [https://github.com/torvalds/linux/commit/9028b2463c1ea96f51c3ba53e2479346019ff6ad]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ include/linux/tee_drv.h | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
+index 3ebfea0781f100..a1f03461369bd9 100644
+--- a/include/linux/tee_drv.h
++++ b/include/linux/tee_drv.h
+@@ -197,7 +197,11 @@ int tee_session_calc_client_uuid(uuid_t *uuid, u32 connection_method,
+  * @num_pages:	number of locked pages
+  * @dmabuf:	dmabuf used to for exporting to user space
+  * @flags:	defined by TEE_SHM_* in tee_drv.h
+- * @id:		unique id of a shared memory object on this device
++ * @id:		unique id of a shared memory object on this device, shared
++ *		with user space
++ * @sec_world_id:
++ *		secure world assigned id of this shared memory object, not
++ *		used by all drivers
+  *
+  * This pool is only supposed to be accessed directly from the TEE
+  * subsystem and from drivers that implements their own shm pool manager.
+@@ -213,6 +217,7 @@ struct tee_shm {
+ 	struct dma_buf *dmabuf;
+ 	u32 flags;
+ 	int id;
++	u64 sec_world_id;
+ };
+ 
+ /**
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto-5.15/skip-unavailable-memory.patch b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto-5.15/skip-unavailable-memory.patch
new file mode 100644
index 0000000..d157ef7
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto-5.15/skip-unavailable-memory.patch
@@ -0,0 +1,85 @@
+From 7bfeda1c9224270af97adf799ce0b5a4292bceb6 Mon Sep 17 00:00:00 2001
+From: Andre Przywara <andre.przywara@arm.com>
+Date: Tue, 17 May 2022 11:14:10 +0100
+Subject: [PATCH] of/fdt: Ignore disabled memory nodes
+
+When we boot a machine using a devicetree, the generic DT code goes
+through all nodes with a 'device_type = "memory"' property, and collects
+all memory banks mentioned there. However it does not check for the
+status property, so any nodes which are explicitly "disabled" will still
+be added as a memblock.
+This ends up badly for QEMU, when booting with secure firmware on
+arm/arm64 machines, because QEMU adds a node describing secure-only
+memory:
+===================
+	secram@e000000 {
+		secure-status = "okay";
+		status = "disabled";
+		reg = <0x00 0xe000000 0x00 0x1000000>;
+		device_type = "memory";
+	};
+===================
+
+The kernel will eventually use that memory block (which is located below
+the main DRAM bank), but accesses to that will be answered with an
+SError:
+===================
+[    0.000000] Internal error: synchronous external abort: 96000050 [#1] PREEMPT SMP
+[    0.000000] Modules linked in:
+[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.18.0-rc6-00014-g10c8acb8b679 #524
+[    0.000000] Hardware name: linux,dummy-virt (DT)
+[    0.000000] pstate: 200000c5 (nzCv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
+[    0.000000] pc : new_slab+0x190/0x340
+[    0.000000] lr : new_slab+0x184/0x340
+[    0.000000] sp : ffff80000a4b3d10
+....
+==================
+The actual crash location and call stack will be somewhat random, and
+depend on the specific allocation of that physical memory range.
+
+As the DT spec[1] explicitly mentions standard properties, add a simple
+check to skip over disabled memory nodes, so that we only use memory
+that is meant for non-secure code to use.
+
+That fixes booting a QEMU arm64 VM with EL3 enabled ("secure=on"), when
+not using UEFI. In this case the QEMU generated DT will be handed on
+to the kernel, which will see the secram node.
+This issue is reproducible when using TF-A together with U-Boot as
+firmware, then booting with the "booti" command.
+
+When using U-Boot as an UEFI provider, the code there [2] explicitly
+filters for disabled nodes when generating the UEFI memory map, so we
+are safe.
+EDK/2 only reads the first bank of the first DT memory node [3] to learn
+about memory, so we got lucky there.
+
+[1] https://github.com/devicetree-org/devicetree-specification/blob/main/source/chapter3-devicenodes.rst#memory-node (after the table)
+[2] https://source.denx.de/u-boot/u-boot/-/blob/master/lib/fdtdec.c#L1061-1063
+[3] https://github.com/tianocore/edk2/blob/master/ArmVirtPkg/PrePi/FdtParser.c
+
+Reported-by: Ross Burton <ross.burton@arm.com>
+Signed-off-by: Andre Przywara <andre.przywara@arm.com>
+
+Upstream-Status: Submitted [https://lore.kernel.org/linux-arm-kernel/20220517101410.3493781-1-andre.przywara@arm.com/T/#u]
+Signed-off-by: Ross Burton <ross.burton@arm.com>
+
+---
+ drivers/of/fdt.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 59a7a9ee58ef..5439c899fe04 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -1102,6 +1102,9 @@ int __init early_init_dt_scan_memory(unsigned long node, const char *uname,
+ 	if (type == NULL || strcmp(type, "memory") != 0)
+ 		return 0;
+ 
++	if (!of_fdt_device_is_available(initial_boot_params, node))
++		return 0;
++
+ 	reg = of_get_flat_dt_prop(node, "linux,usable-memory", &l);
+ 	if (reg == NULL)
+ 		reg = of_get_flat_dt_prop(node, "reg", &l);
+-- 
+2.25.1
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/arm-ffa-transport.cfg b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/arm-ffa-transport.cfg
new file mode 100644
index 0000000..34de78e
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/arm-ffa-transport.cfg
@@ -0,0 +1 @@
+CONFIG_ARM_FFA_TRANSPORT=y
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/generic-arm64-kmeta/generic-arm64-standard.scc b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/generic-arm64-kmeta/generic-arm64-standard.scc
index 21bff02..7036476 100644
--- a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/generic-arm64-kmeta/generic-arm64-standard.scc
+++ b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/generic-arm64-kmeta/generic-arm64-standard.scc
@@ -3,3 +3,4 @@
 define KARCH arm64
 
 include ktypes/standard/standard.scc
+include features/bluetooth/bluetooth.scc
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/no-strict-devmem.cfg b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/no-strict-devmem.cfg
new file mode 100644
index 0000000..d372aca
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/no-strict-devmem.cfg
@@ -0,0 +1 @@
+CONFIG_STRICT_DEVMEM=n
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/skip-unavailable-memory.patch b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/skip-unavailable-memory.patch
deleted file mode 100644
index c5dac6a..0000000
--- a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/skip-unavailable-memory.patch
+++ /dev/null
@@ -1,86 +0,0 @@
-Backported to 5.15.
-
-Upstream-Status: Submitted [https://lore.kernel.org/linux-arm-kernel/20220517101410.3493781-1-andre.przywara@arm.com/T/#u]
-Signed-off-by: Ross Burton <ross.burton@arm.com>
-
-From 7bfeda1c9224270af97adf799ce0b5a4292bceb6 Mon Sep 17 00:00:00 2001
-From: Andre Przywara <andre.przywara@arm.com>
-Date: Tue, 17 May 2022 11:14:10 +0100
-Subject: [PATCH] of/fdt: Ignore disabled memory nodes
-
-When we boot a machine using a devicetree, the generic DT code goes
-through all nodes with a 'device_type = "memory"' property, and collects
-all memory banks mentioned there. However it does not check for the
-status property, so any nodes which are explicitly "disabled" will still
-be added as a memblock.
-This ends up badly for QEMU, when booting with secure firmware on
-arm/arm64 machines, because QEMU adds a node describing secure-only
-memory:
-===================
-	secram@e000000 {
-		secure-status = "okay";
-		status = "disabled";
-		reg = <0x00 0xe000000 0x00 0x1000000>;
-		device_type = "memory";
-	};
-===================
-
-The kernel will eventually use that memory block (which is located below
-the main DRAM bank), but accesses to that will be answered with an
-SError:
-===================
-[    0.000000] Internal error: synchronous external abort: 96000050 [#1] PREEMPT SMP
-[    0.000000] Modules linked in:
-[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.18.0-rc6-00014-g10c8acb8b679 #524
-[    0.000000] Hardware name: linux,dummy-virt (DT)
-[    0.000000] pstate: 200000c5 (nzCv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
-[    0.000000] pc : new_slab+0x190/0x340
-[    0.000000] lr : new_slab+0x184/0x340
-[    0.000000] sp : ffff80000a4b3d10
-....
-==================
-The actual crash location and call stack will be somewhat random, and
-depend on the specific allocation of that physical memory range.
-
-As the DT spec[1] explicitly mentions standard properties, add a simple
-check to skip over disabled memory nodes, so that we only use memory
-that is meant for non-secure code to use.
-
-That fixes booting a QEMU arm64 VM with EL3 enabled ("secure=on"), when
-not using UEFI. In this case the QEMU generated DT will be handed on
-to the kernel, which will see the secram node.
-This issue is reproducible when using TF-A together with U-Boot as
-firmware, then booting with the "booti" command.
-
-When using U-Boot as an UEFI provider, the code there [2] explicitly
-filters for disabled nodes when generating the UEFI memory map, so we
-are safe.
-EDK/2 only reads the first bank of the first DT memory node [3] to learn
-about memory, so we got lucky there.
-
-[1] https://github.com/devicetree-org/devicetree-specification/blob/main/source/chapter3-devicenodes.rst#memory-node (after the table)
-[2] https://source.denx.de/u-boot/u-boot/-/blob/master/lib/fdtdec.c#L1061-1063
-[3] https://github.com/tianocore/edk2/blob/master/ArmVirtPkg/PrePi/FdtParser.c
-
-Reported-by: Ross Burton <ross.burton@arm.com>
-Signed-off-by: Andre Przywara <andre.przywara@arm.com>
----
- drivers/of/fdt.c | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
-index 59a7a9ee58ef..5439c899fe04 100644
---- a/drivers/of/fdt.c
-+++ b/drivers/of/fdt.c
-@@ -1102,6 +1102,9 @@ int __init early_init_dt_scan_memory(unsigned long node, const char *uname,
- 	if (type == NULL || strcmp(type, "memory") != 0)
- 		return 0;
- 
-+	if (!of_fdt_device_is_available(initial_boot_params, node))
-+		return 0;
-+
- 	reg = of_get_flat_dt_prop(node, "linux,usable-memory", &l);
- 	if (reg == NULL)
- 		reg = of_get_flat_dt_prop(node, "reg", &l);
--- 
-2.25.1
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/tee.cfg b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/tee.cfg
index 9a25cd5..53c452d 100644
--- a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/tee.cfg
+++ b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto/tee.cfg
@@ -5,7 +5,6 @@
 # TEE drivers
 #
 CONFIG_OPTEE=y
-CONFIG_OPTEE_SHM_NUM_PRIV_PAGES=1
 # end of TEE drivers
 
 CONFIG_TCG_TPM=y
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto_%.bbappend b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto_%.bbappend
index 1d01daa..896add8 100644
--- a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto_%.bbappend
+++ b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto_%.bbappend
@@ -8,10 +8,14 @@
 
 FILESEXTRAPATHS:prepend:qemuarm64-secureboot = "${ARMFILESPATHS}"
 SRC_URI:append:qemuarm64-secureboot = " \
-    file://skip-unavailable-memory.patch \
     file://tee.cfg \
     "
 
+# for Trusted Services uefi-test tool if SMM-Gateway is included
+SRC_URI:append:qemuarm64-secureboot = "\
+    ${@bb.utils.contains('MACHINE_FEATURES', 'ts-smm-gateway', 'file://no-strict-devmem.cfg', '' , d)} \
+    "
+
 FILESEXTRAPATHS:prepend:qemuarm-secureboot = "${ARMFILESPATHS}"
 SRC_URI:append:qemuarm-secureboot = " \
     file://tee.cfg \
@@ -22,3 +26,6 @@
 
 FILESEXTRAPATHS:prepend:qemuarm = "${ARMFILESPATHS}"
 SRC_URI:append:qemuarm = " file://efi.cfg"
+
+FFA_TRANSPORT_INCLUDE = "${@bb.utils.contains('MACHINE_FEATURES', 'arm-ffa', 'arm-ffa-transport.inc', '' , d)}"
+require ${FFA_TRANSPORT_INCLUDE}
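The conditional require only takes effect when the machine feature is present; a hedged local.conf (or machine .conf) sketch:

    MACHINE_FEATURES:append = " arm-ffa ts-smm-gateway"
    # 'arm-ffa'        -> require arm-ffa-transport.inc (adds CONFIG_ARM_FFA_TRANSPORT=y)
    # 'ts-smm-gateway' -> adds no-strict-devmem.cfg on qemuarm64-secureboot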
diff --git a/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto_5.15%.bbappend b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto_5.15%.bbappend
new file mode 100644
index 0000000..9a18dd8
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-kernel/linux/linux-yocto_5.15%.bbappend
@@ -0,0 +1,8 @@
+FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}-5.15:"
+
+SRC_URI:append:qemuarm64-secureboot = " \
+    file://skip-unavailable-memory.patch \
+    "
+
+FFA_TEE_INCLUDE = "${@bb.utils.contains('MACHINE_FEATURES', 'arm-ffa', 'arm-ffa-5.15.inc', '' , d)}"
+require ${FFA_TEE_INCLUDE}
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-client_3.17.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-client_3.17.0.bb
deleted file mode 100644
index 5de16e7..0000000
--- a/meta-arm/meta-arm/recipes-security/optee/optee-client_3.17.0.bb
+++ /dev/null
@@ -1,3 +0,0 @@
-require optee-client.inc
-
-SRCREV = "9a337049c52495e5e16b4a94decaa3e58fce793e"
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-client_3.18.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-client_3.18.0.bb
new file mode 100644
index 0000000..0c831db
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-client_3.18.0.bb
@@ -0,0 +1,3 @@
+require optee-client.inc
+
+SRCREV = "e7cba71cc6e2ecd02f412c7e9ee104f0a5dffc6f"
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-examples.inc b/meta-arm/meta-arm/recipes-security/optee/optee-examples.inc
index e6feb99..5011f48 100644
--- a/meta-arm/meta-arm/recipes-security/optee/optee-examples.inc
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-examples.inc
@@ -12,7 +12,7 @@
 require optee.inc
 
 SRC_URI = "git://github.com/linaro-swg/optee_examples.git;branch=master;protocol=https \
-           file://0001-Makefile-Fix-non-portable-sh-check-for-plugins.patch"
+           "
 
 EXTRA_OEMAKE += "TA_DEV_KIT_DIR=${TA_DEV_KIT_DIR} \
                  HOST_CROSS_COMPILE=${HOST_PREFIX} \
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-examples/0001-Makefile-Fix-non-portable-sh-check-for-plugins.patch b/meta-arm/meta-arm/recipes-security/optee/optee-examples/0001-Makefile-Fix-non-portable-sh-check-for-plugins.patch
deleted file mode 100644
index 70add62..0000000
--- a/meta-arm/meta-arm/recipes-security/optee/optee-examples/0001-Makefile-Fix-non-portable-sh-check-for-plugins.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-From 11610debf750f15c7a104db7315dcd7d69e282a8 Mon Sep 17 00:00:00 2001
-From: Alejandro Enedino Hernandez Samaniego <alhe@linux.microsoft.com>
-Date: Sat, 26 Feb 2022 01:52:26 +0000
-Subject: [PATCH] Makefile: Fix non-portable sh check for plugins
-
-Upstream-Status: Pending
-
-We previously held a patch that used "=" for comparison, but when
-that patch got upstreamed it was changed to "==" which is non-portable,
-resulting in an error:
-
-/bin/sh: 6: [: acipher: unexpected operator
-/bin/sh: 6: [: plugins: unexpected operator
-/bin/sh: 6: [: hello_world: unexpected operator
-/bin/sh: 6: [: hotp: unexpected operator
-/bin/sh: 6: [: aes: unexpected operator
-/bin/sh: 6: [: random: unexpected operator
-/bin/sh: 6: [: secure_storage: unexpected operator
-
-if /bin/sh doesnt point to bash.
-
-Which in turn causes our do_install task to fail since plugins arent
-where we expect them to be.
-
-
-Signed-off-by: Alejandro Enedino Hernandez Samaniego <alhe@linux.microsoft.com>
----
- Makefile | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/Makefile b/Makefile
-index b3f16aa..9359d95 100644
---- a/Makefile
-+++ b/Makefile
-@@ -31,7 +31,7 @@ prepare-for-rootfs: examples
- 			cp -p $$example/host/optee_example_$$example $(OUTPUT_DIR)/ca/; \
- 		fi; \
- 		cp -pr $$example/ta/*.ta $(OUTPUT_DIR)/ta/; \
--		if [ $$example == plugins ]; then \
-+		if [ $$example = plugins ]; then \
- 			cp -p plugins/syslog/*.plugin $(OUTPUT_DIR)/plugins/; \
- 		fi; \
- 	done
--- 
-2.25.1
-
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-examples_3.17.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-examples_3.17.0.bb
deleted file mode 100644
index b5f6269..0000000
--- a/meta-arm/meta-arm/recipes-security/optee/optee-examples_3.17.0.bb
+++ /dev/null
@@ -1,3 +0,0 @@
-require optee-examples.inc
-
-SRCREV = "65fc74309e12189ad5b6ce3ffec37c8011088a5a"
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-examples_3.18.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-examples_3.18.0.bb
new file mode 100644
index 0000000..8118fee
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-examples_3.18.0.bb
@@ -0,0 +1,3 @@
+require optee-examples.inc
+
+SRCREV = "f301ee9df2129c0db683e726c91dc2cefe4cdb65"
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os-tadevkit_3.17.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-os-tadevkit_3.17.0.bb
deleted file mode 100644
index 5ff373a..0000000
--- a/meta-arm/meta-arm/recipes-security/optee/optee-os-tadevkit_3.17.0.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-FILESEXTRAPATHS:prepend := "${THISDIR}/optee-os:"
-require optee-os_3.17.0.bb
-
-SUMMARY = "OP-TEE Trusted OS TA devkit"
-DESCRIPTION = "OP-TEE TA devkit for build TAs"
-HOMEPAGE = "https://www.op-tee.org/"
-
-DEPENDS += "python3-pycryptodome-native"
-
-do_install() {
-    #install TA devkit
-    install -d ${D}${includedir}/optee/export-user_ta/
-    for f in ${B}/export-ta_${OPTEE_ARCH}/* ; do
-        cp -aR $f ${D}${includedir}/optee/export-user_ta/
-    done
-}
-
-do_deploy() {
-	echo "Do not inherit do_deploy from optee-os."
-}
-
-FILES:${PN} = "${includedir}/optee/"
-
-# Build paths are currently embedded
-INSANE_SKIP:${PN}-dev += "buildpaths"
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os-tadevkit_3.18.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-os-tadevkit_3.18.0.bb
new file mode 100644
index 0000000..0982df3
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os-tadevkit_3.18.0.bb
@@ -0,0 +1,25 @@
+FILESEXTRAPATHS:prepend := "${THISDIR}/optee-os:"
+require optee-os_3.18.0.bb
+
+SUMMARY = "OP-TEE Trusted OS TA devkit"
+DESCRIPTION = "OP-TEE TA devkit for build TAs"
+HOMEPAGE = "https://www.op-tee.org/"
+
+DEPENDS += "python3-pycryptodome-native"
+
+do_install() {
+    #install TA devkit
+    install -d ${D}${includedir}/optee/export-user_ta/
+    for f in ${B}/export-ta_${OPTEE_ARCH}/* ; do
+        cp -aR $f ${D}${includedir}/optee/export-user_ta/
+    done
+}
+
+do_deploy() {
+	echo "Do not inherit do_deploy from optee-os."
+}
+
+FILES:${PN} = "${includedir}/optee/"
+
+# Build paths are currently embedded
+INSANE_SKIP:${PN}-dev += "buildpaths"
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os-ts.inc b/meta-arm/meta-arm/recipes-security/optee/optee-os-ts.inc
new file mode 100644
index 0000000..10a4175
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os-ts.inc
@@ -0,0 +1,54 @@
+# Include Trusted Services SPs according to the defined machine features
+
+# Please note that OP-TEE will load SPs in the order listed in this file.
+# If an SP requires another SP to be loaded first, it must be listed after it.
+
+# TS SPs UUIDs definitions
+require recipes-security/trusted-services/ts-uuid.inc
+
+TS_ENV = "opteesp"
+TS_BIN = "${RECIPE_SYSROOT}/usr/${TS_ENV}/bin"
+
+# ITS SP
+DEPENDS:append  = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-its', \
+                                        ' ts-sp-its', '' , d)}"
+SP_PATHS:append = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-its', \
+                                        ' ${TS_BIN}/${ITS_UUID}.stripped.elf', '', d)}"
+
+# Storage SP
+DEPENDS:append  = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-storage', \
+                                        ' ts-sp-storage', '' , d)}"
+SP_PATHS:append = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-storage', \
+                                        ' ${TS_BIN}/${STORAGE_UUID}.stripped.elf', '', d)}"
+
+# Crypto SP.
+DEPENDS:append  = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-crypto', \
+                                        ' ts-sp-crypto', '' , d)}"
+SP_PATHS:append = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-crypto', \
+                                        ' ${TS_BIN}/${CRYPTO_UUID}.stripped.elf', '', d)}"
+
+# Attestation SP
+DEPENDS:append  = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-attestation', \
+                                        ' ts-sp-attestation', '' , d)}"
+SP_PATHS:append = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-attestation', \
+                                        ' ${TS_BIN}/${ATTESTATION_UUID}.stripped.elf', '', d)}"
+
+# Env-test SP
+DEPENDS:append  = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-env-test', \
+                                        ' ts-sp-env-test', '' , d)}"
+SP_PATHS:append = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-env-test', \
+                                        ' ${TS_BIN}/${ENV_TEST_UUID}.stripped.elf', '', d)}"
+
+# SE-Proxy SP
+DEPENDS:append  = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-se-proxy', \
+                                        ' ts-sp-se-proxy', '' , d)}"
+SP_PATHS:append = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-se-proxy', \
+                                        ' ${TS_BIN}/${SE_PROXY_UUID}.stripped.elf', '', d)}"
+
+# SMM Gateway
+DEPENDS:append  = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-smm-gateway', \
+                                        ' ts-sp-smm-gateway', '' , d)}"
+SP_PATHS:append = "${@bb.utils.contains('MACHINE_FEATURES', 'ts-smm-gateway', \
+                                        ' ${TS_BIN}/${SMM_GATEWAY_UUID}.stripped.elf', '', d)}"
+
+EXTRA_OEMAKE:append = "${@oe.utils.conditional('SP_PATHS', '', '', ' CFG_SECURE_PARTITION=y SP_PATHS=\'${SP_PATHS}\' ', d)}"
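As an illustration of how these appends compose (variable names taken from this file; ts-its and ts-crypto chosen as an example feature set), the recipe effectively ends up with:

    DEPENDS:append  = " ts-sp-its ts-sp-crypto"
    SP_PATHS:append = " ${TS_BIN}/${ITS_UUID}.stripped.elf ${TS_BIN}/${CRYPTO_UUID}.stripped.elf"
    EXTRA_OEMAKE:append = " CFG_SECURE_PARTITION=y SP_PATHS='${SP_PATHS}' "

With no ts-* features enabled, SP_PATHS stays empty and CFG_SECURE_PARTITION is not passed.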
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os.inc b/meta-arm/meta-arm/recipes-security/optee/optee-os.inc
index 11193dc..a03ea6a 100644
--- a/meta-arm/meta-arm/recipes-security/optee/optee-os.inc
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os.inc
@@ -19,6 +19,7 @@
 SRC_URI:append = " \
     file://0006-allow-setting-sysroot-for-libgcc-lookup.patch \
     file://0007-allow-setting-sysroot-for-clang.patch \
+    file://0008-no-warn-rwx-segments.patch \
    "
 
 S = "${WORKDIR}/git"
@@ -33,6 +34,8 @@
     ta-targets=ta_${OPTEE_ARCH} \
     O=${B} \
 "
+EXTRA_OEMAKE += " HOST_PREFIX=${HOST_PREFIX}"
+EXTRA_OEMAKE += " CROSS_COMPILE64=${HOST_PREFIX}"
 
 CFLAGS[unexport] = "1"
 LDFLAGS[unexport] = "1"
@@ -40,7 +43,9 @@
 AS[unexport] = "1"
 LD[unexport] = "1"
 
-do_configure[noexec] = "1"
+do_compile:prepend() {
+	PLAT_LIBGCC_PATH=$(${CC} -print-libgcc-file-name)
+}
 
 do_compile() {
     oe_runmake -C ${S} all
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os/0001-core-Define-section-attributes-for-clang.patch b/meta-arm/meta-arm/recipes-security/optee/optee-os/0001-core-Define-section-attributes-for-clang.patch
new file mode 100644
index 0000000..db88e7f
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os/0001-core-Define-section-attributes-for-clang.patch
@@ -0,0 +1,188 @@
+From f189457b79989543f65b8a4e8729eff2cdf9a758 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 13 Aug 2022 19:24:55 -0700
+Subject: [PATCH] core: Define section attributes for clang
+
+Clang's attribute section is not the same as gcc's; here we need to add
+flags to sections so they can eventually be collected by the linker into
+the final output segments. The only way to do so with clang is to use
+
+pragma clang section ...
+
+The behaviour is described here [1]; this allows us to define named bss
+sections. This was not an issue until clang-15, where the LLD linker
+starts to detect the section flags before merging them and throws the
+following errors
+
+| ld.lld: error: section type mismatch for .nozi.kdata_page
+| >>> /mnt/b/yoe/master/build/tmp/work/qemuarm64-yoe-linux/optee-os-tadevkit/3.17.0-r0/build/core/arch/arm/kernel/thread.o:(.nozi.kdata_page): SHT_PROGBITS
+| >>> output section .nozi: SHT_NOBITS
+|
+| ld.lld: error: section type mismatch for .nozi.mmu.l2
+| >>> /mnt/b/yoe/master/build/tmp/work/qemuarm64-yoe-linux/optee-os-tadevkit/3.17.0-r0/build/core/arch/arm/mm/core_mmu_lpae.o:(.nozi.mmu.l2): SHT_PROGBITS
+| >>> output section .nozi: SHT_NOBITS
+
+These sections should be carrying SHT_NOBITS, but so far it was not
+possible to do so. This patch tries to use clang's pragma to get this
+going and match the functionality with gcc.
+
+[1] https://intel.github.io/llvm-docs/clang/LanguageExtensions.html#specifying-section-names-for-global-objects-pragma-clang-section
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ core/arch/arm/kernel/thread.c    | 19 +++++++++++++++--
+ core/arch/arm/mm/core_mmu_lpae.c | 35 ++++++++++++++++++++++++++++----
+ core/arch/arm/mm/pgt_cache.c     | 12 ++++++++++-
+ core/kernel/thread.c             | 13 +++++++++++-
+ 4 files changed, 71 insertions(+), 8 deletions(-)
+
+diff --git a/core/arch/arm/kernel/thread.c b/core/arch/arm/kernel/thread.c
+index f083b159e..432983c86 100644
+--- a/core/arch/arm/kernel/thread.c
++++ b/core/arch/arm/kernel/thread.c
+@@ -44,15 +44,30 @@ static size_t thread_user_kcode_size __nex_bss;
+ #if defined(CFG_CORE_UNMAP_CORE_AT_EL0) && \
+ 	defined(CFG_CORE_WORKAROUND_SPECTRE_BP_SEC) && defined(ARM64)
+ long thread_user_kdata_sp_offset __nex_bss;
++#ifdef __clang__
++#ifndef CFG_VIRTUALIZATION
++#pragma clang section bss=".nozi.kdata_page"
++#else
++#pragma clang section bss=".nex_nozi.kdata_page"
++#endif
++#endif
+ static uint8_t thread_user_kdata_page[
+ 	ROUNDUP(sizeof(struct thread_core_local) * CFG_TEE_CORE_NB_CORE,
+ 		SMALL_PAGE_SIZE)]
+ 	__aligned(SMALL_PAGE_SIZE)
++#ifndef __clang__
+ #ifndef CFG_VIRTUALIZATION
+-	__section(".nozi.kdata_page");
++	__section(".nozi.kdata_page")
+ #else
+-	__section(".nex_nozi.kdata_page");
++	__section(".nex_nozi.kdata_page")
+ #endif
++#endif
++    ;
++#endif
++
++/* reset BSS section to default ( .bss ) */
++#ifdef __clang__
++#pragma clang section bss=""
+ #endif
+ 
+ #ifdef ARM32
+diff --git a/core/arch/arm/mm/core_mmu_lpae.c b/core/arch/arm/mm/core_mmu_lpae.c
+index 19cd7b61b..78f5910c5 100644
+--- a/core/arch/arm/mm/core_mmu_lpae.c
++++ b/core/arch/arm/mm/core_mmu_lpae.c
+@@ -230,19 +230,46 @@ typedef uint16_t l1_idx_t;
+ typedef uint64_t base_xlat_tbls_t[CFG_TEE_CORE_NB_CORE][NUM_BASE_LEVEL_ENTRIES];
+ typedef uint64_t xlat_tbl_t[XLAT_TABLE_ENTRIES];
+ 
++#ifdef __clang__
++#pragma clang section bss=".nozi.mmu.base_table"
++#endif
+ static base_xlat_tbls_t base_xlation_table[NUM_BASE_TABLES]
+ 	__aligned(NUM_BASE_LEVEL_ENTRIES * XLAT_ENTRY_SIZE)
+-	__section(".nozi.mmu.base_table");
++#ifndef __clang__
++	__section(".nozi.mmu.base_table")
++#endif
++;
++#ifdef __clang__
++#pragma clang section bss=""
++#endif
+ 
++#ifdef __clang__
++#pragma clang section bss=".nozi.mmu.l2"
++#endif
+ static xlat_tbl_t xlat_tables[MAX_XLAT_TABLES]
+-	__aligned(XLAT_TABLE_SIZE) __section(".nozi.mmu.l2");
++	__aligned(XLAT_TABLE_SIZE)
++#ifndef __clang__
++	__section(".nozi.mmu.l2")
++#endif
++;
++#ifdef __clang__
++#pragma clang section bss=""
++#endif
+ 
+ #define XLAT_TABLES_SIZE	(sizeof(xlat_tbl_t) * MAX_XLAT_TABLES)
+ 
++#ifdef __clang__
++#pragma clang section bss=".nozi.mmu.l2"
++#endif
+ /* MMU L2 table for TAs, one for each thread */
+ static xlat_tbl_t xlat_tables_ul1[CFG_NUM_THREADS]
+-	__aligned(XLAT_TABLE_SIZE) __section(".nozi.mmu.l2");
+-
++#ifndef __clang__
++	__aligned(XLAT_TABLE_SIZE) __section(".nozi.mmu.l2")
++#endif
++;
++#ifdef __clang__
++#pragma clang section bss=""
++#endif
+ /*
+  * TAs page table entry inside a level 1 page table.
+  *
+diff --git a/core/arch/arm/mm/pgt_cache.c b/core/arch/arm/mm/pgt_cache.c
+index d658b3e68..6c36706c0 100644
+--- a/core/arch/arm/mm/pgt_cache.c
++++ b/core/arch/arm/mm/pgt_cache.c
+@@ -104,8 +104,18 @@ void pgt_init(void)
+ 	 * has a large alignment, while .bss has a small alignment. The current
+ 	 * link script is optimized for small alignment in .bss
+ 	 */
++#ifdef __clang__
++#pragma clang section bss=".nozi.mmu.l2"
++#endif
+ 	static uint8_t pgt_tables[PGT_CACHE_SIZE][PGT_SIZE]
+-			__aligned(PGT_SIZE) __section(".nozi.pgt_cache");
++			__aligned(PGT_SIZE)
++#ifndef __clang__
++			__section(".nozi.pgt_cache")
++#endif
++			;
++#ifdef __clang__
++#pragma clang section bss=""
++#endif
+ 	size_t n;
+ 
+ 	for (n = 0; n < ARRAY_SIZE(pgt_tables); n++) {
+diff --git a/core/kernel/thread.c b/core/kernel/thread.c
+index 18d34e6ad..086129e28 100644
+--- a/core/kernel/thread.c
++++ b/core/kernel/thread.c
+@@ -37,13 +37,24 @@ struct thread_core_local thread_core_local[CFG_TEE_CORE_NB_CORE] __nex_bss;
+ 	name[stack_num][sizeof(name[stack_num]) / sizeof(uint32_t) - 1]
+ #endif
+ 
++#define DO_PRAGMA(x) _Pragma (#x)
++
++#ifdef __clang__
++#define DECLARE_STACK(name, num_stacks, stack_size, linkage) \
++DO_PRAGMA (clang section bss=".nozi_stack." #name) \
++linkage uint32_t name[num_stacks] \
++		[ROUNDUP(stack_size + STACK_CANARY_SIZE + STACK_CHECK_EXTRA, \
++			 STACK_ALIGNMENT) / sizeof(uint32_t)] \
++		__attribute__((aligned(STACK_ALIGNMENT))); \
++DO_PRAGMA(clang section bss="")
++#else
+ #define DECLARE_STACK(name, num_stacks, stack_size, linkage) \
+ linkage uint32_t name[num_stacks] \
+ 		[ROUNDUP(stack_size + STACK_CANARY_SIZE + STACK_CHECK_EXTRA, \
+ 			 STACK_ALIGNMENT) / sizeof(uint32_t)] \
+ 		__attribute__((section(".nozi_stack." # name), \
+ 			       aligned(STACK_ALIGNMENT)))
+-
++#endif
+ #define GET_STACK(stack) ((vaddr_t)(stack) + STACK_SIZE(stack))
+ 
+ DECLARE_STACK(stack_tmp, CFG_TEE_CORE_NB_CORE,
+-- 
+2.37.2
+
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os/0008-no-warn-rwx-segments.patch b/meta-arm/meta-arm/recipes-security/optee/optee-os/0008-no-warn-rwx-segments.patch
new file mode 100644
index 0000000..1dd70b3
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os/0008-no-warn-rwx-segments.patch
@@ -0,0 +1,64 @@
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+Upstream-Status: Backport [https://github.com/OP-TEE/optee_os/pull/5474]
+
+From 0b8a917fa51a366806edc0f04b88cd23b24098c4 Mon Sep 17 00:00:00 2001
+From: Jerome Forissier <jerome.forissier@linaro.org>
+Date: Fri, 5 Aug 2022 09:48:03 +0200
+Subject: [PATCH] core: link: add --no-warn-rwx-segments
+
+binutils ld.bfd generates one RWX LOAD segment by merging several sections
+with mixed R/W/X attributes (.text, .rodata, .data). After version 2.38 it
+also warns by default when that happens [1], which breaks the build due to
+--fatal-warnings. The RWX segment is not a problem for the TEE core, since
+that information is not used to set memory permissions. Therefore, silence
+the warning.
+
+Link: [1] https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=ba951afb99912da01a6e8434126b8fac7aa75107
+Link: https://sourceware.org/bugzilla/show_bug.cgi?id=29448
+Reported-by: Dominique Martinet <dominique.martinet@atmark-techno.com>
+Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
+Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
+---
+ core/arch/arm/kernel/link.mk | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/core/arch/arm/kernel/link.mk b/core/arch/arm/kernel/link.mk
+index 7eed333a32..c39d43cbfc 100644
+--- a/core/arch/arm/kernel/link.mk
++++ b/core/arch/arm/kernel/link.mk
+@@ -31,6 +31,7 @@ link-ldflags += -T $(link-script-pp) -Map=$(link-out-dir)/tee.map
+ link-ldflags += --sort-section=alignment
+ link-ldflags += --fatal-warnings
+ link-ldflags += --gc-sections
++link-ldflags += $(call ld-option,--no-warn-rwx-segments)
+ 
+ link-ldadd  = $(LDADD)
+ link-ldadd += $(ldflags-external)
+@@ -55,6 +56,7 @@ link-script-cppflags := \
+ 		$(cppflagscore))
+ 
+ ldargs-all_objs := -T $(link-script-dummy) --no-check-sections \
++		   $(call ld-option,--no-warn-rwx-segments) \
+ 		   $(link-objs) $(link-ldadd) $(libgcccore)
+ cleanfiles += $(link-out-dir)/all_objs.o
+ $(link-out-dir)/all_objs.o: $(objs) $(libdeps) $(MAKEFILE_LIST)
+@@ -67,7 +69,8 @@ $(link-out-dir)/unpaged_entries.txt: $(link-out-dir)/all_objs.o
+ 	$(q)$(NMcore) $< | \
+ 		$(AWK) '/ ____keep_pager/ { printf "-u%s ", $$3 }' > $@
+ 
+-unpaged-ldargs = -T $(link-script-dummy) --no-check-sections --gc-sections
++unpaged-ldargs := -T $(link-script-dummy) --no-check-sections --gc-sections \
++		 $(call ld-option,--no-warn-rwx-segments)
+ unpaged-ldadd := $(objs) $(link-ldadd) $(libgcccore)
+ cleanfiles += $(link-out-dir)/unpaged.o
+ $(link-out-dir)/unpaged.o: $(link-out-dir)/unpaged_entries.txt
+@@ -95,7 +98,8 @@ $(link-out-dir)/init_entries.txt: $(link-out-dir)/all_objs.o
+ 	$(q)$(NMcore) $< | \
+ 		$(AWK) '/ ____keep_init/ { printf "-u%s ", $$3 }' > $@
+ 
+-init-ldargs := -T $(link-script-dummy) --no-check-sections --gc-sections
++init-ldargs := -T $(link-script-dummy) --no-check-sections --gc-sections \
++	       $(call ld-option,--no-warn-rwx-segments)
+ init-ldadd := $(link-objs-init) $(link-out-dir)/version.o  $(link-ldadd) \
+ 	      $(libgcccore)
+ cleanfiles += $(link-out-dir)/init.o
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os/3.14/0009-add-z-execstack.patch b/meta-arm/meta-arm/recipes-security/optee/optee-os/3.14/0009-add-z-execstack.patch
new file mode 100644
index 0000000..616a0ff
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os/3.14/0009-add-z-execstack.patch
@@ -0,0 +1,95 @@
+From cb4349edce6ce360436f10da8b6aa32e68fb778d Mon Sep 17 00:00:00 2001
+From: Jerome Forissier <jerome.forissier@linaro.org>
+Date: Tue, 23 Aug 2022 11:41:00 +0000
+Subject: [PATCH] core, ldelf: link: add -z execstack
+
+When building for arm32 with GNU binutils 2.39, the linker outputs
+warnings when generating some TEE core binaries (all_obj.o, init.o,
+unpaged.o and tee.elf) as well as ldelf.elf:
+
+ arm-poky-linux-gnueabi-ld.bfd: warning: atomic_a32.o: missing .note.GNU-stack section implies executable stack
+ arm-poky-linux-gnueabi-ld.bfd: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
+
+The permissions used when mapping the TEE core stacks do not depend on
+any metadata found in the ELF file. Similarly when the TEE core loads
+ldelf it already creates a non-executable stack regardless of ELF
+information. Therefore we can safely ignore the warnings. This is done
+by adding the '-z execstack' option.
+
+Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
+
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+Upstream-Status: Backport [https://github.com/OP-TEE/optee_os/pull/5499]
+
+---
+ core/arch/arm/kernel/link.mk | 13 +++++++++----
+ ldelf/link.mk                |  4 ++++
+ 2 files changed, 13 insertions(+), 4 deletions(-)
+
+diff --git a/core/arch/arm/kernel/link.mk b/core/arch/arm/kernel/link.mk
+index 3dc459d6..85cde58e 100644
+--- a/core/arch/arm/kernel/link.mk
++++ b/core/arch/arm/kernel/link.mk
+@@ -9,6 +9,11 @@ link-script-dep = $(link-out-dir)/.kern.ld.d
+ 
+ AWK	 = awk
+ 
++link-ldflags-common += $(call ld-option,--no-warn-rwx-segments)
++ifeq ($(CFG_ARM32_core),y)
++link-ldflags-common += $(call ld-option,--no-warn-execstack)
++endif
++
+ link-ldflags  = $(LDFLAGS)
+ ifeq ($(CFG_CORE_ASLR),y)
+ link-ldflags += -pie -Bsymbolic -z notext -z norelro $(ldflag-apply-dynamic-relocs)
+@@ -17,7 +22,7 @@ link-ldflags += -T $(link-script-pp) -Map=$(link-out-dir)/tee.map
+ link-ldflags += --sort-section=alignment
+ link-ldflags += --fatal-warnings
+ link-ldflags += --gc-sections
+-link-ldflags += $(call ld-option,--no-warn-rwx-segments)
++link-ldflags += $(link-ldflags-common)
+ 
+ link-ldadd  = $(LDADD)
+ link-ldadd += $(ldflags-external)
+@@ -39,7 +44,7 @@ link-script-cppflags := \
+ 		$(cppflagscore))
+ 
+ ldargs-all_objs := -T $(link-script-dummy) --no-check-sections \
+-		   $(call ld-option,--no-warn-rwx-segments) \
++		   $(link-ldflags-common) \
+ 		   $(link-objs) $(link-ldadd) $(libgcccore)
+ cleanfiles += $(link-out-dir)/all_objs.o
+ $(link-out-dir)/all_objs.o: $(objs) $(libdeps) $(MAKEFILE_LIST)
+@@ -53,7 +58,7 @@ $(link-out-dir)/unpaged_entries.txt: $(link-out-dir)/all_objs.o
+ 		$(AWK) '/ ____keep_pager/ { printf "-u%s ", $$3 }' > $@
+ 
+ unpaged-ldargs := -T $(link-script-dummy) --no-check-sections --gc-sections \
+-		 $(call ld-option,--no-warn-rwx-segments)
++		 $(link-ldflags-common)
+ unpaged-ldadd := $(objs) $(link-ldadd) $(libgcccore)
+ cleanfiles += $(link-out-dir)/unpaged.o
+ $(link-out-dir)/unpaged.o: $(link-out-dir)/unpaged_entries.txt
+@@ -82,7 +87,7 @@ $(link-out-dir)/init_entries.txt: $(link-out-dir)/all_objs.o
+ 		$(AWK) '/ ____keep_init/ { printf "-u%s ", $$3 }' > $@
+ 
+ init-ldargs := -T $(link-script-dummy) --no-check-sections --gc-sections \
+-	       $(call ld-option,--no-warn-rwx-segments)
++	       $(link-ldflags-common)
+ init-ldadd := $(link-objs-init) $(link-out-dir)/version.o  $(link-ldadd) \
+ 	      $(libgcccore)
+ cleanfiles += $(link-out-dir)/init.o
+diff --git a/ldelf/link.mk b/ldelf/link.mk
+index 8fafc879..d8a05ea6 100644
+--- a/ldelf/link.mk
++++ b/ldelf/link.mk
+@@ -19,6 +19,10 @@ link-ldflags += --sort-section=alignment
+ link-ldflags += -z max-page-size=4096 # OP-TEE always uses 4K alignment
+ link-ldflags += $(link-ldflags$(sm))
+ 
++ifeq ($(CFG_ARM32_$(sm)), y)
++link-ldflags += $(call ld-option,--no-warn-execstack)
++endif
++
+ link-ldadd  = $(addprefix -L,$(libdirs))
+ link-ldadd += --start-group $(addprefix -l,$(libnames)) --end-group
+ ldargs-ldelf.elf := $(link-ldflags) $(objs) $(link-ldadd) $(libgcc$(sm))
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os/3.14/0010-add-note-GNU-stack-section.patch b/meta-arm/meta-arm/recipes-security/optee/optee-os/3.14/0010-add-note-GNU-stack-section.patch
new file mode 100644
index 0000000..c0330b9
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os/3.14/0010-add-note-GNU-stack-section.patch
@@ -0,0 +1,128 @@
+From f99a0278ad5e26772b3dcf8c74b5bf986ecfbe1e Mon Sep 17 00:00:00 2001
+From: Jerome Forissier <jerome.forissier@linaro.org>
+Date: Tue, 23 Aug 2022 12:31:46 +0000
+Subject: [PATCH] arm32: libutils, libutee, ta: add .note.GNU-stack section to
+
+ .S files
+
+When building for arm32 with GNU binutils 2.39, the linker outputs
+warnings when linking Trusted Applications:
+
+ arm-unknown-linux-uclibcgnueabihf-ld.bfd: warning: utee_syscalls_a32.o: missing .note.GNU-stack section implies executable stack
+ arm-unknown-linux-uclibcgnueabihf-ld.bfd: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
+
+We could silence the warning by adding the '-z execstack' option to the
+TA link flags, like we did in the parent commit for the TEE core and
+ldelf. Indeed, ldelf always allocates a non-executable piece of memory
+for the TA to use as a stack.
+
+However it seems preferable to comply with the common ELF practices in
+this case. A better fix is therefore to add the missing .note.GNU-stack
+sections in the assembler files.
+
+Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
+
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+Upstream-Status: Backport [https://github.com/OP-TEE/optee_os/pull/5499]
+
+---
+ lib/libutee/arch/arm/utee_syscalls_a32.S             | 2 ++
+ lib/libutils/ext/arch/arm/atomic_a32.S               | 2 ++
+ lib/libutils/ext/arch/arm/mcount_a32.S               | 2 ++
+ lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S  | 2 ++
+ lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S | 2 ++
+ lib/libutils/isoc/arch/arm/setjmp_a32.S              | 2 ++
+ ta/arch/arm/ta_entry_a32.S                           | 2 ++
+ 7 files changed, 14 insertions(+)
+
+diff --git a/lib/libutee/arch/arm/utee_syscalls_a32.S b/lib/libutee/arch/arm/utee_syscalls_a32.S
+index 6e621ca6..af405f62 100644
+--- a/lib/libutee/arch/arm/utee_syscalls_a32.S
++++ b/lib/libutee/arch/arm/utee_syscalls_a32.S
+@@ -7,6 +7,8 @@
+ #include <tee_syscall_numbers.h>
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+         .section .text
+         .balign 4
+         .code 32
+diff --git a/lib/libutils/ext/arch/arm/atomic_a32.S b/lib/libutils/ext/arch/arm/atomic_a32.S
+index eaef6914..2be73ffa 100644
+--- a/lib/libutils/ext/arch/arm/atomic_a32.S
++++ b/lib/libutils/ext/arch/arm/atomic_a32.S
+@@ -5,6 +5,8 @@
+ 
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /* uint32_t atomic_inc32(uint32_t *v); */
+ FUNC atomic_inc32 , :
+ 	ldrex	r1, [r0]
+diff --git a/lib/libutils/ext/arch/arm/mcount_a32.S b/lib/libutils/ext/arch/arm/mcount_a32.S
+index 51439a23..54dc3c02 100644
+--- a/lib/libutils/ext/arch/arm/mcount_a32.S
++++ b/lib/libutils/ext/arch/arm/mcount_a32.S
+@@ -7,6 +7,8 @@
+ 
+ #if defined(CFG_TA_GPROF_SUPPORT) || defined(CFG_FTRACE_SUPPORT)
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /*
+  * Convert return address to call site address by subtracting the size of the
+  * mcount call instruction (blx __gnu_mcount_nc).
+diff --git a/lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S b/lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S
+index a600c879..37ae9ec6 100644
+--- a/lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S
++++ b/lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S
+@@ -5,6 +5,8 @@
+ 
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /*
+  * signed ret_idivmod_values(signed quot, signed rem);
+  * return quotient and remaining the EABI way (regs r0,r1)
+diff --git a/lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S b/lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S
+index 2dc50bc9..5c3353e2 100644
+--- a/lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S
++++ b/lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S
+@@ -5,6 +5,8 @@
+ 
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /*
+  * __value_in_regs lldiv_t __aeabi_ldivmod( long long n, long long d)
+  */
+diff --git a/lib/libutils/isoc/arch/arm/setjmp_a32.S b/lib/libutils/isoc/arch/arm/setjmp_a32.S
+index 43ea5937..f8a0b70d 100644
+--- a/lib/libutils/isoc/arch/arm/setjmp_a32.S
++++ b/lib/libutils/isoc/arch/arm/setjmp_a32.S
+@@ -51,6 +51,8 @@
+ #define SIZE(x)
+ #endif
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /* Arm/Thumb interworking support:
+ 
+    The interworking scheme expects functions to use a BX instruction
+diff --git a/ta/arch/arm/ta_entry_a32.S b/ta/arch/arm/ta_entry_a32.S
+index d2f8a69d..cd9a12f9 100644
+--- a/ta/arch/arm/ta_entry_a32.S
++++ b/ta/arch/arm/ta_entry_a32.S
+@@ -5,6 +5,8 @@
+ 
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /*
+  * This function is the bottom of the user call stack. Mark it as such so that
+  * the unwinding code won't try to go further down.
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os/3.18/0009-add-z-execstack.patch b/meta-arm/meta-arm/recipes-security/optee/optee-os/3.18/0009-add-z-execstack.patch
new file mode 100644
index 0000000..5463a34
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os/3.18/0009-add-z-execstack.patch
@@ -0,0 +1,90 @@
+From a9d099d17ef0af6deac4c3b4d15ad0555d258ec8 Mon Sep 17 00:00:00 2001
+From: Jerome Forissier <jerome.forissier@linaro.org>
+Date: Tue, 23 Aug 2022 11:41:00 +0000
+Subject: [PATCH] core, ldelf: link: add -z execstack
+
+When building for arm32 with GNU binutils 2.39, the linker outputs
+warnings when generating some TEE core binaries (all_obj.o, init.o,
+unpaged.o and tee.elf) as well as ldelf.elf:
+
+ arm-poky-linux-gnueabi-ld.bfd: warning: atomic_a32.o: missing .note.GNU-stack section implies executable stack
+ arm-poky-linux-gnueabi-ld.bfd: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
+
+The permissions used when mapping the TEE core stacks do not depend on
+any metadata found in the ELF file. Similarly when the TEE core loads
+ldelf it already creates a non-executable stack regardless of ELF
+information. Therefore we can safely ignore the warnings. This is done
+by adding the '-z execstack' option.
+
+Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
+
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+Upstream-Status: Backport [https://github.com/OP-TEE/optee_os/pull/5499]
+
+---
+diff --git a/core/arch/arm/kernel/link.mk b/core/arch/arm/kernel/link.mk
+index c39d43cb..0e96e606 100644
+--- a/core/arch/arm/kernel/link.mk
++++ b/core/arch/arm/kernel/link.mk
+@@ -9,6 +9,11 @@ link-script-dep = $(link-out-dir)/.kern.ld.d
+ 
+ AWK	 = awk
+ 
++link-ldflags-common += $(call ld-option,--no-warn-rwx-segments)
++ifeq ($(CFG_ARM32_core),y)
++link-ldflags-common += $(call ld-option,--no-warn-execstack)
++endif
++
+ link-ldflags  = $(LDFLAGS)
+ ifeq ($(CFG_CORE_ASLR),y)
+ link-ldflags += -pie -Bsymbolic -z norelro $(ldflag-apply-dynamic-relocs)
+@@ -31,7 +36,7 @@ link-ldflags += -T $(link-script-pp) -Map=$(link-out-dir)/tee.map
+ link-ldflags += --sort-section=alignment
+ link-ldflags += --fatal-warnings
+ link-ldflags += --gc-sections
+-link-ldflags += $(call ld-option,--no-warn-rwx-segments)
++link-ldflags += $(link-ldflags-common)
+ 
+ link-ldadd  = $(LDADD)
+ link-ldadd += $(ldflags-external)
+@@ -56,7 +61,7 @@ link-script-cppflags := \
+ 		$(cppflagscore))
+ 
+ ldargs-all_objs := -T $(link-script-dummy) --no-check-sections \
+-		   $(call ld-option,--no-warn-rwx-segments) \
++		   $(link-ldflags-common) \
+ 		   $(link-objs) $(link-ldadd) $(libgcccore)
+ cleanfiles += $(link-out-dir)/all_objs.o
+ $(link-out-dir)/all_objs.o: $(objs) $(libdeps) $(MAKEFILE_LIST)
+@@ -70,7 +75,7 @@ $(link-out-dir)/unpaged_entries.txt: $(link-out-dir)/all_objs.o
+ 		$(AWK) '/ ____keep_pager/ { printf "-u%s ", $$3 }' > $@
+ 
+ unpaged-ldargs := -T $(link-script-dummy) --no-check-sections --gc-sections \
+-		 $(call ld-option,--no-warn-rwx-segments)
++		 $(link-ldflags-common)
+ unpaged-ldadd := $(objs) $(link-ldadd) $(libgcccore)
+ cleanfiles += $(link-out-dir)/unpaged.o
+ $(link-out-dir)/unpaged.o: $(link-out-dir)/unpaged_entries.txt
+@@ -99,7 +104,7 @@ $(link-out-dir)/init_entries.txt: $(link-out-dir)/all_objs.o
+ 		$(AWK) '/ ____keep_init/ { printf "-u%s ", $$3 }' > $@
+ 
+ init-ldargs := -T $(link-script-dummy) --no-check-sections --gc-sections \
+-	       $(call ld-option,--no-warn-rwx-segments)
++	       $(link-ldflags-common)
+ init-ldadd := $(link-objs-init) $(link-out-dir)/version.o  $(link-ldadd) \
+ 	      $(libgcccore)
+ cleanfiles += $(link-out-dir)/init.o
+diff --git a/ldelf/link.mk b/ldelf/link.mk
+index 64c8212a..bd49551e 100644
+--- a/ldelf/link.mk
++++ b/ldelf/link.mk
+@@ -20,6 +20,9 @@ link-ldflags += -z max-page-size=4096 # OP-TEE always uses 4K alignment
+ ifeq ($(CFG_CORE_BTI),y)
+ link-ldflags += $(call ld-option,-z force-bti) --fatal-warnings
+ endif
++ifeq ($(CFG_ARM32_$(sm)), y)
++link-ldflags += $(call ld-option,--no-warn-execstack)
++endif
+ link-ldflags += $(link-ldflags$(sm))
+ 
+ link-ldadd  = $(addprefix -L,$(libdirs))
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os/3.18/0010-add-note-GNU-stack-section.patch b/meta-arm/meta-arm/recipes-security/optee/optee-os/3.18/0010-add-note-GNU-stack-section.patch
new file mode 100644
index 0000000..95d5e67
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os/3.18/0010-add-note-GNU-stack-section.patch
@@ -0,0 +1,128 @@
+From 38bf606653ee08b10db6bb298e369cb3a9cdcda9 Mon Sep 17 00:00:00 2001
+From: Jerome Forissier <jerome.forissier@linaro.org>
+Date: Tue, 23 Aug 2022 12:31:46 +0000
+Subject: [PATCH] arm32: libutils, libutee, ta: add .note.GNU-stack section to
+
+ .S files
+
+When building for arm32 with GNU binutils 2.39, the linker outputs
+warnings when linking Trusted Applications:
+
+ arm-unknown-linux-uclibcgnueabihf-ld.bfd: warning: utee_syscalls_a32.o: missing .note.GNU-stack section implies executable stack
+ arm-unknown-linux-uclibcgnueabihf-ld.bfd: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
+
+We could silence the warning by adding the '-z execstack' option to the
+TA link flags, like we did in the parent commit for the TEE core and
+ldelf. Indeed, ldelf always allocates a non-executable piece of memory
+for the TA to use as a stack.
+
+However it seems preferable to comply with the common ELF practices in
+this case. A better fix is therefore to add the missing .note.GNU-stack
+sections in the assembler files.
+
+Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
+
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+Upstream-Status: Backport [https://github.com/OP-TEE/optee_os/pull/5499]
+
+---
+ lib/libutee/arch/arm/utee_syscalls_a32.S             | 2 ++
+ lib/libutils/ext/arch/arm/atomic_a32.S               | 2 ++
+ lib/libutils/ext/arch/arm/mcount_a32.S               | 2 ++
+ lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S  | 2 ++
+ lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S | 2 ++
+ lib/libutils/isoc/arch/arm/setjmp_a32.S              | 2 ++
+ ta/arch/arm/ta_entry_a32.S                           | 2 ++
+ 7 files changed, 14 insertions(+)
+
+diff --git a/lib/libutee/arch/arm/utee_syscalls_a32.S b/lib/libutee/arch/arm/utee_syscalls_a32.S
+index 6e621ca6..af405f62 100644
+--- a/lib/libutee/arch/arm/utee_syscalls_a32.S
++++ b/lib/libutee/arch/arm/utee_syscalls_a32.S
+@@ -7,6 +7,8 @@
+ #include <tee_syscall_numbers.h>
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+         .section .text
+         .balign 4
+         .code 32
+diff --git a/lib/libutils/ext/arch/arm/atomic_a32.S b/lib/libutils/ext/arch/arm/atomic_a32.S
+index eaef6914..2be73ffa 100644
+--- a/lib/libutils/ext/arch/arm/atomic_a32.S
++++ b/lib/libutils/ext/arch/arm/atomic_a32.S
+@@ -5,6 +5,8 @@
+ 
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /* uint32_t atomic_inc32(uint32_t *v); */
+ FUNC atomic_inc32 , :
+ 	ldrex	r1, [r0]
+diff --git a/lib/libutils/ext/arch/arm/mcount_a32.S b/lib/libutils/ext/arch/arm/mcount_a32.S
+index 51439a23..54dc3c02 100644
+--- a/lib/libutils/ext/arch/arm/mcount_a32.S
++++ b/lib/libutils/ext/arch/arm/mcount_a32.S
+@@ -7,6 +7,8 @@
+ 
+ #if defined(CFG_TA_GPROF_SUPPORT) || defined(CFG_FTRACE_SUPPORT)
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /*
+  * Convert return address to call site address by subtracting the size of the
+  * mcount call instruction (blx __gnu_mcount_nc).
+diff --git a/lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S b/lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S
+index a600c879..37ae9ec6 100644
+--- a/lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S
++++ b/lib/libutils/isoc/arch/arm/arm32_aeabi_divmod_a32.S
+@@ -5,6 +5,8 @@
+ 
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /*
+  * signed ret_idivmod_values(signed quot, signed rem);
+  * return quotient and remaining the EABI way (regs r0,r1)
+diff --git a/lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S b/lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S
+index 2dc50bc9..5c3353e2 100644
+--- a/lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S
++++ b/lib/libutils/isoc/arch/arm/arm32_aeabi_ldivmod_a32.S
+@@ -5,6 +5,8 @@
+ 
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /*
+  * __value_in_regs lldiv_t __aeabi_ldivmod( long long n, long long d)
+  */
+diff --git a/lib/libutils/isoc/arch/arm/setjmp_a32.S b/lib/libutils/isoc/arch/arm/setjmp_a32.S
+index 43ea5937..f8a0b70d 100644
+--- a/lib/libutils/isoc/arch/arm/setjmp_a32.S
++++ b/lib/libutils/isoc/arch/arm/setjmp_a32.S
+@@ -51,6 +51,8 @@
+ #define SIZE(x)
+ #endif
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /* Arm/Thumb interworking support:
+ 
+    The interworking scheme expects functions to use a BX instruction
+diff --git a/ta/arch/arm/ta_entry_a32.S b/ta/arch/arm/ta_entry_a32.S
+index d2f8a69d..cd9a12f9 100644
+--- a/ta/arch/arm/ta_entry_a32.S
++++ b/ta/arch/arm/ta_entry_a32.S
+@@ -5,6 +5,8 @@
+ 
+ #include <asm.S>
+ 
++	.section .note.GNU-stack,"",%progbits
++
+ /*
+  * This function is the bottom of the user call stack. Mark it as such so that
+  * the unwinding code won't try to go further down.
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os_%.bbappend b/meta-arm/meta-arm/recipes-security/optee/optee-os_%.bbappend
new file mode 100644
index 0000000..09650b9
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os_%.bbappend
@@ -0,0 +1,5 @@
+# Include Trusted Services Secure Partitions
+require optee-os-ts.inc
+
+# Conditionally include platform-specific Trusted Services related OP-TEE build parameters
+EXTRA_OEMAKE:append:qemuarm64-secureboot = "${@oe.utils.conditional('SP_PATHS', '', '', ' CFG_CORE_HEAP_SIZE=131072 CFG_TEE_BENCHMARK=n CFG_TEE_CORE_LOG_LEVEL=4 CFG_CORE_SEL1_SPMC=y ', d)}"
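The oe.utils.conditional guard compares SP_PATHS against the empty string, so the extra flags are only passed once at least one ts-* feature has populated SP_PATHS via optee-os-ts.inc; in that case the append is effectively (sketch):

    EXTRA_OEMAKE:append:qemuarm64-secureboot = " CFG_CORE_HEAP_SIZE=131072 CFG_TEE_BENCHMARK=n CFG_TEE_CORE_LOG_LEVEL=4 CFG_CORE_SEL1_SPMC=y "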
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os_3.14.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-os_3.14.0.bb
index 83b89c4..6400ac2 100644
--- a/meta-arm/meta-arm/recipes-security/optee/optee-os_3.14.0.bb
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os_3.14.0.bb
@@ -3,3 +3,8 @@
 SRCREV = "d21befa5e53eae9db469eba1685f5aa5c6f92c2f"
 
 DEPENDS = "python3-pycryptodome-native python3-pyelftools-native"
+
+SRC_URI:append = " \
+    file://3.14/0009-add-z-execstack.patch \
+    file://3.14/0010-add-note-GNU-stack-section.patch \
+   "
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os_3.17.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-os_3.17.0.bb
deleted file mode 100644
index 3e5e0a6..0000000
--- a/meta-arm/meta-arm/recipes-security/optee/optee-os_3.17.0.bb
+++ /dev/null
@@ -1,5 +0,0 @@
-require optee-os.inc
-
-SRCREV = "f9e550142dd4b33ee1112f5dd64ffa94ba79cefa"
-
-DEPENDS += "dtc-native"
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-os_3.18.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-os_3.18.0.bb
index 65d661f..f459efc 100644
--- a/meta-arm/meta-arm/recipes-security/optee/optee-os_3.18.0.bb
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-os_3.18.0.bb
@@ -3,3 +3,8 @@
 DEPENDS += "dtc-native"
 
 SRCREV = "1ee647035939e073a2e8dddb727c0f019cc035f1"
+SRC_URI:append = " \
+    file://0001-core-Define-section-attributes-for-clang.patch \
+    file://3.18/0009-add-z-execstack.patch \
+    file://3.18/0010-add-note-GNU-stack-section.patch \
+   "
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-test_3.17.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-test_3.17.0.bb
deleted file mode 100644
index 18870da..0000000
--- a/meta-arm/meta-arm/recipes-security/optee/optee-test_3.17.0.bb
+++ /dev/null
@@ -1,10 +0,0 @@
-require optee-test.inc
-
-SRC_URI:append = " \
-    file://musl-workaround.patch \
-   "
-SRCREV = "44a31d02379bd8e50762caa5e1592ad81e3339af"
-
-EXTRA_OEMAKE:append:libc-musl = " OPTEE_OPENSSL_EXPORT=${STAGING_INCDIR}"
-DEPENDS:append:libc-musl = " openssl"
-CFLAGS:append:libc-musl = " -Wno-error=deprecated-declarations"
diff --git a/meta-arm/meta-arm/recipes-security/optee/optee-test_3.18.0.bb b/meta-arm/meta-arm/recipes-security/optee/optee-test_3.18.0.bb
new file mode 100644
index 0000000..0570687
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/optee/optee-test_3.18.0.bb
@@ -0,0 +1,10 @@
+require optee-test.inc
+
+SRC_URI:append = " \
+    file://musl-workaround.patch \
+   "
+SRCREV = "da5282a011b40621a2cf7a296c11a35c833ed91b"
+
+EXTRA_OEMAKE:append:libc-musl = " OPTEE_OPENSSL_EXPORT=${STAGING_INCDIR}"
+DEPENDS:append:libc-musl = " openssl"
+CFLAGS:append:libc-musl = " -Wno-error=deprecated-declarations"
diff --git a/meta-arm/meta-arm/recipes-security/packagegroups/packagegroup-ts-tests.bb b/meta-arm/meta-arm/recipes-security/packagegroups/packagegroup-ts-tests.bb
new file mode 100644
index 0000000..72ba33f
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/packagegroups/packagegroup-ts-tests.bb
@@ -0,0 +1,26 @@
+SUMMARY = "Trusted Services test/demo linux tools"
+
+inherit packagegroup
+
+COMPATIBLE_HOST = "aarch64.*-linux"
+
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+PACKAGES = "${PN} ${PN}-psa"
+
+RDEPENDS:${PN} = "\
+    ts-demo \
+    ts-service-test \
+    ${@bb.utils.contains('MACHINE_FEATURES', 'ts-env-test', 'ts-remote-test', '' , d)} \
+    ${@bb.utils.contains('MACHINE_FEATURES', 'ts-smm-gateway', 'ts-uefi-test', '' , d)} \
+"
+
+SUMMARY:${PN}-psa = "PSA certification tests (psa-arch-test) for TS SPs"
+RDEPENDS:${PN}-psa = "\
+    ${@bb.utils.contains('MACHINE_FEATURES', 'ts-crypto', 'ts-psa-crypto-api-test', '' , d)} \
+    ${@bb.utils.contains('MACHINE_FEATURES', 'ts-its', 'ts-psa-its-api-test', '' , d)} \
+    ${@bb.utils.contains('MACHINE_FEATURES', 'ts-storage', 'ts-psa-ps-api-test', '' , d)} \
+    ${@bb.utils.contains('MACHINE_FEATURES', 'ts-attestation', 'ts-psa-iat-api-test', '' , d)} \
+    ${@bb.utils.contains('MACHINE_FEATURES', 'ts-se-proxy', \
+          'ts-psa-crypto-api-test ts-psa-its-api-test ts-psa-ps-api-test ts-psa-iat-api-test', '' , d)} \
+"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0004-correctly-find-headers-dir.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0004-correctly-find-headers-dir.patch
new file mode 100644
index 0000000..b73b5dc
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0004-correctly-find-headers-dir.patch
@@ -0,0 +1,30 @@
+From 1b9c8d4a7c9519c6085827da8be6546ce80ee711 Mon Sep 17 00:00:00 2001
+From: Anton Antonov <Anton.Antonov@arm.com>
+Date: Wed, 31 Aug 2022 17:05:14 +0100
+Subject: [PATCH 1/4] Allow to find libgcc headers
+
+Upstream-Status: Pending
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+---
+ external/newlib/newlib.cmake | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/external/newlib/newlib.cmake b/external/newlib/newlib.cmake
+index fff5e2a..13eb78c 100644
+--- a/external/newlib/newlib.cmake
++++ b/external/newlib/newlib.cmake
+@@ -82,7 +82,10 @@ message(STATUS "libgcc.a is used from ${LIBGCC_PATH}")
+ # Moreover the GCC specific header file include directory is also required.
+ # Specify LIBGCC_INCLUDE_DIRS in the command line to manually override the libgcc relative location below.
+ if(NOT DEFINED LIBGCC_INCLUDE_DIRS)
+-	get_filename_component(_TMP_VAR "${LIBGCC_PATH}" DIRECTORY)
++
++	# "libgcc.a" lib location in ${LIBGCC_PATH} might not contain a correct path to headers
++	# We can get the correct path if we ask for a location without a library name
++	gcc_get_lib_location(LIBRARY_NAME "" RES _TMP_VAR)
+ 	set(LIBGCC_INCLUDE_DIRS
+ 		"${_TMP_VAR}/include"
+ 		"${_TMP_VAR}/include-fixed" CACHE STRING "GCC specific include PATHs")
+-- 
+2.25.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0005-setting-sysroot-for-libgcc-lookup.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0005-setting-sysroot-for-libgcc-lookup.patch
new file mode 100644
index 0000000..b226bc4
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0005-setting-sysroot-for-libgcc-lookup.patch
@@ -0,0 +1,30 @@
+From 0fbf81d10e0f2aabb80105fabe4ffdf87e28e664 Mon Sep 17 00:00:00 2001
+From: Anton Antonov <Anton.Antonov@arm.com>
+Date: Wed, 31 Aug 2022 17:06:07 +0100
+Subject: [PATCH 2/4] Allow setting sysroot for libgcc lookup
+
+Explicitly pass the new variable LIBGCC_LOCATE_CFLAGS variable when searching
+for the compiler libraries.
+
+Upstream-Status: Pending
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+---
+ tools/cmake/compiler/GCC.cmake | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tools/cmake/compiler/GCC.cmake b/tools/cmake/compiler/GCC.cmake
+index 5a8fa59..d591c44 100644
+--- a/tools/cmake/compiler/GCC.cmake
++++ b/tools/cmake/compiler/GCC.cmake
+@@ -268,7 +268,7 @@ function(gcc_get_lib_location)
+ 	cmake_parse_arguments(MY "${options}" "${oneValueArgs}"
+ 						"${multiValueArgs}" ${ARGN} )
+ 	execute_process(
+-		COMMAND ${CMAKE_C_COMPILER} "--print-file-name=${MY_LIBRARY_NAME}"
++		COMMAND ${CMAKE_C_COMPILER} ${LIBGCC_LOCATE_CFLAGS} --print-file-name=${MY_LIBRARY_NAME}
+ 		OUTPUT_VARIABLE _RES
+ 		RESULT_VARIABLE _GCC_ERROR_CODE
+ 		OUTPUT_STRIP_TRAILING_WHITESPACE
+-- 
+2.25.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0006-applying-lowercase-project-convention.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0006-applying-lowercase-project-convention.patch
new file mode 100644
index 0000000..09f38c0
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0006-applying-lowercase-project-convention.patch
@@ -0,0 +1,32 @@
+From 37559c70443fe85e246f1f652045f0cd3c78012b Mon Sep 17 00:00:00 2001
+From: Vishnu Banavath <vishnu.banavath@arm.com>
+Date: Sat, 13 Nov 2021 07:47:44 +0000
+Subject: [PATCH] tools/cmake/common: applying lowercase project convention
+
+Lowercase convention should only apply on the paths inside TS
+source-code.
+Host build paths should not be lowercased. Otherwise, builds
+with uppercase paths will break.
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Abdellatif El Khlifi <abdellatif.elkhlifi@arm.com>
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+diff --git a/tools/cmake/common/AddPlatform.cmake b/tools/cmake/common/AddPlatform.cmake
+index ae34c6e..31bcd8c 100644
+--- a/tools/cmake/common/AddPlatform.cmake
++++ b/tools/cmake/common/AddPlatform.cmake
+@@ -37,8 +37,8 @@ function(add_platform)
+ 	set(TGT ${MY_PARAMS_TARGET} CACHE STRING "")
+ 
+ 	# Ensure file path conforms to lowercase project convention
+-	string(TOLOWER "${TS_PLATFORM_ROOT}/${TS_PLATFORM}/platform.cmake" _platdef)
+-	include(${_platdef})
++	string(TOLOWER "${TS_PLATFORM}/platform.cmake" _platdef)
++	include(${TS_PLATFORM_ROOT}/${_platdef})
+ 	set(CMAKE_CONFIGURE_DEPENDS ${_platdef})
+ 
+ 	unset(TGT CACHE)
+-- 
+2.25.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0009-PSA-CRYPTO-API-INCLUDE.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0009-PSA-CRYPTO-API-INCLUDE.patch
new file mode 100644
index 0000000..9054f1c
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0009-PSA-CRYPTO-API-INCLUDE.patch
@@ -0,0 +1,39 @@
+From 7f254bf14a97d14d19e61e2b8f8359bc238f3f1b Mon Sep 17 00:00:00 2001
+From: Anton Antonov <Anton.Antonov@arm.com>
+Date: Wed, 31 Aug 2022 17:07:51 +0100
+Subject: [PATCH 3/4] Always define PSA_CRYPTO_API_INCLUDE
+
+PSA_CRYPTO_API_INCLUDE is not defined when pre-built mbedtls was used.
+
+Upstream-Status: Pending
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+---
+ external/MbedTLS/MbedTLS.cmake | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/external/MbedTLS/MbedTLS.cmake b/external/MbedTLS/MbedTLS.cmake
+index 3193a07..f15e25d 100644
+--- a/external/MbedTLS/MbedTLS.cmake
++++ b/external/MbedTLS/MbedTLS.cmake
+@@ -96,8 +96,6 @@ if (NOT MBEDCRYPTO_LIB_FILE)
+ 	#Configure Mbed TLS to build only mbedcrypto lib
+ 	execute_process(COMMAND ${Python3_EXECUTABLE} scripts/config.py crypto WORKING_DIRECTORY ${MBEDTLS_SOURCE_DIR})
+ 
+-	# Advertise Mbed TLS as the provider of the psa crypto API
+-	set(PSA_CRYPTO_API_INCLUDE "${MBEDTLS_INSTALL_DIR}/include" CACHE STRING "PSA Crypto API include path")
+ 
+ 	include(${TS_ROOT}/tools/cmake/common/PropertyCopy.cmake)
+ 
+@@ -157,6 +155,9 @@ if (NOT MBEDCRYPTO_LIB_FILE)
+ 	set(MBEDCRYPTO_LIB_FILE "${MBEDTLS_INSTALL_DIR}/lib/${CMAKE_STATIC_LIBRARY_PREFIX}mbedcrypto${CMAKE_STATIC_LIBRARY_SUFFIX}")
+ endif()
+ 
++# Advertise Mbed TLS as the provider of the psa crypto API
++set(PSA_CRYPTO_API_INCLUDE "${MBEDTLS_INSTALL_DIR}/include" CACHE STRING "PSA Crypto API include path")
++
+ #Create an imported target to have clean abstraction in the build-system.
+ add_library(mbedcrypto STATIC IMPORTED)
+ set_property(DIRECTORY ${CMAKE_SOURCE_DIR} APPEND PROPERTY CMAKE_CONFIGURE_DEPENDS ${MBEDCRYPTO_LIB_FILE})
+-- 
+2.25.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0010-change-libts-to-export-CMake-package.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0010-change-libts-to-export-CMake-package.patch
new file mode 100644
index 0000000..169ef59
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0010-change-libts-to-export-CMake-package.patch
@@ -0,0 +1,346 @@
+From 0ff5a6163bd2760bb6a61fd5a185a2b92da5e86b Mon Sep 17 00:00:00 2001
+From: Gyorgy Szing <Gyorgy.Szing@arm.com>
+Date: Wed, 20 Jul 2022 12:36:52 +0000
+Subject: [PATCH] Fix: change libts to export a CMake package
+
+libts install content was not compatible to find_module() which made
+using a pre-built libts binary from CMake less than ideal.
+
+This change adds the missing files and updates export and install
+commands to make libts generate a proper CMake package.
+
+From now on an external project will be able to use find_module() to
+integrate libts into its build.
+
+Signed-off-by: Gyorgy Szing <Gyorgy.Szing@arm.com>
+Change-Id: I9e86e02030f6fb3c86af45252110f939cb82670c
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+
+diff --git a/components/messaging/ffa/libsp/component.cmake b/components/messaging/ffa/libsp/component.cmake
+index a21c630..ec4cf6c 100644
+--- a/components/messaging/ffa/libsp/component.cmake
++++ b/components/messaging/ffa/libsp/component.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -21,7 +21,7 @@
+ 	"${CMAKE_CURRENT_LIST_DIR}/sp_rxtx.c"
+ 	)
+ 
+-set_property(TARGET ${TGT} PROPERTY PUBLIC_HEADER
++set_property(TARGET ${TGT} APPEND PROPERTY PUBLIC_HEADER
+ 	${CMAKE_CURRENT_LIST_DIR}/include/ffa_api.h
+ 	${CMAKE_CURRENT_LIST_DIR}/include/ffa_api_defines.h
+ 	${CMAKE_CURRENT_LIST_DIR}/include/ffa_api_types.h
+@@ -49,5 +49,5 @@
+ target_include_directories(${TGT}
+ 	 PUBLIC
+ 		"$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/include>"
+-		"$<INSTALL_INTERFACE:include>"
++		"$<INSTALL_INTERFACE:${TS_ENV}/include">
+ 	)
+diff --git a/components/rpc/common/interface/component.cmake b/components/rpc/common/interface/component.cmake
+index d567602..e4b2477 100644
+--- a/components/rpc/common/interface/component.cmake
++++ b/components/rpc/common/interface/component.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2020, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -8,11 +8,12 @@
+ 	message(FATAL_ERROR "mandatory parameter TGT is not defined.")
+ endif()
+ 
+-set_property(TARGET ${TGT} PROPERTY RPC_CALLER_PUBLIC_HEADER_FILES
++set_property(TARGET ${TGT} APPEND PROPERTY PUBLIC_HEADER
+ 	"${CMAKE_CURRENT_LIST_DIR}/rpc_caller.h"
+ 	"${CMAKE_CURRENT_LIST_DIR}/rpc_status.h"
+ 	)
+ 
+ target_include_directories(${TGT} PUBLIC
+-	"${CMAKE_CURRENT_LIST_DIR}"
++	"$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}>"
++	"$<INSTALL_INTERFACE:${TS_ENV}/include>"
+ 	)
+diff --git a/components/service/locator/interface/component.cmake b/components/service/locator/interface/component.cmake
+index b5aefa3..84a4d75 100644
+--- a/components/service/locator/interface/component.cmake
++++ b/components/service/locator/interface/component.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2020, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -8,10 +8,11 @@
+ 	message(FATAL_ERROR "mandatory parameter TGT is not defined.")
+ endif()
+ 
+-set_property(TARGET ${TGT} PROPERTY SERVICE_LOCATOR_PUBLIC_HEADER_FILES
++set_property(TARGET ${TGT} APPEND PROPERTY PUBLIC_HEADER
+ 	"${CMAKE_CURRENT_LIST_DIR}/service_locator.h"
+ 	)
+ 
+ target_include_directories(${TGT} PUBLIC
+-	"${CMAKE_CURRENT_LIST_DIR}"
++	"$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}>"
++	"$<INSTALL_INTERFACE:${TS_ENV}/include>"
+ 	)
+diff --git a/deployments/libts/libts-import.cmake b/deployments/libts/libts-import.cmake
+index dcabc45..fcfc2ac 100644
+--- a/deployments/libts/libts-import.cmake
++++ b/deployments/libts/libts-import.cmake
+@@ -11,48 +11,18 @@
+ # CMake build file allows libts to be built and installed into the binary
+ # directory of the dependent.
+ #-------------------------------------------------------------------------------
+-
+-# Determine the number of processes to run while running parallel builds.
+-# Pass -DPROCESSOR_COUNT=<n> to cmake to override.
+-if(NOT DEFINED PROCESSOR_COUNT)
+-	include(ProcessorCount)
+-	ProcessorCount(PROCESSOR_COUNT)
+-	set(PROCESSOR_COUNT ${PROCESSOR_COUNT} CACHE STRING "Number of cores to use for parallel builds.")
++option(CFG_FORCE_PREBUILT_LIBTS Off)
++# Try to find a pre-build package.
++find_package(libts "1.0.0" QUIET)
++if(NOT libts_FOUND)
++	if (CFG_FORCE_PREBUILT_LIBTS)
++		string(CONCAT _msg "find_package() failed to find the \"libts\" package. Please set libts_DIR or"
++		                   " CMAKE_FIND_ROOT_PATH properly.\n"
++						   "If you wish to debug the search process pass -DCMAKE_FIND_DEBUG_MODE=ON to cmake.")
++		message(FATAL_ERROR ${_msg})
++	endif()
++	# If not successful, build libts as a sub-project.
++	add_subdirectory(${TS_ROOT}/deployments/libts/${TS_ENV} ${CMAKE_BINARY_DIR}/libts)
++else()
++	message(STATUS "Using prebuilt libts from ${libts_DIR}")
+ endif()
+-
+-set(LIBTS_INSTALL_PATH "${CMAKE_CURRENT_BINARY_DIR}/libts_install" CACHE PATH "libts installation directory")
+-set(LIBTS_PACKAGE_PATH "${LIBTS_INSTALL_PATH}/lib/cmake" CACHE PATH "libts CMake package directory")
+-set(LIBTS_SOURCE_DIR "${TS_ROOT}/deployments/libts/${TS_ENV}" CACHE PATH "libts source directory")
+-set(LIBTS_BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}/_deps/libts-build" CACHE PATH "libts binary directory")
+-
+-file(MAKE_DIRECTORY ${LIBTS_BINARY_DIR})
+-
+-#Configure the library
+-execute_process(COMMAND
+-	${CMAKE_COMMAND}
+-		-DCMAKE_INSTALL_PREFIX=${LIBTS_INSTALL_PATH}
+-		-GUnix\ Makefiles
+-		-S ${LIBTS_SOURCE_DIR}
+-		-B ${LIBTS_BINARY_DIR}
+-	RESULT_VARIABLE _exec_error
+-)
+-
+-if (_exec_error)
+-	message(FATAL_ERROR "Configuration step of libts failed with ${_exec_error}.")
+-endif()
+-
+-#Build the library
+-execute_process(COMMAND
+-	${CMAKE_COMMAND} --build ${LIBTS_BINARY_DIR} --parallel ${PROCESSOR_COUNT} --target install
+-	RESULT_VARIABLE _exec_error
+-)
+-
+-if (_exec_error)
+-	message(FATAL_ERROR "Build step of libts failed with ${_exec_error}.")
+-endif()
+-
+-# Import the built library
+-include(${LIBTS_INSTALL_PATH}/${TS_ENV}/lib/cmake/libts_targets.cmake)
+-add_library(libts SHARED IMPORTED)
+-set_property(TARGET libts PROPERTY INTERFACE_INCLUDE_DIRECTORIES "${LIBTS_INSTALL_PATH}/${TS_ENV}/include")
+-set_property(TARGET libts PROPERTY IMPORTED_LOCATION "${LIBTS_INSTALL_PATH}/${TS_ENV}/lib/${CMAKE_SHARED_LIBRARY_PREFIX}ts${CMAKE_SHARED_LIBRARY_SUFFIX}")
+diff --git a/deployments/libts/libts.cmake b/deployments/libts/libts.cmake
+index 6463ca1..7f278fd 100644
+--- a/deployments/libts/libts.cmake
++++ b/deployments/libts/libts.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -18,12 +18,11 @@
+ 					MAJOR _major MINOR _minor PATCH _patch)
+ set_target_properties(ts PROPERTIES VERSION "${_major}.${_minor}.${_patch}")
+ set_target_properties(ts PROPERTIES SOVERSION "${_major}")
+-unset(_major)
+-unset(_minor)
+-unset(_patch)
++
++add_library(libts::ts ALIAS ts)
+ 
+ #-------------------------------------------------------------------------------
+-#  Components that are common accross all deployments
++#  Components that are common across all deployments
+ #
+ #-------------------------------------------------------------------------------
+ add_components(
+@@ -53,19 +52,13 @@
+ #-------------------------------------------------------------------------------
+ include(${TS_ROOT}/tools/cmake/common/ExportLibrary.cmake REQUIRED)
+ 
+-# Select public header files to export
+-get_property(_rpc_caller_public_header_files TARGET ts
+-	PROPERTY RPC_CALLER_PUBLIC_HEADER_FILES
+-)
+-
+-get_property(_service_locator_public_header_files TARGET ts
+-	PROPERTY SERVICE_LOCATOR_PUBLIC_HEADER_FILES
+-)
++get_property(_tmp TARGET ts PROPERTY PUBLIC_HEADER)
+ 
+ # Exports library information in preparation for install
+ export_library(
+ 	TARGET "ts"
+ 	LIB_NAME "libts"
++	PKG_CONFIG_FILE "${CMAKE_CURRENT_LIST_DIR}/libtsConfig.cmake.in"
+ 	INTERFACE_FILES
+ 		${_rpc_caller_public_header_files}
+ 		${_service_locator_public_header_files}
+diff --git a/deployments/libts/libtsConfig.cmake.in b/deployments/libts/libtsConfig.cmake.in
+new file mode 100644
+index 0000000..4860135
+--- /dev/null
++++ b/deployments/libts/libtsConfig.cmake.in
+@@ -0,0 +1,10 @@
++#
++# Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
++#
++# SPDX-License-Identifier: BSD-3-Clause
++#
++
++@PACKAGE_INIT@
++
++include("${CMAKE_CURRENT_LIST_DIR}/libtsTargets.cmake")
++
+diff --git a/tools/cmake/common/ExportLibrary.cmake b/tools/cmake/common/ExportLibrary.cmake
+index fed4e75..4fcf481 100644
+--- a/tools/cmake/common/ExportLibrary.cmake
++++ b/tools/cmake/common/ExportLibrary.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -26,17 +26,29 @@
+ #]===]
+ function(export_library)
+ 	set(options  )
+-	set(oneValueArgs TARGET LIB_NAME)
++	set(oneValueArgs TARGET LIB_NAME PKG_CONFIG_FILE)
+ 	set(multiValueArgs INTERFACE_FILES)
+ 	cmake_parse_arguments(MY_PARAMS "${options}" "${oneValueArgs}"
+ 						"${multiValueArgs}" ${ARGN} )
+ 
+-	if(NOT DEFINED MY_PARAMS_TARGET)
+-		message(FATAL_ERROR "export_library: mandatory parameter TARGET not defined!")
++	foreach(_param IN ITEMS MY_PARAMS_TARGET MY_PARAMS_LIB_NAME MY_PARAMS_PKG_CONFIG_FILE)
++		if(NOT DEFINED ${_param})
++			list(APPEND _miss_params "${_param}" )
++		endif()
++	endforeach()
++
++	if (_miss_params)
++		string(REPLACE ";" ", " _miss_params "${_miss_params}")
++		message(FATAL_ERROR "export_library: mandatory parameter(s) ${_miss_params} not defined!")
+ 	endif()
+-	if(NOT DEFINED MY_PARAMS_LIB_NAME)
+-		message(FATAL_ERROR "export_library: mandatory parameter LIB_NAME not defined!")
+-	endif()
++
++
++	string(TOUPPER "${MY_PARAMS_LIB_NAME}" UC_LIB_NAME)
++	string(TOLOWER "${MY_PARAMS_LIB_NAME}" LC_LIB_NAME)
++	string(SUBSTRING "${UC_LIB_NAME}" 0 1 CAP_LIB_NAME)
++	string(SUBSTRING "${LC_LIB_NAME}" 1 -1 _tmp)
++	string(APPEND CAP_LIB_NAME "${_tmp}")
++
+ 
+ 	# Set default install location if none specified
+ 	if (CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
+@@ -55,6 +67,42 @@
+ 			DESTINATION ${TS_ENV}/include
+ 	)
+ 
++	# Create a config file package.
++	include(CMakePackageConfigHelpers)
++	get_target_property(_ver ${MY_PARAMS_TARGET} VERSION)
++	write_basic_package_version_file(
++		"${CMAKE_CURRENT_BINARY_DIR}/${LC_LIB_NAME}ConfigVersion.cmake"
++		VERSION "${_ver}"
++		COMPATIBILITY SameMajorVersion
++	)
++
++	# Create targets file.
++	export(
++		EXPORT
++			${MY_PARAMS_LIB_NAME}_targets
++		FILE
++			"${CMAKE_CURRENT_BINARY_DIR}/${MY_PARAMS_LIB_NAME}Targets.cmake"
++		NAMESPACE
++			${MY_PARAMS_LIB_NAME}::
++	)
++
++	# Finalize config file.
++	# Config package location relative to install root.
++	set(ConfigPackageLocation ${TS_ENV}/lib/cmake)
++	# Config package location ??
++	get_filename_component(ConfigPackageLocationRel ${ConfigPackageLocation} PATH)
++
++	get_filename_component(_configured_pkgcfg_name "${MY_PARAMS_PKG_CONFIG_FILE}" NAME_WLE)
++	set(_configured_pkgcfg_name "${CMAKE_CURRENT_BINARY_DIR}/${_configured_pkgcfg_name}")
++	configure_package_config_file(
++			"${MY_PARAMS_PKG_CONFIG_FILE}"
++			"${_configured_pkgcfg_name}"
++		PATH_VARS
++
++		INSTALL_DESTINATION
++			${ConfigPackageLocationRel}
++	)
++
+ 	# Install library header files files
+ 	install(
+ 		FILES ${MY_PARAMS_INTERFACE_FILES}
+@@ -64,9 +112,21 @@
+ 	# Install the export details
+ 	install(
+ 		EXPORT ${MY_PARAMS_LIB_NAME}_targets
+-		FILE ${MY_PARAMS_LIB_NAME}_targets.cmake
++		FILE ${MY_PARAMS_LIB_NAME}Targets.cmake
+ 		NAMESPACE ${MY_PARAMS_LIB_NAME}::
+-		DESTINATION ${TS_ENV}/lib/cmake
++		DESTINATION ${ConfigPackageLocation}
+ 		COMPONENT ${MY_PARAMS_LIB_NAME}
+ 	)
++
++
++	# install config and version files
++	install(
++		FILES
++			"${_configured_pkgcfg_name}"
++			"${CMAKE_CURRENT_BINARY_DIR}/${LC_LIB_NAME}ConfigVersion.cmake"
++		DESTINATION
++			${ConfigPackageLocation}
++		COMPONENT
++			${MY_PARAMS_LIB_NAME}
++	)
+ endfunction()
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0011-Adapt-deployments-to-libts-changes.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0011-Adapt-deployments-to-libts-changes.patch
new file mode 100644
index 0000000..34b1035
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0011-Adapt-deployments-to-libts-changes.patch
@@ -0,0 +1,197 @@
+From dfadca01ff028f9fc935937cdaf92b0effff2b90 Mon Sep 17 00:00:00 2001
+From: Gyorgy Szing <Gyorgy.Szing@arm.com>
+Date: Wed, 20 Jul 2022 16:49:39 +0000
+Subject: [PATCH] Adapt deployments to libts changes
+
+Update deployments and restore compatibility to libts build-system
+interface changes.
+
+Signed-off-by: Gyorgy Szing <Gyorgy.Szing@arm.com>
+Change-Id: Iffd38f92fe628a2a6aaff60224986f22ec3a8a2a
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+
+diff --git a/deployments/platform-inspect/platform-inspect.cmake b/deployments/platform-inspect/platform-inspect.cmake
+index ef4ba4b..b1b316d 100644
+--- a/deployments/platform-inspect/platform-inspect.cmake
++++ b/deployments/platform-inspect/platform-inspect.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -12,14 +12,14 @@
+ 
+ #-------------------------------------------------------------------------------
+ #  Use libts for locating and accessing trusted services. An appropriate version
+-#  of libts will be imported for the enviroment in which platform-inspect is
++#  of libts will be imported for the environment in which platform-inspect is
+ #  built.
+ #-------------------------------------------------------------------------------
+ include(${TS_ROOT}/deployments/libts/libts-import.cmake)
+-target_link_libraries(platform-inspect PRIVATE libts)
++target_link_libraries(platform-inspect PRIVATE libts::ts)
+ 
+ #-------------------------------------------------------------------------------
+-#  Components that are common accross all deployments
++#  Components that are common across all deployments
+ #
+ #-------------------------------------------------------------------------------
+ add_components(
+diff --git a/deployments/psa-api-test/psa-api-test.cmake b/deployments/psa-api-test/psa-api-test.cmake
+index d58620f..5c3469c 100644
+--- a/deployments/psa-api-test/psa-api-test.cmake
++++ b/deployments/psa-api-test/psa-api-test.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -12,14 +12,14 @@
+ 
+ #-------------------------------------------------------------------------------
+ #  Use libts for locating and accessing services. An appropriate version of
+-#  libts will be imported for the enviroment in which service tests are
++#  libts will be imported for the environment in which service tests are
+ #  deployed.
+ #-------------------------------------------------------------------------------
+ include(${TS_ROOT}/deployments/libts/libts-import.cmake)
+-target_link_libraries(${PROJECT_NAME} PRIVATE libts)
++target_link_libraries(${PROJECT_NAME} PRIVATE libts::ts)
+ 
+ #-------------------------------------------------------------------------------
+-#  Components that are common accross all deployments
++#  Components that are common across all deployments
+ #
+ #-------------------------------------------------------------------------------
+ add_components(
+diff --git a/deployments/ts-demo/ts-demo.cmake b/deployments/ts-demo/ts-demo.cmake
+index 3e7cca0..9fd8585 100644
+--- a/deployments/ts-demo/ts-demo.cmake
++++ b/deployments/ts-demo/ts-demo.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -13,11 +13,11 @@
+ 
+ #-------------------------------------------------------------------------------
+ #  Use libts for locating and accessing services. An appropriate version of
+-#  libts will be imported for the enviroment in which service tests are
++#  libts will be imported for the environment in which service tests are
+ #  deployed.
+ #-------------------------------------------------------------------------------
+ include(${TS_ROOT}/deployments/libts/libts-import.cmake)
+-target_link_libraries(ts-demo PRIVATE libts)
++target_link_libraries(ts-demo PRIVATE libts::ts)
+ 
+ #-------------------------------------------------------------------------------
+ #  Common main for all deployments
+@@ -28,7 +28,7 @@
+ )
+ 
+ #-------------------------------------------------------------------------------
+-#  Components that are common accross all deployments
++#  Components that are common across all deployments
+ #
+ #-------------------------------------------------------------------------------
+ add_components(
+diff --git a/deployments/ts-remote-test/ts-remote-test.cmake b/deployments/ts-remote-test/ts-remote-test.cmake
+index 0f35bb2..c310445 100644
+--- a/deployments/ts-remote-test/ts-remote-test.cmake
++++ b/deployments/ts-remote-test/ts-remote-test.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -13,11 +13,11 @@
+ 
+ #-------------------------------------------------------------------------------
+ #  Use libts for locating and accessing services. An appropriate version of
+-#  libts will be imported for the enviroment in which tests are
++#  libts will be imported for the environment in which tests are
+ #  deployed.
+ #-------------------------------------------------------------------------------
+ include(${TS_ROOT}/deployments/libts/libts-import.cmake)
+-target_link_libraries(ts-remote-test PRIVATE libts)
++target_link_libraries(ts-remote-test PRIVATE libts::ts)
+ 
+ #-------------------------------------------------------------------------------
+ #  Common main for all deployments
+@@ -28,7 +28,7 @@
+ )
+ 
+ #-------------------------------------------------------------------------------
+-#  Components that are common accross all deployments
++#  Components that are common across all deployments
+ #
+ #-------------------------------------------------------------------------------
+ add_components(
+diff --git a/deployments/ts-service-test/ts-service-test.cmake b/deployments/ts-service-test/ts-service-test.cmake
+index 4a8c59c..3e33757 100644
+--- a/deployments/ts-service-test/ts-service-test.cmake
++++ b/deployments/ts-service-test/ts-service-test.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -8,19 +8,19 @@
+ #-------------------------------------------------------------------------------
+ #  The base build file shared between deployments of 'ts-service-test' for
+ #  different environments.  Used for running end-to-end service-level tests
+-#  where test cases excerise trusted service client interfaces.
++#  where test cases exercise trusted service client interfaces.
+ #-------------------------------------------------------------------------------
+ 
+ #-------------------------------------------------------------------------------
+ #  Use libts for locating and accessing services. An appropriate version of
+-#  libts will be imported for the enviroment in which service tests are
++#  libts will be imported for the environment in which service tests are
+ #  deployed.
+ #-------------------------------------------------------------------------------
+ include(${TS_ROOT}/deployments/libts/libts-import.cmake)
+-target_link_libraries(ts-service-test PRIVATE libts)
++target_link_libraries(ts-service-test PRIVATE libts::ts)
+ 
+ #-------------------------------------------------------------------------------
+-#  Components that are common accross all deployments
++#  Components that are common across all deployments
+ #
+ #-------------------------------------------------------------------------------
+ add_components(
+diff --git a/deployments/uefi-test/uefi-test.cmake b/deployments/uefi-test/uefi-test.cmake
+index ea678d0..2f47891 100644
+--- a/deployments/uefi-test/uefi-test.cmake
++++ b/deployments/uefi-test/uefi-test.cmake
+@@ -1,5 +1,5 @@
+ #-------------------------------------------------------------------------------
+-# Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++# Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -18,7 +18,7 @@
+ #  deployed.
+ #-------------------------------------------------------------------------------
+ include(${TS_ROOT}/deployments/libts/libts-import.cmake)
+-target_link_libraries(uefi-test PRIVATE libts)
++target_link_libraries(uefi-test PRIVATE libts::ts)
+ 
+ #-------------------------------------------------------------------------------
+ #  Components that are common accross all deployments
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0012-psa-arch-test-toolchain.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0012-psa-arch-test-toolchain.patch
new file mode 100644
index 0000000..7d07fca
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0012-psa-arch-test-toolchain.patch
@@ -0,0 +1,40 @@
+From 1ce8fcde17a6d2c5cb2e00901d485c91eda776fd Mon Sep 17 00:00:00 2001
+From: Anton Antonov <Anton.Antonov@arm.com>
+Date: Wed, 31 Aug 2022 17:09:17 +0100
+Subject: [PATCH 4/4] Pass Yocto build settings to psa-arch-tests native build
+
+PSA-arch-tests need to build a native executable as a part of target build.
+The patch defines correct toolchain settings for native builds.
+
+Upstream-Status: Inappropriate [Yocto build specific change]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+---
+ .../psa_arch_tests/modify_attest_config.patch     | 15 +++++++++++++++
+ 1 file changed, 15 insertions(+)
+
+diff --git a/external/psa_arch_tests/modify_attest_config.patch b/external/psa_arch_tests/modify_attest_config.patch
+index ebe8c44..b5d5e88 100644
+--- a/external/psa_arch_tests/modify_attest_config.patch
++++ b/external/psa_arch_tests/modify_attest_config.patch
+@@ -11,3 +11,18 @@ index 6112ba7..1cdf581 100755
+  
+  /*
+   * Include of PSA defined Header files
++diff --git a/api-tests/tools/scripts/target_cfg/CMakeLists.txt b/api-tests/tools/scripts/target_cfg/CMakeLists.txt
++index 259eb9c..fec1fb8 100644
++--- a/api-tests/tools/scripts/target_cfg/CMakeLists.txt
+++++ b/api-tests/tools/scripts/target_cfg/CMakeLists.txt
++@@ -26,7 +26,9 @@ include("common/CMakeSettings")
++ include("common/Utils")
++ 
++ # Causes toolchain to be re-evaluated
++-unset(ENV{CC})
+++set(ENV{CC} $ENV{BUILD_CC})
+++set(ENV{CFLAGS} $ENV{BUILD_CFLAGS})
+++set(ENV{LDFLAGS} $ENV{BUILD_LDFLAGS})
++ 
++ # Let the CMake look for C compiler
++ project(TargetConfigGen LANGUAGES C)
+-- 
+2.25.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0017-Move-libsp-mocks-into-separate-component.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0017-Move-libsp-mocks-into-separate-component.patch
new file mode 100644
index 0000000..d6da370
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0017-Move-libsp-mocks-into-separate-component.patch
@@ -0,0 +1,349 @@
+From 2cd802030ab59787a34c0f6684c16848befabafa Mon Sep 17 00:00:00 2001
+From: Imre Kis <imre.kis@arm.com>
+Date: Wed, 15 Jun 2022 12:47:37 +0200
+Subject: [PATCH 17/24] Move libsp mocks into separate component
+
+Enable deployments to include libsp mocks in tests by simply adding
+the newly created libsp mock component.
+
+Signed-off-by: Imre Kis <imre.kis@arm.com>
+Change-Id: I40805fd49362c6cc71b5b34f9ba888d27ce01ed8
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ .../messaging/ffa/libsp/mock/component.cmake  | 27 ++++++++++
+ .../ffa/libsp/{test => mock}/mock_assert.cpp  |  0
+ .../ffa/libsp/{test => mock}/mock_assert.h    |  0
+ .../ffa/libsp/{test => mock}/mock_ffa_api.cpp |  0
+ .../ffa/libsp/{test => mock}/mock_ffa_api.h   |  0
+ .../{test => mock}/mock_ffa_internal_api.cpp  |  0
+ .../{test => mock}/mock_ffa_internal_api.h    |  0
+ .../ffa/libsp/{test => mock}/mock_sp_rxtx.cpp |  0
+ .../ffa/libsp/{test => mock}/mock_sp_rxtx.h   |  0
+ .../{ => mock}/test/test_mock_assert.cpp      |  0
+ .../{ => mock}/test/test_mock_ffa_api.cpp     |  0
+ .../test/test_mock_ffa_internal_api.cpp       |  0
+ .../{ => mock}/test/test_mock_sp_rxtx.cpp     |  0
+ components/messaging/ffa/libsp/tests.cmake    | 51 +++++++++++--------
+ .../mm_communicate/endpoint/sp/tests.cmake    |  6 +--
+ .../frontend/mm_communicate/tests.cmake       |  6 +--
+ 16 files changed, 64 insertions(+), 26 deletions(-)
+ create mode 100644 components/messaging/ffa/libsp/mock/component.cmake
+ rename components/messaging/ffa/libsp/{test => mock}/mock_assert.cpp (100%)
+ rename components/messaging/ffa/libsp/{test => mock}/mock_assert.h (100%)
+ rename components/messaging/ffa/libsp/{test => mock}/mock_ffa_api.cpp (100%)
+ rename components/messaging/ffa/libsp/{test => mock}/mock_ffa_api.h (100%)
+ rename components/messaging/ffa/libsp/{test => mock}/mock_ffa_internal_api.cpp (100%)
+ rename components/messaging/ffa/libsp/{test => mock}/mock_ffa_internal_api.h (100%)
+ rename components/messaging/ffa/libsp/{test => mock}/mock_sp_rxtx.cpp (100%)
+ rename components/messaging/ffa/libsp/{test => mock}/mock_sp_rxtx.h (100%)
+ rename components/messaging/ffa/libsp/{ => mock}/test/test_mock_assert.cpp (100%)
+ rename components/messaging/ffa/libsp/{ => mock}/test/test_mock_ffa_api.cpp (100%)
+ rename components/messaging/ffa/libsp/{ => mock}/test/test_mock_ffa_internal_api.cpp (100%)
+ rename components/messaging/ffa/libsp/{ => mock}/test/test_mock_sp_rxtx.cpp (100%)
+
+diff --git a/components/messaging/ffa/libsp/mock/component.cmake b/components/messaging/ffa/libsp/mock/component.cmake
+new file mode 100644
+index 0000000..03b8006
+--- /dev/null
++++ b/components/messaging/ffa/libsp/mock/component.cmake
+@@ -0,0 +1,27 @@
++#-------------------------------------------------------------------------------
++# Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
++#
++# SPDX-License-Identifier: BSD-3-Clause
++#
++#-------------------------------------------------------------------------------
++if (NOT DEFINED TGT)
++	message(FATAL_ERROR "mandatory parameter TGT is not defined.")
++endif()
++
++target_sources(${TGT} PRIVATE
++	"${CMAKE_CURRENT_LIST_DIR}/mock_assert.cpp"
++	"${CMAKE_CURRENT_LIST_DIR}/mock_ffa_api.cpp"
++	"${CMAKE_CURRENT_LIST_DIR}/mock_ffa_internal_api.cpp"
++	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_rxtx.cpp"
++	)
++
++target_include_directories(${TGT}
++	PUBLIC
++		${CMAKE_CURRENT_LIST_DIR}
++		${CMAKE_CURRENT_LIST_DIR}/../include
++)
++
++target_compile_definitions(${TGT}
++	PUBLIC
++		"ARM64=1"
++)
+\ No newline at end of file
+diff --git a/components/messaging/ffa/libsp/test/mock_assert.cpp b/components/messaging/ffa/libsp/mock/mock_assert.cpp
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/mock_assert.cpp
+rename to components/messaging/ffa/libsp/mock/mock_assert.cpp
+diff --git a/components/messaging/ffa/libsp/test/mock_assert.h b/components/messaging/ffa/libsp/mock/mock_assert.h
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/mock_assert.h
+rename to components/messaging/ffa/libsp/mock/mock_assert.h
+diff --git a/components/messaging/ffa/libsp/test/mock_ffa_api.cpp b/components/messaging/ffa/libsp/mock/mock_ffa_api.cpp
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/mock_ffa_api.cpp
+rename to components/messaging/ffa/libsp/mock/mock_ffa_api.cpp
+diff --git a/components/messaging/ffa/libsp/test/mock_ffa_api.h b/components/messaging/ffa/libsp/mock/mock_ffa_api.h
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/mock_ffa_api.h
+rename to components/messaging/ffa/libsp/mock/mock_ffa_api.h
+diff --git a/components/messaging/ffa/libsp/test/mock_ffa_internal_api.cpp b/components/messaging/ffa/libsp/mock/mock_ffa_internal_api.cpp
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/mock_ffa_internal_api.cpp
+rename to components/messaging/ffa/libsp/mock/mock_ffa_internal_api.cpp
+diff --git a/components/messaging/ffa/libsp/test/mock_ffa_internal_api.h b/components/messaging/ffa/libsp/mock/mock_ffa_internal_api.h
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/mock_ffa_internal_api.h
+rename to components/messaging/ffa/libsp/mock/mock_ffa_internal_api.h
+diff --git a/components/messaging/ffa/libsp/test/mock_sp_rxtx.cpp b/components/messaging/ffa/libsp/mock/mock_sp_rxtx.cpp
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/mock_sp_rxtx.cpp
+rename to components/messaging/ffa/libsp/mock/mock_sp_rxtx.cpp
+diff --git a/components/messaging/ffa/libsp/test/mock_sp_rxtx.h b/components/messaging/ffa/libsp/mock/mock_sp_rxtx.h
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/mock_sp_rxtx.h
+rename to components/messaging/ffa/libsp/mock/mock_sp_rxtx.h
+diff --git a/components/messaging/ffa/libsp/test/test_mock_assert.cpp b/components/messaging/ffa/libsp/mock/test/test_mock_assert.cpp
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/test_mock_assert.cpp
+rename to components/messaging/ffa/libsp/mock/test/test_mock_assert.cpp
+diff --git a/components/messaging/ffa/libsp/test/test_mock_ffa_api.cpp b/components/messaging/ffa/libsp/mock/test/test_mock_ffa_api.cpp
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/test_mock_ffa_api.cpp
+rename to components/messaging/ffa/libsp/mock/test/test_mock_ffa_api.cpp
+diff --git a/components/messaging/ffa/libsp/test/test_mock_ffa_internal_api.cpp b/components/messaging/ffa/libsp/mock/test/test_mock_ffa_internal_api.cpp
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/test_mock_ffa_internal_api.cpp
+rename to components/messaging/ffa/libsp/mock/test/test_mock_ffa_internal_api.cpp
+diff --git a/components/messaging/ffa/libsp/test/test_mock_sp_rxtx.cpp b/components/messaging/ffa/libsp/mock/test/test_mock_sp_rxtx.cpp
+similarity index 100%
+rename from components/messaging/ffa/libsp/test/test_mock_sp_rxtx.cpp
+rename to components/messaging/ffa/libsp/mock/test/test_mock_sp_rxtx.cpp
+diff --git a/components/messaging/ffa/libsp/tests.cmake b/components/messaging/ffa/libsp/tests.cmake
+index d851442..296ae46 100644
+--- a/components/messaging/ffa/libsp/tests.cmake
++++ b/components/messaging/ffa/libsp/tests.cmake
+@@ -1,5 +1,5 @@
+ #
+-# Copyright (c) 2020-2021, Arm Limited. All rights reserved.
++# Copyright (c) 2020-2022, Arm Limited. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -9,10 +9,11 @@ include(UnitTest)
+ unit_test_add_suite(
+ 	NAME libsp_mock_assert
+ 	SOURCES
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_assert.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/test_mock_assert.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_assert.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/test/test_mock_assert.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -21,10 +22,11 @@ unit_test_add_suite(
+ unit_test_add_suite(
+ 	NAME libsp_mock_ffa_internal_api
+ 	SOURCES
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_ffa_internal_api.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/test_mock_ffa_internal_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_ffa_internal_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/test/test_mock_ffa_internal_api.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -35,12 +37,13 @@ unit_test_add_suite(
+ 	SOURCES
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_ffa_api.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_ffa_memory_descriptors.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_ffa_internal_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_ffa_internal_api.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/ffa.c
+ 		${CMAKE_CURRENT_LIST_DIR}/ffa_memory_descriptors.c
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_assert.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_assert.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -49,10 +52,11 @@ unit_test_add_suite(
+ unit_test_add_suite(
+ 	NAME libsp_mock_ffa_api
+ 	SOURCES
+-		${CMAKE_CURRENT_LIST_DIR}/test/test_mock_ffa_api.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_ffa_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/test/test_mock_ffa_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_ffa_api.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -62,10 +66,11 @@ unit_test_add_suite(
+ 	NAME libsp_sp_rxtx
+ 	SOURCES
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_sp_rxtx.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_ffa_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_ffa_api.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/sp_rxtx.c
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -74,10 +79,11 @@ unit_test_add_suite(
+ unit_test_add_suite(
+ 	NAME libsp_mock_sp_rxtx
+ 	SOURCES
+-		${CMAKE_CURRENT_LIST_DIR}/test/test_mock_sp_rxtx.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_sp_rxtx.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/test/test_mock_sp_rxtx.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_sp_rxtx.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -88,10 +94,11 @@ unit_test_add_suite(
+ 	SOURCES
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_sp_discovery.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/sp_discovery.c
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_ffa_api.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_sp_rxtx.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_ffa_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_sp_rxtx.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -103,11 +110,12 @@ unit_test_add_suite(
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_sp_memory_management.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/sp_memory_management.c
+ 		${CMAKE_CURRENT_LIST_DIR}/ffa_memory_descriptors.c
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_assert.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_ffa_api.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_sp_rxtx.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_assert.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_ffa_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_sp_rxtx.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -119,9 +127,10 @@ unit_test_add_suite(
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_sp_memory_management_internals.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/test/sp_memory_management_internals.yml
+ 		${CMAKE_CURRENT_LIST_DIR}/ffa_memory_descriptors.c
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_assert.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_assert.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -131,10 +140,11 @@ unit_test_add_suite(
+ 	NAME libsp_sp_messaging
+ 	SOURCES
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_sp_messaging.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_ffa_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_ffa_api.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/sp_messaging.c
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+@@ -144,11 +154,12 @@ unit_test_add_suite(
+ 	NAME libsp_sp_messaging_with_routing_extension
+ 	SOURCES
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_sp_messaging.cpp
+-		${CMAKE_CURRENT_LIST_DIR}/test/mock_ffa_api.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_ffa_api.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/sp_messaging.c
+ 		${CMAKE_CURRENT_LIST_DIR}/ffa_direct_msg_routing_extension.c
+ 	INCLUDE_DIRECTORIES
+ 		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+diff --git a/components/rpc/mm_communicate/endpoint/sp/tests.cmake b/components/rpc/mm_communicate/endpoint/sp/tests.cmake
+index 318f14d..c68a0c7 100644
+--- a/components/rpc/mm_communicate/endpoint/sp/tests.cmake
++++ b/components/rpc/mm_communicate/endpoint/sp/tests.cmake
+@@ -1,5 +1,5 @@
+ #
+-# Copyright (c) 2021, Arm Limited. All rights reserved.
++# Copyright (c) 2021-2022, Arm Limited. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -13,12 +13,12 @@ unit_test_add_suite(
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_mm_communicate_call_ep.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/test/mock_mm_service.cpp
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_mock_mm_service.cpp
+-		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/test/mock_assert.cpp
++		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/mock/mock_assert.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${UNIT_TEST_PROJECT_PATH}
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/include
+-		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/test
++		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/rpc/common/interface
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+diff --git a/components/service/smm_variable/frontend/mm_communicate/tests.cmake b/components/service/smm_variable/frontend/mm_communicate/tests.cmake
+index d1f930c..50b0b9a 100644
+--- a/components/service/smm_variable/frontend/mm_communicate/tests.cmake
++++ b/components/service/smm_variable/frontend/mm_communicate/tests.cmake
+@@ -1,5 +1,5 @@
+ #
+-# Copyright (c) 2021, Arm Limited. All rights reserved.
++# Copyright (c) 2021-2022, Arm Limited. All rights reserved.
+ #
+ # SPDX-License-Identifier: BSD-3-Clause
+ #
+@@ -12,13 +12,13 @@ unit_test_add_suite(
+ 		${CMAKE_CURRENT_LIST_DIR}/smm_variable_mm_service.c
+ 		${CMAKE_CURRENT_LIST_DIR}/test/test_smm_variable_mm_service.cpp
+ 		${UNIT_TEST_PROJECT_PATH}/components/rpc/common/test/mock_rpc_interface.cpp
+-		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/test/mock_assert.cpp
++		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/mock/mock_assert.cpp
+ 	INCLUDE_DIRECTORIES
+ 		${UNIT_TEST_PROJECT_PATH}
+ 		${UNIT_TEST_PROJECT_PATH}/components/rpc/mm_communicate/endpoint/sp
+ 		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
+ 		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/include
+-		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/test
++		${UNIT_TEST_PROJECT_PATH}/components/messaging/ffa/libsp/mock
+ 		${UNIT_TEST_PROJECT_PATH}/components/rpc/common/interface
+ 	COMPILE_DEFINITIONS
+ 		-DARM64
+-- 
+2.17.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0018-Add-mock-for-libsp-sp_discovery.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0018-Add-mock-for-libsp-sp_discovery.patch
new file mode 100644
index 0000000..b385b7a
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0018-Add-mock-for-libsp-sp_discovery.patch
@@ -0,0 +1,339 @@
+From e9ff55c03e06c044eb9c13f2a3315bf7e35f3659 Mon Sep 17 00:00:00 2001
+From: Imre Kis <imre.kis@arm.com>
+Date: Fri, 17 Jun 2022 13:51:21 +0200
+Subject: [PATCH 18/24] Add mock for libsp/sp_discovery
+
+Add mock_sp_discovery for mocking sp_discovery part of libsp.
+
+Signed-off-by: Imre Kis <imre.kis@arm.com>
+Change-Id: I94460dc03dd6dcd27f6865f852cc9a0d85f4b583
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ .../messaging/ffa/libsp/mock/component.cmake  |   1 +
+ .../ffa/libsp/mock/mock_sp_discovery.cpp      | 109 +++++++++++++++++
+ .../ffa/libsp/mock/mock_sp_discovery.h        |  37 ++++++
+ .../mock/test/test_mock_sp_discovery.cpp      | 111 ++++++++++++++++++
+ components/messaging/ffa/libsp/tests.cmake    |  13 ++
+ 5 files changed, 271 insertions(+)
+ create mode 100644 components/messaging/ffa/libsp/mock/mock_sp_discovery.cpp
+ create mode 100644 components/messaging/ffa/libsp/mock/mock_sp_discovery.h
+ create mode 100644 components/messaging/ffa/libsp/mock/test/test_mock_sp_discovery.cpp
+
+diff --git a/components/messaging/ffa/libsp/mock/component.cmake b/components/messaging/ffa/libsp/mock/component.cmake
+index 03b8006..15db85a 100644
+--- a/components/messaging/ffa/libsp/mock/component.cmake
++++ b/components/messaging/ffa/libsp/mock/component.cmake
+@@ -12,6 +12,7 @@ target_sources(${TGT} PRIVATE
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_assert.cpp"
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_ffa_api.cpp"
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_ffa_internal_api.cpp"
++	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_discovery.cpp"
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_rxtx.cpp"
+ 	)
+ 
+diff --git a/components/messaging/ffa/libsp/mock/mock_sp_discovery.cpp b/components/messaging/ffa/libsp/mock/mock_sp_discovery.cpp
+new file mode 100644
+index 0000000..47f4ef7
+--- /dev/null
++++ b/components/messaging/ffa/libsp/mock/mock_sp_discovery.cpp
+@@ -0,0 +1,109 @@
++// SPDX-License-Identifier: BSD-3-Clause
++/*
++ * Copyright (c) 2022, Arm Limited. All rights reserved.
++ */
++
++#include <CppUTestExt/MockSupport.h>
++#include "mock_sp_discovery.h"
++
++void expect_sp_discovery_ffa_version_get(const uint16_t *major,
++					 const uint16_t *minor,
++					 sp_result result)
++{
++	mock()
++		.expectOneCall("sp_discovery_ffa_version_get")
++		.withOutputParameterReturning("major", major, sizeof(*major))
++		.withOutputParameterReturning("minor", minor, sizeof(*minor))
++		.andReturnValue(result);
++}
++
++sp_result sp_discovery_ffa_version_get(uint16_t *major, uint16_t *minor)
++{
++	return mock()
++		.actualCall("sp_discovery_ffa_version_get")
++		.withOutputParameter("major", major)
++		.withOutputParameter("minor", minor)
++		.returnIntValue();
++}
++
++void expect_sp_discovery_own_id_get(const uint16_t *id, sp_result result)
++{
++	mock()
++		.expectOneCall("sp_discovery_own_id_get")
++		.withOutputParameterReturning("id", id, sizeof(*id))
++		.andReturnValue(result);
++}
++
++sp_result sp_discovery_own_id_get(uint16_t *id)
++{
++	return mock()
++		.actualCall("sp_discovery_own_id_get")
++		.withOutputParameter("id", id)
++		.returnIntValue();
++}
++
++void expect_sp_discovery_partition_id_get(const struct sp_uuid *uuid,
++					  const uint16_t *id, sp_result result)
++{
++	mock()
++		.expectOneCall("sp_discovery_partition_id_get")
++		.withMemoryBufferParameter("uuid", (const unsigned char *)uuid,
++					   sizeof(*uuid))
++		.withOutputParameterReturning("id", id, sizeof(*id))
++		.andReturnValue(result);
++}
++
++sp_result sp_discovery_partition_id_get(const struct sp_uuid *uuid,
++					uint16_t *id)
++{
++	return mock()
++		.actualCall("sp_discovery_partition_id_get")
++		.withMemoryBufferParameter("uuid", (const unsigned char *)uuid,
++					   sizeof(*uuid))
++		.withOutputParameter("id", id)
++		.returnIntValue();
++}
++
++void expect_sp_discovery_partition_info_get(const struct sp_uuid *uuid,
++					  const struct sp_partition_info *info,
++					  sp_result result)
++{
++	mock()
++		.expectOneCall("sp_discovery_partition_info_get")
++		.withMemoryBufferParameter("uuid", (const unsigned char *)uuid,
++					   sizeof(*uuid))
++		.withOutputParameterReturning("info", info, sizeof(*info))
++		.andReturnValue(result);
++}
++
++sp_result sp_discovery_partition_info_get(const struct sp_uuid *uuid,
++					  struct sp_partition_info *info)
++{
++	return mock()
++		.actualCall("sp_discovery_partition_info_get")
++		.withMemoryBufferParameter("uuid", (const unsigned char *)uuid,
++					   sizeof(*uuid))
++		.withOutputParameter("info", info)
++		.returnIntValue();
++}
++
++void expect_sp_discovery_partition_info_get_all(const struct sp_partition_info info[],
++						const uint32_t *count,
++						sp_result result)
++{
++	mock()
++		.expectOneCall("sp_discovery_partition_info_get_all")
++		.withOutputParameterReturning("info", info, sizeof(*info) * *count)
++		.withOutputParameterReturning("count", count, sizeof(*count))
++		.andReturnValue(result);
++}
++
++sp_result sp_discovery_partition_info_get_all(struct sp_partition_info info[],
++					      uint32_t *count)
++{
++	return mock()
++		.actualCall("sp_discovery_partition_info_get_all")
++		.withOutputParameter("info", info)
++		.withOutputParameter("count", count)
++		.returnIntValue();
++}
+diff --git a/components/messaging/ffa/libsp/mock/mock_sp_discovery.h b/components/messaging/ffa/libsp/mock/mock_sp_discovery.h
+new file mode 100644
+index 0000000..a71ce18
+--- /dev/null
++++ b/components/messaging/ffa/libsp/mock/mock_sp_discovery.h
+@@ -0,0 +1,37 @@
++/* SPDX-License-Identifier: BSD-3-Clause */
++/*
++ * Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
++ */
++
++#ifndef LIBSP_MOCK_MOCK_SP_DISCOVERY_H_
++#define LIBSP_MOCK_MOCK_SP_DISCOVERY_H_
++
++#include "../include/sp_discovery.h"
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++void expect_sp_discovery_ffa_version_get(const uint16_t *major,
++					 const uint16_t *minor,
++					 sp_result result);
++
++void expect_sp_discovery_own_id_get(const uint16_t *id, sp_result result);
++
++void expect_sp_discovery_partition_id_get(const struct sp_uuid *uuid,
++					  const uint16_t *id, sp_result result);
++
++
++void expect_sp_discovery_partition_info_get(const struct sp_uuid *uuid,
++					    const struct sp_partition_info *info,
++					    sp_result result);
++
++void expect_sp_discovery_partition_info_get_all(const struct sp_partition_info info[],
++						const uint32_t *count,
++						sp_result result);
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* LIBSP_MOCK_MOCK_SP_DISCOVERY_H_ */
+diff --git a/components/messaging/ffa/libsp/mock/test/test_mock_sp_discovery.cpp b/components/messaging/ffa/libsp/mock/test/test_mock_sp_discovery.cpp
+new file mode 100644
+index 0000000..bb4bf07
+--- /dev/null
++++ b/components/messaging/ffa/libsp/mock/test/test_mock_sp_discovery.cpp
+@@ -0,0 +1,111 @@
++// SPDX-License-Identifier: BSD-3-Clause
++/*
++ * Copyright (c) 2022, Arm Limited. All rights reserved.
++ */
++
++#include <CppUTestExt/MockSupport.h>
++#include <CppUTest/TestHarness.h>
++#include "mock_sp_discovery.h"
++#include <stdint.h>
++#include <stdlib.h>
++
++
++
++
++TEST_GROUP(mock_sp_discovery) {
++	TEST_TEARDOWN()
++	{
++		mock().checkExpectations();
++		mock().clear();
++	}
++
++	static const sp_result result = -1;
++};
++
++TEST(mock_sp_discovery, sp_discovery_ffa_version_get)
++{
++	const uint16_t expected_major = 0xabcd;
++	const uint16_t expected_minor = 0xef01;
++	uint16_t major = 0, minor = 0;
++
++	expect_sp_discovery_ffa_version_get(&expected_major, &expected_minor,
++					    result);
++	LONGS_EQUAL(result, sp_discovery_ffa_version_get(&major, &minor));
++	UNSIGNED_LONGS_EQUAL(expected_major, major);
++	UNSIGNED_LONGS_EQUAL(expected_minor, minor);
++}
++
++TEST(mock_sp_discovery, sp_discovery_own_id_get)
++{
++	const uint16_t expected_id = 0x8765;
++	uint16_t id = 0;
++
++	expect_sp_discovery_own_id_get(&expected_id, result);
++	LONGS_EQUAL(result, sp_discovery_own_id_get(&id));
++	UNSIGNED_LONGS_EQUAL(expected_id, id);
++}
++
++TEST(mock_sp_discovery, sp_discovery_partition_id_get)
++{
++	const struct sp_uuid expected_uuid = {
++		.uuid = {0xfe, 0xdc, 0xba, 0x98, 0x76, 0x54, 0x32, 0x10,
++			 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef}};
++	const uint16_t expected_id = 0xc1ca;
++
++	struct sp_uuid uuid = expected_uuid;
++	uint16_t id = 0;
++
++	expect_sp_discovery_partition_id_get(&expected_uuid, &expected_id,
++					       result);
++	LONGS_EQUAL(result, sp_discovery_partition_id_get(&uuid, &id));
++	UNSIGNED_LONGS_EQUAL(expected_id, id);
++}
++
++TEST(mock_sp_discovery, sp_discovery_partition_info_get)
++{
++	const struct sp_uuid expected_uuid = {
++		.uuid = {0xfe, 0xdc, 0xba, 0x98, 0x76, 0x54, 0x32, 0x10,
++			 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef}};
++	const struct sp_partition_info expected_info = {
++		.partition_id = 0x1234,
++		.execution_context_count = 0xffff,
++		.supports_direct_requests = true,
++		.can_send_direct_requests = true,
++		.supports_indirect_requests = false
++	};
++
++	struct sp_uuid uuid = expected_uuid;
++	struct sp_partition_info info = {0};
++
++	expect_sp_discovery_partition_info_get(&expected_uuid, &expected_info,
++					       result);
++	LONGS_EQUAL(result, sp_discovery_partition_info_get(&uuid, &info));
++	MEMCMP_EQUAL(&expected_info, &info, sizeof(expected_info));
++}
++
++TEST(mock_sp_discovery, sp_discovery_partition_info_get_all)
++{
++	const uint32_t expected_count = 2;
++	const struct sp_partition_info expected_info[expected_count] = {{
++		.partition_id = 0x5678,
++		.execution_context_count = 0x1111,
++		.supports_direct_requests = false,
++		.can_send_direct_requests = false,
++		.supports_indirect_requests = true
++	}, {
++		.partition_id = 0x1234,
++		.execution_context_count = 0xffff,
++		.supports_direct_requests = true,
++		.can_send_direct_requests = true,
++		.supports_indirect_requests = false
++	}};
++
++	struct sp_partition_info info[expected_count] = {0};
++	uint32_t count = 0;
++
++	expect_sp_discovery_partition_info_get_all(expected_info,
++						   &expected_count, result);
++	LONGS_EQUAL(result, sp_discovery_partition_info_get_all(info, &count));
++	MEMCMP_EQUAL(&expected_info, &info, sizeof(expected_info));
++	UNSIGNED_LONGS_EQUAL(expected_count, count);
++}
+\ No newline at end of file
+diff --git a/components/messaging/ffa/libsp/tests.cmake b/components/messaging/ffa/libsp/tests.cmake
+index 296ae46..7b52248 100644
+--- a/components/messaging/ffa/libsp/tests.cmake
++++ b/components/messaging/ffa/libsp/tests.cmake
+@@ -104,6 +104,19 @@ unit_test_add_suite(
+ 		-DARM64
+ )
+ 
++unit_test_add_suite(
++	NAME libsp_mock_sp_discovery
++	SOURCES
++		${CMAKE_CURRENT_LIST_DIR}/mock/test/test_mock_sp_discovery.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_sp_discovery.cpp
++	INCLUDE_DIRECTORIES
++		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
++		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
++	COMPILE_DEFINITIONS
++		-DARM64
++)
++
+ unit_test_add_suite(
+ 	NAME libsp_sp_memory_management
+ 	SOURCES
+-- 
+2.17.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0019-Add-mock-for-libsp-sp_memory_management.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0019-Add-mock-for-libsp-sp_memory_management.patch
new file mode 100644
index 0000000..cf50389
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0019-Add-mock-for-libsp-sp_memory_management.patch
@@ -0,0 +1,977 @@
+From b592a22dbce521522931f8e3d79d6e16f2d334fa Mon Sep 17 00:00:00 2001
+From: Imre Kis <imre.kis@arm.com>
+Date: Fri, 17 Jun 2022 14:42:04 +0200
+Subject: [PATCH 19/24] Add mock for libsp/sp_memory_management
+
+Add mock_sp_memory_management for mocking the sp_memory_management part
+of libsp.
+
+Signed-off-by: Imre Kis <imre.kis@arm.com>
+Change-Id: I9f74142bc3568dfc59f820ec2c2af81deba0d0da
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ .../ffa/libsp/include/sp_memory_management.h  |   2 +-
+ .../messaging/ffa/libsp/mock/component.cmake  |   1 +
+ .../libsp/mock/mock_sp_memory_management.cpp  | 523 ++++++++++++++++++
+ .../libsp/mock/mock_sp_memory_management.h    |  98 ++++
+ .../test/test_mock_sp_memory_management.cpp   | 260 +++++++++
+ components/messaging/ffa/libsp/tests.cmake    |  13 +
+ 6 files changed, 896 insertions(+), 1 deletion(-)
+ create mode 100644 components/messaging/ffa/libsp/mock/mock_sp_memory_management.cpp
+ create mode 100644 components/messaging/ffa/libsp/mock/mock_sp_memory_management.h
+ create mode 100644 components/messaging/ffa/libsp/test/test_mock_sp_memory_management.cpp
+
+diff --git a/components/messaging/ffa/libsp/include/sp_memory_management.h b/components/messaging/ffa/libsp/include/sp_memory_management.h
+index 58a6cc1..ec76a3d 100644
+--- a/components/messaging/ffa/libsp/include/sp_memory_management.h
++++ b/components/messaging/ffa/libsp/include/sp_memory_management.h
+@@ -381,7 +381,7 @@ sp_result sp_memory_share_dynamic_is_supported(bool *supported);
+  *
+  * @param[in]      descriptor        The memory descriptor
+  * @param[in,out]  acc_desc          Access descriptor
+- * @param[in,out   regions           Memory region array
++ * @param[in,out]  regions           Memory region array
+  * @param[in]      in_region_count   Count of the specified regions, can be 0
+  * @param[in,out]  out_region_count  Count of the reserved space of in the
+  *                                   regions buffer for retrieved regions. After
+diff --git a/components/messaging/ffa/libsp/mock/component.cmake b/components/messaging/ffa/libsp/mock/component.cmake
+index 15db85a..eb0d28c 100644
+--- a/components/messaging/ffa/libsp/mock/component.cmake
++++ b/components/messaging/ffa/libsp/mock/component.cmake
+@@ -13,6 +13,7 @@ target_sources(${TGT} PRIVATE
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_ffa_api.cpp"
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_ffa_internal_api.cpp"
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_discovery.cpp"
++	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_memory_management.cpp"
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_rxtx.cpp"
+ 	)
+ 
+diff --git a/components/messaging/ffa/libsp/mock/mock_sp_memory_management.cpp b/components/messaging/ffa/libsp/mock/mock_sp_memory_management.cpp
+new file mode 100644
+index 0000000..9eb0aaa
+--- /dev/null
++++ b/components/messaging/ffa/libsp/mock/mock_sp_memory_management.cpp
+@@ -0,0 +1,523 @@
++// SPDX-License-Identifier: BSD-3-Clause
++/*
++ * Copyright (c) 2022, Arm Limited. All rights reserved.
++ */
++
++#include <CppUTestExt/MockSupport.h>
++#include "mock_sp_memory_management.h"
++
++void expect_sp_memory_donate(const struct sp_memory_descriptor *descriptor,
++			     const struct sp_memory_access_descriptor *acc_desc,
++			     const struct sp_memory_region regions[],
++			     uint32_t region_count, const uint64_t *handle,
++			     sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_donate")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc))
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameterReturning("handle", handle, sizeof(*handle))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_donate(struct sp_memory_descriptor *descriptor,
++			   struct sp_memory_access_descriptor *acc_desc,
++			   struct sp_memory_region regions[],
++			   uint32_t region_count, uint64_t *handle)
++{
++	return mock()
++		.actualCall("sp_memory_donate")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc))
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameter("handle", handle)
++		.returnIntValue();
++}
++
++void expect_sp_memory_donate_dynamic(const struct sp_memory_descriptor *descriptor,
++				     const struct sp_memory_access_descriptor *acc_desc,
++				     const struct sp_memory_region regions[],
++				     uint32_t region_count, const uint64_t *handle,
++				     sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_donate_dynamic")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc))
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameterReturning("handle", handle, sizeof(*handle))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_donate_dynamic(struct sp_memory_descriptor *descriptor,
++				   struct sp_memory_access_descriptor *acc_desc,
++				   struct sp_memory_region regions[],
++				   uint32_t region_count, uint64_t *handle,
++				   struct ffa_mem_transaction_buffer *buffer)
++{
++	if (buffer == NULL) { // LCOV_EXCL_BR_LINE
++		FAIL("ffa_mem_transaction_buffer is NULL"); // LCOV_EXCL_LINE
++	}
++
++	return mock()
++		.actualCall("sp_memory_donate_dynamic")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc))
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameter("handle", handle)
++		.returnIntValue();
++}
++
++void expect_sp_memory_donate_dynamic_is_supported(const bool *supported, sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_donate_dynamic_is_supported")
++		.withOutputParameterReturning("supported", supported, sizeof(*supported))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_donate_dynamic_is_supported(bool *supported)
++{
++	return mock()
++		.actualCall("sp_memory_donate_dynamic_is_supported")
++		.withOutputParameter("supported", supported)
++		.returnIntValue();
++}
++
++void expect_sp_memory_lend(const struct sp_memory_descriptor *descriptor,
++			   const struct sp_memory_access_descriptor acc_desc[],
++			   uint32_t acc_desc_count,
++			   const struct sp_memory_region regions[],
++			   uint32_t region_count, const uint64_t *handle,
++			   sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_lend")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc) * acc_desc_count)
++		.withUnsignedIntParameter("acc_desc_count", acc_desc_count)
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameterReturning("handle", handle, sizeof(*handle))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_lend(struct sp_memory_descriptor *descriptor,
++			 struct sp_memory_access_descriptor acc_desc[],
++			 uint32_t acc_desc_count,
++			 struct sp_memory_region regions[],
++			 uint32_t region_count, uint64_t *handle)
++{
++	return mock()
++		.actualCall("sp_memory_lend")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc) * acc_desc_count)
++		.withUnsignedIntParameter("acc_desc_count", acc_desc_count)
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameter("handle", handle)
++		.returnIntValue();
++}
++
++void expect_sp_memory_lend_dynamic(const struct sp_memory_descriptor *descriptor,
++				   const struct sp_memory_access_descriptor acc_desc[],
++				   uint32_t acc_desc_count,
++				   const struct sp_memory_region regions[],
++				   const uint32_t region_count, const uint64_t *handle,
++				   sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_lend_dynamic")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc) * acc_desc_count)
++		.withUnsignedIntParameter("acc_desc_count", acc_desc_count)
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameterReturning("handle", handle, sizeof(*handle))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_lend_dynamic(struct sp_memory_descriptor *descriptor,
++				 struct sp_memory_access_descriptor acc_desc[],
++				 uint32_t acc_desc_count,
++				 struct sp_memory_region regions[],
++				 uint32_t region_count, uint64_t *handle,
++				 struct ffa_mem_transaction_buffer *buffer)
++{
++	if (buffer == NULL) { // LCOV_EXCL_BR_LINE
++		FAIL("ffa_mem_transaction_buffer is NULL"); // LCOV_EXCL_LINE
++	}
++
++	return mock()
++		.actualCall("sp_memory_lend_dynamic")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc) * acc_desc_count)
++		.withUnsignedIntParameter("acc_desc_count", acc_desc_count)
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameter("handle", handle)
++		.returnIntValue();
++}
++
++void expect_sp_memory_lend_dynamic_is_supported(const bool *supported, sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_lend_dynamic_is_supported")
++		.withOutputParameterReturning("supported", supported, sizeof(*supported))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_lend_dynamic_is_supported(bool *supported)
++{
++	return mock()
++		.actualCall("sp_memory_lend_dynamic_is_supported")
++		.withOutputParameter("supported", supported)
++		.returnIntValue();
++}
++
++void expect_sp_memory_share(const struct sp_memory_descriptor *descriptor,
++			    const struct sp_memory_access_descriptor acc_desc[],
++			    uint32_t acc_desc_count,
++			    const struct sp_memory_region regions[],
++			    uint32_t region_count, const uint64_t *handle,
++			    sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_share")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc) * acc_desc_count)
++		.withUnsignedIntParameter("acc_desc_count", acc_desc_count)
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameterReturning("handle", handle, sizeof(*handle))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_share(struct sp_memory_descriptor *descriptor,
++			  struct sp_memory_access_descriptor acc_desc[],
++			  uint32_t acc_desc_count,
++			  struct sp_memory_region regions[],
++			  uint32_t region_count, uint64_t *handle)
++{
++	return mock()
++		.actualCall("sp_memory_share")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc) * acc_desc_count)
++		.withUnsignedIntParameter("acc_desc_count", acc_desc_count)
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameter("handle", handle)
++		.returnIntValue();
++}
++
++void expect_sp_memory_share_dynamic(const struct sp_memory_descriptor *descriptor,
++				    const struct sp_memory_access_descriptor acc_desc[],
++				    uint32_t acc_desc_count,
++				    const struct sp_memory_region regions[],
++				    uint32_t region_count, const uint64_t *handle,
++				    sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_share_dynamic")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc) * acc_desc_count)
++		.withUnsignedIntParameter("acc_desc_count", acc_desc_count)
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameterReturning("handle", handle, sizeof(*handle))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_share_dynamic(struct sp_memory_descriptor *descriptor,
++				  struct sp_memory_access_descriptor acc_desc[],
++				  uint32_t acc_desc_count,
++				  struct sp_memory_region regions[],
++				  uint32_t region_count, uint64_t *handle,
++				  struct ffa_mem_transaction_buffer *buffer)
++{
++	if (buffer == NULL) { // LCOV_EXCL_BR_LINE
++		FAIL("ffa_mem_transaction_buffer is NULL"); // LCOV_EXCL_LINE
++	}
++
++	return mock()
++		.actualCall("sp_memory_share_dynamic")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc) * acc_desc_count)
++		.withUnsignedIntParameter("acc_desc_count", acc_desc_count)
++		.withMemoryBufferParameter("regions", (const unsigned char *)regions,
++					   sizeof(*regions) * region_count)
++		.withUnsignedIntParameter("region_count", region_count)
++		.withOutputParameter("handle", handle)
++		.returnIntValue();
++}
++
++void expect_sp_memory_share_dynamic_is_supported(const bool *supported, sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_share_dynamic_is_supported")
++		.withOutputParameterReturning("supported", supported, sizeof(*supported))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_share_dynamic_is_supported(bool *supported)
++{
++	return mock()
++		.actualCall("sp_memory_share_dynamic_is_supported")
++		.withOutputParameter("supported", supported)
++		.returnIntValue();
++}
++
++void expect_sp_memory_retrieve(const struct sp_memory_descriptor *descriptor,
++			       const struct sp_memory_access_descriptor *req_acc_desc,
++			       const struct sp_memory_access_descriptor *resp_acc_desc,
++			       const struct sp_memory_region in_regions[],
++			       const struct sp_memory_region out_regions[],
++			       uint32_t in_region_count,
++			       const uint32_t *out_region_count, uint64_t handle,
++			       sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_retrieve")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("req_acc_desc", (const unsigned char *)req_acc_desc,
++					   sizeof(*req_acc_desc))
++		.withOutputParameterReturning("resp_acc_desc",
++					      (const unsigned char *)resp_acc_desc,
++					      sizeof(*resp_acc_desc))
++		.withMemoryBufferParameter("in_regions", (const unsigned char *)in_regions,
++					   sizeof(*in_regions) * in_region_count)
++		.withOutputParameterReturning("out_regions", out_regions,
++					      sizeof(*out_regions) * *out_region_count)
++		.withUnsignedIntParameter("in_region_count", in_region_count)
++		.withOutputParameterReturning("out_region_count", out_region_count,
++					      sizeof(*out_region_count))
++		.withUnsignedLongIntParameter("handle", handle)
++		.andReturnValue(result);
++
++}
++
++sp_result sp_memory_retrieve(struct sp_memory_descriptor *descriptor,
++			     struct sp_memory_access_descriptor *acc_desc,
++			     struct sp_memory_region regions[],
++			     uint32_t in_region_count,
++			     uint32_t *out_region_count, uint64_t handle)
++{
++	return mock()
++		.actualCall("sp_memory_retrieve")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("req_acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc))
++		.withOutputParameter("resp_acc_desc", acc_desc)
++		.withMemoryBufferParameter("in_regions", (const unsigned char *)regions,
++					   sizeof(*regions) * in_region_count)
++		.withOutputParameter("out_regions", regions)
++		.withUnsignedIntParameter("in_region_count", in_region_count)
++		.withOutputParameter("out_region_count", out_region_count)
++		.withUnsignedLongIntParameter("handle", handle)
++		.returnIntValue();
++}
++
++void expect_sp_memory_retrieve_dynamic(const struct sp_memory_descriptor *descriptor,
++				       const struct sp_memory_access_descriptor *req_acc_desc,
++				       const struct sp_memory_access_descriptor *resp_acc_desc,
++				       const struct sp_memory_region in_regions[],
++				       const struct sp_memory_region out_regions[],
++				       uint32_t in_region_count,
++				       const uint32_t *out_region_count, uint64_t handle,
++				       sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_retrieve_dynamic")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("req_acc_desc", (const unsigned char *)req_acc_desc,
++					   sizeof(*req_acc_desc))
++		.withOutputParameterReturning("resp_acc_desc",
++					      (const unsigned char *)resp_acc_desc,
++					      sizeof(*resp_acc_desc))
++		.withMemoryBufferParameter("in_regions", (const unsigned char *)in_regions,
++					   sizeof(*in_regions) * in_region_count)
++		.withOutputParameterReturning("out_regions", out_regions,
++					      sizeof(*out_regions) * *out_region_count)
++		.withUnsignedIntParameter("in_region_count", in_region_count)
++		.withOutputParameterReturning("out_region_count", out_region_count,
++					      sizeof(*out_region_count))
++		.withUnsignedLongIntParameter("handle", handle)
++		.andReturnValue(result);
++}
++
++sp_result
++sp_memory_retrieve_dynamic(struct sp_memory_descriptor *descriptor,
++			   struct sp_memory_access_descriptor *acc_desc,
++			   struct sp_memory_region regions[],
++			   uint32_t in_region_count, uint32_t *out_region_count,
++			   uint64_t handle,
++			   struct ffa_mem_transaction_buffer *buffer)
++{
++	if (buffer == NULL) { // LCOV_EXCL_BR_LINE
++		FAIL("ffa_mem_transaction_buffer is NULL"); // LCOV_EXCL_LINE
++	}
++
++	return mock()
++		.actualCall("sp_memory_retrieve_dynamic")
++		.withMemoryBufferParameter("descriptor", (const unsigned char *)descriptor,
++					   sizeof(*descriptor))
++		.withMemoryBufferParameter("req_acc_desc", (const unsigned char *)acc_desc,
++					   sizeof(*acc_desc))
++		.withOutputParameter("resp_acc_desc", acc_desc)
++		.withMemoryBufferParameter("in_regions", (const unsigned char *)regions,
++					   sizeof(*regions) * in_region_count)
++		.withOutputParameter("out_regions", regions)
++		.withUnsignedIntParameter("in_region_count", in_region_count)
++		.withOutputParameter("out_region_count", out_region_count)
++		.withUnsignedLongIntParameter("handle", handle)
++		.returnIntValue();
++}
++
++void expect_sp_memory_retrieve_dynamic_is_supported(const bool *supported, sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_retrieve_dynamic_is_supported")
++		.withOutputParameterReturning("supported", supported, sizeof(*supported))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_retrieve_dynamic_is_supported(bool *supported)
++{
++	return mock()
++		.actualCall("sp_memory_retrieve_dynamic_is_supported")
++		.withOutputParameter("supported", supported)
++		.returnIntValue();
++}
++
++void expect_sp_memory_relinquish(uint64_t handle, const uint16_t endpoints[],
++			         uint32_t endpoint_count,
++			         const struct sp_memory_transaction_flags *flags,
++				 sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_relinquish")
++		.withUnsignedLongIntParameter("handle", handle)
++		.withMemoryBufferParameter("endpoints", (const unsigned char *)endpoints,
++					   sizeof(*endpoints) * endpoint_count)
++		.withMemoryBufferParameter("flags", (const unsigned char *)flags, sizeof(*flags))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_relinquish(uint64_t handle, const uint16_t endpoints[],
++			       uint32_t endpoint_count,
++			       struct sp_memory_transaction_flags *flags)
++{
++	return mock()
++		.actualCall("sp_memory_relinquish")
++		.withUnsignedLongIntParameter("handle", handle)
++		.withMemoryBufferParameter("endpoints", (const unsigned char *)endpoints,
++					   sizeof(*endpoints) * endpoint_count)
++		.withMemoryBufferParameter("flags", (const unsigned char *)flags, sizeof(*flags))
++		.returnIntValue();
++}
++
++void expect_sp_memory_reclaim(uint64_t handle, uint32_t flags, sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_reclaim")
++		.withUnsignedLongIntParameter("handle", handle)
++		.withUnsignedIntParameter("flags", flags)
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_reclaim(uint64_t handle, uint32_t flags)
++{
++	return mock()
++		.actualCall("sp_memory_reclaim")
++		.withUnsignedLongIntParameter("handle", handle)
++		.withUnsignedIntParameter("flags", flags)
++		.returnIntValue();
++}
++
++void expect_sp_memory_permission_get(const void *base_address, const struct sp_mem_perm *mem_perm,
++				     sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_permission_get")
++		.withConstPointerParameter("base_address", base_address)
++		.withOutputParameterReturning("mem_perm", mem_perm,
++					      sizeof(*mem_perm))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_permission_get(const void *base_address,
++				   struct sp_mem_perm *mem_perm)
++{
++	return mock()
++		.actualCall("sp_memory_permission_get")
++		.withConstPointerParameter("base_address", base_address)
++		.withOutputParameter("mem_perm", mem_perm)
++		.returnIntValue();
++}
++
++void expect_sp_memory_permission_set(const void *base_address, size_t region_size,
++				     const struct sp_mem_perm *mem_perm, sp_result result)
++{
++	mock()
++		.expectOneCall("sp_memory_permission_set")
++		.withConstPointerParameter("base_address", base_address)
++		.withUnsignedLongIntParameter("region_size", region_size)
++		.withMemoryBufferParameter("mem_perm", (const unsigned char *)mem_perm,
++					   sizeof(*mem_perm))
++		.andReturnValue(result);
++}
++
++sp_result sp_memory_permission_set(const void *base_address, size_t region_size,
++				   const struct sp_mem_perm *mem_perm)
++{
++	return mock()
++		.actualCall("sp_memory_permission_set")
++		.withConstPointerParameter("base_address", base_address)
++		.withUnsignedLongIntParameter("region_size", region_size)
++		.withMemoryBufferParameter("mem_perm", (const unsigned char *)mem_perm,
++					   sizeof(*mem_perm))
++		.returnIntValue();
++}
+diff --git a/components/messaging/ffa/libsp/mock/mock_sp_memory_management.h b/components/messaging/ffa/libsp/mock/mock_sp_memory_management.h
+new file mode 100644
+index 0000000..458d2af
+--- /dev/null
++++ b/components/messaging/ffa/libsp/mock/mock_sp_memory_management.h
+@@ -0,0 +1,98 @@
++/* SPDX-License-Identifier: BSD-3-Clause */
++/*
++ * Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
++ */
++
++#ifndef LIBSP_MOCK_MOCK_SP_MEMORY_MANAGEMENT_H_
++#define LIBSP_MOCK_MOCK_SP_MEMORY_MANAGEMENT_H_
++
++#include "../include/sp_memory_management.h"
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++void expect_sp_memory_donate(const struct sp_memory_descriptor *descriptor,
++			     const struct sp_memory_access_descriptor *acc_desc,
++			     const struct sp_memory_region regions[],
++			     uint32_t region_count, const uint64_t *handle,
++			     sp_result result);
++
++void expect_sp_memory_donate_dynamic(const struct sp_memory_descriptor *descriptor,
++				     const struct sp_memory_access_descriptor *acc_desc,
++				     const struct sp_memory_region regions[],
++				     uint32_t region_count, const uint64_t *handle,
++				     sp_result result);
++
++void expect_sp_memory_donate_dynamic_is_supported(const bool *supported, sp_result result);
++
++void expect_sp_memory_lend(const struct sp_memory_descriptor *descriptor,
++			   const struct sp_memory_access_descriptor acc_desc[],
++			   uint32_t acc_desc_count,
++			   const struct sp_memory_region regions[],
++			   uint32_t region_count, const uint64_t *handle,
++			   sp_result result);
++
++void expect_sp_memory_lend_dynamic(const struct sp_memory_descriptor *descriptor,
++				   const struct sp_memory_access_descriptor acc_desc[],
++				   uint32_t acc_desc_count,
++				   const struct sp_memory_region regions[],
++				   const uint32_t region_count, const uint64_t *handle,
++				   sp_result result);
++
++void expect_sp_memory_lend_dynamic_is_supported(const bool *supported, sp_result result);
++
++void expect_sp_memory_share(const struct sp_memory_descriptor *descriptor,
++			    const struct sp_memory_access_descriptor acc_desc[],
++			    uint32_t acc_desc_count,
++			    const struct sp_memory_region regions[],
++			    uint32_t region_count, const uint64_t *handle,
++			    sp_result result);
++
++void expect_sp_memory_share_dynamic(const struct sp_memory_descriptor *descriptor,
++				    const struct sp_memory_access_descriptor acc_desc[],
++				    uint32_t acc_desc_count,
++				    const struct sp_memory_region regions[],
++				    uint32_t region_count, const uint64_t *handle,
++				    sp_result result);
++
++void expect_sp_memory_share_dynamic_is_supported(const bool *supported, sp_result result);
++
++void expect_sp_memory_retrieve(const struct sp_memory_descriptor *descriptor,
++			       const struct sp_memory_access_descriptor *req_acc_desc,
++			       const struct sp_memory_access_descriptor *resp_acc_desc,
++			       const struct sp_memory_region in_regions[],
++			       const struct sp_memory_region out_regions[],
++			       uint32_t in_region_count,
++			       const uint32_t *out_region_count, uint64_t handle,
++			       sp_result result);
++
++void expect_sp_memory_retrieve_dynamic(const struct sp_memory_descriptor *descriptor,
++				       const struct sp_memory_access_descriptor *req_acc_desc,
++				       const struct sp_memory_access_descriptor *resp_acc_desc,
++				       const struct sp_memory_region in_regions[],
++				       const struct sp_memory_region out_regions[],
++				       uint32_t in_region_count,
++				       const uint32_t *out_region_count, uint64_t handle,
++				       sp_result result);
++
++void expect_sp_memory_retrieve_dynamic_is_supported(const bool *supported, sp_result result);
++
++void expect_sp_memory_relinquish(uint64_t handle, const uint16_t endpoints[],
++				 uint32_t endpoint_count,
++				 const struct sp_memory_transaction_flags *flags,
++				 sp_result result);
++
++void expect_sp_memory_reclaim(uint64_t handle, uint32_t flags, sp_result result);
++
++void expect_sp_memory_permission_get(const void *base_address, const struct sp_mem_perm *mem_perm,
++				     sp_result result);
++
++void expect_sp_memory_permission_set(const void *base_address, size_t region_size,
++				     const struct sp_mem_perm *mem_perm, sp_result result);
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* LIBSP_MOCK_MOCK_SP_MEMORY_MANAGEMENT_H_ */
+diff --git a/components/messaging/ffa/libsp/test/test_mock_sp_memory_management.cpp b/components/messaging/ffa/libsp/test/test_mock_sp_memory_management.cpp
+new file mode 100644
+index 0000000..387b50f
+--- /dev/null
++++ b/components/messaging/ffa/libsp/test/test_mock_sp_memory_management.cpp
+@@ -0,0 +1,260 @@
++// SPDX-License-Identifier: BSD-3-Clause
++/*
++ * Copyright (c) 2022, Arm Limited. All rights reserved.
++ */
++
++#include <CppUTestExt/MockSupport.h>
++#include <CppUTest/TestHarness.h>
++#include "mock_sp_memory_management.h"
++#include <stdint.h>
++#include <stdlib.h>
++#include <string.h>
++
++static const struct sp_memory_descriptor expected_descriptor = {
++	.sender_id = 0xfedc,
++	.memory_type = sp_memory_type_normal_memory,
++	.mem_region_attr = {.normal_memory = {
++		.cacheability = sp_cacheability_write_back,
++		.shareability = sp_shareability_inner_shareable
++	}},
++	.flags = {
++		.zero_memory = true,
++		.operation_time_slicing = true,
++		.zero_memory_after_relinquish = true,
++		.transaction_type = sp_memory_transaction_type_relayer_specified,
++		.alignment_hint = 0x2000
++	},
++	.tag = 0x0123456789abcdefULL
++};
++static const struct sp_memory_access_descriptor expected_acc_desc[] = {
++	{
++		.receiver_id = 0xfafa,
++		.instruction_access = sp_instruction_access_executable,
++		.data_access = sp_data_access_read_only
++	}, {
++		.receiver_id = 0xc1ca,
++		.instruction_access = sp_instruction_access_not_executable,
++		.data_access = sp_data_access_read_write
++	}
++};
++static const struct sp_memory_region expected_regions[2] = {
++	{.address = (void *)0x01234567, .page_count = 0x89abcdef},
++	{.address = (void *)0x12345670, .page_count = 0x9abcdef8},
++};
++static const uint64_t expected_handle = 0xabcdef0123456789ULL;
++static const void *expected_address = (const void *)0x234567879;
++static const struct sp_mem_perm expected_mem_perm = {
++	.data_access = sp_mem_perm_data_perm_read_write,
++	.instruction_access = sp_mem_perm_instruction_perm_non_executable,
++};
++
++TEST_GROUP(mock_sp_memory_management)
++{
++	TEST_SETUP()
++	{
++		memset(&descriptor, 0x00, sizeof(descriptor));
++		memset(&acc_desc, 0x00, sizeof(acc_desc));
++		memset(&regions, 0x00, sizeof(regions));
++		handle = 0;
++		supported = false;
++	}
++
++	TEST_TEARDOWN()
++	{
++		mock().checkExpectations();
++		mock().clear();
++	}
++
++	struct sp_memory_descriptor descriptor;
++	struct sp_memory_access_descriptor acc_desc[2];
++	struct sp_memory_region regions[2];
++	uint64_t handle;
++	bool supported;
++	struct ffa_mem_transaction_buffer tr_buffer;
++
++	static const sp_result result = -1;
++};
++
++TEST(mock_sp_memory_management, sp_memory_donate)
++{
++	descriptor = expected_descriptor;
++	acc_desc[0] = expected_acc_desc[0];
++	memcpy(regions, expected_regions, sizeof(regions));
++
++	expect_sp_memory_donate(&expected_descriptor, expected_acc_desc, expected_regions, 2,
++				&expected_handle, result);
++	LONGS_EQUAL(result, sp_memory_donate(&descriptor, acc_desc, regions, 2, &handle));
++
++	UNSIGNED_LONGLONGS_EQUAL(expected_handle, handle);
++}
++
++TEST(mock_sp_memory_management, sp_memory_donate_dynamic)
++{
++	descriptor = expected_descriptor;
++	acc_desc[0] = expected_acc_desc[0];
++	memcpy(regions, expected_regions, sizeof(regions));
++
++	expect_sp_memory_donate_dynamic(&expected_descriptor, expected_acc_desc, expected_regions,
++					2, &expected_handle, result);
++	LONGS_EQUAL(result, sp_memory_donate_dynamic(&descriptor, acc_desc, regions, 2, &handle,
++						     &tr_buffer));
++
++	UNSIGNED_LONGLONGS_EQUAL(expected_handle, handle);
++}
++
++TEST(mock_sp_memory_management, sp_memory_donate_dynamic_is_supported)
++{
++	const bool expected_supported = true;
++	expect_sp_memory_donate_dynamic_is_supported(&expected_supported, result);
++	LONGS_EQUAL(result, sp_memory_donate_dynamic_is_supported(&supported));
++	CHECK_TRUE(supported);
++}
++
++TEST(mock_sp_memory_management, sp_memory_lend)
++{
++	descriptor = expected_descriptor;
++	memcpy(acc_desc, expected_acc_desc, sizeof(acc_desc));
++	memcpy(regions, expected_regions, sizeof(regions));
++
++	expect_sp_memory_lend(&descriptor, acc_desc, 2, regions, 2, &expected_handle, result);
++	LONGS_EQUAL(result, sp_memory_lend(&descriptor, acc_desc, 2, regions, 2, &handle));
++	UNSIGNED_LONGLONGS_EQUAL(expected_handle, handle);
++}
++
++TEST(mock_sp_memory_management, sp_memory_lend_dynamic)
++{
++	descriptor = expected_descriptor;
++	memcpy(acc_desc, expected_acc_desc, sizeof(acc_desc));
++	memcpy(regions, expected_regions, sizeof(regions));
++
++	expect_sp_memory_lend_dynamic(&descriptor, acc_desc, 2, regions, 2, &expected_handle,
++				      result);
++	LONGS_EQUAL(result, sp_memory_lend_dynamic(&descriptor, acc_desc, 2, regions, 2, &handle,
++						   &tr_buffer));
++	UNSIGNED_LONGLONGS_EQUAL(expected_handle, handle);
++}
++
++TEST(mock_sp_memory_management, sp_memory_lend_dynamic_is_supported)
++{
++	const bool expected_supported = true;
++	expect_sp_memory_lend_dynamic_is_supported(&expected_supported, result);
++	LONGS_EQUAL(result, sp_memory_lend_dynamic_is_supported(&supported));
++	CHECK_TRUE(supported);
++}
++
++TEST(mock_sp_memory_management, sp_memory_share)
++{
++	descriptor = expected_descriptor;
++	memcpy(acc_desc, expected_acc_desc, sizeof(acc_desc));
++	memcpy(regions, expected_regions, sizeof(regions));
++
++	expect_sp_memory_share(&descriptor, acc_desc, 2, regions, 2, &expected_handle, result);
++	LONGS_EQUAL(result, sp_memory_share(&descriptor, acc_desc, 2, regions, 2, &handle));
++	UNSIGNED_LONGLONGS_EQUAL(expected_handle, handle);
++}
++
++TEST(mock_sp_memory_management, sp_memory_share_dynamic)
++{
++	descriptor = expected_descriptor;
++	memcpy(acc_desc, expected_acc_desc, sizeof(acc_desc));
++	memcpy(regions, expected_regions, sizeof(regions));
++
++	expect_sp_memory_share_dynamic(&descriptor, acc_desc, 2, regions, 2, &expected_handle,
++				      result);
++	LONGS_EQUAL(result, sp_memory_share_dynamic(&descriptor, acc_desc, 2, regions, 2, &handle,
++						   &tr_buffer));
++	UNSIGNED_LONGLONGS_EQUAL(expected_handle, handle);
++}
++
++TEST(mock_sp_memory_management, sp_memory_share_dynamic_is_supported)
++{
++	const bool expected_supported = true;
++	expect_sp_memory_share_dynamic_is_supported(&expected_supported, result);
++	LONGS_EQUAL(result, sp_memory_share_dynamic_is_supported(&supported));
++	CHECK_TRUE(supported);
++}
++
++TEST(mock_sp_memory_management, sp_memory_retrieve)
++{
++	const uint32_t expected_region_count = 1;
++	struct sp_memory_access_descriptor acc_desc = expected_acc_desc[0];
++	struct sp_memory_region region = expected_regions[0];
++	uint32_t out_region_count = 0;
++
++	descriptor = expected_descriptor;
++
++	expect_sp_memory_retrieve(&expected_descriptor, &expected_acc_desc[0],
++				  &expected_acc_desc[1], &expected_regions[0],
++				  &expected_regions[1], 1, &expected_region_count, expected_handle,
++				  result);
++	LONGS_EQUAL(result, sp_memory_retrieve(&descriptor, &acc_desc, &region, 1,
++					       &out_region_count, expected_handle));
++	MEMCMP_EQUAL(&acc_desc, &expected_acc_desc[1], sizeof(acc_desc));
++	MEMCMP_EQUAL(&region, &expected_regions[1], sizeof(region));
++	UNSIGNED_LONGS_EQUAL(expected_region_count, out_region_count);
++}
++
++TEST(mock_sp_memory_management, sp_memory_retrieve_dynamic)
++{
++	const uint32_t expected_region_count = 1;
++	struct sp_memory_access_descriptor acc_desc = expected_acc_desc[0];
++	struct sp_memory_region region = expected_regions[0];
++	uint32_t out_region_count = 0;
++
++	descriptor = expected_descriptor;
++
++	expect_sp_memory_retrieve_dynamic(&expected_descriptor, &expected_acc_desc[0],
++					  &expected_acc_desc[1], &expected_regions[0],
++					  &expected_regions[1], 1, &expected_region_count,
++					  expected_handle, result);
++	LONGS_EQUAL(result, sp_memory_retrieve_dynamic(&descriptor, &acc_desc, &region, 1,
++						       &out_region_count, expected_handle,
++						       &tr_buffer));
++	MEMCMP_EQUAL(&acc_desc, &expected_acc_desc[1], sizeof(acc_desc));
++	MEMCMP_EQUAL(&region, &expected_regions[1], sizeof(region));
++	UNSIGNED_LONGS_EQUAL(expected_region_count, out_region_count);
++}
++
++TEST(mock_sp_memory_management, sp_memory_retrieve_dynamic_is_supported)
++{
++	const bool expected_supported = true;
++	expect_sp_memory_retrieve_dynamic_is_supported(&expected_supported, result);
++	LONGS_EQUAL(result, sp_memory_retrieve_dynamic_is_supported(&supported));
++	CHECK_TRUE(supported);
++}
++
++TEST(mock_sp_memory_management, sp_memory_relinquish)
++{
++	uint16_t endpoints[3] = {1, 2, 3};
++	struct sp_memory_transaction_flags flags = {0}; // TODO: flags
++
++	expect_sp_memory_relinquish(expected_handle, endpoints, 3, &flags, result);
++	LONGS_EQUAL(result, sp_memory_relinquish(expected_handle, endpoints, 3, &flags));
++}
++
++TEST(mock_sp_memory_management, sp_memory_reclaim)
++{
++	uint32_t flags = 0xffffffff;
++
++	expect_sp_memory_reclaim(expected_handle, flags, result);
++	LONGS_EQUAL(result, sp_memory_reclaim(expected_handle, flags));
++}
++
++TEST(mock_sp_memory_management, sp_memory_permission_get)
++{
++	struct sp_mem_perm mem_perm;
++
++	memset(&mem_perm, 0x00, sizeof(mem_perm));
++
++	expect_sp_memory_permission_get(expected_address, &expected_mem_perm, result);
++	LONGS_EQUAL(result, sp_memory_permission_get(expected_address, &mem_perm));
++	MEMCMP_EQUAL(&expected_mem_perm, &mem_perm, sizeof(expected_mem_perm));
++}
++
++TEST(mock_sp_memory_management, sp_memory_permission_set)
++{
++	size_t size = 0x7654;
++
++	expect_sp_memory_permission_set(expected_address, size, &expected_mem_perm, result);
++	LONGS_EQUAL(result, sp_memory_permission_set(expected_address, size, &expected_mem_perm));
++}
+diff --git a/components/messaging/ffa/libsp/tests.cmake b/components/messaging/ffa/libsp/tests.cmake
+index 7b52248..63abb57 100644
+--- a/components/messaging/ffa/libsp/tests.cmake
++++ b/components/messaging/ffa/libsp/tests.cmake
+@@ -134,6 +134,19 @@ unit_test_add_suite(
+ 		-DARM64
+ )
+ 
++unit_test_add_suite(
++	NAME libsp_mock_sp_memory_management
++	SOURCES
++		${CMAKE_CURRENT_LIST_DIR}/test/test_mock_sp_memory_management.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_sp_memory_management.cpp
++	INCLUDE_DIRECTORIES
++		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
++		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
++	COMPILE_DEFINITIONS
++		-DARM64
++)
++
+ unit_test_add_suite(
+ 	NAME libsp_sp_memory_management_internals
+ 	SOURCES
+-- 
+2.17.1
+
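The mock added above follows the CppUTest MockSupport pattern used throughout libsp: each expect_sp_memory_*() helper records one expected call plus the data to copy into the caller's output parameters, and the matching C-linkage sp_memory_*() definition replays that expectation from the code under test. A minimal usage sketch, assuming the project's usual CppUTest runner and the include paths registered in tests.cmake; the test names and the handle/flag values are invented for illustration:

// Illustration only: drives the sp_memory_reclaim mock the same way the
// bundled test_mock_sp_memory_management.cpp does, with made-up values.
#include <CppUTest/TestHarness.h>
#include <CppUTestExt/MockSupport.h>
#include "mock_sp_memory_management.h"
#include <stdint.h>

TEST_GROUP(sp_memory_mock_usage)
{
	TEST_TEARDOWN()
	{
		mock().checkExpectations();
		mock().clear();
	}
};

TEST(sp_memory_mock_usage, reclaim_call_is_checked_and_answered)
{
	const uint64_t handle = 0x1122334455667788ULL;	/* arbitrary example value */
	const uint32_t flags = 0;
	const sp_result result = -1;			/* mirrors the bundled tests */

	/* Program the expected call and the value the mock should return */
	expect_sp_memory_reclaim(handle, flags, result);

	/* The code under test would normally issue this call itself */
	LONGS_EQUAL(result, sp_memory_reclaim(handle, flags));
}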
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0020-Add-mock-for-libsp-sp_messaging.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0020-Add-mock-for-libsp-sp_messaging.patch
new file mode 100644
index 0000000..3057051
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0020-Add-mock-for-libsp-sp_messaging.patch
@@ -0,0 +1,284 @@
+From 1e5ce152214e22a7cd9617a5059e42c370351354 Mon Sep 17 00:00:00 2001
+From: Imre Kis <imre.kis@arm.com>
+Date: Fri, 17 Jun 2022 15:40:18 +0200
+Subject: [PATCH 20/24] Add mock for libsp/sp_messaging
+
+Add mock_sp_messaging for mocking the sp_messaging part of libsp.
+
+Signed-off-by: Imre Kis <imre.kis@arm.com>
+Change-Id: I87478027c61b41028682b10e2535a7e14cf6922f
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ .../messaging/ffa/libsp/mock/component.cmake  |  1 +
+ .../ffa/libsp/mock/mock_sp_messaging.cpp      | 86 +++++++++++++++++++
+ .../ffa/libsp/mock/mock_sp_messaging.h        | 39 +++++++++
+ .../mock/test/test_mock_sp_messaging.cpp      | 77 +++++++++++++++++
+ components/messaging/ffa/libsp/tests.cmake    | 14 +++
+ 5 files changed, 217 insertions(+)
+ create mode 100644 components/messaging/ffa/libsp/mock/mock_sp_messaging.cpp
+ create mode 100644 components/messaging/ffa/libsp/mock/mock_sp_messaging.h
+ create mode 100644 components/messaging/ffa/libsp/mock/test/test_mock_sp_messaging.cpp
+
+diff --git a/components/messaging/ffa/libsp/mock/component.cmake b/components/messaging/ffa/libsp/mock/component.cmake
+index eb0d28c..375cb46 100644
+--- a/components/messaging/ffa/libsp/mock/component.cmake
++++ b/components/messaging/ffa/libsp/mock/component.cmake
+@@ -14,6 +14,7 @@ target_sources(${TGT} PRIVATE
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_ffa_internal_api.cpp"
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_discovery.cpp"
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_memory_management.cpp"
++	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_messaging.cpp"
+ 	"${CMAKE_CURRENT_LIST_DIR}/mock_sp_rxtx.cpp"
+ 	)
+ 
+diff --git a/components/messaging/ffa/libsp/mock/mock_sp_messaging.cpp b/components/messaging/ffa/libsp/mock/mock_sp_messaging.cpp
+new file mode 100644
+index 0000000..522abb3
+--- /dev/null
++++ b/components/messaging/ffa/libsp/mock/mock_sp_messaging.cpp
+@@ -0,0 +1,86 @@
++// SPDX-License-Identifier: BSD-3-Clause
++/*
++ * Copyright (c) 2022, Arm Limited. All rights reserved.
++ */
++
++#include <CppUTestExt/MockSupport.h>
++#include "mock_sp_messaging.h"
++
++void expect_sp_msg_wait(const struct sp_msg *msg, sp_result result)
++{
++	mock()
++		.expectOneCall("sp_msg_wait")
++		.withOutputParameterReturning("msg", msg, sizeof(*msg))
++		.andReturnValue(result);
++}
++
++sp_result sp_msg_wait(struct sp_msg *msg)
++{
++	return mock()
++		.actualCall("sp_msg_wait")
++		.withOutputParameter("msg", msg)
++		.returnIntValue();
++}
++
++void expect_sp_msg_send_direct_req(const struct sp_msg *req,
++				   const struct sp_msg *resp,
++				   sp_result result)
++{
++	mock()
++		.expectOneCall("sp_msg_send_direct_req")
++		.withMemoryBufferParameter("req", (const unsigned char *)req, sizeof(*req))
++		.withOutputParameterReturning("resp", resp, sizeof(*resp))
++		.andReturnValue(result);
++}
++
++sp_result sp_msg_send_direct_req(const struct sp_msg *req, struct sp_msg *resp)
++{
++	return mock()
++		.actualCall("sp_msg_send_direct_req")
++		.withMemoryBufferParameter("req", (const unsigned char *)req, sizeof(*req))
++		.withOutputParameter("resp", resp)
++		.returnIntValue();
++}
++
++void expect_sp_msg_send_direct_resp(const struct sp_msg *resp,
++				    const struct sp_msg *req,
++				    sp_result result)
++{
++	mock()
++		.expectOneCall("sp_msg_send_direct_resp")
++		.withMemoryBufferParameter("resp", (const unsigned char *)resp, sizeof(*resp))
++		.withOutputParameterReturning("req", req, sizeof(*req))
++		.andReturnValue(result);
++}
++
++sp_result sp_msg_send_direct_resp(const struct sp_msg *resp,
++				  struct sp_msg *req)
++{
++	return mock()
++		.actualCall("sp_msg_send_direct_resp")
++		.withMemoryBufferParameter("resp", (const unsigned char *)resp, sizeof(*resp))
++		.withOutputParameter("req", req)
++		.returnIntValue();
++}
++
++#if FFA_DIRECT_MSG_ROUTING_EXTENSION
++void expect_sp_msg_send_rc_req(const struct sp_msg *req,
++			       const struct sp_msg *resp,
++			       sp_result result)
++{
++	mock()
++		.expectOneCall("sp_msg_send_rc_req")
++		.withMemoryBufferParameter("req", (const unsigned char *)req, sizeof(*req))
++		.withOutputParameterReturning("resp", resp, sizeof(*resp))
++		.andReturnValue(result);
++}
++
++sp_result sp_msg_send_rc_req(const struct sp_msg *req, struct sp_msg *resp)
++{
++	return mock()
++		.actualCall("sp_msg_send_rc_req")
++		.withMemoryBufferParameter("req", (const unsigned char *)req, sizeof(*req))
++		.withOutputParameter("resp", resp)
++		.returnIntValue();
++}
++#endif /* FFA_DIRECT_MSG_ROUTING_EXTENSION */
+diff --git a/components/messaging/ffa/libsp/mock/mock_sp_messaging.h b/components/messaging/ffa/libsp/mock/mock_sp_messaging.h
+new file mode 100644
+index 0000000..8183012
+--- /dev/null
++++ b/components/messaging/ffa/libsp/mock/mock_sp_messaging.h
+@@ -0,0 +1,39 @@
++/* SPDX-License-Identifier: BSD-3-Clause */
++/*
++ * Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
++ */
++
++#ifndef LIBSP_MOCK_MOCK_SP_MESSAGING_H_
++#define LIBSP_MOCK_MOCK_SP_MESSAGING_H_
++
++#include "../include/sp_messaging.h"
++
++#include <stdint.h>
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++void expect_sp_msg_wait(const struct sp_msg *msg, sp_result result);
++
++
++void expect_sp_msg_send_direct_req(const struct sp_msg *req,
++				   const struct sp_msg *resp,
++				   sp_result result);
++
++
++void expect_sp_msg_send_direct_resp(const struct sp_msg *resp,
++				    const struct sp_msg *req,
++				    sp_result result);
++
++#if FFA_DIRECT_MSG_ROUTING_EXTENSION
++void expect_sp_msg_send_rc_req(const struct sp_msg *req,
++			       const struct sp_msg *resp,
++			       sp_result result);
++#endif /* FFA_DIRECT_MSG_ROUTING_EXTENSION */
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* LIBSP_MOCK_MOCK_SP_MESSAGING_H_ */
+diff --git a/components/messaging/ffa/libsp/mock/test/test_mock_sp_messaging.cpp b/components/messaging/ffa/libsp/mock/test/test_mock_sp_messaging.cpp
+new file mode 100644
+index 0000000..bfc7959
+--- /dev/null
++++ b/components/messaging/ffa/libsp/mock/test/test_mock_sp_messaging.cpp
+@@ -0,0 +1,77 @@
++// SPDX-License-Identifier: BSD-3-Clause
++/*
++ * Copyright (c) 2022, Arm Limited. All rights reserved.
++ */
++
++#include <CppUTestExt/MockSupport.h>
++#include <CppUTest/TestHarness.h>
++#include "mock_sp_messaging.h"
++#include <stdint.h>
++#include <stdlib.h>
++#include <string.h>
++
++static const struct sp_msg expected_req = {
++	.source_id = 0x0123,
++	.destination_id = 0x4567,
++	.args = {0x89abcdef, 0xfedcba98, 0x76543210, 0xabcdef01}
++};
++static const struct sp_msg expected_resp = {
++	.source_id = 0x1234,
++	.destination_id = 0x5678,
++	.args = {0x9abcdef8, 0xedcba98f, 0x65432107, 0xbcdef01a}
++};
++
++TEST_GROUP(mock_sp_messaging)
++{
++	TEST_SETUP()
++	{
++		memset(&req, 0x00, sizeof(req));
++		memset(&resp, 0x00, sizeof(resp));
++	}
++
++	TEST_TEARDOWN()
++	{
++		mock().checkExpectations();
++		mock().clear();
++	}
++
++	struct sp_msg req;
++	struct sp_msg resp;
++	static const sp_result result = -1;
++};
++
++TEST(mock_sp_messaging, sp_msg_wait)
++{
++	expect_sp_msg_wait(&expected_req, result);
++	LONGS_EQUAL(result, sp_msg_wait(&req));
++	MEMCMP_EQUAL(&expected_req, &req, sizeof(expected_req));
++}
++
++TEST(mock_sp_messaging, sp_msg_send_direct_req)
++{
++	req = expected_req;
++
++	expect_sp_msg_send_direct_req(&expected_req, &expected_resp, result);
++	LONGS_EQUAL(result, sp_msg_send_direct_req(&req, &resp));
++	MEMCMP_EQUAL(&expected_resp, &resp, sizeof(expected_resp));
++}
++
++TEST(mock_sp_messaging, sp_msg_send_direct_resp)
++{
++	resp = expected_resp;
++
++	expect_sp_msg_send_direct_resp(&expected_resp, &expected_req, result);
++	LONGS_EQUAL(result, sp_msg_send_direct_resp(&resp, &req));
++	MEMCMP_EQUAL(&expected_req, &req, sizeof(expected_req));
++}
++
++#if FFA_DIRECT_MSG_ROUTING_EXTENSION
++TEST(mock_sp_messaging, sp_msg_send_rc_req)
++{
++	req = expected_req;
++
++	expect_sp_msg_send_rc_req(&expected_req, &expected_resp, result);
++	LONGS_EQUAL(result, sp_msg_send_rc_req(&req, &resp));
++	MEMCMP_EQUAL(&expected_resp, &resp, sizeof(expected_resp));
++}
++#endif /* FFA_DIRECT_MSG_ROUTING_EXTENSION */
+diff --git a/components/messaging/ffa/libsp/tests.cmake b/components/messaging/ffa/libsp/tests.cmake
+index 63abb57..eb0b41e 100644
+--- a/components/messaging/ffa/libsp/tests.cmake
++++ b/components/messaging/ffa/libsp/tests.cmake
+@@ -176,6 +176,20 @@ unit_test_add_suite(
+ 		-DARM64
+ )
+ 
++unit_test_add_suite(
++	NAME libsp_mock_sp_messaging
++	SOURCES
++		${CMAKE_CURRENT_LIST_DIR}/mock/test/test_mock_sp_messaging.cpp
++		${CMAKE_CURRENT_LIST_DIR}/mock/mock_sp_messaging.cpp
++	INCLUDE_DIRECTORIES
++		${CMAKE_CURRENT_LIST_DIR}/include/
++		${CMAKE_CURRENT_LIST_DIR}/mock
++		${UNIT_TEST_PROJECT_PATH}/components/common/utils/include
++	COMPILE_DEFINITIONS
++		-DARM64
++		-DFFA_DIRECT_MSG_ROUTING_EXTENSION=1
++)
++
+ unit_test_add_suite(
+ 	NAME libsp_sp_messaging_with_routing_extension
+ 	SOURCES
+-- 
+2.17.1
+
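The sp_messaging mock just added works the same way: request structures are checked byte-for-byte through withMemoryBufferParameter() and the canned response is copied back through an output parameter. A minimal sketch of exercising it, assuming the same CppUTest setup as the bundled test_mock_sp_messaging.cpp; the endpoint IDs and values below are illustrative only:

// Illustration only: expects one direct request and hands back a canned response.
#include <CppUTest/TestHarness.h>
#include <CppUTestExt/MockSupport.h>
#include "mock_sp_messaging.h"
#include <string.h>

TEST_GROUP(sp_messaging_mock_usage)
{
	TEST_TEARDOWN()
	{
		mock().checkExpectations();
		mock().clear();
	}
};

TEST(sp_messaging_mock_usage, direct_req_returns_canned_response)
{
	struct sp_msg req;
	struct sp_msg resp;
	struct sp_msg canned_resp;
	const sp_result result = -1;	/* mirrors the bundled tests */

	memset(&req, 0x00, sizeof(req));
	memset(&resp, 0x00, sizeof(resp));
	memset(&canned_resp, 0x00, sizeof(canned_resp));
	req.source_id = 0x0001;		/* invented endpoint IDs */
	req.destination_id = 0x8001;
	canned_resp.source_id = 0x8001;
	canned_resp.destination_id = 0x0001;

	/* req is compared byte-for-byte; canned_resp is copied into resp */
	expect_sp_msg_send_direct_req(&req, &canned_resp, result);

	LONGS_EQUAL(result, sp_msg_send_direct_req(&req, &resp));
	MEMCMP_EQUAL(&canned_resp, &resp, sizeof(canned_resp));
}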
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0021-Add-64-bit-direct-message-handling-to-libsp.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0021-Add-64-bit-direct-message-handling-to-libsp.patch
new file mode 100644
index 0000000..a037c27
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0021-Add-64-bit-direct-message-handling-to-libsp.patch
@@ -0,0 +1,2552 @@
+From a9edc50077d72cdd7e40ba3f03aee974848cd532 Mon Sep 17 00:00:00 2001
+From: Imre Kis <imre.kis@arm.com>
+Date: Tue, 19 Jul 2022 17:38:00 +0200
+Subject: [PATCH 21/24] Add 64 bit direct message handling to libsp
+
+* Change direct message struct to allow 64 bit arguments
+* Add 64 bit direct message req/resp functions to FF-A layer
+* Distinguish 32/64 bit messages on SP layer by field in sp_msg
+* Update tests and mocks
+
+The FF-A direct message response must match the request's 32/64-bit mode.
+Ensuring this is the responsibility of the callee, and the SPMC should
+validate it.
+
+Signed-off-by: Imre Kis <imre.kis@arm.com>
+Change-Id: Ibbd64ca0291dfe142a23471a649a07ba1a036824
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ components/messaging/ffa/libsp/ffa.c          |  84 ++-
+ .../libsp/ffa_direct_msg_routing_extension.c  |  28 +-
+ .../messaging/ffa/libsp/include/ffa_api.h     |  58 +-
+ .../ffa/libsp/include/ffa_api_defines.h       |   5 +
+ .../ffa/libsp/include/ffa_api_types.h         |   7 +-
+ .../ffa/libsp/include/sp_messaging.h          |   9 +-
+ .../messaging/ffa/libsp/mock/mock_ffa_api.cpp | 100 +++-
+ .../messaging/ffa/libsp/mock/mock_ffa_api.h   |  34 +-
+ .../ffa/libsp/mock/test/test_mock_ffa_api.cpp |  50 +-
+ components/messaging/ffa/libsp/sp_messaging.c |  74 ++-
+ .../messaging/ffa/libsp/test/test_ffa_api.cpp | 337 ++++++++++--
+ .../ffa/libsp/test/test_sp_messaging.cpp      | 498 ++++++++++--------
+ .../rpc/ffarpc/caller/sp/ffarpc_caller.c      |  31 +-
+ .../rpc/ffarpc/endpoint/ffarpc_call_ep.c      |   4 +-
+ .../endpoint/sp/mm_communicate_call_ep.c      |  16 +-
+ .../smm-gateway/common/smm_gateway_sp.c       |   8 +-
+ 16 files changed, 961 insertions(+), 382 deletions(-)
+
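The gist of the new API shape before the per-file diffs: callers now pick the message width explicitly. ffa_msg_send_direct_req_32() keeps the existing uint32_t arguments, while the new ffa_msg_send_direct_req_64() takes uint64_t arguments, and because the response must come back in the same width as the request, its payload is read through the args64 view of the message union. A minimal caller-side sketch under those assumptions; the endpoint IDs and argument values are invented for illustration:

/* Illustration only: sends one 64-bit direct request and reads the reply. */
#include "ffa_api.h"	/* declares ffa_msg_send_direct_req_64() after this patch */
#include <stdint.h>

static ffa_result send_64bit_example(void)
{
	struct ffa_direct_msg msg = { 0 };
	ffa_result res;

	/* 0x8001/0x8002 are made-up endpoint IDs; a0..a4 carry 64-bit payload */
	res = ffa_msg_send_direct_req_64(0x8001, 0x8002,
					 0x1122334455667788ULL, 0, 0, 0, 0,
					 &msg);
	if (res != FFA_OK)
		return res;

	/* The response to a 64-bit request is itself 64 bit, so the payload
	 * is read through the args64 member of the args union. */
	uint64_t reply0 = msg.args.args64[0];
	(void)reply0;

	return FFA_OK;
}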
+diff --git a/components/messaging/ffa/libsp/ffa.c b/components/messaging/ffa/libsp/ffa.c
+index 374e940..caacc79 100644
+--- a/components/messaging/ffa/libsp/ffa.c
++++ b/components/messaging/ffa/libsp/ffa.c
+@@ -33,11 +33,20 @@ static inline void ffa_unpack_direct_msg(struct ffa_params *svc_result,
+ 	msg->function_id = svc_result->a0;
+ 	msg->source_id = (svc_result->a1 >> 16);
+ 	msg->destination_id = svc_result->a1;
+-	msg->args[0] = svc_result->a3;
+-	msg->args[1] = svc_result->a4;
+-	msg->args[2] = svc_result->a5;
+-	msg->args[3] = svc_result->a6;
+-	msg->args[4] = svc_result->a7;
++
++	if (FFA_IS_32_BIT_FUNC(msg->function_id)) {
++		msg->args.args32[0] = svc_result->a3;
++		msg->args.args32[1] = svc_result->a4;
++		msg->args.args32[2] = svc_result->a5;
++		msg->args.args32[3] = svc_result->a6;
++		msg->args.args32[4] = svc_result->a7;
++	} else {
++		msg->args.args64[0] = svc_result->a3;
++		msg->args.args64[1] = svc_result->a4;
++		msg->args.args64[2] = svc_result->a5;
++		msg->args.args64[3] = svc_result->a6;
++		msg->args.args64[4] = svc_result->a7;
++	}
+ }
+ 
+ /*
+@@ -217,7 +226,7 @@ ffa_result ffa_msg_wait(struct ffa_direct_msg *msg)
+ 
+ 	if (result.a0 == FFA_ERROR) {
+ 		return ffa_get_errorcode(&result);
+-	} else if (result.a0 == FFA_MSG_SEND_DIRECT_REQ_32) {
++	} else if (FFA_TO_32_BIT_FUNC(result.a0) == FFA_MSG_SEND_DIRECT_REQ_32) {
+ 		ffa_unpack_direct_msg(&result, msg);
+ 	} else {
+ 		assert(result.a0 == FFA_SUCCESS_32);
+@@ -227,13 +236,15 @@ ffa_result ffa_msg_wait(struct ffa_direct_msg *msg)
+ 	return FFA_OK;
+ }
+ 
+-ffa_result ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+-				   uint32_t a1, uint32_t a2, uint32_t a3,
+-				   uint32_t a4, struct ffa_direct_msg *msg)
++static ffa_result ffa_msg_send_direct_req(uint32_t function_id, uint32_t resp_id,
++					  uint16_t source, uint16_t dest,
++					  uint64_t a0, uint64_t a1, uint64_t a2,
++					  uint64_t a3, uint64_t a4,
++					  struct ffa_direct_msg *msg)
+ {
+ 	struct ffa_params result = {0};
+ 
+-	ffa_svc(FFA_MSG_SEND_DIRECT_REQ_32,
++	ffa_svc(function_id,
+ 		SHIFT_U32(source, FFA_MSG_SEND_DIRECT_REQ_SOURCE_ID_SHIFT) |
+ 		dest, FFA_PARAM_MBZ, a0, a1, a2, a3, a4, &result);
+ 
+@@ -244,7 +255,7 @@ ffa_result ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+ 
+ 	if (result.a0 == FFA_ERROR) {
+ 		return ffa_get_errorcode(&result);
+-	} else if (result.a0 == FFA_MSG_SEND_DIRECT_RESP_32) {
++	} else if (result.a0 == resp_id) {
+ 		ffa_unpack_direct_msg(&result, msg);
+ 	} else {
+ 		assert(result.a0 == FFA_SUCCESS_32);
+@@ -254,13 +265,36 @@ ffa_result ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+ 	return FFA_OK;
+ }
+ 
+-ffa_result ffa_msg_send_direct_resp(uint16_t source, uint16_t dest, uint32_t a0,
+-				    uint32_t a1, uint32_t a2, uint32_t a3,
+-				    uint32_t a4, struct ffa_direct_msg *msg)
++ffa_result ffa_msg_send_direct_req_32(uint16_t source, uint16_t dest,
++				      uint32_t a0, uint32_t a1, uint32_t a2,
++				      uint32_t a3, uint32_t a4,
++				      struct ffa_direct_msg *msg)
++{
++	return ffa_msg_send_direct_req(FFA_MSG_SEND_DIRECT_REQ_32,
++				       FFA_MSG_SEND_DIRECT_RESP_32,
++				       source, dest, a0, a1, a2, a3, a4, msg);
++}
++
++ffa_result ffa_msg_send_direct_req_64(uint16_t source, uint16_t dest,
++				      uint64_t a0, uint64_t a1, uint64_t a2,
++				      uint64_t a3, uint64_t a4,
++				      struct ffa_direct_msg *msg)
++{
++	return ffa_msg_send_direct_req(FFA_MSG_SEND_DIRECT_REQ_64,
++				       FFA_MSG_SEND_DIRECT_RESP_64,
++				       source, dest, a0, a1, a2, a3, a4, msg);
++}
++
++static ffa_result ffa_msg_send_direct_resp(uint32_t function_id,
++					   uint16_t source, uint16_t dest,
++					   uint64_t a0, uint64_t a1,
++					   uint64_t a2, uint64_t a3,
++					   uint64_t a4,
++					   struct ffa_direct_msg *msg)
+ {
+ 	struct ffa_params result = {0};
+ 
+-	ffa_svc(FFA_MSG_SEND_DIRECT_RESP_32,
++	ffa_svc(function_id,
+ 		SHIFT_U32(source, FFA_MSG_SEND_DIRECT_RESP_SOURCE_ID_SHIFT) |
+ 		dest, FFA_PARAM_MBZ, a0, a1, a2, a3, a4, &result);
+ 
+@@ -271,7 +305,7 @@ ffa_result ffa_msg_send_direct_resp(uint16_t source, uint16_t dest, uint32_t a0,
+ 
+ 	if (result.a0 == FFA_ERROR) {
+ 		return ffa_get_errorcode(&result);
+-	} else if (result.a0 == FFA_MSG_SEND_DIRECT_REQ_32) {
++	} else if (FFA_TO_32_BIT_FUNC(result.a0) == FFA_MSG_SEND_DIRECT_REQ_32) {
+ 		ffa_unpack_direct_msg(&result, msg);
+ 	} else {
+ 		assert(result.a0 == FFA_SUCCESS_32);
+@@ -281,6 +315,24 @@ ffa_result ffa_msg_send_direct_resp(uint16_t source, uint16_t dest, uint32_t a0,
+ 	return FFA_OK;
+ }
+ 
++ffa_result ffa_msg_send_direct_resp_32(uint16_t source, uint16_t dest,
++				       uint32_t a0, uint32_t a1, uint32_t a2,
++				       uint32_t a3, uint32_t a4,
++				       struct ffa_direct_msg *msg)
++{
++	return ffa_msg_send_direct_resp(FFA_MSG_SEND_DIRECT_RESP_32, source,
++					dest, a0, a1, a2, a3, a4, msg);
++}
++
++ffa_result ffa_msg_send_direct_resp_64(uint16_t source, uint16_t dest,
++				      uint64_t a0, uint64_t a1, uint64_t a2,
++				      uint64_t a3, uint64_t a4,
++				      struct ffa_direct_msg *msg)
++{
++	return ffa_msg_send_direct_resp(FFA_MSG_SEND_DIRECT_RESP_64, source,
++					dest, a0, a1, a2, a3, a4, msg);
++}
++
+ ffa_result ffa_mem_donate(uint32_t total_length, uint32_t fragment_length,
+ 			  void *buffer_address, uint32_t page_count,
+ 			  uint64_t *handle)
+diff --git a/components/messaging/ffa/libsp/ffa_direct_msg_routing_extension.c b/components/messaging/ffa/libsp/ffa_direct_msg_routing_extension.c
+index 6813573..03a372f 100644
+--- a/components/messaging/ffa/libsp/ffa_direct_msg_routing_extension.c
++++ b/components/messaging/ffa/libsp/ffa_direct_msg_routing_extension.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: BSD-3-Clause
+ /*
+- * Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #include "ffa_direct_msg_routing_extension.h"
+@@ -20,23 +20,23 @@ static uint16_t callee_id = SP_ID_INVALID;
+ 
+ static bool is_rc_message(const struct ffa_direct_msg *msg)
+ {
+-	return msg->args[0] & FFA_ROUTING_EXT_RC_BIT;
++	return msg->args.args32[0] & FFA_ROUTING_EXT_RC_BIT;
+ }
+ 
+ static bool is_error_message(const struct ffa_direct_msg *msg)
+ {
+-	return msg->args[0] & FFA_ROUTING_EXT_ERROR_BIT;
++	return msg->args.args32[0] & FFA_ROUTING_EXT_ERROR_BIT;
+ }
+ 
+ static ffa_result get_error_code_from_message(const struct ffa_direct_msg *msg)
+ {
+-	return (ffa_result)msg->args[1];
++	return (ffa_result)msg->args.args32[1];
+ }
+ 
+ static ffa_result send_rc_error_message(struct ffa_direct_msg *req,
+ 					ffa_result error_code)
+ {
+-	return ffa_msg_send_direct_resp(req->destination_id, req->source_id,
++	return ffa_msg_send_direct_resp_32(req->destination_id, req->source_id,
+ 					(FFA_ROUTING_EXT_ERROR_BIT |
+ 					 FFA_ROUTING_EXT_RC_BIT),
+ 					error_code, 0, 0, 0, req);
+@@ -45,7 +45,7 @@ static ffa_result send_rc_error_message(struct ffa_direct_msg *req,
+ static ffa_result send_rc_error_message_to_rc_root(struct ffa_direct_msg *resp,
+ 						   ffa_result error_code)
+ {
+-	return ffa_msg_send_direct_req(own_id, callee_id,
++	return ffa_msg_send_direct_req_32(own_id, callee_id,
+ 				       (FFA_ROUTING_EXT_RC_BIT |
+ 					FFA_ROUTING_EXT_ERROR_BIT),
+ 				       error_code, 0, 0, 0, resp);
+@@ -128,10 +128,10 @@ ffa_result ffa_direct_msg_routing_ext_req_post_hook(struct ffa_direct_msg *resp)
+ 		/* Forwarding RC request towards the root (normal world) */
+ 		state = forwarding;
+ 
+-		ffa_res = ffa_msg_send_direct_resp(own_id, caller_id,
+-						   resp->args[0], resp->args[1],
+-						   resp->args[2], resp->args[3],
+-						   resp->args[4], &rc_resp);
++		ffa_res = ffa_msg_send_direct_resp_32(own_id, caller_id,
++						   resp->args.args32[0], resp->args.args32[1],
++						   resp->args.args32[2], resp->args.args32[3],
++						   resp->args.args32[4], &rc_resp);
+ 		if (ffa_res != FFA_OK)
+ 			goto forward_ffa_error_to_rc_root;
+ 
+@@ -145,9 +145,9 @@ ffa_result ffa_direct_msg_routing_ext_req_post_hook(struct ffa_direct_msg *resp)
+ 
+ 		/* Forwarding RC response towards the RC root. */
+ 		state = internal;
+-		ffa_res = ffa_msg_send_direct_req(
+-			own_id, callee_id, rc_resp.args[0], rc_resp.args[1],
+-			rc_resp.args[2], rc_resp.args[3], rc_resp.args[4],
++		ffa_res = ffa_msg_send_direct_req_32(
++			own_id, callee_id, rc_resp.args.args32[0], rc_resp.args.args32[1],
++			rc_resp.args.args32[2], rc_resp.args.args32[3], rc_resp.args.args32[4],
+ 			resp);
+ 
+ 		goto break_on_ffa_error;
+@@ -197,7 +197,7 @@ void ffa_direct_msg_routing_ext_resp_error_hook(void)
+ 
+ void ffa_direct_msg_routing_ext_rc_req_pre_hook(struct ffa_direct_msg *req)
+ {
+-	req->args[0] = FFA_ROUTING_EXT_RC_BIT;
++	req->args.args32[0] = FFA_ROUTING_EXT_RC_BIT;
+ 	state = rc_root;
+ }
+ 
+diff --git a/components/messaging/ffa/libsp/include/ffa_api.h b/components/messaging/ffa/libsp/include/ffa_api.h
+index ec5cb04..4b7073b 100644
+--- a/components/messaging/ffa/libsp/include/ffa_api.h
++++ b/components/messaging/ffa/libsp/include/ffa_api.h
+@@ -126,8 +126,8 @@ ffa_result ffa_msg_wait(struct ffa_direct_msg *msg);
+ /** Messaging interfaces */
+ 
+ /**
+- * @brief      Sends a partition message in parameter registers as a request and
+- *             blocks until the response is available.
++ * @brief      Sends a 32 bit partition message in parameter registers as a
++ *             request and blocks until the response is available.
+  * @note       The ffa_interrupt_handler function can be called during the
+  *             execution of this function
+  *
+@@ -138,13 +138,14 @@ ffa_result ffa_msg_wait(struct ffa_direct_msg *msg);
+  *
+  * @return     The FF-A error status code
+  */
+-ffa_result ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+-				   uint32_t a1, uint32_t a2, uint32_t a3,
+-				   uint32_t a4, struct ffa_direct_msg *msg);
++ffa_result ffa_msg_send_direct_req_32(uint16_t source, uint16_t dest,
++				      uint32_t a0, uint32_t a1, uint32_t a2,
++				      uint32_t a3, uint32_t a4,
++				      struct ffa_direct_msg *msg);
+ 
+ /**
+- * @brief      Sends a partition message in parameter registers as a response
+- *             and blocks until the response is available.
++ * @brief      Sends a 64 bit partition message in parameter registers as a
++ *             request and blocks until the response is available.
+  * @note       The ffa_interrupt_handler function can be called during the
+  *             execution of this function
+  *
+@@ -155,9 +156,46 @@ ffa_result ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+  *
+  * @return     The FF-A error status code
+  */
+-ffa_result ffa_msg_send_direct_resp(uint16_t source, uint16_t dest, uint32_t a0,
+-				    uint32_t a1, uint32_t a2, uint32_t a3,
+-				    uint32_t a4, struct ffa_direct_msg *msg);
++ffa_result ffa_msg_send_direct_req_64(uint16_t source, uint16_t dest,
++				      uint64_t a0, uint64_t a1, uint64_t a2,
++				      uint64_t a3, uint64_t a4,
++				      struct ffa_direct_msg *msg);
++
++/**
++ * @brief      Sends a 32 bit partition message in parameter registers as a
++ *             response and blocks until the response is available.
++ * @note       The ffa_interrupt_handler function can be called during the
++ *             execution of this function
++ *
++ * @param[in]  source            Source endpoint ID
++ * @param[in]  dest              Destination endpoint ID
++ * @param[in]  a0,a1,a2,a3,a4    Implementation defined message values
++ * @param[out] msg               The response message
++ *
++ * @return     The FF-A error status code
++ */
++ffa_result ffa_msg_send_direct_resp_32(uint16_t source, uint16_t dest,
++				       uint32_t a0, uint32_t a1, uint32_t a2,
++				       uint32_t a3, uint32_t a4,
++				       struct ffa_direct_msg *msg);
++
++/**
++ * @brief      Sends a 64 bit partition message in parameter registers as a
++ *             response and blocks until the response is available.
++ * @note       The ffa_interrupt_handler function can be called during the
++ *             execution of this function
++ *
++ * @param[in]  source            Source endpoint ID
++ * @param[in]  dest              Destination endpoint ID
++ * @param[in]  a0,a1,a2,a3,a4    Implementation defined message values
++ * @param[out] msg               The response message
++ *
++ * @return     The FF-A error status code
++ */
++ffa_result ffa_msg_send_direct_resp_64(uint16_t source, uint16_t dest,
++				       uint64_t a0, uint64_t a1, uint64_t a2,
++				       uint64_t a3, uint64_t a4,
++				       struct ffa_direct_msg *msg);
+ 
+ /**
+  * Memory management interfaces
+diff --git a/components/messaging/ffa/libsp/include/ffa_api_defines.h b/components/messaging/ffa/libsp/include/ffa_api_defines.h
+index 95bcb0c..163a0cd 100644
+--- a/components/messaging/ffa/libsp/include/ffa_api_defines.h
++++ b/components/messaging/ffa/libsp/include/ffa_api_defines.h
+@@ -58,6 +58,11 @@
+ #define FFA_MEM_PERM_GET		UINT32_C(0x84000088)
+ #define FFA_MEM_PERM_SET		UINT32_C(0x84000089)
+ 
++/* Utility macros */
++#define FFA_TO_32_BIT_FUNC(x)		((x) & (~UINT32_C(0x40000000)))
++#define FFA_IS_32_BIT_FUNC(x)		(((x) & UINT32_C(0x40000000)) == 0)
++#define FFA_IS_64_BIT_FUNC(x)		(((x) & UINT32_C(0x40000000)) != 0)
++
+ /* Special value for MBZ parameters */
+ #define FFA_PARAM_MBZ			UINT32_C(0x0)
+ 
+diff --git a/components/messaging/ffa/libsp/include/ffa_api_types.h b/components/messaging/ffa/libsp/include/ffa_api_types.h
+index 3686e2e..b1be7ac 100644
+--- a/components/messaging/ffa/libsp/include/ffa_api_types.h
++++ b/components/messaging/ffa/libsp/include/ffa_api_types.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: BSD-3-Clause */
+ /*
+- * Copyright (c) 2020, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #ifndef LIBSP_INCLUDE_FFA_API_TYPES_H_
+@@ -80,7 +80,10 @@ struct ffa_direct_msg {
+ 	uint32_t function_id;
+ 	uint16_t source_id;
+ 	uint16_t destination_id;
+-	uint32_t args[5];
++	union {
++		uint32_t args32[5];
++		uint64_t args64[5];
++	} args;
+ };
+ 
+ /**
+diff --git a/components/messaging/ffa/libsp/include/sp_messaging.h b/components/messaging/ffa/libsp/include/sp_messaging.h
+index 4bb45f7..7173a92 100644
+--- a/components/messaging/ffa/libsp/include/sp_messaging.h
++++ b/components/messaging/ffa/libsp/include/sp_messaging.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: BSD-3-Clause */
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #ifndef LIBSP_INCLUDE_SP_MESSAGING_H_
+@@ -9,6 +9,7 @@
+ #include "sp_api_defines.h"
+ #include "sp_api_types.h"
+ 
++#include <stdbool.h>
+ #include <stdint.h>
+ 
+ #ifdef __cplusplus
+@@ -23,7 +24,11 @@ extern "C" {
+ struct sp_msg {
+ 	uint16_t source_id;
+ 	uint16_t destination_id;
+-	uint32_t args[SP_MSG_ARG_COUNT];
++	bool is_64bit_message;
++	union {
++		uint32_t args32[SP_MSG_ARG_COUNT];
++		uint64_t args64[SP_MSG_ARG_COUNT];
++	} args;
+ };
+ 
+ /**
+diff --git a/components/messaging/ffa/libsp/mock/mock_ffa_api.cpp b/components/messaging/ffa/libsp/mock/mock_ffa_api.cpp
+index ceebcbf..a01848c 100644
+--- a/components/messaging/ffa/libsp/mock/mock_ffa_api.cpp
++++ b/components/messaging/ffa/libsp/mock/mock_ffa_api.cpp
+@@ -140,13 +140,13 @@ ffa_result ffa_msg_wait(struct ffa_direct_msg *msg)
+ 		.returnIntValue();
+ }
+ 
+-void expect_ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+-				    uint32_t a1, uint32_t a2, uint32_t a3,
+-				    uint32_t a4,
+-				    const struct ffa_direct_msg *msg,
+-				    ffa_result result)
++void expect_ffa_msg_send_direct_req_32(uint16_t source, uint16_t dest,
++				       uint32_t a0, uint32_t a1, uint32_t a2,
++				       uint32_t a3, uint32_t a4,
++				       const struct ffa_direct_msg *msg,
++				       ffa_result result)
+ {
+-	mock().expectOneCall("ffa_msg_send_direct_req")
++	mock().expectOneCall("ffa_msg_send_direct_req_32")
+ 		.withUnsignedIntParameter("source", source)
+ 		.withUnsignedIntParameter("dest", dest)
+ 		.withUnsignedIntParameter("a0", a0)
+@@ -158,12 +158,13 @@ void expect_ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+ 		.andReturnValue(result);
+ }
+ 
+-ffa_result ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+-				   uint32_t a1, uint32_t a2, uint32_t a3,
+-				   uint32_t a4, struct ffa_direct_msg *msg)
++ffa_result ffa_msg_send_direct_req_32(uint16_t source, uint16_t dest,
++				      uint32_t a0, uint32_t a1, uint32_t a2,
++				      uint32_t a3,uint32_t a4,
++				      struct ffa_direct_msg *msg)
+ {
+ 	return mock()
+-		.actualCall("ffa_msg_send_direct_req")
++		.actualCall("ffa_msg_send_direct_req_32")
+ 		.withUnsignedIntParameter("source", source)
+ 		.withUnsignedIntParameter("dest", dest)
+ 		.withUnsignedIntParameter("a0", a0)
+@@ -175,13 +176,49 @@ ffa_result ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+ 		.returnIntValue();
+ }
+ 
+-void expect_ffa_msg_send_direct_resp(uint16_t source, uint16_t dest,
++void expect_ffa_msg_send_direct_req_64(uint16_t source, uint16_t dest,
++				       uint64_t a0, uint64_t a1, uint64_t a2,
++				       uint64_t a3, uint64_t a4,
++				       const struct ffa_direct_msg *msg,
++				       ffa_result result)
++{
++	mock().expectOneCall("ffa_msg_send_direct_req_64")
++		.withUnsignedIntParameter("source", source)
++		.withUnsignedIntParameter("dest", dest)
++		.withUnsignedLongIntParameter("a0", a0)
++		.withUnsignedLongIntParameter("a1", a1)
++		.withUnsignedLongIntParameter("a2", a2)
++		.withUnsignedLongIntParameter("a3", a3)
++		.withUnsignedLongIntParameter("a4", a4)
++		.withOutputParameterReturning("msg", msg, sizeof(*msg))
++		.andReturnValue(result);
++}
++
++ffa_result ffa_msg_send_direct_req_64(uint16_t source, uint16_t dest,
++				      uint64_t a0, uint64_t a1, uint64_t a2,
++				      uint64_t a3, uint64_t a4,
++				      struct ffa_direct_msg *msg)
++{
++	return mock()
++		.actualCall("ffa_msg_send_direct_req_64")
++		.withUnsignedIntParameter("source", source)
++		.withUnsignedIntParameter("dest", dest)
++		.withUnsignedLongIntParameter("a0", a0)
++		.withUnsignedLongIntParameter("a1", a1)
++		.withUnsignedLongIntParameter("a2", a2)
++		.withUnsignedLongIntParameter("a3", a3)
++		.withUnsignedLongIntParameter("a4", a4)
++		.withOutputParameter("msg", msg)
++		.returnIntValue();
++}
++
++void expect_ffa_msg_send_direct_resp_32(uint16_t source, uint16_t dest,
+ 				     uint32_t a0, uint32_t a1, uint32_t a2,
+ 				     uint32_t a3, uint32_t a4,
+ 				     const struct ffa_direct_msg *msg,
+ 				     ffa_result result)
+ {
+-	mock().expectOneCall("ffa_msg_send_direct_resp")
++	mock().expectOneCall("ffa_msg_send_direct_resp_32")
+ 		.withUnsignedIntParameter("source", source)
+ 		.withUnsignedIntParameter("dest", dest)
+ 		.withUnsignedIntParameter("a0", a0)
+@@ -193,12 +230,12 @@ void expect_ffa_msg_send_direct_resp(uint16_t source, uint16_t dest,
+ 		.andReturnValue(result);
+ }
+ 
+-ffa_result ffa_msg_send_direct_resp(uint16_t source, uint16_t dest, uint32_t a0,
++ffa_result ffa_msg_send_direct_resp_32(uint16_t source, uint16_t dest, uint32_t a0,
+ 				    uint32_t a1, uint32_t a2, uint32_t a3,
+ 				    uint32_t a4, struct ffa_direct_msg *msg)
+ {
+ 	return mock()
+-		.actualCall("ffa_msg_send_direct_resp")
++		.actualCall("ffa_msg_send_direct_resp_32")
+ 		.withUnsignedIntParameter("source", source)
+ 		.withUnsignedIntParameter("dest", dest)
+ 		.withUnsignedIntParameter("a0", a0)
+@@ -210,6 +247,41 @@ ffa_result ffa_msg_send_direct_resp(uint16_t source, uint16_t dest, uint32_t a0,
+ 		.returnIntValue();
+ }
+ 
++void expect_ffa_msg_send_direct_resp_64(uint16_t source, uint16_t dest,
++				     uint64_t a0, uint64_t a1, uint64_t a2,
++				     uint64_t a3, uint64_t a4,
++				     const struct ffa_direct_msg *msg,
++				     ffa_result result)
++{
++	mock().expectOneCall("ffa_msg_send_direct_resp_64")
++		.withUnsignedIntParameter("source", source)
++		.withUnsignedIntParameter("dest", dest)
++		.withUnsignedLongIntParameter("a0", a0)
++		.withUnsignedLongIntParameter("a1", a1)
++		.withUnsignedLongIntParameter("a2", a2)
++		.withUnsignedLongIntParameter("a3", a3)
++		.withUnsignedLongIntParameter("a4", a4)
++		.withOutputParameterReturning("msg", msg, sizeof(*msg))
++		.andReturnValue(result);
++}
++
++ffa_result ffa_msg_send_direct_resp_64(uint16_t source, uint16_t dest, uint64_t a0,
++				    uint64_t a1, uint64_t a2, uint64_t a3,
++				    uint64_t a4, struct ffa_direct_msg *msg)
++{
++	return mock()
++		.actualCall("ffa_msg_send_direct_resp_64")
++		.withUnsignedIntParameter("source", source)
++		.withUnsignedIntParameter("dest", dest)
++		.withUnsignedLongIntParameter("a0", a0)
++		.withUnsignedLongIntParameter("a1", a1)
++		.withUnsignedLongIntParameter("a2", a2)
++		.withUnsignedLongIntParameter("a3", a3)
++		.withUnsignedLongIntParameter("a4", a4)
++		.withOutputParameter("msg", msg)
++		.returnIntValue();
++}
++
+ void expect_ffa_mem_donate(uint32_t total_length, uint32_t fragment_length,
+ 			   void *buffer_address, uint32_t page_count,
+ 			   const uint64_t *handle, ffa_result result)
+diff --git a/components/messaging/ffa/libsp/mock/mock_ffa_api.h b/components/messaging/ffa/libsp/mock/mock_ffa_api.h
+index b1c794f..4213ccb 100644
+--- a/components/messaging/ffa/libsp/mock/mock_ffa_api.h
++++ b/components/messaging/ffa/libsp/mock/mock_ffa_api.h
+@@ -30,17 +30,29 @@ void expect_ffa_id_get(const uint16_t *id, ffa_result result);
+ 
+ void expect_ffa_msg_wait(const struct ffa_direct_msg *msg, ffa_result result);
+ 
+-void expect_ffa_msg_send_direct_req(uint16_t source, uint16_t dest, uint32_t a0,
+-				    uint32_t a1, uint32_t a2, uint32_t a3,
+-				    uint32_t a4,
+-				    const struct ffa_direct_msg *msg,
+-				    ffa_result result);
+-
+-void expect_ffa_msg_send_direct_resp(uint16_t source, uint16_t dest,
+-				     uint32_t a0, uint32_t a1, uint32_t a2,
+-				     uint32_t a3, uint32_t a4,
+-				     const struct ffa_direct_msg *msg,
+-				     ffa_result result);
++void expect_ffa_msg_send_direct_req_32(uint16_t source, uint16_t dest,
++				       uint32_t a0, uint32_t a1, uint32_t a2,
++				       uint32_t a3, uint32_t a4,
++				       const struct ffa_direct_msg *msg,
++				       ffa_result result);
++
++void expect_ffa_msg_send_direct_req_64(uint16_t source, uint16_t dest,
++				       uint64_t a0, uint64_t a1, uint64_t a2,
++				       uint64_t a3, uint64_t a4,
++				       const struct ffa_direct_msg *msg,
++				       ffa_result result);
++
++void expect_ffa_msg_send_direct_resp_32(uint16_t source, uint16_t dest,
++					uint32_t a0, uint32_t a1, uint32_t a2,
++					uint32_t a3, uint32_t a4,
++					const struct ffa_direct_msg *msg,
++					ffa_result result);
++
++void expect_ffa_msg_send_direct_resp_64(uint16_t source, uint16_t dest,
++					uint64_t a0, uint64_t a1, uint64_t a2,
++					uint64_t a3, uint64_t a4,
++					const struct ffa_direct_msg *msg,
++					ffa_result result);
+ 
+ void expect_ffa_mem_donate(uint32_t total_length, uint32_t fragment_length,
+ 			   void *buffer_address, uint32_t page_count,
+diff --git a/components/messaging/ffa/libsp/mock/test/test_mock_ffa_api.cpp b/components/messaging/ffa/libsp/mock/test/test_mock_ffa_api.cpp
+index ab33649..c1cbdd6 100644
+--- a/components/messaging/ffa/libsp/mock/test/test_mock_ffa_api.cpp
++++ b/components/messaging/ffa/libsp/mock/test/test_mock_ffa_api.cpp
+@@ -105,7 +105,7 @@ TEST(mock_ffa_api, ffa_msg_wait)
+ 	MEMCMP_EQUAL(&expected_msg, &msg, sizeof(expected_msg));
+ }
+ 
+-TEST(mock_ffa_api, ffa_msg_send_direct_req)
++TEST(mock_ffa_api, ffa_msg_send_direct_req_32)
+ {
+ 	const uint16_t source = 0x1122;
+ 	const uint16_t dest = 0x2233;
+@@ -116,13 +116,30 @@ TEST(mock_ffa_api, ffa_msg_send_direct_req)
+ 	const uint32_t a4 = 0x89124567;
+ 	struct ffa_direct_msg msg = { 0 };
+ 
+-	expect_ffa_msg_send_direct_req(source, dest, a0, a1, a2, a3, a4,
++	expect_ffa_msg_send_direct_req_32(source, dest, a0, a1, a2, a3, a4,
++				          &expected_msg, result);
++	LONGS_EQUAL(result, ffa_msg_send_direct_req_32(source, dest, a0, a1, a2,
++						       a3, a4, &msg));
++}
++
++TEST(mock_ffa_api, ffa_msg_send_direct_req_64)
++{
++	const uint16_t source = 0x1122;
++	const uint16_t dest = 0x2233;
++	const uint64_t a0 = 0x4567891221987654;
++	const uint64_t a1 = 0x5678912442198765;
++	const uint64_t a2 = 0x6789124554219876;
++	const uint64_t a3 = 0x7891245665421987;
++	const uint64_t a4 = 0x8912456776542198;
++	struct ffa_direct_msg msg = { 0 };
++
++	expect_ffa_msg_send_direct_req_64(source, dest, a0, a1, a2, a3, a4,
+ 				       &expected_msg, result);
+-	LONGS_EQUAL(result, ffa_msg_send_direct_req(source, dest, a0, a1, a2,
+-						    a3, a4, &msg));
++	LONGS_EQUAL(result, ffa_msg_send_direct_req_64(source, dest, a0, a1, a2,
++						       a3, a4, &msg));
+ }
+ 
+-TEST(mock_ffa_api, ffa_msg_send_direct_resp)
++TEST(mock_ffa_api, ffa_msg_send_direct_resp_32)
+ {
+ 	const uint16_t source = 0x1122;
+ 	const uint16_t dest = 0x2233;
+@@ -133,10 +150,27 @@ TEST(mock_ffa_api, ffa_msg_send_direct_resp)
+ 	const uint32_t a4 = 0x89124567;
+ 	struct ffa_direct_msg msg = { 0 };
+ 
+-	expect_ffa_msg_send_direct_resp(source, dest, a0, a1, a2, a3, a4,
++	expect_ffa_msg_send_direct_resp_32(source, dest, a0, a1, a2, a3, a4,
+ 					&expected_msg, result);
+-	LONGS_EQUAL(result, ffa_msg_send_direct_resp(source, dest, a0, a1, a2,
+-						     a3, a4, &msg));
++	LONGS_EQUAL(result, ffa_msg_send_direct_resp_32(source, dest, a0, a1,
++							a2, a3, a4, &msg));
++}
++
++TEST(mock_ffa_api, ffa_msg_send_direct_resp_64)
++{
++	const uint16_t source = 0x1122;
++	const uint16_t dest = 0x2233;
++	const uint64_t a0 = 0x4567891221987654;
++	const uint64_t a1 = 0x5678912442198765;
++	const uint64_t a2 = 0x6789124554219876;
++	const uint64_t a3 = 0x7891245665421987;
++	const uint64_t a4 = 0x8912456776542198;
++	struct ffa_direct_msg msg = { 0 };
++
++	expect_ffa_msg_send_direct_resp_64(source, dest, a0, a1, a2, a3, a4,
++					  &expected_msg, result);
++	LONGS_EQUAL(result, ffa_msg_send_direct_resp_64(source, dest, a0, a1,
++							a2, a3, a4, &msg));
+ }
+ 
+ TEST(mock_ffa_api, ffa_mem_donate)
+diff --git a/components/messaging/ffa/libsp/sp_messaging.c b/components/messaging/ffa/libsp/sp_messaging.c
+index f7223aa..392006b 100644
+--- a/components/messaging/ffa/libsp/sp_messaging.c
++++ b/components/messaging/ffa/libsp/sp_messaging.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: BSD-3-Clause
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #include "ffa_api.h"
+@@ -20,22 +20,40 @@ static void pack_ffa_direct_msg(const struct sp_msg *msg,
+ 	ffa_msg->source_id = msg->source_id;
+ 	ffa_msg->destination_id = msg->destination_id;
+ 
+-	ffa_msg->args[0] = 0;
+-	memcpy(&ffa_msg->args[SP_MSG_ARG_OFFSET], msg->args, sizeof(msg->args));
++	ffa_msg->args.args64[0] = 0;
++	if (msg->is_64bit_message)
++		memcpy(&ffa_msg->args.args64[SP_MSG_ARG_OFFSET],
++		       msg->args.args64, sizeof(msg->args.args64));
++	else
++		memcpy(&ffa_msg->args.args32[SP_MSG_ARG_OFFSET],
++		       msg->args.args32, sizeof(msg->args.args32));
+ }
+ 
+ static void unpack_ffa_direct_msg(const struct ffa_direct_msg *ffa_msg,
+ 				  struct sp_msg *msg)
+ {
+-	if (ffa_msg->function_id != FFA_SUCCESS_32) {
++	if (ffa_msg->function_id == FFA_MSG_SEND_DIRECT_REQ_32 ||
++	    ffa_msg->function_id == FFA_MSG_SEND_DIRECT_RESP_32) {
+ 		/*
+-		 * Handling request or response (error is handled before call)
++		 * Handling 32 bit request or response
+ 		 */
+ 		msg->source_id = ffa_msg->source_id;
+ 		msg->destination_id = ffa_msg->destination_id;
++		msg->is_64bit_message = FFA_IS_64_BIT_FUNC(ffa_msg->function_id);
+ 
+-		memcpy(msg->args, &ffa_msg->args[SP_MSG_ARG_OFFSET],
+-		       sizeof(msg->args));
++		memcpy(msg->args.args32, &ffa_msg->args.args32[SP_MSG_ARG_OFFSET],
++		       sizeof(msg->args.args32));
++	} else if (ffa_msg->function_id == FFA_MSG_SEND_DIRECT_REQ_64 ||
++		   ffa_msg->function_id == FFA_MSG_SEND_DIRECT_RESP_64) {
++		/*
++		 * Handling 64 bit request or response
++		 */
++		msg->source_id = ffa_msg->source_id;
++		msg->destination_id = ffa_msg->destination_id;
++		msg->is_64bit_message = FFA_IS_64_BIT_FUNC(ffa_msg->function_id);
++
++		memcpy(msg->args.args64, &ffa_msg->args.args64[SP_MSG_ARG_OFFSET],
++		       sizeof(msg->args.args64));
+ 	} else {
+ 		/* Success has no message parameters */
+ 		*msg = (struct sp_msg){ 0 };
+@@ -89,11 +107,18 @@ sp_result sp_msg_send_direct_req(const struct sp_msg *req, struct sp_msg *resp)
+ 	ffa_direct_msg_routing_ext_req_pre_hook(&ffa_req);
+ #endif
+ 
+-	ffa_res = ffa_msg_send_direct_req(ffa_req.source_id,
+-					  ffa_req.destination_id,
+-					  ffa_req.args[0], ffa_req.args[1],
+-					  ffa_req.args[2], ffa_req.args[3],
+-					  ffa_req.args[4], &ffa_resp);
++	if (req->is_64bit_message)
++		ffa_res = ffa_msg_send_direct_req_64(
++			ffa_req.source_id, ffa_req.destination_id,
++			ffa_req.args.args64[0], ffa_req.args.args64[1],
++			ffa_req.args.args64[2], ffa_req.args.args64[3],
++			ffa_req.args.args64[4], &ffa_resp);
++	else
++		ffa_res = ffa_msg_send_direct_req_32(
++			ffa_req.source_id, ffa_req.destination_id,
++			ffa_req.args.args32[0], ffa_req.args.args32[1],
++			ffa_req.args.args32[2], ffa_req.args.args32[3],
++			ffa_req.args.args32[4], &ffa_resp);
+ 
+ 	if (ffa_res != FFA_OK) {
+ #if FFA_DIRECT_MSG_ROUTING_EXTENSION
+@@ -136,11 +161,18 @@ sp_result sp_msg_send_direct_resp(const struct sp_msg *resp, struct sp_msg *req)
+ 	ffa_direct_msg_routing_ext_resp_pre_hook(&ffa_resp);
+ #endif
+ 
+-	ffa_res = ffa_msg_send_direct_resp(ffa_resp.source_id,
+-					   ffa_resp.destination_id,
+-					   ffa_resp.args[0], ffa_resp.args[1],
+-					   ffa_resp.args[2], ffa_resp.args[3],
+-					   ffa_resp.args[4], &ffa_req);
++	if (resp->is_64bit_message)
++		ffa_res = ffa_msg_send_direct_resp_64(
++			ffa_resp.source_id, ffa_resp.destination_id,
++			ffa_resp.args.args64[0], ffa_resp.args.args64[1],
++			ffa_resp.args.args64[2], ffa_resp.args.args64[3],
++			ffa_resp.args.args64[4], &ffa_req);
++	else
++		ffa_res = ffa_msg_send_direct_resp_32(
++			ffa_resp.source_id, ffa_resp.destination_id,
++			ffa_resp.args.args32[0], ffa_resp.args.args32[1],
++			ffa_resp.args.args32[2], ffa_resp.args.args32[3],
++			ffa_resp.args.args32[4], &ffa_req);
+ 
+ 	if (ffa_res != FFA_OK) {
+ #if FFA_DIRECT_MSG_ROUTING_EXTENSION
+@@ -182,11 +214,11 @@ sp_result sp_msg_send_rc_req(const struct sp_msg *req, struct sp_msg *resp)
+ 
+ 	ffa_direct_msg_routing_ext_rc_req_pre_hook(&ffa_req);
+ 
+-	ffa_res = ffa_msg_send_direct_resp(ffa_req.source_id,
++	ffa_res = ffa_msg_send_direct_resp_32(ffa_req.source_id,
+ 					   ffa_req.destination_id,
+-					   ffa_req.args[0], ffa_req.args[1],
+-					   ffa_req.args[2], ffa_req.args[3],
+-					   ffa_req.args[4], &ffa_resp);
++					   ffa_req.args.args32[0], ffa_req.args.args32[1],
++					   ffa_req.args.args32[2], ffa_req.args.args32[3],
++					   ffa_req.args.args32[4], &ffa_resp);
+ 
+ 	if (ffa_res != FFA_OK) {
+ 		ffa_direct_msg_routing_ext_rc_req_error_hook();
+diff --git a/components/messaging/ffa/libsp/test/test_ffa_api.cpp b/components/messaging/ffa/libsp/test/test_ffa_api.cpp
+index 8fa261e..6cca085 100644
+--- a/components/messaging/ffa/libsp/test/test_ffa_api.cpp
++++ b/components/messaging/ffa/libsp/test/test_ffa_api.cpp
+@@ -43,20 +43,35 @@ TEST_GROUP(ffa_api)
+ 		svc_result.a2 = (uint32_t)error_code;
+ 	}
+ 
+-	void msg_equal(uint32_t func_id, uint16_t source_id, uint16_t dest_id,
+-			uint32_t arg0, uint32_t arg1, uint32_t arg2,
+-			uint32_t arg3, uint32_t arg4)
++	void msg_equal_32(uint32_t func_id, uint16_t source_id, uint16_t dest_id,
++			  uint32_t arg0, uint32_t arg1, uint32_t arg2,
++			  uint32_t arg3, uint32_t arg4)
+ 	{
+ 		UNSIGNED_LONGS_EQUAL(func_id, msg.function_id);
+ 		UNSIGNED_LONGS_EQUAL(source_id, msg.source_id);
+ 		UNSIGNED_LONGS_EQUAL(dest_id, msg.destination_id);
+-		UNSIGNED_LONGS_EQUAL(arg0, msg.args[0]);
+-		UNSIGNED_LONGS_EQUAL(arg1, msg.args[1]);
+-		UNSIGNED_LONGS_EQUAL(arg2, msg.args[2]);
+-		UNSIGNED_LONGS_EQUAL(arg3, msg.args[3]);
+-		UNSIGNED_LONGS_EQUAL(arg4, msg.args[4]);
++		UNSIGNED_LONGS_EQUAL(arg0, msg.args.args32[0]);
++		UNSIGNED_LONGS_EQUAL(arg1, msg.args.args32[1]);
++		UNSIGNED_LONGS_EQUAL(arg2, msg.args.args32[2]);
++		UNSIGNED_LONGS_EQUAL(arg3, msg.args.args32[3]);
++		UNSIGNED_LONGS_EQUAL(arg4, msg.args.args32[4]);
+ 	}
+ 
++	void msg_equal_64(uint32_t func_id, uint16_t source_id, uint16_t dest_id,
++			  uint64_t arg0, uint64_t arg1, uint64_t arg2,
++			  uint64_t arg3, uint64_t arg4)
++	{
++		UNSIGNED_LONGS_EQUAL(func_id, msg.function_id);
++		UNSIGNED_LONGS_EQUAL(source_id, msg.source_id);
++		UNSIGNED_LONGS_EQUAL(dest_id, msg.destination_id);
++		UNSIGNED_LONGLONGS_EQUAL(arg0, msg.args.args64[0]);
++		UNSIGNED_LONGLONGS_EQUAL(arg1, msg.args.args64[1]);
++		UNSIGNED_LONGLONGS_EQUAL(arg2, msg.args.args64[2]);
++		UNSIGNED_LONGLONGS_EQUAL(arg3, msg.args.args64[3]);
++		UNSIGNED_LONGLONGS_EQUAL(arg4, msg.args.args64[4]);
++	}
++
++
+ 	struct ffa_params svc_result;
+ 	struct ffa_direct_msg msg;
+ };
+@@ -360,7 +375,7 @@ TEST(ffa_api, ffa_msg_wait_success)
+ 	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
+ 	ffa_result result = ffa_msg_wait(&msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000061, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0x84000061, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+ TEST(ffa_api, ffa_msg_wait_error)
+@@ -369,10 +384,10 @@ TEST(ffa_api, ffa_msg_wait_error)
+ 	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
+ 	ffa_result result = ffa_msg_wait(&msg);
+ 	LONGS_EQUAL(-1, result);
+-	msg_equal(0, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+-TEST(ffa_api, ffa_msg_wait_direct_req)
++TEST(ffa_api, ffa_msg_wait_direct_req_32)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -392,7 +407,31 @@ TEST(ffa_api, ffa_msg_wait_direct_req)
+ 	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
+ 	ffa_result result = ffa_msg_wait(&msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x8400006F, source_id, dest_id, arg0, arg1, arg2, arg3,
++	msg_equal_32(0x8400006F, source_id, dest_id, arg0, arg1, arg2, arg3,
++		   arg4);
++}
++
++TEST(ffa_api, ffa_msg_wait_direct_req_64)
++{
++	const uint16_t source_id = 0x1122;
++	const uint16_t dest_id = 0x3344;
++	const uint64_t arg0 = 0x0123456776543210ULL;
++	const uint64_t arg1 = 0x1234567887654321ULL;
++	const uint64_t arg2 = 0x2345678998765432ULL;
++	const uint64_t arg3 = 0x3456789aa9876543ULL;
++	const uint64_t arg4 = 0x456789abba987654ULL;
++
++	svc_result.a0 = 0xC400006F;
++	svc_result.a1 = ((uint32_t)source_id) << 16 | dest_id;
++	svc_result.a3 = arg0;
++	svc_result.a4 = arg1;
++	svc_result.a5 = arg2;
++	svc_result.a6 = arg3;
++	svc_result.a7 = arg4;
++	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
++	ffa_result result = ffa_msg_wait(&msg);
++	LONGS_EQUAL(0, result);
++	msg_equal_64(0xC400006F, source_id, dest_id, arg0, arg1, arg2, arg3,
+ 		   arg4);
+ }
+ 
+@@ -410,7 +449,7 @@ TEST(ffa_api, ffa_msg_wait_one_interrupt_success)
+ 	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
+ 	ffa_result result = ffa_msg_wait(&msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000061, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0x84000061, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+ TEST(ffa_api, ffa_msg_wait_two_interrupt_success)
+@@ -434,7 +473,7 @@ TEST(ffa_api, ffa_msg_wait_two_interrupt_success)
+ 	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
+ 	ffa_result result = ffa_msg_wait(&msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000061, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0x84000061, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+ TEST(ffa_api, ffa_msg_wait_unknown_response)
+@@ -448,7 +487,7 @@ TEST(ffa_api, ffa_msg_wait_unknown_response)
+ 	}
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_req_success)
++TEST(ffa_api, ffa_msg_send_direct_req_32_success)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -461,13 +500,13 @@ TEST(ffa_api, ffa_msg_send_direct_req_success)
+ 	svc_result.a0 = 0x84000061;
+ 	expect_ffa_svc(0x8400006F, ((uint32_t)source_id << 16) | dest_id, 0,
+ 		       arg0, arg1, arg2, arg3, arg4, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_req(
++	ffa_result result = ffa_msg_send_direct_req_32(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000061, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0x84000061, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_req_error)
++TEST(ffa_api, ffa_msg_send_direct_req_32_error)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -480,13 +519,48 @@ TEST(ffa_api, ffa_msg_send_direct_req_error)
+ 	setup_error_response(-1);
+ 	expect_ffa_svc(0x8400006F, ((uint32_t)source_id << 16) | dest_id, 0,
+ 		       arg0, arg1, arg2, arg3, arg4, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_req(
++	ffa_result result = ffa_msg_send_direct_req_32(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(-1, result);
+-	msg_equal(0, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0, 0, 0, 0, 0, 0, 0, 0);
++}
++
++TEST(ffa_api, ffa_msg_send_direct_req_32_get_resp_64)
++{
++	const uint16_t source_id = 0x1122;
++	const uint16_t dest_id = 0x3344;
++	const uint32_t arg0 = 0x01234567ULL;
++	const uint32_t arg1 = 0x12345678ULL;
++	const uint32_t arg2 = 0x23456789ULL;
++	const uint32_t arg3 = 0x3456789aULL;
++	const uint32_t arg4 = 0x456789abULL;
++	const uint16_t resp_source_id = 0x1221;
++	const uint16_t resp_dest_id = 0x3443;
++	const uint64_t resp_arg0 = 0x9012345665432109ULL;
++	const uint64_t resp_arg1 = 0xa12345677654321aULL;
++	const uint64_t resp_arg2 = 0xb23456788765432bULL;
++	const uint64_t resp_arg3 = 0xc34567899876543cULL;
++	const uint64_t resp_arg4 = 0xd456789aa987654dULL;
++	assert_environment_t assert_env;
++
++	svc_result.a0 = 0xC4000070;
++	svc_result.a1 = ((uint32_t)resp_source_id) << 16 | resp_dest_id;
++	svc_result.a3 = resp_arg0;
++	svc_result.a4 = resp_arg1;
++	svc_result.a5 = resp_arg2;
++	svc_result.a6 = resp_arg3;
++	svc_result.a7 = resp_arg4;
++
++	expect_ffa_svc(0x8400006F, ((uint32_t)source_id << 16) | dest_id, 0,
++		       arg0, arg1, arg2, arg3, arg4, &svc_result);
++
++	if (SETUP_ASSERT_ENVIRONMENT(assert_env)) {
++		ffa_msg_send_direct_req_32(source_id, dest_id, arg0, arg1, arg2,
++					arg3, arg4, &msg);
++	}
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_req_direct_resp)
++TEST(ffa_api, ffa_msg_send_direct_req_32_direct_resp)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -512,14 +586,82 @@ TEST(ffa_api, ffa_msg_send_direct_req_direct_resp)
+ 	svc_result.a7 = resp_arg4;
+ 	expect_ffa_svc(0x8400006F, ((uint32_t)source_id << 16) | dest_id, 0,
+ 		       arg0, arg1, arg2, arg3, arg4, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_req(
++	ffa_result result = ffa_msg_send_direct_req_32(
++		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
++	LONGS_EQUAL(0, result);
++	msg_equal_32(0x84000070, resp_source_id, resp_dest_id, resp_arg0,
++		   resp_arg1, resp_arg2, resp_arg3, resp_arg4);
++}
++
++TEST(ffa_api, ffa_msg_send_direct_req_64_success)
++{
++	const uint16_t source_id = 0x1122;
++	const uint16_t dest_id = 0x3344;
++	const uint64_t arg0 = 0x0123456776543210ULL;
++	const uint64_t arg1 = 0x1234567887654321ULL;
++	const uint64_t arg2 = 0x2345678998765432ULL;
++	const uint64_t arg3 = 0x3456789aa9876543ULL;
++	const uint64_t arg4 = 0x456789abba987654ULL;
++	const uint16_t resp_source_id = 0x1221;
++	const uint16_t resp_dest_id = 0x3443;
++	const uint64_t resp_arg0 = 0x9012345665432109ULL;
++	const uint64_t resp_arg1 = 0xa12345677654321aULL;
++	const uint64_t resp_arg2 = 0xb23456788765432bULL;
++	const uint64_t resp_arg3 = 0xc34567899876543cULL;
++	const uint64_t resp_arg4 = 0xd456789aa987654dULL;
++
++	svc_result.a0 = 0xC4000070;
++	svc_result.a1 = ((uint32_t)resp_source_id) << 16 | resp_dest_id;
++	svc_result.a3 = resp_arg0;
++	svc_result.a4 = resp_arg1;
++	svc_result.a5 = resp_arg2;
++	svc_result.a6 = resp_arg3;
++	svc_result.a7 = resp_arg4;
++	expect_ffa_svc(0xC400006F, ((uint32_t)source_id << 16) | dest_id, 0,
++		       arg0, arg1, arg2, arg3, arg4, &svc_result);
++	ffa_result result = ffa_msg_send_direct_req_64(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000070, resp_source_id, resp_dest_id, resp_arg0,
++	msg_equal_64(0xC4000070, resp_source_id, resp_dest_id, resp_arg0,
+ 		   resp_arg1, resp_arg2, resp_arg3, resp_arg4);
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_req_one_interrupt_success)
++TEST(ffa_api, ffa_msg_send_direct_req_64_get_resp_32)
++{
++	const uint16_t source_id = 0x1122;
++	const uint16_t dest_id = 0x3344;
++	const uint64_t arg0 = 0x9012345665432109ULL;
++	const uint64_t arg1 = 0xa12345677654321aULL;
++	const uint64_t arg2 = 0xb23456788765432bULL;
++	const uint64_t arg3 = 0xc34567899876543cULL;
++	const uint64_t arg4 = 0xd456789aa987654dULL;
++	const uint16_t resp_source_id = 0x1221;
++	const uint16_t resp_dest_id = 0x3443;
++	const uint32_t resp_arg0 = 0x01234567ULL;
++	const uint32_t resp_arg1 = 0x12345678ULL;
++	const uint32_t resp_arg2 = 0x23456789ULL;
++	const uint32_t resp_arg3 = 0x3456789aULL;
++	const uint32_t resp_arg4 = 0x456789abULL;
++	assert_environment_t assert_env;
++
++	svc_result.a0 = 0x84000070;
++	svc_result.a1 = ((uint32_t)resp_source_id) << 16 | resp_dest_id;
++	svc_result.a3 = resp_arg0;
++	svc_result.a4 = resp_arg1;
++	svc_result.a5 = resp_arg2;
++	svc_result.a6 = resp_arg3;
++	svc_result.a7 = resp_arg4;
++
++	expect_ffa_svc(0xC400006F, ((uint32_t)source_id << 16) | dest_id, 0,
++		       arg0, arg1, arg2, arg3, arg4, &svc_result);
++
++	if (SETUP_ASSERT_ENVIRONMENT(assert_env)) {
++		ffa_msg_send_direct_req_64(source_id, dest_id, arg0, arg1, arg2,
++					arg3, arg4, &msg);
++	}
++}
++
++TEST(ffa_api, ffa_msg_send_direct_req_32_one_interrupt_success)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -539,13 +681,13 @@ TEST(ffa_api, ffa_msg_send_direct_req_one_interrupt_success)
+ 
+ 	svc_result.a0 = 0x84000061;
+ 	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_req(
++	ffa_result result = ffa_msg_send_direct_req_32(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000061, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0x84000061, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_req_two_interrupt_success)
++TEST(ffa_api, ffa_msg_send_direct_req_32_two_interrupt_success)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -573,13 +715,13 @@ TEST(ffa_api, ffa_msg_send_direct_req_two_interrupt_success)
+ 
+ 	svc_result.a0 = 0x84000061;
+ 	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_req(
++	ffa_result result = ffa_msg_send_direct_req_32(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000061, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0x84000061, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_req_unknown_response)
++TEST(ffa_api, ffa_msg_send_direct_req_32_unknown_response)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -594,12 +736,12 @@ TEST(ffa_api, ffa_msg_send_direct_req_unknown_response)
+ 	expect_ffa_svc(0x8400006F, ((uint32_t)source_id << 16) | dest_id, 0,
+ 		       arg0, arg1, arg2, arg3, arg4, &svc_result);
+ 	if (SETUP_ASSERT_ENVIRONMENT(assert_env)) {
+-		ffa_msg_send_direct_req(source_id, dest_id, arg0, arg1, arg2,
++		ffa_msg_send_direct_req_32(source_id, dest_id, arg0, arg1, arg2,
+ 					arg3, arg4, &msg);
+ 	}
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_resp_success)
++TEST(ffa_api, ffa_msg_send_direct_resp_32_success)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -612,13 +754,13 @@ TEST(ffa_api, ffa_msg_send_direct_resp_success)
+ 	svc_result.a0 = 0x84000061;
+ 	expect_ffa_svc(0x84000070, ((uint32_t)source_id << 16) | dest_id, 0,
+ 		       arg0, arg1, arg2, arg3, arg4, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_resp(
++	ffa_result result = ffa_msg_send_direct_resp_32(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000061, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0x84000061, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_resp_error)
++TEST(ffa_api, ffa_msg_send_direct_resp_32_error)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -631,13 +773,13 @@ TEST(ffa_api, ffa_msg_send_direct_resp_error)
+ 	setup_error_response(-1);
+ 	expect_ffa_svc(0x84000070, ((uint32_t)source_id << 16) | dest_id, 0,
+ 		       arg0, arg1, arg2, arg3, arg4, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_resp(
++	ffa_result result = ffa_msg_send_direct_resp_32(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(-1, result);
+-	msg_equal(0, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_resp_then_get_direct_req_as_response)
++TEST(ffa_api, ffa_msg_send_direct_resp_32_then_get_direct_req_32_as_response)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -663,14 +805,113 @@ TEST(ffa_api, ffa_msg_send_direct_resp_then_get_direct_req_as_response)
+ 	svc_result.a7 = resp_arg4;
+ 	expect_ffa_svc(0x84000070, ((uint32_t)source_id << 16) | dest_id, 0,
+ 		       arg0, arg1, arg2, arg3, arg4, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_resp(
++	ffa_result result = ffa_msg_send_direct_resp_32(
++		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
++	LONGS_EQUAL(0, result);
++	msg_equal_32(0x8400006F, resp_source_id, resp_dest_id, resp_arg0,
++		   resp_arg1, resp_arg2, resp_arg3, resp_arg4);
++}
++
++TEST(ffa_api, ffa_msg_send_direct_resp_32_then_get_direct_req_64_as_response)
++{
++	const uint16_t source_id = 0x1122;
++	const uint16_t dest_id = 0x3344;
++	const uint32_t arg0 = 0x01234567ULL;
++	const uint32_t arg1 = 0x12345678ULL;
++	const uint32_t arg2 = 0x23456789ULL;
++	const uint32_t arg3 = 0x3456789aULL;
++	const uint32_t arg4 = 0x456789abULL;
++	const uint16_t resp_source_id = 0x1221;
++	const uint16_t resp_dest_id = 0x3443;
++	const uint64_t resp_arg0 = 0x9012345665432109ULL;
++	const uint64_t resp_arg1 = 0xa12345677654321aULL;
++	const uint64_t resp_arg2 = 0xb23456788765432bULL;
++	const uint64_t resp_arg3 = 0xc34567899876543cULL;
++	const uint64_t resp_arg4 = 0xd456789aa987654dULL;
++
++	svc_result.a0 = 0xC400006F;
++	svc_result.a1 = ((uint32_t)resp_source_id) << 16 | resp_dest_id;
++	svc_result.a3 = resp_arg0;
++	svc_result.a4 = resp_arg1;
++	svc_result.a5 = resp_arg2;
++	svc_result.a6 = resp_arg3;
++	svc_result.a7 = resp_arg4;
++	expect_ffa_svc(0x84000070, ((uint32_t)source_id << 16) | dest_id, 0,
++		       arg0, arg1, arg2, arg3, arg4, &svc_result);
++	ffa_result result = ffa_msg_send_direct_resp_32(
++		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
++	LONGS_EQUAL(0, result);
++	msg_equal_64(0xC400006F, resp_source_id, resp_dest_id, resp_arg0,
++		   resp_arg1, resp_arg2, resp_arg3, resp_arg4);
++}
++
++TEST(ffa_api, ffa_msg_send_direct_resp_64_then_get_direct_req_32_as_response)
++{
++	const uint16_t source_id = 0x1122;
++	const uint16_t dest_id = 0x3344;
++	const uint64_t arg0 = 0x9012345665432109ULL;
++	const uint64_t arg1 = 0xa12345677654321aULL;
++	const uint64_t arg2 = 0xb23456788765432bULL;
++	const uint64_t arg3 = 0xc34567899876543cULL;
++	const uint64_t arg4 = 0xd456789aa987654dULL;
++	const uint16_t resp_source_id = 0x1221;
++	const uint16_t resp_dest_id = 0x3443;
++	const uint32_t resp_arg0 = 0x01234567ULL;
++	const uint32_t resp_arg1 = 0x12345678ULL;
++	const uint32_t resp_arg2 = 0x23456789ULL;
++	const uint32_t resp_arg3 = 0x3456789aULL;
++	const uint32_t resp_arg4 = 0x456789abULL;
++
++	svc_result.a0 = 0x8400006F;
++	svc_result.a1 = ((uint32_t)resp_source_id) << 16 | resp_dest_id;
++	svc_result.a3 = resp_arg0;
++	svc_result.a4 = resp_arg1;
++	svc_result.a5 = resp_arg2;
++	svc_result.a6 = resp_arg3;
++	svc_result.a7 = resp_arg4;
++	expect_ffa_svc(0xC4000070, ((uint32_t)source_id << 16) | dest_id, 0,
++		       arg0, arg1, arg2, arg3, arg4, &svc_result);
++	ffa_result result = ffa_msg_send_direct_resp_64(
++		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
++	LONGS_EQUAL(0, result);
++	msg_equal_32(0x8400006F, resp_source_id, resp_dest_id, resp_arg0,
++		   resp_arg1, resp_arg2, resp_arg3, resp_arg4);
++}
++
++TEST(ffa_api, ffa_msg_send_direct_resp_64_then_get_direct_req_64_as_response)
++{
++	const uint16_t source_id = 0x1122;
++	const uint16_t dest_id = 0x3344;
++	const uint64_t arg0 = 0x0123456776543210ULL;
++	const uint64_t arg1 = 0x1234567887654321ULL;
++	const uint64_t arg2 = 0x2345678998765432ULL;
++	const uint64_t arg3 = 0x3456789aa9876543ULL;
++	const uint64_t arg4 = 0x456789abba987654ULL;
++	const uint16_t resp_source_id = 0x1221;
++	const uint16_t resp_dest_id = 0x3443;
++	const uint64_t resp_arg0 = 0x9012345665432109ULL;
++	const uint64_t resp_arg1 = 0xa12345677654321aULL;
++	const uint64_t resp_arg2 = 0xb23456788765432bULL;
++	const uint64_t resp_arg3 = 0xc34567899876543cULL;
++	const uint64_t resp_arg4 = 0xd456789aa987654dULL;
++
++	svc_result.a0 = 0xC400006F;
++	svc_result.a1 = ((uint32_t)resp_source_id) << 16 | resp_dest_id;
++	svc_result.a3 = resp_arg0;
++	svc_result.a4 = resp_arg1;
++	svc_result.a5 = resp_arg2;
++	svc_result.a6 = resp_arg3;
++	svc_result.a7 = resp_arg4;
++	expect_ffa_svc(0xC4000070, ((uint32_t)source_id << 16) | dest_id, 0,
++		       arg0, arg1, arg2, arg3, arg4, &svc_result);
++	ffa_result result = ffa_msg_send_direct_resp_64(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x8400006F, resp_source_id, resp_dest_id, resp_arg0,
++	msg_equal_64(0xC400006F, resp_source_id, resp_dest_id, resp_arg0,
+ 		   resp_arg1, resp_arg2, resp_arg3, resp_arg4);
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_resp_one_interrupt_success)
++TEST(ffa_api, ffa_msg_send_direct_resp_32_one_interrupt_success)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -690,13 +931,13 @@ TEST(ffa_api, ffa_msg_send_direct_resp_one_interrupt_success)
+ 
+ 	svc_result.a0 = 0x84000061;
+ 	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_resp(
++	ffa_result result = ffa_msg_send_direct_resp_32(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000061, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0x84000061, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_resp_two_interrupt_success)
++TEST(ffa_api, ffa_msg_send_direct_resp_32_two_interrupt_success)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -724,13 +965,13 @@ TEST(ffa_api, ffa_msg_send_direct_resp_two_interrupt_success)
+ 
+ 	svc_result.a0 = 0x84000061;
+ 	expect_ffa_svc(0x8400006B, 0, 0, 0, 0, 0, 0, 0, &svc_result);
+-	ffa_result result = ffa_msg_send_direct_resp(
++	ffa_result result = ffa_msg_send_direct_resp_32(
+ 		source_id, dest_id, arg0, arg1, arg2, arg3, arg4, &msg);
+ 	LONGS_EQUAL(0, result);
+-	msg_equal(0x84000061, 0, 0, 0, 0, 0, 0, 0);
++	msg_equal_32(0x84000061, 0, 0, 0, 0, 0, 0, 0);
+ }
+ 
+-TEST(ffa_api, ffa_msg_send_direct_resp_unknown_response)
++TEST(ffa_api, ffa_msg_send_direct_resp_32_unknown_response)
+ {
+ 	const uint16_t source_id = 0x1122;
+ 	const uint16_t dest_id = 0x3344;
+@@ -745,7 +986,7 @@ TEST(ffa_api, ffa_msg_send_direct_resp_unknown_response)
+ 	expect_ffa_svc(0x84000070, ((uint32_t)source_id << 16) | dest_id, 0,
+ 		       arg0, arg1, arg2, arg3, arg4, &svc_result);
+ 	if (SETUP_ASSERT_ENVIRONMENT(assert_env)) {
+-		ffa_msg_send_direct_resp(source_id, dest_id, arg0, arg1, arg2,
++		ffa_msg_send_direct_resp_32(source_id, dest_id, arg0, arg1, arg2,
+ 					 arg3, arg4, &msg);
+ 	}
+ }
+diff --git a/components/messaging/ffa/libsp/test/test_sp_messaging.cpp b/components/messaging/ffa/libsp/test/test_sp_messaging.cpp
+index 78bf8bf..786f66e 100644
+--- a/components/messaging/ffa/libsp/test/test_sp_messaging.cpp
++++ b/components/messaging/ffa/libsp/test/test_sp_messaging.cpp
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: BSD-3-Clause
+ /*
+- * Copyright (c) 2021, Arm Limited. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited. All rights reserved.
+  */
+ 
+ #include <CppUTest/TestHarness.h>
+@@ -32,7 +32,7 @@ TEST_GROUP(sp_messaging)
+ 		mock().clear();
+ 	}
+ 
+-	void copy_sp_to_ffa_args(const uint32_t sp_args[], uint32_t ffa_args[])
++	void copy_sp_to_ffa_args_32(const uint32_t sp_args[], uint32_t ffa_args[])
+ 	{
+ 		int i = 0;
+ 
+@@ -41,7 +41,16 @@ TEST_GROUP(sp_messaging)
+ 		}
+ 	}
+ 
+-	void fill_ffa_msg(struct ffa_direct_msg * msg)
++	void copy_sp_to_ffa_args_64(const uint64_t sp_args[], uint64_t ffa_args[])
++	{
++		int i = 0;
++
++		for (i = 0; i < SP_MSG_ARG_COUNT; i++) {
++			ffa_args[i + SP_MSG_ARG_OFFSET] = sp_args[i];
++		}
++	}
++
++	void fill_ffa_msg_32(struct ffa_direct_msg * msg)
+ 	{
+ 		int i = 0;
+ 
+@@ -49,20 +58,47 @@ TEST_GROUP(sp_messaging)
+ 		msg->source_id = source_id;
+ 		msg->destination_id = dest_id;
+ 
+-		msg->args[0] = 0;
++		msg->args.args32[0] = 0;
+ 		for (i = 0; i < SP_MSG_ARG_COUNT; i++) {
+-			msg->args[i + SP_MSG_ARG_OFFSET] = args[i];
++			msg->args.args32[i + SP_MSG_ARG_OFFSET] = args32[i];
+ 		}
+ 	}
+ 
+-	void fill_sp_msg(struct sp_msg * msg)
++	void fill_ffa_msg_64(struct ffa_direct_msg * msg)
+ 	{
+ 		int i = 0;
+ 
++		msg->function_id = FFA_MSG_SEND_DIRECT_REQ_64;
+ 		msg->source_id = source_id;
+ 		msg->destination_id = dest_id;
++
++		msg->args.args64[0] = 0;
+ 		for (i = 0; i < SP_MSG_ARG_COUNT; i++) {
+-			msg->args[i] = args[i + SP_MSG_ARG_OFFSET];
++			msg->args.args64[i + SP_MSG_ARG_OFFSET] = args64[i];
++		}
++	}
++
++	void fill_sp_msg_32(struct sp_msg * msg)
++	{
++		int i = 0;
++
++		msg->source_id = source_id;
++		msg->destination_id = dest_id;
++		msg->is_64bit_message = false;
++		for (i = 0; i < SP_MSG_ARG_COUNT; i++) {
++			msg->args.args32[i] = args32[i + SP_MSG_ARG_OFFSET];
++		}
++	}
++
++	void fill_sp_msg_64(struct sp_msg * msg)
++	{
++		int i = 0;
++
++		msg->source_id = source_id;
++		msg->destination_id = dest_id;
++		msg->is_64bit_message = true;
++		for (i = 0; i < SP_MSG_ARG_COUNT; i++) {
++			msg->args.args64[i] = args64[i + SP_MSG_ARG_OFFSET];
+ 		}
+ 	}
+ 
+@@ -74,10 +110,19 @@ TEST_GROUP(sp_messaging)
+ 		UNSIGNED_LONGS_EQUAL(ffa_msg->source_id, sp_msg->source_id);
+ 		UNSIGNED_LONGS_EQUAL(ffa_msg->destination_id,
+ 				     sp_msg->destination_id);
+-		for (i = 0; i < SP_MSG_ARG_COUNT; i++) {
+-			UNSIGNED_LONGS_EQUAL(
+-				ffa_msg->args[i + SP_MSG_ARG_OFFSET],
+-				sp_msg->args[i]);
++		CHECK_EQUAL(FFA_IS_64_BIT_FUNC(ffa_msg->function_id), sp_msg->is_64bit_message);
++		if (sp_msg->is_64bit_message) {
++			for (i = 0; i < SP_MSG_ARG_COUNT; i++) {
++				UNSIGNED_LONGS_EQUAL(
++					ffa_msg->args.args64[i + SP_MSG_ARG_OFFSET],
++					sp_msg->args.args64[i]);
++			}
++		} else {
++			for (i = 0; i < SP_MSG_ARG_COUNT; i++) {
++				UNSIGNED_LONGS_EQUAL(
++					ffa_msg->args.args32[i + SP_MSG_ARG_OFFSET],
++					sp_msg->args.args32[i]);
++			}
+ 		}
+ 	}
+ 
+@@ -87,7 +132,7 @@ TEST_GROUP(sp_messaging)
+ 		struct ffa_direct_msg expected_ffa_req = { 0 };
+ 		struct sp_msg req = { 0 };
+ 
+-		fill_ffa_msg(&expected_ffa_req);
++		fill_ffa_msg_32(&expected_ffa_req);
+ 		expected_ffa_req.source_id = source_id;
+ 		expected_ffa_req.destination_id = dest_id;
+ 		expect_ffa_msg_wait(&expected_ffa_req, FFA_OK);
+@@ -103,8 +148,10 @@ TEST_GROUP(sp_messaging)
+ 
+ 	const uint16_t source_id = 0x1234;
+ 	const uint16_t dest_id = 0x5678;
+-	const uint32_t args[SP_MSG_ARG_COUNT] = { 0x01234567, 0x12345678,
+-						  0x23456789, 0x3456789a };
++	const uint32_t args32[SP_MSG_ARG_COUNT] = { 0x01234567, 0x12345678,
++						    0x23456789, 0x3456789a };
++	const uint64_t args64[SP_MSG_ARG_COUNT] = { 0x0123456776543210, 0x1234567887654321,
++						    0x2345678998765432, 0x3456789aa9876543 };
+ 	const sp_result result = -1;
+ 	const sp_msg empty_sp_msg = (const sp_msg){ 0 };
+ };
+@@ -126,7 +173,7 @@ TEST(sp_messaging, sp_msg_wait_ffa_error)
+ 
+ TEST(sp_messaging, sp_msg_wait)
+ {
+-	fill_ffa_msg(&ffa_msg);
++	fill_ffa_msg_32(&ffa_msg);
+ 	expect_ffa_msg_wait(&ffa_msg, FFA_OK);
+ 
+ 	LONGS_EQUAL(SP_RESULT_OK, sp_msg_wait(&req));
+@@ -139,12 +186,12 @@ TEST(sp_messaging, sp_msg_wait_deny_rc_failure)
+ 	struct ffa_direct_msg rc_msg = { 0 };
+ 	ffa_result result = FFA_ABORTED;
+ 
+-	fill_ffa_msg(&rc_msg);
+-	rc_msg.args[0] = ROUTING_EXT_RC_BIT;
++	fill_ffa_msg_32(&rc_msg);
++	rc_msg.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 	expect_ffa_msg_wait(&rc_msg, FFA_OK);
+ 
+-	fill_ffa_msg(&ffa_msg);
+-	expect_ffa_msg_send_direct_resp(
++	fill_ffa_msg_32(&ffa_msg);
++	expect_ffa_msg_send_direct_resp_32(
+ 		rc_msg.destination_id, rc_msg.source_id,
+ 		ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT,
+ 		SP_RESULT_FFA(FFA_DENIED), 0, 0, 0, &ffa_msg, result);
+@@ -157,12 +204,12 @@ TEST(sp_messaging, sp_msg_wait_deny_rc)
+ {
+ 	struct ffa_direct_msg rc_msg = { 0 };
+ 
+-	fill_ffa_msg(&rc_msg);
+-	rc_msg.args[0] = ROUTING_EXT_RC_BIT;
++	fill_ffa_msg_32(&rc_msg);
++	rc_msg.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 	expect_ffa_msg_wait(&rc_msg, FFA_OK);
+ 
+-	fill_ffa_msg(&ffa_msg);
+-	expect_ffa_msg_send_direct_resp(
++	fill_ffa_msg_32(&ffa_msg);
++	expect_ffa_msg_send_direct_resp_32(
+ 		rc_msg.destination_id, rc_msg.source_id,
+ 		ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT,
+ 		SP_RESULT_FFA(FFA_DENIED), 0, 0, 0, &ffa_msg, FFA_OK);
+@@ -191,10 +238,10 @@ TEST(sp_messaging, sp_msg_send_direct_req_ffa_error)
+ 	ffa_result result = FFA_ABORTED;
+ 	uint32_t expected_ffa_args[5] = { 0 };
+ 
+-	fill_sp_msg(&req);
++	fill_sp_msg_32(&req);
+ 	memset(&resp, 0x5a, sizeof(resp));
+-	copy_sp_to_ffa_args(req.args, expected_ffa_args);
+-	expect_ffa_msg_send_direct_req(
++	copy_sp_to_ffa_args_32(req.args.args32, expected_ffa_args);
++	expect_ffa_msg_send_direct_req_32(
+ 		req.source_id, req.destination_id, expected_ffa_args[0],
+ 		expected_ffa_args[1], expected_ffa_args[2],
+ 		expected_ffa_args[3], expected_ffa_args[4], &ffa_msg, result);
+@@ -203,14 +250,30 @@ TEST(sp_messaging, sp_msg_send_direct_req_ffa_error)
+ 	MEMCMP_EQUAL(&empty_sp_msg, &resp, sizeof(empty_sp_msg));
+ }
+ 
+-TEST(sp_messaging, sp_msg_send_direct_req_msg)
++TEST(sp_messaging, sp_msg_send_direct_req_msg_32)
+ {
+ 	uint32_t expected_ffa_args[5] = { 0 };
+ 
+-	fill_sp_msg(&req);
+-	fill_ffa_msg(&ffa_msg);
+-	copy_sp_to_ffa_args(req.args, expected_ffa_args);
+-	expect_ffa_msg_send_direct_req(
++	fill_sp_msg_32(&req);
++	fill_ffa_msg_32(&ffa_msg);
++	copy_sp_to_ffa_args_32(req.args.args32, expected_ffa_args);
++	expect_ffa_msg_send_direct_req_32(
++		req.source_id, req.destination_id, expected_ffa_args[0],
++		expected_ffa_args[1], expected_ffa_args[2],
++		expected_ffa_args[3], expected_ffa_args[4], &ffa_msg, FFA_OK);
++
++	LONGS_EQUAL(SP_RESULT_OK, sp_msg_send_direct_req(&req, &resp));
++	ffa_and_sp_msg_equal(&ffa_msg, &resp);
++}
++
++TEST(sp_messaging, sp_msg_send_direct_req_msg_64)
++{
++	uint64_t expected_ffa_args[5] = { 0 };
++
++	fill_sp_msg_64(&req);
++	fill_ffa_msg_64(&ffa_msg);
++	copy_sp_to_ffa_args_64(req.args.args64, expected_ffa_args);
++	expect_ffa_msg_send_direct_req_64(
+ 		req.source_id, req.destination_id, expected_ffa_args[0],
+ 		expected_ffa_args[1], expected_ffa_args[2],
+ 		expected_ffa_args[3], expected_ffa_args[4], &ffa_msg, FFA_OK);
+@@ -223,10 +286,10 @@ TEST(sp_messaging, sp_msg_send_direct_req_success)
+ {
+ 	uint32_t expected_ffa_args[5] = { 0 };
+ 
+-	fill_sp_msg(&req);
++	fill_sp_msg_32(&req);
+ 	ffa_msg.function_id = FFA_SUCCESS_32;
+-	copy_sp_to_ffa_args(req.args, expected_ffa_args);
+-	expect_ffa_msg_send_direct_req(
++	copy_sp_to_ffa_args_32(req.args.args32, expected_ffa_args);
++	expect_ffa_msg_send_direct_req_32(
+ 		req.source_id, req.destination_id, expected_ffa_args[0],
+ 		expected_ffa_args[1], expected_ffa_args[2],
+ 		expected_ffa_args[3], expected_ffa_args[4], &ffa_msg, FFA_OK);
+@@ -248,54 +311,54 @@ TEST(sp_messaging, sp_msg_send_direct_req_rc_forwarding_success)
+ 	sp_msg sp_req = { 0 };
+ 	sp_msg sp_resp = { 0 };
+ 
+-	fill_sp_msg(&sp_req);
++	fill_sp_msg_32(&sp_req);
+ 	sp_req.source_id = own_id;
+ 	sp_req.destination_id = rc_root_id;
+ 
+ 	req.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	req.source_id = own_id;
+ 	req.destination_id = rc_root_id;
+-	copy_sp_to_ffa_args(sp_req.args, req.args);
++	copy_sp_to_ffa_args_32(sp_req.args.args32, req.args.args32);
+ 
+-	fill_ffa_msg(&rc_req);
++	fill_ffa_msg_32(&rc_req);
+ 	rc_req.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	rc_req.source_id = rc_root_id;
+ 	rc_req.destination_id = own_id;
+-	rc_req.args[0] = ROUTING_EXT_RC_BIT;
++	rc_req.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_ffa_msg(&rc_resp);
++	fill_ffa_msg_32(&rc_resp);
+ 	rc_resp.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	rc_resp.source_id = root_id;
+ 	rc_resp.destination_id = own_id;
+-	rc_resp.args[0] = ROUTING_EXT_RC_BIT;
++	rc_resp.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_sp_msg(&sp_resp);
++	fill_sp_msg_32(&sp_resp);
+ 	sp_resp.source_id = rc_root_id;
+ 	sp_resp.destination_id = own_id;
+ 
+ 	resp.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	resp.source_id = rc_root_id;
+ 	resp.destination_id = own_id;
+-	copy_sp_to_ffa_args(sp_resp.args, resp.args);
++	copy_sp_to_ffa_args_32(sp_resp.args.args32, resp.args.args32);
+ 
+ 	/* Initial request to current SP to set own_id */
+ 	wait_and_receive_request(root_id, own_id);
+ 
+ 	/* Sending request and receiving RC request from RC root */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, 0, req.args[1],
+-				       req.args[2], req.args[3], req.args[4],
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, 0, req.args.args32[1],
++				       req.args.args32[2], req.args.args32[3], req.args.args32[4],
+ 				       &rc_req, FFA_OK);
+ 
+ 	/* Forwarding RC request to root and receiving RC response */
+-	expect_ffa_msg_send_direct_resp(own_id, root_id, rc_req.args[0],
+-					rc_req.args[1], rc_req.args[2],
+-					rc_req.args[3], rc_req.args[4],
++	expect_ffa_msg_send_direct_resp_32(own_id, root_id, rc_req.args.args32[0],
++					rc_req.args.args32[1], rc_req.args.args32[2],
++					rc_req.args.args32[3], rc_req.args.args32[4],
+ 					&rc_resp, FFA_OK);
+ 
+ 	/* Fowarding RC response to RC root and receiving response */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, rc_resp.args[0],
+-				       rc_resp.args[1], rc_resp.args[2],
+-				       rc_resp.args[3], rc_resp.args[4], &resp,
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, rc_resp.args.args32[0],
++				       rc_resp.args.args32[1], rc_resp.args.args32[2],
++				       rc_resp.args.args32[3], rc_resp.args.args32[4], &resp,
+ 				       FFA_OK);
+ 
+ 	LONGS_EQUAL(SP_RESULT_OK, sp_msg_send_direct_req(&sp_req, &sp_resp));
+@@ -312,28 +375,28 @@ TEST(sp_messaging, sp_msg_send_direct_req_rc_error)
+ 	sp_msg sp_req = { 0 };
+ 	sp_msg sp_resp = { 0 };
+ 
+-	fill_sp_msg(&sp_req);
++	fill_sp_msg_32(&sp_req);
+ 	sp_req.source_id = own_id;
+ 	sp_req.destination_id = rc_root_id;
+ 
+ 	req.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	req.source_id = own_id;
+ 	req.destination_id = rc_root_id;
+-	copy_sp_to_ffa_args(sp_req.args, req.args);
++	copy_sp_to_ffa_args_32(sp_req.args.args32, req.args.args32);
+ 
+-	fill_ffa_msg(&rc_err);
++	fill_ffa_msg_32(&rc_err);
+ 	rc_err.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	rc_err.source_id = rc_root_id;
+ 	rc_err.destination_id = own_id;
+-	rc_err.args[0] = ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT;
+-	rc_err.args[1] = result;
++	rc_err.args.args32[0] = ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT;
++	rc_err.args.args32[1] = result;
+ 
+ 	/* Initial request to current SP to set own_id */
+ 	wait_and_receive_request(root_id, own_id);
+ 
+ 	/* Sending request and receiving RC request from RC root */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, 0, req.args[1],
+-				       req.args[2], req.args[3], req.args[4],
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, 0, req.args.args32[1],
++				       req.args.args32[2], req.args.args32[3], req.args.args32[4],
+ 				       &rc_err, FFA_OK);
+ 
+ 	LONGS_EQUAL(SP_RESULT_FFA(result),
+@@ -354,64 +417,64 @@ TEST(sp_messaging, sp_msg_send_direct_req_rc_forwarding_success_deny_request)
+ 	sp_msg sp_req = { 0 };
+ 	sp_msg sp_resp = { 0 };
+ 
+-	fill_sp_msg(&sp_req);
++	fill_sp_msg_32(&sp_req);
+ 	sp_req.source_id = own_id;
+ 	sp_req.destination_id = rc_root_id;
+ 
+ 	req.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	req.source_id = own_id;
+ 	req.destination_id = rc_root_id;
+-	copy_sp_to_ffa_args(sp_req.args, req.args);
++	copy_sp_to_ffa_args_32(sp_req.args.args32, req.args.args32);
+ 
+-	fill_ffa_msg(&rc_req);
++	fill_ffa_msg_32(&rc_req);
+ 	rc_req.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	rc_req.source_id = rc_root_id;
+ 	rc_req.destination_id = own_id;
+-	rc_req.args[0] = ROUTING_EXT_RC_BIT;
++	rc_req.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+ 	request_to_deny.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	request_to_deny.source_id = root_id;
+ 	request_to_deny.destination_id = own_id;
+-	request_to_deny.args[0] = 0;
++	request_to_deny.args.args32[0] = 0;
+ 
+-	fill_ffa_msg(&rc_resp);
++	fill_ffa_msg_32(&rc_resp);
+ 	rc_resp.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	rc_resp.source_id = root_id;
+ 	rc_resp.destination_id = own_id;
+-	rc_resp.args[0] = ROUTING_EXT_RC_BIT;
++	rc_resp.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_sp_msg(&sp_resp);
++	fill_sp_msg_32(&sp_resp);
+ 	sp_resp.source_id = rc_root_id;
+ 	sp_resp.destination_id = own_id;
+ 
+ 	resp.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	resp.source_id = rc_root_id;
+ 	resp.destination_id = own_id;
+-	copy_sp_to_ffa_args(sp_resp.args, resp.args);
++	copy_sp_to_ffa_args_32(sp_resp.args.args32, resp.args.args32);
+ 
+ 	/* Initial request to current SP to set own_id */
+ 	wait_and_receive_request(root_id, own_id);
+ 
+ 	/* Sending request and receiving RC request from RC root */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, 0, req.args[1],
+-				       req.args[2], req.args[3], req.args[4],
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, 0, req.args.args32[1],
++				       req.args.args32[2], req.args.args32[3], req.args.args32[4],
+ 				       &rc_req, FFA_OK);
+ 
+ 	/* Forwarding RC request to root and receiving a request to deny */
+-	expect_ffa_msg_send_direct_resp(own_id, root_id, rc_req.args[0],
+-					rc_req.args[1], rc_req.args[2],
+-					rc_req.args[3], rc_req.args[4],
++	expect_ffa_msg_send_direct_resp_32(own_id, root_id, rc_req.args.args32[0],
++					rc_req.args.args32[1], rc_req.args.args32[2],
++					rc_req.args.args32[3], rc_req.args.args32[4],
+ 					&request_to_deny, FFA_OK);
+ 
+ 	/* Sending error to root and receiving RC response */
+-	expect_ffa_msg_send_direct_resp(
++	expect_ffa_msg_send_direct_resp_32(
+ 		own_id, root_id, ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT,
+ 		SP_RESULT_FFA(FFA_BUSY), 0, 0, 0, &rc_resp, FFA_OK);
+ 
+ 	/* Fowarding RC response to RC root and receiving response */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, rc_resp.args[0],
+-				       rc_resp.args[1], rc_resp.args[2],
+-				       rc_resp.args[3], rc_resp.args[4], &resp,
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, rc_resp.args.args32[0],
++				       rc_resp.args.args32[1], rc_resp.args.args32[2],
++				       rc_resp.args.args32[3], rc_resp.args.args32[4], &resp,
+ 				       FFA_OK);
+ 
+ 	LONGS_EQUAL(SP_RESULT_OK, sp_msg_send_direct_req(&sp_req, &sp_resp));
+@@ -431,65 +494,65 @@ TEST(sp_messaging, sp_msg_send_direct_req_rc_forwarding_success_invalid_req_src)
+ 	sp_msg sp_req = { 0 };
+ 	sp_msg sp_resp = { 0 };
+ 
+-	fill_sp_msg(&sp_req);
++	fill_sp_msg_32(&sp_req);
+ 	sp_req.source_id = own_id;
+ 	sp_req.destination_id = rc_root_id;
+ 
+ 	req.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	req.source_id = own_id;
+ 	req.destination_id = rc_root_id;
+-	copy_sp_to_ffa_args(sp_req.args, req.args);
++	copy_sp_to_ffa_args_32(sp_req.args.args32, req.args.args32);
+ 
+-	fill_ffa_msg(&rc_req);
++	fill_ffa_msg_32(&rc_req);
+ 	rc_req.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	rc_req.source_id = rc_root_id;
+ 	rc_req.destination_id = own_id;
+-	rc_req.args[0] = ROUTING_EXT_RC_BIT;
++	rc_req.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+ 	request_to_deny.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	/* This source ID should be denied in the current state. */
+ 	request_to_deny.source_id = rc_root_id;
+ 	request_to_deny.destination_id = own_id;
+-	request_to_deny.args[0] = ROUTING_EXT_RC_BIT;
++	request_to_deny.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_ffa_msg(&rc_resp);
++	fill_ffa_msg_32(&rc_resp);
+ 	rc_resp.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	rc_resp.source_id = root_id;
+ 	rc_resp.destination_id = own_id;
+-	rc_resp.args[0] = ROUTING_EXT_RC_BIT;
++	rc_resp.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_sp_msg(&sp_resp);
++	fill_sp_msg_32(&sp_resp);
+ 	sp_resp.source_id = rc_root_id;
+ 	sp_resp.destination_id = own_id;
+ 
+ 	resp.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	resp.source_id = rc_root_id;
+ 	resp.destination_id = own_id;
+-	copy_sp_to_ffa_args(sp_resp.args, resp.args);
++	copy_sp_to_ffa_args_32(sp_resp.args.args32, resp.args.args32);
+ 
+ 	/* Initial request to current SP to set own_id */
+ 	wait_and_receive_request(root_id, own_id);
+ 
+ 	/* Sending request and receiving RC request from RC root */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, 0, req.args[1],
+-				       req.args[2], req.args[3], req.args[4],
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, 0, req.args.args32[1],
++				       req.args.args32[2], req.args.args32[3], req.args.args32[4],
+ 				       &rc_req, FFA_OK);
+ 
+ 	/* Forwarding RC request to root and receiving RC response */
+-	expect_ffa_msg_send_direct_resp(own_id, root_id, rc_req.args[0],
+-					rc_req.args[1], rc_req.args[2],
+-					rc_req.args[3], rc_req.args[4],
++	expect_ffa_msg_send_direct_resp_32(own_id, root_id, rc_req.args.args32[0],
++					rc_req.args.args32[1], rc_req.args.args32[2],
++					rc_req.args.args32[3], rc_req.args.args32[4],
+ 					&request_to_deny, FFA_OK);
+ 
+ 	/* Sending error to root and receiving RC response */
+-	expect_ffa_msg_send_direct_resp(
++	expect_ffa_msg_send_direct_resp_32(
+ 		own_id, rc_root_id, ROUTING_EXT_ERR_BIT | ROUTING_EXT_RC_BIT,
+ 		SP_RESULT_FFA(FFA_BUSY), 0, 0, 0, &rc_resp, FFA_OK);
+ 
+ 	/* Fowarding RC response to RC root and receiving response */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, rc_resp.args[0],
+-				       rc_resp.args[1], rc_resp.args[2],
+-				       rc_resp.args[3], rc_resp.args[4], &resp,
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, rc_resp.args.args32[0],
++				       rc_resp.args.args32[1], rc_resp.args.args32[2],
++				       rc_resp.args.args32[3], rc_resp.args.args32[4], &resp,
+ 				       FFA_OK);
+ 
+ 	LONGS_EQUAL(SP_RESULT_OK, sp_msg_send_direct_req(&sp_req, &sp_resp));
+@@ -509,58 +572,58 @@ TEST(sp_messaging, sp_msg_send_direct_req_deny_fail_wait_success)
+ 	sp_msg sp_req = { 0 };
+ 	sp_msg sp_resp = { 0 };
+ 
+-	fill_sp_msg(&sp_req);
++	fill_sp_msg_32(&sp_req);
+ 	sp_req.source_id = own_id;
+ 	sp_req.destination_id = rc_root_id;
+ 
+ 	req.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	req.source_id = own_id;
+ 	req.destination_id = rc_root_id;
+-	copy_sp_to_ffa_args(sp_req.args, req.args);
++	copy_sp_to_ffa_args_32(sp_req.args.args32, req.args.args32);
+ 
+-	fill_ffa_msg(&rc_req);
++	fill_ffa_msg_32(&rc_req);
+ 	rc_req.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	rc_req.source_id = rc_root_id;
+ 	rc_req.destination_id = own_id;
+-	rc_req.args[0] = ROUTING_EXT_RC_BIT;
++	rc_req.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+ 	request_to_deny.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	/* This source ID should be denied in the current state. */
+ 	request_to_deny.source_id = rc_root_id;
+ 	request_to_deny.destination_id = own_id;
+-	request_to_deny.args[0] = ROUTING_EXT_RC_BIT;
++	request_to_deny.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_ffa_msg(&rc_resp);
++	fill_ffa_msg_32(&rc_resp);
+ 	rc_resp.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	rc_resp.source_id = root_id;
+ 	rc_resp.destination_id = own_id;
+-	rc_resp.args[0] = ROUTING_EXT_RC_BIT;
++	rc_resp.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_sp_msg(&sp_resp);
++	fill_sp_msg_32(&sp_resp);
+ 	sp_resp.source_id = rc_root_id;
+ 	sp_resp.destination_id = own_id;
+ 
+ 	resp.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	resp.source_id = rc_root_id;
+ 	resp.destination_id = own_id;
+-	copy_sp_to_ffa_args(sp_resp.args, resp.args);
++	copy_sp_to_ffa_args_32(sp_resp.args.args32, resp.args.args32);
+ 
+ 	/* Initial request to current SP to set own_id */
+ 	wait_and_receive_request(root_id, own_id);
+ 
+ 	/* Sending request and receiving RC request from RC root */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, 0, req.args[1],
+-				       req.args[2], req.args[3], req.args[4],
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, 0, req.args.args32[1],
++				       req.args.args32[2], req.args.args32[3], req.args.args32[4],
+ 				       &rc_req, FFA_OK);
+ 
+ 	/* Forwarding RC request to root and receiving RC response */
+-	expect_ffa_msg_send_direct_resp(own_id, root_id, rc_req.args[0],
+-					rc_req.args[1], rc_req.args[2],
+-					rc_req.args[3], rc_req.args[4],
++	expect_ffa_msg_send_direct_resp_32(own_id, root_id, rc_req.args.args32[0],
++					rc_req.args.args32[1], rc_req.args.args32[2],
++					rc_req.args.args32[3], rc_req.args.args32[4],
+ 					&request_to_deny, FFA_OK);
+ 
+ 	/* Sending error to root which fails */
+-	expect_ffa_msg_send_direct_resp(
++	expect_ffa_msg_send_direct_resp_32(
+ 		own_id, rc_root_id, (ROUTING_EXT_ERR_BIT | ROUTING_EXT_RC_BIT),
+ 		SP_RESULT_FFA(FFA_BUSY), 0, 0, 0, &rc_resp, FFA_DENIED);
+ 
+@@ -568,9 +631,9 @@ TEST(sp_messaging, sp_msg_send_direct_req_deny_fail_wait_success)
+ 	expect_ffa_msg_wait(&rc_resp, FFA_OK);
+ 
+ 	/* Fowarding RC response to RC root and receiving response */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, rc_resp.args[0],
+-				       rc_resp.args[1], rc_resp.args[2],
+-				       rc_resp.args[3], rc_resp.args[4], &resp,
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, rc_resp.args.args32[0],
++				       rc_resp.args.args32[1], rc_resp.args.args32[2],
++				       rc_resp.args.args32[3], rc_resp.args.args32[4], &resp,
+ 				       FFA_OK);
+ 
+ 	LONGS_EQUAL(SP_RESULT_OK, sp_msg_send_direct_req(&sp_req, &sp_resp));
+@@ -590,58 +653,58 @@ TEST(sp_messaging, sp_msg_send_direct_req_deny_fail_wait_fail_forwarding)
+ 	sp_msg sp_req = { 0 };
+ 	sp_msg sp_resp = { 0 };
+ 
+-	fill_sp_msg(&sp_req);
++	fill_sp_msg_32(&sp_req);
+ 	sp_req.source_id = own_id;
+ 	sp_req.destination_id = rc_root_id;
+ 
+ 	req.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	req.source_id = own_id;
+ 	req.destination_id = rc_root_id;
+-	copy_sp_to_ffa_args(sp_req.args, req.args);
++	copy_sp_to_ffa_args_32(sp_req.args.args32, req.args.args32);
+ 
+-	fill_ffa_msg(&rc_req);
++	fill_ffa_msg_32(&rc_req);
+ 	rc_req.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	rc_req.source_id = rc_root_id;
+ 	rc_req.destination_id = own_id;
+-	rc_req.args[0] = ROUTING_EXT_RC_BIT;
++	rc_req.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+ 	request_to_deny.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	/* This source ID should be denied in the current state. */
+ 	request_to_deny.source_id = rc_root_id;
+ 	request_to_deny.destination_id = own_id;
+-	request_to_deny.args[0] = ROUTING_EXT_RC_BIT;
++	request_to_deny.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_ffa_msg(&rc_resp);
++	fill_ffa_msg_32(&rc_resp);
+ 	rc_resp.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	rc_resp.source_id = root_id;
+ 	rc_resp.destination_id = own_id;
+-	rc_resp.args[0] = ROUTING_EXT_RC_BIT;
++	rc_resp.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_sp_msg(&sp_resp);
++	fill_sp_msg_32(&sp_resp);
+ 	sp_resp.source_id = rc_root_id;
+ 	sp_resp.destination_id = own_id;
+ 
+ 	resp.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	resp.source_id = rc_root_id;
+ 	resp.destination_id = own_id;
+-	copy_sp_to_ffa_args(sp_resp.args, resp.args);
++	copy_sp_to_ffa_args_32(sp_resp.args.args32, resp.args.args32);
+ 
+ 	/* Initial request to current SP to set own_id */
+ 	wait_and_receive_request(root_id, own_id);
+ 
+ 	/* Sending request and receiving RC request from RC root */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, 0, req.args[1],
+-				       req.args[2], req.args[3], req.args[4],
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, 0, req.args.args32[1],
++				       req.args.args32[2], req.args.args32[3], req.args.args32[4],
+ 				       &rc_req, FFA_OK);
+ 
+ 	/* Forwarding RC request to root and receiving RC response */
+-	expect_ffa_msg_send_direct_resp(own_id, root_id, rc_req.args[0],
+-					rc_req.args[1], rc_req.args[2],
+-					rc_req.args[3], rc_req.args[4],
++	expect_ffa_msg_send_direct_resp_32(own_id, root_id, rc_req.args.args32[0],
++					rc_req.args.args32[1], rc_req.args.args32[2],
++					rc_req.args.args32[3], rc_req.args.args32[4],
+ 					&request_to_deny, FFA_OK);
+ 
+ 	/* Sending error to root which fails */
+-	expect_ffa_msg_send_direct_resp(
++	expect_ffa_msg_send_direct_resp_32(
+ 		own_id, rc_root_id, ROUTING_EXT_ERR_BIT | ROUTING_EXT_RC_BIT,
+ 		SP_RESULT_FFA(FFA_BUSY), 0, 0, 0, &rc_resp, FFA_DENIED);
+ 
+@@ -649,7 +712,7 @@ TEST(sp_messaging, sp_msg_send_direct_req_deny_fail_wait_fail_forwarding)
+ 	expect_ffa_msg_wait(&rc_resp, result);
+ 
+ 	/* Fowarding RC error as FFA_MSG_WAIT failed  */
+-	expect_ffa_msg_send_direct_req(
++	expect_ffa_msg_send_direct_req_32(
+ 		own_id, rc_root_id, (ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT),
+ 		result, 0, 0, 0, &resp, FFA_OK);
+ 
+@@ -670,52 +733,52 @@ TEST(sp_messaging, sp_msg_send_direct_req_rc_return_rc_error_msg)
+ 	sp_msg sp_resp = { 0 };
+ 	ffa_result result = FFA_ABORTED;
+ 
+-	fill_sp_msg(&sp_req);
++	fill_sp_msg_32(&sp_req);
+ 	sp_req.source_id = own_id;
+ 	sp_req.destination_id = rc_root_id;
+ 
+ 	req.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	req.source_id = own_id;
+ 	req.destination_id = rc_root_id;
+-	copy_sp_to_ffa_args(sp_req.args, req.args);
++	copy_sp_to_ffa_args_32(sp_req.args.args32, req.args.args32);
+ 
+-	fill_ffa_msg(&rc_req);
++	fill_ffa_msg_32(&rc_req);
+ 	rc_req.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	rc_req.source_id = rc_root_id;
+ 	rc_req.destination_id = own_id;
+-	rc_req.args[0] = ROUTING_EXT_RC_BIT;
++	rc_req.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_ffa_msg(&rc_resp);
++	fill_ffa_msg_32(&rc_resp);
+ 	rc_resp.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	rc_resp.source_id = root_id;
+ 	rc_resp.destination_id = own_id;
+-	rc_resp.args[0] = ROUTING_EXT_RC_BIT;
++	rc_resp.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_sp_msg(&sp_resp);
++	fill_sp_msg_32(&sp_resp);
+ 	sp_resp.source_id = rc_root_id;
+ 	sp_resp.destination_id = own_id;
+ 
+ 	resp.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	resp.source_id = rc_root_id;
+ 	resp.destination_id = own_id;
+-	copy_sp_to_ffa_args(sp_resp.args, resp.args);
++	copy_sp_to_ffa_args_32(sp_resp.args.args32, resp.args.args32);
+ 
+ 	/* Initial request to current SP to set own_id */
+ 	wait_and_receive_request(root_id, own_id);
+ 
+ 	/* Sending request and receiving RC request from RC root */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, 0, req.args[1],
+-				       req.args[2], req.args[3], req.args[4],
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, 0, req.args.args32[1],
++				       req.args.args32[2], req.args.args32[3], req.args.args32[4],
+ 				       &rc_req, FFA_OK);
+ 
+ 	/* Forwarding RC request to root and receiving RC response */
+-	expect_ffa_msg_send_direct_resp(own_id, root_id, rc_req.args[0],
+-					rc_req.args[1], rc_req.args[2],
+-					rc_req.args[3], rc_req.args[4],
++	expect_ffa_msg_send_direct_resp_32(own_id, root_id, rc_req.args.args32[0],
++					rc_req.args.args32[1], rc_req.args.args32[2],
++					rc_req.args.args32[3], rc_req.args.args32[4],
+ 					&rc_resp, result);
+ 
+ 	/* Fowarding RC error to RC root and receiving response */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id,
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id,
+ 				       ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT,
+ 				       SP_RESULT_FFA(result), 0, 0, 0, &resp,
+ 				       FFA_OK);
+@@ -737,54 +800,54 @@ TEST(sp_messaging, sp_msg_send_direct_req_rc_return_resp_fail)
+ 	sp_msg sp_resp = { 0 };
+ 	ffa_result result = FFA_ABORTED;
+ 
+-	fill_sp_msg(&sp_req);
++	fill_sp_msg_32(&sp_req);
+ 	sp_req.source_id = own_id;
+ 	sp_req.destination_id = rc_root_id;
+ 
+ 	req.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	req.source_id = own_id;
+ 	req.destination_id = rc_root_id;
+-	copy_sp_to_ffa_args(sp_req.args, req.args);
++	copy_sp_to_ffa_args_32(sp_req.args.args32, req.args.args32);
+ 
+-	fill_ffa_msg(&rc_req);
++	fill_ffa_msg_32(&rc_req);
+ 	rc_req.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	rc_req.source_id = rc_root_id;
+ 	rc_req.destination_id = own_id;
+-	rc_req.args[0] = ROUTING_EXT_RC_BIT;
++	rc_req.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_ffa_msg(&rc_resp);
++	fill_ffa_msg_32(&rc_resp);
+ 	rc_resp.function_id = FFA_MSG_SEND_DIRECT_REQ_32;
+ 	rc_resp.source_id = root_id;
+ 	rc_resp.destination_id = own_id;
+-	rc_resp.args[0] = ROUTING_EXT_RC_BIT;
++	rc_resp.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_sp_msg(&sp_resp);
++	fill_sp_msg_32(&sp_resp);
+ 	sp_resp.source_id = rc_root_id;
+ 	sp_resp.destination_id = own_id;
+ 
+ 	resp.function_id = FFA_MSG_SEND_DIRECT_RESP_32;
+ 	resp.source_id = rc_root_id;
+ 	resp.destination_id = own_id;
+-	copy_sp_to_ffa_args(sp_resp.args, resp.args);
++	copy_sp_to_ffa_args_32(sp_resp.args.args32, resp.args.args32);
+ 
+ 	/* Initial request to current SP to set own_id */
+ 	wait_and_receive_request(root_id, own_id);
+ 
+ 	/* Sending request and receiving RC request from RC root */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, 0, req.args[1],
+-				       req.args[2], req.args[3], req.args[4],
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, 0, req.args.args32[1],
++				       req.args.args32[2], req.args.args32[3], req.args.args32[4],
+ 				       &rc_req, FFA_OK);
+ 
+ 	/* Forwarding RC request to root and receiving RC response */
+-	expect_ffa_msg_send_direct_resp(own_id, root_id, rc_req.args[0],
+-					rc_req.args[1], rc_req.args[2],
+-					rc_req.args[3], rc_req.args[4],
++	expect_ffa_msg_send_direct_resp_32(own_id, root_id, rc_req.args.args32[0],
++					rc_req.args.args32[1], rc_req.args.args32[2],
++					rc_req.args.args32[3], rc_req.args.args32[4],
+ 					&rc_resp, FFA_OK);
+ 
+ 	/* Fowarding RC response to RC root and receiving response */
+-	expect_ffa_msg_send_direct_req(own_id, rc_root_id, rc_resp.args[0],
+-				       rc_resp.args[1], rc_resp.args[2],
+-				       rc_resp.args[3], rc_resp.args[4], &resp,
++	expect_ffa_msg_send_direct_req_32(own_id, rc_root_id, rc_resp.args.args32[0],
++				       rc_resp.args.args32[1], rc_resp.args.args32[2],
++				       rc_resp.args.args32[3], rc_resp.args.args32[4], &resp,
+ 				       result);
+ 
+ 	LONGS_EQUAL(SP_RESULT_FFA(result),
+@@ -812,10 +875,11 @@ TEST(sp_messaging, sp_msg_send_direct_resp_ffa_error)
+ 	ffa_result result = FFA_ABORTED;
+ 	uint32_t expected_ffa_args[5] = { 0 };
+ 
+-	fill_sp_msg(&resp);
++	fill_sp_msg_32(&resp);
+ 	memset(&req, 0x5a, sizeof(req));
+-	copy_sp_to_ffa_args(resp.args, expected_ffa_args);
+-	expect_ffa_msg_send_direct_resp(
++	req.is_64bit_message = false;
++	copy_sp_to_ffa_args_32(resp.args.args32, expected_ffa_args);
++	expect_ffa_msg_send_direct_resp_32(
+ 		resp.source_id, resp.destination_id, expected_ffa_args[0],
+ 		expected_ffa_args[1], expected_ffa_args[2],
+ 		expected_ffa_args[3], expected_ffa_args[4], &ffa_msg, result);
+@@ -825,14 +889,30 @@ TEST(sp_messaging, sp_msg_send_direct_resp_ffa_error)
+ 	MEMCMP_EQUAL(&empty_sp_msg, &req, sizeof(empty_sp_msg));
+ }
+ 
+-TEST(sp_messaging, sp_msg_send_direct_resp_msg)
++TEST(sp_messaging, sp_msg_send_direct_resp_msg_32)
+ {
+ 	uint32_t expected_ffa_args[5] = { 0 };
+ 
+-	fill_sp_msg(&resp);
+-	fill_ffa_msg(&ffa_msg);
+-	copy_sp_to_ffa_args(resp.args, expected_ffa_args);
+-	expect_ffa_msg_send_direct_resp(
++	fill_sp_msg_32(&resp);
++	fill_ffa_msg_32(&ffa_msg);
++	copy_sp_to_ffa_args_32(resp.args.args32, expected_ffa_args);
++	expect_ffa_msg_send_direct_resp_32(
++		resp.source_id, resp.destination_id, expected_ffa_args[0],
++		expected_ffa_args[1], expected_ffa_args[2],
++		expected_ffa_args[3], expected_ffa_args[4], &ffa_msg, FFA_OK);
++
++	LONGS_EQUAL(SP_RESULT_OK, sp_msg_send_direct_resp(&resp, &req));
++	ffa_and_sp_msg_equal(&ffa_msg, &req);
++}
++
++TEST(sp_messaging, sp_msg_send_direct_resp_msg_64)
++{
++	uint64_t expected_ffa_args[5] = { 0 };
++
++	fill_sp_msg_64(&resp);
++	fill_ffa_msg_64(&ffa_msg);
++	copy_sp_to_ffa_args_64(resp.args.args64, expected_ffa_args);
++	expect_ffa_msg_send_direct_resp_64(
+ 		resp.source_id, resp.destination_id, expected_ffa_args[0],
+ 		expected_ffa_args[1], expected_ffa_args[2],
+ 		expected_ffa_args[3], expected_ffa_args[4], &ffa_msg, FFA_OK);
+@@ -841,15 +921,16 @@ TEST(sp_messaging, sp_msg_send_direct_resp_msg)
+ 	ffa_and_sp_msg_equal(&ffa_msg, &req);
+ }
+ 
++
+ TEST(sp_messaging, sp_msg_send_direct_resp_success)
+ {
+ 	uint32_t expected_ffa_args[5] = { 0 };
+ 
+-	fill_sp_msg(&req);
+-	fill_sp_msg(&resp);
++	fill_sp_msg_32(&req);
++	fill_sp_msg_32(&resp);
+ 	ffa_msg.function_id = FFA_SUCCESS_32;
+-	copy_sp_to_ffa_args(resp.args, expected_ffa_args);
+-	expect_ffa_msg_send_direct_resp(
++	copy_sp_to_ffa_args_32(resp.args.args32, expected_ffa_args);
++	expect_ffa_msg_send_direct_resp_32(
+ 		resp.source_id, resp.destination_id, expected_ffa_args[0],
+ 		expected_ffa_args[1], expected_ffa_args[2],
+ 		expected_ffa_args[3], expected_ffa_args[4], &ffa_msg, FFA_OK);
+@@ -864,20 +945,20 @@ TEST(sp_messaging, sp_msg_send_direct_resp_deny_rc_failure)
+ 	uint32_t expected_ffa_args[5] = { 0 };
+ 	struct ffa_direct_msg rc_msg = { 0 };
+ 
+-	fill_sp_msg(&resp);
++	fill_sp_msg_32(&resp);
+ 
+-	fill_ffa_msg(&rc_msg);
+-	rc_msg.args[0] = ROUTING_EXT_RC_BIT;
++	fill_ffa_msg_32(&rc_msg);
++	rc_msg.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_ffa_msg(&ffa_msg);
+-	copy_sp_to_ffa_args(resp.args, expected_ffa_args);
++	fill_ffa_msg_32(&ffa_msg);
++	copy_sp_to_ffa_args_32(resp.args.args32, expected_ffa_args);
+ 
+-	expect_ffa_msg_send_direct_resp(
++	expect_ffa_msg_send_direct_resp_32(
+ 		resp.source_id, resp.destination_id, expected_ffa_args[0],
+ 		expected_ffa_args[1], expected_ffa_args[2],
+ 		expected_ffa_args[3], expected_ffa_args[4], &rc_msg, FFA_OK);
+ 
+-	expect_ffa_msg_send_direct_resp(
++	expect_ffa_msg_send_direct_resp_32(
+ 		rc_msg.destination_id, rc_msg.source_id,
+ 		ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT,
+ 		SP_RESULT_FFA(FFA_DENIED), 0, 0, 0, &ffa_msg, result);
+@@ -892,21 +973,21 @@ TEST(sp_messaging, sp_msg_send_direct_resp_deny_rc)
+ 	uint32_t expected_ffa_args[5] = { 0 };
+ 	struct ffa_direct_msg rc_msg = { 0 };
+ 
+-	fill_sp_msg(&resp);
++	fill_sp_msg_32(&resp);
+ 
+-	fill_ffa_msg(&rc_msg);
+-	rc_msg.args[0] = ROUTING_EXT_RC_BIT;
++	fill_ffa_msg_32(&rc_msg);
++	rc_msg.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	fill_ffa_msg(&ffa_msg);
+-	copy_sp_to_ffa_args(resp.args, expected_ffa_args);
++	fill_ffa_msg_32(&ffa_msg);
++	copy_sp_to_ffa_args_32(resp.args.args32, expected_ffa_args);
+ 
+-	expect_ffa_msg_send_direct_resp(resp.source_id, resp.destination_id, 0,
++	expect_ffa_msg_send_direct_resp_32(resp.source_id, resp.destination_id, 0,
+ 					expected_ffa_args[1],
+ 					expected_ffa_args[2],
+ 					expected_ffa_args[3],
+ 					expected_ffa_args[4], &rc_msg, FFA_OK);
+ 
+-	expect_ffa_msg_send_direct_resp(
++	expect_ffa_msg_send_direct_resp_32(
+ 		rc_msg.destination_id, rc_msg.source_id,
+ 		ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT,
+ 		SP_RESULT_FFA(FFA_DENIED), 0, 0, 0, &ffa_msg, FFA_OK);
+@@ -933,13 +1014,14 @@ TEST(sp_messaging, sp_msg_send_rc_req_ffa_error)
+ {
+ 	ffa_result result = FFA_ABORTED;
+ 
+-	fill_sp_msg(&resp);
++	fill_sp_msg_32(&resp);
+ 	memset(&req, 0x5a, sizeof(req));
+-	fill_ffa_msg(&ffa_msg);
++	req.is_64bit_message = false;
++	fill_ffa_msg_32(&ffa_msg);
+ 
+-	expect_ffa_msg_send_direct_resp(req.source_id, req.destination_id,
+-					ROUTING_EXT_RC_BIT, req.args[0],
+-					req.args[1], req.args[2], req.args[3],
++	expect_ffa_msg_send_direct_resp_32(req.source_id, req.destination_id,
++					ROUTING_EXT_RC_BIT, req.args.args32[0],
++					req.args.args32[1], req.args.args32[2], req.args.args32[3],
+ 					&ffa_msg, result);
+ 
+ 	LONGS_EQUAL(SP_RESULT_FFA(result), sp_msg_send_rc_req(&req, &resp));
+@@ -953,22 +1035,22 @@ TEST(sp_messaging, sp_msg_send_rc_req_deny_fail_wait_fail)
+ 
+ 	wait_and_receive_request(root_id, own_id);
+ 
+-	fill_sp_msg(&req);
++	fill_sp_msg_32(&req);
+ 	req.source_id = own_id;
+ 	req.destination_id = root_id;
+ 
+-	fill_ffa_msg(&ffa_msg);
++	fill_ffa_msg_32(&ffa_msg);
+ 	ffa_msg.source_id = root_id;
+ 	ffa_msg.destination_id = own_id;
+ 	/* Should be RC message so it will be denied */
+-	ffa_msg.args[0] = 0;
++	ffa_msg.args.args32[0] = 0;
+ 
+-	expect_ffa_msg_send_direct_resp(req.source_id, req.destination_id,
+-					ROUTING_EXT_RC_BIT, req.args[0],
+-					req.args[1], req.args[2], req.args[3],
++	expect_ffa_msg_send_direct_resp_32(req.source_id, req.destination_id,
++					ROUTING_EXT_RC_BIT, req.args.args32[0],
++					req.args.args32[1], req.args.args32[2], req.args.args32[3],
+ 					&ffa_msg, FFA_OK);
+ 
+-	expect_ffa_msg_send_direct_resp(
++	expect_ffa_msg_send_direct_resp_32(
+ 		req.source_id, req.destination_id,
+ 		ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT,
+ 		SP_RESULT_FFA(FFA_BUSY), 0, 0, 0, &ffa_msg, result);
+@@ -987,19 +1069,19 @@ TEST(sp_messaging, sp_msg_send_rc_req_rc_error)
+ 
+ 	wait_and_receive_request(root_id, own_id);
+ 
+-	fill_sp_msg(&req);
++	fill_sp_msg_32(&req);
+ 	req.source_id = own_id;
+ 	req.destination_id = root_id;
+ 
+-	fill_ffa_msg(&ffa_msg);
++	fill_ffa_msg_32(&ffa_msg);
+ 	ffa_msg.source_id = root_id;
+ 	ffa_msg.destination_id = own_id;
+-	ffa_msg.args[0] = ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT;
+-	ffa_msg.args[1] = sp_err;
++	ffa_msg.args.args32[0] = ROUTING_EXT_RC_BIT | ROUTING_EXT_ERR_BIT;
++	ffa_msg.args.args32[1] = sp_err;
+ 
+-	expect_ffa_msg_send_direct_resp(req.source_id, req.destination_id,
+-					ROUTING_EXT_RC_BIT, req.args[0],
+-					req.args[1], req.args[2], req.args[3],
++	expect_ffa_msg_send_direct_resp_32(req.source_id, req.destination_id,
++					ROUTING_EXT_RC_BIT, req.args.args32[0],
++					req.args.args32[1], req.args.args32[2], req.args.args32[3],
+ 					&ffa_msg, FFA_OK);
+ 
+ 	LONGS_EQUAL(sp_err, sp_msg_send_rc_req(&req, &resp));
+@@ -1013,18 +1095,18 @@ TEST(sp_messaging, sp_msg_send_rc_req_success)
+ 
+ 	wait_and_receive_request(root_id, own_id);
+ 
+-	fill_sp_msg(&req);
++	fill_sp_msg_32(&req);
+ 	req.source_id = own_id;
+ 	req.destination_id = root_id;
+ 
+-	fill_ffa_msg(&ffa_msg);
++	fill_ffa_msg_32(&ffa_msg);
+ 	ffa_msg.source_id = root_id;
+ 	ffa_msg.destination_id = own_id;
+-	ffa_msg.args[0] = ROUTING_EXT_RC_BIT;
++	ffa_msg.args.args32[0] = ROUTING_EXT_RC_BIT;
+ 
+-	expect_ffa_msg_send_direct_resp(req.source_id, req.destination_id,
+-					ROUTING_EXT_RC_BIT, req.args[0],
+-					req.args[1], req.args[2], req.args[3],
++	expect_ffa_msg_send_direct_resp_32(req.source_id, req.destination_id,
++					ROUTING_EXT_RC_BIT, req.args.args32[0],
++					req.args.args32[1], req.args.args32[2], req.args.args32[3],
+ 					&ffa_msg, FFA_OK);
+ 
+ 	LONGS_EQUAL(SP_RESULT_OK, sp_msg_send_rc_req(&req, &resp));
+diff --git a/components/rpc/ffarpc/caller/sp/ffarpc_caller.c b/components/rpc/ffarpc/caller/sp/ffarpc_caller.c
+index 4ad98fb..ca3d318 100644
+--- a/components/rpc/ffarpc/caller/sp/ffarpc_caller.c
++++ b/components/rpc/ffarpc/caller/sp/ffarpc_caller.c
+@@ -81,16 +81,17 @@ static rpc_status_t call_invoke(void *context, rpc_call_handle handle, uint32_t
+ 
+ 	req.destination_id = this_context->dest_partition_id;
+ 	req.source_id = own_id;
+-	req.args[SP_CALL_ARGS_IFACE_ID_OPCODE] =
++	req.is_64bit_message = false;
++	req.args.args32[SP_CALL_ARGS_IFACE_ID_OPCODE] =
+ 		FFA_CALL_ARGS_COMBINE_IFACE_ID_OPCODE(this_context->dest_iface_id, opcode);
+-	req.args[SP_CALL_ARGS_REQ_DATA_LEN] = (uint32_t)this_context->req_len;
+-	req.args[SP_CALL_ARGS_ENCODING] = this_context->rpc_caller.encoding;
++	req.args.args32[SP_CALL_ARGS_REQ_DATA_LEN] = (uint32_t)this_context->req_len;
++	req.args.args32[SP_CALL_ARGS_ENCODING] = this_context->rpc_caller.encoding;
+ 
+ 	/* Initialise the caller ID.  Depending on the call path, this may
+ 	 * be overridden by a higher privilege execution level, based on its
+ 	 * perspective of the caller identity.
+ 	 */
+-	req.args[SP_CALL_ARGS_CALLER_ID] = 0;
++	req.args.args32[SP_CALL_ARGS_CALLER_ID] = 0;
+ 
+ 	sp_res = sp_msg_send_direct_req(&req, &resp);
+ 	if (sp_res != SP_RESULT_OK) {
+@@ -98,9 +99,9 @@ static rpc_status_t call_invoke(void *context, rpc_call_handle handle, uint32_t
+ 		goto out;
+ 	}
+ 
+-	this_context->resp_len = (size_t)resp.args[SP_CALL_ARGS_RESP_DATA_LEN];
+-	status = resp.args[SP_CALL_ARGS_RESP_RPC_STATUS];
+-	*opstatus = (rpc_status_t)((int32_t)resp.args[SP_CALL_ARGS_RESP_OP_STATUS]);
++	this_context->resp_len = (size_t)resp.args.args32[SP_CALL_ARGS_RESP_DATA_LEN];
++	status = resp.args.args32[SP_CALL_ARGS_RESP_RPC_STATUS];
++	*opstatus = (rpc_status_t)((int32_t)resp.args.args32[SP_CALL_ARGS_RESP_OP_STATUS]);
+ 
+ 	if (this_context->resp_len > this_context->shared_mem_required_size) {
+ 		EMSG("invalid response length");
+@@ -242,11 +243,12 @@ int ffarpc_caller_open(struct ffarpc_caller *caller, uint16_t dest_partition_id,
+ 
+ 	req.source_id = own_id;
+ 	req.destination_id = dest_partition_id;
+-	req.args[SP_CALL_ARGS_IFACE_ID_OPCODE] =
++	req.is_64bit_message = false;
++	req.args.args32[SP_CALL_ARGS_IFACE_ID_OPCODE] =
+ 		FFA_CALL_ARGS_COMBINE_IFACE_ID_OPCODE(FFA_CALL_MGMT_IFACE_ID, FFA_CALL_OPCODE_SHARE_BUF);
+-	req.args[SP_CALL_ARGS_SHARE_MEM_HANDLE_LSW] = (uint32_t)(handle & UINT32_MAX);
+-	req.args[SP_CALL_ARGS_SHARE_MEM_HANDLE_MSW] = (uint32_t)(handle >> 32);
+-	req.args[SP_CALL_ARGS_SHARE_MEM_SIZE] = (uint32_t)(caller->shared_mem_required_size);
++	req.args.args32[SP_CALL_ARGS_SHARE_MEM_HANDLE_LSW] = (uint32_t)(handle & UINT32_MAX);
++	req.args.args32[SP_CALL_ARGS_SHARE_MEM_HANDLE_MSW] = (uint32_t)(handle >> 32);
++	req.args.args32[SP_CALL_ARGS_SHARE_MEM_SIZE] = (uint32_t)(caller->shared_mem_required_size);
+ 
+ 	sp_res = sp_msg_send_direct_req(&req, &resp);
+ 	if (sp_res != SP_RESULT_OK) {
+@@ -273,10 +275,11 @@ int ffarpc_caller_close(struct ffarpc_caller *caller)
+ 
+ 	req.source_id = own_id;
+ 	req.destination_id = caller->dest_partition_id;
+-	req.args[SP_CALL_ARGS_IFACE_ID_OPCODE] =
++	req.is_64bit_message = false;
++	req.args.args32[SP_CALL_ARGS_IFACE_ID_OPCODE] =
+ 		FFA_CALL_ARGS_COMBINE_IFACE_ID_OPCODE(FFA_CALL_MGMT_IFACE_ID, FFA_CALL_OPCODE_UNSHARE_BUF);
+-	req.args[SP_CALL_ARGS_SHARE_MEM_HANDLE_LSW] = handle_lo;
+-	req.args[SP_CALL_ARGS_SHARE_MEM_HANDLE_MSW] = handle_hi;
++	req.args.args32[SP_CALL_ARGS_SHARE_MEM_HANDLE_LSW] = handle_lo;
++	req.args.args32[SP_CALL_ARGS_SHARE_MEM_HANDLE_MSW] = handle_hi;
+ 
+ 	sp_res = sp_msg_send_direct_req(&req, &resp);
+ 	if (sp_res != SP_RESULT_OK) {
+diff --git a/components/rpc/ffarpc/endpoint/ffarpc_call_ep.c b/components/rpc/ffarpc/endpoint/ffarpc_call_ep.c
+index 6a8cef0..c024196 100644
+--- a/components/rpc/ffarpc/endpoint/ffarpc_call_ep.c
++++ b/components/rpc/ffarpc/endpoint/ffarpc_call_ep.c
+@@ -260,8 +260,8 @@ void ffa_call_ep_receive(struct ffa_call_ep *call_ep,
+ 			 const struct sp_msg *req_msg,
+ 			 struct sp_msg *resp_msg)
+ {
+-	const uint32_t *req_args = req_msg->args;
+-	uint32_t *resp_args = resp_msg->args;
++	const uint32_t *req_args = req_msg->args.args32;
++	uint32_t *resp_args = resp_msg->args.args32;
+ 
+ 	uint16_t source_id = req_msg->source_id;
+ 	uint32_t ifaceid_opcode = req_args[SP_CALL_ARGS_IFACE_ID_OPCODE];
+diff --git a/components/rpc/mm_communicate/endpoint/sp/mm_communicate_call_ep.c b/components/rpc/mm_communicate/endpoint/sp/mm_communicate_call_ep.c
+index 09f1e2c..dc49e64 100644
+--- a/components/rpc/mm_communicate/endpoint/sp/mm_communicate_call_ep.c
++++ b/components/rpc/mm_communicate/endpoint/sp/mm_communicate_call_ep.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: BSD-3-Clause
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #include "components/rpc/mm_communicate/common/mm_communicate_call_args.h"
+@@ -127,15 +127,15 @@ void mm_communicate_call_ep_receive(struct mm_communicate_ep *mm_communicate_cal
+ 	uintptr_t buffer_address = 0;
+ 	size_t buffer_size = 0;
+ 
+-	buffer_address = req_msg->args[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_ADDRESS];
+-	buffer_size = req_msg->args[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_SIZE];
++	buffer_address = req_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_ADDRESS];
++	buffer_size = req_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_SIZE];
+ 
+ 	return_value = handle_mm_communicate(mm_communicate_call_ep, req_msg->source_id,
+ 					     buffer_address, buffer_size);
+ 
+-	resp_msg->args[MM_COMMUNICATE_CALL_ARGS_RETURN_ID] = ARM_SVC_ID_SP_EVENT_COMPLETE;
+-	resp_msg->args[MM_COMMUNICATE_CALL_ARGS_RETURN_CODE] = return_value;
+-	resp_msg->args[MM_COMMUNICATE_CALL_ARGS_MBZ0] = 0;
+-	resp_msg->args[MM_COMMUNICATE_CALL_ARGS_MBZ1] = 0;
+-	resp_msg->args[MM_COMMUNICATE_CALL_ARGS_MBZ2] = 0;
++	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_RETURN_ID] = ARM_SVC_ID_SP_EVENT_COMPLETE;
++	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_RETURN_CODE] = return_value;
++	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_MBZ0] = 0;
++	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_MBZ1] = 0;
++	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_MBZ2] = 0;
+ }
+diff --git a/deployments/smm-gateway/common/smm_gateway_sp.c b/deployments/smm-gateway/common/smm_gateway_sp.c
+index 2187fea..3697b7f 100644
+--- a/deployments/smm-gateway/common/smm_gateway_sp.c
++++ b/deployments/smm-gateway/common/smm_gateway_sp.c
+@@ -70,10 +70,10 @@ void __noreturn sp_main(struct ffa_init_info *init_info)
+ 	while (1) {
+ 		mm_communicate_call_ep_receive(&mm_communicate_call_ep, &req_msg, &resp_msg);
+ 
+-		ffa_msg_send_direct_resp(req_msg.destination_id,
+-					 req_msg.source_id, resp_msg.args[0],
+-					 resp_msg.args[1], resp_msg.args[2],
+-					 resp_msg.args[3], resp_msg.args[4],
++		ffa_msg_send_direct_resp_32(req_msg.destination_id,
++					 req_msg.source_id, resp_msg.args.args32[0],
++					 resp_msg.args.args32[1], resp_msg.args.args32[2],
++					 resp_msg.args.args32[3], resp_msg.args.args32[4],
+ 					 &req_msg);
+ 	}
+ 
+-- 
+2.17.1
+
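(Editorial note, not part of the patch above: the test changes from a flat args[] array to args.args32[]/args.args64[] plus an is_64bit_message flag imply a reworked sp_msg layout roughly like the sketch below. Field and macro names are taken from the diff; the array length matches the four-element test vectors, but the exact upstream libsp definition may differ.)

    #include <stdbool.h>
    #include <stdint.h>

    #define SP_MSG_ARG_COUNT 4 /* assumed: matches the four-element args32[]/args64[] test vectors */

    struct sp_msg {
            uint16_t source_id;
            uint16_t destination_id;
            bool is_64bit_message;      /* selects which union member carries the payload */
            union {
                    uint32_t args32[SP_MSG_ARG_COUNT]; /* used with FFA_MSG_SEND_DIRECT_*_32 */
                    uint64_t args64[SP_MSG_ARG_COUNT]; /* used with FFA_MSG_SEND_DIRECT_*_64 */
            } args;
    };

Keeping both widths in a union lets the same message type back either a 32-bit or a 64-bit FF-A direct message, with the flag telling the messaging layer which accessor and which FFA function ID to use.
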
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0022-Change-MM-communicate-RPC-protocol-of-call-endpoint.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0022-Change-MM-communicate-RPC-protocol-of-call-endpoint.patch
new file mode 100644
index 0000000..4090397
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0022-Change-MM-communicate-RPC-protocol-of-call-endpoint.patch
@@ -0,0 +1,497 @@
+From 0f02f04c7f0a7130874dc4bc1a500604d580c4dc Mon Sep 17 00:00:00 2001
+From: Imre Kis <imre.kis@arm.com>
+Date: Wed, 20 Jul 2022 15:19:17 +0200
+Subject: [PATCH 22/24] Change MM communicate RPC protocol of call endpoint
+
+Replace buffer address and size parameter by offset in buffer parameter
+and move to 64 bit FF-A direct message call. Deny all 32 bit direct
+messages in SMM gateway.
+
+Signed-off-by: Imre Kis <imre.kis@arm.com>
+Change-Id: I7a69b440ff9842960229b2bfdd1b5ae5318d9c26
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ .../common/mm_communicate_call_args.h         |  15 ++-
+ .../endpoint/sp/mm_communicate_call_ep.c      |  58 ++++-----
+ .../endpoint/sp/test/mock_mm_service.cpp      |   6 +-
+ .../endpoint/sp/test/mock_mm_service.h        |   4 +-
+ .../sp/test/test_mm_communicate_call_ep.cpp   | 110 +++++++++++-------
+ .../endpoint/sp/test/test_mock_mm_service.cpp |   4 +-
+ .../smm-gateway/common/smm_gateway_sp.c       |  17 ++-
+ 7 files changed, 123 insertions(+), 91 deletions(-)
+
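(Editorial note, not part of the patch: before the diff itself, it may help to spell out the calling convention the commit message describes. The sketch below is illustrative only; the argument indices come from the reworked mm_communicate_call_args.h further down, while the concrete MM_RETURN_CODE_SUCCESS value and the mapping onto FF-A registers are assumptions.)

    #include <stdint.h>
    #include <stdio.h>

    /* Indices per the reworked mm_communicate_call_args.h */
    #define MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_OFFSET 0 /* request word 0 */
    #define MM_COMMUNICATE_CALL_ARGS_RETURN_CODE        0 /* response word 0 */
    #define MM_RETURN_CODE_SUCCESS                      0 /* assumed value */

    int main(void)
    {
            uint64_t req_args[5] = { 0 };
            uint64_t resp_args[5] = { 0 };

            /* Request: only an offset into the already-shared comm buffer is
             * passed, replacing the old absolute address + size pair. */
            req_args[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_OFFSET] = 0x10;

            /* An FFA_MSG_SEND_DIRECT_REQ_64 to the SMM gateway would go here;
             * resp_args would be filled from the returned registers. */

            /* Response: the MM return code sits in the first argument word,
             * the remaining words must be zero. */
            if (resp_args[MM_COMMUNICATE_CALL_ARGS_RETURN_CODE] == MM_RETURN_CODE_SUCCESS)
                    printf("MM communicate reported success\n");

            return 0;
    }

The notable changes visible in the diff are that the request no longer carries a buffer address and size, that all exchanges use 64-bit direct messages (32-bit requests are denied by the SMM gateway), and that the response's first word is the MM return code rather than ARM_SVC_ID_SP_EVENT_COMPLETE.
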
+diff --git a/components/rpc/mm_communicate/common/mm_communicate_call_args.h b/components/rpc/mm_communicate/common/mm_communicate_call_args.h
+index 7d7311d..280c04d 100644
+--- a/components/rpc/mm_communicate/common/mm_communicate_call_args.h
++++ b/components/rpc/mm_communicate/common/mm_communicate_call_args.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: BSD-3-Clause */
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #ifndef MM_COMMUNICATE_CALL_ARGS_H_
+@@ -12,13 +12,12 @@
+  */
+ 
+ /* SP message arg indexes */
+-#define MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_ADDRESS	0
+-#define MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_SIZE	1
++#define MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_OFFSET	0
+ 
+-#define MM_COMMUNICATE_CALL_ARGS_RETURN_ID		0
+-#define MM_COMMUNICATE_CALL_ARGS_RETURN_CODE		1
+-#define MM_COMMUNICATE_CALL_ARGS_MBZ0			2
+-#define MM_COMMUNICATE_CALL_ARGS_MBZ1			3
+-#define MM_COMMUNICATE_CALL_ARGS_MBZ2			4
++#define MM_COMMUNICATE_CALL_ARGS_RETURN_CODE		0
++#define MM_COMMUNICATE_CALL_ARGS_MBZ0			1
++#define MM_COMMUNICATE_CALL_ARGS_MBZ1			2
++#define MM_COMMUNICATE_CALL_ARGS_MBZ2			3
++#define MM_COMMUNICATE_CALL_ARGS_MBZ3			4
+ 
+ #endif /* MM_COMMUNICATE_CALL_ARGS_H_ */
+diff --git a/components/rpc/mm_communicate/endpoint/sp/mm_communicate_call_ep.c b/components/rpc/mm_communicate/endpoint/sp/mm_communicate_call_ep.c
+index dc49e64..93aa0f4 100644
+--- a/components/rpc/mm_communicate/endpoint/sp/mm_communicate_call_ep.c
++++ b/components/rpc/mm_communicate/endpoint/sp/mm_communicate_call_ep.c
+@@ -35,7 +35,8 @@ bool mm_communicate_call_ep_init(struct mm_communicate_ep *call_ep, uint8_t *com
+ 
+ static int32_t invoke_mm_service(struct mm_communicate_ep *call_ep, uint16_t source_id,
+ 				 struct mm_service_interface *iface,
+-				 EFI_MM_COMMUNICATE_HEADER *header)
++				 EFI_MM_COMMUNICATE_HEADER *header,
++				 size_t buffer_size)
+ {
+ 	rpc_status_t rpc_status = TS_RPC_ERROR_INTERNAL;
+ 	struct mm_service_call_req call_req = { 0 };
+@@ -49,11 +50,11 @@ static int32_t invoke_mm_service(struct mm_communicate_ep *call_ep, uint16_t sou
+ 	 */
+ 	call_req.req_buf.data = header->Data;
+ 	call_req.req_buf.data_len = header->MessageLength;
+-	call_req.req_buf.size = call_ep->comm_buffer_size - EFI_MM_COMMUNICATE_HEADER_SIZE;
++	call_req.req_buf.size = buffer_size;
+ 
+ 	call_req.resp_buf.data = header->Data;
+ 	call_req.resp_buf.data_len = 0;
+-	call_req.resp_buf.size = call_ep->comm_buffer_size - EFI_MM_COMMUNICATE_HEADER_SIZE;
++	call_req.resp_buf.size = buffer_size;
+ 
+ 	result = iface->receive(iface, &call_req);
+ 
+@@ -63,32 +64,38 @@ static int32_t invoke_mm_service(struct mm_communicate_ep *call_ep, uint16_t sou
+ }
+ 
+ static int32_t handle_mm_communicate(struct mm_communicate_ep *call_ep, uint16_t source_id,
+-				     uintptr_t buffer_addr, size_t buffer_size)
++				     size_t buffer_offset)
+ {
+-	uintptr_t buffer_arg = 0;
+-	size_t request_size = 0;
++	size_t header_end_offset = 0;
++	size_t request_end_offset = 0;
++	size_t buffer_size = 0;
+ 	EFI_MM_COMMUNICATE_HEADER *header = NULL;
+ 	unsigned int i = 0;
+ 
+-	/* Validating call args according to ARM MM spec 3.2.4 */
+-	if (buffer_addr == 0)
++	if (ADD_OVERFLOW(buffer_offset, EFI_MM_COMMUNICATE_HEADER_SIZE, &header_end_offset))
++		return MM_RETURN_CODE_INVALID_PARAMETER;
++
++	if (call_ep->comm_buffer_size < header_end_offset)
+ 		return MM_RETURN_CODE_INVALID_PARAMETER;
+ 
+ 	/* Validating comm buffer contents */
+-	header = (EFI_MM_COMMUNICATE_HEADER *)call_ep->comm_buffer;
+-	if (ADD_OVERFLOW(header->MessageLength, EFI_MM_COMMUNICATE_HEADER_SIZE, &request_size))
++	header = (EFI_MM_COMMUNICATE_HEADER *)(call_ep->comm_buffer + buffer_offset);
++	if (ADD_OVERFLOW(header_end_offset, header->MessageLength, &request_end_offset))
+ 		return MM_RETURN_CODE_INVALID_PARAMETER;
+ 
+-	if (call_ep->comm_buffer_size < request_size)
++	if (call_ep->comm_buffer_size < request_end_offset)
+ 		return MM_RETURN_CODE_INVALID_PARAMETER;
+ 
++	buffer_size = call_ep->comm_buffer_size - header_end_offset;
++
+ 	/* Finding iface_id by GUID */
+ 	for (i = 0; i < ARRAY_SIZE(call_ep->service_table); i++) {
+ 		const struct mm_service_entry *entry = &call_ep->service_table[i];
+ 
+ 		if (entry->iface != NULL &&
+ 		    memcmp(&header->HeaderGuid, &entry->guid, sizeof(entry->guid)) == 0)
+-			return invoke_mm_service(call_ep, source_id, entry->iface, header);
++			return invoke_mm_service(call_ep, source_id, entry->iface, header,
++						 buffer_size);
+ 	}
+ 
+ 	return MM_RETURN_CODE_NOT_SUPPORTED;
+@@ -123,19 +130,16 @@ void mm_communicate_call_ep_receive(struct mm_communicate_ep *mm_communicate_cal
+ 				    const struct ffa_direct_msg *req_msg,
+ 				    struct ffa_direct_msg *resp_msg)
+ {
+-	int32_t return_value = 0;
+-	uintptr_t buffer_address = 0;
+-	size_t buffer_size = 0;
+-
+-	buffer_address = req_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_ADDRESS];
+-	buffer_size = req_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_SIZE];
+-
+-	return_value = handle_mm_communicate(mm_communicate_call_ep, req_msg->source_id,
+-					     buffer_address, buffer_size);
+-
+-	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_RETURN_ID] = ARM_SVC_ID_SP_EVENT_COMPLETE;
+-	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_RETURN_CODE] = return_value;
+-	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_MBZ0] = 0;
+-	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_MBZ1] = 0;
+-	resp_msg->args.args32[MM_COMMUNICATE_CALL_ARGS_MBZ2] = 0;
++	int32_t return_value = MM_RETURN_CODE_NOT_SUPPORTED;
++	size_t buffer_offset = req_msg->args.args64[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_OFFSET];
++
++	return_value = handle_mm_communicate(mm_communicate_call_ep,
++						req_msg->source_id,
++						buffer_offset);
++
++	resp_msg->args.args64[MM_COMMUNICATE_CALL_ARGS_RETURN_CODE] = return_value;
++	resp_msg->args.args64[MM_COMMUNICATE_CALL_ARGS_MBZ0] = 0;
++	resp_msg->args.args64[MM_COMMUNICATE_CALL_ARGS_MBZ1] = 0;
++	resp_msg->args.args64[MM_COMMUNICATE_CALL_ARGS_MBZ2] = 0;
++	resp_msg->args.args64[MM_COMMUNICATE_CALL_ARGS_MBZ3] = 0;
+ }
+diff --git a/components/rpc/mm_communicate/endpoint/sp/test/mock_mm_service.cpp b/components/rpc/mm_communicate/endpoint/sp/test/mock_mm_service.cpp
+index a58c33a..0ae2a80 100644
+--- a/components/rpc/mm_communicate/endpoint/sp/test/mock_mm_service.cpp
++++ b/components/rpc/mm_communicate/endpoint/sp/test/mock_mm_service.cpp
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: BSD-3-Clause
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #include <CppUTestExt/MockSupport.h>
+@@ -16,7 +16,7 @@ void mock_mm_service_init(void)
+ 
+ void expect_mock_mm_service_receive(struct mm_service_interface *iface,
+ 				    const struct mm_service_call_req *req,
+-				    int32_t result)
++				    int64_t result)
+ {
+ 	mock().expectOneCall("mm_service_receive").onObject(iface).
+ 		withOutputParameterReturning("resp_buf_data_len", &req->resp_buf.data_len,
+@@ -31,5 +31,5 @@ int32_t mock_mm_service_receive(struct mm_service_interface *iface,
+ 	return mock().actualCall("mm_service_receive").onObject(iface).
+ 		withOutputParameter("resp_buf_data_len", &req->resp_buf.data_len).
+ 		withParameterOfType("mm_service_call_req", "req", req).
+-		returnIntValue();
++		returnLongIntValue();
+ }
+diff --git a/components/rpc/mm_communicate/endpoint/sp/test/mock_mm_service.h b/components/rpc/mm_communicate/endpoint/sp/test/mock_mm_service.h
+index 768022d..56c8a26 100644
+--- a/components/rpc/mm_communicate/endpoint/sp/test/mock_mm_service.h
++++ b/components/rpc/mm_communicate/endpoint/sp/test/mock_mm_service.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: BSD-3-Clause */
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #ifndef MOCK_MM_SERVICE_H_
+@@ -16,7 +16,7 @@ void mock_mm_service_init(void);
+ 
+ void expect_mock_mm_service_receive(struct mm_service_interface *iface,
+ 				    const struct mm_service_call_req *req,
+-				    int32_t result);
++				    int64_t result);
+ 
+ int32_t mock_mm_service_receive(struct mm_service_interface *iface,
+ 				struct mm_service_call_req *req);
+diff --git a/components/rpc/mm_communicate/endpoint/sp/test/test_mm_communicate_call_ep.cpp b/components/rpc/mm_communicate/endpoint/sp/test/test_mm_communicate_call_ep.cpp
+index 55a61fb..5aaa3a6 100644
+--- a/components/rpc/mm_communicate/endpoint/sp/test/test_mm_communicate_call_ep.cpp
++++ b/components/rpc/mm_communicate/endpoint/sp/test/test_mm_communicate_call_ep.cpp
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: BSD-3-Clause
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #include <CppUTest/TestHarness.h>
+@@ -32,14 +32,14 @@ TEST_GROUP(mm_communicate_call_ep)
+ 		mock().clear();
+ 	}
+ 
+-	void check_sp_msg(const struct ffa_direct_msg *msg, uint32_t arg0,
+-			  uint32_t arg1, uint32_t arg2, uint32_t arg3, uint32_t arg4)
++	void check_sp_msg(const struct ffa_direct_msg *msg, uint64_t arg0,
++			  uint64_t arg1, uint64_t arg2, uint64_t arg3, uint64_t arg4)
+ 	{
+-		UNSIGNED_LONGLONGS_EQUAL(arg0, msg->args[0]);
+-		UNSIGNED_LONGLONGS_EQUAL(arg1, msg->args[1]);
+-		UNSIGNED_LONGLONGS_EQUAL(arg2, msg->args[2]);
+-		UNSIGNED_LONGLONGS_EQUAL(arg3, msg->args[3]);
+-		UNSIGNED_LONGLONGS_EQUAL(arg4,  msg->args[4]);
++		UNSIGNED_LONGLONGS_EQUAL(arg0, msg->args.args64[0]);
++		UNSIGNED_LONGLONGS_EQUAL(arg1, msg->args.args64[1]);
++		UNSIGNED_LONGLONGS_EQUAL(arg2, msg->args.args64[2]);
++		UNSIGNED_LONGLONGS_EQUAL(arg3, msg->args.args64[3]);
++		UNSIGNED_LONGLONGS_EQUAL(arg4,  msg->args.args64[4]);
+ 	}
+ 
+ 	struct mm_communicate_ep call_ep;
+@@ -114,59 +114,54 @@ TEST(mm_communicate_call_ep, attach_do_not_fit)
+ 	}
+ }
+ 
+-TEST(mm_communicate_call_ep, mm_communicate_no_buffer_arg)
++TEST(mm_communicate_call_ep, mm_communicate_offset_int_overflow)
+ {
+ 	CHECK_TRUE(mm_communicate_call_ep_init(&call_ep, comm_buffer, sizeof(comm_buffer)));
++	req_msg.args.args64[0] = 0xffffffffffffffff;
+ 
+ 	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
+ 
+-	check_sp_msg(&resp_msg, ARM_SVC_ID_SP_EVENT_COMPLETE, MM_RETURN_CODE_INVALID_PARAMETER,
+-		     0, 0, 0);
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_INVALID_PARAMETER, 0, 0, 0, 0);
+ }
+ 
+-TEST(mm_communicate_call_ep, mm_communicate_length_overflow)
++TEST(mm_communicate_call_ep, mm_communicate_offset_overflow)
+ {
+ 	CHECK_TRUE(mm_communicate_call_ep_init(&call_ep, comm_buffer, sizeof(comm_buffer)));
++	req_msg.args.args64[0] = sizeof(comm_buffer) - EFI_MM_COMMUNICATE_HEADER_SIZE + 1;
++
++	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
+ 
+-	req_msg.args[0] = (uintptr_t)comm_buffer;
+-	req_msg.args[1] = sizeof(comm_buffer);
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_INVALID_PARAMETER, 0, 0, 0, 0);
++}
+ 
++TEST(mm_communicate_call_ep, mm_communicate_length_overflow)
++{
++	CHECK_TRUE(mm_communicate_call_ep_init(&call_ep, comm_buffer, sizeof(comm_buffer)));
+ 	header->MessageLength = UINT64_MAX - EFI_MM_COMMUNICATE_HEADER_SIZE + 1;
+ 
+ 	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
+ 
+-	check_sp_msg(&resp_msg, ARM_SVC_ID_SP_EVENT_COMPLETE, MM_RETURN_CODE_INVALID_PARAMETER,
+-		     0, 0, 0);
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_INVALID_PARAMETER, 0, 0, 0, 0);
+ }
+ 
+ TEST(mm_communicate_call_ep, mm_communicate_too_large)
+ {
+ 	CHECK_TRUE(mm_communicate_call_ep_init(&call_ep, comm_buffer, sizeof(comm_buffer)));
+-
+-	req_msg.args[0] = (uintptr_t)comm_buffer;
+-	req_msg.args[1] = sizeof(comm_buffer);
+-
+ 	header->MessageLength = sizeof(comm_buffer) - EFI_MM_COMMUNICATE_HEADER_SIZE + 1;
+ 
+ 	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
+ 
+-	check_sp_msg(&resp_msg, ARM_SVC_ID_SP_EVENT_COMPLETE, MM_RETURN_CODE_INVALID_PARAMETER,
+-		     0, 0, 0);
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_INVALID_PARAMETER, 0, 0, 0, 0);
+ }
+ 
+ TEST(mm_communicate_call_ep, mm_communicate_no_handler)
+ {
+ 	CHECK_TRUE(mm_communicate_call_ep_init(&call_ep, comm_buffer, sizeof(comm_buffer)));
+-
+-	req_msg.args[0] = (uintptr_t)comm_buffer;
+-	req_msg.args[1] = sizeof(comm_buffer);
+-
+ 	header->MessageLength = 0;
+ 
+ 	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
+ 
+-	check_sp_msg(&resp_msg, ARM_SVC_ID_SP_EVENT_COMPLETE, MM_RETURN_CODE_NOT_SUPPORTED,
+-		     0, 0, 0);
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_NOT_SUPPORTED, 0, 0, 0, 0);
+ }
+ 
+ TEST(mm_communicate_call_ep, mm_communicate_single_handler_not_matching)
+@@ -175,16 +170,11 @@ TEST(mm_communicate_call_ep, mm_communicate_single_handler_not_matching)
+ 
+ 	CHECK_TRUE(mm_communicate_call_ep_init(&call_ep, comm_buffer, sizeof(comm_buffer)));
+ 	mm_communicate_call_ep_attach_service(&call_ep, &guid0, &iface);
+-
+-	req_msg.args[0] = (uintptr_t)comm_buffer;
+-	req_msg.args[1] = sizeof(comm_buffer);
+-
+ 	header->MessageLength = 0;
+ 
+ 	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
+ 
+-	check_sp_msg(&resp_msg, ARM_SVC_ID_SP_EVENT_COMPLETE, MM_RETURN_CODE_NOT_SUPPORTED,
+-		     0, 0, 0);
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_NOT_SUPPORTED, 0, 0, 0, 0);
+ }
+ 
+ TEST(mm_communicate_call_ep, mm_communicate_single_handler_matching)
+@@ -211,19 +201,55 @@ TEST(mm_communicate_call_ep, mm_communicate_single_handler_matching)
+ 	CHECK_TRUE(mm_communicate_call_ep_init(&call_ep, comm_buffer, sizeof(comm_buffer)));
+ 	mm_communicate_call_ep_attach_service(&call_ep, &guid0, &iface);
+ 
+-	req_msg.args[0] = (uintptr_t)comm_buffer;
+-	req_msg.args[1] = sizeof(comm_buffer);
++	memcpy(&header->HeaderGuid, &guid0, sizeof(guid0));
++	header->MessageLength = req_len;
++
++	expect_mock_mm_service_receive(&iface, &req, MM_RETURN_CODE_SUCCESS);
++
++	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
++
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_SUCCESS, 0, 0, 0, 0);
++}
++
++TEST(mm_communicate_call_ep, mm_communicate_single_handler_matching_with_offset)
++{
++	const size_t offset = 0x10;
++	EFI_MM_COMMUNICATE_HEADER *header = (EFI_MM_COMMUNICATE_HEADER *)(comm_buffer + offset);
++
++	const size_t req_len = 16;
++	struct mm_service_interface iface = {
++		.context = (void *)0x1234,
++		.receive = mock_mm_service_receive
++	};
++	struct mm_service_call_req req = {
++		.guid = &guid0,
++		.req_buf = {
++			.size = sizeof(comm_buffer) - EFI_MM_COMMUNICATE_HEADER_SIZE - offset,
++			.data_len = req_len,
++			.data = header->Data
++		},
++		.resp_buf = {
++			.size = sizeof(comm_buffer) - EFI_MM_COMMUNICATE_HEADER_SIZE - offset,
++			.data_len = 0,
++			.data = header->Data
++		},
++	};
++
++	CHECK_TRUE(mm_communicate_call_ep_init(&call_ep, comm_buffer, sizeof(comm_buffer)));
++	mm_communicate_call_ep_attach_service(&call_ep, &guid0, &iface);
+ 
+ 	memcpy(&header->HeaderGuid, &guid0, sizeof(guid0));
+ 	header->MessageLength = req_len;
++	req_msg.args.args64[0] = offset;
+ 
+ 	expect_mock_mm_service_receive(&iface, &req, MM_RETURN_CODE_SUCCESS);
+ 
+ 	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
+ 
+-	check_sp_msg(&resp_msg, ARM_SVC_ID_SP_EVENT_COMPLETE, MM_RETURN_CODE_SUCCESS, 0, 0, 0);
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_SUCCESS, 0, 0, 0, 0);
+ }
+ 
++
+ TEST(mm_communicate_call_ep, mm_communicate_single_handler_matching_error)
+ {
+ 	const size_t req_len = 16;
+@@ -248,9 +274,6 @@ TEST(mm_communicate_call_ep, mm_communicate_single_handler_matching_error)
+ 	CHECK_TRUE(mm_communicate_call_ep_init(&call_ep, comm_buffer, sizeof(comm_buffer)));
+ 	mm_communicate_call_ep_attach_service(&call_ep, &guid0, &iface);
+ 
+-	req_msg.args[0] = (uintptr_t)comm_buffer;
+-	req_msg.args[1] = sizeof(comm_buffer);
+-
+ 	memcpy(&header->HeaderGuid, &guid0, sizeof(guid0));
+ 	header->MessageLength = req_len;
+ 
+@@ -258,7 +281,7 @@ TEST(mm_communicate_call_ep, mm_communicate_single_handler_matching_error)
+ 
+ 	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
+ 
+-	check_sp_msg(&resp_msg, ARM_SVC_ID_SP_EVENT_COMPLETE, MM_RETURN_CODE_NO_MEMORY, 0, 0, 0);
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_NO_MEMORY, 0, 0, 0, 0);
+ }
+ 
+ TEST(mm_communicate_call_ep, mm_communicate_two_handlers)
+@@ -290,9 +313,6 @@ TEST(mm_communicate_call_ep, mm_communicate_two_handlers)
+ 	mm_communicate_call_ep_attach_service(&call_ep, &guid0, &iface0);
+ 	mm_communicate_call_ep_attach_service(&call_ep, &guid1, &iface1);
+ 
+-	req_msg.args[0] = (uintptr_t)comm_buffer;
+-	req_msg.args[1] = sizeof(comm_buffer);
+-
+ 	memcpy(&header->HeaderGuid, &guid1, sizeof(guid0));
+ 	header->MessageLength = req_len;
+ 
+@@ -300,5 +320,5 @@ TEST(mm_communicate_call_ep, mm_communicate_two_handlers)
+ 
+ 	mm_communicate_call_ep_receive(&call_ep, &req_msg, &resp_msg);
+ 
+-	check_sp_msg(&resp_msg, ARM_SVC_ID_SP_EVENT_COMPLETE, MM_RETURN_CODE_SUCCESS, 0, 0, 0);
++	check_sp_msg(&resp_msg, MM_RETURN_CODE_SUCCESS, 0, 0, 0, 0);
+ }
+diff --git a/components/rpc/mm_communicate/endpoint/sp/test/test_mock_mm_service.cpp b/components/rpc/mm_communicate/endpoint/sp/test/test_mock_mm_service.cpp
+index 360a8fa..600386e 100644
+--- a/components/rpc/mm_communicate/endpoint/sp/test/test_mock_mm_service.cpp
++++ b/components/rpc/mm_communicate/endpoint/sp/test/test_mock_mm_service.cpp
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: BSD-3-Clause
+ /*
+- * Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.
++ * Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
+  */
+ 
+ #include <CppUTest/TestHarness.h>
+@@ -44,7 +44,7 @@ TEST(mock_mm_service, receive)
+ 			.data = (void *)0x2345
+ 		}
+ 	};
+-	int32_t result = -123456;
++	int64_t result = -123456;
+ 
+ 	expect_mock_mm_service_receive(&iface, &req, result);
+ 	LONGS_EQUAL(result, mock_mm_service_receive(&iface, &req));
+diff --git a/deployments/smm-gateway/common/smm_gateway_sp.c b/deployments/smm-gateway/common/smm_gateway_sp.c
+index 3697b7f..3062877 100644
+--- a/deployments/smm-gateway/common/smm_gateway_sp.c
++++ b/deployments/smm-gateway/common/smm_gateway_sp.c
+@@ -11,6 +11,7 @@
+ #include "components/rpc/mm_communicate/endpoint/sp/mm_communicate_call_ep.h"
+ #include "components/service/smm_variable/frontend/mm_communicate/smm_variable_mm_service.h"
+ #include "platform/interface/memory_region.h"
++#include "protocols/common/mm/mm_smc.h"
+ #include <ffa_api.h>
+ #include <sp_api.h>
+ #include <sp_messaging.h>
+@@ -68,12 +69,20 @@ void __noreturn sp_main(struct ffa_init_info *init_info)
+ 	ffa_msg_wait(&req_msg);
+ 
+ 	while (1) {
++		if (FFA_IS_32_BIT_FUNC(req_msg.function_id)) {
++			EMSG("MM communicate over 32 bit FF-A messages is not supported");
++			ffa_msg_send_direct_resp_32(req_msg.destination_id, req_msg.source_id,
++						    MM_RETURN_CODE_NOT_SUPPORTED, 0, 0, 0, 0,
++						    &req_msg);
++			continue;
++		}
++
+ 		mm_communicate_call_ep_receive(&mm_communicate_call_ep, &req_msg, &resp_msg);
+ 
+-		ffa_msg_send_direct_resp_32(req_msg.destination_id,
+-					 req_msg.source_id, resp_msg.args.args32[0],
+-					 resp_msg.args.args32[1], resp_msg.args.args32[2],
+-					 resp_msg.args.args32[3], resp_msg.args.args32[4],
++		ffa_msg_send_direct_resp_64(req_msg.destination_id,
++					 req_msg.source_id, resp_msg.args.args64[0],
++					 resp_msg.args.args64[1], resp_msg.args.args64[2],
++					 resp_msg.args.args64[3], resp_msg.args.args64[4],
+ 					 &req_msg);
+ 	}
+ 
+-- 
+2.17.1
+
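The SP main loop above keys its behaviour on FFA_IS_32_BIT_FUNC(req_msg.function_id), whose definition is not shown in this diff. As a hedged, illustrative aside: under the SMC calling convention the 32/64-bit convention is carried in bit 30 of the function ID, so a minimal check along those lines could look like the sketch below. The helper name is made up for the sketch and the real libsp macro may differ in detail.

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch only: SMCCC encodes the SMC64 convention in bit 30 of the
     * function ID, so an FF-A direct message whose function ID has bit 30
     * clear is a 32-bit message. */
    static bool ffa_func_id_is_32bit(uint32_t function_id)
    {
        return (function_id & (UINT32_C(1) << 30)) == 0;
    }
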
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0023-Change-MM-communicate-RPC-protocol-of-MM-caller.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0023-Change-MM-communicate-RPC-protocol-of-MM-caller.patch
new file mode 100644
index 0000000..4244225
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0023-Change-MM-communicate-RPC-protocol-of-MM-caller.patch
@@ -0,0 +1,100 @@
+From 96d226e4e0ea9c633dbc5d05ae2a7a2f4ba0f39e Mon Sep 17 00:00:00 2001
+From: Imre Kis <imre.kis@arm.com>
+Date: Fri, 22 Jul 2022 17:22:05 +0200
+Subject: [PATCH 23/24] Change MM communicate RPC protocol of MM caller
+
+Replace the buffer address and size parameters with a buffer offset
+parameter and move to 64-bit FF-A direct message calls. This change
+requires an updated version of the debugfs driver which supports 64-bit
+direct messages.
+
+Signed-off-by: Imre Kis <imre.kis@arm.com>
+Change-Id: I003c1de7f9c3f45bbc52e4a51d622ec960fa7052
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ .../caller/linux/mm_communicate_caller.c      | 35 +++++++------------
+ .../LinuxFFAUserShim/LinuxFFAUserShim.cmake   |  2 +-
+ 2 files changed, 14 insertions(+), 23 deletions(-)
+
+diff --git a/components/rpc/mm_communicate/caller/linux/mm_communicate_caller.c b/components/rpc/mm_communicate/caller/linux/mm_communicate_caller.c
+index 0c505b4..0287acf 100644
+--- a/components/rpc/mm_communicate/caller/linux/mm_communicate_caller.c
++++ b/components/rpc/mm_communicate/caller/linux/mm_communicate_caller.c
+@@ -19,7 +19,7 @@
+ #include <string.h>
+ #include <errno.h>
+ 
+-#define KERNEL_MOD_REQ_VER_MAJOR 2
++#define KERNEL_MOD_REQ_VER_MAJOR 5
+ #define KERNEL_MOD_REQ_VER_MINOR 0
+ #define KERNEL_MOD_REQ_VER_PATCH 0
+ 
+@@ -294,37 +294,28 @@ static rpc_status_t call_invoke(
+ 
+ 		direct_msg.dst_id = s->dest_partition_id;
+ 
+-		direct_msg.args[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_ADDRESS] = (uintptr_t)s->comm_buffer;
+-		direct_msg.args[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_SIZE] = s->comm_buffer_size;
++		direct_msg.args[MM_COMMUNICATE_CALL_ARGS_COMM_BUFFER_OFFSET] = 0;
+ 
+ 		int kernel_op_status = ioctl(s->ffa_fd, FFA_IOC_MSG_SEND, &direct_msg);
+ 
+ 		if (kernel_op_status == 0) {
+-
+ 			/* Kernel send operation completed normally */
+-			uint32_t mm_return_id = direct_msg.args[MM_COMMUNICATE_CALL_ARGS_RETURN_ID];
+ 			int32_t mm_return_code = direct_msg.args[MM_COMMUNICATE_CALL_ARGS_RETURN_CODE];
+ 
+-			if (mm_return_id == ARM_SVC_ID_SP_EVENT_COMPLETE) {
+-
+-				if (mm_return_code == MM_RETURN_CODE_SUCCESS) {
+-
+-					mm_communicate_serializer_header_decode(s->serializer,
+-						s->comm_buffer, (efi_status_t*)opstatus, resp_buf, resp_len);
+-
+-					if (*resp_len > s->req_len) {
++			if (mm_return_code == MM_RETURN_CODE_SUCCESS) {
++				mm_communicate_serializer_header_decode(
++					s->serializer, s->comm_buffer, (efi_status_t *)opstatus,
++					resp_buf, resp_len);
+ 
+-						s->scrub_len =
+-							mm_communicate_serializer_header_size(s->serializer) +
+-							*resp_len;
+-					}
++				if (*resp_len > s->req_len)
++					s->scrub_len =
++						mm_communicate_serializer_header_size(
++							s->serializer) + *resp_len;
+ 
+-					rpc_status = TS_RPC_CALL_ACCEPTED;
+-				}
+-				else {
++				rpc_status = TS_RPC_CALL_ACCEPTED;
++			} else {
+ 
+-					rpc_status = mm_return_code_to_rpc_status(mm_return_code);
+-				}
++				rpc_status = mm_return_code_to_rpc_status(mm_return_code);
+ 			}
+ 		}
+ 	}
+diff --git a/external/LinuxFFAUserShim/LinuxFFAUserShim.cmake b/external/LinuxFFAUserShim/LinuxFFAUserShim.cmake
+index 7ba64af..9c2252c 100644
+--- a/external/LinuxFFAUserShim/LinuxFFAUserShim.cmake
++++ b/external/LinuxFFAUserShim/LinuxFFAUserShim.cmake
+@@ -11,7 +11,7 @@
+ 
+ set(LINUX_FFA_USER_SHIM_URL "https://git.gitlab.arm.com/linux-arm/linux-trusted-services.git"
+ 	CACHE STRING "Linux FF-A user space shim repository URL")
+-set(LINUX_FFA_USER_SHIM_REFSPEC "v4.0.0"
++set(LINUX_FFA_USER_SHIM_REFSPEC "v5.0.0"
+ 	CACHE STRING "Linux FF-A user space shim git refspec")
+ 
+ set(LINUX_FFA_USER_SHIM_SOURCE_DIR "${CMAKE_CURRENT_BINARY_DIR}/_deps/linux_ffa_user_shim-src"
+-- 
+2.17.1
+
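For illustration, here is a minimal sketch of the caller flow this patch moves to: a single buffer-offset argument replaces the old (address, size) pair, and the MM return code is read directly from the response, without the ARM_SVC_ID_SP_EVENT_COMPLETE marker. The struct layout, ioctl request number, argument indices and success value below are assumptions made for the sketch; the real definitions come from the MM communicate protocol headers and the arm-ffa-user shim.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>

    /* All of the following definitions are placeholders for illustration. */
    struct ffa_ioctl_msg {
        uint16_t dst_id;
        uint64_t args[5];
    };

    #define FFA_IOC_MSG_SEND _IOWR('f', 1, struct ffa_ioctl_msg) /* assumed */
    #define COMM_BUFFER_OFFSET_ARG 0  /* offset argument index, assumed */
    #define RETURN_CODE_ARG        0  /* return code index, assumed */
    #define MM_RETURN_CODE_SUCCESS 0  /* assumed */

    static int mm_communicate_invoke(int ffa_fd, uint16_t dest_partition_id)
    {
        struct ffa_ioctl_msg msg = { .dst_id = dest_partition_id };

        /* Offset of the MM communicate header inside the shared carve-out;
         * the old buffer address/size arguments are gone. */
        msg.args[COMM_BUFFER_OFFSET_ARG] = 0;

        if (ioctl(ffa_fd, FFA_IOC_MSG_SEND, &msg) != 0)
            return -1;

        /* The MM return code now arrives directly in the response args. */
        int32_t mm_rc = (int32_t)msg.args[RETURN_CODE_ARG];
        return (mm_rc == MM_RETURN_CODE_SUCCESS) ? 0 : -1;
    }
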
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/files/0024-Deny-64-bit-FF-A-messages-in-FF-A-RPC-endpoint.patch b/meta-arm/meta-arm/recipes-security/trusted-services/files/0024-Deny-64-bit-FF-A-messages-in-FF-A-RPC-endpoint.patch
new file mode 100644
index 0000000..01cf523
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/files/0024-Deny-64-bit-FF-A-messages-in-FF-A-RPC-endpoint.patch
@@ -0,0 +1,70 @@
+From f173e99554512c982665c1d5d4b0543421177b09 Mon Sep 17 00:00:00 2001
+From: Imre Kis <imre.kis@arm.com>
+Date: Tue, 26 Jul 2022 17:06:46 +0200
+Subject: [PATCH 24/24] Deny 64 bit FF-A messages in FF-A RPC endpoint
+
+The FF-A RPC protocol only allows 32-bit FF-A direct messages, so deny
+all 64-bit messages in the RPC endpoint.
+
+Signed-off-by: Imre Kis <imre.kis@arm.com>
+Change-Id: I37c95425f80b6e2821b3f6b8649ceba8aa007bce
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ .../rpc/ffarpc/endpoint/ffarpc_call_ep.c      | 31 ++++++++++++-------
+ 1 file changed, 20 insertions(+), 11 deletions(-)
+
+diff --git a/components/rpc/ffarpc/endpoint/ffarpc_call_ep.c b/components/rpc/ffarpc/endpoint/ffarpc_call_ep.c
+index c024196..3035c16 100644
+--- a/components/rpc/ffarpc/endpoint/ffarpc_call_ep.c
++++ b/components/rpc/ffarpc/endpoint/ffarpc_call_ep.c
+@@ -12,6 +12,7 @@
+ #include <protocols/rpc/common/packed-c/status.h>
+ #include <trace.h>
+ #include <stddef.h>
++#include <string.h>
+ 
+ /* TODO: remove this when own ID will be available in libsp */
+ extern uint16_t own_id;
+@@ -260,17 +261,25 @@ void ffa_call_ep_receive(struct ffa_call_ep *call_ep,
+ 			 const struct sp_msg *req_msg,
+ 			 struct sp_msg *resp_msg)
+ {
+-	const uint32_t *req_args = req_msg->args.args32;
+-	uint32_t *resp_args = resp_msg->args.args32;
+-
+-	uint16_t source_id = req_msg->source_id;
+-	uint32_t ifaceid_opcode = req_args[SP_CALL_ARGS_IFACE_ID_OPCODE];
+-
+-	if (FFA_CALL_ARGS_EXTRACT_IFACE(ifaceid_opcode) == FFA_CALL_MGMT_IFACE_ID) {
+-		/* It's an RPC layer management request */
+-		handle_mgmt_msg(call_ep, source_id, req_args, resp_args);
++	resp_msg->is_64bit_message = req_msg->is_64bit_message;
++	memset(&resp_msg->args, 0x00, sizeof(resp_msg->args));
++
++	if (!req_msg->is_64bit_message) {
++		const uint32_t *req_args = req_msg->args.args32;
++		uint32_t *resp_args = resp_msg->args.args32;
++		uint16_t source_id = req_msg->source_id;
++		uint32_t ifaceid_opcode = req_args[SP_CALL_ARGS_IFACE_ID_OPCODE];
++
++		if (FFA_CALL_ARGS_EXTRACT_IFACE(ifaceid_opcode) == FFA_CALL_MGMT_IFACE_ID) {
++			/* It's an RPC layer management request */
++			handle_mgmt_msg(call_ep, source_id, req_args, resp_args);
++		} else {
++			/* Assume anything else is a service request */
++			handle_service_msg(call_ep, source_id, req_args, resp_args);
++		}
+ 	} else {
+-		/* Assume anything else is a service request */
+-		handle_service_msg(call_ep, source_id, req_args, resp_args);
++		EMSG("64 bit FF-A messages are not supported by the TS RPC layer");
++		resp_msg->args.args64[SP_CALL_ARGS_RESP_RPC_STATUS] =
++			TS_RPC_ERROR_INVALID_PARAMETER;
+ 	}
+ }
+-- 
+2.17.1
+
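As a self-contained illustration of the guard pattern introduced above (mirror the request width, clear the response arguments, and reject 64-bit requests at the RPC layer), the sketch below uses stand-in types, indices and status values rather than the Trusted Services definitions.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct msg {
        bool is_64bit_message;
        union { uint32_t args32[5]; uint64_t args64[5]; } args;
    };

    #define RPC_STATUS_INVALID_PARAMETER (-4) /* stand-in value */
    #define RESP_RPC_STATUS_IDX 0             /* stand-in index */

    static void rpc_ep_receive(const struct msg *req, struct msg *resp)
    {
        /* Mirror the request width and start from a clean response. */
        resp->is_64bit_message = req->is_64bit_message;
        memset(&resp->args, 0, sizeof(resp->args));

        if (req->is_64bit_message) {
            /* 64-bit direct messages are rejected at the RPC layer. */
            resp->args.args64[RESP_RPC_STATUS_IDX] =
                (uint64_t)(int64_t)RPC_STATUS_INVALID_PARAMETER;
            return;
        }

        /* 32-bit requests would be dispatched to management or service
         * handlers here, as in the real endpoint. */
    }
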
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/libts/0001-QEMU-MM-communication-buffer-address.patch b/meta-arm/meta-arm/recipes-security/trusted-services/libts/0001-QEMU-MM-communication-buffer-address.patch
new file mode 100644
index 0000000..2c21e6f
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/libts/0001-QEMU-MM-communication-buffer-address.patch
@@ -0,0 +1,29 @@
+From 1fe74d7d5008aed61feb34a8d5d8b5f9144a58b2 Mon Sep 17 00:00:00 2001
+From: Anton Antonov <Anton.Antonov@arm.com>
+Date: Wed, 31 Aug 2022 16:33:13 +0100
+Subject: [PATCH] Update MM communication buffer address for qemuarm64 machine
+
+Upstream-Status: Inappropriate [qemuarm64 specific change]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+---
+ components/rpc/mm_communicate/caller/linux/carveout.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/components/rpc/mm_communicate/caller/linux/carveout.c b/components/rpc/mm_communicate/caller/linux/carveout.c
+index e3cdf16f..62845d30 100644
+--- a/components/rpc/mm_communicate/caller/linux/carveout.c
++++ b/components/rpc/mm_communicate/caller/linux/carveout.c
+@@ -12,8 +12,8 @@
+ #include "carveout.h"
+ 
+ /* Need to be aligned with carve-out used by StMM or smm-gateway. */
+-static const off_t carveout_pa = 0x0000000881000000;
+-static const size_t carveout_len = 0x8000;
++static const off_t carveout_pa = 0x42000000;
++static const size_t carveout_len = 0x1000;
+ 
+ int carveout_claim(uint8_t **buf, size_t *buf_size)
+ {
+-- 
+2.25.1
+
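A hedged sketch of what claiming such a static carve-out from user space can look like, assuming it is exposed through /dev/mem; the diff above only changes the address and length, and whether carveout.c maps it exactly this way is not shown. The values mirror the qemuarm64 patch.

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define CARVEOUT_PA  0x42000000UL /* from the patch above */
    #define CARVEOUT_LEN 0x1000UL     /* from the patch above */

    /* Returns a pointer to the mapped carve-out, or NULL on failure. */
    static void *map_carveout(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0)
            return NULL;

        void *buf = mmap(NULL, CARVEOUT_LEN, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, (off_t)CARVEOUT_PA);
        close(fd);
        return (buf == MAP_FAILED) ? NULL : buf;
    }
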
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/libts/tee-udev.rules b/meta-arm/meta-arm/recipes-security/trusted-services/libts/tee-udev.rules
new file mode 100644
index 0000000..216fe99
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/libts/tee-udev.rules
@@ -0,0 +1,2 @@
+# tee devices can only be accessed by the teeclnt group members
+KERNEL=="tee[0-9]*", TAG+="systemd", MODE="0660", GROUP="teeclnt"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/libts_%.bbappend b/meta-arm/meta-arm/recipes-security/trusted-services/libts_%.bbappend
new file mode 100644
index 0000000..f987e40
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/libts_%.bbappend
@@ -0,0 +1,3 @@
+# Update MM communication buffer address for qemuarm64 machine
+SRC_URI:append:qemuarm64-secureboot = "file://0001-QEMU-MM-communication-buffer-address.patch \
+"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/libts_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/libts_git.bb
new file mode 100644
index 0000000..dfcf8ce
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/libts_git.bb
@@ -0,0 +1,44 @@
+DESCRIPTION = "Trusted Services libts library for the arm-linux environment. \
+               Used for locating and accessing services from a Linux userspace client"
+
+TS_ENV = "arm-linux"
+
+require trusted-services.inc
+
+SRC_URI += "file://tee-udev.rules \
+           "
+
+OECMAKE_SOURCEPATH="${S}/deployments/libts/${TS_ENV}"
+
+DEPENDS           += "arm-ffa-tee arm-ffa-user"
+RRECOMMENDS:${PN} += "arm-ffa-tee"
+
+# arm-ffa-user.h is installed by arm-ffa-user recipe
+EXTRA_OECMAKE += "-DLINUX_FFA_USER_SHIM_INCLUDE_DIR:PATH=/usr/include \
+                 "
+
+# Unix group name for dev/tee* ownership.
+TEE_GROUP_NAME ?= "teeclnt"
+
+do_install:append () {
+    if ${@oe.utils.conditional('VIRTUAL-RUNTIME_dev_manager', 'busybox-mdev', 'false', 'true', d)}; then
+        install -d ${D}${nonarch_base_libdir}/udev/rules.d/
+        install -m 755 ${WORKDIR}/tee-udev.rules ${D}${nonarch_base_libdir}/udev/rules.d/
+        sed -i -e "s/teeclnt/${TEE_GROUP_NAME}/" ${D}${nonarch_base_libdir}/udev/rules.d/tee-udev.rules
+    fi
+
+    # Move the dynamic libraries into the standard place.
+    # Update a cmake files to use correct paths.
+    install -d ${D}${libdir}
+    mv ${D}${TS_INSTALL}/lib/libts* ${D}${libdir}
+
+    sed -i -e "s#/${TS_ENV}##g" ${D}${TS_INSTALL}/lib/cmake/libtsTargets-noconfig.cmake
+    sed -i -e 's#INTERFACE_INCLUDE_DIRECTORIES.*$#INTERFACE_INCLUDE_DIRECTORIES "\${_IMPORT_PREFIX}/${TS_ENV}/include"#' ${D}${TS_INSTALL}/lib/cmake/libtsTargets.cmake
+}
+
+inherit ${@oe.utils.conditional('VIRTUAL-RUNTIME_dev_manager', 'busybox-mdev', '', 'useradd', d)}
+USERADD_PACKAGES = "${PN}"
+GROUPADD_PARAM:${PN} = "--system ${TEE_GROUP_NAME}"
+
+FILES:${PN} = "${libdir}/libts.so.* ${nonarch_base_libdir}/udev/rules.d/"
+FILES:${PN}-dev = "${TS_INSTALL}/lib/cmake ${TS_INSTALL}/include ${libdir}/libts.so"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/trusted-services-src.inc b/meta-arm/meta-arm/recipes-security/trusted-services/trusted-services-src.inc
new file mode 100644
index 0000000..0251fef
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/trusted-services-src.inc
@@ -0,0 +1,80 @@
+# Define sources of Trusted Service and all external dependencies
+
+LICENSE = "Apache-2.0 & BSD-3-Clause & BSD-2-Clause & Zlib"
+
+SRC_URI = "git://git.trustedfirmware.org/TS/trusted-services.git;protocol=https;branch=integration;name=trusted-services;destsuffix=git/trusted-services \
+           file://0004-correctly-find-headers-dir.patch \
+           file://0005-setting-sysroot-for-libgcc-lookup.patch \
+           file://0006-applying-lowercase-project-convention.patch \
+           file://0009-PSA-CRYPTO-API-INCLUDE.patch \
+           file://0010-change-libts-to-export-CMake-package.patch \
+           file://0011-Adapt-deployments-to-libts-changes.patch \
+           file://0017-Move-libsp-mocks-into-separate-component.patch \
+           file://0018-Add-mock-for-libsp-sp_discovery.patch \
+           file://0019-Add-mock-for-libsp-sp_memory_management.patch \
+           file://0020-Add-mock-for-libsp-sp_messaging.patch \
+           file://0021-Add-64-bit-direct-message-handling-to-libsp.patch \
+           file://0022-Change-MM-communicate-RPC-protocol-of-call-endpoint.patch \
+           file://0023-Change-MM-communicate-RPC-protocol-of-MM-caller.patch \
+           file://0024-Deny-64-bit-FF-A-messages-in-FF-A-RPC-endpoint.patch \
+"
+
+# Latest as of 05.07.22.
+SRCREV_trusted-services = "1b0c520279445fc4d85fc582eda5e5ff5f380c39"
+LIC_FILES_CHKSUM = "file://${S}/license.rst;md5=ea160bac7f690a069c608516b17997f4"
+
+S = "${WORKDIR}/git/trusted-services"
+PV ?= "0.0+git${SRCPV}"
+
+# DTC, tag "v1.6.1"
+SRC_URI += "git://github.com/dgibson/dtc;name=dtc;protocol=https;branch=main;destsuffix=git/dtc"
+SRCREV_dtc = "b6910bec11614980a21e46fbccc35934b671bd81"
+LIC_FILES_CHKSUM += "file://../dtc/README.license;md5=a1eb22e37f09df5b5511b8a278992d0e"
+
+# MbedTLS, tag "mbedtls-3.1.0"
+SRC_URI += "git://github.com/ARMmbed/mbedtls.git;name=mbedtls;protocol=https;branch=master;destsuffix=git/mbedtls"
+SRCREV_mbedtls = "d65aeb37349ad1a50e0f6c9b694d4b5290d60e49"
+LIC_FILES_CHKSUM += "file://../mbedtls/LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
+
+# Nanopb, tag "nanopb-0.4.6"
+SRC_URI += "git://github.com/nanopb/nanopb.git;name=nanopb;protocol=https;branch=master;destsuffix=git/nanopb"
+SRCREV_nanopb = "afc499f9a410fc9bbf6c9c48cdd8d8b199d49eb4"
+LIC_FILES_CHKSUM += "file://../nanopb/LICENSE.txt;md5=9db4b73a55a3994384112efcdb37c01f"
+
+# qcbor, tag "v1.0.0"
+SRC_URI += "git://github.com/laurencelundblade/QCBOR.git;name=qcbor;protocol=https;branch=master;destsuffix=git/qcbor"
+SRCREV_qcbor = "56b17bf9f74096774944bcac0829adcd887d391e"
+LIC_FILES_CHKSUM += "file://../qcbor/README.md;md5=e8ff2e88a722cdc55eddd0bb9aeca002"
+
+# T_Cose
+SRC_URI += "git://github.com/laurencelundblade/t_cose.git;name=tcose;protocol=https;branch=master;destsuffix=git/tcose"
+SRCREV_tcose = "fc3a4b2c7196ff582e8242de8bd4a1bc4eec577f"
+LIC_FILES_CHKSUM += "file://../tcose/LICENSE;md5=b2ebdbfb82602b97aa628f64cf4b65ad"
+
+# CppUTest,  tag "v3.8"
+SRC_URI += "git://github.com/cpputest/cpputest.git;name=cpputest;protocol=https;branch=master;destsuffix=git/cpputest"
+SRCREV_cpputest = "e25097614e1c4856036366877a02346c4b36bb5b"
+LIC_FILES_CHKSUM += "file://../cpputest/COPYING;md5=ce5d5f1fe02bcd1343ced64a06fd4177"
+
+# TS ships patches for external dependencies that need to be applied
+apply_ts_patches() {
+    for p in ${S}/external/qcbor/*.patch; do
+        patch -p1 -N -d ${WORKDIR}/git/qcbor < ${p} || true
+    done
+    for p in ${S}/external/t_cose/*.patch; do
+        patch -p1 -N -d ${WORKDIR}/git/tcose < ${p} || true
+    done
+    for p in ${S}/external/CppUTest/*.patch; do
+        patch -p1 -d ${WORKDIR}/git/cpputest < ${p}
+    done
+}
+do_patch[postfuncs] += "apply_ts_patches"
+
+# Paths to dependencies required by some TS SPs/tools
+EXTRA_OECMAKE += "-DDTC_SOURCE_DIR=${WORKDIR}/git/dtc \
+                  -DCPPUTEST_SOURCE_DIR=${WORKDIR}/git/cpputest \
+                  -DNANOPB_SOURCE_DIR=${WORKDIR}/git/nanopb \
+                  -DT_COSE_SOURCE_DIR=${WORKDIR}/git/tcose \
+                  -DQCBOR_SOURCE_DIR=${WORKDIR}/git/qcbor \
+                  -DMBEDTLS_SOURCE_DIR=${WORKDIR}/git/mbedtls \
+                 "
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/trusted-services.inc b/meta-arm/meta-arm/recipes-security/trusted-services/trusted-services.inc
new file mode 100644
index 0000000..80c0849
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/trusted-services.inc
@@ -0,0 +1,52 @@
+SUMMARY ?= "The Trusted Services: framework for developing root-of-trust services"
+HOMEPAGE = "https://trusted-services.readthedocs.io/en/latest/index.html"
+
+LICENSE = "Apache-2.0 & BSD-3-Clause & Zlib"
+
+inherit python3native cmake
+
+COMPATIBLE_HOST = "aarch64.*-linux"
+
+require trusted-services-src.inc
+
+# By default bitbake includes only ${S} (i.e. git/trusted-services) in the debug prefix maps.
+# We also need to include the TS dependencies source trees.
+DEBUG_PREFIX_MAP:append = "-fmacro-prefix-map=${WORKDIR}/git=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
+ -fdebug-prefix-map=${WORKDIR}/git=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
+"
+
+TS_PLATFORM ?= "ts/mock"
+
+# SP images are embedded into optee-os image
+# FIP packaging is not supported yet
+SP_PACKAGING_METHOD ?= "embedded"
+
+SYSROOT_DIRS += "/usr/opteesp /usr/arm-linux"
+
+# TS cmake files use find_file() to search through source code and build dirs.
+# Yocto cmake class limits CMAKE_FIND_ROOT_PATH and find_file() fails.
+# Include the source tree and build dirs in the searchable path.
+OECMAKE_EXTRA_ROOT_PATH = "${WORKDIR}/git/ ${WORKDIR}/build/"
+
+EXTRA_OECMAKE += '-DLIBGCC_LOCATE_CFLAGS="--sysroot=${STAGING_DIR_HOST}" \
+                  -DCROSS_COMPILE="${TARGET_PREFIX}" \
+                  -DSP_PACKAGING_METHOD="${SP_PACKAGING_METHOD}" \
+                  -DTS_PLATFORM="${TS_PLATFORM}" \
+                 '
+export CROSS_COMPILE="${TARGET_PREFIX}"
+
+# Default TS installation path
+TS_INSTALL = "/usr/${TS_ENV}"
+
+# Use the Yocto cmake toolchain for arm-linux TS deployments and
+# the TS opteesp toolchain for opteesp TS deployments
+EXTRA_OECMAKE += "${@oe.utils.conditional('TS_ENV', 'opteesp', \
+                    '-DCMAKE_TOOLCHAIN_FILE=${S}/environments/${TS_ENV}/default_toolchain_file.cmake', \
+                    '-DTS_EXTERNAL_LIB_TOOLCHAIN_FILE=${WORKDIR}/toolchain.cmake', \
+                    d)} \
+                 "
+
+# Paths to pre-built dependencies required by some TS SPs/tools
+EXTRA_OECMAKE += "-Dlibts_DIR=${STAGING_DIR_HOST}${TS_INSTALL}/lib/cmake/ \
+                  -DNEWLIB_INSTALL_DIR=${STAGING_DIR_HOST}${TS_INSTALL}/newlib_install \
+                 "
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-demo_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-demo_git.bb
new file mode 100644
index 0000000..b0abb6f
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-demo_git.bb
@@ -0,0 +1,21 @@
+DESCRIPTION = "Trusted Services ts-demo deployment for arm-linux. \
+               Used for running a simple TS demo from Linux user-space \
+               on an Arm platform with real deployments of trusted services."
+
+TS_ENV = "arm-linux"
+
+require trusted-services.inc
+
+DEPENDS        += "libts"
+RDEPENDS:${PN} += "libts"
+
+OECMAKE_SOURCEPATH="${S}/deployments/ts-demo/${TS_ENV}"
+
+FILES:${PN} = "${bindir}/ts-demo"
+
+do_install:append () {
+    install -d ${D}${bindir}
+    mv ${D}${TS_INSTALL}/bin/ts-demo ${D}${bindir}
+
+    rm -r --one-file-system ${D}${TS_INSTALL}
+}
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-newlib/0003-Add-newlib-deployment.patch b/meta-arm/meta-arm/recipes-security/trusted-services/ts-newlib/0003-Add-newlib-deployment.patch
new file mode 100644
index 0000000..e43e7d2
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-newlib/0003-Add-newlib-deployment.patch
@@ -0,0 +1,85 @@
+From 03337e3a509eace9f55a46993cfaadee0e796f46 Mon Sep 17 00:00:00 2001
+From: Gyorgy Szing <Gyorgy.Szing@arm.com>
+Date: Fri, 14 Jan 2022 20:35:53 +0000
+Subject: [PATCH 1/1] Add newlib deployment
+
+This deployment allows building newlib directly rather than as part of SP
+builds. The resulting binary can be used as a pre-built binary for
+building SPs later.
+The intent is to help integration systems where recursive builds are
+problematic and there is no direct support for building newlib.
+
+Change-Id: I770cdfd3c39eb7bf9764de74dfb191c321c49561
+Signed-off-by: Gyorgy Szing <Gyorgy.Szing@arm.com>
+
+Upstream-Status: Pending [In review]
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+---
+ deployments/newlib/opteesp/CMakeLists.txt | 38 +++++++++++++++++++++++
+ tools/b-test/test_data.yaml               |  4 +++
+ 2 files changed, 42 insertions(+)
+ create mode 100644 deployments/newlib/opteesp/CMakeLists.txt
+
+diff --git a/deployments/newlib/opteesp/CMakeLists.txt b/deployments/newlib/opteesp/CMakeLists.txt
+new file mode 100644
+index 00000000..593e0a96
+--- /dev/null
++++ b/deployments/newlib/opteesp/CMakeLists.txt
+@@ -0,0 +1,38 @@
++#-------------------------------------------------------------------------------
++# Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
++#
++# SPDX-License-Identifier: BSD-3-Clause
++#
++#-------------------------------------------------------------------------------
++cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
++include(../../deployment.cmake REQUIRED)
++
++#-------------------------------------------------------------------------------
++#  The CMakeLists.txt for building the newlib deployment for opteesp
++#
++#  Can be used to build the newlib library, which can be used to build SPs.
++#-------------------------------------------------------------------------------
++include(${TS_ROOT}/environments/opteesp/env.cmake)
++
++project(newlib C)
++
++# This is a dummy library not intended to be compiled ever. It is needed
++# to avoid opteesp specific newlib targeting files.
++add_library(dummy EXCLUDE_FROM_ALL)
++set(TGT dummy)
++# Build newlib as an external component.
++include(${TS_ROOT}/external/newlib/newlib.cmake)
++
++######################################## install
++if (CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
++	set(CMAKE_INSTALL_PREFIX ${CMAKE_BINARY_DIR}/install CACHE PATH "location to install build output to." FORCE)
++endif()
++
++install(DIRECTORY ${NEWLIB_INSTALL_DIR} DESTINATION  ${TS_ENV})
++
++#get_property(_tmp_lib TARGET stdlib::c PROPERTY IMPORTED_LOCATION)
++#get_filename_component(_tmp_path ${_tmp_lib} DIRECTORY)
++#install(DIRECTORY ${_tmp_path} DESTINATION ${TS_ENV})
++
++#get_property(_tmp_path TARGET stdlib::c PROPERTY INTERFACE_INCLUDE_DIRECTORIES)
++#install(DIRECTORY ${_tmp_path} DESTINATION ${TS_ENV})
+diff --git a/tools/b-test/test_data.yaml b/tools/b-test/test_data.yaml
+index 7caafa8b..6bfacc66 100644
+--- a/tools/b-test/test_data.yaml
++++ b/tools/b-test/test_data.yaml
+@@ -69,6 +69,10 @@ data:
+       os_id : "GNU/Linux"
+       params:
+             - "-GUnix Makefiles"
++    - name: "newlib-optee-arm"
++      src: "$TS_ROOT/deployments/newlib/opteesp"
++      params:
++          - "-GUnix Makefiles"
+     - name: "platform-inspect-arm-linux"
+       src: "$TS_ROOT/deployments/platform-inspect/arm-linux"
+       os_id : "GNU/Linux"
+-- 
+2.37.0
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-newlib/0021-newlib-configure.patch b/meta-arm/meta-arm/recipes-security/trusted-services/ts-newlib/0021-newlib-configure.patch
new file mode 100644
index 0000000..a9d291b
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-newlib/0021-newlib-configure.patch
@@ -0,0 +1,72 @@
+From df66efc0db9899c41632091db11bfe2c05eec1fa Mon Sep 17 00:00:00 2001
+From: Anton Antonov <Anton.Antonov@arm.com>
+Date: Wed, 31 Aug 2022 17:55:21 +0100
+Subject: [PATCH] Allow defining additional parameters for newlib configure.
+
+Do not skip newlib and libgloss when cross-compiling
+
+Upstream-Status: Pending
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+---
+ ...-aarch64-linux-gcc-to-compile-bare-metal-lib.patch | 11 ++++++++++-
+ external/newlib/newlib.cmake                          |  6 ++++++
+ 2 files changed, 16 insertions(+), 1 deletion(-)
+
+diff --git a/external/newlib/0001-Allow-aarch64-linux-gcc-to-compile-bare-metal-lib.patch b/external/newlib/0001-Allow-aarch64-linux-gcc-to-compile-bare-metal-lib.patch
+index f87ed5a..7533ed0 100644
+--- a/external/newlib/0001-Allow-aarch64-linux-gcc-to-compile-bare-metal-lib.patch
++++ b/external/newlib/0001-Allow-aarch64-linux-gcc-to-compile-bare-metal-lib.patch
+@@ -16,9 +16,18 @@ Signed-off-by: Gyorgy Szing <gyorgy.szing@arm.com>
+  2 files changed, 2 insertions(+), 2 deletions(-)
+ 
+ diff --git a/configure b/configure
+-index 5db52701..1eb71a80 100755
++index 5db527014..dce91609e 100755
+ --- a/configure
+ +++ b/configure
++@@ -2886,7 +2886,7 @@ esac
++
++ # Some are only suitable for cross toolchains.
++ # Remove these if host=target.
++-cross_only="target-libgloss target-newlib target-opcodes"
+++cross_only="target-opcodes"
++
++ case $is_cross_compiler in
++   no) skipdirs="${skipdirs} ${cross_only}" ;;
+ @@ -3659,7 +3659,7 @@ case "${target}" in
+    *-*-freebsd*)
+      noconfigdirs="$noconfigdirs target-newlib target-libgloss"
+diff --git a/external/newlib/newlib.cmake b/external/newlib/newlib.cmake
+index 13eb78c..5ee99a5 100644
+--- a/external/newlib/newlib.cmake
++++ b/external/newlib/newlib.cmake
+@@ -168,6 +168,10 @@ if (NOT NEWLIB_LIBC_PATH)
+ 	string(REPLACE ";" " -isystem " CFLAGS_FOR_TARGET "${_gcc_include_dirs}")
+ 	set(CFLAGS_FOR_TARGET "-isystem ${CFLAGS_FOR_TARGET} -fpic")
+ 
++	# Split a newlib extra build parameter into a list of parameters
++	set(NEWLIB_EXTRAS ${NEWLIB_EXTRA})
++	separate_arguments(NEWLIB_EXTRAS)
++
+ 	# Newlib configure step
+ 	# CC env var must be unset otherwise configure will assume the cross compiler is the host
+ 	# compiler.
+@@ -175,6 +179,7 @@ if (NOT NEWLIB_LIBC_PATH)
+ 	execute_process(COMMAND
+ 		${CMAKE_COMMAND} -E env --unset=CC PATH=${COMPILER_PATH}:$ENV{PATH} ./configure
+ 			--target=${COMPILER_PREFIX}
++			--host=${COMPILER_PREFIX}
+ 			--prefix=${NEWLIB_INSTALL_DIR}
+ 			--enable-newlib-nano-formatted-io
+ 			--enable-newlib-nano-malloc
+@@ -182,6 +187,7 @@ if (NOT NEWLIB_LIBC_PATH)
+ 			--enable-newlib-reent-small
+ 			--enable-newlib-global-atexit
+ 			--disable-multilib
++			${NEWLIB_EXTRAS}
+ 			CFLAGS_FOR_TARGET=${CFLAGS_FOR_TARGET}
+ 			LDFLAGS_FOR_TARGET=-fpie
+ 		WORKING_DIRECTORY
+-- 
+2.25.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-newlib_4.1.0.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-newlib_4.1.0.bb
new file mode 100644
index 0000000..94038b5
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-newlib_4.1.0.bb
@@ -0,0 +1,31 @@
+SUMMARY = "Newlib static libraries built with Trusted Services opteesp deployment options"
+
+TS_ENV = "opteesp"
+
+require trusted-services.inc
+
+SRC_URI += "git://sourceware.org/git/newlib-cygwin.git;name=newlib;protocol=https;branch=master;destsuffix=git/newlib \
+            file://0003-Add-newlib-deployment.patch \
+            file://0021-newlib-configure.patch \
+"
+
+# tag "newlib-0.4.1"
+SRCREV_newlib = "415fdd4279b85eeec9d54775ce13c5c412451e08"
+LIC_FILES_CHKSUM += "file://../newlib/COPYING.NEWLIB;md5=b8dda70da54e0efb49b1074f349d7749"
+
+EXTRA_OECMAKE += '-DNEWLIB_SOURCE_DIR=${WORKDIR}/git/newlib \
+                  -DNEWLIB_EXTRA="CFLAGS=--sysroot=${STAGING_DIR_HOST}" \
+                 '
+
+OECMAKE_SOURCEPATH = "${S}/deployments/newlib/${TS_ENV}/"
+
+# TS ships a patch that needs to be applied to newlib
+apply_ts_patch() {
+    for p in ${S}/external/newlib/*.patch; do
+        patch -p1 -d ${WORKDIR}/git/newlib < ${p}
+    done
+}
+do_patch[postfuncs] += "apply_ts_patch"
+
+FILES:${PN}-dev = "${TS_INSTALL}/newlib_install"
+FILES:${PN}-staticdev = "${TS_INSTALL}/newlib_install/*/lib/*.a"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-api-test-common_git.inc b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-api-test-common_git.inc
new file mode 100644
index 0000000..1e1be6a
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-api-test-common_git.inc
@@ -0,0 +1,36 @@
+SUMMARY = "Parts of PSA certification tests (psa-arch-test) for Trusted Services"
+
+TS_ENV = "arm-linux"
+
+require trusted-services.inc
+
+DEPENDS        += "libts"
+RDEPENDS:${PN} += "libts"
+
+SRC_URI += "git://github.com/ARM-software/psa-arch-tests.git;name=psatest;protocol=https;branch=main;destsuffix=git/psatest \
+            file://0012-psa-arch-test-toolchain.patch \
+           "
+
+SRCREV_psatest = "451aa087a40d02c7d04778235014c5619d126471"
+LIC_FILES_CHKSUM += "file://../psatest/LICENSE.md;md5=2a944942e1496af1886903d274dedb13"
+
+EXTRA_OECMAKE += "\
+                  -DPSA_ARCH_TESTS_SOURCE_DIR=${WORKDIR}/git/psatest \
+                 "
+
+# TS ships patches that need to be applied to psa-arch-tests
+apply_ts_patch() {
+    for p in ${S}/external/psa_arch_tests/*.patch; do
+        patch -p1 -d ${WORKDIR}/git/psatest < ${p}
+    done
+}
+do_patch[postfuncs] += "apply_ts_patch"
+
+FILES:${PN} = "${bindir}/${PSA_TEST}"
+
+do_install:append () {
+    install -d ${D}${bindir}
+    mv ${D}${TS_INSTALL}/bin/${PSA_TEST} ${D}${bindir}
+
+    rm -r --one-file-system ${D}${TS_INSTALL}
+}
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-crypto-api-test_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-crypto-api-test_git.bb
new file mode 100644
index 0000000..710d377
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-crypto-api-test_git.bb
@@ -0,0 +1,9 @@
+DESCRIPTION = "Crypto PSA certification tests (psa-arch-test)"
+
+TS_ENV = "arm-linux"
+
+require ts-psa-api-test-common_${PV}.inc
+
+OECMAKE_SOURCEPATH = "${S}/deployments/psa-api-test/crypto/${TS_ENV}"
+
+PSA_TEST = "psa-crypto-api-test"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-iat-api-test/0012-PSA-TARGET-QCBOR.patch b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-iat-api-test/0012-PSA-TARGET-QCBOR.patch
new file mode 100644
index 0000000..3b28e80
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-iat-api-test/0012-PSA-TARGET-QCBOR.patch
@@ -0,0 +1,29 @@
+From 3229ca31e59933608f82001c1cdcca9d0a0aa0e0 Mon Sep 17 00:00:00 2001
+From: Anton Antonov <Anton.Antonov@arm.com>
+Date: Wed, 31 Aug 2022 17:19:08 +0100
+Subject: [PATCH] Pass PSA_TARGET_QCBOR to psa-arch-tests
+
+psa-arch-tests requires its own version of the qcbor library.
+Pass PSA_TARGET_QCBOR, which defines where the pre-fetched qcbor sources are.
+
+Upstream-Status: Pending
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+---
+ external/psa_arch_tests/pas-arch-test-init-cache.cmake.in | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/external/psa_arch_tests/pas-arch-test-init-cache.cmake.in b/external/psa_arch_tests/pas-arch-test-init-cache.cmake.in
+index 5c63596..64196c2 100644
+--- a/external/psa_arch_tests/pas-arch-test-init-cache.cmake.in
++++ b/external/psa_arch_tests/pas-arch-test-init-cache.cmake.in
+@@ -10,6 +10,7 @@ set(CMAKE_TOOLCHAIN_FILE "@TS_EXTERNAL_LIB_TOOLCHAIN_FILE@" CACHE STRING "")
+ 
+ set(TOOLCHAIN INHERIT CACHE STRING "")
+ set(PSA_INCLUDE_PATHS "@PSA_ARCH_TESTS_EXTERNAL_INCLUDE_PATHS@"  CACHE STRING "")
++set(PSA_TARGET_QCBOR "@PSA_TARGET_QCBOR@"  CACHE STRING "")
+ set(SUITE "@TS_ARCH_TEST_SUITE@"  CACHE STRING "")
+ set(ARCH_TEST_EXTERNAL_DEFS "@PSA_ARCH_TEST_EXTERNAL_DEFS@"  CACHE STRING "")
+ set(CMAKE_VERBOSE_MAKEFILE OFF CACHE BOOL "")
+-- 
+2.25.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-iat-api-test_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-iat-api-test_git.bb
new file mode 100644
index 0000000..73c5f61
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-iat-api-test_git.bb
@@ -0,0 +1,19 @@
+DESCRIPTION = "Initial Attestation PSA certification tests (psa-arch-test) for Trusted Services"
+
+TS_ENV = "arm-linux"
+
+require ts-psa-api-test-common_${PV}.inc
+
+OECMAKE_SOURCEPATH = "${S}/deployments/psa-api-test/initial_attestation/${TS_ENV}"
+
+PSA_TEST = "psa-iat-api-test"
+
+# psa-arch-tests for the INITIAL_ATTESTATION suite can't be built with pre-built qcbor
+# Fetch qcbor sources as a temporary workaround and pass PSA_TARGET_QCBOR to psa-arch-tests
+SRC_URI += "git://github.com/laurencelundblade/QCBOR.git;name=psaqcbor;protocol=https;branch=master;destsuffix=git/psaqcbor \
+            file://0012-PSA-TARGET-QCBOR.patch \
+           "
+SRCREV_psaqcbor = "42272e466a8472948bf8fca076d113b81b99f0e0"
+
+EXTRA_OECMAKE += "-DPSA_TARGET_QCBOR=${WORKDIR}/git/psaqcbor \
+                 "
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-its-api-test_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-its-api-test_git.bb
new file mode 100644
index 0000000..32f2890
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-its-api-test_git.bb
@@ -0,0 +1,9 @@
+DESCRIPTION = "Internal Trusted Storage PSA certification tests (psa-arch-test) for Trusted Services"
+
+TS_ENV = "arm-linux"
+
+require ts-psa-api-test-common_${PV}.inc
+
+OECMAKE_SOURCEPATH = "${S}/deployments/psa-api-test/internal_trusted_storage/${TS_ENV}"
+
+PSA_TEST = "psa-its-api-test"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-ps-api-test_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-ps-api-test_git.bb
new file mode 100644
index 0000000..bcf1671
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-psa-ps-api-test_git.bb
@@ -0,0 +1,9 @@
+DESCRIPTION = "Protected Storage PSA certification tests (psa-arch-test) for Trusted Services"
+
+TS_ENV = "arm-linux"
+
+require ts-psa-api-test-common_${PV}.inc
+
+OECMAKE_SOURCEPATH = "${S}/deployments/psa-api-test/protected_storage/${TS_ENV}"
+
+PSA_TEST = "psa-ps-api-test"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-remote-test_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-remote-test_git.bb
new file mode 100644
index 0000000..203defe
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-remote-test_git.bb
@@ -0,0 +1,12 @@
+DESCRIPTION = "Trusted Services ts-remote-test deployment for arm-linux."
+
+TS_ENV = "arm-linux"
+
+require trusted-services.inc
+
+DEPENDS        += "libts"
+RDEPENDS:${PN} += "libts"
+
+OECMAKE_SOURCEPATH = "${S}/deployments/ts-remote-test/${TS_ENV}"
+
+FILES:${PN} = "${bindir}/ts-remote-test"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-service-test_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-service-test_git.bb
new file mode 100644
index 0000000..3278c6c
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-service-test_git.bb
@@ -0,0 +1,21 @@
+DESCRIPTION = "Trusted Services ts-service-test deployment for arm-linux. \
+               Used for running service level tests from Linux user-space \
+               on an Arm platform with real deployments of trusted services."
+
+TS_ENV = "arm-linux"
+
+require trusted-services.inc
+
+DEPENDS        += "libts python3-protobuf-native"
+RDEPENDS:${PN} += "libts"
+
+OECMAKE_SOURCEPATH = "${S}/deployments/ts-service-test/${TS_ENV}"
+
+FILES:${PN} = "${bindir}/ts-service-test"
+
+do_install:append () {
+    install -d ${D}${bindir}
+    mv ${D}${TS_INSTALL}/bin/ts-service-test ${D}${bindir}
+
+    rm -r --one-file-system ${D}${TS_INSTALL}
+}
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-attestation_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-attestation_git.bb
new file mode 100644
index 0000000..eef05fe
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-attestation_git.bb
@@ -0,0 +1,7 @@
+DESCRIPTION = "Trusted Services attestation service provider"
+
+require ts-sp-common.inc
+
+SP_UUID = "${ATTESTATION_UUID}"
+
+OECMAKE_SOURCEPATH="${S}/deployments/attestation/${TS_ENV}"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-common.inc b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-common.inc
new file mode 100644
index 0000000..e46cd6b
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-common.inc
@@ -0,0 +1,29 @@
+# Common part of all Trusted Services SPs recipes
+
+TS_ENV = "opteesp"
+
+require trusted-services.inc
+require ts-uuid.inc
+
+DEPENDS += "dtc-native ts-newlib"
+
+FILES:${PN}-dev = "${TS_INSTALL}"
+
+# Secure Partition DTS file might be updated in bbappend files
+SP_DTS_FILE ?= "${D}${TS_INSTALL}/manifest/${SP_UUID}.dts"
+
+do_install:append() {
+    # Generate SP DTB which will be included automatically by optee-os build process
+    dtc -I dts -O dtb -o ${D}${TS_INSTALL}/manifest/${SP_UUID}.dtb ${SP_DTS_FILE}
+
+    # We do not need libs and headers
+    rm -r --one-file-system ${D}${TS_INSTALL}/lib
+    rm -r --one-file-system ${D}${TS_INSTALL}/include
+}
+
+# Use Yocto debug prefix maps for compiling assembler.
+EXTRA_OECMAKE += '-DCMAKE_ASM_FLAGS="${DEBUG_PREFIX_MAP}"'
+
+# Ignore that SP stripped.elf does not have GNU_HASH
+# Older versions of optee support SYSV hash only.
+INSANE_SKIP:${PN}-dev += "ldflags"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-crypto_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-crypto_git.bb
new file mode 100644
index 0000000..77a2855
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-crypto_git.bb
@@ -0,0 +1,9 @@
+DESCRIPTION = "Trusted Services crypto service provider"
+
+require ts-sp-common.inc
+
+SP_UUID = "${CRYPTO_UUID}"
+
+DEPENDS += "python3-protobuf-native"
+
+OECMAKE_SOURCEPATH="${S}/deployments/crypto/${TS_ENV}"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-env-test/0013-env-test-no-std-libs.patch b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-env-test/0013-env-test-no-std-libs.patch
new file mode 100644
index 0000000..f6269db
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-env-test/0013-env-test-no-std-libs.patch
@@ -0,0 +1,33 @@
+From 7a0dcc40ea736dc20b25813dfc08e576c2615217 Mon Sep 17 00:00:00 2001
+From: Anton Antonov <Anton.Antonov@arm.com>
+Date: Wed, 31 Aug 2022 17:32:47 +0100
+Subject: [PATCH] Do not use standard libraries in env-test opteesp deployment
+
+In opteesp deployments newlib is used. The standard libraries should not be included.
+
+Upstream-Status: Pending
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+---
+ deployments/env-test/opteesp/CMakeLists.txt | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/deployments/env-test/opteesp/CMakeLists.txt b/deployments/env-test/opteesp/CMakeLists.txt
+index cff00ff..60abc0d 100644
+--- a/deployments/env-test/opteesp/CMakeLists.txt
++++ b/deployments/env-test/opteesp/CMakeLists.txt
+@@ -56,9 +56,9 @@ include(../env-test.cmake REQUIRED)
+ #-------------------------------------------------------------------------------
+ add_platform(TARGET env-test)
+ 
+-if(CMAKE_CROSSCOMPILING)
+-	target_link_libraries(env-test PRIVATE stdc++ gcc m)
+-endif()
++#if(CMAKE_CROSSCOMPILING)
++#	target_link_libraries(env-test PRIVATE stdc++ gcc m)
++#endif()
+ 
+ #################################################################
+ 
+-- 
+2.25.1
+
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-env-test_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-env-test_git.bb
new file mode 100644
index 0000000..9cd73cb
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-env-test_git.bb
@@ -0,0 +1,14 @@
+DESCRIPTION = "Trusted Services test_runner service provider"
+
+require ts-sp-common.inc
+
+# Current version of env-test SP contains hard-coded values for FVP.
+COMPATIBLE_MACHINE ?= "invalid"
+
+SP_UUID = "${ENV_TEST_UUID}"
+
+OECMAKE_SOURCEPATH="${S}/deployments/env-test/${TS_ENV}"
+
+SRC_URI += "\
+            file://0013-env-test-no-std-libs.patch \
+"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-its_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-its_git.bb
new file mode 100644
index 0000000..4eb5dc5
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-its_git.bb
@@ -0,0 +1,7 @@
+DESCRIPTION = "Trusted Services internal secure storage service provider"
+
+require ts-sp-common.inc
+
+SP_UUID = "${ITS_UUID}"
+
+OECMAKE_SOURCEPATH="${S}/deployments/internal-trusted-storage/${TS_ENV}"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-se-proxy_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-se-proxy_git.bb
new file mode 100644
index 0000000..b924641
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-se-proxy_git.bb
@@ -0,0 +1,9 @@
+DESCRIPTION = "Trusted Services proxy service providers"
+
+require ts-sp-common.inc
+
+SP_UUID = "${SE_PROXY_UUID}"
+
+DEPENDS += "python3-protobuf-native"
+
+OECMAKE_SOURCEPATH="${S}/deployments/se-proxy/${TS_ENV}"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-smm-gateway_%.bbappend b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-smm-gateway_%.bbappend
new file mode 100644
index 0000000..c485a56
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-smm-gateway_%.bbappend
@@ -0,0 +1,5 @@
+
+# Update MM communication buffer address for qemuarm64 machine
+EXTRA_OECMAKE:append:qemuarm64-secureboot = "-DMM_COMM_BUFFER_ADDRESS="0x00000000 0x42000000" \
+                                             -DMM_COMM_BUFFER_PAGE_COUNT="1" \
+"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-smm-gateway_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-smm-gateway_git.bb
new file mode 100644
index 0000000..06ca6bd
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-smm-gateway_git.bb
@@ -0,0 +1,7 @@
+DESCRIPTION = "Trusted Services service provider for UEFI SMM services"
+
+require ts-sp-common.inc
+
+SP_UUID = "${SMM_GATEWAY_UUID}"
+
+OECMAKE_SOURCEPATH="${S}/deployments/smm-gateway/${TS_ENV}"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-storage_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-storage_git.bb
new file mode 100644
index 0000000..c893754
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-sp-storage_git.bb
@@ -0,0 +1,7 @@
+DESCRIPTION = "Trusted Services secure storage service provider"
+
+require ts-sp-common.inc
+
+SP_UUID = "${STORAGE_UUID}"
+
+OECMAKE_SOURCEPATH="${S}/deployments/protected-storage/${TS_ENV}"
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-uefi-test_git.bb b/meta-arm/meta-arm/recipes-security/trusted-services/ts-uefi-test_git.bb
new file mode 100644
index 0000000..5be436b
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-uefi-test_git.bb
@@ -0,0 +1,21 @@
+DESCRIPTION = "Trusted Services uefi-test deployment for arm-linux. \
+               Used for running service level tests from Linux user-space \
+               on an Arm platform with real deployments of UEFI SMM services."
+
+TS_ENV = "arm-linux"
+
+require trusted-services.inc
+
+DEPENDS        += "libts python3-protobuf-native"
+RDEPENDS:${PN} += "libts arm-ffa-user"
+
+OECMAKE_SOURCEPATH = "${S}/deployments/uefi-test/${TS_ENV}"
+
+FILES:${PN} = "${bindir}/uefi-test"
+
+do_install:append () {
+    install -d ${D}${bindir}
+    mv ${D}${TS_INSTALL}/bin/uefi-test ${D}${bindir}
+
+    rm -r --one-file-system ${D}${TS_INSTALL}
+}
diff --git a/meta-arm/meta-arm/recipes-security/trusted-services/ts-uuid.inc b/meta-arm/meta-arm/recipes-security/trusted-services/ts-uuid.inc
new file mode 100644
index 0000000..7a39f73
--- /dev/null
+++ b/meta-arm/meta-arm/recipes-security/trusted-services/ts-uuid.inc
@@ -0,0 +1,9 @@
+# Trusted Services SPs canonical UUIDs
+
+ATTESTATION_UUID = "a1baf155-8876-4695-8f7c-54955e8db974"
+CRYPTO_UUID      = "d9df52d5-16a2-4bb2-9aa4-d26d3b84e8c0"
+ENV_TEST_UUID    = "33c75baf-ac6a-4fe4-8ac7-e9909bee2d17"
+ITS_UUID         = "dc1eef48-b17a-4ccf-ac8b-dfcff7711b14"
+SE_PROXY_UUID    = "46bb39d1-b4d9-45b5-88ff-040027dab249"
+SMM_GATEWAY_UUID = "ed32d533-99e6-4209-9cc0-2d72cdd998a7"
+STORAGE_UUID     = "751bf801-3dde-4768-a514-0f10aeed1790"
diff --git a/meta-arm/meta-atp/recipes-kernel/atp/atp-test_3.1.bb b/meta-arm/meta-atp/recipes-kernel/atp/atp-test_3.1.bb
index 5a3097e..e98e13c 100644
--- a/meta-arm/meta-atp/recipes-kernel/atp/atp-test_3.1.bb
+++ b/meta-arm/meta-atp/recipes-kernel/atp/atp-test_3.1.bb
@@ -1,5 +1,4 @@
 require recipes-devtools/atp/atp-source_3.1.inc
-inherit package
 
 SUMMARY = "End-to-end tests evaluating ATP kernel modules service correctness"
 SECTION = "kernel/userland"
diff --git a/meta-arm/meta-atp/recipes-kernel/atp/atp-uapi_3.1.bb b/meta-arm/meta-atp/recipes-kernel/atp/atp-uapi_3.1.bb
index 8c793a3..140105f 100644
--- a/meta-arm/meta-atp/recipes-kernel/atp/atp-uapi_3.1.bb
+++ b/meta-arm/meta-atp/recipes-kernel/atp/atp-uapi_3.1.bb
@@ -1,5 +1,4 @@
 require recipes-devtools/atp/atp-source_3.1.inc
-inherit package
 
 SUMMARY = "User API for accessing services from ATP kernel modules"
 SECTION = "kernel/userland"
diff --git a/meta-arm/meta-gem5/recipes-devtools/gem5/gem5-m5ops_20.bb b/meta-arm/meta-gem5/recipes-devtools/gem5/gem5-m5ops_20.bb
index 8ff4826..d55ed85 100644
--- a/meta-arm/meta-gem5/recipes-devtools/gem5/gem5-m5ops_20.bb
+++ b/meta-arm/meta-gem5/recipes-devtools/gem5/gem5-m5ops_20.bb
@@ -1,5 +1,5 @@
 require gem5-source_20.inc
-inherit scons package
+inherit scons
 
 HOMEPAGE = "https://www.gem5.org/documentation/general_docs/m5ops"
 SUMMARY = "m5ops provide pseudo-instructions to trigger gem5 functionality"
diff --git a/meta-arm/meta-gem5/recipes-kernel/linux/files/init_disassemble_info-signature-changes-causes-compile-failures.patch b/meta-arm/meta-gem5/recipes-kernel/linux/files/init_disassemble_info-signature-changes-causes-compile-failures.patch
new file mode 100644
index 0000000..3d7ac92
--- /dev/null
+++ b/meta-arm/meta-gem5/recipes-kernel/linux/files/init_disassemble_info-signature-changes-causes-compile-failures.patch
@@ -0,0 +1,99 @@
+From 4af28e8e58b6c461a3b85960b222dade8b6891ce Mon Sep 17 00:00:00 2001
+From: Andres Freund <andres@anarazel.de>
+Date: Wed, 22 Jun 2022 11:19:18 -0700
+Subject: [PATCH] init_disassemble_info() signature changes causes compile
+ failures
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Hi,
+
+binutils changed the signature of init_disassemble_info(), which now causes
+perf and bpftool to fail to compile (e.g. on debian unstable).
+
+Relevant binutils commit: https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=60a3da00bd5407f07d64dff82a4dae98230dfaac
+
+util/annotate.c: In function ‘symbol__disassemble_bpf’:
+util/annotate.c:1765:9: error: too few arguments to function ‘init_disassemble_info’
+ 1765 |         init_disassemble_info(&info, s,
+      |         ^~~~~~~~~~~~~~~~~~~~~
+In file included from util/annotate.c:1718:
+/usr/include/dis-asm.h:472:13: note: declared here
+  472 | extern void init_disassemble_info (struct disassemble_info *dinfo, void *stream,
+      |             ^~~~~~~~~~~~~~~~~~~~~
+
+with equivalent failures in
+
+tools/bpf/bpf_jit_disasm.c
+tools/bpf/bpftool/jit_disasm.c
+
+The fix is easy enough, add a wrapper around fprintf() that conforms to the
+new signature.
+
+However I assume the necessary feature test and wrapper should only be added
+once? I don't know the kernel stuff well enough to choose the right structure
+here.
+
+Attached is my local fix for perf. Obviously would need work to be a real
+solution.
+
+Greetings,
+
+Andres Freund
+---
+
+binutils 2.39 changed the signature of init_disassemble_info(),
+which now causes perf and bpftool to fail to compile.
+
+Relevant binutils commit: [1]
+
+There is a proper fix in development upstream[2].
+This is a work-around for older kernels.
+
+[1] https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=60a3da00bd5407f07d64dff82a4dae98230dfaac
+[2] https://patchwork.kernel.org/project/netdevbpf/cover/20220801013834.156015-1-andres@anarazel.de/
+
+Upstream-Status: Pending
+Signed-off-by: Anton Antonov <Anton.Antonov@arm.com>
+
+
+ tools/perf/util/annotate.c | 15 ++++++++++++++-
+ 1 file changed, 14 insertions(+), 1 deletion(-)
+
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 0c5e4ce390ef..1151c029f714 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -1708,6 +1708,18 @@ static int dso__disassemble_filename(struct dso *dso, char *filename, size_t fil
+ #include <bfd.h>
+ #include <dis-asm.h>
+ 
++static int fprintf_styled(void *, enum disassembler_style, const char* fmt, ...)
++{
++  va_list args;
++  int r;
++
++  va_start(args, fmt);
++  r = vprintf(fmt, args);
++  va_end(args);
++
++  return r;
++}
++
+ static int symbol__disassemble_bpf(struct symbol *sym,
+ 				   struct annotate_args *args)
+ {
+@@ -1750,7 +1762,8 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+ 		goto out;
+ 	}
+ 	init_disassemble_info(&info, s,
+-			      (fprintf_ftype) fprintf);
++			      (fprintf_ftype) fprintf,
++			      fprintf_styled);
+ 
+ 	info.arch = bfd_get_arch(bfdf);
+ 	info.mach = bfd_get_mach(bfdf);
+-- 
+2.30.2
+
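
Note on the patch above: before binutils 2.39, init_disassemble_info() took a single fprintf-style callback; from 2.39 on it also expects a "styled" callback whose second argument is an enum disassembler_style, which is why the backport adds fprintf_styled() and passes it as a fourth argument. Below is a minimal, self-contained sketch of the two callback shapes; the enum is stubbed and the function names are illustrative so the example compiles without binutils headers, and it is not the code carried in the patch.

    #include <stdarg.h>
    #include <stdio.h>

    enum disassembler_style { dis_style_text };   /* stub for the binutils enum */

    /* pre-2.39 callback shape: (stream, format, ...) */
    static int print_plain(void *stream, const char *fmt, ...)
    {
        va_list ap;
        int r;

        va_start(ap, fmt);
        r = vfprintf(stream, fmt, ap);   /* stream is a FILE * in this sketch */
        va_end(ap);
        return r;
    }

    /* 2.39+ styled callback shape: (stream, style, format, ...); styling is dropped */
    static int print_styled(void *stream, enum disassembler_style style,
                            const char *fmt, ...)
    {
        va_list ap;
        int r;

        (void)style;
        va_start(ap, fmt);
        r = vfprintf(stream, fmt, ap);
        va_end(ap);
        return r;
    }

    int main(void)
    {
        print_plain(stdout, "plain callback, %d leading args\n", 2);
        print_styled(stdout, dis_style_text, "styled callback, %d leading args\n", 3);
        return 0;
    }
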
diff --git a/meta-arm/meta-gem5/recipes-kernel/linux/linux-yocto_%.bbappend b/meta-arm/meta-gem5/recipes-kernel/linux/linux-yocto_%.bbappend
index b36ea06..e66dd21 100644
--- a/meta-arm/meta-gem5/recipes-kernel/linux/linux-yocto_%.bbappend
+++ b/meta-arm/meta-gem5/recipes-kernel/linux/linux-yocto_%.bbappend
@@ -3,7 +3,8 @@
 COMPATIBLE_MACHINE:gem5-arm64 = "gem5-arm64"
 KMACHINE:gem5-arm64 = "gem5-arm64"
 SRC_URI:append:gem5-arm64 = " file://gem5-kmeta;type=kmeta;name=gem5-kmeta;destsuffix=gem5-kmeta \
-                              file://dts/gem5-arm64;subdir=add-files"
+                              file://dts/gem5-arm64;subdir=add-files \
+                              file://init_disassemble_info-signature-changes-causes-compile-failures.patch"
 
 do_patch:append:gem5-arm64() {
     tar -C ${WORKDIR}/add-files/dts -cf - gem5-arm64 | \
diff --git a/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs/0001-Define-strndupa-if-it-does-not-exist.patch b/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs/0001-Define-strndupa-if-it-does-not-exist.patch
index 8bb8b94..80955b3 100644
--- a/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs/0001-Define-strndupa-if-it-does-not-exist.patch
+++ b/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs/0001-Define-strndupa-if-it-does-not-exist.patch
@@ -1,4 +1,4 @@
-From 54883e714b7fd1e7f4ce47eb1fc09adff35d561e Mon Sep 17 00:00:00 2001
+From cc0cd6f71f6ef96fca2d7b730a3f0f6722fec696 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Sat, 7 May 2022 12:15:22 -0700
 Subject: [PATCH] Define strndupa if it does not exist
@@ -7,17 +7,18 @@
 
 Upstream-Status: Pending
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
 ---
  etc/systemd/system-generators/zfs-mount-generator.c | 9 +++++++++
  1 file changed, 9 insertions(+)
 
 diff --git a/etc/systemd/system-generators/zfs-mount-generator.c b/etc/systemd/system-generators/zfs-mount-generator.c
-index b806339..592d2f9 100644
+index f4c6c26..255bee4 100644
 --- a/etc/systemd/system-generators/zfs-mount-generator.c
 +++ b/etc/systemd/system-generators/zfs-mount-generator.c
-@@ -47,6 +47,15 @@
- #define	STRCMP ((int(*)(const void *, const void *))&strcmp)
- #define	PID_T_CMP ((int(*)(const void *, const void *))&pid_t_cmp)
+@@ -193,6 +193,15 @@ fopenat(int dirfd, const char *pathname, int flags,
+ 	return (fdopen(fd, stream_mode));
+ }
  
 +#ifndef strndupa
 +#define strndupa(s, n) \
@@ -29,8 +30,5 @@
 +#endif
 +
  static int
- pid_t_cmp(const pid_t *lhs, const pid_t *rhs)
+ line_worker(char *line, const char *cachefile)
  {
--- 
-2.36.0
-
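
The refreshed patch above exists because musl does not provide strndupa(), which the mount generator code relies on. A sketch of what such a fallback typically looks like, modelled on the usual glibc definition rather than the exact macro carried in the patch (it uses GNU statement expressions, so gcc or clang):

    #define _POSIX_C_SOURCE 200809L   /* for strnlen() */
    #include <alloca.h>
    #include <stdio.h>
    #include <string.h>

    #ifndef strndupa
    /* duplicate at most n bytes of s into alloca()'d stack storage */
    #define strndupa(s, n)                                    \
        (__extension__ ({                                     \
            const char *__old = (s);                          \
            size_t __len = strnlen(__old, (n));               \
            char *__new = (char *)alloca(__len + 1);          \
            __new[__len] = '\0';                              \
            (char *)memcpy(__new, __old, __len);              \
        }))
    #endif

    int main(void)
    {
        const char *dataset = "tank/home";
        char *pool = strndupa(dataset, 4);   /* "tank"; valid until main() returns */

        printf("%s\n", pool);
        return 0;
    }
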
diff --git a/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs_2.1.4.bb b/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs_2.1.4.bb
deleted file mode 100644
index dd676c9..0000000
--- a/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs_2.1.4.bb
+++ /dev/null
@@ -1,61 +0,0 @@
-SUMMARY = "OpenZFS on Linux and FreeBSD"
-DESCRIPTION = "OpenZFS on Linux and FreeBSD"
-LICENSE = "CDDL-1.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=7087caaf1dc8a2856585619f4a787faa"
-HOMEPAGE ="https://github.com/openzfs/zfs"
-
-SRC_URI = "https://github.com/openzfs/zfs/releases/download/${BPN}-${PV}/${BPN}-${PV}.tar.gz \
-           file://0001-Define-strndupa-if-it-does-not-exist.patch \
-"
-SRC_URI[sha256sum] = "3b52c0d493f806f638dca87dde809f53861cd318c1ebb0e60daeaa061cf1acf6"
-
-# Using both 'module' and 'autotools' classes seems a bit odd, they both
-# define a do_compile function.
-# That's why we opt for module-base, also this prevents module splitting.
-inherit module-base pkgconfig autotools
-
-DEPENDS = "virtual/kernel zlib util-linux libtirpc openssl curl"
-
-PACKAGECONFIG ?= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd sysvinit', d)}"
-
-PACKAGECONFIG[pam] = "--enable-pam --with-pamconfigsdir=${datadir}/pam-configs --with-pammoduledir=${libdir}/security, --disable-pam"
-PACKAGECONFIG[systemd] = "--enable-systemd,--disable-systemd,"
-PACKAGECONFIG[sysvinit] = "--enable-sysvinit,--disable-sysvinit,"
-
-EXTRA_OECONF:append = " \
-    --disable-pyzfs \
-    --with-linux=${STAGING_KERNEL_DIR} --with-linux-obj=${STAGING_KERNEL_BUILDDIR} \
-    --with-mounthelperdir=${base_sbin} \
-    --with-udevdir=${base_libdir}/udev \
-    --without-dracutdir \
-    "
-
-EXTRA_OEMAKE:append = " \
-    INSTALL_MOD_PATH=${D}${root_prefix} \
-    "
-
-do_install:append() {
-    # /usr/share/zfs contains the zfs-tests folder which we do not need:
-    rm -rf ${D}${datadir}/zfs
-
-    rm -rf ${D}${datadir}/initramfs-tools
-}
-
-FILES:${PN} += "\
-    ${base_sbindir}/* \
-    ${base_libdir}/* \
-    ${sysconfdir}/* \
-    ${sbindir}/* \
-    ${bindir}/* \
-    ${libexecdir}/${BPN}/* \
-    ${libdir}/* \
-    "
-
-FILES:${PN}-dev += "\
-    ${prefix}/src/zfs-${PV}/* \
-    ${prefix}/src/spl-${PV}/* \
-    "
-# Not yet ported to rv32
-COMPATIBLE_HOST:riscv32 = "null"
-# conflicting definition of ABS macro from asm/asm.h from kernel
-COMPATIBLE_HOST:mips = "null"
diff --git a/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs_2.1.5.bb b/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs_2.1.5.bb
new file mode 100644
index 0000000..6aa674c
--- /dev/null
+++ b/meta-openembedded/meta-filesystems/recipes-filesystems/zfs/zfs_2.1.5.bb
@@ -0,0 +1,61 @@
+SUMMARY = "OpenZFS on Linux and FreeBSD"
+DESCRIPTION = "OpenZFS on Linux and FreeBSD"
+LICENSE = "CDDL-1.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=7087caaf1dc8a2856585619f4a787faa"
+HOMEPAGE ="https://github.com/openzfs/zfs"
+
+SRC_URI = "https://github.com/openzfs/zfs/releases/download/${BPN}-${PV}/${BPN}-${PV}.tar.gz \
+           file://0001-Define-strndupa-if-it-does-not-exist.patch \
+"
+SRC_URI[sha256sum] = "1913041e5c44ff07ca384346ad8145aeedf77e77cd1cea9ec5d533246691e10c"
+
+# Using both 'module' and 'autotools' classes seems a bit odd, they both
+# define a do_compile function.
+# That's why we opt for module-base, also this prevents module splitting.
+inherit module-base pkgconfig autotools
+
+DEPENDS = "virtual/kernel zlib util-linux libtirpc openssl curl"
+
+PACKAGECONFIG ?= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd sysvinit', d)}"
+
+PACKAGECONFIG[pam] = "--enable-pam --with-pamconfigsdir=${datadir}/pam-configs --with-pammoduledir=${libdir}/security, --disable-pam"
+PACKAGECONFIG[systemd] = "--enable-systemd,--disable-systemd,"
+PACKAGECONFIG[sysvinit] = "--enable-sysvinit,--disable-sysvinit,"
+
+EXTRA_OECONF:append = " \
+    --disable-pyzfs \
+    --with-linux=${STAGING_KERNEL_DIR} --with-linux-obj=${STAGING_KERNEL_BUILDDIR} \
+    --with-mounthelperdir=${base_sbin} \
+    --with-udevdir=${base_libdir}/udev \
+    --without-dracutdir \
+    "
+
+EXTRA_OEMAKE:append = " \
+    INSTALL_MOD_PATH=${D}${root_prefix} \
+    "
+
+do_install:append() {
+    # /usr/share/zfs contains the zfs-tests folder which we do not need:
+    rm -rf ${D}${datadir}/zfs
+
+    rm -rf ${D}${datadir}/initramfs-tools
+}
+
+FILES:${PN} += "\
+    ${base_sbindir}/* \
+    ${base_libdir}/* \
+    ${sysconfdir}/* \
+    ${sbindir}/* \
+    ${bindir}/* \
+    ${libexecdir}/${BPN}/* \
+    ${libdir}/* \
+    "
+
+FILES:${PN}-dev += "\
+    ${prefix}/src/zfs-${PV}/* \
+    ${prefix}/src/spl-${PV}/* \
+    "
+# Not yet ported to rv32
+COMPATIBLE_HOST:riscv32 = "null"
+# conflicting definition of ABS macro from asm/asm.h from kernel
+COMPATIBLE_HOST:mips = "null"
diff --git a/meta-openembedded/meta-filesystems/recipes-support/fuse/fuse3_3.11.0.bb b/meta-openembedded/meta-filesystems/recipes-support/fuse/fuse3_3.11.0.bb
index 0b9164f..8055fb0 100644
--- a/meta-openembedded/meta-filesystems/recipes-support/fuse/fuse3_3.11.0.bb
+++ b/meta-openembedded/meta-filesystems/recipes-support/fuse/fuse3_3.11.0.bb
@@ -35,7 +35,28 @@
 
 do_install_ptest() {
         install -d ${D}${PTEST_PATH}/test
+        install -d ${D}${PTEST_PATH}/example
+        install -d ${D}${PTEST_PATH}/util
         cp -rf ${S}/test/* ${D}${PTEST_PATH}/test/
+
+        example_executables=`find ${B}/example -type f -executable`
+        util_executables=`find ${B}/util -type f -executable`
+        test_executables=`find ${B}/test -type f -executable`
+
+        for e in $example_executables
+        do
+            cp -rf $e ${D}${PTEST_PATH}/example/
+        done
+
+        for e in $util_executables
+        do
+            cp -rf $e ${D}${PTEST_PATH}/util/
+        done
+
+        for e in $test_executables
+        do
+            cp -rf $e ${D}${PTEST_PATH}/test/
+        done
 }
 
 DEPENDS = "udev"
@@ -49,10 +70,6 @@
 FILES:${PN} += "${libdir}/libfuse3.so.*"
 FILES:${PN}-dev += "${libdir}/libfuse3*.la"
 
-EXTRA_OEMESON += " \
-     -Dexamples=false \
-"
-
 # Forbid auto-renaming to libfuse3-utils
 FILES:fuse3-utils = "${bindir} ${base_sbindir}"
 DEBIAN_NOAUTONAME:fuse3-utils = "1"
diff --git a/meta-openembedded/meta-filesystems/recipes-utils/xfsdump/xfsdump_3.1.10.bb b/meta-openembedded/meta-filesystems/recipes-utils/xfsdump/xfsdump_3.1.10.bb
deleted file mode 100644
index fdaebbe..0000000
--- a/meta-openembedded/meta-filesystems/recipes-utils/xfsdump/xfsdump_3.1.10.bb
+++ /dev/null
@@ -1,38 +0,0 @@
-SUMMARY = "XFS Filesystem Dump Utility"
-DESCRIPTION = "The xfsdump package contains xfsdump, xfsrestore and a \
-               number of other utilities for administering XFS filesystems.\
-               xfsdump examines files in a filesystem, determines which \
-               need to be backed up, and copies those files to a \
-               specified disk, tape or other storage medium."
-HOMEPAGE = "http://oss.sgi.com/projects/xfs"
-SECTION = "base"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://doc/COPYING;md5=15c832894d10ddd00dfcf57bee490ecc"
-DEPENDS = "xfsprogs attr"
-
-SRC_URI = "https://www.kernel.org/pub/linux/utils/fs/xfs/xfsdump/${BP}.tar.xz \
-           file://remove-install-as-user.patch \
-           ${@bb.utils.contains('DISTRO_FEATURES','usrmerge','file://0001-xfsdump-support-usrmerge.patch','',d)} \
-           "
-SRC_URI[sha256sum] = "9aab7a53aa05cd46edc97269ebf1456aab2b60ab8c1fffaaf8aa492f0b5f6517"
-
-inherit autotools-brokensep
-
-PARALLEL_MAKE = ""
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[gettext] = "--enable-gettext=yes,--enable-gettext=no,gettext"
-
-CFLAGS += "-D_FILE_OFFSET_BITS=64"
-
-do_configure () {
-    export DEBUG="-DNDEBUG"
-    install -m 0755 ${STAGING_DATADIR_NATIVE}/gnu-config/config.guess ${S}
-    install -m 0755 ${STAGING_DATADIR_NATIVE}/gnu-config/config.sub ${S}
-    oe_runconf
-}
-
-do_install () {
-    export DIST_ROOT=${D}
-    oe_runmake install
-    oe_runmake install-dev
-}
diff --git a/meta-openembedded/meta-filesystems/recipes-utils/xfsdump/xfsdump_3.1.11.bb b/meta-openembedded/meta-filesystems/recipes-utils/xfsdump/xfsdump_3.1.11.bb
new file mode 100644
index 0000000..2e5977c
--- /dev/null
+++ b/meta-openembedded/meta-filesystems/recipes-utils/xfsdump/xfsdump_3.1.11.bb
@@ -0,0 +1,38 @@
+SUMMARY = "XFS Filesystem Dump Utility"
+DESCRIPTION = "The xfsdump package contains xfsdump, xfsrestore and a \
+               number of other utilities for administering XFS filesystems.\
+               xfsdump examines files in a filesystem, determines which \
+               need to be backed up, and copies those files to a \
+               specified disk, tape or other storage medium."
+HOMEPAGE = "http://oss.sgi.com/projects/xfs"
+SECTION = "base"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://doc/COPYING;md5=15c832894d10ddd00dfcf57bee490ecc"
+DEPENDS = "xfsprogs attr"
+
+SRC_URI = "https://www.kernel.org/pub/linux/utils/fs/xfs/xfsdump/${BP}.tar.xz \
+           file://remove-install-as-user.patch \
+           ${@bb.utils.contains('DISTRO_FEATURES','usrmerge','file://0001-xfsdump-support-usrmerge.patch','',d)} \
+           "
+SRC_URI[sha256sum] = "5657a2ca26a55682dc9724fb0331c860fe362c778225cbfc8c710f1375f458a3"
+
+inherit autotools-brokensep
+
+PARALLEL_MAKE = ""
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[gettext] = "--enable-gettext=yes,--enable-gettext=no,gettext"
+
+CFLAGS += "-D_FILE_OFFSET_BITS=64"
+
+do_configure () {
+    export DEBUG="-DNDEBUG"
+    install -m 0755 ${STAGING_DATADIR_NATIVE}/gnu-config/config.guess ${S}
+    install -m 0755 ${STAGING_DATADIR_NATIVE}/gnu-config/config.sub ${S}
+    oe_runconf
+}
+
+do_install () {
+    export DIST_ROOT=${D}
+    oe_runmake install
+    oe_runmake install-dev
+}
diff --git a/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests/0001-Add-a-return-type-to-aio_rw.patch b/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests/0001-Add-a-return-type-to-aio_rw.patch
new file mode 100644
index 0000000..e0a04c9
--- /dev/null
+++ b/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests/0001-Add-a-return-type-to-aio_rw.patch
@@ -0,0 +1,28 @@
+From f172ea004d34b00aa7bd5baff9422b2ab80df6e7 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 14 Aug 2022 13:32:10 -0700
+Subject: [PATCH 1/2] Add a return type to aio_rw
+
+Compilers complain about the function prototype otherwise
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ltp/fsx.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/ltp/fsx.c b/ltp/fsx.c
+index 12c2cc33..55b4e9b6 100644
+--- a/ltp/fsx.c
++++ b/ltp/fsx.c
+@@ -2429,6 +2429,7 @@ out_error:
+ 	return -1;
+ }
+ #else
++int
+ aio_rw(int rw, int fd, char *buf, unsigned len, unsigned offset)
+ {
+ 	fprintf(stderr, "io_rw: need AIO support!\n");
+-- 
+2.37.2
+
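
The one-line change above fixes an "implicit int" function definition: aio_rw() was defined without a return type, which C99 removed from the language and which newer gcc and clang releases reject by default. A trivial before/after sketch; the function name and body here are illustrative, not copied from fsx.c:

    #include <stdio.h>

    /* old style, no return type, relies on implicit int and is rejected
     * by modern compilers:
     *     aio_stub(int rw) { ... }
     */

    /* fixed: the return type is spelled out */
    int aio_stub(int rw)
    {
        fprintf(stderr, "aio_stub: need AIO support!\n");
        return rw;
    }

    int main(void)
    {
        return aio_stub(0);
    }
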
diff --git a/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests/0002-Drop-detached_mounts_propagation-and-remove-sys-moun.patch b/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests/0002-Drop-detached_mounts_propagation-and-remove-sys-moun.patch
new file mode 100644
index 0000000..a594b73
--- /dev/null
+++ b/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests/0002-Drop-detached_mounts_propagation-and-remove-sys-moun.patch
@@ -0,0 +1,47 @@
+From dd43cbc7f50266cdc6210f2b920d7f648a83bdd6 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 14 Aug 2022 13:33:05 -0700
+Subject: [PATCH 2/2] Drop detached_mounts_propagation and remove sys/mount.h
+ from vfs/utils.c
+
+With glibc 2.36+, sys/mount.h conflicts with linux/mount.h; here
+linux/mount.h is included via the xfs/xfs.h header, and we need sys/mount.h
+for the mount() API prototype. Until that's resolved, let's not build this
+testcase.
+
+Upstream-Status: Inappropriate [Libc specific Workaround]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/Makefile    | 2 +-
+ src/vfs/utils.c | 1 -
+ 2 files changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/src/Makefile b/src/Makefile
+index 665edcf9..7debcbbd 100644
+--- a/src/Makefile
++++ b/src/Makefile
+@@ -31,7 +31,7 @@ LINUX_TARGETS = xfsctl bstat t_mtab getdevicesize preallo_rw_pattern_reader \
+ 	dio-invalidate-cache stat_test t_encrypted_d_revalidate \
+ 	attr_replace_test swapon mkswap t_attr_corruption t_open_tmpfiles \
+ 	fscrypt-crypt-util bulkstat_null_ocount splice-test chprojid_fail \
+-	detached_mounts_propagation ext4_resize t_readdir_3 splice2pipe \
++	ext4_resize t_readdir_3 splice2pipe \
+ 	uuid_ioctl
+ 
+ EXTRA_EXECS = dmerror fill2attr fill2fs fill2fs_check scaleread.sh \
+diff --git a/src/vfs/utils.c b/src/vfs/utils.c
+index 1388edda..aacd6c0a 100644
+--- a/src/vfs/utils.c
++++ b/src/vfs/utils.c
+@@ -10,7 +10,6 @@
+ #include <stdlib.h>
+ #include <sys/eventfd.h>
+ #include <sys/fsuid.h>
+-#include <sys/mount.h>
+ #include <sys/prctl.h>
+ #include <sys/socket.h>
+ #include <sys/stat.h>
+-- 
+2.37.2
+
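
The patch above works around a header clash introduced with glibc 2.36: <sys/mount.h> now carries the new mount API definitions (struct mount_attr, the FSCONFIG_*/MOUNT_ATTR_* constants) that <linux/mount.h> also provides, so a file that pulls in both, here via <xfs/xfs.h>, fails with redefinition errors. The test that needs the mount(2) prototype alongside the xfs headers is therefore not built, and the now-unneeded include is dropped from vfs/utils.c. A compile-only sketch of the surviving pattern, with an illustrative mount call that is not taken from xfstests:

    #include <stdio.h>
    #include <sys/mount.h>   /* mount(2) prototype; do not also pull in <linux/mount.h> */

    int main(void)
    {
        /* An actual mount needs privileges; this only shows that the prototype
         * comes from <sys/mount.h> alone. */
        if (mount("none", "/mnt", "tmpfs", 0, NULL) != 0)
            perror("mount");
        return 0;
    }
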
diff --git a/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests_2022.08.07.bb b/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests_2022.08.07.bb
new file mode 100644
index 0000000..ba8b1a2
--- /dev/null
+++ b/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests_2022.08.07.bb
@@ -0,0 +1,64 @@
+SUMMARY = "File system QA test suite"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://LICENSES/GPL-2.0;md5=74274e8a218423e49eefdea80bc55038"
+
+SRCREV_FORMAT = "xfstests_unionmount"
+
+SRC_URI = "\
+    git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git;branch=master;name=xfstests \
+    git://github.com/amir73il/unionmount-testsuite.git;branch=master;protocol=https;name=unionmount;destsuffix=unionmount-testsuite \
+    file://0001-Add-a-return-type-to-aio_rw.patch \
+    file://0002-Drop-detached_mounts_propagation-and-remove-sys-moun.patch \
+"
+
+SRCREV_xfstests = "16ddbd1aee295f64695916cf3621aef57f1163ba"
+SRCREV_unionmount = "e3825b16b46f4c4574a1a69909944c059835f914"
+
+S = "${WORKDIR}/git"
+
+inherit autotools-brokensep useradd
+
+DEPENDS += "xfsprogs acl"
+RDEPENDS:${PN} += "\
+    bash \
+    bc \
+    coreutils \
+    e2fsprogs \
+    e2fsprogs-tune2fs \
+    e2fsprogs-resize2fs \
+    libaio \
+    libcap-bin \
+    overlayfs-progs \
+    perl \
+    python3 \
+    python3-core \
+    xfsprogs \
+    acl \
+"
+
+USERADD_PACKAGES = "${PN}"
+# these users are necessary to run the tests
+USERADD_PARAM:${PN} = "-U -m fsgqa; -N 123456-fsgqa; -N fsgqa2"
+
+EXTRA_OECONF = "INSTALL_USER=root INSTALL_GROUP=root"
+
+# install-sh script in the project is outdated
+# we use the one from the latest libtool to solve installation issues
+# It looks like the upstream is not interested in having it fixed :(
+# https://www.spinics.net/lists/fstests/msg16981.html
+do_configure:prepend() {
+    cp ${STAGING_DIR_NATIVE}${datadir}/libtool/build-aux/install-sh ${B}
+}
+
+do_install:append() {
+    unionmount_target_dir=${D}/usr/xfstests/unionmount-testsuite
+    install -d ${D}/usr/xfstests/unionmount-testsuite/tests
+    install -D ${WORKDIR}/unionmount-testsuite/tests/* -t $unionmount_target_dir/tests
+    install ${WORKDIR}/unionmount-testsuite/*.py -t $unionmount_target_dir
+    install ${WORKDIR}/unionmount-testsuite/run -t $unionmount_target_dir
+    install ${WORKDIR}/unionmount-testsuite/README -t $unionmount_target_dir
+}
+
+FILES:${PN} += "\
+    /usr/xfstests \
+"
diff --git a/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests_git.bb b/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests_git.bb
deleted file mode 100644
index 868fa03..0000000
--- a/meta-openembedded/meta-filesystems/recipes-utils/xfstests/xfstests_git.bb
+++ /dev/null
@@ -1,61 +0,0 @@
-SUMMARY = "File system QA test suite"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://LICENSES/GPL-2.0;md5=74274e8a218423e49eefdea80bc55038"
-
-SRCREV_FORMAT = "xfstests_unionmount"
-
-SRC_URI = "\
-    git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git;branch=master;name=xfstests \
-    git://github.com/amir73il/unionmount-testsuite.git;branch=master;protocol=https;name=unionmount;destsuffix=unionmount-testsuite \
-"
-
-SRCREV_xfstests = "37881397f1aa62df3c63468049c80b301b0e89eb"
-SRCREV_unionmount = "cec4c51a3bf8ba80bb99fc74b302749d4e3d2f1d"
-
-S = "${WORKDIR}/git"
-
-inherit autotools-brokensep useradd
-
-DEPENDS += "xfsprogs acl"
-RDEPENDS:${PN} += "\
-    bash \
-    bc \
-    coreutils \
-    e2fsprogs \
-    e2fsprogs-tune2fs \
-    e2fsprogs-resize2fs \
-    libcap-bin \
-    overlayfs-progs \
-    perl \
-    python3 \
-    python3-core \
-    xfsprogs \
-    acl \
-"
-
-USERADD_PACKAGES = "${PN}"
-# these users are necessary to run the tests
-USERADD_PARAM:${PN} = "-U -m fsgqa; -N 123456-fsgqa; -N fsgqa2"
-
-EXTRA_OECONF = "INSTALL_USER=root INSTALL_GROUP=root"
-
-# install-sh script in the project is outdated
-# we use the one from the latest libtool to solve installation issues
-# It looks like the upstream is not interested in having it fixed :(
-# https://www.spinics.net/lists/fstests/msg16981.html
-do_configure:prepend() {
-    cp ${STAGING_DIR_NATIVE}${datadir}/libtool/build-aux/install-sh ${B}
-}
-
-do_install:append() {
-    unionmount_target_dir=${D}/usr/xfstests/unionmount-testsuite
-    install -d ${D}/usr/xfstests/unionmount-testsuite/tests
-    install -D ${WORKDIR}/unionmount-testsuite/tests/* -t $unionmount_target_dir/tests
-    install ${WORKDIR}/unionmount-testsuite/*.py -t $unionmount_target_dir
-    install ${WORKDIR}/unionmount-testsuite/run -t $unionmount_target_dir
-    install ${WORKDIR}/unionmount-testsuite/README -t $unionmount_target_dir
-}
-
-FILES:${PN} += "\
-    /usr/xfstests \
-"
diff --git a/meta-openembedded/meta-gnome/recipes-gimp/babl/babl/0001-meson-Do-not-run-git-rev-parse-during-configure.patch b/meta-openembedded/meta-gnome/recipes-gimp/babl/babl/0001-meson-Do-not-run-git-rev-parse-during-configure.patch
deleted file mode 100644
index bb129a4..0000000
--- a/meta-openembedded/meta-gnome/recipes-gimp/babl/babl/0001-meson-Do-not-run-git-rev-parse-during-configure.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From 0e41b11b23c91293d1b39a8ec4cb80c68fb26ad7 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 28 Apr 2022 07:35:31 -0700
-Subject: [PATCH] meson: Do not run git rev-parse during configure
-
-This option will try to deduce if babl is being built from git tree or
-release tarball, there should be a better way like checking for .git
-directory etc. instead of doing git operations needing network
-
-see
-https://gitlab.gnome.org/GNOME/babl/-/commit/2dc7fc403fe427a889913ef0cfb71de85b4326ec#note_1439732
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- meson.build | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/meson.build b/meson.build
-index 7e7a935..649b456 100644
---- a/meson.build
-+++ b/meson.build
-@@ -451,7 +451,7 @@ if git_bin.found() and run_command(
-     git_bin,
-     'rev-parse',
-     '--is-inside-work-tree',
--    check: true,
-+    check: false,
- ).returncode() == 0
-   git_version_h = vcs_tag(
-     input : 'git-version.h.in',
--- 
-2.36.0
-
diff --git a/meta-openembedded/meta-gnome/recipes-gimp/babl/babl/0001-meson-fix-misspelled-kwarg-name.patch b/meta-openembedded/meta-gnome/recipes-gimp/babl/babl/0001-meson-fix-misspelled-kwarg-name.patch
deleted file mode 100644
index 111dac6..0000000
--- a/meta-openembedded/meta-gnome/recipes-gimp/babl/babl/0001-meson-fix-misspelled-kwarg-name.patch
+++ /dev/null
@@ -1,36 +0,0 @@
-From ebcf4795f1132c5124d73a5ae2ca5c01319e584d Mon Sep 17 00:00:00 2001
-From: Eli Schwartz <eschwartz@archlinux.org>
-Date: Sun, 13 Mar 2022 20:26:05 -0400
-Subject: [PATCH 1/2] meson: fix misspelled kwarg name
-
-set10 doesn't have a `Description` kwarg, it does have a `description`
-kwarg though.
-
-This caused the conf variable to not have a description when it should
-have one, and newer versions of Meson with better argument validation
-error out with:
-
-meson.build:58:5: ERROR: configuration_data.set10 got unknown keyword arguments "Description"
-
-Upstream-Status: Backport [https://gitlab.gnome.org/GNOME/babl/-/commit/b05b2826365a7dbc6ca1bf0977b848055cd0cbb6]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- meson.build | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/meson.build b/meson.build
-index 487e470..2623e93 100644
---- a/meson.build
-+++ b/meson.build
-@@ -55,7 +55,7 @@ lib_name    = meson.project_name() + '-' + api_version
- stability_version_number = (major_version != 0 ? minor_version : micro_version)
- stable = (stability_version_number % 2 == 0)
- 
--conf.set10('BABL_UNSTABLE', not stable, Description:
-+conf.set10('BABL_UNSTABLE', not stable, description:
-   'Define to 1 if this is an unstable version of BABL.')
- 
- conf.set       ('BABL_MAJOR_VERSION',    '@0@'.format(major_version))
--- 
-2.36.0
-
diff --git a/meta-openembedded/meta-gnome/recipes-gimp/babl/babl/0002-meson-Various-fixes.patch b/meta-openembedded/meta-gnome/recipes-gimp/babl/babl/0002-meson-Various-fixes.patch
deleted file mode 100644
index 919653b..0000000
--- a/meta-openembedded/meta-gnome/recipes-gimp/babl/babl/0002-meson-Various-fixes.patch
+++ /dev/null
@@ -1,132 +0,0 @@
-From 06e16da32dfaad02434fd9937d298ea1ece256ce Mon Sep 17 00:00:00 2001
-From: Xavier Claessens <xavier.claessens@collabora.com>
-Date: Sat, 23 Apr 2022 10:33:17 -0400
-Subject: [PATCH 2/2] meson: Various fixes
-
-- Add missing lcms dependencies. That's needed when lcms is a subproject
-  otherwise those targets does not find its headers.
-- Add lcms2 wrap so meson can build it as subproject in case the
-  dependency is not found on system.
-- Fix couple meson warnings
-- Use meson.override_dependency() so babl can be used as subproject
-  without hardcoding "babl_dep" variable name in main project.
-
-Upstream-Status: Backport [https://gitlab.gnome.org/GNOME/babl/-/commit/2dc7fc403fe427a889913ef0cfb71de85b4326ec]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- babl/meson.build       |  4 +++-
- extensions/meson.build |  1 +
- meson.build            |  4 +++-
- subprojects/lcms2.wrap | 12 ++++++++++++
- tests/meson.build      |  2 +-
- tools/meson.build      |  2 +-
- 6 files changed, 21 insertions(+), 4 deletions(-)
- create mode 100644 subprojects/lcms2.wrap
-
-diff --git a/babl/meson.build b/babl/meson.build
-index d432dca..70fb131 100644
---- a/babl/meson.build
-+++ b/babl/meson.build
-@@ -138,7 +138,7 @@ babl = library(
-   link_args: babl_link_args,
-   link_with: simd_extra,
-   dependencies: babl_deps,
--  link_depends: version_script,
-+  link_depends: version_script[0],
-   version: so_version,
-   install: true,
- )
-@@ -165,4 +165,6 @@ if build_gir
-       install: true,
-     )
-   endif
-+else
-+  babl_gir = []
- endif
-diff --git a/extensions/meson.build b/extensions/meson.build
-index 23672bb..9935f29 100644
---- a/extensions/meson.build
-+++ b/extensions/meson.build
-@@ -6,6 +6,7 @@ no_cflags = []
- babl_ext_dep = [
-   math,
-   thread,
-+  lcms,
- ]
- 
- # Include directories
-diff --git a/meson.build b/meson.build
-index 2623e93..7e7a935 100644
---- a/meson.build
-+++ b/meson.build
-@@ -451,6 +451,7 @@ if git_bin.found() and run_command(
-     git_bin,
-     'rev-parse',
-     '--is-inside-work-tree',
-+    check: true,
- ).returncode() == 0
-   git_version_h = vcs_tag(
-     input : 'git-version.h.in',
-@@ -531,13 +532,14 @@ babl_dep = declare_dependency(
-   link_with : babl,
-   sources: [
-     babl_version_h,
--    is_variable('babl_gir') ? babl_gir : []
-+    build_gir ? babl_gir : []
-   ],
-   variables: {
-     'babl_path'   : babl_extensions_build_dir,
-     'babl_libdir' : babl_library_build_dir,
-   },
- )
-+meson.override_dependency('babl', babl_dep)
- 
- ################################################################################
- # Build summary
-diff --git a/subprojects/lcms2.wrap b/subprojects/lcms2.wrap
-new file mode 100644
-index 0000000..2cc69df
---- /dev/null
-+++ b/subprojects/lcms2.wrap
-@@ -0,0 +1,12 @@
-+[wrap-file]
-+directory = Little-CMS-2.12
-+source_url = https://github.com/mm2/Little-CMS/archive/refs/tags/2.12.tar.gz
-+source_filename = lcms2-2.12.tar.gz
-+source_hash = e501f1482fc424550ef3abbf86bf1c66090e1661249e89552d39ed5bf935df66
-+patch_filename = lcms2_2.12-2_patch.zip
-+patch_url = https://wrapdb.mesonbuild.com/v2/lcms2_2.12-2/get_patch
-+patch_hash = 3ac6944ac4b8d8507b85961d98cb287532945183d0e8f094c77810e793b6bebe
-+
-+[provide]
-+lcms2 = liblcms2_dep
-+
-diff --git a/tests/meson.build b/tests/meson.build
-index eee8895..7c67e70 100644
---- a/tests/meson.build
-+++ b/tests/meson.build
-@@ -42,7 +42,7 @@ foreach test_name : test_names
-     test_name + '.c',
-     include_directories: [rootInclude, bablInclude],
-     link_with: babl,
--    dependencies: thread,
-+    dependencies: [thread, lcms],
-     export_dynamic: true,
-     install: false,
-   )
-diff --git a/tools/meson.build b/tools/meson.build
-index 2719335..89ccf40 100644
---- a/tools/meson.build
-+++ b/tools/meson.build
-@@ -18,7 +18,7 @@ foreach tool_name : tool_names
-     tool_name + '.c',
-     include_directories: [rootInclude, bablInclude],
-     link_with: babl,
--    dependencies: [math, thread],
-+    dependencies: [math, thread, lcms],
-     install: false,
-   )
- 
--- 
-2.36.0
-
diff --git a/meta-openembedded/meta-gnome/recipes-gimp/babl/babl_0.1.92.bb b/meta-openembedded/meta-gnome/recipes-gimp/babl/babl_0.1.92.bb
deleted file mode 100644
index f2e11c1..0000000
--- a/meta-openembedded/meta-gnome/recipes-gimp/babl/babl_0.1.92.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-SUMMARY = "Babl is a dynamic, any to any, pixel format conversion library"
-LICENSE = "LGPL-3.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=6a6a8e020838b23406c81b19c1d46df6"
-
-GNOMEBASEBUILDCLASS = "meson"
-
-GIR_MESON_OPTION = "enable-gir"
-
-inherit setuptools3 gnomebase gobject-introspection vala
-
-DEPENDS += "lcms"
-
-# https://bugs.llvm.org/show_bug.cgi?id=45555
-CFLAGS:append:toolchain-clang:mipsarch = " -ffp-exception-behavior=ignore "
-CFLAGS:append:toolchain-clang:riscv64 = " -ffp-exception-behavior=ignore "
-
-SRC_URI = "https://download.gimp.org/pub/${BPN}/0.1/${BP}.tar.xz \
-           file://0001-meson-fix-misspelled-kwarg-name.patch \
-           file://0002-meson-Various-fixes.patch \
-           file://0001-meson-Do-not-run-git-rev-parse-during-configure.patch \
-"
-SRC_URI[sha256sum] = "f667735028944b6375ad18f160a64ceb93f5c7dccaa9d8751de359777488a2c1"
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-gnome/recipes-gimp/babl/babl_0.1.96.bb b/meta-openembedded/meta-gnome/recipes-gimp/babl/babl_0.1.96.bb
new file mode 100644
index 0000000..85cc779
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-gimp/babl/babl_0.1.96.bb
@@ -0,0 +1,20 @@
+SUMMARY = "Babl is a dynamic, any to any, pixel format conversion library"
+LICENSE = "LGPL-3.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=6a6a8e020838b23406c81b19c1d46df6"
+
+GNOMEBASEBUILDCLASS = "meson"
+
+GIR_MESON_OPTION = "enable-gir"
+
+inherit setuptools3 gnomebase gobject-introspection vala
+
+DEPENDS += "lcms"
+
+# https://bugs.llvm.org/show_bug.cgi?id=45555
+CFLAGS:append:toolchain-clang:mipsarch = " -ffp-exception-behavior=ignore "
+CFLAGS:append:toolchain-clang:riscv64 = " -ffp-exception-behavior=ignore "
+
+SRC_URI = "https://download.gimp.org/pub/${BPN}/0.1/${BP}.tar.xz"
+SRC_URI[sha256sum] = "33673fe459a983f411245a49f81fd7f1966af1ea8eca9b095a940c542b8545f6"
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/gedit/gedit_42.1.bb b/meta-openembedded/meta-gnome/recipes-gnome/gedit/gedit_42.1.bb
deleted file mode 100644
index 3efd2a9..0000000
--- a/meta-openembedded/meta-gnome/recipes-gnome/gedit/gedit_42.1.bb
+++ /dev/null
@@ -1,46 +0,0 @@
-SUMMARY = "GNOME editor"
-SECTION = "x11/gnome"
-LICENSE = "GPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=75859989545e37968a99b631ef42722e"
-
-GNOMEBASEBUILDCLASS = "meson"
-
-DEPENDS = " \
-    gdk-pixbuf-native \
-    gtk+3 \
-    gsettings-desktop-schemas \
-    libpeas \
-    libsoup-2.4 \
-    gspell \
-    gtksourceview4 \
-    tepl \
-"
-
-inherit gnomebase gsettings itstool gnome-help gobject-introspection gtk-doc vala gettext features_check mime-xdg python3targetconfig
-
-def gnome_verdir(v):
-    return oe.utils.trim_version(v, 1)
-
-SRC_URI[archive.sha256sum] = "7f1fd43df5110d4c37de6541993f41f0fbc3efc790900e92053479ba069920e9"
-
-# gobject-introspection is mandatory and cannot be configured
-REQUIRED_DISTRO_FEATURES = "gobject-introspection-data"
-ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
-
-GIR_MESON_OPTION = ""
-
-GTKDOC_MESON_OPTION = "gtk_doc"
-
-PACKAGES += "${PN}-python"
-
-FILES:${PN} += " \
-    ${datadir}/dbus-1 \
-    ${datadir}/metainfo \
-"
-
-FILES:${PN}-python += " \
-    ${PYTHON_SITEPACKAGES_DIR} \
-"
-
-RDEPENDS:${PN} += "gsettings-desktop-schemas"
-RRECOMMENDS:${PN} += "source-code-pro-fonts"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/gedit/gedit_42.2.bb b/meta-openembedded/meta-gnome/recipes-gnome/gedit/gedit_42.2.bb
new file mode 100644
index 0000000..13d3e0a
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-gnome/gedit/gedit_42.2.bb
@@ -0,0 +1,46 @@
+SUMMARY = "GNOME editor"
+SECTION = "x11/gnome"
+LICENSE = "GPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=75859989545e37968a99b631ef42722e"
+
+GNOMEBASEBUILDCLASS = "meson"
+
+DEPENDS = " \
+    gdk-pixbuf-native \
+    gtk+3 \
+    gsettings-desktop-schemas \
+    libpeas \
+    libsoup-2.4 \
+    gspell \
+    gtksourceview4 \
+    tepl \
+"
+
+inherit gnomebase gsettings itstool gnome-help gobject-introspection gtk-doc vala gettext features_check mime-xdg python3targetconfig
+
+def gnome_verdir(v):
+    return oe.utils.trim_version(v, 1)
+
+SRC_URI[archive.sha256sum] = "3c6229111f0ac066ae44964920791d1265f5bbb56b0bd949a69b7b1261fc8fca"
+
+# gobject-introspection is mandatory and cannot be configured
+REQUIRED_DISTRO_FEATURES = "gobject-introspection-data"
+ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
+
+GIR_MESON_OPTION = ""
+
+GTKDOC_MESON_OPTION = "gtk_doc"
+
+PACKAGES += "${PN}-python"
+
+FILES:${PN} += " \
+    ${datadir}/dbus-1 \
+    ${datadir}/metainfo \
+"
+
+FILES:${PN}-python += " \
+    ${PYTHON_SITEPACKAGES_DIR} \
+"
+
+RDEPENDS:${PN} += "gsettings-desktop-schemas"
+RRECOMMENDS:${PN} += "source-code-pro-fonts"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/geocode-glib/geocode-glib_3.26.3.bb b/meta-openembedded/meta-gnome/recipes-gnome/geocode-glib/geocode-glib_3.26.3.bb
deleted file mode 100644
index 9783dad..0000000
--- a/meta-openembedded/meta-gnome/recipes-gnome/geocode-glib/geocode-glib_3.26.3.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-SUMMARY = "A convenience library for the geocoding"
-
-LICENSE = "LGPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING.LIB;md5=55ca817ccb7d5b5b66355690e9abc605"
-
-GNOMEBASEBUILDCLASS = "meson"
-GIR_MESON_OPTION = "enable-introspection"
-GTKDOC_MESON_OPTION = "enable-gtk-doc"
-
-inherit gnomebase gobject-introspection gettext gtk-doc upstream-version-is-even
-
-DEPENDS = " \
-    json-glib \
-    libsoup-2.4 \
-"
-
-SRC_URI[archive.sha256sum] = "1dfeae83b90eccca1b6cf7dcf7c5e3b317828cf0b56205c4471ef0f911999766"
-
-EXTRA_OEMESON = "-Denable-installed-tests=false"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/geocode-glib/geocode-glib_3.26.4.bb b/meta-openembedded/meta-gnome/recipes-gnome/geocode-glib/geocode-glib_3.26.4.bb
new file mode 100644
index 0000000..2b1ecad
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-gnome/geocode-glib/geocode-glib_3.26.4.bb
@@ -0,0 +1,19 @@
+SUMMARY = "A convenience library for the geocoding"
+
+LICENSE = "LGPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING.LIB;md5=55ca817ccb7d5b5b66355690e9abc605"
+
+GNOMEBASEBUILDCLASS = "meson"
+GIR_MESON_OPTION = "enable-introspection"
+GTKDOC_MESON_OPTION = "enable-gtk-doc"
+
+inherit gnomebase gobject-introspection gettext gtk-doc upstream-version-is-even
+
+DEPENDS = " \
+    json-glib \
+    libsoup-2.4 \
+"
+
+SRC_URI[archive.sha256sum] = "2d9a6826d158470449a173871221596da0f83ebdcff98b90c7049089056a37aa"
+
+EXTRA_OEMESON = "-Denable-installed-tests=false"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/gjs/gjs_1.72.1.bb b/meta-openembedded/meta-gnome/recipes-gnome/gjs/gjs_1.72.1.bb
deleted file mode 100644
index 3558940..0000000
--- a/meta-openembedded/meta-gnome/recipes-gnome/gjs/gjs_1.72.1.bb
+++ /dev/null
@@ -1,40 +0,0 @@
-SUMMARY = "Javascript bindings for GNOME"
-LICENSE = "MIT & LGPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=8dcea832f6acf45d856abfeb2d51ec48"
-
-GNOMEBASEBUILDCLASS = "meson"
-
-DEPENDS = "mozjs-91 cairo"
-
-inherit gnomebase gsettings gobject-introspection vala gettext features_check upstream-version-is-even pkgconfig
-
-SRC_URI[archive.sha256sum] = "17c0b1ec3f096671ff8bfaba6e4bbf14198c7013c604bfd677a9858da079c0ab"
-SRC_URI += " \
-    file://0001-Support-cross-builds-a-bit-better.patch \
-    file://0002-meson.build-Do-not-add-dir-installed-tests-when-inst.patch \
-"
-
-# gobject-introspection is mandatory and cannot be configured
-REQUIRED_DISTRO_FEATURES = "gobject-introspection-data"
-GIR_MESON_OPTION = ""
-
-EXTRA_OEMESON = " \
-    -Dinstalled_tests=false \
-    -Dskip_dbus_tests=true \
-    -Dskip_gtk_tests=true \
-"
-
-LDFLAGS:append:mipsarch = " -latomic"
-LDFLAGS:append:powerpc = " -latomic"
-LDFLAGS:append:powerpc64 = " -latomic"
-LDFLAGS:append:riscv32 = " -latomic"
-
-FILES:${PN} += "${datadir}/gjs-1.0/lsan"
-
-PACKAGES =+ "${PN}-valgrind"
-FILES:${PN}-valgrind = "${datadir}/gjs-1.0/valgrind"
-RDEPENDS:${PN}-valgrind += "valgrind"
-
-# Valgrind not yet available on rv32/rv64
-RDEPENDS:${PN}-valgrind:remove:riscv32 = "valgrind"
-RDEPENDS:${PN}-valgrind:remove:riscv64 = "valgrind"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/gjs/gjs_1.72.2.bb b/meta-openembedded/meta-gnome/recipes-gnome/gjs/gjs_1.72.2.bb
new file mode 100644
index 0000000..3170341
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-gnome/gjs/gjs_1.72.2.bb
@@ -0,0 +1,40 @@
+SUMMARY = "Javascript bindings for GNOME"
+LICENSE = "MIT & LGPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=8dcea832f6acf45d856abfeb2d51ec48"
+
+GNOMEBASEBUILDCLASS = "meson"
+
+DEPENDS = "mozjs-91 cairo"
+
+inherit gnomebase gsettings gobject-introspection vala gettext features_check upstream-version-is-even pkgconfig
+
+SRC_URI[archive.sha256sum] = "ddee379bdc5a7d303a5d894be2b281beb8ac54508604e7d3f20781a869da3977"
+SRC_URI += " \
+    file://0001-Support-cross-builds-a-bit-better.patch \
+    file://0002-meson.build-Do-not-add-dir-installed-tests-when-inst.patch \
+"
+
+# gobject-introspection is mandatory and cannot be configured
+REQUIRED_DISTRO_FEATURES = "gobject-introspection-data"
+GIR_MESON_OPTION = ""
+
+EXTRA_OEMESON = " \
+    -Dinstalled_tests=false \
+    -Dskip_dbus_tests=true \
+    -Dskip_gtk_tests=true \
+"
+
+LDFLAGS:append:mipsarch = " -latomic"
+LDFLAGS:append:powerpc = " -latomic"
+LDFLAGS:append:powerpc64 = " -latomic"
+LDFLAGS:append:riscv32 = " -latomic"
+
+FILES:${PN} += "${datadir}/gjs-1.0/lsan"
+
+PACKAGES =+ "${PN}-valgrind"
+FILES:${PN}-valgrind = "${datadir}/gjs-1.0/valgrind"
+RDEPENDS:${PN}-valgrind += "valgrind"
+
+# Valgrind not yet available on rv32/rv64
+RDEPENDS:${PN}-valgrind:remove:riscv32 = "valgrind"
+RDEPENDS:${PN}-valgrind:remove:riscv64 = "valgrind"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/gnome-bluetooth/gnome-bluetooth_42.2.bb b/meta-openembedded/meta-gnome/recipes-gnome/gnome-bluetooth/gnome-bluetooth_42.2.bb
deleted file mode 100644
index c916aca..0000000
--- a/meta-openembedded/meta-gnome/recipes-gnome/gnome-bluetooth/gnome-bluetooth_42.2.bb
+++ /dev/null
@@ -1,44 +0,0 @@
-SUMMARY = "GNOME bluetooth manager"
-LICENSE = "GPL-2.0-only & LGPL-2.1-only"
-LIC_FILES_CHKSUM = " \
-    file://COPYING;md5=eb723b61539feef013de476e68b5c50a \
-    file://COPYING.LIB;md5=a6f89e2100d9b6cdffcea4f398e37343 \
-"
-
-SECTION = "x11/gnome"
-
-DEPENDS = " \
-    udev \
-    libnotify \
-    libcanberra \
-    bluez5 \
-    upower \
-    gtk4 \
-    gsound \
-    libadwaita \
-"
-
-GNOMEBASEBUILDCLASS = "meson"
-GTKDOC_MESON_OPTION = "gtk_doc"
-GTKIC_VERSION = "4"
-
-inherit features_check gnomebase gtk-icon-cache gtk-doc gobject-introspection
-
-REQUIRED_DISTRO_FEATURES = "x11"
-
-SRC_URI[archive.sha256sum] = "8ce8ecfab28272db1830a63f08f9ccb5304734d1be2dbfce795fe4029e629f0c"
-
-BT_PULSE_PACKS = " \
-    pulseaudio-lib-bluez5-util \
-    pulseaudio-module-bluetooth-discover \
-    pulseaudio-module-bluetooth-policy \
-    pulseaudio-module-bluez5-device \
-    pulseaudio-module-bluez5-discover \
-"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'pulseaudio', d)}"
-PACKAGECONFIG[pulseaudio] = ",,,${BT_PULSE_PACKS}"
-
-FILES:${PN} += "${datadir}/gnome-bluetooth-3.0"
-
-RDEPENDS:${PN} += "bluez5"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/gnome-bluetooth/gnome-bluetooth_42.3.bb b/meta-openembedded/meta-gnome/recipes-gnome/gnome-bluetooth/gnome-bluetooth_42.3.bb
new file mode 100644
index 0000000..cf73f82
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-gnome/gnome-bluetooth/gnome-bluetooth_42.3.bb
@@ -0,0 +1,44 @@
+SUMMARY = "GNOME bluetooth manager"
+LICENSE = "GPL-2.0-only & LGPL-2.1-only"
+LIC_FILES_CHKSUM = " \
+    file://COPYING;md5=eb723b61539feef013de476e68b5c50a \
+    file://COPYING.LIB;md5=a6f89e2100d9b6cdffcea4f398e37343 \
+"
+
+SECTION = "x11/gnome"
+
+DEPENDS = " \
+    udev \
+    libnotify \
+    libcanberra \
+    bluez5 \
+    upower \
+    gtk4 \
+    gsound \
+    libadwaita \
+"
+
+GNOMEBASEBUILDCLASS = "meson"
+GTKDOC_MESON_OPTION = "gtk_doc"
+GTKIC_VERSION = "4"
+
+inherit features_check gnomebase gtk-icon-cache gtk-doc gobject-introspection
+
+REQUIRED_DISTRO_FEATURES = "x11"
+
+SRC_URI[archive.sha256sum] = "c37a2a07f77d4816b261e6c2086a056ed9767c3881dfabc826f4f82f6e1aa302"
+
+BT_PULSE_PACKS = " \
+    pulseaudio-lib-bluez5-util \
+    pulseaudio-module-bluetooth-discover \
+    pulseaudio-module-bluetooth-policy \
+    pulseaudio-module-bluez5-device \
+    pulseaudio-module-bluez5-discover \
+"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'pulseaudio', d)}"
+PACKAGECONFIG[pulseaudio] = ",,,${BT_PULSE_PACKS}"
+
+FILES:${PN} += "${datadir}/gnome-bluetooth-3.0"
+
+RDEPENDS:${PN} += "bluez5"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/gnome-keyring/gnome-keyring_40.0.bb b/meta-openembedded/meta-gnome/recipes-gnome/gnome-keyring/gnome-keyring_40.0.bb
index b6d9a58..a30303c 100644
--- a/meta-openembedded/meta-gnome/recipes-gnome/gnome-keyring/gnome-keyring_40.0.bb
+++ b/meta-openembedded/meta-gnome/recipes-gnome/gnome-keyring/gnome-keyring_40.0.bb
@@ -17,7 +17,7 @@
     ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'libpam', '', d)} \
 "
 
-inherit gnomebase gsettings features_check remove-libtool gettext
+inherit gnomebase gsettings features_check gettext
 
 ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
 
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/grilo/grilo-plugins_0.3.14.bb b/meta-openembedded/meta-gnome/recipes-gnome/grilo/grilo-plugins_0.3.14.bb
deleted file mode 100644
index d00e737..0000000
--- a/meta-openembedded/meta-gnome/recipes-gnome/grilo/grilo-plugins_0.3.14.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "Grilo is a framework forsearching media content from various sources"
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=fbc093901857fcd118f065f900982c24"
-
-DEPENDS = " \
-    glib-2.0-native \
-    gperf-native \
-    itstool-native \
-    grilo \
-    tracker \
-    lua \
-    liboauth \
-"
-
-GNOMEBASEBUILDCLASS = "meson"
-
-inherit gnomebase gnome-help vala
-
-SRC_URI += "file://0001-Avoid-running-trackertestutils.patch"
-SRC_URI[archive.sha256sum] = "686844b34ec73b24931ff6cc4f6033f0072947a6db60acdc7fb3eaf157a581c8"
-
-FILES:${PN} += "${libdir}/grilo-0.3"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/grilo/grilo-plugins_0.3.15.bb b/meta-openembedded/meta-gnome/recipes-gnome/grilo/grilo-plugins_0.3.15.bb
new file mode 100644
index 0000000..b96194d
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-gnome/grilo/grilo-plugins_0.3.15.bb
@@ -0,0 +1,22 @@
+SUMMARY = "Grilo is a framework forsearching media content from various sources"
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=fbc093901857fcd118f065f900982c24"
+
+DEPENDS = " \
+    glib-2.0-native \
+    gperf-native \
+    itstool-native \
+    grilo \
+    tracker \
+    lua \
+    liboauth \
+"
+
+GNOMEBASEBUILDCLASS = "meson"
+
+inherit gnomebase gnome-help vala
+
+SRC_URI += "file://0001-Avoid-running-trackertestutils.patch"
+SRC_URI[archive.sha256sum] = "8518c3d954f93095d955624a044ce16a7345532f811d299dbfa1e114cfebab33"
+
+FILES:${PN} += "${libdir}/grilo-0.3"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/gtk4/gtk4_4.6.6.bb b/meta-openembedded/meta-gnome/recipes-gnome/gtk4/gtk4_4.6.6.bb
deleted file mode 100644
index dcebd0d..0000000
--- a/meta-openembedded/meta-gnome/recipes-gnome/gtk4/gtk4_4.6.6.bb
+++ /dev/null
@@ -1,142 +0,0 @@
-SUMMARY = "Multi-platform toolkit for creating GUIs"
-DESCRIPTION = "GTK is a multi-platform toolkit for creating graphical user interfaces. Offering a complete \
-set of widgets, GTK is suitable for projects ranging from small one-off projects to complete application suites."
-HOMEPAGE = "http://www.gtk.org"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-SECTION = "libs"
-
-DEPENDS = " \
-    sassc-native \
-    glib-2.0 \
-    libepoxy \
-    graphene \
-    cairo \
-    pango \
-    atk \
-    jpeg \
-    libpng \
-    librsvg \
-    tiff \
-    gdk-pixbuf-native gdk-pixbuf \
-"
-
-LICENSE = "LGPL-2.0-only & LGPL-2.0-or-later & LGPL-2.1-or-later"
-LIC_FILES_CHKSUM = " \
-    file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2 \
-    file://gtk/gtk.h;endline=25;md5=1d8dc0fccdbfa26287a271dce88af737 \
-    file://gdk/gdk.h;endline=25;md5=c920ce39dc88c6f06d3e7c50e08086f2 \
-    file://tests/testgtk.c;endline=25;md5=49d06770681b8322466b52ed19d29fb2 \
-"
-
-MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-
-UPSTREAM_CHECK_REGEX = "gtk-(?P<pver>\d+\.(\d*[02468])+(\.\d+)+)\.tar.xz"
-
-SRC_URI = "http://ftp.gnome.org/pub/gnome/sources/gtk/${MAJ_VER}/gtk-${PV}.tar.xz"
-SRC_URI[sha256sum] = "7bbfe4d13569f7c297ed49834ac7263e318b7bf102d3271cb466d5971f59ae70"
-
-S = "${WORKDIR}/gtk-${PV}"
-
-inherit meson gettext pkgconfig gtk-doc update-alternatives gsettings features_check gobject-introspection
-
-# TBD: nativesdk
-# gobject-introspection.bbclass pins introspection off for nativesdk. As long as
-# we do not remove this wisdom or hack gtk4, it is not possible to build
-# nativesdk-gtk4
-BBCLASSEXTEND = "native"
-
-GSETTINGS_PACKAGE:class-native = ""
-
-ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
-REQUIRED_DISTRO_FEATURES = "opengl"
-
-GIR_MESON_ENABLE_FLAG = 'enabled'
-GIR_MESON_DISABLE_FLAG = 'disabled'
-GTKDOC_MESON_OPTION = 'gtk_doc'
-
-EXTRA_OEMESON = " -Dbuild-tests=false"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'wayland x11', d)}"
-PACKAGECONFIG:class-native = "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
-PACKAGECONFIG:class-nativesdk = "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
-
-PACKAGECONFIG[x11] = "-Dx11-backend=true,-Dx11-backend=false,at-spi2-atk fontconfig libx11 libxext libxcursor libxi libxdamage libxrandr libxrender libxcomposite libxfixes xinerama"
-PACKAGECONFIG[wayland] = "-Dwayland-backend=true,-Dwayland-backend=false,wayland wayland-protocols libxkbcommon virtual/egl virtual/libgles2 wayland-native"
-PACKAGECONFIG[cups] = "-Dprint-cups=enabled,-Dprint-cups=disabled,cups"
-PACKAGECONFIG[colord] = "-Dcolord=enabled,-Dcolord=disabled,colord"
-# gtk4 wants gstreamer-player-1.0 -> gstreamer1.0-plugins-bad
-PACKAGECONFIG[gstreamer] = "-Dmedia-gstreamer=enabled,-Dmedia-gstreamer=disabled,gstreamer1.0-plugins-bad"
-PACKAGECONFIG[tracker] = "-Dtracker=enabled,-Dtracker=disabled,tracker"
-
-
-do_compile:prepend() {
-    export GIR_EXTRA_LIBS_PATH="${B}/gdk/.libs"
-}
-
-
-PACKAGES =+ "${PN}-demo"
-LIBV = "4.0.0"
-
-FILES:${PN}-demo = " \
-    ${datadir}/applications/org.gtk.Demo4.desktop \
-    ${datadir}/applications/org.gtk.IconBrowser4.desktop \
-    ${datadir}/applications/org.gtk.WidgetFactory4.desktop \
-    ${datadir}/icons/hicolor/*/apps/org.gtk.Demo4*.* \
-    ${datadir}/icons/hicolor/*/apps/org.gtk.IconBrowser4*.* \
-    ${datadir}/icons/hicolor/*/apps/org.gtk.WidgetFactory4*.* \
-    ${bindir}/gtk4-demo \
-    ${bindir}/gtk4-demo-application \
-    ${bindir}/gtk4-icon-browser \
-    ${bindir}/gtk4-widget-factory \
-"
-
-FILES:${PN}:append = " \
-    ${datadir}/glib-2.0/schemas/ \
-    ${datadir}/gtk-4.0/emoji/ \
-    ${datadir}/metainfo/ \
-    ${datadir}/icons/hicolor/*/apps/org.gtk.PrintEditor4*.* \
-    ${libdir}/gtk-4.0/${LIBV}/printbackends \
-    ${bindir}/gtk4-update-icon-cache \
-    ${bindir}/gtk4-launch \
-"
-
-FILES:${PN}-dev += " \
-    ${datadir}/gtk-4.0/gtk4builder.rng \
-    ${datadir}/gtk-4.0/include \
-    ${datadir}/gtk-4.0/valgrind \
-    ${datadir}/gettext/its \
-    ${bindir}/gtk4-builder-tool \
-    ${bindir}/gtk4-encode-symbolic-svg \
-    ${bindir}/gtk4-query-settings \
-"
-
-GTKBASE_RRECOMMENDS ?= " \
-    liberation-fonts \
-    gdk-pixbuf-loader-png \
-    gdk-pixbuf-loader-jpeg \
-    gdk-pixbuf-loader-gif \
-    gdk-pixbuf-loader-xpm \
-    shared-mime-info \
-    adwaita-icon-theme-symbolic \
-"
-
-GTKBASE_RRECOMMENDS:class-native ?= ""
-
-GTKGLIBC_RRECOMMENDS ?= "${GTKBASE_RRECOMMENDS} glibc-gconv-iso8859-1"
-
-RRECOMMENDS:${PN} = "${GTKBASE_RRECOMMENDS}"
-RRECOMMENDS:${PN}:libc-glibc = "${GTKGLIBC_RRECOMMENDS}"
-RDEPENDS:${PN}-dev += "${@bb.utils.contains("PACKAGECONFIG", "wayland", "wayland-protocols", "", d)}"
-
-PACKAGES_DYNAMIC += "^gtk4-printbackend-.*"
-python populate_packages:prepend () {
-    import os.path
-
-    gtk_libdir = d.expand('${libdir}/gtk-3.0/${LIBV}')
-    printmodules_root = os.path.join(gtk_libdir, 'printbackends');
-
-    do_split_packages(d, printmodules_root, r'^libprintbackend-(.*)\.so$', 'gtk4-printbackend-%s', 'GTK printbackend module for %s')
-
-    if (d.getVar('DEBIAN_NAMES')):
-        d.setVar(d.expand('PKG:${PN}'), '${MLPREFIX}libgtk-4.0')
-}
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/gtk4/gtk4_4.6.7.bb b/meta-openembedded/meta-gnome/recipes-gnome/gtk4/gtk4_4.6.7.bb
new file mode 100644
index 0000000..c71be23
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-gnome/gtk4/gtk4_4.6.7.bb
@@ -0,0 +1,142 @@
+SUMMARY = "Multi-platform toolkit for creating GUIs"
+DESCRIPTION = "GTK is a multi-platform toolkit for creating graphical user interfaces. Offering a complete \
+set of widgets, GTK is suitable for projects ranging from small one-off projects to complete application suites."
+HOMEPAGE = "http://www.gtk.org"
+BUGTRACKER = "https://bugzilla.gnome.org/"
+SECTION = "libs"
+
+DEPENDS = " \
+    sassc-native \
+    glib-2.0 \
+    libepoxy \
+    graphene \
+    cairo \
+    pango \
+    atk \
+    jpeg \
+    libpng \
+    librsvg \
+    tiff \
+    gdk-pixbuf-native gdk-pixbuf \
+"
+
+LICENSE = "LGPL-2.0-only & LGPL-2.0-or-later & LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = " \
+    file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2 \
+    file://gtk/gtk.h;endline=25;md5=1d8dc0fccdbfa26287a271dce88af737 \
+    file://gdk/gdk.h;endline=25;md5=c920ce39dc88c6f06d3e7c50e08086f2 \
+    file://tests/testgtk.c;endline=25;md5=49d06770681b8322466b52ed19d29fb2 \
+"
+
+MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+
+UPSTREAM_CHECK_REGEX = "gtk-(?P<pver>\d+\.(\d*[02468])+(\.\d+)+)\.tar.xz"
+
+SRC_URI = "http://ftp.gnome.org/pub/gnome/sources/gtk/${MAJ_VER}/gtk-${PV}.tar.xz"
+SRC_URI[sha256sum] = "effd2e7c4b5e2a5c7fad43e0f24adea68baa4092abb0b752caff278e6bb010e8"
+
+S = "${WORKDIR}/gtk-${PV}"
+
+inherit meson gettext pkgconfig gtk-doc update-alternatives gsettings features_check gobject-introspection
+
+# TBD: nativesdk
+# gobject-introspection.bbclass pins introspection off for nativesdk. As long as
+# we do not remove this wisdom or hack gtk4, it is not possible to build
+# nativesdk-gtk4
+BBCLASSEXTEND = "native"
+
+GSETTINGS_PACKAGE:class-native = ""
+
+ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
+REQUIRED_DISTRO_FEATURES = "opengl"
+
+GIR_MESON_ENABLE_FLAG = 'enabled'
+GIR_MESON_DISABLE_FLAG = 'disabled'
+GTKDOC_MESON_OPTION = 'gtk_doc'
+
+EXTRA_OEMESON = " -Dbuild-tests=false"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'wayland x11', d)}"
+PACKAGECONFIG:class-native = "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
+PACKAGECONFIG:class-nativesdk = "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
+
+PACKAGECONFIG[x11] = "-Dx11-backend=true,-Dx11-backend=false,at-spi2-atk fontconfig libx11 libxext libxcursor libxi libxdamage libxrandr libxrender libxcomposite libxfixes xinerama"
+PACKAGECONFIG[wayland] = "-Dwayland-backend=true,-Dwayland-backend=false,wayland wayland-protocols libxkbcommon virtual/egl virtual/libgles2 wayland-native"
+PACKAGECONFIG[cups] = "-Dprint-cups=enabled,-Dprint-cups=disabled,cups"
+PACKAGECONFIG[colord] = "-Dcolord=enabled,-Dcolord=disabled,colord"
+# gtk4 wants gstreamer-player-1.0 -> gstreamer1.0-plugins-bad
+PACKAGECONFIG[gstreamer] = "-Dmedia-gstreamer=enabled,-Dmedia-gstreamer=disabled,gstreamer1.0-plugins-bad"
+PACKAGECONFIG[tracker] = "-Dtracker=enabled,-Dtracker=disabled,tracker"
+
+
+do_compile:prepend() {
+    export GIR_EXTRA_LIBS_PATH="${B}/gdk/.libs"
+}
+
+
+PACKAGES =+ "${PN}-demo"
+LIBV = "4.0.0"
+
+FILES:${PN}-demo = " \
+    ${datadir}/applications/org.gtk.Demo4.desktop \
+    ${datadir}/applications/org.gtk.IconBrowser4.desktop \
+    ${datadir}/applications/org.gtk.WidgetFactory4.desktop \
+    ${datadir}/icons/hicolor/*/apps/org.gtk.Demo4*.* \
+    ${datadir}/icons/hicolor/*/apps/org.gtk.IconBrowser4*.* \
+    ${datadir}/icons/hicolor/*/apps/org.gtk.WidgetFactory4*.* \
+    ${bindir}/gtk4-demo \
+    ${bindir}/gtk4-demo-application \
+    ${bindir}/gtk4-icon-browser \
+    ${bindir}/gtk4-widget-factory \
+"
+
+FILES:${PN}:append = " \
+    ${datadir}/glib-2.0/schemas/ \
+    ${datadir}/gtk-4.0/emoji/ \
+    ${datadir}/metainfo/ \
+    ${datadir}/icons/hicolor/*/apps/org.gtk.PrintEditor4*.* \
+    ${libdir}/gtk-4.0/${LIBV}/printbackends \
+    ${bindir}/gtk4-update-icon-cache \
+    ${bindir}/gtk4-launch \
+"
+
+FILES:${PN}-dev += " \
+    ${datadir}/gtk-4.0/gtk4builder.rng \
+    ${datadir}/gtk-4.0/include \
+    ${datadir}/gtk-4.0/valgrind \
+    ${datadir}/gettext/its \
+    ${bindir}/gtk4-builder-tool \
+    ${bindir}/gtk4-encode-symbolic-svg \
+    ${bindir}/gtk4-query-settings \
+"
+
+GTKBASE_RRECOMMENDS ?= " \
+    liberation-fonts \
+    gdk-pixbuf-loader-png \
+    gdk-pixbuf-loader-jpeg \
+    gdk-pixbuf-loader-gif \
+    gdk-pixbuf-loader-xpm \
+    shared-mime-info \
+    adwaita-icon-theme-symbolic \
+"
+
+GTKBASE_RRECOMMENDS:class-native ?= ""
+
+GTKGLIBC_RRECOMMENDS ?= "${GTKBASE_RRECOMMENDS} glibc-gconv-iso8859-1"
+
+RRECOMMENDS:${PN} = "${GTKBASE_RRECOMMENDS}"
+RRECOMMENDS:${PN}:libc-glibc = "${GTKGLIBC_RRECOMMENDS}"
+RDEPENDS:${PN}-dev += "${@bb.utils.contains("PACKAGECONFIG", "wayland", "wayland-protocols", "", d)}"
+
+PACKAGES_DYNAMIC += "^gtk4-printbackend-.*"
+python populate_packages:prepend () {
+    import os.path
+
+    gtk_libdir = d.expand('${libdir}/gtk-3.0/${LIBV}')
+    printmodules_root = os.path.join(gtk_libdir, 'printbackends');
+
+    do_split_packages(d, printmodules_root, r'^libprintbackend-(.*)\.so$', 'gtk4-printbackend-%s', 'GTK printbackend module for %s')
+
+    if (d.getVar('DEBIAN_NAMES')):
+        d.setVar(d.expand('PKG:${PN}'), '${MLPREFIX}libgtk-4.0')
+}
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/libadwaita/libadwaita_1.1.3.bb b/meta-openembedded/meta-gnome/recipes-gnome/libadwaita/libadwaita_1.1.3.bb
deleted file mode 100644
index eaec983..0000000
--- a/meta-openembedded/meta-gnome/recipes-gnome/libadwaita/libadwaita_1.1.3.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-SUMMARY = "Building blocks for modern GNOME applications"
-LICENSE="LGPL-2.1-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-GNOMEBASEBUILDCLASS = "meson"
-
-DEPENDS = " \
-    sassc-native \
-    gtk4 \
-"
-
-inherit gnomebase gobject-introspection gtk-doc vala features_check
-
-SRC_URI[archive.sha256sum] = "9b92be6007da1bf75131a2d5e697f0ff985bccf82380d298d46f013675aa4197"
-
-ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
-REQUIRED_DISTRO_FEATURES = "opengl"
-
-GIR_MESON_ENABLE_FLAG = 'enabled'
-GIR_MESON_DISABLE_FLAG = 'disabled'
-GTKDOC_MESON_OPTION = 'gtk_doc'
-
-PACKAGECONFIG[examples] = "-Dexamples=true,-Dexamples=false"
-
-FILES:${PN} += "${datadir}/metainfo"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/libadwaita/libadwaita_1.1.4.bb b/meta-openembedded/meta-gnome/recipes-gnome/libadwaita/libadwaita_1.1.4.bb
new file mode 100644
index 0000000..44d18f5
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-gnome/libadwaita/libadwaita_1.1.4.bb
@@ -0,0 +1,25 @@
+SUMMARY = "Building blocks for modern GNOME applications"
+LICENSE="LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+GNOMEBASEBUILDCLASS = "meson"
+
+DEPENDS = " \
+    sassc-native \
+    gtk4 \
+"
+
+inherit gnomebase gobject-introspection gtk-doc vala features_check
+
+SRC_URI[archive.sha256sum] = "fcc6d56669d33ac3d030098d7571d8045a02e18dc083b49a5a5a6325068e6b58"
+
+ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
+REQUIRED_DISTRO_FEATURES = "opengl"
+
+GIR_MESON_ENABLE_FLAG = 'enabled'
+GIR_MESON_DISABLE_FLAG = 'disabled'
+GTKDOC_MESON_OPTION = 'gtk_doc'
+
+PACKAGECONFIG[examples] = "-Dexamples=true,-Dexamples=false"
+
+FILES:${PN} += "${datadir}/metainfo"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/tracker/tracker_3.3.2.bb b/meta-openembedded/meta-gnome/recipes-gnome/tracker/tracker_3.3.2.bb
deleted file mode 100644
index eaa0e06..0000000
--- a/meta-openembedded/meta-gnome/recipes-gnome/tracker/tracker_3.3.2.bb
+++ /dev/null
@@ -1,53 +0,0 @@
-SUMMARY = "Tracker is a file search engine"
-LICENSE = "GPL-2.0-only & LGPL-2.1-only"
-LIC_FILES_CHKSUM = " \
-    file://COPYING.GPL;md5=ee31012bf90e7b8c108c69f197f3e3a4 \
-    file://COPYING.LGPL;md5=2d5025d4aa3495befef8f17206a5b0a1 \
-"
-
-DEPENDS = " \
-    dbus-native \
-    python3-pygobject-native \
-    glib-2.0 \
-    sqlite3 \
-    libarchive \
-    dbus \
-    icu \
-    json-glib \
-    libsoup-2.4 \
-    libstemmer \
-"
-
-GNOMEBASEBUILDCLASS = "meson"
-
-inherit gnomebase gsettings gobject-introspection vala gtk-doc manpages bash-completion features_check python3native
-
-SRC_URI[archive.sha256sum] = "0ed2b98918956d6f16429c607dd8a14c84f4da0a48970fd2eb8c93aba3cf9913"
-
-# gobject-introspection is mandatory and cannot be configured
-REQUIRED_DISTRO_FEATURES = "gobject-introspection-data"
-GIR_MESON_OPTION = ""
-
-# text search is not an option anymore and requires sqlite3 build with
-# PACKAGECONFIG[fts5] set (default)
-
-# set required cross property sqlite3_has_fts5
-do_write_config[vardeps] += "PACKAGECONFIG"
-do_write_config:append() {
-    echo "[properties]" > ${WORKDIR}/meson-tracker.cross
-    echo "sqlite3_has_fts5 = 'true'" >> ${WORKDIR}/meson-tracker.cross
-}
-
-EXTRA_OEMESON = " \
-    --cross-file ${WORKDIR}/meson-tracker.cross \
-    -Dman=false \
-    -Dsystemd_user_services=${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)} \
-    -Dsystemd_user_services_dir=${systemd_user_unitdir} \
-"
-
-FILES:${PN} += " \
-    ${datadir}/dbus-1 \
-    ${datadir}/tracker3 \
-    ${libdir}/tracker-3.0 \
-    ${systemd_user_unitdir} \
-"
diff --git a/meta-openembedded/meta-gnome/recipes-gnome/tracker/tracker_3.3.3.bb b/meta-openembedded/meta-gnome/recipes-gnome/tracker/tracker_3.3.3.bb
new file mode 100644
index 0000000..91d90dd
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-gnome/tracker/tracker_3.3.3.bb
@@ -0,0 +1,53 @@
+SUMMARY = "Tracker is a file search engine"
+LICENSE = "GPL-2.0-only & LGPL-2.1-only"
+LIC_FILES_CHKSUM = " \
+    file://COPYING.GPL;md5=ee31012bf90e7b8c108c69f197f3e3a4 \
+    file://COPYING.LGPL;md5=2d5025d4aa3495befef8f17206a5b0a1 \
+"
+
+DEPENDS = " \
+    dbus-native \
+    python3-pygobject-native \
+    glib-2.0 \
+    sqlite3 \
+    libarchive \
+    dbus \
+    icu \
+    json-glib \
+    libsoup-2.4 \
+    libstemmer \
+"
+
+GNOMEBASEBUILDCLASS = "meson"
+
+inherit gnomebase gsettings gobject-introspection vala gtk-doc manpages bash-completion features_check python3native
+
+SRC_URI[archive.sha256sum] = "4094f704e338f2247fa6b94633279cfd07f7e952bb24627128fab78edb242464"
+
+# gobject-introspection is mandatory and cannot be configured
+REQUIRED_DISTRO_FEATURES = "gobject-introspection-data"
+GIR_MESON_OPTION = ""
+
+# text search is not an option anymore and requires sqlite3 build with
+# PACKAGECONFIG[fts5] set (default)
+
+# set required cross property sqlite3_has_fts5
+do_write_config[vardeps] += "PACKAGECONFIG"
+do_write_config:append() {
+    echo "[properties]" > ${WORKDIR}/meson-tracker.cross
+    echo "sqlite3_has_fts5 = 'true'" >> ${WORKDIR}/meson-tracker.cross
+}
+
+EXTRA_OEMESON = " \
+    --cross-file ${WORKDIR}/meson-tracker.cross \
+    -Dman=false \
+    -Dsystemd_user_services=${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)} \
+    -Dsystemd_user_services_dir=${systemd_user_unitdir} \
+"
+
+FILES:${PN} += " \
+    ${datadir}/dbus-1 \
+    ${datadir}/tracker3 \
+    ${libdir}/tracker-3.0 \
+    ${systemd_user_unitdir} \
+"
diff --git a/meta-openembedded/meta-gnome/recipes-support/ibus/ibus.inc b/meta-openembedded/meta-gnome/recipes-support/ibus/ibus.inc
index 37a490a..bb662f2 100644
--- a/meta-openembedded/meta-gnome/recipes-support/ibus/ibus.inc
+++ b/meta-openembedded/meta-gnome/recipes-support/ibus/ibus.inc
@@ -10,7 +10,7 @@
 DEPENDS = "unicode-ucd"
 
 SRC_URI = " \
-    git://github.com/ibus/ibus.git;branch=master;protocol=https \
+    git://github.com/ibus/ibus.git;branch=main;protocol=https \
     file://0001-Do-not-try-to-start-dbus-we-do-not-have-dbus-lauch.patch \
 "
 SRCREV = "6a70ab0338206bd1c7d01a4e1874ea0ee5b3a9d3"
diff --git a/meta-openembedded/meta-gnome/recipes-support/libwacom/libwacom_2.3.0.bb b/meta-openembedded/meta-gnome/recipes-support/libwacom/libwacom_2.3.0.bb
deleted file mode 100644
index 41d8c62..0000000
--- a/meta-openembedded/meta-gnome/recipes-support/libwacom/libwacom_2.3.0.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-SUMMARY = "A tablet description library"
-DESCRIPTION = "libwacom is a library to identify Wacom tablets and their model-specific features. \
-               It provides easy access to information such as 'is this a built-in on-screen tablet\', \
-               'what is the size of this model', etc."
-HOMEPAGE = "https://github.com/linuxwacom/libwacom"
-BUGTRACKER = "https://github.com/linuxwacom/libwacom/issues"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=40a21fffb367c82f39fd91a3b137c36e"
-
-SRC_URI = "git://github.com/linuxwacom/libwacom.git;branch=master;protocol=https"
-SRCREV = "b88053851ef81694b9e3927e0857f0f02ff9a8a8"
-
-DEPENDS = " \
-    libxml2-native \
-    libgudev \
-"
-
-S = "${WORKDIR}/git"
-
-inherit meson pkgconfig
-
-EXTRA_OEMESON = " \
-    -Dtests=disabled \
-"
diff --git a/meta-openembedded/meta-gnome/recipes-support/libwacom/libwacom_2.4.0.bb b/meta-openembedded/meta-gnome/recipes-support/libwacom/libwacom_2.4.0.bb
new file mode 100644
index 0000000..d690df9
--- /dev/null
+++ b/meta-openembedded/meta-gnome/recipes-support/libwacom/libwacom_2.4.0.bb
@@ -0,0 +1,24 @@
+SUMMARY = "A tablet description library"
+DESCRIPTION = "libwacom is a library to identify Wacom tablets and their model-specific features. \
+               It provides easy access to information such as 'is this a built-in on-screen tablet\', \
+               'what is the size of this model', etc."
+HOMEPAGE = "https://github.com/linuxwacom/libwacom"
+BUGTRACKER = "https://github.com/linuxwacom/libwacom/issues"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=40a21fffb367c82f39fd91a3b137c36e"
+
+SRC_URI = "git://github.com/linuxwacom/libwacom.git;branch=master;protocol=https"
+SRCREV = "9fd28747534ef776ffecc245721a4faa43bdd89b"
+
+DEPENDS = " \
+    libxml2-native \
+    libgudev \
+"
+
+S = "${WORKDIR}/git"
+
+inherit meson pkgconfig
+
+EXTRA_OEMESON = " \
+    -Dtests=disabled \
+"
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-Define-in_-structs-for-non-glibc-system-libs.patch b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-Define-in_-structs-for-non-glibc-system-libs.patch
index e7a0cce..29873cf 100644
--- a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-Define-in_-structs-for-non-glibc-system-libs.patch
+++ b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-Define-in_-structs-for-non-glibc-system-libs.patch
@@ -16,8 +16,6 @@
  usr/include/netinet/in.h | 36 ++++++++++++++++++++++++++++++++++++
  2 files changed, 47 insertions(+)
 
-diff --git a/usr/include/net/if.h b/usr/include/net/if.h
-index 116a176..6246b12 100644
 --- a/usr/include/net/if.h
 +++ b/usr/include/net/if.h
 @@ -1,6 +1,17 @@
@@ -38,8 +36,6 @@
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <linux/if.h>
-diff --git a/usr/include/netinet/in.h b/usr/include/netinet/in.h
-index 2952bb2..0c95bc9 100644
 --- a/usr/include/netinet/in.h
 +++ b/usr/include/netinet/in.h
 @@ -5,6 +5,42 @@
@@ -82,6 +78,6 @@
 +#define __UAPI_DEF_IF_IFREQ     1
 +#endif
 +
+ #include <sys/types.h>
  #include <klibc/extern.h>
  #include <stdint.h>
- #include <endian.h>		/* Must be included *before* <linux/in.h> */
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-dash-Specify-format-string-in-fmtstr.patch b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-dash-Specify-format-string-in-fmtstr.patch
deleted file mode 100644
index 46a2398..0000000
--- a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-dash-Specify-format-string-in-fmtstr.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-From 8beffe501c1ac5b35d62004735c4157c74183901 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sun, 9 Jul 2017 13:51:25 -0700
-Subject: [PATCH] dash: Specify format string in fmtstr()
-
-Fixes build with hardening flags
-
-usr/dash/jobs.c:429:3: error: format not a string literal and no format arguments [-Werror=format-security]
-   col = fmtstr(s, 32, strsignal(st));
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
----
- usr/dash/jobs.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/usr/dash/jobs.c b/usr/dash/jobs.c
-index 009bbfe..299bcac 100644
---- a/usr/dash/jobs.c
-+++ b/usr/dash/jobs.c
-@@ -426,7 +426,7 @@ sprint_status(char *s, int status, int sigonly)
- 				goto out;
- #endif
- 		}
--		col = fmtstr(s, 32, strsignal(st));
-+		col = fmtstr(s, 32, "%s", strsignal(st));
- #ifdef WCOREDUMP
- 		if (WCOREDUMP(status)) {
- 			col += fmtstr(s + col, 16, " (core dumped)");
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-fcntl-Fix-build-failure-for-some-architectures-with-.patch b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-fcntl-Fix-build-failure-for-some-architectures-with-.patch
new file mode 100644
index 0000000..4fc4b45
--- /dev/null
+++ b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-fcntl-Fix-build-failure-for-some-architectures-with-.patch
@@ -0,0 +1,34 @@
+From a33c262f828f803fffdad8e1f44d524dc9c75856 Mon Sep 17 00:00:00 2001
+From: Ben Hutchings <ben@decadent.org.uk>
+Date: Wed, 3 Aug 2022 01:10:01 +0200
+Subject: [PATCH] fcntl: Fix build failure for some architectures with Linux
+ 5.19
+
+Starting from Linux 5.19, the kernel UAPI headers now only define
+__ARCH_FLOCK64_PAD if the architecture actually needs padding in
+struct flock64.  Wrap its use with #ifdef,
+
+Upstream-Status: Backport [https://git.kernel.org/pub/scm/libs/klibc/klibc.git/commit/?id=bb2fde5ddbc18a2e7795ca4d24759230c2aae9d0]
+Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ usr/include/fcntl.h | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/usr/include/fcntl.h b/usr/include/fcntl.h
+index ed703a6..cb2e4e5 100644
+--- a/usr/include/fcntl.h
++++ b/usr/include/fcntl.h
+@@ -33,7 +33,9 @@ struct flock {
+ 	__kernel_loff_t l_start;
+ 	__kernel_loff_t l_len;
+ 	__kernel_pid_t  l_pid;
++#ifdef __ARCH_FLOCK64_PAD
+         __ARCH_FLOCK64_PAD
++#endif
+ };
+ 
+ #ifdef F_GETLK64
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-include-linux-sysinfo.h-directly.patch b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-include-linux-sysinfo.h-directly.patch
index 04c97fc..d582296 100644
--- a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-include-linux-sysinfo.h-directly.patch
+++ b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-include-linux-sysinfo.h-directly.patch
@@ -15,14 +15,12 @@
  usr/include/sys/sysinfo.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
-diff --git a/usr/include/sys/sysinfo.h b/usr/include/sys/sysinfo.h
-index dba68dc..d145c0b 100644
 --- a/usr/include/sys/sysinfo.h
 +++ b/usr/include/sys/sysinfo.h
-@@ -5,7 +5,7 @@
- #ifndef _SYS_SYSINFO_H
+@@ -6,7 +6,7 @@
  #define _SYS_SYSINFO_H
  
+ #include <sys/types.h>
 -#include <linux/kernel.h>
 +#include <linux/sysinfo.h>
  
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-workaround-for-overlapping-sections-in-binary.patch b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-workaround-for-overlapping-sections-in-binary.patch
index 8ccfe44..2f203ef 100644
--- a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-workaround-for-overlapping-sections-in-binary.patch
+++ b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/0001-workaround-for-overlapping-sections-in-binary.patch
@@ -34,19 +34,14 @@
  usr/klibc/syscalls/Kbuild | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
-diff --git a/usr/klibc/syscalls/Kbuild b/usr/klibc/syscalls/Kbuild
-index 2430b9b4..754d028e 100644
 --- a/usr/klibc/syscalls/Kbuild
 +++ b/usr/klibc/syscalls/Kbuild
-@@ -71,7 +71,7 @@ $(obj)/typesize.c: $(srctree)/$(KLIBCSRC)/syscalls.pl $(obj)/SYSCALLS.i     \
+@@ -71,7 +71,7 @@ $(obj)/typesize.c: $(srctree)/$(KLIBCSRC
  
  # Convert typesize.o to typesize.bin
  quiet_cmd_mkbin = OBJCOPY $@
--      cmd_mkbin = $(KLIBCOBJCOPY) -O binary $< $@
+-      cmd_mkbin = $(KLIBCOBJCOPY) -O binary --only-section .rodata $< $@
 +      cmd_mkbin = $(KLIBCOBJCOPY) -O binary --remove-section .note.gnu.property $< $@
  
  $(obj)/typesize.bin: $(obj)/typesize.o FORCE
  	$(call if_changed,mkbin)
--- 
-2.30.0
-
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/armv4-fix-v4bx.patch b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/armv4-fix-v4bx.patch
index 4a334fa..6c734df 100644
--- a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/armv4-fix-v4bx.patch
+++ b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/armv4-fix-v4bx.patch
@@ -14,8 +14,8 @@
 
 --- a/usr/klibc/arch/arm/MCONFIG
 +++ b/usr/klibc/arch/arm/MCONFIG
-@@ -29,6 +29,7 @@ else
- KLIBCSHAREDFLAGS = -Ttext-segment 0x01800000
+@@ -27,6 +27,7 @@ else
+ KLIBCSHAREDFLAGS = $(LD_IMAGE_BASE_OPT) 0x01800000
  ifeq ($(CONFIG_AEABI),y)
  KLIBCREQFLAGS += -mabi=aapcs-linux -mno-thumb-interwork
 +KLIBCLDFLAGS += $(FIX_ARMV4_EABI_BX)
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/cross-clang.patch b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/cross-clang.patch
index 69799c5..41936c9 100644
--- a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/cross-clang.patch
+++ b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/cross-clang.patch
@@ -6,7 +6,7 @@
                      -I$(KLIBCOBJ)/../include		\
 -                    -I$(KLIBCINC)
 -ifeq ($(cc-name),clang)
--KLIBCCPPFLAGS    += -I$(shell $(KLIBCCC) $(KLIBCCFLAGS) --print-file-name=include)
+-KLIBCCPPFLAGS    += -isystem $(shell $(KLIBCCC) $(KLIBCCFLAGS) --print-file-name=include)
 -endif
 +                    -I$(KLIBCINC) \
 +                    -I$(shell $(KLIBCCC) $(KLIBCCFLAGS) --print-file-name=include)
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klcc-cross_2.0.8.bb b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klcc-cross_2.0.10.bb
similarity index 100%
rename from meta-openembedded/meta-initramfs/recipes-devtools/klibc/klcc-cross_2.0.8.bb
rename to meta-openembedded/meta-initramfs/recipes-devtools/klibc/klcc-cross_2.0.10.bb
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc-static-utils_2.0.8.bb b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc-static-utils_2.0.10.bb
similarity index 100%
rename from meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc-static-utils_2.0.8.bb
rename to meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc-static-utils_2.0.10.bb
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc-utils_2.0.8.bb b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc-utils_2.0.10.bb
similarity index 100%
rename from meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc-utils_2.0.8.bb
rename to meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc-utils_2.0.10.bb
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc.inc b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc.inc
index ceb4f5a..5acf679 100644
--- a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc.inc
+++ b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc.inc
@@ -10,7 +10,6 @@
            ${ARMPATCHES} \
            file://klcc-consider-sysroot.patch \
            file://klcc-cross-accept-clang-options.patch \
-           file://0001-dash-Specify-format-string-in-fmtstr.patch \
            file://0001-Define-in_-structs-for-non-glibc-system-libs.patch \
            file://0001-include-linux-sysinfo.h-directly.patch \
            file://0001-mkfifo-Implement-mkfifo.patch \
@@ -21,6 +20,7 @@
            file://0001-klibc-Kbuild-Accept-EXTRA_KLIBCAFLAGS.patch \
            file://cross-clang.patch \
            file://0001-workaround-for-overlapping-sections-in-binary.patch \
+           file://0001-fcntl-Fix-build-failure-for-some-architectures-with-.patch \
            "
 
 ARMPATCHES ?= ""
@@ -28,7 +28,7 @@
 ARMPATCHES:arm = " \
                   file://armv4-fix-v4bx.patch \
                  "
-SRC_URI[sha256sum] = "4e48f1398cfe3ce0b6df55ce6e70acf54fc8488e3aea3fb3610ee1622d9cb436"
+SRC_URI[sha256sum] = "662753da8889e744dfc0db6eb4021c3377ee7ef8ed66d7d57765f8c9e25939cd"
 
 S = "${WORKDIR}/klibc-${PV}"
 
diff --git a/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc_2.0.8.bb b/meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc_2.0.10.bb
similarity index 100%
rename from meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc_2.0.8.bb
rename to meta-openembedded/meta-initramfs/recipes-devtools/klibc/klibc_2.0.10.bb
diff --git a/meta-openembedded/meta-multimedia/recipes-connectivity/rygel/rygel_0.38.3.bb b/meta-openembedded/meta-multimedia/recipes-connectivity/rygel/rygel_0.38.3.bb
deleted file mode 100644
index ec7824b..0000000
--- a/meta-openembedded/meta-multimedia/recipes-connectivity/rygel/rygel_0.38.3.bb
+++ /dev/null
@@ -1,72 +0,0 @@
-SUMMARY = "A UPnP AV media server and renderer"
-DESCRIPTION = "Rygel is a home media solution (UPnP AV MediaServer) that \
-allow you to easily share audio, video and pictures to other devices. \
-Additionally, media player software may use Rygel to become a MediaRenderer \
-that may be controlled remotely by a UPnP or DLNA Controller."
-HOMEPAGE = "http://live.gnome.org/Rygel"
-
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-DEPENDS = "libxml2 glib-2.0 gssdp gupnp gupnp-av gupnp-dlna gstreamer1.0 gstreamer1.0-plugins-base libgee libsoup-2.4 libmediaart-2.0 libunistring sqlite3 intltool-native"
-RDEPENDS:${PN} = "gstreamer1.0-plugins-base-playback shared-mime-info"
-RRECOMMENDS:${PN} = "rygel-plugin-media-export"
-
-inherit gnomebase vala gobject-introspection gettext systemd features_check
-
-# gobject-introspection is mandatory for libmediaart-2.0 and cannot be configured
-REQUIRED_DISTRO_FEATURES = "gobject-introspection-data"
-
-SRC_URI[archive.md5sum] = "7f95401903a3f855b464d5152b9d4c07"
-SRC_URI[archive.sha256sum] = "08c21a577f7bdad26446a75ffa32778b26842c3b1188165f0b19818559747d00"
-
-EXTRA_OECONF = "--disable-tracker-plugin --with-media-engine=gstreamer"
-
-PACKAGECONFIG ?= "external mpris ruih media-export gst-launch"
-
-PACKAGECONFIG:append = "${@bb.utils.contains("DISTRO_FEATURES", "x11", " gtk+3", "", d)}"
-
-PACKAGECONFIG[external] = "--enable-external-plugin,--disable-external-plugin"
-PACKAGECONFIG[mpris] = "--enable-mpris-plugin,--disable-mpris-plugin"
-PACKAGECONFIG[ruih] = "--enable-ruih-plugin,--disable-ruih-plugin"
-PACKAGECONFIG[media-export] = "--enable-media-export-plugin,--disable-media-export-plugin"
-PACKAGECONFIG[gst-launch] = "--enable-gst-launch-plugin,--disable-gst-launch-plugin"
-PACKAGECONFIG[gtk+3] = ",--without-ui,gtk+3"
-PACKAGECONFIG[lms] = "--enable-lms-plugin,--disable-lms-plugin"
-
-LIBV = "2.6"
-
-do_install:append() {
-       # Remove .la files for loadable modules
-       rm -f ${D}/${libdir}/rygel-${LIBV}/engines/*.la
-       rm -f ${D}/${libdir}/rygel-${LIBV}/plugins/*.la
-       if [ -e ${D}${nonarch_libdir}/systemd/user/rygel.service ]; then
-               mkdir -p ${D}${systemd_unitdir}/system
-               mv ${D}${nonarch_libdir}/systemd/user/rygel.service ${D}${systemd_unitdir}/system
-               rmdir --ignore-fail-on-non-empty ${D}${nonarch_libdir}/systemd/user \
-               ${D}${nonarch_libdir}/systemd \
-               ${D}${nonarch_libdir}
-       fi
-}
-
-FILES:${PN} += "${libdir}/rygel-${LIBV}/engines ${datadir}/dbus-1 ${datadir}/icons"
-FILES:${PN}-dbg += "${libdir}/rygel-${LIBV}/engines/.debug ${libdir}/rygel-${LIBV}/plugins/.debug"
-
-PACKAGES += "${PN}-meta"
-ALLOW_EMPTY:${PN}-meta = "1"
-
-PACKAGES_DYNAMIC = "${PN}-plugin-*"
-
-SYSTEMD_SERVICE:${PN} = "rygel.service"
-
-python populate_packages:prepend () {
-    rygel_libdir = d.expand('${libdir}/rygel-${LIBV}')
-    postinst = d.getVar('plugin_postinst')
-    pkgs = []
-
-    pkgs += do_split_packages(d, oe.path.join(rygel_libdir, "plugins"), r'librygel-(.*)\.so$', d.expand('${PN}-plugin-%s'), 'Rygel plugin for %s', postinst=postinst, extra_depends=d.expand('${PN}'))
-    pkgs += do_split_packages(d, oe.path.join(rygel_libdir, "plugins"), r'(.*)\.plugin$', d.expand('${PN}-plugin-%s'), 'Rygel plugin for %s', postinst=postinst, extra_depends=d.expand('${PN}'))
-
-    metapkg = d.getVar('PN') + '-meta'
-    d.setVar('RDEPENDS:' + metapkg, ' '.join(pkgs))
-}
diff --git a/meta-openembedded/meta-multimedia/recipes-connectivity/rygel/rygel_0.40.4.bb b/meta-openembedded/meta-multimedia/recipes-connectivity/rygel/rygel_0.40.4.bb
new file mode 100644
index 0000000..8bc8767
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-connectivity/rygel/rygel_0.40.4.bb
@@ -0,0 +1,93 @@
+SUMMARY = "A UPnP AV media server and renderer"
+DESCRIPTION = "Rygel is a home media solution (UPnP AV MediaServer) that \
+allow you to easily share audio, video and pictures to other devices. \
+Additionally, media player software may use Rygel to become a MediaRenderer \
+that may be controlled remotely by a UPnP or DLNA Controller."
+HOMEPAGE = "http://live.gnome.org/Rygel"
+
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+DEPENDS = "libxml2 glib-2.0 gssdp gupnp gupnp-av gupnp-dlna gstreamer1.0 \
+           gstreamer1.0-plugins-base libgee libsoup-2.4 libmediaart-2.0 \
+           libunistring sqlite3 intltool-native gst-editing-services"
+
+RDEPENDS:${PN} = "gstreamer1.0-plugins-base-playback shared-mime-info"
+RRECOMMENDS:${PN} = "rygel-plugin-media-export"
+
+inherit gnomebase features_check vala gobject-introspection gettext systemd meson
+
+# gobject-introspection is mandatory for libmediaart-2.0 and cannot be configured
+REQUIRED_DISTRO_FEATURES = "gobject-introspection-data"
+
+SRC_URI[archive.sha256sum] = "736d8adbe8615f6cbc8fcfff9845dc985fd10e16629da236b4b52dbedf0a348b"
+
+GNOMEBASEBUILDCLASS = "meson"
+GIR_MESON_ENABLE_FLAG = 'enabled'
+GIR_MESON_DISABLE_FLAG = 'disabled'
+
+EXTRA_OEMESON = "-Dengines=gstreamer -Dplugins=${@strip_comma('${RYGEL_PLUGINS}')}"
+PACKAGECONFIG:append = "${@bb.utils.contains("DISTRO_FEATURES", "x11", " gtk+3", "", d)}"
+
+PACKAGECONFIG ?= "external mpris ruih media-export gst-launch"
+
+PACKAGECONFIG[external] = ""
+PACKAGECONFIG[mpris] = ""
+PACKAGECONFIG[ruih] = ""
+PACKAGECONFIG[media-export] = ""
+PACKAGECONFIG[gst-launch] = ""
+PACKAGECONFIG[lms] = ""
+PACKAGECONFIG[tracker3] = ""
+PACKAGECONFIG[gtk+3] = ",-Dgtk=false,gtk+3"
+
+RYGEL_PLUGINS = ""
+RYGEL_PLUGINS:append ="${@bb.utils.contains('PACKAGECONFIG', 'external', ',external', '', d)}"
+RYGEL_PLUGINS:append ="${@bb.utils.contains('PACKAGECONFIG', 'mpris', ',mpris', '', d)}"
+RYGEL_PLUGINS:append ="${@bb.utils.contains('PACKAGECONFIG', 'ruih', ',ruih', '', d)}"
+RYGEL_PLUGINS:append ="${@bb.utils.contains('PACKAGECONFIG', 'gst-launch', ',gst-launch', '', d)}"
+RYGEL_PLUGINS:append ="${@bb.utils.contains('PACKAGECONFIG', 'lms', ',lms', '', d)}"
+RYGEL_PLUGINS:append ="${@bb.utils.contains('PACKAGECONFIG', 'media-export', ',media-export', '', d)}"
+RYGEL_PLUGINS:append ="${@bb.utils.contains('PACKAGECONFIG', 'tracker3', ',tracker3', '', d)}"
+RYGEL_PLUGINS:append ="${@bb.utils.contains('PACKAGECONFIG', 'playbin', ',playbin', '', d)}"
+
+LIBV = "2.6"
+
+CFLAGS:append:toolchain-clang = " -Wno-error=int-conversion"
+
+def strip_comma(s):
+    return s.strip(',')
+
+do_install:append() {
+       # Remove .la files for loadable modules
+       rm -f ${D}/${libdir}/rygel-${LIBV}/engines/*.la
+       rm -f ${D}/${libdir}/rygel-${LIBV}/plugins/*.la
+       if [ -e ${D}${nonarch_libdir}/systemd/user/rygel.service ]; then
+               mkdir -p ${D}${systemd_unitdir}/system
+               mv ${D}${nonarch_libdir}/systemd/user/rygel.service ${D}${systemd_unitdir}/system
+               rmdir --ignore-fail-on-non-empty ${D}${nonarch_libdir}/systemd/user \
+               ${D}${nonarch_libdir}/systemd \
+               ${D}${nonarch_libdir}
+       fi
+}
+
+FILES:${PN} += "${libdir}/rygel-${LIBV}/engines ${datadir}/dbus-1 ${datadir}/icons"
+FILES:${PN}-dbg += "${libdir}/rygel-${LIBV}/engines/.debug ${libdir}/rygel-${LIBV}/plugins/.debug"
+
+PACKAGES += "${PN}-meta"
+ALLOW_EMPTY:${PN}-meta = "1"
+
+PACKAGES_DYNAMIC = "${PN}-plugin-*"
+
+SYSTEMD_SERVICE:${PN} = "rygel.service"
+
+python populate_packages:prepend () {
+    rygel_libdir = d.expand('${libdir}/rygel-${LIBV}')
+    postinst = d.getVar('plugin_postinst')
+    pkgs = []
+
+    pkgs += do_split_packages(d, oe.path.join(rygel_libdir, "plugins"), r'librygel-(.*)\.so$', d.expand('${PN}-plugin-%s'), 'Rygel plugin for %s', postinst=postinst, extra_depends=d.expand('${PN}'))
+    pkgs += do_split_packages(d, oe.path.join(rygel_libdir, "plugins"), r'(.*)\.plugin$', d.expand('${PN}-plugin-%s'), 'Rygel plugin for %s', postinst=postinst, extra_depends=d.expand('${PN}'))
+
+    metapkg = d.getVar('PN') + '-meta'
+    d.setVar('RDEPENDS:' + metapkg, ' '.join(pkgs))
+}
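
The plugin list above is built by appending a comma-prefixed entry for every
enabled PACKAGECONFIG and then trimming the leading comma with strip_comma()
when -Dplugins= is expanded. A small sketch of that expansion, assuming the
recipe's default PACKAGECONFIG selection:

    # Sketch only: mirrors how the RYGEL_PLUGINS:append lines accumulate the
    # value that strip_comma() cleans up for -Dplugins=.
    def strip_comma(s):
        return s.strip(',')

    plugins = ""
    for enabled in ("external", "mpris", "ruih", "gst-launch", "media-export"):
        plugins += "," + enabled   # each bb.utils.contains() contributes ',<name>'

    print(strip_comma(plugins))    # -> external,mpris,ruih,gst-launch,media-export
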
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/aom/aom_3.0.0.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/aom/aom_3.0.0.bb
deleted file mode 100644
index 778dc19..0000000
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/aom/aom_3.0.0.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "Alliance for Open Media - AV1 Codec Library"
-DESCRIPTION = "Alliance for Open Media AV1 codec library"
-
-LICENSE = "BSD-2-Clause & AOM-Patent-License-1.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=6ea91368c1bbdf877159435572b931f5 \
-                    file://PATENTS;md5=e69ad12202bd20da3c76a5d3648cfa83 \
-                   "
-
-SRC_URI = "git://aomedia.googlesource.com/aom;protocol=https;branch=master"
-
-SRCREV = "307ce06ed82d93885ee8ed53e152c9268ac0d98d"
-
-S = "${WORKDIR}/git"
-
-inherit cmake pkgconfig
-DEPENDS = " yasm-native"
-
-EXTRA_OECMAKE = " -DBUILD_SHARED_LIBS=1 -DENABLE_TESTS=0 \
-                  -DPERL_EXECUTABLE=${HOSTTOOLS_DIR}/perl \
-                "
-
-EXTRA_OECMAKE:append:arm = " ${@bb.utils.contains("TUNE_FEATURES","neon","-DENABLE_NEON=ON","-DENABLE_NEON=OFF",d)}"
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/aom/aom_3.4.0.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/aom/aom_3.4.0.bb
new file mode 100644
index 0000000..36db45e
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/aom/aom_3.4.0.bb
@@ -0,0 +1,23 @@
+SUMMARY = "Alliance for Open Media - AV1 Codec Library"
+DESCRIPTION = "Alliance for Open Media AV1 codec library"
+
+LICENSE = "BSD-2-Clause & AOM-Patent-License-1.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=6ea91368c1bbdf877159435572b931f5 \
+                    file://PATENTS;md5=e69ad12202bd20da3c76a5d3648cfa83 \
+                   "
+
+SRC_URI = "git://aomedia.googlesource.com/aom;protocol=https;branch=main"
+
+SRCREV = "fd0c9275d36930a6eea6d3c35972e7cf9c512944"
+
+S = "${WORKDIR}/git"
+
+inherit cmake pkgconfig
+DEPENDS = " yasm-native"
+
+EXTRA_OECMAKE = " -DBUILD_SHARED_LIBS=1 -DENABLE_TESTS=0 \
+                  -DPERL_EXECUTABLE=${HOSTTOOLS_DIR}/perl \
+                "
+
+CFLAGS:append:libc-musl = " -D_GNU_SOURCE"
+EXTRA_OECMAKE:append:arm = " ${@bb.utils.contains("TUNE_FEATURES","neon","-DENABLE_NEON=ON","-DENABLE_NEON=OFF",d)}"
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/dvb-apps_1.1.1.20140321.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/dvb-apps_1.1.1.20140321.bb
new file mode 100644
index 0000000..90a69e5
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/dvb-apps_1.1.1.20140321.bb
@@ -0,0 +1,102 @@
+SUMMARY = "Linux DVB API applications and utilities"
+HOMEPAGE = "http://www.linuxtv.org"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
+
+SRC_URI = "https://www.linuxtv.org/hg/dvb-apps/archive/3d43b280298c.tar.bz2;downloadfilename=${BPN}-3d43b280298c.tar.bz2 \
+          file://dvb-scan-table \
+          file://0001-Fix-generate-keynames.patch \
+          file://0003-handle-static-shared-only-build.patch \
+          file://0004-Makefile-remove-test.patch \
+          file://0005-libucsi-optimization-removal.patch \
+          file://0006-CA_SET_PID.patch \
+          file://0001-dvbdate-Remove-Obsoleted-stime-API-calls.patch \
+          "
+SRC_URI[sha256sum] = "f39e2f0ebed7e32bce83522062ad4d414f67fccd5df1b647618524497e15e057"
+S = "${WORKDIR}/${BPN}-3d43b280298c"
+
+inherit perlnative
+
+export enable_static="no"
+
+export PERL_USE_UNSAFE_INC = "1"
+
+do_configure() {
+    sed -i -e s:/usr/include:${STAGING_INCDIR}:g util/av7110_loadkeys/generate-keynames.sh
+}
+do_install() {
+    make DESTDIR=${D} install
+    install -d ${D}/${bindir}
+    install -d ${D}/${docdir}/dvb-apps
+    install -d ${D}/${docdir}/dvb-apps/scan
+    install -d ${D}/${docdir}/dvb-apps/szap
+    chmod a+rx ${D}/${libdir}/*.so*
+    cp -R --no-dereference --preserve=mode,links ${S}/util/szap/channels-conf* ${D}/${docdir}/dvb-apps/szap/
+    cp -R --no-dereference --preserve=mode,links ${S}/util/szap/README   ${D}/${docdir}/dvb-apps/szap/
+    cp -R --no-dereference --preserve=mode,links ${WORKDIR}/dvb-scan-table/* ${D}/usr/share/dvb
+}
+
+PACKAGES =+ "dvb-evtest dvb-evtest-dbg \
+             dvbapp-tests dvbapp-tests-dbg \
+             dvbdate dvbdate-dbg \
+             dvbtraffic dvbtraffic-dbg \
+             dvbnet dvbnet-dbg \
+             dvb-scan dvb-scan-dbg dvb-scan-data \
+             dvb-azap dvb-azap-dbg \
+             dvb-czap dvb-czap-dbg \
+             dvb-szap dvb-szap-dbg \
+             dvb-tzap dvb-tzap-dbg \
+             dvb-femon dvb-femon-dbg \
+             dvb-zap-data"
+PACKAGES =+ "libdvbapi libdvbcfg libdvben50221 \
+            libesg libucsi libdvbsec"
+
+RDEPENDS:dvbdate =+ "libdvbapi libucsi"
+RDEPENDS:dvbtraffic =+ "libdvbapi"
+RDEPENDS:dvb-scan =+ "libdvbapi libdvbcfg libdvbsec"
+RDEPENDS:dvb-apps =+ "libdvbapi libdvbcfg libdvbsec libdvben50221 libucsi"
+RDEPENDS:dvb-femon =+ "libdvbapi"
+RDEPENDS:dvbnet =+ "libdvbapi"
+
+RCONFLICTS:dvb-evtest = "evtest"
+
+FILES:${PN} = "${bindir} ${datadir}/dvb"
+FILES:${PN}-doc = ""
+FILES:${PN}-dev = "${includedir}"
+FILES:dvb-evtest = "${bindir}/evtest"
+FILES:dvb-evtest-dbg = "${bindir}/.debug/evtest"
+FILES:dvbapp-tests = "${bindir}/*test* "
+FILES:dvbapp-tests-dbg = "${bindir}/.debug/*test*"
+FILES:dvbdate = "${bindir}/dvbdate"
+FILES:dvbdate-dbg = "${bindir}/.debug/dvbdate"
+FILES:dvbtraffic = "${bindir}/dvbtraffic"
+FILES:dvbtraffic-dbg = "${bindir}/.debug/dvbtraffic"
+FILES:dvbnet = "${bindir}/dvbnet"
+FILES:dvbnet-dbg = "${bindir}/.debug/dvbnet"
+FILES:dvb-scan = "${bindir}/*scan "
+FILES:dvb-scan-dbg = "${bindir}/.debug/*scan"
+FILES:dvb-scan-data = "${docdir}/dvb-apps/scan"
+FILES:dvb-azap = "${bindir}/azap"
+FILES:dvb-azap-dbg = "${bindir}/.debug/azap"
+FILES:dvb-czap = "${bindir}/czap"
+FILES:dvb-czap-dbg = "${bindir}/.debug/czap"
+FILES:dvb-szap = "${bindir}/szap"
+FILES:dvb-szap-dbg = "${bindir}/.debug/szap"
+FILES:dvb-tzap = "${bindir}/tzap"
+FILES:dvb-tzap-dbg = "${bindir}/.debug/tzap"
+FILES:dvb-femon = "${bindir}/femon"
+FILES:dvb-femon-dbg = "${bindir}/.debug/femon"
+FILES:dvb-zap-data = "${docdir}/dvb-apps/szap"
+
+python populate_packages:prepend () {
+    dvb_libdir = bb.data.expand('${libdir}', d)
+    do_split_packages(d, dvb_libdir, r'^lib(.*)\.so$', 'lib%s', 'DVB %s package', extra_depends='', allow_links=True)
+    do_split_packages(d, dvb_libdir, r'^lib(.*)\.la$', 'lib%s-dev', 'DVB %s development package', extra_depends='${PN}-dev')
+    do_split_packages(d, dvb_libdir, r'^lib(.*)\.a$', 'lib%s-dev', 'DVB %s development package', extra_depends='${PN}-dev')
+    do_split_packages(d, dvb_libdir, r'^lib(.*)\.so\.*', 'lib%s', 'DVB %s library', extra_depends='', allow_links=True)
+}
+
+INSANE_SKIP:${PN} = "ldflags"
+INSANE_SKIP:${PN}-dev = "ldflags"
+
+TARGET_CC_ARCH += "${LDFLAGS}"
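
The populate_packages hook above calls do_split_packages() four times over
${libdir}: plain and versioned shared objects become per-library runtime
packages, while .la and .a files land in matching -dev packages. A rough
sketch of how the versioned-library pattern resolves file names to package
names (the file names are assumptions for illustration):

    import re

    # The pattern is not end-anchored, so it also matches versioned names such
    # as libdvbapi.so.0; group(1) feeds the 'lib%s' package name template.
    versioned = re.compile(r'^lib(.*)\.so\.*')
    for name in ('libdvbapi.so', 'libdvbapi.so.0', 'libucsi.so.0'):
        m = versioned.match(name)
        if m:
            print('lib%s' % m.group(1))   # -> libdvbapi, libdvbapi, libucsi
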
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/dvb-apps_1.1.1.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/dvb-apps_1.1.1.bb
deleted file mode 100644
index 98970d5..0000000
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/dvb-apps_1.1.1.bb
+++ /dev/null
@@ -1,103 +0,0 @@
-SUMMARY = "Linux DVB API applications and utilities"
-HOMEPAGE = "http://www.linuxtv.org"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-SRCREV = "3d43b280298c39a67d1d889e01e173f52c12da35"
-
-SRC_URI = "hg://linuxtv.org/hg;module=dvb-apps;protocol=http \
-          file://dvb-scan-table \
-          file://0001-Fix-generate-keynames.patch \
-          file://0003-handle-static-shared-only-build.patch \
-          file://0004-Makefile-remove-test.patch \
-          file://0005-libucsi-optimization-removal.patch \
-          file://0006-CA_SET_PID.patch \
-          file://0001-dvbdate-Remove-Obsoleted-stime-API-calls.patch \
-          "
-
-S = "${WORKDIR}/${BPN}"
-
-inherit perlnative
-
-export enable_static="no"
-
-export PERL_USE_UNSAFE_INC = "1"
-
-do_configure() {
-    sed -i -e s:/usr/include:${STAGING_INCDIR}:g util/av7110_loadkeys/generate-keynames.sh
-}
-do_install() {
-    make DESTDIR=${D} install
-    install -d ${D}/${bindir}
-    install -d ${D}/${docdir}/dvb-apps
-    install -d ${D}/${docdir}/dvb-apps/scan
-    install -d ${D}/${docdir}/dvb-apps/szap
-    chmod a+rx ${D}/${libdir}/*.so*
-    cp -R --no-dereference --preserve=mode,links ${S}/util/szap/channels-conf* ${D}/${docdir}/dvb-apps/szap/
-    cp -R --no-dereference --preserve=mode,links ${S}/util/szap/README   ${D}/${docdir}/dvb-apps/szap/
-    cp -R --no-dereference --preserve=mode,links ${WORKDIR}/dvb-scan-table/* ${D}/usr/share/dvb
-}
-
-PACKAGES =+ "dvb-evtest dvb-evtest-dbg \
-             dvbapp-tests dvbapp-tests-dbg \
-             dvbdate dvbdate-dbg \
-             dvbtraffic dvbtraffic-dbg \
-             dvbnet dvbnet-dbg \
-             dvb-scan dvb-scan-dbg dvb-scan-data \
-             dvb-azap dvb-azap-dbg \
-             dvb-czap dvb-czap-dbg \
-             dvb-szap dvb-szap-dbg \
-             dvb-tzap dvb-tzap-dbg \
-             dvb-femon dvb-femon-dbg \
-             dvb-zap-data"
-PACKAGES =+ "libdvbapi libdvbcfg libdvben50221 \
-            libesg libucsi libdvbsec"
-
-RDEPENDS:dvbdate =+ "libdvbapi libucsi"
-RDEPENDS:dvbtraffic =+ "libdvbapi"
-RDEPENDS:dvb-scan =+ "libdvbapi libdvbcfg libdvbsec"
-RDEPENDS:dvb-apps =+ "libdvbapi libdvbcfg libdvbsec libdvben50221 libucsi"
-RDEPENDS:dvb-femon =+ "libdvbapi"
-RDEPENDS:dvbnet =+ "libdvbapi"
-
-RCONFLICTS:dvb-evtest = "evtest"
-
-FILES:${PN} = "${bindir} ${datadir}/dvb"
-FILES:${PN}-doc = ""
-FILES:${PN}-dev = "${includedir}"
-FILES:dvb-evtest = "${bindir}/evtest"
-FILES:dvb-evtest-dbg = "${bindir}/.debug/evtest"
-FILES:dvbapp-tests = "${bindir}/*test* "
-FILES:dvbapp-tests-dbg = "${bindir}/.debug/*test*"
-FILES:dvbdate = "${bindir}/dvbdate"
-FILES:dvbdate-dbg = "${bindir}/.debug/dvbdate"
-FILES:dvbtraffic = "${bindir}/dvbtraffic"
-FILES:dvbtraffic-dbg = "${bindir}/.debug/dvbtraffic"
-FILES:dvbnet = "${bindir}/dvbnet"
-FILES:dvbnet-dbg = "${bindir}/.debug/dvbnet"
-FILES:dvb-scan = "${bindir}/*scan "
-FILES:dvb-scan-dbg = "${bindir}/.debug/*scan"
-FILES:dvb-scan-data = "${docdir}/dvb-apps/scan"
-FILES:dvb-azap = "${bindir}/azap"
-FILES:dvb-azap-dbg = "${bindir}/.debug/azap"
-FILES:dvb-czap = "${bindir}/czap"
-FILES:dvb-czap-dbg = "${bindir}/.debug/czap"
-FILES:dvb-szap = "${bindir}/szap"
-FILES:dvb-szap-dbg = "${bindir}/.debug/szap"
-FILES:dvb-tzap = "${bindir}/tzap"
-FILES:dvb-tzap-dbg = "${bindir}/.debug/tzap"
-FILES:dvb-femon = "${bindir}/femon"
-FILES:dvb-femon-dbg = "${bindir}/.debug/femon"
-FILES:dvb-zap-data = "${docdir}/dvb-apps/szap"
-
-python populate_packages:prepend () {
-    dvb_libdir = bb.data.expand('${libdir}', d)
-    do_split_packages(d, dvb_libdir, r'^lib(.*)\.so$', 'lib%s', 'DVB %s package', extra_depends='', allow_links=True)
-    do_split_packages(d, dvb_libdir, r'^lib(.*)\.la$', 'lib%s-dev', 'DVB %s development package', extra_depends='${PN}-dev')
-    do_split_packages(d, dvb_libdir, r'^lib(.*)\.a$', 'lib%s-dev', 'DVB %s development package', extra_depends='${PN}-dev')
-    do_split_packages(d, dvb_libdir, r'^lib(.*)\.so\.*', 'lib%s', 'DVB %s library', extra_depends='', allow_links=True)
-}
-
-INSANE_SKIP:${PN} = "ldflags"
-INSANE_SKIP:${PN}-dev = "ldflags"
-
-TARGET_CC_ARCH += "${LDFLAGS}"
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/files/0001-dvbdate-Remove-Obsoleted-stime-API-calls.patch b/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/files/0001-dvbdate-Remove-Obsoleted-stime-API-calls.patch
index 9035b56..e89f9a3 100644
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/files/0001-dvbdate-Remove-Obsoleted-stime-API-calls.patch
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/dvb-apps/files/0001-dvbdate-Remove-Obsoleted-stime-API-calls.patch
@@ -11,22 +11,17 @@
  util/dvbdate/dvbdate.c | 5 ++++-
  1 file changed, 4 insertions(+), 1 deletion(-)
 
-diff --git a/util/dvbdate/dvbdate.c b/util/dvbdate/dvbdate.c
-index f0df437..492ed79 100644
 --- a/util/dvbdate/dvbdate.c
 +++ b/util/dvbdate/dvbdate.c
-@@ -309,7 +309,10 @@ int atsc_scan_date(time_t *rx_time, unsigned int to)
+@@ -309,7 +309,10 @@ int atsc_scan_date(time_t *rx_time, unsi
   */
  int set_time(time_t * new_time)
  {
 -	if (stime(new_time)) {
-+	struct timespec ts;
-+	ts.tv_sec = &new_time;
-+	ts.tv_nsec = 0;
-+	if (clock_settime(CLOCK_REALTIME, &ts)) {
++	struct timespec s = {0};
++	s.tv_sec = *new_time;
++
++	if (clock_settime(CLOCK_REALTIME, &s)) {
  		perror("Unable to set time");
  		return -1;
  	}
--- 
-2.24.1
-
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/musicbrainz/libmusicbrainz/0001-http-fetch-Pass-a-non-null-buffer-to-ne_set_request_.patch b/meta-openembedded/meta-multimedia/recipes-multimedia/musicbrainz/libmusicbrainz/0001-http-fetch-Pass-a-non-null-buffer-to-ne_set_request_.patch
new file mode 100644
index 0000000..1fae376
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/musicbrainz/libmusicbrainz/0001-http-fetch-Pass-a-non-null-buffer-to-ne_set_request_.patch
@@ -0,0 +1,50 @@
+From 06b2a6aa70616aafab780514d9d26e85bd98d965 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 25 Aug 2022 14:02:16 -0700
+Subject: [PATCH] http/fetch: Pass a non-null buffer to
+ ne_set_request_body_buffer API
+
+Newer versions of neon have added a check for non-null arguments to the
+ne_set_request_body_buffer() API. Older compilers only treat -Wnonnull
+as a warning, so this went unnoticed, but gcc 12.2 has started to treat
+the warning as an error by default and builds are now breaking.
+
+Fixes
+src/HTTPFetch.cc:186:38: warning: null passed to a callee that requires a non-null argument [-Wnonnull]
+                        ne_set_request_body_buffer(req,0,0);
+                                                       ~  ^
+Upstream-Status: Submitted [https://github.com/metabrainz/libmusicbrainz/pull/18]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/HTTPFetch.cc | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/src/HTTPFetch.cc b/src/HTTPFetch.cc
+index baec359..0c0d919 100644
+--- a/src/HTTPFetch.cc
++++ b/src/HTTPFetch.cc
+@@ -182,8 +182,10 @@ int MusicBrainz5::CHTTPFetch::Fetch(const std::string& URL, const std::string& R
+ 		}
+ 
+ 		ne_request *req = ne_request_create(sess, Request.c_str(), URL.c_str());
++		ne_buffer *body = ne_buffer_create();
++
+ 		if (Request=="PUT")
+-			ne_set_request_body_buffer(req,0,0);
++			ne_set_request_body_buffer(req, body->data, ne_buffer_size(body));
+ 
+ 		if (Request!="GET")
+ 			ne_set_request_flag(req, NE_REQFLAG_IDEMPOTENT, 0);
+@@ -195,6 +197,8 @@ int MusicBrainz5::CHTTPFetch::Fetch(const std::string& URL, const std::string& R
+ 
+ 		Ret=m_d->m_Data.size();
+ 
++		ne_buffer_destroy(body);
++
+ 		ne_request_destroy(req);
+ 
+ 		m_d->m_ErrorMessage = ne_get_error(sess);
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/musicbrainz/libmusicbrainz_git.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/musicbrainz/libmusicbrainz_git.bb
index 803c627..3b36544 100644
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/musicbrainz/libmusicbrainz_git.bb
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/musicbrainz/libmusicbrainz_git.bb
@@ -8,7 +8,9 @@
 PV = "5.1.0+git${SRCPV}"
 
 SRCREV = "8be45b12a86bc0e46f2f836c8ac88e1e98d82aee"
-SRC_URI = "git://github.com/metabrainz/libmusicbrainz.git;branch=master;protocol=https"
+SRC_URI = "git://github.com/metabrainz/libmusicbrainz.git;branch=master;protocol=https \
+           file://0001-http-fetch-Pass-a-non-null-buffer-to-ne_set_request_.patch \
+           "
 
 S = "${WORKDIR}/git"
 
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd-11.8.17/glibc-2.20.patch b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd-11.8.17/glibc-2.20.patch
deleted file mode 100644
index 4a2b25c..0000000
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd-11.8.17/glibc-2.20.patch
+++ /dev/null
@@ -1,10 +0,0 @@
---- libmpd-11.8.17/src/libmpd-internal.h.orig	2014-09-30 04:08:50.963292427 +0200
-+++ libmpd-11.8.17/src/libmpd-internal.h	2014-09-30 04:08:30.595292223 +0200
-@@ -21,6 +21,7 @@
- #define __MPD_INTERNAL_LIB_
- 
- #include "libmpdclient.h"
-+#include "config.h"
- struct _MpdData_real;
- 
- typedef struct _MpdData_real {
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd/0001-fix-return-makes-integer-from-pointer-without-a-cast.patch b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd/0001-fix-return-makes-integer-from-pointer-without-a-cast.patch
new file mode 100644
index 0000000..dd50a71
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd/0001-fix-return-makes-integer-from-pointer-without-a-cast.patch
@@ -0,0 +1,27 @@
+From f0f8cc5ac6f1fa9cb5c98cb0b3688f44c64fa8ee Mon Sep 17 00:00:00 2001
+From: Christian Hesse <mail@eworm.de>
+Date: Wed, 19 Jul 2017 14:22:48 +0200
+Subject: [PATCH 1/3] fix return makes integer from pointer without a cast
+
+Upstream-Status: Pending [https://github.com/archlinux/svntogit-packages/tree/packages/libmpd/trunk]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/libmpd-playlist.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/libmpd-playlist.c b/src/libmpd-playlist.c
+index c3c30ec..64c64ea 100644
+--- a/src/libmpd-playlist.c
++++ b/src/libmpd-playlist.c
+@@ -780,7 +780,7 @@ int mpd_playlist_load(MpdObj *mi, const char *path)
+ 	if(mpd_lock_conn(mi))
+ 	{
+ 		debug_printf(DEBUG_ERROR,"lock failed\n");
+-		return NULL;
++		return MPD_LOCK_FAILED;
+ 	}
+     mpd_sendLoadCommand(mi->connection,path);
+ 	mpd_finishCommand(mi->connection);
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd/0002-fix-comparison-between-pointer-and-zero-character-co.patch b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd/0002-fix-comparison-between-pointer-and-zero-character-co.patch
new file mode 100644
index 0000000..66d921e
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd/0002-fix-comparison-between-pointer-and-zero-character-co.patch
@@ -0,0 +1,27 @@
+From fa3b3b3759986171a85230ba8b53764beafdb37f Mon Sep 17 00:00:00 2001
+From: Christian Hesse <mail@eworm.de>
+Date: Wed, 19 Jul 2017 14:40:00 +0200
+Subject: [PATCH 2/3] fix comparison between pointer and zero character constant
+
+Upstream-Status: Pending [https://github.com/archlinux/svntogit-packages/tree/packages/libmpd/trunk]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/libmpd-database.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/libmpd-database.c b/src/libmpd-database.c
+index 2480d5e..edafc0a 100644
+--- a/src/libmpd-database.c
++++ b/src/libmpd-database.c
+@@ -961,7 +961,7 @@ MpdData * mpd_database_get_directory_recursive(MpdObj *mi, const char *path)
+ 		debug_printf(DEBUG_WARNING,"not connected\n");
+ 		return NULL;
+ 	}
+-	if(path == '\0' || path[0] == '\0')
++	if(path == NULL || path[0] == '\0')
+ 	{
+ 		debug_printf(DEBUG_ERROR, "argumant invalid\n");
+ 		return NULL;
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd/0003-include-config.h.patch b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd/0003-include-config.h.patch
new file mode 100644
index 0000000..805204c
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd/0003-include-config.h.patch
@@ -0,0 +1,26 @@
+From 67eae4f20af9aaaf693025d95a05527a2c1fed1a Mon Sep 17 00:00:00 2001
+From: Christian Hesse <mail@eworm.de>
+Date: Wed, 19 Jul 2017 14:38:43 +0200
+Subject: [PATCH 3/3] include config.h
+
+Upstream-Status: Pending [https://github.com/archlinux/svntogit-packages/tree/packages/libmpd/trunk]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/libmpd-strfsong.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/libmpd-strfsong.c b/src/libmpd-strfsong.c
+index 7d47bed..76fa3ff 100644
+--- a/src/libmpd-strfsong.c
++++ b/src/libmpd-strfsong.c
+@@ -28,6 +28,7 @@
+ #include <unistd.h>
+ #include <string.h>
+ #include <glib.h>
++#include <config.h>
+ #include "libmpd.h"
+ #include "libmpd-internal.h"
+ 
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd_11.8.17.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd_11.8.17.bb
index d5ee395..3a4b3aa 100644
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd_11.8.17.bb
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/libmpd_11.8.17.bb
@@ -5,9 +5,10 @@
 DEPENDS = "glib-2.0"
 
 SRC_URI = "http://www.musicpd.org/download/${BPN}/${PV}/${BP}.tar.gz \
-    file://glibc-2.20.patch \
+           file://0001-fix-return-makes-integer-from-pointer-without-a-cast.patch \
+           file://0002-fix-comparison-between-pointer-and-zero-character-co.patch \
+           file://0003-include-config.h.patch \
 "
-SRC_URI[md5sum] = "5ae3d87467d52aef3345407adb0a2488"
 SRC_URI[sha256sum] = "fe20326b0d10641f71c4673fae637bf9222a96e1712f71f170fca2fc34bf7a83"
 
 inherit autotools pkgconfig
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/mpd_0.23.6.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/mpd_0.23.6.bb
deleted file mode 100644
index c74f107..0000000
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/mpd_0.23.6.bb
+++ /dev/null
@@ -1,101 +0,0 @@
-SUMMARY = "Music Player Daemon"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-
-HOMEPAGE ="http://www.musicpd.org"
-
-inherit meson useradd systemd pkgconfig
-
-DEPENDS += " \
-    curl \
-    sqlite3 \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'pulseaudio', d)} \
-    yajl \
-    boost \
-    icu \
-    dbus \
-    expat \
-    fmt \
-"
-
-SRC_URI = "git://github.com/MusicPlayerDaemon/MPD;branch=v0.23.x;protocol=https \
-           file://mpd.conf.in \
-           "
-SRCREV = "f591193ddaa7f9bcb6c85ff5899517fc7b53e35a"
-S = "${WORKDIR}/git"
-
-EXTRA_OEMESON += "${@bb.utils.contains('DISTRO_FEATURES', 'systemd', '-Dsystemd=enabled -Dsystemd_system_unit_dir=${systemd_system_unitdir} -Dsystemd_user_unit_dir=${systemd_system_unitdir}', '-Dsystemd=disabled', d)}"
-
-PACKAGECONFIG ??= "${@bb.utils.contains("LICENSE_FLAGS_ACCEPTED", "commercial", "aac", "", d)} \
-                   alsa ao bzip2 daemon \
-                   ${@bb.utils.contains("LICENSE_FLAGS_ACCEPTED", "commercial", "ffmpeg aac", "", d)} \
-                   fifo flac fluidsynth iso9660 \
-                   jack libsamplerate httpd \
-                   mms mpg123 modplug sndfile \
-                   upnp openal opus oss recorder \
-                   vorbis wavpack zlib"
-
-PACKAGECONFIG[aac] = "-Dfaad=enabled,-Dfaad=disabled,faad2"
-PACKAGECONFIG[alsa] = "-Dalsa=enabled,-Dalsa=disabled,alsa-lib"
-PACKAGECONFIG[ao] = "-Dao=enabled,-Dao=disabled,libao"
-PACKAGECONFIG[audiofile] = "-Daudiofile=enabled,-Daudiofile=disabled,audiofile"
-PACKAGECONFIG[bzip2] = "-Dbzip2=enabled,-Dbzip2=disabled,bzip2"
-PACKAGECONFIG[cdioparanoia] = "-Dcdio_paranoia=enabled,-Dcdio_paranoia=disabled,libcdio-paranoia"
-PACKAGECONFIG[daemon] = "-Ddaemon=true,-Ddaemon=false"
-PACKAGECONFIG[ffmpeg] = "-Dffmpeg=enabled,-Dffmpeg=disabled,ffmpeg"
-PACKAGECONFIG[fifo] = "-Dfifo=true,-Dfifo=false"
-PACKAGECONFIG[flac] = "-Dflac=enabled,-Dflac=disabled,flac"
-PACKAGECONFIG[fluidsynth] = "-Dfluidsynth=enabled,-Dfluidsynth=disabled,fluidsynth"
-PACKAGECONFIG[httpd] = "-Dhttpd=true,-Dhttpd=false"
-PACKAGECONFIG[id3tag] = "-Did3tag=enabled,-Did3tag=disabled,libid3tag"
-PACKAGECONFIG[iso9660] = "-Diso9660=enabled,-Diso9660=disabled,libcdio"
-PACKAGECONFIG[jack] = "-Djack=enabled,-Djack=disabled,jack"
-PACKAGECONFIG[lame] = "-Dlame=enabled,-Dlame=disabled,lame"
-PACKAGECONFIG[libsamplerate] = "-Dlibsamplerate=enabled,-Dlibsamplerate=disabled,libsamplerate0"
-PACKAGECONFIG[mad] = "-Dmad=enabled,-Dmad=disabled,libmad"
-PACKAGECONFIG[mms] = "-Dmms=enabled,-Dmms=disabled,libmms"
-PACKAGECONFIG[modplug] = "-Dmodplug=enabled,-Dmodplug=disabled,libmodplug"
-PACKAGECONFIG[mpg123] = "-Dmpg123=enabled,-Dmpg123=disabled,mpg123"
-PACKAGECONFIG[openal] = "-Dopenal=enabled,-Dopenal=disabled,openal-soft"
-PACKAGECONFIG[opus] = "-Dopus=enabled,-Dopus=disabled,libopus libogg"
-PACKAGECONFIG[oss] = "-Doss=enabled,-Doss=disabled,"
-PACKAGECONFIG[recorder] = "-Drecorder=true,-Drecorder=false"
-PACKAGECONFIG[smb] = "-Dsmbclient=enabled,-Dsmbclient=disabled,samba"
-PACKAGECONFIG[sndfile] = "-Dsndfile=enabled,-Dsndfile=disabled,libsndfile1"
-PACKAGECONFIG[upnp] = "-Dupnp=pupnp,-Dupnp=disabled,libupnp"
-PACKAGECONFIG[vorbis] = "-Dvorbis=enabled,-Dvorbis=disabled,libvorbis libogg"
-PACKAGECONFIG[wavpack] = "-Dwavpack=enabled,-Dwavpack=disabled,wavpack"
-PACKAGECONFIG[zlib] = "-Dzlib=enabled,-Dzlib=disabled,zlib"
-
-do_install:append() {
-    install -o mpd -d \
-        ${D}/${localstatedir}/lib/mpd \
-        ${D}/${localstatedir}/lib/mpd/playlists
-    install -m775 -o mpd -g mpd -d \
-        ${D}/${localstatedir}/lib/mpd/music
-
-    install -d ${D}/${sysconfdir}
-    install -m 644 ${WORKDIR}/mpd.conf.in ${D}/${sysconfdir}/mpd.conf
-    sed -i \
-        -e 's|%music_directory%|${localstatedir}/lib/mpd/music|' \
-        -e 's|%playlist_directory%|${localstatedir}/lib/mpd/playlists|' \
-        -e 's|%db_file%|${localstatedir}/lib/mpd/mpd.db|' \
-        -e 's|%log_file%|${localstatedir}/log/mpd.log|' \
-        -e 's|%state_file%|${localstatedir}/lib/mpd/state|' \
-        ${D}/${sysconfdir}/mpd.conf
-
-    # we don't need the icon
-    rm -rf ${D}${datadir}/icons
-}
-
-RPROVIDES:${PN} += "${PN}-systemd"
-RREPLACES:${PN} += "${PN}-systemd"
-RCONFLICTS:${PN} += "${PN}-systemd"
-SYSTEMD_SERVICE:${PN} = "mpd.socket"
-
-USERADD_PACKAGES = "${PN}"
-USERADD_PARAM:${PN} = " \
-    --system --no-create-home \
-    --home ${localstatedir}/lib/mpd \
-    --groups audio \
-    --user-group mpd"
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/mpd_0.23.9.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/mpd_0.23.9.bb
new file mode 100644
index 0000000..e63c1b5
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/musicpd/mpd_0.23.9.bb
@@ -0,0 +1,101 @@
+SUMMARY = "Music Player Daemon"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
+
+HOMEPAGE ="http://www.musicpd.org"
+
+inherit meson useradd systemd pkgconfig
+
+DEPENDS += " \
+    curl \
+    sqlite3 \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'pulseaudio', d)} \
+    yajl \
+    boost \
+    icu \
+    dbus \
+    expat \
+    fmt \
+"
+
+SRC_URI = "git://github.com/MusicPlayerDaemon/MPD;branch=v0.23.x;protocol=https \
+           file://mpd.conf.in \
+           "
+SRCREV = "12147f6d5822899cc4316799b494c093b4b47f91"
+S = "${WORKDIR}/git"
+
+EXTRA_OEMESON += "${@bb.utils.contains('DISTRO_FEATURES', 'systemd', '-Dsystemd=enabled -Dsystemd_system_unit_dir=${systemd_system_unitdir} -Dsystemd_user_unit_dir=${systemd_system_unitdir}', '-Dsystemd=disabled', d)}"
+
+PACKAGECONFIG ??= "${@bb.utils.contains("LICENSE_FLAGS_ACCEPTED", "commercial", "aac", "", d)} \
+                   alsa ao bzip2 daemon \
+                   ${@bb.utils.contains("LICENSE_FLAGS_ACCEPTED", "commercial", "ffmpeg aac", "", d)} \
+                   fifo flac fluidsynth iso9660 \
+                   jack libsamplerate httpd \
+                   mms mpg123 modplug sndfile \
+                   upnp openal opus oss recorder \
+                   vorbis wavpack zlib"
+
+PACKAGECONFIG[aac] = "-Dfaad=enabled,-Dfaad=disabled,faad2"
+PACKAGECONFIG[alsa] = "-Dalsa=enabled,-Dalsa=disabled,alsa-lib"
+PACKAGECONFIG[ao] = "-Dao=enabled,-Dao=disabled,libao"
+PACKAGECONFIG[audiofile] = "-Daudiofile=enabled,-Daudiofile=disabled,audiofile"
+PACKAGECONFIG[bzip2] = "-Dbzip2=enabled,-Dbzip2=disabled,bzip2"
+PACKAGECONFIG[cdioparanoia] = "-Dcdio_paranoia=enabled,-Dcdio_paranoia=disabled,libcdio-paranoia"
+PACKAGECONFIG[daemon] = "-Ddaemon=true,-Ddaemon=false"
+PACKAGECONFIG[ffmpeg] = "-Dffmpeg=enabled,-Dffmpeg=disabled,ffmpeg"
+PACKAGECONFIG[fifo] = "-Dfifo=true,-Dfifo=false"
+PACKAGECONFIG[flac] = "-Dflac=enabled,-Dflac=disabled,flac"
+PACKAGECONFIG[fluidsynth] = "-Dfluidsynth=enabled,-Dfluidsynth=disabled,fluidsynth"
+PACKAGECONFIG[httpd] = "-Dhttpd=true,-Dhttpd=false"
+PACKAGECONFIG[id3tag] = "-Did3tag=enabled,-Did3tag=disabled,libid3tag"
+PACKAGECONFIG[iso9660] = "-Diso9660=enabled,-Diso9660=disabled,libcdio"
+PACKAGECONFIG[jack] = "-Djack=enabled,-Djack=disabled,jack"
+PACKAGECONFIG[lame] = "-Dlame=enabled,-Dlame=disabled,lame"
+PACKAGECONFIG[libsamplerate] = "-Dlibsamplerate=enabled,-Dlibsamplerate=disabled,libsamplerate0"
+PACKAGECONFIG[mad] = "-Dmad=enabled,-Dmad=disabled,libmad"
+PACKAGECONFIG[mms] = "-Dmms=enabled,-Dmms=disabled,libmms"
+PACKAGECONFIG[modplug] = "-Dmodplug=enabled,-Dmodplug=disabled,libmodplug"
+PACKAGECONFIG[mpg123] = "-Dmpg123=enabled,-Dmpg123=disabled,mpg123"
+PACKAGECONFIG[openal] = "-Dopenal=enabled,-Dopenal=disabled,openal-soft"
+PACKAGECONFIG[opus] = "-Dopus=enabled,-Dopus=disabled,libopus libogg"
+PACKAGECONFIG[oss] = "-Doss=enabled,-Doss=disabled,"
+PACKAGECONFIG[recorder] = "-Drecorder=true,-Drecorder=false"
+PACKAGECONFIG[smb] = "-Dsmbclient=enabled,-Dsmbclient=disabled,samba"
+PACKAGECONFIG[sndfile] = "-Dsndfile=enabled,-Dsndfile=disabled,libsndfile1"
+PACKAGECONFIG[upnp] = "-Dupnp=pupnp,-Dupnp=disabled,libupnp"
+PACKAGECONFIG[vorbis] = "-Dvorbis=enabled,-Dvorbis=disabled,libvorbis libogg"
+PACKAGECONFIG[wavpack] = "-Dwavpack=enabled,-Dwavpack=disabled,wavpack"
+PACKAGECONFIG[zlib] = "-Dzlib=enabled,-Dzlib=disabled,zlib"
+
+do_install:append() {
+    install -o mpd -d \
+        ${D}/${localstatedir}/lib/mpd \
+        ${D}/${localstatedir}/lib/mpd/playlists
+    install -m775 -o mpd -g mpd -d \
+        ${D}/${localstatedir}/lib/mpd/music
+
+    install -d ${D}/${sysconfdir}
+    install -m 644 ${WORKDIR}/mpd.conf.in ${D}/${sysconfdir}/mpd.conf
+    sed -i \
+        -e 's|%music_directory%|${localstatedir}/lib/mpd/music|' \
+        -e 's|%playlist_directory%|${localstatedir}/lib/mpd/playlists|' \
+        -e 's|%db_file%|${localstatedir}/lib/mpd/mpd.db|' \
+        -e 's|%log_file%|${localstatedir}/log/mpd.log|' \
+        -e 's|%state_file%|${localstatedir}/lib/mpd/state|' \
+        ${D}/${sysconfdir}/mpd.conf
+
+    # we don't need the icon
+    rm -rf ${D}${datadir}/icons
+}
+
+RPROVIDES:${PN} += "${PN}-systemd"
+RREPLACES:${PN} += "${PN}-systemd"
+RCONFLICTS:${PN} += "${PN}-systemd"
+SYSTEMD_SERVICE:${PN} = "mpd.socket"
+
+USERADD_PACKAGES = "${PN}"
+USERADD_PARAM:${PN} = " \
+    --system --no-create-home \
+    --home ${localstatedir}/lib/mpd \
+    --groups audio \
+    --user-group mpd"
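
Several of the defaults above are gated on LICENSE_FLAGS_ACCEPTED through
bb.utils.contains(), which returns its true-value only when every word it is
asked about appears in the variable. A minimal stand-in for that check (the
real helper looks the variable up in the datastore; here the expanded value
is passed in directly, and the sample values are assumptions):

    # Rough stand-in for bb.utils.contains() as used by the PACKAGECONFIG
    # defaults above; the sample inputs are illustrative only.
    def contains(value, checkvalues, truevalue, falsevalue):
        present = set(value.split())
        wanted = set(checkvalues.split())
        return truevalue if wanted.issubset(present) else falsevalue

    print(contains("commercial", "commercial", "ffmpeg aac", ""))  # -> 'ffmpeg aac'
    print(contains("", "commercial", "ffmpeg aac", ""))            # -> ''
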
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire-media-session_0.4.1.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire-media-session_0.4.1.bb
new file mode 100644
index 0000000..9fdb603
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire-media-session_0.4.1.bb
@@ -0,0 +1,25 @@
+SUMMARY = "PipeWire Media Session is an example session manager for PipeWire"
+HOMEPAGE = "https://gitlab.freedesktop.org/pipewire/media-session"
+LICENSE = "MIT"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=97be96ca4fab23e9657ffa590b931c1a"
+
+DEPENDS = " \
+	pipewire \
+	alsa-lib \
+	dbus \
+"
+
+SRC_URI = "git://gitlab.freedesktop.org/pipewire/media-session.git;protocol=https;branch=master"
+
+S = "${WORKDIR}/git"
+SRCREV = "e5d5cf2404786af8bcc40bdb8a2962bef4ec18b6"
+
+inherit meson pkgconfig
+
+FILES:${PN} += " \
+	${systemd_user_unitdir}/pipewire-media-session.service \
+	${datadir}/pipewire/media-session.d/* \
+"
+
+RRECOMMENDS:${PN} += "pipewire"
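
Note: this recipe only builds the session manager itself; it normally lands
in an image indirectly, through the pipewire recipe's "media-session"
PACKAGECONFIG (see the pipewire recipe below), which adds
pipewire-media-session to RDEPENDS. A minimal sketch, assuming an image
recipe fragment of your own (illustrative only):

    # image recipe or local.conf fragment (illustrative)
    IMAGE_INSTALL:append = " pipewire pipewire-media-session"
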
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire/0001-avb-fix-compilation-on-big-endian.patch b/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire/0001-avb-fix-compilation-on-big-endian.patch
new file mode 100644
index 0000000..fc618b4
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire/0001-avb-fix-compilation-on-big-endian.patch
@@ -0,0 +1,53 @@
+From 1a5ec4452fa21592eaeeb823ad95a1db6eb60376 Mon Sep 17 00:00:00 2001
+From: Wim Taymans <wtaymans@redhat.com>
+Date: Tue, 19 Jul 2022 13:49:42 +0200
+Subject: [PATCH 001/113] avb: fix compilation on big endian
+
+Patch-Status: Backport
+
+---
+ src/modules/module-avb/aaf.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/modules/module-avb/aaf.h b/src/modules/module-avb/aaf.h
+index cb4871ca6..b444ce251 100644
+--- a/src/modules/module-avb/aaf.h
++++ b/src/modules/module-avb/aaf.h
+@@ -35,7 +35,7 @@ struct avb_packet_aaf {
+ 	unsigned gv:1;
+ 	unsigned tv:1;
+ 
+-	uint8_t seq_number;
++	uint8_t seq_num;
+ 
+ 	unsigned _r2:7;
+ 	unsigned tu:1;
+diff --git a/src/modules/module-avb/iec61883.h b/src/modules/module-avb/iec61883.h
+index d3b3a7daa..6ca8724ad 100644
+--- a/src/modules/module-avb/iec61883.h
++++ b/src/modules/module-avb/iec61883.h
+@@ -37,7 +37,7 @@ struct avb_packet_iec61883 {
+ 	unsigned gv:1;
+ 	unsigned tv:1;
+ 
+-	uint8_t seq_number;
++	uint8_t seq_num;
+ 
+ 	unsigned _r2:7;
+ 	unsigned tu:1;
+diff --git a/spa/plugins/avb/avbtp/packets.h b/spa/plugins/avb/avbtp/packets.h
+index 7047456bf..3d4a652ee 100644
+--- a/spa/plugins/avb/avbtp/packets.h
++++ b/spa/plugins/avb/avbtp/packets.h
+@@ -116,7 +116,7 @@ struct spa_avbtp_packet_aaf {
+ 	unsigned gv:1;
+ 	unsigned tv:1;
+ 
+-	uint8_t seq_number;
++	uint8_t seq_num;
+ 
+ 	unsigned _r2:7;
+ 	unsigned tu:1;
+-- 
+2.34.1
+
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire/0001-spa-fix-c90-header-include.patch b/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire/0001-spa-fix-c90-header-include.patch
deleted file mode 100644
index ad6448a..0000000
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire/0001-spa-fix-c90-header-include.patch
+++ /dev/null
@@ -1,47 +0,0 @@
-From d3ea3142e1a4de206e616bc18f63a529e6b4986a Mon Sep 17 00:00:00 2001
-From: psykose <alice@ayaya.dev>
-Date: Wed, 13 Apr 2022 21:57:49 +0000
-Subject: [PATCH 001/154] spa: fix c90 header include
-
-placing declarations after code is invalid under ISO c90
-
-Fixes !1211
-
-Patch-Status: Backport
----
- spa/include/spa/utils/string.h | 6 ++++--
- 1 file changed, 4 insertions(+), 2 deletions(-)
-
-diff --git a/spa/include/spa/utils/string.h b/spa/include/spa/utils/string.h
-index e80434537..43d19616c 100644
---- a/spa/include/spa/utils/string.h
-+++ b/spa/include/spa/utils/string.h
-@@ -276,10 +276,11 @@ static inline int spa_scnprintf(char *buffer, size_t size, const char *format, .
- static inline float spa_strtof(const char *str, char **endptr)
- {
- 	static locale_t locale = NULL;
-+	locale_t prev;
- 	float v;
- 	if (SPA_UNLIKELY(locale == NULL))
- 		locale = newlocale(LC_ALL_MASK, "C", NULL);
--	locale_t prev = uselocale(locale);
-+	prev = uselocale(locale);
- 	v = strtof(str, endptr);
- 	uselocale(prev);
- 	return v;
-@@ -319,10 +320,11 @@ static inline bool spa_atof(const char *str, float *val)
- static inline double spa_strtod(const char *str, char **endptr)
- {
- 	static locale_t locale = NULL;
-+	locale_t prev;
- 	double v;
- 	if (SPA_UNLIKELY(locale == NULL))
- 		locale = newlocale(LC_ALL_MASK, "C", NULL);
--	locale_t prev = uselocale(locale);
-+	prev = uselocale(locale);
- 	v = strtod(str, endptr);
- 	uselocale(prev);
- 	return v;
--- 
-2.25.1
-
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire_0.3.50.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire_0.3.50.bb
deleted file mode 100644
index c176c6e..0000000
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire_0.3.50.bb
+++ /dev/null
@@ -1,342 +0,0 @@
-SUMMARY     = "Multimedia processing server for Linux"
-DESCRIPTION = "Linux server for handling and routing audio and video streams between applications and multimedia I/O devices"
-HOMEPAGE    = "https://pipewire.org/"
-BUGTRACKER  = "https://gitlab.freedesktop.org/pipewire/pipewire/issues"
-AUTHOR      = "Wim Taymans <wtaymans@redhat.com>"
-SECTION     = "multimedia"
-
-LICENSE = "MIT & LGPL-2.1-or-later & GPL-2.0-only"
-LIC_FILES_CHKSUM = " \
-    file://LICENSE;md5=2158739e172e58dc9ab1bdd2d6ec9c72 \
-    file://COPYING;md5=97be96ca4fab23e9657ffa590b931c1a \
-"
-
-DEPENDS = "dbus ncurses"
-
-SRCREV = "64cf5e80e6240284e6b757907b900507fe56f1b5"
-SRC_URI = " \
-	git://gitlab.freedesktop.org/pipewire/pipewire.git;branch=master;protocol=https \
-	file://0001-spa-fix-c90-header-include.patch \
-"
-
-S = "${WORKDIR}/git"
-
-inherit meson pkgconfig systemd gettext useradd
-
-USERADD_PACKAGES = "${PN}"
-
-GROUPADD_PARAM:${PN} = "--system pipewire"
-
-USERADD_PARAM:${PN} = "--system --home / --no-create-home \
-                       --comment 'PipeWire multimedia daemon' \
-                       --gid pipewire --groups audio,video \
-                       pipewire"
-
-SYSTEMD_PACKAGES = "${PN}"
-
-# For "EVL", look up https://evlproject.org/ . It involves
-# a specially prepared kernel, and is currently unavailable
-# in Yocto.
-#
-# Vulkan support is currently (as of version 0.3.44) not functional.
-#
-# manpage generation requires xmltoman, which is not available.
-#
-# The session-managers list specifies which session managers Meson
-# shall download (via git clone) and build as subprojects. In OE,
-# this is not how a session manager should be built. Instead, they
-# should be integrated as separate OE recipes. To prevent PipeWire
-# from using this Meson feature, set an empty list.
-# This does not disable support or the need for session managers,
-# it just prevents this subproject feature.
-#
-# AptX and LDAC are not available in OE. Currently, neither
-# are lv2 and ROC.
-#
-# The RTKit module is deprecated in favor of the newer RT module.
-# It still exists for legacy setups that still include it in
-# their PipeWire configuration files.
-EXTRA_OEMESON += " \
-    -Devl=disabled \
-    -Dtests=disabled \
-    -Dudevrulesdir=${nonarch_base_libdir}/udev/rules.d/ \
-    -Dsystemd-system-unit-dir=${systemd_system_unitdir} \
-    -Dsystemd-user-unit-dir=${systemd_user_unitdir} \
-    -Dvulkan=disabled \
-    -Dman=disabled \
-    -Dsession-managers='[]' \
-    -Dlv2=disabled \
-    -Droc=disabled \
-    -Dbluez5-codec-aptx=disabled \
-    -Dbluez5-codec-ldac=disabled \
-    -Dlegacy-rtkit=false \
-"
-
-PACKAGECONFIG:class-target ??= "\
-    ${@bb.utils.contains('DISTRO_FEATURES', 'zeroconf', 'avahi', '', d)} \
-    ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', 'bluez', '', d)} \
-    ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd systemd-system-service', '', d)} \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'alsa', d)} \
-    gstreamer jack libusb pw-cat raop sndfile v4l2 \
-"
-
-# "jack" and "pipewire-jack" packageconfigs cannot be both enabled,
-# since "jack" imports libjack, and "pipewire-jack" generates
-# libjack.so* files, thus colliding with the libpack package. This
-# is why these two are marked in their respective packageconfigs
-# as being in conflict.
-PACKAGECONFIG[alsa] = "-Dalsa=enabled,-Dalsa=disabled,alsa-lib udev"
-PACKAGECONFIG[avahi] = "-Davahi=enabled,-Davahi=disabled,avahi"
-PACKAGECONFIG[bluez] = "-Dbluez5=enabled,-Dbluez5=disabled,bluez5 sbc"
-PACKAGECONFIG[bluez-aac] = "-Dbluez5-codec-aac=enabled,-Dbluez5-codec-aac=disabled,fdk-aac"
-PACKAGECONFIG[docs] = "-Ddocs=enabled,-Ddocs=disabled,doxygen-native graphviz-native"
-PACKAGECONFIG[ffmpeg] = "-Dffmpeg=enabled,-Dffmpeg=disabled,ffmpeg"
-PACKAGECONFIG[gstreamer] = "-Dgstreamer=enabled,-Dgstreamer=disabled,glib-2.0 gstreamer1.0 gstreamer1.0-plugins-base"
-PACKAGECONFIG[jack] = "-Djack=enabled,-Djack=disabled,jack,,,pipewire-jack"
-PACKAGECONFIG[libcamera] = "-Dlibcamera=enabled,-Dlibcamera=disabled,libcamera"
-PACKAGECONFIG[libcanberra] = "-Dlibcanberra=enabled,-Dlibcanberra=disabled,libcanberra"
-PACKAGECONFIG[libusb] = "-Dlibusb=enabled,-Dlibusb=disabled,libusb"
-PACKAGECONFIG[pipewire-alsa] = "-Dpipewire-alsa=enabled,-Dpipewire-alsa=disabled,alsa-lib"
-PACKAGECONFIG[pipewire-jack] = "-Dpipewire-jack=enabled -Dlibjack-path=${libdir}/${PW_MODULE_SUBDIR}/jack,-Dpipewire-jack=disabled,jack,,,jack"
-PACKAGECONFIG[pw-cat] = "-Dpw-cat=enabled,-Dpw-cat=disabled"
-PACKAGECONFIG[raop] = "-Draop=enabled,-Draop=disabled,openssl"
-PACKAGECONFIG[sdl2] = "-Dsdl2=enabled,-Dsdl2=disabled,libsdl2"
-PACKAGECONFIG[sndfile] = "-Dsndfile=enabled,-Dsndfile=disabled,libsndfile1"
-PACKAGECONFIG[systemd] = "-Dsystemd=enabled,-Dsystemd=disabled,systemd"
-PACKAGECONFIG[systemd-system-service] = "-Dsystemd-system-service=enabled,-Dsystemd-system-service=disabled,systemd"
-# "systemd-user-service" packageconfig will only install service
-# files to rootfs but not enable them as systemd.bbclass
-# currently lacks the feature of enabling user services.
-PACKAGECONFIG[systemd-user-service] = "-Dsystemd-user-service=enabled,-Dsystemd-user-service=disabled,systemd"
-# pw-cat needs sndfile packageconfig to be enabled
-PACKAGECONFIG[v4l2] = "-Dv4l2=enabled,-Dv4l2=disabled,udev"
-PACKAGECONFIG[webrtc-echo-cancelling] = "-Decho-cancel-webrtc=enabled,-Decho-cancel-webrtc=disabled,webrtc-audio-processing"
-
-PACKAGESPLITFUNCS:prepend = " split_dynamic_packages "
-PACKAGESPLITFUNCS:append = " set_dynamic_metapkg_rdepends "
-
-SPA_SUBDIR = "spa-0.2"
-PW_MODULE_SUBDIR = "pipewire-0.3"
-
-remove_unused_installed_files() {
-    # jack.conf is used by pipewire-jack (not the JACK SPA plugin).
-    # Remove it if pipewire-jack is not built to avoid creating the
-    # pipewire-jack package.
-    if ${@bb.utils.contains('PACKAGECONFIG', 'pipewire-jack', 'false', 'true', d)}; then
-        rm -f "${D}${datadir}/pipewire/jack.conf"
-    fi
-
-    # minimal.conf is an example of how to minimally configure the
-    # daemon and is not meant to be used for production.
-    rm -f "${D}${datadir}/pipewire/minimal.conf"
-}
-
-do_install[postfuncs] += "remove_unused_installed_files"
-
-python split_dynamic_packages () {
-    # Create packages for each SPA plugin. These plugins are located
-    # in individual subdirectories, so a recursive search is needed.
-    spa_libdir = d.expand('${libdir}/${SPA_SUBDIR}')
-    do_split_packages(d, spa_libdir, r'^libspa-(.*)\.so$', d.expand('${PN}-spa-plugins-%s'), 'PipeWire SPA plugin for %s', extra_depends='', recursive=True)
-
-    # Create packages for each PipeWire module.
-    pw_module_libdir = d.expand('${libdir}/${PW_MODULE_SUBDIR}')
-    do_split_packages(d, pw_module_libdir, r'^libpipewire-module-(.*)\.so$', d.expand('${PN}-modules-%s'), 'PipeWire %s module', extra_depends='', recursive=False)
-}
-
-python set_dynamic_metapkg_rdepends () {
-    import os
-    import oe.utils
-
-    # Go through all generated SPA plugin and PipeWire module packages
-    # (excluding the main package and the -meta package itself) and
-    # add them to the -meta package as RDEPENDS.
-
-    base_pn = d.getVar('PN')
-
-    spa_pn = base_pn + '-spa-plugins'
-    spa_metapkg =  spa_pn + '-meta'
-
-    pw_module_pn = base_pn + '-modules'
-    pw_module_metapkg =  pw_module_pn + '-meta'
-
-    d.setVar('ALLOW_EMPTY:' + spa_metapkg, "1")
-    d.setVar('FILES:' + spa_metapkg, "")
-
-    d.setVar('ALLOW_EMPTY:' + pw_module_metapkg, "1")
-    d.setVar('FILES:' + pw_module_metapkg, "")
-
-    blacklist = [ spa_pn, spa_metapkg, pw_module_pn, pw_module_metapkg ]
-    spa_metapkg_rdepends = []
-    pw_module_metapkg_rdepends = []
-    pkgdest = d.getVar('PKGDEST')
-
-    for pkg in oe.utils.packages_filter_out_system(d):
-        if pkg in blacklist:
-            continue
-
-        is_spa_pkg = pkg.startswith(spa_pn)
-        is_pw_module_pkg = pkg.startswith(pw_module_pn)
-        if not is_spa_pkg and not is_pw_module_pkg:
-            continue
-
-        if pkg in spa_metapkg_rdepends or pkg in pw_module_metapkg_rdepends:
-            continue
-
-        # See if the package is empty by looking at the contents of its
-        # PKGDEST subdirectory. If this subdirectory is empty, then then
-        # package is empty as well. Empty packages do not get added to
-        # the meta package's RDEPENDS.
-        pkgdir = os.path.join(pkgdest, pkg)
-        if os.path.exists(pkgdir):
-            dir_contents = os.listdir(pkgdir) or []
-        else:
-            dir_contents = []
-        is_empty = len(dir_contents) == 0
-        if not is_empty:
-            if is_spa_pkg:
-                spa_metapkg_rdepends.append(pkg)
-            if is_pw_module_pkg:
-                pw_module_metapkg_rdepends.append(pkg)
-
-    d.setVar('RDEPENDS:' + spa_metapkg, ' '.join(spa_metapkg_rdepends))
-    d.setVar('DESCRIPTION:' + spa_metapkg, spa_pn + ' meta package')
-
-    d.setVar('RDEPENDS:' + pw_module_metapkg, ' '.join(pw_module_metapkg_rdepends))
-    d.setVar('DESCRIPTION:' + pw_module_metapkg, pw_module_pn + ' meta package')
-}
-
-PACKAGES =+ "\
-    libpipewire \
-    ${PN}-tools \
-    ${PN}-pulse \
-    ${PN}-alsa \
-    ${PN}-jack \
-    ${PN}-spa-plugins \
-    ${PN}-spa-plugins-meta \
-    ${PN}-spa-tools \
-    ${PN}-modules \
-    ${PN}-modules-meta \
-    ${PN}-alsa-card-profile \
-    ${PN}-v4l2 \
-    gstreamer1.0-pipewire \
-"
-
-PACKAGES_DYNAMIC = "^${PN}-spa-plugins.* ^${PN}-modules.*"
-
-SYSTEMD_SERVICE:${PN} = "${@bb.utils.contains('PACKAGECONFIG', 'systemd-system-service', 'pipewire.service', '', d)}"
-CONFFILES:${PN} += "${datadir}/pipewire/pipewire.conf"
-FILES:${PN} = " \
-    ${datadir}/pipewire/pipewire.conf \
-    ${systemd_system_unitdir}/pipewire.* \
-    ${systemd_user_unitdir}/pipewire.* \
-    ${bindir}/pipewire \
-"
-
-FILES:${PN}-dev += " \
-    ${libdir}/${PW_MODULE_SUBDIR}/jack/libjack*.so \
-"
-
-CONFFILES:libpipewire += "${datadir}/pipewire/client.conf"
-FILES:libpipewire = " \
-    ${datadir}/pipewire/client.conf \
-    ${libdir}/libpipewire-*.so.* \
-"
-# Add the bare minimum modules and plugins required to be able
-# to use libpipewire. Without these, it is essentially unusable.
-RDEPENDS:libpipewire += " \
-    ${PN}-modules-client-node \
-    ${PN}-modules-protocol-native \
-    ${PN}-spa-plugins-support \
-"
-
-FILES:${PN}-tools = " \
-    ${bindir}/pw-cat \
-    ${bindir}/pw-cli \
-    ${bindir}/pw-dot \
-    ${bindir}/pw-dsdplay \
-    ${bindir}/pw-dump \
-    ${bindir}/pw-link \
-    ${bindir}/pw-loopback \
-    ${bindir}/pw-metadata \
-    ${bindir}/pw-mididump \
-    ${bindir}/pw-midiplay \
-    ${bindir}/pw-midirecord \
-    ${bindir}/pw-mon \
-    ${bindir}/pw-play \
-    ${bindir}/pw-profiler \
-    ${bindir}/pw-record \
-    ${bindir}/pw-reserve \
-    ${bindir}/pw-top \
-"
-
-# This is a shim daemon that is intended to be used as a
-# drop-in PulseAudio replacement, providing a pulseaudio-compatible
-# socket that can be used by applications that use libpulse.
-CONFFILES:${PN}-pulse += "${datadir}/pipewire/pipewire-pulse.conf"
-FILES:${PN}-pulse = " \
-    ${datadir}/pipewire/pipewire-pulse.conf \
-    ${systemd_system_unitdir}/pipewire-pulse.* \
-    ${systemd_user_unitdir}/pipewire-pulse.* \
-    ${bindir}/pipewire-pulse \
-"
-RDEPENDS:${PN}-pulse += " \
-    ${PN}-modules-protocol-pulse \
-"
-
-# ALSA plugin to redirect audio to pipewire.
-FILES:${PN}-alsa = "\
-    ${libdir}/alsa-lib/* \
-    ${datadir}/alsa/alsa.conf.d/* \
-"
-
-# JACK drop-in libraries to redirect audio to pipewire.
-CONFFILES:${PN}-jack = "${datadir}/pipewire/jack.conf"
-FILES:${PN}-jack = "\
-    ${bindir}/pw-jack \
-    ${datadir}/pipewire/jack.conf \
-    ${libdir}/${PW_MODULE_SUBDIR}/jack/libjack*.so.* \
-"
-
-# Dynamic SPA plugin packages (see set_dynamic_metapkg_rdepends).
-FILES:${PN}-spa-plugins = ""
-RRECOMMENDS:${PN}-spa-plugins += "${PN}-spa-plugins-meta"
-
-FILES:${PN}-spa-plugins-bluez5 += " \
-    ${datadir}/${SPA_SUBDIR}/bluez5/* \
-"
-
-FILES:${PN}-spa-tools = " \
-    ${bindir}/spa-* \
-"
-
-# Dynamic PipeWire module packages (see set_dynamic_metapkg_rdepends).
-FILES:${PN}-modules = ""
-RRECOMMENDS:${PN}-modules += "${PN}-modules-meta"
-
-CONFFILES:${PN}-modules-rt = "${datadir}/pipewire/client-rt.conf"
-FILES:${PN}-modules-rt += " \
-    ${datadir}/pipewire/client-rt.conf \
-    "
-
-CONFFILES:${PN}-modules-filter-chain = "${datadir}/pipewire/filter-chain/*"
-FILES:${PN}-modules-filter-chain += " \
-    ${datadir}/pipewire/filter-chain/* \
-"
-
-FILES:${PN}-alsa-card-profile = " \
-    ${datadir}/alsa-card-profile/* \
-    ${nonarch_base_libdir}/udev/rules.d/90-pipewire-alsa.rules \
-"
-
-# V4L2 interface emulator for sending/receiving data between PipeWire and V4L2 applications.
-FILES:${PN}-v4l2 += " \
-    ${bindir}/pw-v4l2 \
-    ${libdir}/${PW_MODULE_SUBDIR}/v4l2/libpw-v4l2.so \
-"
-
-FILES:gstreamer1.0-pipewire = " \
-    ${libdir}/gstreamer-1.0/* \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire_0.3.56.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire_0.3.56.bb
new file mode 100644
index 0000000..feefe7c
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/pipewire/pipewire_0.3.56.bb
@@ -0,0 +1,367 @@
+SUMMARY     = "Multimedia processing server for Linux"
+DESCRIPTION = "Linux server for handling and routing audio and video streams between applications and multimedia I/O devices"
+HOMEPAGE    = "https://pipewire.org/"
+BUGTRACKER  = "https://gitlab.freedesktop.org/pipewire/pipewire/issues"
+AUTHOR      = "Wim Taymans <wtaymans@redhat.com>"
+SECTION     = "multimedia"
+
+LICENSE = "MIT & LGPL-2.1-or-later & GPL-2.0-only"
+LIC_FILES_CHKSUM = " \
+    file://LICENSE;md5=2158739e172e58dc9ab1bdd2d6ec9c72 \
+    file://COPYING;md5=97be96ca4fab23e9657ffa590b931c1a \
+"
+
+DEPENDS = "dbus ncurses"
+
+SRCREV = "f274e53d25ee8f483ac6fce9e516bb1830abe88b"
+SRC_URI = " \
+	git://gitlab.freedesktop.org/pipewire/pipewire.git;branch=master;protocol=https \
+	file://0001-avb-fix-compilation-on-big-endian.patch \
+"
+
+S = "${WORKDIR}/git"
+
+inherit meson pkgconfig systemd gettext useradd
+
+USERADD_PACKAGES = "${PN}"
+
+GROUPADD_PARAM:${PN} = "--system pipewire"
+
+USERADD_PARAM:${PN} = "--system --home / --no-create-home \
+                       --comment 'PipeWire multimedia daemon' \
+                       --gid pipewire --groups audio,video \
+                       pipewire"
+
+SYSTEMD_PACKAGES = "${PN}"
+
+# For "EVL", look up https://evlproject.org/ . It involves
+# a specially prepared kernel, and is currently unavailable
+# in Yocto.
+#
+# Vulkan support is currently (as of version 0.3.44) not functional.
+#
+# manpage generation requires xmltoman, which is not available.
+#
+# The session-managers list specifies which session managers Meson
+# shall download (via git clone) and build as subprojects. In OE,
+# this is not how a session manager should be built. Instead, they
+# should be integrated as separate OE recipes. To prevent PipeWire
+# from using this Meson feature, set an empty list.
+# This does not disable support or the need for session managers,
+# it just prevents this subproject feature.
+#
+# AptX and LDAC are not available in OE. Currently, neither
+# are lv2 and ROC.
+#
+# The RTKit module is deprecated in favor of the newer RT module.
+# It still exists for legacy setups that still include it in
+# their PipeWire configuration files.
+EXTRA_OEMESON += " \
+    -Devl=disabled \
+    -Dtests=disabled \
+    -Dudevrulesdir=${nonarch_base_libdir}/udev/rules.d/ \
+    -Dsystemd-system-unit-dir=${systemd_system_unitdir} \
+    -Dsystemd-user-unit-dir=${systemd_user_unitdir} \
+    -Dman=disabled \
+    -Dsession-managers='[]' \
+    -Dlv2=disabled \
+    -Droc=disabled \
+    -Dbluez5-codec-aptx=disabled \
+    -Dbluez5-codec-ldac=disabled \
+    -Dlegacy-rtkit=false \
+"
+
+# The SPA ALSA plugin code uses typedef redefinition, which is officially a C11 feature.
+# PipeWire builds with 'c_std=gnu99' by default. Recent versions of gcc don't issue this
+# warning in gnu99 mode, but it looks like clang still does.
+CFLAGS:append = " -Wno-typedef-redefinition"
+
+# According to the WirePlumber documentation, only one session manager should be installed at a time.
+# Possible options are media-session, which has fewer dependencies but is very simple,
+# or wireplumber, which is more powerful.
+PIPEWIRE_SESSION_MANAGER ??= "media-session"
+
+FFMPEG_AVAILABLE = "${@bb.utils.contains('LICENSE_FLAGS_ACCEPTED', 'commercial', 'ffmpeg', '', d)}"
+BLUETOOTH_AAC = "${@bb.utils.contains('LICENSE_FLAGS_ACCEPTED', 'commercial', 'bluez-aac', '', d)}"
+
+PACKAGECONFIG:class-target ??= " \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'zeroconf', 'avahi', '', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', 'bluez ${BLUETOOTH_AAC}', '', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd systemd-system-service systemd-user-service', '', d)} \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'alsa vulkan pulseaudio', d)} \
+    ${PIPEWIRE_SESSION_MANAGER} \
+    ${FFMPEG_AVAILABLE} gstreamer jack libusb pw-cat raop sndfile v4l2 udev volume \
+"
+
+# The "jack" and "pipewire-jack" packageconfigs cannot both be enabled,
+# since "jack" imports libjack, and "pipewire-jack" generates
+# libjack.so* files, thus colliding with the libjack package. This
+# is why these two are marked in their respective packageconfigs
+# as being in conflict.
+PACKAGECONFIG[alsa] = "-Dalsa=enabled,-Dalsa=disabled,alsa-lib udev,,pipewire-alsa pipewire-alsa-card-profile"
+PACKAGECONFIG[avahi] = "-Davahi=enabled,-Davahi=disabled,avahi"
+PACKAGECONFIG[bluez] = "-Dbluez5=enabled,-Dbluez5=disabled,bluez5 sbc"
+PACKAGECONFIG[bluez-aac] = "-Dbluez5-codec-aac=enabled,-Dbluez5-codec-aac=disabled,fdk-aac"
+PACKAGECONFIG[docs] = "-Ddocs=enabled,-Ddocs=disabled,doxygen-native graphviz-native"
+PACKAGECONFIG[ffmpeg] = "-Dffmpeg=enabled,-Dffmpeg=disabled,ffmpeg"
+PACKAGECONFIG[gstreamer] = "-Dgstreamer=enabled,-Dgstreamer=disabled,glib-2.0 gstreamer1.0 gstreamer1.0-plugins-base,,gstreamer1.0-pipewire"
+PACKAGECONFIG[jack] = "-Djack=enabled,-Djack=disabled,jack,,,pipewire-jack"
+PACKAGECONFIG[libcamera] = "-Dlibcamera=enabled,-Dlibcamera=disabled,libcamera libdrm"
+PACKAGECONFIG[libcanberra] = "-Dlibcanberra=enabled,-Dlibcanberra=disabled,libcanberra"
+PACKAGECONFIG[libusb] = "-Dlibusb=enabled,-Dlibusb=disabled,libusb"
+PACKAGECONFIG[media-session] = ",,,pipewire-media-session,,wireplumber"
+PACKAGECONFIG[pulseaudio] = "-Dlibpulse=enabled,-Dlibpulse=disabled,pulseaudio,,pipewire-pulse"
+PACKAGECONFIG[pipewire-alsa] = "-Dpipewire-alsa=enabled,-Dpipewire-alsa=disabled,alsa-lib"
+PACKAGECONFIG[pipewire-jack] = "-Dpipewire-jack=enabled -Dlibjack-path=${libdir}/${PW_MODULE_SUBDIR}/jack,-Dpipewire-jack=disabled,jack,,pipewire-jack,jack"
+PACKAGECONFIG[pw-cat] = "-Dpw-cat=enabled,-Dpw-cat=disabled"
+PACKAGECONFIG[raop] = "-Draop=enabled,-Draop=disabled,openssl"
+PACKAGECONFIG[sdl2] = "-Dsdl2=enabled,-Dsdl2=disabled,libsdl2"
+PACKAGECONFIG[sndfile] = "-Dsndfile=enabled,-Dsndfile=disabled,libsndfile1"
+PACKAGECONFIG[systemd] = "-Dsystemd=enabled,-Dsystemd=disabled,systemd"
+PACKAGECONFIG[systemd-system-service] = "-Dsystemd-system-service=enabled,-Dsystemd-system-service=disabled,systemd"
+# "systemd-user-service" packageconfig will only install service
+# files to rootfs but not enable them as systemd.bbclass
+# currently lacks the feature of enabling user services.
+PACKAGECONFIG[systemd-user-service] = "-Dsystemd-user-service=enabled,-Dsystemd-user-service=disabled,systemd"
+# pw-cat needs sndfile packageconfig to be enabled
+PACKAGECONFIG[udev] = "-Dudev=enabled,-Dudev=disabled,udev"
+PACKAGECONFIG[v4l2] = "-Dv4l2=enabled,-Dv4l2=disabled,udev"
+PACKAGECONFIG[volume] = "-Dvolume=enabled,-Dvolume=disabled"
+PACKAGECONFIG[vulkan] = "-Dvulkan=enabled,-Dvulkan=disabled,vulkan-headers vulkan-loader"
+PACKAGECONFIG[webrtc-echo-cancelling] = "-Decho-cancel-webrtc=enabled,-Decho-cancel-webrtc=disabled,webrtc-audio-processing"
+PACKAGECONFIG[wireplumber] = ",,,wireplumber,,media-session"
+
+PACKAGESPLITFUNCS:prepend = " split_dynamic_packages "
+PACKAGESPLITFUNCS:append = " set_dynamic_metapkg_rdepends "
+
+SPA_SUBDIR = "spa-0.2"
+PW_MODULE_SUBDIR = "pipewire-0.3"
+
+remove_unused_installed_files() {
+    # jack.conf is used by pipewire-jack (not the JACK SPA plugin).
+    # Remove it if pipewire-jack is not built to avoid creating the
+    # pipewire-jack package.
+    if ${@bb.utils.contains('PACKAGECONFIG', 'pipewire-jack', 'false', 'true', d)}; then
+        rm -f "${D}${datadir}/pipewire/jack.conf"
+    fi
+
+    # minimal.conf is an example of how to minimally configure the
+    # daemon and is not meant to be used for production.
+    rm -f "${D}${datadir}/pipewire/minimal.conf"
+}
+
+do_install[postfuncs] += "remove_unused_installed_files"
+
+python split_dynamic_packages () {
+    # Create packages for each SPA plugin. These plugins are located
+    # in individual subdirectories, so a recursive search is needed.
+    spa_libdir = d.expand('${libdir}/${SPA_SUBDIR}')
+    do_split_packages(d, spa_libdir, r'^libspa-(.*)\.so$', d.expand('${PN}-spa-plugins-%s'), 'PipeWire SPA plugin for %s', extra_depends='', recursive=True)
+
+    # Create packages for each PipeWire module.
+    pw_module_libdir = d.expand('${libdir}/${PW_MODULE_SUBDIR}')
+    do_split_packages(d, pw_module_libdir, r'^libpipewire-module-(.*)\.so$', d.expand('${PN}-modules-%s'), 'PipeWire %s module', extra_depends='', recursive=False)
+}
+
+python set_dynamic_metapkg_rdepends () {
+    import os
+    import oe.utils
+
+    # Go through all generated SPA plugin and PipeWire module packages
+    # (excluding the main package and the -meta package itself) and
+    # add them to the -meta package as RDEPENDS.
+
+    base_pn = d.getVar('PN')
+
+    spa_pn = base_pn + '-spa-plugins'
+    spa_metapkg =  spa_pn + '-meta'
+
+    pw_module_pn = base_pn + '-modules'
+    pw_module_metapkg =  pw_module_pn + '-meta'
+
+    d.setVar('ALLOW_EMPTY:' + spa_metapkg, "1")
+    d.setVar('FILES:' + spa_metapkg, "")
+
+    d.setVar('ALLOW_EMPTY:' + pw_module_metapkg, "1")
+    d.setVar('FILES:' + pw_module_metapkg, "")
+
+    blacklist = [ spa_pn, spa_metapkg, pw_module_pn, pw_module_metapkg ]
+    spa_metapkg_rdepends = []
+    pw_module_metapkg_rdepends = []
+    pkgdest = d.getVar('PKGDEST')
+
+    for pkg in oe.utils.packages_filter_out_system(d):
+        if pkg in blacklist:
+            continue
+
+        is_spa_pkg = pkg.startswith(spa_pn)
+        is_pw_module_pkg = pkg.startswith(pw_module_pn)
+        if not is_spa_pkg and not is_pw_module_pkg:
+            continue
+
+        if pkg in spa_metapkg_rdepends or pkg in pw_module_metapkg_rdepends:
+            continue
+
+        # See if the package is empty by looking at the contents of its
+        # PKGDEST subdirectory. If this subdirectory is empty, then the
+        # package is empty as well. Empty packages do not get added to
+        # the meta package's RDEPENDS.
+        pkgdir = os.path.join(pkgdest, pkg)
+        if os.path.exists(pkgdir):
+            dir_contents = os.listdir(pkgdir) or []
+        else:
+            dir_contents = []
+        is_empty = len(dir_contents) == 0
+        if not is_empty:
+            if is_spa_pkg:
+                spa_metapkg_rdepends.append(pkg)
+            if is_pw_module_pkg:
+                pw_module_metapkg_rdepends.append(pkg)
+
+    d.setVar('RDEPENDS:' + spa_metapkg, ' '.join(spa_metapkg_rdepends))
+    d.setVar('DESCRIPTION:' + spa_metapkg, spa_pn + ' meta package')
+
+    d.setVar('RDEPENDS:' + pw_module_metapkg, ' '.join(pw_module_metapkg_rdepends))
+    d.setVar('DESCRIPTION:' + pw_module_metapkg, pw_module_pn + ' meta package')
+}
+
+PACKAGES =+ "\
+    libpipewire \
+    ${PN}-tools \
+    ${PN}-pulse \
+    ${PN}-alsa \
+    ${PN}-jack \
+    ${PN}-spa-plugins \
+    ${PN}-spa-plugins-meta \
+    ${PN}-spa-tools \
+    ${PN}-modules \
+    ${PN}-modules-meta \
+    ${PN}-alsa-card-profile \
+    ${PN}-v4l2 \
+    gstreamer1.0-pipewire \
+"
+
+PACKAGES_DYNAMIC = "^${PN}-spa-plugins.* ^${PN}-modules.*"
+
+SYSTEMD_SERVICE:${PN} = "${@bb.utils.contains('PACKAGECONFIG', 'systemd-system-service', 'pipewire.service', '', d)}"
+CONFFILES:${PN} += "${datadir}/pipewire/pipewire.conf"
+FILES:${PN} = " \
+    ${datadir}/pipewire \
+    ${systemd_system_unitdir}/pipewire* \
+    ${systemd_user_unitdir}/pipewire* \
+    ${bindir}/pipewire \
+    ${bindir}/pipewire-avb \
+"
+
+RRECOMMENDS:${PN}:class-target += " \
+	pipewire-modules-meta \
+	pipewire-spa-plugins-meta \
+"
+
+FILES:${PN}-dev += " \
+    ${libdir}/${PW_MODULE_SUBDIR}/jack/libjack*.so \
+"
+
+CONFFILES:libpipewire += "${datadir}/pipewire/client.conf"
+FILES:libpipewire = " \
+    ${datadir}/pipewire/client.conf \
+    ${libdir}/libpipewire-*.so.* \
+"
+# Add the bare minimum modules and plugins required to be able
+# to use libpipewire. Without these, it is essentially unusable.
+RDEPENDS:libpipewire += " \
+    ${PN}-modules-client-node \
+    ${PN}-modules-protocol-native \
+    ${PN}-spa-plugins-support \
+"
+
+FILES:${PN}-tools = " \
+    ${bindir}/pw-cat \
+    ${bindir}/pw-cli \
+    ${bindir}/pw-dot \
+    ${bindir}/pw-dsdplay \
+    ${bindir}/pw-dump \
+    ${bindir}/pw-link \
+    ${bindir}/pw-loopback \
+    ${bindir}/pw-metadata \
+    ${bindir}/pw-mididump \
+    ${bindir}/pw-midiplay \
+    ${bindir}/pw-midirecord \
+    ${bindir}/pw-mon \
+    ${bindir}/pw-play \
+    ${bindir}/pw-profiler \
+    ${bindir}/pw-record \
+    ${bindir}/pw-reserve \
+    ${bindir}/pw-top \
+"
+
+# This is a shim daemon that is intended to be used as a
+# drop-in PulseAudio replacement, providing a pulseaudio-compatible
+# socket that can be used by applications that use libpulse.
+CONFFILES:${PN}-pulse += "${datadir}/pipewire/pipewire-pulse.conf"
+FILES:${PN}-pulse = " \
+    ${datadir}/pipewire/pipewire-pulse.conf \
+    ${systemd_system_unitdir}/pipewire-pulse.* \
+    ${systemd_user_unitdir}/pipewire-pulse.* \
+    ${bindir}/pipewire-pulse \
+"
+RDEPENDS:${PN}-pulse += " \
+    ${PN}-modules-protocol-pulse \
+"
+
+# ALSA plugin to redirect audio to pipewire.
+FILES:${PN}-alsa = "\
+    ${libdir}/alsa-lib/* \
+    ${datadir}/alsa/alsa.conf.d/* \
+"
+
+# JACK drop-in libraries to redirect audio to pipewire.
+CONFFILES:${PN}-jack = "${datadir}/pipewire/jack.conf"
+FILES:${PN}-jack = "\
+    ${bindir}/pw-jack \
+    ${datadir}/pipewire/jack.conf \
+    ${libdir}/${PW_MODULE_SUBDIR}/jack/libjack*.so.* \
+"
+
+# Dynamic SPA plugin packages (see set_dynamic_metapkg_rdepends).
+FILES:${PN}-spa-plugins = ""
+RRECOMMENDS:${PN}-spa-plugins += "${PN}-spa-plugins-meta"
+
+FILES:${PN}-spa-plugins-bluez5 += " \
+    ${datadir}/${SPA_SUBDIR}/bluez5/* \
+"
+
+FILES:${PN}-spa-tools = " \
+    ${bindir}/spa-* \
+"
+
+# Dynamic PipeWire module packages (see set_dynamic_metapkg_rdepends).
+FILES:${PN}-modules = ""
+RRECOMMENDS:${PN}-modules += "${PN}-modules-meta"
+
+CONFFILES:${PN}-modules-rt = "${datadir}/pipewire/client-rt.conf"
+FILES:${PN}-modules-rt += " \
+    ${datadir}/pipewire/client-rt.conf \
+    "
+
+CONFFILES:${PN}-modules-filter-chain = "${datadir}/pipewire/filter-chain/*"
+FILES:${PN}-modules-filter-chain += " \
+    ${datadir}/pipewire/filter-chain/* \
+"
+
+FILES:${PN}-alsa-card-profile = " \
+    ${datadir}/alsa-card-profile/* \
+    ${nonarch_base_libdir}/udev/rules.d/90-pipewire-alsa.rules \
+"
+
+# V4L2 interface emulator for sending/receiving data between PipeWire and V4L2 applications.
+FILES:${PN}-v4l2 += " \
+    ${bindir}/pw-v4l2 \
+    ${libdir}/${PW_MODULE_SUBDIR}/v4l2/libpw-v4l2.so \
+"
+
+FILES:gstreamer1.0-pipewire = " \
+    ${libdir}/gstreamer-1.0/* \
+"
+
+BBCLASSEXTEND = "native nativesdk"
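
Note: the PIPEWIRE_SESSION_MANAGER variable introduced above decides which
session-manager PACKAGECONFIG (and therefore which runtime dependency) the
recipe selects; the two values it is written for are "media-session" and
"wireplumber". A minimal sketch of overriding the weak default from a distro
or local configuration (illustrative only):

    # local.conf / distro .conf
    PIPEWIRE_SESSION_MANAGER = "wireplumber"
    # the "wireplumber" PACKAGECONFIG adds wireplumber to RDEPENDS and is
    # marked as conflicting with "media-session", so only one session
    # manager ends up installed
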
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/vorbis-tools/vorbis-tools/0001-ogginfo-Include-utf8.h-for-missing-utf8_decode.patch b/meta-openembedded/meta-multimedia/recipes-multimedia/vorbis-tools/vorbis-tools/0001-ogginfo-Include-utf8.h-for-missing-utf8_decode.patch
new file mode 100644
index 0000000..36a31a8
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/vorbis-tools/vorbis-tools/0001-ogginfo-Include-utf8.h-for-missing-utf8_decode.patch
@@ -0,0 +1,27 @@
+From 8c10181547c93438fc10f753e7164ee004add6d1 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 27 Aug 2022 10:28:47 -0700
+Subject: [PATCH] ogginfo: Include utf8.h for missing utf8_decode
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ogginfo/codec_skeleton.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/ogginfo/codec_skeleton.c b/ogginfo/codec_skeleton.c
+index a27f8da..3ac13f6 100644
+--- a/ogginfo/codec_skeleton.c
++++ b/ogginfo/codec_skeleton.c
+@@ -25,6 +25,7 @@
+ #include <ogg/ogg.h>
+ 
+ #include "i18n.h"
++#include "utf8.h" /* utf8_decode */
+ 
+ #include "private.h"
+ 
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/vorbis-tools/vorbis-tools_1.4.2.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/vorbis-tools/vorbis-tools_1.4.2.bb
index 2901621..61a4aed 100644
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/vorbis-tools/vorbis-tools_1.4.2.bb
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/vorbis-tools/vorbis-tools_1.4.2.bb
@@ -12,6 +12,7 @@
 
 SRC_URI = "http://downloads.xiph.org/releases/vorbis/${BP}.tar.gz \
            file://gettext.patch \
+           file://0001-ogginfo-Include-utf8.h-for-missing-utf8_decode.patch \
           "
 
 SRC_URI[md5sum] = "998fca293bd4e4bdc2b96fb70f952f4e"
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/wireplumber/wireplumber_0.4.11.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/wireplumber/wireplumber_0.4.11.bb
new file mode 100644
index 0000000..e093734
--- /dev/null
+++ b/meta-openembedded/meta-multimedia/recipes-multimedia/wireplumber/wireplumber_0.4.11.bb
@@ -0,0 +1,145 @@
+SUMMARY    = "Session / policy manager implementation for PipeWire"
+HOMEPAGE   = "https://gitlab.freedesktop.org/pipewire/wireplumber"
+BUGTRACKER = "https://gitlab.freedesktop.org/pipewire/wireplumber/issues"
+AUTHOR     = "George Kiagiadakis <george.kiagiadakis@collabora.com>"
+SECTION    = "multimedia"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=17d1fe479cdec331eecbc65d26bc7e77"
+
+DEPENDS = "glib-2.0 glib-2.0-native lua pipewire \
+    ${@bb.utils.contains("DISTRO_FEATURES", "gobject-introspection-data", "python3-native python3-lxml-native doxygen-native", "", d)} \
+"
+
+SRCREV = "80b3559963f0ad40a7bfa6c23b0098275c0b5ebe"
+SRC_URI = "git://gitlab.freedesktop.org/pipewire/wireplumber.git;branch=master;protocol=https \
+           file://90-OE-disable-session-dbus-dependent-features.lua \
+           "
+
+S = "${WORKDIR}/git"
+
+inherit meson pkgconfig gobject-introspection systemd
+
+GIR_MESON_ENABLE_FLAG = 'enabled'
+GIR_MESON_DISABLE_FLAG = 'disabled'
+
+# Enable system-lua to let wireplumber use OE's lua.
+# Documentation needs python-sphinx, which is not in oe-core or meta-python2 for now.
+# elogind is not (yet) available in OE, so disable support.
+EXTRA_OEMESON += " \
+    -Ddoc=disabled \
+    -Dsystem-lua=true \
+    -Delogind=disabled \
+    -Dsystemd-system-unit-dir=${systemd_system_unitdir} \
+    -Dsystemd-user-unit-dir=${systemd_user_unitdir} \
+    -Dtests=false \
+"
+
+PACKAGECONFIG ??= "\
+    ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd systemd-system-service systemd-user-service', '', d)} \
+"
+
+PACKAGECONFIG[systemd] = "-Dsystemd=enabled,-Dsystemd=disabled,systemd"
+PACKAGECONFIG[systemd-system-service] = "-Dsystemd-system-service=true,-Dsystemd-system-service=false,systemd"
+# "systemd-user-service" packageconfig will only install service
+# files to rootfs but not enable them as systemd.bbclass
+# currently lacks the feature of enabling user services.
+PACKAGECONFIG[systemd-user-service] = "-Dsystemd-user-service=true,-Dsystemd-user-service=false,systemd"
+
+PACKAGESPLITFUNCS:prepend = " split_dynamic_packages "
+PACKAGESPLITFUNCS:append = " set_dynamic_metapkg_rdepends "
+
+WP_MODULE_SUBDIR = "wireplumber-0.4"
+
+add_custom_lua_config_scripts() {
+    install -m 0644 ${WORKDIR}/90-OE-disable-session-dbus-dependent-features.lua ${D}${datadir}/wireplumber/main.lua.d
+}
+
+do_install[postfuncs] += "add_custom_lua_config_scripts"
+
+python split_dynamic_packages () {
+    # Create packages for each WirePlumber module.
+    wp_module_libdir = d.expand('${libdir}/${WP_MODULE_SUBDIR}')
+    do_split_packages(d, wp_module_libdir, r'^libwireplumber-module-(.*)\.so$', d.expand('${PN}-modules-%s'), 'WirePlumber %s module', extra_depends='', recursive=False)
+}
+
+python set_dynamic_metapkg_rdepends () {
+    import os
+    import oe.utils
+
+    # Go through all generated WirePlumber module packages
+    # (excluding the main package and the -meta package itself)
+    # and add them to the -meta package as RDEPENDS.
+
+    base_pn = d.getVar('PN')
+
+    wp_module_pn = base_pn + '-modules'
+    wp_module_metapkg =  wp_module_pn + '-meta'
+
+    d.setVar('ALLOW_EMPTY:' + wp_module_metapkg, "1")
+    d.setVar('FILES:' + wp_module_metapkg, "")
+
+    blacklist = [ wp_module_pn, wp_module_metapkg ]
+    wp_module_metapkg_rdepends = []
+    pkgdest = d.getVar('PKGDEST')
+
+    for pkg in oe.utils.packages_filter_out_system(d):
+        if pkg in blacklist:
+            continue
+
+        is_wp_module_pkg = pkg.startswith(wp_module_pn)
+        if not is_wp_module_pkg:
+            continue
+
+        if pkg in wp_module_metapkg_rdepends:
+            continue
+
+        # See if the package is empty by looking at the contents of its
+        # PKGDEST subdirectory. If this subdirectory is empty, then the
+        # package is empty as well. Empty packages do not get added to
+        # the meta package's RDEPENDS.
+        pkgdir = os.path.join(pkgdest, pkg)
+        if os.path.exists(pkgdir):
+            dir_contents = os.listdir(pkgdir) or []
+        else:
+            dir_contents = []
+        is_empty = len(dir_contents) == 0
+        if not is_empty:
+            if is_wp_module_pkg:
+                wp_module_metapkg_rdepends.append(pkg)
+
+    d.setVar('RDEPENDS:' + wp_module_metapkg, ' '.join(wp_module_metapkg_rdepends))
+    d.setVar('DESCRIPTION:' + wp_module_metapkg, wp_module_pn + ' meta package')
+}
+
+PACKAGES =+ "\
+    libwireplumber \
+    ${PN}-default-config \
+    ${PN}-scripts \
+    ${PN}-modules \
+    ${PN}-modules-meta \
+"
+
+PACKAGES_DYNAMIC = "^${PN}-modules.*"
+
+SYSTEMD_SERVICE:${PN} = "wireplumber.service"
+CONFFILES:${PN} += " \
+    ${datadir}/wireplumber/wireplumber.conf \
+    ${datadir}/wireplumber/*.lua.d/* \
+"
+# Add pipewire to RRECOMMENDS, since WirePlumber expects a PipeWire daemon to
+# be present. While in theory any application that uses libpipewire can configure
+# itself to become a daemon, in practice, the PipeWire daemon is used.
+RRECOMMENDS:${PN} += "pipewire ${PN}-scripts ${PN}-modules-meta"
+
+FILES:${PN} += "${systemd_user_unitdir}"
+
+FILES:libwireplumber = " \
+    ${libdir}/libwireplumber-*.so.* \
+"
+
+FILES:${PN}-scripts += "${datadir}/wireplumber/scripts/*"
+
+# Dynamic packages (see set_dynamic_metapkg_rdepends).
+FILES:${PN}-modules = ""
+RRECOMMENDS:${PN}-modules += "${PN}-modules-meta"
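
Note: like the pipewire recipe, wireplumber uses do_split_packages() to turn
every libwireplumber-module-*.so into its own package, and
set_dynamic_metapkg_rdepends() then fills the -meta package's RDEPENDS with
the non-empty ones. A rough sketch of what the split produces; the module
file names below are an assumption for illustration, not taken from this
commit:

    ${libdir}/wireplumber-0.4/libwireplumber-module-lua-scripting.so
        -> package: wireplumber-modules-lua-scripting
    ${libdir}/wireplumber-0.4/libwireplumber-module-si-audio-adapter.so
        -> package: wireplumber-modules-si-audio-adapter
    # wireplumber-modules-meta then RDEPENDS on all generated, non-empty
    # wireplumber-modules-* packages
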
diff --git a/meta-openembedded/meta-multimedia/recipes-multimedia/wireplumber/wireplumber_0.4.9.bb b/meta-openembedded/meta-multimedia/recipes-multimedia/wireplumber/wireplumber_0.4.9.bb
deleted file mode 100644
index 0a2ca3b..0000000
--- a/meta-openembedded/meta-multimedia/recipes-multimedia/wireplumber/wireplumber_0.4.9.bb
+++ /dev/null
@@ -1,143 +0,0 @@
-SUMMARY    = "Session / policy manager implementation for PipeWire"
-HOMEPAGE   = "https://gitlab.freedesktop.org/pipewire/wireplumber"
-BUGTRACKER = "https://gitlab.freedesktop.org/pipewire/wireplumber/issues"
-AUTHOR     = "George Kiagiadakis <george.kiagiadakis@collabora.com>"
-SECTION    = "multimedia"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=17d1fe479cdec331eecbc65d26bc7e77"
-
-DEPENDS = "glib-2.0 glib-2.0-native lua pipewire \
-    ${@bb.utils.contains("DISTRO_FEATURES", "gobject-introspection-data", "python3-native python3-lxml-native doxygen-native", "", d)} \
-"
-
-SRCREV = "8b97b40c4467951fbd4181afb47e4175361365a6"
-SRC_URI = "git://gitlab.freedesktop.org/pipewire/wireplumber.git;branch=master;protocol=https \
-           file://90-OE-disable-session-dbus-dependent-features.lua \
-           "
-
-S = "${WORKDIR}/git"
-
-inherit meson pkgconfig gobject-introspection systemd
-
-GIR_MESON_ENABLE_FLAG = 'enabled'
-GIR_MESON_DISABLE_FLAG = 'disabled'
-
-# Enable system-lua to let wireplumber use OE's lua.
-# Documentation needs python-sphinx, which is not in oe-core or meta-python2 for now.
-# elogind is not (yet) available in OE, so disable support.
-EXTRA_OEMESON += " \
-    -Ddoc=disabled \
-    -Dsystem-lua=true \
-    -Delogind=disabled \
-    -Dsystemd-system-unit-dir=${systemd_system_unitdir} \
-    -Dsystemd-user-unit-dir=${systemd_user_unitdir} \
-    -Dtests=false \
-"
-
-PACKAGECONFIG ??= "\
-    ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd systemd-system-service', '', d)} \
-"
-
-PACKAGECONFIG[systemd] = "-Dsystemd=enabled,-Dsystemd=disabled,systemd"
-PACKAGECONFIG[systemd-system-service] = "-Dsystemd-system-service=true,-Dsystemd-system-service=false,systemd"
-# "systemd-user-service" packageconfig will only install service
-# files to rootfs but not enable them as systemd.bbclass
-# currently lacks the feature of enabling user services.
-PACKAGECONFIG[systemd-user-service] = "-Dsystemd-user-service=true,-Dsystemd-user-service=false,systemd"
-
-PACKAGESPLITFUNCS:prepend = " split_dynamic_packages "
-PACKAGESPLITFUNCS:append = " set_dynamic_metapkg_rdepends "
-
-WP_MODULE_SUBDIR = "wireplumber-0.4"
-
-add_custom_lua_config_scripts() {
-    install -m 0644 ${WORKDIR}/90-OE-disable-session-dbus-dependent-features.lua ${D}${datadir}/wireplumber/main.lua.d
-}
-
-do_install[postfuncs] += "add_custom_lua_config_scripts"
-
-python split_dynamic_packages () {
-    # Create packages for each WirePlumber module.
-    wp_module_libdir = d.expand('${libdir}/${WP_MODULE_SUBDIR}')
-    do_split_packages(d, wp_module_libdir, r'^libwireplumber-module-(.*)\.so$', d.expand('${PN}-modules-%s'), 'WirePlumber %s module', extra_depends='', recursive=False)
-}
-
-python set_dynamic_metapkg_rdepends () {
-    import os
-    import oe.utils
-
-    # Go through all generated WirePlumber module packages
-    # (excluding the main package and the -meta package itself)
-    # and add them to the -meta package as RDEPENDS.
-
-    base_pn = d.getVar('PN')
-
-    wp_module_pn = base_pn + '-modules'
-    wp_module_metapkg =  wp_module_pn + '-meta'
-
-    d.setVar('ALLOW_EMPTY:' + wp_module_metapkg, "1")
-    d.setVar('FILES:' + wp_module_metapkg, "")
-
-    blacklist = [ wp_module_pn, wp_module_metapkg ]
-    wp_module_metapkg_rdepends = []
-    pkgdest = d.getVar('PKGDEST')
-
-    for pkg in oe.utils.packages_filter_out_system(d):
-        if pkg in blacklist:
-            continue
-
-        is_wp_module_pkg = pkg.startswith(wp_module_pn)
-        if not is_wp_module_pkg:
-            continue
-
-        if pkg in wp_module_metapkg_rdepends:
-            continue
-
-        # See if the package is empty by looking at the contents of its
-        # PKGDEST subdirectory. If this subdirectory is empty, then then
-        # package is empty as well. Empty packages do not get added to
-        # the meta package's RDEPENDS.
-        pkgdir = os.path.join(pkgdest, pkg)
-        if os.path.exists(pkgdir):
-            dir_contents = os.listdir(pkgdir) or []
-        else:
-            dir_contents = []
-        is_empty = len(dir_contents) == 0
-        if not is_empty:
-            if is_wp_module_pkg:
-                wp_module_metapkg_rdepends.append(pkg)
-
-    d.setVar('RDEPENDS:' + wp_module_metapkg, ' '.join(wp_module_metapkg_rdepends))
-    d.setVar('DESCRIPTION:' + wp_module_metapkg, wp_module_pn + ' meta package')
-}
-
-PACKAGES =+ "\
-    libwireplumber \
-    ${PN}-default-config \
-    ${PN}-scripts \
-    ${PN}-modules \
-    ${PN}-modules-meta \
-"
-
-PACKAGES_DYNAMIC = "^${PN}-modules.*"
-
-SYSTEMD_SERVICE:${PN} = "wireplumber.service"
-CONFFILES:${PN} += " \
-    ${datadir}/wireplumber/wireplumber.conf \
-    ${datadir}/wireplumber/*.lua.d/* \
-"
-# Add pipewire to RRECOMMENDS, since WirePlumber expects a PipeWire daemon to
-# be present. While in theory any application that uses libpipewire can configure
-# itself to become a daemon, in practice, the PipeWire daemon is used.
-RRECOMMENDS:${PN} += "${PN}-scripts pipewire"
-
-FILES:libwireplumber = " \
-    ${libdir}/libwireplumber-*.so.* \
-"
-
-FILES:${PN}-scripts += "${datadir}/wireplumber/scripts/*"
-
-# Dynamic packages (see set_dynamic_metapkg_rdepends).
-FILES:${PN}-modules = ""
-RRECOMMENDS:${PN}-modules += "${PN}-modules-meta"
diff --git a/meta-openembedded/meta-networking/dynamic-layers/meta-python/recipes-connectivity/crda/crda/0001-reglib-Remove-unused-variables.patch b/meta-openembedded/meta-networking/dynamic-layers/meta-python/recipes-connectivity/crda/crda/0001-reglib-Remove-unused-variables.patch
new file mode 100644
index 0000000..c6c3c53
--- /dev/null
+++ b/meta-openembedded/meta-networking/dynamic-layers/meta-python/recipes-connectivity/crda/crda/0001-reglib-Remove-unused-variables.patch
@@ -0,0 +1,59 @@
+From 1bd6ff9d10c83afbc9954fc38b953e9167e6d4a9 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 28 Aug 2022 14:01:55 -0700
+Subject: [PATCH] reglib: Remove unused variables
+
+These counters are not used anywhere therefore delete them
+Fixes
+reglib.c:1015:15: error: variable 'i' set but not used [-Werror,-Wunused-but-set-variable]
+        unsigned int i = 0;
+                     ^
+reglib.c:1062:15: error: variable 'lines' set but not used [-Werror,-Wunused-but-set-variable]
+        unsigned int lines = 0;
+                     ^
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ reglib.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/reglib.c b/reglib.c
+index 8565a0b..6c62c2c 100644
+--- a/reglib.c
++++ b/reglib.c
+@@ -1012,7 +1012,6 @@ static int reglib_find_next_country_stream(FILE *fp)
+ {
+ 	fpos_t prev_pos;
+ 	int r;
+-	unsigned int i = 0;
+ 
+ 	while(1) {
+ 		char line[1024];
+@@ -1030,7 +1029,6 @@ static int reglib_find_next_country_stream(FILE *fp)
+ 		line_p = fgets(line, sizeof(line), fp);
+ 		if (line_p == line) {
+ 			if (strspn(line, "\n") == strlen(line)) {
+-				i++;
+ 				continue;
+ 			}
+ 			if (strncmp(line, "country", 7) != 0)
+@@ -1059,7 +1057,6 @@ struct ieee80211_regdomain *reglib_parse_country(FILE *fp)
+ 
+ FILE *reglib_create_parse_stream(FILE *f)
+ {
+-	unsigned int lines = 0;
+ 	FILE *fp;
+ 
+ 	fp = tmpfile();
+@@ -1076,7 +1073,6 @@ FILE *reglib_create_parse_stream(FILE *f)
+ 		if (line_p == line) {
+ 			if (strchr(line, '#') == NULL) {
+ 				fputs(line, fp);
+-				lines++;
+ 			}
+ 			continue;
+ 		} else
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-networking/dynamic-layers/meta-python/recipes-connectivity/crda/crda_3.18.bb b/meta-openembedded/meta-networking/dynamic-layers/meta-python/recipes-connectivity/crda/crda_3.18.bb
index a616557..2f4d4da 100644
--- a/meta-openembedded/meta-networking/dynamic-layers/meta-python/recipes-connectivity/crda/crda_3.18.bb
+++ b/meta-openembedded/meta-networking/dynamic-layers/meta-python/recipes-connectivity/crda/crda_3.18.bb
@@ -16,6 +16,7 @@
            file://fix-issues-when-USE_OPENSSL-1.patch \
            file://crda-4.14-python-3.patch \
            file://0001-Make-alpha2-to-be-3-characters-long.patch \
+           file://0001-reglib-Remove-unused-variables.patch \
 "
 SRC_URI[md5sum] = "0431fef3067bf503dfb464069f06163a"
 SRC_URI[sha256sum] = "43fcb9679f8b75ed87ad10944a506292def13e4afb194afa7aa921b01e8ecdbf"
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/dibbler/dibbler/0001-port-linux-Re-order-header-includes.patch b/meta-openembedded/meta-networking/recipes-connectivity/dibbler/dibbler/0001-port-linux-Re-order-header-includes.patch
new file mode 100644
index 0000000..27a562b
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-connectivity/dibbler/dibbler/0001-port-linux-Re-order-header-includes.patch
@@ -0,0 +1,33 @@
+From cbb33e1548fe526c3e7dead294617bde1f087ae3 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 24 Aug 2022 16:40:38 -0700
+Subject: [PATCH] port-linux: Re-order header includes
+
+linux/if.h, when included before net/if.h, causes duplicate definitions
+
+Upstream-Status: Inappropriate [Upstream is Dead]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Port-linux/interface.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Port-linux/interface.c b/Port-linux/interface.c
+index 18777e91..19aefb2b 100644
+--- a/Port-linux/interface.c
++++ b/Port-linux/interface.c
+@@ -25,7 +25,6 @@
+ #include <sys/types.h>
+ #include <sys/socket.h>
+ #include <sys/ioctl.h>
+-#include <linux/if.h>
+ #include <syslog.h>
+ #include <string.h>
+ #include <errno.h>
+@@ -42,6 +41,7 @@
+ #include <stdarg.h>
+ #include <linux/sockios.h>
+ #include <linux/if_ether.h>
++#include <linux/if.h>
+ 
+ int interface_auto_up = 0;
+ int interface_do_message = 0;
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/dibbler/dibbler_git.bb b/meta-openembedded/meta-networking/recipes-connectivity/dibbler/dibbler_git.bb
index bdda35a..f57767e 100644
--- a/meta-openembedded/meta-networking/recipes-connectivity/dibbler/dibbler_git.bb
+++ b/meta-openembedded/meta-networking/recipes-connectivity/dibbler/dibbler_git.bb
@@ -9,6 +9,7 @@
 
 SRC_URI = "git://github.com/tomaszmrugalski/dibbler;branch=master;protocol=https \
            file://dibbler_fix_getSize_crash.patch \
+           file://0001-port-linux-Re-order-header-includes.patch \
            "
 PV = "1.0.1+1.0.2RC1+git${SRCREV}"
 
@@ -29,6 +30,7 @@
 
 DEPENDS += "flex-native"
 
+CFLAGS += "-D_GNU_SOURCE"
 LDFLAGS += "-pthread"
 
 PACKAGES =+ "${PN}-requestor ${PN}-client ${PN}-relay ${PN}-server"
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/freeradius/files/0001-version.c-don-t-print-build-flags.patch b/meta-openembedded/meta-networking/recipes-connectivity/freeradius/files/0001-version.c-don-t-print-build-flags.patch
new file mode 100644
index 0000000..697205e
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-connectivity/freeradius/files/0001-version.c-don-t-print-build-flags.patch
@@ -0,0 +1,41 @@
+From cbc64dcf6aa2a1be63f45ea6dd7d2c49b70a0bee Mon Sep 17 00:00:00 2001
+From: Mingli Yu <mingli.yu@windriver.com>
+Date: Wed, 3 Aug 2022 16:44:29 +0800
+Subject: [PATCH] version.c: don't print build flags
+
+Don't print the build flags to avoid collecting the build environment info.
+
+Upstream-Status: Inappropriate [oe specific]
+
+Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
+---
+ src/main/version.c | 13 -------------
+ 1 file changed, 13 deletions(-)
+
+diff --git a/src/main/version.c b/src/main/version.c
+index 62972d9f53..cf81de72c9 100644
+--- a/src/main/version.c
++++ b/src/main/version.c
+@@ -589,19 +589,6 @@ void version_print(void)
+ 		DEBUG2("  unknown");
+ #endif
+ 
+-		DEBUG2("Compilation flags:");
+-#ifdef BUILT_WITH_CPPFLAGS
+-		DEBUG2("  cppflags : " BUILT_WITH_CPPFLAGS);
+-#endif
+-#ifdef BUILT_WITH_CFLAGS
+-		DEBUG2("  cflags   : " BUILT_WITH_CFLAGS);
+-#endif
+-#ifdef BUILT_WITH_LDFLAGS
+-		DEBUG2("  ldflags  : " BUILT_WITH_LDFLAGS);
+-#endif
+-#ifdef BUILT_WITH_LIBS
+-		DEBUG2("  libs     : " BUILT_WITH_LIBS);
+-#endif
+ 		DEBUG2("  ");
+ 	}
+ 	INFO("FreeRADIUS Version " RADIUSD_VERSION_STRING);
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/freeradius/freeradius_3.0.21.bb b/meta-openembedded/meta-networking/recipes-connectivity/freeradius/freeradius_3.0.21.bb
index d6477e3..1407b79 100644
--- a/meta-openembedded/meta-networking/recipes-connectivity/freeradius/freeradius_3.0.21.bb
+++ b/meta-openembedded/meta-networking/recipes-connectivity/freeradius/freeradius_3.0.21.bb
@@ -32,6 +32,7 @@
     file://radiusd.service \
     file://radiusd-volatiles.conf \
     file://check-openssl-cmds-in-script-bootstrap.patch \
+    file://0001-version.c-don-t-print-build-flags.patch \
 "
 
 raddbdir="${sysconfdir}/${MLPREFIX}raddb"
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/mosquitto/mosquitto_2.0.14.bb b/meta-openembedded/meta-networking/recipes-connectivity/mosquitto/mosquitto_2.0.14.bb
deleted file mode 100644
index 739b7de..0000000
--- a/meta-openembedded/meta-networking/recipes-connectivity/mosquitto/mosquitto_2.0.14.bb
+++ /dev/null
@@ -1,90 +0,0 @@
-SUMMARY = "Open source MQTT implementation"
-DESCRIPTION = "Mosquitto is an open source (Eclipse licensed) message broker \
-that implements the MQ Telemetry Transport protocol version 3.1, 3.1.1 and \
-5, providing both an MQTT broker and several command-line clients. MQTT \
-provides a lightweight method of carrying out messaging using a \
-publish/subscribe model. "
-HOMEPAGE = "http://mosquitto.org/"
-SECTION = "console/network"
-LICENSE = "EPL-2.0 | EDL-1.0"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=ca9a8f366c6babf593e374d0d7d58749 \
-                    file://edl-v10;md5=9f6accb1afcb570f8be65039e2fcd49e \
-                    file://epl-v20;md5=2dd765ca47a05140be15ebafddbeadfe \
-                    file://NOTICE.md;md5=a7a91b4754c6f7995020d1b49bc829c6 \
-"
-DEPENDS = "uthash cjson"
-
-SRC_URI = "http://mosquitto.org/files/source/mosquitto-${PV}.tar.gz \
-           file://mosquitto.init \
-           file://1571.patch \
-"
-
-SRC_URI[sha256sum] = "d0dde8fdb12caf6e2426b4f28081919a2fce3448773bdb8af0d3cd5fe5776925"
-
-inherit systemd update-rc.d useradd cmake pkgconfig
-
-PACKAGECONFIG ??= "ssl dlt websockets \
-                  ${@bb.utils.filter('DISTRO_FEATURES','systemd', d)} \
-                  "
-
-PACKAGECONFIG[manpages] = "-DDOCUMENTATION=ON,-DDOCUMENTATION=OFF,libxslt-native docbook-xsl-stylesheets-native"
-PACKAGECONFIG[dns-srv] = "-DWITH_SRV=ON,-DWITH_SRV=OFF,c-ares"
-PACKAGECONFIG[ssl] = "-DWITH_TLS=ON -DWITH_TLS_PSK=ON -DWITH_EC=ON,-DWITH_TLS=OFF -DWITH_TLS_PSK=OFF -DWITH_EC=OFF,openssl"
-PACKAGECONFIG[systemd] = "-DWITH_SYSTEMD=ON,-DWITH_SYSTEMD=OFF,systemd"
-PACKAGECONFIG[websockets] = "-DWITH_WEBSOCKETS=ON,-DWITH_WEBSOCKETS=OFF,libwebsockets"
-PACKAGECONFIG[dlt] = "-DWITH_DLT=ON,-DWITH_DLT=OFF,dlt-daemon"
-
-EXTRA_OECMAKE = " \
-    -DWITH_BUNDLED_DEPS=OFF \
-    -DWITH_ADNS=ON \
-"
-
-do_install:append() {
-    install -d ${D}${systemd_unitdir}/system/
-    install -m 0644 ${S}/service/systemd/mosquitto.service.notify ${D}${systemd_unitdir}/system/mosquitto.service
-
-    install -d ${D}${sysconfdir}/init.d/
-    install -m 0755 ${WORKDIR}/mosquitto.init ${D}${sysconfdir}/init.d/mosquitto
-    sed -i -e 's,@SBINDIR@,${sbindir},g' \
-        -e 's,@BASE_SBINDIR@,${base_sbindir},g' \
-        -e 's,@LOCALSTATEDIR@,${localstatedir},g' \
-        -e 's,@SYSCONFDIR@,${sysconfdir},g' \
-        ${D}${sysconfdir}/init.d/mosquitto
-}
-
-PACKAGES += "libmosquitto1 libmosquittopp1 ${PN}-clients"
-
-PACKAGE_BEFORE_PN = "${PN}-examples"
-
-FILES:${PN} = "${sbindir}/mosquitto \
-               ${bindir}/mosquitto_passwd \
-               ${bindir}/mosquitto_ctrl \
-               ${libdir}/mosquitto_dynamic_security.so \
-               ${sysconfdir}/mosquitto \
-               ${sysconfdir}/init.d \
-               ${systemd_unitdir}/system/mosquitto.service \
-"
-
-CONFFILES:${PN} += "${sysconfdir}/mosquitto/mosquitto.conf"
-
-FILES:libmosquitto1 = "${libdir}/libmosquitto.so.*"
-
-FILES:libmosquittopp1 = "${libdir}/libmosquittopp.so.*"
-
-FILES:${PN}-clients = "${bindir}/mosquitto_pub \
-                       ${bindir}/mosquitto_sub \
-                       ${bindir}/mosquitto_rr \
-"
-
-FILES:${PN}-examples = "${sysconfdir}/mosquitto/*.example"
-
-SYSTEMD_SERVICE:${PN} = "mosquitto.service"
-
-INITSCRIPT_NAME = "mosquitto"
-INITSCRIPT_PARAMS = "defaults 30"
-
-USERADD_PACKAGES = "${PN}"
-USERADD_PARAM:${PN} = "--system --no-create-home --shell /bin/false \
-                       --user-group mosquitto"
-
-BBCLASSEXTEND += "native nativesdk"
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/mosquitto/mosquitto_2.0.15.bb b/meta-openembedded/meta-networking/recipes-connectivity/mosquitto/mosquitto_2.0.15.bb
new file mode 100644
index 0000000..d06dd2d
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-connectivity/mosquitto/mosquitto_2.0.15.bb
@@ -0,0 +1,90 @@
+SUMMARY = "Open source MQTT implementation"
+DESCRIPTION = "Mosquitto is an open source (Eclipse licensed) message broker \
+that implements the MQ Telemetry Transport protocol version 3.1, 3.1.1 and \
+5, providing both an MQTT broker and several command-line clients. MQTT \
+provides a lightweight method of carrying out messaging using a \
+publish/subscribe model. "
+HOMEPAGE = "http://mosquitto.org/"
+SECTION = "console/network"
+LICENSE = "EPL-2.0 | EDL-1.0"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=ca9a8f366c6babf593e374d0d7d58749 \
+                    file://edl-v10;md5=9f6accb1afcb570f8be65039e2fcd49e \
+                    file://epl-v20;md5=2dd765ca47a05140be15ebafddbeadfe \
+                    file://NOTICE.md;md5=a7a91b4754c6f7995020d1b49bc829c6 \
+"
+DEPENDS = "uthash cjson"
+
+SRC_URI = "http://mosquitto.org/files/source/mosquitto-${PV}.tar.gz \
+           file://mosquitto.init \
+           file://1571.patch \
+"
+
+SRC_URI[sha256sum] = "4735b1d32e3f91c7a8896741d88a3022e89730a1ee897946decfa0df27039ac6"
+
+inherit systemd update-rc.d useradd cmake pkgconfig
+
+PACKAGECONFIG ??= "ssl dlt websockets \
+                  ${@bb.utils.filter('DISTRO_FEATURES','systemd', d)} \
+                  "
+
+PACKAGECONFIG[manpages] = "-DDOCUMENTATION=ON,-DDOCUMENTATION=OFF,libxslt-native docbook-xsl-stylesheets-native"
+PACKAGECONFIG[dns-srv] = "-DWITH_SRV=ON,-DWITH_SRV=OFF,c-ares"
+PACKAGECONFIG[ssl] = "-DWITH_TLS=ON -DWITH_TLS_PSK=ON -DWITH_EC=ON,-DWITH_TLS=OFF -DWITH_TLS_PSK=OFF -DWITH_EC=OFF,openssl"
+PACKAGECONFIG[systemd] = "-DWITH_SYSTEMD=ON,-DWITH_SYSTEMD=OFF,systemd"
+PACKAGECONFIG[websockets] = "-DWITH_WEBSOCKETS=ON,-DWITH_WEBSOCKETS=OFF,libwebsockets"
+PACKAGECONFIG[dlt] = "-DWITH_DLT=ON,-DWITH_DLT=OFF,dlt-daemon"
+
+EXTRA_OECMAKE = " \
+    -DWITH_BUNDLED_DEPS=OFF \
+    -DWITH_ADNS=ON \
+"
+
+do_install:append() {
+    install -d ${D}${systemd_unitdir}/system/
+    install -m 0644 ${S}/service/systemd/mosquitto.service.notify ${D}${systemd_unitdir}/system/mosquitto.service
+
+    install -d ${D}${sysconfdir}/init.d/
+    install -m 0755 ${WORKDIR}/mosquitto.init ${D}${sysconfdir}/init.d/mosquitto
+    sed -i -e 's,@SBINDIR@,${sbindir},g' \
+        -e 's,@BASE_SBINDIR@,${base_sbindir},g' \
+        -e 's,@LOCALSTATEDIR@,${localstatedir},g' \
+        -e 's,@SYSCONFDIR@,${sysconfdir},g' \
+        ${D}${sysconfdir}/init.d/mosquitto
+}
+
+PACKAGES += "libmosquitto1 libmosquittopp1 ${PN}-clients"
+
+PACKAGE_BEFORE_PN = "${PN}-examples"
+
+FILES:${PN} = "${sbindir}/mosquitto \
+               ${bindir}/mosquitto_passwd \
+               ${bindir}/mosquitto_ctrl \
+               ${libdir}/mosquitto_dynamic_security.so \
+               ${sysconfdir}/mosquitto \
+               ${sysconfdir}/init.d \
+               ${systemd_unitdir}/system/mosquitto.service \
+"
+
+CONFFILES:${PN} += "${sysconfdir}/mosquitto/mosquitto.conf"
+
+FILES:libmosquitto1 = "${libdir}/libmosquitto.so.*"
+
+FILES:libmosquittopp1 = "${libdir}/libmosquittopp.so.*"
+
+FILES:${PN}-clients = "${bindir}/mosquitto_pub \
+                       ${bindir}/mosquitto_sub \
+                       ${bindir}/mosquitto_rr \
+"
+
+FILES:${PN}-examples = "${sysconfdir}/mosquitto/*.example"
+
+SYSTEMD_SERVICE:${PN} = "mosquitto.service"
+
+INITSCRIPT_NAME = "mosquitto"
+INITSCRIPT_PARAMS = "defaults 30"
+
+USERADD_PACKAGES = "${PN}"
+USERADD_PARAM:${PN} = "--system --no-create-home --shell /bin/false \
+                       --user-group mosquitto"
+
+BBCLASSEXTEND += "native nativesdk"
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.38.0.bb b/meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.38.0.bb
index c8fea5d..ebd25a8 100644
--- a/meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.38.0.bb
+++ b/meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.38.0.bb
@@ -55,6 +55,8 @@
     -Dconfig_dns_rc_manager_default=${NETWORKMANAGER_DNS_RC_MANAGER_DEFAULT} \
     -Dconfig_dhcp_default=${NETWORKMANAGER_DHCP_DEFAULT} \
     -Ddhcpcanon=false \
+    -Diptables=${sbindir}/iptables \
+    -Dnft=${sbindir}/nft \
 "
 
 # stolen from https://github.com/void-linux/void-packages/blob/master/srcpkgs/NetworkManager/template
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/samba/samba/0001-smbtorture-skip-test-case-tfork_cmd_send.patch b/meta-openembedded/meta-networking/recipes-connectivity/samba/samba/0001-smbtorture-skip-test-case-tfork_cmd_send.patch
new file mode 100644
index 0000000..90ee317
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-connectivity/samba/samba/0001-smbtorture-skip-test-case-tfork_cmd_send.patch
@@ -0,0 +1,38 @@
+From 059b517f9ef6cbdc696e0983ce255b1728042827 Mon Sep 17 00:00:00 2001
+From: Yi Zhao <yi.zhao@windriver.com>
+Date: Thu, 25 Aug 2022 16:46:04 +0800
+Subject: [PATCH] smbtorture: skip test case tfork_cmd_send
+
+The test case tfork_cmd_send fails on target as it requires a script
+located in the source directory:
+
+$ smbtorture ncalrpc:localhost local.tfork.tfork_cmd_send
+test: tfork_cmd_send
+/buildarea/build/tmp/work/core2-64-poky-linux/samba/4.14.14-r0/samba-4.14.14/testprogs/blackbox/tfork.sh:
+Failed to exec child - No such file or directory
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
+---
+ lib/util/tests/tfork.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/lib/util/tests/tfork.c b/lib/util/tests/tfork.c
+index 70ae975..4826ce6 100644
+--- a/lib/util/tests/tfork.c
++++ b/lib/util/tests/tfork.c
+@@ -839,10 +839,6 @@ struct torture_suite *torture_local_tfork(TALLOC_CTX *mem_ctx)
+ 				      "tfork_threads",
+ 				      test_tfork_threads);
+ 
+-	torture_suite_add_simple_test(suite,
+-				      "tfork_cmd_send",
+-				      test_tfork_cmd_send);
+-
+ 	torture_suite_add_simple_test(suite,
+ 				      "tfork_event_file_handle",
+ 				      test_tfork_event_file_handle);
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/samba/samba/0001-waf-Fix-errors-with-Werror-implicit-function-declara.patch b/meta-openembedded/meta-networking/recipes-connectivity/samba/samba/0001-waf-Fix-errors-with-Werror-implicit-function-declara.patch
new file mode 100644
index 0000000..4a89f76
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-connectivity/samba/samba/0001-waf-Fix-errors-with-Werror-implicit-function-declara.patch
@@ -0,0 +1,32 @@
+From 28ec4c9323e67cd114a0465015c9f3c2e64e6829 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 27 Aug 2022 13:05:26 -0700
+Subject: [PATCH] waf: Fix errors with Werror=implicit-function-declaration
+ turned on
+
+Clang-15 turns this option into errors by default, and it results in
+rpath check failures
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ buildtools/wafsamba/samba_waf18.py | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/buildtools/wafsamba/samba_waf18.py b/buildtools/wafsamba/samba_waf18.py
+index 7a0a08e..c0d2c3e 100644
+--- a/buildtools/wafsamba/samba_waf18.py
++++ b/buildtools/wafsamba/samba_waf18.py
+@@ -209,7 +209,7 @@ def CHECK_LIBRARY_SUPPORT(conf, rpath=False, version_script=False, msg=None):
+         lib_node.parent.mkdir()
+         lib_node.write('int lib_func(void) { return 42; }\n', 'w')
+         main_node = bld.srcnode.make_node('main.c')
+-        main_node.write('int main(void) {return !(lib_func() == 42);}', 'w')
++        main_node.write('int lib_func(void); int main(void) {return !(lib_func() == 42);}', 'w')
+         linkflags = []
+         if version_script:
+             script = bld.srcnode.make_node('ldscript')
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/samba/samba_4.14.13.bb b/meta-openembedded/meta-networking/recipes-connectivity/samba/samba_4.14.13.bb
deleted file mode 100644
index 49e93fc..0000000
--- a/meta-openembedded/meta-networking/recipes-connectivity/samba/samba_4.14.13.bb
+++ /dev/null
@@ -1,347 +0,0 @@
-HOMEPAGE = "https://www.samba.org/"
-SECTION = "console/network"
-
-LICENSE = "GPL-3.0-or-later & LGPL-3.0-or-later & GPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://${COREBASE}/meta/files/common-licenses/LGPL-3.0-or-later;md5=c51d3eef3be114124d11349ca0d7e117 \
-                    file://${COREBASE}/meta/files/common-licenses/GPL-2.0-or-later;md5=fed54355545ffd980b814dab4a3b312c"
-
-SAMBA_MIRROR = "http://samba.org/samba/ftp"
-MIRRORS += "\
-${SAMBA_MIRROR}    http://mirror.internode.on.net/pub/samba \n \
-${SAMBA_MIRROR}    http://www.mirrorservice.org/sites/ftp.samba.org \n \
-"
-
-SRC_URI = "${SAMBA_MIRROR}/stable/samba-${PV}.tar.gz \
-           file://smb.conf \
-           file://volatiles.03_samba \
-           file://0001-Don-t-check-xsltproc-manpages.patch \
-           file://0002-do-not-import-target-module-while-cross-compile.patch \
-           file://0003-Add-config-option-without-valgrind.patch \
-           file://0004-Add-options-to-configure-the-use-of-libbsd.patch \
-           file://0005-samba-build-dnsserver_common-code.patch \
-           file://0001-Fix-pyext_PATTERN-for-cross-compilation.patch \
-           "
-
-SRC_URI:append:libc-musl = " \
-           file://netdb_defines.patch \
-           file://samba-pam.patch \
-           file://samba-4.3.9-remove-getpwent_r.patch \
-           file://cmocka-uintptr_t.patch \
-           file://samba-fix-musl-lib-without-innetgr.patch \
-           "
-
-SRC_URI[sha256sum] = "e1df792818a17d8d21faf33580d32939214694c92b84fb499464210d86a7ff75"
-
-UPSTREAM_CHECK_REGEX = "samba\-(?P<pver>4\.14(\.\d+)+).tar.gz"
-
-inherit systemd waf-samba cpan-base perlnative update-rc.d perl-version pkgconfig
-
-# CVE-2011-2411 is valnerble only on HP NonStop Servers.
-CVE_CHECK_IGNORE += "CVE-2011-2411" 
-
-# remove default added RDEPENDS on perl
-RDEPENDS:${PN}:remove = "perl"
-
-DEPENDS += "readline virtual/libiconv zlib popt libtalloc libtdb libtevent libldb libaio libpam libtasn1 jansson libparse-yapp-perl-native gnutls"
-
-inherit features_check
-REQUIRED_DISTRO_FEATURES = "pam"
-
-DEPENDS:append:libc-musl = " libtirpc"
-CFLAGS:append:libc-musl = " -I${STAGING_INCDIR}/tirpc"
-LDFLAGS:append:libc-musl = " -ltirpc"
-
-COMPATIBLE_HOST:riscv32 = "null"
-
-INITSCRIPT_NAME = "samba"
-INITSCRIPT_PARAMS = "start 20 3 5 . stop 20 0 1 6 ."
-
-SYSTEMD_PACKAGES = "${PN}-base ${PN}-ad-dc winbind"
-SYSTEMD_SERVICE:${PN}-base = "nmb.service smb.service"
-SYSTEMD_SERVICE:${PN}-ad-dc = "${@bb.utils.contains('PACKAGECONFIG', 'ad-dc', 'samba.service', '', d)}"
-SYSTEMD_SERVICE:winbind = "winbind.service"
-
-# There are prerequisite settings to enable ad-dc, so disable the service by default.
-# Reference:
-# https://wiki.samba.org/index.php/Setting_up_Samba_as_an_Active_Directory_Domain_Controller
-SYSTEMD_AUTO_ENABLE:${PN}-ad-dc = "disable"
-
-#cross_compile cannot use preforked process, since fork process earlier than point subproces.popen
-#to cross Popen
-export WAF_NO_PREFORK="yes"
-
-# Use krb5.  Build active domain controller.
-#
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd zeroconf', d)} \
-                   acl cups ad-dc ldap mitkrb5 \
-"
-
-RDEPENDS:${PN}-ctdb-tests += "bash util-linux-getopt"
-
-PACKAGECONFIG[acl] = "--with-acl-support,--without-acl-support,acl"
-PACKAGECONFIG[fam] = "--with-fam,--without-fam,gamin"
-PACKAGECONFIG[cups] = "--enable-cups,--disable-cups,cups"
-PACKAGECONFIG[ldap] = "--with-ldap,--without-ldap,openldap"
-PACKAGECONFIG[sasl] = ",,cyrus-sasl"
-PACKAGECONFIG[systemd] = "--with-systemd,--without-systemd,systemd"
-PACKAGECONFIG[dmapi] = "--with-dmapi,--without-dmapi,dmapi"
-PACKAGECONFIG[zeroconf] = "--enable-avahi,--disable-avahi,avahi"
-PACKAGECONFIG[valgrind] = ",--without-valgrind,valgrind,"
-PACKAGECONFIG[lttng] = "--with-lttng, --without-lttng,lttng-ust"
-PACKAGECONFIG[archive] = "--with-libarchive, --without-libarchive, libarchive"
-PACKAGECONFIG[libunwind] = ", , libunwind"
-PACKAGECONFIG[gpgme] = ",--without-gpgme,,"
-PACKAGECONFIG[lmdb] = ",--without-ldb-lmdb,lmdb,"
-PACKAGECONFIG[libbsd] = "--with-libbsd, --without-libbsd, libbsd"
-PACKAGECONFIG[ad-dc] = "--with-experimental-mit-ad-dc,--without-ad-dc,python3-markdown python3-dnspython,"
-PACKAGECONFIG[mitkrb5] = "--with-system-mitkrb5 --with-system-mitkdc=/usr/sbin/krb5kdc,,krb5,"
-
-SAMBA4_IDMAP_MODULES="idmap_ad,idmap_rid,idmap_adex,idmap_hash,idmap_tdb2"
-SAMBA4_PDB_MODULES="pdb_tdbsam,${@bb.utils.contains('PACKAGECONFIG', 'ldap', 'pdb_ldap,', '', d)}pdb_ads,pdb_smbpasswd,pdb_wbc_sam,pdb_samba4"
-SAMBA4_AUTH_MODULES="auth_unix,auth_wbc,auth_server,auth_netlogond,auth_script,auth_samba4"
-SAMBA4_MODULES="${SAMBA4_IDMAP_MODULES},${SAMBA4_PDB_MODULES},${SAMBA4_AUTH_MODULES}"
-
-# These libraries are supposed to replace others supplied by packages, but decorate the names of
-# .so files so there will not be a conflict.  This is not done consistantly, so be very careful
-# when adding to this list.
-#
-SAMBA4_LIBS="heimdal,cmocka,NONE"
-
-EXTRA_OECONF += "--enable-fhs \
-                 --with-piddir=/run \
-                 --with-sockets-dir=/run/samba \
-                 --with-modulesdir=${libdir}/samba \
-                 --with-lockdir=${localstatedir}/lib/samba \
-                 --with-cachedir=${localstatedir}/lib/samba \
-                 --disable-rpath-install \
-                 --with-shared-modules=${SAMBA4_MODULES} \
-                 --bundled-libraries=${SAMBA4_LIBS} \
-                 ${@oe.utils.conditional('TARGET_ARCH', 'x86_64', '', '--disable-glusterfs', d)} \
-                 --with-cluster-support \
-                 --with-profiling-data \
-                 --with-libiconv=${STAGING_DIR_HOST}${prefix} \
-                 --with-pam --with-pammodulesdir=${base_libdir}/security \
-                "
-
-LDFLAGS += "-Wl,-z,relro,-z,now ${@bb.utils.contains('DISTRO_FEATURES', 'ld-is-gold', ' -fuse-ld=bfd ', '', d)}"
-
-do_configure:append () {
-    cd ${S}/pidl/
-    perl Makefile.PL PREFIX=${prefix}
-    sed -e 's,VENDORPREFIX)/lib/perl,VENDORPREFIX)/${baselib}/perl,g' \
-        -e 's,PERLPREFIX)/lib/perl,PERLPREFIX)/${baselib}/perl,g' -i Makefile
-
-}
-
-do_compile:append () {
-    oe_runmake -C ${S}/pidl
-}
-
-do_install:append() {
-    for section in 1 5 7; do
-        install -d ${D}${mandir}/man$section
-        install -m 0644 ctdb/doc/*.$section ${D}${mandir}/man$section
-    done
-    for section in 1 5 7 8; do
-        install -d ${D}${mandir}/man$section
-        install -m 0644 docs/manpages/*.$section ${D}${mandir}/man$section
-    done
-
-    install -d ${D}${systemd_system_unitdir}
-    install -m 0644 ${S}/bin/default/packaging/systemd/*.service ${D}${systemd_system_unitdir}/
-    sed -e 's,\(ExecReload=\).*\(/kill\),\1${base_bindir}\2,' \
-        -e 's,/etc/sysconfig/samba,${sysconfdir}/default/samba,' \
-        -i ${D}${systemd_system_unitdir}/*.service
-
-    if [ "${@bb.utils.contains('PACKAGECONFIG', 'ad-dc', 'yes', 'no', d)}" = "no" ]; then
-        rm -f ${D}${systemd_system_unitdir}/samba.service
-    fi
-
-    install -d ${D}${sysconfdir}/tmpfiles.d
-    install -m644 packaging/systemd/samba.conf.tmp ${D}${sysconfdir}/tmpfiles.d/samba.conf
-    echo "d ${localstatedir}/log/samba 0755 root root -" \
-        >> ${D}${sysconfdir}/tmpfiles.d/samba.conf
-    install -d ${D}${sysconfdir}/init.d
-    install -m 0755 packaging/sysv/samba.init ${D}${sysconfdir}/init.d/samba
-    sed -e 's,/opt/samba/bin,${sbindir},g' \
-        -e 's,/opt/samba/smb.conf,${sysconfdir}/samba/smb.conf,g' \
-        -e 's,/opt/samba/log,${localstatedir}/log/samba,g' \
-        -e 's,/etc/init.d/samba.server,${sysconfdir}/init.d/samba,g' \
-        -e 's,/usr/bin,${base_bindir},g' \
-        -i ${D}${sysconfdir}/init.d/samba
-
-    install -d ${D}${sysconfdir}/samba
-    echo "127.0.0.1 localhost" > ${D}${sysconfdir}/samba/lmhosts
-    install -m644 ${WORKDIR}/smb.conf ${D}${sysconfdir}/samba/smb.conf
-    install -D -m 644 ${WORKDIR}/volatiles.03_samba ${D}${sysconfdir}/default/volatiles/03_samba
-
-    install -d ${D}${sysconfdir}/default
-    install -m644 packaging/systemd/samba.sysconfig ${D}${sysconfdir}/default/samba
-
-    # the items are from ctdb/tests/run_tests.sh
-    for d in cunit eventd eventscripts onnode shellcheck takeover takeover_helper tool; do
-        testdir=${D}${datadir}/ctdb-tests/UNIT/$d
-        install -d $testdir
-        cp ${S}/ctdb/tests/UNIT/$d/*.sh $testdir
-        cp -r ${S}/ctdb/tests/UNIT/$d/scripts ${S}/ctdb/tests/UNIT/$d/stubs $testdir || true
-    done
-
-    # fix file-rdeps qa warning
-    if [ -f ${D}${bindir}/onnode ]; then
-        sed -i 's:\(#!/bin/\)bash:\1sh:' ${D}${bindir}/onnode
-    fi
-
-    chmod 0750 ${D}${sysconfdir}/sudoers.d || true
-    rm -rf ${D}/run ${D}${localstatedir}/run ${D}${localstatedir}/log
-    
-    for f in samba-gpupdate samba_upgradedns samba_spnupdate samba_kcc samba_dnsupdate samba_downgrade_db; do
-        if [ -f "${D}${sbindir}/$f" ]; then
-            sed -i -e 's,${PYTHON},/usr/bin/env python3,g' ${D}${sbindir}/$f
-        fi
-    done
-    if [ -f "${D}${bindir}/samba-tool" ]; then
-        sed -i -e 's,${PYTHON},/usr/bin/env python3,g' ${D}${bindir}/samba-tool
-    fi
-
-    oe_runmake -C ${S}/pidl DESTDIR=${D} install_vendor
-    find ${D}${libdir}/ -type f -name "perllocal.pod" | xargs rm -f
-    rm -rf ${D}${libdir}/perl5/vendor_perl/${PERLVERSION}/${BUILD_SYS}/auto/Parse/Pidl/.packlist
-    sed -i -e '1s,#!.*perl,#!${bindir}/env perl,' ${D}${bindir}/pidl
-}
-
-PACKAGES =+ "${PN}-python3 ${PN}-pidl \
-             ${PN}-dsdb-modules ${PN}-testsuite registry-tools \
-             winbind \
-             ${PN}-common ${PN}-base ${PN}-ad-dc ${PN}-ctdb-tests \
-             smbclient ${PN}-client ${PN}-server ${PN}-test"
-
-python samba_populate_packages() {
-    def module_hook(file, pkg, pattern, format, basename):
-        pn = d.getVar('PN')
-        d.appendVar('RRECOMMENDS:%s-base' % pn, ' %s' % pkg)
-
-    mlprefix = d.getVar('MLPREFIX') or ''
-    pam_libdir = d.expand('${base_libdir}/security')
-    pam_pkgname = mlprefix + 'pam-plugin%s'
-    do_split_packages(d, pam_libdir, r'^pam_(.*)\.so$', pam_pkgname, 'PAM plugin for %s', extra_depends='', prepend=True)
-
-    libdir = d.getVar('libdir')
-    do_split_packages(d, libdir, r'^lib(.*)\.so\..*$', 'lib%s', 'Samba %s library', extra_depends='${PN}-common', prepend=True, allow_links=True)
-    pkglibdir = '%s/samba' % libdir
-    do_split_packages(d, pkglibdir, r'^lib(.*)\.so$', 'lib%s', 'Samba %s library', extra_depends='${PN}-common', prepend=True)
-    moduledir = '%s/samba/auth' % libdir
-    do_split_packages(d, moduledir, r'^(.*)\.so$', 'samba-auth-%s', 'Samba %s authentication backend', hook=module_hook, extra_depends='', prepend=True)
-    moduledir = '%s/samba/pdb' % libdir
-    do_split_packages(d, moduledir, r'^(.*)\.so$', 'samba-pdb-%s', 'Samba %s password backend', hook=module_hook, extra_depends='', prepend=True)
-}
-
-PACKAGESPLITFUNCS:prepend = "samba_populate_packages "
-PACKAGES_DYNAMIC = "samba-auth-.* samba-pdb-.*"
-
-RDEPENDS:${PN} += "${PN}-base ${PN}-python3 ${PN}-dsdb-modules python3"
-RDEPENDS:${PN}-python3 += "pytalloc python3-tdb pyldb"
-
-FILES:${PN}-base = "${sbindir}/nmbd \
-                    ${sbindir}/smbd \
-                    ${sysconfdir}/init.d \
-                    ${systemd_system_unitdir}/nmb.service \
-                    ${systemd_system_unitdir}/smb.service"
-
-FILES:${PN}-ad-dc = "${sbindir}/samba \
-                     ${systemd_system_unitdir}/samba.service \
-                     ${libdir}/krb5/plugins/kdb/samba.so \
-"
-RDEPENDS:${PN}-ad-dc = "krb5-kdc"
-
-FILES:${PN}-ctdb-tests = "${bindir}/ctdb_run_tests \
-                          ${bindir}/ctdb_run_cluster_tests \
-                          ${sysconfdir}/ctdb/nodes \
-                          ${datadir}/ctdb-tests \
-                          ${datadir}/ctdb/tests \
-                          ${localstatedir}/lib/ctdb \
-                         "
-
-FILES:${BPN}-common = "${sysconfdir}/default \
-                       ${sysconfdir}/samba \
-                       ${sysconfdir}/tmpfiles.d \
-                       ${localstatedir}/lib/samba \
-                       ${localstatedir}/spool/samba \
-"
-
-FILES:${PN} += "${libdir}/vfs/*.so \
-                ${libdir}/charset/*.so \
-                ${libdir}/*.dat \
-                ${libdir}/auth/*.so \
-                ${datadir}/ctdb/events/* \
-"
-
-FILES:${PN}-dsdb-modules = "${libdir}/samba/ldb"
-
-FILES:${PN}-testsuite = "${bindir}/gentest \
-                         ${bindir}/locktest \
-                         ${bindir}/masktest \
-                         ${bindir}/ndrdump \
-                         ${bindir}/smbtorture"
-
-FILES:registry-tools = "${bindir}/regdiff \
-                        ${bindir}/regpatch \
-                        ${bindir}/regshell \
-                        ${bindir}/regtree"
-
-FILES:winbind = "${sbindir}/winbindd \
-                 ${bindir}/wbinfo \
-                 ${bindir}/ntlm_auth \
-                 ${libdir}/samba/idmap \
-                 ${libdir}/samba/nss_info \
-                 ${libdir}/winbind_krb5_locator.so \
-                 ${libdir}/winbind-krb5-localauth.so \
-                 ${sysconfdir}/init.d/winbind \
-                 ${systemd_system_unitdir}/winbind.service"
-
-FILES:${PN}-python3 = "${PYTHON_SITEPACKAGES_DIR}"
-
-FILES:smbclient = "${bindir}/cifsdd \
-                   ${bindir}/rpcclient \
-                   ${bindir}/smbcacls \
-                   ${bindir}/smbclient \
-                   ${bindir}/smbcquotas \
-                   ${bindir}/smbget \
-                   ${bindir}/smbspool \
-                   ${bindir}/smbtar \
-                   ${bindir}/smbtree \
-                   ${libdir}/samba/smbspool_krb5_wrapper"
-
-RDEPENDS:${PN}-pidl:append = " perl libparse-yapp-perl"
-FILES:${PN}-pidl = "${bindir}/pidl \
-                    ${libdir}/perl5 \
-                   "
-
-RDEPENDS:${PN}-client = "\
-    smbclient \
-    winbind \
-    registry-tools \
-    ${PN}-pidl \
-    "
-
-ALLOW_EMPTY:${PN}-client = "1"
-
-RDEPENDS:${PN}-server = "\
-    ${PN} \
-    winbind \
-    registry-tools \
-    "
-
-ALLOW_EMPTY:${PN}-server = "1"
-
-RDEPENDS:${PN}-test = "\
-    ${PN}-ctdb-tests \
-    ${PN}-testsuite \
-    "
-
-ALLOW_EMPTY:${PN}-test = "1"
-
-# Patch for CVE-2018-1050 is applied in version 4.5.15, 4.6.13, 4.7.5.
-# Patch for CVE-2018-1057 is applied in version 4.3.13, 4.4.16.
-CVE_CHECK_IGNORE += "CVE-2018-1050"
-CVE_CHECK_IGNORE += "CVE-2018-1057"
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/samba/samba_4.14.14.bb b/meta-openembedded/meta-networking/recipes-connectivity/samba/samba_4.14.14.bb
new file mode 100644
index 0000000..f88dee6
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-connectivity/samba/samba_4.14.14.bb
@@ -0,0 +1,351 @@
+HOMEPAGE = "https://www.samba.org/"
+SECTION = "console/network"
+
+LICENSE = "GPL-3.0-or-later & LGPL-3.0-or-later & GPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
+                    file://${COREBASE}/meta/files/common-licenses/LGPL-3.0-or-later;md5=c51d3eef3be114124d11349ca0d7e117 \
+                    file://${COREBASE}/meta/files/common-licenses/GPL-2.0-or-later;md5=fed54355545ffd980b814dab4a3b312c"
+
+SAMBA_MIRROR = "http://samba.org/samba/ftp"
+MIRRORS += "\
+${SAMBA_MIRROR}    http://mirror.internode.on.net/pub/samba \n \
+${SAMBA_MIRROR}    http://www.mirrorservice.org/sites/ftp.samba.org \n \
+"
+
+SRC_URI = "${SAMBA_MIRROR}/stable/samba-${PV}.tar.gz \
+           file://smb.conf \
+           file://volatiles.03_samba \
+           file://0001-Don-t-check-xsltproc-manpages.patch \
+           file://0002-do-not-import-target-module-while-cross-compile.patch \
+           file://0003-Add-config-option-without-valgrind.patch \
+           file://0004-Add-options-to-configure-the-use-of-libbsd.patch \
+           file://0005-samba-build-dnsserver_common-code.patch \
+           file://0001-Fix-pyext_PATTERN-for-cross-compilation.patch \
+           file://0001-smbtorture-skip-test-case-tfork_cmd_send.patch \
+           file://0001-waf-Fix-errors-with-Werror-implicit-function-declara.patch \
+           "
+
+SRC_URI:append:libc-musl = " \
+           file://netdb_defines.patch \
+           file://samba-pam.patch \
+           file://samba-4.3.9-remove-getpwent_r.patch \
+           file://cmocka-uintptr_t.patch \
+           file://samba-fix-musl-lib-without-innetgr.patch \
+           "
+
+SRC_URI[sha256sum] = "abd5e9e6aa45e55114b188ba189ebdfc8fd3d7718d43f749e477ce7f791e5519"
+
+UPSTREAM_CHECK_REGEX = "samba\-(?P<pver>4\.14(\.\d+)+).tar.gz"
+
+inherit systemd waf-samba cpan-base perlnative update-rc.d perl-version pkgconfig
+
+# CVE-2011-2411 is valnerble only on HP NonStop Servers.
+CVE_CHECK_IGNORE += "CVE-2011-2411" 
+
+# remove default added RDEPENDS on perl
+RDEPENDS:${PN}:remove = "perl"
+
+DEPENDS += "readline virtual/libiconv zlib popt libtalloc libtdb libtevent libldb libaio libpam libtasn1 jansson libparse-yapp-perl-native gnutls"
+
+inherit features_check
+REQUIRED_DISTRO_FEATURES = "pam"
+
+DEPENDS:append:libc-musl = " libtirpc"
+CFLAGS:append:libc-musl = " -I${STAGING_INCDIR}/tirpc"
+LDFLAGS:append:libc-musl = " -ltirpc"
+
+COMPATIBLE_HOST:riscv32 = "null"
+
+INITSCRIPT_NAME = "samba"
+INITSCRIPT_PARAMS = "start 20 3 5 . stop 20 0 1 6 ."
+
+SYSTEMD_PACKAGES = "${PN}-base ${PN}-ad-dc winbind"
+SYSTEMD_SERVICE:${PN}-base = "nmb.service smb.service"
+SYSTEMD_SERVICE:${PN}-ad-dc = "${@bb.utils.contains('PACKAGECONFIG', 'ad-dc', 'samba.service', '', d)}"
+SYSTEMD_SERVICE:winbind = "winbind.service"
+
+# There are prerequisite settings to enable ad-dc, so disable the service by default.
+# Reference:
+# https://wiki.samba.org/index.php/Setting_up_Samba_as_an_Active_Directory_Domain_Controller
+SYSTEMD_AUTO_ENABLE:${PN}-ad-dc = "disable"
+
+# cross-compile cannot use the preforked process, since the fork happens before subprocess.Popen
+# is pointed to the cross Popen
+export WAF_NO_PREFORK="yes"
+
+# Use krb5.  Build active domain controller.
+#
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd zeroconf', d)} \
+                   acl cups ad-dc ldap mitkrb5 \
+"
+
+RDEPENDS:${PN}-ctdb-tests += "bash util-linux-getopt"
+
+PACKAGECONFIG[acl] = "--with-acl-support,--without-acl-support,acl"
+PACKAGECONFIG[fam] = "--with-fam,--without-fam,gamin"
+PACKAGECONFIG[cups] = "--enable-cups,--disable-cups,cups"
+PACKAGECONFIG[ldap] = "--with-ldap,--without-ldap,openldap"
+PACKAGECONFIG[sasl] = ",,cyrus-sasl"
+PACKAGECONFIG[systemd] = "--with-systemd,--without-systemd,systemd"
+PACKAGECONFIG[dmapi] = "--with-dmapi,--without-dmapi,dmapi"
+PACKAGECONFIG[zeroconf] = "--enable-avahi,--disable-avahi,avahi"
+PACKAGECONFIG[valgrind] = ",--without-valgrind,valgrind,"
+PACKAGECONFIG[lttng] = "--with-lttng, --without-lttng,lttng-ust"
+PACKAGECONFIG[archive] = "--with-libarchive, --without-libarchive, libarchive"
+PACKAGECONFIG[libunwind] = ", , libunwind"
+PACKAGECONFIG[gpgme] = ",--without-gpgme,,"
+PACKAGECONFIG[lmdb] = ",--without-ldb-lmdb,lmdb,"
+PACKAGECONFIG[libbsd] = "--with-libbsd, --without-libbsd, libbsd"
+PACKAGECONFIG[ad-dc] = "--with-experimental-mit-ad-dc,--without-ad-dc,python3-markdown python3-dnspython,"
+PACKAGECONFIG[mitkrb5] = "--with-system-mitkrb5 --with-system-mitkdc=/usr/sbin/krb5kdc,,krb5,"
+
+SAMBA4_IDMAP_MODULES="idmap_ad,idmap_rid,idmap_adex,idmap_hash,idmap_tdb2"
+SAMBA4_PDB_MODULES="pdb_tdbsam,${@bb.utils.contains('PACKAGECONFIG', 'ldap', 'pdb_ldap,', '', d)}pdb_ads,pdb_smbpasswd,pdb_wbc_sam,pdb_samba4"
+SAMBA4_AUTH_MODULES="auth_unix,auth_wbc,auth_server,auth_netlogond,auth_script,auth_samba4"
+SAMBA4_MODULES="${SAMBA4_IDMAP_MODULES},${SAMBA4_PDB_MODULES},${SAMBA4_AUTH_MODULES}"
+
+# These libraries are supposed to replace others supplied by packages, but decorate the names of
+# .so files so there will not be a conflict.  This is not done consistently, so be very careful
+# when adding to this list.
+#
+SAMBA4_LIBS="heimdal,cmocka,NONE"
+
+EXTRA_OECONF += "--enable-fhs \
+                 --with-piddir=/run \
+                 --with-sockets-dir=/run/samba \
+                 --with-modulesdir=${libdir}/samba \
+                 --with-privatelibdir=${libdir}/samba \
+                 --with-lockdir=${localstatedir}/lib/samba \
+                 --with-cachedir=${localstatedir}/lib/samba \
+                 --disable-rpath-install \
+                 --disable-rpath \
+                 --with-shared-modules=${SAMBA4_MODULES} \
+                 --bundled-libraries=${SAMBA4_LIBS} \
+                 ${@oe.utils.conditional('TARGET_ARCH', 'x86_64', '', '--disable-glusterfs', d)} \
+                 --with-cluster-support \
+                 --with-profiling-data \
+                 --with-libiconv=${STAGING_DIR_HOST}${prefix} \
+                 --with-pam --with-pammodulesdir=${base_libdir}/security \
+                "
+
+LDFLAGS += "-Wl,-z,relro,-z,now ${@bb.utils.contains('DISTRO_FEATURES', 'ld-is-gold', ' -fuse-ld=bfd ', '', d)}"
+
+do_configure:append () {
+    cd ${S}/pidl/
+    perl Makefile.PL PREFIX=${prefix}
+    sed -e 's,VENDORPREFIX)/lib/perl,VENDORPREFIX)/${baselib}/perl,g' \
+        -e 's,PERLPREFIX)/lib/perl,PERLPREFIX)/${baselib}/perl,g' -i Makefile
+
+}
+
+do_compile:append () {
+    oe_runmake -C ${S}/pidl
+}
+
+do_install:append() {
+    for section in 1 5 7; do
+        install -d ${D}${mandir}/man$section
+        install -m 0644 ctdb/doc/*.$section ${D}${mandir}/man$section
+    done
+    for section in 1 5 7 8; do
+        install -d ${D}${mandir}/man$section
+        install -m 0644 docs/manpages/*.$section ${D}${mandir}/man$section
+    done
+
+    install -d ${D}${systemd_system_unitdir}
+    install -m 0644 ${S}/bin/default/packaging/systemd/*.service ${D}${systemd_system_unitdir}/
+    sed -e 's,\(ExecReload=\).*\(/kill\),\1${base_bindir}\2,' \
+        -e 's,/etc/sysconfig/samba,${sysconfdir}/default/samba,' \
+        -i ${D}${systemd_system_unitdir}/*.service
+
+    if [ "${@bb.utils.contains('PACKAGECONFIG', 'ad-dc', 'yes', 'no', d)}" = "no" ]; then
+        rm -f ${D}${systemd_system_unitdir}/samba.service
+    fi
+
+    install -d ${D}${sysconfdir}/tmpfiles.d
+    install -m644 packaging/systemd/samba.conf.tmp ${D}${sysconfdir}/tmpfiles.d/samba.conf
+    echo "d ${localstatedir}/log/samba 0755 root root -" \
+        >> ${D}${sysconfdir}/tmpfiles.d/samba.conf
+    install -d ${D}${sysconfdir}/init.d
+    install -m 0755 packaging/sysv/samba.init ${D}${sysconfdir}/init.d/samba
+    sed -e 's,/opt/samba/bin,${sbindir},g' \
+        -e 's,/opt/samba/smb.conf,${sysconfdir}/samba/smb.conf,g' \
+        -e 's,/opt/samba/log,${localstatedir}/log/samba,g' \
+        -e 's,/etc/init.d/samba.server,${sysconfdir}/init.d/samba,g' \
+        -e 's,/usr/bin,${base_bindir},g' \
+        -i ${D}${sysconfdir}/init.d/samba
+
+    install -d ${D}${sysconfdir}/samba
+    echo "127.0.0.1 localhost" > ${D}${sysconfdir}/samba/lmhosts
+    install -m644 ${WORKDIR}/smb.conf ${D}${sysconfdir}/samba/smb.conf
+    install -D -m 644 ${WORKDIR}/volatiles.03_samba ${D}${sysconfdir}/default/volatiles/03_samba
+
+    install -d ${D}${sysconfdir}/default
+    install -m644 packaging/systemd/samba.sysconfig ${D}${sysconfdir}/default/samba
+
+    # the items are from ctdb/tests/run_tests.sh
+    for d in cunit eventd eventscripts onnode shellcheck takeover takeover_helper tool; do
+        testdir=${D}${datadir}/ctdb-tests/UNIT/$d
+        install -d $testdir
+        cp ${S}/ctdb/tests/UNIT/$d/*.sh $testdir
+        cp -r ${S}/ctdb/tests/UNIT/$d/scripts ${S}/ctdb/tests/UNIT/$d/stubs $testdir || true
+    done
+
+    # fix file-rdeps qa warning
+    if [ -f ${D}${bindir}/onnode ]; then
+        sed -i 's:\(#!/bin/\)bash:\1sh:' ${D}${bindir}/onnode
+    fi
+
+    chmod 0750 ${D}${sysconfdir}/sudoers.d || true
+    rm -rf ${D}/run ${D}${localstatedir}/run ${D}${localstatedir}/log
+    
+    for f in samba-gpupdate samba_upgradedns samba_spnupdate samba_kcc samba_dnsupdate samba_downgrade_db; do
+        if [ -f "${D}${sbindir}/$f" ]; then
+            sed -i -e 's,${PYTHON},/usr/bin/env python3,g' ${D}${sbindir}/$f
+        fi
+    done
+    if [ -f "${D}${bindir}/samba-tool" ]; then
+        sed -i -e 's,${PYTHON},/usr/bin/env python3,g' ${D}${bindir}/samba-tool
+    fi
+
+    oe_runmake -C ${S}/pidl DESTDIR=${D} install_vendor
+    find ${D}${libdir}/ -type f -name "perllocal.pod" | xargs rm -f
+    rm -rf ${D}${libdir}/perl5/vendor_perl/${PERLVERSION}/${BUILD_SYS}/auto/Parse/Pidl/.packlist
+    sed -i -e '1s,#!.*perl,#!${bindir}/env perl,' ${D}${bindir}/pidl
+}
+
+PACKAGES =+ "${PN}-python3 ${PN}-pidl \
+             ${PN}-dsdb-modules ${PN}-testsuite registry-tools \
+             winbind \
+             ${PN}-common ${PN}-base ${PN}-ad-dc ${PN}-ctdb-tests \
+             smbclient ${PN}-client ${PN}-server ${PN}-test"
+
+python samba_populate_packages() {
+    def module_hook(file, pkg, pattern, format, basename):
+        pn = d.getVar('PN')
+        d.appendVar('RRECOMMENDS:%s-base' % pn, ' %s' % pkg)
+
+    mlprefix = d.getVar('MLPREFIX') or ''
+    pam_libdir = d.expand('${base_libdir}/security')
+    pam_pkgname = mlprefix + 'pam-plugin%s'
+    do_split_packages(d, pam_libdir, r'^pam_(.*)\.so$', pam_pkgname, 'PAM plugin for %s', extra_depends='', prepend=True)
+
+    libdir = d.getVar('libdir')
+    do_split_packages(d, libdir, r'^lib(.*)\.so\..*$', 'lib%s', 'Samba %s library', extra_depends='${PN}-common', prepend=True, allow_links=True)
+    pkglibdir = '%s/samba' % libdir
+    do_split_packages(d, pkglibdir, r'^lib(.*)\.so$', 'lib%s', 'Samba %s library', extra_depends='${PN}-common', prepend=True)
+    moduledir = '%s/samba/auth' % libdir
+    do_split_packages(d, moduledir, r'^(.*)\.so$', 'samba-auth-%s', 'Samba %s authentication backend', hook=module_hook, extra_depends='', prepend=True)
+    moduledir = '%s/samba/pdb' % libdir
+    do_split_packages(d, moduledir, r'^(.*)\.so$', 'samba-pdb-%s', 'Samba %s password backend', hook=module_hook, extra_depends='', prepend=True)
+}
+
+PACKAGESPLITFUNCS:prepend = "samba_populate_packages "
+PACKAGES_DYNAMIC = "samba-auth-.* samba-pdb-.*"
+
+RDEPENDS:${PN} += "${PN}-base ${PN}-python3 ${PN}-dsdb-modules python3"
+RDEPENDS:${PN}-python3 += "pytalloc python3-tdb pyldb"
+
+FILES:${PN}-base = "${sbindir}/nmbd \
+                    ${sbindir}/smbd \
+                    ${sysconfdir}/init.d \
+                    ${systemd_system_unitdir}/nmb.service \
+                    ${systemd_system_unitdir}/smb.service"
+
+FILES:${PN}-ad-dc = "${sbindir}/samba \
+                     ${systemd_system_unitdir}/samba.service \
+                     ${libdir}/krb5/plugins/kdb/samba.so \
+"
+RDEPENDS:${PN}-ad-dc = "krb5-kdc"
+
+FILES:${PN}-ctdb-tests = "${bindir}/ctdb_run_tests \
+                          ${bindir}/ctdb_run_cluster_tests \
+                          ${sysconfdir}/ctdb/nodes \
+                          ${datadir}/ctdb-tests \
+                          ${datadir}/ctdb/tests \
+                          ${localstatedir}/lib/ctdb \
+                         "
+
+FILES:${BPN}-common = "${sysconfdir}/default \
+                       ${sysconfdir}/samba \
+                       ${sysconfdir}/tmpfiles.d \
+                       ${localstatedir}/lib/samba \
+                       ${localstatedir}/spool/samba \
+"
+
+FILES:${PN} += "${libdir}/vfs/*.so \
+                ${libdir}/charset/*.so \
+                ${libdir}/*.dat \
+                ${libdir}/auth/*.so \
+                ${datadir}/ctdb/events/* \
+"
+
+FILES:${PN}-dsdb-modules = "${libdir}/samba/ldb"
+
+FILES:${PN}-testsuite = "${bindir}/gentest \
+                         ${bindir}/locktest \
+                         ${bindir}/masktest \
+                         ${bindir}/ndrdump \
+                         ${bindir}/smbtorture"
+
+FILES:registry-tools = "${bindir}/regdiff \
+                        ${bindir}/regpatch \
+                        ${bindir}/regshell \
+                        ${bindir}/regtree"
+
+FILES:winbind = "${sbindir}/winbindd \
+                 ${bindir}/wbinfo \
+                 ${bindir}/ntlm_auth \
+                 ${libdir}/samba/idmap \
+                 ${libdir}/samba/nss_info \
+                 ${libdir}/winbind_krb5_locator.so \
+                 ${libdir}/winbind-krb5-localauth.so \
+                 ${sysconfdir}/init.d/winbind \
+                 ${systemd_system_unitdir}/winbind.service"
+
+FILES:${PN}-python3 = "${PYTHON_SITEPACKAGES_DIR}"
+
+FILES:smbclient = "${bindir}/cifsdd \
+                   ${bindir}/rpcclient \
+                   ${bindir}/smbcacls \
+                   ${bindir}/smbclient \
+                   ${bindir}/smbcquotas \
+                   ${bindir}/smbget \
+                   ${bindir}/smbspool \
+                   ${bindir}/smbtar \
+                   ${bindir}/smbtree \
+                   ${libdir}/samba/smbspool_krb5_wrapper"
+
+RDEPENDS:${PN}-pidl:append = " perl libparse-yapp-perl"
+FILES:${PN}-pidl = "${bindir}/pidl \
+                    ${libdir}/perl5 \
+                   "
+
+RDEPENDS:${PN}-client = "\
+    smbclient \
+    winbind \
+    registry-tools \
+    ${PN}-pidl \
+    "
+
+ALLOW_EMPTY:${PN}-client = "1"
+
+RDEPENDS:${PN}-server = "\
+    ${PN} \
+    winbind \
+    registry-tools \
+    "
+
+ALLOW_EMPTY:${PN}-server = "1"
+
+RDEPENDS:${PN}-test = "\
+    ${PN}-ctdb-tests \
+    ${PN}-testsuite \
+    "
+
+ALLOW_EMPTY:${PN}-test = "1"
+
+# Patch for CVE-2018-1050 is applied in version 4.5.15, 4.6.13, 4.7.5.
+# Patch for CVE-2018-1057 is applied in version 4.3.13, 4.4.16.
+CVE_CHECK_IGNORE += "CVE-2018-1050"
+CVE_CHECK_IGNORE += "CVE-2018-1057"
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/sshpass/sshpass_1.09.bb b/meta-openembedded/meta-networking/recipes-connectivity/sshpass/sshpass_1.09.bb
new file mode 100644
index 0000000..5c52437
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-connectivity/sshpass/sshpass_1.09.bb
@@ -0,0 +1,11 @@
+DESCRIPTION = "Non-interactive ssh password auth"
+HOMEPAGE = "http://sshpass.sourceforge.net/"
+SECTION = "console/network"
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.gz"
+
+SRC_URI[sha256sum] = "71746e5e057ffe9b00b44ac40453bf47091930cba96bbea8dc48717dedc49fb7"
+
+inherit autotools
diff --git a/meta-openembedded/meta-networking/recipes-daemons/autofs/autofs/mount_conflict.patch b/meta-openembedded/meta-networking/recipes-daemons/autofs/autofs/mount_conflict.patch
new file mode 100644
index 0000000..e2a94bf
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-daemons/autofs/autofs/mount_conflict.patch
@@ -0,0 +1,30 @@
+Avoid conflicts between sys/mount.h and linux/mount.h
+
+linux/fs.h includes linux/mount.h and this include file is unused so
+do not include it and avoid conflict too with glibc 2.36+ see [1]
+
+[1] https://sourceware.org/glibc/wiki/Release/2.36#Usage_of_.3Clinux.2Fmount.h.3E_and_.3Csys.2Fmount.h.3E
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+--- a/modules/parse_amd.c
++++ b/modules/parse_amd.c
+@@ -27,7 +27,6 @@
+ #include <sys/utsname.h>
+ #include <netinet/in.h>
+ #include <sys/mount.h>
+-#include <linux/fs.h>
+ 
+ #define MODULE_PARSE
+ #include "automount.h"
+--- a/modules/parse_sun.c
++++ b/modules/parse_sun.c
+@@ -30,7 +30,6 @@
+ #include <sys/utsname.h>
+ #include <netinet/in.h>
+ #include <sys/mount.h>
+-#include <linux/fs.h>
+ 
+ #define MODULE_PARSE
+ #include "automount.h"
diff --git a/meta-openembedded/meta-networking/recipes-daemons/autofs/autofs_5.1.8.bb b/meta-openembedded/meta-networking/recipes-daemons/autofs/autofs_5.1.8.bb
index 1f87bdd..cb80844 100644
--- a/meta-openembedded/meta-networking/recipes-daemons/autofs/autofs_5.1.8.bb
+++ b/meta-openembedded/meta-networking/recipes-daemons/autofs/autofs_5.1.8.bb
@@ -26,6 +26,7 @@
            file://0001-Do-not-hardcode-path-for-pkg.m4.patch \
            file://0001-Bug-fix-for-pid_t-not-found-on-musl.patch \
            file://0001-Define-__SWORD_TYPE-if-undefined.patch \
+           file://mount_conflict.patch \
            "
 SRC_URI[sha256sum] = "0bd401c56f0eb1ca6251344c3a3d70bface3eccf9c67117cd184422c4cace30c"
 
diff --git a/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp/0001-Forward-port-defining-PREFIX_BINDIR-to-use-new-autoc.patch b/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp/0001-Forward-port-defining-PREFIX_BINDIR-to-use-new-autoc.patch
new file mode 100644
index 0000000..efd1f34
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp/0001-Forward-port-defining-PREFIX_BINDIR-to-use-new-autoc.patch
@@ -0,0 +1,25 @@
+From 53ca110d53ca82f6c4224e4c29dbcf7dfe6914cd Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 23 Aug 2022 00:25:06 -0700
+Subject: [PATCH] Forward port defining PREFIX_BINDIR to use new autoconf
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/configure.in b/configure.in
+index c3ef568..a320c56 100644
+--- a/configure.in
++++ b/configure.in
+@@ -44,7 +44,7 @@ wi_EXTRA_SYSV_SUNOS_DIRS	dnl	For better curses library on SunOS 4
+ 
+ dnl Try to use PATH rather than hardcode the installation path, if possible.
+ if test "${prefix-NONE}" != "NONE" && test "$prefix" != "/usr/local" && test "$prefix" != "/usr"; then
+-	AC_DEFINE_UNQUOTED(PREFIX_BINDIR, "$prefix/bin")
++	AC_DEFINE([PREFIX_BINDIR], [${prefix}/bin], [Install bindir])
+ fi
+ 
+ 
diff --git a/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp/unistd.patch b/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp/unistd.patch
index 1c8146e..22e4f78 100644
--- a/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp/unistd.patch
+++ b/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp/unistd.patch
@@ -16,13 +16,9 @@
 Upstream-Status: Pending
 
 Signed-of-by: Khem Raj <raj.khem@gmail.com>
-
-
-Index: ncftp-3.2.6/configure
-===================================================================
---- ncftp-3.2.6.orig/configure
-+++ ncftp-3.2.6/configure
-@@ -7859,7 +7859,6 @@ chmod 755 "$wi_tmpdir/prpp.pl"
+--- a/autoconf_local/aclocal.m4
++++ b/autoconf_local/aclocal.m4
+@@ -4220,7 +4220,6 @@ changequote({{, }})dnl
  cat << 'EOF' > "$wi_tmpdir/unistd.c"
  #include <confdefs.h>
  
diff --git a/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp_3.2.6.bb b/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp_3.2.6.bb
index 592f98f..e66325c 100644
--- a/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp_3.2.6.bb
+++ b/meta-openembedded/meta-networking/recipes-daemons/ncftp/ncftp_3.2.6.bb
@@ -9,7 +9,8 @@
            file://ncftp-configure-use-BUILD_CC-for-ccdv.patch \
            file://unistd.patch \
            file://ncftp-3.2.5-gcc10.patch \
-"
+           file://0001-Forward-port-defining-PREFIX_BINDIR-to-use-new-autoc.patch \
+           "
 SRC_URI[md5sum] = "42d0f896d69a4d603ec097546444245f"
 SRC_URI[sha256sum] = "5f200687c05d0807690d9fb770327b226f02dd86155b49e750853fce4e31098d"
 
@@ -20,14 +21,9 @@
 PACKAGECONFIG ??= ""
 PACKAGECONFIG[ccdv] = "--enable-ccdv,--disable-ccdv,,"
 
-EXTRA_OECONF = "--disable-precomp"
-TARGET_CC_ARCH:append = " ${SELECTED_OPTIMIZATION}"
+EXTRA_OECONF = "--disable-precomp --disable-universal ac_cv_path_TAR=tar"
+ACLOCALEXTRAPATH:append = " -I ${S}/autoconf_local"
 
-do_configure() {
-    install -m 0755 ${STAGING_DATADIR_NATIVE}/gnu-config/config.guess ${S}
-    install -m 0755 ${STAGING_DATADIR_NATIVE}/gnu-config/config.sub ${S}
-    oe_runconf
-}
 do_install () {
     install -d ${D}${bindir} ${D}${sysconfdir} ${D}${mandir}
     oe_runmake 'prefix=${D}${prefix}' 'BINDIR=${D}${bindir}' \
diff --git a/meta-openembedded/meta-networking/recipes-daemons/proftpd/proftpd_1.3.7c.bb b/meta-openembedded/meta-networking/recipes-daemons/proftpd/proftpd_1.3.7c.bb
index 686f1e5..ecd2777 100644
--- a/meta-openembedded/meta-networking/recipes-daemons/proftpd/proftpd_1.3.7c.bb
+++ b/meta-openembedded/meta-networking/recipes-daemons/proftpd/proftpd_1.3.7c.bb
@@ -21,6 +21,8 @@
 
 inherit autotools-brokensep useradd update-rc.d systemd multilib_script
 
+EXTRA_OECONF += "--enable-largefile"
+
 PACKAGECONFIG ??= "shadow \
                    ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6 pam', d)} \
                    static \
@@ -58,7 +60,6 @@
 
 #add mod_dso to core modules
 PACKAGECONFIG[dso] = "--enable-dso, --disable-dso"
-PACKAGECONFIG[largefile] = "--enable-largefile, --disable-largefile"
 
 #omit mod_auth_file from core modules
 PACKAGECONFIG[auth] = "--enable-auth-file, --disable-auth-file"
diff --git a/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd/0001-Remove-hardcoded-usr-local-includes-from-configure.a.patch b/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd/0001-Remove-hardcoded-usr-local-includes-from-configure.a.patch
index 2606a36..c213943 100644
--- a/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd/0001-Remove-hardcoded-usr-local-includes-from-configure.a.patch
+++ b/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd/0001-Remove-hardcoded-usr-local-includes-from-configure.a.patch
@@ -11,15 +11,17 @@
 Update for 1.0.49.
 Signed-off-by: Zheng Ruoqin <zhengrq.fnst@cn.fujitsu.com>
 
+Update for 1.0.51.
+Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
 ---
- configure.ac | 15 ---------------
- 1 file changed, 15 deletions(-)
+ configure.ac | 16 ----------------
+ 1 file changed, 16 deletions(-)
 
 diff --git a/configure.ac b/configure.ac
-index 079e6f0..9a1ec06 100644
+index 62768c8..efaeee5 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -96,21 +96,6 @@ AX_CHECK_LINK_FLAG([-Wl,-z,relro], [LDFLAGS="$LDFLAGS -Wl,-z,relro"])
+@@ -97,22 +97,6 @@ AX_CHECK_LINK_FLAG([-Wl,-z,relro], [LDFLAGS="$LDFLAGS -Wl,-z,relro"])
  AX_CHECK_LINK_FLAG([-Wl,-z,now], [LDFLAGS="$LDFLAGS -Wl,-z,now"])
  AX_CHECK_LINK_FLAG([-Wl,-z,noexecstack], [LDFLAGS="$LDFLAGS -Wl,-z,noexecstack"])
  
@@ -27,7 +29,8 @@
 -  for path in \
 -    /usr/kerberos \
 -    /usr/local /opt /usr/local/opt \
--    /usr/openssl@1.1 /opt/openssl@1.1 /usr/local/opt/openssl@1.1 \
+-    /opt/homebrew/opt/openssl@3 /usr/local/opt/openssl@3 \
+-    /opt/homebrew/opt/openssl@1.1 /usr/local/opt/openssl@1.1 \
 -    /usr/openssl /opt/openssl /usr/local/opt/openssl; do
 -    if test -d $path/include; then
 -      CPPFLAGS="$CPPFLAGS -I${path}/include"
@@ -42,5 +45,5 @@
  
  dnl Checks for header files
 -- 
-2.7.4
+2.25.1
 
diff --git a/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd_1.0.50.bb b/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd_1.0.50.bb
deleted file mode 100644
index edc2af3..0000000
--- a/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd_1.0.50.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-SUMMARY = "FTP Server with a strong focus on software security"
-DESCRIPTION = "Pure-FTPd is a free (BSD license), secure, production-quality and standard-conformant FTP server."
-HOMEPAGE = "http://www.pureftpd.org/project/pure-ftpd"
-SECTION = "net"
-LICENSE = "0BSD"
-LIC_FILES_CHKSUM = "file://COPYING;md5=a4496a14dea009df36c612707d455d02"
-
-DEPENDS = "libcap virtual/crypt"
-
-SRC_URI = "http://download.pureftpd.org/pub/pure-ftpd/releases/pure-ftpd-${PV}.tar.gz \
-           file://0001-Remove-hardcoded-usr-local-includes-from-configure.a.patch \
-           file://nostrip.patch \
-"
-SRC_URI[sha256sum] = "abe2f94eb40b330d4dc22b159991f44e5e515212f8e887049dccdef266d0ea23"
-
-inherit autotools
-
-PACKAGECONFIG[libsodium] ="ac_cv_lib_sodium_crypto_pwhash_scryptsalsa208sha256_str=yes, \
-                           ac_cv_lib_sodium_crypto_pwhash_scryptsalsa208sha256_str=no, libsodium"
diff --git a/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd_1.0.51.bb b/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd_1.0.51.bb
new file mode 100644
index 0000000..6f03f73
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-daemons/pure-ftpd/pure-ftpd_1.0.51.bb
@@ -0,0 +1,19 @@
+SUMMARY = "FTP Server with a strong focus on software security"
+DESCRIPTION = "Pure-FTPd is a free (BSD license), secure, production-quality and standard-conformant FTP server."
+HOMEPAGE = "http://www.pureftpd.org/project/pure-ftpd"
+SECTION = "net"
+LICENSE = "0BSD"
+LIC_FILES_CHKSUM = "file://COPYING;md5=194bc994ad6bbd4ff5a021082fe52156"
+
+DEPENDS = "libcap virtual/crypt"
+
+SRC_URI = "http://download.pureftpd.org/pub/pure-ftpd/releases/pure-ftpd-${PV}.tar.gz \
+           file://0001-Remove-hardcoded-usr-local-includes-from-configure.a.patch \
+           file://nostrip.patch \
+"
+SRC_URI[sha256sum] = "4160f66b76615eea2397eac4ea3f0a146b7928207b79bc4cc2f99ad7b7bd9513"
+
+inherit autotools
+
+PACKAGECONFIG[libsodium] ="ac_cv_lib_sodium_crypto_pwhash_scryptsalsa208sha256_str=yes, \
+                           ac_cv_lib_sodium_crypto_pwhash_scryptsalsa208sha256_str=no, libsodium"
diff --git a/meta-openembedded/meta-networking/recipes-extended/kronosnet/kronosnet/0001-libknet-tests-Correct-include-path-for-poll.h.patch b/meta-openembedded/meta-networking/recipes-extended/kronosnet/kronosnet/0001-libknet-tests-Correct-include-path-for-poll.h.patch
deleted file mode 100644
index 0d261fd..0000000
--- a/meta-openembedded/meta-networking/recipes-extended/kronosnet/kronosnet/0001-libknet-tests-Correct-include-path-for-poll.h.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-From cae68083fda5d4ca832ff3cc8a533454df2efe23 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 12 Oct 2021 20:35:53 -0700
-Subject: [PATCH] libknet/tests: Correct include path for poll.h
-
-Fixes
-/usr/include/sys/poll.h:1:2: error: redirec
-ting incorrect #include <sys/poll.h> to <poll.h> [-Werror,-W#warnings]
-| #warning redirecting incorrect #include <sys/poll.h> to <poll.h>
-
-Upstream-Status: Submitted [https://github.com/kronosnet/kronosnet/pull/363]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- libknet/tests/test-common.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/libknet/tests/test-common.c b/libknet/tests/test-common.c
-index 86b76b0..8f8b6ca 100644
---- a/libknet/tests/test-common.c
-+++ b/libknet/tests/test-common.c
-@@ -20,7 +20,7 @@
- #include <pthread.h>
- #include <dirent.h>
- #include <sys/select.h>
--#include <sys/poll.h>
-+#include <poll.h>
- 
- #include "libknet.h"
- #include "test-common.h"
diff --git a/meta-openembedded/meta-networking/recipes-extended/kronosnet/kronosnet_1.22.bb b/meta-openembedded/meta-networking/recipes-extended/kronosnet/kronosnet_1.22.bb
deleted file mode 100644
index 0b0bc29..0000000
--- a/meta-openembedded/meta-networking/recipes-extended/kronosnet/kronosnet_1.22.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-# Copyright (C) 2020 Khem Raj <raj.khem@gmail.com>
-# Released under the MIT license (see COPYING.MIT for the terms)
-
-SUMMARY = "Kronosnet, often referred to as knet, is a network abstraction layer \
-           designed for High Availability use cases, where redundancy, security, \
-           fault tolerance and fast fail-over are the core requirements of your application."
-HOMEPAGE = "https://kronosnet.org/"
-LICENSE = "GPL-2.0-or-later & LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING.applications;md5=751419260aa954499f7abaabaa882bbe \
-                    file://COPYING.libraries;md5=2d5025d4aa3495befef8f17206a5b0a1"
-SECTION = "libs"
-DEPENDS = "doxygen-native libqb-native libxml2-native bzip2 libqb libxml2 libnl lksctp-tools lz4 lzo openssl nss xz zlib zstd"
-
-SRCREV = "0123ecebce0ad6aba3cdb320027192e15fd71e23"
-SRC_URI = "git://github.com/kronosnet/kronosnet;protocol=https;branch=stable1 \
-           file://0001-libknet-tests-Correct-include-path-for-poll.h.patch \
-           file://0001-links.c-Fix-build-with-gcc-12.patch \
-           "
-
-UPSTREAM_CHECK_URI = "https://github.com/kronosnet/kronosnet/releases"
-
-inherit autotools pkgconfig
-
-S = "${WORKDIR}/git"
-
-# libknet/transport_udp.c:326:48: error: comparison of integers of different signs: 'unsigned long' and 'int' [-Werror,-Wsign-compare]
-# for (cmsg = CMSG_FIRSTHDR(&msg);cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
-#                                                             ^~~~~~~~~~~~~~~~~~~~~~~
-CFLAGS:append:toolchain-clang = " -Wno-sign-compare"
-
-PACKAGECONFIG[man] = "enable_man="yes", --disable-man, "
-
-PACKAGECONFIG:remove = "man"
diff --git a/meta-openembedded/meta-networking/recipes-extended/kronosnet/kronosnet_1.24.bb b/meta-openembedded/meta-networking/recipes-extended/kronosnet/kronosnet_1.24.bb
new file mode 100644
index 0000000..cbd5e7a
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-extended/kronosnet/kronosnet_1.24.bb
@@ -0,0 +1,32 @@
+# Copyright (C) 2020 Khem Raj <raj.khem@gmail.com>
+# Released under the MIT license (see COPYING.MIT for the terms)
+
+SUMMARY = "Kronosnet, often referred to as knet, is a network abstraction layer \
+           designed for High Availability use cases, where redundancy, security, \
+           fault tolerance and fast fail-over are the core requirements of your application."
+HOMEPAGE = "https://kronosnet.org/"
+LICENSE = "GPL-2.0-or-later & LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING.applications;md5=751419260aa954499f7abaabaa882bbe \
+                    file://COPYING.libraries;md5=2d5025d4aa3495befef8f17206a5b0a1"
+SECTION = "libs"
+DEPENDS = "doxygen-native libqb-native libxml2-native bzip2 libqb libxml2 libnl lksctp-tools lz4 lzo openssl nss xz zlib zstd"
+
+SRCREV = "f8f80fd7f9b85f2626d2c6452612962ad8efca9e"
+SRC_URI = "git://github.com/kronosnet/kronosnet;protocol=https;branch=stable1 \
+           file://0001-links.c-Fix-build-with-gcc-12.patch \
+           "
+
+UPSTREAM_CHECK_URI = "https://github.com/kronosnet/kronosnet/releases"
+
+inherit autotools pkgconfig
+
+S = "${WORKDIR}/git"
+
+# libknet/transport_udp.c:326:48: error: comparison of integers of different signs: 'unsigned long' and 'int' [-Werror,-Wsign-compare]
+# for (cmsg = CMSG_FIRSTHDR(&msg);cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
+#                                                             ^~~~~~~~~~~~~~~~~~~~~~~
+CFLAGS:append:toolchain-clang = " -Wno-sign-compare"
+
+PACKAGECONFIG[man] = "enable_man="yes", --disable-man, "
+
+PACKAGECONFIG:remove = "man"
diff --git a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20210219.bb b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20210219.bb
deleted file mode 100644
index ce2ba65..0000000
--- a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20210219.bb
+++ /dev/null
@@ -1,30 +0,0 @@
-require wireguard.inc
-
-SRCREV = "122f06bfd8fc7b06a0899fa9adc4ce8e06900d98"
-
-SRC_URI = "git://git.zx2c4.com/wireguard-linux-compat;branch=master"
-
-inherit module kernel-module-split
-
-DEPENDS = "virtual/kernel libmnl"
-
-# This module requires Linux 3.10 higher and several networking related
-# configuration options. For exact kernel requirements visit:
-# https://www.wireguard.io/install/#kernel-requirements
-
-EXTRA_OEMAKE:append = " \
-    KERNELDIR=${STAGING_KERNEL_DIR} \
-    "
-
-MAKE_TARGETS = "module"
-MODULES_INSTALL_TARGET = "module-install"
-
-RRECOMMENDS:${PN} = "kernel-module-xt-hashlimit"
-MODULE_NAME = "wireguard"
-
-
-# WireGuard has been merged into Linux kernel >= 5.6 and therefore this compatibility module is no longer required.
-# OE-core post dunfell has moved to use kernel 5.8 which now means we cant build this module in world builds
-# for reference machines e.g. qemu
-EXCLUDE_FROM_WORLD = "1"
-
diff --git a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20220627.bb b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20220627.bb
new file mode 100644
index 0000000..d80bdd8
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20220627.bb
@@ -0,0 +1,30 @@
+require wireguard.inc
+
+SRCREV = "18fbcd68a35a892527345dc5679d0b2d860ee004"
+
+SRC_URI = "git://git.zx2c4.com/wireguard-linux-compat;protocol=https;branch=master"
+
+inherit module kernel-module-split
+
+DEPENDS = "virtual/kernel libmnl"
+
+# This module requires Linux 3.10 or higher and several networking-related
+# configuration options. For exact kernel requirements visit:
+# https://www.wireguard.io/install/#kernel-requirements
+
+EXTRA_OEMAKE:append = " \
+    KERNELDIR=${STAGING_KERNEL_DIR} \
+    "
+
+MAKE_TARGETS = "module"
+MODULES_INSTALL_TARGET = "module-install"
+
+RRECOMMENDS:${PN} = "kernel-module-xt-hashlimit"
+MODULE_NAME = "wireguard"
+
+
+# WireGuard has been merged into Linux kernel >= 5.6 and therefore this compatibility module is no longer required.
+# OE-core post dunfell has moved to use kernel 5.8, which means we can't build this module in world builds
+# for reference machines e.g. qemu
+EXCLUDE_FROM_WORLD = "1"
+
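
Both the old and the new module recipe set EXCLUDE_FROM_WORLD, so the compat module is skipped by world builds but can still be built or pulled in on demand, e.g. for a BSP whose kernel predates the in-tree WireGuard support (5.6). A small usage sketch; the machine-configuration line is illustrative:

    # build the out-of-tree module explicitly
    bitbake wireguard-module
    # or recommend the resulting kernel module package from a machine .conf
    MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS += "kernel-module-wireguard"
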
diff --git a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20210914.bb b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20210914.bb
index 0c686aa..2043533 100644
--- a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20210914.bb
+++ b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20210914.bb
@@ -16,11 +16,19 @@
         install
 }
 
+PACKAGES += "${PN}-wg-quick"
+
 FILES:${PN} = " \
+    ${bindir}/wg \
     ${sysconfdir} \
+"
+FILES:${PN}-wg-quick = " \
+    ${bindir}/wg-quick \
     ${systemd_system_unitdir} \
-    ${bindir} \
 "
 
-RDEPENDS:${PN} = "bash"
-RRECOMMENDS:${PN} = "kernel-module-wireguard"
+RDEPENDS:${PN}-wg-quick = "${PN} bash"
+RRECOMMENDS:${PN} = " \
+    kernel-module-wireguard \
+    ${PN}-wg-quick \
+    "
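
Since wg-quick now lives in its own subpackage with an RDEPENDS on the base package and bash, images that only need the wg binary no longer drag in bash; pulling the script back in is a one-liner (image or local.conf fragment, illustrative):

    # request the split-out script; wireguard-tools and bash follow via RDEPENDS
    IMAGE_INSTALL:append = " wireguard-tools-wg-quick"
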
diff --git a/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/0001-tests-use-EXTENSIONS_DIR.patch b/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/0001-tests-use-EXTENSIONS_DIR.patch
new file mode 100644
index 0000000..4cedc21
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/0001-tests-use-EXTENSIONS_DIR.patch
@@ -0,0 +1,92 @@
+From 935fcac46e2790e0e297ca855b8033895c1b8941 Mon Sep 17 00:00:00 2001
+From: Mingli Yu <mingli.yu@windriver.com>
+Date: Wed, 24 Aug 2022 13:45:32 +0800
+Subject: [PATCH] tests: use EXTENSIONS_DIR
+
+Use EXTENSIONS_DIR to replace BUILD_DIR, as BUILD_DIR is meaningless
+on target, and also fix the buildpaths issue.
+
+Upstream-Status: Inappropriate [OE ptest specific]
+
+Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
+---
+ tests/CMakeLists.txt    |  1 +
+ tests/testloadext.c     | 12 ++++++------
+ tests/testmesg_stress.c | 12 ++++++------
+ 3 files changed, 13 insertions(+), 12 deletions(-)
+
+diff --git a/tests/CMakeLists.txt b/tests/CMakeLists.txt
+index 8b698ce..2c83cbb 100644
+--- a/tests/CMakeLists.txt
++++ b/tests/CMakeLists.txt
+@@ -37,6 +37,7 @@ SET(TEST_LIST
+ 
+ ADD_DEFINITIONS(-DTEST_DEBUG)
+ ADD_DEFINITIONS(-DBUILD_DIR="${CMAKE_BINARY_DIR}")
++ADD_DEFINITIONS(-DEXTENSIONS_DIR="${EXTENSIONS_DIR}")
+ 
+ INCLUDE_DIRECTORIES( "../libfdproto" )
+ INCLUDE_DIRECTORIES( "../libfdcore" )
+diff --git a/tests/testloadext.c b/tests/testloadext.c
+index 452737f..3fffef5 100644
+--- a/tests/testloadext.c
++++ b/tests/testloadext.c
+@@ -35,9 +35,9 @@
+ 
+ #include "tests.h"
+ 
+-#ifndef BUILD_DIR
+-#error "Missing BUILD_DIR information"
+-#endif /* BUILD_DIR */
++#ifndef EXTENSIONS_DIR
++#error "Missing EXTENSIONS_DIR information"
++#endif /* EXTENSIONS_DIR */
+ 
+ #include <sys/types.h>
+ #include <dirent.h>
+@@ -59,9 +59,9 @@ int main(int argc, char *argv[])
+ 	CHECK( 0, fd_rtdisp_init()  );
+ 	
+ 	/* Find all extensions which have been compiled along the test */
+-	TRACE_DEBUG(INFO, "Loading from: '%s'", BUILD_DIR "/extensions");
+-	CHECK( 0, (dir = opendir (BUILD_DIR "/extensions")) == NULL ? 1 : 0 );
+-	pathlen = snprintf(fullname, sizeof(fullname), BUILD_DIR "/extensions/");
++	TRACE_DEBUG(INFO, "Loading from: '%s'", EXTENSIONS_DIR);
++	CHECK( 0, (dir = opendir (EXTENSIONS_DIR)) == NULL ? 1 : 0 );
++	pathlen = snprintf(fullname, sizeof(fullname), EXTENSIONS_DIR "/");
+ 	
+ 	while ((dp = readdir (dir)) != NULL) {
+ 		char * dot = strrchr(dp->d_name, '.');
+diff --git a/tests/testmesg_stress.c b/tests/testmesg_stress.c
+index 310a9d2..97dfe07 100644
+--- a/tests/testmesg_stress.c
++++ b/tests/testmesg_stress.c
+@@ -38,9 +38,9 @@
+ #include <libgen.h>
+ #include <dlfcn.h>
+ 
+-#ifndef BUILD_DIR
+-#error "Missing BUILD_DIR information"
+-#endif /* BUILD_DIR */
++#ifndef EXTENSIONS_DIR
++#error "Missing EXTENSIONS_DIR information"
++#endif /* EXTENSIONS_DIR */
+ 
+ 
+ /* The number of times each operation is repeated to measure the average operation time */
+@@ -73,9 +73,9 @@ static void load_all_extensions(char * prefix)
+ 	struct fd_list ext_with_depends = FD_LIST_INITIALIZER(ext_with_depends);
+ 
+ 	/* Find all extensions which have been compiled along the test */
+-	LOG_D("Loading %s*.fdx from: '%s'", BUILD_DIR "/extensions", prefix ?: "");
+-	CHECK( 0, (dir = opendir (BUILD_DIR "/extensions")) == NULL ? 1 : 0 );
+-	pathlen = snprintf(fullname, sizeof(fullname), BUILD_DIR "/extensions/");
++	LOG_D("Loading %s*.fdx from: '%s'", EXTENSIONS_DIR, prefix ?: "");
++	CHECK( 0, (dir = opendir (EXTENSIONS_DIR)) == NULL ? 1 : 0 );
++	pathlen = snprintf(fullname, sizeof(fullname), EXTENSIONS_DIR "/");
+ 	
+ 	while ((dp = readdir (dir)) != NULL) {
+ 		char * dot = strrchr(dp->d_name, '.');
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/pass-ptest-env.patch b/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/pass-ptest-env.patch
deleted file mode 100644
index ea857af..0000000
--- a/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/pass-ptest-env.patch
+++ /dev/null
@@ -1,72 +0,0 @@
-freediameter ptest cases testmesg_stress.c and testloadext.c need load
-extensions both build time and runtime. Then they search extensions with
-build directory that causes runtime failures.
-
-Pass an environment variable to define runtime extension path.
-
-Upstream-Status: Inappropriate [OE ptest specific]
-
-Signed-off-by: Kai Kang <kai.kang@windriver.com>
-Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
-
-diff -Nur freeDiameter-1.2.0.orig/tests/testloadext.c freeDiameter-1.2.0/tests/testloadext.c
---- freeDiameter-1.2.0.orig/tests/testloadext.c	2014-02-19 17:33:24.785405032 +0800
-+++ freeDiameter-1.2.0/tests/testloadext.c	2014-02-19 20:08:03.871403924 +0800
-@@ -49,7 +49,7 @@
- {
- 	DIR *dir;
- 	struct dirent *dp;
--	char fullname[512];
-+	char fullname[1024];
- 	int pathlen;
- 
- 	/* First, initialize the daemon modules */
-@@ -57,11 +57,16 @@
- 	CHECK( 0, fd_queues_init()  );
- 	CHECK( 0, fd_msg_init()  );
- 	CHECK( 0, fd_rtdisp_init()  );
--	
-+
-+	char *ext_dir = getenv("EXTENSIONS_DIR");
-+	if (ext_dir)
-+		pathlen = snprintf(fullname, sizeof(fullname), "%s", ext_dir);
-+	else
-+		pathlen = snprintf(fullname, sizeof(fullname), BUILD_DIR "/extensions/");
-+
- 	/* Find all extensions which have been compiled along the test */
--	TRACE_DEBUG(INFO, "Loading from: '%s'", BUILD_DIR "/extensions");
--	CHECK( 0, (dir = opendir (BUILD_DIR "/extensions")) == NULL ? 1 : 0 );
--	pathlen = snprintf(fullname, sizeof(fullname), BUILD_DIR "/extensions/");
-+	TRACE_DEBUG(INFO, "Loading from: '%s'", fullname);
-+	CHECK( 0, (dir = opendir (fullname)) == NULL ? 1 : 0 );
- 	
- 	while ((dp = readdir (dir)) != NULL) {
- 		char * dot = strrchr(dp->d_name, '.');
-diff -Nur freeDiameter-1.2.0.orig/tests/testmesg_stress.c freeDiameter-1.2.0/tests/testmesg_stress.c
---- freeDiameter-1.2.0.orig/tests/testmesg_stress.c	2014-02-19 17:33:24.785405032 +0800
-+++ freeDiameter-1.2.0/tests/testmesg_stress.c	2014-02-19 20:08:03.928403924 +0800
-@@ -67,15 +67,20 @@
- {
- 	DIR *dir;
- 	struct dirent *dp;
--	char fullname[512];
-+	char fullname[1024];
- 	int pathlen;
- 	struct fd_list all_extensions = FD_LIST_INITIALIZER(all_extensions);
- 	struct fd_list ext_with_depends = FD_LIST_INITIALIZER(ext_with_depends);
- 
-+	char *ext_dir = getenv("EXTENSIONS_DIR");
-+	if (ext_dir)
-+		pathlen = snprintf(fullname, sizeof(fullname), "%s", ext_dir);
-+	else
-+		pathlen = snprintf(fullname, sizeof(fullname), BUILD_DIR "/extensions/");
-+
- 	/* Find all extensions which have been compiled along the test */
--	LOG_D("Loading %s*.fdx from: '%s'", BUILD_DIR "/extensions", prefix ?: "");
--	CHECK( 0, (dir = opendir (BUILD_DIR "/extensions")) == NULL ? 1 : 0 );
--	pathlen = snprintf(fullname, sizeof(fullname), BUILD_DIR "/extensions/");
-+	TRACE_DEBUG(INFO, "Loading from: '%s'", fullname);
-+	CHECK( 0, (dir = opendir (fullname)) == NULL ? 1 : 0 );
- 	
- 	while ((dp = readdir (dir)) != NULL) {
- 		char * dot = strrchr(dp->d_name, '.');
diff --git a/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/run-ptest b/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/run-ptest
index d0ca8d9..3c84164 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/run-ptest
+++ b/meta-openembedded/meta-networking/recipes-protocols/freediameter/files/run-ptest
@@ -6,6 +6,5 @@
         echo
 fi
 
-export EXTENSIONS_DIR=$EXTENSIONS_DIR
 cmake  -E cmake_echo_color --cyan "Running tests..."
 ctest --force-new-ctest-process
diff --git a/meta-openembedded/meta-networking/recipes-protocols/freediameter/freediameter_1.4.0.bb b/meta-openembedded/meta-networking/recipes-protocols/freediameter/freediameter_1.4.0.bb
index 3ec20d3..93a607d 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/freediameter/freediameter_1.4.0.bb
+++ b/meta-openembedded/meta-networking/recipes-protocols/freediameter/freediameter_1.4.0.bb
@@ -18,7 +18,7 @@
     file://Replace-murmurhash-algorithm-with-Robert-Jenkin-s-ha.patch \
     file://freediameter.service \
     file://freediameter.init \
-    ${@bb.utils.contains('DISTRO_FEATURES', 'ptest', 'file://install_test.patch file://run-ptest file://pass-ptest-env.patch', '', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'ptest', 'file://install_test.patch file://run-ptest file://0001-tests-use-EXTENSIONS_DIR.patch', '', d)} \
     file://freeDiameter.conf \
     file://0001-libfdcore-sctp.c-update-the-old-sctp-api-check.patch \
     "
@@ -46,6 +46,7 @@
     -DBUILD_TEST_RT_ANY:BOOL=ON \
     -DINSTALL_LIBRARY_SUFFIX:PATH=${baselib} \
     -DINSTALL_EXTENSIONS_SUFFIX:PATH=${baselib}/${fd_pkgname} \
+    -DEXTENSIONS_DIR:PATH=${libdir}/${fd_pkgname} \
     -DINSTALL_TEST_SUFFIX:PATH=${PTEST_PATH}-tests \
     -DCMAKE_SKIP_RPATH:BOOL=ON \
 "
@@ -107,13 +108,15 @@
     openssl req -x509 -config ${STAGING_DIR_NATIVE}/etc/ssl/openssl.cnf -newkey rsa:4096 -sha256 -nodes -out ${D}${sysconfdir}/freeDiameter/${FD_PEM} -keyout ${D}${sysconfdir}/freeDiameter/${FD_KEY} -days 3650 -subj '/CN=${FD_HOSTNAME}.${FD_REALM}'
     openssl dhparam -out ${D}${sysconfdir}/freeDiameter/${FD_DH_PEM} 1024
 
+    find ${B} \( -name "*.c" -o -name "*.h" \) -exec sed -i -e 's#${WORKDIR}##g' {} \;
 }
 
 do_install_ptest() {
-    sed -i "s#\(EXTENSIONS_DIR=\).*\$#\1${libdir}/${fd_pkgname}/#" ${D}${PTEST_PATH}/run-ptest
     mv ${D}${PTEST_PATH}-tests/* ${D}${PTEST_PATH}/
     rmdir ${D}${PTEST_PATH}-tests
     install -m 0644 ${B}/tests/CTestTestfile.cmake ${D}${PTEST_PATH}/
+    sed -i -e 's#${WORKDIR}##g' ${D}${PTEST_PATH}/CTestTestfile.cmake
+    sed -i "/^set_tests_properties/d" ${D}${PTEST_PATH}/CTestTestfile.cmake
 }
 
 FILES:${PN}-dbg += "${libdir}/${fd_pkgname}/.debug/*"
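
With the EXTENSIONS_DIR definition passed from the recipe, the tests load extensions from the installed location instead of the build tree, and the added find/sed calls scrub ${WORKDIR} out of generated sources and the installed CTestTestfile.cmake so no build-host paths leak into the packages. One way to catch such leaks early is to make the corresponding QA check fatal (a local.conf sketch, assuming an oe-core that carries the buildpaths test):

    # promote build-path leaks in packaged files from a warning to a build error
    ERROR_QA:append = " buildpaths"
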
diff --git a/meta-openembedded/meta-networking/recipes-protocols/frr/frr_8.2.2.bb b/meta-openembedded/meta-networking/recipes-protocols/frr/frr_8.2.2.bb
index 05195a3..f0d0dbf 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/frr/frr_8.2.2.bb
+++ b/meta-openembedded/meta-networking/recipes-protocols/frr/frr_8.2.2.bb
@@ -73,6 +73,11 @@
 SYSTEMD_SERVICE:${PN} = "frr.service"
 SYSTEMD_AUTO_ENABLE = "disable"
 
+do_compile:prepend () {
+   sed -i -e 's#${RECIPE_SYSROOT_NATIVE}##g' \
+          -e 's#${RECIPE_SYSROOT}##g' ${S}/lib/version.h
+}
+
 do_compile:class-native () {
     oe_runmake clippy-only
 }
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-ac_add_search_path.m4-keep-consistent-between-32bit.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-ac_add_search_path.m4-keep-consistent-between-32bit.patch
index 4cd7290..0eeddf7 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-ac_add_search_path.m4-keep-consistent-between-32bit.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-ac_add_search_path.m4-keep-consistent-between-32bit.patch
@@ -1,7 +1,8 @@
-From 6f8ea2e841ad45eed193310b599d3f3b410ae91d Mon Sep 17 00:00:00 2001
+From 98c62e24fdd05d7e8bd8149840bad8eb0feb3fb1 Mon Sep 17 00:00:00 2001
 From: Mingli Yu <mingli.yu@windriver.com>
 Date: Fri, 29 Jan 2021 08:49:15 +0000
-Subject: [PATCH] ac_add_search_path.m4: keep consistent between 32bit and 64bit
+Subject: [PATCH] ac_add_search_path.m4: keep consistent between 32bit and
+ 64bit
 
 With configure option "--with-openssl=${STAGING_EXECPREFIXDIR}", it behaves
 differently between 32bit and 64bit system as the openssl lib resides under
@@ -15,12 +16,13 @@
 Upstream-Status: Inappropriate [configuration specific]
 
 Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
+
 ---
  m4/ac_add_search_path.m4 | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/m4/ac_add_search_path.m4 b/m4/ac_add_search_path.m4
-index 8e0a819..961f587 100644
+index 8e0a819..e9585bc 100644
 --- a/m4/ac_add_search_path.m4
 +++ b/m4/ac_add_search_path.m4
 @@ -3,8 +3,8 @@ dnl Add a search path to the LIBS and CPPFLAGS variables
@@ -34,6 +36,3 @@
       fi
       if test -d $1/include; then
  	CPPFLAGS="-I$1/include $CPPFLAGS"
--- 
-2.29.2
-
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-config_os_headers-Error-Fix.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-config_os_headers-Error-Fix.patch
index 05a47f6..f8a52a6 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-config_os_headers-Error-Fix.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-config_os_headers-Error-Fix.patch
@@ -1,4 +1,4 @@
-From 69d4c517c07f55c505090e48d96ace8cd599fb26 Mon Sep 17 00:00:00 2001
+From e86d5fd52f19b85da0b7cce660c6e65ec4c0f9bb Mon Sep 17 00:00:00 2001
 From: Li xin <lixin.fnst@cn.fujitsu.com>
 Date: Fri, 21 Aug 2015 18:23:13 +0900
 Subject: [PATCH] config_os_headers: Error Fix
@@ -19,7 +19,7 @@
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/configure.d/config_os_headers b/configure.d/config_os_headers
-index f07d512..2363b42 100644
+index 01c3376..6edd85f 100644
 --- a/configure.d/config_os_headers
 +++ b/configure.d/config_os_headers
 @@ -395,8 +395,8 @@ then
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-get_pid_from_inode-Include-limit.h.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-get_pid_from_inode-Include-limit.h.patch
index 22e5915..a7881a8 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-get_pid_from_inode-Include-limit.h.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-get_pid_from_inode-Include-limit.h.patch
@@ -1,4 +1,4 @@
-From 2bf1bbe1d428ed06d57aa76b03e394b72ff2216d Mon Sep 17 00:00:00 2001
+From 8097734b27fd146f358a4edd0d1a0d28309bd9a4 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Fri, 22 Jul 2016 18:34:39 +0000
 Subject: [PATCH] get_pid_from_inode: Include limit.h
@@ -14,7 +14,7 @@
  1 file changed, 1 insertion(+)
 
 diff --git a/agent/mibgroup/util_funcs/get_pid_from_inode.c b/agent/mibgroup/util_funcs/get_pid_from_inode.c
-index aee907d..7abaec2 100644
+index 5788e1d..ea380a6 100644
 --- a/agent/mibgroup/util_funcs/get_pid_from_inode.c
 +++ b/agent/mibgroup/util_funcs/get_pid_from_inode.c
 @@ -6,6 +6,7 @@
@@ -23,5 +23,5 @@
  #include <ctype.h>
 +#include <limits.h>
  #include <stdio.h>
- #if HAVE_STDLIB_H
+ #ifdef HAVE_STDLIB_H
  #include <stdlib.h>
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-snmpd-always-exit-after-displaying-usage.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-snmpd-always-exit-after-displaying-usage.patch
deleted file mode 100644
index 4fc9e54..0000000
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-snmpd-always-exit-after-displaying-usage.patch
+++ /dev/null
@@ -1,55 +0,0 @@
-From 94ca941e06bef157bf0e13251f8ca1471daa9393 Mon Sep 17 00:00:00 2001
-From: Kaarle Ritvanen <kaarle.ritvanen@datakunkku.fi>
-Date: Fri, 27 Aug 2021 14:21:45 +0300
-Subject: [PATCH] snmpd: always exit after displaying usage
-
-Currently, viewing the help text with -h results in snmpd being started
-in the background, whereas this does not happen with --help. Similarly,
-when an error is detected in command line syntax, the help text is
-displayed but sometimes snmpd gets started anyway, depending on the
-execution path.
-
-This patch makes snmpd consistently terminate whenever the usage
-function gets called. It also removes the goto statements no longer
-needed.
-
-Upstream-Status: Backport
-[https://github.com/net-snmp/net-snmp/commit/94ca941e06bef157bf0e13251f8ca1471daa9393]
-
-Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
----
- agent/snmpd.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/agent/snmpd.c b/agent/snmpd.c
-index f5aab0af8..90de12d99 100644
---- a/agent/snmpd.c
-+++ b/agent/snmpd.c
-@@ -289,6 +289,8 @@ usage(char *prog)
-            "  -S d|i|0-7\t\tuse -Ls <facility> instead\n"
-            "\n"
-            );
-+    SOCK_CLEANUP;
-+    exit(1);
- }
- 
- static void
-@@ -494,7 +496,6 @@ main(int argc, char *argv[])
-         case '-':
-             if (strcasecmp(optarg, "help") == 0) {
-                 usage(argv[0]);
--                goto out;
-             }
-             if (strcasecmp(optarg, "version") == 0) {
-                 version();
-@@ -783,7 +784,6 @@ main(int argc, char *argv[])
-             fprintf(stderr, "%s: Illegal argument -X:"
- 		            "AgentX support not compiled in.\n", argv[0]);
-             usage(argv[0]);
--            goto out;
- #endif
-             break;
- 
--- 
-2.25.1
-
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-snmplib-keytools.c-Don-t-check-for-return-from-EVP_M.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-snmplib-keytools.c-Don-t-check-for-return-from-EVP_M.patch
index 42352a6..af6334f 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-snmplib-keytools.c-Don-t-check-for-return-from-EVP_M.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0001-snmplib-keytools.c-Don-t-check-for-return-from-EVP_M.patch
@@ -1,4 +1,4 @@
-From f3ff99736b8cccbba77349b0d10a3cee366a4c87 Mon Sep 17 00:00:00 2001
+From f4e1acd4f509dd26cf88da872bd5adcf884f4a5f Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Fri, 18 Sep 2015 00:28:45 -0400
 Subject: [PATCH] snmplib/keytools.c: Don't check for return from
@@ -17,7 +17,7 @@
  1 file changed, 1 insertion(+), 4 deletions(-)
 
 diff --git a/snmplib/keytools.c b/snmplib/keytools.c
-index 129a7c0..2fc1efc 100644
+index 14a452a..fb1694b 100644
 --- a/snmplib/keytools.c
 +++ b/snmplib/keytools.c
 @@ -183,10 +183,7 @@ generate_Ku(const oid * hashtype, u_int hashtype_len,
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0002-configure-fix-a-cc-check-issue.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0002-configure-fix-a-cc-check-issue.patch
deleted file mode 100644
index c973bde..0000000
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0002-configure-fix-a-cc-check-issue.patch
+++ /dev/null
@@ -1,28 +0,0 @@
-From 0a02ac779c51a2b4af3b58cb96967bf3eff80367 Mon Sep 17 00:00:00 2001
-From: Wenlin Kang <wenlin.kang@windriver.com>
-Date: Wed, 24 May 2017 16:45:34 +0800
-Subject: [PATCH] configure: fix a cc check issue.
-
-When has "." in cc value, the expression
-$myperl -V:cc | $myperl -n -e 'print if (s/^\s*cc=.([-=\w\s\/]+).;\s*/$1/);'
-can't get corretly the cc's value.
-
-Signed-off-by: Wenlin Kang <wenlin.kang@windriver.com>
-
----
- configure.d/config_project_perl_python | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/configure.d/config_project_perl_python b/configure.d/config_project_perl_python
-index 475c843..22d2ad3 100644
---- a/configure.d/config_project_perl_python
-+++ b/configure.d/config_project_perl_python
-@@ -87,7 +87,7 @@ if test "x$install_perl" != "xno" ; then
-     if test "x$enable_perl_cc_checks" != "xno" ; then
-         AC_MSG_CHECKING([for Perl cc])
-         changequote(, )
--        PERLCC=`$myperl -V:cc | $myperl -n -e 'print if (s/^\s*cc=.([-=\w\s\/]+).;\s*/$1/);'`
-+        PERLCC=`$myperl -V:cc | $myperl -n -e 'print if (s/^\s*cc=.([-=\.\w\s\/]+).;\s*/$1/);'`
-         changequote([, ])
-         if test "x$PERLCC" != "x" ; then
-             AC_MSG_RESULT([$PERLCC])
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0004-configure-fix-incorrect-variable.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0004-configure-fix-incorrect-variable.patch
index bfddc63..6e22418 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0004-configure-fix-incorrect-variable.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/0004-configure-fix-incorrect-variable.patch
@@ -1,4 +1,4 @@
-From 011bdcd07f2a289d0cfc1b411c03c0cc7c42dad1 Mon Sep 17 00:00:00 2001
+From 6d655ba677563ac9d62d4d8eee59fdb39d486c02 Mon Sep 17 00:00:00 2001
 From: Wenlin Kang <wenlin.kang@windriver.com>
 Date: Wed, 24 May 2017 17:10:20 +0800
 Subject: [PATCH] configure: fix incorrect variable
@@ -14,10 +14,10 @@
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/Makefile.in b/Makefile.in
-index 912f6b2..a53d1b2 100644
+index f1cbbf5..1545be3 100644
 --- a/Makefile.in
 +++ b/Makefile.in
-@@ -174,7 +174,7 @@ OTHERCLEANTODOS=perlclean @PYTHONCLEANTARGS@ cleanfeatures perlcleanfeatures pyt
+@@ -173,7 +173,7 @@ OTHERCLEANTODOS=perlclean @PYTHONCLEANTARGS@ cleanfeatures perlcleanfeatures pyt
  #
  # override LD_RUN_PATH to avoid dependencies on the build directory
  perlmodules: perlmakefiles subdirs
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/fix-libtool-finish.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/fix-libtool-finish.patch
index 26dd014..409c1e0 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/fix-libtool-finish.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/fix-libtool-finish.patch
@@ -1,4 +1,4 @@
-From 27444fbf8323679ea0551a3bd5f04c365143d8c0 Mon Sep 17 00:00:00 2001
+From ab1d77c52e84746e75506a2870783806bc77f396 Mon Sep 17 00:00:00 2001
 From: "Roy.Li" <rongqing.li@windriver.com>
 Date: Fri, 16 Jan 2015 14:14:01 +0800
 Subject: [PATCH] net-snmp: fix "libtool --finish"
@@ -20,11 +20,11 @@
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/Makefile.top b/Makefile.top
-index 6315401..fc0ee06 100644
+index a962c54..1ba5607 100644
 --- a/Makefile.top
 +++ b/Makefile.top
 @@ -89,7 +89,7 @@ LIBREVISION = 0
- LIB_LD_CMD      = $(LIBTOOL) --mode=link $(LINKCC) $(CFLAGS) -rpath $(libdir) -version-info $(LIBCURRENT):$(LIBREVISION):$(LIBAGE) -o
+ LIB_LD_CMD      = $(LIBTOOL) --mode=link $(LINKCC) $(CFLAGS) -rpath $(libdir) -version-info $(LIBCURRENT):$(LIBREVISION):$(LIBAGE) @LD_NO_UNDEFINED@ -o
  LIB_EXTENSION   = la
  LIB_VERSION     =
 -LIB_LDCONFIG_CMD = $(LIBTOOL) --mode=finish $(INSTALL_PREFIX)$(libdir)
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-5.7.2-fix-engineBoots-value-on-SIGHUP.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-5.7.2-fix-engineBoots-value-on-SIGHUP.patch
index 022eb95..35e93d6 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-5.7.2-fix-engineBoots-value-on-SIGHUP.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-5.7.2-fix-engineBoots-value-on-SIGHUP.patch
@@ -1,4 +1,4 @@
-From 1e3178835217ba89aa355e2b6b88e490f17be16d Mon Sep 17 00:00:00 2001
+From 5ad4eab43c1ea63ff343bba64d576440e8783e75 Mon Sep 17 00:00:00 2001
 From: Zheng Ruoqin <zhengrq.fnst@fujitsu.com>
 Date: Wed, 9 Jun 2021 15:47:30 +0900
 Subject: [PATCH] net snmp: fix engineBoots value on SIGHUP
@@ -7,6 +7,7 @@
 
 Signed-off-by: Marian Florea <marian.florea@windriver.com>
 Signed-off-by: Li Zhou <li.zhou@windriver.com>
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
 
 ---
  agent/snmpd.c    | 1 +
@@ -14,19 +15,19 @@
  2 files changed, 3 insertions(+), 2 deletions(-)
 
 diff --git a/agent/snmpd.c b/agent/snmpd.c
-index 1af439f..355b510 100644
+index 90de12d..1ccc4db 100644
 --- a/agent/snmpd.c
 +++ b/agent/snmpd.c
-@@ -1208,6 +1208,7 @@ receive(void)
- 	    snmp_log(LOG_INFO, "NET-SNMP version %s restarted\n",
- 		     netsnmp_get_version());
-             update_config();
-+	    snmp_store(app_name);
-             send_easy_trap(SNMP_TRAP_ENTERPRISESPECIFIC, 3);
- #if HAVE_SIGPROCMASK
-             ret = sigprocmask(SIG_UNBLOCK, &set, NULL);
+@@ -1169,6 +1169,7 @@ snmpd_reconfig(void)
+     snmp_log(LOG_INFO, "NET-SNMP version %s restarted\n",
+              netsnmp_get_version());
+     update_config();
++    snmp_store(app_name);
+     send_easy_trap(SNMP_TRAP_ENTERPRISESPECIFIC, 3);
+ #ifdef HAVE_SIGPROCMASK
+     ret = sigprocmask(SIG_UNBLOCK, &set, NULL);
 diff --git a/snmplib/snmpv3.c b/snmplib/snmpv3.c
-index 29c2a0f..ada961c 100644
+index 7b1746b..4a17e0d 100644
 --- a/snmplib/snmpv3.c
 +++ b/snmplib/snmpv3.c
 @@ -1059,9 +1059,9 @@ init_snmpv3_post_config(int majorid, int minorid, void *serverarg,
@@ -41,6 +42,3 @@
          engineBoots = 1;
      }
  
--- 
-2.25.1
-
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-add-knob-whether-nlist.h-are-checked.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-add-knob-whether-nlist.h-are-checked.patch
index f1ebe2b..c5a453a 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-add-knob-whether-nlist.h-are-checked.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-add-knob-whether-nlist.h-are-checked.patch
@@ -1,4 +1,4 @@
-From e507dcf8b29c55011f85d88bf05400d4717e4074 Mon Sep 17 00:00:00 2001
+From ad65b106d3cb3c6e595381be1c45a73c1ef6eb5e Mon Sep 17 00:00:00 2001
 From: Chong Lu <Chong.Lu@windriver.com>
 Date: Thu, 28 May 2020 09:46:34 -0500
 Subject: [PATCH] net-snmp: add knob whether nlist.h are checked
@@ -15,7 +15,7 @@
  1 file changed, 2 insertions(+)
 
 diff --git a/configure.d/config_os_headers b/configure.d/config_os_headers
-index 76ef58a..f07d512 100644
+index b9c8c31..01c3376 100644
 --- a/configure.d/config_os_headers
 +++ b/configure.d/config_os_headers
 @@ -37,6 +37,7 @@ AC_CHECK_HEADERS([getopt.h   pthread.h  regex.h      ] dnl
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-fix-for-disable-des.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-fix-for-disable-des.patch
index 2941a36..c382c02 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-fix-for-disable-des.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-fix-for-disable-des.patch
@@ -1,4 +1,4 @@
-From 3ca4335ec1d6b7b384c134fc85d7a9e513c68376 Mon Sep 17 00:00:00 2001
+From b1b9980853b1083f0c8b9f628f8b4c3a484d4f91 Mon Sep 17 00:00:00 2001
 From: Jackie Huang <jackie.huang@windriver.com>
 Date: Thu, 22 Jun 2017 10:25:08 +0800
 Subject: [PATCH] net-snmp: fix for --disable-des
@@ -15,7 +15,7 @@
  1 file changed, 2 insertions(+)
 
 diff --git a/snmplib/scapi.c b/snmplib/scapi.c
-index 00c9174..c6875e1 100644
+index 54fdd5c..0f7e931 100644
 --- a/snmplib/scapi.c
 +++ b/snmplib/scapi.c
 @@ -85,7 +85,9 @@ netsnmp_feature_child_of(usm_scapi, usm_support);
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-testing-add-the-output-format-for-ptest.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-testing-add-the-output-format-for-ptest.patch
index 807983f..09ca532 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-testing-add-the-output-format-for-ptest.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/net-snmp-testing-add-the-output-format-for-ptest.patch
@@ -1,4 +1,4 @@
-From 972df16e9599dffddf5d714a4cbf43008c771122 Mon Sep 17 00:00:00 2001
+From 36a5656db7ea75dd15f35a6c1728937c6e2b901c Mon Sep 17 00:00:00 2001
 From: Jackie Huang <jackie.huang@windriver.com>
 Date: Wed, 14 Jan 2015 15:10:06 +0800
 Subject: [PATCH] testing: add the output format for ptest
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/reproducibility-have-printcap.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/reproducibility-have-printcap.patch
index bf1e7be..c0b51c5 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/reproducibility-have-printcap.patch
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/reproducibility-have-printcap.patch
@@ -1,4 +1,4 @@
-From 84e362fe97f50fbad69f083bc2d8fe18f83eb2f7 Mon Sep 17 00:00:00 2001
+From b923cd38e2503b86aedf66b767fd7f51c9f25645 Mon Sep 17 00:00:00 2001
 From: "douglas.royds" <douglas.royds@taitradio.com>
 Date: Wed, 21 Nov 2018 13:52:18 +1300
 Subject: [PATCH] net-snmp: Reproducibility: Don't check build host for
@@ -13,7 +13,7 @@
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/configure.d/config_os_misc4 b/configure.d/config_os_misc4
-index 6f23c8e..8cea75a 100644
+index b6864d9..07ca922 100644
 --- a/configure.d/config_os_misc4
 +++ b/configure.d/config_os_misc4
 @@ -99,9 +99,9 @@ if test x$LPSTAT_PATH != x; then
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.9.1.bb b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.9.1.bb
deleted file mode 100644
index 5f887b8..0000000
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.9.1.bb
+++ /dev/null
@@ -1,293 +0,0 @@
-SUMMARY = "Various tools relating to the Simple Network Management Protocol"
-HOMEPAGE = "http://www.net-snmp.org/"
-SECTION = "net"
-LICENSE = "BSD-3-Clause & MIT"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=9d100a395a38584f2ec18a8275261687"
-
-DEPENDS = "openssl"
-DEPENDS:append:class-target = " pciutils"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/net-snmp/net-snmp-${PV}.tar.gz \
-           file://init \
-           file://snmpd.conf \
-           file://snmptrapd.conf \
-           file://snmpd.service \
-           file://snmptrapd.service \
-           file://net-snmp-add-knob-whether-nlist.h-are-checked.patch \
-           file://fix-libtool-finish.patch \
-           file://net-snmp-testing-add-the-output-format-for-ptest.patch \
-           file://run-ptest \
-           file://0001-config_os_headers-Error-Fix.patch \
-           file://0001-snmplib-keytools.c-Don-t-check-for-return-from-EVP_M.patch \
-           file://0001-get_pid_from_inode-Include-limit.h.patch \
-           file://0002-configure-fix-a-cc-check-issue.patch \
-           file://0004-configure-fix-incorrect-variable.patch \
-           file://net-snmp-5.7.2-fix-engineBoots-value-on-SIGHUP.patch \
-           file://net-snmp-fix-for-disable-des.patch \
-           file://reproducibility-have-printcap.patch \
-           file://0001-ac_add_search_path.m4-keep-consistent-between-32bit.patch \
-           file://0001-snmpd-always-exit-after-displaying-usage.patch \
-           "
-SRC_URI[sha256sum] = "eb7fd4a44de6cddbffd9a92a85ad1309e5c1054fb9d5a7dd93079c8953f48c3f"
-
-UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/net-snmp/files/net-snmp/"
-UPSTREAM_CHECK_REGEX = "/net-snmp/(?P<pver>\d+(\.\d+)+)/"
-
-inherit autotools-brokensep update-rc.d siteinfo systemd pkgconfig perlnative ptest multilib_script multilib_header
-
-EXTRA_OEMAKE = "INSTALL_PREFIX=${D} OTHERLDFLAGS='${LDFLAGS}' HOST_CPPFLAGS='${BUILD_CPPFLAGS}'"
-
-PARALLEL_MAKE = ""
-CCACHE = ""
-CLEANBROKEN = "1"
-
-TARGET_CC_ARCH += "${LDFLAGS}"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6 systemd', d)} des smux"
-PACKAGECONFIG[des] = "--enable-des, --disable-des"
-PACKAGECONFIG[elfutils] = "--with-elf, --without-elf, elfutils"
-PACKAGECONFIG[ipv6] = "--enable-ipv6, --disable-ipv6"
-PACKAGECONFIG[libnl] = "--with-nl, --without-nl, libnl"
-PACKAGECONFIG[perl] = "--enable-embedded-perl --with-perl-modules=yes, --disable-embedded-perl --with-perl-modules=no, perl"
-PACKAGECONFIG[smux] = ""
-PACKAGECONFIG[systemd] = "--with-systemd, --without-systemd"
-
-EXTRA_OECONF = " \
-    --enable-shared \
-    --disable-manuals \
-    --with-defaults \
-    --with-install-prefix=${D} \
-    --with-persistent-directory=${localstatedir}/lib/net-snmp \
-    --with-endianness=${@oe.utils.conditional('SITEINFO_ENDIANNESS', 'le', 'little', 'big', d)} \
-    --with-mib-modules='${MIB_MODULES}' \
-"
-
-MIB_MODULES = ""
-MIB_MODULES:append = " ${@bb.utils.filter('PACKAGECONFIG', 'smux', d)}"
-
-CACHED_CONFIGUREVARS = " \
-    ac_cv_header_valgrind_valgrind_h=no \
-    ac_cv_header_valgrind_memcheck_h=no \
-    ac_cv_ETC_MNTTAB=/etc/mtab \
-    lt_cv_shlibpath_overrides_runpath=yes \
-    ac_cv_path_UNAMEPROG=${base_bindir}/uname \
-    ac_cv_file__etc_printcap=no \
-    NETSNMP_CONFIGURE_OPTIONS= \
-"
-PERLPROG = "${bindir}/env perl"
-PERLPROG:class-native = "${bindir_native}/env perl"
-PERLPROG:append = "${@bb.utils.contains('PACKAGECONFIG', 'perl', ' -I${WORKDIR}', '', d)}"
-export PERLPROG
-
-HAS_PERL = "${@bb.utils.contains('PACKAGECONFIG', 'perl', '1', '0', d)}"
-
-PTEST_BUILD_HOST_FILES += "net-snmp-config gen-variables"
-
-do_configure:prepend() {
-    sed -i -e "s|I/usr/include|I${STAGING_INCDIR}|g" \
-        "${S}"/configure \
-        "${S}"/configure.d/config_os_libs2
-
-    if [ "${HAS_PERL}" = "1" ]; then
-        # this may need to be changed when package perl has any change.
-        cp -f ${STAGING_DIR_TARGET}/usr/lib*/perl?/*/Config.pm ${WORKDIR}/
-        cp -f ${STAGING_DIR_TARGET}/usr/lib*/perl?/*/*/Config_heavy.pl ${WORKDIR}/
-        sed -e "s@libpth => '/usr/lib.*@libpth => '${STAGING_DIR_TARGET}/${libdir} ${STAGING_DIR_TARGET}/${base_libdir}',@g" \
-            -e "s@privlibexp => '/usr@privlibexp => '${STAGING_DIR_TARGET}/usr@g" \
-            -e "s@scriptdir => '/usr@scriptdir => '${STAGING_DIR_TARGET}/usr@g" \
-            -e "s@sitearchexp => '/usr@sitearchexp => '${STAGING_DIR_TARGET}/usr@g" \
-            -e "s@sitelibexp => '/usr@sitearchexp => '${STAGING_DIR_TARGET}/usr@g" \
-            -e "s@vendorarchexp => '/usr@vendorarchexp => '${STAGING_DIR_TARGET}/usr@g" \
-            -e "s@vendorlibexp => '/usr@vendorlibexp => '${STAGING_DIR_TARGET}/usr@g" \
-            -i ${WORKDIR}/Config.pm
-    fi
-
-}
-
-do_configure:append() {
-    sed -e "s@^NSC_INCLUDEDIR=.*@NSC_INCLUDEDIR=${STAGING_DIR_TARGET}\$\{includedir\}@g" \
-        -e "s@^NSC_LIBDIR=-L.*@NSC_LIBDIR=-L${STAGING_DIR_TARGET}\$\{libdir\}@g" \
-        -e "s@^NSC_LDFLAGS=\"-L.* @NSC_LDFLAGS=\"-L${STAGING_DIR_TARGET}\$\{libdir\} @g" \
-        -i ${B}/net-snmp-config
-}
-
-do_install:append() {
-    install -d ${D}${sysconfdir}/snmp
-    install -d ${D}${sysconfdir}/init.d
-    install -m 755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/snmpd
-    install -m 644 ${WORKDIR}/snmpd.conf ${D}${sysconfdir}/snmp/
-    install -m 644 ${WORKDIR}/snmptrapd.conf ${D}${sysconfdir}/snmp/
-    install -d ${D}${systemd_unitdir}/system
-    install -m 0644 ${WORKDIR}/snmpd.service ${D}${systemd_unitdir}/system
-    install -m 0644 ${WORKDIR}/snmptrapd.service ${D}${systemd_unitdir}/system
-    sed -e "s@^NSC_SRCDIR=.*@NSC_SRCDIR=.@g" \
-        -i ${D}${bindir}/net-snmp-create-v3-user
-    sed -e 's@^NSC_SRCDIR=.*@NSC_SRCDIR=.@g' \
-        -e 's@[^ ]*-ffile-prefix-map=[^ "]*@@g' \
-        -e 's@[^ ]*-fdebug-prefix-map=[^ "]*@@g' \
-        -e 's@[^ ]*-fmacro-prefix-map=[^ "]*@@g' \
-        -e 's@[^ ]*--sysroot=[^ "]*@@g' \
-        -e 's@[^ ]*--with-libtool-sysroot=[^ "]*@@g' \
-        -e 's@[^ ]*--with-install-prefix=[^ "]*@@g' \
-        -e 's@[^ ]*PKG_CONFIG_PATH=[^ "]*@@g' \
-        -e 's@[^ ]*PKG_CONFIG_LIBDIR=[^ "]*@@g' \
-        -i ${D}${bindir}/net-snmp-config
-
-    sed -e 's@[^ ]*-ffile-prefix-map=[^ "]*@@g' \
-        -e 's@[^ ]*-fdebug-prefix-map=[^ "]*@@g' \
-        -e 's@[^ ]*-fmacro-prefix-map=[^ "]*@@g' \
-        -i ${D}${libdir}/pkgconfig/netsnmp*.pc
-
-    # ${STAGING_DIR_HOST} is empty for native builds, and the sed command below
-    # will result in errors if run for native.
-    if [ "${STAGING_DIR_HOST}" ]; then
-        sed -e 's@${STAGING_DIR_HOST}@@g' \
-            -i ${D}${bindir}/net-snmp-config ${D}${libdir}/pkgconfig/netsnmp*.pc
-    fi
-
-    sed -e "s@^NSC_INCLUDEDIR=.*@NSC_INCLUDEDIR=\$\{includedir\}@g" \
-        -e "s@^NSC_LIBDIR=-L.*@NSC_LIBDIR=-L\$\{libdir\}@g" \
-        -e "s@^NSC_LDFLAGS=\"-L.* @NSC_LDFLAGS=\"-L\$\{libdir\} @g" \
-        -i ${D}${bindir}/net-snmp-config
-
-    oe_multilib_header net-snmp/net-snmp-config.h
-
-    if [ "${HAS_PERL}" = "1" ]; then
-        find ${D}${libdir}/ -type f -name "perllocal.pod" | xargs rm -f
-    fi
-}
-
-do_install_ptest() {
-    install -d ${D}${PTEST_PATH}
-    for i in ${S}/dist ${S}/include ${B}/include ${S}/mibs ${S}/configure \
-        ${B}/net-snmp-config ${S}/testing; do
-        if [ -e "$i" ]; then
-            cp -R --no-dereference --preserve=mode,links -v "$i" ${D}${PTEST_PATH}
-        fi
-    done
-    echo `autoconf -V|awk '/autoconf/{print $NF}'` > ${D}${PTEST_PATH}/dist/autoconf-version
-
-    rmdlist="${D}${PTEST_PATH}/dist/net-snmp-solaris-build"
-    for i in $rmdlist; do
-        if [ -d "$i" ]; then
-            rm -rf "$i"
-        fi
-    done
-}
-
-SYSROOT_PREPROCESS_FUNCS += "net_snmp_sysroot_preprocess"
-SNMP_DBGDIR = "/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}"
-
-net_snmp_sysroot_preprocess () {
-    if [ -e ${D}${bindir}/net-snmp-config ]; then
-        install -d ${SYSROOT_DESTDIR}${bindir_crossscripts}/
-        install -m 755 ${D}${bindir}/net-snmp-config ${SYSROOT_DESTDIR}${bindir_crossscripts}/
-        sed -e "s@-I/usr/include@-I${STAGING_INCDIR}@g" \
-            -e "s@^prefix=.*@prefix=${STAGING_DIR_HOST}${prefix}@g" \
-            -e "s@^exec_prefix=.*@exec_prefix=${STAGING_EXECPREFIXDIR}@g" \
-            -e "s@^includedir=.*@includedir=${STAGING_INCDIR}@g" \
-            -e "s@^libdir=.*@libdir=${STAGING_LIBDIR}@g" \
-            -e "s@^NSC_SRCDIR=.*@NSC_SRCDIR=${S}@g" \
-            -e "s@-ffile-prefix-map=${SNMP_DBGDIR}@-ffile-prefix-map=${WORKDIR}=${SNMP_DBGDIR}@g" \
-            -e "s@-fdebug-prefix-map=${SNMP_DBGDIR}@-fdebug-prefix-map=${WORKDIR}=${SNMP_DBGDIR}@g" \
-            -e "s@-fdebug-prefix-map= -fdebug-prefix-map=@-fdebug-prefix-map=${STAGING_DIR_NATIVE}= \
-                  -fdebug-prefix-map=${STAGING_DIR_HOST}=@g" \
-            -e "s@--sysroot=@--sysroot=${STAGING_DIR_HOST}@g" \
-            -e "s@--with-libtool-sysroot=@--with-libtool-sysroot=${STAGING_DIR_HOST}@g" \
-            -e "s@--with-install-prefix=@--with-install-prefix=${D}@g" \
-          -i  ${SYSROOT_DESTDIR}${bindir_crossscripts}/net-snmp-config
-    fi
-}
-
-PACKAGES += "${PN}-libs ${PN}-mibs ${PN}-server ${PN}-client \
-             ${PN}-server-snmpd ${PN}-server-snmptrapd \
-             ${PN}-lib-netsnmp ${PN}-lib-agent ${PN}-lib-helpers \
-             ${PN}-lib-mibs ${PN}-lib-trapd"
-
-# perl module
-PACKAGES += "${@bb.utils.contains('PACKAGECONFIG', 'perl', '${PN}-perl-modules', '', d)}"
-
-ALLOW_EMPTY:${PN} = "1"
-ALLOW_EMPTY:${PN}-server = "1"
-ALLOW_EMPTY:${PN}-libs = "1"
-
-FILES:${PN}-perl-modules = "${libdir}/perl?/*"
-RDEPENDS:${PN}-perl-modules = "perl"
-
-FILES:${PN}-libs = ""
-FILES:${PN}-mibs = "${datadir}/snmp/mibs"
-FILES:${PN}-server-snmpd = "${sbindir}/snmpd \
-                            ${sysconfdir}/snmp/snmpd.conf \
-                            ${sysconfdir}/init.d \
-                            ${systemd_unitdir}/system/snmpd.service \
-"
-
-FILES:${PN}-server-snmptrapd = "${sbindir}/snmptrapd \
-                                ${sysconfdir}/snmp/snmptrapd.conf \
-                                ${systemd_unitdir}/system/snmptrapd.service \
-"
-
-FILES:${PN}-lib-netsnmp = "${libdir}/libnetsnmp${SOLIBS}"
-FILES:${PN}-lib-agent = "${libdir}/libnetsnmpagent${SOLIBS}"
-FILES:${PN}-lib-helpers = "${libdir}/libnetsnmphelpers${SOLIBS}"
-FILES:${PN}-lib-mibs = "${libdir}/libnetsnmpmibs${SOLIBS}"
-FILES:${PN}-lib-trapd = "${libdir}/libnetsnmptrapd${SOLIBS}"
-
-FILES:${PN} = ""
-FILES:${PN}-client = "${bindir}/* ${datadir}/snmp/"
-FILES:${PN}-dbg += "${libdir}/.debug/ ${sbindir}/.debug/ ${bindir}/.debug/"
-FILES:${PN}-dev += "${bindir}/mib2c \
-                    ${bindir}/mib2c-update \
-                    ${bindir}/net-snmp-config \
-                    ${bindir}/net-snmp-create-v3-user \
-"
-
-CONFFILES:${PN}-server-snmpd = "${sysconfdir}/snmp/snmpd.conf"
-CONFFILES:${PN}-server-snmptrapd = "${sysconfdir}/snmp/snmptrapd.conf"
-
-INITSCRIPT_PACKAGES = "${PN}-server-snmpd"
-INITSCRIPT_NAME:${PN}-server-snmpd = "snmpd"
-INITSCRIPT_PARAMS:${PN}-server-snmpd = "start 90 2 3 4 5 . stop 60 0 1 6 ."
-
-SYSTEMD_PACKAGES = "${PN}-server-snmpd \
-                    ${PN}-server-snmptrapd"
-
-SYSTEMD_SERVICE:${PN}-server-snmpd = "snmpd.service"
-SYSTEMD_SERVICE:${PN}-server-snmptrapd =  "snmptrapd.service"
-
-RDEPENDS:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'perl', 'net-snmp-perl-modules', '', d)}"
-RDEPENDS:${PN} += "net-snmp-client"
-RDEPENDS:${PN}-server-snmpd += "net-snmp-mibs"
-RDEPENDS:${PN}-server-snmptrapd += "net-snmp-server-snmpd ${PN}-lib-trapd"
-RDEPENDS:${PN}-server += "net-snmp-server-snmpd net-snmp-server-snmptrapd"
-RDEPENDS:${PN}-client += "net-snmp-mibs net-snmp-libs"
-RDEPENDS:${PN}-libs += "libpci \
-                        ${PN}-lib-netsnmp \
-                        ${PN}-lib-agent \
-                        ${PN}-lib-helpers \
-                        ${PN}-lib-mibs \
-"
-RDEPENDS:${PN}-ptest += "perl \
-                         perl-module-test \
-                         perl-module-file-basename \
-                         perl-module-getopt-long \
-                         perl-module-file-temp \
-                         perl-module-data-dumper \
-"
-RDEPENDS:${PN}-dev = "net-snmp-client (= ${EXTENDPKGV}) net-snmp-server (= ${EXTENDPKGV})"
-RRECOMMENDS:${PN}-dbg = "net-snmp-client (= ${EXTENDPKGV}) net-snmp-server (= ${EXTENDPKGV})"
-
-RPROVIDES:${PN}-server-snmpd += "${PN}-server-snmpd-systemd"
-RREPLACES:${PN}-server-snmpd += "${PN}-server-snmpd-systemd"
-RCONFLICTS:${PN}-server-snmpd += "${PN}-server-snmpd-systemd"
-
-RPROVIDES:${PN}-server-snmptrapd += "${PN}-server-snmptrapd-systemd"
-RREPLACES:${PN}-server-snmptrapd += "${PN}-server-snmptrapd-systemd"
-RCONFLICTS:${PN}-server-snmptrapd += "${PN}-server-snmptrapd-systemd"
-
-LEAD_SONAME = "libnetsnmp.so"
-
-MULTILIB_SCRIPTS = "${PN}-dev:${bindir}/net-snmp-config"
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.9.3.bb b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.9.3.bb
new file mode 100644
index 0000000..7af5147
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.9.3.bb
@@ -0,0 +1,292 @@
+SUMMARY = "Various tools relating to the Simple Network Management Protocol"
+HOMEPAGE = "http://www.net-snmp.org/"
+SECTION = "net"
+LICENSE = "BSD-3-Clause & MIT"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=9d100a395a38584f2ec18a8275261687"
+
+DEPENDS = "openssl"
+DEPENDS:append:class-target = " pciutils"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/net-snmp/net-snmp-${PV}.tar.gz \
+           file://init \
+           file://snmpd.conf \
+           file://snmptrapd.conf \
+           file://snmpd.service \
+           file://snmptrapd.service \
+           file://net-snmp-add-knob-whether-nlist.h-are-checked.patch \
+           file://fix-libtool-finish.patch \
+           file://net-snmp-testing-add-the-output-format-for-ptest.patch \
+           file://run-ptest \
+           file://0001-config_os_headers-Error-Fix.patch \
+           file://0001-snmplib-keytools.c-Don-t-check-for-return-from-EVP_M.patch \
+           file://0001-get_pid_from_inode-Include-limit.h.patch \
+           file://0004-configure-fix-incorrect-variable.patch \
+           file://net-snmp-5.7.2-fix-engineBoots-value-on-SIGHUP.patch \
+           file://net-snmp-fix-for-disable-des.patch \
+           file://reproducibility-have-printcap.patch \
+           file://0001-ac_add_search_path.m4-keep-consistent-between-32bit.patch \
+           "
+SRC_URI[sha256sum] = "2097f29b7e1bf3f1300b4bae52fa2308d0bb8d5d3998dbe02f9462a413a2ef0a"
+
+UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/net-snmp/files/net-snmp/"
+UPSTREAM_CHECK_REGEX = "/net-snmp/(?P<pver>\d+(\.\d+)+)/"
+
+inherit autotools-brokensep update-rc.d siteinfo systemd pkgconfig perlnative ptest multilib_script multilib_header
+
+EXTRA_OEMAKE = "INSTALL_PREFIX=${D} OTHERLDFLAGS='${LDFLAGS}' HOST_CPPFLAGS='${BUILD_CPPFLAGS}'"
+
+PARALLEL_MAKE = ""
+CCACHE = ""
+CLEANBROKEN = "1"
+
+TARGET_CC_ARCH += "${LDFLAGS}"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6 systemd', d)} des smux"
+PACKAGECONFIG[des] = "--enable-des, --disable-des"
+PACKAGECONFIG[elfutils] = "--with-elf, --without-elf, elfutils"
+PACKAGECONFIG[ipv6] = "--enable-ipv6, --disable-ipv6"
+PACKAGECONFIG[libnl] = "--with-nl, --without-nl, libnl"
+PACKAGECONFIG[perl] = "--enable-embedded-perl --with-perl-modules=yes, --disable-embedded-perl --with-perl-modules=no, perl"
+PACKAGECONFIG[smux] = ""
+PACKAGECONFIG[systemd] = "--with-systemd, --without-systemd"
+
+EXTRA_OECONF = " \
+    --enable-shared \
+    --disable-manuals \
+    --with-defaults \
+    --with-install-prefix=${D} \
+    --with-persistent-directory=${localstatedir}/lib/net-snmp \
+    --with-endianness=${@oe.utils.conditional('SITEINFO_ENDIANNESS', 'le', 'little', 'big', d)} \
+    --with-mib-modules='${MIB_MODULES}' \
+"
+
+MIB_MODULES = ""
+MIB_MODULES:append = " ${@bb.utils.filter('PACKAGECONFIG', 'smux', d)}"
+
+CACHED_CONFIGUREVARS = " \
+    ac_cv_header_valgrind_valgrind_h=no \
+    ac_cv_header_valgrind_memcheck_h=no \
+    ac_cv_ETC_MNTTAB=/etc/mtab \
+    lt_cv_shlibpath_overrides_runpath=yes \
+    ac_cv_path_UNAMEPROG=${base_bindir}/uname \
+    ac_cv_path_PSPROG=${base_bindir}/ps \
+    ac_cv_file__etc_printcap=no \
+    NETSNMP_CONFIGURE_OPTIONS= \
+"
+PERLPROG = "${bindir}/env perl"
+PERLPROG:class-native = "${bindir_native}/env perl"
+PERLPROG:append = "${@bb.utils.contains('PACKAGECONFIG', 'perl', ' -I${WORKDIR}', '', d)}"
+export PERLPROG
+
+HAS_PERL = "${@bb.utils.contains('PACKAGECONFIG', 'perl', '1', '0', d)}"
+
+PTEST_BUILD_HOST_FILES += "net-snmp-config gen-variables"
+
+do_configure:prepend() {
+    sed -i -e "s|I/usr/include|I${STAGING_INCDIR}|g" \
+        "${S}"/configure \
+        "${S}"/configure.d/config_os_libs2
+
+    if [ "${HAS_PERL}" = "1" ]; then
+        # this may need to be changed when package perl has any change.
+        cp -f ${STAGING_DIR_TARGET}/usr/lib*/perl?/*/Config.pm ${WORKDIR}/
+        cp -f ${STAGING_DIR_TARGET}/usr/lib*/perl?/*/*/Config_heavy.pl ${WORKDIR}/
+        sed -e "s@libpth => '/usr/lib.*@libpth => '${STAGING_DIR_TARGET}/${libdir} ${STAGING_DIR_TARGET}/${base_libdir}',@g" \
+            -e "s@privlibexp => '/usr@privlibexp => '${STAGING_DIR_TARGET}/usr@g" \
+            -e "s@scriptdir => '/usr@scriptdir => '${STAGING_DIR_TARGET}/usr@g" \
+            -e "s@sitearchexp => '/usr@sitearchexp => '${STAGING_DIR_TARGET}/usr@g" \
+            -e "s@sitelibexp => '/usr@sitearchexp => '${STAGING_DIR_TARGET}/usr@g" \
+            -e "s@vendorarchexp => '/usr@vendorarchexp => '${STAGING_DIR_TARGET}/usr@g" \
+            -e "s@vendorlibexp => '/usr@vendorlibexp => '${STAGING_DIR_TARGET}/usr@g" \
+            -i ${WORKDIR}/Config.pm
+    fi
+
+}
+
+do_configure:append() {
+    sed -e "s@^NSC_INCLUDEDIR=.*@NSC_INCLUDEDIR=${STAGING_DIR_TARGET}\$\{includedir\}@g" \
+        -e "s@^NSC_LIBDIR=-L.*@NSC_LIBDIR=-L${STAGING_DIR_TARGET}\$\{libdir\}@g" \
+        -e "s@^NSC_LDFLAGS=\"-L.* @NSC_LDFLAGS=\"-L${STAGING_DIR_TARGET}\$\{libdir\} @g" \
+        -i ${B}/net-snmp-config
+}
+
+do_install:append() {
+    install -d ${D}${sysconfdir}/snmp
+    install -d ${D}${sysconfdir}/init.d
+    install -m 755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/snmpd
+    install -m 644 ${WORKDIR}/snmpd.conf ${D}${sysconfdir}/snmp/
+    install -m 644 ${WORKDIR}/snmptrapd.conf ${D}${sysconfdir}/snmp/
+    install -d ${D}${systemd_unitdir}/system
+    install -m 0644 ${WORKDIR}/snmpd.service ${D}${systemd_unitdir}/system
+    install -m 0644 ${WORKDIR}/snmptrapd.service ${D}${systemd_unitdir}/system
+    sed -e "s@^NSC_SRCDIR=.*@NSC_SRCDIR=.@g" \
+        -i ${D}${bindir}/net-snmp-create-v3-user
+    sed -e 's@^NSC_SRCDIR=.*@NSC_SRCDIR=.@g' \
+        -e 's@[^ ]*-ffile-prefix-map=[^ "]*@@g' \
+        -e 's@[^ ]*-fdebug-prefix-map=[^ "]*@@g' \
+        -e 's@[^ ]*-fmacro-prefix-map=[^ "]*@@g' \
+        -e 's@[^ ]*--sysroot=[^ "]*@@g' \
+        -e 's@[^ ]*--with-libtool-sysroot=[^ "]*@@g' \
+        -e 's@[^ ]*--with-install-prefix=[^ "]*@@g' \
+        -e 's@[^ ]*PKG_CONFIG_PATH=[^ "]*@@g' \
+        -e 's@[^ ]*PKG_CONFIG_LIBDIR=[^ "]*@@g' \
+        -i ${D}${bindir}/net-snmp-config
+
+    sed -e 's@[^ ]*-ffile-prefix-map=[^ "]*@@g' \
+        -e 's@[^ ]*-fdebug-prefix-map=[^ "]*@@g' \
+        -e 's@[^ ]*-fmacro-prefix-map=[^ "]*@@g' \
+        -i ${D}${libdir}/pkgconfig/netsnmp*.pc
+
+    # ${STAGING_DIR_HOST} is empty for native builds, and the sed command below
+    # will result in errors if run for native.
+    if [ "${STAGING_DIR_HOST}" ]; then
+        sed -e 's@${STAGING_DIR_HOST}@@g' \
+            -i ${D}${bindir}/net-snmp-config ${D}${libdir}/pkgconfig/netsnmp*.pc
+    fi
+
+    sed -e "s@^NSC_INCLUDEDIR=.*@NSC_INCLUDEDIR=\$\{includedir\}@g" \
+        -e "s@^NSC_LIBDIR=-L.*@NSC_LIBDIR=-L\$\{libdir\}@g" \
+        -e "s@^NSC_LDFLAGS=\"-L.* @NSC_LDFLAGS=\"-L\$\{libdir\} @g" \
+        -i ${D}${bindir}/net-snmp-config
+
+    oe_multilib_header net-snmp/net-snmp-config.h
+
+    if [ "${HAS_PERL}" = "1" ]; then
+        find ${D}${libdir}/ -type f -name "perllocal.pod" | xargs rm -f
+    fi
+}
+
+do_install_ptest() {
+    install -d ${D}${PTEST_PATH}
+    for i in ${S}/dist ${S}/include ${B}/include ${S}/mibs ${S}/configure \
+        ${B}/net-snmp-config ${S}/testing; do
+        if [ -e "$i" ]; then
+            cp -R --no-dereference --preserve=mode,links -v "$i" ${D}${PTEST_PATH}
+        fi
+    done
+    echo `autoconf -V|awk '/autoconf/{print $NF}'` > ${D}${PTEST_PATH}/dist/autoconf-version
+
+    rmdlist="${D}${PTEST_PATH}/dist/net-snmp-solaris-build"
+    for i in $rmdlist; do
+        if [ -d "$i" ]; then
+            rm -rf "$i"
+        fi
+    done
+}
+
+SYSROOT_PREPROCESS_FUNCS += "net_snmp_sysroot_preprocess"
+SNMP_DBGDIR = "/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}"
+
+net_snmp_sysroot_preprocess () {
+    if [ -e ${D}${bindir}/net-snmp-config ]; then
+        install -d ${SYSROOT_DESTDIR}${bindir_crossscripts}/
+        install -m 755 ${D}${bindir}/net-snmp-config ${SYSROOT_DESTDIR}${bindir_crossscripts}/
+        sed -e "s@-I/usr/include@-I${STAGING_INCDIR}@g" \
+            -e "s@^prefix=.*@prefix=${STAGING_DIR_HOST}${prefix}@g" \
+            -e "s@^exec_prefix=.*@exec_prefix=${STAGING_EXECPREFIXDIR}@g" \
+            -e "s@^includedir=.*@includedir=${STAGING_INCDIR}@g" \
+            -e "s@^libdir=.*@libdir=${STAGING_LIBDIR}@g" \
+            -e "s@^NSC_SRCDIR=.*@NSC_SRCDIR=${S}@g" \
+            -e "s@-ffile-prefix-map=${SNMP_DBGDIR}@-ffile-prefix-map=${WORKDIR}=${SNMP_DBGDIR}@g" \
+            -e "s@-fdebug-prefix-map=${SNMP_DBGDIR}@-fdebug-prefix-map=${WORKDIR}=${SNMP_DBGDIR}@g" \
+            -e "s@-fdebug-prefix-map= -fdebug-prefix-map=@-fdebug-prefix-map=${STAGING_DIR_NATIVE}= \
+                  -fdebug-prefix-map=${STAGING_DIR_HOST}=@g" \
+            -e "s@--sysroot=@--sysroot=${STAGING_DIR_HOST}@g" \
+            -e "s@--with-libtool-sysroot=@--with-libtool-sysroot=${STAGING_DIR_HOST}@g" \
+            -e "s@--with-install-prefix=@--with-install-prefix=${D}@g" \
+          -i  ${SYSROOT_DESTDIR}${bindir_crossscripts}/net-snmp-config
+    fi
+}
+
+PACKAGES += "${PN}-libs ${PN}-mibs ${PN}-server ${PN}-client \
+             ${PN}-server-snmpd ${PN}-server-snmptrapd \
+             ${PN}-lib-netsnmp ${PN}-lib-agent ${PN}-lib-helpers \
+             ${PN}-lib-mibs ${PN}-lib-trapd"
+
+# perl module
+PACKAGES += "${@bb.utils.contains('PACKAGECONFIG', 'perl', '${PN}-perl-modules', '', d)}"
+
+ALLOW_EMPTY:${PN} = "1"
+ALLOW_EMPTY:${PN}-server = "1"
+ALLOW_EMPTY:${PN}-libs = "1"
+
+FILES:${PN}-perl-modules = "${libdir}/perl?/*"
+RDEPENDS:${PN}-perl-modules = "perl"
+
+FILES:${PN}-libs = ""
+FILES:${PN}-mibs = "${datadir}/snmp/mibs"
+FILES:${PN}-server-snmpd = "${sbindir}/snmpd \
+                            ${sysconfdir}/snmp/snmpd.conf \
+                            ${sysconfdir}/init.d \
+                            ${systemd_unitdir}/system/snmpd.service \
+"
+
+FILES:${PN}-server-snmptrapd = "${sbindir}/snmptrapd \
+                                ${sysconfdir}/snmp/snmptrapd.conf \
+                                ${systemd_unitdir}/system/snmptrapd.service \
+"
+
+FILES:${PN}-lib-netsnmp = "${libdir}/libnetsnmp${SOLIBS}"
+FILES:${PN}-lib-agent = "${libdir}/libnetsnmpagent${SOLIBS}"
+FILES:${PN}-lib-helpers = "${libdir}/libnetsnmphelpers${SOLIBS}"
+FILES:${PN}-lib-mibs = "${libdir}/libnetsnmpmibs${SOLIBS}"
+FILES:${PN}-lib-trapd = "${libdir}/libnetsnmptrapd${SOLIBS}"
+
+FILES:${PN} = ""
+FILES:${PN}-client = "${bindir}/* ${datadir}/snmp/"
+FILES:${PN}-dbg += "${libdir}/.debug/ ${sbindir}/.debug/ ${bindir}/.debug/"
+FILES:${PN}-dev += "${bindir}/mib2c \
+                    ${bindir}/mib2c-update \
+                    ${bindir}/net-snmp-config \
+                    ${bindir}/net-snmp-create-v3-user \
+"
+
+CONFFILES:${PN}-server-snmpd = "${sysconfdir}/snmp/snmpd.conf"
+CONFFILES:${PN}-server-snmptrapd = "${sysconfdir}/snmp/snmptrapd.conf"
+
+INITSCRIPT_PACKAGES = "${PN}-server-snmpd"
+INITSCRIPT_NAME:${PN}-server-snmpd = "snmpd"
+INITSCRIPT_PARAMS:${PN}-server-snmpd = "start 90 2 3 4 5 . stop 60 0 1 6 ."
+
+SYSTEMD_PACKAGES = "${PN}-server-snmpd \
+                    ${PN}-server-snmptrapd"
+
+SYSTEMD_SERVICE:${PN}-server-snmpd = "snmpd.service"
+SYSTEMD_SERVICE:${PN}-server-snmptrapd =  "snmptrapd.service"
+
+RDEPENDS:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'perl', 'net-snmp-perl-modules', '', d)}"
+RDEPENDS:${PN} += "net-snmp-client"
+RDEPENDS:${PN}-server-snmpd += "net-snmp-mibs"
+RDEPENDS:${PN}-server-snmptrapd += "net-snmp-server-snmpd ${PN}-lib-trapd"
+RDEPENDS:${PN}-server += "net-snmp-server-snmpd net-snmp-server-snmptrapd"
+RDEPENDS:${PN}-client += "net-snmp-mibs net-snmp-libs"
+RDEPENDS:${PN}-libs += "libpci \
+                        ${PN}-lib-netsnmp \
+                        ${PN}-lib-agent \
+                        ${PN}-lib-helpers \
+                        ${PN}-lib-mibs \
+"
+RDEPENDS:${PN}-ptest += "perl \
+                         perl-module-test \
+                         perl-module-file-basename \
+                         perl-module-getopt-long \
+                         perl-module-file-temp \
+                         perl-module-data-dumper \
+"
+RDEPENDS:${PN}-dev = "net-snmp-client (= ${EXTENDPKGV}) net-snmp-server (= ${EXTENDPKGV})"
+RRECOMMENDS:${PN}-dbg = "net-snmp-client (= ${EXTENDPKGV}) net-snmp-server (= ${EXTENDPKGV})"
+
+RPROVIDES:${PN}-server-snmpd += "${PN}-server-snmpd-systemd"
+RREPLACES:${PN}-server-snmpd += "${PN}-server-snmpd-systemd"
+RCONFLICTS:${PN}-server-snmpd += "${PN}-server-snmpd-systemd"
+
+RPROVIDES:${PN}-server-snmptrapd += "${PN}-server-snmptrapd-systemd"
+RREPLACES:${PN}-server-snmptrapd += "${PN}-server-snmptrapd-systemd"
+RCONFLICTS:${PN}-server-snmptrapd += "${PN}-server-snmptrapd-systemd"
+
+LEAD_SONAME = "libnetsnmp.so"
+
+MULTILIB_SCRIPTS = "${PN}-dev:${bindir}/net-snmp-config"
+
+BBCLASSEXTEND = "native"
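
The perl bindings split out above only appear when the recipe's 'perl' PACKAGECONFIG is enabled. A minimal configuration sketch, assuming a stock local.conf (the image-level append is illustrative, not part of this update):

    # local.conf: enable the perl PACKAGECONFIG so the net-snmp-perl-modules package is produced
    PACKAGECONFIG:append:pn-net-snmp = " perl"
    # then install the daemon plus the perl bindings into an image
    IMAGE_INSTALL:append = " net-snmp-server-snmpd net-snmp-perl-modules"
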
diff --git a/meta-openembedded/meta-networking/recipes-support/chrony/chrony/chrony.conf b/meta-openembedded/meta-networking/recipes-support/chrony/chrony/chrony.conf
index 8d226d3..d11e2d4 100644
--- a/meta-openembedded/meta-networking/recipes-support/chrony/chrony/chrony.conf
+++ b/meta-openembedded/meta-networking/recipes-support/chrony/chrony/chrony.conf
@@ -1,3 +1,6 @@
+# Load config files matching the /etc/chrony/conf.d/*.conf pattern.
+confdir /etc/chrony/conf.d
+
 # Use public NTP servers from the pool.ntp.org project.
 # Please consider joining the pool project if possible by running your own
 # server(s).
@@ -17,6 +20,10 @@
 #		gpios = <&ps7_gpio_0 56 0>;
 #	};
 
+# Load source files matching the /etc/chrony/sources.d/*.sources pattern.
+# These can be reloaded using 'chronyc reload sources'.
+sourcedir /etc/chrony/sources.d
+
 # In first three updates step the system clock instead of slew
 # if the adjustment is larger than 1 second.
 makestep 1.0 3
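
The new confdir/sourcedir directives above let time sources be dropped in as fragments instead of editing the main file. A small illustrative snippet, assuming chrony 4.x on the target; the file name and server address are placeholders, not shipped by the recipe:

    # /etc/chrony/sources.d/local-ntp.sources
    server 192.168.1.1 iburst

    # pick up the new snippet without restarting chronyd
    chronyc reload sources
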
diff --git a/meta-openembedded/meta-networking/recipes-support/cifs/cifs-utils_6.15.bb b/meta-openembedded/meta-networking/recipes-support/cifs/cifs-utils_6.15.bb
deleted file mode 100644
index a009a26..0000000
--- a/meta-openembedded/meta-networking/recipes-support/cifs/cifs-utils_6.15.bb
+++ /dev/null
@@ -1,44 +0,0 @@
-DESCRIPTION = "A a package of utilities for doing and managing mounts of the Linux CIFS filesystem."
-HOMEPAGE = "http://wiki.samba.org/index.php/LinuxCIFS_utils"
-SECTION = "otherosfs"
-LICENSE = "GPL-3.0-only & LGPL-3.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-
-SRCREV = "58ca03f183b375cb723097a241bc2fc2254dab21"
-SRC_URI = "git://git.samba.org/cifs-utils.git;branch=master"
-
-S = "${WORKDIR}/git"
-DEPENDS += "libtalloc"
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[cap] = "--with-libcap,--without-libcap,libcap"
-# when enabled, it creates ${bindir}/cifscreds and --ignore-fail-on-non-empty in do_install:append is needed
-PACKAGECONFIG[cifscreds] = "--enable-cifscreds,--disable-cifscreds,keyutils"
-# when enabled, it creates ${sbindir}/cifs.upcall and --ignore-fail-on-non-empty in do_install:append is needed
-PACKAGECONFIG[cifsupcall] = "--enable-cifsupcall,--disable-cifsupcall,krb5 libtalloc keyutils"
-PACKAGECONFIG[cifsidmap] = "--enable-cifsidmap,--disable-cifsidmap,keyutils samba"
-PACKAGECONFIG[cifsacl] = "--enable-cifsacl,--disable-cifsacl,samba"
-PACKAGECONFIG[pam] = "--enable-pam --with-pamdir=${base_libdir}/security,--disable-pam,libpam keyutils"
-
-inherit autotools pkgconfig
-
-do_configure:prepend() {
-    # want installed to /usr/sbin rather than /sbin to be DISTRO_FEATURES usrmerge compliant
-    # must override ROOTSBINDIR (default '/sbin'),
-    # setting --exec-prefix or --prefix in EXTRA_OECONF does not work
-    if ${@bb.utils.contains('DISTRO_FEATURES','usrmerge','true','false',d)}; then
-        export ROOTSBINDIR=${sbindir}
-    fi
-}
-
-do_install:append() {
-    if ${@bb.utils.contains('DISTRO_FEATURES','usrmerge','false','true',d)}; then
-        # Remove empty /usr/bin and /usr/sbin directories since the mount helper
-        # is installed to /sbin
-        rmdir --ignore-fail-on-non-empty ${D}${bindir} ${D}${sbindir}
-    fi
-}
-
-FILES:${PN} += "${base_libdir}/security"
-FILES:${PN}-dbg += "${base_libdir}/security/.debug"
-RRECOMMENDS:${PN} = "kernel-module-cifs"
diff --git a/meta-openembedded/meta-networking/recipes-support/cifs/cifs-utils_7.0.bb b/meta-openembedded/meta-networking/recipes-support/cifs/cifs-utils_7.0.bb
new file mode 100644
index 0000000..c78bbae
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/cifs/cifs-utils_7.0.bb
@@ -0,0 +1,44 @@
+DESCRIPTION = "A a package of utilities for doing and managing mounts of the Linux CIFS filesystem."
+HOMEPAGE = "http://wiki.samba.org/index.php/LinuxCIFS_utils"
+SECTION = "otherosfs"
+LICENSE = "GPL-3.0-only & LGPL-3.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
+
+SRCREV = "316522036133d44ed02cd39ed2748e2b59c85b30"
+SRC_URI = "git://git.samba.org/cifs-utils.git;branch=master"
+
+S = "${WORKDIR}/git"
+DEPENDS += "libtalloc"
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[cap] = "--with-libcap,--without-libcap,libcap"
+# when enabled, it creates ${bindir}/cifscreds and --ignore-fail-on-non-empty in do_install:append is needed
+PACKAGECONFIG[cifscreds] = "--enable-cifscreds,--disable-cifscreds,keyutils"
+# when enabled, it creates ${sbindir}/cifs.upcall and --ignore-fail-on-non-empty in do_install:append is needed
+PACKAGECONFIG[cifsupcall] = "--enable-cifsupcall,--disable-cifsupcall,krb5 libtalloc keyutils"
+PACKAGECONFIG[cifsidmap] = "--enable-cifsidmap,--disable-cifsidmap,keyutils samba"
+PACKAGECONFIG[cifsacl] = "--enable-cifsacl,--disable-cifsacl,samba"
+PACKAGECONFIG[pam] = "--enable-pam --with-pamdir=${base_libdir}/security,--disable-pam,libpam keyutils"
+
+inherit autotools pkgconfig
+
+do_configure:prepend() {
+    # We want this installed to /usr/sbin rather than /sbin to be compliant with the usrmerge DISTRO_FEATURE.
+    # ROOTSBINDIR (default '/sbin') must be overridden via the environment;
+    # setting --exec-prefix or --prefix in EXTRA_OECONF does not work.
+    if ${@bb.utils.contains('DISTRO_FEATURES','usrmerge','true','false',d)}; then
+        export ROOTSBINDIR=${sbindir}
+    fi
+}
+
+do_install:append() {
+    if ${@bb.utils.contains('DISTRO_FEATURES','usrmerge','false','true',d)}; then
+        # Remove empty /usr/bin and /usr/sbin directories since the mount helper
+        # is installed to /sbin
+        rmdir --ignore-fail-on-non-empty ${D}${bindir} ${D}${sbindir}
+    fi
+}
+
+FILES:${PN} += "${base_libdir}/security"
+FILES:${PN}-dbg += "${base_libdir}/security/.debug"
+RRECOMMENDS:${PN} = "kernel-module-cifs"
diff --git a/meta-openembedded/meta-networking/recipes-support/htpdate/htpdate_1.3.4.bb b/meta-openembedded/meta-networking/recipes-support/htpdate/htpdate_1.3.4.bb
deleted file mode 100644
index d256006..0000000
--- a/meta-openembedded/meta-networking/recipes-support/htpdate/htpdate_1.3.4.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-SUMMARY = "HTTP based time synchronization tool"
-DESCRIPTION = "The  HTTP Time Protocol (HTP) is used to synchronize a computer's time with\
- web servers as reference time source. This program can be used instead\
- ntpdate or similar, in networks that has a firewall blocking the NTP port.\
- Htpdate will synchronize the computer time to Greenwich Mean Time (GMT),\
- using the timestamps from HTTP headers found in web servers response (the\
- HEAD method will be used to get the information).\
- Htpdate works through proxy servers. Accuracy of htpdate will be usually\
- within 0.5 seconds (better with multiple servers).\
-"
-HOMEPAGE = "https://github.com/twekkel/htpdate"
-BUGTRACKER = "https://github.com/twekkel/htpdate/issues"
-LICENSE = "GPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://htpdate.c;beginline=26;endline=30;md5=2b6cdb94bd5349646d7e33f9f501eef7"
-
-SRC_URI = "http://www.vervest.org/htp/archive/c/htpdate-${PV}.tar.gz"
-SRC_URI[sha256sum] = "744f9200cfd3b008a5516c5eb6da727af532255a329126a7b8f49a5623985642"
-
-TARGET_CC_ARCH += "${LDFLAGS}"
-
-do_configure () {
-	:
-}
-
-do_compile () {
-	oe_runmake
-}
-
-do_install () {
-	oe_runmake install 'INSTALL=install' 'STRIP=echo' 'DESTDIR=${D}'
-}
diff --git a/meta-openembedded/meta-networking/recipes-support/htpdate/htpdate_1.3.6.bb b/meta-openembedded/meta-networking/recipes-support/htpdate/htpdate_1.3.6.bb
new file mode 100644
index 0000000..a4389ce
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/htpdate/htpdate_1.3.6.bb
@@ -0,0 +1,31 @@
+SUMMARY = "HTTP based time synchronization tool"
+DESCRIPTION = "The  HTTP Time Protocol (HTP) is used to synchronize a computer's time with\
+ web servers as reference time source. This program can be used instead\
+ ntpdate or similar, in networks that has a firewall blocking the NTP port.\
+ Htpdate will synchronize the computer time to Greenwich Mean Time (GMT),\
+ using the timestamps from HTTP headers found in web servers response (the\
+ HEAD method will be used to get the information).\
+ Htpdate works through proxy servers. Accuracy of htpdate will be usually\
+ within 0.5 seconds (better with multiple servers).\
+"
+HOMEPAGE = "https://github.com/twekkel/htpdate"
+BUGTRACKER = "https://github.com/twekkel/htpdate/issues"
+LICENSE = "GPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://htpdate.c;beginline=26;endline=30;md5=2b6cdb94bd5349646d7e33f9f501eef7"
+
+SRC_URI = "http://www.vervest.org/htp/archive/c/htpdate-${PV}.tar.gz"
+SRC_URI[sha256sum] = "3cdc558ec8e53ef374a42490b2f28c0b23981fa8754a6d7182044707828ad1e9"
+
+TARGET_CC_ARCH += "${LDFLAGS}"
+
+do_configure () {
+	:
+}
+
+do_compile () {
+	oe_runmake
+}
+
+do_install () {
+	oe_runmake install 'INSTALL=install' 'STRIP=echo' 'DESTDIR=${D}'
+}
diff --git a/meta-openembedded/meta-networking/recipes-support/libesmtp/libesmtp_1.1.0.bb b/meta-openembedded/meta-networking/recipes-support/libesmtp/libesmtp_1.1.0.bb
index bf1a12d..164c8c2 100644
--- a/meta-openembedded/meta-networking/recipes-support/libesmtp/libesmtp_1.1.0.bb
+++ b/meta-openembedded/meta-networking/recipes-support/libesmtp/libesmtp_1.1.0.bb
@@ -30,5 +30,7 @@
     -Dntlm=disabled \
 "
 
+CFLAGS += "-D_GNU_SOURCE"
+
 FILES:${PN} = "${libdir}/lib*${SOLIBS} \
                ${libdir}/esmtp-plugins-6.2.0/*${SOLIBSDEV}"
diff --git a/meta-openembedded/meta-networking/recipes-support/libldb/libldb_2.3.3.bb b/meta-openembedded/meta-networking/recipes-support/libldb/libldb_2.3.3.bb
deleted file mode 100644
index 6dd3ec3..0000000
--- a/meta-openembedded/meta-networking/recipes-support/libldb/libldb_2.3.3.bb
+++ /dev/null
@@ -1,81 +0,0 @@
-SUMMARY = "Hierarchical, reference counted memory pool system with destructors"
-HOMEPAGE = "http://ldb.samba.org"
-SECTION = "libs"
-LICENSE = "LGPL-3.0-or-later & LGPL-2.1-or-later & GPL-3.0-or-later"
-
-DEPENDS += "libtdb libtalloc libtevent popt"
-RDEPENDS:pyldb += "python3"
-
-SRC_URI = "http://samba.org/ftp/ldb/ldb-${PV}.tar.gz \
-           file://0001-do-not-import-target-module-while-cross-compile.patch \
-           file://0002-ldb-Add-configure-options-for-packages.patch \
-           file://0001-Fix-pyext_PATTERN-for-cross-compilation.patch \
-           file://libldb-fix-musl-libc-conflict-type-error.patch \
-          "
-
-PACKAGECONFIG ??= "\
-    ${@bb.utils.filter('DISTRO_FEATURES', 'acl', d)} \
-    ${@bb.utils.contains('DISTRO_FEATURES', 'xattr', 'attr', '', d)} \
-"
-PACKAGECONFIG[acl] = "--with-acl,--without-acl,acl"
-PACKAGECONFIG[attr] = "--with-attr,--without-attr,attr"
-PACKAGECONFIG[ldap] = ",,openldap"
-PACKAGECONFIG[libaio] = "--with-libaio,--without-libaio,libaio"
-PACKAGECONFIG[libbsd] = "--with-libbsd,--without-libbsd,libbsd"
-PACKAGECONFIG[libcap] = "--with-libcap,--without-libcap,libcap"
-PACKAGECONFIG[valgrind] = "--with-valgrind,--without-valgrind,valgrind"
-PACKAGECONFIG[lmdb] = ",--without-ldb-lmdb,lmdb,"
-
-SRC_URI += "${@bb.utils.contains('PACKAGECONFIG', 'ldap', '', 'file://0003-avoid-openldap-unless-wanted.patch', d)}"
-
-LIC_FILES_CHKSUM = "file://pyldb.h;endline=24;md5=dfbd238cecad76957f7f860fbe9adade \
-                    file://man/ldb.3.xml;beginline=261;endline=262;md5=137f9fd61040c1505d1aa1019663fd08 \
-                    file://tools/ldbdump.c;endline=19;md5=a7d4fc5d1f75676b49df491575a86a42"
-
-SRC_URI[md5sum] = "6824f69ea3bb58cb8a3be4c179e7569a"
-SRC_URI[sha256sum] = "9ef39700ff05b3e8f5801d2a39fe1ba023218650f81c9d377caca22f49076807"
-
-inherit pkgconfig waf-samba
-
-S = "${WORKDIR}/ldb-${PV}"
-
-#cross_compile cannot use preforked process, since fork process earlier than point subproces.popen
-#to cross Popen
-export WAF_NO_PREFORK="yes"
-
-EXTRA_OECONF += "--disable-rpath \
-                 --disable-rpath-install \
-                 --bundled-libraries=cmocka \
-                 --builtin-libraries=replace \
-                 --with-modulesdir=${libdir}/ldb/modules \
-                 --with-privatelibdir=${libdir}/ldb \
-                 --with-libiconv=${STAGING_DIR_HOST}${prefix}\
-                "
-
-PACKAGES =+ "pyldb pyldb-dbg pyldb-dev"
-
-NOAUTOPACKAGEDEBUG = "1"
-
-FILES:${PN} += "${libdir}/ldb/*"
-FILES:${PN}-dbg += "${bindir}/.debug/* \
-                    ${libdir}/.debug/* \
-                    ${libdir}/ldb/.debug/* \
-                    ${libdir}/ldb/modules/ldb/.debug/*"
-
-FILES:pyldb = "${libdir}/python${PYTHON_BASEVERSION}/site-packages/* \
-               ${libdir}/libpyldb-util.*.so.* \
-              "
-FILES:pyldb-dbg = "${libdir}/python${PYTHON_BASEVERSION}/site-packages/.debug \
-                   ${libdir}/.debug/libpyldb-util.*.so.*"
-FILES:pyldb-dev = "${libdir}/libpyldb-util.*.so"
-
-# Prevent third_party/waf/waflib/Configure.py checking host's path which is
-# incorrect for cross building.
-export PREFIX = "/"
-export LIBDIR = "${libdir}"
-export BINDIR = "${bindir}"
-
-do_configure:prepend() {
-    # For a clean rebuild
-    rm -fr bin/
-}
diff --git a/meta-openembedded/meta-networking/recipes-support/libldb/libldb_2.3.4.bb b/meta-openembedded/meta-networking/recipes-support/libldb/libldb_2.3.4.bb
new file mode 100644
index 0000000..af5f042
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/libldb/libldb_2.3.4.bb
@@ -0,0 +1,81 @@
+SUMMARY = "Hierarchical, reference counted memory pool system with destructors"
+HOMEPAGE = "http://ldb.samba.org"
+SECTION = "libs"
+LICENSE = "LGPL-3.0-or-later & LGPL-2.1-or-later & GPL-3.0-or-later"
+
+DEPENDS += "libtdb libtalloc libtevent popt"
+RDEPENDS:pyldb += "python3"
+
+SRC_URI = "http://samba.org/ftp/ldb/ldb-${PV}.tar.gz \
+           file://0001-do-not-import-target-module-while-cross-compile.patch \
+           file://0002-ldb-Add-configure-options-for-packages.patch \
+           file://0001-Fix-pyext_PATTERN-for-cross-compilation.patch \
+           file://libldb-fix-musl-libc-conflict-type-error.patch \
+          "
+
+PACKAGECONFIG ??= "\
+    ${@bb.utils.filter('DISTRO_FEATURES', 'acl', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'xattr', 'attr', '', d)} \
+"
+PACKAGECONFIG[acl] = "--with-acl,--without-acl,acl"
+PACKAGECONFIG[attr] = "--with-attr,--without-attr,attr"
+PACKAGECONFIG[ldap] = ",,openldap"
+PACKAGECONFIG[libaio] = "--with-libaio,--without-libaio,libaio"
+PACKAGECONFIG[libbsd] = "--with-libbsd,--without-libbsd,libbsd"
+PACKAGECONFIG[libcap] = "--with-libcap,--without-libcap,libcap"
+PACKAGECONFIG[valgrind] = "--with-valgrind,--without-valgrind,valgrind"
+PACKAGECONFIG[lmdb] = ",--without-ldb-lmdb,lmdb,"
+
+SRC_URI += "${@bb.utils.contains('PACKAGECONFIG', 'ldap', '', 'file://0003-avoid-openldap-unless-wanted.patch', d)}"
+
+LIC_FILES_CHKSUM = "file://pyldb.h;endline=24;md5=dfbd238cecad76957f7f860fbe9adade \
+                    file://man/ldb.3.xml;beginline=261;endline=262;md5=137f9fd61040c1505d1aa1019663fd08 \
+                    file://tools/ldbdump.c;endline=19;md5=a7d4fc5d1f75676b49df491575a86a42"
+
+SRC_URI[md5sum] = "b01d6913a06901c22c5bc6caedc548ac"
+SRC_URI[sha256sum] = "f2e88dcab7b6007d92724b62f8a16e7c6e77275885c60eb4f87097e4aa4082c1"
+
+inherit pkgconfig waf-samba
+
+S = "${WORKDIR}/ldb-${PV}"
+
+# cross compile cannot use a preforked process, since the fork happens earlier
+# than the point where subprocess.Popen is redirected to the cross Popen
+export WAF_NO_PREFORK="yes"
+
+EXTRA_OECONF += "--disable-rpath \
+                 --disable-rpath-install \
+                 --bundled-libraries=cmocka \
+                 --builtin-libraries=replace \
+                 --with-modulesdir=${libdir}/ldb/modules \
+                 --with-privatelibdir=${libdir}/ldb \
+                 --with-libiconv=${STAGING_DIR_HOST}${prefix}\
+                "
+
+PACKAGES =+ "pyldb pyldb-dbg pyldb-dev"
+
+NOAUTOPACKAGEDEBUG = "1"
+
+FILES:${PN} += "${libdir}/ldb/*"
+FILES:${PN}-dbg += "${bindir}/.debug/* \
+                    ${libdir}/.debug/* \
+                    ${libdir}/ldb/.debug/* \
+                    ${libdir}/ldb/modules/ldb/.debug/*"
+
+FILES:pyldb = "${libdir}/python${PYTHON_BASEVERSION}/site-packages/* \
+               ${libdir}/libpyldb-util.*.so.* \
+              "
+FILES:pyldb-dbg = "${libdir}/python${PYTHON_BASEVERSION}/site-packages/.debug \
+                   ${libdir}/.debug/libpyldb-util.*.so.*"
+FILES:pyldb-dev = "${libdir}/libpyldb-util.*.so"
+
+# Prevent third_party/waf/waflib/Configure.py checking host's path which is
+# incorrect for cross building.
+export PREFIX = "/"
+export LIBDIR = "${libdir}"
+export BINDIR = "${bindir}"
+
+do_configure:prepend() {
+    # For a clean rebuild
+    rm -fr bin/
+}
diff --git a/meta-openembedded/meta-networking/recipes-support/mdio-tools/mdio-netlink_git.bb b/meta-openembedded/meta-networking/recipes-support/mdio-tools/mdio-netlink_git.bb
new file mode 100644
index 0000000..b50d33f
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/mdio-tools/mdio-netlink_git.bb
@@ -0,0 +1,13 @@
+require mdio-tools.inc
+
+DEPENDS += "virtual/kernel libmnl"
+# This module requires Linux 5.6 or higher
+
+S = "${WORKDIR}/git/kernel"
+
+inherit module
+
+EXTRA_OEMAKE = "KDIR=${STAGING_KERNEL_DIR}"
+MODULES_INSTALL_TARGET = "install"
+
+RPROVIDES:${PN} += "kernel-module-mdio-netlink"
diff --git a/meta-openembedded/meta-networking/recipes-support/mdio-tools/mdio-tools.inc b/meta-openembedded/meta-networking/recipes-support/mdio-tools/mdio-tools.inc
new file mode 100644
index 0000000..a8a435f
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/mdio-tools/mdio-tools.inc
@@ -0,0 +1,9 @@
+DESCRIPTION = "A low-level debug tool for communicating with devices attached to an MDIO bus"
+SECTION = "networking"
+HOMEPAGE = "https://github.com/wkz/mdio-tools"
+LICENSE = "GPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://${WORKDIR}/git/COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+SRC_URI = "git://github.com/wkz/mdio-tools.git;protocol=https;branch=master"
+SRCREV = "07cbff2d5e2de05037e5e7edd5044d678394c8d1"
+PV = "1.1.1"
diff --git a/meta-openembedded/meta-networking/recipes-support/mdio-tools/mdio-tools_git.bb b/meta-openembedded/meta-networking/recipes-support/mdio-tools/mdio-tools_git.bb
new file mode 100644
index 0000000..cd4df3d
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/mdio-tools/mdio-tools_git.bb
@@ -0,0 +1,9 @@
+require mdio-tools.inc
+
+DEPENDS += "libmnl"
+
+S = "${WORKDIR}/git"
+
+inherit pkgconfig autotools
+
+RDEPENDS:${PN} = "kernel-module-mdio-netlink"
diff --git a/meta-openembedded/meta-networking/recipes-support/nbdkit/nbdkit_1.31.14.bb b/meta-openembedded/meta-networking/recipes-support/nbdkit/nbdkit_1.31.14.bb
deleted file mode 100644
index 3f21074..0000000
--- a/meta-openembedded/meta-networking/recipes-support/nbdkit/nbdkit_1.31.14.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-SUMMARY = "nbdkit is a toolkit for creating NBD servers."
-DESCRIPTION = "NBD — Network Block Device — is a protocol \
-for accessing Block Devices (hard disks and disk-like things) \
-over a Network. \
-\
-nbdkit is a toolkit for creating NBD servers."
-
-HOMEPAGE = "https://github.com/libguestfs/nbdkit"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=f9dcc2d8acdde215fa4bd6ac12bb14f0"
-
-SRC_URI = "git://github.com/libguestfs/nbdkit.git;protocol=https;branch=master \
-"
-SRCREV = "35b42a161f3f3c7cf388e33dbaa5bd7082aac9d8"
-
-S = "${WORKDIR}/git"
-
-DEPENDS = "curl xz e2fsprogs zlib"
-
-# autotools-brokensep is needed as nbdkit does not support build in external directory
-inherit pkgconfig python3native perlnative bash-completion autotools-brokensep
-
-# Those are required to build standalone
-EXTRA_OECONF = " --without-libvirt --without-libguestfs --disable-perl"
-
-# Disable some extended support (not desired for small embedded systems)
-#EXTRA_OECONF += " --disable-python"
-#EXTRA_OECONF += " --disable-ocaml"
-#EXTRA_OECONF += " --disable-rust"
-#EXTRA_OECONF += " --disable-ruby"
-#EXTRA_OECONF += " --disable-tcl"
-#EXTRA_OECONF += " --disable-lua"
-#EXTRA_OECONF += " --disable-vddk"
diff --git a/meta-openembedded/meta-networking/recipes-support/nbdkit/nbdkit_1.33.1.bb b/meta-openembedded/meta-networking/recipes-support/nbdkit/nbdkit_1.33.1.bb
new file mode 100644
index 0000000..322f89c
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/nbdkit/nbdkit_1.33.1.bb
@@ -0,0 +1,33 @@
+SUMMARY = "nbdkit is a toolkit for creating NBD servers."
+DESCRIPTION = "NBD — Network Block Device — is a protocol \
+for accessing Block Devices (hard disks and disk-like things) \
+over a Network. \
+\
+nbdkit is a toolkit for creating NBD servers."
+
+HOMEPAGE = "https://github.com/libguestfs/nbdkit"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=f9dcc2d8acdde215fa4bd6ac12bb14f0"
+
+SRC_URI = "git://github.com/libguestfs/nbdkit.git;protocol=https;branch=master \
+"
+SRCREV = "9ebc70dae82220f962167b1668fd2af6de886b16"
+
+S = "${WORKDIR}/git"
+
+DEPENDS = "curl xz e2fsprogs zlib"
+
+# autotools-brokensep is needed as nbdkit does not support building in an external directory
+inherit pkgconfig python3native perlnative bash-completion autotools-brokensep
+
+# Those are required to build standalone
+EXTRA_OECONF = " --without-libvirt --without-libguestfs --disable-perl"
+
+# Disable some extended support (not desired for small embedded systems)
+#EXTRA_OECONF += " --disable-python"
+#EXTRA_OECONF += " --disable-ocaml"
+#EXTRA_OECONF += " --disable-rust"
+#EXTRA_OECONF += " --disable-ruby"
+#EXTRA_OECONF += " --disable-tcl"
+#EXTRA_OECONF += " --disable-lua"
+#EXTRA_OECONF += " --disable-vddk"
diff --git a/meta-openembedded/meta-networking/recipes-support/netperf/netperf_git.bb b/meta-openembedded/meta-networking/recipes-support/netperf/netperf_git.bb
index 06b2edd..4074f0c 100644
--- a/meta-openembedded/meta-networking/recipes-support/netperf/netperf_git.bb
+++ b/meta-openembedded/meta-networking/recipes-support/netperf/netperf_git.bb
@@ -28,8 +28,7 @@
 
 # set the "_FILE_OFFSET_BITS" preprocessor symbol to 64 to support files
 # larger than 2GB
-CFLAGS:append = "${@bb.utils.contains('DISTRO_FEATURES', 'largefile', \
-    ' -D_FILE_OFFSET_BITS=64', '', d)}"
+CFLAGS:append = " -D_FILE_OFFSET_BITS=64"
 
 PACKAGECONFIG ??= ""
 PACKAGECONFIG[sctp] = "--enable-sctp,--disable-sctp,lksctp-tools,"
diff --git a/meta-openembedded/meta-networking/recipes-support/ntpsec/ntpsec/0001-wscript-Widen-the-search-for-tags.patch b/meta-openembedded/meta-networking/recipes-support/ntpsec/ntpsec/0001-wscript-Widen-the-search-for-tags.patch
new file mode 100644
index 0000000..98c62ee
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/ntpsec/ntpsec/0001-wscript-Widen-the-search-for-tags.patch
@@ -0,0 +1,29 @@
+From 9a7dead72f41e79979625c9bdef2fb638427d3d6 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 22 Aug 2022 20:54:17 -0700
+Subject: [PATCH] wscript: Widen the search for tags
+
+The default is to look for annotated tags; however, when using devtool we
+create our own git tree from release tarballs, which will have tags that
+are not annotated, so broaden the search to include all tags.
+
+Upstream-Status: Inappropriate [OE-specific]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ wscript | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/wscript b/wscript
+index 879ded1..dff835d 100644
+--- a/wscript
++++ b/wscript
+@@ -177,7 +177,7 @@ def configure(ctx):
+     if build_desc:
+         build_desc = ' ' + build_desc
+     if ctx.env.BIN_GIT:
+-        cmd = ctx.env.BIN_GIT + shlex.split("describe --dirty")
++        cmd = ctx.env.BIN_GIT + shlex.split("describe --tags --dirty")
+         git_short_hash = ctx.cmd_and_log(cmd).strip()
+         git_short_hash = '-'.join(git_short_hash.split('-')[1:])
+ 
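
The one-line wscript change matters because plain 'git describe' only matches annotated tags, while trees recreated from release tarballs typically carry lightweight ones. A quick shell illustration; the tag name is hypothetical:

    git tag v1.2.1                # lightweight tag, as devtool would create
    git describe --dirty          # fails with a "No annotated tags can describe" error
    git describe --tags --dirty   # succeeds, e.g. "v1.2.1" or "v1.2.1-3-gabc1234-dirty"
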
diff --git a/meta-openembedded/meta-networking/recipes-support/ntpsec/ntpsec_1.2.1.bb b/meta-openembedded/meta-networking/recipes-support/ntpsec/ntpsec_1.2.1.bb
index 3efac7d..e975f90 100644
--- a/meta-openembedded/meta-networking/recipes-support/ntpsec/ntpsec_1.2.1.bb
+++ b/meta-openembedded/meta-networking/recipes-support/ntpsec/ntpsec_1.2.1.bb
@@ -16,10 +16,14 @@
            file://0001-ntpd-ntp_sandbox.c-allow-clone3-for-glibc-2.34-in-se.patch \
            file://0001-ntpd-ntp_sandbox.c-allow-newfstatat-on-all-archs-for.patch \
            file://0002-ntpd-ntp_sandbox.c-match-riscv-to-aarch-in-seccomp-f.patch \
-           file://volatiles.ntpsec"
+           file://volatiles.ntpsec \
+           file://0001-wscript-Widen-the-search-for-tags.patch \
+           "
 
 SRC_URI[sha256sum] = "f2684835116c80b8f21782a5959a805ba3c44e3a681dd6c17c7cb00cc242c27a"
 
+UPSTREAM_CHECK_URI = "ftp://ftp.ntpsec.org/pub/releases/"
+
 inherit pkgconfig python3-dir python3targetconfig systemd update-alternatives update-rc.d useradd waf features_check
 
 # RDEPENDS on gnuplot with this restriction
@@ -54,7 +58,7 @@
 export pyext_PATTERN = "%s.so"
 export PYTHON_LDFLAGS = "-lpthread -ldl"
 
-CFLAGS:append = " -I${PYTHON_INCLUDE_DIR}"
+CFLAGS:append = " -I${PYTHON_INCLUDE_DIR} -D_GNU_SOURCE"
 
 EXTRA_OECONF = "--cross-compiler='${CC}' \
                 --cross-cflags='${CFLAGS}' \
diff --git a/meta-openembedded/meta-networking/recipes-support/openipmi/openipmi_2.0.32.bb b/meta-openembedded/meta-networking/recipes-support/openipmi/openipmi_2.0.32.bb
index c61303b..18f4dec 100644
--- a/meta-openembedded/meta-networking/recipes-support/openipmi/openipmi_2.0.32.bb
+++ b/meta-openembedded/meta-networking/recipes-support/openipmi/openipmi_2.0.32.bb
@@ -41,6 +41,8 @@
 
 inherit autotools-brokensep pkgconfig python3native perlnative update-rc.d systemd cpan-base python3targetconfig
 
+CFLAGS += "-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"
+
 EXTRA_OECONF = "--disable-static \
                 --with-perl='${STAGING_BINDIR_NATIVE}/perl-native/perl' \
                 --with-python='${STAGING_BINDIR_NATIVE}/python3-native/python3' \
@@ -85,6 +87,10 @@
     done
 }
 
+do_compile:append () {
+    sed -i -e 's#${RECIPE_SYSROOT_NATIVE}##g' ${S}/swig/perl/OpenIPMI_wrap.c
+}
+
 do_install:append () {
     echo "SAL: D = $D"
     echo "SAL: libdir = $libdir"
diff --git a/meta-openembedded/meta-networking/recipes-support/rdma-core/rdma-core_41.0.bb b/meta-openembedded/meta-networking/recipes-support/rdma-core/rdma-core_41.0.bb
deleted file mode 100644
index e5ecc5c..0000000
--- a/meta-openembedded/meta-networking/recipes-support/rdma-core/rdma-core_41.0.bb
+++ /dev/null
@@ -1,42 +0,0 @@
-SUMMARY = "Userspace support for InfiniBand/RDMA verbs"
-DESCRIPTION = "This is the userspace components for the Linux Kernel's drivers Infiniband/RDMA subsystem."
-SECTION = "libs"
-
-DEPENDS = "libnl"
-RDEPENDS:${PN} = "bash perl"
-
-SRC_URI = "git://github.com/linux-rdma/rdma-core.git;branch=master;protocol=https"
-SRCREV = "467363efbc0fea706752c1ba7a21c313823017e7"
-S = "${WORKDIR}/git"
-
-#Default Dual License https://github.com/linux-rdma/rdma-core/blob/master/COPYING.md
-LICENSE = "BSD-2-Clause | GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING.BSD_FB;md5=0ec18bae1a9df92c8d6ae01f94a289ae \
-		   file://COPYING.GPL2;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-
-EXTRA_OECMAKE = " \
-    -DCMAKE_INSTALL_SYSTEMD_SERVICEDIR=${systemd_system_unitdir} \
-    -DCMAKE_INSTALL_PERLDIR=${libdir}/perl5/${@get_perl_version(d)} \
-    -DNO_MAN_PAGES=1 \
-"
-
-LTO = ""
-
-FILES_SOLIBSDEV = ""
-FILES:${PN} += "${libdir}/*"
-INSANE_SKIP:${PN} += "dev-so"
-
-inherit cmake cpan-base pkgconfig python3native systemd
-
-SYSTEMD_SERVICE:${PN} = " \
-    srp_daemon.service \
-    iwpmd.service \
-    ibacm.socket \
-    rdma-load-modules@.service \
-    srp_daemon_port@.service \
-    rdma-hw.target \
-    ibacm.service \
-"
-SYSTEMD_AUTO_ENABLE = "disable"
-
-OECMAKE_FIND_ROOT_PATH_MODE_PROGRAM = "BOTH"
diff --git a/meta-openembedded/meta-networking/recipes-support/rdma-core/rdma-core_42.0.bb b/meta-openembedded/meta-networking/recipes-support/rdma-core/rdma-core_42.0.bb
new file mode 100644
index 0000000..e1123dc
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/rdma-core/rdma-core_42.0.bb
@@ -0,0 +1,42 @@
+SUMMARY = "Userspace support for InfiniBand/RDMA verbs"
+DESCRIPTION = "This is the userspace components for the Linux Kernel's drivers Infiniband/RDMA subsystem."
+SECTION = "libs"
+
+DEPENDS = "libnl"
+RDEPENDS:${PN} = "bash perl"
+
+SRC_URI = "git://github.com/linux-rdma/rdma-core.git;branch=master;protocol=https"
+SRCREV = "196bad56ed060612e22674b668b5ec3d8659ade3"
+S = "${WORKDIR}/git"
+
+#Default Dual License https://github.com/linux-rdma/rdma-core/blob/master/COPYING.md
+LICENSE = "BSD-2-Clause | GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING.BSD_FB;md5=0ec18bae1a9df92c8d6ae01f94a289ae \
+		   file://COPYING.GPL2;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+EXTRA_OECMAKE = " \
+    -DCMAKE_INSTALL_SYSTEMD_SERVICEDIR=${systemd_system_unitdir} \
+    -DCMAKE_INSTALL_PERLDIR=${libdir}/perl5/${@get_perl_version(d)} \
+    -DNO_MAN_PAGES=1 \
+"
+
+LTO = ""
+
+FILES_SOLIBSDEV = ""
+FILES:${PN} += "${libdir}/*"
+INSANE_SKIP:${PN} += "dev-so"
+
+inherit cmake cpan-base pkgconfig python3native systemd
+
+SYSTEMD_SERVICE:${PN} = " \
+    srp_daemon.service \
+    iwpmd.service \
+    ibacm.socket \
+    rdma-load-modules@.service \
+    srp_daemon_port@.service \
+    rdma-hw.target \
+    ibacm.service \
+"
+SYSTEMD_AUTO_ENABLE = "disable"
+
+OECMAKE_FIND_ROOT_PATH_MODE_PROGRAM = "BOTH"
diff --git a/meta-openembedded/meta-networking/recipes-support/ssmtp/ssmtp/0001-ssmtp-Correct-the-null-pointer-assignment-to-char-po.patch b/meta-openembedded/meta-networking/recipes-support/ssmtp/ssmtp/0001-ssmtp-Correct-the-null-pointer-assignment-to-char-po.patch
new file mode 100644
index 0000000..c7468fe
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/ssmtp/ssmtp/0001-ssmtp-Correct-the-null-pointer-assignment-to-char-po.patch
@@ -0,0 +1,51 @@
+From 58cfb4f86b7fbf19eb643dfba87fdd890b3d4a4a Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 24 Aug 2022 19:27:31 -0700
+Subject: [PATCH] ssmtp: Correct the null pointer assignment to char pointers
+
+Fixes
+error: incompatible integer to pointer conversion initializing 'char *' with an expression of type 'char' [-Wint-conversion]
+| char *from = (char)NULL;                /* Use this as the From: address */
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ssmtp.c | 20 ++++++++++----------
+ 1 file changed, 10 insertions(+), 10 deletions(-)
+
+diff --git a/ssmtp.c b/ssmtp.c
+index a74ba4e..0a719ac 100644
+--- a/ssmtp.c
++++ b/ssmtp.c
+@@ -55,21 +55,21 @@ bool_t use_oldauth = False;		/* use old AUTH LOGIN username style */
+ 
+ #define ARPADATE_LENGTH 32		/* Current date in RFC format */
+ char arpadate[ARPADATE_LENGTH];
+-char *auth_user = (char)NULL;
+-char *auth_pass = (char)NULL;
+-char *auth_method = (char)NULL;		/* Mechanism for SMTP authentication */
+-char *mail_domain = (char)NULL;
+-char *from = (char)NULL;		/* Use this as the From: address */
++char *auth_user = NULL;
++char *auth_pass = NULL;
++char *auth_method = NULL;		/* Mechanism for SMTP authentication */
++char *mail_domain = NULL;
++char *from = NULL;		/* Use this as the From: address */
+ char *hostname;
+ char *mailhost = "mailhub";
+-char *minus_f = (char)NULL;
+-char *minus_F = (char)NULL;
++char *minus_f = NULL;
++char *minus_F = NULL;
+ char *gecos;
+-char *prog = (char)NULL;
++char *prog = NULL;
+ char *root = NULL;
+ char *tls_cert = "/etc/ssl/certs/ssmtp.pem";	/* Default Certificate */
+-char *uad = (char)NULL;
+-char *config_file = (char)NULL;		/* alternate configuration file */
++char *uad = NULL;
++char *config_file = NULL;		/* alternate configuration file */
+ 
+ headers_t headers, *ht;
+ 
diff --git a/meta-openembedded/meta-networking/recipes-support/ssmtp/ssmtp_2.64.bb b/meta-openembedded/meta-networking/recipes-support/ssmtp/ssmtp_2.64.bb
index b0e921d..05d2bb1 100644
--- a/meta-openembedded/meta-networking/recipes-support/ssmtp/ssmtp_2.64.bb
+++ b/meta-openembedded/meta-networking/recipes-support/ssmtp/ssmtp_2.64.bb
@@ -7,7 +7,8 @@
            file://ssmtp-bug584162-fix.patch \
            file://build-ouside_srcdir.patch \
            file://use-DESTDIR.patch \
-"
+           file://0001-ssmtp-Correct-the-null-pointer-assignment-to-char-po.patch \
+           "
 
 SRC_URI[md5sum] = "65b4e0df4934a6cd08c506cabcbe584f"
 SRC_URI[sha256sum] = "22c37dc90c871e8e052b2cab0ad219d010fa938608cd66b21c8f3c759046fa36"
diff --git a/meta-openembedded/meta-networking/recipes-support/strongswan/files/0001-enum-Fix-compiler-warning.patch b/meta-openembedded/meta-networking/recipes-support/strongswan/files/0001-enum-Fix-compiler-warning.patch
deleted file mode 100644
index e730fe1..0000000
--- a/meta-openembedded/meta-networking/recipes-support/strongswan/files/0001-enum-Fix-compiler-warning.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From d23c0ea81e630af3cfda89aeeb52146c0c84c960 Mon Sep 17 00:00:00 2001
-From: Tobias Brunner <tobias@strongswan.org>
-Date: Mon, 2 May 2022 09:31:49 +0200
-Subject: [PATCH] enum: Fix compiler warning
-
-Closes strongswan/strongswan#1025
-
-Upstream-Status: Backport
-[https://github.com/strongswan/strongswan/commit/d23c0ea81e630af3cfda89aeeb52146c0c84c960]
-
-Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
----
- src/libstrongswan/utils/enum.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/src/libstrongswan/utils/enum.c b/src/libstrongswan/utils/enum.c
-index 79da450f0c..1e77489f6f 100644
---- a/src/libstrongswan/utils/enum.c
-+++ b/src/libstrongswan/utils/enum.c
-@@ -97,7 +97,7 @@ char *enum_flags_to_string(enum_name_t *e, u_int val, char *buf, size_t len)
- 		return buf;
- 	}
- 
--	if (snprintf(buf, len, e->names[0]) >= len)
-+	if (snprintf(buf, len, "%s", e->names[0]) >= len)
- 	{
- 		return NULL;
- 	}
--- 
-2.25.1
-
diff --git a/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.9.6.bb b/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.9.6.bb
deleted file mode 100644
index 1b82dce..0000000
--- a/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.9.6.bb
+++ /dev/null
@@ -1,186 +0,0 @@
-DESCRIPTION = "strongSwan is an OpenSource IPsec implementation for the \
-Linux operating system."
-SUMMARY = "strongSwan is an OpenSource IPsec implementation"
-HOMEPAGE = "http://www.strongswan.org"
-SECTION = "net"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-DEPENDS = "flex-native flex bison-native"
-DEPENDS:append = "${@bb.utils.contains('DISTRO_FEATURES', 'tpm2', '  tpm2-tss', '', d)}"
-
-SRC_URI = "http://download.strongswan.org/strongswan-${PV}.tar.bz2 \
-           file://0001-enum-Fix-compiler-warning.patch \
-           "
-
-SRC_URI[sha256sum] = "91d0978ac448912759b85452d8ff0d578aafd4507aaf4f1c1719f9d0c7318ab7"
-
-UPSTREAM_CHECK_REGEX = "strongswan-(?P<pver>\d+(\.\d+)+)\.tar"
-
-EXTRA_OECONF = " \
-        --without-lib-prefix \
-        --with-dev-headers=${includedir}/strongswan \
-"
-
-EXTRA_OECONF += "${@bb.utils.contains('DISTRO_FEATURES', 'systemd', '--with-systemdsystemunitdir=${systemd_unitdir}/system/', '--without-systemdsystemunitdir', d)}"
-
-PACKAGECONFIG ?= "curl gmp openssl sqlite3 swanctl curve25519\
-        ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd-charon', 'charon', d)} \
-        ${@bb.utils.contains('DISTRO_FEATURES', 'tpm2', 'tpm2', '', d)} \
-        ${@bb.utils.contains('DISTRO_FEATURES', 'ima', 'tnc-imc imc-hcd imc-os imc-scanner imc-attestation', '', d)} \
-        ${@bb.utils.contains('DISTRO_FEATURES', 'ima', 'tnc-imv imv-hcd imv-os imv-scanner imv-attestation', '', d)} \
-"
-
-PACKAGECONFIG[aesni] = "--enable-aesni,--disable-aesni,,${PN}-plugin-aesni"
-PACKAGECONFIG[bfd] = "--enable-bfd-backtraces,--disable-bfd-backtraces,binutils"
-PACKAGECONFIG[charon] = "--enable-charon,--disable-charon,"
-PACKAGECONFIG[curl] = "--enable-curl,--disable-curl,curl,${PN}-plugin-curl"
-PACKAGECONFIG[eap-identity] = "--enable-eap-identity,--disable-eap-identity,,${PN}-plugin-eap-identity"
-PACKAGECONFIG[eap-mschapv2] = "--enable-eap-mschapv2,--disable-eap-mschapv2,,${PN}-plugin-eap-mschapv2"
-PACKAGECONFIG[gmp] = "--enable-gmp,--disable-gmp,gmp,${PN}-plugin-gmp"
-PACKAGECONFIG[ldap] = "--enable-ldap,--disable-ldap,openldap,${PN}-plugin-ldap"
-PACKAGECONFIG[mysql] = "--enable-mysql,--disable-mysql,mysql5,${PN}-plugin-mysql"
-PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl,${PN}-plugin-openssl"
-PACKAGECONFIG[scep] = "--enable-scepclient,--disable-scepclient,"
-PACKAGECONFIG[soup] = "--enable-soup,--disable-soup,libsoup-2.4,${PN}-plugin-soup"
-PACKAGECONFIG[sqlite3] = "--enable-sqlite,--disable-sqlite,sqlite3,${PN}-plugin-sqlite"
-PACKAGECONFIG[stroke] = "--enable-stroke,--disable-stroke,,${PN}-plugin-stroke"
-PACKAGECONFIG[swanctl] = "--enable-swanctl,--disable-swanctl,,libgcc"
-PACKAGECONFIG[curve25519] = "--enable-curve25519,--disable-curve25519,, ${PN}-plugin-curve25519"
-
-# requires swanctl
-PACKAGECONFIG[systemd-charon] = "--enable-systemd,--disable-systemd,systemd,"
-
-# tpm needs meta-tpm layer
-PACKAGECONFIG[tpm2] = "--enable-tpm,--disable-tpm,,${PN}-plugin-tpm"
-
-
-# integraty configuration needs meta-integraty
-#imc 
-PACKAGECONFIG[tnc-imc] = "--enable-tnc-imc,--disable-tnc-imc,,  ${PN}-plugin-tnc-imc ${PN}-plugin-tnc-tnccs"
-PACKAGECONFIG[imc-test] = "--enable-imc-test,--disable-imc-test,,"
-PACKAGECONFIG[imc-scanner] = "--enable-imc-scanner,--disable-imc-scanner,,"
-PACKAGECONFIG[imc-os] = "--enable-imc-os,--disable-imc-os,,"
-PACKAGECONFIG[imc-attestation] = "--enable-imc-attestation,--disable-imc-attestation,,"
-PACKAGECONFIG[imc-swima] = "--enable-imc-swima, --disable-imc-swima, json-c,"
-PACKAGECONFIG[imc-hcd] = "--enable-imc-hcd, --disable-imc-hcd,,"
-
-#imv set
-PACKAGECONFIG[tnc-imv] = "--enable-tnc-imv,--disable-tnc-imv,, ${PN}-plugin-tnc-imv ${PN}-plugin-tnc-tnccs"
-PACKAGECONFIG[imv-test] = "--enable-imv-test,--disable-imv-test,,"
-PACKAGECONFIG[imv-scanner] = "--enable-imv-scanner,--disable-imv-scanner,,"
-PACKAGECONFIG[imv-os] = "--enable-imv-os,--disable-imv-os,,"
-PACKAGECONFIG[imv-attestation] = "--enable-imv-attestation,--disable-imv-attestation,,"
-PACKAGECONFIG[imv-swima] = "--enable-imv-swima, --disable-imv-swima, json-c,"
-PACKAGECONFIG[imv-hcd] = "--enable-imv-hcd, --disable-imv-hcd,,"
-
-PACKAGECONFIG[tnc-ifmap] = "--enable-tnc-ifmap,--disable-tnc-ifmap, libxml2, ${PN}-plugin-tnc-ifmap"
-PACKAGECONFIG[tnc-pdp] = "--enable-tnc-pdp,--disable-tnc-pdp,, ${PN}-plugin-tnc-pdp"
-
-PACKAGECONFIG[tnccs-11] = "--enable-tnccs-11,--disable-tnccs-11,libxml2, ${PN}-plugin-tnccs-11"
-PACKAGECONFIG[tnccs-20] = "--enable-tnccs-20,--disable-tnccs-20,, ${PN}-plugin-tnccs-20"
-PACKAGECONFIG[tnccs-dynamic] = "--enable-tnccs-dynamic,--disable-tnccs-dynamic,,${PN}-plugin-tnccs-dynamic"
-
-inherit autotools systemd pkgconfig
-
-RRECOMMENDS:${PN} = "kernel-module-ah4 \
-                     kernel-module-esp4 \
-                     kernel-module-xfrm-user \
-                    "
-
-FILES:${PN} += "${libdir}/ipsec/lib*${SOLIBS}"
-FILES:${PN}-dbg += "${bindir}/.debug ${sbindir}/.debug ${libdir}/ipsec/.debug ${libexecdir}/ipsec/.debug"
-FILES:${PN}-dev += "${libdir}/ipsec/lib*${SOLIBSDEV} ${libdir}/ipsec/*.la ${libdir}/ipsec/include/config.h"
-FILES:${PN}-staticdev += "${libdir}/ipsec/*.a"
-
-CONFFILES:${PN} = "${sysconfdir}/*.conf ${sysconfdir}/ipsec.d/*.conf ${sysconfdir}/strongswan.d/*.conf"
-
-PACKAGES += "${PN}-plugins"
-ALLOW_EMPTY:${PN}-plugins = "1"
-
-PACKAGE_BEFORE_PN = "${PN}-imcvs ${PN}-imcvs-dbg"
-ALLOW_EMPTY:${PN}-imcvs = "1"
-
-FILES:${PN}-imcvs = "${libdir}/ipsec/imcvs/*.so"
-FILES:${PN}-imcvs-dbg += "${libdir}/ipsec/imcvs/.debug"
-
-PACKAGES_DYNAMIC += "^${PN}-plugin-.*$"
-NOAUTOPACKAGEDEBUG = "1"
-
-python split_strongswan_plugins () {
-    sysconfdir = d.expand('${sysconfdir}/strongswan.d/charon')
-    libdir = d.expand('${libdir}/ipsec/plugins')
-    dbglibdir = os.path.join(libdir, '.debug')
-
-    def add_plugin_conf(f, pkg, file_regex, output_pattern, modulename):
-        dvar = d.getVar('PKGD')
-        oldfiles = d.getVar('CONFFILES:' + pkg)
-        newfile = '/' + os.path.relpath(f, dvar)
-
-        if not oldfiles:
-            d.setVar('CONFFILES:' + pkg, newfile)
-        else:
-            d.setVar('CONFFILES:' + pkg, oldfiles + " " + newfile)
-
-    split_packages = do_split_packages(d, libdir, r'libstrongswan-(.*)\.so', '${PN}-plugin-%s', 'strongSwan %s plugin', prepend=True)
-    do_split_packages(d, sysconfdir, r'(.*)\.conf', '${PN}-plugin-%s', 'strongSwan %s plugin', prepend=True, hook=add_plugin_conf)
-
-    split_dbg_packages = do_split_packages(d, dbglibdir, r'libstrongswan-(.*)\.so', '${PN}-plugin-%s-dbg', 'strongSwan %s plugin - Debugging files', prepend=True, extra_depends='${PN}-dbg')
-    split_dev_packages = do_split_packages(d, libdir, r'libstrongswan-(.*)\.la', '${PN}-plugin-%s-dev', 'strongSwan %s plugin - Development files', prepend=True, extra_depends='${PN}-dev')
-    split_staticdev_packages = do_split_packages(d, libdir, r'libstrongswan-(.*)\.a', '${PN}-plugin-%s-staticdev', 'strongSwan %s plugin - Development files (Static Libraries)', prepend=True, extra_depends='${PN}-staticdev')
-
-    if split_packages:
-        pn = d.getVar('PN')
-        d.setVar('RRECOMMENDS:' + pn + '-plugins', ' '.join(split_packages))
-        d.appendVar('RRECOMMENDS:' + pn + '-dbg', ' ' + ' '.join(split_dbg_packages))
-        d.appendVar('RRECOMMENDS:' + pn + '-dev', ' ' + ' '.join(split_dev_packages))
-        d.appendVar('RRECOMMENDS:' + pn + '-staticdev', ' ' + ' '.join(split_staticdev_packages))
-}
-
-PACKAGESPLITFUNCS:prepend = "split_strongswan_plugins "
-
-# Install some default plugins based on default strongSwan ./configure options
-# See https://wiki.strongswan.org/projects/strongswan/wiki/Pluginlist
-RDEPENDS:${PN} += "\
-    ${PN}-plugin-aes \
-    ${PN}-plugin-attr \
-    ${PN}-plugin-cmac \
-    ${PN}-plugin-constraints \
-    ${PN}-plugin-des \
-    ${PN}-plugin-dnskey \
-    ${PN}-plugin-hmac \
-    ${PN}-plugin-kernel-netlink \
-    ${PN}-plugin-md5 \
-    ${PN}-plugin-nonce \
-    ${PN}-plugin-pem \
-    ${PN}-plugin-pgp \
-    ${PN}-plugin-pkcs1 \
-    ${PN}-plugin-pkcs7 \
-    ${PN}-plugin-pkcs8 \
-    ${PN}-plugin-pkcs12 \
-    ${PN}-plugin-pubkey \
-    ${PN}-plugin-random \
-    ${PN}-plugin-rc2 \
-    ${PN}-plugin-resolve \
-    ${PN}-plugin-revocation \
-    ${PN}-plugin-sha1 \
-    ${PN}-plugin-sha2 \
-    ${PN}-plugin-socket-default \
-    ${PN}-plugin-sshkey \
-    ${PN}-plugin-updown \
-    ${PN}-plugin-vici \
-    ${PN}-plugin-x509 \
-    ${PN}-plugin-xauth-generic \
-    ${PN}-plugin-xcbc \
-    "
-
-RPROVIDES:${PN} += "${PN}-systemd"
-RREPLACES:${PN} += "${PN}-systemd"
-RCONFLICTS:${PN} += "${PN}-systemd"
-
-# The deprecated legacy 'strongswan-starter' service should only be used when charon and
-# stroke are enabled. When swanctl is in use, 'strongswan.service' is needed.
-# See: https://wiki.strongswan.org/projects/strongswan/wiki/Charon-systemd
-SYSTEMD_SERVICE:${PN} = " \
-    ${@bb.utils.contains('PACKAGECONFIG', 'swanctl', '${BPN}.service', '', d)} \
-    ${@bb.utils.contains('PACKAGECONFIG', 'charon', '${BPN}-starter.service', '', d)} \
-"
diff --git a/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.9.7.bb b/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.9.7.bb
new file mode 100644
index 0000000..71ffb7b
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.9.7.bb
@@ -0,0 +1,189 @@
+DESCRIPTION = "strongSwan is an OpenSource IPsec implementation for the \
+Linux operating system."
+SUMMARY = "strongSwan is an OpenSource IPsec implementation"
+HOMEPAGE = "http://www.strongswan.org"
+SECTION = "net"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+DEPENDS = "flex-native flex bison-native"
+DEPENDS:append = "${@bb.utils.contains('DISTRO_FEATURES', 'tpm2', '  tpm2-tss', '', d)}"
+
+SRC_URI = "http://download.strongswan.org/strongswan-${PV}.tar.bz2 \
+           "
+
+SRC_URI[sha256sum] = "9e64a2ba62efeac81abff1d962522404ebc6ed6c0d352a23ab7c0b2c639e3fcf"
+
+UPSTREAM_CHECK_REGEX = "strongswan-(?P<pver>\d+(\.\d+)+)\.tar"
+
+EXTRA_OECONF = " \
+        --without-lib-prefix \
+        --with-dev-headers=${includedir}/strongswan \
+"
+
+EXTRA_OECONF += "${@bb.utils.contains('DISTRO_FEATURES', 'systemd', '--with-systemdsystemunitdir=${systemd_unitdir}/system/', '--without-systemdsystemunitdir', d)}"
+
+PACKAGECONFIG ?= "curl gmp openssl sqlite3 swanctl curve25519\
+        ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd-charon', 'charon', d)} \
+        ${@bb.utils.contains('DISTRO_FEATURES', 'tpm2', 'tpm2', '', d)} \
+        ${@bb.utils.contains('DISTRO_FEATURES', 'ima', 'tnc-imc imc-hcd imc-os imc-scanner imc-attestation', '', d)} \
+        ${@bb.utils.contains('DISTRO_FEATURES', 'ima', 'tnc-imv imv-hcd imv-os imv-scanner imv-attestation', '', d)} \
+"
+
+PACKAGECONFIG[aesni] = "--enable-aesni,--disable-aesni,,${PN}-plugin-aesni"
+PACKAGECONFIG[bfd] = "--enable-bfd-backtraces,--disable-bfd-backtraces,binutils"
+PACKAGECONFIG[charon] = "--enable-charon,--disable-charon,"
+PACKAGECONFIG[curl] = "--enable-curl,--disable-curl,curl,${PN}-plugin-curl"
+PACKAGECONFIG[eap-identity] = "--enable-eap-identity,--disable-eap-identity,,${PN}-plugin-eap-identity"
+PACKAGECONFIG[eap-mschapv2] = "--enable-eap-mschapv2,--disable-eap-mschapv2,,${PN}-plugin-eap-mschapv2"
+PACKAGECONFIG[gmp] = "--enable-gmp,--disable-gmp,gmp,${PN}-plugin-gmp"
+PACKAGECONFIG[ldap] = "--enable-ldap,--disable-ldap,openldap,${PN}-plugin-ldap"
+PACKAGECONFIG[mysql] = "--enable-mysql,--disable-mysql,mysql5,${PN}-plugin-mysql"
+PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl,${PN}-plugin-openssl"
+PACKAGECONFIG[scep] = "--enable-scepclient,--disable-scepclient,"
+PACKAGECONFIG[soup] = "--enable-soup,--disable-soup,libsoup-2.4,${PN}-plugin-soup"
+PACKAGECONFIG[sqlite3] = "--enable-sqlite,--disable-sqlite,sqlite3,${PN}-plugin-sqlite"
+PACKAGECONFIG[stroke] = "--enable-stroke,--disable-stroke,,${PN}-plugin-stroke"
+PACKAGECONFIG[swanctl] = "--enable-swanctl,--disable-swanctl,,libgcc"
+PACKAGECONFIG[curve25519] = "--enable-curve25519,--disable-curve25519,, ${PN}-plugin-curve25519"
+
+# requires swanctl
+PACKAGECONFIG[systemd-charon] = "--enable-systemd,--disable-systemd,systemd,"
+
+# tpm needs meta-tpm layer
+PACKAGECONFIG[tpm2] = "--enable-tpm,--disable-tpm,,${PN}-plugin-tpm"
+
+
+# integrity configuration needs meta-integrity
+# imc
+PACKAGECONFIG[tnc-imc] = "--enable-tnc-imc,--disable-tnc-imc,,  ${PN}-plugin-tnc-imc ${PN}-plugin-tnc-tnccs"
+PACKAGECONFIG[imc-test] = "--enable-imc-test,--disable-imc-test,,"
+PACKAGECONFIG[imc-scanner] = "--enable-imc-scanner,--disable-imc-scanner,,"
+PACKAGECONFIG[imc-os] = "--enable-imc-os,--disable-imc-os,,"
+PACKAGECONFIG[imc-attestation] = "--enable-imc-attestation,--disable-imc-attestation,,"
+PACKAGECONFIG[imc-swima] = "--enable-imc-swima, --disable-imc-swima, json-c,"
+PACKAGECONFIG[imc-hcd] = "--enable-imc-hcd, --disable-imc-hcd,,"
+
+#imv set
+PACKAGECONFIG[tnc-imv] = "--enable-tnc-imv,--disable-tnc-imv,, ${PN}-plugin-tnc-imv ${PN}-plugin-tnc-tnccs"
+PACKAGECONFIG[imv-test] = "--enable-imv-test,--disable-imv-test,,"
+PACKAGECONFIG[imv-scanner] = "--enable-imv-scanner,--disable-imv-scanner,,"
+PACKAGECONFIG[imv-os] = "--enable-imv-os,--disable-imv-os,,"
+PACKAGECONFIG[imv-attestation] = "--enable-imv-attestation,--disable-imv-attestation,,"
+PACKAGECONFIG[imv-swima] = "--enable-imv-swima, --disable-imv-swima, json-c,"
+PACKAGECONFIG[imv-hcd] = "--enable-imv-hcd, --disable-imv-hcd,,"
+
+PACKAGECONFIG[tnc-ifmap] = "--enable-tnc-ifmap,--disable-tnc-ifmap, libxml2, ${PN}-plugin-tnc-ifmap"
+PACKAGECONFIG[tnc-pdp] = "--enable-tnc-pdp,--disable-tnc-pdp,, ${PN}-plugin-tnc-pdp"
+
+PACKAGECONFIG[tnccs-11] = "--enable-tnccs-11,--disable-tnccs-11,libxml2, ${PN}-plugin-tnccs-11"
+PACKAGECONFIG[tnccs-20] = "--enable-tnccs-20,--disable-tnccs-20,, ${PN}-plugin-tnccs-20"
+PACKAGECONFIG[tnccs-dynamic] = "--enable-tnccs-dynamic,--disable-tnccs-dynamic,,${PN}-plugin-tnccs-dynamic"
+
+inherit autotools systemd pkgconfig
+
+RRECOMMENDS:${PN} = "kernel-module-ah4 \
+                     kernel-module-esp4 \
+                     kernel-module-xfrm-user \
+                    "
+
+FILES:${PN} += "${libdir}/ipsec/lib*${SOLIBS}"
+FILES:${PN}-dbg += "${bindir}/.debug ${sbindir}/.debug ${libdir}/ipsec/.debug ${libexecdir}/ipsec/.debug"
+FILES:${PN}-dev += "${libdir}/ipsec/lib*${SOLIBSDEV} ${libdir}/ipsec/*.la ${libdir}/ipsec/include/config.h"
+FILES:${PN}-staticdev += "${libdir}/ipsec/*.a"
+
+CONFFILES:${PN} = "${sysconfdir}/*.conf ${sysconfdir}/ipsec.d/*.conf ${sysconfdir}/strongswan.d/*.conf"
+
+PACKAGES += "${PN}-plugins"
+ALLOW_EMPTY:${PN}-plugins = "1"
+
+PACKAGE_BEFORE_PN = "${PN}-imcvs ${PN}-imcvs-dbg"
+ALLOW_EMPTY:${PN}-imcvs = "1"
+
+FILES:${PN}-imcvs = "${libdir}/ipsec/imcvs/*.so"
+FILES:${PN}-imcvs-dbg += "${libdir}/ipsec/imcvs/.debug"
+
+PACKAGES_DYNAMIC += "^${PN}-plugin-.*$"
+NOAUTOPACKAGEDEBUG = "1"
+
+python split_strongswan_plugins () {
+    sysconfdir = d.expand('${sysconfdir}/strongswan.d/charon')
+    libdir = d.expand('${libdir}/ipsec/plugins')
+    dbglibdir = os.path.join(libdir, '.debug')
+
+    def add_plugin_conf(f, pkg, file_regex, output_pattern, modulename):
+        dvar = d.getVar('PKGD')
+        oldfiles = d.getVar('CONFFILES:' + pkg)
+        newfile = '/' + os.path.relpath(f, dvar)
+
+        if not oldfiles:
+            d.setVar('CONFFILES:' + pkg, newfile)
+        else:
+            d.setVar('CONFFILES:' + pkg, oldfiles + " " + newfile)
+
+    split_packages = do_split_packages(d, libdir, r'libstrongswan-(.*)\.so', '${PN}-plugin-%s', 'strongSwan %s plugin', prepend=True)
+    do_split_packages(d, sysconfdir, r'(.*)\.conf', '${PN}-plugin-%s', 'strongSwan %s plugin', prepend=True, hook=add_plugin_conf)
+
+    split_dbg_packages = do_split_packages(d, dbglibdir, r'libstrongswan-(.*)\.so', '${PN}-plugin-%s-dbg', 'strongSwan %s plugin - Debugging files', prepend=True, extra_depends='${PN}-dbg')
+    split_dev_packages = do_split_packages(d, libdir, r'libstrongswan-(.*)\.la', '${PN}-plugin-%s-dev', 'strongSwan %s plugin - Development files', prepend=True, extra_depends='${PN}-dev')
+    split_staticdev_packages = do_split_packages(d, libdir, r'libstrongswan-(.*)\.a', '${PN}-plugin-%s-staticdev', 'strongSwan %s plugin - Development files (Static Libraries)', prepend=True, extra_depends='${PN}-staticdev')
+
+    if split_packages:
+        pn = d.getVar('PN')
+        d.setVar('RRECOMMENDS:' + pn + '-plugins', ' '.join(split_packages))
+        d.appendVar('RRECOMMENDS:' + pn + '-dbg', ' ' + ' '.join(split_dbg_packages))
+        d.appendVar('RRECOMMENDS:' + pn + '-dev', ' ' + ' '.join(split_dev_packages))
+        d.appendVar('RRECOMMENDS:' + pn + '-staticdev', ' ' + ' '.join(split_staticdev_packages))
+}
+
+PACKAGESPLITFUNCS:prepend = "split_strongswan_plugins "
+
+# Install some default plugins based on default strongSwan ./configure options
+# See https://wiki.strongswan.org/projects/strongswan/wiki/Pluginlist
+RDEPENDS:${PN} += "\
+    ${PN}-plugin-aes \
+    ${PN}-plugin-attr \
+    ${PN}-plugin-cmac \
+    ${PN}-plugin-constraints \
+    ${PN}-plugin-des \
+    ${PN}-plugin-dnskey \
+    ${PN}-plugin-drbg \
+    ${PN}-plugin-fips-prf \
+    ${PN}-plugin-hmac \
+    ${PN}-plugin-kdf \
+    ${PN}-plugin-kernel-netlink \
+    ${PN}-plugin-md5 \
+    ${PN}-plugin-mgf1 \
+    ${PN}-plugin-nonce \
+    ${PN}-plugin-pem \
+    ${PN}-plugin-pgp \
+    ${PN}-plugin-pkcs1 \
+    ${PN}-plugin-pkcs7 \
+    ${PN}-plugin-pkcs8 \
+    ${PN}-plugin-pkcs12 \
+    ${PN}-plugin-pubkey \
+    ${PN}-plugin-random \
+    ${PN}-plugin-rc2 \
+    ${PN}-plugin-resolve \
+    ${PN}-plugin-revocation \
+    ${PN}-plugin-sha1 \
+    ${PN}-plugin-sha2 \
+    ${PN}-plugin-socket-default \
+    ${PN}-plugin-sshkey \
+    ${PN}-plugin-updown \
+    ${PN}-plugin-vici \
+    ${PN}-plugin-x509 \
+    ${PN}-plugin-xauth-generic \
+    ${PN}-plugin-xcbc \
+    "
+
+RPROVIDES:${PN} += "${PN}-systemd"
+RREPLACES:${PN} += "${PN}-systemd"
+RCONFLICTS:${PN} += "${PN}-systemd"
+
+# The deprecated legacy 'strongswan-starter' service should only be used when charon and
+# stroke are enabled. When swanctl is in use, 'strongswan.service' is needed.
+# See: https://wiki.strongswan.org/projects/strongswan/wiki/Charon-systemd
+SYSTEMD_SERVICE:${PN} = " \
+    ${@bb.utils.contains('PACKAGECONFIG', 'swanctl', '${BPN}.service', '', d)} \
+    ${@bb.utils.contains('PACKAGECONFIG', 'charon', '${BPN}-starter.service', '', d)} \
+"
diff --git a/meta-openembedded/meta-networking/recipes-support/tcpreplay/tcpreplay_4.4.1.bb b/meta-openembedded/meta-networking/recipes-support/tcpreplay/tcpreplay_4.4.1.bb
deleted file mode 100644
index 56db66b..0000000
--- a/meta-openembedded/meta-networking/recipes-support/tcpreplay/tcpreplay_4.4.1.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-SUMMARY = "Use previously captured traffic to test network devices"
-
-HOMEPAGE = "https://tcpreplay.appneta.com/"
-
-SECTION = "net"
-
-LICENSE = "GPL-3.0-only"
-LIC_FILES_CHKSUM = "file://docs/LICENSE;md5=10f0474a2f0e5dccfca20f69d6598ad8"
-
-SRC_URI = "https://github.com/appneta/tcpreplay/releases/download/v${PV}/tcpreplay-${PV}.tar.gz"
-
-SRC_URI[sha256sum] = "cb67b6491a618867fc4f9848f586019f1bb2ebd149f393afac5544ee55e4544f"
-
-UPSTREAM_CHECK_URI = "https://github.com/appneta/tcpreplay/releases"
-
-DEPENDS = "libpcap"
-
-EXTRA_OECONF += "--with-libpcap=${STAGING_DIR_HOST}/usr"
-
-inherit siteinfo autotools-brokensep
-
diff --git a/meta-openembedded/meta-networking/recipes-support/tcpreplay/tcpreplay_4.4.2.bb b/meta-openembedded/meta-networking/recipes-support/tcpreplay/tcpreplay_4.4.2.bb
new file mode 100644
index 0000000..165a0e7
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/tcpreplay/tcpreplay_4.4.2.bb
@@ -0,0 +1,21 @@
+SUMMARY = "Use previously captured traffic to test network devices"
+
+HOMEPAGE = "https://tcpreplay.appneta.com/"
+
+SECTION = "net"
+
+LICENSE = "GPL-3.0-only"
+LIC_FILES_CHKSUM = "file://docs/LICENSE;md5=10f0474a2f0e5dccfca20f69d6598ad8"
+
+SRC_URI = "https://github.com/appneta/tcpreplay/releases/download/v${PV}/tcpreplay-${PV}.tar.gz"
+
+SRC_URI[sha256sum] = "5b272cd83b67d6288a234ea15f89ecd93b4fadda65eddc44e7b5fcb2f395b615"
+
+UPSTREAM_CHECK_URI = "https://github.com/appneta/tcpreplay/releases"
+
+DEPENDS = "libpcap"
+
+EXTRA_OECONF += "--with-libpcap=${STAGING_DIR_HOST}/usr"
+
+inherit siteinfo autotools-brokensep
+
diff --git a/meta-openembedded/meta-networking/recipes-support/uftp/uftp_5.0.1.bb b/meta-openembedded/meta-networking/recipes-support/uftp/uftp_5.0.1.bb
new file mode 100644
index 0000000..6d5cf4b
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/uftp/uftp_5.0.1.bb
@@ -0,0 +1,16 @@
+DESCRIPTION = "Encrypted UDP based FTP with multicast"
+HOMEPAGE = "https://sourceforge.net/projects/uftp-multicast"
+SECTION = "libs/network"
+LICENSE = "GPL-3.0-only"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=d32239bcb673463ab874e80d47fae504"
+
+UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/uftp-multicast/files/source-tar/"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/uftp-multicast/source-tar/uftp-${PV}.tar.gz"
+SRC_URI[sha256sum] = "f0435fbc8e9ffa125e05600cb6c7fc933d7d587f5bae41b257267be4f2ce0e61"
+
+DEPENDS = "openssl"
+
+do_install () {
+	oe_runmake install DESTDIR=${D}
+}
diff --git a/meta-openembedded/meta-networking/recipes-support/uftp/uftp_5.0.bb b/meta-openembedded/meta-networking/recipes-support/uftp/uftp_5.0.bb
deleted file mode 100644
index 301e0fa..0000000
--- a/meta-openembedded/meta-networking/recipes-support/uftp/uftp_5.0.bb
+++ /dev/null
@@ -1,16 +0,0 @@
-DESCRIPTION = "Encrypted UDP based FTP with multicast"
-HOMEPAGE = "https://sourceforge.net/projects/uftp-multicast"
-SECTION = "libs/network"
-LICENSE = "GPL-3.0-only"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=d32239bcb673463ab874e80d47fae504"
-
-UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/uftp-multicast/files/source-tar/"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/uftp-multicast/source-tar/uftp-${PV}.tar.gz"
-SRC_URI[sha256sum] = "562f71ea5a24b615eb491f5744bad01e9c2e58244c1d6252d5ae98d320d308e0"
-
-DEPENDS = "openssl"
-
-do_install () {
-	oe_runmake install DESTDIR=${D}
-}
diff --git a/meta-openembedded/meta-networking/recipes-support/unbound/unbound_1.16.1.bb b/meta-openembedded/meta-networking/recipes-support/unbound/unbound_1.16.1.bb
deleted file mode 100644
index 5eb9ec1..0000000
--- a/meta-openembedded/meta-networking/recipes-support/unbound/unbound_1.16.1.bb
+++ /dev/null
@@ -1,44 +0,0 @@
-SUMMARY = "Unbound is a validating, recursive, and caching DNS resolver"
-DESCRIPTION = "Unbound's design is a set of modular components which incorporate \
-	features including enhanced security (DNSSEC) validation, Internet Protocol \
-	Version 6 (IPv6), and a client resolver library API as an integral part of the \
-	architecture"
-
-HOMEPAGE = "https://www.unbound.net/"
-SECTION = "net"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=5308494bc0590c0cb036afd781d78f06"
-
-SRC_URI = "git://github.com/NLnetLabs/unbound.git;protocol=http;branch=master;protocol=https \
-	file://0001-contrib-add-yocto-compatible-init-script.patch \
-"
-SRCREV = "903538c76e1d8eb30d0814bb55c3ef1ea28164e8"
-
-inherit autotools pkgconfig systemd update-rc.d
-
-DEPENDS = "openssl libevent libtool-native bison-native expat"
-RDEPENDS:${PN} = "bash openssl-bin daemonize"
-
-S = "${WORKDIR}/git"
-
-EXTRA_OECONF = "--with-libexpat=${STAGING_EXECPREFIXDIR} \
-		--with-ssl=${STAGING_EXECPREFIXDIR}"
-		
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'largefile systemd', d)}"
-PACKAGECONFIG[dnscrypt] = "--enable-dnscrypt, --disable-dnscrypt, libsodium"
-PACKAGECONFIG[largefile] = "--enable-largefile,--disable-largefile,,"
-PACKAGECONFIG[systemd] = "--enable-systemd,--disable-systemd,systemd"
-
-do_install:append() {
-	install -d ${D}${systemd_unitdir}/system
-	install -m 0644 ${B}/contrib/unbound.service ${D}${systemd_unitdir}/system
-
-	install -d ${D}${sysconfdir}/init.d
-	install -m 0755 ${S}/contrib/unbound.init ${D}${sysconfdir}/init.d/unbound
-}
-
-SYSTEMD_SERVICE:${PN} = "${BPN}.service"
-
-INITSCRIPT_NAME = "unbound"
-INITSCRIPT_PARAMS = "defaults"
diff --git a/meta-openembedded/meta-networking/recipes-support/unbound/unbound_1.16.2.bb b/meta-openembedded/meta-networking/recipes-support/unbound/unbound_1.16.2.bb
new file mode 100644
index 0000000..63036f6
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/unbound/unbound_1.16.2.bb
@@ -0,0 +1,43 @@
+SUMMARY = "Unbound is a validating, recursive, and caching DNS resolver"
+DESCRIPTION = "Unbound's design is a set of modular components which incorporate \
+	features including enhanced security (DNSSEC) validation, Internet Protocol \
+	Version 6 (IPv6), and a client resolver library API as an integral part of the \
+	architecture"
+
+HOMEPAGE = "https://www.unbound.net/"
+SECTION = "net"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=5308494bc0590c0cb036afd781d78f06"
+
+SRC_URI = "git://github.com/NLnetLabs/unbound.git;protocol=http;branch=master;protocol=https \
+	file://0001-contrib-add-yocto-compatible-init-script.patch \
+"
+SRCREV = "cbed768b8ff9bfcf11089a5f1699b7e5707f1ea5"
+
+inherit autotools pkgconfig systemd update-rc.d
+
+DEPENDS = "openssl libevent libtool-native bison-native expat"
+RDEPENDS:${PN} = "bash openssl-bin daemonize"
+
+S = "${WORKDIR}/git"
+
+EXTRA_OECONF = "--with-libexpat=${STAGING_EXECPREFIXDIR} \
+		--with-ssl=${STAGING_EXECPREFIXDIR} \
+                --enable-largefile"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)}"
+PACKAGECONFIG[dnscrypt] = "--enable-dnscrypt, --disable-dnscrypt, libsodium"
+PACKAGECONFIG[systemd] = "--enable-systemd,--disable-systemd,systemd"
+
+do_install:append() {
+	install -d ${D}${systemd_unitdir}/system
+	install -m 0644 ${B}/contrib/unbound.service ${D}${systemd_unitdir}/system
+
+	install -d ${D}${sysconfdir}/init.d
+	install -m 0755 ${S}/contrib/unbound.init ${D}${sysconfdir}/init.d/unbound
+}
+
+SYSTEMD_SERVICE:${PN} = "${BPN}.service"
+
+INITSCRIPT_NAME = "unbound"
+INITSCRIPT_PARAMS = "defaults"
diff --git a/meta-openembedded/meta-oe/classes/image_types_sparse.bbclass b/meta-openembedded/meta-oe/classes/image_types_sparse.bbclass
index 4263593..69e24cb 100644
--- a/meta-openembedded/meta-oe/classes/image_types_sparse.bbclass
+++ b/meta-openembedded/meta-oe/classes/image_types_sparse.bbclass
@@ -1,8 +1,16 @@
 inherit image_types
 
+# This sets the granularity of the sparse image conversion. Chunk sizes will be
+# specified in units of this value. Setting this value smaller than the
+# underlying image's block size will not result in any further space saving.
+# However, there is no loss in correctness if this value is larger or smaller
+# than optimal. This value should be a power of two.
+SPARSE_BLOCK_SIZE ??= "4096"
+
 CONVERSIONTYPES += "sparse"
-CONVERSION_CMD:sparse = " \
-    img2simg "${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}" \
-             "${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sparse" \
-"
+CONVERSION_CMD:sparse() {
+    INPUT="${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
+    truncate --no-create --size=%${SPARSE_BLOCK_SIZE} "$INPUT"
+    img2simg -s "$INPUT" "$INPUT.sparse" ${SPARSE_BLOCK_SIZE}
+}
 CONVERSION_DEPENDS_sparse = "android-tools-native"
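(Editorial sketch of how the new knob could be used, assuming an ext4 base image type and that the class is pulled in via IMAGE_CLASSES; names and values are illustrative only.)

    # e.g. in local.conf or a machine .conf
    IMAGE_CLASSES += "image_types_sparse"
    IMAGE_FSTYPES += "ext4.sparse"
    # optional override of the default 4096-byte granularity; keep it a power of two
    SPARSE_BLOCK_SIZE = "8192"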
diff --git a/meta-openembedded/meta-oe/conf/layer.conf b/meta-openembedded/meta-oe/conf/layer.conf
index 34aa295..6a0f05d 100644
--- a/meta-openembedded/meta-oe/conf/layer.conf
+++ b/meta-openembedded/meta-oe/conf/layer.conf
@@ -48,6 +48,8 @@
 
 PREFERRED_RPROVIDER_libdevmapper = "lvm2"
 PREFERRED_PROVIDER_android-tools-conf ?= "android-tools-conf"
+# Configures whether coreutils or uutils-coreutils is used.
+PREFERRED_PROVIDER_coreutils = "coreutils"
 
 SIGGEN_EXCLUDERECIPES_ABISAFE += " \
   fbset-modes \
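(Editorial sketch: assuming the uutils-coreutils recipe added elsewhere in this update PROVIDES/RPROVIDES "coreutils", a distro or local.conf could opt into the Rust implementation instead of the GNU default set above.)

    # e.g. in local.conf
    PREFERRED_PROVIDER_coreutils = "uutils-coreutils"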
diff --git a/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-connectivity/lirc/lirc/0001-mplay-Fix-build-with-musl.patch b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-connectivity/lirc/lirc/0001-mplay-Fix-build-with-musl.patch
new file mode 100644
index 0000000..48cf7a3
--- /dev/null
+++ b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-connectivity/lirc/lirc/0001-mplay-Fix-build-with-musl.patch
@@ -0,0 +1,44 @@
+From e9e9027d7a324e1ce5e0cb06d4eb51847262a09d Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 28 Aug 2022 12:26:52 -0700
+Subject: [PATCH] mplay: Fix build with musl
+
+pthread_t is an opaque type, therefore typecast it to avoid warnings on
+musl
+
+Fixes
+mplay.c:200:12: error: incompatible integer to pointer conversion initializing 'pthread_t' (aka 'struct __pthread *') with an expression of type 'int' [-Wint-conversion]
+|         .tid                            = -1
+|                                           ^~
+
+Upstream-Status: Submitted [https://sourceforge.net/p/lirc/git/merge-requests/47/]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ plugins/mplay.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/plugins/mplay.c b/plugins/mplay.c
+index d6d9619..5b9eb4b 100644
+--- a/plugins/mplay.c
++++ b/plugins/mplay.c
+@@ -197,7 +197,7 @@ static struct {
+ 	.latest_button			= MPLAY_CODE_ERROR,
+ 	.fd				= -1,
+ 	.pipefd				= { -1,	     -1	  },
+-	.tid				= -1
++	.tid				= (pthread_t)-1
+ };
+ 
+ /**
+@@ -788,7 +788,7 @@ int mplayfamily_deinit(void)
+ 			return 0;
+ 		}
+ 		pthread_join(mplayfamily_local_data.tid, NULL);
+-		mplayfamily_local_data.tid = -1;
++		mplayfamily_local_data.tid = (pthread_t)-1;
+ 	}
+ 	if (mplayfamily_local_data.pipefd[0] != -1) {
+ 		close(mplayfamily_local_data.pipefd[0]);
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-connectivity/lirc/lirc_0.10.1.bb b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-connectivity/lirc/lirc_0.10.1.bb
index fe96859..467e10b 100644
--- a/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-connectivity/lirc/lirc_0.10.1.bb
+++ b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-connectivity/lirc/lirc_0.10.1.bb
@@ -13,6 +13,7 @@
 SRC_URI = "http://prdownloads.sourceforge.net/lirc/lirc-${PV}.tar.bz2 \
     file://0001-Fix-build-on-32bit-arches-with-64bit-time_t.patch \
     file://fix_build_errors.patch \
+    file://0001-mplay-Fix-build-with-musl.patch \
     file://lircd.service \
     file://lircd.init \
     file://lircexec.init \
diff --git a/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-dbs/mongodb/mongodb/0001-The-std-lib-unary-binary_function-base-classes-are-d.patch b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-dbs/mongodb/mongodb/0001-The-std-lib-unary-binary_function-base-classes-are-d.patch
new file mode 100644
index 0000000..4594bec
--- /dev/null
+++ b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-dbs/mongodb/mongodb/0001-The-std-lib-unary-binary_function-base-classes-are-d.patch
@@ -0,0 +1,40 @@
+From f9b55f5a1fab85bf73c95e6372779d6f50f75e84 Mon Sep 17 00:00:00 2001
+From: jzmaddock <john@johnmaddock.co.uk>
+Date: Mon, 11 Jul 2022 18:26:07 +0100
+Subject: [PATCH] The std lib unary/binary_function base classes are
+ deprecated/removed from libcpp15. Fixes
+ https://github.com/boostorg/container_hash/issues/24.
+
+Upstream-Status: Backport [https://github.com/boostorg/config/pull/440/commits/f0af4a9184457939b89110795ae2d293582c5f66]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/third_party/boost-1.70.0/boost/config/stdlib/libcpp.hpp | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+--- a/src/third_party/boost-1.70.0/boost/config/stdlib/libcpp.hpp
++++ b/src/third_party/boost-1.70.0/boost/config/stdlib/libcpp.hpp
+@@ -140,4 +140,13 @@
+ #  define BOOST_NO_CXX14_HDR_SHARED_MUTEX
+ #endif
+ 
++#if _LIBCPP_VERSION >= 15000
++//
++// Unary function is now deprecated in C++11 and later:
++//
++#if __cplusplus >= 201103L
++#define BOOST_NO_CXX98_FUNCTION_BASE
++#endif
++#endif
++
+ //  --- end ---
+--- a/src/third_party/boost-1.70.0/boost/container_hash/hash.hpp
++++ b/src/third_party/boost-1.70.0/boost/container_hash/hash.hpp
+@@ -118,7 +118,7 @@ namespace boost
+ {
+     namespace hash_detail
+     {
+-#if defined(_HAS_AUTO_PTR_ETC) && !_HAS_AUTO_PTR_ETC
++#if defined(BOOST_NO_CXX98_FUNCTION_BASE)
+         template <typename T>
+         struct hash_base
+         {
diff --git a/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-dbs/mongodb/mongodb_git.bb b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-dbs/mongodb/mongodb_git.bb
index ff4a16e..d040ab1 100644
--- a/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-dbs/mongodb/mongodb_git.bb
+++ b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-dbs/mongodb/mongodb_git.bb
@@ -32,6 +32,7 @@
            file://PTHREAD_STACK_MIN.patch \
            file://0001-add-explict-static_cast-size_t-to-maxMemoryUsageByte.patch \
            file://0001-server-Adjust-the-cache-alignment-assumptions.patch \
+           file://0001-The-std-lib-unary-binary_function-base-classes-are-d.patch \
            "
 SRC_URI:append:libc-musl ="\
            file://0001-Mark-one-of-strerror_r-implementation-glibc-specific.patch \
diff --git a/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-extended/mozjs/mozjs-78_78.15.0.bb b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-extended/mozjs/mozjs-78_78.15.0.bb
index c239503..7d4e4a8 100644
--- a/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-extended/mozjs/mozjs-78_78.15.0.bb
+++ b/meta-openembedded/meta-oe/dynamic-layers/meta-python/recipes-extended/mozjs/mozjs-78_78.15.0.bb
@@ -40,7 +40,7 @@
 JIT:mipsarch = "--disable-jit"
 
 EXTRA_OECONF = " \
-    --target=${TARGET_SYS} \
+    --target=${RUST_TARGET_SYS} \
     --host=${BUILD_SYS} \
     --prefix=${prefix} \
     --libdir=${libdir} \
diff --git a/meta-openembedded/meta-oe/recipes-benchmark/dhrystone/dhrystone_2.1.bb b/meta-openembedded/meta-oe/recipes-benchmark/dhrystone/dhrystone_2.1.bb
index 17e8c70..d809a56 100644
--- a/meta-openembedded/meta-oe/recipes-benchmark/dhrystone/dhrystone_2.1.bb
+++ b/meta-openembedded/meta-oe/recipes-benchmark/dhrystone/dhrystone_2.1.bb
@@ -22,5 +22,6 @@
 
 # Prevent procedure merging as required by dhrystone.c:
 CFLAGS += "-fno-lto"
+CFLAGS:append:toolchain-clang = " -Wno-error=implicit-function-declaration -Wno-error=deprecated-non-prototype -Wno-error=implicit-int"
 
 LDFLAGS += "-fno-lto"
diff --git a/meta-openembedded/meta-oe/recipes-benchmark/fio/fio/0001-io_uring-Replace-pthread_self-with-s-tid.patch b/meta-openembedded/meta-oe/recipes-benchmark/fio/fio/0001-io_uring-Replace-pthread_self-with-s-tid.patch
new file mode 100644
index 0000000..766b1fe
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-benchmark/fio/fio/0001-io_uring-Replace-pthread_self-with-s-tid.patch
@@ -0,0 +1,45 @@
+From 269164337e0168b93661bb95c6a4e462ae6d8b61 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 24 Aug 2022 18:08:53 -0700
+Subject: [PATCH] io_uring: Replace pthread_self with s->tid
+
+__init_rand64 takes 64bit value and srand48 takes unsigned 32bit value,
+pthread_t is opaque type and some libcs ( e.g. musl ) do not define them
+in plain old data types and ends up with errors
+
+| t/io_uring.c:809:32: error: incompatible pointer to integer conversion passing 'pthread_t' (aka 'struct __pthread *') to parameter of type 'uint64_t' (aka 'unsigned long') [-Wint-conver
+sion]
+|         __init_rand64(&s->rand_state, pthread_self());
+|                                       ^~~~~~~~~~~~~~
+
+Upstream-Status: Submitted [https://github.com/axboe/fio/pull/1455]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ t/io_uring.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+diff --git a/t/io_uring.c b/t/io_uring.c
+index 35bf1956..f34a3554 100644
+--- a/t/io_uring.c
++++ b/t/io_uring.c
+@@ -799,15 +799,14 @@ static int submitter_init(struct submitter *s)
+ 	int i, nr_batch, err;
+ 	static int init_printed;
+ 	char buf[80];
+-
+ 	s->tid = gettid();
+ 	printf("submitter=%d, tid=%d, file=%s, node=%d\n", s->index, s->tid,
+ 							s->filename, s->numa_node);
+ 
+ 	set_affinity(s);
+ 
+-	__init_rand64(&s->rand_state, pthread_self());
+-	srand48(pthread_self());
++	__init_rand64(&s->rand_state, s->tid);
++	srand48(s->tid);
+ 
+ 	for (i = 0; i < MAX_FDS; i++)
+ 		s->files[i].fileno = i;
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-oe/recipes-benchmark/fio/fio_3.30.bb b/meta-openembedded/meta-oe/recipes-benchmark/fio/fio_3.30.bb
deleted file mode 100644
index f97a1b8..0000000
--- a/meta-openembedded/meta-oe/recipes-benchmark/fio/fio_3.30.bb
+++ /dev/null
@@ -1,45 +0,0 @@
-SUMMARY = "Filesystem and hardware benchmark and stress tool"
-DESCRIPTION = "fio is an I/O tool meant to be used both for benchmark and \
-stress/hardware verification. It has support for a number of I/O engines, \
-I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, \
-and much more. It can work on block devices as well as files. fio accepts \
-job descriptions in a simple-to-understand text format. Several example job \
-files are included. fio displays all sorts of I/O performance information."
-HOMEPAGE = "http://freecode.com/projects/fio"
-SECTION = "console/tests"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-
-DEPENDS = "libaio zlib coreutils-native"
-DEPENDS += "${@bb.utils.contains('MACHINE_FEATURES', 'pmem', 'pmdk', '', d)}"
-RDEPENDS:${PN} = "python3-core bash"
-
-PACKAGECONFIG_NUMA = "numa"
-# ARM does not currently support NUMA
-PACKAGECONFIG_NUMA:arm = ""
-PACKAGECONFIG_NUMA:armeb = ""
-
-PACKAGECONFIG ??= "${PACKAGECONFIG_NUMA}"
-PACKAGECONFIG[numa] = ",--disable-numa,numactl"
-
-SRCREV = "a3e48f483db27d20e02cbd81e3a8f18c6c5c50f5"
-SRC_URI = "git://git.kernel.dk/fio.git;branch=master \
-"
-
-S = "${WORKDIR}/git"
-
-# avoids build breaks when using no-static-libs.inc
-DISABLE_STATIC = ""
-
-EXTRA_OEMAKE = "CC='${CC}' LDFLAGS='${LDFLAGS}'"
-EXTRA_OECONF = "${@bb.utils.contains('MACHINE_FEATURES', 'x86', '--disable-optimizations', '', d)}"
-
-do_configure() {
-    ./configure ${EXTRA_OECONF}
-}
-
-do_install() {
-    oe_runmake install DESTDIR=${D} prefix=${prefix} mandir=${mandir}
-    install -d ${D}/${docdir}/${PN}
-    cp -R --no-dereference --preserve=mode,links -v ${S}/examples ${D}/${docdir}/${PN}/
-}
diff --git a/meta-openembedded/meta-oe/recipes-benchmark/fio/fio_3.31.bb b/meta-openembedded/meta-oe/recipes-benchmark/fio/fio_3.31.bb
new file mode 100644
index 0000000..f8d6301
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-benchmark/fio/fio_3.31.bb
@@ -0,0 +1,46 @@
+SUMMARY = "Filesystem and hardware benchmark and stress tool"
+DESCRIPTION = "fio is an I/O tool meant to be used both for benchmark and \
+stress/hardware verification. It has support for a number of I/O engines, \
+I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, \
+and much more. It can work on block devices as well as files. fio accepts \
+job descriptions in a simple-to-understand text format. Several example job \
+files are included. fio displays all sorts of I/O performance information."
+HOMEPAGE = "http://freecode.com/projects/fio"
+SECTION = "console/tests"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+DEPENDS = "libaio zlib coreutils-native"
+DEPENDS += "${@bb.utils.contains('MACHINE_FEATURES', 'pmem', 'pmdk', '', d)}"
+RDEPENDS:${PN} = "python3-core bash"
+
+PACKAGECONFIG_NUMA = "numa"
+# ARM does not currently support NUMA
+PACKAGECONFIG_NUMA:arm = ""
+PACKAGECONFIG_NUMA:armeb = ""
+
+PACKAGECONFIG ??= "${PACKAGECONFIG_NUMA}"
+PACKAGECONFIG[numa] = ",--disable-numa,numactl"
+
+SRCREV = "6cafe8445fd1e04e5f7d67bbc73029a538d1b253"
+SRC_URI = "git://git.kernel.dk/fio.git;branch=master \
+           file://0001-io_uring-Replace-pthread_self-with-s-tid.patch \
+           "
+
+S = "${WORKDIR}/git"
+
+# avoids build breaks when using no-static-libs.inc
+DISABLE_STATIC = ""
+
+EXTRA_OEMAKE = "CC='${CC}' LDFLAGS='${LDFLAGS}'"
+EXTRA_OECONF = "${@bb.utils.contains('MACHINE_FEATURES', 'x86', '--disable-optimizations', '', d)}"
+
+do_configure() {
+    ./configure ${EXTRA_OECONF}
+}
+
+do_install() {
+    oe_runmake install DESTDIR=${D} prefix=${prefix} mandir=${mandir}
+    install -d ${D}/${docdir}/${PN}
+    cp -R --no-dereference --preserve=mode,links -v ${S}/examples ${D}/${docdir}/${PN}/
+}
diff --git a/meta-openembedded/meta-oe/recipes-benchmark/sysbench/sysbench_0.4.12.bb b/meta-openembedded/meta-oe/recipes-benchmark/sysbench/sysbench_0.4.12.bb
index 22c634a..4ac78fb 100644
--- a/meta-openembedded/meta-oe/recipes-benchmark/sysbench/sysbench_0.4.12.bb
+++ b/meta-openembedded/meta-oe/recipes-benchmark/sysbench/sysbench_0.4.12.bb
@@ -15,8 +15,8 @@
 SRC_URI[md5sum] = "3a6d54fdd3fe002328e4458206392b9d"
 SRC_URI[sha256sum] = "83fa7464193e012c91254e595a89894d8e35b4a38324b52a5974777e3823ea9e"
 
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'largefile', d)}"
-PACKAGECONFIG[largefile] = "--enable-largefile,--disable-largefile,,"
+EXTRA_OECONF += "--enable-largefile"
+PACKAGECONFIG ??= ""
 PACKAGECONFIG[aio] = "--enable-aio,--disable-aio,libaio,"
 PACKAGECONFIG[mysql] = "--with-mysql \
                         --with-mysql-includes=${STAGING_INCDIR}/mysql \
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/gensio/files/0001-tools-gensiot-Fix-build-with-musl.patch b/meta-openembedded/meta-oe/recipes-connectivity/gensio/files/0001-tools-gensiot-Fix-build-with-musl.patch
new file mode 100644
index 0000000..93831c3
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-connectivity/gensio/files/0001-tools-gensiot-Fix-build-with-musl.patch
@@ -0,0 +1,29 @@
+From 823b6754a4d7655480b6e8576a9d0037f842d653 Mon Sep 17 00:00:00 2001
+From: Jan Luebbe <jlu@pengutronix.de>
+Date: Thu, 25 Aug 2022 12:19:16 +0200
+Subject: [PATCH] tools:gensiot: Fix build with musl
+
+According to POSIX getpid() is available in unistd.h, not sys/unistd.h.
+
+Upstream-Status: Submitted [https://github.com/cminyard/gensio/pull/47]
+Signed-off-by: Jan Luebbe <jlu@pengutronix.de>
+---
+ tools/gensiotool.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tools/gensiotool.c b/tools/gensiotool.c
+index cac531bb4b74..ab0bb9583f9f 100644
+--- a/tools/gensiotool.c
++++ b/tools/gensiotool.c
+@@ -44,7 +44,7 @@
+ #include <signal.h>
+ #include <errno.h>
+ #include <sys/types.h>
+-#include <sys/unistd.h>
++#include <unistd.h>
+ #include <syslog.h>
+ #endif
+ 
+-- 
+2.30.2
+
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/gensio/gensio_2.3.1.bb b/meta-openembedded/meta-oe/recipes-connectivity/gensio/gensio_2.3.1.bb
deleted file mode 100644
index a6e0075..0000000
--- a/meta-openembedded/meta-oe/recipes-connectivity/gensio/gensio_2.3.1.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-SUMMARY = "A library to abstract stream I/O like serial port, TCP, telnet, etc"
-HOMEPAGE = "https://github.com/cminyard/gensio"
-LICENSE = "GPL-2.0-only & LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING.LIB;md5=a0fd36908af843bcee10cb6dfc47fa67 \
-                    file://COPYING;md5=bae3019b4c6dc4138c217864bd04331f \
-                    "
-
-SRCREV = "c500d8705c517f96e591c060105a789f053d2b7a"
-
-SRC_URI = "git://github.com/cminyard/gensio;protocol=https;branch=master"
-
-S = "${WORKDIR}/git"
-
-inherit autotools
-
-PACKAGECONFIG ??= "openssl tcp-wrappers"
-
-PACKAGECONFIG[openssl] = "--with-openssl=${STAGING_DIR_HOST}${prefix},--without-openssl, openssl"
-PACKAGECONFIG[tcp-wrappers] = "--with-tcp-wrappers,--without-tcp-wrappers, tcp-wrappers"
-PACKAGECONFIG[swig] = "--with-swig,--without-swig, swig"
-
-EXTRA_OECONF = "--without-python"
-
-RDEPENDS:${PN} += "bash"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/gensio/gensio_2.5.2.bb b/meta-openembedded/meta-oe/recipes-connectivity/gensio/gensio_2.5.2.bb
new file mode 100644
index 0000000..9de2120
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-connectivity/gensio/gensio_2.5.2.bb
@@ -0,0 +1,26 @@
+SUMMARY = "A library to abstract stream I/O like serial port, TCP, telnet, etc"
+HOMEPAGE = "https://github.com/cminyard/gensio"
+LICENSE = "GPL-2.0-only & LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING.LIB;md5=a0fd36908af843bcee10cb6dfc47fa67 \
+                    file://COPYING;md5=bae3019b4c6dc4138c217864bd04331f \
+                    "
+
+SRCREV = "b6cd354afe6d5f63bc859c94fd3a455a3cdf0449"
+
+SRC_URI = "git://github.com/cminyard/gensio;protocol=https;branch=master \
+           file://0001-tools-gensiot-Fix-build-with-musl.patch \
+"
+
+S = "${WORKDIR}/git"
+
+inherit autotools
+
+PACKAGECONFIG ??= "openssl tcp-wrappers"
+
+PACKAGECONFIG[openssl] = "--with-openssl=${STAGING_DIR_HOST}${prefix},--without-openssl, openssl"
+PACKAGECONFIG[tcp-wrappers] = "--with-tcp-wrappers,--without-tcp-wrappers, tcp-wrappers"
+PACKAGECONFIG[swig] = "--with-swig,--without-swig, swig"
+
+EXTRA_OECONF = "--without-python"
+
+RDEPENDS:${PN} += "bash"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/iwd/iwd_1.28.bb b/meta-openembedded/meta-oe/recipes-connectivity/iwd/iwd_1.28.bb
deleted file mode 100644
index 40f35b1..0000000
--- a/meta-openembedded/meta-oe/recipes-connectivity/iwd/iwd_1.28.bb
+++ /dev/null
@@ -1,55 +0,0 @@
-SUMMARY = "Wireless daemon for Linux"
-HOMEPAGE = "https://iwd.wiki.kernel.org/"
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=fb504b67c50331fc78734fed90fb0e09"
-
-DEPENDS = "ell"
-
-SRC_URI = "https://www.kernel.org/pub/linux/network/wireless/${BP}.tar.xz \
-           file://0001-build-Use-abs_top_srcdir-instead-of-abs_srcdir-for-e.patch \
-           "
-SRC_URI[sha256sum] = "ee538c720ad15335ece81a52a4b8a11820236df9ed9a5a72a2e64d74c652f0b0"
-
-inherit autotools manpages pkgconfig python3native systemd
-
-PACKAGECONFIG ??= " \
-    client \
-    monitor \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)} \
-"
-PACKAGECONFIG[client] = "--enable-client,--disable-client,readline"
-PACKAGECONFIG[monitor] = "--enable-monitor,--disable-monitor"
-PACKAGECONFIG[manpages] = "--enable-manual-pages,--disable-manual-pages,python3-docutils-native"
-PACKAGECONFIG[wired] = "--enable-wired,--disable-wired"
-PACKAGECONFIG[ofono] = "--enable-ofono,--disable-ofono"
-PACKAGECONFIG[systemd] = "--with-systemd-unitdir=${systemd_system_unitdir},--disable-systemd-service,systemd"
-
-EXTRA_OECONF = "--enable-external-ell"
-
-SYSTEMD_SERVICE:${PN} = " \
-    iwd.service \
-    ${@bb.utils.contains('PACKAGECONFIG', 'wired', 'ead.service', '', d)} \
-"
-
-do_configure:prepend() {
-    install -d ${S}/build-aux
-}
-
-do_install:append() {
-    # If client and monitor are disabled, bindir is empty, causing a QA error
-    rmdir --ignore-fail-on-non-empty ${D}/${bindir}
-}
-
-FILES:${PN} += " \
-    ${datadir}/dbus-1 \
-    ${nonarch_libdir}/modules-load.d \
-    ${systemd_unitdir}/network \
-"
-
-RDEPENDS:${PN} = "dbus"
-
-RRECOMMENDS:${PN} = "\
-    kernel-module-pkcs7-message \
-    kernel-module-pkcs8-key-parser \
-    kernel-module-x509-key-parser \
-"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/iwd/iwd_1.29.bb b/meta-openembedded/meta-oe/recipes-connectivity/iwd/iwd_1.29.bb
new file mode 100644
index 0000000..eada93b
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-connectivity/iwd/iwd_1.29.bb
@@ -0,0 +1,55 @@
+SUMMARY = "Wireless daemon for Linux"
+HOMEPAGE = "https://iwd.wiki.kernel.org/"
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=fb504b67c50331fc78734fed90fb0e09"
+
+DEPENDS = "ell"
+
+SRC_URI = "https://www.kernel.org/pub/linux/network/wireless/${BP}.tar.xz \
+           file://0001-build-Use-abs_top_srcdir-instead-of-abs_srcdir-for-e.patch \
+           "
+SRC_URI[sha256sum] = "71533fe3b8f6381f24832198ba11d00f04a361454770c173b3b66bc3cdf272bd"
+
+inherit autotools manpages pkgconfig python3native systemd
+
+PACKAGECONFIG ??= " \
+    client \
+    monitor \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)} \
+"
+PACKAGECONFIG[client] = "--enable-client,--disable-client,readline"
+PACKAGECONFIG[monitor] = "--enable-monitor,--disable-monitor"
+PACKAGECONFIG[manpages] = "--enable-manual-pages,--disable-manual-pages,python3-docutils-native"
+PACKAGECONFIG[wired] = "--enable-wired,--disable-wired"
+PACKAGECONFIG[ofono] = "--enable-ofono,--disable-ofono"
+PACKAGECONFIG[systemd] = "--with-systemd-unitdir=${systemd_system_unitdir},--disable-systemd-service,systemd"
+
+EXTRA_OECONF = "--enable-external-ell"
+
+SYSTEMD_SERVICE:${PN} = " \
+    iwd.service \
+    ${@bb.utils.contains('PACKAGECONFIG', 'wired', 'ead.service', '', d)} \
+"
+
+do_configure:prepend() {
+    install -d ${S}/build-aux
+}
+
+do_install:append() {
+    # If client and monitor are disabled, bindir is empty, causing a QA error
+    rmdir --ignore-fail-on-non-empty ${D}/${bindir}
+}
+
+FILES:${PN} += " \
+    ${datadir}/dbus-1 \
+    ${nonarch_libdir}/modules-load.d \
+    ${systemd_unitdir}/network \
+"
+
+RDEPENDS:${PN} = "dbus"
+
+RRECOMMENDS:${PN} = "\
+    kernel-module-pkcs7-message \
+    kernel-module-pkcs8-key-parser \
+    kernel-module-x509-key-parser \
+"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/libimobiledevice-glue/libimobiledevice-glue_git.bb b/meta-openembedded/meta-oe/recipes-connectivity/libimobiledevice-glue/libimobiledevice-glue_git.bb
index 6d872b9..7e9299c 100644
--- a/meta-openembedded/meta-oe/recipes-connectivity/libimobiledevice-glue/libimobiledevice-glue_git.bb
+++ b/meta-openembedded/meta-oe/recipes-connectivity/libimobiledevice-glue/libimobiledevice-glue_git.bb
@@ -10,7 +10,7 @@
 
 PV = "1.0.0+git${SRCPV}"
 
-SRCREV = "bc6c44b92091c9587a9bed0ed3f2c3248bfd13b3"
+SRCREV = "d2ff7969dcd0a12e4f18f63dab03e6cd03054fcb"
 SRC_URI = "git://github.com/libimobiledevice/libimobiledevice-glue;protocol=https;branch=master"
 
 S = "${WORKDIR}/git"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/libimobiledevice/libimobiledevice_git.bb b/meta-openembedded/meta-oe/recipes-connectivity/libimobiledevice/libimobiledevice_git.bb
new file mode 100644
index 0000000..3dfd4e9
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-connectivity/libimobiledevice/libimobiledevice_git.bb
@@ -0,0 +1,19 @@
+SUMMARY = "A protocol library to access an iPhone or iPod Touch in Linux"
+LICENSE = "GPL-2.0-only & LGPL-2.1-only"
+LIC_FILES_CHKSUM = "\
+    file://COPYING;md5=ebb5c50ab7cab4baeffba14977030c07 \
+    file://COPYING.LESSER;md5=6ab17b41640564434dda85c06b7124f7 \
+"
+HOMEPAGE = "http://www.libimobiledevice.org/"
+
+DEPENDS = "libplist usbmuxd libusbmuxd libtasn1 gnutls libgcrypt libimobiledevice-glue openssl"
+
+PV = "1.3.0+git${SRCPV}"
+
+SRCREV = "2eec1b9a172354c8521123a767d998b17bd2ac18"
+SRC_URI = "git://github.com/libimobiledevice/libimobiledevice;protocol=https;branch=master"
+
+S = "${WORKDIR}/git"
+inherit autotools pkgconfig
+
+EXTRA_OECONF = " --without-cython "
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/libirecovery/libirecovery_git.bb b/meta-openembedded/meta-oe/recipes-connectivity/libirecovery/libirecovery_git.bb
index c36837a..e3da181 100644
--- a/meta-openembedded/meta-oe/recipes-connectivity/libirecovery/libirecovery_git.bb
+++ b/meta-openembedded/meta-oe/recipes-connectivity/libirecovery/libirecovery_git.bb
@@ -10,7 +10,7 @@
 
 PV = "1.0.1+git${SRCPV}"
 
-SRCREV = "e19094594552b7bed60418ffe6f46f455e6bb78f"
+SRCREV = "ab5b4d8d4c0e90c05d80f80c7e99a6516de9b5c6"
 SRC_URI = "git://github.com/libimobiledevice/libirecovery;protocol=https;branch=master"
 
 S = "${WORKDIR}/git"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/libmtp/libmtp_1.1.20.bb b/meta-openembedded/meta-oe/recipes-connectivity/libmtp/libmtp_1.1.20.bb
index f7a7507..41fc46c 100644
--- a/meta-openembedded/meta-oe/recipes-connectivity/libmtp/libmtp_1.1.20.bb
+++ b/meta-openembedded/meta-oe/recipes-connectivity/libmtp/libmtp_1.1.20.bb
@@ -25,12 +25,12 @@
 
 EXTRA_OECONF += " \
     --disable-rpath \
+    --enable-largefile \
     --with-udev=${nonarch_base_libdir}/udev \
 "
 
-PACKAGECONFIG ?= "${@bb.utils.filter('DISTRO_FEATURES', 'largefile', d)}"
+PACKAGECONFIG ?= ""
 PACKAGECONFIG[doxygen] = "--enable-doxygen,--disable-doxygen,doxygen-native"
-PACKAGECONFIG[largefile] = "--enable-largefile,--disable-largefile"
 PACKAGECONFIG[mtpz] = "--enable-mtpz,--disable-mtpz,libgcrypt"
 
 PACKAGES =+ "${BPN}-common ${BPN}-runtime"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/usbmuxd/usbmuxd_git.bb b/meta-openembedded/meta-oe/recipes-connectivity/usbmuxd/usbmuxd_git.bb
new file mode 100644
index 0000000..e9afaf5
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-connectivity/usbmuxd/usbmuxd_git.bb
@@ -0,0 +1,23 @@
+DESCRIPTION = "This daemon is in charge of multiplexing connections over USB to an iPhone or iPod touch."
+HOMEPAGE = "https://github.com/libimobiledevice/usbmuxd"
+LICENSE = "GPL-3.0-only & GPL-2.0-only & LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING.GPLv2;md5=ebb5c50ab7cab4baeffba14977030c07 \
+                    file://COPYING.GPLv3;md5=d32239bcb673463ab874e80d47fae504"
+
+DEPENDS = "udev libusb1 libplist libimobiledevice-glue"
+
+inherit autotools pkgconfig gitpkgv systemd
+
+PKGV = "${GITPKGVTAG}"
+PV = "1.1.2+git${SRCPV}"
+
+SRCREV = "f50e52f3393a9149ac65fdda8f0d425109efc7fe"
+SRC_URI = "git://github.com/libimobiledevice/usbmuxd;protocol=https;branch=master"
+
+S = "${WORKDIR}/git"
+
+EXTRA_OECONF += "--without-preflight"
+
+FILES:${PN} += "${base_libdir}/udev/rules.d/"
+
+SYSTEMD_SERVICE:${PN} = "usbmuxd.service"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/zabbix/zabbix_6.0.5.bb b/meta-openembedded/meta-oe/recipes-connectivity/zabbix/zabbix_6.0.5.bb
deleted file mode 100644
index 2deaea2..0000000
--- a/meta-openembedded/meta-oe/recipes-connectivity/zabbix/zabbix_6.0.5.bb
+++ /dev/null
@@ -1,78 +0,0 @@
-SUMMARY = "Open-source monitoring solution for your IT infrastructure"
-DESCRIPTION = "\
-ZABBIX is software that monitors numerous parameters of a network and the \
-health and integrity of servers. ZABBIX uses a flexible notification \
-mechanism that allows users to configure e-mail based alerts for virtually \
-any event. This allows a fast reaction to server problems. ZABBIX offers \
-excellent reporting and data visualisation features based on the stored \
-data. This makes ZABBIX ideal for capacity planning. \
-\
-ZABBIX supports both polling and trapping. All ZABBIX reports and \
-statistics, as well as configuration parameters are accessed through a \
-web-based front end. A web-based front end ensures that the status of \
-your network and the health of your servers can be assessed from any \
-location. Properly configured, ZABBIX can play an important role in \
-monitoring IT infrastructure. This is equally true for small \
-organisations with a few servers and for large companies with a \
-multitude of servers."
-HOMEPAGE = "http://www.zabbix.com/"
-SECTION = "Applications/Internet"
-LICENSE = "GPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=300e938ad303147fede2294ed78fe02e"
-DEPENDS  = "libevent libpcre openldap virtual/libiconv zlib"
-
-PACKAGE_ARCH = "${MACHINE_ARCH}"
-
-SRC_URI = "https://cdn.zabbix.com/zabbix/sources/stable/6.0/${BPN}-${PV}.tar.gz \
-    file://0001-Fix-configure.ac.patch \
-    file://zabbix-agent.service \
-"
-
-SRC_URI[sha256sum] = "3eeb7063efc5dad56f84dfdcf9aeb781044be712e11e83f66d043da55f33bdc2"
-
-inherit autotools-brokensep linux-kernel-base pkgconfig systemd useradd
-
-SYSTEMD_PACKAGES = "${PN}"
-SYSTEMD_SERVICE:${PN} = "zabbix-agent.service"
-SYSTEMD_AUTO_ENABLE = "enable"
-
-USERADD_PACKAGES = "${PN}"
-GROUPADD_PARAM:${PN} = "-r zabbix"
-USERADD_PARAM:${PN} = "-r -g zabbix -d /var/lib/zabbix \
-    -s /sbin/nologin -c \"Zabbix Monitoring System\" zabbix \
-"
-
-KERNEL_VERSION = "${@get_kernelversion_headers('${STAGING_KERNEL_DIR}')}"
-
-EXTRA_OECONF = " \
-    --enable-dependency-tracking \
-    --enable-agent \
-    --enable-ipv6 \
-    --with-net-snmp \
-    --with-ldap=${STAGING_EXECPREFIXDIR} \
-    --with-unixodbc \
-    --with-ssh2 \
-    --with-sqlite3 \
-    --with-zlib \
-    --with-libpthread \
-    --with-libevent \
-    --with-libpcre=${STAGING_EXECPREFIXDIR} \
-    --with-iconv=${STAGING_EXECPREFIXDIR} \
-"
-CFLAGS:append = " -lldap -llber -pthread"
-
-do_configure:prepend() {
-    export KERNEL_VERSION="${KERNEL_VERSION}"
-}
-
-do_install:append() {
-    if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
-        install -d ${D}${systemd_unitdir}/system
-        install -m 0644 ${WORKDIR}/zabbix-agent.service ${D}${systemd_unitdir}/system/
-        sed -i -e 's#@SBINDIR@#${sbindir}#g' ${D}${systemd_unitdir}/system/zabbix-agent.service
-    fi
-}
-
-FILES:${PN} += "${libdir}"
-
-RDEPENDS:${PN} = "logrotate"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/zabbix/zabbix_6.2.1.bb b/meta-openembedded/meta-oe/recipes-connectivity/zabbix/zabbix_6.2.1.bb
new file mode 100644
index 0000000..9949bd8
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-connectivity/zabbix/zabbix_6.2.1.bb
@@ -0,0 +1,78 @@
+SUMMARY = "Open-source monitoring solution for your IT infrastructure"
+DESCRIPTION = "\
+ZABBIX is software that monitors numerous parameters of a network and the \
+health and integrity of servers. ZABBIX uses a flexible notification \
+mechanism that allows users to configure e-mail based alerts for virtually \
+any event. This allows a fast reaction to server problems. ZABBIX offers \
+excellent reporting and data visualisation features based on the stored \
+data. This makes ZABBIX ideal for capacity planning. \
+\
+ZABBIX supports both polling and trapping. All ZABBIX reports and \
+statistics, as well as configuration parameters are accessed through a \
+web-based front end. A web-based front end ensures that the status of \
+your network and the health of your servers can be assessed from any \
+location. Properly configured, ZABBIX can play an important role in \
+monitoring IT infrastructure. This is equally true for small \
+organisations with a few servers and for large companies with a \
+multitude of servers."
+HOMEPAGE = "http://www.zabbix.com/"
+SECTION = "Applications/Internet"
+LICENSE = "GPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=300e938ad303147fede2294ed78fe02e"
+DEPENDS  = "libevent libpcre openldap virtual/libiconv zlib"
+
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+SRC_URI = "https://cdn.zabbix.com/zabbix/sources/stable/6.2/${BPN}-${PV}.tar.gz \
+    file://0001-Fix-configure.ac.patch \
+    file://zabbix-agent.service \
+"
+
+SRC_URI[sha256sum] = "f3d6b7cf4e67d820ce7d28cd54ac67724f7453f261f668877e6410cd21ab9ea1"
+
+inherit autotools-brokensep linux-kernel-base pkgconfig systemd useradd
+
+SYSTEMD_PACKAGES = "${PN}"
+SYSTEMD_SERVICE:${PN} = "zabbix-agent.service"
+SYSTEMD_AUTO_ENABLE = "enable"
+
+USERADD_PACKAGES = "${PN}"
+GROUPADD_PARAM:${PN} = "-r zabbix"
+USERADD_PARAM:${PN} = "-r -g zabbix -d /var/lib/zabbix \
+    -s /sbin/nologin -c \"Zabbix Monitoring System\" zabbix \
+"
+
+KERNEL_VERSION = "${@get_kernelversion_headers('${STAGING_KERNEL_DIR}')}"
+
+EXTRA_OECONF = " \
+    --enable-dependency-tracking \
+    --enable-agent \
+    --enable-ipv6 \
+    --with-net-snmp \
+    --with-ldap=${STAGING_EXECPREFIXDIR} \
+    --with-unixodbc \
+    --with-ssh2 \
+    --with-sqlite3 \
+    --with-zlib \
+    --with-libpthread \
+    --with-libevent \
+    --with-libpcre=${STAGING_EXECPREFIXDIR} \
+    --with-iconv=${STAGING_EXECPREFIXDIR} \
+"
+CFLAGS:append = " -lldap -llber -pthread"
+
+do_configure:prepend() {
+    export KERNEL_VERSION="${KERNEL_VERSION}"
+}
+
+do_install:append() {
+    if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+        install -d ${D}${systemd_unitdir}/system
+        install -m 0644 ${WORKDIR}/zabbix-agent.service ${D}${systemd_unitdir}/system/
+        sed -i -e 's#@SBINDIR@#${sbindir}#g' ${D}${systemd_unitdir}/system/zabbix-agent.service
+    fi
+}
+
+FILES:${PN} += "${libdir}"
+
+RDEPENDS:${PN} = "logrotate"
diff --git a/meta-openembedded/meta-oe/recipes-core/safec/safec/0001-strpbrk_s-Remove-unused-variable-len.patch b/meta-openembedded/meta-oe/recipes-core/safec/safec/0001-strpbrk_s-Remove-unused-variable-len.patch
new file mode 100644
index 0000000..4fd36ab
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-core/safec/safec/0001-strpbrk_s-Remove-unused-variable-len.patch
@@ -0,0 +1,42 @@
+From b1d7cc6495c541cdd99399b4d1a835997376dcbf Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 22 Aug 2022 23:42:33 -0700
+Subject: [PATCH] strpbrk_s: Remove unused variable len
+
+Fixes
+error: variable 'len' set but not used [-Werror,-Wunused-but-set-variable]
+
+Upstream-Status: Submitted [https://github.com/rurban/safeclib/pull/123]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/extstr/strpbrk_s.c | 3 ---
+ 1 file changed, 3 deletions(-)
+
+diff --git a/src/extstr/strpbrk_s.c b/src/extstr/strpbrk_s.c
+index 5bb7a0f8..2cf8a8be 100644
+--- a/src/extstr/strpbrk_s.c
++++ b/src/extstr/strpbrk_s.c
+@@ -79,7 +79,6 @@ EXPORT errno_t _strpbrk_s_chk(char *dest, rsize_t dmax, char *src, rsize_t slen,
+ #endif
+ {
+     char *ps;
+-    rsize_t len;
+ 
+     CHK_SRC_NULL("strpbrk_s", firstp)
+     *firstp = NULL;
+@@ -121,7 +120,6 @@ EXPORT errno_t _strpbrk_s_chk(char *dest, rsize_t dmax, char *src, rsize_t slen,
+     while (*dest && dmax) {
+ 
+         ps = src;
+-        len = slen;
+         while (*ps) {
+ 
+             /* check for a match with the substring */
+@@ -130,7 +128,6 @@ EXPORT errno_t _strpbrk_s_chk(char *dest, rsize_t dmax, char *src, rsize_t slen,
+                 return RCNEGATE(EOK);
+             }
+             ps++;
+-            len--;
+         }
+         dest++;
+         dmax--;
diff --git a/meta-openembedded/meta-oe/recipes-core/safec/safec_3.7.1.bb b/meta-openembedded/meta-oe/recipes-core/safec/safec_3.7.1.bb
index 5ffe7d7..9dd6f1c 100644
--- a/meta-openembedded/meta-oe/recipes-core/safec/safec_3.7.1.bb
+++ b/meta-openembedded/meta-oe/recipes-core/safec/safec_3.7.1.bb
@@ -9,7 +9,8 @@
 S = "${WORKDIR}/git"
 SRCREV = "f9add9245b97c7bda6e28cceb0ee37fb7e254fd8"
 SRC_URI = "git://github.com/rurban/safeclib.git;branch=master;protocol=https \
-"
+           file://0001-strpbrk_s-Remove-unused-variable-len.patch \
+           "
 
 COMPATIBLE_HOST = '(x86_64|i.86|powerpc|powerpc64|arm|aarch64|mips).*-linux'
 
diff --git a/meta-openembedded/meta-oe/recipes-core/sdbus-c++/sdbus-c++-libsystemd/0001-glibc-Remove-include-linux-fs.h-to-resolve-fsconfig_.patch b/meta-openembedded/meta-oe/recipes-core/sdbus-c++/sdbus-c++-libsystemd/0001-glibc-Remove-include-linux-fs.h-to-resolve-fsconfig_.patch
new file mode 100644
index 0000000..01afd37
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-core/sdbus-c++/sdbus-c++-libsystemd/0001-glibc-Remove-include-linux-fs.h-to-resolve-fsconfig_.patch
@@ -0,0 +1,74 @@
+From b0933e76c6f0594c10cf8a9a70b34e15b68066d1 Mon Sep 17 00:00:00 2001
+From: Rudi Heitbaum <rudi@heitbaum.com>
+Date: Sat, 23 Jul 2022 10:38:49 +0000
+Subject: [PATCH] glibc: Remove #include <linux/fs.h> to resolve fsconfig_command/mount_attr conflict with glibc 2.36
+
+Upstream-Status: Backport [https://github.com/systemd/systemd/pull/23992/commits/21c03ad5e9d8d0350e30dae92a5e15da318a1539]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ meson.build             | 13 ++++++++++++-
+ src/basic/fd-util.c     |  2 ++
+ src/core/namespace.c    |  2 ++
+ src/shared/mount-util.c |  2 ++
+ 4 files changed, 18 insertions(+), 1 deletion(-)
+
+--- a/meson.build
++++ b/meson.build
+@@ -474,7 +474,6 @@ decl_headers = '''
+ #include <uchar.h>
+ #include <sys/mount.h>
+ #include <sys/stat.h>
+-#include <linux/fs.h>
+ '''
+ 
+ foreach decl : ['char16_t',
+@@ -486,6 +485,17 @@ foreach decl : ['char16_t',
+         # We get -1 if the size cannot be determined
+         have = cc.sizeof(decl, prefix : decl_headers, args : '-D_GNU_SOURCE') > 0
+ 
++        if decl == 'struct mount_attr'
++                if have
++                        want_linux_fs_h = false
++                else
++                        have = cc.sizeof(decl,
++                                         prefix : decl_headers + '#include <linux/fs.h>',
++                                         args : '-D_GNU_SOURCE') > 0
++                        want_linux_fs_h = have
++                endif
++        endif
++
+         if decl == 'struct statx'
+                 if have
+                         want_linux_stat_h = false
+@@ -501,6 +511,7 @@ foreach decl : ['char16_t',
+ endforeach
+ 
+ conf.set10('WANT_LINUX_STAT_H', want_linux_stat_h)
++conf.set10('WANT_LINUX_FS_H', want_linux_fs_h)
+ 
+ foreach ident : ['secure_getenv', '__secure_getenv']
+         conf.set10('HAVE_' + ident.to_upper(), cc.has_function(ident))
+--- a/src/core/namespace.c
++++ b/src/core/namespace.c
+@@ -6,7 +6,9 @@
+ #include <stdio.h>
+ #include <sys/mount.h>
+ #include <unistd.h>
++#if WANT_LINUX_FS_H
+ #include <linux/fs.h>
++#endif
+ 
+ #include "alloc-util.h"
+ #include "base-filesystem.h"
+--- a/src/shared/mount-util.c
++++ b/src/shared/mount-util.c
+@@ -7,7 +7,9 @@
+ #include <sys/statvfs.h>
+ #include <unistd.h>
+ #include <linux/loop.h>
++#if WANT_LINUX_FS_H
+ #include <linux/fs.h>
++#endif
+ 
+ #include "alloc-util.h"
+ #include "chase-symlinks.h"
diff --git a/meta-openembedded/meta-oe/recipes-core/sdbus-c++/sdbus-c++-libsystemd_250.3.bb b/meta-openembedded/meta-oe/recipes-core/sdbus-c++/sdbus-c++-libsystemd_250.3.bb
index d5c799a..853cc20 100644
--- a/meta-openembedded/meta-oe/recipes-core/sdbus-c++/sdbus-c++-libsystemd_250.3.bb
+++ b/meta-openembedded/meta-oe/recipes-core/sdbus-c++/sdbus-c++-libsystemd_250.3.bb
@@ -14,6 +14,7 @@
 SRCBRANCH = "v250-stable"
 SRC_URI = "git://github.com/systemd/systemd-stable.git;protocol=https;branch=${SRCBRANCH} \
            file://static-libsystemd-pkgconfig.patch \
+           file://0001-glibc-Remove-include-linux-fs.h-to-resolve-fsconfig_.patch \
            "
 
 # patches needed by musl
diff --git a/meta-openembedded/meta-oe/recipes-core/toybox/toybox/0001-portability-Avoid-glibc-and-linux-mount.h-conflict.patch b/meta-openembedded/meta-oe/recipes-core/toybox/toybox/0001-portability-Avoid-glibc-and-linux-mount.h-conflict.patch
new file mode 100644
index 0000000..689ee2a
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-core/toybox/toybox/0001-portability-Avoid-glibc-and-linux-mount.h-conflict.patch
@@ -0,0 +1,161 @@
+From 89000d9cb226cd864fa247f2428c9eaf7f414882 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 14 Aug 2022 10:02:15 -0700
+Subject: [PATCH] portability: Avoid glibc and linux mount.h conflict
+
+With glibc 2.36+ linux/mount.h> and <sys/mount.h> headers are
+no longer directly compatible
+
+Upstream-Status: Submitted [https://github.com/landley/toybox/pull/364]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ lib/portability.h | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+--- a/lib/portability.h
++++ b/lib/portability.h
+@@ -180,11 +180,29 @@ void *memmem(const void *haystack, size_
+ #endif
+ 
+ // Linux headers not listed by POSIX or LSB
+-#include <sys/mount.h>
+ #ifdef __linux__
+ #include <sys/statfs.h>
+ #include <sys/swap.h>
+ #include <sys/sysinfo.h>
++
++#ifndef BLKDISCARD
++#define BLKDISCARD _IO(0x12,119)
++#endif
++#ifndef BLKSECDISCARD
++#define BLKSECDISCARD _IO(0x12,125)
++#endif
++#ifndef BLKZEROOUT
++#define BLKZEROOUT _IO(0x12,127)
++#endif
++#ifndef FIFREEZE
++#define FIFREEZE        _IOWR('X', 119, int)    /* Freeze */
++#endif
++#ifndef FITHAW
++#define FITHAW          _IOWR('X', 120, int)    /* Thaw */
++#endif
++
++#else
++#include <sys/mount.h>
+ #endif
+ 
+ #ifdef __APPLE__
+--- a/toys/other/switch_root.c
++++ b/toys/other/switch_root.c
+@@ -19,6 +19,7 @@ config SWITCH_ROOT
+ 
+ #define FOR_switch_root
+ #include "toys.h"
++#include <sys/mount.h>
+ #include <sys/vfs.h>
+ 
+ GLOBALS(
+--- a/toys/other/blkdiscard.c
++++ b/toys/other/blkdiscard.c
+@@ -31,8 +31,7 @@ config BLKDISCARD
+ 
+ #define FOR_blkdiscard
+ #include "toys.h"
+-
+-#include <linux/fs.h>
++#include <sys/mount.h>
+ 
+ GLOBALS(
+   long o, l;
+--- a/toys/other/blockdev.c
++++ b/toys/other/blockdev.c
+@@ -31,7 +31,7 @@ config BLOCKDEV
+ 
+ #define FOR_blockdev
+ #include "toys.h"
+-#include <linux/fs.h>
++#include <sys/mount.h>
+ 
+ GLOBALS(
+   long setbsz, setra;
+--- a/toys/other/fsfreeze.c
++++ b/toys/other/fsfreeze.c
+@@ -18,7 +18,6 @@ config FSFREEZE
+ 
+ #define FOR_fsfreeze
+ #include "toys.h"
+-#include <linux/fs.h>
+ 
+ void fsfreeze_main(void)
+ {
+--- a/lib/portability.c
++++ b/lib/portability.c
+@@ -5,7 +5,7 @@
+  */
+ 
+ #include "toys.h"
+-
++#include <sys/mount.h>
+ // We can't fork() on nommu systems, and vfork() requires an exec() or exit()
+ // before resuming the parent (because they share a heap until then). And no,
+ // we can't implement our own clone() call that does the equivalent of fork()
+--- a/toys/lsb/mount.c
++++ b/toys/lsb/mount.c
+@@ -58,6 +58,7 @@ config MOUNT
+ 
+ #define FOR_mount
+ #include "toys.h"
++#include <sys/mount.h>
+ 
+ GLOBALS(
+   struct arg_list *o;
+--- a/toys/lsb/umount.c
++++ b/toys/lsb/umount.c
+@@ -30,6 +30,7 @@ config UMOUNT
+ 
+ #define FOR_umount
+ #include "toys.h"
++#include <sys/mount.h>
+ 
+ GLOBALS(
+   struct arg_list *t;
+--- a/toys/other/eject.c
++++ b/toys/other/eject.c
+@@ -22,6 +22,7 @@ config EJECT
+ 
+ #define FOR_eject
+ #include "toys.h"
++#include <sys/mount.h>
+ #include <scsi/sg.h>
+ #include <scsi/scsi.h>
+ #include <linux/cdrom.h>
+--- a/toys/other/freeramdisk.c
++++ b/toys/other/freeramdisk.c
+@@ -16,6 +16,7 @@ config FREERAMDISK
+ */
+ 
+ #include "toys.h"
++#include <sys/mount.h>
+ 
+ void freeramdisk_main(void)
+ {
+--- a/toys/other/nbd_client.c
++++ b/toys/other/nbd_client.c
+@@ -36,6 +36,7 @@ config NBD_CLIENT
+ #define FOR_nbd_client
+ #include "toys.h"
+ #include <linux/nbd.h>
++#include <linux/fs.h>
+ 
+ void nbd_client_main(void)
+ {
+--- a/toys/other/partprobe.c
++++ b/toys/other/partprobe.c
+@@ -18,6 +18,7 @@ config PARTPROBE
+ */
+ 
+ #include "toys.h"
++#include <sys/mount.h>
+ 
+ static void do_partprobe(int fd, char *name)
+ {
diff --git a/meta-openembedded/meta-oe/recipes-core/toybox/toybox_0.8.7.bb b/meta-openembedded/meta-oe/recipes-core/toybox/toybox_0.8.7.bb
deleted file mode 100644
index 3441568..0000000
--- a/meta-openembedded/meta-oe/recipes-core/toybox/toybox_0.8.7.bb
+++ /dev/null
@@ -1,114 +0,0 @@
-SUMMARY = "Toybox combines common utilities together into a single executable."
-HOMEPAGE = "http://www.landley.net/toybox/"
-DEPENDS = "attr virtual/crypt"
-
-LICENSE = "0BSD"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=78659a599b9325da368f2f1eb88f19c7"
-
-inherit cml1 update-alternatives
-
-SRC_URI = "http://www.landley.net/toybox/downloads/${BPN}-${PV}.tar.gz \
-           "
-SRC_URI[sha256sum] = "b508bf336f82cb0739b77111f945931d1a143b5a53905cb753cd2607cfdd1494"
-
-SECTION = "base"
-
-RDEPENDS:${PN} = "${@["", "toybox-inittab"][(d.getVar('VIRTUAL-RUNTIME_init_manager') == 'toybox')]}"
-
-TOYBOX_BIN = "generated/unstripped/toybox"
-
-# Toybox is strict on what CC, CFLAGS and CROSS_COMPILE variables should contain.
-# Fix CC, CFLAGS, CROSS_COMPILE to match expectations.
-# CC = compiler name
-# CFLAGS = only compiler flags
-# CROSS_COMPILE = compiler prefix
-CFLAGS += "${TOOLCHAIN_OPTIONS} ${TUNE_CCARGS}"
-
-COMPILER:toolchain-clang = "clang"
-COMPILER  ?= "gcc"
-
-PACKAGECONFIG ??= "no-iconv no-getconf"
-
-PACKAGECONFIG[no-iconv] = ",,"
-PACKAGECONFIG[no-getconf] = ",,"
-
-EXTRA_OEMAKE = 'CROSS_COMPILE="${HOST_PREFIX}" \
-                CC="${COMPILER}" \
-                STRIP="strip" \
-                CFLAGS="${CFLAGS}" \
-                HOSTCC="${BUILD_CC}" CPUS=${@oe.utils.cpu_count()} V=1'
-
-do_configure() {
-    # allow user to define their own defconfig in bbappend, taken from kernel.bbclass
-    if [ "${S}" != "${B}" ] && [ -f "${S}/.config" ] && [ ! -f "${B}/.config" ]; then
-        mv "${S}/.config" "${B}/.config"
-    fi
-
-    # Copy defconfig to .config if .config does not exist. This allows
-    # recipes to manage the .config themselves in do_configure:prepend().
-    if [ -f "${WORKDIR}/defconfig" ] && [ ! -f "${B}/.config" ]; then
-        cp "${WORKDIR}/defconfig" "${B}/.config"
-    fi
-
-    oe_runmake oldconfig || oe_runmake defconfig
-
-    # Disable killall5 as it isn't managed by update-alternatives
-    sed -e 's/CONFIG_KILLALL5=y/# CONFIG_KILLALL5 is not set/' -i .config
-
-    # Disable swapon as it doesn't handle the '-a' argument used during boot
-    sed -e 's/CONFIG_SWAPON=y/# CONFIG_SWAPON is not set/' -i .config
-
-    # Enable init if toybox was set as init manager
-    if ${@bb.utils.contains('VIRTUAL-RUNTIME_init_manager','toybox','true','false',d)}; then
-        sed -e 's/# CONFIG_INIT is not set/CONFIG_INIT=y/' -i .config
-    fi
-}
-
-do_compile() {
-    oe_runmake ${TOYBOX_BIN}
-
-    # Create a list of links needed
-    ${BUILD_CC} -I . scripts/install.c -o generated/instlist
-    ./generated/instlist long | sed -e 's#^#/#' > toybox.links
-    if ${@bb.utils.contains('PACKAGECONFIG','no-iconv','true','false',d)}; then
-        sed -i -e '/iconv$/d' toybox.links
-    fi
-    if ${@bb.utils.contains('PACKAGECONFIG','no-getconf','true','false',d)}; then
-        sed -i -e '/getconf$/d' toybox.links
-    fi
-}
-
-do_install() {
-    # Install manually instead of using 'make install'
-    install -d ${D}${base_bindir}
-    if grep -q "CONFIG_TOYBOX_SUID=y" ${B}/.config; then
-        install -m 4755 ${B}/${TOYBOX_BIN} ${D}${base_bindir}/toybox
-    else
-        install -m 0755 ${B}/${TOYBOX_BIN} ${D}${base_bindir}/toybox
-    fi
-
-    install -d ${D}${sysconfdir}
-    install -m 0644 ${B}/toybox.links ${D}${sysconfdir}
-}
-
-# If you've chosen to install toybox you probably want it to take precedence
-# over busybox where possible but not over other packages
-ALTERNATIVE_PRIORITY = "60"
-
-python do_package:prepend () {
-    # Read links from /etc/toybox.links and create appropriate
-    # update-alternatives variables
-
-    dvar = d.getVar('D')
-    pn = d.getVar('PN')
-    target = d.expand("${base_bindir}/toybox")
-
-    f = open('%s/etc/toybox.links' % (dvar), 'r')
-    for alt_link_name in f:
-        alt_link_name = alt_link_name.strip()
-        alt_name = os.path.basename(alt_link_name)
-        d.appendVar('ALTERNATIVE:%s' % (pn), ' ' + alt_name)
-        d.setVarFlag('ALTERNATIVE_LINK_NAME', alt_name, alt_link_name)
-        d.setVarFlag('ALTERNATIVE_TARGET', alt_name, target)
-    f.close()
-}
diff --git a/meta-openembedded/meta-oe/recipes-core/toybox/toybox_0.8.8.bb b/meta-openembedded/meta-oe/recipes-core/toybox/toybox_0.8.8.bb
new file mode 100644
index 0000000..e27f9ed
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-core/toybox/toybox_0.8.8.bb
@@ -0,0 +1,115 @@
+SUMMARY = "Toybox combines common utilities together into a single executable."
+HOMEPAGE = "http://www.landley.net/toybox/"
+DEPENDS = "attr virtual/crypt"
+
+LICENSE = "0BSD"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=78659a599b9325da368f2f1eb88f19c7"
+
+inherit cml1 update-alternatives
+
+SRC_URI = "http://www.landley.net/toybox/downloads/${BPN}-${PV}.tar.gz \
+           file://0001-portability-Avoid-glibc-and-linux-mount.h-conflict.patch \
+           "
+SRC_URI[sha256sum] = "dafd41978d40f02a61cf1be99a2b4a25812bbfb9c3157e679ee7611202d6ac58"
+
+SECTION = "base"
+
+RDEPENDS:${PN} = "${@["", "toybox-inittab"][(d.getVar('VIRTUAL-RUNTIME_init_manager') == 'toybox')]}"
+
+TOYBOX_BIN = "generated/unstripped/toybox"
+
+# Toybox is strict on what CC, CFLAGS and CROSS_COMPILE variables should contain.
+# Fix CC, CFLAGS, CROSS_COMPILE to match expectations.
+# CC = compiler name
+# CFLAGS = only compiler flags
+# CROSS_COMPILE = compiler prefix
+CFLAGS += "${TOOLCHAIN_OPTIONS} ${TUNE_CCARGS}"
+
+COMPILER:toolchain-clang = "clang"
+COMPILER  ?= "gcc"
+
+PACKAGECONFIG ??= "no-iconv no-getconf"
+
+PACKAGECONFIG[no-iconv] = ",,"
+PACKAGECONFIG[no-getconf] = ",,"
+
+EXTRA_OEMAKE = 'CROSS_COMPILE="${HOST_PREFIX}" \
+                CC="${COMPILER}" \
+                STRIP="strip" \
+                CFLAGS="${CFLAGS}" \
+                HOSTCC="${BUILD_CC}" CPUS=${@oe.utils.cpu_count()} V=1'
+
+do_configure() {
+    # allow user to define their own defconfig in bbappend, taken from kernel.bbclass
+    if [ "${S}" != "${B}" ] && [ -f "${S}/.config" ] && [ ! -f "${B}/.config" ]; then
+        mv "${S}/.config" "${B}/.config"
+    fi
+
+    # Copy defconfig to .config if .config does not exist. This allows
+    # recipes to manage the .config themselves in do_configure:prepend().
+    if [ -f "${WORKDIR}/defconfig" ] && [ ! -f "${B}/.config" ]; then
+        cp "${WORKDIR}/defconfig" "${B}/.config"
+    fi
+
+    oe_runmake oldconfig || oe_runmake defconfig
+
+    # Disable killall5 as it isn't managed by update-alternatives
+    sed -e 's/CONFIG_KILLALL5=y/# CONFIG_KILLALL5 is not set/' -i .config
+
+    # Disable swapon as it doesn't handle the '-a' argument used during boot
+    sed -e 's/CONFIG_SWAPON=y/# CONFIG_SWAPON is not set/' -i .config
+
+    # Enable init if toybox was set as init manager
+    if ${@bb.utils.contains('VIRTUAL-RUNTIME_init_manager','toybox','true','false',d)}; then
+        sed -e 's/# CONFIG_INIT is not set/CONFIG_INIT=y/' -i .config
+    fi
+}
+
+do_compile() {
+    oe_runmake ${TOYBOX_BIN}
+
+    # Create a list of links needed
+    ${BUILD_CC} -I . scripts/install.c -o generated/instlist
+    ./generated/instlist long | sed -e 's#^#/#' > toybox.links
+    if ${@bb.utils.contains('PACKAGECONFIG','no-iconv','true','false',d)}; then
+        sed -i -e '/iconv$/d' toybox.links
+    fi
+    if ${@bb.utils.contains('PACKAGECONFIG','no-getconf','true','false',d)}; then
+        sed -i -e '/getconf$/d' toybox.links
+    fi
+}
+
+do_install() {
+    # Install manually instead of using 'make install'
+    install -d ${D}${base_bindir}
+    if grep -q "CONFIG_TOYBOX_SUID=y" ${B}/.config; then
+        install -m 4755 ${B}/${TOYBOX_BIN} ${D}${base_bindir}/toybox
+    else
+        install -m 0755 ${B}/${TOYBOX_BIN} ${D}${base_bindir}/toybox
+    fi
+
+    install -d ${D}${sysconfdir}
+    install -m 0644 ${B}/toybox.links ${D}${sysconfdir}
+}
+
+# If you've chosen to install toybox you probably want it to take precedence
+# over busybox where possible but not over other packages
+ALTERNATIVE_PRIORITY = "60"
+
+python do_package:prepend () {
+    # Read links from /etc/toybox.links and create appropriate
+    # update-alternatives variables
+
+    dvar = d.getVar('D')
+    pn = d.getVar('PN')
+    target = d.expand("${base_bindir}/toybox")
+
+    f = open('%s/etc/toybox.links' % (dvar), 'r')
+    for alt_link_name in f:
+        alt_link_name = alt_link_name.strip()
+        alt_name = os.path.basename(alt_link_name)
+        d.appendVar('ALTERNATIVE:%s' % (pn), ' ' + alt_name)
+        d.setVarFlag('ALTERNATIVE_LINK_NAME', alt_name, alt_link_name)
+        d.setVarFlag('ALTERNATIVE_TARGET', alt_name, target)
+    f.close()
+}
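As an aside, the effect of the do_package:prepend hook above can be sketched with a minimal standalone Python snippet; the sample link list and target below are hypothetical stand-ins for the generated /etc/toybox.links and ${base_bindir}/toybox:

import os

# Hypothetical stand-in for the links emitted by generated/instlist during do_compile
sample_links = ["/bin/ls", "/bin/cat", "/usr/bin/cmp"]
target = "/bin/toybox"

alternative = []   # becomes ALTERNATIVE:toybox
link_names = {}    # becomes the ALTERNATIVE_LINK_NAME varflags
targets = {}       # becomes the ALTERNATIVE_TARGET varflags
for alt_link_name in sample_links:
    alt_name = os.path.basename(alt_link_name)   # e.g. "ls"
    alternative.append(alt_name)
    link_names[alt_name] = alt_link_name
    targets[alt_name] = target

print(alternative, link_names, targets)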
diff --git a/meta-openembedded/meta-oe/recipes-core/uutils-coreutils/README.txt b/meta-openembedded/meta-oe/recipes-core/uutils-coreutils/README.txt
new file mode 100644
index 0000000..cfd7b05
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-core/uutils-coreutils/README.txt
@@ -0,0 +1,19 @@
+How to generate/update the file uutils-coreutils_XXX.bb:
+
+cargo version > 1.60 is needed, so the cargo shipped in Ubuntu's apt will not work
+(because of https://github.com/rust-lang/cargo/issues/10623).
+The following package is needed instead (tested on Ubuntu 22.04):
+sudo apt-get -y install librust-cargo+openssl-dev
+
+Then install cargo-bitbake with:
+$ cargo install --locked cargo-bitbake
+
+You can now update coreutils:
+$ git clone https://github.com/uutils/coreutils.git
+$ cd coreutils
+$ git tag
+$ git checkout 0.0.XXX
+$ cargo-bitbake bitbake
+Wrote: coreutils_0.0.15.bb
+
+Then verify the manual changes in the generated .bb file (e.g. rename the coreutils.inc include to uutils-coreutils.inc).
diff --git a/meta-openembedded/meta-oe/recipes-core/uutils-coreutils/uutils-coreutils.inc b/meta-openembedded/meta-oe/recipes-core/uutils-coreutils/uutils-coreutils.inc
new file mode 100644
index 0000000..68cfa81
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-core/uutils-coreutils/uutils-coreutils.inc
@@ -0,0 +1,57 @@
+# This file contains content which was not automatically generated using cargo-bitbake on the cargo.toml file of uutils-coreutils
+# Copyright (c) 2022, Snap Inc.
+# Released under the MIT license (see COPYING.MIT for the terms)
+
+PROVIDES = "coreutils"
+RPROVIDES:${PN} = "coreutils"
+
+PACKAGECONFIG ?= "${@bb.utils.filter('DISTRO_FEATURES', 'selinux', d)}"
+
+PACKAGECONFIG[selinux] = "--with-selinux,--without-selinux,libselinux"
+
+CARGO_BUILD_FLAGS += "--features unix"
+CARGO_BUILD_FLAGS += "${@bb.utils.contains('PACKAGECONFIG', 'selinux', '--features feat_selinux', '', d)}"
+
+DEPENDS += "${@bb.utils.contains('PACKAGECONFIG', 'selinux', 'clang-native libselinux-native', '', d)}"
+
+export LIBCLANG_PATH = "${WORKDIR}/recipe-sysroot-native${libdir}"
+export SELINUX_LIB_DIR = "${WORKDIR}/recipe-sysroot-native${libdir}"
+export SELINUX_INCLUDE_DIR = "${WORKDIR}/recipe-sysroot-native${includedir}"
+
+# The code which follows is strongly inspired by the GNU coreutils bitbake recipe:
+
+# [ df mktemp nice printenv base64 get special treatment and are not included in this list
+bindir_progs = "[ arch basename cksum comm csplit cut dir dircolors dirname du \
+                env expand expr factor fmt fold groups head hostid id install \
+                join link logname md5sum mkfifo nl nohup nproc od paste pathchk \
+                pinky pr printf ptx readlink realpath seq sha1sum sha224sum sha256sum \
+                sha384sum sha512sum shred shuf sort split sum tac tail tee test timeout \
+                tr truncate tsort tty unexpand uniq unlink uptime users vdir wc who whoami yes"
+
+bindir_progs += "${@bb.utils.contains('PACKAGECONFIG', 'selinux', 'chcon runcon', '', d)}"
+
+base_bindir_progs = "cat chgrp chmod chown cp date dd echo false hostname kill ln ls mkdir \
+                     mknod mv pwd rm rmdir sleep stty sync touch true uname stat"
+
+sbindir_progs = "chroot"
+
+inherit update-alternatives
+
+# Higher than busybox (which uses 50)
+ALTERNATIVE_PRIORITY = "100"
+
+# Higher than net-tools (which uses 100)
+ALTERNATIVE_PRIORITY[hostname] = "110"
+
+ALTERNATIVE:${PN} = "${bindir_progs} ${base_bindir_progs} ${sbindir_progs} base32 base64 nice printenv mktemp df"
+
+# Use the multicall binary named "coreutils" for symlinks
+ALTERNATIVE_TARGET = "${bindir}/coreutils"
+
+python __anonymous() {
+    for prog in d.getVar('base_bindir_progs').split():
+        d.setVarFlag('ALTERNATIVE_LINK_NAME', prog, '%s/%s' % (d.getVar('base_bindir'), prog))
+
+    for prog in d.getVar('sbindir_progs').split():
+        d.setVarFlag('ALTERNATIVE_LINK_NAME', prog, '%s/%s' % (d.getVar('sbindir'), prog))
+}
diff --git a/meta-openembedded/meta-oe/recipes-core/uutils-coreutils/uutils-coreutils_0.0.15.bb b/meta-openembedded/meta-oe/recipes-core/uutils-coreutils/uutils-coreutils_0.0.15.bb
new file mode 100644
index 0000000..0700463
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-core/uutils-coreutils/uutils-coreutils_0.0.15.bb
@@ -0,0 +1,282 @@
+# Auto-Generated by cargo-bitbake 0.3.16
+#
+inherit cargo
+
+# If this is git based prefer versioned ones if they exist
+# DEFAULT_PREFERENCE = "-1"
+
+# fetching coreutils could be as easy as the crates.io entry below, but we default to a git checkout:
+# SRC_URI += "crate://crates.io/coreutils/0.0.15"
+SRC_URI += "git://github.com/uutils/coreutils.git;protocol=https;nobranch=1"
+SRCREV = "5b6cd6146b40f5c1391cd812b1b929edd2283994"
+S = "${WORKDIR}/git"
+CARGO_SRC_DIR = ""
+
+
+# please note if you have entries that do not begin with crate://
+# you must change them to how that package can be fetched
+SRC_URI += " \
+    crate://crates.io/Inflector/0.11.4 \
+    crate://crates.io/adler/1.0.2 \
+    crate://crates.io/ahash/0.7.6 \
+    crate://crates.io/aho-corasick/0.7.18 \
+    crate://crates.io/aliasable/0.1.3 \
+    crate://crates.io/android_system_properties/0.1.4 \
+    crate://crates.io/ansi_term/0.12.1 \
+    crate://crates.io/arrayref/0.3.6 \
+    crate://crates.io/arrayvec/0.7.2 \
+    crate://crates.io/atty/0.2.14 \
+    crate://crates.io/autocfg/1.1.0 \
+    crate://crates.io/bigdecimal/0.3.0 \
+    crate://crates.io/binary-heap-plus/0.4.1 \
+    crate://crates.io/bindgen/0.59.2 \
+    crate://crates.io/bitflags/1.3.2 \
+    crate://crates.io/blake2b_simd/1.0.0 \
+    crate://crates.io/blake3/1.3.1 \
+    crate://crates.io/block-buffer/0.10.2 \
+    crate://crates.io/bstr/0.2.17 \
+    crate://crates.io/bumpalo/3.10.0 \
+    crate://crates.io/byte-unit/4.0.14 \
+    crate://crates.io/bytecount/0.6.3 \
+    crate://crates.io/byteorder/1.4.3 \
+    crate://crates.io/cc/1.0.73 \
+    crate://crates.io/cexpr/0.6.0 \
+    crate://crates.io/cfg-if/0.1.10 \
+    crate://crates.io/cfg-if/1.0.0 \
+    crate://crates.io/chrono/0.4.22 \
+    crate://crates.io/clang-sys/1.3.3 \
+    crate://crates.io/clap/2.34.0 \
+    crate://crates.io/clap/3.2.17 \
+    crate://crates.io/clap_complete/3.2.4 \
+    crate://crates.io/clap_lex/0.2.4 \
+    crate://crates.io/compare/0.1.0 \
+    crate://crates.io/constant_time_eq/0.1.5 \
+    crate://crates.io/conv/0.3.3 \
+    crate://crates.io/core-foundation-sys/0.8.3 \
+    crate://crates.io/coz/0.1.3 \
+    crate://crates.io/cpp/0.5.7 \
+    crate://crates.io/cpp_build/0.4.0 \
+    crate://crates.io/cpp_common/0.4.0 \
+    crate://crates.io/cpp_common/0.5.7 \
+    crate://crates.io/cpp_macros/0.5.7 \
+    crate://crates.io/cpp_syn/0.12.0 \
+    crate://crates.io/cpp_synmap/0.3.0 \
+    crate://crates.io/cpp_synom/0.12.0 \
+    crate://crates.io/cpufeatures/0.2.2 \
+    crate://crates.io/crc32fast/1.3.2 \
+    crate://crates.io/crossbeam-channel/0.5.6 \
+    crate://crates.io/crossbeam-deque/0.8.2 \
+    crate://crates.io/crossbeam-epoch/0.9.10 \
+    crate://crates.io/crossbeam-utils/0.8.11 \
+    crate://crates.io/crossterm/0.25.0 \
+    crate://crates.io/crossterm_winapi/0.9.0 \
+    crate://crates.io/crypto-common/0.1.6 \
+    crate://crates.io/ctor/0.1.23 \
+    crate://crates.io/ctrlc/3.2.3 \
+    crate://crates.io/custom_derive/0.1.7 \
+    crate://crates.io/data-encoding-macro-internal/0.1.10 \
+    crate://crates.io/data-encoding-macro/0.1.12 \
+    crate://crates.io/data-encoding/2.3.2 \
+    crate://crates.io/diff/0.1.13 \
+    crate://crates.io/digest/0.10.3 \
+    crate://crates.io/dlv-list/0.3.0 \
+    crate://crates.io/dns-lookup/1.0.8 \
+    crate://crates.io/dunce/1.0.2 \
+    crate://crates.io/either/1.7.0 \
+    crate://crates.io/env_logger/0.8.4 \
+    crate://crates.io/env_logger/0.9.0 \
+    crate://crates.io/exacl/0.9.0 \
+    crate://crates.io/fastrand/1.8.0 \
+    crate://crates.io/file_diff/1.0.0 \
+    crate://crates.io/filetime/0.2.17 \
+    crate://crates.io/flate2/1.0.24 \
+    crate://crates.io/fnv/1.0.7 \
+    crate://crates.io/fs_extra/1.2.0 \
+    crate://crates.io/fsevent-sys/4.1.0 \
+    crate://crates.io/fts-sys/0.2.1 \
+    crate://crates.io/gcd/2.1.0 \
+    crate://crates.io/generic-array/0.14.6 \
+    crate://crates.io/getrandom/0.2.7 \
+    crate://crates.io/glob/0.3.0 \
+    crate://crates.io/half/1.8.2 \
+    crate://crates.io/hashbrown/0.12.3 \
+    crate://crates.io/heck/0.4.0 \
+    crate://crates.io/hermit-abi/0.1.19 \
+    crate://crates.io/hex-literal/0.3.4 \
+    crate://crates.io/hex/0.4.3 \
+    crate://crates.io/hostname/0.3.1 \
+    crate://crates.io/humantime/2.1.0 \
+    crate://crates.io/iana-time-zone/0.1.45 \
+    crate://crates.io/indexmap/1.9.1 \
+    crate://crates.io/inotify-sys/0.1.5 \
+    crate://crates.io/inotify/0.9.6 \
+    crate://crates.io/instant/0.1.12 \
+    crate://crates.io/itertools/0.10.3 \
+    crate://crates.io/itoa/1.0.3 \
+    crate://crates.io/js-sys/0.3.59 \
+    crate://crates.io/keccak/0.1.2 \
+    crate://crates.io/kernel32-sys/0.2.2 \
+    crate://crates.io/kqueue-sys/1.0.3 \
+    crate://crates.io/kqueue/1.0.6 \
+    crate://crates.io/lazy_static/1.4.0 \
+    crate://crates.io/lazycell/1.3.0 \
+    crate://crates.io/libc/0.2.132 \
+    crate://crates.io/libloading/0.7.3 \
+    crate://crates.io/lock_api/0.4.7 \
+    crate://crates.io/log/0.4.17 \
+    crate://crates.io/lscolors/0.12.0 \
+    crate://crates.io/match_cfg/0.1.0 \
+    crate://crates.io/md-5/0.10.1 \
+    crate://crates.io/memchr/1.0.2 \
+    crate://crates.io/memchr/2.5.0 \
+    crate://crates.io/memmap2/0.5.7 \
+    crate://crates.io/memoffset/0.6.5 \
+    crate://crates.io/minimal-lexical/0.2.1 \
+    crate://crates.io/miniz_oxide/0.5.3 \
+    crate://crates.io/mio/0.8.4 \
+    crate://crates.io/nix/0.25.0 \
+    crate://crates.io/nom/7.1.1 \
+    crate://crates.io/notify/5.0.0-pre.16 \
+    crate://crates.io/num-bigint/0.4.3 \
+    crate://crates.io/num-integer/0.1.45 \
+    crate://crates.io/num-traits/0.2.15 \
+    crate://crates.io/num_cpus/1.13.1 \
+    crate://crates.io/num_threads/0.1.6 \
+    crate://crates.io/number_prefix/0.4.0 \
+    crate://crates.io/numtoa/0.1.0 \
+    crate://crates.io/once_cell/1.13.1 \
+    crate://crates.io/onig/6.3.2 \
+    crate://crates.io/onig_sys/69.8.1 \
+    crate://crates.io/ordered-multimap/0.4.3 \
+    crate://crates.io/os_display/0.1.3 \
+    crate://crates.io/os_str_bytes/6.0.1 \
+    crate://crates.io/ouroboros/0.15.2 \
+    crate://crates.io/ouroboros_macro/0.15.2 \
+    crate://crates.io/output_vt100/0.1.3 \
+    crate://crates.io/parking_lot/0.12.1 \
+    crate://crates.io/parking_lot_core/0.9.3 \
+    crate://crates.io/paste/1.0.8 \
+    crate://crates.io/peeking_take_while/0.1.2 \
+    crate://crates.io/phf/0.10.1 \
+    crate://crates.io/phf_codegen/0.10.0 \
+    crate://crates.io/phf_generator/0.10.0 \
+    crate://crates.io/phf_shared/0.10.0 \
+    crate://crates.io/pin-utils/0.1.0 \
+    crate://crates.io/pkg-config/0.3.25 \
+    crate://crates.io/platform-info/1.0.0 \
+    crate://crates.io/ppv-lite86/0.2.16 \
+    crate://crates.io/pretty_assertions/1.2.1 \
+    crate://crates.io/proc-macro-error-attr/1.0.4 \
+    crate://crates.io/proc-macro-error/1.0.4 \
+    crate://crates.io/proc-macro2/1.0.43 \
+    crate://crates.io/quick-error/2.0.1 \
+    crate://crates.io/quickcheck/1.0.3 \
+    crate://crates.io/quote/0.3.15 \
+    crate://crates.io/quote/1.0.21 \
+    crate://crates.io/rand/0.8.5 \
+    crate://crates.io/rand_chacha/0.3.1 \
+    crate://crates.io/rand_core/0.6.3 \
+    crate://crates.io/rayon-core/1.9.3 \
+    crate://crates.io/rayon/1.5.3 \
+    crate://crates.io/redox_syscall/0.2.16 \
+    crate://crates.io/redox_termios/0.1.2 \
+    crate://crates.io/reference-counted-singleton/0.1.1 \
+    crate://crates.io/regex-automata/0.1.10 \
+    crate://crates.io/regex-syntax/0.6.27 \
+    crate://crates.io/regex/1.6.0 \
+    crate://crates.io/remove_dir_all/0.5.3 \
+    crate://crates.io/remove_dir_all/0.7.0 \
+    crate://crates.io/retain_mut/0.1.7 \
+    crate://crates.io/rlimit/0.8.3 \
+    crate://crates.io/rust-ini/0.18.0 \
+    crate://crates.io/rustc-hash/1.1.0 \
+    crate://crates.io/rustversion/1.0.9 \
+    crate://crates.io/same-file/1.0.6 \
+    crate://crates.io/scopeguard/1.1.0 \
+    crate://crates.io/selinux-sys/0.5.2 \
+    crate://crates.io/selinux/0.2.7 \
+    crate://crates.io/sha1/0.10.1 \
+    crate://crates.io/sha2/0.10.2 \
+    crate://crates.io/sha3/0.10.2 \
+    crate://crates.io/shlex/1.1.0 \
+    crate://crates.io/signal-hook-mio/0.2.3 \
+    crate://crates.io/signal-hook-registry/1.4.0 \
+    crate://crates.io/signal-hook/0.3.14 \
+    crate://crates.io/siphasher/0.3.10 \
+    crate://crates.io/smallvec/1.9.0 \
+    crate://crates.io/smawk/0.3.1 \
+    crate://crates.io/socket2/0.4.4 \
+    crate://crates.io/strsim/0.10.0 \
+    crate://crates.io/strsim/0.8.0 \
+    crate://crates.io/strum/0.24.1 \
+    crate://crates.io/strum_macros/0.24.3 \
+    crate://crates.io/subtle/2.4.1 \
+    crate://crates.io/syn/1.0.99 \
+    crate://crates.io/tempfile/3.3.0 \
+    crate://crates.io/term_grid/0.1.7 \
+    crate://crates.io/termcolor/1.1.3 \
+    crate://crates.io/terminal_size/0.1.17 \
+    crate://crates.io/termion/1.5.6 \
+    crate://crates.io/termsize/0.1.6 \
+    crate://crates.io/textwrap/0.11.0 \
+    crate://crates.io/textwrap/0.15.0 \
+    crate://crates.io/thiserror-impl/1.0.32 \
+    crate://crates.io/thiserror/1.0.32 \
+    crate://crates.io/time-macros/0.2.4 \
+    crate://crates.io/time/0.3.9 \
+    crate://crates.io/typenum/1.15.0 \
+    crate://crates.io/unicode-ident/1.0.3 \
+    crate://crates.io/unicode-linebreak/0.1.2 \
+    crate://crates.io/unicode-segmentation/1.9.0 \
+    crate://crates.io/unicode-width/0.1.9 \
+    crate://crates.io/unicode-xid/0.0.4 \
+    crate://crates.io/unindent/0.1.10 \
+    crate://crates.io/unix_socket/0.5.0 \
+    crate://crates.io/users/0.11.0 \
+    crate://crates.io/utf-8/0.7.6 \
+    crate://crates.io/utf8-width/0.1.6 \
+    crate://crates.io/uuid/1.1.2 \
+    crate://crates.io/vec_map/0.8.2 \
+    crate://crates.io/version_check/0.9.4 \
+    crate://crates.io/walkdir/2.3.2 \
+    crate://crates.io/wasi/0.11.0+wasi-snapshot-preview1 \
+    crate://crates.io/wasm-bindgen-backend/0.2.82 \
+    crate://crates.io/wasm-bindgen-macro-support/0.2.82 \
+    crate://crates.io/wasm-bindgen-macro/0.2.82 \
+    crate://crates.io/wasm-bindgen-shared/0.2.82 \
+    crate://crates.io/wasm-bindgen/0.2.82 \
+    crate://crates.io/which/4.2.5 \
+    crate://crates.io/wild/2.1.0 \
+    crate://crates.io/winapi-build/0.1.1 \
+    crate://crates.io/winapi-i686-pc-windows-gnu/0.4.0 \
+    crate://crates.io/winapi-util/0.1.5 \
+    crate://crates.io/winapi-x86_64-pc-windows-gnu/0.4.0 \
+    crate://crates.io/winapi/0.2.8 \
+    crate://crates.io/winapi/0.3.9 \
+    crate://crates.io/windows-sys/0.36.1 \
+    crate://crates.io/windows_aarch64_msvc/0.36.1 \
+    crate://crates.io/windows_i686_gnu/0.36.1 \
+    crate://crates.io/windows_i686_msvc/0.36.1 \
+    crate://crates.io/windows_x86_64_gnu/0.36.1 \
+    crate://crates.io/windows_x86_64_msvc/0.36.1 \
+    crate://crates.io/xattr/0.2.3 \
+    crate://crates.io/z85/3.0.5 \
+    crate://crates.io/zip/0.6.2 \
+"
+
+
+
+# FIXME: update this generated checksum with the real MD5 of the license file
+LIC_FILES_CHKSUM = " \
+    file://LICENSE;md5=41f7469eaacac62c67d5664fff2c062d \
+"
+
+SUMMARY = "coreutils ~ GNU coreutils (updated); implemented as universal (cross-platform) utils, written in Rust"
+HOMEPAGE = "https://github.com/uutils/coreutils"
+LICENSE = "MIT"
+
+# includes this file if it exists but does not fail
+# this is useful for anything you may want to override from
+# what cargo-bitbake generates.
+include uutils-coreutils-${PV}.inc
+include uutils-coreutils.inc
diff --git a/meta-openembedded/meta-oe/recipes-dbs/lmdb/files/0001-make-set-soname-on-liblmdb.patch b/meta-openembedded/meta-oe/recipes-dbs/lmdb/files/0001-make-set-soname-on-liblmdb.patch
new file mode 100644
index 0000000..312809d
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-dbs/lmdb/files/0001-make-set-soname-on-liblmdb.patch
@@ -0,0 +1,22 @@
+From b4d418bf3f78748d84e3cfb110833443eef34284 Mon Sep 17 00:00:00 2001
+From: Justin Bronder <jsbronder@cold-front.org>
+Date: Thu, 25 Aug 2022 17:22:20 -0400
+Subject: [PATCH] make: set soname on liblmdb
+
+---
+ libraries/liblmdb/Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/libraries/liblmdb/Makefile b/libraries/liblmdb/Makefile
+index 1ec74e6..ea08cd6 100644
+--- a/libraries/liblmdb/Makefile
++++ b/libraries/liblmdb/Makefile
+@@ -66,7 +66,7 @@ liblmdb.a:	mdb.o midl.o
+ 
+ liblmdb$(SOEXT):	mdb.lo midl.lo
+ #	$(CC) $(LDFLAGS) -pthread -shared -Wl,-Bsymbolic -o $@ mdb.o midl.o $(SOLIBS)
+-	$(CC) $(LDFLAGS) -pthread -shared -o $@ mdb.lo midl.lo $(SOLIBS)
++	$(CC) $(LDFLAGS) -pthread -shared -Wl,-soname,$@ -o $@ mdb.lo midl.lo $(SOLIBS)
+ 
+ mdb_stat: mdb_stat.o liblmdb.a
+ mdb_copy: mdb_copy.o liblmdb.a
diff --git a/meta-openembedded/meta-oe/recipes-dbs/lmdb/lmdb_0.9.29.bb b/meta-openembedded/meta-oe/recipes-dbs/lmdb/lmdb_0.9.29.bb
index b58a36c..a76d388 100644
--- a/meta-openembedded/meta-oe/recipes-dbs/lmdb/lmdb_0.9.29.bb
+++ b/meta-openembedded/meta-oe/recipes-dbs/lmdb/lmdb_0.9.29.bb
@@ -11,16 +11,15 @@
 SRC_URI = "git://github.com/LMDB/lmdb.git;nobranch=1;protocol=https \
            file://run-ptest \
            file://0001-Makefile-use-libprefix-instead-of-libdir.patch \
+           file://0001-make-set-soname-on-liblmdb.patch;patchdir=../.. \
            "
 
 SRCREV = "8ad7be2510414b9506ec9f9e24f24d04d9b04a1a"
 
-inherit base ptest
+inherit ptest
 
 S = "${WORKDIR}/git/libraries/liblmdb"
 
-LDFLAGS += "-Wl,-soname,lib${PN}.so.${PV}"
-
 do_compile() {
     oe_runmake CC="${CC}" SOEXT=".so.${PV}" LDFLAGS="${LDFLAGS}"
 }
diff --git a/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb.inc b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb.inc
index fa32e68..c63d511 100644
--- a/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb.inc
+++ b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb.inc
@@ -30,7 +30,7 @@
 
 BINCONFIG_GLOB = "mysql_config"
 
-inherit cmake gettext binconfig update-rc.d useradd systemd multilib_script
+inherit cmake gettext binconfig update-rc.d useradd systemd multilib_script pkgconfig
 
 MULTILIB_SCRIPTS = "${PN}-server:${bindir}/mariadbd-safe \
                     ${PN}-setupdb:${bindir}/mariadb-install-db"
@@ -60,11 +60,12 @@
                        ${bindir}/mysql-systemd-start \
                        "
 
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)} openssl"
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)} lz4 openssl"
 PACKAGECONFIG:class-native = ""
 PACKAGECONFIG[pam] = ",-DWITHOUT_AUTH_PAM=TRUE,libpam"
 PACKAGECONFIG[valgrind] = "-DWITH_VALGRIND=TRUE,-DWITH_VALGRIND=FALSE,valgrind"
 PACKAGECONFIG[krb5] = ", ,krb5"
+PACKAGECONFIG[lz4] = ", ,lz4"
 PACKAGECONFIG[openssl] = "-DWITH_SSL='system',-DWITH_SSL='bundled',openssl"
 
 # MariaDB doesn't link properly with gold
diff --git a/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/0001-config_info.c-not-expose-build-info.patch b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/0001-config_info.c-not-expose-build-info.patch
new file mode 100644
index 0000000..101a748
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/0001-config_info.c-not-expose-build-info.patch
@@ -0,0 +1,110 @@
+From b92eebe8b0760fee7bd55c6c22318620c2c07579 Mon Sep 17 00:00:00 2001
+From: Mingli Yu <mingli.yu@windriver.com>
+Date: Mon, 1 Aug 2022 15:44:38 +0800
+Subject: [PATCH] config_info.c: not expose build info
+
+Don't collect the build information to fix the buildpaths issue.
+
+Upstream-Status: Inappropriate [oe specific]
+
+Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
+---
+ configure.ac             |  2 +-
+ src/common/config_info.c | 68 ----------------------------------------
+ 2 files changed, 1 insertion(+), 69 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index 0eb595b..508487b 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -23,7 +23,7 @@ AC_COPYRIGHT([Copyright (c) 1996-2021, PostgreSQL Global Development Group])
+ AC_CONFIG_SRCDIR([src/backend/access/common/heaptuple.c])
+ AC_CONFIG_AUX_DIR(config)
+ AC_PREFIX_DEFAULT(/usr/local/pgsql)
+-AC_DEFINE_UNQUOTED(CONFIGURE_ARGS, ["$ac_configure_args"], [Saved arguments from configure])
++AC_DEFINE_UNQUOTED(CONFIGURE_ARGS, ["ac_configure_args"], [Saved arguments from configure])
+ 
+ [PG_MAJORVERSION=`expr "$PACKAGE_VERSION" : '\([0-9][0-9]*\)'`]
+ [PG_MINORVERSION=`expr "$PACKAGE_VERSION" : '.*\.\([0-9][0-9]*\)'`]
+diff --git a/src/common/config_info.c b/src/common/config_info.c
+index e72e729..b482c20 100644
+--- a/src/common/config_info.c
++++ b/src/common/config_info.c
+@@ -123,74 +123,6 @@ get_configdata(const char *my_exec_path, size_t *configdata_len)
+ 	configdata[i].setting = pstrdup(path);
+ 	i++;
+ 
+-	configdata[i].name = pstrdup("CONFIGURE");
+-	configdata[i].setting = pstrdup(CONFIGURE_ARGS);
+-	i++;
+-
+-	configdata[i].name = pstrdup("CC");
+-#ifdef VAL_CC
+-	configdata[i].setting = pstrdup(VAL_CC);
+-#else
+-	configdata[i].setting = pstrdup(_("not recorded"));
+-#endif
+-	i++;
+-
+-	configdata[i].name = pstrdup("CPPFLAGS");
+-#ifdef VAL_CPPFLAGS
+-	configdata[i].setting = pstrdup(VAL_CPPFLAGS);
+-#else
+-	configdata[i].setting = pstrdup(_("not recorded"));
+-#endif
+-	i++;
+-
+-	configdata[i].name = pstrdup("CFLAGS");
+-#ifdef VAL_CFLAGS
+-	configdata[i].setting = pstrdup(VAL_CFLAGS);
+-#else
+-	configdata[i].setting = pstrdup(_("not recorded"));
+-#endif
+-	i++;
+-
+-	configdata[i].name = pstrdup("CFLAGS_SL");
+-#ifdef VAL_CFLAGS_SL
+-	configdata[i].setting = pstrdup(VAL_CFLAGS_SL);
+-#else
+-	configdata[i].setting = pstrdup(_("not recorded"));
+-#endif
+-	i++;
+-
+-	configdata[i].name = pstrdup("LDFLAGS");
+-#ifdef VAL_LDFLAGS
+-	configdata[i].setting = pstrdup(VAL_LDFLAGS);
+-#else
+-	configdata[i].setting = pstrdup(_("not recorded"));
+-#endif
+-	i++;
+-
+-	configdata[i].name = pstrdup("LDFLAGS_EX");
+-#ifdef VAL_LDFLAGS_EX
+-	configdata[i].setting = pstrdup(VAL_LDFLAGS_EX);
+-#else
+-	configdata[i].setting = pstrdup(_("not recorded"));
+-#endif
+-	i++;
+-
+-	configdata[i].name = pstrdup("LDFLAGS_SL");
+-#ifdef VAL_LDFLAGS_SL
+-	configdata[i].setting = pstrdup(VAL_LDFLAGS_SL);
+-#else
+-	configdata[i].setting = pstrdup(_("not recorded"));
+-#endif
+-	i++;
+-
+-	configdata[i].name = pstrdup("LIBS");
+-#ifdef VAL_LIBS
+-	configdata[i].setting = pstrdup(VAL_LIBS);
+-#else
+-	configdata[i].setting = pstrdup(_("not recorded"));
+-#endif
+-	i++;
+-
+ 	configdata[i].name = pstrdup("VERSION");
+ 	configdata[i].setting = pstrdup("PostgreSQL " PG_VERSION);
+ 	i++;
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/0001-configure.ac-bypass-autoconf-2.69-version-check.patch b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/0001-configure.ac-bypass-autoconf-2.69-version-check.patch
index 2256bcc..4a576d7 100644
--- a/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/0001-configure.ac-bypass-autoconf-2.69-version-check.patch
+++ b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/0001-configure.ac-bypass-autoconf-2.69-version-check.patch
@@ -1,4 +1,4 @@
-From 07e605015fad0621c3e67133ff9330a5c6318daa Mon Sep 17 00:00:00 2001
+From 258c6bd2ad96f2c42f1cb5f4c84e4ca5865059f0 Mon Sep 17 00:00:00 2001
 From: Yi Fan Yu <yifan.yu@windriver.com>
 Date: Fri, 5 Feb 2021 17:15:42 -0500
 Subject: [PATCH] configure.ac: bypass autoconf 2.69 version check
@@ -14,12 +14,12 @@
  1 file changed, 4 deletions(-)
 
 diff --git a/configure.ac b/configure.ac
-index 04ef7be..0eb595b 100644
+index ffe878e..c39799b 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -19,10 +19,6 @@ m4_pattern_forbid(^PGAC_)dnl to catch undefined macros
  
- AC_INIT([PostgreSQL], [14.4], [pgsql-bugs@lists.postgresql.org], [], [https://www.postgresql.org/])
+ AC_INIT([PostgreSQL], [14.5], [pgsql-bugs@lists.postgresql.org], [], [https://www.postgresql.org/])
  
 -m4_if(m4_defn([m4_PACKAGE_VERSION]), [2.69], [], [m4_fatal([Autoconf version 2.69 is required.
 -Untested combinations of 'autoconf' and PostgreSQL versions are not
diff --git a/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql.inc b/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql.inc
index 00c0107..60d44ce 100644
--- a/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql.inc
+++ b/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql.inc
@@ -205,7 +205,7 @@
     # multiple server config directory
     install -d -m 700 ${D}${sysconfdir}/default/${BPN}
 
-    if [ "${@d.getVar('enable_pam')}" = "pam" ]; then
+    if ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'true', 'false', d)}; then
         install -d ${D}${sysconfdir}/pam.d
         install -m 644 ${WORKDIR}/postgresql.pam ${D}${sysconfdir}/pam.d/postgresql
     fi
@@ -215,6 +215,14 @@
     install -m 0644 ${WORKDIR}/postgresql.service ${D}${systemd_unitdir}/system
     sed -i -e 's,@BINDIR@,${bindir},g' \
         ${D}${systemd_unitdir}/system/postgresql.service
+    # Remove the build path
+    if [ -f ${D}${libdir}/${BPN}/pgxs/src/Makefile.global ]; then
+        sed -i -e 's#${RECIPE_SYSROOT}##g' \
+               -e 's#${RECIPE_SYSROOT_NATIVE}##g' \
+               -e 's#${WORKDIR}##g' \
+               -e 's#${TMPDIR}##g' \
+             ${D}${libdir}/${BPN}/pgxs/src/Makefile.global
+    fi
 }
 
 SSTATE_SCAN_FILES += "Makefile.global"
diff --git a/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_14.4.bb b/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_14.4.bb
deleted file mode 100644
index 64e83b2..0000000
--- a/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_14.4.bb
+++ /dev/null
@@ -1,17 +0,0 @@
-require postgresql.inc
-
-LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=75af6e3eeec4a06cdd2e578673236fc3"
-
-SRC_URI += "\
-   file://not-check-libperl.patch \
-   file://0001-Add-support-for-RISC-V.patch \
-   file://0001-Improve-reproducibility.patch \
-   file://0001-configure.ac-bypass-autoconf-2.69-version-check.patch \
-   file://remove_duplicate.patch \
-"
-
-SRC_URI[sha256sum] = "c23b6237c5231c791511bdc79098617d6852e9e3bdf360efd8b5d15a1a3d8f6a"
-
-CVE_CHECK_IGNORE += "\
-   CVE-2017-8806 \
-"
diff --git a/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_14.5.bb b/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_14.5.bb
new file mode 100644
index 0000000..1551d34
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_14.5.bb
@@ -0,0 +1,18 @@
+require postgresql.inc
+
+LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=75af6e3eeec4a06cdd2e578673236fc3"
+
+SRC_URI += "\
+   file://not-check-libperl.patch \
+   file://0001-Add-support-for-RISC-V.patch \
+   file://0001-Improve-reproducibility.patch \
+   file://0001-configure.ac-bypass-autoconf-2.69-version-check.patch \
+   file://remove_duplicate.patch \
+   file://0001-config_info.c-not-expose-build-info.patch \
+"
+
+SRC_URI[sha256sum] = "d4f72cb5fb857c9a9f75ec8cf091a1771272802f2178f0b2e65b7b6ff64f4a30"
+
+CVE_CHECK_IGNORE += "\
+   CVE-2017-8806 \
+"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools-conf-configfs/android-gadget-start b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools-conf-configfs/android-gadget-start
index d67878f..b73cb70 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools-conf-configfs/android-gadget-start
+++ b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools-conf-configfs/android-gadget-start
@@ -2,6 +2,6 @@
 
 set -e
 
-sleep 3
+sleep 10
 
 ls /sys/class/udc/ | xargs echo -n > /sys/kernel/config/usb_gadget/adb/UDC
diff --git a/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/core/0015-libsparse-Split-off-most-of-sparse_file_read_normal-.patch b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/core/0015-libsparse-Split-off-most-of-sparse_file_read_normal-.patch
new file mode 100644
index 0000000..b3893f0
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/core/0015-libsparse-Split-off-most-of-sparse_file_read_normal-.patch
@@ -0,0 +1,60 @@
+From 7b74d23ed955206a789a96bdc3288593e702afac Mon Sep 17 00:00:00 2001
+From: Sean Anderson <sean.anderson@seco.com>
+Date: Thu, 30 Dec 2021 15:16:08 -0500
+Subject: [PATCH] libsparse: Split off most of sparse_file_read_normal into a
+ helper function
+
+This carves out the core of sparse_file_read_normal and splits it off so
+it can be reused in the next patch. No functional change intended.
+
+Change-Id: Id00491fd7e5bb6fa28c517a0bb32b8b506539d4d
+Upstream-Status: Backport [95657f3e5976d96073f7bbfe3a49192509999d1d]
+Signed-off-by: Sean Anderson <sean.anderson@seco.com>
+---
+ libsparse/sparse_read.c | 21 ++++++++++++++++-----
+ 1 file changed, 16 insertions(+), 5 deletions(-)
+
+diff --git a/libsparse/sparse_read.c b/libsparse/sparse_read.c
+index 8e188e9a4..ee4abd86a 100644
+--- a/libsparse/sparse_read.c
++++ b/libsparse/sparse_read.c
+@@ -353,13 +353,11 @@ static int sparse_file_read_sparse(struct sparse_file *s, int fd, bool crc)
+ 	return 0;
+ }
+ 
+-static int sparse_file_read_normal(struct sparse_file *s, int fd)
++static int do_sparse_file_read_normal(struct sparse_file *s, int fd, uint32_t* buf, int64_t offset,
++				      int64_t remain)
+ {
+ 	int ret;
+-	uint32_t *buf = malloc(s->block_size);
+-	unsigned int block = 0;
+-	int64_t remain = s->len;
+-	int64_t offset = 0;
++	unsigned int block = offset / s->block_size;
+ 	unsigned int to_read;
+ 	unsigned int i;
+ 	bool sparse_block;
+@@ -403,6 +401,19 @@ static int sparse_file_read_normal(struct sparse_file *s, int fd)
+ 	return 0;
+ }
+ 
++static int sparse_file_read_normal(struct sparse_file* s, int fd)
++{
++	int ret;
++	uint32_t* buf = (uint32_t*)malloc(s->block_size);
++
++	if (!buf)
++		return -ENOMEM;
++
++	ret = do_sparse_file_read_normal(s, fd, buf, 0, s->len);
++	free(buf);
++	return ret;
++}
++
+ int sparse_file_read(struct sparse_file *s, int fd, bool sparse, bool crc)
+ {
+ 	if (crc && !sparse) {
+-- 
+2.35.1.1320.gc452695387.dirty
+
diff --git a/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/core/0016-libsparse-Add-hole-mode-to-sparse_file_read.patch b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/core/0016-libsparse-Add-hole-mode-to-sparse_file_read.patch
new file mode 100644
index 0000000..e5221d2
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/core/0016-libsparse-Add-hole-mode-to-sparse_file_read.patch
@@ -0,0 +1,188 @@
+From 41574b628ec4229c24dfe289af7b6978edcca4ed Mon Sep 17 00:00:00 2001
+From: Sean Anderson <sean.anderson@seco.com>
+Date: Thu, 30 Dec 2021 15:19:41 -0500
+Subject: [PATCH] libsparse: Add "hole" mode to sparse_file_read
+
+This adds support for filesystem-level sparse files. These files have
+holes which are not stored in the filesystem and when read are full of
+zeros. While these zeros may be significant in some types of files,
+other types of files may not care about the contents of holes. For
+example, most filesystem creation tools write to all the blocks they
+care about. Those blocks not written to will remain holes, and can be
+safely represented by "don't care" chunks. Using "don't care" chunks
+instead of fill chunks can result in a substantial reduction of the time
+it takes to program a sparse image.
+
+To accomplish this, we extend the existing "sparse" boolean parameter to
+be an enum of mode types. This enum represents the strategy we take when
+reading in a file. For the most part the implementation is
+straightforward. We use lseek to determine where the holes in the file
+are, and then use do_sparse_file_read_normal to create chunks for the
+data section. Note that every file has an implicit hole at its end.
+
+Change-Id: I0cfbf08886fca9a91cb753ec8734c84fcbe52c9f
+Upstream-Status: Backport [f96466b05543b984ef7315d830bab4a409228d35]
+Signed-off-by: Sean Anderson <sean.anderson@seco.com>
+---
+ libsparse/img2simg.c              |  2 +-
+ libsparse/include/sparse/sparse.h | 32 +++++++++++---
+ libsparse/sparse_read.c           | 71 +++++++++++++++++++++++++++++--
+ 3 files changed, 93 insertions(+), 12 deletions(-)
+
+diff --git a/libsparse/img2simg.c b/libsparse/img2simg.c
+index a0db36f45..2e171b613 100644
+--- a/libsparse/img2simg.c
++++ b/libsparse/img2simg.c
+@@ -96,7 +96,7 @@ int main(int argc, char *argv[])
+ 	}
+ 
+ 	sparse_file_verbose(s);
+-	ret = sparse_file_read(s, in, false, false);
++	ret = sparse_file_read(s, in, SPARSE_READ_MODE_NORMAL, false);
+ 	if (ret) {
+ 		fprintf(stderr, "Failed to read file\n");
+ 		exit(-1);
+diff --git a/libsparse/include/sparse/sparse.h b/libsparse/include/sparse/sparse.h
+index 8b757d22a..b68aa21a8 100644
+--- a/libsparse/include/sparse/sparse.h
++++ b/libsparse/include/sparse/sparse.h
+@@ -196,23 +196,41 @@ int64_t sparse_file_len(struct sparse_file *s, bool sparse, bool crc);
+ int sparse_file_callback(struct sparse_file *s, bool sparse, bool crc,
+ 		int (*write)(void *priv, const void *data, int len), void *priv);
+ 
++/**
++ * enum sparse_read_mode - The method to use when reading in files
++ * @SPARSE_READ_MODE_NORMAL: The input is a regular file. Constant chunks of
++ *                           data (including holes) will be converted to
++ *                           fill chunks.
++ * @SPARSE_READ_MODE_SPARSE: The input is an Android sparse file.
++ * @SPARSE_READ_MODE_HOLE: The input is a regular file. Holes will be converted
++ *                         to "don't care" chunks. Other constant chunks will
++ *                         be converted to fill chunks.
++ */
++enum sparse_read_mode {
++	SPARSE_READ_MODE_NORMAL = false,
++	SPARSE_READ_MODE_SPARSE = true,
++	SPARSE_READ_MODE_HOLE,
++};
++
+ /**
+  * sparse_file_read - read a file into a sparse file cookie
+  *
+  * @s - sparse file cookie
+  * @fd - file descriptor to read from
+- * @sparse - read a file in the Android sparse file format
++ * @mode - mode to use when reading the input file
+  * @crc - verify the crc of a file in the Android sparse file format
+  *
+- * Reads a file into a sparse file cookie.  If sparse is true, the file is
+- * assumed to be in the Android sparse file format.  If sparse is false, the
+- * file will be sparsed by looking for block aligned chunks of all zeros or
+- * another 32 bit value.  If crc is true, the crc of the sparse file will be
+- * verified.
++ * Reads a file into a sparse file cookie. If @mode is
++ * %SPARSE_READ_MODE_SPARSE, the file is assumed to be in the Android sparse
++ * file format. If @mode is %SPARSE_READ_MODE_NORMAL, the file will be sparsed
++ * by looking for block aligned chunks of all zeros or another 32 bit value. If
++ * @mode is %SPARSE_READ_MODE_HOLE, the file will be sparsed like
++ * %SPARSE_READ_MODE_NORMAL, but holes in the file will be converted to "don't
++ * care" chunks. If crc is true, the crc of the sparse file will be verified.
+  *
+  * Returns 0 on success, negative errno on error.
+  */
+-int sparse_file_read(struct sparse_file *s, int fd, bool sparse, bool crc);
++int sparse_file_read(struct sparse_file *s, int fd, enum sparse_read_mode mode, bool crc);
+ 
+ /**
+  * sparse_file_import - import an existing sparse file
+diff --git a/libsparse/sparse_read.c b/libsparse/sparse_read.c
+index ee4abd86a..81f48cc13 100644
+--- a/libsparse/sparse_read.c
++++ b/libsparse/sparse_read.c
+@@ -414,16 +414,79 @@ static int sparse_file_read_normal(struct sparse_file* s, int fd)
+ 	return ret;
+ }
+ 
+-int sparse_file_read(struct sparse_file *s, int fd, bool sparse, bool crc)
++#ifdef __linux__
++static int sparse_file_read_hole(struct sparse_file* s, int fd)
+ {
+-	if (crc && !sparse) {
++	int ret;
++	uint32_t* buf = (uint32_t*)malloc(s->block_size);
++	int64_t end = 0;
++	int64_t start = 0;
++
++	if (!buf) {
++		return -ENOMEM;
++	}
++
++	do {
++		start = lseek(fd, end, SEEK_DATA);
++		if (start < 0) {
++			if (errno == ENXIO)
++				/* The rest of the file is a hole */
++				break;
++
++			error("could not seek to data");
++			free(buf);
++			return -errno;
++		} else if (start > s->len) {
++			break;
++		}
++
++		end = lseek(fd, start, SEEK_HOLE);
++		if (end < 0) {
++			error("could not seek to end");
++			free(buf);
++			return -errno;
++		}
++		end = min(end, s->len);
++
++		start = ALIGN_DOWN(start, s->block_size);
++		end = ALIGN(end, s->block_size);
++		if (lseek(fd, start, SEEK_SET) < 0) {
++			free(buf);
++			return -errno;
++		}
++
++		ret = do_sparse_file_read_normal(s, fd, buf, start, end - start);
++		if (ret) {
++			free(buf);
++			return ret;
++		}
++	} while (end < s->len);
++
++	free(buf);
++	return 0;
++}
++#else
++static int sparse_file_read_hole(struct sparse_file* s __unused, int fd __unused)
++{
++	return -ENOTSUP;
++}
++#endif
++
++int sparse_file_read(struct sparse_file *s, int fd, enum sparse_read_mode mode, bool crc)
++{
++	if (crc && mode != SPARSE_READ_MODE_SPARSE) {
+ 		return -EINVAL;
+ 	}
+ 
+-	if (sparse) {
++	switch (mode) {
++	case SPARSE_READ_MODE_SPARSE:
+ 		return sparse_file_read_sparse(s, fd, crc);
+-	} else {
++	case SPARSE_READ_MODE_NORMAL:
+ 		return sparse_file_read_normal(s, fd);
++	case SPARSE_READ_MODE_HOLE:
++		return sparse_file_read_hole(s, fd);
++	default:
++		return -EINVAL;
+ 	}
+ }
+ 
+-- 
+2.35.1.1320.gc452695387.dirty
+
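For experimentation outside the build, the SEEK_DATA/SEEK_HOLE walk that sparse_file_read_hole performs in the hunk above can be sketched in Python (Linux-only; the image name in the usage comment is hypothetical):

import errno
import os

def data_extents(path):
    # Yield (start, end) byte ranges that hold data, skipping
    # filesystem-level holes via Linux SEEK_DATA/SEEK_HOLE.
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        end = 0
        while end < size:
            try:
                start = os.lseek(fd, end, os.SEEK_DATA)
            except OSError as e:
                if e.errno == errno.ENXIO:   # the rest of the file is a hole
                    break
                raise
            end = os.lseek(fd, start, os.SEEK_HOLE)
            yield (start, min(end, size))
    finally:
        os.close(fd)

# Example (hypothetical image file):
# print(list(data_extents("rootfs.ext4")))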
diff --git a/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/core/0017-img2simg-Add-support-for-converting-holes-to-don-t-c.patch b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/core/0017-img2simg-Add-support-for-converting-holes-to-don-t-c.patch
new file mode 100644
index 0000000..9d19f58
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/core/0017-img2simg-Add-support-for-converting-holes-to-don-t-c.patch
@@ -0,0 +1,114 @@
+From 00cce57eff1a0de3b93efa5da225e9eb33d19659 Mon Sep 17 00:00:00 2001
+From: Sean Anderson <sean.anderson@seco.com>
+Date: Thu, 30 Dec 2021 15:34:28 -0500
+Subject: [PATCH] img2simg: Add support for converting holes to "don't care"
+ chunks
+
+This adds support for converting files with holes to "don't care"
+chunks. This can result in a substantial reduction in the time it takes
+to program an image if it has many holes.
+
+Generally, constants compared to argc have been reduced by one, since we
+no longer have the program name as the first argument.
+
+Change-Id: I00750edc07d6415dcc07ae0351e9397b0222b7ba
+Upstream-Status: Backport [6150b00b6025918da8c28e5c2f929ecdf480a9d6]
+Signed-off-by: Sean Anderson <sean.anderson@seco.com>
+---
+ libsparse/img2simg.c | 41 ++++++++++++++++++++++++++++++-----------
+ 1 file changed, 30 insertions(+), 11 deletions(-)
+
+diff --git a/libsparse/img2simg.c b/libsparse/img2simg.c
+index 2e171b613..c985d5449 100644
+--- a/libsparse/img2simg.c
++++ b/libsparse/img2simg.c
+@@ -40,25 +40,42 @@
+ 
+ void usage()
+ {
+-    fprintf(stderr, "Usage: img2simg <raw_image_file> <sparse_image_file> [<block_size>]\n");
++    fprintf(stderr, "Usage: img2simg [-s] <raw_image_file> <sparse_image_file> [<block_size>]\n");
+ }
+ 
+ int main(int argc, char *argv[])
+ {
++	char *arg_in;
++	char *arg_out;
++	enum sparse_read_mode mode = SPARSE_READ_MODE_NORMAL;
++	int extra;
+ 	int in;
++	int opt;
+ 	int out;
+ 	int ret;
+ 	struct sparse_file *s;
+ 	unsigned int block_size = 4096;
+ 	off64_t len;
+ 
+-	if (argc < 3 || argc > 4) {
++	while ((opt = getopt(argc, argv, "s")) != -1) {
++		switch (opt) {
++		case 's':
++			mode = SPARSE_READ_MODE_HOLE;
++			break;
++		default:
++			usage();
++			exit(-1);
++		}
++	}
++
++	extra = argc - optind;
++	if (extra < 2 || extra > 3) {
+ 		usage();
+ 		exit(-1);
+ 	}
+ 
+-	if (argc == 4) {
+-		block_size = atoi(argv[3]);
++	if (extra == 3) {
++		block_size = atoi(argv[optind + 2]);
+ 	}
+ 
+ 	if (block_size < 1024 || block_size % 4 != 0) {
+@@ -66,22 +83,24 @@ int main(int argc, char *argv[])
+ 		exit(-1);
+ 	}
+ 
+-	if (strcmp(argv[1], "-") == 0) {
++	arg_in = argv[optind];
++	if (strcmp(arg_in, "-") == 0) {
+ 		in = STDIN_FILENO;
+ 	} else {
+-		in = open(argv[1], O_RDONLY | O_BINARY);
++		in = open(arg_in, O_RDONLY | O_BINARY);
+ 		if (in < 0) {
+-			fprintf(stderr, "Cannot open input file %s\n", argv[1]);
++			fprintf(stderr, "Cannot open input file %s\n", arg_in);
+ 			exit(-1);
+ 		}
+ 	}
+ 
+-	if (strcmp(argv[2], "-") == 0) {
++	arg_out = argv[optind + 1];
++	if (strcmp(arg_out, "-") == 0) {
+ 		out = STDOUT_FILENO;
+ 	} else {
+-		out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0664);
++		out = open(arg_out, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0664);
+ 		if (out < 0) {
+-			fprintf(stderr, "Cannot open output file %s\n", argv[2]);
++			fprintf(stderr, "Cannot open output file %s\n", arg_out);
+ 			exit(-1);
+ 		}
+ 	}
+@@ -96,7 +115,7 @@ int main(int argc, char *argv[])
+ 	}
+ 
+ 	sparse_file_verbose(s);
+-	ret = sparse_file_read(s, in, SPARSE_READ_MODE_NORMAL, false);
++	ret = sparse_file_read(s, in, mode, false);
+ 	if (ret) {
+ 		fprintf(stderr, "Failed to read file\n");
+ 		exit(-1);
+-- 
+2.35.1.1320.gc452695387.dirty
+
diff --git a/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools_5.1.1.r37.bb b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools_5.1.1.r37.bb
index 8f28abb..2639360 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools_5.1.1.r37.bb
+++ b/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools_5.1.1.r37.bb
@@ -41,6 +41,9 @@
     file://core/adb_libssl_11.diff;patchdir=system/core \
     file://core/0013-adb-Support-riscv64.patch;patchdir=system/core \
     file://core/0014-add-u3-ss-descriptor-support-for-adb.patch;patchdir=system/core \
+    file://core/0015-libsparse-Split-off-most-of-sparse_file_read_normal-.patch;patchdir=system/core \
+    file://core/0016-libsparse-Add-hole-mode-to-sparse_file_read.patch;patchdir=system/core \
+    file://core/0017-img2simg-Add-support-for-converting-holes-to-don-t-c.patch;patchdir=system/core \
     file://extras/0001-ext4_utils-remove-selinux-extensions.patch;patchdir=system/extras \
     file://extras/0002-ext4_utils-add-o-argument-to-preserve-ownership.patch;patchdir=system/extras \
     file://libselinux/0001-Remove-bionic-specific-calls.patch;patchdir=external/libselinux \
diff --git a/meta-openembedded/meta-oe/recipes-devtools/ctags/ctags_5.9.20220703.0.bb b/meta-openembedded/meta-oe/recipes-devtools/ctags/ctags_5.9.20220703.0.bb
deleted file mode 100644
index fb736db..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/ctags/ctags_5.9.20220703.0.bb
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright (C) 2015 Igor Santos <igor.santos@aker.com.br>
-# Released under the MIT license (see COPYING.MIT for the terms)
-
-SUMMARY = "Universal Ctags"
-DESCRIPTION = "Universal Ctags is a multilanguage reimplementation of the \
-               Unix ctags utility. Ctags generates an index of source code \
-               definitions which is used by numerous editors and utilities \
-               to instantly locate the definitions."
-
-HOMEPAGE = "https://ctags.io/"
-
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=0636e73ff0215e8d672dc4c32c317bb3"
-
-inherit autotools-brokensep pkgconfig manpages
-
-SRCREV = "787811b6eac03ab81ccf5b6957432d893eb982cb"
-SRC_URI = "git://github.com/universal-ctags/ctags;branch=master;protocol=https"
-
-S = "${WORKDIR}/git"
-
-PACKAGECONFIG ??= " \
-    readcmd \
-    xml \
-    json \
-    yaml \
-"
-PACKAGECONFIG[readcmd] = "--enable-readcmd,--disable-readcmd"
-PACKAGECONFIG[etags] = "--enable-etags,--disable-etags"
-PACKAGECONFIG[xml] = "--enable-xml,--disable-xml,libxml2"
-PACKAGECONFIG[json] = "--enable-json,--disable-json,jansson"
-PACKAGECONFIG[seccomp] = "--enable-seccomp,--disable-seccomp,libseccomp"
-PACKAGECONFIG[yaml] = "--enable-yaml,--disable-yaml,libyaml"
-PACKAGECONFIG[manpages] = ",,python3-docutils-native"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/ctags/ctags_5.9.20220821.0.bb b/meta-openembedded/meta-oe/recipes-devtools/ctags/ctags_5.9.20220821.0.bb
new file mode 100644
index 0000000..31f4935
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/ctags/ctags_5.9.20220821.0.bb
@@ -0,0 +1,34 @@
+# Copyright (C) 2015 Igor Santos <igor.santos@aker.com.br>
+# Released under the MIT license (see COPYING.MIT for the terms)
+
+SUMMARY = "Universal Ctags"
+DESCRIPTION = "Universal Ctags is a multilanguage reimplementation of the \
+               Unix ctags utility. Ctags generates an index of source code \
+               definitions which is used by numerous editors and utilities \
+               to instantly locate the definitions."
+
+HOMEPAGE = "https://ctags.io/"
+
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=0636e73ff0215e8d672dc4c32c317bb3"
+
+inherit autotools-brokensep pkgconfig manpages
+
+SRCREV = "40551e21c507c2426a323373f3ff200799150429"
+SRC_URI = "git://github.com/universal-ctags/ctags;branch=master;protocol=https"
+
+S = "${WORKDIR}/git"
+
+PACKAGECONFIG ??= " \
+    readcmd \
+    xml \
+    json \
+    yaml \
+"
+PACKAGECONFIG[readcmd] = "--enable-readcmd,--disable-readcmd"
+PACKAGECONFIG[etags] = "--enable-etags,--disable-etags"
+PACKAGECONFIG[xml] = "--enable-xml,--disable-xml,libxml2"
+PACKAGECONFIG[json] = "--enable-json,--disable-json,jansson"
+PACKAGECONFIG[seccomp] = "--enable-seccomp,--disable-seccomp,libseccomp"
+PACKAGECONFIG[yaml] = "--enable-yaml,--disable-yaml,libyaml"
+PACKAGECONFIG[manpages] = ",,python3-docutils-native"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/debootstrap/debootstrap_1.0.126.bb b/meta-openembedded/meta-oe/recipes-devtools/debootstrap/debootstrap_1.0.126.bb
deleted file mode 100644
index 38b015c..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/debootstrap/debootstrap_1.0.126.bb
+++ /dev/null
@@ -1,27 +0,0 @@
-SUMMARY = "Install a Debian system into a subdirectory"
-HOMEPAGE = "https://wiki.debian.org/Debootstrap"
-SECTION = "devel"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://debian/copyright;md5=1e68ced6e1689d4cd9dac75ff5225608"
-
-SRC_URI  = "\
-    http://http.debian.net/debian/pool/main/d/debootstrap/debootstrap_${PV}.tar.gz \
-    file://0001-support-to-override-usr-sbin-and-usr-share.patch \
-    file://0002-support-to-override-usr-bin-arch-test.patch \
-    file://0001-do-not-hardcode-the-full-path-of-dpkg.patch \
-"
-
-SRC_URI[sha256sum] = "bc48e1c500c33bed50bd00d201f338d3c92d6c0dcb1f202c3bc00ef01f96c618"
-
-S = "${WORKDIR}/debootstrap"
-
-DEPENDS = " \
-    virtual/fakeroot-native \
-"
-
-fakeroot do_install() {
-    oe_runmake 'DESTDIR=${D}' install
-    chown -R root:root ${D}${datadir}/debootstrap
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/debootstrap/debootstrap_1.0.127.bb b/meta-openembedded/meta-oe/recipes-devtools/debootstrap/debootstrap_1.0.127.bb
new file mode 100644
index 0000000..5e0a488
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/debootstrap/debootstrap_1.0.127.bb
@@ -0,0 +1,27 @@
+SUMMARY = "Install a Debian system into a subdirectory"
+HOMEPAGE = "https://wiki.debian.org/Debootstrap"
+SECTION = "devel"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://debian/copyright;md5=1e68ced6e1689d4cd9dac75ff5225608"
+
+SRC_URI  = "\
+    http://http.debian.net/debian/pool/main/d/debootstrap/debootstrap_${PV}.tar.gz \
+    file://0001-support-to-override-usr-sbin-and-usr-share.patch \
+    file://0002-support-to-override-usr-bin-arch-test.patch \
+    file://0001-do-not-hardcode-the-full-path-of-dpkg.patch \
+"
+
+SRC_URI[sha256sum] = "45887cf0582e6d16598e50713278d16b2272d02bdd117a9876e98277300dabd4"
+
+S = "${WORKDIR}/debootstrap"
+
+DEPENDS = " \
+    virtual/fakeroot-native \
+"
+
+fakeroot do_install() {
+    oe_runmake 'DESTDIR=${D}' install
+    chown -R root:root ${D}${datadir}/debootstrap
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/dnf-plugin-tui/dnf-plugin-tui_git.bb b/meta-openembedded/meta-oe/recipes-devtools/dnf-plugin-tui/dnf-plugin-tui_git.bb
index ccbea58..f170b9f 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/dnf-plugin-tui/dnf-plugin-tui_git.bb
+++ b/meta-openembedded/meta-oe/recipes-devtools/dnf-plugin-tui/dnf-plugin-tui_git.bb
@@ -4,7 +4,7 @@
 LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
 
 SRC_URI = "git://github.com/ubinux/dnf-plugin-tui.git;branch=master;protocol=https"
-SRCREV = "f6ee3fa8fe6f3d7ca8d5b104aa73c1a5c938a403"
+SRCREV = "bac88927b253cdcfe0d06ac7dc5afb876cd2d996"
 PV = "1.3"
 
 SRC_URI:append:class-target = " file://oe-remote.repo.sample"
@@ -28,7 +28,14 @@
     install -m 0644 ${WORKDIR}/oe-remote.repo.sample ${D}${sysconfdir}/yum.repos.d
 }
 
+do_install:append:class-nativesdk() {
+    install -d -p ${D}/${SDKPATH}/postinst-intercepts
+    cp -r ${COREBASE}/scripts/postinst-intercepts/* ${D}/${SDKPATH}/postinst-intercepts/
+    sed -i -e 's/STAGING_DIR_NATIVE/NATIVE_ROOT/g' ${D}/${SDKPATH}/postinst-intercepts/*
+}
+
 FILES:${PN} += "${datadir}/dnf"
+FILES:${PN} += "${SDKPATH}/postinst-intercepts"
 
 RDEPENDS:${PN} += " \
     bash \
diff --git a/meta-openembedded/meta-oe/recipes-devtools/gst-editing-services/gst-editing-services_1.20.3.bb b/meta-openembedded/meta-oe/recipes-devtools/gst-editing-services/gst-editing-services_1.20.3.bb
new file mode 100644
index 0000000..d14869b
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/gst-editing-services/gst-editing-services_1.20.3.bb
@@ -0,0 +1,26 @@
+# Copyright (C) 2022 Khem Raj <raj.khem@gmail.com>
+# Released under the MIT license (see COPYING.MIT for the terms)
+
+SUMMARY = "Gstreamer editing services"
+HOMEPAGE = "http://cgit.freedesktop.org/gstreamer/gst-editing-services/"
+
+LICENSE = "GPL-2.0-on-later & LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=6762ed442b3822387a51c92d928ead0d \
+                    file://COPYING.LIB;md5=6762ed442b3822387a51c92d928ead0d"
+
+DEPENDS = "flex-native gstreamer1.0 gstreamer1.0-plugins-base gstreamer1.0-plugins-bad python3-pygobject"
+
+REQUIRED_DISTRO_FEATURES = "gobject-introspection-data"
+GIR_MESON_OPTION = ""
+
+inherit meson pkgconfig upstream-version-is-even gobject-introspection features_check bash-completion
+
+EXTRA_OEMESON = "-Dvalidate=disabled"
+
+SRC_URI = "http://gstreamer.freedesktop.org/src/gst-editing-services/gst-editing-services-${PV}.tar.xz"
+SRC_URI[sha256sum] = "5fd896de69fbe24421eb6b0ff8d2f8b4c3cba3f3025ceacd302172f39a8abaa2"
+
+PACKAGES += "gst-validate-launcher libges"
+
+FILES:gst-validate-launcher = "${nonarch_libdir}/gst-validate-launcher ${datadir}/gstreamer-1.0/validate"
+FILES:libges = "${libdir}/gstreamer-1.0/*.so"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/lapack/lapack_3.10.1.bb b/meta-openembedded/meta-oe/recipes-devtools/lapack/lapack_3.10.1.bb
index 00dfb09..15f394e 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/lapack/lapack_3.10.1.bb
+++ b/meta-openembedded/meta-oe/recipes-devtools/lapack/lapack_3.10.1.bb
@@ -17,6 +17,9 @@
 SRC_URI = "git://github.com/Reference-LAPACK/lapack.git;protocol=https;branch=master"
 S = "${WORKDIR}/git"
 
+PACKAGECONFIG ?= ""
+PACKAGECONFIG[lapacke] = "-DLAPACKE=ON,-DLAPACKE=OFF"
+
 EXTRA_OECMAKE = " -DBUILD_SHARED_LIBS=ON "
 OECMAKE_GENERATOR = "Unix Makefiles"
 
diff --git a/meta-openembedded/meta-oe/recipes-devtools/ldns/ldns_1.8.1.bb b/meta-openembedded/meta-oe/recipes-devtools/ldns/ldns_1.8.1.bb
deleted file mode 100644
index 81e1c62..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/ldns/ldns_1.8.1.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "LDNS is a DNS library that facilitates DNS tool programming"
-HOMEPAGE = "https://nlnetlabs.nl/ldns"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=34330f15b2b4abbbaaa7623f79a6a019"
-
-SRC_URI = "https://www.nlnetlabs.nl/downloads/ldns/ldns-${PV}.tar.gz"
-SRC_URI[sha256sum] = "958229abce4d3aaa19a75c0d127666564b17216902186e952ca4aef47c6d7fa3"
-
-DEPENDS = "openssl"
-
-inherit autotools-brokensep
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[drill] = "--with-drill,--without-drill"
-
-EXTRA_OECONF = "--with-ssl=${STAGING_EXECPREFIXDIR}"
-
-do_install:append() {
-    sed -e 's@[^ ]*-ffile-prefix-map=[^ "]*@@g' \
-        -e 's@[^ ]*-fdebug-prefix-map=[^ "]*@@g' \
-        -e 's@[^ ]*-fmacro-prefix-map=[^ "]*@@g' \
-        -i ${D}${libdir}/pkgconfig/*.pc
-}
diff --git a/meta-openembedded/meta-oe/recipes-devtools/ldns/ldns_1.8.3.bb b/meta-openembedded/meta-oe/recipes-devtools/ldns/ldns_1.8.3.bb
new file mode 100644
index 0000000..16816e62
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/ldns/ldns_1.8.3.bb
@@ -0,0 +1,23 @@
+SUMMARY = "LDNS is a DNS library that facilitates DNS tool programming"
+HOMEPAGE = "https://nlnetlabs.nl/ldns"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=34330f15b2b4abbbaaa7623f79a6a019"
+
+SRC_URI = "https://www.nlnetlabs.nl/downloads/ldns/ldns-${PV}.tar.gz"
+SRC_URI[sha256sum] = "c3f72dd1036b2907e3a56e6acf9dfb2e551256b3c1bbd9787942deeeb70e7860"
+
+DEPENDS = "openssl"
+
+inherit autotools-brokensep
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[drill] = "--with-drill,--without-drill"
+
+EXTRA_OECONF = "--with-ssl=${STAGING_EXECPREFIXDIR}"
+
+do_install:append() {
+    sed -e 's@[^ ]*-ffile-prefix-map=[^ "]*@@g' \
+        -e 's@[^ ]*-fdebug-prefix-map=[^ "]*@@g' \
+        -e 's@[^ ]*-fmacro-prefix-map=[^ "]*@@g' \
+        -i ${D}${libdir}/pkgconfig/*.pc
+}
diff --git a/meta-openembedded/meta-oe/recipes-devtools/makeself/makeself_2.4.5.bb b/meta-openembedded/meta-oe/recipes-devtools/makeself/makeself_2.4.5.bb
new file mode 100644
index 0000000..e0dfc3d
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/makeself/makeself_2.4.5.bb
@@ -0,0 +1,29 @@
+SUMMARY = "A self-extracting archiving tool for Unix systems, in 100% shell script."
+DESCRIPTION = "\
+    makeself.sh is a small shell script that generates a self-extractable \
+    compressed tar archive from a directory. The resulting file appears as \
+    a shell script (many of those have a .run suffix), and can be launched as is.\
+"
+HOMEPAGE = "https://makeself.io/"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+SRC_URI = "\
+    git://git@github.com/megastep/makeself.git;protocol=https;branch=master \
+"
+
+SRCREV = "5742be6410bfad2c619fb1e98bf795e8fa0913c7"
+
+S = "${WORKDIR}/git"
+
+do_configure[noexec] = "1"
+do_compile[noexec] = "1"
+
+do_install() {
+    install -d ${D}${bindir}
+    install -m 0755 ${S}/makeself.1 ${D}${bindir}/
+    install -m 0755 ${S}/makeself.sh ${D}${bindir}/
+    install -m 0755 ${S}/makeself-header.sh ${D}${bindir}/
+}
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/nlohmann-json/nlohmann-json_3.10.5.bb b/meta-openembedded/meta-oe/recipes-devtools/nlohmann-json/nlohmann-json_3.10.5.bb
deleted file mode 100644
index 0cf6fd3..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/nlohmann-json/nlohmann-json_3.10.5.bb
+++ /dev/null
@@ -1,30 +0,0 @@
-SUMMARY = "JSON for modern C++"
-HOMEPAGE = "https://nlohmann.github.io/json/"
-SECTION = "libs"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE.MIT;md5=f969127d7b7ed0a8a63c2bbeae002588"
-
-CVE_PRODUCT = "json-for-modern-cpp"
-
-SRC_URI = "git://github.com/nlohmann/json.git;nobranch=1;protocol=https \
-           "
-
-SRCREV = "4f8fba14066156b73f1189a2b8bd568bde5284c5"
-
-S = "${WORKDIR}/git"
-
-inherit cmake
-
-EXTRA_OECMAKE += "-DJSON_BuildTests=OFF"
-
-# nlohmann-json is a header only C++ library, so the main package will be empty.
-
-RDEPENDS:${PN}-dev = ""
-
-BBCLASSEXTEND = "native nativesdk"
-
-# other packages commonly reference the file directly as "json.hpp"
-# create symlink to allow this usage
-do_install:append() {
-    ln -s nlohmann/json.hpp ${D}${includedir}/json.hpp
-}
diff --git a/meta-openembedded/meta-oe/recipes-devtools/nlohmann-json/nlohmann-json_3.11.2.bb b/meta-openembedded/meta-oe/recipes-devtools/nlohmann-json/nlohmann-json_3.11.2.bb
new file mode 100644
index 0000000..5022628
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/nlohmann-json/nlohmann-json_3.11.2.bb
@@ -0,0 +1,30 @@
+SUMMARY = "JSON for modern C++"
+HOMEPAGE = "https://nlohmann.github.io/json/"
+SECTION = "libs"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE.MIT;md5=f969127d7b7ed0a8a63c2bbeae002588"
+
+CVE_PRODUCT = "json-for-modern-cpp"
+
+SRC_URI = "git://github.com/nlohmann/json.git;nobranch=1;protocol=https \
+           "
+
+SRCREV = "bc889afb4c5bf1c0d8ee29ef35eaaf4c8bef8a5d"
+
+S = "${WORKDIR}/git"
+
+inherit cmake
+
+EXTRA_OECMAKE += "-DJSON_BuildTests=OFF"
+
+# nlohmann-json is a header only C++ library, so the main package will be empty.
+
+RDEPENDS:${PN}-dev = ""
+
+BBCLASSEXTEND = "native nativesdk"
+
+# other packages commonly reference the file directly as "json.hpp"
+# create symlink to allow this usage
+do_install:append() {
+    ln -s nlohmann/json.hpp ${D}${includedir}/json.hpp
+}
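
Because nlohmann-json is header-only, consumers only need a build-time dependency; the symlink appended in do_install keeps sources that include the bare header name compiling. A sketch of a consuming recipe fragment (assumed, not taken from this changeset):

    # header-only library: a DEPENDS entry is sufficient, no runtime package is added
    DEPENDS += "nlohmann-json"
    # sources may use #include <nlohmann/json.hpp> or, via the compatibility
    # symlink created above, #include <json.hpp>
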
diff --git a/meta-openembedded/meta-oe/recipes-devtools/perl/ipc-run_20200505.0.bb b/meta-openembedded/meta-oe/recipes-devtools/perl/ipc-run_20200505.0.bb
deleted file mode 100644
index 34f3136..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/perl/ipc-run_20200505.0.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-DESCRIPTION = "\
-IPC::Run allows you run and interact with child processes \
-using files, pipes, and pseudo-ttys. Both system()-style and scripted \
-usages are supported and may be mixed. Likewise, functional and OO API \
-styles are both supported and may be mixed."
-HOMEPAGE = "https://metacpan.org/release/IPC-Run"
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=0ebd37caf53781e8b7223e6b99b63f4e"
-DEPENDS = "perl"
-
-SRC_URI = "git://github.com/toddr/IPC-Run.git;branch=master;protocol=https"
-SRCREV = "af435a1635ef9e48a84adc3230099e7ecf20c79d"
-
-S = "${WORKDIR}/git"
-
-inherit cpan
-
-EXTRA_CPANFLAGS = "EXPATLIBPATH=${STAGING_LIBDIR} EXPATINCPATH=${STAGING_INCDIR}"
-
-do_compile() {
-    export LIBC="$(find ${STAGING_DIR_TARGET}/${base_libdir}/ -name 'libc-*.so')"
-    cpan_do_compile
-}
diff --git a/meta-openembedded/meta-oe/recipes-devtools/perl/ipc-run_20220807.0.bb b/meta-openembedded/meta-oe/recipes-devtools/perl/ipc-run_20220807.0.bb
new file mode 100644
index 0000000..f597a8d
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/perl/ipc-run_20220807.0.bb
@@ -0,0 +1,24 @@
+DESCRIPTION = "\
+IPC::Run allows you run and interact with child processes \
+using files, pipes, and pseudo-ttys. Both system()-style and scripted \
+usages are supported and may be mixed. Likewise, functional and OO API \
+styles are both supported and may be mixed."
+HOMEPAGE = "https://metacpan.org/release/IPC-Run"
+SECTION = "libs"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=0ebd37caf53781e8b7223e6b99b63f4e"
+DEPENDS = "perl"
+
+SRC_URI = "git://github.com/toddr/IPC-Run.git;branch=master;protocol=https"
+SRCREV = "44b1f2d2021615c88f2f6b1a6cbdd9aebaeb4ad1"
+
+S = "${WORKDIR}/git"
+
+inherit cpan
+
+EXTRA_CPANFLAGS = "EXPATLIBPATH=${STAGING_LIBDIR} EXPATINCPATH=${STAGING_INCDIR}"
+
+do_compile() {
+    export LIBC="$(find ${STAGING_DIR_TARGET}/${base_libdir}/ -name 'libc-*.so')"
+    cpan_do_compile
+}
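
IPC::Run is a pure-Perl module built through the cpan class, so recipes whose scripts use it at run time only need a runtime dependency. An illustrative fragment (the consuming recipe is assumed, not part of this changeset):

    # in a recipe shipping Perl scripts that 'use IPC::Run'
    RDEPENDS:${PN} += "perl ipc-run"
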
diff --git a/meta-openembedded/meta-oe/recipes-devtools/php/php_8.1.8.bb b/meta-openembedded/meta-oe/recipes-devtools/php/php_8.1.8.bb
deleted file mode 100644
index c5a424c..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/php/php_8.1.8.bb
+++ /dev/null
@@ -1,286 +0,0 @@
-SUMMARY = "A server-side, HTML-embedded scripting language"
-HOMEPAGE = "http://www.php.net"
-SECTION = "console/network"
-
-LICENSE = "PHP-3.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=99532e0f6620bc9bca34f12fadaee33c"
-
-BBCLASSEXTEND = "native"
-DEPENDS = "zlib bzip2 libxml2 virtual/libiconv php-native lemon-native"
-DEPENDS:append:libc-musl = " libucontext"
-DEPENDS:class-native = "zlib-native libxml2-native"
-
-PHP_MAJOR_VERSION = "${@d.getVar('PV').split('.')[0]}"
-
-SRC_URI = "http://php.net/distributions/php-${PV}.tar.bz2 \
-           file://0002-build-php.m4-don-t-unset-cache-variables.patch \
-           file://0003-php-remove-host-specific-info-from-header-file.patch \
-           file://0004-configure.ac-don-t-include-build-libtool.m4.patch \
-           file://0006-ext-phar-Makefile.frag-Fix-phar-packaging.patch \
-           file://0009-php-don-t-use-broken-wrapper-for-mkdir.patch \
-           file://0010-iconv-fix-detection.patch \
-          "
-
-SRC_URI:append:class-target = " \
-            file://0001-ext-opcache-config.m4-enable-opcache.patch \
-            file://0005-pear-fix-Makefile.frag-for-Yocto.patch \
-            file://0007-sapi-cli-config.m4-fix-build-directory.patch \
-            file://0008-ext-imap-config.m4-fix-include-paths.patch \
-            file://php-fpm.conf \
-            file://php-fpm-apache.conf \
-            file://70_mod_php${PHP_MAJOR_VERSION}.conf \
-            file://php-fpm.service \
-          "
-
-S = "${WORKDIR}/php-${PV}"
-SRC_URI[sha256sum] = "b8815a5a02431453d4261e3598bd1f28516e4c0354f328c12890f257870e4c01"
-
-CVE_CHECK_IGNORE += "\
-    CVE-2007-2728 \
-    CVE-2007-3205 \
-    CVE-2007-4596 \
-"
-
-inherit autotools pkgconfig python3native gettext
-
-# phpize is not scanned for absolute paths by default (but php-config is).
-#
-SSTATE_SCAN_FILES += "phpize"
-SSTATE_SCAN_FILES += "build-defs.h"
-
-PHP_LIBDIR = "${libdir}/php${PHP_MAJOR_VERSION}"
-
-# Common EXTRA_OECONF
-COMMON_EXTRA_OECONF = "--enable-sockets \
-                       --enable-pcntl \
-                       --enable-shared \
-                       --disable-rpath \
-                       --with-pic \
-                       --libdir=${PHP_LIBDIR} \
-"
-EXTRA_OECONF = "--enable-mbstring \
-                --enable-fpm \
-                --with-libdir=${baselib} \
-                --with-gettext=${STAGING_LIBDIR}/.. \
-                --with-zlib=${STAGING_LIBDIR}/.. \
-                --with-iconv=${STAGING_LIBDIR}/.. \
-                --with-bz2=${STAGING_DIR_TARGET}${exec_prefix} \
-                --with-config-file-path=${sysconfdir}/php/apache2-php${PHP_MAJOR_VERSION} \
-                ${@oe.utils.conditional('SITEINFO_ENDIANNESS', 'le', 'ac_cv_c_bigendian_php=no', 'ac_cv_c_bigendian_php=yes', d)} \
-                ${@bb.utils.contains('PACKAGECONFIG', 'pam', '', 'ac_cv_lib_pam_pam_start=no', d)} \
-                ${COMMON_EXTRA_OECONF} \
-"
-
-EXTRA_OECONF:append:riscv64 = " --with-pcre-jit=no"
-EXTRA_OECONF:append:riscv32 = " --with-pcre-jit=no"
-# Needs fibers assembly implemented for rv32
-# for example rv64 implementation is below
-# see https://github.com/php/php-src/commit/70b02d75f2abe3a292d49c4a4e9e4f850c2fee68
-EXTRA_OECONF:append:riscv32:libc-musl = " --disable-fiber-asm"
-
-CACHED_CONFIGUREVARS += "ac_cv_func_dlopen=no ac_cv_lib_dl_dlopen=yes"
-
-EXTRA_OECONF:class-native = " \
-                --with-zlib=${STAGING_LIBDIR_NATIVE}/.. \
-                --without-iconv \
-                ${COMMON_EXTRA_OECONF} \
-"
-
-PACKAGECONFIG ??= "mysql sqlite3 imap opcache openssl \
-                   ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6 pam', d)} \
-"
-PACKAGECONFIG:class-native = ""
-
-PACKAGECONFIG[zip] = "--with-zip --with-zlib-dir=${STAGING_EXECPREFIXDIR},,libzip"
-
-PACKAGECONFIG[mysql] = "--with-mysqli=mysqlnd \
-                        --with-pdo-mysql=mysqlnd \
-                        ,--without-mysqli --without-pdo-mysql \
-                        ,mysql5"
-
-PACKAGECONFIG[sqlite3] = "--with-sqlite3=${STAGING_LIBDIR}/.. \
-                          --with-pdo-sqlite=${STAGING_LIBDIR}/.. \
-                          ,--without-sqlite3 --without-pdo-sqlite \
-                          ,sqlite3"
-PACKAGECONFIG[pgsql] = "--with-pgsql=${STAGING_DIR_TARGET}${exec_prefix},--without-pgsql,postgresql"
-PACKAGECONFIG[soap] = "--enable-soap, --disable-soap, libxml2"
-PACKAGECONFIG[apache2] = "--with-apxs2=${STAGING_BINDIR_CROSS}/apxs,,apache2-native apache2"
-PACKAGECONFIG[pam] = ",,libpam"
-PACKAGECONFIG[imap] = "--with-imap=${STAGING_DIR_HOST} \
-                       --with-imap-ssl=${STAGING_DIR_HOST} \
-                       ,--without-imap --without-imap-ssl \
-                       ,uw-imap"
-PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
-PACKAGECONFIG[opcache] = "--enable-opcache,--disable-opcache"
-PACKAGECONFIG[openssl] = "--with-openssl,--without-openssl,openssl"
-PACKAGECONFIG[valgrind] = "--with-valgrind=${STAGING_DIR_TARGET}/usr,--with-valgrind=no,valgrind"
-PACKAGECONFIG[mbregex] = "--enable-mbregex, --disable-mbregex, oniguruma"
-PACKAGECONFIG[mbstring] = "--enable-mbstring,,"
-
-export HOSTCC = "${BUILD_CC}"
-export PHP_NATIVE_DIR = "${STAGING_BINDIR_NATIVE}"
-export PHP_PEAR_PHP_BIN = "${STAGING_BINDIR_NATIVE}/php"
-CFLAGS += " -D_GNU_SOURCE -g -DPTYS_ARE_GETPT -DPTYS_ARE_SEARCHED -I${STAGING_INCDIR}/apache2"
-
-# Adding these flags enables dynamic library support, which is disabled by
-# default when cross compiling
-# See https://bugs.php.net/bug.php?id=60109
-CFLAGS += " -DHAVE_LIBDL "
-LDFLAGS += " -ldl "
-LDFLAGS:append:libc-musl = " -lucontext "
-
-EXTRA_OEMAKE = "INSTALL_ROOT=${D}"
-
-acpaths = ""
-
-do_configure:prepend () {
-    rm -f ${S}/build/libtool.m4 ${S}/ltmain.sh ${S}/aclocal.m4
-    find ${S} -name config.m4 | xargs -n1 sed -i 's!APXS_HTTPD=.*!APXS_HTTPD=${STAGING_SBINDIR_NATIVE}/httpd!'
-}
-
-do_configure:append() {
-    # No, libtool, we really don't want rpath set...
-    sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' libtool
-    sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=DIE_RPATH_DIE|g' libtool
-}
-
-do_install:append:class-native() {
-    rm -rf ${D}/${PHP_LIBDIR}/php/.registry
-    rm -rf ${D}/${PHP_LIBDIR}/php/.channels
-    rm -rf ${D}/${PHP_LIBDIR}/php/.[a-z]*
-}
-
-do_install:prepend() {
-    cat ${ACLOCALDIR}/libtool.m4 ${ACLOCALDIR}/lt~obsolete.m4 ${ACLOCALDIR}/ltoptions.m4 \
-        ${ACLOCALDIR}/ltsugar.m4 ${ACLOCALDIR}/ltversion.m4 > ${S}/build/libtool.m4
-}
-
-do_install:prepend:class-target() {
-    if ${@bb.utils.contains('PACKAGECONFIG', 'apache2', 'true', 'false', d)}; then
-        # Install dummy config file so apxs doesn't fail
-        install -d ${D}${sysconfdir}/apache2
-        printf "\nLoadModule dummy_module modules/mod_dummy.so\n" > ${D}${sysconfdir}/apache2/httpd.conf
-    fi
-}
-
-# fixme
-do_install:append:class-target() {
-    install -d ${D}${sysconfdir}/
-    rm -rf ${D}/.registry
-    rm -rf ${D}/.channels
-    rm -rf ${D}/.[a-z]*
-    rm -rf ${D}/var
-    rm -f  ${D}/${sysconfdir}/php-fpm.conf.default
-    install -m 0644 ${WORKDIR}/php-fpm.conf ${D}/${sysconfdir}/php-fpm.conf
-    install -d ${D}/${sysconfdir}/apache2/conf.d
-    install -m 0644 ${WORKDIR}/php-fpm-apache.conf ${D}/${sysconfdir}/apache2/conf.d/php-fpm.conf
-    install -d ${D}${sysconfdir}/init.d
-    sed -i 's:=/usr/sbin:=${sbindir}:g' ${B}/sapi/fpm/init.d.php-fpm
-    sed -i 's:=/etc:=${sysconfdir}:g' ${B}/sapi/fpm/init.d.php-fpm
-    sed -i 's:=/var:=${localstatedir}:g' ${B}/sapi/fpm/init.d.php-fpm
-    install -m 0755 ${B}/sapi/fpm/init.d.php-fpm ${D}${sysconfdir}/init.d/php-fpm
-    install -m 0644 ${WORKDIR}/php-fpm-apache.conf ${D}/${sysconfdir}/apache2/conf.d/php-fpm.conf
-
-    if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)};then
-        install -d ${D}${systemd_unitdir}/system
-        install -m 0644 ${WORKDIR}/php-fpm.service ${D}${systemd_unitdir}/system/
-        sed -i -e 's,@SYSCONFDIR@,${sysconfdir},g' \
-            -e 's,@LOCALSTATEDIR@,${localstatedir},g' \
-            ${D}${systemd_unitdir}/system/php-fpm.service
-    fi
-
-    if ${@bb.utils.contains('PACKAGECONFIG', 'apache2', 'true', 'false', d)}; then
-        install -d ${D}${sysconfdir}/apache2/modules.d
-        install -d ${D}${sysconfdir}/php/apache2-php${PHP_MAJOR_VERSION}
-        install -m 644  ${WORKDIR}/70_mod_php${PHP_MAJOR_VERSION}.conf ${D}${sysconfdir}/apache2/modules.d
-        sed -i s,lib/,${libexecdir}/, ${D}${sysconfdir}/apache2/modules.d/70_mod_php${PHP_MAJOR_VERSION}.conf
-        cat ${S}/php.ini-production | \
-            sed -e 's,extension_dir = \"\./\",extension_dir = \"/usr/lib/extensions\",' \
-            > ${D}${sysconfdir}/php/apache2-php${PHP_MAJOR_VERSION}/php.ini
-        rm -f ${D}${sysconfdir}/apache2/httpd.conf*
-    fi
-}
-
-SYSROOT_PREPROCESS_FUNCS += "php_sysroot_preprocess"
-
-php_sysroot_preprocess () {
-    install -d ${SYSROOT_DESTDIR}${bindir_crossscripts}/
-    install -m 755 ${D}${bindir}/phpize ${SYSROOT_DESTDIR}${bindir_crossscripts}/
-    install -m 755 ${D}${bindir}/php-config ${SYSROOT_DESTDIR}${bindir_crossscripts}/
-
-    sed -i 's!eval echo /!eval echo ${STAGING_DIR_HOST}/!' ${SYSROOT_DESTDIR}${bindir_crossscripts}/phpize
-    sed -i 's!^include_dir=.*!include_dir=${STAGING_INCDIR}/php!' ${SYSROOT_DESTDIR}${bindir_crossscripts}/php-config
-}
-
-MODPHP_PACKAGE = "${@bb.utils.contains('PACKAGECONFIG', 'apache2', '${PN}-modphp', '', d)}"
-
-PACKAGES = "${PN}-dbg ${PN}-cli ${PN}-phpdbg ${PN}-cgi ${PN}-fpm ${PN}-fpm-apache2 ${PN}-pear ${PN}-phar ${MODPHP_PACKAGE} ${PN}-dev ${PN}-staticdev ${PN}-doc ${PN}-opcache ${PN}"
-
-RDEPENDS:${PN} += "libgcc"
-RDEPENDS:${PN}-pear = "${PN}"
-RDEPENDS:${PN}-phar = "${PN}-cli"
-RDEPENDS:${PN}-cli = "${PN}"
-RDEPENDS:${PN}-modphp = "${PN} apache2"
-RDEPENDS:${PN}-opcache = "${PN}"
-
-ALLOW_EMPTY:${PN} = "1"
-
-INITSCRIPT_PACKAGES = "${PN}-fpm"
-inherit update-rc.d
-
-# WARNING: lib32-php-8.0.12-r0 do_package_qa: QA Issue: lib32-php: ELF binary /usr/libexec/apache2/modules/libphp.so has relocations in .text [textrel]
-#WARNING: lib32-php-8.0.12-r0 do_package_qa: QA Issue: lib32-php-opcache: ELF binary /usr/lib/php8/extensions/no-debug-zts-20200930/opcache.so has relocations in .text [textrel]
-INSANE_SKIP:${PN}:append:x86 = " textrel"
-INSANE_SKIP:${PN}-opcache:append:x86 = " textrel"
-
-FILES:${PN}-dbg =+ "${bindir}/.debug \
-                    ${libexecdir}/apache2/modules/.debug"
-FILES:${PN}-doc += "${PHP_LIBDIR}/php/doc"
-FILES:${PN}-cli = "${bindir}/php"
-FILES:${PN}-phpdbg = "${bindir}/phpdbg"
-FILES:${PN}-phar = "${bindir}/phar*"
-FILES:${PN}-cgi = "${bindir}/php-cgi"
-FILES:${PN}-fpm = "${sbindir}/php-fpm ${sysconfdir}/php-fpm.conf ${datadir}/fpm ${sysconfdir}/init.d/php-fpm ${systemd_unitdir}/system/php-fpm.service ${sysconfdir}/php-fpm.d/www.conf.default"
-FILES:${PN}-fpm-apache2 = "${sysconfdir}/apache2/conf.d/php-fpm.conf"
-CONFFILES:${PN}-fpm = "${sysconfdir}/php-fpm.conf"
-CONFFILES:${PN}-fpm-apache2 = "${sysconfdir}/apache2/conf.d/php-fpm.conf"
-INITSCRIPT_NAME:${PN}-fpm = "php-fpm"
-INITSCRIPT_PARAMS:${PN}-fpm = "defaults 60"
-FILES:${PN}-pear = "${bindir}/pear* ${bindir}/pecl ${PHP_LIBDIR}/php/PEAR \
-                ${PHP_LIBDIR}/php/PEAR*.php ${PHP_LIBDIR}/php/System.php \
-                ${PHP_LIBDIR}/php/peclcmd.php ${PHP_LIBDIR}/php/pearcmd.php \
-                ${PHP_LIBDIR}/php/.channels ${PHP_LIBDIR}/php/.channels/.alias \
-                ${PHP_LIBDIR}/php/.registry ${PHP_LIBDIR}/php/Archive/Tar.php \
-                ${PHP_LIBDIR}/php/Console/Getopt.php ${PHP_LIBDIR}/php/OS/Guess.php \
-                ${PHP_LIBDIR}/php/data/PEAR \
-                ${sysconfdir}/pear.conf"
-FILES:${PN}-dev = "${includedir}/php ${PHP_LIBDIR}/build ${bindir}/phpize \
-                ${bindir}/php-config ${PHP_LIBDIR}/php/.depdb \
-                ${PHP_LIBDIR}/php/.depdblock ${PHP_LIBDIR}/php/.filemap \
-                ${PHP_LIBDIR}/php/.lock ${PHP_LIBDIR}/php/test"
-FILES:${PN}-staticdev += "${PHP_LIBDIR}/extensions/*/*.a"
-FILES:${PN}-opcache = "${PHP_LIBDIR}/extensions/*/opcache${SOLIBSDEV}"
-FILES:${PN} = "${PHP_LIBDIR}/php"
-FILES:${PN} += "${bindir} ${libexecdir}/apache2"
-
-SUMMARY:${PN}-modphp = "PHP module for the Apache HTTP server"
-FILES:${PN}-modphp = "${libdir}/apache2 ${sysconfdir}"
-
-MODPHP_OLDPACKAGE = "${@bb.utils.contains('PACKAGECONFIG', 'apache2', 'modphp', '', d)}"
-RPROVIDES:${PN}-modphp = "${MODPHP_OLDPACKAGE}"
-RREPLACES:${PN}-modphp = "${MODPHP_OLDPACKAGE}"
-RCONFLICTS:${PN}-modphp = "${MODPHP_OLDPACKAGE}"
-
-do_install:append:class-native() {
-    create_wrapper ${D}${bindir}/php \
-        PHP_PEAR_SYSCONF_DIR=${sysconfdir}/
-}
-
-# Fails to build with thumb-1 (qemuarm)
-# | {standard input}: Assembler messages:
-# | {standard input}:3719: Error: selected processor does not support Thumb mode `smull r0,r2,r9,r3'
-# | {standard input}:3720: Error: unshifted register required -- `sub r2,r2,r0,asr#31'
-# | {standard input}:3796: Error: selected processor does not support Thumb mode `smull r0,r2,r3,r3'
-# | {standard input}:3797: Error: unshifted register required -- `sub r2,r2,r0,asr#31'
-# | make: *** [ext/standard/math.lo] Error 1
-ARM_INSTRUCTION_SET = "arm"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/php/php_8.1.9.bb b/meta-openembedded/meta-oe/recipes-devtools/php/php_8.1.9.bb
new file mode 100644
index 0000000..03756e0
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/php/php_8.1.9.bb
@@ -0,0 +1,286 @@
+SUMMARY = "A server-side, HTML-embedded scripting language"
+HOMEPAGE = "http://www.php.net"
+SECTION = "console/network"
+
+LICENSE = "PHP-3.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=99532e0f6620bc9bca34f12fadaee33c"
+
+BBCLASSEXTEND = "native"
+DEPENDS = "zlib bzip2 libxml2 virtual/libiconv php-native lemon-native"
+DEPENDS:append:libc-musl = " libucontext"
+DEPENDS:class-native = "zlib-native libxml2-native"
+
+PHP_MAJOR_VERSION = "${@d.getVar('PV').split('.')[0]}"
+
+SRC_URI = "http://php.net/distributions/php-${PV}.tar.bz2 \
+           file://0002-build-php.m4-don-t-unset-cache-variables.patch \
+           file://0003-php-remove-host-specific-info-from-header-file.patch \
+           file://0004-configure.ac-don-t-include-build-libtool.m4.patch \
+           file://0006-ext-phar-Makefile.frag-Fix-phar-packaging.patch \
+           file://0009-php-don-t-use-broken-wrapper-for-mkdir.patch \
+           file://0010-iconv-fix-detection.patch \
+          "
+
+SRC_URI:append:class-target = " \
+            file://0001-ext-opcache-config.m4-enable-opcache.patch \
+            file://0005-pear-fix-Makefile.frag-for-Yocto.patch \
+            file://0007-sapi-cli-config.m4-fix-build-directory.patch \
+            file://0008-ext-imap-config.m4-fix-include-paths.patch \
+            file://php-fpm.conf \
+            file://php-fpm-apache.conf \
+            file://70_mod_php${PHP_MAJOR_VERSION}.conf \
+            file://php-fpm.service \
+          "
+
+S = "${WORKDIR}/php-${PV}"
+SRC_URI[sha256sum] = "9ebb0e2e571db6fd5930428dcb2d19ed3e050338ec1f1347c282cae92fc086ff"
+
+CVE_CHECK_IGNORE += "\
+    CVE-2007-2728 \
+    CVE-2007-3205 \
+    CVE-2007-4596 \
+"
+
+inherit autotools pkgconfig python3native gettext
+
+# phpize is not scanned for absolute paths by default (but php-config is).
+#
+SSTATE_SCAN_FILES += "phpize"
+SSTATE_SCAN_FILES += "build-defs.h"
+
+PHP_LIBDIR = "${libdir}/php${PHP_MAJOR_VERSION}"
+
+# Common EXTRA_OECONF
+COMMON_EXTRA_OECONF = "--enable-sockets \
+                       --enable-pcntl \
+                       --enable-shared \
+                       --disable-rpath \
+                       --with-pic \
+                       --libdir=${PHP_LIBDIR} \
+"
+EXTRA_OECONF = "--enable-mbstring \
+                --enable-fpm \
+                --with-libdir=${baselib} \
+                --with-gettext=${STAGING_LIBDIR}/.. \
+                --with-zlib=${STAGING_LIBDIR}/.. \
+                --with-iconv=${STAGING_LIBDIR}/.. \
+                --with-bz2=${STAGING_DIR_TARGET}${exec_prefix} \
+                --with-config-file-path=${sysconfdir}/php/apache2-php${PHP_MAJOR_VERSION} \
+                ${@oe.utils.conditional('SITEINFO_ENDIANNESS', 'le', 'ac_cv_c_bigendian_php=no', 'ac_cv_c_bigendian_php=yes', d)} \
+                ${@bb.utils.contains('PACKAGECONFIG', 'pam', '', 'ac_cv_lib_pam_pam_start=no', d)} \
+                ${COMMON_EXTRA_OECONF} \
+"
+
+EXTRA_OECONF:append:riscv64 = " --with-pcre-jit=no"
+EXTRA_OECONF:append:riscv32 = " --with-pcre-jit=no"
+# Needs fibers assembly implemented for rv32
+# for example rv64 implementation is below
+# see https://github.com/php/php-src/commit/70b02d75f2abe3a292d49c4a4e9e4f850c2fee68
+EXTRA_OECONF:append:riscv32:libc-musl = " --disable-fiber-asm"
+
+CACHED_CONFIGUREVARS += "ac_cv_func_dlopen=no ac_cv_lib_dl_dlopen=yes"
+
+EXTRA_OECONF:class-native = " \
+                --with-zlib=${STAGING_LIBDIR_NATIVE}/.. \
+                --without-iconv \
+                ${COMMON_EXTRA_OECONF} \
+"
+
+PACKAGECONFIG ??= "mysql sqlite3 imap opcache openssl \
+                   ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6 pam', d)} \
+"
+PACKAGECONFIG:class-native = ""
+
+PACKAGECONFIG[zip] = "--with-zip --with-zlib-dir=${STAGING_EXECPREFIXDIR},,libzip"
+
+PACKAGECONFIG[mysql] = "--with-mysqli=mysqlnd \
+                        --with-pdo-mysql=mysqlnd \
+                        ,--without-mysqli --without-pdo-mysql \
+                        ,mysql5"
+
+PACKAGECONFIG[sqlite3] = "--with-sqlite3=${STAGING_LIBDIR}/.. \
+                          --with-pdo-sqlite=${STAGING_LIBDIR}/.. \
+                          ,--without-sqlite3 --without-pdo-sqlite \
+                          ,sqlite3"
+PACKAGECONFIG[pgsql] = "--with-pgsql=${STAGING_DIR_TARGET}${exec_prefix},--without-pgsql,postgresql"
+PACKAGECONFIG[soap] = "--enable-soap, --disable-soap, libxml2"
+PACKAGECONFIG[apache2] = "--with-apxs2=${STAGING_BINDIR_CROSS}/apxs,,apache2-native apache2"
+PACKAGECONFIG[pam] = ",,libpam"
+PACKAGECONFIG[imap] = "--with-imap=${STAGING_DIR_HOST} \
+                       --with-imap-ssl=${STAGING_DIR_HOST} \
+                       ,--without-imap --without-imap-ssl \
+                       ,uw-imap"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
+PACKAGECONFIG[opcache] = "--enable-opcache,--disable-opcache"
+PACKAGECONFIG[openssl] = "--with-openssl,--without-openssl,openssl"
+PACKAGECONFIG[valgrind] = "--with-valgrind=${STAGING_DIR_TARGET}/usr,--with-valgrind=no,valgrind"
+PACKAGECONFIG[mbregex] = "--enable-mbregex, --disable-mbregex, oniguruma"
+PACKAGECONFIG[mbstring] = "--enable-mbstring,,"
+
+export HOSTCC = "${BUILD_CC}"
+export PHP_NATIVE_DIR = "${STAGING_BINDIR_NATIVE}"
+export PHP_PEAR_PHP_BIN = "${STAGING_BINDIR_NATIVE}/php"
+CFLAGS += " -D_GNU_SOURCE -g -DPTYS_ARE_GETPT -DPTYS_ARE_SEARCHED -I${STAGING_INCDIR}/apache2"
+
+# Adding these flags enables dynamic library support, which is disabled by
+# default when cross compiling
+# See https://bugs.php.net/bug.php?id=60109
+CFLAGS += " -DHAVE_LIBDL "
+LDFLAGS += " -ldl "
+LDFLAGS:append:libc-musl = " -lucontext "
+
+EXTRA_OEMAKE = "INSTALL_ROOT=${D}"
+
+acpaths = ""
+
+do_configure:prepend () {
+    rm -f ${S}/build/libtool.m4 ${S}/ltmain.sh ${S}/aclocal.m4
+    find ${S} -name config.m4 | xargs -n1 sed -i 's!APXS_HTTPD=.*!APXS_HTTPD=${STAGING_SBINDIR_NATIVE}/httpd!'
+}
+
+do_configure:append() {
+    # No, libtool, we really don't want rpath set...
+    sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' libtool
+    sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=DIE_RPATH_DIE|g' libtool
+}
+
+do_install:append:class-native() {
+    rm -rf ${D}/${PHP_LIBDIR}/php/.registry
+    rm -rf ${D}/${PHP_LIBDIR}/php/.channels
+    rm -rf ${D}/${PHP_LIBDIR}/php/.[a-z]*
+}
+
+do_install:prepend() {
+    cat ${ACLOCALDIR}/libtool.m4 ${ACLOCALDIR}/lt~obsolete.m4 ${ACLOCALDIR}/ltoptions.m4 \
+        ${ACLOCALDIR}/ltsugar.m4 ${ACLOCALDIR}/ltversion.m4 > ${S}/build/libtool.m4
+}
+
+do_install:prepend:class-target() {
+    if ${@bb.utils.contains('PACKAGECONFIG', 'apache2', 'true', 'false', d)}; then
+        # Install dummy config file so apxs doesn't fail
+        install -d ${D}${sysconfdir}/apache2
+        printf "\nLoadModule dummy_module modules/mod_dummy.so\n" > ${D}${sysconfdir}/apache2/httpd.conf
+    fi
+}
+
+# fixme
+do_install:append:class-target() {
+    install -d ${D}${sysconfdir}/
+    rm -rf ${D}/.registry
+    rm -rf ${D}/.channels
+    rm -rf ${D}/.[a-z]*
+    rm -rf ${D}/var
+    rm -f  ${D}/${sysconfdir}/php-fpm.conf.default
+    install -m 0644 ${WORKDIR}/php-fpm.conf ${D}/${sysconfdir}/php-fpm.conf
+    install -d ${D}/${sysconfdir}/apache2/conf.d
+    install -m 0644 ${WORKDIR}/php-fpm-apache.conf ${D}/${sysconfdir}/apache2/conf.d/php-fpm.conf
+    install -d ${D}${sysconfdir}/init.d
+    sed -i 's:=/usr/sbin:=${sbindir}:g' ${B}/sapi/fpm/init.d.php-fpm
+    sed -i 's:=/etc:=${sysconfdir}:g' ${B}/sapi/fpm/init.d.php-fpm
+    sed -i 's:=/var:=${localstatedir}:g' ${B}/sapi/fpm/init.d.php-fpm
+    install -m 0755 ${B}/sapi/fpm/init.d.php-fpm ${D}${sysconfdir}/init.d/php-fpm
+    install -m 0644 ${WORKDIR}/php-fpm-apache.conf ${D}/${sysconfdir}/apache2/conf.d/php-fpm.conf
+
+    if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)};then
+        install -d ${D}${systemd_unitdir}/system
+        install -m 0644 ${WORKDIR}/php-fpm.service ${D}${systemd_unitdir}/system/
+        sed -i -e 's,@SYSCONFDIR@,${sysconfdir},g' \
+            -e 's,@LOCALSTATEDIR@,${localstatedir},g' \
+            ${D}${systemd_unitdir}/system/php-fpm.service
+    fi
+
+    if ${@bb.utils.contains('PACKAGECONFIG', 'apache2', 'true', 'false', d)}; then
+        install -d ${D}${sysconfdir}/apache2/modules.d
+        install -d ${D}${sysconfdir}/php/apache2-php${PHP_MAJOR_VERSION}
+        install -m 644  ${WORKDIR}/70_mod_php${PHP_MAJOR_VERSION}.conf ${D}${sysconfdir}/apache2/modules.d
+        sed -i s,lib/,${libexecdir}/, ${D}${sysconfdir}/apache2/modules.d/70_mod_php${PHP_MAJOR_VERSION}.conf
+        cat ${S}/php.ini-production | \
+            sed -e 's,extension_dir = \"\./\",extension_dir = \"/usr/lib/extensions\",' \
+            > ${D}${sysconfdir}/php/apache2-php${PHP_MAJOR_VERSION}/php.ini
+        rm -f ${D}${sysconfdir}/apache2/httpd.conf*
+    fi
+}
+
+SYSROOT_PREPROCESS_FUNCS += "php_sysroot_preprocess"
+
+php_sysroot_preprocess () {
+    install -d ${SYSROOT_DESTDIR}${bindir_crossscripts}/
+    install -m 755 ${D}${bindir}/phpize ${SYSROOT_DESTDIR}${bindir_crossscripts}/
+    install -m 755 ${D}${bindir}/php-config ${SYSROOT_DESTDIR}${bindir_crossscripts}/
+
+    sed -i 's!eval echo /!eval echo ${STAGING_DIR_HOST}/!' ${SYSROOT_DESTDIR}${bindir_crossscripts}/phpize
+    sed -i 's!^include_dir=.*!include_dir=${STAGING_INCDIR}/php!' ${SYSROOT_DESTDIR}${bindir_crossscripts}/php-config
+}
+
+MODPHP_PACKAGE = "${@bb.utils.contains('PACKAGECONFIG', 'apache2', '${PN}-modphp', '', d)}"
+
+PACKAGES = "${PN}-dbg ${PN}-cli ${PN}-phpdbg ${PN}-cgi ${PN}-fpm ${PN}-fpm-apache2 ${PN}-pear ${PN}-phar ${MODPHP_PACKAGE} ${PN}-dev ${PN}-staticdev ${PN}-doc ${PN}-opcache ${PN}"
+
+RDEPENDS:${PN} += "libgcc"
+RDEPENDS:${PN}-pear = "${PN}"
+RDEPENDS:${PN}-phar = "${PN}-cli"
+RDEPENDS:${PN}-cli = "${PN}"
+RDEPENDS:${PN}-modphp = "${PN} apache2"
+RDEPENDS:${PN}-opcache = "${PN}"
+
+ALLOW_EMPTY:${PN} = "1"
+
+INITSCRIPT_PACKAGES = "${PN}-fpm"
+inherit update-rc.d
+
+# WARNING: lib32-php-8.0.12-r0 do_package_qa: QA Issue: lib32-php: ELF binary /usr/libexec/apache2/modules/libphp.so has relocations in .text [textrel]
+#WARNING: lib32-php-8.0.12-r0 do_package_qa: QA Issue: lib32-php-opcache: ELF binary /usr/lib/php8/extensions/no-debug-zts-20200930/opcache.so has relocations in .text [textrel]
+INSANE_SKIP:${PN}:append:x86 = " textrel"
+INSANE_SKIP:${PN}-opcache:append:x86 = " textrel"
+
+FILES:${PN}-dbg =+ "${bindir}/.debug \
+                    ${libexecdir}/apache2/modules/.debug"
+FILES:${PN}-doc += "${PHP_LIBDIR}/php/doc"
+FILES:${PN}-cli = "${bindir}/php"
+FILES:${PN}-phpdbg = "${bindir}/phpdbg"
+FILES:${PN}-phar = "${bindir}/phar*"
+FILES:${PN}-cgi = "${bindir}/php-cgi"
+FILES:${PN}-fpm = "${sbindir}/php-fpm ${sysconfdir}/php-fpm.conf ${datadir}/fpm ${sysconfdir}/init.d/php-fpm ${systemd_unitdir}/system/php-fpm.service ${sysconfdir}/php-fpm.d/www.conf.default"
+FILES:${PN}-fpm-apache2 = "${sysconfdir}/apache2/conf.d/php-fpm.conf"
+CONFFILES:${PN}-fpm = "${sysconfdir}/php-fpm.conf"
+CONFFILES:${PN}-fpm-apache2 = "${sysconfdir}/apache2/conf.d/php-fpm.conf"
+INITSCRIPT_NAME:${PN}-fpm = "php-fpm"
+INITSCRIPT_PARAMS:${PN}-fpm = "defaults 60"
+FILES:${PN}-pear = "${bindir}/pear* ${bindir}/pecl ${PHP_LIBDIR}/php/PEAR \
+                ${PHP_LIBDIR}/php/PEAR*.php ${PHP_LIBDIR}/php/System.php \
+                ${PHP_LIBDIR}/php/peclcmd.php ${PHP_LIBDIR}/php/pearcmd.php \
+                ${PHP_LIBDIR}/php/.channels ${PHP_LIBDIR}/php/.channels/.alias \
+                ${PHP_LIBDIR}/php/.registry ${PHP_LIBDIR}/php/Archive/Tar.php \
+                ${PHP_LIBDIR}/php/Console/Getopt.php ${PHP_LIBDIR}/php/OS/Guess.php \
+                ${PHP_LIBDIR}/php/data/PEAR \
+                ${sysconfdir}/pear.conf"
+FILES:${PN}-dev = "${includedir}/php ${PHP_LIBDIR}/build ${bindir}/phpize \
+                ${bindir}/php-config ${PHP_LIBDIR}/php/.depdb \
+                ${PHP_LIBDIR}/php/.depdblock ${PHP_LIBDIR}/php/.filemap \
+                ${PHP_LIBDIR}/php/.lock ${PHP_LIBDIR}/php/test"
+FILES:${PN}-staticdev += "${PHP_LIBDIR}/extensions/*/*.a"
+FILES:${PN}-opcache = "${PHP_LIBDIR}/extensions/*/opcache${SOLIBSDEV}"
+FILES:${PN} = "${PHP_LIBDIR}/php"
+FILES:${PN} += "${bindir} ${libexecdir}/apache2"
+
+SUMMARY:${PN}-modphp = "PHP module for the Apache HTTP server"
+FILES:${PN}-modphp = "${libdir}/apache2 ${sysconfdir}"
+
+MODPHP_OLDPACKAGE = "${@bb.utils.contains('PACKAGECONFIG', 'apache2', 'modphp', '', d)}"
+RPROVIDES:${PN}-modphp = "${MODPHP_OLDPACKAGE}"
+RREPLACES:${PN}-modphp = "${MODPHP_OLDPACKAGE}"
+RCONFLICTS:${PN}-modphp = "${MODPHP_OLDPACKAGE}"
+
+do_install:append:class-native() {
+    create_wrapper ${D}${bindir}/php \
+        PHP_PEAR_SYSCONF_DIR=${sysconfdir}/
+}
+
+# Fails to build with thumb-1 (qemuarm)
+# | {standard input}: Assembler messages:
+# | {standard input}:3719: Error: selected processor does not support Thumb mode `smull r0,r2,r9,r3'
+# | {standard input}:3720: Error: unshifted register required -- `sub r2,r2,r0,asr#31'
+# | {standard input}:3796: Error: selected processor does not support Thumb mode `smull r0,r2,r3,r3'
+# | {standard input}:3797: Error: unshifted register required -- `sub r2,r2,r0,asr#31'
+# | make: *** [ext/standard/math.lo] Error 1
+ARM_INSTRUCTION_SET = "arm"
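
Most of the PHP recipe's behaviour is selected through PACKAGECONFIG; the Apache pieces (mod_php, the modules.d configuration and the FPM Apache glue) are only built and packaged when the apache2 option is on. A hedged local.conf fragment enabling it on top of the defaults could read:

    # add the Apache SAPI to the default set (mysql sqlite3 imap opcache openssl ...)
    PACKAGECONFIG:append:pn-php = " apache2"
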
diff --git a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Fix-build-on-mips-clang.patch b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Fix-build-on-mips-clang.patch
index 8a0696e..9f6116c 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Fix-build-on-mips-clang.patch
+++ b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Fix-build-on-mips-clang.patch
@@ -14,19 +14,13 @@
  src/google/protobuf/port_def.inc | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
-diff --git a/src/google/protobuf/port_def.inc b/src/google/protobuf/port_def.inc
-index 71325c387..303475232 100644
 --- a/src/google/protobuf/port_def.inc
 +++ b/src/google/protobuf/port_def.inc
-@@ -230,7 +230,7 @@
+@@ -255,6 +255,7 @@
  #error PROTOBUF_TAILCALL was previously defined
  #endif
- #if __has_cpp_attribute(clang::musttail) && \
--  !defined(__arm__) && !defined(_ARCH_PPC) && !defined(__wasm__)
-+  !defined(__arm__) && !defined(_ARCH_PPC) && !defined(__wasm__) && !defined(__mips__)
- #  ifndef PROTO2_OPENSOURCE
- // Compilation fails on ARM32: b/195943306
- // Compilation fails on powerpc64le: b/187985113
--- 
-2.33.1
-
+ #if __has_cpp_attribute(clang::musttail) && !defined(__arm__) && \
++    !defined(__mips__) && \
+     !defined(_ARCH_PPC) && !defined(__wasm__) &&                 \
+     !(defined(_MSC_VER) && defined(_M_IX86)) &&                  \
+     !(defined(__NDK_MAJOR__) && __NDK_MAJOR <= 24)
diff --git a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Fix-linking-error-with-ld-gold.patch b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Fix-linking-error-with-ld-gold.patch
index 488c1f6..2bcb138 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Fix-linking-error-with-ld-gold.patch
+++ b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Fix-linking-error-with-ld-gold.patch
@@ -1,4 +1,4 @@
-From ddb9c5147883f8b27b4205450139e4a115d9961f Mon Sep 17 00:00:00 2001
+From a91130bb95528743a3f7253f8fe945b7505047d5 Mon Sep 17 00:00:00 2001
 From: Kyungjik Min <dp.min@lge.com>
 Date: Mon, 28 Dec 2020 15:56:09 +0900
 Subject: [PATCH] Fix linking error with ld-gold
@@ -19,6 +19,7 @@
 :Issues Addressed:
 [PLAT-130467] Fix build error for libgoogleassistant with latest
               protobuf-3.11.4
+
 ---
  src/libprotobuf-lite.map | 2 ++
  src/libprotobuf.map      | 2 ++
@@ -64,6 +65,3 @@
  
    local:
      *;
--- 
-2.17.1
-
diff --git a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Lower-init-prio-for-extension-attributes.patch b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Lower-init-prio-for-extension-attributes.patch
deleted file mode 100644
index a1200e0..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Lower-init-prio-for-extension-attributes.patch
+++ /dev/null
@@ -1,79 +0,0 @@
-From 8ff34dbff1eac612326b492d0b2cb93901ad7e2b Mon Sep 17 00:00:00 2001
-From: Jani Nurminen <jani.nurminen@windriver.com>
-Date: Fri, 24 Sep 2021 09:56:11 +0200
-Subject: [PATCH] Lower init prio for extension attributes
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-Added PROTOBUF_EXTENSION_ATTRIBUTE_INIT_PRIORITY in
-code generation for extension attributes.
-It has lower prio than PROTOBUF_ATTRIBUTE_INIT_PRIORITY to
-ensure that extension attributes are initialized after
-other attribute.
-This is needed in some applications to avoid segmentation fault.
-
-Reported by Karl-Herman Näslund.
-
-Signed-off-by: Jani Nurminen <jani.nurminen@windriver.com>
-
-Rebase on master
-
-Signed-off-by: He Zhe <zhe.he@windriver.com>
----
- src/google/protobuf/compiler/cpp/cpp_extension.cc |  2 +-
- src/google/protobuf/port_def.inc                  | 12 ++++++++++++
- src/google/protobuf/port_undef.inc                |  1 +
- 3 files changed, 14 insertions(+), 1 deletion(-)
-
-diff --git a/src/google/protobuf/compiler/cpp/cpp_extension.cc b/src/google/protobuf/compiler/cpp/cpp_extension.cc
-index 8604da5f2..984345ebe 100644
---- a/src/google/protobuf/compiler/cpp/cpp_extension.cc
-+++ b/src/google/protobuf/compiler/cpp/cpp_extension.cc
-@@ -164,7 +164,7 @@ void ExtensionGenerator::GenerateDefinition(io::Printer* printer) {
-   }
- 
-   format(
--      "PROTOBUF_ATTRIBUTE_INIT_PRIORITY "
-+      "PROTOBUF_EXTENSION_ATTRIBUTE_INIT_PRIORITY "
-       "::$proto_ns$::internal::ExtensionIdentifier< $extendee$,\n"
-       "    ::$proto_ns$::internal::$type_traits$, $field_type$, $packed$ >\n"
-       "  $scoped_name$($constant_name$, $1$);\n",
-diff --git a/src/google/protobuf/port_def.inc b/src/google/protobuf/port_def.inc
-index 7e9119112..a5117090d 100644
---- a/src/google/protobuf/port_def.inc
-+++ b/src/google/protobuf/port_def.inc
-@@ -614,6 +614,18 @@
- #define PROTOBUF_ATTRIBUTE_INIT_PRIORITY
- #endif
- 
-+// Some embedded systems get a segmentation fault if extension attributes are
-+// initialized with higher or equal priority as other attributes. This gives
-+// extension attributes high priority, but lower than other attributes.
-+#ifdef PROTOBUF_EXTENSION_ATTRIBUTE_INIT_PRIORITY
-+#error PROTOBUF_EXTENSION_ATTRIBUTE_INIT_PRIORITY was previously defined
-+#endif
-+#if PROTOBUF_GNUC_MIN(3, 0) && (!defined(__APPLE__) || defined(__clang__)) && !((defined(sun) || defined(__sun)) && (defined(__SVR4) || defined(__svr4__)))
-+#define PROTOBUF_EXTENSION_ATTRIBUTE_INIT_PRIORITY __attribute__((init_priority((103))))
-+#else
-+#define PROTOBUF_EXTENSION_ATTRIBUTE_INIT_PRIORITY
-+#endif
-+
- #ifdef PROTOBUF_PRAGMA_INIT_SEG
- #error PROTOBUF_PRAGMA_INIT_SEG was previously defined
- #endif
-diff --git a/src/google/protobuf/port_undef.inc b/src/google/protobuf/port_undef.inc
-index ccc5daf56..2b28f3a31 100644
---- a/src/google/protobuf/port_undef.inc
-+++ b/src/google/protobuf/port_undef.inc
-@@ -83,6 +83,7 @@
- #undef PROTOBUF_HAVE_ATTRIBUTE_WEAK
- #undef PROTOBUF_ATTRIBUTE_NO_DESTROY
- #undef PROTOBUF_ATTRIBUTE_INIT_PRIORITY
-+#undef PROTOBUF_EXTENSION_ATTRIBUTE_INIT_PRIORITY
- #undef PROTOBUF_PRAGMA_INIT_SEG
- #undef PROTOBUF_ASAN
- #undef PROTOBUF_MSAN
--- 
-2.26.2
-
diff --git a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Makefile.am-include-descriptor.cc-when-building-libp.patch b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Makefile.am-include-descriptor.cc-when-building-libp.patch
deleted file mode 100644
index bd3a277..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-Makefile.am-include-descriptor.cc-when-building-libp.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-From 8515ceec5ba3e2fcdbc819b5bf10fe742d7c9d5d Mon Sep 17 00:00:00 2001
-From: Martin Jansa <Martin.Jansa@gmail.com>
-Date: Thu, 27 Jun 2019 13:27:18 +0000
-Subject: [PATCH] Makefile.am: include descriptor.pb.cc when building
- libprotoc.so
-
-* otherwise plugin.pb.o has undefined symbol scc_info_FileDescriptorProto_google_2fprotobuf_2fdescriptor_2eproto
-  and build with gold fails with:
-  core2-32-oe-linux/protobuf/3.8.0-r0/recipe-sysroot-native/usr/bin/i686-oe-linux/../../libexec/i686-oe-linux/gcc/i686-oe-linux/9.1.0/ld.bfd: ./.libs/libprotoc.so: undefined reference to `descriptor_table_google_2fprotobuf_2fdescriptor_2eproto'
-  core2-32-oe-linux/protobuf/3.8.0-r0/recipe-sysroot-native/usr/bin/i686-oe-linux/../../libexec/i686-oe-linux/gcc/i686-oe-linux/9.1.0/ld.bfd: ./.libs/libprotoc.so: undefined reference to `scc_info_FileDescriptorProto_google_2fprotobuf_2fdescriptor_2eproto'
-
-Upstream-Status: Pending
-Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
-
----
- src/Makefile.am | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/src/Makefile.am b/src/Makefile.am
-index d4f11ce79..96d911746 100644
---- a/src/Makefile.am
-+++ b/src/Makefile.am
-@@ -323,6 +323,7 @@ libprotoc_la_LDFLAGS += -Wl,--version-script=$(srcdir)/libprotoc.map
- EXTRA_libprotoc_la_DEPENDENCIES = libprotoc.map
- endif
- libprotoc_la_SOURCES =                                         \
-+  google/protobuf/descriptor.pb.cc                             \
-   google/protobuf/compiler/code_generator.cc                   \
-   google/protobuf/compiler/command_line_interface.cc           \
-   google/protobuf/compiler/cpp/cpp_enum.cc                     \
diff --git a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-examples-Makefile-respect-CXX-LDFLAGS-variables-fix-.patch b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-examples-Makefile-respect-CXX-LDFLAGS-variables-fix-.patch
index 934c981..36c3c59 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-examples-Makefile-respect-CXX-LDFLAGS-variables-fix-.patch
+++ b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-examples-Makefile-respect-CXX-LDFLAGS-variables-fix-.patch
@@ -1,4 +1,4 @@
-From e5340f816aa273cfda36998466739ca0748caafb Mon Sep 17 00:00:00 2001
+From e3fa241637ab5a7fa78c0d474802134cff75f91e Mon Sep 17 00:00:00 2001
 From: Martin Jansa <Martin.Jansa@gmail.com>
 Date: Fri, 28 Jun 2019 13:50:52 +0000
 Subject: [PATCH] examples/Makefile: respect CXX,LDFLAGS variables, fix build
@@ -24,12 +24,13 @@
 Upstream-Status: Pending
 Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
 Signed-off-by: Leon Anavi <leon.anavi@konsulko.com>
+
 ---
  examples/Makefile | 6 ++++--
  1 file changed, 4 insertions(+), 2 deletions(-)
 
 diff --git a/examples/Makefile b/examples/Makefile
-index e9f9635ae..b2fbe2de1 100644
+index 1c7ec8d63..85f591231 100644
 --- a/examples/Makefile
 +++ b/examples/Makefile
 @@ -2,6 +2,8 @@
@@ -41,7 +42,7 @@
  all: cpp java python
  
  cpp:    add_person_cpp    list_people_cpp
-@@ -41,11 +43,11 @@ protoc_middleman_dart: addressbook.proto
+@@ -40,11 +42,11 @@ protoc_middleman_dart: addressbook.proto
  
  add_person_cpp: add_person.cc protoc_middleman
  	pkg-config --cflags protobuf  # fails if protobuf is not installed
@@ -55,6 +56,3 @@
  
  add_person_dart: add_person.dart protoc_middleman_dart
  
--- 
-2.17.1
-
diff --git a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-protobuf-fix-configure-error.patch b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-protobuf-fix-configure-error.patch
deleted file mode 100644
index a2f7a4b..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf/0001-protobuf-fix-configure-error.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-From 52959e8e01e39139d18f752e97283e45b4b7a426 Mon Sep 17 00:00:00 2001
-From: Changqing Li <changqing.li@windriver.com>
-Date: Wed, 18 Jul 2018 17:52:34 +0800
-Subject: [PATCH] protobuf: fix configure error
-
-fix below error:
-gnu-configize: 'configure.ac' or 'configure.in' is required
-
-third_party/googletest is git submodule of protobuf. Above error
-caused by missing submodule googletest.
-
-Upstream-Status: Inappropriate [oe-specific]
-
-Signed-off-by: Changqing Li <changqing.li@windriver.com>
----
- configure.ac | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/configure.ac b/configure.ac
-index aec10cf..7fbe57d 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -214,7 +214,6 @@ AX_CXX_COMPILE_STDCXX([11], [noext], [mandatory])
- #   too.
- export CFLAGS
- export CXXFLAGS
--AC_CONFIG_SUBDIRS([third_party/googletest])
- 
- AC_CONFIG_FILES([Makefile src/Makefile benchmarks/Makefile conformance/Makefile protobuf.pc protobuf-lite.pc])
- AC_OUTPUT
--- 
-2.7.4
-
diff --git a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf_3.19.4.bb b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf_3.19.4.bb
deleted file mode 100644
index 5662330..0000000
--- a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf_3.19.4.bb
+++ /dev/null
@@ -1,95 +0,0 @@
-SUMMARY = "Protocol Buffers - structured data serialisation mechanism"
-DESCRIPTION = "Protocol Buffers are a way of encoding structured data in an \
-efficient yet extensible format. Google uses Protocol Buffers for almost \
-all of its internal RPC protocols and file formats."
-HOMEPAGE = "https://github.com/google/protobuf"
-SECTION = "console/tools"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=37b5762e07f0af8c74ce80a8bda4266b"
-
-DEPENDS = "zlib"
-DEPENDS:append:class-target = " protobuf-native"
-
-SRCREV = "22d0e265de7d2b3d2e9a00d071313502e7d4cccf"
-
-SRC_URI = "git://github.com/protocolbuffers/protobuf.git;branch=3.19.x;protocol=https \
-           file://run-ptest \
-           file://0001-protobuf-fix-configure-error.patch \
-           file://0001-Makefile.am-include-descriptor.cc-when-building-libp.patch \
-           file://0001-examples-Makefile-respect-CXX-LDFLAGS-variables-fix-.patch \
-           file://0001-Fix-linking-error-with-ld-gold.patch \
-           file://0001-Lower-init-prio-for-extension-attributes.patch \
-           "
-SRC_URI:append:mips:toolchain-clang = " file://0001-Fix-build-on-mips-clang.patch "
-SRC_URI:append:mipsel:toolchain-clang = " file://0001-Fix-build-on-mips-clang.patch "
-
-S = "${WORKDIR}/git"
-
-inherit autotools-brokensep pkgconfig ptest
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[python] = ",,"
-
-EXTRA_OECONF += "--with-protoc=echo"
-
-TEST_SRC_DIR = "examples"
-LANG_SUPPORT = "cpp ${@bb.utils.contains('PACKAGECONFIG', 'python', 'python', '', d)}"
-
-do_compile_ptest() {
-	mkdir -p "${B}/${TEST_SRC_DIR}"
-
-	# Add the location of the cross-compiled header and library files
-	# which haven't been installed yet.
-	cp "${B}/protobuf.pc" "${B}/${TEST_SRC_DIR}/protobuf.pc"
-	sed -e 's|libdir=|libdir=${PKG_CONFIG_SYSROOT_DIR}|' -i "${B}/${TEST_SRC_DIR}/protobuf.pc"
-	sed -e 's|Cflags:|Cflags: -I${S}/src|' -i "${B}/${TEST_SRC_DIR}/protobuf.pc"
-	sed -e 's|Libs:|Libs: -L${B}/src/.libs|' -i "${B}/${TEST_SRC_DIR}/protobuf.pc"
-	export PKG_CONFIG_PATH="${B}/${TEST_SRC_DIR}"
-
-	# Save the pkgcfg sysroot variable, and update it to nothing so
-	# that it doesn't append the sysroot to the beginning of paths.
-	# The header and library files aren't installed to the target
-	# system yet.  So the absolute paths were specified above.
-	save_pkg_config_sysroot_dir=$PKG_CONFIG_SYSROOT_DIR
-	export PKG_CONFIG_SYSROOT_DIR=
-
-	# Compile the tests
-	for lang in ${LANG_SUPPORT}; do
-		oe_runmake -C "${S}/${TEST_SRC_DIR}" ${lang}
-	done
-
-	# Restore the pkgconfig sysroot variable
-	export PKG_CONFIG_SYSROOT_DIR=$save_pkg_config_sysroot_dir
-}
-
-do_install_ptest() {
-	local olddir=`pwd`
-
-	cd "${S}/${TEST_SRC_DIR}"
-	install -d "${D}/${PTEST_PATH}"
-	for i in add_person* list_people*; do
-		if [ -x "$i" ]; then
-			install "$i" "${D}/${PTEST_PATH}"
-		fi
-	done
-	cp "${S}/${TEST_SRC_DIR}/addressbook_pb2.py" "${D}/${PTEST_PATH}"
-	cd "$olddir"
-}
-
-PACKAGE_BEFORE_PN = "${PN}-compiler ${PN}-lite"
-
-FILES:${PN}-compiler = "${bindir} ${libdir}/libprotoc${SOLIBS}"
-FILES:${PN}-lite = "${libdir}/libprotobuf-lite${SOLIBS}"
-
-RDEPENDS:${PN}-compiler = "${PN}"
-RDEPENDS:${PN}-dev += "${PN}-compiler"
-RDEPENDS:${PN}-ptest = "bash ${@bb.utils.contains('PACKAGECONFIG', 'python', 'python-protobuf', '', d)}"
-
-MIPS_INSTRUCTION_SET = "mips"
-
-BBCLASSEXTEND = "native nativesdk"
-
-LDFLAGS:append:arm = " -latomic"
-LDFLAGS:append:mips = " -latomic"
-LDFLAGS:append:powerpc = " -latomic"
-LDFLAGS:append:mipsel = " -latomic"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf_3.21.5.bb b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf_3.21.5.bb
new file mode 100644
index 0000000..0bc9cbe
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/protobuf/protobuf_3.21.5.bb
@@ -0,0 +1,105 @@
+SUMMARY = "Protocol Buffers - structured data serialisation mechanism"
+DESCRIPTION = "Protocol Buffers are a way of encoding structured data in an \
+efficient yet extensible format. Google uses Protocol Buffers for almost \
+all of its internal RPC protocols and file formats."
+HOMEPAGE = "https://github.com/google/protobuf"
+SECTION = "console/tools"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=37b5762e07f0af8c74ce80a8bda4266b"
+
+DEPENDS = "zlib"
+DEPENDS:append:class-target = " protobuf-native"
+
+SRCREV = "ab840345966d0fa8e7100d771c92a73bfbadd25c"
+
+SRC_URI = "git://github.com/protocolbuffers/protobuf.git;branch=21.x;protocol=https \
+           file://run-ptest \
+           file://0001-examples-Makefile-respect-CXX-LDFLAGS-variables-fix-.patch \
+           file://0001-Fix-linking-error-with-ld-gold.patch \
+           "
+SRC_URI:append:mips:toolchain-clang = " file://0001-Fix-build-on-mips-clang.patch "
+SRC_URI:append:mipsel:toolchain-clang = " file://0001-Fix-build-on-mips-clang.patch "
+
+S = "${WORKDIR}/git"
+
+inherit cmake pkgconfig ptest
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG:class-native ?= "compiler"
+PACKAGECONFIG[python] = ",,"
+PACKAGECONFIG[compiler] = "-Dprotobuf_BUILD_PROTOC_BINARIES=ON,-Dprotobuf_BUILD_PROTOC_BINARIES=OFF"
+
+EXTRA_OECMAKE += "\
+    -Dprotobuf_BUILD_SHARED_LIBS=ON \
+    -Dprotobuf_BUILD_LIBPROTOC=ON \
+    -Dprotobuf_BUILD_TESTS=OFF \
+    -Dprotobuf_BUILD_EXAMPLES=OFF \
+"
+
+TEST_SRC_DIR = "examples"
+LANG_SUPPORT = "cpp ${@bb.utils.contains('PACKAGECONFIG', 'python', 'python', '', d)}"
+
+do_compile_ptest() {
+	mkdir -p "${B}/${TEST_SRC_DIR}"
+
+	# Add the location of the cross-compiled header and library files
+	# which haven't been installed yet.
+	cp "${B}/protobuf.pc" "${B}/${TEST_SRC_DIR}/protobuf.pc"
+	cp ${S}/${TEST_SRC_DIR}/*.cc "${B}/${TEST_SRC_DIR}/"
+	cp ${S}/${TEST_SRC_DIR}/*.proto "${B}/${TEST_SRC_DIR}/"
+	cp ${S}/${TEST_SRC_DIR}/*.py "${B}/${TEST_SRC_DIR}/"
+	cp ${S}/${TEST_SRC_DIR}/Makefile "${B}/${TEST_SRC_DIR}/"
+	sed -e 's|libdir=|libdir=${PKG_CONFIG_SYSROOT_DIR}|' -i "${B}/${TEST_SRC_DIR}/protobuf.pc"
+	sed -e 's|Cflags:|Cflags: -I${S}/src|' -i "${B}/${TEST_SRC_DIR}/protobuf.pc"
+	sed -e 's|Libs:|Libs: -L${B}|' -i "${B}/${TEST_SRC_DIR}/protobuf.pc"
+	# Until out-of-tree build of examples is supported, we have to use this approach
+	sed -e 's|../src/google/protobuf/.libs/timestamp.pb.o|${B}/CMakeFiles/libprotobuf.dir/src/google/protobuf/timestamp.pb.cc.o|' -i "${B}/${TEST_SRC_DIR}/Makefile"
+	export PKG_CONFIG_PATH="${B}/${TEST_SRC_DIR}"
+
+	# Save the pkgcfg sysroot variable, and update it to nothing so
+	# that it doesn't append the sysroot to the beginning of paths.
+	# The header and library files aren't installed to the target
+	# system yet.  So the absolute paths were specified above.
+	save_pkg_config_sysroot_dir=$PKG_CONFIG_SYSROOT_DIR
+	export PKG_CONFIG_SYSROOT_DIR=
+
+	# Compile the tests
+	for lang in ${LANG_SUPPORT}; do
+		oe_runmake -C "${B}/${TEST_SRC_DIR}" ${lang}
+	done
+
+	# Restore the pkgconfig sysroot variable
+	export PKG_CONFIG_SYSROOT_DIR=$save_pkg_config_sysroot_dir
+}
+
+do_install_ptest() {
+	local olddir=`pwd`
+
+	cd "${S}/${TEST_SRC_DIR}"
+	install -d "${D}/${PTEST_PATH}"
+	for i in add_person* list_people*; do
+		if [ -x "$i" ]; then
+			install "$i" "${D}/${PTEST_PATH}"
+		fi
+	done
+	cp "${B}/${TEST_SRC_DIR}/addressbook_pb2.py" "${D}/${PTEST_PATH}"
+	cd "$olddir"
+}
+
+PACKAGE_BEFORE_PN = "${PN}-compiler ${PN}-lite"
+
+FILES:${PN}-compiler = "${bindir} ${libdir}/libprotoc${SOLIBS}"
+FILES:${PN}-lite = "${libdir}/libprotobuf-lite${SOLIBS}"
+
+RDEPENDS:${PN}-compiler = "${PN}"
+RDEPENDS:${PN}-dev += "${PN}-compiler"
+RDEPENDS:${PN}-ptest = "bash ${@bb.utils.contains('PACKAGECONFIG', 'python', 'python3-protobuf', '', d)}"
+
+MIPS_INSTRUCTION_SET = "mips"
+
+BBCLASSEXTEND = "native nativesdk"
+
+LDFLAGS:append:arm = " -latomic"
+LDFLAGS:append:mips = " -latomic"
+LDFLAGS:append:powerpc = " -latomic"
+LDFLAGS:append:mipsel = " -latomic"
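
The 3.21.5 upgrade moves protobuf from autotools to CMake and gates the protoc binaries behind a new 'compiler' PACKAGECONFIG, which is only enabled by default for protobuf-native. If protoc is also wanted on target, a configuration fragment along these lines would be needed (illustrative only):

    # also build and package protoc for the target
    PACKAGECONFIG:append:pn-protobuf = " compiler"
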
diff --git a/meta-openembedded/meta-oe/recipes-devtools/yasm/yasm_git.bb b/meta-openembedded/meta-oe/recipes-devtools/yasm/yasm_git.bb
index b5cd35a..3dd382b 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/yasm/yasm_git.bb
+++ b/meta-openembedded/meta-oe/recipes-devtools/yasm/yasm_git.bb
@@ -4,7 +4,8 @@
 
 LIC_FILES_CHKSUM = "file://COPYING;md5=a12d8903508fb6bfd49d8d82c6170dd9"
 
-DEPENDS += "flex-native bison-native xmlto-native"
+DEPENDS += "flex-native bison-native"
+PACKAGECONFIG[docs] = ",,xmlto-native,"
 
 PV = "1.3.0+git${SRCPV}"
 # v1.3.0
@@ -22,3 +23,8 @@
 BBCLASSEXTEND = "native"
 
 PARALLEL_MAKE = ""
+
+do_configure:prepend() {
+     # Don't include $CC (which includes path to sysroot) in generated header.
+     sed -i -e "s/^echo \"\/\* generated \$ac_cv_stdint_message \*\/\" >>\$ac_stdint$"// ${S}/m4/ax_create_stdint_h.m4
+}
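
With this change the xmlto documentation tooling is no longer an unconditional dependency of yasm; it is only pulled in when the new docs PACKAGECONFIG is selected. Restoring the previous behaviour could be done with a fragment like the following (illustrative):

    # build the yasm manual pages again, pulling xmlto-native back in
    PACKAGECONFIG:append:pn-yasm = " docs"
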
diff --git a/meta-openembedded/meta-oe/recipes-extended/dlt-daemon/dlt-daemon/0001-dlt-system-Fix-buffer-overflow-detection-on-32bit-ta.patch b/meta-openembedded/meta-oe/recipes-extended/dlt-daemon/dlt-daemon/0001-dlt-system-Fix-buffer-overflow-detection-on-32bit-ta.patch
new file mode 100644
index 0000000..e7e6cb3
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/dlt-daemon/dlt-daemon/0001-dlt-system-Fix-buffer-overflow-detection-on-32bit-ta.patch
@@ -0,0 +1,40 @@
+From 94378458d653b1edca86435026909592cbe5e793 Mon Sep 17 00:00:00 2001
+From: Changqing Li <changqing.li@windriver.com>
+Date: Fri, 19 Aug 2022 11:12:17 +0800
+Subject: [PATCH] dlt-system: Fix buffer overflow detection on 32bit targets
+
+On 32bit target, dlt-system will termiated with error:
+dlt-system: *** buffer overflow detected ***: terminated
+
+Upstream-Status: Submitted [https://github.com/COVESA/dlt-daemon/pull/398]
+
+Signed-off-by: Changqing Li <changqing.li@windriver.com>
+---
+ src/system/dlt-system-watchdog.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/src/system/dlt-system-watchdog.c b/src/system/dlt-system-watchdog.c
+index a2b01de..c0eaa12 100644
+--- a/src/system/dlt-system-watchdog.c
++++ b/src/system/dlt-system-watchdog.c
+@@ -109,8 +109,8 @@ int register_watchdog_fd(struct pollfd *pollfd, int fdcnt)
+ 
+ void watchdog_fd_handler(int fd)
+ {
+-    long int timersElapsed = 0;
+-    int r = read(fd, &timersElapsed, 8);    // only needed to reset fd event
++    uint64_t timersElapsed = 0ULL;
++    int r = read(fd, &timersElapsed, 8U);    // only needed to reset fd event
+     if(r < 0)
+         DLT_LOG(watchdogContext, DLT_LOG_ERROR, DLT_STRING("Could not reset systemd watchdog. Exit with: "), 
+             DLT_STRING(strerror(r)));
+@@ -120,4 +120,4 @@ void watchdog_fd_handler(int fd)
+ 
+     DLT_LOG(watchdogContext, DLT_LOG_DEBUG, DLT_STRING("systemd watchdog waited periodic\n"));
+ }
+-#endif
+\ No newline at end of file
++#endif
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/dlt-daemon/dlt-daemon_2.18.8.bb b/meta-openembedded/meta-oe/recipes-extended/dlt-daemon/dlt-daemon_2.18.8.bb
index 7a613bc..aa5ef46 100644
--- a/meta-openembedded/meta-oe/recipes-extended/dlt-daemon/dlt-daemon_2.18.8.bb
+++ b/meta-openembedded/meta-oe/recipes-extended/dlt-daemon/dlt-daemon_2.18.8.bb
@@ -18,6 +18,7 @@
            file://0002-Don-t-execute-processes-as-a-specific-user.patch \
            file://0004-Modify-systemd-config-directory.patch \
            file://0001-cmake-Link-with-libatomic-on-rv32-rv64.patch \
+           file://0001-dlt-system-Fix-buffer-overflow-detection-on-32bit-ta.patch \
            "
 SRCREV = "6a3bd901d825c7206797e36ea98e10a218f5aad2"
 
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-CMakeLists.txt-Do-not-use-private-makefile-target.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-CMakeLists.txt-Do-not-use-private-makefile-target.patch
new file mode 100644
index 0000000..b4634a2
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-CMakeLists.txt-Do-not-use-private-makefile-target.patch
@@ -0,0 +1,71 @@
+From 6a704ab7bf69cd5d6970b3a7d3ae7798b26027c1 Mon Sep 17 00:00:00 2001
+From: Paulo Neves <ptsneves@gmail.com>
+Date: Thu, 28 Jul 2022 11:28:41 +0200
+Subject: [PATCH] CMakeLists.txt Do not use private makefile $< target
+
+$< is a private detail from the Makefile generated by CMakefile and
+are not under control or to be used at the CMakeLists level. In 3.20
+that private generation changed pre-requisite targets[1] and now logs
+contain the path compiler_depend.ts instead of the actual file.
+
+Upstream status: Pending [1]
+[1] https://github.com/fluent/fluent-bit/issues/5492
+---
+ CMakeLists.txt              | 6 +-----
+ lib/chunkio/CMakeLists.txt  | 7 +------
+ lib/cmetrics/CMakeLists.txt | 7 +------
+ 3 files changed, 3 insertions(+), 17 deletions(-)
+
+diff --git a/CMakeLists.txt b/CMakeLists.txt
+index 3dba5a8..d94b988 100644
+--- a/CMakeLists.txt
++++ b/CMakeLists.txt
+@@ -46,11 +46,7 @@ else()
+   set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall")
+ endif()
+ 
+-if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Windows")
+-  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FILENAME__='\"$(subst ${CMAKE_SOURCE_DIR}/,,$(abspath $<))\"'")
+-else()
+-  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FILENAME__=__FILE__")
+-endif()
++set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FILENAME__=__FILE__")
+ 
+ if(${CMAKE_SYSTEM_PROCESSOR} MATCHES "armv7l")
+   set(CMAKE_C_LINK_FLAGS "${CMAKE_C_LINK_FLAGS} -latomic")
+diff --git a/lib/chunkio/CMakeLists.txt b/lib/chunkio/CMakeLists.txt
+index bbe1f39..809ea93 100644
+--- a/lib/chunkio/CMakeLists.txt
++++ b/lib/chunkio/CMakeLists.txt
+@@ -14,12 +14,7 @@ else()
+   set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall ")
+ endif()
+ 
+-# Set __FILENAME__
+-if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Windows")
+-  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FILENAME__='\"$(subst ${CMAKE_SOURCE_DIR}/,,$(abspath $<))\"'")
+-else()
+-  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FILENAME__=__FILE__")
+-endif()
++set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FILENAME__=__FILE__")
+ 
+ include(cmake/macros.cmake)
+ 
+diff --git a/lib/cmetrics/CMakeLists.txt b/lib/cmetrics/CMakeLists.txt
+index 60e8774..e3d6149 100644
+--- a/lib/cmetrics/CMakeLists.txt
++++ b/lib/cmetrics/CMakeLists.txt
+@@ -34,12 +34,7 @@ set(CMT_VERSION_MINOR  3)
+ set(CMT_VERSION_PATCH  5)
+ set(CMT_VERSION_STR "${CMT_VERSION_MAJOR}.${CMT_VERSION_MINOR}.${CMT_VERSION_PATCH}")
+ 
+-# Define __FILENAME__ consistently across Operating Systems
+-if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Windows")
+-  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FILENAME__='\"$(subst ${CMAKE_SOURCE_DIR}/,,$(abspath $<))\"'")
+-else()
+-  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FILENAME__=__FILE__")
+-endif()
++set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D__FILENAME__=__FILE__")
+ 
+ # Configuration options
+ option(CMT_DEV             "Enable development mode"                   No)
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-Control-sytemd-unit-install-location-with-SYSTEM_DIR.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-Control-sytemd-unit-install-location-with-SYSTEM_DIR.patch
deleted file mode 100644
index bf4cda0..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-Control-sytemd-unit-install-location-with-SYSTEM_DIR.patch
+++ /dev/null
@@ -1,28 +0,0 @@
-From 5571f949fa2048b79c197b5b10a11ecb1891cbe9 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 23 Apr 2022 08:24:34 -0700
-Subject: [PATCH] Control sytemd unit install location with SYSTEM_DIR
-
-This helps building when usrmerge is on
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- src/CMakeLists.txt | 6 +++++-
- 1 file changed, 5 insertions(+), 1 deletion(-)
-
---- a/src/CMakeLists.txt
-+++ b/src/CMakeLists.txt
-@@ -323,7 +323,11 @@ if(FLB_BINARY)
-       "${PROJECT_SOURCE_DIR}/init/systemd.in"
-       ${FLB_SYSTEMD_SCRIPT}
-       )
--    install(FILES ${FLB_SYSTEMD_SCRIPT} DESTINATION /lib/systemd/system)
-+    if(SYSTEMD_DIR)
-+        install(FILES ${FLB_SYSTEMD_SCRIPT} DESTINATION ${SYSTEMD_DIR})
-+    else()
-+        install(FILES ${FLB_SYSTEMD_SCRIPT} DESTINATION /lib/systemd/system)
-+    endif()
-     install(DIRECTORY DESTINATION ${FLB_INSTALL_CONFDIR})
-   elseif(FLB_UPSTART)
-     set(FLB_UPSTART_SCRIPT "${PROJECT_SOURCE_DIR}/init/${FLB_OUT_NAME}.conf")
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-Revert-Remove-unused-variable-in-mpi_mul_hlp.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-Revert-Remove-unused-variable-in-mpi_mul_hlp.patch
new file mode 100644
index 0000000..8a165dc
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-Revert-Remove-unused-variable-in-mpi_mul_hlp.patch
@@ -0,0 +1,42 @@
+From af6cefba8c2675f58b75f93785337ab23054568c Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 18 Aug 2022 23:35:23 -0700
+Subject: [PATCH] Revert Remove unused variable in mpi_mul_hlp()
+
+This reverts
+https://github.com/Mbed-TLS/mbedtls/commit/e7f14a3090e6595eb3c8d821704ad9c90f6d3712
+
+Which helps in compiling the x86 asm code.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ lib/mbedtls-2.28.0/library/bignum.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/lib/mbedtls-2.28.0/library/bignum.c b/lib/mbedtls-2.28.0/library/bignum.c
+index 9c256ae..62e7f76 100644
+--- a/lib/mbedtls-2.28.0/library/bignum.c
++++ b/lib/mbedtls-2.28.0/library/bignum.c
+@@ -1392,7 +1392,7 @@ void mpi_mul_hlp( size_t i,
+                   mbedtls_mpi_uint *d,
+                   mbedtls_mpi_uint b )
+ {
+-    mbedtls_mpi_uint c = 0;
++    mbedtls_mpi_uint c = 0, t = 0;
+ 
+ #if defined(MULADDC_HUIT)
+     for( ; i >= 8; i -= 8 )
+@@ -1443,6 +1443,8 @@ void mpi_mul_hlp( size_t i,
+     }
+ #endif /* MULADDC_HUIT */
+ 
++    t++;
++
+     while( c != 0 )
+     {
+         *d += c; c = ( *d < c ); d++;
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-Use-posix-strerror_r-with-musl.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-Use-posix-strerror_r-with-musl.patch
new file mode 100644
index 0000000..8d89e4d
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-Use-posix-strerror_r-with-musl.patch
@@ -0,0 +1,34 @@
+From f645128082117a0152a95b3dccd869a184b7513f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 10 Aug 2022 01:23:48 -0700
+Subject: [PATCH 1/2] Use posix strerror_r with musl
+
+The default with glibc is the GNU extension of strerror_r,
+whereas musl uses the POSIX variant; call that out.
+
+Upstream-Status: Inappropriate [Need wider porting beyond linux/musl/glibc]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/flb_network.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/src/flb_network.c b/src/flb_network.c
+index 992eb1d..5d7a337 100644
+--- a/src/flb_network.c
++++ b/src/flb_network.c
+@@ -506,7 +506,12 @@ static int net_connect_async(int fd,
+             }
+ 
+             /* Connection is broken, not much to do here */
++#ifdef __GLIBC__
+             str = strerror_r(error, so_error_buf, sizeof(so_error_buf));
++#else
++            strerror_r(error, so_error_buf, sizeof(so_error_buf));
++	    str = so_error_buf;
++#endif
+             flb_error("[net] TCP connection failed: %s:%i (%s)",
+                       u->tcp_host, u->tcp_port, str);
+             return -1;
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-bin-fix-SIGSEGV-caused-by-using-flb_free-instead-of-.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-bin-fix-SIGSEGV-caused-by-using-flb_free-instead-of-.patch
deleted file mode 100644
index a6ff599..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-bin-fix-SIGSEGV-caused-by-using-flb_free-instead-of-.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-From 3d7390c89c2205d1eed0384be0bb65adb675e60d Mon Sep 17 00:00:00 2001
-From: Ramon Fried <ramon@neureality.ai>
-Date: Tue, 9 Feb 2021 18:59:59 +0200
-Subject: [PATCH] bin: fix SIGSEGV caused by using flb_free instead of
- mk_mem_free
-
-Upstream-Status: Accepted
-Signed-off-by: Ramon Fried <ramon@neureality.ai>
----
- src/fluent-bit.c | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/src/fluent-bit.c b/src/fluent-bit.c
-index c0c73b4..989cfde 100644
---- a/src/fluent-bit.c
-+++ b/src/fluent-bit.c
-@@ -289,7 +289,7 @@ static int input_set_property(struct flb_input_instance *in, char *kv)
-                 in->p->name, key);
-     }
- 
--    flb_free(key);
-+    mk_mem_free(key);
-     return ret;
- }
- 
-@@ -314,7 +314,7 @@ static int output_set_property(struct flb_output_instance *out, char *kv)
-     }
- 
-     ret = flb_output_set_property(out, key, value);
--    flb_free(key);
-+    mk_mem_free(key);
-     return ret;
- }
- 
-@@ -340,7 +340,7 @@ static int filter_set_property(struct flb_filter_instance *filter, char *kv)
-     }
- 
-     ret = flb_filter_set_property(filter, key, value);
--    flb_free(key);
-+    mk_mem_free(key);
-     return ret;
- }
- 
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-monkey-Define-_GNU_SOURCE-for-memmem-API-check.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-monkey-Define-_GNU_SOURCE-for-memmem-API-check.patch
new file mode 100644
index 0000000..e706640
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-monkey-Define-_GNU_SOURCE-for-memmem-API-check.patch
@@ -0,0 +1,28 @@
+From 0d22024c5defba7007e3e633753790e20209c6f6 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 9 Aug 2022 09:59:41 -0700
+Subject: [PATCH 1/5] monkey: Define _GNU_SOURCE for memmem API check
+
+This define is necessary to get this API on glibc based systems
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ lib/monkey/mk_core/CMakeLists.txt | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/lib/monkey/mk_core/CMakeLists.txt b/lib/monkey/mk_core/CMakeLists.txt
+index 0e74f8d..739fff3 100644
+--- a/lib/monkey/mk_core/CMakeLists.txt
++++ b/lib/monkey/mk_core/CMakeLists.txt
+@@ -62,6 +62,7 @@ set(src "${src}"
+   )
+ 
+ check_c_source_compiles("
++  #define _GNU_SOURCE
+   #include <string.h>
+   int main() {
+      char  haystack[] = \"1234\";
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-ppc-Fix-signature-for-co_create-API.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-ppc-Fix-signature-for-co_create-API.patch
deleted file mode 100644
index 1f36c65..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0001-ppc-Fix-signature-for-co_create-API.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-From be4032079c931704f52e29f5da5c01cde24ac842 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 16 Jan 2020 10:44:58 -0800
-Subject: [PATCH] ppc: Fix signature for co_create API
-
-Upstream-Status: Submitted [https://github.com/fluent/fluent-bit/pull/1886]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- lib/flb_libco/ppc.c | 6 ++++--
- 1 file changed, 4 insertions(+), 2 deletions(-)
-
-diff --git a/lib/flb_libco/ppc.c b/lib/flb_libco/ppc.c
-index e6536d56..533256b3 100644
---- a/lib/flb_libco/ppc.c
-+++ b/lib/flb_libco/ppc.c
-@@ -279,7 +279,9 @@ static uint32_t* co_create_(unsigned size, uintptr_t entry) {
-   return t;
- }
- 
--cothread_t co_create(unsigned int size, void (*entry_)(void)) {
-+cothread_t co_create(unsigned int size, void (*entry_)(void),
-+                     size_t *out_size) {
-+
-   uintptr_t entry = (uintptr_t)entry_;
-   uint32_t* t = 0;
- 
-@@ -325,7 +327,7 @@ cothread_t co_create(unsigned int size, void (*entry_)(void)) {
-     t[10] = (uint32_t)(sp >> shift >> shift);
-     t[11] = (uint32_t)sp;
-   }
--
-+  *out_size = size;
-   return t;
- }
- 
--- 
-2.25.0
-
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-chunkio-Link-with-fts-library-with-musl.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-chunkio-Link-with-fts-library-with-musl.patch
new file mode 100644
index 0000000..4ffb20d
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-chunkio-Link-with-fts-library-with-musl.patch
@@ -0,0 +1,28 @@
+From 63dbbad5978e5f5b0e7d42614999cb6b4ebcce10 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 10 Aug 2022 01:27:16 -0700
+Subject: [PATCH 2/2] chunkio: Link with fts library with musl
+
+Fixes
+cio_utils.c:(.text+0x64): undefined reference to `fts_read'
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ lib/chunkio/src/CMakeLists.txt | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/lib/chunkio/src/CMakeLists.txt b/lib/chunkio/src/CMakeLists.txt
+index a4fc2d3..4244eb8 100644
+--- a/lib/chunkio/src/CMakeLists.txt
++++ b/lib/chunkio/src/CMakeLists.txt
+@@ -13,6 +13,7 @@ set(src
+   )
+ 
+ set(libs cio-crc32)
++set(libs ${libs} fts)
+ 
+ if(${CMAKE_SYSTEM_NAME} MATCHES "Windows")
+   set(src
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-flb_info.h.in-Do-not-hardcode-compilation-directorie.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-flb_info.h.in-Do-not-hardcode-compilation-directorie.patch
new file mode 100644
index 0000000..4358b2a
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-flb_info.h.in-Do-not-hardcode-compilation-directorie.patch
@@ -0,0 +1,26 @@
+From 71dab751a27a2e582b711de22873065dd28f4b65 Mon Sep 17 00:00:00 2001
+From: Paulo Neves <ptsneves@gmail.com>
+Date: Thu, 28 Jul 2022 11:42:31 +0200
+Subject: [PATCH] flb_info.h.in: Do not hardcode compilation directories
+
+Including the source dir in the header makes the header not
+reproducible and contaminates it with host builder paths. Instead
+make it take CMAKE_DEBUG_SRCDIR that can be set to a known
+reproducible value
+---
+ include/fluent-bit/flb_info.h.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/fluent-bit/flb_info.h.in b/include/fluent-bit/flb_info.h.in
+index a89485c..2579afc 100644
+--- a/include/fluent-bit/flb_info.h.in
++++ b/include/fluent-bit/flb_info.h.in
+@@ -23,7 +23,7 @@
+ #define STR_HELPER(s)      #s
+ #define STR(s)             STR_HELPER(s)
+ 
+-#define FLB_SOURCE_DIR "@CMAKE_SOURCE_DIR@"
++#define FLB_SOURCE_DIR "@CMAKE_DEBUG_SRCDIR@"
+ 
+ /* General flags set by CMakeLists.txt */
+ @FLB_BUILD_FLAGS@
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-mbedtls-Remove-unused-variable.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-mbedtls-Remove-unused-variable.patch
new file mode 100644
index 0000000..d4451bc
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-mbedtls-Remove-unused-variable.patch
@@ -0,0 +1,38 @@
+From c7b969d1a2a6b61bd179214ee2516b7b6cd55b27 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 9 Aug 2022 11:21:57 -0700
+Subject: [PATCH 2/5] mbedtls: Remove unused variable
+
+Fixes
+library/bignum.c:1395:29: error: variable 't' set but not used [-Werror,-Wunused-but-set-variable]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ lib/mbedtls-2.28.0/library/bignum.c | 4 +---
+ 1 file changed, 1 insertion(+), 3 deletions(-)
+
+diff --git a/lib/mbedtls-2.28.0/library/bignum.c b/lib/mbedtls-2.28.0/library/bignum.c
+index 62e7f76..9c256ae 100644
+--- a/lib/mbedtls-2.28.0/library/bignum.c
++++ b/lib/mbedtls-2.28.0/library/bignum.c
+@@ -1392,7 +1392,7 @@ void mpi_mul_hlp( size_t i,
+                   mbedtls_mpi_uint *d,
+                   mbedtls_mpi_uint b )
+ {
+-    mbedtls_mpi_uint c = 0, t = 0;
++    mbedtls_mpi_uint c = 0;
+ 
+ #if defined(MULADDC_HUIT)
+     for( ; i >= 8; i -= 8 )
+@@ -1443,8 +1443,6 @@ void mpi_mul_hlp( size_t i,
+     }
+ #endif /* MULADDC_HUIT */
+ 
+-    t++;
+-
+     while( c != 0 )
+     {
+         *d += c; c = ( *d < c ); d++;
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-parser-Fix-SIGSEGV-caused-by-using-flb_free-instead-.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-parser-Fix-SIGSEGV-caused-by-using-flb_free-instead-.patch
deleted file mode 100644
index 91675df..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0002-parser-Fix-SIGSEGV-caused-by-using-flb_free-instead-.patch
+++ /dev/null
@@ -1,82 +0,0 @@
-From 7c3b1dfb174312594d3317c24ed71c60398f653f Mon Sep 17 00:00:00 2001
-From: Ramon Fried <ramon@neureality.ai>
-Date: Wed, 10 Feb 2021 04:23:36 +0200
-Subject: [PATCH] parser: Fix SIGSEGV caused by using flb_free instead of
- mk_mem_free
-
-Upstream-Status: Backport (fix only for 1.3.5)
-Signed-off-by: Ramon Fried <ramon@neureality.ai>
----
- src/flb_parser.c | 28 ++++++++++++++--------------
- 1 file changed, 14 insertions(+), 14 deletions(-)
-
-diff --git a/src/flb_parser.c b/src/flb_parser.c
-index d35c568..7c20e12 100644
---- a/src/flb_parser.c
-+++ b/src/flb_parser.c
-@@ -490,7 +490,7 @@ int flb_parser_conf_file(const char *file, struct flb_config *config)
-                                        MK_RCONF_STR);
-         if (str) {
-             time_keep = flb_utils_bool(str);
--            flb_free(str);
-+            mk_mem_free(str);
-         }
-         else {
-             time_keep = FLB_FALSE;
-@@ -522,23 +522,23 @@ int flb_parser_conf_file(const char *file, struct flb_config *config)
- 
-         flb_debug("[parser] new parser registered: %s", name);
- 
--        flb_free(name);
--        flb_free(format);
-+        mk_mem_free(name);
-+        mk_mem_free(format);
- 
-         if (regex) {
--            flb_free(regex);
-+            mk_mem_free(regex);
-         }
-         if (time_fmt) {
--            flb_free(time_fmt);
-+            mk_mem_free(time_fmt);
-         }
-         if (time_key) {
--            flb_free(time_key);
-+            mk_mem_free(time_key);
-         }
-         if (time_offset) {
--            flb_free(time_offset);
-+            mk_mem_free(time_offset);
-         }
-         if (types_str) {
--            flb_free(types_str);
-+            mk_mem_free(types_str);
-         }
- 
-         decoders = NULL;
-@@ -548,19 +548,19 @@ int flb_parser_conf_file(const char *file, struct flb_config *config)
-     return 0;
- 
-  fconf_error:
--    flb_free(name);
--    flb_free(format);
-+    mk_mem_free(name);
-+    mk_mem_free(format);
-     if (regex) {
--        flb_free(regex);
-+        mk_mem_free(regex);
-     }
-     if (time_fmt) {
--        flb_free(time_fmt);
-+        mk_mem_free(time_fmt);
-     }
-     if (time_key) {
--        flb_free(time_key);
-+        mk_mem_free(time_key);
-     }
-     if (types_str) {
--        flb_free(types_str);
-+        mk_mem_free(types_str);
-     }
-     if (decoders) {
-         flb_parser_decoder_list_destroy(decoders);
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0003-mbedtls-Disable-documentation-warning-as-error-with-.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0003-mbedtls-Disable-documentation-warning-as-error-with-.patch
new file mode 100644
index 0000000..2d7b4ef
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0003-mbedtls-Disable-documentation-warning-as-error-with-.patch
@@ -0,0 +1,30 @@
+From 2d12629f768d2459b1fc8a8ca0c38024d84bc195 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 9 Aug 2022 11:32:12 -0700
+Subject: [PATCH 3/5] mbedtls: Disable documentation warning as error with
+ clang
+
+There are shortcomings in the doxygen info which clang-15+ flags; don't
+treat them as errors.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ lib/mbedtls-2.28.0/CMakeLists.txt | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/lib/mbedtls-2.28.0/CMakeLists.txt b/lib/mbedtls-2.28.0/CMakeLists.txt
+index b33c088..c5f886f 100644
+--- a/lib/mbedtls-2.28.0/CMakeLists.txt
++++ b/lib/mbedtls-2.28.0/CMakeLists.txt
+@@ -212,7 +212,7 @@ if(CMAKE_COMPILER_IS_GNU)
+ endif(CMAKE_COMPILER_IS_GNU)
+ 
+ if(CMAKE_COMPILER_IS_CLANG)
+-    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -Wextra -Wwrite-strings -Wpointer-arith -Wimplicit-fallthrough -Wshadow -Wvla -Wformat=2 -Wno-format-nonliteral")
++	set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -Wextra -Wwrite-strings -Wpointer-arith -Wimplicit-fallthrough -Wshadow -Wvla -Wformat=2 -Wno-format-nonliteral -Wno-error=documentation")
+     set(CMAKE_C_FLAGS_COVERAGE    "-O0 -g3 --coverage")
+     set(CMAKE_C_FLAGS_ASAN        "-fsanitize=address -fno-common -fsanitize=undefined -fno-sanitize-recover=all -O3")
+     set(CMAKE_C_FLAGS_ASANDBG     "-fsanitize=address -fno-common -fsanitize=undefined -fno-sanitize-recover=all -O1 -g3 -fno-omit-frame-pointer -fno-optimize-sibling-calls")
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0003-mbedtls-Do-not-overwrite-CFLAGS.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0003-mbedtls-Do-not-overwrite-CFLAGS.patch
new file mode 100644
index 0000000..af31b43
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0003-mbedtls-Do-not-overwrite-CFLAGS.patch
@@ -0,0 +1,35 @@
+From 8486b912281ae85db0c9fc05bb546f16872e114c Mon Sep 17 00:00:00 2001
+From: Paulo Neves <ptsneves@gmail.com>
+Date: Thu, 28 Jul 2022 14:37:18 +0200
+Subject: [PATCH] mbedtls: Do not overwrite CFLAGS
+
+bitbake passes CFLAGS that often conflict with the ones set
+in mbedtls' CMakeLists.txt. One such conflict is the inability to use
+FORTIFY_SOURCE=2 except in release mode.
+
+Upstream-Status: Inappropriate [fluent-bit has its own Release
+flags that also overwrite the bitbake ones]
+---
+ lib/mbedtls-2.28.0/CMakeLists.txt | 2 --
+ 1 file changed, 2 deletions(-)
+
+--- a/lib/mbedtls-2.28.0/CMakeLists.txt
++++ b/lib/mbedtls-2.28.0/CMakeLists.txt
+@@ -204,8 +204,6 @@ if(CMAKE_COMPILER_IS_GNU)
+     if (GCC_VERSION VERSION_GREATER 7.0 OR GCC_VERSION VERSION_EQUAL 7.0)
+       set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wformat-overflow=2 -Wformat-truncation")
+     endif()
+-    set(CMAKE_C_FLAGS_RELEASE     "-O2")
+-    set(CMAKE_C_FLAGS_DEBUG       "-O0 -g3")
+     set(CMAKE_C_FLAGS_COVERAGE    "-O0 -g3 --coverage")
+     set(CMAKE_C_FLAGS_ASAN        "-fsanitize=address -fno-common -fsanitize=undefined -fno-sanitize-recover=all -O3")
+     set(CMAKE_C_FLAGS_ASANDBG     "-fsanitize=address -fno-common -fsanitize=undefined -fno-sanitize-recover=all -O1 -g3 -fno-omit-frame-pointer -fno-optimize-sibling-calls")
+@@ -215,8 +213,6 @@ endif(CMAKE_COMPILER_IS_GNU)
+ 
+ if(CMAKE_COMPILER_IS_CLANG)
+     set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -Wextra -Wwrite-strings -Wpointer-arith -Wimplicit-fallthrough -Wshadow -Wvla -Wformat=2 -Wno-format-nonliteral")
+-    set(CMAKE_C_FLAGS_RELEASE     "-O2")
+-    set(CMAKE_C_FLAGS_DEBUG       "-O0 -g3")
+     set(CMAKE_C_FLAGS_COVERAGE    "-O0 -g3 --coverage")
+     set(CMAKE_C_FLAGS_ASAN        "-fsanitize=address -fno-common -fsanitize=undefined -fno-sanitize-recover=all -O3")
+     set(CMAKE_C_FLAGS_ASANDBG     "-fsanitize=address -fno-common -fsanitize=undefined -fno-sanitize-recover=all -O1 -g3 -fno-omit-frame-pointer -fno-optimize-sibling-calls")
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0004-Use-correct-type-to-store-return-from-flb_kv_item_cr.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0004-Use-correct-type-to-store-return-from-flb_kv_item_cr.patch
new file mode 100644
index 0000000..224b8cc
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0004-Use-correct-type-to-store-return-from-flb_kv_item_cr.patch
@@ -0,0 +1,43 @@
+From a797b79483940ed4adcaa5fe2c40dd0487c7c2c7 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 9 Aug 2022 11:39:08 -0700
+Subject: [PATCH 4/5] Use correct type to store return from flb_kv_item_create
+
+Fix
+error: incompatible pointer to integer conversion assigning to 'int' from 'struct flb_kv *'
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ plugins/out_stackdriver/stackdriver_conf.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/plugins/out_stackdriver/stackdriver_conf.c b/plugins/out_stackdriver/stackdriver_conf.c
+index a9a8eb0..e4f969e 100644
+--- a/plugins/out_stackdriver/stackdriver_conf.c
++++ b/plugins/out_stackdriver/stackdriver_conf.c
+@@ -176,12 +176,12 @@ static int read_credentials_file(const char *cred_file, struct flb_stackdriver *
+ 
+ static int parse_configuration_labels(struct flb_stackdriver *ctx)
+ {
+-    int ret;
+     char *p;
+     flb_sds_t key;
+     flb_sds_t val;
+     struct mk_list *head;
+     struct flb_slist_entry *entry;
++    struct flb_kv *ret;
+     msgpack_object_kv *kv = NULL;
+ 
+     if (ctx->labels) {
+@@ -216,7 +216,7 @@ static int parse_configuration_labels(struct flb_stackdriver *ctx)
+             flb_sds_destroy(key);
+             flb_sds_destroy(val);
+ 
+-            if (ret == -1) {
++            if (!ret) {
+                 return -1;
+             }
+         }
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0004-build-Make-systemd-init-systemd-detection-contingent.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0004-build-Make-systemd-init-systemd-detection-contingent.patch
new file mode 100644
index 0000000..9d4d950
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0004-build-Make-systemd-init-systemd-detection-contingent.patch
@@ -0,0 +1,64 @@
+From 7a792624925d46690c1f07fe4b194b5f4c510db6 Mon Sep 17 00:00:00 2001
+From: Paulo Neves <ptsneves@gmail.com>
+Date: Tue, 2 Aug 2022 09:57:05 +0200
+Subject: [PATCH 1/1] build: Make systemd init systemd detection contingent on
+ pkgconfig
+
+Use pkg-config to get systemd.pc variables and systemdunitdir. Those
+variable ensure that .service files are installed in the correct paths
+and only when systemd is detected.
+
+Upstream-Status: Pending [1]
+[1] https://github.com/fluent/fluent-bit/pull/5818
+
+---
+ cmake/FindJournald.cmake | 4 ++++
+ src/CMakeLists.txt       | 4 ++--
+ 2 files changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/cmake/FindJournald.cmake b/cmake/FindJournald.cmake
+index f5a3a832b..9e6657a29 100644
+--- a/cmake/FindJournald.cmake
++++ b/cmake/FindJournald.cmake
+@@ -5,6 +5,8 @@
+ #  JOURNALD_INCLUDE_DIR - the Journald include directory
+ #  JOURNALD_LIBRARIES - Link these to use Journald
+ #  JOURNALD_DEFINITIONS - Compiler switches required for using Journald
++#  SYSTEMD_UNITDIR - The systemd units' directory
++#
+ # Redistribution and use is allowed according to the terms of the BSD license.
+ # For details see the accompanying COPYING-CMAKE-SCRIPTS file.
+ #
+@@ -16,7 +18,9 @@
+ # in the FIND_PATH() and FIND_LIBRARY() calls
+ find_package(PkgConfig)
+ pkg_check_modules(PC_JOURNALD QUIET systemd)
++pkg_get_variable(PC_SYSTEMD_UNITDIR systemd "systemdsystemunitdir")
+ 
++set(SYSTEMD_UNITDIR ${PC_SYSTEMD_UNITDIR})
+ set(JOURNALD_FOUND ${PC_JOURNALD_FOUND})
+ set(JOURNALD_DEFINITIONS ${PC_JOURNALD_CFLAGS_OTHER})
+ 
+diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
+index 522bbf9bd..30743d8d6 100644
+--- a/src/CMakeLists.txt
++++ b/src/CMakeLists.txt
+@@ -480,13 +480,13 @@ if(FLB_BINARY)
+   endif()
+ 
+   # Detect init system, install upstart, systemd or init.d script
+-  if(IS_DIRECTORY /lib/systemd/system)
++  if(DEFINED SYSTEMD_UNITDIR)
+     set(FLB_SYSTEMD_SCRIPT "${PROJECT_SOURCE_DIR}/init/${FLB_OUT_NAME}.service")
+     configure_file(
+       "${PROJECT_SOURCE_DIR}/init/systemd.in"
+       ${FLB_SYSTEMD_SCRIPT}
+       )
+-    install(FILES ${FLB_SYSTEMD_SCRIPT} COMPONENT binary DESTINATION /lib/systemd/system)
++    install(FILES ${FLB_SYSTEMD_SCRIPT} COMPONENT binary DESTINATION ${SYSTEMD_UNITDIR})
+     install(DIRECTORY DESTINATION ${FLB_INSTALL_CONFDIR} COMPONENT binary)
+   elseif(IS_DIRECTORY /usr/share/upstart)
+     set(FLB_UPSTART_SCRIPT "${PROJECT_SOURCE_DIR}/init/${FLB_OUT_NAME}.conf")
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0005-stackdriver-Fix-return-type-mismatch.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0005-stackdriver-Fix-return-type-mismatch.patch
new file mode 100644
index 0000000..cdbbb6b
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0005-stackdriver-Fix-return-type-mismatch.patch
@@ -0,0 +1,31 @@
+From 27f0bd5a3339612e03112e6b490900a9fabc3337 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 9 Aug 2022 11:44:25 -0700
+Subject: [PATCH 5/5] stackdriver: Fix return type mismatch
+
+Fix
+error: incompatible integer to pointer conversion returning 'int' from a function with result type 'flb_sds_t' (aka 'char *') [-Wint-conversion]
+            return -1;
+                   ^~
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ plugins/out_stackdriver/stackdriver.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/plugins/out_stackdriver/stackdriver.c b/plugins/out_stackdriver/stackdriver.c
+index ae66bf2..e01755c 100644
+--- a/plugins/out_stackdriver/stackdriver.c
++++ b/plugins/out_stackdriver/stackdriver.c
+@@ -2033,7 +2033,7 @@ static flb_sds_t stackdriver_format(struct flb_stackdriver *ctx,
+             flb_sds_destroy(operation_producer);
+             msgpack_unpacked_destroy(&result);
+             msgpack_sbuffer_destroy(&mp_sbuf);
+-            return -1;
++            return NULL; 
+         }
+ 
+         /* Number of parsed labels */
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0006-monkey-Fix-TLS-detection-testcase.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0006-monkey-Fix-TLS-detection-testcase.patch
new file mode 100644
index 0000000..eef1a56
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/0006-monkey-Fix-TLS-detection-testcase.patch
@@ -0,0 +1,34 @@
+From f88d9b82e8bd8ae38fba666b5825ffb41769f81a Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 9 Aug 2022 12:25:22 -0700
+Subject: [PATCH] monkey: Fix TLS detection testcase
+
+Clang15 errors out on compiling the check and disables TLS
+
+Fixes errors like
+
+error: call to undeclared function '__tls_get_addr'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
+         __tls_get_addr(0);
+         ^
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ lib/monkey/CMakeLists.txt | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/lib/monkey/CMakeLists.txt b/lib/monkey/CMakeLists.txt
+index 15e62e8..96ac2bd 100644
+--- a/lib/monkey/CMakeLists.txt
++++ b/lib/monkey/CMakeLists.txt
+@@ -178,6 +178,8 @@ endif()
+ # Use old Pthread TLS
+ if(NOT MK_PTHREAD_TLS)
+   check_c_source_compiles("
++     #include <sys/types.h>
++     extern void *__tls_get_addr(size_t *v);
+      __thread int a;
+      int main() {
+          __tls_get_addr(0);
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/builtin-nan.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/builtin-nan.patch
deleted file mode 100644
index 8ffc3be..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/builtin-nan.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-help complier to use intrinsics, clang in few cases e.g. aarch64 can not
-and then requires linking with libm, its the only function needed from libm then
-its good to avoid needing it.
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
---- a/include/fluent-bit/stream_processor/flb_sp_timeseries.h
-+++ b/include/fluent-bit/stream_processor/flb_sp_timeseries.h
-@@ -207,7 +207,7 @@ void cb_forecast_calc(struct timeseries
-         result = b0 + b1 * (val->f64 + *forecast->latest_x);
-         break;
-     default:
--        result = nan("");
-+        result = __builtin_nan("");
-         break;
-     }
- 
-@@ -283,7 +283,7 @@ void cb_forecast_r_calc(struct timeserie
-             result = ((val->i64 - b0) / b1) - *forecast->latest_x;
-             break;
-         default:
--            result = nan("");
-+            result = __builtin_nan("");
-             break;
-         }
- 
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/cross-build-init-system-detection.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/cross-build-init-system-detection.patch
deleted file mode 100644
index d3822fc..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/cross-build-init-system-detection.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-Define CMake variables to indicate init system for target
-incase of cross compile, detecting systemd support based on
-host directory structure is not right thing to do
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.kheem@gmail.com>
-
---- a/src/CMakeLists.txt
-+++ b/src/CMakeLists.txt
-@@ -317,7 +317,7 @@ if(FLB_BINARY)
-   install(TARGETS fluent-bit-bin RUNTIME DESTINATION ${FLB_INSTALL_BINDIR})
- 
-   # Detect init system, install upstart, systemd or init.d script
--  if(IS_DIRECTORY /lib/systemd/system)
-+  if(FLB_SYSTEMD)
-     set(FLB_SYSTEMD_SCRIPT "${PROJECT_SOURCE_DIR}/init/${FLB_OUT_NAME}.service")
-     configure_file(
-       "${PROJECT_SOURCE_DIR}/init/systemd.in"
-@@ -325,7 +325,7 @@ if(FLB_BINARY)
-       )
-     install(FILES ${FLB_SYSTEMD_SCRIPT} DESTINATION /lib/systemd/system)
-     install(DIRECTORY DESTINATION ${FLB_INSTALL_CONFDIR})
--  elseif(IS_DIRECTORY /usr/share/upstart)
-+  elseif(FLB_UPSTART)
-     set(FLB_UPSTART_SCRIPT "${PROJECT_SOURCE_DIR}/init/${FLB_OUT_NAME}.conf")
-     configure_file(
-       "${PROJECT_SOURCE_DIR}/init/upstart.in"
---- a/CMakeLists.txt
-+++ b/CMakeLists.txt
-@@ -70,6 +70,8 @@ option(FLB_RECORD_ACCESSOR    "Enable re
- option(FLB_SYSTEM_STRPTIME    "Use strptime in system libc"  Yes)
- option(FLB_STATIC_CONF        "Build binary using static configuration")
- option(FLB_STREAM_PROCESSOR   "Enable Stream Processor"      Yes)
-+option(FLB_SYSTEMD            "Enable systemd init system"   No)
-+option(FLB_UPSTART            "Enable upstart init system"   No)
- option(FLB_CORO_STACK_SIZE    "Set coroutine stack size")
- 
- # Metrics: Experimental Feature, disabled by default on 0.12 series
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/jemalloc.patch b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/jemalloc.patch
deleted file mode 100644
index 67b3397..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit/jemalloc.patch
+++ /dev/null
@@ -1,16 +0,0 @@
-Add  --with-jemalloc-prefix=je_ so it compiles on musl
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-
---- a/CMakeLists.txt
-+++ b/CMakeLists.txt
-@@ -523,7 +523,7 @@ if(FLB_JEMALLOC AND ${CMAKE_SYSTEM_NAME}
-   # Link to Jemalloc as an external dependency
-   ExternalProject_Add(jemalloc
-     SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/lib/jemalloc-5.2.1
--    CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/lib/jemalloc-5.2.1/configure ${AUTOCONF_HOST_OPT} --with-lg-quantum=3 --enable-cc-silence --prefix=<INSTALL_DIR>
-+    CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/lib/jemalloc-5.2.1/configure ${AUTOCONF_HOST_OPT} --with-jemalloc-prefix=je_ --with-lg-quantum=3 --enable-cc-silence --prefix=<INSTALL_DIR>
-     CFLAGS=-std=gnu99\ -Wall\ -pipe\ -g3\ -O3\ -funroll-loops
-     BUILD_COMMAND $(MAKE)
-     INSTALL_DIR ${CMAKE_CURRENT_BINARY_DIR}/
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit_1.3.5.bb b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit_1.3.5.bb
deleted file mode 100644
index b231cc2..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit_1.3.5.bb
+++ /dev/null
@@ -1,69 +0,0 @@
-SUMMARY = "Fast Log processor and Forwarder"
-DESCRIPTION = "Fluent Bit is a data collector, processor and  \
-forwarder for Linux. It supports several input sources and \
-backends (destinations) for your data. \
-"
-
-HOMEPAGE = "http://fluentbit.io"
-BUGTRACKER = "https://github.com/fluent/fluent-bit/issues"
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
-SECTION = "net"
-
-SRC_URI = "http://fluentbit.io/releases/1.3/fluent-bit-${PV}.tar.gz \
-           file://jemalloc.patch \
-           file://cross-build-init-system-detection.patch \
-           file://builtin-nan.patch \
-           file://0001-ppc-Fix-signature-for-co_create-API.patch \
-           file://0001-bin-fix-SIGSEGV-caused-by-using-flb_free-instead-of-.patch \
-           file://0002-parser-Fix-SIGSEGV-caused-by-using-flb_free-instead-.patch \
-           file://0001-Control-sytemd-unit-install-location-with-SYSTEM_DIR.patch \
-           "
-SRC_URI[md5sum] = "6eae6dfd0a874e5dd270c36e9c68f747"
-SRC_URI[sha256sum] = "e037c76c89269c8dc4027a08e442fefd2751b0f1e0f9c38f9a4b12d781a9c789"
-
-S = "${WORKDIR}/fluent-bit-${PV}"
-DEPENDS = "zlib bison-native flex-native"
-DEPENDS += "${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)}"
-
-DEPENDS:append:libc-musl = " fts "
-
-INSANE_SKIP:${PN}-dev += "dev-elf"
-
-LTO = ""
-
-# Use CMake 'Unix Makefiles' generator
-OECMAKE_GENERATOR ?= "Unix Makefiles"
-
-# Fluent Bit build options
-# ========================
-
-# Host related setup
-EXTRA_OECMAKE += "-DGNU_HOST=${HOST_SYS} -DFLB_ALL=ON -DFLB_TD=1"
-
-# Disable LuaJIT and filter_lua support
-EXTRA_OECMAKE += "-DFLB_LUAJIT=Off -DFLB_FILTER_LUA=Off "
-
-# Disable Library and examples
-EXTRA_OECMAKE += "-DFLB_SHARED_LIB=Off -DFLB_EXAMPLES=Off "
-
-# Enable systemd iff systemd is in DISTRO_FEATURES
-EXTRA_OECMAKE += "${@bb.utils.contains('DISTRO_FEATURES','systemd','-DFLB_SYSTEMD=On -DSYSTEMD_DIR=${systemd_system_unitdir}','-DFLB_SYSTEMD=Off',d)}"
-
-EXTRA_OECMAKE:append:riscv64 = " -DFLB_DEPS='atomic'"
-EXTRA_OECMAKE:append:riscv32 = " -DFLB_DEPS='atomic'"
-
-# Kafka Output plugin (disabled by default): note that when
-# enabling Kafka output plugin, the backend library librdkafka
-# requires 'openssl' as a dependency.
-#
-# DEPENDS += "openssl "
-# EXTRA_OECMAKE += "-DFLB_OUT_KAFKA=On "
-
-inherit cmake systemd
-
-CFLAGS += "-fcommon"
-
-SYSTEMD_SERVICE:${PN} = "td-agent-bit.service"
-TARGET_CC_ARCH:append = " ${SELECTED_OPTIMIZATION}"
diff --git a/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit_1.9.7.bb b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit_1.9.7.bb
new file mode 100644
index 0000000..a1f8794
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/fluentbit/fluentbit_1.9.7.bb
@@ -0,0 +1,90 @@
+SUMMARY = "Fast Log processor and Forwarder"
+DESCRIPTION = "Fluent Bit is a data collector, processor and  \
+forwarder for Linux. It supports several input sources and \
+backends (destinations) for your data. \
+"
+
+HOMEPAGE = "http://fluentbit.io"
+BUGTRACKER = "https://github.com/fluent/fluent-bit/issues"
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
+SECTION = "net"
+
+SRC_URI = "https://releases.fluentbit.io/1.9/source-${PV}.tar.gz;subdir=fluent-bit-${PV} \
+           file://0001-CMakeLists.txt-Do-not-use-private-makefile-target.patch \
+           file://0002-flb_info.h.in-Do-not-hardcode-compilation-directorie.patch \
+           file://0003-mbedtls-Do-not-overwrite-CFLAGS.patch \
+           file://0004-build-Make-systemd-init-systemd-detection-contingent.patch \
+           file://0001-monkey-Define-_GNU_SOURCE-for-memmem-API-check.patch \
+           file://0002-mbedtls-Remove-unused-variable.patch \
+           file://0003-mbedtls-Disable-documentation-warning-as-error-with-.patch \
+           file://0004-Use-correct-type-to-store-return-from-flb_kv_item_cr.patch \
+           file://0005-stackdriver-Fix-return-type-mismatch.patch \
+           file://0006-monkey-Fix-TLS-detection-testcase.patch \
+           file://0001-Revert-Remove-unused-variable-in-mpi_mul_hlp.patch \
+           "
+SRC_URI:append:libc-musl = "\
+           file://0001-Use-posix-strerror_r-with-musl.patch \
+           file://0002-chunkio-Link-with-fts-library-with-musl.patch \
+           "
+SRC_URI[sha256sum] = "8ca2ac081d7eee717483c06608adcb5e3d5373e182ad87dba21a23f8278c6540"
+S = "${WORKDIR}/fluent-bit-${PV}"
+
+DEPENDS = "zlib bison-native flex-native openssl"
+DEPENDS += "${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)}"
+
+PACKAGECONFIG[yaml] = "-DFLB_CONFIG_YAML=On,-DFLB_CONFIG_YAML=Off,libyaml"
+PACKAGECONFIG[kafka] = "-DFLB_OUT_KAFKA=On,-DFLB_OUT_KAFKA=Off,librdkafka"
+PACKAGECONFIG[examples] = "-DFLB_EXAMPLES=On,-DFLB_EXAMPLES=Off"
+PACKAGECONFIG[jemalloc] = "-DFLB_JEMALLOC=On,-DFLB_JEMALLOC=Off,jemalloc"
+#TODO add more fluentbit options to PACKAGECONFIG[]
+
+DEPENDS:append:libc-musl = " fts "
+
+# flex hardcodes the input file in #line directives leading to TMPDIR contamination of debug sources.
+do_compile:append() {
+    find ${B} -name '*.c' -or -name '*.h' | xargs sed -i -e 's|${TMPDIR}|/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/|g'
+}
+
+PACKAGECONFIG ?= "yaml"
+
+LTO = ""
+
+# Use CMake 'Unix Makefiles' generator
+OECMAKE_GENERATOR ?= "Unix Makefiles"
+
+# Fluent Bit build options
+# ========================
+
+# Host related setup
+EXTRA_OECMAKE += "-DGNU_HOST=${HOST_SYS} -DFLB_TD=1"
+
+# Disable LuaJIT and filter_lua support
+EXTRA_OECMAKE += "-DFLB_LUAJIT=Off -DFLB_FILTER_LUA=Off "
+
+# Disable Library and examples
+EXTRA_OECMAKE += "-DFLB_SHARED_LIB=Off"
+
+# Enable systemd iff systemd is in DISTRO_FEATURES
+EXTRA_OECMAKE += "${@bb.utils.contains('DISTRO_FEATURES','systemd','-DFLB_SYSTEMD=On','-DFLB_SYSTEMD=Off',d)}"
+
+# Enable release builds
+EXTRA_OECMAKE += "-DFLB_RELEASE=On"
+
+# musl needs these options
+EXTRA_OECMAKE:append:libc-musl = ' -DFLB_JEMALLOC_OPTIONS="--with-jemalloc-prefix=je_ --with-lg-quantum=3" -DFLB_CORO_STACK_SIZE=24576'
+
+EXTRA_OECMAKE:append:riscv64 = " -DCMAKE_C_STANDARD_LIBRARIES=-latomic"
+EXTRA_OECMAKE:append:riscv32 = " -DCMAKE_C_STANDARD_LIBRARIES=-latomic"
+EXTRA_OECMAKE:append:mips = " -DCMAKE_C_STANDARD_LIBRARIES=-latomic"
+EXTRA_OECMAKE:append:x86 = " -DCMAKE_C_STANDARD_LIBRARIES=-latomic"
+
+CFLAGS:append:x86 = " -DMBEDTLS_HAVE_SSE2"
+
+inherit cmake systemd pkgconfig
+
+SYSTEMD_SERVICE:${PN} = "td-agent-bit.service"
+
+EXTRA_OECMAKE += "-DCMAKE_DEBUG_SRCDIR=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/"
+TARGET_CC_ARCH += " ${SELECTED_OPTIMIZATION}"
diff --git a/meta-openembedded/meta-oe/recipes-extended/icewm/icewm_2.9.7.bb b/meta-openembedded/meta-oe/recipes-extended/icewm/icewm_2.9.7.bb
deleted file mode 100644
index 00377bc..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/icewm/icewm_2.9.7.bb
+++ /dev/null
@@ -1,46 +0,0 @@
-DESCRIPTION = "Ice Window Manager (IceWM)"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4a26952467ef79a7efca4a9cf52d417b"
-
-SRC_URI = "https://github.com/ice-wm/${BPN}/releases/download/${PV}/${BPN}-${PV}.tar.lz \
-           file://0001-configure.ac-skip-running-test-program-when-cross-co.patch \
-           "
-SRC_URI[sha256sum] = "c25f78e3f3ad49fbebc715691689d0ad1fda46b2e2537ad69e3332a54b81c72a"
-
-UPSTREAM_CHECK_URI = "https://github.com/ice-wm/${BPN}/releases"
-
-inherit autotools pkgconfig gettext perlnative features_check qemu update-alternatives
-REQUIRED_DISTRO_FEATURES = "x11"
-
-EXTRA_OECONF += "--with-libdir=${datadir}/icewm \
-                --with-cfgdir=${sysconfdir}/icewm \
-                --with-docdir=${docdir}/icewm \
-                --enable-fribidi \
-                --enable-xinerama \
-                --enable-shape"
-
-DEPENDS = "asciidoc-native fontconfig fribidi gdk-pixbuf imlib2	libxft libxpm libxrandr \
-    libxinerama libice libsm libx11 libxext libxrender libxcomposite libxdamage \
-    libxfixes"
-DEPENDS:append = " qemu-native"
-RDEPENDS:${PN} = "perl fribidi imlib2 imlib2-loaders"
-
-do_compile:prepend:class-target() {
-
-    cd ${B}
-    oe_runmake -C src genpref
-
-    qemu_binary="${@qemu_wrapper_cmdline(d, '${STAGING_DIR_TARGET}',['${B}/src/.libs','${STAGING_DIR_TARGET}/${libdir}','${STAGING_DIR_TARGET}/${base_libdir}'])}"
-    cat >qemuwrapper <<EOF
-#!/bin/sh
-${qemu_binary} src/genpref "\$@"
-EOF
-    chmod +x qemuwrapper
-    ./qemuwrapper > src/preferences
-}
-
-ALTERNATIVE:${PN} = "x-session-manager"
-ALTERNATIVE_TARGET[x-session-manager] = "${bindir}/icewm-session"
-ALTERNATIVE_PRIORITY_${PN} = "100"
-
-FILES:${PN} += "${datadir}/xsessions"
diff --git a/meta-openembedded/meta-oe/recipes-extended/icewm/icewm_2.9.9.bb b/meta-openembedded/meta-oe/recipes-extended/icewm/icewm_2.9.9.bb
new file mode 100644
index 0000000..ecb0dbe
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/icewm/icewm_2.9.9.bb
@@ -0,0 +1,46 @@
+DESCRIPTION = "Ice Window Manager (IceWM)"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4a26952467ef79a7efca4a9cf52d417b"
+
+SRC_URI = "https://github.com/ice-wm/${BPN}/releases/download/${PV}/${BPN}-${PV}.tar.lz \
+           file://0001-configure.ac-skip-running-test-program-when-cross-co.patch \
+           "
+SRC_URI[sha256sum] = "af7bab3472189febf50753eaecfac586901419d282cbcbff92e860d4b74894b3"
+
+UPSTREAM_CHECK_URI = "https://github.com/ice-wm/${BPN}/releases"
+
+inherit autotools pkgconfig gettext perlnative features_check qemu update-alternatives
+REQUIRED_DISTRO_FEATURES = "x11"
+
+EXTRA_OECONF += "--with-libdir=${datadir}/icewm \
+                --with-cfgdir=${sysconfdir}/icewm \
+                --with-docdir=${docdir}/icewm \
+                --enable-fribidi \
+                --enable-xinerama \
+                --enable-shape"
+
+DEPENDS = "asciidoc-native fontconfig fribidi gdk-pixbuf imlib2	libxft libxpm libxrandr \
+    libxinerama libice libsm libx11 libxext libxrender libxcomposite libxdamage \
+    libxfixes"
+DEPENDS:append = " qemu-native"
+RDEPENDS:${PN} = "perl fribidi imlib2 imlib2-loaders"
+
+do_compile:prepend:class-target() {
+
+    cd ${B}
+    oe_runmake -C src genpref
+
+    qemu_binary="${@qemu_wrapper_cmdline(d, '${STAGING_DIR_TARGET}',['${B}/src/.libs','${STAGING_DIR_TARGET}/${libdir}','${STAGING_DIR_TARGET}/${base_libdir}'])}"
+    cat >qemuwrapper <<EOF
+#!/bin/sh
+${qemu_binary} src/genpref "\$@"
+EOF
+    chmod +x qemuwrapper
+    ./qemuwrapper > src/preferences
+}
+
+ALTERNATIVE:${PN} = "x-session-manager"
+ALTERNATIVE_TARGET[x-session-manager] = "${bindir}/icewm-session"
+ALTERNATIVE_PRIORITY_${PN} = "100"
+
+FILES:${PN} += "${datadir}/xsessions"
diff --git a/meta-openembedded/meta-oe/recipes-extended/libimobiledevice/libplist_git.bb b/meta-openembedded/meta-oe/recipes-extended/libimobiledevice/libplist_git.bb
new file mode 100644
index 0000000..82f3e4d
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/libimobiledevice/libplist_git.bb
@@ -0,0 +1,37 @@
+SUMMARY = "A library to handle the Apple Property List format, whether it's binary or XML"
+HOMEPAGE = "https://github.com/libimobiledevice/libplist"
+LICENSE = "GPL-2.0-only & LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=ebb5c50ab7cab4baeffba14977030c07 \
+                    file://COPYING.LESSER;md5=6ab17b41640564434dda85c06b7124f7"
+
+DEPENDS = "libxml2 glib-2.0 swig python3"
+
+inherit autotools pkgconfig python3native python3targetconfig
+
+PV = "2.2.0+git${SRCPV}"
+
+SRCREV = "db93bae96d64140230ad050061632531644c46ad"
+SRC_URI = "git://github.com/libimobiledevice/libplist;protocol=https;branch=master"
+
+S = "${WORKDIR}/git"
+
+CVE_CHECK_IGNORE += "\
+    CVE-2017-5834 \
+    CVE-2017-5835 \
+    CVE-2017-5836 \
+"
+
+do_install:append () {
+    if [ -e ${D}${libdir}/python*/site-packages/plist/_plist.so ]; then
+        chrpath -d ${D}${libdir}/python*/site-packages/plist/_plist.so
+    fi
+}
+
+PACKAGES =+ "${PN}-utils \
+             ${PN}++ \
+             ${PN}-python"
+
+FILES:${PN} = "${libdir}/libplist-2.0${SOLIBS}"
+FILES:${PN}++ = "${libdir}/libplist++-2.0${SOLIBS}"
+FILES:${PN}-utils = "${bindir}/*"
+FILES:${PN}-python = "${libdir}/python*/site-packages/*"
diff --git a/meta-openembedded/meta-oe/recipes-extended/libimobiledevice/libusbmuxd_git.bb b/meta-openembedded/meta-oe/recipes-extended/libimobiledevice/libusbmuxd_git.bb
new file mode 100644
index 0000000..91a9b4a
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/libimobiledevice/libusbmuxd_git.bb
@@ -0,0 +1,17 @@
+DESCRIPTION = "This daemon is in charge of multiplexing connections over USB to an iPhone or iPod touch."
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=6ab17b41640564434dda85c06b7124f7"
+
+DEPENDS = "udev libusb1 libplist libimobiledevice-glue"
+
+inherit autotools pkgconfig gitpkgv
+
+PKGV = "${GITPKGVTAG}"
+PV = "2.0.2+git${SRCPV}"
+
+SRCREV = "36ffb7ab6e2a7e33bd1b56398a88895b7b8c615a"
+SRC_URI = "git://github.com/libimobiledevice/libusbmuxd;protocol=https;branch=master"
+
+S = "${WORKDIR}/git"
+
+FILES:${PN} += "${base_libdir}/udev/rules.d/"
diff --git a/meta-openembedded/meta-oe/recipes-extended/liblockfile/liblockfile/0001-Makefile.in-redefine-LOCKPROG.patch b/meta-openembedded/meta-oe/recipes-extended/liblockfile/liblockfile/0001-Makefile.in-redefine-LOCKPROG.patch
new file mode 100644
index 0000000..35549ff
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/liblockfile/liblockfile/0001-Makefile.in-redefine-LOCKPROG.patch
@@ -0,0 +1,33 @@
+From 9beb650712d448ad9c0899de9d98e9b623f9c249 Mon Sep 17 00:00:00 2001
+From: Mingli Yu <mingli.yu@windriver.com>
+Date: Fri, 12 Aug 2022 11:18:15 +0800
+Subject: [PATCH] Makefile.in: redefine LOCKPROG
+
+By default the LOCKPROG will be expanded as below:
+LOCKPROG="/build/tmp-glibc/work/core2-32-wrs-linux/liblockfile/1.14-r0/image/usr/bin/dotlockfile"
+
+And it should be "/usr/bin/dotlockfile" on the target.
+
+Upstream-Status: Inappropriate [oe specific]
+
+Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
+---
+ Makefile.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile.in b/Makefile.in
+index 6e53179..bfa0acb 100644
+--- a/Makefile.in
++++ b/Makefile.in
+@@ -42,7 +42,7 @@ dotlockfile:	dotlockfile.o xlockfile.o
+ 		$(CC) $(LDFLAGS) -o dotlockfile dotlockfile.o xlockfile.o
+ 
+ lockfile.o:	lockfile.c
+-		$(CC) $(CFLAGS) -DLIB -DLOCKPROG=\"$(bindir)/dotlockfile\" \
++		$(CC) $(CFLAGS) -DLIB -DLOCKPROG=\"/usr/bin/dotlockfile\" \
+ 			-c lockfile.c
+ 
+ xlockfile.o:	lockfile.c
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/liblockfile/liblockfile_1.14.bb b/meta-openembedded/meta-oe/recipes-extended/liblockfile/liblockfile_1.14.bb
index bac3a2c..2604e14 100644
--- a/meta-openembedded/meta-oe/recipes-extended/liblockfile/liblockfile_1.14.bb
+++ b/meta-openembedded/meta-oe/recipes-extended/liblockfile/liblockfile_1.14.bb
@@ -10,6 +10,7 @@
     file://0001-Makefile.in-add-DESTDIR.patch \
     file://0001-Makefile.in-install-nfslock-libs.patch \
     file://liblockfile-fix-install-so-to-man-dir.patch \
+    file://0001-Makefile.in-redefine-LOCKPROG.patch \
 "
 
 SRC_URI[md5sum] = "420c056ba0cc4d1477e402f70ba2f5eb"
diff --git a/meta-openembedded/meta-oe/recipes-extended/logwatch/logwatch_7.6.bb b/meta-openembedded/meta-oe/recipes-extended/logwatch/logwatch_7.6.bb
deleted file mode 100644
index 8cb8815..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/logwatch/logwatch_7.6.bb
+++ /dev/null
@@ -1,57 +0,0 @@
-SUMMARY = "A log file analysis program"
-DESCRIPTION = "\
-Logwatch is a customizable, pluggable log-monitoring system. It will go \
-through your logs for a given period of time and make a report in the areas \
-that you wish with the detail that you wish. Easy to use - works right out of \
-the package on many systems.\
-"
-SECTION = "devel"
-HOMEPAGE = "http://www.logwatch.org/"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=ba882fa9b4b6b217a51780be3f4db9c8"
-RDEPENDS:${PN} = "perl"
-
-SRC_URI = "http://jaist.dl.sourceforge.net/project/${BPN}/${BP}/${BP}.tar.gz"
-SRC_URI[sha256sum] = "689f3c68b99ef7af7d3c7007c3ff0a55d5674bdbf9c01f69a9f187726d6d4baf"
-
-do_install() {
-    install -m 0755 -d ${D}${sysconfdir}/logwatch/scripts
-    install -m 0755 -d ${D}${datadir}/logwatch/dist.conf/logfiles
-    install -m 0755 -d ${D}${datadir}/logwatch/dist.conf/services
-    install -m 0755 -d ${D}${localstatedir}/cache/logwatch
-    cp -r -f conf/ ${D}${datadir}/logwatch/default.conf
-    cp -r -f scripts/ ${D}${datadir}/logwatch/scripts
-    cp -r -f lib ${D}${datadir}/logwatch/lib
-    chown -R root:root ${D}${datadir}/logwatch
-
-    install -m 0755 -d ${D}${mandir}/man1
-    install -m 0755 -d ${D}${mandir}/man5
-    install -m 0755 -d ${D}${mandir}/man8
-    install -m 0644 amavis-logwatch.1 ${D}${mandir}/man1
-    install -m 0644 postfix-logwatch.1 ${D}${mandir}/man1
-    install -m 0644 ignore.conf.5 ${D}${mandir}/man5
-    install -m 0644 override.conf.5 ${D}${mandir}/man5
-    install -m 0644 logwatch.conf.5 ${D}${mandir}/man5
-    install -m 0644 logwatch.8 ${D}${mandir}/man8
-
-    install -m 0755 -d ${D}${sysconfdir}/cron.daily
-    install -m 0755 -d ${D}${sbindir}
-    ln -sf ../..${datadir}/logwatch/scripts/logwatch.pl ${D}${sbindir}/logwatch
-    cat > ${D}${sysconfdir}/cron.daily/0logwatch <<EOF
-    DailyReport=\`grep -e "^[[:space:]]*DailyReport[[:space:]]*=[[:space:]]*" /usr/share/logwatch/default.conf/logwatch.conf | head -n1 | sed -e "s|^\s*DailyReport\s*=\s*||"\`
-    if [ "\$DailyReport" != "No" ] && [ "\$DailyReport" != "no" ]
-    then
-            logwatch
-    fi
-EOF
-    chmod 755 ${D}${sysconfdir}/cron.daily/0logwatch
-
-    install -m 0755 -d ${D}${sysconfdir}/logwatch/conf/logfiles
-    install -m 0755 -d ${D}${sysconfdir}/logwatch/conf/services
-    touch ${D}${sysconfdir}/logwatch/conf/logwatch.conf
-    touch ${D}${sysconfdir}/logwatch/conf/ignore.conf
-    touch ${D}${sysconfdir}/logwatch/conf/override.conf
-    echo "# Local configuration options go here (defaults are in /usr/share/logwatch/default.conf/logwatch.conf)" > ${D}${sysconfdir}/logwatch/conf/logwatch.conf
-    echo "###### REGULAR EXPRESSIONS IN THIS FILE WILL BE TRIMMED FROM REPORT OUTPUT #####" > ${D}${sysconfdir}/logwatch/conf/ignore.conf
-    echo "# Configuration overrides for specific logfiles/services may be placed here." > ${D}${sysconfdir}/logwatch/conf/override.conf
-}
diff --git a/meta-openembedded/meta-oe/recipes-extended/logwatch/logwatch_7.7.bb b/meta-openembedded/meta-oe/recipes-extended/logwatch/logwatch_7.7.bb
new file mode 100644
index 0000000..37a9466
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/logwatch/logwatch_7.7.bb
@@ -0,0 +1,57 @@
+SUMMARY = "A log file analysis program"
+DESCRIPTION = "\
+Logwatch is a customizable, pluggable log-monitoring system. It will go \
+through your logs for a given period of time and make a report in the areas \
+that you wish with the detail that you wish. Easy to use - works right out of \
+the package on many systems.\
+"
+SECTION = "devel"
+HOMEPAGE = "http://www.logwatch.org/"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=ba882fa9b4b6b217a51780be3f4db9c8"
+RDEPENDS:${PN} = "perl"
+
+SRC_URI = "http://jaist.dl.sourceforge.net/project/${BPN}/${BP}/${BP}.tar.gz"
+SRC_URI[sha256sum] = "2a10c2c73f85d2ec9d8e9be3f553b7b5849cf795b89a1c1379c99cc36a06adbd"
+
+do_install() {
+    install -m 0755 -d ${D}${sysconfdir}/logwatch/scripts
+    install -m 0755 -d ${D}${datadir}/logwatch/dist.conf/logfiles
+    install -m 0755 -d ${D}${datadir}/logwatch/dist.conf/services
+    install -m 0755 -d ${D}${localstatedir}/cache/logwatch
+    cp -r -f conf/ ${D}${datadir}/logwatch/default.conf
+    cp -r -f scripts/ ${D}${datadir}/logwatch/scripts
+    cp -r -f lib ${D}${datadir}/logwatch/lib
+    chown -R root:root ${D}${datadir}/logwatch
+
+    install -m 0755 -d ${D}${mandir}/man1
+    install -m 0755 -d ${D}${mandir}/man5
+    install -m 0755 -d ${D}${mandir}/man8
+    install -m 0644 amavis-logwatch.1 ${D}${mandir}/man1
+    install -m 0644 postfix-logwatch.1 ${D}${mandir}/man1
+    install -m 0644 ignore.conf.5 ${D}${mandir}/man5
+    install -m 0644 override.conf.5 ${D}${mandir}/man5
+    install -m 0644 logwatch.conf.5 ${D}${mandir}/man5
+    install -m 0644 logwatch.8 ${D}${mandir}/man8
+
+    install -m 0755 -d ${D}${sysconfdir}/cron.daily
+    install -m 0755 -d ${D}${sbindir}
+    ln -sf ../..${datadir}/logwatch/scripts/logwatch.pl ${D}${sbindir}/logwatch
+    cat > ${D}${sysconfdir}/cron.daily/0logwatch <<EOF
+    DailyReport=\`grep -e "^[[:space:]]*DailyReport[[:space:]]*=[[:space:]]*" /usr/share/logwatch/default.conf/logwatch.conf | head -n1 | sed -e "s|^\s*DailyReport\s*=\s*||"\`
+    if [ "\$DailyReport" != "No" ] && [ "\$DailyReport" != "no" ]
+    then
+            logwatch
+    fi
+EOF
+    chmod 755 ${D}${sysconfdir}/cron.daily/0logwatch
+
+    install -m 0755 -d ${D}${sysconfdir}/logwatch/conf/logfiles
+    install -m 0755 -d ${D}${sysconfdir}/logwatch/conf/services
+    touch ${D}${sysconfdir}/logwatch/conf/logwatch.conf
+    touch ${D}${sysconfdir}/logwatch/conf/ignore.conf
+    touch ${D}${sysconfdir}/logwatch/conf/override.conf
+    echo "# Local configuration options go here (defaults are in /usr/share/logwatch/default.conf/logwatch.conf)" > ${D}${sysconfdir}/logwatch/conf/logwatch.conf
+    echo "###### REGULAR EXPRESSIONS IN THIS FILE WILL BE TRIMMED FROM REPORT OUTPUT #####" > ${D}${sysconfdir}/logwatch/conf/ignore.conf
+    echo "# Configuration overrides for specific logfiles/services may be placed here." > ${D}${sysconfdir}/logwatch/conf/override.conf
+}
diff --git a/meta-openembedded/meta-oe/recipes-extended/mozjs/mozjs-91_91.8.0.bb b/meta-openembedded/meta-oe/recipes-extended/mozjs/mozjs-91_91.8.0.bb
index 9b990ec..8ade3bb 100644
--- a/meta-openembedded/meta-oe/recipes-extended/mozjs/mozjs-91_91.8.0.bb
+++ b/meta-openembedded/meta-oe/recipes-extended/mozjs/mozjs-91_91.8.0.bb
@@ -50,7 +50,7 @@
     cd ${B}
     python3 ${S}/configure.py \
         --enable-project=js \
-        --target=${HOST_SYS} \
+        --target=${RUST_HOST_SYS} \
         --host=${BUILD_SYS} \
         --prefix=${prefix} \
         --libdir=${libdir} \
diff --git a/meta-openembedded/meta-oe/recipes-extended/ostree/ostree/0001-Remove-unused-linux-fs.h-includes.patch b/meta-openembedded/meta-oe/recipes-extended/ostree/ostree/0001-Remove-unused-linux-fs.h-includes.patch
new file mode 100644
index 0000000..2659e46
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/ostree/ostree/0001-Remove-unused-linux-fs.h-includes.patch
@@ -0,0 +1,42 @@
+From 7d32c352f628747cfadabf9fe7fcc13608e5dfe6 Mon Sep 17 00:00:00 2001
+From: Colin Walters <walters@verbum.org>
+Date: Wed, 3 Aug 2022 10:37:40 -0400
+Subject: [PATCH] Remove unused `linux/fs.h` includes
+
+Prep for fixing conflicts introduced by newer glibc.
+cc https://github.com/ostreedev/ostree/issues/2685
+
+Upstream-Status: Backport [https://github.com/ostreedev/ostree/commit/edba4b33be10c05253bfa94895dfbc8477e44d76]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/libostree/ostree-repo-commit.c | 1 -
+ src/ostree/ot-main.c               | 1 -
+ 2 files changed, 2 deletions(-)
+
+diff --git a/src/libostree/ostree-repo-commit.c b/src/libostree/ostree-repo-commit.c
+index afab3fdf..35b16c71 100644
+--- a/src/libostree/ostree-repo-commit.c
++++ b/src/libostree/ostree-repo-commit.c
+@@ -30,7 +30,6 @@
+ #include <sys/xattr.h>
+ #include <glib/gprintf.h>
+ #include <sys/ioctl.h>
+-#include <linux/fs.h>
+ #include <ext2fs/ext2_fs.h>
+ 
+ #include "otutil.h"
+diff --git a/src/ostree/ot-main.c b/src/ostree/ot-main.c
+index b7b50d67..7a4405a5 100644
+--- a/src/ostree/ot-main.c
++++ b/src/ostree/ot-main.c
+@@ -28,7 +28,6 @@
+ #include <string.h>
+ #include <sys/statvfs.h>
+ #include <sys/mount.h>
+-#include <linux/fs.h>
+ 
+ #include "ot-main.h"
+ #include "ostree.h"
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/ostree/ostree/0001-libostree-Remove-including-sys-mount.h.patch b/meta-openembedded/meta-oe/recipes-extended/ostree/ostree/0001-libostree-Remove-including-sys-mount.h.patch
new file mode 100644
index 0000000..5c2792c
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/ostree/ostree/0001-libostree-Remove-including-sys-mount.h.patch
@@ -0,0 +1,29 @@
+From 7ff956e4088e0bdc6bfd429f99124a8a9256c181 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 6 Aug 2022 21:44:11 -0700
+Subject: [PATCH] libostree: Remove including sys/mount.h
+
+This conflicts with linux/mount.h which is included by linux/fs.h
+with glibc 2.36+
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/libostree/ostree-sysroot-deploy.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/src/libostree/ostree-sysroot-deploy.c b/src/libostree/ostree-sysroot-deploy.c
+index 2dc9f58b..61b19e42 100644
+--- a/src/libostree/ostree-sysroot-deploy.c
++++ b/src/libostree/ostree-sysroot-deploy.c
+@@ -23,7 +23,6 @@
+ #include <gio/gunixoutputstream.h>
+ #include <glib-unix.h>
+ #include <stdint.h>
+-#include <sys/mount.h>
+ #include <sys/statvfs.h>
+ #include <sys/socket.h>
+ #include <sys/ioctl.h>
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/ostree/ostree/0001-s390x-se-luks-gencpio-There-is-no-bashism.patch b/meta-openembedded/meta-oe/recipes-extended/ostree/ostree/0001-s390x-se-luks-gencpio-There-is-no-bashism.patch
new file mode 100644
index 0000000..5cf5784
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/ostree/ostree/0001-s390x-se-luks-gencpio-There-is-no-bashism.patch
@@ -0,0 +1,25 @@
+From dd55633e49aa43dede3c8e1770ae8761487f050e Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 6 Aug 2022 21:52:31 -0700
+Subject: [PATCH] s390x-se-luks-gencpio: There is no bashism
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/libostree/s390x-se-luks-gencpio | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/libostree/s390x-se-luks-gencpio b/src/libostree/s390x-se-luks-gencpio
+index e821e2fe..96c1d123 100755
+--- a/src/libostree/s390x-se-luks-gencpio
++++ b/src/libostree/s390x-se-luks-gencpio
+@@ -1,4 +1,4 @@
+-#!/bin/bash
++#!/bin/sh
+ # This script creates new initramdisk with LUKS config within
+ set -euo pipefail
+ 
+-- 
+2.37.1
+
diff --git a/meta-openembedded/meta-oe/recipes-extended/ostree/ostree_2022.2.bb b/meta-openembedded/meta-oe/recipes-extended/ostree/ostree_2022.2.bb
deleted file mode 100644
index 50d0548..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/ostree/ostree_2022.2.bb
+++ /dev/null
@@ -1,209 +0,0 @@
-SUMMARY = "Versioned Operating System Repository."
-DESCRIPTION = "libostree is both a shared library and suite of command line \
-tools that combines a \"git-like\" model for committing and downloading \
-bootable filesystem trees, along with a layer for deploying them and managing \
-the bootloader configuration."
-HOMEPAGE = "https://ostree.readthedocs.io"
-LICENSE = "LGPL-2.1-only"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2"
-
-DEPENDS = " \
-    glib-2.0 \
-    e2fsprogs \
-    libcap \
-    zlib \
-    xz \
-    bison-native \
-"
-
-SRC_URI = " \
-    gitsm://github.com/ostreedev/ostree;branch=main;protocol=https \
-    file://run-ptest \
-"
-SRCREV = "fbc6d21c2f71099fbab44cbdd74222b91f61c667"
-
-UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>\d+\.\d+)"
-
-S = "${WORKDIR}/git"
-
-inherit autotools bash-completion gobject-introspection gtk-doc manpages pkgconfig ptest-gnome systemd
-
-# Workaround compile failure:
-# |../git/src/libotutil/zbase32.c:37:1: error: function returns an aggregate [-Werror=aggregate-return]
-# so remove -Og and use -O2 as workaround
-DEBUG_OPTIMIZATION:remove = "-Og"
-DEBUG_OPTIMIZATION:append = " -O2"
-BUILD_OPTIMIZATION:remove = "-Og"
-BUILD_OPTIMIZATION:append = " -O2"
-
-# Package configuration - match ostree defaults, but without rofiles-fuse
-# otherwise we introduce a dependendency on meta-filesystems
-PACKAGECONFIG ??= " \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'selinux smack', d)} \
-    ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd libmount', '', d)} \
-    glib \
-    gpgme \
-    soup \
-"
-
-# We include soup because ostree can't (currently) be built without
-# soup or curl - https://github.com/ostreedev/ostree/issues/1897
-PACKAGECONFIG:class-native ??= " \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'selinux smack', d)} \
-    builtin-grub2-mkconfig \
-    gpgme \
-    soup \
-"
-
-PACKAGECONFIG:class-nativesdk ??= " \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'selinux smack', d)} \
-    builtin-grub2-mkconfig \
-    gpgme \
-    soup \
-"
-
-PACKAGECONFIG[avahi] = "--with-avahi, --without-avahi, avahi"
-PACKAGECONFIG[builtin-grub2-mkconfig] = "--with-builtin-grub2-mkconfig, --without-builtin-grub2-mkconfig"
-PACKAGECONFIG[curl] = "--with-curl, --without-curl, curl"
-PACKAGECONFIG[dracut] = "--with-dracut, --without-dracut"
-PACKAGECONFIG[glib] = "--with-crypto=glib"
-PACKAGECONFIG[gjs] = "ac_cv_path_GJS=${bindir}/gjs"
-PACKAGECONFIG[gnutls] = "--with-crypto=gnutls, , gnutls"
-PACKAGECONFIG[gpgme] = "--with-gpgme, --without-gpgme, gpgme"
-PACKAGECONFIG[libarchive] = "--with-libarchive, --without-libarchive, libarchive"
-PACKAGECONFIG[libmount] = "--with-libmount, --without-libmount, util-linux"
-PACKAGECONFIG[manpages] = "--enable-man, --disable-man, libxslt-native docbook-xsl-stylesheets-native"
-PACKAGECONFIG[mkinitcpio] = "--with-mkinitcpio, --without-mkinitcpio"
-PACKAGECONFIG[no-http2] = "--disable-http2, --enable-http2"
-PACKAGECONFIG[openssl] = "--with-crypto=openssl, , openssl"
-PACKAGECONFIG[rofiles-fuse] = "--enable-rofiles-fuse, --disable-rofiles-fuse, fuse"
-PACKAGECONFIG[selinux] = "--with-selinux, --without-selinux, libselinux"
-PACKAGECONFIG[smack] = "--with-smack, --without-smack, smack"
-PACKAGECONFIG[soup] = "--with-soup, --without-soup --disable-glibtest, libsoup-2.4"
-PACKAGECONFIG[static] = ""
-PACKAGECONFIG[systemd] = "--with-libsystemd --with-systemdsystemunitdir=${systemd_unitdir}/system, --without-libsystemd, systemd"
-PACKAGECONFIG[trivial-httpd-cmdline] = "--enable-trivial-httpd-cmdline, --disable-trivial-httpd-cmdline"
-
-EXTRA_OECONF = " \
-    ${@bb.utils.contains('PACKAGECONFIG', 'static', '--with-static-compiler=\'${CC} ${CFLAGS} ${CPPFLAGS} ${LDFLAGS}\'', '', d)} \
-"
-
-# Makefile-libostree.am overrides this to avoid a build problem with clang,
-# but that fix breaks cross compilation and we don't need it
-EXTRA_OEMAKE = " \
-    INTROSPECTION_SCANNER_ENV= \
-"
-
-EXTRA_OECONF:class-native = " \
-    --enable-wrpseudo-compat \
-    --disable-otmpfile \
-"
-
-EXTRA_OECONF:class-nativesdk = " \
-    --enable-wrpseudo-compat \
-    --disable-otmpfile \
-"
-
-# Path to ${prefix}/lib/ostree/ostree-grub-generator is hardcoded on the
-# do_configure stage so we do depend on it
-SYSROOT_DIR = "${STAGING_DIR_TARGET}"
-SYSROOT_DIR:class-native = "${STAGING_DIR_NATIVE}"
-do_configure[vardeps] += "SYSROOT_DIR"
-
-do_configure:prepend() {
-    # this reflects what autogen.sh does, but the OE wrappers for autoreconf
-    # allow it to work without the other gyrations which exist there
-    cp ${S}/libglnx/Makefile-libglnx.am ${S}/libglnx/Makefile-libglnx.am.inc
-    cp ${S}/bsdiff/Makefile-bsdiff.am ${S}/bsdiff/Makefile-bsdiff.am.inc
-}
-
-do_install:append:class-native() {
-    create_wrapper ${D}${bindir}/ostree OSTREE_GRUB2_EXEC="${STAGING_LIBDIR_NATIVE}/ostree/ostree-grub-generator"
-}
-
-do_install:append:class-nativesdk() {
-    create_wrapper ${D}${bindir}/ostree OSTREE_GRUB2_EXEC="\$OECORE_NATIVE_SYSROOT/usr/lib/ostree/ostree-grub-generator"
-}
-
-PACKAGE_BEFORE_PN = " \
-    ${PN}-dracut \
-    ${PN}-grub \
-    ${PN}-mkinitcpio \
-    ${PN}-switchroot \
-    ${PN}-trivial-httpd \
-"
-
-FILES:${PN} += " \
-    ${nonarch_libdir}/${BPN} \
-    ${nonarch_libdir}/tmpfiles.d \
-    ${systemd_unitdir}/system \
-    ${systemd_unitdir}/system-generators \
-"
-FILES:${PN}-dracut = " \
-    ${sysconfdir}/dracut.conf.d \
-    ${libdir}/dracut \
-"
-FILES:${PN}-grub = " \
-    ${sysconfdir}/grub.d \
-    ${libexecdir}/libostree/grub2-15_ostree \
-"
-FILES:${PN}-mkinitcpio = " \
-    ${sysconfdir}/ostree-mkinitcpio.conf \
-    ${libdir}/initcpio \
-"
-FILES:${PN}-switchroot = " \
-    ${nonarch_libdir}/${BPN}/ostree-prepare-root \
-    ${systemd_unitdir}/system/ostree-prepare-root.service \
-"
-FILES:${PN}-trivial-httpd = " \
-    ${libexecdir}/libostree/ostree-trivial-httpd \
-"
-
-RDEPENDS:${PN} = " \
-    ${@bb.utils.contains('PACKAGECONFIG', 'trivial-httpd-cmdline', '${PN}-trivial-httpd', '', d)} \
-"
-RDEPENDS:${PN}-dracut = "bash"
-RDEPENDS:${PN}-mkinitcpio = "bash"
-RDEPENDS:${PN}:class-target = " \
-    ${@bb.utils.contains('PACKAGECONFIG', 'gpgme', 'gnupg', '', d)} \
-    ${PN}-switchroot \
-"
-
-#
-# Note that to get ptest to pass you also need:
-#
-#   xattr in DISTRO_FEATURES
-#   static ostree-prepare-root (PACKAGECONFIG:append:pn-ostree = " static")
-#   meta-python in your layers
-#   overlayfs in your kernel (KERNEL_EXTRA_FEATURES += "features/overlayfs/overlayfs.scc")
-#   busybox built statically
-#   /var/tmp as a real filesystem (not a tmpfs)
-#   Sufficient disk space (IMAGE_ROOTFS_SIZE = "524288") and RAM (QB_MEM = "-m 1024")
-#
-RDEPENDS:${PN}-ptest += " \
-    attr \
-    bash \
-    coreutils \
-    cpio \
-    diffutils \
-    findutils \
-    grep \
-    python3-core \
-    python3-multiprocessing \
-    strace \
-    tar \
-    util-linux \
-    xz \
-    ${PN}-trivial-httpd \
-    python3-pyyaml \
-    ${@bb.utils.contains('PACKAGECONFIG', 'gjs', 'gjs', '', d)} \
-"
-RDEPENDS:${PN}-ptest:append:libc-glibc = " glibc-utils glibc-localedata-en-us"
-
-RRECOMMENDS:${PN}:append:class-target = " kernel-module-overlay"
-
-SYSTEMD_SERVICE:${PN} = "ostree-remount.service ostree-finalize-staged.path"
-SYSTEMD_SERVICE:${PN}-switchroot = "ostree-prepare-root.service"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-oe/recipes-extended/ostree/ostree_2022.5.bb b/meta-openembedded/meta-oe/recipes-extended/ostree/ostree_2022.5.bb
new file mode 100644
index 0000000..a21c473
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/ostree/ostree_2022.5.bb
@@ -0,0 +1,213 @@
+SUMMARY = "Versioned Operating System Repository."
+DESCRIPTION = "libostree is both a shared library and suite of command line \
+tools that combines a \"git-like\" model for committing and downloading \
+bootable filesystem trees, along with a layer for deploying them and managing \
+the bootloader configuration."
+HOMEPAGE = "https://ostree.readthedocs.io"
+LICENSE = "LGPL-2.1-only"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2"
+
+DEPENDS = " \
+    glib-2.0 \
+    e2fsprogs \
+    libcap \
+    zlib \
+    xz \
+    bison-native \
+"
+
+SRC_URI = " \
+    gitsm://github.com/ostreedev/ostree;branch=main;protocol=https \
+    file://0001-Remove-unused-linux-fs.h-includes.patch \
+    file://0001-libostree-Remove-including-sys-mount.h.patch \
+    file://0001-s390x-se-luks-gencpio-There-is-no-bashism.patch \
+    file://run-ptest \
+"
+SRCREV = "15740d042c9c5258a1c082b5e228cf6f115edbb0"
+
+UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>\d+\.\d+)"
+
+S = "${WORKDIR}/git"
+
+inherit autotools bash-completion gobject-introspection gtk-doc manpages pkgconfig ptest-gnome systemd
+
+# Workaround compile failure:
+# |../git/src/libotutil/zbase32.c:37:1: error: function returns an aggregate [-Werror=aggregate-return]
+# so remove -Og and use -O2 as workaround
+DEBUG_OPTIMIZATION:remove = "-Og"
+DEBUG_OPTIMIZATION:append = " -O2"
+BUILD_OPTIMIZATION:remove = "-Og"
+BUILD_OPTIMIZATION:append = " -O2"
+
+# Package configuration - match ostree defaults, but without rofiles-fuse
+# otherwise we introduce a dependency on meta-filesystems
+PACKAGECONFIG ??= " \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'selinux smack', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd libmount', '', d)} \
+    glib \
+    gpgme \
+    soup \
+"
+
+# We include soup because ostree can't (currently) be built without
+# soup or curl - https://github.com/ostreedev/ostree/issues/1897
+PACKAGECONFIG:class-native ??= " \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'selinux smack', d)} \
+    builtin-grub2-mkconfig \
+    gpgme \
+    soup \
+"
+
+PACKAGECONFIG:class-nativesdk ??= " \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'selinux smack', d)} \
+    builtin-grub2-mkconfig \
+    gpgme \
+    soup \
+"
+
+PACKAGECONFIG[avahi] = "--with-avahi, --without-avahi, avahi"
+PACKAGECONFIG[builtin-grub2-mkconfig] = "--with-builtin-grub2-mkconfig, --without-builtin-grub2-mkconfig"
+PACKAGECONFIG[curl] = "--with-curl, --without-curl, curl"
+PACKAGECONFIG[dracut] = "--with-dracut, --without-dracut"
+PACKAGECONFIG[ed25519-libsodium] = "--with-ed25519-libsodium, --without-ed25519-libsodium, libsodium"
+PACKAGECONFIG[gjs] = "ac_cv_path_GJS=${bindir}/gjs"
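+# Only one crypto backend can be active at a time; the last PACKAGECONFIG field lists the conflicting options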
+PACKAGECONFIG[glib] = "--with-crypto=glib, , , , , gnutls openssl"
+PACKAGECONFIG[gnutls] = "--with-crypto=gnutls, , gnutls, , , glib openssl"
+PACKAGECONFIG[gpgme] = "--with-gpgme, --without-gpgme, gpgme"
+PACKAGECONFIG[libarchive] = "--with-libarchive, --without-libarchive, libarchive"
+PACKAGECONFIG[libmount] = "--with-libmount, --without-libmount, util-linux"
+PACKAGECONFIG[manpages] = "--enable-man, --disable-man, libxslt-native docbook-xsl-stylesheets-native"
+PACKAGECONFIG[mkinitcpio] = "--with-mkinitcpio, --without-mkinitcpio"
+PACKAGECONFIG[no-http2] = "--disable-http2, --enable-http2"
+PACKAGECONFIG[openssl] = "--with-crypto=openssl, , openssl, , , glib gnutls"
+PACKAGECONFIG[rofiles-fuse] = "--enable-rofiles-fuse, --disable-rofiles-fuse, fuse"
+PACKAGECONFIG[selinux] = "--with-selinux, --without-selinux, libselinux"
+PACKAGECONFIG[smack] = "--with-smack, --without-smack, smack"
+PACKAGECONFIG[soup] = "--with-soup, --without-soup --disable-glibtest, libsoup-2.4"
+PACKAGECONFIG[static] = ""
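+# The target build runs the native wayland-scanner++ tool, hence the dependency on waylandpp-native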
+PACKAGECONFIG[systemd] = "--with-libsystemd --with-systemdsystemunitdir=${systemd_unitdir}/system, --without-libsystemd, systemd"
+PACKAGECONFIG[trivial-httpd-cmdline] = "--enable-trivial-httpd-cmdline, --disable-trivial-httpd-cmdline"
+
+EXTRA_OECONF = " \
+    ${@bb.utils.contains('PACKAGECONFIG', 'static', '--with-static-compiler=\'${CC} ${CFLAGS} ${CPPFLAGS} ${LDFLAGS}\'', '', d)} \
+"
+
+# Makefile-libostree.am overrides this to avoid a build problem with clang,
+# but that fix breaks cross compilation and we don't need it
+EXTRA_OEMAKE = " \
+    INTROSPECTION_SCANNER_ENV= \
+"
+
+EXTRA_OECONF:class-native = " \
+    --enable-wrpseudo-compat \
+    --disable-otmpfile \
+"
+
+EXTRA_OECONF:class-nativesdk = " \
+    --enable-wrpseudo-compat \
+    --disable-otmpfile \
+"
+
+# Path to ${prefix}/lib/ostree/ostree-grub-generator is hardcoded on the
+# do_configure stage so we do depend on it
+SYSROOT_DIR = "${STAGING_DIR_TARGET}"
+SYSROOT_DIR:class-native = "${STAGING_DIR_NATIVE}"
+do_configure[vardeps] += "SYSROOT_DIR"
+
+do_configure:prepend() {
+    # this reflects what autogen.sh does, but the OE wrappers for autoreconf
+    # allow it to work without the other gyrations which exist there
+    cp ${S}/libglnx/Makefile-libglnx.am ${S}/libglnx/Makefile-libglnx.am.inc
+    cp ${S}/bsdiff/Makefile-bsdiff.am ${S}/bsdiff/Makefile-bsdiff.am.inc
+}
+
+do_install:append:class-native() {
+    create_wrapper ${D}${bindir}/ostree OSTREE_GRUB2_EXEC="${STAGING_LIBDIR_NATIVE}/ostree/ostree-grub-generator"
+}
+
+do_install:append:class-nativesdk() {
+    create_wrapper ${D}${bindir}/ostree OSTREE_GRUB2_EXEC="\$OECORE_NATIVE_SYSROOT/usr/lib/ostree/ostree-grub-generator"
+}
+
+PACKAGE_BEFORE_PN = " \
+    ${PN}-dracut \
+    ${PN}-grub \
+    ${PN}-mkinitcpio \
+    ${PN}-switchroot \
+    ${PN}-trivial-httpd \
+"
+
+FILES:${PN} += " \
+    ${nonarch_libdir}/${BPN} \
+    ${nonarch_libdir}/tmpfiles.d \
+    ${systemd_unitdir}/system \
+    ${systemd_unitdir}/system-generators \
+"
+FILES:${PN}-dracut = " \
+    ${sysconfdir}/dracut.conf.d \
+    ${libdir}/dracut \
+"
+FILES:${PN}-grub = " \
+    ${sysconfdir}/grub.d \
+    ${libexecdir}/libostree/grub2-15_ostree \
+"
+FILES:${PN}-mkinitcpio = " \
+    ${sysconfdir}/ostree-mkinitcpio.conf \
+    ${libdir}/initcpio \
+"
+FILES:${PN}-switchroot = " \
+    ${nonarch_libdir}/${BPN}/ostree-prepare-root \
+    ${systemd_unitdir}/system/ostree-prepare-root.service \
+"
+FILES:${PN}-trivial-httpd = " \
+    ${libexecdir}/libostree/ostree-trivial-httpd \
+"
+
+RDEPENDS:${PN} = " \
+    ${@bb.utils.contains('PACKAGECONFIG', 'trivial-httpd-cmdline', '${PN}-trivial-httpd', '', d)} \
+"
+RDEPENDS:${PN}-dracut = "bash"
+RDEPENDS:${PN}-mkinitcpio = "bash"
+RDEPENDS:${PN}:class-target = " \
+    ${@bb.utils.contains('PACKAGECONFIG', 'gpgme', 'gnupg', '', d)} \
+    ${PN}-switchroot \
+"
+
+#
+# Note that to get ptest to pass you also need:
+#
+#   xattr in DISTRO_FEATURES
+#   static ostree-prepare-root (PACKAGECONFIG:append:pn-ostree = " static")
+#   meta-python in your layers
+#   overlayfs in your kernel (KERNEL_EXTRA_FEATURES += "features/overlayfs/overlayfs.scc")
+#   busybox built statically
+#   /var/tmp as a real filesystem (not a tmpfs)
+#   Sufficient disk space (IMAGE_ROOTFS_SIZE = "524288") and RAM (QB_MEM = "-m 1024")
+#
+RDEPENDS:${PN}-ptest += " \
+    attr \
+    bash \
+    coreutils \
+    cpio \
+    diffutils \
+    findutils \
+    grep \
+    python3-core \
+    python3-multiprocessing \
+    strace \
+    tar \
+    util-linux \
+    xz \
+    ${PN}-trivial-httpd \
+    python3-pyyaml \
+    ${@bb.utils.contains('PACKAGECONFIG', 'gjs', 'gjs', '', d)} \
+"
+RDEPENDS:${PN}-ptest:append:libc-glibc = " glibc-utils glibc-localedata-en-us"
+
+RRECOMMENDS:${PN}:append:class-target = " kernel-module-overlay"
+
+SYSTEMD_SERVICE:${PN} = "ostree-remount.service ostree-finalize-staged.path"
+SYSTEMD_SERVICE:${PN}-switchroot = "ostree-prepare-root.service"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0003-Added-support-for-duktape-as-JS-engine.patch b/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0003-Added-support-for-duktape-as-JS-engine.patch
index e44e4f6..b8562f8 100644
--- a/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0003-Added-support-for-duktape-as-JS-engine.patch
+++ b/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0003-Added-support-for-duktape-as-JS-engine.patch
@@ -1,15 +1,18 @@
-From eaecfb21e1bca42e99321cc731e21dbfc1ea0d0c Mon Sep 17 00:00:00 2001
+From 4af72493cb380ab5ce0dd7c5bcd25a8b5457d770 Mon Sep 17 00:00:00 2001
 From: Gustavo Lima Chaves <limachaves@gmail.com>
 Date: Tue, 25 Jan 2022 09:43:21 +0000
-Subject: [PATCH 3/3] Added support for duktape as JS engine
+Subject: [PATCH] Added support for duktape as JS engine
 
 Original author: Wu Xiaotian (@yetist)
 Resurrection author, runaway-killer author: Gustavo Lima Chaves (@limachaves)
 
 Signed-off-by: Mikko Rapeli <mikko.rapeli@bmw.de>
 
+Upstream-Status: Backport [c7fc4e1b61f0fd82fc697c19c604af7e9fb291a2]
+Dropped change to .gitlab-ci.yml and adapted configure.ac due to other
+patches in meta-oe.
+
 ---
- .gitlab-ci.yml                                |    1 +
  buildutil/ax_pthread.m4                       |  522 ++++++++
  configure.ac                                  |   34 +-
  docs/man/polkit.xml                           |    4 +-
@@ -23,16 +26,12 @@
  .../polkitbackendjsauthority.cpp              |  721 +----------
  .../etc/polkit-1/rules.d/10-testing.rules     |    6 +-
  .../test-polkitbackendjsauthority.c           |    2 +-
- 14 files changed, 2399 insertions(+), 678 deletions(-)
+ 13 files changed, 2398 insertions(+), 678 deletions(-)
  create mode 100644 buildutil/ax_pthread.m4
  create mode 100644 src/polkitbackend/polkitbackendcommon.c
  create mode 100644 src/polkitbackend/polkitbackendcommon.h
  create mode 100644 src/polkitbackend/polkitbackendduktapeauthority.c
 
-Upstream-Status: Backport [c7fc4e1b61f0fd82fc697c19c604af7e9fb291a2]
-Dropped change to .gitlab-ci.yml and adapted configure.ac due to other
-patches in meta-oe.
-
 diff --git a/buildutil/ax_pthread.m4 b/buildutil/ax_pthread.m4
 new file mode 100644
 index 0000000..9f35d13
@@ -603,7 +602,7 @@
 +CC="$PTHREAD_CC"
 +AC_CHECK_FUNCS([pthread_condattr_setclock])
 +
- AC_CHECK_FUNCS(clearenv fdatasync setnetgrent)
+ AC_CHECK_FUNCS(clearenv fdatasync)
  
  if test "x$GCC" = "xyes"; then
 @@ -581,6 +598,13 @@ echo "
@@ -3458,6 +3457,3 @@
    },
  
    {
--- 
-2.20.1
-
diff --git a/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0003-make-netgroup-support-optional.patch b/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0003-make-netgroup-support-optional.patch
deleted file mode 100644
index 1a268f2..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0003-make-netgroup-support-optional.patch
+++ /dev/null
@@ -1,250 +0,0 @@
-From 0c1debb380fee7f5b2bc62406e45856dc9c9e1a1 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 22 May 2019 13:18:55 -0700
-Subject: [PATCH] make netgroup support optional
-
-On at least Linux/musl and Linux/uclibc, netgroup
-support is not available.  PolKit fails to compile on these systems
-for that reason.
-
-This change makes netgroup support conditional on the presence of the
-setnetgrent(3) function which is required for the support to work.  If
-that function is not available on the system, an error will be returned
-to the administrator if unix-netgroup: is specified in configuration.
-
-Fixes bug 50145.
-
-Closes polkit/polkit#14.
-Signed-off-by: A. Wilcox <AWilcox@Wilcox-Tech.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
----
- configure.ac                                     |  2 +-
- src/polkit/polkitidentity.c                      | 16 ++++++++++++++++
- src/polkit/polkitunixnetgroup.c                  |  3 +++
- .../polkitbackendinteractiveauthority.c          | 14 ++++++++------
- src/polkitbackend/polkitbackendjsauthority.cpp   |  3 +++
- test/polkit/polkitidentitytest.c                 |  9 ++++++++-
- test/polkit/polkitunixnetgrouptest.c             |  3 +++
- .../test-polkitbackendjsauthority.c              |  2 ++
- 8 files changed, 44 insertions(+), 8 deletions(-)
-
-diff --git a/configure.ac b/configure.ac
-index b625743..d807086 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -100,7 +100,7 @@ AC_CHECK_LIB(expat,XML_ParserCreate,[EXPAT_LIBS="-lexpat"],
- 	     [AC_MSG_ERROR([Can't find expat library. Please install expat.])])
- AC_SUBST(EXPAT_LIBS)
- 
--AC_CHECK_FUNCS(clearenv fdatasync)
-+AC_CHECK_FUNCS(clearenv fdatasync setnetgrent)
- 
- if test "x$GCC" = "xyes"; then
-   LDFLAGS="-Wl,--as-needed $LDFLAGS"
-diff --git a/src/polkit/polkitidentity.c b/src/polkit/polkitidentity.c
-index 3aa1f7f..10e9c17 100644
---- a/src/polkit/polkitidentity.c
-+++ b/src/polkit/polkitidentity.c
-@@ -182,7 +182,15 @@ polkit_identity_from_string  (const gchar   *str,
-     }
-   else if (g_str_has_prefix (str, "unix-netgroup:"))
-     {
-+#ifndef HAVE_SETNETGRENT
-+      g_set_error (error,
-+                   POLKIT_ERROR,
-+                   POLKIT_ERROR_FAILED,
-+                   "Netgroups are not available on this machine ('%s')",
-+                   str);
-+#else
-       identity = polkit_unix_netgroup_new (str + sizeof "unix-netgroup:" - 1);
-+#endif
-     }
- 
-   if (identity == NULL && (error != NULL && *error == NULL))
-@@ -344,6 +352,13 @@ polkit_identity_new_for_gvariant (GVariant  *variant,
-       GVariant *v;
-       const char *name;
- 
-+#ifndef HAVE_SETNETGRENT
-+      g_set_error (error,
-+                   POLKIT_ERROR,
-+                   POLKIT_ERROR_FAILED,
-+                   "Netgroups are not available on this machine");
-+      goto out;
-+#else
-       v = lookup_asv (details_gvariant, "name", G_VARIANT_TYPE_STRING, error);
-       if (v == NULL)
-         {
-@@ -353,6 +368,7 @@ polkit_identity_new_for_gvariant (GVariant  *variant,
-       name = g_variant_get_string (v, NULL);
-       ret = polkit_unix_netgroup_new (name);
-       g_variant_unref (v);
-+#endif
-     }
-   else
-     {
-diff --git a/src/polkit/polkitunixnetgroup.c b/src/polkit/polkitunixnetgroup.c
-index 8a2b369..83f8d4a 100644
---- a/src/polkit/polkitunixnetgroup.c
-+++ b/src/polkit/polkitunixnetgroup.c
-@@ -194,6 +194,9 @@ polkit_unix_netgroup_set_name (PolkitUnixNetgroup *group,
- PolkitIdentity *
- polkit_unix_netgroup_new (const gchar *name)
- {
-+#ifndef HAVE_SETNETGRENT
-+  g_assert_not_reached();
-+#endif
-   g_return_val_if_fail (name != NULL, NULL);
-   return POLKIT_IDENTITY (g_object_new (POLKIT_TYPE_UNIX_NETGROUP,
-                                        "name", name,
-diff --git a/src/polkitbackend/polkitbackendinteractiveauthority.c b/src/polkitbackend/polkitbackendinteractiveauthority.c
-index 056d9a8..36c2f3d 100644
---- a/src/polkitbackend/polkitbackendinteractiveauthority.c
-+++ b/src/polkitbackend/polkitbackendinteractiveauthority.c
-@@ -2233,25 +2233,26 @@ get_users_in_net_group (PolkitIdentity                    *group,
-   GList *ret;
- 
-   ret = NULL;
-+#ifdef HAVE_SETNETGRENT
-   name = polkit_unix_netgroup_get_name (POLKIT_UNIX_NETGROUP (group));
- 
--#ifdef HAVE_SETNETGRENT_RETURN
-+# ifdef HAVE_SETNETGRENT_RETURN
-   if (setnetgrent (name) == 0)
-     {
-       g_warning ("Error looking up net group with name %s: %s", name, g_strerror (errno));
-       goto out;
-     }
--#else
-+# else
-   setnetgrent (name);
--#endif
-+# endif /* HAVE_SETNETGRENT_RETURN */
- 
-   for (;;)
-     {
--#if defined(HAVE_NETBSD) || defined(HAVE_OPENBSD)
-+# if defined(HAVE_NETBSD) || defined(HAVE_OPENBSD)
-       const char *hostname, *username, *domainname;
--#else
-+# else
-       char *hostname, *username, *domainname;
--#endif
-+# endif /* defined(HAVE_NETBSD) || defined(HAVE_OPENBSD) */
-       PolkitIdentity *user;
-       GError *error = NULL;
- 
-@@ -2282,6 +2283,7 @@ get_users_in_net_group (PolkitIdentity                    *group,
- 
-  out:
-   endnetgrent ();
-+#endif /* HAVE_SETNETGRENT */
-   return ret;
- }
- 
-diff --git a/src/polkitbackend/polkitbackendjsauthority.cpp b/src/polkitbackend/polkitbackendjsauthority.cpp
-index ca17108..41d8d5c 100644
---- a/src/polkitbackend/polkitbackendjsauthority.cpp
-+++ b/src/polkitbackend/polkitbackendjsauthority.cpp
-@@ -1520,6 +1520,7 @@ js_polkit_user_is_in_netgroup (JSContext  *cx,
- 
-   JS::CallArgs args = JS::CallArgsFromVp (argc, vp);
- 
-+#ifdef HAVE_SETNETGRENT
-   JS::RootedString usrstr (authority->priv->cx);
-   usrstr = args[0].toString();
-   user = JS_EncodeStringToUTF8 (cx, usrstr);
-@@ -1535,6 +1536,8 @@ js_polkit_user_is_in_netgroup (JSContext  *cx,
-       is_in_netgroup =  true;
-     }
- 
-+#endif
-+
-   ret = true;
- 
-   args.rval ().setBoolean (is_in_netgroup);
-diff --git a/test/polkit/polkitidentitytest.c b/test/polkit/polkitidentitytest.c
-index e91967b..e829aaa 100644
---- a/test/polkit/polkitidentitytest.c
-+++ b/test/polkit/polkitidentitytest.c
-@@ -19,6 +19,7 @@
-  * Author: Nikki VonHollen <vonhollen@google.com>
-  */
- 
-+#include "config.h"
- #include "glib.h"
- #include <polkit/polkit.h>
- #include <polkit/polkitprivate.h>
-@@ -145,11 +146,15 @@ struct ComparisonTestData comparison_test_data [] = {
-   {"unix-group:root", "unix-group:jane", FALSE},
-   {"unix-group:jane", "unix-group:jane", TRUE},
- 
-+#ifdef HAVE_SETNETGRENT
-   {"unix-netgroup:foo", "unix-netgroup:foo", TRUE},
-   {"unix-netgroup:foo", "unix-netgroup:bar", FALSE},
-+#endif
- 
-   {"unix-user:root", "unix-group:root", FALSE},
-+#ifdef HAVE_SETNETGRENT
-   {"unix-user:jane", "unix-netgroup:foo", FALSE},
-+#endif
- 
-   {NULL},
- };
-@@ -181,11 +186,13 @@ main (int argc, char *argv[])
-   g_test_add_data_func ("/PolkitIdentity/group_string_2", "unix-group:jane", test_string);
-   g_test_add_data_func ("/PolkitIdentity/group_string_3", "unix-group:users", test_string);
- 
-+#ifdef HAVE_SETNETGRENT
-   g_test_add_data_func ("/PolkitIdentity/netgroup_string", "unix-netgroup:foo", test_string);
-+  g_test_add_data_func ("/PolkitIdentity/netgroup_gvariant", "unix-netgroup:foo", test_gvariant);
-+#endif
- 
-   g_test_add_data_func ("/PolkitIdentity/user_gvariant", "unix-user:root", test_gvariant);
-   g_test_add_data_func ("/PolkitIdentity/group_gvariant", "unix-group:root", test_gvariant);
--  g_test_add_data_func ("/PolkitIdentity/netgroup_gvariant", "unix-netgroup:foo", test_gvariant);
- 
-   add_comparison_tests ();
- 
-diff --git a/test/polkit/polkitunixnetgrouptest.c b/test/polkit/polkitunixnetgrouptest.c
-index 3701ba1..e3352eb 100644
---- a/test/polkit/polkitunixnetgrouptest.c
-+++ b/test/polkit/polkitunixnetgrouptest.c
-@@ -19,6 +19,7 @@
-  * Author: Nikki VonHollen <vonhollen@google.com>
-  */
- 
-+#include "config.h"
- #include "glib.h"
- #include <polkit/polkit.h>
- #include <string.h>
-@@ -69,7 +70,9 @@ int
- main (int argc, char *argv[])
- {
-   g_test_init (&argc, &argv, NULL);
-+#ifdef HAVE_SETNETGRENT
-   g_test_add_func ("/PolkitUnixNetgroup/new", test_new);
-   g_test_add_func ("/PolkitUnixNetgroup/set_name", test_set_name);
-+#endif
-   return g_test_run ();
- }
-diff --git a/test/polkitbackend/test-polkitbackendjsauthority.c b/test/polkitbackend/test-polkitbackendjsauthority.c
-index f97e0e0..fc52149 100644
---- a/test/polkitbackend/test-polkitbackendjsauthority.c
-+++ b/test/polkitbackend/test-polkitbackendjsauthority.c
-@@ -137,12 +137,14 @@ test_get_admin_identities (void)
-         "unix-group:users"
-       }
-     },
-+#ifdef HAVE_SETNETGRENT
-     {
-       "net.company.action3",
-       {
-         "unix-netgroup:foo"
-       }
-     },
-+#endif
-   };
-   guint n;
- 
diff --git a/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0004-Make-netgroup-support-optional.patch b/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0004-Make-netgroup-support-optional.patch
new file mode 100644
index 0000000..fa273d4
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0004-Make-netgroup-support-optional.patch
@@ -0,0 +1,253 @@
+From 7ef2621ab7adcedc099ed39acfb73c6fa835cbc3 Mon Sep 17 00:00:00 2001
+From: "A. Wilcox" <AWilcox@Wilcox-Tech.com>
+Date: Sun, 15 May 2022 05:04:10 +0000
+Subject: [PATCH] Make netgroup support optional
+
+On at least Linux/musl and Linux/uclibc, netgroup support is not
+available.  PolKit fails to compile on these systems for that reason.
+
+This change makes netgroup support conditional on the presence of the
+setnetgrent(3) function which is required for the support to work.  If
+that function is not available on the system, an error will be returned
+to the administrator if unix-netgroup: is specified in configuration.
+
+(sam: rebased for Meson and Duktape.)
+
+Closes: https://gitlab.freedesktop.org/polkit/polkit/-/issues/14
+Closes: https://gitlab.freedesktop.org/polkit/polkit/-/issues/163
+Closes: https://gitlab.freedesktop.org/polkit/polkit/-/merge_requests/52
+Signed-off-by: A. Wilcox <AWilcox@Wilcox-Tech.com>
+
+Ported back the change in configure.ac (upstream removed autotools
+support).
+
+Upstream-Status: Backport [https://gitlab.freedesktop.org/polkit/polkit/-/commit/b57deee8178190a7ecc75290fa13cf7daabc2c66]
+Signed-off-by: Marta Rybczynska <marta.rybczynska@huawei.com>
+
+---
+ configure.ac                                    |  2 +-
+ meson.build                                     |  1 +
+ src/polkit/polkitidentity.c                     | 17 +++++++++++++++++
+ src/polkit/polkitunixnetgroup.c                 |  3 +++
+ .../polkitbackendinteractiveauthority.c         | 14 ++++++++------
+ src/polkitbackend/polkitbackendjsauthority.cpp  |  2 ++
+ test/polkit/polkitidentitytest.c                |  8 +++++++-
+ test/polkit/polkitunixnetgrouptest.c            |  2 ++
+ .../test-polkitbackendjsauthority.c             |  2 ++
+ 9 files changed, 43 insertions(+), 8 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index 59858df..5a7fc11 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -100,7 +100,7 @@ AC_CHECK_LIB(expat,XML_ParserCreate,[EXPAT_LIBS="-lexpat"],
+ 	     [AC_MSG_ERROR([Can't find expat library. Please install expat.])])
+ AC_SUBST(EXPAT_LIBS)
+ 
+-AC_CHECK_FUNCS(clearenv fdatasync)
++AC_CHECK_FUNCS(clearenv fdatasync setnetgrent)
+ 
+ if test "x$GCC" = "xyes"; then
+   LDFLAGS="-Wl,--as-needed $LDFLAGS"
+diff --git a/meson.build b/meson.build
+index 733bbff..d840926 100644
+--- a/meson.build
++++ b/meson.build
+@@ -82,6 +82,7 @@ config_h.set('_GNU_SOURCE', true)
+ check_functions = [
+   'clearenv',
+   'fdatasync',
++  'setnetgrent',
+ ]
+ 
+ foreach func: check_functions
+diff --git a/src/polkit/polkitidentity.c b/src/polkit/polkitidentity.c
+index 3aa1f7f..793f17d 100644
+--- a/src/polkit/polkitidentity.c
++++ b/src/polkit/polkitidentity.c
+@@ -182,7 +182,15 @@ polkit_identity_from_string  (const gchar   *str,
+     }
+   else if (g_str_has_prefix (str, "unix-netgroup:"))
+     {
++#ifndef HAVE_SETNETGRENT
++      g_set_error (error,
++                   POLKIT_ERROR,
++                   POLKIT_ERROR_FAILED,
++                   "Netgroups are not available on this machine ('%s')",
++                   str);
++#else
+       identity = polkit_unix_netgroup_new (str + sizeof "unix-netgroup:" - 1);
++#endif
+     }
+ 
+   if (identity == NULL && (error != NULL && *error == NULL))
+@@ -344,6 +352,14 @@ polkit_identity_new_for_gvariant (GVariant  *variant,
+       GVariant *v;
+       const char *name;
+ 
++#ifndef HAVE_SETNETGRENT
++      g_set_error (error,
++                   POLKIT_ERROR,
++                   POLKIT_ERROR_FAILED,
++                   "Netgroups are not available on this machine");
++      goto out;
++#else
++
+       v = lookup_asv (details_gvariant, "name", G_VARIANT_TYPE_STRING, error);
+       if (v == NULL)
+         {
+@@ -353,6 +369,7 @@ polkit_identity_new_for_gvariant (GVariant  *variant,
+       name = g_variant_get_string (v, NULL);
+       ret = polkit_unix_netgroup_new (name);
+       g_variant_unref (v);
++#endif
+     }
+   else
+     {
+diff --git a/src/polkit/polkitunixnetgroup.c b/src/polkit/polkitunixnetgroup.c
+index 8a2b369..83f8d4a 100644
+--- a/src/polkit/polkitunixnetgroup.c
++++ b/src/polkit/polkitunixnetgroup.c
+@@ -194,6 +194,9 @@ polkit_unix_netgroup_set_name (PolkitUnixNetgroup *group,
+ PolkitIdentity *
+ polkit_unix_netgroup_new (const gchar *name)
+ {
++#ifndef HAVE_SETNETGRENT
++  g_assert_not_reached();
++#endif
+   g_return_val_if_fail (name != NULL, NULL);
+   return POLKIT_IDENTITY (g_object_new (POLKIT_TYPE_UNIX_NETGROUP,
+                                        "name", name,
+diff --git a/src/polkitbackend/polkitbackendinteractiveauthority.c b/src/polkitbackend/polkitbackendinteractiveauthority.c
+index 056d9a8..36c2f3d 100644
+--- a/src/polkitbackend/polkitbackendinteractiveauthority.c
++++ b/src/polkitbackend/polkitbackendinteractiveauthority.c
+@@ -2233,25 +2233,26 @@ get_users_in_net_group (PolkitIdentity                    *group,
+   GList *ret;
+ 
+   ret = NULL;
++#ifdef HAVE_SETNETGRENT
+   name = polkit_unix_netgroup_get_name (POLKIT_UNIX_NETGROUP (group));
+ 
+-#ifdef HAVE_SETNETGRENT_RETURN
++# ifdef HAVE_SETNETGRENT_RETURN
+   if (setnetgrent (name) == 0)
+     {
+       g_warning ("Error looking up net group with name %s: %s", name, g_strerror (errno));
+       goto out;
+     }
+-#else
++# else
+   setnetgrent (name);
+-#endif
++# endif /* HAVE_SETNETGRENT_RETURN */
+ 
+   for (;;)
+     {
+-#if defined(HAVE_NETBSD) || defined(HAVE_OPENBSD)
++# if defined(HAVE_NETBSD) || defined(HAVE_OPENBSD)
+       const char *hostname, *username, *domainname;
+-#else
++# else
+       char *hostname, *username, *domainname;
+-#endif
++# endif /* defined(HAVE_NETBSD) || defined(HAVE_OPENBSD) */
+       PolkitIdentity *user;
+       GError *error = NULL;
+ 
+@@ -2282,6 +2283,7 @@ get_users_in_net_group (PolkitIdentity                    *group,
+ 
+  out:
+   endnetgrent ();
++#endif /* HAVE_SETNETGRENT */
+   return ret;
+ }
+ 
+diff --git a/src/polkitbackend/polkitbackendjsauthority.cpp b/src/polkitbackend/polkitbackendjsauthority.cpp
+index 5027815..bcb040c 100644
+--- a/src/polkitbackend/polkitbackendjsauthority.cpp
++++ b/src/polkitbackend/polkitbackendjsauthority.cpp
+@@ -1524,6 +1524,7 @@ js_polkit_user_is_in_netgroup (JSContext  *cx,
+ 
+   JS::CallArgs args = JS::CallArgsFromVp (argc, vp);
+ 
++#ifdef HAVE_SETNETGRENT
+   JS::RootedString usrstr (authority->priv->cx);
+   usrstr = args[0].toString();
+   user = JS_EncodeStringToUTF8 (cx, usrstr);
+@@ -1538,6 +1539,7 @@ js_polkit_user_is_in_netgroup (JSContext  *cx,
+     {
+       is_in_netgroup =  true;
+     }
++#endif
+ 
+   ret = true;
+ 
+diff --git a/test/polkit/polkitidentitytest.c b/test/polkit/polkitidentitytest.c
+index e91967b..2635c4c 100644
+--- a/test/polkit/polkitidentitytest.c
++++ b/test/polkit/polkitidentitytest.c
+@@ -145,11 +145,15 @@ struct ComparisonTestData comparison_test_data [] = {
+   {"unix-group:root", "unix-group:jane", FALSE},
+   {"unix-group:jane", "unix-group:jane", TRUE},
+ 
++#ifdef HAVE_SETNETGRENT
+   {"unix-netgroup:foo", "unix-netgroup:foo", TRUE},
+   {"unix-netgroup:foo", "unix-netgroup:bar", FALSE},
++#endif
+ 
+   {"unix-user:root", "unix-group:root", FALSE},
++#ifdef HAVE_SETNETGRENT
+   {"unix-user:jane", "unix-netgroup:foo", FALSE},
++#endif
+ 
+   {NULL},
+ };
+@@ -181,11 +185,13 @@ main (int argc, char *argv[])
+   g_test_add_data_func ("/PolkitIdentity/group_string_2", "unix-group:jane", test_string);
+   g_test_add_data_func ("/PolkitIdentity/group_string_3", "unix-group:users", test_string);
+ 
++#ifdef HAVE_SETNETGRENT
+   g_test_add_data_func ("/PolkitIdentity/netgroup_string", "unix-netgroup:foo", test_string);
++  g_test_add_data_func ("/PolkitIdentity/netgroup_gvariant", "unix-netgroup:foo", test_gvariant);
++#endif
+ 
+   g_test_add_data_func ("/PolkitIdentity/user_gvariant", "unix-user:root", test_gvariant);
+   g_test_add_data_func ("/PolkitIdentity/group_gvariant", "unix-group:root", test_gvariant);
+-  g_test_add_data_func ("/PolkitIdentity/netgroup_gvariant", "unix-netgroup:foo", test_gvariant);
+ 
+   add_comparison_tests ();
+ 
+diff --git a/test/polkit/polkitunixnetgrouptest.c b/test/polkit/polkitunixnetgrouptest.c
+index 3701ba1..e1d211e 100644
+--- a/test/polkit/polkitunixnetgrouptest.c
++++ b/test/polkit/polkitunixnetgrouptest.c
+@@ -69,7 +69,9 @@ int
+ main (int argc, char *argv[])
+ {
+   g_test_init (&argc, &argv, NULL);
++#ifdef HAVE_SETNETGRENT
+   g_test_add_func ("/PolkitUnixNetgroup/new", test_new);
+   g_test_add_func ("/PolkitUnixNetgroup/set_name", test_set_name);
++#endif
+   return g_test_run ();
+ }
+diff --git a/test/polkitbackend/test-polkitbackendjsauthority.c b/test/polkitbackend/test-polkitbackendjsauthority.c
+index f97e0e0..fc52149 100644
+--- a/test/polkitbackend/test-polkitbackendjsauthority.c
++++ b/test/polkitbackend/test-polkitbackendjsauthority.c
+@@ -137,12 +137,14 @@ test_get_admin_identities (void)
+         "unix-group:users"
+       }
+     },
++#ifdef HAVE_SETNETGRENT
+     {
+       "net.company.action3",
+       {
+         "unix-netgroup:foo"
+       }
+     },
++#endif
+   };
+   guint n;
+ 
diff --git a/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0005-Make-netgroup-support-optional-duktape.patch b/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0005-Make-netgroup-support-optional-duktape.patch
new file mode 100644
index 0000000..12988ad
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/polkit/polkit/0005-Make-netgroup-support-optional-duktape.patch
@@ -0,0 +1,34 @@
+From 792f8e2151c120ec51b50a4098e4f9642409cbec Mon Sep 17 00:00:00 2001
+From: Marta Rybczynska <rybczynska@gmail.com>
+Date: Fri, 29 Jul 2022 11:52:59 +0200
+Subject: [PATCH] Make netgroup support optional
+
+This patch adds a fragment of the netgroup patch to apply on the duktape-related
+code. This change is needed to compile with duktape+musl.
+
+Upstream-Status: Backport [https://gitlab.freedesktop.org/polkit/polkit/-/commit/b57deee8178190a7ecc75290fa13cf7daabc2c66]
+Signed-off-by: Marta Rybczynska <martarybczynska@huawei.com>
+---
+ src/polkitbackend/polkitbackendduktapeauthority.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/src/polkitbackend/polkitbackendduktapeauthority.c b/src/polkitbackend/polkitbackendduktapeauthority.c
+index c89dbcf..58a5936 100644
+--- a/src/polkitbackend/polkitbackendduktapeauthority.c
++++ b/src/polkitbackend/polkitbackendduktapeauthority.c
+@@ -1036,6 +1036,7 @@ js_polkit_user_is_in_netgroup (duk_context *cx)
+   user = duk_require_string (cx, 0);
+   netgroup = duk_require_string (cx, 1);
+ 
++#ifdef HAVE_SETNETGRENT
+   if (innetgr (netgroup,
+                NULL,  /* host */
+                user,
+@@ -1043,6 +1044,7 @@ js_polkit_user_is_in_netgroup (duk_context *cx)
+     {
+       is_in_netgroup = TRUE;
+     }
++#endif
+ 
+   duk_push_boolean (cx, is_in_netgroup);
+   return 1;
diff --git a/meta-openembedded/meta-oe/recipes-extended/polkit/polkit_0.119.bb b/meta-openembedded/meta-oe/recipes-extended/polkit/polkit_0.119.bb
index bf16005..c4d3d25 100644
--- a/meta-openembedded/meta-oe/recipes-extended/polkit/polkit_0.119.bb
+++ b/meta-openembedded/meta-oe/recipes-extended/polkit/polkit_0.119.bb
@@ -34,14 +34,16 @@
     file://0003-jsauthority-ensure-to-call-JS_Init-and-JS_ShutDown-e.patch \
 "
 DUKTAPE_PATCHES = "file://0003-Added-support-for-duktape-as-JS-engine.patch"
+DUKTAPE_NG_PATCHES = "file://0005-Make-netgroup-support-optional-duktape.patch"
 PAM_SRC_URI = "file://polkit-1_pam.patch"
 SRC_URI = "http://www.freedesktop.org/software/polkit/releases/polkit-${PV}.tar.gz \
            ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \
            ${@bb.utils.contains('PACKAGECONFIG', 'mozjs', '${MOZJS_PATCHES}', '', d)} \
            ${@bb.utils.contains('PACKAGECONFIG', 'duktape', '${DUKTAPE_PATCHES}', '', d)} \
-           file://0003-make-netgroup-support-optional.patch \
            file://0001-pkexec-local-privilege-escalation-CVE-2021-4034.patch \
            file://0002-CVE-2021-4115-GHSL-2021-077-fix.patch \
+           file://0004-Make-netgroup-support-optional.patch \
+           ${@bb.utils.contains('PACKAGECONFIG', 'duktape', '${DUKTAPE_NG_PATCHES}', '', d)} \
            "
 SRC_URI[sha256sum] = "c8579fdb86e94295404211285fee0722ad04893f0213e571bd75c00972fd1f5c"
 
@@ -71,7 +73,7 @@
 FILES:${PN}-examples = "${bindir}/*example*"
 
 USERADD_PACKAGES = "${PN}"
-USERADD_PARAM:${PN} = "--system --no-create-home --user-group --home-dir ${sysconfdir}/${BPN}-1 polkitd"
+USERADD_PARAM:${PN} = "--system --no-create-home --user-group --home-dir ${sysconfdir}/${BPN}-1 --shell /bin/nologin polkitd"
 
 SYSTEMD_SERVICE:${PN} = "${BPN}.service"
 SYSTEMD_AUTO_ENABLE = "disable"
diff --git a/meta-openembedded/meta-oe/recipes-extended/redis/redis-7/GNU_SOURCE.patch b/meta-openembedded/meta-oe/recipes-extended/redis/redis-7/GNU_SOURCE-7.patch
similarity index 100%
rename from meta-openembedded/meta-oe/recipes-extended/redis/redis-7/GNU_SOURCE.patch
rename to meta-openembedded/meta-oe/recipes-extended/redis/redis-7/GNU_SOURCE-7.patch
diff --git a/meta-openembedded/meta-oe/recipes-extended/redis/redis_7.0.4.bb b/meta-openembedded/meta-oe/recipes-extended/redis/redis_7.0.4.bb
index 993ff34..cde32e4 100644
--- a/meta-openembedded/meta-oe/recipes-extended/redis/redis_7.0.4.bb
+++ b/meta-openembedded/meta-oe/recipes-extended/redis/redis_7.0.4.bb
@@ -6,7 +6,7 @@
 LIC_FILES_CHKSUM = "file://COPYING;md5=8ffdd6c926faaece928cf9d9640132d2"
 DEPENDS = "readline lua ncurses"
 
-FILESPATH =. "${FILE_DIRNAME}/${PN}-7:"
+FILESPATH =. "${FILE_DIRNAME}/${BPN}-7:"
 
 SRC_URI = "http://download.redis.io/releases/${BP}.tar.gz \
            file://redis.conf \
@@ -16,7 +16,7 @@
            file://lua-update-Makefile-to-use-environment-build-setting.patch \
            file://oe-use-libc-malloc.patch \
            file://0001-src-Do-not-reset-FINAL_LIBS.patch \
-           file://GNU_SOURCE.patch \
+           file://GNU_SOURCE-7.patch \
            file://0006-Define-correct-gregs-for-RISCV32.patch \
            "
 SRC_URI[sha256sum] = "f0e65fda74c44a3dd4fa9d512d4d4d833dd0939c934e946a5c622a630d057f2f"
diff --git a/meta-openembedded/meta-oe/recipes-extended/zlog/zlog_1.2.15.bb b/meta-openembedded/meta-oe/recipes-extended/zlog/zlog_1.2.15.bb
deleted file mode 100644
index 0fda3e6..0000000
--- a/meta-openembedded/meta-oe/recipes-extended/zlog/zlog_1.2.15.bb
+++ /dev/null
@@ -1,17 +0,0 @@
-DESCRIPTION = "Zlog is a pure C logging library"
-HOMEPAGE = "https://github.com/HardySimpson/zlog"
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-SRCREV = "876099f3c66033f3de11d79f63814766b1021dbe"
-SRC_URI = "git://github.com/HardySimpson/zlog;branch=master;protocol=https"
-
-S = "${WORKDIR}/git"
-
-inherit pkgconfig
-
-EXTRA_OEMAKE = "CC='${CC}' LD='${LD}' LIBRARY_PATH=${baselib}"
-
-do_install() {
-    oe_runmake install PREFIX=${D}${exec_prefix} INSTALL=install
-}
diff --git a/meta-openembedded/meta-oe/recipes-extended/zlog/zlog_1.2.16.bb b/meta-openembedded/meta-oe/recipes-extended/zlog/zlog_1.2.16.bb
new file mode 100644
index 0000000..b75802f
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-extended/zlog/zlog_1.2.16.bb
@@ -0,0 +1,17 @@
+DESCRIPTION = "Zlog is a pure C logging library"
+HOMEPAGE = "https://github.com/HardySimpson/zlog"
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+SRCREV = "dc2c284664757fce6ef8f96f8b3ab667a53ef489"
+SRC_URI = "git://github.com/HardySimpson/zlog;branch=master;protocol=https"
+
+S = "${WORKDIR}/git"
+
+inherit pkgconfig
+
+EXTRA_OEMAKE = "CC='${CC}' LD='${LD}' LIBRARY_PATH=${baselib}"
+
+do_install() {
+    oe_runmake install PREFIX=${D}${exec_prefix} INSTALL=install
+}
diff --git a/meta-openembedded/meta-oe/recipes-graphics/feh/feh_3.9.1.bb b/meta-openembedded/meta-oe/recipes-graphics/feh/feh_3.9.1.bb
new file mode 100644
index 0000000..d78fe7e
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-graphics/feh/feh_3.9.1.bb
@@ -0,0 +1,31 @@
+SUMMARY = "X11 image viewer aimed mostly at console users"
+AUTHOR = "Tom Gilbert & Daniel Friesel"
+HOMEPAGE = "https://feh.finalrewind.org/"
+SECTION = "x11/utils"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=f91bd06901085c94bdc50649d98c5059"
+DEPENDS = "\
+    imlib2 \
+    virtual/libx11 libxt\
+"
+
+SRC_URI = "https://feh.finalrewind.org/feh-${PV}.tar.bz2"
+SRC_URI[sha256sum] = "455c92711b588af149b945edc5c145f3e9aa137ed9689dabed49d5e4acac75fa"
+
+inherit mime-xdg features_check
+# depends on virtual/libx11
+REQUIRED_DISTRO_FEATURES = "x11"
+
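+# Disable optional libcurl (loading images over HTTP) and Xinerama support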
+EXTRA_OEMAKE = "curl=0 xinerama=0 PREFIX=/usr"
+
+do_compile () {
+     oe_runmake
+}
+
+do_install () {
+     oe_runmake install app=1 'DESTDIR=${D}' 'ICON_PREFIX=${D}${datadir}/icons'
+}
+
+RDEPENDS:${PN} += "imlib2-loaders"
+
+FILES:${PN} += "${datadir}/icons"
diff --git a/meta-openembedded/meta-oe/recipes-graphics/feh/feh_3.9.bb b/meta-openembedded/meta-oe/recipes-graphics/feh/feh_3.9.bb
deleted file mode 100644
index e442266..0000000
--- a/meta-openembedded/meta-oe/recipes-graphics/feh/feh_3.9.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-SUMMARY = "X11 image viewer aimed mostly at console users"
-AUTHOR = "Tom Gilbert & Daniel Friesel"
-HOMEPAGE = "https://feh.finalrewind.org/"
-SECTION = "x11/utils"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=f91bd06901085c94bdc50649d98c5059"
-DEPENDS = "\
-    imlib2 \
-    virtual/libx11 libxt\
-"
-
-SRC_URI = "https://feh.finalrewind.org/feh-${PV}.tar.bz2"
-SRC_URI[sha256sum] = "8649962c41d2c7ec4cc3f438eb327638a1820ad5a66df6a9995964601ae6bca0"
-
-inherit mime-xdg features_check
-# depends on virtual/libx11
-REQUIRED_DISTRO_FEATURES = "x11"
-
-EXTRA_OEMAKE = "curl=0 xinerama=0 PREFIX=/usr"
-
-do_compile () {
-     oe_runmake
-}
-
-do_install () {
-     oe_runmake install app=1 'DESTDIR=${D}' 'ICON_PREFIX=${D}${datadir}/icons'
-}
-
-RDEPENDS:${PN} += "imlib2-loaders"
-
-FILES:${PN} += "${datadir}/icons"
diff --git a/meta-openembedded/meta-oe/recipes-graphics/libsdl/libsdl2-ttf_2.0.18.bb b/meta-openembedded/meta-oe/recipes-graphics/libsdl/libsdl2-ttf_2.0.18.bb
deleted file mode 100644
index aae803f..0000000
--- a/meta-openembedded/meta-oe/recipes-graphics/libsdl/libsdl2-ttf_2.0.18.bb
+++ /dev/null
@@ -1,27 +0,0 @@
-SUMMARY = "Simple DirectMedia Layer truetype font library"
-SECTION = "libs"
-DEPENDS = "libsdl2 freetype virtual/egl"
-LICENSE = "Zlib"
-LIC_FILES_CHKSUM = "file://COPYING.txt;md5=e98cfd01ca78f683e9d035795810ce87"
-
-SRC_URI = "http://www.libsdl.org/projects/SDL_ttf/release/SDL2_ttf-${PV}.tar.gz \
-           file://automake_foreign.patch \
-           "
-SRC_URI[sha256sum] = "7234eb8883514e019e7747c703e4a774575b18d435c22a4a29d068cb768a2251"
-
-S = "${WORKDIR}/SDL2_ttf-${PV}"
-
-inherit autotools pkgconfig features_check
-
-# links to libGL.so
-REQUIRED_DISTRO_FEATURES += "opengl"
-
-do_configure:prepend() {
-    # Removing these files fixes a libtool version mismatch.
-    MACROS="libtool.m4 lt~obsolete.m4 ltoptions.m4 ltsugar.m4 ltversion.m4"
-
-    for i in ${MACROS}; do
-        rm -f ${S}/acinclude/$i
-    done
-}
-ASNEEDED = ""
diff --git a/meta-openembedded/meta-oe/recipes-graphics/libsdl/libsdl2-ttf_2.20.1.bb b/meta-openembedded/meta-oe/recipes-graphics/libsdl/libsdl2-ttf_2.20.1.bb
new file mode 100644
index 0000000..b8732b4
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-graphics/libsdl/libsdl2-ttf_2.20.1.bb
@@ -0,0 +1,32 @@
+SUMMARY = "Simple DirectMedia Layer truetype font library"
+SECTION = "libs"
+DEPENDS = "libsdl2 freetype virtual/egl"
+LICENSE = "Zlib"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=771dca8728b18d39b130e19b36514371"
+
+SRC_URI = " \
+    git://github.com/libsdl-org/SDL_ttf.git;branch=release-2.20.x;protocol=https \
+    git://github.com/libsdl-org/freetype.git;branch=VER-2-12-1-SDL;destsuffix=git/external/freetype;name=freetype;protocol=https \
+    git://github.com/libsdl-org/harfbuzz.git;branch=2.9.1-SDL;destsuffix=git/external/harfbuzz;name=harfbuzz;protocol=https \
+    file://automake_foreign.patch \
+"
+SRCREV = "0a652b598625d16ea7016665095cb1e9bce9ef4f"
+SRCREV_freetype = "6fc77cee03e078e97afcee0c0e06a2d3274b9a29"
+SRCREV_harfbuzz = "6022fe2f68d028ee178284f297b3902ffdf65b03"
+
+S = "${WORKDIR}/git"
+
+inherit autotools pkgconfig features_check
+
+# links to libGL.so
+REQUIRED_DISTRO_FEATURES += "opengl"
+
+do_configure:prepend() {
+    # Removing these files fixes a libtool version mismatch.
+    MACROS="libtool.m4 lt~obsolete.m4 ltoptions.m4 ltsugar.m4 ltversion.m4"
+
+    for i in ${MACROS}; do
+        rm -f ${S}/acinclude/$i
+    done
+}
+ASNEEDED = ""
diff --git a/meta-openembedded/meta-oe/recipes-graphics/wayland/waylandpp_1.0.0.bb b/meta-openembedded/meta-oe/recipes-graphics/wayland/waylandpp_1.0.0.bb
new file mode 100644
index 0000000..f075f2a
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-graphics/wayland/waylandpp_1.0.0.bb
@@ -0,0 +1,37 @@
+SUMMARY = " C++ binding for Wayland using the most modern C++ technology"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=3aae28cc66d61975114c2b14df215407"
+
+SRC_URI = "git://github.com/NilsBrause/waylandpp.git;protocol=https;branch=master"
+
+DEPENDS = "pugixml"
+DEPENDS:append:class-target = " waylandpp-native wayland virtual/egl virtual/libgles2"
+
+S = "${WORKDIR}/git"
+SRCREV = "4321ed5c7b4bffa41b8a2a13dc7f3ece1191f4f3"
+
+inherit cmake pkgconfig
+
+EXTRA_OECMAKE:class-native = " \
+	-DBUILD_SCANNER=ON \
+	-DBUILD_LIBRARIES=OFF \
+	-DBUILD_DOCUMENTATION=OFF \
+	-DCMAKE_BUILD_TYPE=Release \
+	-DCMAKE_VERBOSE_MAKEFILE=TRUE \
+"
+
+EXTRA_OECMAKE:class-target = " \
+	-DBUILD_SCANNER=ON \
+	-DBUILD_LIBRARIES=ON \
+	-DBUILD_DOCUMENTATION=OFF \
+	-DBUILD_EXAMPLES=OFF \
+	-DOPENGL_LIBRARY="-lEGL -lGLESv2" \
+	-DOPENGL_opengl_LIBRARY=-lEGL \
+	-DOPENGL_glx_LIBRARY=-lEGL \
+	-DWAYLAND_SCANNERPP="${STAGING_BINDIR_NATIVE}/wayland-scanner++" \
+	-DCMAKE_BUILD_TYPE=Release \
+	-DCMAKE_VERBOSE_MAKEFILE=TRUE \
+	-DCMAKE_EXE_LINKER_FLAGS="-Wl,--enable-new-dtags" \
+"
+
+BBCLASSEXTEND += "native nativesdk"
diff --git a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xfontsel_1.0.6.bb b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xfontsel_1.0.6.bb
deleted file mode 100644
index e926024..0000000
--- a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xfontsel_1.0.6.bb
+++ /dev/null
@@ -1,13 +0,0 @@
-require recipes-graphics/xorg-app/xorg-app-common.inc
-
-SUMMARY = "xfontsel provides point and click selection of X11 font names"
-HOMEPAGE = "http://cgit.freedesktop.org/xorg/app/xfontsel/"
-SECTION = "x11/app"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4669d2703c60d585cc29ba7e9a69bcb3"
-DEPENDS += " libxaw"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=4669d2703c60d585cc29ba7e9a69bcb3"
-
-SRC_URI[md5sum] = "13150ff98846bf6d9a14bee00697fa47"
-SRC_URI[sha256sum] = "25aa0b7c4262f5e99c07c2b96e00e4eb25b7e53f94fa803942af9d0e8da3001c"
diff --git a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xfontsel_1.1.0.bb b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xfontsel_1.1.0.bb
new file mode 100644
index 0000000..4c31d0b
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xfontsel_1.1.0.bb
@@ -0,0 +1,13 @@
+require recipes-graphics/xorg-app/xorg-app-common.inc
+
+SUMMARY = "xfontsel provides point and click selection of X11 font names"
+HOMEPAGE = "http://cgit.freedesktop.org/xorg/app/xfontsel/"
+SECTION = "x11/app"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4669d2703c60d585cc29ba7e9a69bcb3"
+DEPENDS += " libxaw"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=4669d2703c60d585cc29ba7e9a69bcb3"
+
+SRC_URI[sha256sum] = "17052c3357bbfe44b8468675ae3d099c2427ba9fcac10540aef524ae4d77d1b4"
+SRC_URI_EXT = "xz"
diff --git a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xmessage_1.0.5.bb b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xmessage_1.0.5.bb
deleted file mode 100644
index 23cfb26..0000000
--- a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xmessage_1.0.5.bb
+++ /dev/null
@@ -1,10 +0,0 @@
-require recipes-graphics/xorg-app/xorg-app-common.inc
-
-SUMMARY = "Display a message or query in a window"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=73c7f696a728de728d7446cbca814cc5"
-
-DEPENDS += "libxaw"
-
-SRC_URI[md5sum] = "e50ffae17eeb3943079620cb78f5ce0b"
-SRC_URI[sha256sum] = "373dfb81e7a6f06d3d22485a12fcde6e255d58c6dee1bbaeb00c7d0caa9b2029"
diff --git a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xmessage_1.0.6.bb b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xmessage_1.0.6.bb
new file mode 100644
index 0000000..cb12383
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xmessage_1.0.6.bb
@@ -0,0 +1,10 @@
+require recipes-graphics/xorg-app/xorg-app-common.inc
+
+SUMMARY = "Display a message or query in a window"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=73c7f696a728de728d7446cbca814cc5"
+
+DEPENDS += "libxaw"
+
+SRC_URI[sha256sum] = "d2eac545f137156b960877e052fcc8e29795ed735c02f7690fd7b439e6846a12"
+SRC_URI_EXT = "xz"
diff --git a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xrefresh_1.0.6.bb b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xrefresh_1.0.6.bb
deleted file mode 100644
index 99dc3b5..0000000
--- a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xrefresh_1.0.6.bb
+++ /dev/null
@@ -1,13 +0,0 @@
-require recipes-graphics/xorg-app/xorg-app-common.inc
-
-SUMMARY = "X.Org X11 X client utilities"
-HOMEPAGE = "http://cgit.freedesktop.org/xorg/app/xrefresh/"
-DESCRIPTION = "xrefresh - refresh all or part of an X screen"
-SECTION = "x11/app"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=dad633bce9c3cd0e3abf72a16e0057cf"
-
-BBCLASSEXTEND = "native"
-
-SRC_URI[md5sum] = "c56fa4adbeed1ee5173f464a4c4a61a6"
-SRC_URI[sha256sum] = "287dfb9bb7e8d780d07e672e3252150850869cb550958ed5f8401f0835cd6353"
diff --git a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xrefresh_1.0.7.bb b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xrefresh_1.0.7.bb
new file mode 100644
index 0000000..6f6ad73
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xrefresh_1.0.7.bb
@@ -0,0 +1,13 @@
+require recipes-graphics/xorg-app/xorg-app-common.inc
+
+SUMMARY = "X.Org X11 X client utilities"
+HOMEPAGE = "http://cgit.freedesktop.org/xorg/app/xrefresh/"
+DESCRIPTION = "xrefresh - refresh all or part of an X screen"
+SECTION = "x11/app"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=dad633bce9c3cd0e3abf72a16e0057cf"
+
+BBCLASSEXTEND = "native"
+
+SRC_URI[sha256sum] = "a9f1d635f2f42283d0174e94d09ab69190c227faa5ab542bfe15ed9607771b1e"
+SRC_URI_EXT = "xz"
diff --git a/meta-openembedded/meta-oe/recipes-multimedia/libass/libass_0.14.0.bb b/meta-openembedded/meta-oe/recipes-multimedia/libass/libass_0.14.0.bb
deleted file mode 100644
index 0e62307..0000000
--- a/meta-openembedded/meta-oe/recipes-multimedia/libass/libass_0.14.0.bb
+++ /dev/null
@@ -1,30 +0,0 @@
-DESCRIPTION = "libass is a portable subtitle renderer for the ASS/SSA (Advanced Substation Alpha/Substation Alpha) subtitle format. It is mostly compatible with VSFilter."
-HOMEPAGE = "https://github.com/libass/libass"
-SECTION = "libs/multimedia"
-
-LICENSE = "ISC"
-LIC_FILES_CHKSUM = "file://COPYING;md5=a42532a0684420bdb15556c3cdd49a75"
-
-DEPENDS = "enca fontconfig freetype libpng fribidi"
-
-SRC_URI = "git://github.com/libass/libass.git;branch=master;protocol=https"
-SRCREV = "73284b676b12b47e17af2ef1b430527299e10c17"
-S = "${WORKDIR}/git"
-
-inherit autotools pkgconfig
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[harfbuzz] = "--enable-harfbuzz,--disable-harfbuzz,harfbuzz"
-
-EXTRA_OECONF = " \
-    --enable-fontconfig \
-"
-
-# Disable compiling with ASM for x86 to avoid textrel
-EXTRA_OECONF:append:x86 = " --disable-asm"
-
-PACKAGES =+ "${PN}-tests"
-
-FILES:${PN}-tests = " \
-    ${libdir}/test/test \
-"
diff --git a/meta-openembedded/meta-oe/recipes-multimedia/libass/libass_0.16.0.bb b/meta-openembedded/meta-oe/recipes-multimedia/libass/libass_0.16.0.bb
new file mode 100644
index 0000000..6befd31
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-multimedia/libass/libass_0.16.0.bb
@@ -0,0 +1,27 @@
+DESCRIPTION = "libass is a portable subtitle renderer for the ASS/SSA (Advanced Substation Alpha/Substation Alpha) subtitle format. It is mostly compatible with VSFilter."
+HOMEPAGE = "https://github.com/libass/libass"
+SECTION = "libs/multimedia"
+
+LICENSE = "ISC"
+LIC_FILES_CHKSUM = "file://COPYING;md5=a42532a0684420bdb15556c3cdd49a75"
+
+DEPENDS = "fontconfig freetype fribidi harfbuzz"
+
+SRC_URI = "git://github.com/libass/libass.git;protocol=https;branch=master"
+SRCREV = "1af6240c5d1e499326146e0b88c987e626b13c23"
+S = "${WORKDIR}/git"
+
+inherit autotools pkgconfig
+
+PACKAGECONFIG[asm] = "--enable-asm,--disable-asm,nasm-native"
+# use larger tiles in the rasterizer (better performance, slightly worse quality)
+PACKAGECONFIG[largetiles] = "--enable-large-tiles,--disable-large-tiles"
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG:append:x86-64 = " asm"
+
+PACKAGES =+ "${PN}-tests"
+
+FILES:${PN}-tests = " \
+    ${libdir}/test/test \
+"
diff --git a/meta-openembedded/meta-oe/recipes-security/audit/audit/0001-Replace-__attribute_malloc__-with-__attribute__-__ma.patch b/meta-openembedded/meta-oe/recipes-security/audit/audit/0001-Replace-__attribute_malloc__-with-__attribute__-__ma.patch
new file mode 100644
index 0000000..c2080e1
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-security/audit/audit/0001-Replace-__attribute_malloc__-with-__attribute__-__ma.patch
@@ -0,0 +1,34 @@
+From 79c8d6a2755c9dfa00a5e86378e89a94eef0504d Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 9 Aug 2022 23:57:03 -0700
+Subject: [PATCH] Replace __attribute_malloc__ with 
+ __attribute__((__malloc__))
+
+__attribute_malloc__ is not available on musl
+
+Fixes
+| ../../git/auparse/auparse.h:54:2: error: expected function body after function declarator
+|         __attribute_malloc__ __attr_dealloc (auparse_destroy, 1);
+|         ^
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ auparse/auparse.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/auparse/auparse.h b/auparse/auparse.h
+index 95cf256d..c7dbe5ff 100644
+--- a/auparse/auparse.h
++++ b/auparse/auparse.h
+@@ -51,7 +51,7 @@ typedef void (*auparse_callback_ptr)(auparse_state_t *au,
+ void auparse_destroy(auparse_state_t *au);
+ void auparse_destroy_ext(auparse_state_t *au, auparse_destroy_what_t what);
+ auparse_state_t *auparse_init(ausource_t source, const void *b)
+-	__attribute_malloc__ __attr_dealloc (auparse_destroy, 1);
++	__attribute__((__malloc__)) __attr_dealloc (auparse_destroy, 1);
+ int auparse_new_buffer(auparse_state_t *au, const char *data, size_t data_len)
+ 	__attr_access ((__read_only__, 2, 3));
+ int auparse_feed(auparse_state_t *au, const char *data, size_t data_len)
+-- 
+2.37.1
+
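
A minimal C sketch of the substitution the patch above makes, assuming a glibc-style toolchain for the "before" spelling: glibc's <sys/cdefs.h> provides the __attribute_malloc__ shorthand, musl does not, so the compiler-native form builds against both C libraries. The function below is hypothetical and not part of the audit sources.

    #include <stdlib.h>

    /* portable spelling used by the patch; tells the compiler the returned
     * pointer does not alias anything else and is owned (freed) by the caller */
    __attribute__((__malloc__)) char *make_buffer(size_t n)
    {
        return malloc(n);
    }
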
diff --git a/meta-openembedded/meta-oe/recipes-security/audit/audit/Fixed-swig-host-contamination-issue.patch b/meta-openembedded/meta-oe/recipes-security/audit/audit/Fixed-swig-host-contamination-issue.patch
index 740bcb5..b023c80 100644
--- a/meta-openembedded/meta-oe/recipes-security/audit/audit/Fixed-swig-host-contamination-issue.patch
+++ b/meta-openembedded/meta-oe/recipes-security/audit/audit/Fixed-swig-host-contamination-issue.patch
@@ -18,11 +18,9 @@
  bindings/swig/src/auditswig.i     | 2 +-
  2 files changed, 3 insertions(+), 2 deletions(-)
 
-diff --git a/bindings/swig/python3/Makefile.am b/bindings/swig/python3/Makefile.am
-index dd9d934..61b486d 100644
 --- a/bindings/swig/python3/Makefile.am
 +++ b/bindings/swig/python3/Makefile.am
-@@ -22,6 +22,7 @@
+@@ -23,6 +23,7 @@
  CONFIG_CLEAN_FILES = *.loT *.rej *.orig
  AM_CFLAGS = -fPIC -DPIC -fno-strict-aliasing $(PYTHON3_CFLAGS)
  AM_CPPFLAGS = -I. -I$(top_builddir) -I${top_srcdir}/lib $(PYTHON3_INCLUDES)
@@ -30,7 +28,7 @@
  LIBS = $(top_builddir)/lib/libaudit.la
  SWIG_FLAGS = -python -py3 -modern
  SWIG_INCLUDES = -I. -I$(top_builddir) -I${top_srcdir}/lib $(PYTHON3_INCLUDES)
-@@ -36,7 +37,7 @@ _audit_la_DEPENDENCIES =${top_srcdir}/lib/libaudit.h ${top_builddir}/lib/libaudi
+@@ -37,7 +38,7 @@ _audit_la_DEPENDENCIES =${top_srcdir}/li
  _audit_la_LIBADD = ${top_builddir}/lib/libaudit.la
  nodist__audit_la_SOURCES  = audit_wrap.c
  audit.py audit_wrap.c: ${srcdir}/../src/auditswig.i 
@@ -39,8 +37,6 @@
  
  CLEANFILES = audit.py* audit_wrap.c *~
  
-diff --git a/bindings/swig/src/auditswig.i b/bindings/swig/src/auditswig.i
-index 21aafca..dd0f62c 100644
 --- a/bindings/swig/src/auditswig.i
 +++ b/bindings/swig/src/auditswig.i
 @@ -39,7 +39,7 @@ signed
@@ -48,10 +44,7 @@
  typedef unsigned __u32;
  typedef unsigned uid_t;
 -%include "/usr/include/linux/audit.h"
-+%include "linux/audit.h"
++%include "../lib/audit.h"
  #define __extension__ /*nothing*/
  %include <stdint.i>
  %include "../lib/libaudit.h"
--- 
-2.17.1
-
diff --git a/meta-openembedded/meta-oe/recipes-security/audit/audit_3.0.7.bb b/meta-openembedded/meta-oe/recipes-security/audit/audit_3.0.7.bb
deleted file mode 100644
index d77aec2..0000000
--- a/meta-openembedded/meta-oe/recipes-security/audit/audit_3.0.7.bb
+++ /dev/null
@@ -1,108 +0,0 @@
-SUMMARY = "User space tools for kernel auditing"
-DESCRIPTION = "The audit package contains the user space utilities for \
-storing and searching the audit records generated by the audit subsystem \
-in the Linux kernel."
-HOMEPAGE = "http://people.redhat.com/sgrubb/audit/"
-SECTION = "base"
-LICENSE = "GPL-2.0-or-later & LGPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f"
-
-SRC_URI = "git://github.com/linux-audit/${BPN}-userspace.git;branch=master;protocol=https \
-           file://Fixed-swig-host-contamination-issue.patch \
-           file://auditd \
-           file://auditd.service \
-           file://audit-volatile.conf \
-"
-
-S = "${WORKDIR}/git"
-SRCREV = "f60b2d8f55c74be798a7f5bcbd6c587987f2578a"
-
-inherit autotools python3native update-rc.d systemd
-
-UPDATERCPN = "auditd"
-INITSCRIPT_NAME = "auditd"
-INITSCRIPT_PARAMS = "defaults"
-
-SYSTEMD_PACKAGES = "auditd"
-SYSTEMD_SERVICE:auditd = "auditd.service"
-
-DEPENDS = "python3 tcp-wrappers libcap-ng linux-libc-headers swig-native"
-
-EXTRA_OECONF = " --with-libwrap \
-        --enable-gssapi-krb5=no \
-        --with-libcap-ng=yes \
-        --with-python3=yes \
-        --libdir=${base_libdir} \
-        --sbindir=${base_sbindir} \
-        --without-python \
-        --without-golang \
-        --disable-zos-remote \
-        --with-arm=yes \
-        --with-aarch64=yes \
-        "
-
-EXTRA_OEMAKE = "PYLIBVER='python${PYTHON_BASEVERSION}' \
-	PYINC='${STAGING_INCDIR}/$(PYLIBVER)' \
-	pyexecdir=${libdir}/python${PYTHON_BASEVERSION}/site-packages \
-	STDINC='${STAGING_INCDIR}' \
-	pkgconfigdir=${libdir}/pkgconfig \
-	"
-
-SUMMARY:audispd-plugins = "Plugins for the audit event dispatcher"
-DESCRIPTION:audispd-plugins = "The audispd-plugins package provides plugins for the real-time \
-interface to the audit system, audispd. These plugins can do things \
-like relay events to remote machines or analyze events for suspicious \
-behavior."
-
-PACKAGES =+ "audispd-plugins"
-PACKAGES += "auditd ${PN}-python"
-
-FILES:${PN} = "${sysconfdir}/libaudit.conf ${base_libdir}/libaudit.so.1* ${base_libdir}/libauparse.so.*"
-FILES:auditd = "${bindir}/* ${base_sbindir}/* ${sysconfdir}/* ${datadir}/audit/*"
-FILES:audispd-plugins = "${sysconfdir}/audit/audisp-remote.conf \
-	${sysconfdir}/audit/plugins.d/au-remote.conf \
-	${sysconfdir}/audit/plugins.d/syslog.conf \
-	${base_sbindir}/audisp-remote \
-	${base_sbindir}/audisp-syslog \
-	${localstatedir}/spool/audit \
-	"
-FILES:${PN}-dbg += "${libdir}/python${PYTHON_BASEVERSION}/*/.debug"
-FILES:${PN}-python = "${libdir}/python${PYTHON_BASEVERSION}"
-
-CONFFILES:auditd = "${sysconfdir}/audit/audit.rules"
-
-do_install:append() {
-	rm -f ${D}/${libdir}/python${PYTHON_BASEVERSION}/site-packages/*.a
-	rm -f ${D}/${libdir}/python${PYTHON_BASEVERSION}/site-packages/*.la
-
-	# reuse auditd config
-	[ ! -e ${D}/etc/default ] && mkdir ${D}/etc/default
-	mv ${D}/etc/sysconfig/auditd ${D}/etc/default
-	rmdir ${D}/etc/sysconfig/
-
-	# replace init.d
-	install -D -m 0755 ${WORKDIR}/auditd ${D}/etc/init.d/auditd
-	rm -rf ${D}/etc/rc.d
-
-	if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
-		# install systemd unit files
-		install -d ${D}${systemd_unitdir}/system
-		install -m 0644 ${WORKDIR}/auditd.service ${D}${systemd_unitdir}/system
-
-		install -d ${D}${sysconfdir}/tmpfiles.d/
-		install -m 0644 ${WORKDIR}/audit-volatile.conf ${D}${sysconfdir}/tmpfiles.d/
-	fi
-
-	# audit-2.5 doesn't install any rules by default, so we do that here
-	mkdir -p ${D}/etc/audit ${D}/etc/audit/rules.d
-	cp ${S}/rules/10-base-config.rules ${D}/etc/audit/rules.d/audit.rules
-
-	chmod 750 ${D}/etc/audit ${D}/etc/audit/rules.d
-	chmod 640 ${D}/etc/audit/auditd.conf ${D}/etc/audit/rules.d/audit.rules
-
-	# Based on the audit.spec "Copy default rules into place on new installation"
-	cp ${D}/etc/audit/rules.d/audit.rules ${D}/etc/audit/audit.rules
-
-	# Create /var/spool/audit directory for audisp-remote
-	install -m 0700 -d ${D}${localstatedir}/spool/audit
-}
diff --git a/meta-openembedded/meta-oe/recipes-security/audit/audit_3.0.8.bb b/meta-openembedded/meta-oe/recipes-security/audit/audit_3.0.8.bb
new file mode 100644
index 0000000..78439f6
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-security/audit/audit_3.0.8.bb
@@ -0,0 +1,116 @@
+SUMMARY = "User space tools for kernel auditing"
+DESCRIPTION = "The audit package contains the user space utilities for \
+storing and searching the audit records generated by the audit subsystem \
+in the Linux kernel."
+HOMEPAGE = "http://people.redhat.com/sgrubb/audit/"
+SECTION = "base"
+LICENSE = "GPL-2.0-or-later & LGPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f"
+
+SRC_URI = "git://github.com/linux-audit/${BPN}-userspace.git;branch=master;protocol=https \
+           file://Fixed-swig-host-contamination-issue.patch \
+           file://0001-Replace-__attribute_malloc__-with-__attribute__-__ma.patch \
+           file://auditd \
+           file://auditd.service \
+           file://audit-volatile.conf \
+"
+
+S = "${WORKDIR}/git"
+SRCREV = "54a62e78792fe583267cf80da717ee480b8f42bc"
+
+inherit autotools python3native update-rc.d systemd
+
+UPDATERCPN = "auditd"
+INITSCRIPT_NAME = "auditd"
+INITSCRIPT_PARAMS = "defaults"
+
+SYSTEMD_PACKAGES = "auditd"
+SYSTEMD_SERVICE:auditd = "auditd.service"
+
+DEPENDS = "python3 tcp-wrappers libcap-ng linux-libc-headers swig-native"
+
+EXTRA_OECONF = " --with-libwrap \
+        --enable-gssapi-krb5=no \
+        --with-libcap-ng=yes \
+        --with-python3=yes \
+        --libdir=${base_libdir} \
+        --sbindir=${base_sbindir} \
+        --without-python \
+        --without-golang \
+        --disable-zos-remote \
+        --with-arm=yes \
+        --with-aarch64=yes \
+        "
+
+EXTRA_OEMAKE = "PYLIBVER='python${PYTHON_BASEVERSION}' \
+	PYINC='${STAGING_INCDIR}/$(PYLIBVER)' \
+	pyexecdir=${libdir}/python${PYTHON_BASEVERSION}/site-packages \
+	STDINC='${STAGING_INCDIR}' \
+	pkgconfigdir=${libdir}/pkgconfig \
+	"
+
+SUMMARY:audispd-plugins = "Plugins for the audit event dispatcher"
+DESCRIPTION:audispd-plugins = "The audispd-plugins package provides plugins for the real-time \
+interface to the audit system, audispd. These plugins can do things \
+like relay events to remote machines or analyze events for suspicious \
+behavior."
+
+PACKAGES =+ "audispd-plugins"
+PACKAGES += "auditd ${PN}-python"
+
+FILES:${PN} = "${sysconfdir}/libaudit.conf ${base_libdir}/libaudit.so.1* ${base_libdir}/libauparse.so.*"
+FILES:auditd = "${bindir}/* ${base_sbindir}/* ${sysconfdir}/* ${datadir}/audit/*"
+FILES:audispd-plugins = "${sysconfdir}/audit/audisp-remote.conf \
+	${sysconfdir}/audit/plugins.d/au-remote.conf \
+	${sysconfdir}/audit/plugins.d/syslog.conf \
+	${base_sbindir}/audisp-remote \
+	${base_sbindir}/audisp-syslog \
+	${localstatedir}/spool/audit \
+	"
+FILES:${PN}-dbg += "${libdir}/python${PYTHON_BASEVERSION}/*/.debug"
+FILES:${PN}-python = "${libdir}/python${PYTHON_BASEVERSION}"
+
+CONFFILES:auditd = "${sysconfdir}/audit/audit.rules"
+
+do_configure:prepend() {
+	sed -e 's|buf\[];|buf[0];|g'  ${STAGING_INCDIR}/linux/audit.h > ${S}/lib/audit.h
+        sed -i -e 's|#include <linux/audit.h>|#include "audit.h"|g' ${S}/lib/libaudit.h
+}
+
+do_install:append() {
+        sed -i -e 's|#include "audit.h"|#include <linux/audit.h>|g' ${D}${includedir}/libaudit.h
+
+	rm -f ${D}/${libdir}/python${PYTHON_BASEVERSION}/site-packages/*.a
+	rm -f ${D}/${libdir}/python${PYTHON_BASEVERSION}/site-packages/*.la
+
+	# reuse auditd config
+	[ ! -e ${D}/etc/default ] && mkdir ${D}/etc/default
+	mv ${D}/etc/sysconfig/auditd ${D}/etc/default
+	rmdir ${D}/etc/sysconfig/
+
+	# replace init.d
+	install -D -m 0755 ${WORKDIR}/auditd ${D}/etc/init.d/auditd
+	rm -rf ${D}/etc/rc.d
+
+	if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
+		# install systemd unit files
+		install -d ${D}${systemd_unitdir}/system
+		install -m 0644 ${WORKDIR}/auditd.service ${D}${systemd_unitdir}/system
+
+		install -d ${D}${sysconfdir}/tmpfiles.d/
+		install -m 0644 ${WORKDIR}/audit-volatile.conf ${D}${sysconfdir}/tmpfiles.d/
+	fi
+
+	# audit-2.5 doesn't install any rules by default, so we do that here
+	mkdir -p ${D}/etc/audit ${D}/etc/audit/rules.d
+	cp ${S}/rules/10-base-config.rules ${D}/etc/audit/rules.d/audit.rules
+
+	chmod 750 ${D}/etc/audit ${D}/etc/audit/rules.d
+	chmod 640 ${D}/etc/audit/auditd.conf ${D}/etc/audit/rules.d/audit.rules
+
+	# Based on the audit.spec "Copy default rules into place on new installation"
+	cp ${D}/etc/audit/rules.d/audit.rules ${D}/etc/audit/audit.rules
+
+	# Create /var/spool/audit directory for audisp-remote
+	install -m 0700 -d ${D}${localstatedir}/spool/audit
+}
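
The do_configure:prepend() above pairs with the reworked swig patch earlier in this series: it snapshots the kernel's linux/audit.h into ${S}/lib/audit.h with flexible array members rewritten, and auditswig.i now %includes that local copy instead of /usr/include/linux/audit.h; the do_install:append sed then restores #include <linux/audit.h> in the installed libaudit.h, presumably so downstream users see the unmodified kernel interface. A hedged C illustration of what the sed expression changes; the struct and field names are invented, only the buf[] rewrite mirrors the recipe:

    /* as found in the kernel header: C99 flexible array member */
    struct demo_event {
        unsigned int  len;
        unsigned char buf[];
    };

    /* after sed 's|buf\[];|buf[0];|g': GNU zero-length array, which the
     * generated SWIG bindings are presumably able to parse without choking */
    struct demo_event_patched {
        unsigned int  len;
        unsigned char buf[0];
    };
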
diff --git a/meta-openembedded/meta-oe/recipes-support/avro/avro-c_1.11.0.bb b/meta-openembedded/meta-oe/recipes-support/avro/avro-c_1.11.0.bb
deleted file mode 100644
index 8558f75..0000000
--- a/meta-openembedded/meta-oe/recipes-support/avro/avro-c_1.11.0.bb
+++ /dev/null
@@ -1,17 +0,0 @@
-SUMMARY = "Apache Avro data serialization system."
-HOMEPAGE = "http://apr.apache.org/"
-SECTION = "libs"
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=6d502b41f76179fc84e536236f359cae"
-
-DEPENDS = "jansson zlib xz"
-
-BRANCH = "branch-1.11"
-SRCREV = "4e1fefca493029ace961b7ef8889a3722458565a"
-SRC_URI = "git://github.com/apache/avro;branch=${BRANCH};protocol=https \
-          "
-
-S = "${WORKDIR}/git/lang/c"
-
-inherit cmake pkgconfig
diff --git a/meta-openembedded/meta-oe/recipes-support/avro/avro-c_1.11.1.bb b/meta-openembedded/meta-oe/recipes-support/avro/avro-c_1.11.1.bb
new file mode 100644
index 0000000..bb0bd7d
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/avro/avro-c_1.11.1.bb
@@ -0,0 +1,17 @@
+SUMMARY = "Apache Avro data serialization system."
+HOMEPAGE = "http://apr.apache.org/"
+SECTION = "libs"
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=6d502b41f76179fc84e536236f359cae"
+
+DEPENDS = "jansson zlib xz"
+
+BRANCH = "branch-1.11"
+SRCREV = "3a9e5a789b5165e0c8c4da799c387fdf84bfb75e"
+SRC_URI = "git://github.com/apache/avro;branch=${BRANCH};protocol=https \
+          "
+
+S = "${WORKDIR}/git/lang/c"
+
+inherit cmake pkgconfig
diff --git a/meta-openembedded/meta-oe/recipes-support/bdwgc/bdwgc_8.2.0.bb b/meta-openembedded/meta-oe/recipes-support/bdwgc/bdwgc_8.2.0.bb
deleted file mode 100644
index 7ee63b1..0000000
--- a/meta-openembedded/meta-oe/recipes-support/bdwgc/bdwgc_8.2.0.bb
+++ /dev/null
@@ -1,39 +0,0 @@
-SUMMARY = "A garbage collector for C and C++"
-
-DESCRIPTION = "The Boehm-Demers-Weiser conservative garbage collector can be\
- used as a garbage collecting replacement for C malloc or C++ new. It allows\
- you to allocate memory basically as you normally would, without explicitly\
- deallocating memory that is no longer useful. The collector automatically\
- recycles memory when it determines that it can no longer be otherwise\
- accessed.\
-  The collector is also used by a number of programming language\
- implementations that either use C as intermediate code, want to facilitate\
- easier interoperation with C libraries, or just prefer the simple collector\
- interface.\
-  Alternatively, the garbage collector may be used as a leak detector for C\
- or C++ programs, though that is not its primary goal.\
-  Empirically, this collector works with most unmodified C programs, simply\
- by replacing malloc with GC_malloc calls, replacing realloc with GC_realloc\
- calls, and removing free calls."
-
-HOMEPAGE = "http://www.hboehm.info/gc/"
-SECTION = "devel"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://README.QUICK;md5=7912d9213b3547f8a81aadd08893fe84"
-
-DEPENDS = "libatomic-ops"
-
-SRCREV = "47e9106c17b72e9ee5501308f69ea94531e798b3"
-SRC_URI = "git://github.com/ivmai/bdwgc.git;branch=master;protocol=https"
-
-S = "${WORKDIR}/git"
-
-inherit autotools pkgconfig
-
-EXTRA_OECONF += "--enable-cpluscplus"
-
-CFLAGS:append:libc-musl = " -D_GNU_SOURCE -DNO_GETCONTEXT -DSEARCH_FOR_DATA_START -DUSE_MMAP -DHAVE_DL_ITERATE_PHDR"
-
-FILES:${PN}-doc = "${datadir}"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-oe/recipes-support/bdwgc/bdwgc_8.2.2.bb b/meta-openembedded/meta-oe/recipes-support/bdwgc/bdwgc_8.2.2.bb
new file mode 100644
index 0000000..622402a
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/bdwgc/bdwgc_8.2.2.bb
@@ -0,0 +1,39 @@
+SUMMARY = "A garbage collector for C and C++"
+
+DESCRIPTION = "The Boehm-Demers-Weiser conservative garbage collector can be\
+ used as a garbage collecting replacement for C malloc or C++ new. It allows\
+ you to allocate memory basically as you normally would, without explicitly\
+ deallocating memory that is no longer useful. The collector automatically\
+ recycles memory when it determines that it can no longer be otherwise\
+ accessed.\
+  The collector is also used by a number of programming language\
+ implementations that either use C as intermediate code, want to facilitate\
+ easier interoperation with C libraries, or just prefer the simple collector\
+ interface.\
+  Alternatively, the garbage collector may be used as a leak detector for C\
+ or C++ programs, though that is not its primary goal.\
+  Empirically, this collector works with most unmodified C programs, simply\
+ by replacing malloc with GC_malloc calls, replacing realloc with GC_realloc\
+ calls, and removing free calls."
+
+HOMEPAGE = "http://www.hboehm.info/gc/"
+SECTION = "devel"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://README.QUICK;md5=dd27361ad00943bb27bc3e0589037075"
+
+DEPENDS = "libatomic-ops"
+
+SRCREV = "cd1fbc1dbfd2cc888436944dd2784f39820698d7"
+SRC_URI = "git://github.com/ivmai/bdwgc.git;branch=release-8_2;protocol=https"
+
+S = "${WORKDIR}/git"
+
+inherit autotools pkgconfig
+
+EXTRA_OECONF += "--enable-cpluscplus"
+
+CFLAGS:append:libc-musl = " -D_GNU_SOURCE -DNO_GETCONTEXT -DSEARCH_FOR_DATA_START -DUSE_MMAP -DHAVE_DL_ITERATE_PHDR"
+
+FILES:${PN}-doc = "${datadir}"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-oe/recipes-support/ccid/ccid_1.4.33.bb b/meta-openembedded/meta-oe/recipes-support/ccid/ccid_1.4.33.bb
deleted file mode 100644
index 7b260f1..0000000
--- a/meta-openembedded/meta-oe/recipes-support/ccid/ccid_1.4.33.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-SUMMARY = "Generic USB CCID smart card reader driver"
-HOMEPAGE = "https://ccid.apdu.fr/"
-LICENSE = "LGPL-2.1-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=2d5025d4aa3495befef8f17206a5b0a1"
-
-DEPENDS = "virtual/libusb0 pcsc-lite"
-RDEPENDS:${PN} = "pcsc-lite"
-
-SRC_URI = "https://ccid.apdu.fr/files/ccid-${PV}.tar.bz2 \
-    file://0001-Add-build-rule-for-README.patch \
-"
-
-SRC_URI[md5sum] = "b11907894ce2d345439635e2b967e7e5"
-SRC_URI[sha256sum] = "5256da939711deb42b74d05d2bd6bd0c73c4d564feb0c1a50212609eb680e424"
-
-inherit autotools pkgconfig
-
-FILES:${PN} += "${libdir}/pcsc/"
-FILES:${PN}-dbg += "${libdir}/pcsc/drivers/*/*/*/.debug"
diff --git a/meta-openembedded/meta-oe/recipes-support/ccid/ccid_1.5.0.bb b/meta-openembedded/meta-oe/recipes-support/ccid/ccid_1.5.0.bb
new file mode 100644
index 0000000..9775d82
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/ccid/ccid_1.5.0.bb
@@ -0,0 +1,19 @@
+SUMMARY = "Generic USB CCID smart card reader driver"
+HOMEPAGE = "https://ccid.apdu.fr/"
+LICENSE = "LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=2d5025d4aa3495befef8f17206a5b0a1"
+
+DEPENDS = "autoconf-archive-native virtual/libusb0 pcsc-lite"
+RDEPENDS:${PN} = "pcsc-lite"
+
+SRC_URI = "https://ccid.apdu.fr/files/ccid-${PV}.tar.bz2 \
+    file://0001-Add-build-rule-for-README.patch \
+"
+
+SRC_URI[md5sum] = "f246d6601856775310c86b841b616de5"
+SRC_URI[sha256sum] = "81549b3422469d503996d03a3aed2ef1375b359167f10d66be9e3844e729322e"
+
+inherit autotools pkgconfig
+
+FILES:${PN} += "${libdir}/pcsc/"
+FILES:${PN}-dbg += "${libdir}/pcsc/drivers/*/*/*/.debug"
diff --git a/meta-openembedded/meta-oe/recipes-support/cpulimit/cpulimit_0.2.bb b/meta-openembedded/meta-oe/recipes-support/cpulimit/cpulimit_0.2.bb
index 3ee2b5c..86a58be 100644
--- a/meta-openembedded/meta-oe/recipes-support/cpulimit/cpulimit_0.2.bb
+++ b/meta-openembedded/meta-oe/recipes-support/cpulimit/cpulimit_0.2.bb
@@ -18,5 +18,5 @@
     install -m 0755 ${B}/src/${PN} ${D}${sbindir}/
 }
 
-CFLAGS += "${LDFLAGS}"
+CFLAGS += "-D_GNU_SOURCE ${LDFLAGS}"
 
diff --git a/meta-openembedded/meta-oe/recipes-support/freerdp/freerdp_2.7.0.bb b/meta-openembedded/meta-oe/recipes-support/freerdp/freerdp_2.7.0.bb
deleted file mode 100644
index 0645ccd..0000000
--- a/meta-openembedded/meta-oe/recipes-support/freerdp/freerdp_2.7.0.bb
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright (C) 2010-2012 O.S. Systems Software Ltda. All Rights Reserved
-# Released under the MIT license
-
-DESCRIPTION = "FreeRDP RDP client & server library"
-HOMEPAGE = "http://www.freerdp.com"
-DEPENDS = "openssl alsa-lib libusb1"
-SECTION = "net"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
-
-inherit pkgconfig cmake gitpkgv
-
-PE = "1"
-PKGV = "${GITPKGVTAG}"
-
-SRCREV = "40ee5d3bcc70343af6c0300d71968858c1f1948f"
-SRC_URI = "git://github.com/FreeRDP/FreeRDP.git;branch=stable-2.0;protocol=https \
-    file://winpr-makecert-Build-with-install-RPATH.patch \
-"
-
-S = "${WORKDIR}/git"
-
-EXTRA_OECMAKE += " \
-    -DWITH_ALSA=ON \
-    -DWITH_FFMPEG=OFF \
-    -DWITH_CUNIT=OFF \
-    -DWITH_NEON=OFF \
-    -DBUILD_STATIC_LIBS=OFF \
-    -DCMAKE_POSITION_INDEPENDANT_CODE=ON \
-    -DWITH_MANPAGES=OFF \
-"
-
-PACKAGECONFIG ??= " \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'directfb pam pulseaudio wayland x11', d)}\
-    gstreamer cups pcsc \
-"
-
-X11_DEPS = "virtual/libx11 libxinerama libxext libxcursor libxv libxi libxrender libxfixes libxdamage libxrandr libxkbfile"
-PACKAGECONFIG[x11] = "-DWITH_X11=ON -DWITH_XINERAMA=ON -DWITH_XEXT=ON -DWITH_XCURSOR=ON -DWITH_XV=ON -DWITH_XI=ON -DWITH_XRENDER=ON -DWITH_XFIXES=ON -DWITH_XDAMAGE=ON -DWITH_XRANDR=ON -DWITH_XKBFILE=ON,-DWITH_X11=OFF,${X11_DEPS}"
-PACKAGECONFIG[wayland] = "-DWITH_WAYLAND=ON,-DWITH_WAYLAND=OFF,wayland wayland-native libxkbcommon"
-PACKAGECONFIG[directfb] = "-DWITH_DIRECTFB=ON,-DWITH_DIRECTFB=OFF,directfb"
-PACKAGECONFIG[pam] = "-DWITH_PAM=ON,-DWITH_PAM=OFF,libpam"
-PACKAGECONFIG[pcsc] = "-DWITH_PCSC=ON,-DWITH_PCSC=OFF,pcsc-lite"
-PACKAGECONFIG[pulseaudio] = "-DWITH_PULSEAUDIO=ON,-DWITH_PULSEAUDIO=OFF,pulseaudio"
-PACKAGECONFIG[gstreamer] = "-DWITH_GSTREAMER_1_0=ON,-DWITH_GSTREAMER_1_0=OFF,gstreamer1.0 gstreamer1.0-plugins-base"
-PACKAGECONFIG[cups] = "-DWITH_CUPS=ON,-DWITH_CUPS=OFF,cups"
-
-PACKAGES =+ "libfreerdp"
-
-LEAD_SONAME = "libfreerdp.so"
-FILES:libfreerdp = "${libdir}/lib*${SOLIBS}"
-
-PACKAGES_DYNAMIC += "^libfreerdp-plugin-.*"
-
-# we will need winpr-makecert to generate TLS certificates
-do_install:append () {
-    install -d ${D}${bindir}
-    install -m755 winpr/tools/makecert-cli/winpr-makecert ${D}${bindir}
-    rm -rf ${D}${libdir}/cmake
-    rm -rf ${D}${libdir}/freerdp
-}
-
-python populate_packages:prepend () {
-    freerdp_root = d.expand('${libdir}/freerdp')
-
-    do_split_packages(d, freerdp_root, r'^(audin_.*)\.so$',
-        output_pattern='libfreerdp-plugin-%s',
-        description='FreeRDP plugin %s',
-        prepend=True, extra_depends='libfreerdp-plugin-audin')
-
-    do_split_packages(d, freerdp_root, r'^(rdpsnd_.*)\.so$',
-        output_pattern='libfreerdp-plugin-%s',
-        description='FreeRDP plugin %s',
-        prepend=True, extra_depends='libfreerdp-plugin-rdpsnd')
-
-    do_split_packages(d, freerdp_root, r'^(tsmf_.*)\.so$',
-        output_pattern='libfreerdp-plugin-%s',
-        description='FreeRDP plugin %s',
-        prepend=True, extra_depends='libfreerdp-plugin-tsmf')
-
-    do_split_packages(d, freerdp_root, r'^([^-]*)\.so$',
-        output_pattern='libfreerdp-plugin-%s',
-        description='FreeRDP plugin %s',
-        prepend=True, extra_depends='')
-}
diff --git a/meta-openembedded/meta-oe/recipes-support/freerdp/freerdp_2.8.0.bb b/meta-openembedded/meta-oe/recipes-support/freerdp/freerdp_2.8.0.bb
new file mode 100644
index 0000000..33782e5
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/freerdp/freerdp_2.8.0.bb
@@ -0,0 +1,85 @@
+# Copyright (C) 2010-2012 O.S. Systems Software Ltda. All Rights Reserved
+# Released under the MIT license
+
+DESCRIPTION = "FreeRDP RDP client & server library"
+HOMEPAGE = "http://www.freerdp.com"
+DEPENDS = "openssl alsa-lib libusb1"
+SECTION = "net"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
+
+inherit pkgconfig cmake gitpkgv
+
+PE = "1"
+PKGV = "${GITPKGVTAG}"
+
+SRCREV = "e3fc97feb512053189e276b2ca79762990bb8c4c"
+SRC_URI = "git://github.com/FreeRDP/FreeRDP.git;branch=stable-2.0;protocol=https \
+    file://winpr-makecert-Build-with-install-RPATH.patch \
+"
+
+S = "${WORKDIR}/git"
+
+EXTRA_OECMAKE += " \
+    -DWITH_ALSA=ON \
+    -DWITH_FFMPEG=OFF \
+    -DWITH_CUNIT=OFF \
+    -DWITH_NEON=OFF \
+    -DBUILD_STATIC_LIBS=OFF \
+    -DCMAKE_POSITION_INDEPENDANT_CODE=ON \
+    -DWITH_MANPAGES=OFF \
+"
+
+PACKAGECONFIG ??= " \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'directfb pam pulseaudio wayland x11', d)}\
+    gstreamer cups pcsc \
+"
+
+X11_DEPS = "virtual/libx11 libxinerama libxext libxcursor libxv libxi libxrender libxfixes libxdamage libxrandr libxkbfile"
+PACKAGECONFIG[x11] = "-DWITH_X11=ON -DWITH_XINERAMA=ON -DWITH_XEXT=ON -DWITH_XCURSOR=ON -DWITH_XV=ON -DWITH_XI=ON -DWITH_XRENDER=ON -DWITH_XFIXES=ON -DWITH_XDAMAGE=ON -DWITH_XRANDR=ON -DWITH_XKBFILE=ON,-DWITH_X11=OFF,${X11_DEPS}"
+PACKAGECONFIG[wayland] = "-DWITH_WAYLAND=ON,-DWITH_WAYLAND=OFF,wayland wayland-native libxkbcommon"
+PACKAGECONFIG[directfb] = "-DWITH_DIRECTFB=ON,-DWITH_DIRECTFB=OFF,directfb"
+PACKAGECONFIG[pam] = "-DWITH_PAM=ON,-DWITH_PAM=OFF,libpam"
+PACKAGECONFIG[pcsc] = "-DWITH_PCSC=ON,-DWITH_PCSC=OFF,pcsc-lite"
+PACKAGECONFIG[pulseaudio] = "-DWITH_PULSEAUDIO=ON,-DWITH_PULSEAUDIO=OFF,pulseaudio"
+PACKAGECONFIG[gstreamer] = "-DWITH_GSTREAMER_1_0=ON,-DWITH_GSTREAMER_1_0=OFF,gstreamer1.0 gstreamer1.0-plugins-base"
+PACKAGECONFIG[cups] = "-DWITH_CUPS=ON,-DWITH_CUPS=OFF,cups"
+
+PACKAGES =+ "libfreerdp"
+
+LEAD_SONAME = "libfreerdp.so"
+FILES:libfreerdp = "${libdir}/lib*${SOLIBS}"
+
+PACKAGES_DYNAMIC += "^libfreerdp-plugin-.*"
+
+# we will need winpr-makecert to generate TLS certificates
+do_install:append () {
+    install -d ${D}${bindir}
+    install -m755 winpr/tools/makecert-cli/winpr-makecert ${D}${bindir}
+    rm -rf ${D}${libdir}/cmake
+    rm -rf ${D}${libdir}/freerdp
+}
+
+python populate_packages:prepend () {
+    freerdp_root = d.expand('${libdir}/freerdp')
+
+    do_split_packages(d, freerdp_root, r'^(audin_.*)\.so$',
+        output_pattern='libfreerdp-plugin-%s',
+        description='FreeRDP plugin %s',
+        prepend=True, extra_depends='libfreerdp-plugin-audin')
+
+    do_split_packages(d, freerdp_root, r'^(rdpsnd_.*)\.so$',
+        output_pattern='libfreerdp-plugin-%s',
+        description='FreeRDP plugin %s',
+        prepend=True, extra_depends='libfreerdp-plugin-rdpsnd')
+
+    do_split_packages(d, freerdp_root, r'^(tsmf_.*)\.so$',
+        output_pattern='libfreerdp-plugin-%s',
+        description='FreeRDP plugin %s',
+        prepend=True, extra_depends='libfreerdp-plugin-tsmf')
+
+    do_split_packages(d, freerdp_root, r'^([^-]*)\.so$',
+        output_pattern='libfreerdp-plugin-%s',
+        description='FreeRDP plugin %s',
+        prepend=True, extra_depends='')
+}
diff --git a/meta-openembedded/meta-oe/recipes-support/gd/gd/0001-Fix-deprecared-function-prototypes.patch b/meta-openembedded/meta-oe/recipes-support/gd/gd/0001-Fix-deprecared-function-prototypes.patch
new file mode 100644
index 0000000..5ac5170
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/gd/gd/0001-Fix-deprecared-function-prototypes.patch
@@ -0,0 +1,115 @@
+From 6379331cd0647fc6f149f55e4505a9a92e4f159f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 22 Aug 2022 22:43:26 -0700
+Subject: [PATCH] Fix deprecared function prototypes
+
+Fixes following errors:
+error: a function definition without a prototype is deprecated in all versions of C and is not supported in C2x [-Werror,-Wdeprecated-non-prototype]
+
+Upstream-Status: Submitted [https://github.com/libgd/libgd/pull/835]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/gd_nnquant.c | 32 +++++++-------------------------
+ src/gd_tiff.c    |  4 +---
+ 2 files changed, 8 insertions(+), 28 deletions(-)
+
+diff --git a/src/gd_nnquant.c b/src/gd_nnquant.c
+index 8b9aa794..013f7160 100644
+--- a/src/gd_nnquant.c
++++ b/src/gd_nnquant.c
+@@ -112,12 +112,7 @@ typedef struct {
+ 
+ /* Initialise network in range (0,0,0,0) to (255,255,255,255) and set parameters
+    ----------------------------------------------------------------------- */
+-static void initnet(nnq, thepic, len, sample, colours)
+-nn_quant *nnq;
+-unsigned char *thepic;
+-int len;
+-int sample;
+-int colours;
++static void initnet(nn_quant *nnq, unsigned char *thepic, int len, int sample, int colours)
+ {
+ 	register int i;
+ 	register int *p;
+@@ -163,9 +158,7 @@ static void unbiasnet(nn_quant *nnq)
+ }
+ 
+ /* Output colormap to unsigned char ptr in RGBA format */
+-static void getcolormap(nnq, map)
+-nn_quant *nnq;
+-unsigned char *map;
++static void getcolormap(nn_quant *nnq, unsigned char *map)
+ {
+ 	int i,j;
+ 	for(j=0; j < nnq->netsize; j++) {
+@@ -232,9 +225,7 @@ static void inxbuild(nn_quant *nnq)
+ 
+ /* Search for ABGR values 0..255 (after net is unbiased) and return colour index
+ 	 ---------------------------------------------------------------------------- */
+-static unsigned int inxsearch(nnq, al,b,g,r)
+-nn_quant *nnq;
+-register int al, b, g, r;
++static unsigned int inxsearch(nn_quant *nnq, int al, int b, int g, int r)
+ {
+ 	register int i, j, dist, a, bestd;
+ 	register int *p;
+@@ -306,9 +297,7 @@ register int al, b, g, r;
+ 
+ /* Search for biased ABGR values
+    ---------------------------- */
+-static int contest(nnq, al,b,g,r)
+-nn_quant *nnq;
+-register int al,b,g,r;
++static int contest(nn_quant *nnq, int al, int b, int g, int r)
+ {
+ 	/* finds closest neuron (min dist) and updates freq */
+ 	/* finds best neuron (min dist-bias) and returns position */
+@@ -362,9 +351,7 @@ register int al,b,g,r;
+ /* Move neuron i towards biased (a,b,g,r) by factor alpha
+ 	 ---------------------------------------------------- */
+ 
+-static void altersingle(nnq, alpha,i,al,b,g,r)
+-nn_quant *nnq;
+-register int alpha,i,al,b,g,r;
++static void altersingle(nn_quant *nnq, int alpha, int i,int al, int b, int g, int r)
+ {
+ 	register int *n;
+ 
+@@ -382,10 +369,7 @@ register int alpha,i,al,b,g,r;
+ /* Move adjacent neurons by precomputed alpha*(1-((i-j)^2/[r]^2)) in radpower[|i-j|]
+ 	 --------------------------------------------------------------------------------- */
+ 
+-static void alterneigh(nnq, rad,i,al,b,g,r)
+-nn_quant *nnq;
+-int rad,i;
+-register int al,b,g,r;
++static void alterneigh(nn_quant *nnq, int rad, int i, int al,int b,int g, int r)
+ {
+ 	register int j,k,lo,hi,a;
+ 	register int *p, *q;
+@@ -429,9 +413,7 @@ register int al,b,g,r;
+ /* Main Learning Loop
+    ------------------ */
+ 
+-static void learn(nnq, verbose) /* Stu: N.B. added parameter so that main() could control verbosity. */
+-nn_quant *nnq;
+-int verbose;
++static void learn(nn_quant *nnq, int verbose) /* Stu: N.B. added parameter so that main() could control verbosity. */
+ {
+ 	register int i,j,al,b,g,r;
+ 	int radius,rad,alpha,step,delta,samplepixels;
+diff --git a/src/gd_tiff.c b/src/gd_tiff.c
+index 7f72b610..3d90e61a 100644
+--- a/src/gd_tiff.c
++++ b/src/gd_tiff.c
+@@ -446,9 +446,7 @@ BGD_DECLARE(void) gdImageTiffCtx(gdImagePtr image, gdIOCtx *out)
+ }
+ 
+ /* Check if we are really in 8bit mode */
+-static int checkColorMap(n, r, g, b)
+-int n;
+-uint16_t *r, *g, *b;
++static int checkColorMap(int n, uint16_t *r, uint16_t *g, uint16_t *b)
+ {
+ 	while (n-- > 0)
+ 		if (*r++ >= 256 || *g++ >= 256 || *b++ >= 256)
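
For context on the hunks above: libgd still carried pre-ANSI (K&R style) function definitions, which clang now rejects under -Werror with -Wdeprecated-non-prototype. A small hedged sketch of the same conversion on an invented helper, not taken from the gd sources:

    /* before (K&R definition, now rejected):
     *     static int clamp(v, lo, hi)
     *     int v, lo, hi;
     *     { return v < lo ? lo : (v > hi ? hi : v); }
     */

    /* after: ANSI prototype form with identical behaviour */
    static int clamp(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }
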
diff --git a/meta-openembedded/meta-oe/recipes-support/gd/gd_2.3.3.bb b/meta-openembedded/meta-oe/recipes-support/gd/gd_2.3.3.bb
index 9d4ee1f..cc2c157 100644
--- a/meta-openembedded/meta-oe/recipes-support/gd/gd_2.3.3.bb
+++ b/meta-openembedded/meta-oe/recipes-support/gd/gd_2.3.3.bb
@@ -14,6 +14,7 @@
 DEPENDS = "freetype libpng jpeg zlib tiff"
 
 SRC_URI = "git://github.com/libgd/libgd.git;nobranch=1;protocol=https \
+           file://0001-Fix-deprecared-function-prototypes.patch \
            "
 
 SRCREV = "b5319a41286107b53daa0e08e402aa1819764bdc"
diff --git a/meta-openembedded/meta-oe/recipes-support/hunspell/hunspell_1.7.0.bb b/meta-openembedded/meta-oe/recipes-support/hunspell/hunspell_1.7.0.bb
deleted file mode 100644
index 8d4dd52..0000000
--- a/meta-openembedded/meta-oe/recipes-support/hunspell/hunspell_1.7.0.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-SUMMARY = "A spell checker and morphological analyzer library"
-HOMEPAGE = "http://hunspell.github.io/"
-LICENSE = "GPL-2.0-only | LGPL-2.1-only"
-LIC_FILES_CHKSUM = " \
-    file://COPYING;md5=75859989545e37968a99b631ef42722e \
-    file://COPYING.LESSER;md5=c96ca6c1de8adc025adfada81d06fba5 \
-"
-
-SRCREV = "4ddd8ed5ca6484b930b111aec50c2750a6119a0f"
-SRC_URI = "git://github.com/${BPN}/${BPN}.git;branch=master;protocol=https"
-
-S = "${WORKDIR}/git"
-
-inherit autotools pkgconfig gettext
-
-RDEPENDS:${PN} = "perl"
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/hunspell/hunspell_1.7.1.bb b/meta-openembedded/meta-oe/recipes-support/hunspell/hunspell_1.7.1.bb
new file mode 100644
index 0000000..54bbc2f
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/hunspell/hunspell_1.7.1.bb
@@ -0,0 +1,18 @@
+SUMMARY = "A spell checker and morphological analyzer library"
+HOMEPAGE = "http://hunspell.github.io/"
+LICENSE = "GPL-2.0-only | LGPL-2.1-only"
+LIC_FILES_CHKSUM = " \
+    file://COPYING;md5=75859989545e37968a99b631ef42722e \
+    file://COPYING.LESSER;md5=c96ca6c1de8adc025adfada81d06fba5 \
+"
+
+SRCREV = "1180421f50f211984211e968eb6801ffd3390b8f"
+SRC_URI = "git://github.com/${BPN}/${BPN}.git;branch=master;protocol=https"
+
+S = "${WORKDIR}/git"
+
+inherit autotools pkgconfig gettext
+
+RDEPENDS:${PN} = "perl"
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/idevicerestore/idevicerestore_git.bb b/meta-openembedded/meta-oe/recipes-support/idevicerestore/idevicerestore_git.bb
index fc07c6e..d7c76a5 100644
--- a/meta-openembedded/meta-oe/recipes-support/idevicerestore/idevicerestore_git.bb
+++ b/meta-openembedded/meta-oe/recipes-support/idevicerestore/idevicerestore_git.bb
@@ -6,11 +6,11 @@
 
 HOMEPAGE = "http://www.libimobiledevice.org/"
 
-DEPENDS = "libirecovery libimobiledevice libzip curl"
+DEPENDS = "libirecovery libimobiledevice libzip curl libimobiledevice-glue openssl"
 
 PV = "1.0.1+git${SRCPV}"
 
-SRCREV = "280575bb95977241e240ed081a2602d68746443e"
+SRCREV = "7d622d916be16f2df5a72bf53a42f3a326bbfaa4"
 SRC_URI = "git://github.com/libimobiledevice/idevicerestore;protocol=https;branch=master"
 
 S = "${WORKDIR}/git"
diff --git a/meta-openembedded/meta-oe/recipes-support/imagemagick/imagemagick_7.0.10.bb b/meta-openembedded/meta-oe/recipes-support/imagemagick/imagemagick_7.0.10.bb
index b8167f5..010288b 100644
--- a/meta-openembedded/meta-oe/recipes-support/imagemagick/imagemagick_7.0.10.bb
+++ b/meta-openembedded/meta-oe/recipes-support/imagemagick/imagemagick_7.0.10.bb
@@ -24,6 +24,7 @@
 
 CACHED_CONFIGUREVARS = "ac_cv_sys_file_offset_bits=yes"
 PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
+PACKAGECONFIG[cxx] = "--with-magick-plus-plus,--without-magick-plus-plus"
 PACKAGECONFIG[graphviz] = "--with-gvc,--without-gvc,graphviz"
 PACKAGECONFIG[jp2] = "--with-jp2,,jasper"
 PACKAGECONFIG[lzma] = "--with-lzma,--without-lzma,xz"
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0001-Makefile-fix-parallel-build-of-examples.patch b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0001-Makefile-fix-parallel-build-of-examples.patch
new file mode 100644
index 0000000..84dee41
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0001-Makefile-fix-parallel-build-of-examples.patch
@@ -0,0 +1,46 @@
+From cbe8bd2948f522062c6170f581e1e265692a9a55 Mon Sep 17 00:00:00 2001
+From: Sergei Trofimovich <slyich@gmail.com>
+Date: Sun, 24 Oct 2021 18:53:04 +0100
+Subject: [PATCH] Makefile: fix parallel build of examples
+
+Without the change examples fails to build as:
+
+    $ LANG=C make -j
+    make -C src
+    make -C examples
+    make[1]: Entering directory 'libb64/src'
+    cc -O3 -Werror -pedantic -I../include   -c -o cencode.o cencode.c
+    make[1]: Entering directory 'libb64/examples'
+    make[1]: *** No rule to make target 'libb64.a', needed by 'c-example1'.  Stop.
+    make[1]: Leaving directory 'libb64/examples'
+    make: *** [Makefile:8: all_examples] Error 2
+    make: *** Waiting for unfinished jobs....
+    cc -O3 -Werror -pedantic -I../include   -c -o cdecode.o cdecode.c
+    ar rv libb64.a cencode.o cdecode.o
+    ar: creating libb64.a
+    a - cencode.o
+    a - cdecode.o
+    make[1]: Leaving directory 'libb64/src'
+
+Upstream-Status: Submitted [https://github.com/libb64/libb64/pull/9]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile b/Makefile
+index db40356..aa48c76 100644
+--- a/Makefile
++++ b/Makefile
+@@ -4,7 +4,7 @@ all_src:
+ 	$(MAKE) -C src
+ all_base64: all_src
+ 	$(MAKE) -C base64
+-all_examples:
++all_examples: all_src
+ 	$(MAKE) -C examples
+ 	
+ clean: clean_src clean_base64 clean_include clean_examples
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0001-examples-Use-proper-function-prototype-for-main.patch b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0001-examples-Use-proper-function-prototype-for-main.patch
new file mode 100644
index 0000000..42e889e
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0001-examples-Use-proper-function-prototype-for-main.patch
@@ -0,0 +1,27 @@
+From 98eaf510f40e384b32c01ad4bd5c3a697fdd8560 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 24 Aug 2022 14:34:38 -0700
+Subject: [PATCH] examples: Use proper function prototype for main
+
+Upstream-Status: Submitted [https://github.com/libb64/libb64/pull/10]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ examples/c-example1.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/examples/c-example1.c b/examples/c-example1.c
+index a0001df..34585dd 100644
+--- a/examples/c-example1.c
++++ b/examples/c-example1.c
+@@ -83,7 +83,7 @@ char* decode(const char* input)
+ }
+ 
+ 
+-int main()
++int main(int argc, char** argv)
+ {
+ 	const char* input = "hello world";
+ 	char* encoded;
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0002-use-BUFSIZ-as-buffer-size.patch b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0002-use-BUFSIZ-as-buffer-size.patch
deleted file mode 100644
index 10ec8e1..0000000
--- a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0002-use-BUFSIZ-as-buffer-size.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-From ee03e265804a07a0da5028b86960031bd7ab86b2 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 27 Mar 2021 22:01:13 -0700
-Subject: [PATCH] use BUFSIZ as buffer size
-
-Author: Jakub Wilk <jwilk@debian.org>
-Bug: http://sourceforge.net/tracker/?func=detail&atid=785907&aid=3591336&group_id=152942
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- include/b64/decode.h | 3 ++-
- include/b64/encode.h | 3 ++-
- 2 files changed, 4 insertions(+), 2 deletions(-)
-
-diff --git a/include/b64/decode.h b/include/b64/decode.h
-index 12b16ea..e9019f3 100644
---- a/include/b64/decode.h
-+++ b/include/b64/decode.h
-@@ -8,6 +8,7 @@ For details, see http://sourceforge.net/projects/libb64
- #ifndef BASE64_DECODE_H
- #define BASE64_DECODE_H
- 
-+#include <cstdio>
- #include <iostream>
- 
- namespace base64
-@@ -22,7 +23,7 @@ namespace base64
- 		base64_decodestate _state;
- 		int _buffersize;
- 
--		decoder(int buffersize_in = BUFFERSIZE)
-+		decoder(int buffersize_in = BUFSIZ)
- 		: _buffersize(buffersize_in)
- 		{}
- 
-diff --git a/include/b64/encode.h b/include/b64/encode.h
-index 5d807d9..e7a7035 100644
---- a/include/b64/encode.h
-+++ b/include/b64/encode.h
-@@ -8,6 +8,7 @@ For details, see http://sourceforge.net/projects/libb64
- #ifndef BASE64_ENCODE_H
- #define BASE64_ENCODE_H
- 
-+#include <cstdio>
- #include <iostream>
- 
- namespace base64
-@@ -22,7 +23,7 @@ namespace base64
- 		base64_encodestate _state;
- 		int _buffersize;
- 
--		encoder(int buffersize_in = BUFFERSIZE)
-+		encoder(int buffersize_in = BUFSIZ)
- 		: _buffersize(buffersize_in)
- 		{}
- 
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0003-fix-integer-overflows.patch b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0003-fix-integer-overflows.patch
deleted file mode 100644
index 8854bb6..0000000
--- a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0003-fix-integer-overflows.patch
+++ /dev/null
@@ -1,77 +0,0 @@
-From 7b30fbc3d47dfaf38d8ce8b8949a69d2984dac76 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 27 Mar 2021 22:06:03 -0700
-Subject: [PATCH] fix integer overflows
-
-Author: Jakub Wilk <jwilk@debian.org>
-Bug: http://sourceforge.net/tracker/?func=detail&aid=3591129&group_id=152942&atid=785907
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- src/cdecode.c | 15 ++++++++-------
- 1 file changed, 8 insertions(+), 7 deletions(-)
-
-diff --git a/src/cdecode.c b/src/cdecode.c
-index a6c0a42..4e47e9f 100644
---- a/src/cdecode.c
-+++ b/src/cdecode.c
-@@ -9,10 +9,11 @@ For details, see http://sourceforge.net/projects/libb64
- 
- int base64_decode_value(char value_in)
- {
--	static const char decoding[] = {62,-1,-1,-1,63,52,53,54,55,56,57,58,59,60,61,-1,-1,-1,-2,-1,-1,-1,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,-1,-1,-1,-1,-1,-1,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51};
-+	static const signed char decoding[] = {62,-1,-1,-1,63,52,53,54,55,56,57,58,59,60,61,-1,-1,-1,-2,-1,-1,-1,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,-1,-1,-1,-1,-1,-1,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51};
- 	static const char decoding_size = sizeof(decoding);
-+	if (value_in < 43) return -1;
- 	value_in -= 43;
--	if (value_in < 0 || value_in >= decoding_size) return -1;
-+	if (value_in > decoding_size) return -1;
- 	return decoding[(int)value_in];
- }
- 
-@@ -26,7 +27,7 @@ int base64_decode_block(const char* code_in, const int length_in, char* plaintex
- {
- 	const char* codechar = code_in;
- 	char* plainchar = plaintext_out;
--	char fragment;
-+	int fragment;
- 	
- 	*plainchar = state_in->plainchar;
- 	
-@@ -42,7 +43,7 @@ int base64_decode_block(const char* code_in, const int length_in, char* plaintex
- 					state_in->plainchar = *plainchar;
- 					return plainchar - plaintext_out;
- 				}
--				fragment = (char)base64_decode_value(*codechar++);
-+				fragment = base64_decode_value(*codechar++);
- 			} while (fragment < 0);
- 			*plainchar    = (fragment & 0x03f) << 2;
- 	case step_b:
-@@ -53,7 +54,7 @@ int base64_decode_block(const char* code_in, const int length_in, char* plaintex
- 					state_in->plainchar = *plainchar;
- 					return plainchar - plaintext_out;
- 				}
--				fragment = (char)base64_decode_value(*codechar++);
-+				fragment = base64_decode_value(*codechar++);
- 			} while (fragment < 0);
- 			*plainchar++ |= (fragment & 0x030) >> 4;
- 			*plainchar    = (fragment & 0x00f) << 4;
-@@ -65,7 +66,7 @@ int base64_decode_block(const char* code_in, const int length_in, char* plaintex
- 					state_in->plainchar = *plainchar;
- 					return plainchar - plaintext_out;
- 				}
--				fragment = (char)base64_decode_value(*codechar++);
-+				fragment = base64_decode_value(*codechar++);
- 			} while (fragment < 0);
- 			*plainchar++ |= (fragment & 0x03c) >> 2;
- 			*plainchar    = (fragment & 0x003) << 6;
-@@ -77,7 +78,7 @@ int base64_decode_block(const char* code_in, const int length_in, char* plaintex
- 					state_in->plainchar = *plainchar;
- 					return plainchar - plaintext_out;
- 				}
--				fragment = (char)base64_decode_value(*codechar++);
-+				fragment = base64_decode_value(*codechar++);
- 			} while (fragment < 0);
- 			*plainchar++   |= (fragment & 0x03f);
- 		}
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0004-Fix-off-by-one-error.patch b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0004-Fix-off-by-one-error.patch
deleted file mode 100644
index e19dbad..0000000
--- a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0004-Fix-off-by-one-error.patch
+++ /dev/null
@@ -1,26 +0,0 @@
-From 8144fd9e02bd5ccd1e080297b19a1e9eb4d3ff96 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 27 Mar 2021 22:07:15 -0700
-Subject: [PATCH] Fix off by one error
-
-Launchpad bug #1501176 reported by William McCall on 2015-09-30
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- src/cdecode.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/src/cdecode.c b/src/cdecode.c
-index 4e47e9f..45da4e1 100644
---- a/src/cdecode.c
-+++ b/src/cdecode.c
-@@ -13,7 +13,7 @@ int base64_decode_value(char value_in)
- 	static const char decoding_size = sizeof(decoding);
- 	if (value_in < 43) return -1;
- 	value_in -= 43;
--	if (value_in > decoding_size) return -1;
-+	if (value_in >= decoding_size) return -1;
- 	return decoding[(int)value_in];
- }
- 
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0005-make-overriding-CFLAGS-possible.patch b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0005-make-overriding-CFLAGS-possible.patch
deleted file mode 100644
index e93015e..0000000
--- a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0005-make-overriding-CFLAGS-possible.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From a7914d5ffee6ffdfb3f2b8ebcc22c8367d078301 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 27 Mar 2021 22:08:43 -0700
-Subject: [PATCH] make overriding CFLAGS possible
-
-Author: Jakub Wilk <jwilk@debian.org>
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- base64/Makefile | 2 +-
- src/Makefile    | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/base64/Makefile b/base64/Makefile
-index 30a2c5c..783a248 100644
---- a/base64/Makefile
-+++ b/base64/Makefile
-@@ -3,7 +3,7 @@ BINARIES = base64
- # Build flags (uncomment one)
- #############################
- # Release build flags
--CFLAGS += -O3
-+CFLAGS ?= -O3
- #############################
- # Debug build flags
- #CFLAGS += -g
-diff --git a/src/Makefile b/src/Makefile
-index 28b2382..48801fc 100644
---- a/src/Makefile
-+++ b/src/Makefile
-@@ -3,7 +3,7 @@ LIBRARIES = libb64.a
- # Build flags (uncomment one)
- #############################
- # Release build flags
--CFLAGS += -O3
-+CFLAGS ?= -O3
- #############################
- # Debug build flags
- #CFLAGS += -g
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0006-do-not-export-the-CHARS_PER_LINE-variable.patch b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0006-do-not-export-the-CHARS_PER_LINE-variable.patch
deleted file mode 100644
index 9ba08c8..0000000
--- a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0006-do-not-export-the-CHARS_PER_LINE-variable.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-From a1b9bb4af819ed389675f16e4a521efeda4cc3f3 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 27 Mar 2021 22:10:48 -0700
-Subject: [PATCH] do not export the CHARS_PER_LINE variable
-
-The library exports a variable named "CHARS_PER_LINE". This is a generic name that could conflict with a name in user's code.
-Please either rename the variable or make it static.
-
-Upstream-Status: Submitted [http://sourceforge.net/tracker/?func=detail&aid=3591420&group_id=152942&atid=785907]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- src/cencode.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/src/cencode.c b/src/cencode.c
-index 03ba5b6..3df62a8 100644
---- a/src/cencode.c
-+++ b/src/cencode.c
-@@ -7,7 +7,7 @@ For details, see http://sourceforge.net/projects/libb64
- 
- #include <b64/cencode.h>
- 
--const int CHARS_PER_LINE = 72;
-+static const int CHARS_PER_LINE = 72;
- 
- void base64_init_encodestate(base64_encodestate* state_in)
- {
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0007-initialize-encoder-decoder-state-in-the-constructors.patch b/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0007-initialize-encoder-decoder-state-in-the-constructors.patch
deleted file mode 100644
index fdf8339..0000000
--- a/meta-openembedded/meta-oe/recipes-support/libb64/libb64/0007-initialize-encoder-decoder-state-in-the-constructors.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-From c1ba44d83cc7d9d756cfb063717852eae9d03328 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 27 Mar 2021 22:12:41 -0700
-Subject: [PATCH] initialize encoder/decoder state in the constructors
-
-Author: Jakub Wilk <jwilk@debian.org>
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- include/b64/decode.h | 4 +++-
- include/b64/encode.h | 4 +++-
- 2 files changed, 6 insertions(+), 2 deletions(-)
-
-diff --git a/include/b64/decode.h b/include/b64/decode.h
-index e9019f3..aefb7bc 100644
---- a/include/b64/decode.h
-+++ b/include/b64/decode.h
-@@ -25,7 +25,9 @@ namespace base64
- 
- 		decoder(int buffersize_in = BUFSIZ)
- 		: _buffersize(buffersize_in)
--		{}
-+		{
-+			base64_init_decodestate(&_state);
-+		}
- 
- 		int decode(char value_in)
- 		{
-diff --git a/include/b64/encode.h b/include/b64/encode.h
-index e7a7035..33848b3 100644
---- a/include/b64/encode.h
-+++ b/include/b64/encode.h
-@@ -25,7 +25,9 @@ namespace base64
- 
- 		encoder(int buffersize_in = BUFSIZ)
- 		: _buffersize(buffersize_in)
--		{}
-+		{
-+			base64_init_encodestate(&_state);
-+		}
- 
- 		int encode(char value_in)
- 		{
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64_1.2.1.bb b/meta-openembedded/meta-oe/recipes-support/libb64/libb64_1.2.1.bb
deleted file mode 100644
index 64a34fe..0000000
--- a/meta-openembedded/meta-oe/recipes-support/libb64/libb64_1.2.1.bb
+++ /dev/null
@@ -1,39 +0,0 @@
-SUMMARY = "Base64 Encoding/Decoding Routines"
-DESCRIPTION = "base64 encoding/decoding library - runtime library \
-libb64 is a library of ANSI C routines for fast encoding/decoding data into \
-and from a base64-encoded format"
-HOMEPAGE = "http://libb64.sourceforge.net/"
-LICENSE = "PD"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=ce551aad762074c7ab618a0e07a8dca3"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BPN}/${BP}.zip \
-           file://0001-example-Do-not-run-the-tests.patch \
-           file://0002-use-BUFSIZ-as-buffer-size.patch \
-           file://0003-fix-integer-overflows.patch \
-           file://0004-Fix-off-by-one-error.patch \
-           file://0005-make-overriding-CFLAGS-possible.patch \
-           file://0006-do-not-export-the-CHARS_PER_LINE-variable.patch \
-           file://0007-initialize-encoder-decoder-state-in-the-constructors.patch \
-           "
-SRC_URI[sha256sum] = "20106f0ba95cfd9c35a13c71206643e3fb3e46512df3e2efb2fdbf87116314b2"
-
-PARALLEL_MAKE = ""
-
-CFLAGS += "-fPIC"
-
-do_configure () {
-    :
-}
-
-do_compile () {
-    oe_runmake
-    ${CC} ${LDFLAGS} ${CFLAGS} -shared -Wl,-soname,${BPN}.so.0 src/*.o -o src/${BPN}.so.0
-}
-
-do_install () {
-    install -d ${D}${includedir}/b64
-    install -Dm 0644 ${B}/src/libb64.a ${D}${libdir}/libb64.a
-    install -Dm 0644 ${B}/src/libb64.so.0 ${D}${libdir}/libb64.so.0
-    ln -s libb64.so.0 ${D}${libdir}/libb64.so
-    install -Dm 0644 ${S}/include/b64/*.h ${D}${includedir}/b64/
-}
diff --git a/meta-openembedded/meta-oe/recipes-support/libb64/libb64_2.0.0.1.bb b/meta-openembedded/meta-oe/recipes-support/libb64/libb64_2.0.0.1.bb
new file mode 100644
index 0000000..8122419
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/libb64/libb64_2.0.0.1.bb
@@ -0,0 +1,37 @@
+SUMMARY = "Base64 Encoding/Decoding Routines"
+DESCRIPTION = "base64 encoding/decoding library - runtime library \
+libb64 is a library of ANSI C routines for fast encoding/decoding data into \
+and from a base64-encoded format"
+HOMEPAGE = "https://github.com/libb64"
+LICENSE = "PD"
+LIC_FILES_CHKSUM = "file://LICENSE.md;md5=81296a564fa0621472714aae7c763d96"
+
+PV .= "+2.0.0.2+git${SRCPV}"
+SRCREV = "ce864b17ea0e24a91e77c7dd3eb2d1ac4175b3f0"
+
+SRC_URI = "git://github.com/libb64/libb64;protocol=https;branch=master \
+           file://0001-example-Do-not-run-the-tests.patch \
+           file://0001-Makefile-fix-parallel-build-of-examples.patch \
+           file://0001-examples-Use-proper-function-prototype-for-main.patch \
+           "
+
+S = "${WORKDIR}/git"
+
+CFLAGS += "-fPIC"
+
+do_configure () {
+    :
+}
+
+do_compile () {
+    oe_runmake
+    ${CC} ${LDFLAGS} ${CFLAGS} -shared -Wl,-soname,${BPN}.so.0 src/*.o -o src/${BPN}.so.0
+}
+
+do_install () {
+    install -d ${D}${includedir}/b64
+    install -Dm 0644 ${B}/src/libb64.a ${D}${libdir}/libb64.a
+    install -Dm 0644 ${B}/src/libb64.so.0 ${D}${libdir}/libb64.so.0
+    ln -s libb64.so.0 ${D}${libdir}/libb64.so
+    install -Dm 0644 ${S}/include/b64/*.h ${D}${includedir}/b64/
+}
diff --git a/meta-openembedded/meta-oe/recipes-support/libftdi/libftdi_1.4.bb b/meta-openembedded/meta-oe/recipes-support/libftdi/libftdi_1.4.bb
deleted file mode 100644
index f142cb2..0000000
--- a/meta-openembedded/meta-oe/recipes-support/libftdi/libftdi_1.4.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-DESCRIPTION = "libftdi is a library to talk to FTDI chips.\
-FT232BM/245BM, FT2232C/D and FT232/245R using libusb,\
-including the popular bitbang mode."
-HOMEPAGE = "http://www.intra2net.com/en/developer/libftdi/"
-SECTION = "libs"
-LICENSE = "LGPL-2.1-only & GPL-2.0-only"
-LIC_FILES_CHKSUM= "\
-    file://COPYING.GPL;md5=751419260aa954499f7abaabaa882bbe \
-    file://COPYING.LIB;md5=5f30f0716dfdd0d91eb439ebec522ec2 \
-"
-
-DEPENDS = "libusb1 python3 swig-native"
-
-SRC_URI = "http://www.intra2net.com/en/developer/${BPN}/download/${BPN}1-${PV}.tar.bz2"
-SRC_URI[md5sum] = "0c09fb2bb19a57c839fa6845c6c780a2"
-SRC_URI[sha256sum] = "ec36fb49080f834690c24008328a5ef42d3cf584ef4060f3a35aa4681cb31b74"
-
-S = "${WORKDIR}/${BPN}1-${PV}"
-
-inherit cmake binconfig pkgconfig python3native
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[cpp-wrapper] = "-DFTDI_BUILD_CPP=on -DFTDIPP=on,-DFTDI_BUILD_CPP=off -DFTDIPP=off,boost"
-
-EXTRA_OECMAKE = "-DLIB_SUFFIX=${@d.getVar('baselib').replace('lib', '')} \
-                 -DPYTHON_LIBRARY=${STAGING_LIBDIR}/lib${PYTHON_DIR}${PYTHON_ABI}.so \
-                 -DPYTHON_INCLUDE_DIR=${STAGING_INCDIR}/${PYTHON_DIR}${PYTHON_ABI}"
-
-BBCLASSEXTEND = "native nativesdk"
-
-PACKAGES += "${PN}-python"
-
-FILES:${PN}-python = "${libdir}/${PYTHON_DIR}/site-packages/"
diff --git a/meta-openembedded/meta-oe/recipes-support/libftdi/libftdi_1.5.bb b/meta-openembedded/meta-oe/recipes-support/libftdi/libftdi_1.5.bb
new file mode 100644
index 0000000..b03a0c7
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/libftdi/libftdi_1.5.bb
@@ -0,0 +1,33 @@
+DESCRIPTION = "libftdi is a library to talk to FTDI chips.\
+FT232BM/245BM, FT2232C/D and FT232/245R using libusb,\
+including the popular bitbang mode."
+HOMEPAGE = "http://www.intra2net.com/en/developer/libftdi/"
+SECTION = "libs"
+LICENSE = "LGPL-2.1-only & GPL-2.0-only"
+LIC_FILES_CHKSUM= "\
+    file://COPYING.GPL;md5=751419260aa954499f7abaabaa882bbe \
+    file://COPYING.LIB;md5=5f30f0716dfdd0d91eb439ebec522ec2 \
+"
+
+DEPENDS = "libusb1 python3 swig-native"
+
+SRC_URI = "http://www.intra2net.com/en/developer/${BPN}/download/${BPN}1-${PV}.tar.bz2"
+SRC_URI[sha256sum] = "7c7091e9c86196148bd41177b4590dccb1510bfe6cea5bf7407ff194482eb049"
+
+S = "${WORKDIR}/${BPN}1-${PV}"
+
+inherit cmake binconfig pkgconfig python3native
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[cpp-wrapper] = "-DFTDI_BUILD_CPP=on -DFTDIPP=on,-DFTDI_BUILD_CPP=off -DFTDIPP=off,boost"
+
+EXTRA_OECMAKE = "-DSTATICLIBS=off -DEXAMPLES=off -DFTDI_EEPROM=off \
+                 -DLIB_SUFFIX=${@d.getVar('baselib').replace('lib', '')} \
+                 -DPYTHON_LIBRARY=${STAGING_LIBDIR}/lib${PYTHON_DIR}${PYTHON_ABI}.so \
+                 -DPYTHON_INCLUDE_DIR=${STAGING_INCDIR}/${PYTHON_DIR}${PYTHON_ABI}"
+
+BBCLASSEXTEND = "native nativesdk"
+
+PACKAGES += "${PN}-python"
+
+FILES:${PN}-python = "${libdir}/${PYTHON_DIR}/site-packages/"
diff --git a/meta-openembedded/meta-oe/recipes-support/libgpiod/libgpiod_1.6.3.bb b/meta-openembedded/meta-oe/recipes-support/libgpiod/libgpiod_1.6.3.bb
index 2cccf93..3e6e5d5 100644
--- a/meta-openembedded/meta-oe/recipes-support/libgpiod/libgpiod_1.6.3.bb
+++ b/meta-openembedded/meta-oe/recipes-support/libgpiod/libgpiod_1.6.3.bb
@@ -19,7 +19,8 @@
 PACKAGECONFIG[python3] = "--enable-bindings-python,--disable-bindings-python,python3"
 
 # Enable cxx bindings by default.
-PACKAGECONFIG ?= "cxx"
+PACKAGECONFIG ?= "cxx \
+		  ${@bb.utils.contains('PTEST_ENABLED', '1', 'tests', '', d)}"
 
 # Always build tools - they don't have any additional
 # requirements over the library.
@@ -56,8 +57,6 @@
 "
 RDEPENDS:${PN}-ptest += "bats python3-packaging"
 
-PACKAGECONFIG:append = " ${@bb.utils.contains('DISTRO_FEATURES', 'ptest', 'tests', '', d)}"
-
 do_install_ptest() {
     install -d ${D}${PTEST_PATH}/tests
 
diff --git a/meta-openembedded/meta-oe/recipes-support/libmxml/libmxml_3.3.1.bb b/meta-openembedded/meta-oe/recipes-support/libmxml/libmxml_3.3.1.bb
new file mode 100644
index 0000000..b81050b
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/libmxml/libmxml_3.3.1.bb
@@ -0,0 +1,33 @@
+DESCRIPTION = "Tiny XML Library"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
+HOMEPAGE = "https://www.msweet.org/mxml/"
+BUGTRACKER = "https://github.com/michaelrsweet/mxml/issues"
+
+SRC_URI = "git://github.com/michaelrsweet/mxml.git;nobranch=1;protocol=https"
+SRCREV = "fd47c7d115191c8a6bce2c781ffee41e179530f2"
+S = "${WORKDIR}/git"
+
+inherit autotools
+
+PACKAGECONFIG ??= "threads"
+PACKAGECONFIG[threads] = "--enable-threads,--disable-threads"
+
+# Package does not support out of tree builds.
+B = "${S}"
+
+# MXML uses autotools but it explicitly states it does not support autoheader.
+EXTRA_AUTORECONF = "--exclude=autopoint,autoheader"
+
+do_configure:prepend() {
+    # Respect optimization CFLAGS specified by OE.
+    sed -e 's/-Os -g//' -i ${S}/configure.ac
+
+    # Enable verbose compilation output. This is required for extra QA checks to work.
+    sed -e '/.SILENT:/d' -i ${S}/Makefile.in
+}
+
+do_install() {
+    # Package uses DSTROOT instread of standard DESTDIR to specify install location.
+    oe_runmake install DSTROOT=${D}
+}
diff --git a/meta-openembedded/meta-oe/recipes-support/libmxml/libmxml_3.3.bb b/meta-openembedded/meta-oe/recipes-support/libmxml/libmxml_3.3.bb
deleted file mode 100644
index c8e2167..0000000
--- a/meta-openembedded/meta-oe/recipes-support/libmxml/libmxml_3.3.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-DESCRIPTION = "Tiny XML Library"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
-HOMEPAGE = "https://www.msweet.org/mxml/"
-BUGTRACKER = "https://github.com/michaelrsweet/mxml/issues"
-
-SRC_URI = "git://github.com/michaelrsweet/mxml.git;nobranch=1;protocol=https"
-SRCREV = "0237559fdbcecae34157b547aa2b99e12de305a2"
-S = "${WORKDIR}/git"
-
-inherit autotools
-
-PACKAGECONFIG ??= "threads"
-PACKAGECONFIG[threads] = "--enable-threads,--disable-threads"
-
-# Package does not support out of tree builds.
-B = "${S}"
-
-# MXML uses autotools but it explicitly states it does not support autoheader.
-EXTRA_AUTORECONF = "--exclude=autopoint,autoheader"
-
-do_configure:prepend() {
-    # Respect optimization CFLAGS specified by OE.
-    sed -e 's/-Os -g//' -i ${S}/configure.ac
-
-    # Enable verbose compilation output. This is required for extra QA checks to work.
-    sed -e '/.SILENT:/d' -i ${S}/Makefile.in
-}
-
-do_install() {
-    # Package uses DSTROOT instread of standard DESTDIR to specify install location.
-    oe_runmake install DSTROOT=${D}
-}
diff --git a/meta-openembedded/meta-oe/recipes-support/libteam/libteam/0001-teamd-Include-missing-headers-for-strrchr-and-memcmp.patch b/meta-openembedded/meta-oe/recipes-support/libteam/libteam/0001-teamd-Include-missing-headers-for-strrchr-and-memcmp.patch
new file mode 100644
index 0000000..5f8e561
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/libteam/libteam/0001-teamd-Include-missing-headers-for-strrchr-and-memcmp.patch
@@ -0,0 +1,46 @@
+From 49693cac37ee35ff673240c8060201efe0d999c2 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 24 Aug 2022 22:27:03 -0700
+Subject: [PATCH] teamd: Include missing headers for strrchr and memcmp
+
+Compiler does not see the prototype for these functions otherwise and
+build fails e.g.
+
+| ../../git/teamd/teamd_phys_port_check.c:52:10: error: call to undeclared library function 'strrchr' with type 'char *(const char *, int)'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
+|         start = strrchr(link, '/');
+|                 ^
+
+Upstream-Status: Submitted [https://github.com/jpirko/libteam/pull/68]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ teamd/teamd_phys_port_check.c    | 1 +
+ teamd/teamd_runner_loadbalance.c | 1 +
+ 2 files changed, 2 insertions(+)
+
+diff --git a/teamd/teamd_phys_port_check.c b/teamd/teamd_phys_port_check.c
+index 1eec129..c2454ab 100644
+--- a/teamd/teamd_phys_port_check.c
++++ b/teamd/teamd_phys_port_check.c
+@@ -19,6 +19,7 @@
+ 
+ #include <stdio.h>
+ #include <errno.h>
++#include <string.h>
+ #include <sys/types.h>
+ #include <sys/stat.h>
+ #include <unistd.h>
+diff --git a/teamd/teamd_runner_loadbalance.c b/teamd/teamd_runner_loadbalance.c
+index a581472..421a7c6 100644
+--- a/teamd/teamd_runner_loadbalance.c
++++ b/teamd/teamd_runner_loadbalance.c
+@@ -17,6 +17,7 @@
+  *   Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+  */
+ 
++#include <string.h>
+ #include <sys/socket.h>
+ #include <linux/netdevice.h>
+ #include <team.h>
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-oe/recipes-support/libteam/libteam_1.31.bb b/meta-openembedded/meta-oe/recipes-support/libteam/libteam_1.31.bb
index ad84389..023cea9 100644
--- a/meta-openembedded/meta-oe/recipes-support/libteam/libteam_1.31.bb
+++ b/meta-openembedded/meta-oe/recipes-support/libteam/libteam_1.31.bb
@@ -13,6 +13,7 @@
            file://0001-team_basic_test.py-disable-RedHat-specific-test.patch \
            file://0001-team_basic_test.py-switch-to-python3.patch \
            file://0001-team_basic_test.py-check-the-return-value.patch \
+           file://0001-teamd-Include-missing-headers-for-strrchr-and-memcmp.patch \
            file://run-ptest \
            "
 SRCREV = "3ee12c6d569977cf1cd30d0da77807a07aa77158"
diff --git a/meta-openembedded/meta-oe/recipes-support/nano/nano_6.3.bb b/meta-openembedded/meta-oe/recipes-support/nano/nano_6.3.bb
deleted file mode 100644
index 6d22bfc..0000000
--- a/meta-openembedded/meta-oe/recipes-support/nano/nano_6.3.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "Small and friendly console text editor"
-DESCRIPTION = "GNU nano (Nano's ANOther editor, or \
-Not ANOther editor) is an enhanced clone of the \
-Pico text editor."
-HOMEPAGE = "http://www.nano-editor.org/"
-SECTION = "console/utils"
-LICENSE = "GPL-3.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
-
-DEPENDS = "ncurses file"
-RDEPENDS:${PN} = "ncurses-terminfo-base"
-
-PV_MAJOR = "${@d.getVar('PV').split('.')[0]}"
-
-SRC_URI = "https://nano-editor.org/dist/v${PV_MAJOR}/nano-${PV}.tar.xz"
-SRC_URI[sha256sum] = "eb532da4985672730b500f685dbaab885a466d08fbbf7415832b95805e6f8687"
-
-UPSTREAM_CHECK_URI = "https://ftp.gnu.org/gnu/nano"
-
-inherit autotools gettext pkgconfig
-
-PACKAGECONFIG[tiny] = "--enable-tiny,"
diff --git a/meta-openembedded/meta-oe/recipes-support/nano/nano_6.4.bb b/meta-openembedded/meta-oe/recipes-support/nano/nano_6.4.bb
new file mode 100644
index 0000000..d499362
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/nano/nano_6.4.bb
@@ -0,0 +1,22 @@
+SUMMARY = "Small and friendly console text editor"
+DESCRIPTION = "GNU nano (Nano's ANOther editor, or \
+Not ANOther editor) is an enhanced clone of the \
+Pico text editor."
+HOMEPAGE = "http://www.nano-editor.org/"
+SECTION = "console/utils"
+LICENSE = "GPL-3.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
+
+DEPENDS = "ncurses file"
+RDEPENDS:${PN} = "ncurses-terminfo-base"
+
+PV_MAJOR = "${@d.getVar('PV').split('.')[0]}"
+
+SRC_URI = "https://nano-editor.org/dist/v${PV_MAJOR}/nano-${PV}.tar.xz"
+SRC_URI[sha256sum] = "4199ae8ca78a7796de56de1a41b821dc47912c0307e9816b56cc317df34661c0"
+
+UPSTREAM_CHECK_URI = "https://ftp.gnu.org/gnu/nano"
+
+inherit autotools gettext pkgconfig
+
+PACKAGECONFIG[tiny] = "--enable-tiny,"
diff --git a/meta-openembedded/meta-oe/recipes-support/neon/neon/0001-Disable-installing-documentation.patch b/meta-openembedded/meta-oe/recipes-support/neon/neon/0001-Disable-installing-documentation.patch
new file mode 100644
index 0000000..1f63df2
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/neon/neon/0001-Disable-installing-documentation.patch
@@ -0,0 +1,28 @@
+From f477408f1c24ce6e5589e5a99d369279916c7c6e Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 24 Aug 2022 13:11:12 -0700
+Subject: [PATCH] Disable installing documentation
+
+It does not build
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makefile.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile.in b/Makefile.in
+index ed87a69..c32405c 100644
+--- a/Makefile.in
++++ b/Makefile.in
+@@ -125,7 +125,7 @@ Makefile: $(srcdir)/Makefile.in
+ neon-config: $(srcdir)/neon-config.in
+ 	@./config.status neon-config
+ 
+-install-docs: install-man install-html
++install-docs:
+ 
+ install-html:
+ 	$(INSTALL) -d $(DESTDIR)$(docdir)/html
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-oe/recipes-support/neon/neon/fix-package-check-for-libxml2.patch b/meta-openembedded/meta-oe/recipes-support/neon/neon/fix-package-check-for-libxml2.patch
deleted file mode 100644
index 92a05c0..0000000
--- a/meta-openembedded/meta-oe/recipes-support/neon/neon/fix-package-check-for-libxml2.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-neon: Change the neon configure to use pkg-config instead of xml2-config
-
-xml2-config is broken for neon
-if packageconfig libxml2, webdav, zlib is enabled for neon
-we get the following configure error in the yocto build
-
-| configure: WebDAV support is enabled
-| checking for xml2-config... xml2-config
-| ERROR: /usr/bin/xml2-config should not be used, use an alternative such as pkg-config
-| ERROR: /usr/bin/xml2-config should not be used, use an alternative such as pkg-config
-| ERROR: /usr/bin/xml2-config should not be used, use an alternative such as pkg-config
-| checking libxml/xmlversion.h usability... no
-| checking libxml/xmlversion.h presence... no
-| checking for libxml/xmlversion.h... no
-| configure: error: could not find parser.h, libxml installation problem?
-| WARNING: exit code 1 from a shell command.
-
-The patch lets configure use pkg-config
-
-Upstream-Status: inappropriate
-(Upstream suggests to use latest 0.31 as per the discussion
-https://github.com/notroj/neon/discussions/47)
-
-Signed-off-by: Nisha Parrakat <Nisha.Parrakat@kpit.com>
---- a/macros/neon-xml-parser.m4        2008-07-19 23:52:35.000000000 +0200
-+++ b/macros/neon-xml-parser.m4        2021-02-15 23:56:59.202751257 +0100
-@@ -44,17 +44,17 @@
-
- dnl Find libxml2: run $1 if found, else $2
- AC_DEFUN([NE_XML_LIBXML2], [
--AC_CHECK_PROG(XML2_CONFIG, xml2-config, xml2-config)
-+AC_CHECK_PROG(XML2_CONFIG, pkg-config, pkg-config)
- if test -n "$XML2_CONFIG"; then
--    neon_xml_parser_message="libxml `$XML2_CONFIG --version`"
-     AC_DEFINE(HAVE_LIBXML, 1, [Define if you have libxml])
--    # xml2-config in some versions erroneously includes -I/include
--    # in the --cflags output.
--    CPPFLAGS="$CPPFLAGS `$XML2_CONFIG --cflags | sed 's| -I/include||g'`"
--    NEON_LIBS="$NEON_LIBS `$XML2_CONFIG --libs | sed 's|-L/usr/lib ||g'`"
-+    PKG_CHECK_MODULES(XML, libxml-2.0 >= 2.4)
-+    AC_MSG_NOTICE([libxmlfound CFlags : , ${XML_CFLAGS}])
-+    CPPFLAGS="$CPPFLAGS ${XML_CFLAGS}"
-+    NEON_LIBS="$NEON_LIBS ${XML_LIBS}"
-     AC_CHECK_HEADERS(libxml/xmlversion.h libxml/parser.h,,[
-       AC_MSG_ERROR([could not find parser.h, libxml installation problem?])])
-     neon_xml_parser=libxml2
-+    neon_xml_parser_message="libxml2"
- else
-     $1
- fi
diff --git a/meta-openembedded/meta-oe/recipes-support/neon/neon_0.30.2.bb b/meta-openembedded/meta-oe/recipes-support/neon/neon_0.30.2.bb
deleted file mode 100644
index 646a9ec..0000000
--- a/meta-openembedded/meta-oe/recipes-support/neon/neon_0.30.2.bb
+++ /dev/null
@@ -1,52 +0,0 @@
-SUMMARY = "An HTTP and WebDAV client library with a C interface"
-HOMEPAGE = "http://www.webdav.org/neon/"
-SECTION = "libs"
-LICENSE = "LGPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://src/COPYING.LIB;md5=f30a9716ef3762e3467a2f62bf790f0a \
-                    file://src/ne_utils.h;beginline=1;endline=20;md5=2caca609538eddaa6f6adf120a218037"
-
-SRC_URI = "${DEBIAN_MIRROR}/main/n/neon27/neon27_${PV}.orig.tar.gz \
-           file://pkgconfig.patch \
-           file://fix-package-check-for-libxml2.patch \
-           file://run-ptest \
-          "
-
-SRC_URI[md5sum] = "e28d77bf14032d7f5046b3930704ef41"
-SRC_URI[sha256sum] = "db0bd8cdec329b48f53a6f00199c92d5ba40b0f015b153718d1b15d3d967fbca"
-
-inherit autotools binconfig-disabled lib_package pkgconfig ptest
-
-# Enable gnutls or openssl, not both
-PACKAGECONFIG ?= "expat gnutls libproxy webdav zlib"
-PACKAGECONFIG:class-native = "expat gnutls webdav zlib"
-
-PACKAGECONFIG[expat] = "--with-expat,--without-expat,expat"
-PACKAGECONFIG[gnutls] = "--with-ssl=gnutls,,gnutls"
-PACKAGECONFIG[gssapi] = "--with-gssapi,--without-gssapi,krb5"
-PACKAGECONFIG[libproxy] = "--with-libproxy,--without-libproxy,libproxy"
-PACKAGECONFIG[libxml2] = "--with-libxml2,--without-libxml2,libxml2"
-PACKAGECONFIG[openssl] = "--with-ssl=openssl,,openssl"
-PACKAGECONFIG[webdav] = "--enable-webdav,--disable-webdav,"
-PACKAGECONFIG[zlib] = "--with-zlib,--without-zlib,zlib"
-
-EXTRA_OECONF += "--enable-shared"
-
-do_compile:append() {
-    oe_runmake -C test
-}
-
-do_install_ptest(){
-    BASIC_TESTS="auth basic redirect request session socket string-tests \
-                 stubs uri-tests util-tests"
-    DAV_TESTS="acl3744 lock oldacl props xml xmlreq"
-    mkdir "${D}${PTEST_PATH}/test"
-    for i in ${BASIC_TESTS} ${DAV_TESTS}
-    do
-        install -m 0755 "${B}/test/${i}" \
-        "${D}${PTEST_PATH}/test"
-    done
-}
-
-BINCONFIG = "${bindir}/neon-config"
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/neon/neon_0.32.2.bb b/meta-openembedded/meta-oe/recipes-support/neon/neon_0.32.2.bb
new file mode 100644
index 0000000..0f4e971
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/neon/neon_0.32.2.bb
@@ -0,0 +1,63 @@
+SUMMARY = "An HTTP and WebDAV client library with a C interface"
+HOMEPAGE = "http://www.webdav.org/neon/"
+SECTION = "libs"
+LICENSE = "LGPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://src/COPYING.LIB;md5=f30a9716ef3762e3467a2f62bf790f0a \
+                    file://src/ne_utils.h;beginline=1;endline=20;md5=34c8e338bfa0237561e68d30c3c71133"
+
+SRC_URI = "${DEBIAN_MIRROR}/main/n/neon27/neon27_${PV}.orig.tar.gz \
+           file://pkgconfig.patch \
+           file://0001-Disable-installing-documentation.patch \
+           file://run-ptest \
+           "
+
+SRC_URI[sha256sum] = "7a25ba2c9223676b9aaec22a585a0ca118127bad71deed0b9ed6cd960fe5c353"
+
+inherit autotools-brokensep binconfig-disabled lib_package pkgconfig ptest
+
+# Enable gnutls or openssl, not both
+PACKAGECONFIG ?= "expat gnutls libproxy webdav zlib nls"
+PACKAGECONFIG:class-native = "expat gnutls webdav zlib nls"
+PACKAGECONFIG:remove:libc-musl = "nls"
+
+PACKAGECONFIG[expat] = "--with-expat,--without-expat,expat"
+PACKAGECONFIG[gnutls] = "--with-ssl=gnutls,,gnutls"
+PACKAGECONFIG[gssapi] = "--with-gssapi,--without-gssapi,krb5"
+PACKAGECONFIG[libproxy] = "--with-libproxy,--without-libproxy,libproxy"
+PACKAGECONFIG[libxml2] = "--with-libxml2,--without-libxml2,libxml2"
+PACKAGECONFIG[nls] = ",--disable-nls,gettext-native"
+PACKAGECONFIG[openssl] = "--with-ssl=openssl,,openssl"
+PACKAGECONFIG[webdav] = "--enable-webdav,--disable-webdav,"
+PACKAGECONFIG[zlib] = "--with-zlib,--without-zlib,zlib"
+
+EXTRA_OECONF += "--enable-shared --enable-threadsafe-ssl=posix"
+
+# Do not install into /usr/local
+EXTRA_OEMAKE:append:class-native = "prefix=${prefix_native}"
+
+do_configure:prepend() {
+    echo "${PV}" > ${S}/.version
+}
+
+do_compile:append() {
+    if ${@bb.utils.contains('PACKAGECONFIG', 'nls', 'true', 'false', d)}; then
+        oe_runmake compile-gmo
+    fi
+    oe_runmake -C test
+}
+
+do_install_ptest(){
+    BASIC_TESTS="auth basic redirect request session socket string-tests \
+                 stubs uri-tests util-tests"
+    DAV_TESTS="acl3744 lock oldacl props xml xmlreq"
+    mkdir "${D}${PTEST_PATH}/test"
+    for i in ${BASIC_TESTS} ${DAV_TESTS}
+    do
+        install -m 0755 "${B}/test/${i}" \
+        "${D}${PTEST_PATH}/test"
+    done
+}
+
+BINCONFIG = "${bindir}/neon-config"
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/nspr/nspr/0001-config-nspr-config.in-don-t-pass-LDFLAGS.patch b/meta-openembedded/meta-oe/recipes-support/nspr/nspr/0001-config-nspr-config.in-don-t-pass-LDFLAGS.patch
new file mode 100644
index 0000000..6ebc9c4
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/nspr/nspr/0001-config-nspr-config.in-don-t-pass-LDFLAGS.patch
@@ -0,0 +1,30 @@
+From 13e9d66c24d1dce5179805ae5e1bf940409b4914 Mon Sep 17 00:00:00 2001
+From: Mingli Yu <mingli.yu@windriver.com>
+Date: Wed, 10 Aug 2022 15:21:07 +0800
+Subject: [PATCH] config/nspr-config.in: don't pass LDFLAGS
+
+Don't pass LDFLAGS to avoid exposing the build env info.
+
+Upstream-Status: Inappropriate [oe specific]
+
+Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
+---
+ config/nspr-config.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/config/nspr-config.in b/config/nspr-config.in
+index 2cb62a0..2bec715 100755
+--- a/config/nspr-config.in
++++ b/config/nspr-config.in
+@@ -136,7 +136,7 @@ if test "$echo_libs" = "yes"; then
+       if test -n "$lib_nspr"; then
+ 	libdirs="$libdirs -lnspr${major_version}"
+       fi
+-      os_ldflags="@LDFLAGS@"
++      os_ldflags="LDFLAGS"
+       for i in $os_ldflags ; do
+ 	if echo $i | grep \^-L >/dev/null; then
+ 	  libdirs="$libdirs $i"
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-oe/recipes-support/nspr/nspr_4.29.bb b/meta-openembedded/meta-oe/recipes-support/nspr/nspr_4.29.bb
index b60de08..92c5234 100644
--- a/meta-openembedded/meta-oe/recipes-support/nspr/nspr_4.29.bb
+++ b/meta-openembedded/meta-oe/recipes-support/nspr/nspr_4.29.bb
@@ -12,6 +12,7 @@
            file://0002-Add-nios2-support.patch \
            file://0001-md-Fix-build-with-musl.patch \
            file://Makefile.in-remove-_BUILD_STRING-and-_BUILD_TIME.patch \
+           file://0001-config-nspr-config.in-don-t-pass-LDFLAGS.patch \
            file://nspr.pc.in \
 "
 
diff --git a/meta-openembedded/meta-oe/recipes-support/opencv/ade/0001-use-GNUInstallDirs-for-detecting-install-paths.patch b/meta-openembedded/meta-oe/recipes-support/opencv/ade/0001-use-GNUInstallDirs-for-detecting-install-paths.patch
deleted file mode 100644
index f038b0a..0000000
--- a/meta-openembedded/meta-oe/recipes-support/opencv/ade/0001-use-GNUInstallDirs-for-detecting-install-paths.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-From 67ccf77d97b76e8260c9d793ab172577e2393dbc Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 19 Dec 2019 21:33:46 -0800
-Subject: [PATCH] use GNUInstallDirs for detecting install paths
-
-This helps with multilib builds
-
-Upstream-Status: Submitted [https://github.com/opencv/ade/pull/19]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- sources/ade/CMakeLists.txt | 10 ++++++----
- 1 file changed, 6 insertions(+), 4 deletions(-)
-
-diff --git a/sources/ade/CMakeLists.txt b/sources/ade/CMakeLists.txt
-index 2d1dd20..46415d1 100644
---- a/sources/ade/CMakeLists.txt
-+++ b/sources/ade/CMakeLists.txt
-@@ -47,12 +47,14 @@ if(BUILD_ADE_DOCUMENTATION)
-                       VERBATIM)
- endif()
- 
-+include(GNUInstallDirs)
-+
- install(TARGETS ade COMPONENT dev
-         EXPORT adeTargets
--        ARCHIVE DESTINATION lib
--        LIBRARY DESTINATION lib
--        RUNTIME DESTINATION lib
--        INCLUDES DESTINATION include)
-+	ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
-+        LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
-+        RUNTIME DESTINATION ${CMAKE_INSTALL_LIBDIR}
-+	INCLUDES DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
- 
- install(EXPORT adeTargets DESTINATION share/ade COMPONENT dev)
- 
--- 
-2.24.1
-
diff --git a/meta-openembedded/meta-oe/recipes-support/opencv/ade_0.1.1f.bb b/meta-openembedded/meta-oe/recipes-support/opencv/ade_0.1.1f.bb
deleted file mode 100644
index 48a4514..0000000
--- a/meta-openembedded/meta-oe/recipes-support/opencv/ade_0.1.1f.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "A graph construction, manipulation, and processing framework"
-DESCRIPTION = "ADE Framework is a graph construction, manipulation, \
-and processing framework. ADE Framework is suitable for \
-organizing data flow processing and execution."
-HOMEPAGE = "https://github.com/opencv/ade"
-
-SRC_URI = "git://github.com/opencv/ade.git;branch=master;protocol=https \
-           file://0001-use-GNUInstallDirs-for-detecting-install-paths.patch \
-           "
-
-SRCREV = "58b2595a1a95cc807be8bf6222f266a9a1f393a9"
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
-
-inherit cmake
-
-S = "${WORKDIR}/git"
-
-EXTRA_OECMAKE += " -DCMAKE_BUILD_TYPE=Release"
-
-FILES:${PN}-dev += "${datadir}/${BPN}/*.cmake"
diff --git a/meta-openembedded/meta-oe/recipes-support/opencv/ade_0.1.2.bb b/meta-openembedded/meta-oe/recipes-support/opencv/ade_0.1.2.bb
new file mode 100644
index 0000000..93b14ad
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/opencv/ade_0.1.2.bb
@@ -0,0 +1,20 @@
+SUMMARY = "A graph construction, manipulation, and processing framework"
+DESCRIPTION = "ADE Framework is a graph construction, manipulation, \
+and processing framework. ADE Framework is suitable for \
+organizing data flow processing and execution."
+HOMEPAGE = "https://github.com/opencv/ade"
+
+SRC_URI = "git://github.com/opencv/ade.git;branch=master;protocol=https"
+
+SRCREV = "1e02d7486bdb9c87993d91b9910e7cc6c4ddbf66"
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
+
+inherit cmake
+
+S = "${WORKDIR}/git"
+
+EXTRA_OECMAKE += " -DCMAKE_BUILD_TYPE=Release"
+
+FILES:${PN}-dev += "${datadir}/${BPN}/*.cmake"
diff --git a/meta-openembedded/meta-oe/recipes-support/pcp/pcp_5.3.6.bb b/meta-openembedded/meta-oe/recipes-support/pcp/pcp_5.3.6.bb
index 48d536e..0543d77 100644
--- a/meta-openembedded/meta-oe/recipes-support/pcp/pcp_5.3.6.bb
+++ b/meta-openembedded/meta-oe/recipes-support/pcp/pcp_5.3.6.bb
@@ -34,7 +34,7 @@
 USERADD_PARAM:${PN} = "--system --home ${localstatedir}/lib/pcp --no-create-home \
                        --user-group pcp"
 
-USERADD_PACKAGES = "${PN}-testsuite"
+USERADD_PACKAGES += "${PN}-testsuite"
 USERADD_PARAM:${PN}-testsuite = "--system --home ${localstatedir}/lib/pcp/testsuite --no-create-home \
                        --user-group pcpqa"
 
diff --git a/meta-openembedded/meta-oe/recipes-support/pcsc-lite/pcsc-lite/0001-pcsc-spy-use-python3-only.patch b/meta-openembedded/meta-oe/recipes-support/pcsc-lite/pcsc-lite/0001-pcsc-spy-use-python3-only.patch
deleted file mode 100644
index 3e7b0ad..0000000
--- a/meta-openembedded/meta-oe/recipes-support/pcsc-lite/pcsc-lite/0001-pcsc-spy-use-python3-only.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-From 75dd98876951d86890ceb30be521de57fd31e3c7 Mon Sep 17 00:00:00 2001
-From: Andrey Zhizhikin <andrey.z@gmail.com>
-Date: Mon, 27 Jan 2020 13:27:12 +0000
-Subject: [PATCH] pcsc-spy: use python3 only
-
-Python2 has been EOL and most distributions would not provide any
-support for it anymore. Since Python3 is available in all distributions
-now, switch pcsc-spy to use it exclusively.
-
-Upstream-Status: Pending
-
-Signed-off-by: Andrey Zhizhikin <andrey.z@gmail.com>
----
- src/spy/pcsc-spy | 9 ++-------
- 1 file changed, 2 insertions(+), 7 deletions(-)
-
-diff --git a/src/spy/pcsc-spy b/src/spy/pcsc-spy
-index 85222c6..965138e 100755
---- a/src/spy/pcsc-spy
-+++ b/src/spy/pcsc-spy
-@@ -1,4 +1,4 @@
--#! /usr/bin/python
-+#!/usr/bin/env python3
- 
- """
- #    Display PC/SC functions arguments
-@@ -22,12 +22,7 @@ from __future__ import print_function
- import os
- import signal
- import time
--try:
--    # for Python3
--    from queue import Queue
--except ImportError:
--    # for Python2
--    from Queue import Queue
-+from queue import Queue
- from threading import Thread
- from operator import attrgetter
- 
--- 
-2.17.1
-
diff --git a/meta-openembedded/meta-oe/recipes-support/pcsc-lite/pcsc-lite_1.9.0.bb b/meta-openembedded/meta-oe/recipes-support/pcsc-lite/pcsc-lite_1.9.0.bb
deleted file mode 100644
index 9ae091a..0000000
--- a/meta-openembedded/meta-oe/recipes-support/pcsc-lite/pcsc-lite_1.9.0.bb
+++ /dev/null
@@ -1,59 +0,0 @@
-SUMMARY = "PC/SC Lite smart card framework and applications"
-HOMEPAGE = "http://pcsclite.alioth.debian.org/"
-LICENSE = "BSD-3-Clause & GPL-3.0-or-later"
-LICENSE:${PN} = "BSD-3-Clause"
-LICENSE:${PN}-lib = "BSD-3-Clause"
-LICENSE:${PN}-doc = "BSD-3-Clause"
-LICENSE:${PN}-dev = "BSD-3-Clause"
-LICENSE:${PN}-dbg = "BSD-3-Clause & GPL-3.0-or-later"
-LICENSE:${PN}-spy = "GPL-3.0-or-later"
-LICENSE:${PN}-spy-dev = "GPL-3.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=628c01ba985ecfa21677f5ee2d5202f6"
-
-SRC_URI = "\
-	https://pcsclite.apdu.fr/files/${BP}.tar.bz2 \
-	file://0001-pcsc-spy-use-python3-only.patch \
-"
-SRC_URI[md5sum] = "eb595f2d398ff229207a6ec09fbc4e98"
-SRC_URI[sha256sum] = "0148d403137124552c5d0f10f8cdab2cbb8dfc7c6ce75e018faf667be34f2ef9"
-
-inherit autotools systemd pkgconfig perlnative
-
-EXTRA_OECONF = " \
-    --disable-libusb \
-    --enable-usbdropdir=${libdir}/pcsc/drivers \
-"
-
-S = "${WORKDIR}/pcsc-lite-${PV}"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)} udev"
-PACKAGECONFIG:class-native ??= ""
-
-PACKAGECONFIG[systemd]  = ",--disable-libsystemd,systemd,"
-PACKAGECONFIG[udev] = "--enable-libudev,--disable-libudev,udev"
-
-PACKAGES = "${PN} ${PN}-dbg ${PN}-dev ${PN}-lib ${PN}-doc ${PN}-spy ${PN}-spy-dev"
-
-RRECOMMENDS:${PN} = "ccid"
-RRECOMMENDS:${PN}:class-native = ""
-RPROVIDES:${PN}:class-native += "pcsc-lite-lib-native"
-
-FILES:${PN} = "${sbindir}/pcscd"
-FILES:${PN}-lib = "${libdir}/libpcsclite*${SOLIBS}"
-FILES:${PN}-dev = "${includedir} \
-                   ${libdir}/pkgconfig \
-                   ${libdir}/libpcsclite.la \
-                   ${libdir}/libpcsclite.so"
-
-FILES:${PN}-spy = "${bindir}/pcsc-spy \
-                   ${libdir}/libpcscspy*${SOLIBS}"
-FILES:${PN}-spy-dev = "${libdir}/libpcscspy.la \
-                       ${libdir}/libpcscspy.so "
-
-RPROVIDES:${PN} += "${PN}-systemd"
-RREPLACES:${PN} += "${PN}-systemd"
-RCONFLICTS:${PN} += "${PN}-systemd"
-SYSTEMD_SERVICE:${PN} = "pcscd.socket"
-RDEPENDS:${PN}-spy +="python3"
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/pcsc-lite/pcsc-lite_1.9.8.bb b/meta-openembedded/meta-oe/recipes-support/pcsc-lite/pcsc-lite_1.9.8.bb
new file mode 100644
index 0000000..64c9c2a
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/pcsc-lite/pcsc-lite_1.9.8.bb
@@ -0,0 +1,57 @@
+SUMMARY = "PC/SC Lite smart card framework and applications"
+HOMEPAGE = "http://pcsclite.alioth.debian.org/"
+LICENSE = "BSD-3-Clause & GPL-3.0-or-later"
+LICENSE:${PN} = "BSD-3-Clause"
+LICENSE:${PN}-lib = "BSD-3-Clause"
+LICENSE:${PN}-doc = "BSD-3-Clause"
+LICENSE:${PN}-dev = "BSD-3-Clause"
+LICENSE:${PN}-dbg = "BSD-3-Clause & GPL-3.0-or-later"
+LICENSE:${PN}-spy = "GPL-3.0-or-later"
+LICENSE:${PN}-spy-dev = "GPL-3.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=628c01ba985ecfa21677f5ee2d5202f6"
+DEPENDS = "autoconf-archive-native"
+
+SRC_URI = "https://pcsclite.apdu.fr/files/${BP}.tar.bz2"
+SRC_URI[md5sum] = "d063c6ca17c17fab39a85132811e155d"
+SRC_URI[sha256sum] = "502d80c557ecbee285eb99fe8703eeb667bcfe067577467b50efe3420d1b2289"
+
+inherit autotools systemd pkgconfig perlnative
+
+EXTRA_OECONF = " \
+    --disable-libusb \
+    --enable-usbdropdir=${libdir}/pcsc/drivers \
+"
+
+S = "${WORKDIR}/pcsc-lite-${PV}"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)} udev"
+PACKAGECONFIG:class-native ??= ""
+
+PACKAGECONFIG[systemd]  = ",--disable-libsystemd,systemd,"
+PACKAGECONFIG[udev] = "--enable-libudev,--disable-libudev,udev"
+
+PACKAGES = "${PN} ${PN}-dbg ${PN}-dev ${PN}-lib ${PN}-doc ${PN}-spy ${PN}-spy-dev"
+
+RRECOMMENDS:${PN} = "ccid"
+RRECOMMENDS:${PN}:class-native = ""
+RPROVIDES:${PN}:class-native += "pcsc-lite-lib-native"
+
+FILES:${PN} = "${sbindir}/pcscd"
+FILES:${PN}-lib = "${libdir}/libpcsclite*${SOLIBS}"
+FILES:${PN}-dev = "${includedir} \
+                   ${libdir}/pkgconfig \
+                   ${libdir}/libpcsclite.la \
+                   ${libdir}/libpcsclite.so"
+
+FILES:${PN}-spy = "${bindir}/pcsc-spy \
+                   ${libdir}/libpcscspy*${SOLIBS}"
+FILES:${PN}-spy-dev = "${libdir}/libpcscspy.la \
+                       ${libdir}/libpcscspy.so "
+
+RPROVIDES:${PN} += "${PN}-systemd"
+RREPLACES:${PN} += "${PN}-systemd"
+RCONFLICTS:${PN} += "${PN}-systemd"
+SYSTEMD_SERVICE:${PN} = "pcscd.socket"
+RDEPENDS:${PN}-spy += "python3-core"
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/poco/poco/0001-fix-unbundled-PCRE2-dependency.patch b/meta-openembedded/meta-oe/recipes-support/poco/poco/0001-fix-unbundled-PCRE2-dependency.patch
deleted file mode 100644
index 1a9d23c..0000000
--- a/meta-openembedded/meta-oe/recipes-support/poco/poco/0001-fix-unbundled-PCRE2-dependency.patch
+++ /dev/null
@@ -1,25 +0,0 @@
-From f049898c8bf058ed187de8e5fab20abeaab1f3b6 Mon Sep 17 00:00:00 2001
-From: Alex Fabijanic <alex@pocoproject.org>
-Date: Sat, 9 Jul 2022 19:13:04 +0200
-Subject: [PATCH] fix(cmake): PocoFoundationConfig.cmake should now check for
- PCRE2 #3677
-
-Upstream-Status: Backport [https://github.com/pocoproject/poco/issues/3677]
-
----
- Foundation/cmake/PocoFoundationConfig.cmake | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/Foundation/cmake/PocoFoundationConfig.cmake b/Foundation/cmake/PocoFoundationConfig.cmake
-index 46c2d3fc00..82c5788940 100644
---- a/Foundation/cmake/PocoFoundationConfig.cmake
-+++ b/Foundation/cmake/PocoFoundationConfig.cmake
-@@ -2,7 +2,7 @@ if(@POCO_UNBUNDLED@)
- 	include(CMakeFindDependencyMacro)
- 	list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}")
- 	find_dependency(ZLIB REQUIRED)
--	find_dependency(PCRE REQUIRED)
-+	find_dependency(PCRE2 REQUIRED)
- endif()
- 
- include("${CMAKE_CURRENT_LIST_DIR}/PocoFoundationTargets.cmake")
diff --git a/meta-openembedded/meta-oe/recipes-support/poco/poco/0002-remove-providers-unitialization.patch b/meta-openembedded/meta-oe/recipes-support/poco/poco/0002-remove-providers-unitialization.patch
deleted file mode 100644
index 7d24b79..0000000
--- a/meta-openembedded/meta-oe/recipes-support/poco/poco/0002-remove-providers-unitialization.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From c976c32e5249cb8a2433e7abfa095c1fe8dc4f8e Mon Sep 17 00:00:00 2001
-From: Alex Fabijanic <alex@pocoproject.org>
-Date: Wed, 13 Jul 2022 12:53:52 +0200
-Subject: [PATCH] fix(OpenSSLInitializer): remove providers unitialization
- #3562 #3567
-
-Upstream-Status: Backport [https://github.com/pocoproject/poco/issues/3562]
-
----
- Crypto/src/OpenSSLInitializer.cpp | 12 ------------
- 1 file changed, 12 deletions(-)
-
-diff --git a/Crypto/src/OpenSSLInitializer.cpp b/Crypto/src/OpenSSLInitializer.cpp
-index 4678d22299..c537c3f9c2 100644
---- a/Crypto/src/OpenSSLInitializer.cpp
-+++ b/Crypto/src/OpenSSLInitializer.cpp
-@@ -157,18 +157,6 @@ void OpenSSLInitializer::uninitialize()
- #endif
- 		delete [] _mutexes;
- #endif
--
--#if OPENSSL_VERSION_NUMBER >= 0x30000000L
--		OSSL_PROVIDER* provider = nullptr;
--		if ((provider = _defaultProvider.exchange(nullptr)))
--		{
--			OSSL_PROVIDER_unload(provider);
--		}
--		if ((provider = _legacyProvider.exchange(nullptr)))
--		{
--			OSSL_PROVIDER_unload(provider);
--		}
--#endif
- 	}
- }
- 
diff --git a/meta-openembedded/meta-oe/recipes-support/poco/poco_1.12.0.bb b/meta-openembedded/meta-oe/recipes-support/poco/poco_1.12.0.bb
deleted file mode 100644
index c3b52c8..0000000
--- a/meta-openembedded/meta-oe/recipes-support/poco/poco_1.12.0.bb
+++ /dev/null
@@ -1,109 +0,0 @@
-SUMMARY = "Modern, powerful open source cross-platform C++ class libraries"
-DESCRIPTION = "Modern, powerful open source C++ class libraries and frameworks for building network- and internet-based applications that run on desktop, server, mobile and embedded systems."
-HOMEPAGE = "http://pocoproject.org/"
-SECTION = "libs"
-LICENSE = "BSL-1.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=4267f48fc738f50380cbeeb76f95cebc"
-
-# These dependencies are required by Foundation
-DEPENDS = "libpcre2 zlib"
-
-SRC_URI = " \
-    git://github.com/pocoproject/poco.git;branch=master;protocol=https \
-    file://0001-fix-unbundled-PCRE2-dependency.patch \
-    file://0002-remove-providers-unitialization.patch \
-    file://run-ptest \
-   "
-SRCREV = "4ba8595ed83841d1fa240716b5652adc3772c36b"
-
-UPSTREAM_CHECK_GITTAGREGEX = "poco-(?P<pver>\d+(\.\d+)+)"
-
-S = "${WORKDIR}/git"
-
-inherit cmake ptest
-
-# By default the most commonly used poco components are built
-# Foundation is built anyway and doesn't need to be listed explicitly
-# these don't have dependencies outside oe-core
-PACKAGECONFIG ??= "XML JSON MongoDB PDF Util Net NetSSL Crypto JWT Data DataSQLite Zip Encodings Redis Prometheus"
-
-PACKAGECONFIG[XML] = "-DENABLE_XML=ON,-DENABLE_XML=OFF,expat"
-PACKAGECONFIG[JSON] = "-DENABLE_JSON=ON,-DENABLE_JSON=OFF"
-PACKAGECONFIG[MongoDB] = "-DENABLE_MONGODB=ON,-DENABLE_MONGODB=OFF"
-PACKAGECONFIG[PDF] = "-DENABLE_PDF=ON,-DENABLE_PDF=OFF,zlib"
-PACKAGECONFIG[Util] = "-DENABLE_UTIL=ON,-DENABLE_UTIL=OFF"
-PACKAGECONFIG[Net] = "-DENABLE_NET=ON,-DENABLE_NET=OFF"
-PACKAGECONFIG[NetSSL] = "-DENABLE_NETSSL=ON,-DENABLE_NETSSL=OFF,openssl"
-PACKAGECONFIG[Crypto] = "-DENABLE_CRYPTO=ON,-DENABLE_CRYPTO=OFF,openssl"
-PACKAGECONFIG[JWT] = "-DENABLE_JWT=ON,-DENABLE_JWT=OFF,openssl"
-PACKAGECONFIG[Data] = "-DENABLE_DATA=ON,-DENABLE_DATA=OFF"
-PACKAGECONFIG[DataSQLite] = "-DENABLE_DATA_SQLITE=ON -DSQLITE3_LIBRARY:STRING=sqlite3,-DENABLE_DATA_SQLITE=OFF,sqlite3"
-PACKAGECONFIG[Zip] = "-DENABLE_ZIP=ON,-DENABLE_ZIP=OFF"
-PACKAGECONFIG[Encodings] = "-DENABLE_ENCODINGS=ON,-DENABLE_ENCODINGS=OFF"
-PACKAGECONFIG[Redis] = "-DENABLE_REDIS=ON,-DENABLE_REDIS=OFF"
-PACKAGECONFIG[Prometheus] = "-DENABLE_PROMETHEUS=ON,-DENABLE_PROMETHEUS=OFF"
-
-# Additional components not build by default,
-# they might have dependencies not included in oe-core
-# or they don't work on all architectures
-PACKAGECONFIG[mod_poco] = "-DENABLE_APACHECONNECTOR=ON,-DENABLE_APACHECONNECTOR=OFF,apr apache2"
-PACKAGECONFIG[CppParser] = "-DENABLE_CPPPARSER=ON,-DENABLE_CPPPARSER=OFF"
-PACKAGECONFIG[DataMySQL] = "-DENABLE_DATA_MYSQL=ON -DMYSQL_LIB:STRING=mysqlclient_r,-DENABLE_DATA_MYSQL=OFF,mariadb"
-PACKAGECONFIG[DataODBC] = "-DENABLE_DATA_ODBC=ON,-DENABLE_DATA_ODBC=OFF,libiodbc"
-PACKAGECONFIG[ActiveRecord] = "-DENABLE_ACTIVERECORD=ON,-DENABLE_ACTIVERECORD=OFF"
-PACKAGECONFIG[ActiveRecordCompiler] = "-DENABLE_ACTIVERECORD_COMPILER=ON,-DENABLE_ACTIVERECORD_COMPILER=OFF"
-PACKAGECONFIG[PageCompiler] = "-DENABLE_PAGECOMPILER=ON,-DENABLE_PAGECOMPILER=OFF"
-PACKAGECONFIG[PageCompilerFile2Page] = "-DENABLE_PAGECOMPILER_FILE2PAGE=ON,-DENABLE_PAGECOMPILER_FILE2PAGE=OFF"
-PACKAGECONFIG[SevenZip] = "-DENABLE_SEVENZIP=ON,-DENABLE_SEVENZIP=OFF"
-
-EXTRA_OECMAKE = "-DCMAKE_BUILD_TYPE=RelWithDebInfo -DPOCO_UNBUNDLED=ON \
-                  -DLIB_SUFFIX=${@d.getVar('baselib').replace('lib', '')} \
-                 ${@bb.utils.contains('PTEST_ENABLED', '1', '-DENABLE_TESTS=ON ', '', d)}"
-
-# For the native build we want to use the bundled version
-EXTRA_OECMAKE:append:class-native = " -DPOCO_UNBUNDLED=OFF"
-
-# do not use rpath
-EXTRA_OECMAKE:append = " -DCMAKE_SKIP_RPATH=ON"
-
-LDFLAGS:append:riscv32 = "${@bb.utils.contains('PACKAGECONFIG', 'Prometheus', ' -Wl,--no-as-needed -latomic -Wl,--as-needed', '', d)}"
-LDFLAGS:append:mips = "${@bb.utils.contains('PACKAGECONFIG', 'Prometheus', ' -Wl,--no-as-needed -latomic -Wl,--as-needed', '', d)}"
-
-python populate_packages:prepend () {
-    poco_libdir = d.expand('${libdir}')
-    pn = d.getVar("PN")
-    packages = []
-    testrunners = []
-
-    def hook(f, pkg, file_regex, output_pattern, modulename):
-        packages.append(pkg)
-        testrunners.append(modulename)
-
-    do_split_packages(d, poco_libdir, r'^libPoco(.*)\.so\..*$',
-                    'poco-%s', 'Poco %s component', extra_depends='', prepend=True, hook=hook)
-
-    d.setVar("RRECOMMENDS:%s" % pn, " ".join(packages))
-    d.setVar("POCO_TESTRUNNERS", "\n".join(testrunners))
-}
-
-do_install_ptest () {
-       cp -rf ${B}/bin/ ${D}${PTEST_PATH}
-       cp -f ${B}/lib/libCppUnit.so* ${D}${libdir}
-       cp -rf ${B}/*/testsuite/data ${D}${PTEST_PATH}/bin/
-       find "${D}${PTEST_PATH}" -executable -exec chrpath -d {} \;
-       echo "${POCO_TESTRUNNERS}" > "${D}${PTEST_PATH}/testrunners"
-}
-
-PACKAGES_DYNAMIC = "poco-.*"
-
-# "poco" is a metapackage which pulls in all Poco components
-ALLOW_EMPTY:${PN} = "1"
-
-# cppunit is only built if tests are enabled
-PACKAGES =+ "${PN}-cppunit"
-FILES:${PN}-cppunit += "${libdir}/libCppUnit.so*"
-ALLOW_EMPTY:${PN}-cppunit = "1"
-
-RDEPENDS:${PN}-ptest += "${PN}-cppunit"
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/poco/poco_1.12.2.bb b/meta-openembedded/meta-oe/recipes-support/poco/poco_1.12.2.bb
new file mode 100644
index 0000000..5ecc5b8
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/poco/poco_1.12.2.bb
@@ -0,0 +1,106 @@
+SUMMARY = "Modern, powerful open source cross-platform C++ class libraries"
+DESCRIPTION = "Modern, powerful open source C++ class libraries and frameworks for building network- and internet-based applications that run on desktop, server, mobile and embedded systems."
+HOMEPAGE = "http://pocoproject.org/"
+SECTION = "libs"
+LICENSE = "BSL-1.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=4267f48fc738f50380cbeeb76f95cebc"
+
+# These dependencies are required by Foundation
+DEPENDS = "libpcre2 zlib"
+
+SRC_URI = "git://github.com/pocoproject/poco.git;branch=master;protocol=https \
+           file://run-ptest \
+           "
+SRCREV = "be19dc4a2f30eb97cc9bdd7551460db11cc27353"
+
+UPSTREAM_CHECK_GITTAGREGEX = "poco-(?P<pver>\d+(\.\d+)+)"
+
+S = "${WORKDIR}/git"
+
+inherit cmake ptest
+
+# By default the most commonly used poco components are built
+# Foundation is built anyway and doesn't need to be listed explicitly
+# these don't have dependencies outside oe-core
+PACKAGECONFIG ??= "XML JSON MongoDB PDF Util Net NetSSL Crypto JWT Data DataSQLite Zip Encodings Redis Prometheus"
+
+PACKAGECONFIG[XML] = "-DENABLE_XML=ON,-DENABLE_XML=OFF,expat"
+PACKAGECONFIG[JSON] = "-DENABLE_JSON=ON,-DENABLE_JSON=OFF"
+PACKAGECONFIG[MongoDB] = "-DENABLE_MONGODB=ON,-DENABLE_MONGODB=OFF"
+PACKAGECONFIG[PDF] = "-DENABLE_PDF=ON,-DENABLE_PDF=OFF,zlib"
+PACKAGECONFIG[Util] = "-DENABLE_UTIL=ON,-DENABLE_UTIL=OFF"
+PACKAGECONFIG[Net] = "-DENABLE_NET=ON,-DENABLE_NET=OFF"
+PACKAGECONFIG[NetSSL] = "-DENABLE_NETSSL=ON,-DENABLE_NETSSL=OFF,openssl"
+PACKAGECONFIG[Crypto] = "-DENABLE_CRYPTO=ON,-DENABLE_CRYPTO=OFF,openssl"
+PACKAGECONFIG[JWT] = "-DENABLE_JWT=ON,-DENABLE_JWT=OFF,openssl"
+PACKAGECONFIG[Data] = "-DENABLE_DATA=ON,-DENABLE_DATA=OFF"
+PACKAGECONFIG[DataSQLite] = "-DENABLE_DATA_SQLITE=ON -DSQLITE3_LIBRARY:STRING=sqlite3,-DENABLE_DATA_SQLITE=OFF,sqlite3"
+PACKAGECONFIG[Zip] = "-DENABLE_ZIP=ON,-DENABLE_ZIP=OFF"
+PACKAGECONFIG[Encodings] = "-DENABLE_ENCODINGS=ON,-DENABLE_ENCODINGS=OFF"
+PACKAGECONFIG[Redis] = "-DENABLE_REDIS=ON,-DENABLE_REDIS=OFF"
+PACKAGECONFIG[Prometheus] = "-DENABLE_PROMETHEUS=ON,-DENABLE_PROMETHEUS=OFF"
+
+# Additional components not build by default,
+# they might have dependencies not included in oe-core
+# or they don't work on all architectures
+PACKAGECONFIG[mod_poco] = "-DENABLE_APACHECONNECTOR=ON,-DENABLE_APACHECONNECTOR=OFF,apr apache2"
+PACKAGECONFIG[CppParser] = "-DENABLE_CPPPARSER=ON,-DENABLE_CPPPARSER=OFF"
+PACKAGECONFIG[DataMySQL] = "-DENABLE_DATA_MYSQL=ON -DMYSQL_LIB:STRING=mysqlclient_r,-DENABLE_DATA_MYSQL=OFF,mariadb"
+PACKAGECONFIG[DataODBC] = "-DENABLE_DATA_ODBC=ON,-DENABLE_DATA_ODBC=OFF,libiodbc"
+PACKAGECONFIG[ActiveRecord] = "-DENABLE_ACTIVERECORD=ON,-DENABLE_ACTIVERECORD=OFF"
+PACKAGECONFIG[ActiveRecordCompiler] = "-DENABLE_ACTIVERECORD_COMPILER=ON,-DENABLE_ACTIVERECORD_COMPILER=OFF"
+PACKAGECONFIG[PageCompiler] = "-DENABLE_PAGECOMPILER=ON,-DENABLE_PAGECOMPILER=OFF"
+PACKAGECONFIG[PageCompilerFile2Page] = "-DENABLE_PAGECOMPILER_FILE2PAGE=ON,-DENABLE_PAGECOMPILER_FILE2PAGE=OFF"
+PACKAGECONFIG[SevenZip] = "-DENABLE_SEVENZIP=ON,-DENABLE_SEVENZIP=OFF"
+
+EXTRA_OECMAKE = "-DCMAKE_BUILD_TYPE=RelWithDebInfo -DPOCO_UNBUNDLED=ON \
+                  -DLIB_SUFFIX=${@d.getVar('baselib').replace('lib', '')} \
+                 ${@bb.utils.contains('PTEST_ENABLED', '1', '-DENABLE_TESTS=ON ', '', d)}"
+
+# For the native build we want to use the bundled version
+EXTRA_OECMAKE:append:class-native = " -DPOCO_UNBUNDLED=OFF"
+
+# do not use rpath
+EXTRA_OECMAKE:append = " -DCMAKE_SKIP_RPATH=ON"
+
+LDFLAGS:append:riscv32 = "${@bb.utils.contains('PACKAGECONFIG', 'Prometheus', ' -Wl,--no-as-needed -latomic -Wl,--as-needed', '', d)}"
+LDFLAGS:append:mips = "${@bb.utils.contains('PACKAGECONFIG', 'Prometheus', ' -Wl,--no-as-needed -latomic -Wl,--as-needed', '', d)}"
+
+python populate_packages:prepend () {
+    poco_libdir = d.expand('${libdir}')
+    pn = d.getVar("PN")
+    packages = []
+    testrunners = []
+
+    def hook(f, pkg, file_regex, output_pattern, modulename):
+        packages.append(pkg)
+        testrunners.append(modulename)
+
+    do_split_packages(d, poco_libdir, r'^libPoco(.*)\.so\..*$',
+                    'poco-%s', 'Poco %s component', extra_depends='', prepend=True, hook=hook)
+
+    d.setVar("RRECOMMENDS:%s" % pn, " ".join(packages))
+    d.setVar("POCO_TESTRUNNERS", "\n".join(testrunners))
+}
+
+do_install_ptest () {
+       cp -rf ${B}/bin/ ${D}${PTEST_PATH}
+       cp -f ${B}/lib/libCppUnit.so* ${D}${libdir}
+       cp -rf ${B}/*/testsuite/data ${D}${PTEST_PATH}/bin/
+       find "${D}${PTEST_PATH}" -executable -exec chrpath -d {} \;
+       echo "${POCO_TESTRUNNERS}" > "${D}${PTEST_PATH}/testrunners"
+}
+
+PACKAGES_DYNAMIC = "poco-.*"
+
+# "poco" is a metapackage which pulls in all Poco components
+ALLOW_EMPTY:${PN} = "1"
+
+# cppunit is only built if tests are enabled
+PACKAGES =+ "${PN}-cppunit"
+FILES:${PN}-cppunit += "${libdir}/libCppUnit.so*"
+ALLOW_EMPTY:${PN}-cppunit = "1"
+
+RDEPENDS:${PN}-ptest += "${PN}-cppunit"
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/poppler/poppler_22.07.0.bb b/meta-openembedded/meta-oe/recipes-support/poppler/poppler_22.07.0.bb
deleted file mode 100644
index 33723a5..0000000
--- a/meta-openembedded/meta-oe/recipes-support/poppler/poppler_22.07.0.bb
+++ /dev/null
@@ -1,55 +0,0 @@
-SUMMARY = "Poppler is a PDF rendering library based on the xpdf-3.0 code base"
-HOMEPAGE = "https://poppler.freedesktop.org/"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-
-SRC_URI = "http://poppler.freedesktop.org/${BP}.tar.xz \
-           file://0001-Do-not-overwrite-all-our-build-flags.patch \
-           file://basename-include.patch \
-           "
-SRC_URI[sha256sum] = "420230c5c43782e2151259b3e523e632f4861342aad70e7e20b8773d9eaf3428"
-
-DEPENDS = "fontconfig zlib cairo lcms glib-2.0"
-
-inherit cmake pkgconfig gobject-introspection
-
-PACKAGECONFIG ??= "jpeg openjpeg png tiff nss splash"
-PACKAGECONFIG[jpeg] = "-DWITH_JPEG=ON -DENABLE_DCTDECODER=libjpeg,-DWITH_JPEG=OFF -DENABLE_DCTDECODER=none,jpeg"
-PACKAGECONFIG[png] = "-DWITH_PNG=ON,-DWITH_PNG=OFF,libpng"
-PACKAGECONFIG[tiff] = "-DWITH_TIFF=ON,-DWITH_TIFF=OFF,tiff"
-PACKAGECONFIG[curl] = "-DENABLE_LIBCURL=ON,-DENABLE_LIBCURL=OFF,curl"
-PACKAGECONFIG[openjpeg] = "-DENABLE_LIBOPENJPEG=openjpeg2,-DENABLE_LIBOPENJPEG=none,openjpeg"
-PACKAGECONFIG[qt5] = "-DENABLE_QT5=ON,-DENABLE_QT5=OFF,qtbase qttools-native"
-PACKAGECONFIG[nss] = "-DWITH_NSS3=ON,-DWITH_NSS3=OFF,nss"
-PACKAGECONFIG[splash] = "-DENABLE_SPLASH=ON -DENABLE_BOOST=ON,-DENABLE_SPLASH=OFF -DENABLE_BOOST=OFF,boost"
-
-# surprise - did not expect this to work :)
-inherit ${@bb.utils.contains('PACKAGECONFIG', 'qt5', 'cmake_qt5', '', d)}
-
-SECURITY_CFLAGS = "${SECURITY_NO_PIE_CFLAGS}"
-
-EXTRA_OECMAKE += " \
-    -DENABLE_CMS=lcms2 \
-    -DENABLE_UNSTABLE_API_ABI_HEADERS=ON \
-    -DBUILD_GTK_TESTS=OFF \
-    -DENABLE_ZLIB=ON \
-    -DRUN_GPERF_IF_PRESENT=OFF \
-    -DCMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES:PATH='${STAGING_INCDIR}' \
-    ${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DENABLE_GOBJECT_INTROSPECTION=ON', '-DENABLE_GOBJECT_INTROSPECTION=OFF', d)} \
-"
-EXTRA_OECMAKE:append:class-native = " -DENABLE_CPP=OFF"
-
-do_configure:append() {
-    # poppler macro uses pkg-config to check for g-ir runtimes. Something
-    # makes them point to /usr/bin. Align them to sysroot - that's where the
-    # gir-wrappers are:
-    sed -i 's: ${bindir}/g-ir: ${STAGING_BINDIR}/g-ir:' ${B}/build.ninja
-}
-
-PACKAGES =+ "libpoppler libpoppler-glib"
-FILES:libpoppler = "${libdir}/libpoppler.so.*"
-FILES:libpoppler-glib = "${libdir}/libpoppler-glib.so.*"
-
-RDEPENDS:libpoppler = "poppler-data"
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/poppler/poppler_22.08.0.bb b/meta-openembedded/meta-oe/recipes-support/poppler/poppler_22.08.0.bb
new file mode 100644
index 0000000..c75bf79
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/poppler/poppler_22.08.0.bb
@@ -0,0 +1,55 @@
+SUMMARY = "Poppler is a PDF rendering library based on the xpdf-3.0 code base"
+HOMEPAGE = "https://poppler.freedesktop.org/"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
+
+SRC_URI = "http://poppler.freedesktop.org/${BP}.tar.xz \
+           file://0001-Do-not-overwrite-all-our-build-flags.patch \
+           file://basename-include.patch \
+           "
+SRC_URI[sha256sum] = "b493328721402f25cb7523f9cdc2f7d7c59f45ad999bde75c63c90604db0f20b"
+
+DEPENDS = "fontconfig zlib cairo lcms glib-2.0"
+
+inherit cmake pkgconfig gobject-introspection
+
+PACKAGECONFIG ??= "jpeg openjpeg png tiff nss splash"
+PACKAGECONFIG[jpeg] = "-DWITH_JPEG=ON -DENABLE_DCTDECODER=libjpeg,-DWITH_JPEG=OFF -DENABLE_DCTDECODER=none,jpeg"
+PACKAGECONFIG[png] = "-DWITH_PNG=ON,-DWITH_PNG=OFF,libpng"
+PACKAGECONFIG[tiff] = "-DWITH_TIFF=ON,-DWITH_TIFF=OFF,tiff"
+PACKAGECONFIG[curl] = "-DENABLE_LIBCURL=ON,-DENABLE_LIBCURL=OFF,curl"
+PACKAGECONFIG[openjpeg] = "-DENABLE_LIBOPENJPEG=openjpeg2,-DENABLE_LIBOPENJPEG=none,openjpeg"
+PACKAGECONFIG[qt5] = "-DENABLE_QT5=ON,-DENABLE_QT5=OFF,qtbase qttools-native"
+PACKAGECONFIG[nss] = "-DWITH_NSS3=ON,-DWITH_NSS3=OFF,nss"
+PACKAGECONFIG[splash] = "-DENABLE_SPLASH=ON -DENABLE_BOOST=ON,-DENABLE_SPLASH=OFF -DENABLE_BOOST=OFF,boost"
+
+# surprise - did not expect this to work :)
+inherit ${@bb.utils.contains('PACKAGECONFIG', 'qt5', 'cmake_qt5', '', d)}
+
+SECURITY_CFLAGS = "${SECURITY_NO_PIE_CFLAGS}"
+
+EXTRA_OECMAKE += " \
+    -DENABLE_CMS=lcms2 \
+    -DENABLE_UNSTABLE_API_ABI_HEADERS=ON \
+    -DBUILD_GTK_TESTS=OFF \
+    -DENABLE_ZLIB=ON \
+    -DRUN_GPERF_IF_PRESENT=OFF \
+    -DCMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES:PATH='${STAGING_INCDIR}' \
+    ${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DENABLE_GOBJECT_INTROSPECTION=ON', '-DENABLE_GOBJECT_INTROSPECTION=OFF', d)} \
+"
+EXTRA_OECMAKE:append:class-native = " -DENABLE_CPP=OFF"
+
+do_configure:append() {
+    # poppler macro uses pkg-config to check for g-ir runtimes. Something
+    # makes them point to /usr/bin. Align them to sysroot - that's where the
+    # gir-wrappers are:
+    sed -i 's: ${bindir}/g-ir: ${STAGING_BINDIR}/g-ir:' ${B}/build.ninja
+}
+
+PACKAGES =+ "libpoppler libpoppler-glib"
+FILES:libpoppler = "${libdir}/libpoppler.so.*"
+FILES:libpoppler-glib = "${libdir}/libpoppler-glib.so.*"
+
+RDEPENDS:libpoppler = "poppler-data"
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-oe/recipes-support/satyr/files/0001-py_base_stacktrace.c-include-glib.h.patch b/meta-openembedded/meta-oe/recipes-support/satyr/files/0001-py_base_stacktrace.c-include-glib.h.patch
new file mode 100644
index 0000000..fe3b1c1
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/satyr/files/0001-py_base_stacktrace.c-include-glib.h.patch
@@ -0,0 +1,29 @@
+From 3b84fe4375292d00ebb605a5917e66129fe5f0cb Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 24 Aug 2022 23:26:46 -0700
+Subject: [PATCH] py_base_stacktrace.c: include glib.h
+
+This file has references to g_free from glib-2.0 which needs this header
+
+Upstream-Status: Submitted [https://github.com/abrt/satyr/pull/333]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ python/py_base_stacktrace.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/python/py_base_stacktrace.c b/python/py_base_stacktrace.c
+index b9bd16c..301db84 100644
+--- a/python/py_base_stacktrace.c
++++ b/python/py_base_stacktrace.c
+@@ -17,7 +17,7 @@
+     with this program; if not, write to the Free Software Foundation, Inc.,
+     51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+-
++#include <glib.h>
+ #include "py_common.h"
+ #include "py_base_thread.h"
+ #include "py_base_stacktrace.h"
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-oe/recipes-support/satyr/satyr_0.39.bb b/meta-openembedded/meta-oe/recipes-support/satyr/satyr_0.39.bb
index 32f27f5..be1ef3f 100644
--- a/meta-openembedded/meta-oe/recipes-support/satyr/satyr_0.39.bb
+++ b/meta-openembedded/meta-oe/recipes-support/satyr/satyr_0.39.bb
@@ -9,6 +9,7 @@
 
 SRC_URI = "git://github.com/abrt/satyr.git;branch=master;protocol=https \
            file://0002-fix-compile-failure-against-musl-C-library.patch \
+           file://0001-py_base_stacktrace.c-include-glib.h.patch \
 "
 SRCREV = "f8a0dbfe7fcc6e44f03d66ca5c81363aea318380"
 S = "${WORKDIR}/git"
diff --git a/meta-openembedded/meta-oe/recipes-support/smarty/smarty_4.1.1.bb b/meta-openembedded/meta-oe/recipes-support/smarty/smarty_4.1.1.bb
deleted file mode 100644
index df441e8..0000000
--- a/meta-openembedded/meta-oe/recipes-support/smarty/smarty_4.1.1.bb
+++ /dev/null
@@ -1,26 +0,0 @@
-DESCRIPTION = "the compiling PHP template engine"
-SECTION = "console/network"
-HOMEPAGE = "https://www.smarty.net/"
-
-LICENSE = "GPL-3.0-only"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=2c0f216b2120ffc367e20f2b56df51b3"
-
-DEPENDS += "php"
-
-SRC_URI = "git://github.com/smarty-php/smarty.git;protocol=https;branch=master"
-
-SRCREV = "71036be8be02bf93735c47b0b745f722efbc729f"
-
-S = "${WORKDIR}/git"
-
-do_install() {
-        install -d ${D}${datadir}/php/smarty3/libs/
-        install -m 0644 ${S}/libs/*.php ${D}${datadir}/php/smarty3/libs/
-
-        install -d ${D}${datadir}/php/smarty3/libs/plugins
-        install -m 0644 ${S}/libs/plugins/*.php ${D}${datadir}/php/smarty3/libs/plugins/
-
-        install -d ${D}${datadir}/php/smarty3/libs/sysplugins
-        install -m 0644 ${S}/libs/sysplugins/*.php ${D}${datadir}/php/smarty3/libs/sysplugins/
-}
-FILES:${PN} = "${datadir}/php/smarty3/"
diff --git a/meta-openembedded/meta-oe/recipes-support/smarty/smarty_4.2.0.bb b/meta-openembedded/meta-oe/recipes-support/smarty/smarty_4.2.0.bb
new file mode 100644
index 0000000..2cd96a2
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/smarty/smarty_4.2.0.bb
@@ -0,0 +1,26 @@
+DESCRIPTION = "the compiling PHP template engine"
+SECTION = "console/network"
+HOMEPAGE = "https://www.smarty.net/"
+
+LICENSE = "GPL-3.0-only"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=2c0f216b2120ffc367e20f2b56df51b3"
+
+DEPENDS += "php"
+
+SRC_URI = "git://github.com/smarty-php/smarty.git;protocol=https;branch=master"
+
+SRCREV = "97aeb14c6fc2fb733938809926e2f9d6c581a70d"
+
+S = "${WORKDIR}/git"
+
+do_install() {
+        install -d ${D}${datadir}/php/smarty3/libs/
+        install -m 0644 ${S}/libs/*.php ${D}${datadir}/php/smarty3/libs/
+
+        install -d ${D}${datadir}/php/smarty3/libs/plugins
+        install -m 0644 ${S}/libs/plugins/*.php ${D}${datadir}/php/smarty3/libs/plugins/
+
+        install -d ${D}${datadir}/php/smarty3/libs/sysplugins
+        install -m 0644 ${S}/libs/sysplugins/*.php ${D}${datadir}/php/smarty3/libs/sysplugins/
+}
+FILES:${PN} = "${datadir}/php/smarty3/"
diff --git a/meta-openembedded/meta-oe/recipes-support/spdlog/files/0001-Enable-use-of-external-fmt-library.patch b/meta-openembedded/meta-oe/recipes-support/spdlog/files/0001-Enable-use-of-external-fmt-library.patch
deleted file mode 100644
index 98c342f..0000000
--- a/meta-openembedded/meta-oe/recipes-support/spdlog/files/0001-Enable-use-of-external-fmt-library.patch
+++ /dev/null
@@ -1,68 +0,0 @@
-Author: Nilesh Patra <npatra974@gmail.com>
-Description: Use external libfmt by default
-Last-Changed: Sun, May, 14 2020
-Forwarded: not-needed
---- a/CMakeLists.txt
-+++ b/CMakeLists.txt
-@@ -87,7 +87,7 @@ option(SPDLOG_BUILD_WARNINGS "Enable com
- 
- # install options
- option(SPDLOG_INSTALL "Generate the install target" ${SPDLOG_MASTER_PROJECT})
--option(SPDLOG_FMT_EXTERNAL "Use external fmt library instead of bundled" OFF)
-+option(SPDLOG_FMT_EXTERNAL "Use external fmt library instead of bundled" ON)
- option(SPDLOG_FMT_EXTERNAL_HO "Use external fmt header-only library instead of bundled" OFF)
- option(SPDLOG_NO_EXCEPTIONS "Compile with -fno-exceptions. Call abort() on any spdlog exceptions" OFF)
- 
---- a/include/spdlog/tweakme.h
-+++ b/include/spdlog/tweakme.h
-@@ -71,7 +71,7 @@
- // In this case spdlog will try to include <fmt/format.h> so set your -I flag
- // accordingly.
- //
--// #define SPDLOG_FMT_EXTERNAL
-+#define SPDLOG_FMT_EXTERNAL 1
- ///////////////////////////////////////////////////////////////////////////////
- 
- ///////////////////////////////////////////////////////////////////////////////
---- a/include/spdlog/fmt/bin_to_hex.h
-+++ b/include/spdlog/fmt/bin_to_hex.h
-@@ -5,6 +5,7 @@
- 
- #pragma once
- 
-+#include <spdlog/tweakme.h>
- #include <cctype>
- #include <spdlog/common.h>
- 
---- a/include/spdlog/fmt/fmt.h
-+++ b/include/spdlog/fmt/fmt.h
-@@ -4,7 +4,7 @@
- //
- 
- #pragma once
--
-+#include <spdlog/tweakme.h>
- //
- // Include a bundled header-only copy of fmtlib or an external one.
- // By default spdlog include its own copy.
---- a/include/spdlog/fmt/ostr.h
-+++ b/include/spdlog/fmt/ostr.h
-@@ -7,7 +7,7 @@
- //
- // include bundled or external copy of fmtlib's ostream support
- //
--
-+#include <spdlog/tweakme.h>
- #if !defined(SPDLOG_FMT_EXTERNAL)
- #    ifdef SPDLOG_HEADER_ONLY
- #        ifndef FMT_HEADER_ONLY
---- a/src/fmt.cpp
-+++ b/src/fmt.cpp
-@@ -6,6 +6,7 @@
- #    error Please define SPDLOG_COMPILED_LIB to compile this file.
- #endif
- 
-+#include <spdlog/tweakme.h>
- #if !defined(SPDLOG_FMT_EXTERNAL)
- #    include <spdlog/fmt/bundled/format-inl.h>
- 
diff --git a/meta-openembedded/meta-oe/recipes-support/spdlog/spdlog_1.10.0.bb b/meta-openembedded/meta-oe/recipes-support/spdlog/spdlog_1.10.0.bb
new file mode 100644
index 0000000..a7d9263
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/spdlog/spdlog_1.10.0.bb
@@ -0,0 +1,17 @@
+DESCRIPTION = "Very fast, header only, C++ logging library."
+HOMEPAGE = "https://github.com/gabime/spdlog/wiki"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://${COREBASE}/meta/files/common-licenses/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
+
+SRCREV = "76fb40d95455f249bd70824ecfcae7a8f0930fa3"
+SRC_URI = "git://github.com/gabime/spdlog.git;protocol=https;branch=v1.x"
+
+DEPENDS += "fmt"
+
+S = "${WORKDIR}/git"
+
+BBCLASSEXTEND = "native"
+# no need to build example & tests & benchmarks on pure yocto
+EXTRA_OECMAKE += "-DSPDLOG_INSTALL=on -DSPDLOG_BUILD_SHARED=on -DSPDLOG_BUILD_EXAMPLE=off -DSPDLOG_BUILD_TESTS=off -DSPDLOG_BUILD_BENCH=off -DSPDLOG_FMT_EXTERNAL=on"
+
+inherit cmake
diff --git a/meta-openembedded/meta-oe/recipes-support/spdlog/spdlog_1.9.2.bb b/meta-openembedded/meta-oe/recipes-support/spdlog/spdlog_1.9.2.bb
deleted file mode 100644
index d377241..0000000
--- a/meta-openembedded/meta-oe/recipes-support/spdlog/spdlog_1.9.2.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-DESCRIPTION = "Very fast, header only, C++ logging library."
-HOMEPAGE = "https://github.com/gabime/spdlog/wiki"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://${COREBASE}/meta/files/common-licenses/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
-
-SRCREV = "eb3220622e73a4889eee355ffa37972b3cac3df5"
-SRC_URI = "git://github.com/gabime/spdlog.git;protocol=https;branch=v1.x; \
-           file://0001-Enable-use-of-external-fmt-library.patch"
-
-DEPENDS += "fmt"
-
-S = "${WORKDIR}/git"
-
-BBCLASSEXTEND = "native"
-# no need to build example&text&benchmarks on pure yocto
-EXTRA_OECMAKE += "-DSPDLOG_INSTALL=on -DSPDLOG_BUILD_SHARED=on -DSPDLOG_BUILD_EXAMPLES=off -DSPDLOG_BUILD_TESTS=off -DSPDLOG_BUILD_BENCH=off -DSPDLOG_FMT_EXTERNAL=on"
-
-inherit cmake
diff --git a/meta-openembedded/meta-oe/recipes-support/spitools/spitools_git.bb b/meta-openembedded/meta-oe/recipes-support/spitools/spitools_git.bb
index e1b9302..9dfb23c 100644
--- a/meta-openembedded/meta-oe/recipes-support/spitools/spitools_git.bb
+++ b/meta-openembedded/meta-oe/recipes-support/spitools/spitools_git.bb
@@ -5,8 +5,8 @@
 LIC_FILES_CHKSUM = "file://LICENSE;md5=8c16666ae6c159876a0ba63099614381"
 
 BPV = "1.0.0"
-PV = "1.0.1"
-SRCREV = "87da3bfc03f3088e2e880b6b48195bb225fafeac"
+PV = "1.0.2"
+SRCREV = "1748e092425a4a0ff693aa347062a57fc1ffdd00"
 
 S = "${WORKDIR}/git"
 
diff --git a/meta-openembedded/meta-oe/recipes-support/tree/tree_2.0.2.bb b/meta-openembedded/meta-oe/recipes-support/tree/tree_2.0.2.bb
deleted file mode 100644
index 26b6074..0000000
--- a/meta-openembedded/meta-oe/recipes-support/tree/tree_2.0.2.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-SUMMARY = "A recursive directory listing command"
-HOMEPAGE = "http://mama.indstate.edu/users/ice/tree/"
-SECTION = "console/utils"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=393a5ca445f6965873eca0259a17f833"
-
-SRC_URI = "http://mama.indstate.edu/users/ice/tree/src/${BP}.tgz"
-SRC_URI[sha256sum] = "7d693a1d88d3c4e70a73e03b8dbbdc12c2945d482647494f2f5bd83a479eeeaf"
-
-# tree's default CFLAGS for Linux
-CFLAGS += "-Wall -DLINUX -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"
-
-EXTRA_OEMAKE = "CC='${CC}' CFLAGS='${CFLAGS}' LDFLAGS='${LDFLAGS}'"
-
-do_configure[noexec] = "1"
-
-do_install() {
-    install -d ${D}${bindir}
-    install -m 0755 ${S}/${BPN} ${D}${bindir}/
-}
diff --git a/meta-openembedded/meta-oe/recipes-support/tree/tree_2.0.3.bb b/meta-openembedded/meta-oe/recipes-support/tree/tree_2.0.3.bb
new file mode 100644
index 0000000..c5f3364
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/tree/tree_2.0.3.bb
@@ -0,0 +1,20 @@
+SUMMARY = "A recursive directory listing command"
+HOMEPAGE = "http://mama.indstate.edu/users/ice/tree/"
+SECTION = "console/utils"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=393a5ca445f6965873eca0259a17f833"
+
+SRC_URI = "http://mama.indstate.edu/users/ice/tree/src/${BP}.tgz"
+SRC_URI[sha256sum] = "ba14e77b5f9dc7f8250c3f702ec5b6be2f93cd0fa87311bab3239676866a3b1d"
+
+# tree's default CFLAGS for Linux
+CFLAGS += "-Wall -DLINUX -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"
+
+EXTRA_OEMAKE = "CC='${CC}' CFLAGS='${CFLAGS}' LDFLAGS='${LDFLAGS}'"
+
+do_configure[noexec] = "1"
+
+do_install() {
+    install -d ${D}${bindir}
+    install -m 0755 ${S}/${BPN} ${D}${bindir}/
+}
diff --git a/meta-openembedded/meta-oe/recipes-support/vboxguestdrivers/vboxguestdrivers/0001-utils-fix-build-against-5.15-libc-headers-headers.patch b/meta-openembedded/meta-oe/recipes-support/vboxguestdrivers/vboxguestdrivers/0001-utils-fix-build-against-5.15-libc-headers-headers.patch
deleted file mode 100644
index 203eec6..0000000
--- a/meta-openembedded/meta-oe/recipes-support/vboxguestdrivers/vboxguestdrivers/0001-utils-fix-build-against-5.15-libc-headers-headers.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-From 7213a5bfa3bd9f360d6be01e6dbd59d91095a0fd Mon Sep 17 00:00:00 2001
-From: Bruce Ashfield <bruce.ashfield@gmail.com>
-Date: Thu, 4 Nov 2021 14:53:46 -0400
-Subject: [PATCH] utils: fix build against 5.15 libc-headers headers
-
-In kernel v5.15+ stdarg.h is part of the kernel source, and the
-upstream project has a change to prefer that stdarg.h to the
-c-library variant and hence includes it as <linux/stdarg.h>, which
-leads to the following build error:
-
-   | In file included from ../vboxsf/include/iprt/types.h:34,
-   |                  from ../vboxsf/include/iprt/string.h:33,
-   |                  from mount.vboxsf.c:53:
-   | ../vboxsf/include/iprt/stdarg.h:49:13: fatal error: linux/stdarg.h: No such file or directory
-   |    49 | #   include <linux/stdarg.h>
-   |       |             ^~~~~~~~~~~~~~~~
-
-If we modify our build of the vboxdrivers to have the kernel source
-directory on the include path (to find linux/stdarg.h, that leads
-to the following errors:
-
-   In file included from build/tmp/work/qemux86_64-poky-linux/vboxguestdrivers/6.1.28-r0/recipe-sysroot/usr/include/stdlib.h:394,
-   |                  from mount.vboxsf.c:36:
-   | build/tmp/work/qemux86_64-poky-linux/vboxguestdrivers/6.1.28-r0/recipe-sysroot/usr/include/sys/types.h:192:20: note: previous declaration of 'blkcnt_t' with type 'blkcnt_t' {aka 'long int'}
-   |   192 | typedef __blkcnt_t blkcnt_t;     /* Type to count number of disk blocks.  */
-   |       |                    ^~~~~~~~
-   | In file included from build/tmp/work-shared/qemux86-64/kernel-source/include/linux/time.h:5,
-   |                  from poky/build/tmp/work-shared/qemux86-64/kernel-source/include/linux/stat.h:19,
-   |                  from build/tmp/work/qemux86_64-poky-linux/vboxguestdrivers/6.1.28-r0/recipe-sysroot/usr/include/bits/statx.h:31,
-
-Our libc-headers are safe and don't lead to the potential conflicing
-information that the upstream commit is guarding against. The easiest
-solution is to revert the upstream change and trust our headers.
-
-Upstream-Status: Inappropriate [OE specific]
-
-Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
----
- vboxsf/include/iprt/stdarg.h | 7 +------
- 1 file changed, 1 insertion(+), 6 deletions(-)
-
-diff --git a/include/iprt/stdarg.h b/include/iprt/stdarg.h
-index c73093c..7bffde5 100644
---- a/include/iprt/stdarg.h
-+++ b/include/iprt/stdarg.h
-@@ -44,12 +44,7 @@
- #   define __builtin_stdarg_start __builtin_va_start
- #  endif
- # elif defined(RT_OS_LINUX) && defined(IN_RING0)
--#  include "linux/version.h"
--#  if RTLNX_VER_MIN(5,15,0) || RTLNX_RHEL_MAJ_PREREQ(9,1)
--#   include <linux/stdarg.h>
--#  else
--#   include <stdarg.h>
--#  endif
-+#  include <stdarg.h>
- # else
- #  include <stdarg.h>
- # endif
--- 
-2.19.1
-
diff --git a/meta-openembedded/meta-oe/recipes-support/vboxguestdrivers/vboxguestdrivers_6.1.36.bb b/meta-openembedded/meta-oe/recipes-support/vboxguestdrivers/vboxguestdrivers_6.1.36.bb
index 37dd022..7eb497a 100644
--- a/meta-openembedded/meta-oe/recipes-support/vboxguestdrivers/vboxguestdrivers_6.1.36.bb
+++ b/meta-openembedded/meta-oe/recipes-support/vboxguestdrivers/vboxguestdrivers_6.1.36.bb
@@ -13,7 +13,6 @@
 
 SRC_URI = "http://download.virtualbox.org/virtualbox/${PV}/${VBOX_NAME}.tar.bz2 \
     file://Makefile.utils \
-    file://0001-utils-fix-build-against-5.15-libc-headers-headers.patch \
 "
 
 SRC_URI[sha256sum] = "e47942e42892c13c621869865e2b7b320340154f0fa74ecbdaf18fdaf70ef047"
@@ -30,6 +29,7 @@
 MAKE_TARGETS = "all"
 
 addtask export_sources after do_patch before do_configure
+do_export_sources[depends] += "virtual/kernel:do_shared_workdir"
 
 do_export_sources() {
     mkdir -p "${S}"
@@ -42,6 +42,14 @@
     install ${WORKDIR}/${VBOX_NAME}/src/VBox/Additions/linux/sharedfolders/vbsfmount.c ${S}/utils
     install ${S}/../Makefile.utils ${S}/utils/Makefile
 
+    # some kernel versions have issues with stdarg.h and compatibility with
+    # the sysroot and libc-headers/uapi. If we include the file directly from
+    # the kernel source (STAGING_KERNEL_DIR) we get conflicting types on many
+    # structures, due to kernel .h files being found before libc .h files.
+    # if we grab just this one file from the source, and put it into our
+    # file structure, everything holds together
+    mkdir -p ${S}/vboxsf/include/linux
+    install ${STAGING_KERNEL_DIR}/include/linux/stdarg.h  ${S}/vboxsf/include/linux
 }
 
 do_configure:prepend() {
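
Note on the hunk above: the new do_export_sources[depends] line makes the copy wait until the shared kernel work directory has been staged, the usual pattern for recipes that read the shared kernel sources, so ${STAGING_KERNEL_DIR}/include/linux/stdarg.h exists on a clean build. A rough way to sanity-check the resulting task ordering (illustrative commands, assuming bitbake's default task-depends.dot output, where the kernel shows up under its resolved recipe name rather than virtual/kernel):

    bitbake -g vboxguestdrivers
    grep 'vboxguestdrivers.do_export_sources' task-depends.dot | grep 'do_shared_workdir'
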
diff --git a/meta-openembedded/meta-oe/recipes-support/xdg-user-dirs/xdg-user-dirs_0.17.bb b/meta-openembedded/meta-oe/recipes-support/xdg-user-dirs/xdg-user-dirs_0.17.bb
deleted file mode 100644
index 75d7f27..0000000
--- a/meta-openembedded/meta-oe/recipes-support/xdg-user-dirs/xdg-user-dirs_0.17.bb
+++ /dev/null
@@ -1,16 +0,0 @@
-DESCRIPTION = "xdg-user-dirs is a tool to help manage user directories like the desktop folder and the music folder"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f"
-
-SRC_URI = "http://user-dirs.freedesktop.org/releases/${BPN}-${PV}.tar.gz"
-SRC_URI[md5sum] = "e0564ec6d838e6e41864d872a29b3575"
-SRC_URI[sha256sum] = "2a07052823788e8614925c5a19ef5b968d8db734fdee656699ea4f97d132418c"
-
-inherit autotools gettext
-
-EXTRA_OECONF = "--disable-documentation"
-
-CONFFILES:${PN} += " \
-    ${sysconfdir}/xdg/user-dirs.conf \
-    ${sysconfdir}/xdg/user-dirs.defaults \
-"
diff --git a/meta-openembedded/meta-oe/recipes-support/xdg-user-dirs/xdg-user-dirs_0.18.bb b/meta-openembedded/meta-oe/recipes-support/xdg-user-dirs/xdg-user-dirs_0.18.bb
new file mode 100644
index 0000000..84fc9e2
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/xdg-user-dirs/xdg-user-dirs_0.18.bb
@@ -0,0 +1,15 @@
+DESCRIPTION = "xdg-user-dirs is a tool to help manage user directories like the desktop folder and the music folder"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f"
+
+SRC_URI = "http://user-dirs.freedesktop.org/releases/${BPN}-${PV}.tar.gz"
+SRC_URI[sha256sum] = "ec6f06d7495cdba37a732039f9b5e1578bcb296576fde0da40edb2f52220df3c"
+
+inherit autotools gettext
+
+EXTRA_OECONF = "--disable-documentation"
+
+CONFFILES:${PN} += " \
+    ${sysconfdir}/xdg/user-dirs.conf \
+    ${sysconfdir}/xdg/user-dirs.defaults \
+"
diff --git a/meta-openembedded/meta-oe/recipes-support/xrdp/xrdp_0.9.18.bb b/meta-openembedded/meta-oe/recipes-support/xrdp/xrdp_0.9.18.bb
deleted file mode 100644
index 7ec6ae1..0000000
--- a/meta-openembedded/meta-oe/recipes-support/xrdp/xrdp_0.9.18.bb
+++ /dev/null
@@ -1,89 +0,0 @@
-SUMMARY = "An open source remote desktop protocol(rdp) server."
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://COPYING;md5=72cfbe4e7bd33a0a1de9630c91195c21 \
-"
-
-inherit features_check autotools pkgconfig useradd systemd
-
-DEPENDS = "openssl virtual/libx11 libxfixes libxrandr libpam nasm-native imlib2 pixman libsm"
-
-REQUIRED_DISTRO_FEATURES = "x11 pam"
-
-SRC_URI = "https://github.com/neutrinolabs/${BPN}/releases/download/v${PV}/${BPN}-${PV}.tar.gz \
-           file://xrdp.sysconfig \
-           file://0001-Added-req_distinguished_name-in-etc-xrdp-openssl.con.patch \
-           file://0001-Fix-the-compile-error.patch \
-           file://0001-arch-Define-NO_NEED_ALIGN-on-ppc64.patch \
-           "
-
-SRC_URI[sha256sum] = "c5eea0af055fac90c632e44fb667f1a25c55de2e34b37127e4cb0aabaef90a0f"
-
-CFLAGS += " -Wno-deprecated-declarations"
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[fuse] = " --enable-fuse, --disable-fuse, fuse"
-
-USERADD_PACKAGES = "${PN}"
-GROUPADD_PARAM:${PN} = "--system xrdp"
-USERADD_PARAM:${PN}  = "--system --home /var/run/xrdp -g xrdp \
-                        --no-create-home --shell /bin/false xrdp"
-
-FILES:${PN} += "${datadir}/dbus-1/services/*.service \
-                ${datadir}/dbus-1/accessibility-services/*.service "
-
-FILES:${PN}-dev += "${libdir}/xrdp/libcommon.so \
-                    ${libdir}/xrdp/libxrdp.so \
-                    ${libdir}/xrdp/libscp.so \
-                    ${libdir}/xrdp/libxrdpapi.so "
-
-EXTRA_OECONF = "--enable-pam-config=suse --enable-fuse \
-                --enable-pixman --enable-painter --enable-vsock \
-                --enable-ipv6 --with-imlib2 --with-socketdir=${localstatedir}/run/${PN}"
-
-do_configure:prepend() {
-    cd ${S}
-    ./bootstrap
-    cd -
-}
-
-do_compile:prepend() {
-    sed -i 's/(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am/(MAKE) $(AM_MAKEFLAGS) install-exec-am/g' ${S}/keygen/Makefile.in
-}
-
-do_install:append() {
-
-	# deal with systemd unit files
-	install -d ${D}${systemd_unitdir}/system
-	install -m 0644 ${S}/instfiles/xrdp.service.in ${D}${systemd_unitdir}/system/xrdp.service
-	install -m 0644 ${S}/instfiles/xrdp-sesman.service.in ${D}${systemd_unitdir}/system/xrdp-sesman.service
-	sed -i -e 's,@localstatedir@,${localstatedir},g' ${D}${systemd_unitdir}/system/xrdp.service ${D}${systemd_unitdir}/system/xrdp-sesman.service
-	sed -i -e 's,@sysconfdir@,${sysconfdir},g' ${D}${systemd_unitdir}/system/xrdp.service ${D}${systemd_unitdir}/system/xrdp-sesman.service
-	sed -i -e 's,@sbindir@,${sbindir},g' ${D}${systemd_unitdir}/system/xrdp.service ${D}${systemd_unitdir}/system/xrdp-sesman.service
-
-	install -d ${D}${sysconfdir}/sysconfig/xrdp
-	install -m 0644 ${S}/instfiles/*.ini ${D}${sysconfdir}/xrdp/
-	install -m 0644 ${S}/keygen/openssl.conf ${D}${sysconfdir}/xrdp/
-	install -m 0644 ${WORKDIR}/xrdp.sysconfig ${D}${sysconfdir}/sysconfig/xrdp/
-	chown xrdp:xrdp ${D}${sysconfdir}/xrdp
-}
-
-SYSTEMD_SERVICE:${PN} = "xrdp.service xrdp-sesman.service"
-
-pkg_postinst:${PN}() {
-	if test -z "$D"
-	then
-		if test -x ${bindir}/xrdp-keygen
-		then
-			${bindir}/xrdp-keygen xrdp ${sysconfdir}/xrdp/rsakeys.ini >/dev/null
-                fi
-		if test ! -s ${sysconfdir}/xrdp/cert.pem
-		then
-			openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 3652 \
-			-keyout ${sysconfdir}/xrdp/key.pem \
-			-out ${sysconfdir}/xrdp/cert.pem \
-			-config ${sysconfdir}/xrdp/openssl.conf >/dev/null 2>&1
-			chmod 400 ${sysconfdir}/xrdp/key.pem
-		fi
-        fi
-}
diff --git a/meta-openembedded/meta-oe/recipes-support/xrdp/xrdp_0.9.19.bb b/meta-openembedded/meta-oe/recipes-support/xrdp/xrdp_0.9.19.bb
new file mode 100644
index 0000000..3e2e84f
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/xrdp/xrdp_0.9.19.bb
@@ -0,0 +1,90 @@
+SUMMARY = "An open source remote desktop protocol(rdp) server."
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://COPYING;md5=72cfbe4e7bd33a0a1de9630c91195c21 \
+"
+
+inherit features_check autotools pkgconfig useradd systemd
+
+DEPENDS = "openssl virtual/libx11 libxfixes libxrandr libpam nasm-native imlib2 pixman libsm"
+
+REQUIRED_DISTRO_FEATURES = "x11 pam"
+
+SRC_URI = "https://github.com/neutrinolabs/${BPN}/releases/download/v${PV}/${BPN}-${PV}.tar.gz \
+           file://xrdp.sysconfig \
+           file://0001-Added-req_distinguished_name-in-etc-xrdp-openssl.con.patch \
+           file://0001-Fix-the-compile-error.patch \
+           file://0001-arch-Define-NO_NEED_ALIGN-on-ppc64.patch \
+           "
+
+SRC_URI[sha256sum] = "94017d30e475c6d7a24f651e16791551862ae46f82d8de62385e63393f5f93d0"
+
+CFLAGS += " -Wno-deprecated-declarations"
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[fuse] = " --enable-fuse, --disable-fuse, fuse"
+
+USERADD_PACKAGES = "${PN}"
+GROUPADD_PARAM:${PN} = "--system xrdp"
+USERADD_PARAM:${PN}  = "--system --home /var/run/xrdp -g xrdp \
+                        --no-create-home --shell /bin/false xrdp"
+
+FILES:${PN} += "${datadir}/dbus-1/services/*.service \
+                ${datadir}/dbus-1/accessibility-services/*.service "
+
+FILES:${PN}-dev += "${libdir}/xrdp/libcommon.so \
+                    ${libdir}/xrdp/libxrdp.so \
+                    ${libdir}/xrdp/libscp.so \
+                    ${libdir}/xrdp/libxrdpapi.so "
+
+EXTRA_OECONF = "--enable-pam-config=suse --enable-fuse \
+                --enable-pixman --enable-painter --enable-vsock \
+                --enable-ipv6 --with-imlib2 --with-socketdir=${localstatedir}/run/${PN}"
+
+do_configure:prepend() {
+    cd ${S}
+    ./bootstrap
+    cd -
+}
+
+do_compile:prepend() {
+    sed -i 's/(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am/(MAKE) $(AM_MAKEFLAGS) install-exec-am/g' ${S}/keygen/Makefile.in
+    echo "" > ${B}/xrdp_configure_options.h
+}
+
+do_install:append() {
+
+	# deal with systemd unit files
+	install -d ${D}${systemd_unitdir}/system
+	install -m 0644 ${S}/instfiles/xrdp.service.in ${D}${systemd_unitdir}/system/xrdp.service
+	install -m 0644 ${S}/instfiles/xrdp-sesman.service.in ${D}${systemd_unitdir}/system/xrdp-sesman.service
+	sed -i -e 's,@localstatedir@,${localstatedir},g' ${D}${systemd_unitdir}/system/xrdp.service ${D}${systemd_unitdir}/system/xrdp-sesman.service
+	sed -i -e 's,@sysconfdir@,${sysconfdir},g' ${D}${systemd_unitdir}/system/xrdp.service ${D}${systemd_unitdir}/system/xrdp-sesman.service
+	sed -i -e 's,@sbindir@,${sbindir},g' ${D}${systemd_unitdir}/system/xrdp.service ${D}${systemd_unitdir}/system/xrdp-sesman.service
+
+	install -d ${D}${sysconfdir}/sysconfig/xrdp
+	install -m 0644 ${S}/instfiles/*.ini ${D}${sysconfdir}/xrdp/
+	install -m 0644 ${S}/keygen/openssl.conf ${D}${sysconfdir}/xrdp/
+	install -m 0644 ${WORKDIR}/xrdp.sysconfig ${D}${sysconfdir}/sysconfig/xrdp/
+	chown xrdp:xrdp ${D}${sysconfdir}/xrdp
+}
+
+SYSTEMD_SERVICE:${PN} = "xrdp.service xrdp-sesman.service"
+
+pkg_postinst:${PN}() {
+	if test -z "$D"
+	then
+		if test -x ${bindir}/xrdp-keygen
+		then
+			${bindir}/xrdp-keygen xrdp ${sysconfdir}/xrdp/rsakeys.ini >/dev/null
+                fi
+		if test ! -s ${sysconfdir}/xrdp/cert.pem
+		then
+			openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 3652 \
+			-keyout ${sysconfdir}/xrdp/key.pem \
+			-out ${sysconfdir}/xrdp/cert.pem \
+			-config ${sysconfdir}/xrdp/openssl.conf >/dev/null 2>&1
+			chmod 400 ${sysconfdir}/xrdp/key.pem
+		fi
+        fi
+}
diff --git a/meta-openembedded/meta-oe/recipes-test/cunit/cunit_2.1-3.bb b/meta-openembedded/meta-oe/recipes-test/cunit/cunit_2.1-3.bb
index 9a0c18d..252ef60 100644
--- a/meta-openembedded/meta-oe/recipes-test/cunit/cunit_2.1-3.bb
+++ b/meta-openembedded/meta-oe/recipes-test/cunit/cunit_2.1-3.bb
@@ -15,7 +15,7 @@
 
 UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/cunit/files/releases"
 
-inherit autotools-brokensep remove-libtool ptest
+inherit autotools-brokensep ptest
 
 EXTRA_OECONF = "--enable-memtrace --enable-automated --enable-basic --enable-console"
 
diff --git a/meta-openembedded/meta-perl/recipes-perl/libauthen/libauthen-sasl-perl_2.16.bb b/meta-openembedded/meta-perl/recipes-perl/libauthen/libauthen-sasl-perl_2.16.bb
index 8545eb5..a9eec69 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libauthen/libauthen-sasl-perl_2.16.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libauthen/libauthen-sasl-perl_2.16.bb
@@ -5,7 +5,7 @@
 HOMEPAGE = "http://search.cpan.org/dist/Authen-SASL/"
 SECTION = "libs"
 
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 LIC_FILES_CHKSUM = "file://lib/Authen/SASL/Perl.pm;beginline=1;endline=3;md5=17123315bbcda19f484c07227594a609"
 
 DEPENDS = "perl"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libcurses/libcurses-perl_1.38.bb b/meta-openembedded/meta-perl/recipes-perl/libcurses/libcurses-perl_1.38.bb
deleted file mode 100644
index 8c56c20..0000000
--- a/meta-openembedded/meta-perl/recipes-perl/libcurses/libcurses-perl_1.38.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-DESCRIPTION = "lib-curses provides an interface between Perl programs and \
-the curses library."
-
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
-
-LIC_FILES_CHKSUM = "file://README;beginline=26;endline=30;md5=0b37356c5e9e28080a3422d82af8af09"
-
-DEPENDS += "perl ncurses "
-
-SRC_URI = "http://www.cpan.org/authors/id/G/GI/GIRAFFED/Curses-${PV}.tar.gz"
-
-SRC_URI[sha256sum] = "d521408298eb6413b209ef29d4ffcba6f5f58ee1abc60160739a17aafcb8f2f2"
-
-S = "${WORKDIR}/Curses-${PV}"
-
-EXTRA_CPANFLAGS = "INC=-I${STAGING_INCDIR} LIBS=-L${STAGING_LIBDIR}"
-
-inherit cpan
-
-do_compile() {
-    export LIBC="$(find ${STAGING_DIR_TARGET}/${base_libdir}/ -name 'libc-*.so')"
-    cpan_do_compile
-}
-
diff --git a/meta-openembedded/meta-perl/recipes-perl/libcurses/libcurses-perl_1.41.bb b/meta-openembedded/meta-perl/recipes-perl/libcurses/libcurses-perl_1.41.bb
new file mode 100644
index 0000000..6a6f012
--- /dev/null
+++ b/meta-openembedded/meta-perl/recipes-perl/libcurses/libcurses-perl_1.41.bb
@@ -0,0 +1,25 @@
+DESCRIPTION = "lib-curses provides an interface between Perl programs and \
+the curses library."
+
+SECTION = "libs"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
+
+LIC_FILES_CHKSUM = "file://README;beginline=26;endline=30;md5=0b37356c5e9e28080a3422d82af8af09"
+
+DEPENDS += "perl ncurses "
+
+SRC_URI = "http://www.cpan.org/authors/id/G/GI/GIRAFFED/Curses-${PV}.tar.gz"
+
+SRC_URI[sha256sum] = "fb9efea8c7b5ed5f8ea5dee49d35252accfc05ee6e75cb9a37ab7c847cd261d7"
+
+S = "${WORKDIR}/Curses-${PV}"
+
+EXTRA_CPANFLAGS = "INC=-I${STAGING_INCDIR} LIBS=-L${STAGING_LIBDIR}"
+
+inherit cpan
+
+do_compile() {
+    export LIBC="$(find ${STAGING_DIR_TARGET}/${base_libdir}/ -name 'libc-*.so')"
+    cpan_do_compile
+}
+
diff --git a/meta-openembedded/meta-perl/recipes-perl/libdigest/libdigest-hmac-perl_1.03.bb b/meta-openembedded/meta-perl/recipes-perl/libdigest/libdigest-hmac-perl_1.03.bb
index 51a2ad3..43b7f4d 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libdigest/libdigest-hmac-perl_1.03.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libdigest/libdigest-hmac-perl_1.03.bb
@@ -3,7 +3,7 @@
 HOMEPAGE = "http://search.cpan.org/~gaas/Digest-HMAC-1.03/"
 SECTION = "libs"
 
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 LIC_FILES_CHKSUM = "file://README;beginline=13;endline=17;md5=da980cdc026faa065e5d5004115334e6"
 
 RDEPENDS:${PN} = "libdigest-sha1-perl perl-module-extutils-makemaker perl-module-digest-md5"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libdigest/libdigest-sha1-perl_2.13.bb b/meta-openembedded/meta-perl/recipes-perl/libdigest/libdigest-sha1-perl_2.13.bb
index cd63675..df89c9b 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libdigest/libdigest-sha1-perl_2.13.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libdigest/libdigest-sha1-perl_2.13.bb
@@ -3,7 +3,7 @@
 HOMEPAGE = "http://search.cpan.org/~gaas/Digest-SHA1-2.13/"
 SECTION = "libs"
 
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 LIC_FILES_CHKSUM = "file://README;beginline=10;endline=14;md5=ff5867ebb4bc1103a7a416aef2fce00a"
 
 SRC_URI = "http://search.cpan.org/CPAN/authors/id/G/GA/GAAS/Digest-SHA1-${PV}.tar.gz \
diff --git a/meta-openembedded/meta-perl/recipes-perl/libencode/libencode-perl_3.18.bb b/meta-openembedded/meta-perl/recipes-perl/libencode/libencode-perl_3.18.bb
deleted file mode 100644
index 999863b..0000000
--- a/meta-openembedded/meta-perl/recipes-perl/libencode/libencode-perl_3.18.bb
+++ /dev/null
@@ -1,105 +0,0 @@
-# NOTE:
-#    You should use perl-module-encode rather than this package
-#    unless you specifically need a version newer than what is
-#    provided by perl.
-
-SUMMARY = "Encode - character encodings"
-DESCRIPTION = "The \"Encode\" module provides the interfaces between \
-Perl's strings and the rest of the system.  Perl strings are sequences \
-of characters."
-
-AUTHOR = "Dan Kogai <dankogai+cpan@gmail.com>"
-HOMEPAGE = "https://metacpan.org/release/Encode"
-SECTION = "lib"
-LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
-LIC_FILES_CHKSUM = "file://META.json;beginline=8;endline=10;md5=b12e3be1e17a7e99ca4f429ff32c28b5"
-
-SRC_URI = "${CPAN_MIRROR}/authors/id/D/DA/DANKOGAI/Encode-${PV}.tar.gz"
-SRC_URI[sha256sum] = "74dcbd851171a68cf3ef225568ece47b0959b8e3cda887482fde97c1ae1691e2"
-
-UPSTREAM_CHECK_REGEX = "Encode\-(?P<pver>(\d+\.\d+))(?!_\d+).tar"
-
-S = "${WORKDIR}/Encode-${PV}"
-
-inherit cpan ptest-perl
-
-do_install:prepend() {
-    # Requires "-T" (taint) option on command line
-    rm -rf ${B}/t/taint.t
-    # Circular dependency of perl-module-open on perl-module-encode
-    # and we cannot load perl-module-encode because we are providing
-    # an alternative
-    rm -rf ${B}/t/use-Encode-Alias.t
-}
-
-do_install_ptest() {
-    mkdir ${D}${PTEST_PATH}/bin
-    cp -r ${B}/bin/piconv ${D}${PTEST_PATH}/bin
-    cp -r ${B}/blib ${D}${PTEST_PATH}
-    chown -R root:root ${D}${PTEST_PATH}
-}
-
-#  file /usr/bin/enc2xs from install of perl-misc-5.24.1-r0.i586 conflicts with file from package libencode-perl-2.94-r0.i586
-#  file /usr/bin/encguess from install of perl-misc-5.24.1-r0.i586 conflicts with file from package libencode-perl-2.94-r0.i586
-#  file /usr/bin/piconv from install of perl-misc-5.24.1-r0.i586 conflicts with file from package libencode-perl-2.94-r0.i586
-RCONFLICTS:${PN} = "perl-misc perl-module-encode"
-
-RDEPENDS:${PN} += " \
-    perl-module-bytes \
-    perl-module-constant \
-    perl-module-parent \
-    perl-module-storable \
-    perl-module-xsloader \
-    "
-
-RPROVIDES:${PN} += " \
-    libencode-alias-perl \
-    libencode-byte-perl \
-    libencode-cjkconstants-perl \
-    libencode-cn-perl \
-    libencode-cn-hz-perl \
-    libencode-config-perl \
-    libencode-ebcdic-perl \
-    libencode-encoder-perl \
-    libencode-encoding-perl \
-    libencode-gsm0338-perl \
-    libencode-guess-perl \
-    libencode-jp-perl \
-    libencode-jp-h2z-perl \
-    libencode-jp-jis7-perl \
-    libencode-kr-perl \
-    libencode-kr-2022_kr-perl \
-    libencode-mime-header-perl \
-    libencode-mime-name-perl \
-    libencode-symbol-perl \
-    libencode-tw-perl \
-    libencode-unicode-perl \
-    libencode-unicode-utf7-perl \
-    libencoding-perl \
-    libencode-internal-perl \
-    libencode-mime-header-iso_2022_jp-perl \
-    libencode-utf8-perl \
-    libencode-utf_ebcdic-perl \
-    "
-
-RDEPENDS:${PN}-ptest += " \
-    perl-module-blib \
-    perl-module-charnames \
-    perl-module-file-compare \
-    perl-module-file-copy \
-    perl-module-filehandle \
-    perl-module-findbin \
-    perl-module-integer \
-    perl-module-io-select \
-    perl-module-ipc-open3 \
-    perl-module-mime-base64 \
-    perl-module-perlio \
-    perl-module-perlio-encoding \
-    perl-module-perlio-scalar \
-    perl-module-test-more \
-    perl-module-tie-scalar \
-    perl-module-unicore \
-    perl-module-utf8 \
-    "
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libencode/libencode-perl_3.19.bb b/meta-openembedded/meta-perl/recipes-perl/libencode/libencode-perl_3.19.bb
new file mode 100644
index 0000000..352517c
--- /dev/null
+++ b/meta-openembedded/meta-perl/recipes-perl/libencode/libencode-perl_3.19.bb
@@ -0,0 +1,105 @@
+# NOTE:
+#    You should use perl-module-encode rather than this package
+#    unless you specifically need a version newer than what is
+#    provided by perl.
+
+SUMMARY = "Encode - character encodings"
+DESCRIPTION = "The \"Encode\" module provides the interfaces between \
+Perl's strings and the rest of the system.  Perl strings are sequences \
+of characters."
+
+AUTHOR = "Dan Kogai <dankogai+cpan@gmail.com>"
+HOMEPAGE = "https://metacpan.org/release/Encode"
+SECTION = "lib"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
+LIC_FILES_CHKSUM = "file://META.json;beginline=8;endline=10;md5=b12e3be1e17a7e99ca4f429ff32c28b5"
+
+SRC_URI = "${CPAN_MIRROR}/authors/id/D/DA/DANKOGAI/Encode-${PV}.tar.gz"
+SRC_URI[sha256sum] = "9163f848eef69e4d4cc8838397f0861fd9ea7ede001117dbd9694f8d95052ef5"
+
+UPSTREAM_CHECK_REGEX = "Encode\-(?P<pver>(\d+\.\d+))(?!_\d+).tar"
+
+S = "${WORKDIR}/Encode-${PV}"
+
+inherit cpan ptest-perl
+
+do_install:prepend() {
+    # Requires "-T" (taint) option on command line
+    rm -rf ${B}/t/taint.t
+    # Circular dependency of perl-module-open on perl-module-encode
+    # and we cannot load perl-module-encode because we are providing
+    # an alternative
+    rm -rf ${B}/t/use-Encode-Alias.t
+}
+
+do_install_ptest() {
+    mkdir ${D}${PTEST_PATH}/bin
+    cp -r ${B}/bin/piconv ${D}${PTEST_PATH}/bin
+    cp -r ${B}/blib ${D}${PTEST_PATH}
+    chown -R root:root ${D}${PTEST_PATH}
+}
+
+#  file /usr/bin/enc2xs from install of perl-misc-5.24.1-r0.i586 conflicts with file from package libencode-perl-2.94-r0.i586
+#  file /usr/bin/encguess from install of perl-misc-5.24.1-r0.i586 conflicts with file from package libencode-perl-2.94-r0.i586
+#  file /usr/bin/piconv from install of perl-misc-5.24.1-r0.i586 conflicts with file from package libencode-perl-2.94-r0.i586
+RCONFLICTS:${PN} = "perl-misc perl-module-encode"
+
+RDEPENDS:${PN} += " \
+    perl-module-bytes \
+    perl-module-constant \
+    perl-module-parent \
+    perl-module-storable \
+    perl-module-xsloader \
+    "
+
+RPROVIDES:${PN} += " \
+    libencode-alias-perl \
+    libencode-byte-perl \
+    libencode-cjkconstants-perl \
+    libencode-cn-perl \
+    libencode-cn-hz-perl \
+    libencode-config-perl \
+    libencode-ebcdic-perl \
+    libencode-encoder-perl \
+    libencode-encoding-perl \
+    libencode-gsm0338-perl \
+    libencode-guess-perl \
+    libencode-jp-perl \
+    libencode-jp-h2z-perl \
+    libencode-jp-jis7-perl \
+    libencode-kr-perl \
+    libencode-kr-2022_kr-perl \
+    libencode-mime-header-perl \
+    libencode-mime-name-perl \
+    libencode-symbol-perl \
+    libencode-tw-perl \
+    libencode-unicode-perl \
+    libencode-unicode-utf7-perl \
+    libencoding-perl \
+    libencode-internal-perl \
+    libencode-mime-header-iso_2022_jp-perl \
+    libencode-utf8-perl \
+    libencode-utf_ebcdic-perl \
+    "
+
+RDEPENDS:${PN}-ptest += " \
+    perl-module-blib \
+    perl-module-charnames \
+    perl-module-file-compare \
+    perl-module-file-copy \
+    perl-module-filehandle \
+    perl-module-findbin \
+    perl-module-integer \
+    perl-module-io-select \
+    perl-module-ipc-open3 \
+    perl-module-mime-base64 \
+    perl-module-perlio \
+    perl-module-perlio-encoding \
+    perl-module-perlio-scalar \
+    perl-module-test-more \
+    perl-module-tie-scalar \
+    perl-module-unicore \
+    perl-module-utf8 \
+    "
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libio/libio-socket-ssl-perl_2.074.bb b/meta-openembedded/meta-perl/recipes-perl/libio/libio-socket-ssl-perl_2.074.bb
index 1d04f00..6249fd1 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libio/libio-socket-ssl-perl_2.074.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libio/libio-socket-ssl-perl_2.074.bb
@@ -9,7 +9,7 @@
 HOMEPAGE = "http://search.cpan.org/dist/IO-Socket-SSL/"
 SECTION = "libs"
 
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 LIC_FILES_CHKSUM = "file://META.yml;beginline=12;endline=12;md5=963ce28228347875ace682de56eef8e8"
 
 RDEPENDS:${PN} += "\
diff --git a/meta-openembedded/meta-perl/recipes-perl/libipc/libipc-signal-perl_1.00.bb b/meta-openembedded/meta-perl/recipes-perl/libipc/libipc-signal-perl_1.00.bb
index 389be2c..203db7b 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libipc/libipc-signal-perl_1.00.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libipc/libipc-signal-perl_1.00.bb
@@ -5,7 +5,7 @@
 HOMEPAGE = "http://search.cpan.org/~rosch/IPC-Signal-1.00/"
 SECTION = "libs"
 
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 LIC_FILES_CHKSUM = "file://README;beginline=16;endline=18;md5=f36550f59a0ae5e6e3b0be6a4da60d26"
 
 S = "${WORKDIR}/IPC-Signal-${PV}"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-charset-perl_1.012.2.bb b/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-charset-perl_1.012.2.bb
deleted file mode 100644
index 20557a3..0000000
--- a/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-charset-perl_1.012.2.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-SUMMARY = "MIME::Charset - Charset Information for MIME."
-DESCRIPTION = "MIME::Charset provides information about character sets used for MIME \
-messages on Internet."
-HOMEPAGE = "http://search.cpan.org/~nezumi/MIME-Charset-${PV}/"
-SECTION = "libs"
-
-LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-
-SRC_URI = "${CPAN_MIRROR}/authors/id/N/NE/NEZUMI/MIME-Charset-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "71440416376248c31aa3bef753fae28d"
-SRC_URI[sha256sum] = "878c779c0256c591666bd06c0cde4c0d7820eeeb98fd1183082aee9a1e7b1d13"
-
-S = "${WORKDIR}/MIME-Charset-${PV}"
-
-inherit cpan
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-charset-perl_1.013.1.bb b/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-charset-perl_1.013.1.bb
new file mode 100644
index 0000000..27ed41e
--- /dev/null
+++ b/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-charset-perl_1.013.1.bb
@@ -0,0 +1,18 @@
+SUMMARY = "MIME::Charset - Charset Information for MIME."
+DESCRIPTION = "MIME::Charset provides information about character sets used for MIME \
+messages on Internet."
+HOMEPAGE = "http://search.cpan.org/~nezumi/MIME-Charset-${PV}/"
+SECTION = "libs"
+
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+SRC_URI = "${CPAN_MIRROR}/authors/id/N/NE/NEZUMI/MIME-Charset-${PV}.tar.gz"
+
+SRC_URI[sha256sum] = "1bb7a6e0c0d251f23d6e60bf84c9adefc5b74eec58475bfee4d39107e60870f0"
+
+S = "${WORKDIR}/MIME-Charset-${PV}"
+
+inherit cpan
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-types-perl_2.17.bb b/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-types-perl_2.17.bb
index 2c06728..d1f6f8c 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-types-perl_2.17.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libmime/libmime-types-perl_2.17.bb
@@ -8,7 +8,7 @@
 HOMEPAGE = "http://search.cpan.org/~markov/MIME-Types-${PV}"
 SECTION = "libraries"
 
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 LIC_FILES_CHKSUM = "file://META.yml;beginline=11;endline=11;md5=963ce28228347875ace682de56eef8e8"
 
 SRC_URI = "http://search.cpan.org/CPAN/authors/id/M/MA/MARKOV/MIME-Types-${PV}.tar.gz \
diff --git a/meta-openembedded/meta-perl/recipes-perl/libnet/libnet-ldap-perl_0.68.bb b/meta-openembedded/meta-perl/recipes-perl/libnet/libnet-ldap-perl_0.68.bb
index 293f421..dcc5ea8 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libnet/libnet-ldap-perl_0.68.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libnet/libnet-ldap-perl_0.68.bb
@@ -6,7 +6,7 @@
 
 SECTION = "libs"
 
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 LIC_FILES_CHKSUM = "file://README;beginline=3;endline=5;md5=4d6588c2fa0d38ae162f6314d201d89e"
 
 SRC_URI = "${CPAN_MIRROR}/authors/id/M/MA/MARSCHAP/perl-ldap-${PV}.tar.gz"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libnet/libnet-telnet-perl_3.05.bb b/meta-openembedded/meta-perl/recipes-perl/libnet/libnet-telnet-perl_3.05.bb
index d7d4201..d1365f2 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libnet/libnet-telnet-perl_3.05.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libnet/libnet-telnet-perl_3.05.bb
@@ -11,7 +11,7 @@
 HOMEPAGE = "http://search.cpan.org/dist/Net-Telnet/"
 SECTION = "Development/Libraries"
 
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 LIC_FILES_CHKSUM = "file://README;beginline=4;endline=7;md5=e94ab3b72335e3cdadd6c1ff736dd714"
 
 SRC_URI = "http://search.cpan.org/CPAN/authors/id/J/JR/JROGERS/Net-Telnet-${PV}.tar.gz"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libproc/libproc-waitstat-perl_1.00.bb b/meta-openembedded/meta-perl/recipes-perl/libproc/libproc-waitstat-perl_1.00.bb
index ffd87ed..643a704 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libproc/libproc-waitstat-perl_1.00.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libproc/libproc-waitstat-perl_1.00.bb
@@ -5,7 +5,7 @@
 HOMEPAGE = "http://search.cpan.org/~rosch/Proc-WaitStat/"
 SECTION = "libraries"
 
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 LIC_FILES_CHKSUM = "file://README;beginline=21;endline=23;md5=f36550f59a0ae5e6e3b0be6a4da60d26"
 
 RDEPENDS:${PN} += "perl libipc-signal-perl"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libtest/libtest-warn-perl_0.36.bb b/meta-openembedded/meta-perl/recipes-perl/libtest/libtest-warn-perl_0.36.bb
deleted file mode 100644
index 7f53993..0000000
--- a/meta-openembedded/meta-perl/recipes-perl/libtest/libtest-warn-perl_0.36.bb
+++ /dev/null
@@ -1,46 +0,0 @@
-SUMMARY = "Test::Warn - Perl extension to test methods for warnings"
-DESCRIPTION = "This module provides a few convenience methods for testing \
-warning based code. \
-\
-If you are not already familiar with the Test::More manpage now would be \
-the time to go take a look. \
-"
-
-SECTION = "libs"
-HOMEPAGE= "https://metacpan.org/release/Test-Warn"
-
-LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
-LIC_FILES_CHKSUM = "file://README;beginline=73;endline=78;md5=42b423d91c92ba59c215835a2ee9b57a"
-
-CPAN_PACKAGE = "Test-Warn"
-CPAN_AUTHOR = "BIGJ"
-
-SRC_URI = "${CPAN_MIRROR}/authors/id/B/BI/${CPAN_AUTHOR}/${CPAN_PACKAGE}-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "3d958f43d36db263994affde5da09b51"
-SRC_URI[sha256sum] = "ecbca346d379cef8d3c0e4ac0c8eb3b2613d737ffaaeae52271c38d7bf3c6cda"
-
-S = "${WORKDIR}/${CPAN_PACKAGE}-${PV}"
-
-inherit cpan ptest-perl
-
-do_install_ptest() {
-    cp -r ${B}/blib ${D}${PTEST_PATH}
-    chown -R root:root ${D}${PTEST_PATH}
-}
-
-RDEPENDS:${PN} += " \
-    libsub-uplevel-perl \
-    perl-module-blib \
-    perl-module-carp \
-    perl-module-test-builder \
-    perl-module-test-builder-tester \
-    perl-module-test-tester \
-"
-
-RDEPENDS:${PN}-ptest += " \
-    perl-module-file-spec \
-    perl-module-test-more \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libtest/libtest-warn-perl_0.37.bb b/meta-openembedded/meta-perl/recipes-perl/libtest/libtest-warn-perl_0.37.bb
new file mode 100644
index 0000000..5148fbe
--- /dev/null
+++ b/meta-openembedded/meta-perl/recipes-perl/libtest/libtest-warn-perl_0.37.bb
@@ -0,0 +1,45 @@
+SUMMARY = "Test::Warn - Perl extension to test methods for warnings"
+DESCRIPTION = "This module provides a few convenience methods for testing \
+warning based code. \
+\
+If you are not already familiar with the Test::More manpage now would be \
+the time to go take a look. \
+"
+
+SECTION = "libs"
+HOMEPAGE= "https://metacpan.org/release/Test-Warn"
+
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
+LIC_FILES_CHKSUM = "file://README;beginline=73;endline=78;md5=42b423d91c92ba59c215835a2ee9b57a"
+
+CPAN_PACKAGE = "Test-Warn"
+CPAN_AUTHOR = "BIGJ"
+
+SRC_URI = "${CPAN_MIRROR}/authors/id/B/BI/${CPAN_AUTHOR}/${CPAN_PACKAGE}-${PV}.tar.gz"
+
+SRC_URI[sha256sum] = "98ca32e7f2f5ea89b8bfb9a0609977f3d153e242e2e51705126cb954f1a06b57"
+
+S = "${WORKDIR}/${CPAN_PACKAGE}-${PV}"
+
+inherit cpan ptest-perl
+
+do_install_ptest() {
+    cp -r ${B}/blib ${D}${PTEST_PATH}
+    chown -R root:root ${D}${PTEST_PATH}
+}
+
+RDEPENDS:${PN} += " \
+    libsub-uplevel-perl \
+    perl-module-blib \
+    perl-module-carp \
+    perl-module-test-builder \
+    perl-module-test-builder-tester \
+    perl-module-test-tester \
+"
+
+RDEPENDS:${PN}-ptest += " \
+    perl-module-file-spec \
+    perl-module-test-more \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-perl/recipes-perl/libxml/libxml-libxml-perl_2.0134.bb b/meta-openembedded/meta-perl/recipes-perl/libxml/libxml-libxml-perl_2.0134.bb
index c2898a9..c2ea47a 100644
--- a/meta-openembedded/meta-perl/recipes-perl/libxml/libxml-libxml-perl_2.0134.bb
+++ b/meta-openembedded/meta-perl/recipes-perl/libxml/libxml-libxml-perl_2.0134.bb
@@ -8,7 +8,7 @@
 
 HOMEPAGE = "http://search.cpan.org/dist/XML-LibXML-1.99/"
 SECTION = "libs"
-LICENSE = "Artistic-1.0|GPL-1.0-or-later"
+LICENSE = "Artistic-1.0 | GPL-1.0-or-later"
 DEPENDS += "libxml2 \
         libxml-sax-perl-native \
         zlib \
diff --git a/meta-openembedded/meta-python/README b/meta-openembedded/meta-python/README
index 01c51dc..36c1939 100644
--- a/meta-openembedded/meta-python/README
+++ b/meta-openembedded/meta-python/README
@@ -19,11 +19,6 @@
 	layers: meta-oe
 	branch: master
 
-Please follow the recommended setup procedures of your OE distribution.
-For Angstrom that is:
-        http://www.angstrom-distribution.org/building-angstrom,
-other distros should have similar online resources.
-
 Contributing
 -------------------------
 
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-aiohue_4.4.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-aiohue_4.4.2.bb
deleted file mode 100644
index 63d29a4..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-aiohue_4.4.2.bb
+++ /dev/null
@@ -1,16 +0,0 @@
-DESCRIPTION = "Asynchronous library to control Philips Hue"
-HOMEPAGE = "https://pypi.org/project/aiohue/"
-SECTION = "devel/python"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=dab31a1d28183826937f4b152143a33f"
-
-SRC_URI[sha256sum] = "871c27da3d5a5b7d7c5049e6d23713425a2f251639ff1fb5fac460724ba42f15"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += " \
-	${PYTHON_PN}-aiohttp \
-	${PYTHON_PN}-asyncio-throttle \
-	${PYTHON_PN}-profile \
-        ${PYTHON_PN}-awesomeversion \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-aiohue_4.5.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-aiohue_4.5.0.bb
new file mode 100644
index 0000000..7507b8f
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-aiohue_4.5.0.bb
@@ -0,0 +1,16 @@
+DESCRIPTION = "Asynchronous library to control Philips Hue"
+HOMEPAGE = "https://pypi.org/project/aiohue/"
+SECTION = "devel/python"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=dab31a1d28183826937f4b152143a33f"
+
+SRC_URI[sha256sum] = "9a5aaf2fa97a889746142093e74143d5e3e02209fae428d1e2dbcc34a5c36efd"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += " \
+	${PYTHON_PN}-aiohttp \
+	${PYTHON_PN}-asyncio-throttle \
+	${PYTHON_PN}-profile \
+        ${PYTHON_PN}-awesomeversion \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-asgiref/run-ptest b/meta-openembedded/meta-python/recipes-devtools/python/python3-asgiref/run-ptest
new file mode 100644
index 0000000..3385d68
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-asgiref/run-ptest
@@ -0,0 +1,3 @@
+#!/bin/sh
+
+pytest -o log_cli=true -o log_cli_level=INFO | sed -e 's/\[...%\]//g'| sed -e 's/PASSED/PASS/g'| sed -e 's/FAILED/FAIL/g'|sed -e 's/SKIPED/SKIP/g'| awk '{if ($NF=="PASS" || $NF=="FAIL" || $NF=="SKIP" || $NF=="XFAIL" || $NF=="XPASS"){printf "%s: %s\n", $NF, $0}else{print}}'| awk '{if ($NF=="PASS" || $NF=="FAIL" || $NF=="SKIP" || $NF=="XFAIL" || $NF=="XPASS") {$NF="";print $0}else{print}}'
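
To make the pipeline above easier to follow: a pytest result line such as the illustrative one below (hypothetical test name) is rewritten into the "RESULT: testname" form that the ptest tooling expects. The first sed strips the percentage marker, the next ones shorten PASSED/FAILED, the first awk prefixes the line with the result, and the second awk drops the now-redundant trailing status field.

    tests/test_sync.py::test_wrapped_case PASSED [ 45%]
    PASS: tests/test_sync.py::test_wrapped_case
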
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-asgiref_3.5.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-asgiref_3.5.2.bb
new file mode 100644
index 0000000..8604791
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-asgiref_3.5.2.bb
@@ -0,0 +1,29 @@
+DESCRIPTION = "ASGI is a standard for Python asynchronous web apps and servers to communicate with each other, and positioned as an asynchronous successor to WSGI."
+HOMEPAGE = "https://pypi.org/project/asgiref/"
+SECTION = "devel/python"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=f09eb47206614a4954c51db8a94840fa"
+
+SRC_URI += "file://run-ptest \
+	    "
+
+SRC_URI[sha256sum] = "4a29362a6acebe09bf1d6640db38c1dc3d9217c68e6f9f6204d72667fc19a424"
+
+export BUILD_SYS
+export HOST_SYS
+
+inherit pypi ptest setuptools3
+
+RDEPENDS:${PN}-ptest += " \
+    ${PYTHON_PN}-pytest \
+    ${PYTHON_PN}-asyncio \
+    ${PYTHON_PN}-io \
+    ${PYTHON_PN}-multiprocessing \
+"
+
+do_install_ptest() {
+    install -d ${D}${PTEST_PATH}/tests
+    cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
+}
+
+BBCLASSEXTEND = "native nativesdk"
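
With the run-ptest script and the tests installed by do_install_ptest above, the suite can be exercised on a target image that includes ptest-runner roughly as follows (assuming the usual ${PN}-named ptest directory):

    ptest-runner python3-asgiref

which runs the packaged run-ptest script and prints results in the standard PASS/FAIL/SKIP ptest format.
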
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-asttokens_2.0.5.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-asttokens_2.0.5.bb
deleted file mode 100644
index da49040..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-asttokens_2.0.5.bb
+++ /dev/null
@@ -1,14 +0,0 @@
-SUMMARY = "The asttokens module annotates Python abstract syntax trees (ASTs)"
-HOMEPAGE = "https://github.com/gristlabs/asttokens"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=e3fc50a88d0a364313df4b21ef20c29e"
-
-PYPI_PACKAGE = "asttokens"
-
-inherit pypi python_setuptools_build_meta
-
-SRC_URI[sha256sum] = "9a54c114f02c7a9480d56550932546a3f1fe71d8a02f1bc7ccd0ee3ee35cf4d5"
-
-DEPENDS += "python3-setuptools-scm-native"
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-asttokens_2.0.8.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-asttokens_2.0.8.bb
new file mode 100644
index 0000000..973c576
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-asttokens_2.0.8.bb
@@ -0,0 +1,18 @@
+SUMMARY = "The asttokens module annotates Python abstract syntax trees (ASTs)"
+HOMEPAGE = "https://github.com/gristlabs/asttokens"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=e3fc50a88d0a364313df4b21ef20c29e"
+
+PYPI_PACKAGE = "asttokens"
+
+inherit pypi python_setuptools_build_meta
+
+SRC_URI[sha256sum] = "c61e16246ecfb2cde2958406b4c8ebc043c9e6d73aaa83c941673b35e5d3a76b"
+
+DEPENDS += "python3-setuptools-scm-native"
+
+RDEPENDS:${PN} += " \
+	${PYTHON_PN}-six \
+"
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-autobahn_22.6.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-autobahn_22.6.1.bb
deleted file mode 100644
index 71da18f..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-autobahn_22.6.1.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-DESCRIPTION = "WebSocket client & server library, WAMP real-time framework"
-HOMEPAGE = "http://crossbar.io/autobahn"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=97c0bda20ad1d845c6369c0e47a1cd98"
-
-SRC_URI[sha256sum] = "fb63e946d5c2dd0df680851e84e65624a494ce87c999f2a4944e4f2d81bf4498"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += " \
-    ${PYTHON_PN}-twisted \
-    ${PYTHON_PN}-zopeinterface \
-    ${PYTHON_PN}-py-ubjson \
-    ${PYTHON_PN}-cbor2 \
-    ${PYTHON_PN}-u-msgpack-python \
-    ${PYTHON_PN}-lz4 \
-    ${PYTHON_PN}-snappy \
-    ${PYTHON_PN}-pyopenssl \
-    ${PYTHON_PN}-txaio \
-    ${PYTHON_PN}-six \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-autobahn_22.7.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-autobahn_22.7.1.bb
new file mode 100644
index 0000000..09957ad
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-autobahn_22.7.1.bb
@@ -0,0 +1,23 @@
+DESCRIPTION = "WebSocket client & server library, WAMP real-time framework"
+HOMEPAGE = "http://crossbar.io/autobahn"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=97c0bda20ad1d845c6369c0e47a1cd98"
+
+SRC_URI[sha256sum] = "8b462ea2e6aad6b4dc0ed45fb800b6cbfeb0325e7fe6983907f122f2be4a1fe9"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-twisted \
+    ${PYTHON_PN}-zopeinterface \
+    ${PYTHON_PN}-py-ubjson \
+    ${PYTHON_PN}-cbor2 \
+    ${PYTHON_PN}-u-msgpack-python \
+    ${PYTHON_PN}-lz4 \
+    ${PYTHON_PN}-snappy \
+    ${PYTHON_PN}-pyopenssl \
+    ${PYTHON_PN}-txaio \
+    ${PYTHON_PN}-six \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-awesomeversion_22.6.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-awesomeversion_22.6.0.bb
deleted file mode 100644
index cadb6f5..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-awesomeversion_22.6.0.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-DESCRIPTION = "One version package to rule them all, One version package to find them, One version package to bring them all, and in the darkness bind them."
-HOMEPAGE = "https://pypi.org/project/awesomeversion/"
-SECTION = "devel/python"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENCE.md;md5=92622b5a8e216099be741d78328bae5d"
-
-SRC_URI[sha256sum] = "38f580bfacf1c06b674bcd0f68e0c445ebb03fcd3700c6a2c588fb9313308e0f"
-
-RDEPENDS:${PN} += "python3-profile python3-logging"
-
-inherit pypi setuptools3
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-awesomeversion_22.8.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-awesomeversion_22.8.0.bb
new file mode 100644
index 0000000..550a8c4
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-awesomeversion_22.8.0.bb
@@ -0,0 +1,11 @@
+DESCRIPTION = "One version package to rule them all, One version package to find them, One version package to bring them all, and in the darkness bind them."
+HOMEPAGE = "https://pypi.org/project/awesomeversion/"
+SECTION = "devel/python"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENCE.md;md5=92622b5a8e216099be741d78328bae5d"
+
+SRC_URI[sha256sum] = "0469a801faafb3a7a13c529edc977ea04c3bce825af9ebb602f58012bc487db5"
+
+RDEPENDS:${PN} += "python3-profile python3-logging"
+
+inherit pypi setuptools3
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-cantools_37.1.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-cantools_37.1.0.bb
deleted file mode 100644
index 1af1310..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-cantools_37.1.0.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-DESCRIPTION = "CAN BUS tools in Python 3."
-HOMEPAGE = "https://github.com/eerimoq/cantools"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=d9aa4ec07de78abae21c490c9ffe61bd"
-
-SRC_URI[sha256sum] = "9f2449e94a7698bd44bb50d9b464788053a0bf070faa09a132535c3aa07e7e6a"
-
-PYPI_PACKAGE = "cantools"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "\
-	${PYTHON_PN}-can \
-	${PYTHON_PN}-bitstruct \
-	${PYTHON_PN}-core \
-	${PYTHON_PN}-textparser \
-	${PYTHON_PN}-typing-extensions \
-	${PYTHON_PN}-diskcache \
-        ${PYTHON_PN}-asyncio \
-"
-
-CLEANBROKEN = "1"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-cantools_37.1.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-cantools_37.1.2.bb
new file mode 100644
index 0000000..ec437ee
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-cantools_37.1.2.bb
@@ -0,0 +1,22 @@
+DESCRIPTION = "CAN BUS tools in Python 3."
+HOMEPAGE = "https://github.com/eerimoq/cantools"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=d9aa4ec07de78abae21c490c9ffe61bd"
+
+SRC_URI[sha256sum] = "0d84b879a18d869d182023cdebae9318095a8959ceee6309de59fd3c399dbfef"
+
+PYPI_PACKAGE = "cantools"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += "\
+	${PYTHON_PN}-can \
+	${PYTHON_PN}-bitstruct \
+	${PYTHON_PN}-core \
+	${PYTHON_PN}-textparser \
+	${PYTHON_PN}-typing-extensions \
+	${PYTHON_PN}-diskcache \
+        ${PYTHON_PN}-asyncio \
+"
+
+CLEANBROKEN = "1"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-charset-normalizer_2.1.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-charset-normalizer_2.1.0.bb
deleted file mode 100644
index 7c3d3ff..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-charset-normalizer_2.1.0.bb
+++ /dev/null
@@ -1,15 +0,0 @@
-SUMMARY = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
-HOMEPAGE = "https://github.com/ousret/charset_normalizer"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=0974a390827087287db39928f7c524b5"
-
-SRC_URI[sha256sum] = "575e708016ff3a5e3681541cb9d79312c416835686d054a23accb873b254f413"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += " \
-	${PYTHON_PN}-core \
-	${PYTHON_PN}-logging \
-	${PYTHON_PN}-codecs \
-	${PYTHON_PN}-json \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-charset-normalizer_2.1.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-charset-normalizer_2.1.1.bb
new file mode 100644
index 0000000..1ab72e5
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-charset-normalizer_2.1.1.bb
@@ -0,0 +1,15 @@
+SUMMARY = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+HOMEPAGE = "https://github.com/ousret/charset_normalizer"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=0974a390827087287db39928f7c524b5"
+
+SRC_URI[sha256sum] = "5a3d016c7c547f69d6f81fb0db9449ce888b418b5b9952cc5e6e66843e9dd845"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += " \
+	${PYTHON_PN}-core \
+	${PYTHON_PN}-logging \
+	${PYTHON_PN}-codecs \
+	${PYTHON_PN}-json \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-coverage_6.4.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-coverage_6.4.1.bb
deleted file mode 100644
index d352cda..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-coverage_6.4.1.bb
+++ /dev/null
@@ -1,16 +0,0 @@
-SUMMARY = "Code coverage measurement for Python"
-HOMEPAGE = "https://coverage.readthedocs.io"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=2ee41112a44fe7014dce33e26468ba93"
-
-SRC_URI[sha256sum] = "4321f075095a096e70aff1d002030ee612b65a205a0a0f5b815280d5dc58100c"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += " \
-	${PYTHON_PN}-sqlite3 \
-	${PYTHON_PN}-core \
-	${PYTHON_PN}-pprint \
-	${PYTHON_PN}-json \
-	${PYTHON_PN}-xml \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-coverage_6.4.4.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-coverage_6.4.4.bb
new file mode 100644
index 0000000..5bff969
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-coverage_6.4.4.bb
@@ -0,0 +1,19 @@
+SUMMARY = "Code coverage measurement for Python"
+HOMEPAGE = "https://coverage.readthedocs.io"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=2ee41112a44fe7014dce33e26468ba93"
+
+SRC_URI[sha256sum] = "e16c45b726acb780e1e6f88b286d3c10b3914ab03438f32117c4aa52d7f30d58"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += " \
+	${PYTHON_PN}-sqlite3 \
+	${PYTHON_PN}-core \
+	${PYTHON_PN}-pprint \
+	${PYTHON_PN}-json \
+	${PYTHON_PN}-xml \
+	${PYTHON_PN}-crypt \
+	${PYTHON_PN}-shell \
+	${PYTHON_PN}-io \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-cytoolz_0.11.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-cytoolz_0.11.2.bb
deleted file mode 100644
index 14f597c..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-cytoolz_0.11.2.bb
+++ /dev/null
@@ -1,12 +0,0 @@
-SUMMARY = "Cython implementation of the toolz package, which provides high \
-performance utility functions for iterables, functions, and dictionaries."
-HOMEPAGE = "https://github.com/pytoolz/cytoolz"
-SECTION = "devel/python"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=efbcddfa5610ca0c07ecfa274a82b373"
-
-SRC_URI[sha256sum] = "ea23663153806edddce7e4153d1d407d62357c05120a4e8485bddf1bd5ab22b4"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "python3-toolz"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-cytoolz_0.12.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-cytoolz_0.12.0.bb
new file mode 100644
index 0000000..b6977b0
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-cytoolz_0.12.0.bb
@@ -0,0 +1,13 @@
+SUMMARY = "Cython implementation of the toolz package, which provides high \
+performance utility functions for iterables, functions, and dictionaries."
+HOMEPAGE = "https://github.com/pytoolz/cytoolz"
+SECTION = "devel/python"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=efbcddfa5610ca0c07ecfa274a82b373"
+
+SRC_URI[sha256sum] = "c105b05f85e03fbcd60244375968e62e44fe798c15a3531c922d531018d22412"
+
+inherit pypi setuptools3
+
+DEPENDS += "${PYTHON_PN}-cython-native"
+RDEPENDS:${PN} += "${PYTHON_PN}-toolz"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-django_2.2.28.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-django_2.2.28.bb
deleted file mode 100644
index 9ef9881..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-django_2.2.28.bb
+++ /dev/null
@@ -1,12 +0,0 @@
-require python-django.inc
-
-# Pin to 2.2.x LTS releases ONLY for this recipe
-UPSTREAM_CHECK_REGEX = "/${PYPI_PACKAGE}/(?P<pver>(2\.2\.\d*)+)/"
-
-inherit setuptools3
-
-SRC_URI[sha256sum] = "0200b657afbf1bc08003845ddda053c7641b9b24951e52acd51f6abda33a7413"
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-sqlparse \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-django_3.2.12.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-django_3.2.12.bb
index adbc498..17d402d 100644
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-django_3.2.12.bb
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-django_3.2.12.bb
@@ -5,9 +5,5 @@
 
 RDEPENDS:${PN} += "\
     ${PYTHON_PN}-sqlparse \
+    ${PYTHON_PN}-asgiref \
 "
-
-# Set DEFAULT_PREFERENCE so that the LTS version of django is built by
-# default. To build the 3.x branch, 
-# PREFERRED_VERSION_python3-django = "3.2.2" can be added to local.conf
-DEFAULT_PREFERENCE = "-1"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-django_4.0.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-django_4.0.2.bb
index 690b980..7f933d1 100644
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-django_4.0.2.bb
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-django_4.0.2.bb
@@ -5,6 +5,7 @@
 
 RDEPENDS:${PN} += "\
     ${PYTHON_PN}-sqlparse \
+    ${PYTHON_PN}-asgiref \
 "
 
 # Set DEFAULT_PREFERENCE so that the LTS version of django is built by
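
With DEFAULT_PREFERENCE = "-1" dropped from the 3.2.12 recipe, BitBake now picks the 3.2.x LTS series by default while the 4.0.x recipe keeps its negative preference. Opting back into 4.0.x is done from configuration, in the same way the removed comment describes; a minimal local.conf sketch (the version string must match an available recipe, 4.0.2 in this tree):

    PREFERRED_VERSION_python3-django = "4.0.2"
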
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-dominate_2.6.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-dominate_2.6.0.bb
deleted file mode 100644
index 8e17438..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-dominate_2.6.0.bb
+++ /dev/null
@@ -1,26 +0,0 @@
-SUMMARY = "Dominate is a Python library for creating and manipulating HTML documents using an elegant DOM API."
-LICENSE = "LGPL-3.0-only"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=b52f2d57d10c4f7ee67a7eb9615d5d24"
-
-SRC_URI[md5sum] = "9f714324ca99eee98bb3c3cdbe838de6"
-SRC_URI[sha256sum] = "76ec2cde23700a6fc4fee098168b9dee43b99c2f1dd0ca6a711f683e8eb7e1e4"
-
-inherit pypi setuptools3 ptest
-
-SRC_URI += " \
-	file://run-ptest \
-"
-
-RDEPENDS:${PN}-ptest += " \
-	${PYTHON_PN}-pytest \
-"
-
-do_install_ptest() {
-	install -d ${D}${PTEST_PATH}/tests
-	cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
-}
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-numbers \
-    ${PYTHON_PN}-threading \
-    "
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-dominate_2.7.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-dominate_2.7.0.bb
new file mode 100644
index 0000000..bee89ed
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-dominate_2.7.0.bb
@@ -0,0 +1,25 @@
+SUMMARY = "Dominate is a Python library for creating and manipulating HTML documents using an elegant DOM API."
+LICENSE = "LGPL-3.0-only"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=b52f2d57d10c4f7ee67a7eb9615d5d24"
+
+SRC_URI[sha256sum] = "520101360892ebf9d0553f67d37e359ff92403d8a1e33814030503088a05da49"
+
+inherit pypi setuptools3 ptest
+
+SRC_URI += " \
+	file://run-ptest \
+"
+
+RDEPENDS:${PN}-ptest += " \
+	${PYTHON_PN}-pytest \
+"
+
+do_install_ptest() {
+	install -d ${D}${PTEST_PATH}/tests
+	cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
+}
+
+RDEPENDS:${PN} += "\
+    ${PYTHON_PN}-numbers \
+    ${PYTHON_PN}-threading \
+    "
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-engineio_4.3.3.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-engineio_4.3.3.bb
deleted file mode 100644
index 1a80a55..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-engineio_4.3.3.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-SUMMARY = "Engine.IO server"
-HOMEPAGE = "https://github.com/miguelgrinberg/python-engineio/"
-SECTION = "devel/python"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=42d0a9e728978f0eeb759c3be91536b8"
-
-inherit pypi python_setuptools_build_meta
-
-PYPI_PACKAGE = "python-engineio"
-
-RDEPENDS:${PN} += " \
-	python3-netclient \
-	python3-json \
-	python3-logging \
-	python3-compression \
-	python3-asyncio \
-"
-
-SRC_URI[sha256sum] = "18474c452894c60590b2d2339d6c81b93fb9857f1be271a2e91fb2707eb4095d"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-engineio_4.3.4.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-engineio_4.3.4.bb
new file mode 100644
index 0000000..d89a54d
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-engineio_4.3.4.bb
@@ -0,0 +1,20 @@
+SUMMARY = "Engine.IO server"
+HOMEPAGE = "https://github.com/miguelgrinberg/python-engineio/"
+SECTION = "devel/python"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=42d0a9e728978f0eeb759c3be91536b8"
+
+inherit pypi python_setuptools_build_meta
+
+PYPI_PACKAGE = "python-engineio"
+
+RDEPENDS:${PN} += " \
+	python3-netclient \
+	python3-json \
+	python3-logging \
+	python3-compression \
+	python3-asyncio \
+"
+
+SRC_URI[sha256sum] = "d8d8b072799c36cadcdcc2b40d2a560ce09797ab3d2d596b2ad519a5e4df19ae"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-abi_3.0.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-abi_3.0.0.bb
deleted file mode 100644
index e1f220a..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-abi_3.0.0.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-SUMMARY = "Python utilities for working with Ethereum ABI definitions, especially encoding and decoding."
-HOMEPAGE = "https://github.com/ethereum/eth-abi"
-SECTION = "devel/python"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=bf9691ead96f1163622689e47ce3f366"
-
-SRC_URI[sha256sum] = "31578b179cf9430c21ac32a4e5f401c14b6e2cc1fd64ca3587cd354068217804"
-
-PYPI_PACKAGE = "eth_abi"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += " \
-    python3-eth-typing \
-    python3-eth-utils \
-    python3-parsimonious \
-    python3-setuptools \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-abi_3.0.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-abi_3.0.1.bb
new file mode 100644
index 0000000..3d8a67b
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-abi_3.0.1.bb
@@ -0,0 +1,18 @@
+SUMMARY = "Python utilities for working with Ethereum ABI definitions, especially encoding and decoding."
+HOMEPAGE = "https://github.com/ethereum/eth-abi"
+SECTION = "devel/python"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=bf9691ead96f1163622689e47ce3f366"
+
+SRC_URI[sha256sum] = "c3872e3ac1e9ef3f8c6599aaca4ee536d536eefca63a6892ab937f0560edb656"
+
+PYPI_PACKAGE = "eth_abi"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += " \
+    python3-eth-typing \
+    python3-eth-utils \
+    python3-parsimonious \
+    python3-setuptools \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-account_0.6.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-account_0.6.1.bb
deleted file mode 100644
index 9c87d3f..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-account_0.6.1.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-SUMMARY = "Assign Ethereum transactions and messages with local private keys."
-HOMEPAGE = "https://github.com/ethereum/eth-account"
-SECTION = "devel/python"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=287820ad3553117aa2f92bf84c219324"
-
-SRC_URI[sha256sum] = "54b0b7d661e73f4cd12d508c9baa5c9a6e8c194aa7bafc39277cd673683ae50e"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += " \
-    python3-bitarray \
-    python3-cytoolz \
-    python3-eth-abi \
-    python3-eth-keyfile \
-    python3-eth-rlp \
-    python3-hexbytes \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-account_0.7.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-account_0.7.0.bb
new file mode 100644
index 0000000..5b8c34e
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-eth-account_0.7.0.bb
@@ -0,0 +1,18 @@
+SUMMARY = "Assign Ethereum transactions and messages with local private keys."
+HOMEPAGE = "https://github.com/ethereum/eth-account"
+SECTION = "devel/python"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=287820ad3553117aa2f92bf84c219324"
+
+SRC_URI[sha256sum] = "61360e9e514e09e49929ed365ca0e1656758ecbd11248c629ad85b4335c2661a"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += " \
+    python3-bitarray \
+    python3-cytoolz \
+    python3-eth-abi \
+    python3-eth-keyfile \
+    python3-eth-rlp \
+    python3-hexbytes \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-fastjsonschema_2.16.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-fastjsonschema_2.16.1.bb
deleted file mode 100644
index bb6c1ee..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-fastjsonschema_2.16.1.bb
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright (C) 2021 Khem Raj <raj.khem@gmail.com>
-# Released under the MIT license (see COPYING.MIT for the terms)
-
-SUMMARY = "Fastest Python implementation of JSON schema"
-HOMEPAGE = "https://github.com/seznam/python-fastjsonschema"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=18950e8362b69c0c617b42b8bd8e7532"
-
-SRCREV = "98399bb4029b2d7020d8abd9770661a5b2c4f9f8"
-PYPI_SRC_URI = "git://github.com/horejsek/python-fastjsonschema;protocol=https;branch=master"
-
-SRC_URI += "file://run-ptest"
-
-inherit ptest pypi setuptools3
-
-S = "${WORKDIR}/git"
-
-do_install_ptest() {
-	install -d ${D}${PTEST_PATH}/tests
-	cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
-}
-
-RDEPENDS:${PN}-ptest += "\
-    python3-colorama \
-    python3-jsonschema \
-    python3-pylint \
-    python3-pytest \
-    python3-pytest-benchmark \
-    python3-pytest-cache \
-"
-RDEPENDS:${PN} += "\
-    python3-core \
-    python3-urllib3 \
-    python3-numbers \
-    python3-pickle \
-    python3-json \
-    "
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-fastjsonschema_2.16.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-fastjsonschema_2.16.2.bb
new file mode 100644
index 0000000..a1d3392
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-fastjsonschema_2.16.2.bb
@@ -0,0 +1,39 @@
+# Copyright (C) 2021 Khem Raj <raj.khem@gmail.com>
+# Released under the MIT license (see COPYING.MIT for the terms)
+
+SUMMARY = "Fastest Python implementation of JSON schema"
+HOMEPAGE = "https://github.com/seznam/python-fastjsonschema"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=18950e8362b69c0c617b42b8bd8e7532"
+
+SRCREV = "1aad747bab39d4b1201ab99917463f4079955ecd"
+PYPI_SRC_URI = "git://github.com/horejsek/python-fastjsonschema;protocol=https;branch=master"
+
+SRC_URI += "file://run-ptest"
+
+inherit ptest pypi setuptools3
+
+S = "${WORKDIR}/git"
+
+do_install_ptest() {
+	install -d ${D}${PTEST_PATH}/tests
+	cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
+}
+
+RDEPENDS:${PN}-ptest += "\
+    python3-colorama \
+    python3-jsonschema \
+    python3-pylint \
+    python3-pytest \
+    python3-pytest-benchmark \
+    python3-pytest-cache \
+"
+RDEPENDS:${PN} += "\
+    python3-core \
+    python3-urllib3 \
+    python3-numbers \
+    python3-pickle \
+    python3-json \
+    "
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-flask-login_0.6.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-flask-login_0.6.1.bb
deleted file mode 100644
index d19acd9..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-flask-login_0.6.1.bb
+++ /dev/null
@@ -1,15 +0,0 @@
-SUMMARY = "User session management for Flask"
-DESCRIPTION = "Flask-Login provides user session management for Flask. \
-It handles the common tasks of logging in, logging out, and remembering \
-your users’ sessions over extended periods of time."
-HOMEPAGE = " https://github.com/maxcountryman/flask-login"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=8aa87a1cd9fa41d969ad32cfdac2c596"
-
-SRC_URI[sha256sum] = "1306d474a270a036d6fd14f45640c4d77355e4f1c67ca4331b372d3448997b8c"
-
-PYPI_PACKAGE = "Flask-Login"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN}:class-target = "${PYTHON_PN}-flask"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-flask-login_0.6.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-flask-login_0.6.2.bb
new file mode 100644
index 0000000..97d7bce
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-flask-login_0.6.2.bb
@@ -0,0 +1,15 @@
+SUMMARY = "User session management for Flask"
+DESCRIPTION = "Flask-Login provides user session management for Flask. \
+It handles the common tasks of logging in, logging out, and remembering \
+your users’ sessions over extended periods of time."
+HOMEPAGE = " https://github.com/maxcountryman/flask-login"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=8aa87a1cd9fa41d969ad32cfdac2c596"
+
+SRC_URI[sha256sum] = "c0a7baa9fdc448cdd3dd6f0939df72eec5177b2f7abe6cb82fc934d29caac9c3"
+
+PYPI_PACKAGE = "Flask-Login"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN}:class-target = "${PYTHON_PN}-flask"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-flask_2.1.3.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-flask_2.1.3.bb
deleted file mode 100644
index 95abddf..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-flask_2.1.3.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "A microframework based on Werkzeug, Jinja2 and good intentions"
-DESCRIPTION = "\
-Flask is a microframework for Python based on Werkzeug, Jinja 2 and good \
-intentions. And before you ask: It’s BSD licensed!"
-HOMEPAGE = "https://github.com/mitsuhiko/flask/"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE.rst;md5=ffeffa59c90c9c4a033c7574f8f3fb75"
-
-SRC_URI[sha256sum] = "15972e5017df0575c3d6c090ba168b6db90259e620ac8d7ea813a396bad5b6cb"
-
-PYPI_PACKAGE = "Flask"
-
-inherit pypi setuptools3
-
-CLEANBROKEN = "1"
-
-RDEPENDS:${PN} = " \
-    ${PYTHON_PN}-werkzeug \
-    ${PYTHON_PN}-jinja2 \
-    ${PYTHON_PN}-itsdangerous \
-    ${PYTHON_PN}-click \
-    ${PYTHON_PN}-profile \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-flask_2.2.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-flask_2.2.2.bb
new file mode 100644
index 0000000..cceab72
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-flask_2.2.2.bb
@@ -0,0 +1,23 @@
+SUMMARY = "A microframework based on Werkzeug, Jinja2 and good intentions"
+DESCRIPTION = "\
+Flask is a microframework for Python based on Werkzeug, Jinja 2 and good \
+intentions. And before you ask: It’s BSD licensed!"
+HOMEPAGE = "https://github.com/mitsuhiko/flask/"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.rst;md5=ffeffa59c90c9c4a033c7574f8f3fb75"
+
+SRC_URI[sha256sum] = "642c450d19c4ad482f96729bd2a8f6d32554aa1e231f4f6b4e7e5264b16cca2b"
+
+PYPI_PACKAGE = "Flask"
+
+inherit pypi setuptools3
+
+CLEANBROKEN = "1"
+
+RDEPENDS:${PN} = " \
+    ${PYTHON_PN}-werkzeug \
+    ${PYTHON_PN}-jinja2 \
+    ${PYTHON_PN}-itsdangerous \
+    ${PYTHON_PN}-click \
+    ${PYTHON_PN}-profile \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-frozenlist_1.3.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-frozenlist_1.3.0.bb
deleted file mode 100644
index 4dd7738..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-frozenlist_1.3.0.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-SUMMARY = "A list-like structure which implements collections.abc.MutableSequence, and which can be made immutable."
-HOMEPAGE = "https://github.com/aio-libs/frozenlist"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=cf056e8e7a0a5477451af18b7b5aa98c"
-
-SRC_URI[sha256sum] = "ce6f2ba0edb7b0c1d8976565298ad2deba6f8064d2bebb6ffce2ca896eb35b0b"
-
-inherit pypi python_setuptools_build_meta
-
-BBCLASSEXTEND = "native nativesdk"
-
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-frozenlist_1.3.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-frozenlist_1.3.1.bb
new file mode 100644
index 0000000..b95a014
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-frozenlist_1.3.1.bb
@@ -0,0 +1,11 @@
+SUMMARY = "A list-like structure which implements collections.abc.MutableSequence, and which can be made immutable."
+HOMEPAGE = "https://github.com/aio-libs/frozenlist"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=cf056e8e7a0a5477451af18b7b5aa98c"
+
+SRC_URI[sha256sum] = "3a735e4211a04ccfa3f4833547acdf5d2f863bfeb01cfd3edaffbc251f15cec8"
+
+inherit pypi python_setuptools_build_meta
+
+BBCLASSEXTEND = "native nativesdk"
+
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-gcovr_5.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-gcovr_5.1.bb
deleted file mode 100644
index 995f3b7..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-gcovr_5.1.bb
+++ /dev/null
@@ -1,17 +0,0 @@
-DESCRIPTION = "generate GCC code coverage reports"
-HOMEPAGE = "https://gcovr.com"
-SECTION = "devel/python"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=08208c66520e8d69d5367483186d94ed"
-
-SRC_URI = "git://github.com/gcovr/gcovr.git;branch=master;protocol=https"
-SRCREV = "e71e883521b78122c49016eb4e510e6da06c6916"
-
-S = "${WORKDIR}/git"
-
-inherit setuptools3
-PIP_INSTALL_PACKAGE = "gcovr"
-
-RDEPENDS:${PN} += "${PYTHON_PN}-jinja2 ${PYTHON_PN}-lxml ${PYTHON_PN}-setuptools ${PYTHON_PN}-pygments"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-gcovr_5.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-gcovr_5.2.bb
new file mode 100644
index 0000000..03231f9
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-gcovr_5.2.bb
@@ -0,0 +1,17 @@
+DESCRIPTION = "generate GCC code coverage reports"
+HOMEPAGE = "https://gcovr.com"
+SECTION = "devel/python"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=e59af597b3484fa3b52c0fbfd5d17611"
+
+SRC_URI = "git://github.com/gcovr/gcovr.git;branch=master;protocol=https"
+SRCREV = "1040a85ecfb3ef0d01635df9d50a3cae5059d566"
+
+S = "${WORKDIR}/git"
+
+inherit setuptools3
+PIP_INSTALL_PACKAGE = "gcovr"
+
+RDEPENDS:${PN} += "${PYTHON_PN}-jinja2 ${PYTHON_PN}-lxml ${PYTHON_PN}-setuptools ${PYTHON_PN}-pygments"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-google-api-python-client_2.54.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-google-api-python-client_2.54.0.bb
deleted file mode 100644
index af1b934..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-google-api-python-client_2.54.0.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-SUMMARY = "The Google API Client for Python is a client library for accessing the Plus, \
-Moderator, and many other Google APIs."
-HOMEPAGE = "https://github.com/googleapis/google-api-python-client"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
-
-SRC_URI[sha256sum] = "90ebbae53025545b45962c0bc9874640511f35e929df773d034f40d9464c86af"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-logging \
-    ${PYTHON_PN}-six \
-    ${PYTHON_PN}-json \
-    ${PYTHON_PN}-core \
-    ${PYTHON_PN}-netclient \
-    ${PYTHON_PN}-httplib2 \
-    ${PYTHON_PN}-uritemplate \
-    ${PYTHON_PN}-google-api-core \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-google-api-python-client_2.57.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-google-api-python-client_2.57.0.bb
new file mode 100644
index 0000000..fdbad82
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-google-api-python-client_2.57.0.bb
@@ -0,0 +1,20 @@
+SUMMARY = "The Google API Client for Python is a client library for accessing the Plus, \
+Moderator, and many other Google APIs."
+HOMEPAGE = "https://github.com/googleapis/google-api-python-client"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
+
+SRC_URI[sha256sum] = "ec4412545b0c5978a833bb03993a46121ad2c700f32af0cba23f8439b3f5fb02"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += "\
+    ${PYTHON_PN}-logging \
+    ${PYTHON_PN}-six \
+    ${PYTHON_PN}-json \
+    ${PYTHON_PN}-core \
+    ${PYTHON_PN}-netclient \
+    ${PYTHON_PN}-httplib2 \
+    ${PYTHON_PN}-uritemplate \
+    ${PYTHON_PN}-google-api-core \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-google-auth_2.11.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-google-auth_2.11.0.bb
new file mode 100644
index 0000000..d1698f5
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-google-auth_2.11.0.bb
@@ -0,0 +1,27 @@
+DESCRIPTION = "Google Authentication Library"
+HOMEPAGE = "https://github.com/googleapis/google-auth-library-python"
+AUTHOR = "Google Cloud Platform"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
+
+inherit pypi setuptools3
+
+SRC_URI[sha256sum] = "ed65ecf9f681832298e29328e1ef0a3676e3732b2e56f41532d45f70a22de0fb"
+
+RDEPENDS:${PN} += "\
+    ${PYTHON_PN}-asyncio \
+    ${PYTHON_PN}-datetime \
+    ${PYTHON_PN}-io \
+    ${PYTHON_PN}-json \
+    ${PYTHON_PN}-logging \
+    ${PYTHON_PN}-netclient \
+    ${PYTHON_PN}-numbers \
+"
+
+RDEPENDS:${PN} += "\
+    ${PYTHON_PN}-aiohttp \
+    ${PYTHON_PN}-cachetools \
+    ${PYTHON_PN}-pyasn1-modules \
+    ${PYTHON_PN}-rsa \
+    ${PYTHON_PN}-six \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-google-auth_2.9.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-google-auth_2.9.1.bb
deleted file mode 100644
index e884aba..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-google-auth_2.9.1.bb
+++ /dev/null
@@ -1,27 +0,0 @@
-DESCRIPTION = "Google Authentication Library"
-HOMEPAGE = "https://github.com/googleapis/google-auth-library-python"
-AUTHOR = "Google Cloud Platform"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
-
-inherit pypi setuptools3
-
-SRC_URI[sha256sum] = "14292fa3429f2bb1e99862554cde1ee730d6840ebae067814d3d15d8549c0888"
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-asyncio \
-    ${PYTHON_PN}-datetime \
-    ${PYTHON_PN}-io \
-    ${PYTHON_PN}-json \
-    ${PYTHON_PN}-logging \
-    ${PYTHON_PN}-netclient \
-    ${PYTHON_PN}-numbers \
-"
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-aiohttp \
-    ${PYTHON_PN}-cachetools \
-    ${PYTHON_PN}-pyasn1-modules \
-    ${PYTHON_PN}-rsa \
-    ${PYTHON_PN}-six \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio-tools_1.47.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio-tools_1.47.0.bb
deleted file mode 100644
index 90c8efe..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio-tools_1.47.0.bb
+++ /dev/null
@@ -1,17 +0,0 @@
-DESCRIPTION = "Google gRPC tools"
-HOMEPAGE = "http://www.grpc.io/"
-SECTION = "devel/python"
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://PKG-INFO;beginline=8;endline=8;md5=7145f7cdd263359b62d342a02f005515"
-
-inherit pypi setuptools3
-
-DEPENDS += "${PYTHON_PN}-grpcio"
-
-SRC_URI += "file://0001-setup.py-Do-not-mix-C-and-C-compiler-options.patch"
-SRC_URI[sha256sum] = "f64b5378484be1d6ce59311f86174be29c8ff98d8d90f589e1c56d5acae67d3c"
-
-RDEPENDS:${PN} = "${PYTHON_PN}-grpcio"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio-tools_1.48.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio-tools_1.48.0.bb
new file mode 100644
index 0000000..fcd2840
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio-tools_1.48.0.bb
@@ -0,0 +1,17 @@
+DESCRIPTION = "Google gRPC tools"
+HOMEPAGE = "http://www.grpc.io/"
+SECTION = "devel/python"
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://PKG-INFO;beginline=8;endline=8;md5=7145f7cdd263359b62d342a02f005515"
+
+inherit pypi setuptools3
+
+DEPENDS += "${PYTHON_PN}-grpcio"
+
+SRC_URI += "file://0001-setup.py-Do-not-mix-C-and-C-compiler-options.patch"
+SRC_URI[sha256sum] = "dd7f757608e7dfae4ab2e7fc1e8951e6eb9526ebdc7ce90597329bc4c408c9a1"
+
+RDEPENDS:${PN} = "${PYTHON_PN}-grpcio"
+
+BBCLASSEXTEND = "native nativesdk"
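
Since the tools recipe extends to native and nativesdk, a consuming recipe that needs to generate Python gRPC stubs at build time would typically pull in the native variant, e.g. as a sketch:

    DEPENDS += "python3-grpcio-tools-native"
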
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio/0001-absl-always-use-asm-sgidefs.h.patch b/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio/0001-absl-always-use-asm-sgidefs.h.patch
deleted file mode 100644
index be516ca..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio/0001-absl-always-use-asm-sgidefs.h.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-From 8f21fdfb83b0fa844a9f1f03a86a9ca46642d85e Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 9 Apr 2020 13:06:27 -0700
-Subject: [PATCH 1/2] absl: always use <asm/sgidefs.h>
-
-Fixes mips/musl build, since sgidefs.h is not present on all C libraries
-but on linux asm/sgidefs.h is there and contains same definitions, using
-that makes it portable.
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- third_party/abseil-cpp/absl/base/internal/direct_mmap.h | 6 +-----
- 1 file changed, 1 insertion(+), 5 deletions(-)
-
---- a/third_party/abseil-cpp/absl/base/internal/direct_mmap.h
-+++ b/third_party/abseil-cpp/absl/base/internal/direct_mmap.h
-@@ -41,13 +41,9 @@
- 
- #ifdef __mips__
- // Include definitions of the ABI currently in use.
--#ifdef __BIONIC__
--// Android doesn't have sgidefs.h, but does have asm/sgidefs.h, which has the
-+// bionic/musl C libs don't have sgidefs.h, but do have asm/sgidefs.h, which has the
- // definitions we need.
- #include <asm/sgidefs.h>
--#else
--#include <sgidefs.h>
--#endif  // __BIONIC__
- #endif  // __mips__
- 
- // SYS_mmap and SYS_munmap are not defined in Android.
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio/abseil-ppc-fixes.patch b/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio/abseil-ppc-fixes.patch
index e8048fe..c5fdcd6 100644
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio/abseil-ppc-fixes.patch
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio/abseil-ppc-fixes.patch
@@ -8,7 +8,16 @@
 Sourced from void linux
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Xu Huan <xuhuan.fnst@fujitsu.com>
+---
+ absl/base/internal/unscaledcycleclock.cc    | 4 ++--
+ absl/base/internal/unscaledcycleclock.h     | 3 ++-
+ absl/debugging/internal/examine_stack.cc    | 8 +++++++-
+ absl/debugging/internal/stacktrace_config.h | 2 +-
+ 4 files changed, 12 insertions(+), 5 deletions(-)
 
+diff --git a/absl/base/internal/unscaledcycleclock.cc b/absl/base/internal/unscaledcycleclock.cc
+index b1c396c..d62bfd6 100644
 --- a/absl/base/internal/unscaledcycleclock.cc
 +++ b/absl/base/internal/unscaledcycleclock.cc
 @@ -20,7 +20,7 @@
@@ -20,7 +29,7 @@
  #ifdef __GLIBC__
  #include <sys/platform/ppc.h>
  #elif defined(__FreeBSD__)
-@@ -59,7 +59,7 @@ double UnscaledCycleClock::Frequency() {
+@@ -58,7 +58,7 @@ double UnscaledCycleClock::Frequency() {
    return base_internal::NominalCPUFrequency();
  }
  
@@ -29,6 +38,8 @@
  
  int64_t UnscaledCycleClock::Now() {
  #ifdef __GLIBC__
+diff --git a/absl/base/internal/unscaledcycleclock.h b/absl/base/internal/unscaledcycleclock.h
+index 2cbeae3..683a5ef 100644
 --- a/absl/base/internal/unscaledcycleclock.h
 +++ b/absl/base/internal/unscaledcycleclock.h
 @@ -46,7 +46,8 @@
@@ -38,12 +49,14 @@
 -    defined(__powerpc__) || defined(__ppc__) || defined(__riscv) ||     \
 +    ((defined(__powerpc__) || defined(__ppc__)) && defined(__GLIBC__)) || \
 +    defined(__riscv) ||     \
-     defined(_M_IX86) || defined(_M_X64)
+     defined(_M_IX86) || (defined(_M_X64) && !defined(_M_ARM64EC))
  #define ABSL_HAVE_UNSCALED_CYCLECLOCK_IMPLEMENTATION 1
  #else
+diff --git a/absl/debugging/internal/examine_stack.cc b/absl/debugging/internal/examine_stack.cc
+index 5bdd341..a784e0d 100644
 --- a/absl/debugging/internal/examine_stack.cc
 +++ b/absl/debugging/internal/examine_stack.cc
-@@ -27,6 +27,10 @@
+@@ -33,6 +33,10 @@
  #include <csignal>
  #include <cstdio>
  
@@ -54,7 +67,7 @@
  #include "absl/base/attributes.h"
  #include "absl/base/internal/raw_logging.h"
  #include "absl/base/macros.h"
-@@ -63,8 +67,10 @@ void* GetProgramCounter(void* vuc) {
+@@ -174,8 +178,10 @@ void* GetProgramCounter(void* const vuc) {
      return reinterpret_cast<void*>(context->uc_mcontext.pc);
  #elif defined(__powerpc64__)
      return reinterpret_cast<void*>(context->uc_mcontext.gp_regs[32]);
@@ -66,9 +79,11 @@
  #elif defined(__riscv)
      return reinterpret_cast<void*>(context->uc_mcontext.__gregs[REG_PC]);
  #elif defined(__s390__) && !defined(__s390x__)
+diff --git a/absl/debugging/internal/stacktrace_config.h b/absl/debugging/internal/stacktrace_config.h
+index 3929b1b..23d5e50 100644
 --- a/absl/debugging/internal/stacktrace_config.h
 +++ b/absl/debugging/internal/stacktrace_config.h
-@@ -59,7 +59,7 @@
+@@ -60,7 +60,7 @@
  #elif defined(__i386__) || defined(__x86_64__)
  #define ABSL_STACKTRACE_INL_HEADER \
    "absl/debugging/internal/stacktrace_x86-inl.inc"
@@ -77,3 +92,6 @@
  #define ABSL_STACKTRACE_INL_HEADER \
    "absl/debugging/internal/stacktrace_powerpc-inl.inc"
  #elif defined(__aarch64__)
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio_1.47.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio_1.47.0.bb
deleted file mode 100644
index 24c6e53..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio_1.47.0.bb
+++ /dev/null
@@ -1,46 +0,0 @@
-DESCRIPTION = "Google gRPC"
-HOMEPAGE = "http://www.grpc.io/"
-SECTION = "devel/python"
-LICENSE = "Apache-2.0 & BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=6e4cf218112648d22420a84281b68b88"
-
-DEPENDS += "${PYTHON_PN}-protobuf"
-
-SRC_URI += "file://0001-setup.py-Do-not-mix-C-and-C-compiler-options.patch"
-SRC_URI:append:class-target = " file://ppc-boringssl-support.patch \
-                                file://boring_ssl.patch \
-                                file://mips_bigendian.patch \
-                                file://0001-absl-always-use-asm-sgidefs.h.patch \
-                                file://abseil-ppc-fixes.patch;patchdir=third_party/abseil-cpp \
-"
-SRC_URI[sha256sum] = "5dbba95fab9b35957b4977b8904fc1fa56b302f9051eff4d7716ebb0c087f801"
-
-RDEPENDS:${PN} = "${PYTHON_PN}-protobuf \
-                  ${PYTHON_PN}-setuptools \
-                  ${PYTHON_PN}-six \
-"
-
-inherit setuptools3
-inherit pypi
-
-export GRPC_PYTHON_DISABLE_LIBC_COMPATIBILITY = "1"
-
-BORING_SSL_PLATFORM:arm = "linux-arm"
-BORING_SSL_PLATFORM:x86-64 = "linux-x86_64"
-BORING_SSL_PLATFORM ?= "unsupported"
-export GRPC_BORING_SSL_PLATFORM = "${BORING_SSL_PLATFORM}"
-
-BORING_SSL:x86-64 = "1"
-BORING_SSL:arm = "1"
-BORING_SSL ?= "0"
-export GRPC_BUILD_WITH_BORING_SSL_ASM = "${BORING_SSL}"
-
-GRPC_CFLAGS ?= ""
-GRPC_CFLAGS:append:toolchain-clang = " -fvisibility=hidden -fno-wrapv -fno-exceptions"
-export GRPC_PYTHON_CFLAGS = "${GRPC_CFLAGS}"
-
-CLEANBROKEN = "1"
-
-BBCLASSEXTEND = "native nativesdk"
-
-CCACHE_DISABLE = "1"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio_1.48.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio_1.48.0.bb
new file mode 100644
index 0000000..a16b880
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-grpcio_1.48.0.bb
@@ -0,0 +1,47 @@
+DESCRIPTION = "Google gRPC"
+HOMEPAGE = "http://www.grpc.io/"
+SECTION = "devel/python"
+LICENSE = "Apache-2.0 & BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=731e401b36f8077ae0c134b59be5c906"
+
+DEPENDS += "${PYTHON_PN}-protobuf"
+
+SRC_URI += "file://0001-setup.py-Do-not-mix-C-and-C-compiler-options.patch"
+SRC_URI:append:class-target = " file://ppc-boringssl-support.patch \
+                                file://boring_ssl.patch \
+                                file://mips_bigendian.patch \
+                                file://abseil-ppc-fixes.patch;patchdir=third_party/abseil-cpp \
+"
+SRC_URI[sha256sum] = "eaf4bb73819863440727195411ab3b5c304f6663625e66f348e91ebe0a039306"
+
+RDEPENDS:${PN} = "${PYTHON_PN}-protobuf \
+                  ${PYTHON_PN}-setuptools \
+                  ${PYTHON_PN}-six \
+"
+
+inherit setuptools3
+inherit pypi
+
+CFLAGS += "-D_LARGEFILE64_SOURCE"
+
+export GRPC_PYTHON_DISABLE_LIBC_COMPATIBILITY = "1"
+
+BORING_SSL_PLATFORM:arm = "linux-arm"
+BORING_SSL_PLATFORM:x86-64 = "linux-x86_64"
+BORING_SSL_PLATFORM ?= "unsupported"
+export GRPC_BORING_SSL_PLATFORM = "${BORING_SSL_PLATFORM}"
+
+BORING_SSL:x86-64 = "1"
+BORING_SSL:arm = "1"
+BORING_SSL ?= "0"
+export GRPC_BUILD_WITH_BORING_SSL_ASM = "${BORING_SSL}"
+
+GRPC_CFLAGS ?= ""
+GRPC_CFLAGS:append:toolchain-clang = " -fvisibility=hidden -fno-wrapv -fno-exceptions"
+export GRPC_PYTHON_CFLAGS = "${GRPC_CFLAGS}"
+
+CLEANBROKEN = "1"
+
+BBCLASSEXTEND = "native nativesdk"
+
+CCACHE_DISABLE = "1"
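
The BoringSSL assembly switches above are ordinary override assignments: x86-64 and arm opt in explicitly, and every other architecture falls back to the "?=" defaults (ASM off, platform "unsupported"). Purely as an illustrative sketch, another architecture could opt in the same way, provided the bundled BoringSSL actually ships assembly for it; the platform string below is a hypothetical value that would need verifying against the grpcio sources:

    BORING_SSL:aarch64 = "1"
    BORING_SSL_PLATFORM:aarch64 = "linux-aarch64"
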
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-haversine_2.5.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-haversine_2.5.1.bb
deleted file mode 100644
index 1ed67d3..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-haversine_2.5.1.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-SUMMARY = "Calculate the distance between 2 points on Earth"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
-
-SRC_URI[sha256sum] = "357e41dfddc4a0f2b1c941d92a590cac840f7ce4b3da14b45b68d968b3ad7cc7"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "python3-numpy"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-haversine_2.6.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-haversine_2.6.0.bb
new file mode 100644
index 0000000..6577918
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-haversine_2.6.0.bb
@@ -0,0 +1,11 @@
+SUMMARY = "Calculate the distance between 2 points on Earth"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
+
+SRC_URI[sha256sum] = "eb7c308ba721e86662c1d50427cb9b06f9e7eb9984803d9ec200582425cc4fb7"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += "python3-numpy"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-hexbytes_0.2.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-hexbytes_0.2.2.bb
deleted file mode 100644
index 89792d9..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-hexbytes_0.2.2.bb
+++ /dev/null
@@ -1,9 +0,0 @@
-SUMMARY = "Python bytes subclass that decodes hex, with a readable console output."
-HOMEPAGE = "https://github.com/ethereum/hexbytes"
-SECTION = "devel/python"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=287820ad3553117aa2f92bf84c219324"
-
-SRC_URI[sha256sum] = "a5881304d186e87578fb263a85317c808cf130e1d4b3d37d30142ab0f7898d03"
-
-inherit pypi setuptools3
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-hexbytes_0.3.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-hexbytes_0.3.0.bb
new file mode 100644
index 0000000..8402dde
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-hexbytes_0.3.0.bb
@@ -0,0 +1,9 @@
+SUMMARY = "Python bytes subclass that decodes hex, with a readable console output."
+HOMEPAGE = "https://github.com/ethereum/hexbytes"
+SECTION = "devel/python"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=287820ad3553117aa2f92bf84c219324"
+
+SRC_URI[sha256sum] = "afeebfb800f5f15a3ca5bab52e49eabcb4b6dac06ec8ff01a94fdb890c6c0712"
+
+inherit pypi setuptools3
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-humanize_4.2.3.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-humanize_4.2.3.bb
deleted file mode 100644
index 2e4822c..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-humanize_4.2.3.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-SUMMARY = "Python humanize utilities"
-HOMEPAGE = "http://github.com/jmoiron/humanize"
-SECTION = "devel/python"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENCE;md5=4ecc42519e84f6f3e23529464df7bd1d"
-
-SRC_URI[sha256sum] = "2bc1fdd831cd00557d3010abdd84d3e41b4a96703a3eaf6c24ee290b26b75a44"
-
-inherit pypi python_setuptools_build_meta
-
-DEPENDS += "\
-    ${PYTHON_PN}-setuptools-scm-native \
-"
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-datetime \
-    ${PYTHON_PN}-setuptools \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-humanize_4.3.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-humanize_4.3.0.bb
new file mode 100644
index 0000000..55386b6
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-humanize_4.3.0.bb
@@ -0,0 +1,21 @@
+SUMMARY = "Python humanize utilities"
+HOMEPAGE = "http://github.com/jmoiron/humanize"
+SECTION = "devel/python"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENCE;md5=4ecc42519e84f6f3e23529464df7bd1d"
+
+SRC_URI[sha256sum] = "0dfac79fe8c1c0c734c14177b07b857bad9ae30dd50daa0a14e2c3d8054ee0c4"
+
+inherit pypi python_setuptools_build_meta
+
+DEPENDS += "\
+    ${PYTHON_PN}-setuptools-scm-native \
+"
+
+RDEPENDS:${PN} += "\
+    ${PYTHON_PN}-datetime \
+    ${PYTHON_PN}-setuptools \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-imageio_2.19.5.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-imageio_2.19.5.bb
deleted file mode 100644
index 6fa4393..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-imageio_2.19.5.bb
+++ /dev/null
@@ -1,12 +0,0 @@
-SUMMARY = "Python library that provides an easy interface to read and \
-write a wide range of image data, including animated images, video, \
-volumetric data, and scientific formats."
-SECTION = "devel/python"
-LICENSE = "BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=24cb9a367a9e641b459a01c4d15256ba"
-
-SRC_URI[sha256sum] = "eb3cd70de8be87b72ea85716b7363c700b91144589ee6b5d7b49d42998b7d185"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} = "python3-numpy python3-pillow"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-imageio_2.21.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-imageio_2.21.1.bb
new file mode 100644
index 0000000..5c792a5
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-imageio_2.21.1.bb
@@ -0,0 +1,12 @@
+SUMMARY = "Python library that provides an easy interface to read and \
+write a wide range of image data, including animated images, video, \
+volumetric data, and scientific formats."
+SECTION = "devel/python"
+LICENSE = "BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=24cb9a367a9e641b459a01c4d15256ba"
+
+SRC_URI[sha256sum] = "5f0278217c1cf99d90ef855dab948f93d9fce0ab7ab388e13a597c706b7ec4e5"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} = "python3-numpy python3-pillow"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-jsonrpcserver_5.0.7.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-jsonrpcserver_5.0.7.bb
deleted file mode 100644
index 12e9003..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-jsonrpcserver_5.0.7.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "Library to process JSON-RPC requests"
-HOMEPAGE = "https://github.com/explodinglabs/jsonrpcserver"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=61b63ea9d36f6fb63ddaaaac8265304f"
-
-SRC_URI[sha256sum] = "b15d3fd043ad0c40b2ff17f7df2ddaec2e880bb923b40d133939a107c97fde5c"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "\
-    python3-apply-defaults \
-    python3-asyncio \
-    python3-core \
-    python3-json \
-    python3-jsonschema \
-    python3-logging \
-    python3-netclient \
-    python3-pkgutil \
-    python3-oslash \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-jsonrpcserver_5.0.8.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-jsonrpcserver_5.0.8.bb
new file mode 100644
index 0000000..88b1d68
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-jsonrpcserver_5.0.8.bb
@@ -0,0 +1,23 @@
+SUMMARY = "Library to process JSON-RPC requests"
+HOMEPAGE = "https://github.com/explodinglabs/jsonrpcserver"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=61b63ea9d36f6fb63ddaaaac8265304f"
+
+SRC_URI[sha256sum] = "5150071e4abc9a93f086aa0fd0004dfe0410de66adfaaf513613baa2c2fc00d7"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += "\
+    python3-apply-defaults \
+    python3-asyncio \
+    python3-core \
+    python3-json \
+    python3-jsonschema \
+    python3-logging \
+    python3-netclient \
+    python3-pkgutil \
+    python3-oslash \
+    python3-typing-extensions \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-matplotlib_3.5.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-matplotlib_3.5.1.bb
deleted file mode 100644
index cd05b45..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-matplotlib_3.5.1.bb
+++ /dev/null
@@ -1,65 +0,0 @@
-SUMMARY = "matplotlib: plotting with Python"
-DESCRIPTION = "\
-Matplotlib is a Python 2D plotting library which produces \
-publication-quality figures in a variety of hardcopy formats \
-and interactive environments across platforms."
-HOMEPAGE = "https://github.com/matplotlib/matplotlib"
-SECTION = "devel/python"
-LICENSE = "PSF-2.0"
-LIC_FILES_CHKSUM = "\
-    file://setup.py;beginline=296;endline=296;md5=20e7ab4d2b2b1395a0e4ab800181eb96 \
-    file://LICENSE/LICENSE;md5=afec61498aa5f0c45936687da9a53d74 \
-"
-
-DEPENDS = "\
-    freetype \
-    libpng \
-    ${PYTHON_PN}-numpy-native \
-    ${PYTHON_PN}-pip-native \
-    ${PYTHON_PN}-dateutil-native \
-    ${PYTHON_PN}-pytz-native \
-    ${PYTHON_PN}-certifi-native \
-"
-
-SRC_URI[sha256sum] = "b2e9810e09c3a47b73ce9cab5a72243a1258f61e7900969097a817232246ce1c"
-
-inherit pypi setuptools3 pkgconfig
-
-# Stop the component from attempting to download when it detects a missing
-# dependency
-SRC_URI += "file://matplotlib-disable-download.patch"
-
-# This python module requires a full copy of freetype-2.6.1
-SRC_URI += "https://downloads.sourceforge.net/project/freetype/freetype2/2.6.1/freetype-2.6.1.tar.gz;name=freetype;subdir=matplotlib-${PV}/build"
-SRC_URI[freetype.sha256sum] = "0a3c7dfbda6da1e8fce29232e8e96d987ababbbf71ebc8c75659e4132c367014"
-
-# This python module requires a full copy of 'qhull-2020'
-SRC_URI += "http://www.qhull.org/download/qhull-2020-src-8.0.2.tgz;name=qhull;subdir=matplotlib-${PV}/build"
-SRC_URI[qhull.sha256sum] = "b5c2d7eb833278881b952c8a52d20179eab87766b00b865000469a45c1838b7e"
-
-# LTO with clang needs lld
-LDFLAGS:append:toolchain-clang = " -fuse-ld=lld"
-LDFLAGS:remove:toolchain-clang:mips = "-fuse-ld=lld"
-
-RDEPENDS:${PN} = "\
-    freetype \
-    libpng \
-    ${PYTHON_PN}-numpy \
-    ${PYTHON_PN}-pyparsing \
-    ${PYTHON_PN}-cycler \
-    ${PYTHON_PN}-dateutil \
-    ${PYTHON_PN}-kiwisolver \
-    ${PYTHON_PN}-pytz \
-    ${PYTHON_PN}-pillow \
-"
-
-ENABLELTO:toolchain-clang:riscv64 = "echo enable_lto = False >> ${S}/mplsetup.cfg"
-ENABLELTO:toolchain-clang:riscv32 = "echo enable_lto = False >> ${S}/mplsetup.cfg"
-ENABLELTO:toolchain-clang:mips = "echo enable_lto = False >> ${S}/mplsetup.cfg"
-do_compile:prepend() {
-    echo [libs] > ${S}/mplsetup.cfg
-    echo system_freetype = True >> ${S}/mplsetup.cfg
-    ${ENABLELTO}
-}
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-matplotlib_3.5.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-matplotlib_3.5.2.bb
new file mode 100644
index 0000000..eaa1447
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-matplotlib_3.5.2.bb
@@ -0,0 +1,66 @@
+SUMMARY = "matplotlib: plotting with Python"
+DESCRIPTION = "\
+Matplotlib is a Python 2D plotting library which produces \
+publication-quality figures in a variety of hardcopy formats \
+and interactive environments across platforms."
+HOMEPAGE = "https://github.com/matplotlib/matplotlib"
+SECTION = "devel/python"
+LICENSE = "PSF-2.0"
+LIC_FILES_CHKSUM = "\
+    file://setup.py;beginline=296;endline=296;md5=20e7ab4d2b2b1395a0e4ab800181eb96 \
+    file://LICENSE/LICENSE;md5=afec61498aa5f0c45936687da9a53d74 \
+"
+
+DEPENDS = "\
+    freetype \
+    libpng \
+    python3-numpy-native \
+    python3-pip-native \
+    python3-dateutil-native \
+    python3-pytz-native \
+    python3-certifi-native \
+"
+
+SRC_URI[sha256sum] = "48cf850ce14fa18067f2d9e0d646763681948487a8080ec0af2686468b4607a2"
+
+inherit pypi setuptools3 pkgconfig
+
+# Stop the component from attempting to download when it detects a missing
+# dependency
+SRC_URI += "file://matplotlib-disable-download.patch"
+
+# This python module requires a full copy of freetype-2.6.1
+SRC_URI += "https://downloads.sourceforge.net/project/freetype/freetype2/2.6.1/freetype-2.6.1.tar.gz;name=freetype;subdir=matplotlib-${PV}/build"
+SRC_URI[freetype.sha256sum] = "0a3c7dfbda6da1e8fce29232e8e96d987ababbbf71ebc8c75659e4132c367014"
+
+# This python module requires a full copy of 'qhull-2020'
+SRC_URI += "http://www.qhull.org/download/qhull-2020-src-8.0.2.tgz;name=qhull;subdir=matplotlib-${PV}/build"
+SRC_URI[qhull.sha256sum] = "b5c2d7eb833278881b952c8a52d20179eab87766b00b865000469a45c1838b7e"
+
+# LTO with clang needs lld
+LDFLAGS:append:toolchain-clang = " -fuse-ld=lld"
+LDFLAGS:remove:toolchain-clang:mips = "-fuse-ld=lld"
+
+RDEPENDS:${PN} = "\
+    freetype \
+    libpng \
+    python3-numpy \
+    python3-pyparsing \
+    python3-cycler \
+    python3-dateutil \
+    python3-kiwisolver \
+    python3-pytz \
+    python3-pillow \
+    python3-packaging \
+"
+
+ENABLELTO:toolchain-clang:riscv64 = "echo enable_lto = False >> ${S}/mplsetup.cfg"
+ENABLELTO:toolchain-clang:riscv32 = "echo enable_lto = False >> ${S}/mplsetup.cfg"
+ENABLELTO:toolchain-clang:mips = "echo enable_lto = False >> ${S}/mplsetup.cfg"
+do_compile:prepend() {
+    echo [libs] > ${S}/mplsetup.cfg
+    echo system_freetype = True >> ${S}/mplsetup.cfg
+    ${ENABLELTO}
+}
+
+BBCLASSEXTEND = "native"
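
The ENABLELTO variables above hold per-target shell fragments that do_compile:prepend expands into mplsetup.cfg; targets without a matching override expand ${ENABLELTO} to nothing and keep LTO enabled. Extending the pattern to one more clang target would look like the following sketch (a hypothetical override, shown only to illustrate the mechanism):

    ENABLELTO:toolchain-clang:powerpc = "echo enable_lto = False >> ${S}/mplsetup.cfg"
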
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-netifaces/0001-netifaces-initialize-msghdr-in-a-portable-way.patch b/meta-openembedded/meta-python/recipes-devtools/python/python3-netifaces/0001-netifaces-initialize-msghdr-in-a-portable-way.patch
new file mode 100644
index 0000000..7ff86cc
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-netifaces/0001-netifaces-initialize-msghdr-in-a-portable-way.patch
@@ -0,0 +1,49 @@
+From cbcd19f38ae4b31c57c57ce3619b8d2674defb68 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 28 Aug 2022 08:11:27 -0700
+Subject: [PATCH] netifaces: initialize msghdr in a portable way
+
+musl has padding bytes inside the msghdr struct which means initializing
+full structure will cause wrong assignments, doing partial assignment is
+more portable and assign the elements after that
+
+Fixes
+netifaces.c:1808:9: error: incompatible pointer to integer conversion initializing 'int' with an expression of type 'void *' [-Wint-conversion]
+        NULL,
+        ^~~~
+
+Upstream-Status: Inappropriate [Upstream Repo is read-only]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ netifaces.c | 15 ++++++---------
+ 1 file changed, 6 insertions(+), 9 deletions(-)
+
+diff --git a/netifaces.c b/netifaces.c
+index 839c42c..7da78e7 100644
+--- a/netifaces.c
++++ b/netifaces.c
+@@ -1800,15 +1800,12 @@ gateways (PyObject *self)
+     do {
+       struct sockaddr_nl sanl_from;
+       struct iovec iov = { msgbuf, bufsize };
+-      struct msghdr msghdr = {
+-        &sanl_from,
+-        sizeof(sanl_from),
+-        &iov,
+-        1,
+-        NULL,
+-        0,
+-        0
+-      };
++      struct msghdr msghdr = { 0 };
++
++      msghdr.msg_name = &sanl_from;
++      msghdr.msg_namelen = sizeof(sanl_from);
++      msghdr.msg_iov = &iov;
++      msghdr.msg_iovlen = 1;
+       int nllen;
+ 
+       ret = recvmsg (s, &msghdr, 0);
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-netifaces_0.11.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-netifaces_0.11.0.bb
index 09e54b0..f211c69 100644
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-netifaces_0.11.0.bb
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-netifaces_0.11.0.bb
@@ -4,6 +4,8 @@
 LICENSE = "MIT"
 LIC_FILES_CHKSUM = "file://PKG-INFO;beginline=8;endline=8;md5=a53cbc7cb75660694e138ba973c148df"
 
-SRC_URI[sha256sum] = "043a79146eb2907edf439899f262b3dfe41717d34124298ed281139a8b93ca32"
-
 inherit pypi setuptools3
+
+SRC_URI += "file://0001-netifaces-initialize-msghdr-in-a-portable-way.patch"
+
+SRC_URI[sha256sum] = "043a79146eb2907edf439899f262b3dfe41717d34124298ed281139a8b93ca32"
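
Here the msghdr patch sits next to the recipe and is picked up through the normal file:// search path. If the same fix had to be carried from another layer instead, the usual shape is a bbappend that extends that search path; a sketch, assuming the patch is stored in a files/ directory beside the append:

    # python3-netifaces_%.bbappend
    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
    SRC_URI += "file://0001-netifaces-initialize-msghdr-in-a-portable-way.patch"
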
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-networkx_2.8.4.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-networkx_2.8.4.bb
deleted file mode 100644
index 8181d90..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-networkx_2.8.4.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-DESCRIPTION = "Python package for creating and manipulating graphs and networks"
-HOMEPAGE = "http://networkx.github.io/"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=44614b6df7cf3c19be69d0a945e29904"
-
-SRC_URI[sha256sum] = "5e53f027c0d567cf1f884dbb283224df525644e43afd1145d64c9d88a3584762"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "\
-                   ${PYTHON_PN}-decorator \
-                   ${PYTHON_PN}-netclient \
-                   ${PYTHON_PN}-compression \
-                   ${PYTHON_PN}-numbers \
-                   ${PYTHON_PN}-pickle \
-                   ${PYTHON_PN}-html \
-                   ${PYTHON_PN}-xml \
-                   ${PYTHON_PN}-json \
-                   "
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-networkx_2.8.5.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-networkx_2.8.5.bb
new file mode 100644
index 0000000..8d0b8db
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-networkx_2.8.5.bb
@@ -0,0 +1,21 @@
+DESCRIPTION = "Python package for creating and manipulating graphs and networks"
+HOMEPAGE = "http://networkx.github.io/"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=44614b6df7cf3c19be69d0a945e29904"
+
+SRC_URI[sha256sum] = "15a7b81a360791c458c55a417418ea136c13378cfdc06a2dcdc12bd2f9cf09c1"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += "\
+                   ${PYTHON_PN}-decorator \
+                   ${PYTHON_PN}-netclient \
+                   ${PYTHON_PN}-compression \
+                   ${PYTHON_PN}-numbers \
+                   ${PYTHON_PN}-pickle \
+                   ${PYTHON_PN}-html \
+                   ${PYTHON_PN}-xml \
+                   ${PYTHON_PN}-json \
+                   ${PYTHON_PN}-profile \
+                   ${PYTHON_PN}-threading \
+                   "
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-nocasedict_1.0.3.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-nocasedict_1.0.3.bb
deleted file mode 100644
index a47c4a4..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-nocasedict_1.0.3.bb
+++ /dev/null
@@ -1,8 +0,0 @@
-SUMMARY = "A case-insensitive ordered dictionary for Python"
-HOMEPAGE = "https://github.com/pywbem/nocasedict"
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=1803fa9c2c3ce8cb06b4861d75310742"
-
-SRC_URI[sha256sum] = "8220b97ba06b08eb2deded774c406c77e0ca0d5352ae71249f6f9d1f2a17bd7b"
-
-inherit pypi setuptools3
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-nocasedict_1.0.4.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-nocasedict_1.0.4.bb
new file mode 100644
index 0000000..006799c
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-nocasedict_1.0.4.bb
@@ -0,0 +1,12 @@
+SUMMARY = "A case-insensitive ordered dictionary for Python"
+HOMEPAGE = "https://github.com/pywbem/nocasedict"
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=1803fa9c2c3ce8cb06b4861d75310742"
+
+SRC_URI[sha256sum] = "7c111da4cefd244433cb63377aff081a40f84bddae9e6f376c67f086c0f806da"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-six \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-nocaselist_1.0.5.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-nocaselist_1.0.5.bb
deleted file mode 100644
index 22b8825..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-nocaselist_1.0.5.bb
+++ /dev/null
@@ -1,8 +0,0 @@
-SUMMARY = "A case-insensitive list for Python"
-HOMEPAGE = "https://nocaselist.readthedocs.io/en/latest/"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
-
-SRC_URI[sha256sum] = "e1c12ca2ae9d345b34948f2c8f60e3894619e2be2ed28b4ecc5e7f1dea117d1d"
-
-inherit pypi setuptools3
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-nocaselist_1.0.6.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-nocaselist_1.0.6.bb
new file mode 100644
index 0000000..9e68429
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-nocaselist_1.0.6.bb
@@ -0,0 +1,8 @@
+SUMMARY = "A case-insensitive list for Python"
+HOMEPAGE = "https://nocaselist.readthedocs.io/en/latest/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
+
+SRC_URI[sha256sum] = "48f067f8cb841245f34d03120bc1ba9900f13b19cb51bcc6c7bee017f7c874da"
+
+inherit pypi setuptools3
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-prettytable_3.1.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-prettytable_3.1.1.bb
deleted file mode 100644
index 5520edc..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-prettytable_3.1.1.bb
+++ /dev/null
@@ -1,44 +0,0 @@
-SUMMARY = "Python library for displaying tabular data in a ASCII table format"
-HOMEPAGE = "http://code.google.com/p/prettytable"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://COPYING;md5=c9a6829fcd174d9535b46211917c7671"
-
-SRC_URI[sha256sum] = "43c9e23272ca253d038ae76fe3adde89794e92e7fcab2ddf5b94b38642ef4f21"
-
-do_install:append() {
-    perm_files=`find "${D}${PYTHON_SITEPACKAGES_DIR}/" -name "*.txt" -o -name "PKG-INFO"`
-    for f in $perm_files; do
-        chmod 644 "${f}"
-    done
-}
-
-UPSTREAM_CHECK_URI = "https://pypi.python.org/pypi/PrettyTable/"
-UPSTREAM_CHECK_REGEX = "/PrettyTable/(?P<pver>(\d+[\.\-_]*)+)"
-
-BBCLASSEXTEND = "native nativesdk"
-inherit pypi ptest setuptools3
-
-SRC_URI += " \
-	file://run-ptest \
-"
-
-DEPENDS += "${PYTHON_PN}-setuptools-scm-native"
-
-RDEPENDS:${PN} += " \
-	${PYTHON_PN}-math \
-	${PYTHON_PN}-html \
-	${PYTHON_PN}-wcwidth \
-	${PYTHON_PN}-json \
-	${PYTHON_PN}-compression \
-	${PYTHON_PN}-importlib-metadata \
-"
-
-RDEPENDS:${PN}-ptest += " \
-    ${PYTHON_PN}-pytest \
-    ${PYTHON_PN}-pytest-lazy-fixture \
-    ${PYTHON_PN}-sqlite3 \
-"
-
-do_install_ptest() {
-	cp -f ${S}/tests/test_prettytable.py ${D}${PTEST_PATH}/
-}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-prettytable_3.3.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-prettytable_3.3.0.bb
new file mode 100644
index 0000000..b98ee49
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-prettytable_3.3.0.bb
@@ -0,0 +1,44 @@
+SUMMARY = "Python library for displaying tabular data in a ASCII table format"
+HOMEPAGE = "http://code.google.com/p/prettytable"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://COPYING;md5=c9a6829fcd174d9535b46211917c7671"
+
+SRC_URI[sha256sum] = "118eb54fd2794049b810893653b20952349df6d3bc1764e7facd8a18064fa9b0"
+
+do_install:append() {
+    perm_files=`find "${D}${PYTHON_SITEPACKAGES_DIR}/" -name "*.txt" -o -name "PKG-INFO"`
+    for f in $perm_files; do
+        chmod 644 "${f}"
+    done
+}
+
+UPSTREAM_CHECK_URI = "https://pypi.python.org/pypi/PrettyTable/"
+UPSTREAM_CHECK_REGEX = "/PrettyTable/(?P<pver>(\d+[\.\-_]*)+)"
+
+BBCLASSEXTEND = "native nativesdk"
+inherit pypi ptest setuptools3
+
+SRC_URI += " \
+	file://run-ptest \
+"
+
+DEPENDS += "${PYTHON_PN}-setuptools-scm-native"
+
+RDEPENDS:${PN} += " \
+	${PYTHON_PN}-math \
+	${PYTHON_PN}-html \
+	${PYTHON_PN}-wcwidth \
+	${PYTHON_PN}-json \
+	${PYTHON_PN}-compression \
+	${PYTHON_PN}-importlib-metadata \
+"
+
+RDEPENDS:${PN}-ptest += " \
+    ${PYTHON_PN}-pytest \
+    ${PYTHON_PN}-pytest-lazy-fixture \
+    ${PYTHON_PN}-sqlite3 \
+"
+
+do_install_ptest() {
+	cp -f ${S}/tests/test_prettytable.py ${D}${PTEST_PATH}/
+}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-prompt-toolkit_3.0.24.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-prompt-toolkit_3.0.24.bb
deleted file mode 100644
index 9a6798a..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-prompt-toolkit_3.0.24.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-SUMMARY = "Library for building powerful interactive command lines in Python"
-HOMEPAGE = "https://python-prompt-toolkit.readthedocs.io/"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=b2cde7da89f0c1f3e49bf968d00d554f"
-
-SRC_URI[sha256sum] = "1bb05628c7d87b645974a1bad3f17612be0c29fa39af9f7688030163f680bad6"
-
-inherit pypi setuptools3
-
-PYPI_PACKAGE = "prompt_toolkit"
-
-RDEPENDS:${PN} += " \
-    ${PYTHON_PN}-core \
-    ${PYTHON_PN}-six \
-    ${PYTHON_PN}-terminal \
-    ${PYTHON_PN}-threading \
-    ${PYTHON_PN}-wcwidth \
-    ${PYTHON_PN}-datetime \
-    ${PYTHON_PN}-shell \
-    ${PYTHON_PN}-image \
-    ${PYTHON_PN}-asyncio \
-    ${PYTHON_PN}-xml \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-prompt-toolkit_3.0.30.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-prompt-toolkit_3.0.30.bb
new file mode 100644
index 0000000..15627b1
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-prompt-toolkit_3.0.30.bb
@@ -0,0 +1,25 @@
+SUMMARY = "Library for building powerful interactive command lines in Python"
+HOMEPAGE = "https://python-prompt-toolkit.readthedocs.io/"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=b2cde7da89f0c1f3e49bf968d00d554f"
+
+SRC_URI[sha256sum] = "859b283c50bde45f5f97829f77a4674d1c1fcd88539364f1b28a37805cfd89c0"
+
+inherit pypi setuptools3
+
+PYPI_PACKAGE = "prompt_toolkit"
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-core \
+    ${PYTHON_PN}-six \
+    ${PYTHON_PN}-terminal \
+    ${PYTHON_PN}-threading \
+    ${PYTHON_PN}-wcwidth \
+    ${PYTHON_PN}-datetime \
+    ${PYTHON_PN}-shell \
+    ${PYTHON_PN}-image \
+    ${PYTHON_PN}-asyncio \
+    ${PYTHON_PN}-xml \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-protobuf_4.21.3.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-protobuf_4.21.3.bb
deleted file mode 100644
index 528f99e..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-protobuf_4.21.3.bb
+++ /dev/null
@@ -1,39 +0,0 @@
-DESCRIPTION = "Protocol Buffers"
-HOMEPAGE = "https://developers.google.com/protocol-buffers/"
-SECTION = "devel/python"
-
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://PKG-INFO;beginline=8;endline=8;md5=53dbfa56f61b90215a9f8f0d527c043d"
-
-inherit pypi setuptools3
-
-SRC_URI[sha256sum] = "9130759e719bee1e6d05ca6a3037f7eff66d7a7ff6ba25871917dc40e8f3fbb6"
-
-# http://errors.yoctoproject.org/Errors/Details/184715/
-# Can't find required file: ../src/google/protobuf/descriptor.proto
-CLEANBROKEN = "1"
-
-UPSTREAM_CHECK_REGEX = "protobuf/(?P<pver>\d+(\.\d+)+)/"
-
-DEPENDS += "protobuf"
-
-RDEPENDS:${PN} += " \
-    ${PYTHON_PN}-datetime \
-    ${PYTHON_PN}-json \
-    ${PYTHON_PN}-logging \
-    ${PYTHON_PN}-netclient \
-    ${PYTHON_PN}-numbers \
-    ${PYTHON_PN}-pkgutil \
-    ${PYTHON_PN}-six \
-    ${PYTHON_PN}-unittest \
-"
-
-# For usage in other recipies when compiling protobuf files (e.g. by grpcio-tools)
-BBCLASSEXTEND = "native nativesdk"
-
-DISTUTILS_BUILD_ARGS += "--cpp_implementation"
-DISTUTILS_INSTALL_ARGS += "--cpp_implementation"
-
-do_compile:prepend:class-native () {
-    export KOKORO_BUILD_NUMBER="1"
-}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-protobuf_4.21.5.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-protobuf_4.21.5.bb
new file mode 100644
index 0000000..5376830
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-protobuf_4.21.5.bb
@@ -0,0 +1,39 @@
+DESCRIPTION = "Protocol Buffers"
+HOMEPAGE = "https://developers.google.com/protocol-buffers/"
+SECTION = "devel/python"
+
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://PKG-INFO;beginline=8;endline=8;md5=53dbfa56f61b90215a9f8f0d527c043d"
+
+inherit pypi setuptools3
+
+SRC_URI[sha256sum] = "eb1106e87e095628e96884a877a51cdb90087106ee693925ec0a300468a9be3a"
+
+# http://errors.yoctoproject.org/Errors/Details/184715/
+# Can't find required file: ../src/google/protobuf/descriptor.proto
+CLEANBROKEN = "1"
+
+UPSTREAM_CHECK_REGEX = "protobuf/(?P<pver>\d+(\.\d+)+)/"
+
+DEPENDS += "protobuf"
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-datetime \
+    ${PYTHON_PN}-json \
+    ${PYTHON_PN}-logging \
+    ${PYTHON_PN}-netclient \
+    ${PYTHON_PN}-numbers \
+    ${PYTHON_PN}-pkgutil \
+    ${PYTHON_PN}-six \
+    ${PYTHON_PN}-unittest \
+"
+
+# For usage in other recipies when compiling protobuf files (e.g. by grpcio-tools)
+BBCLASSEXTEND = "native nativesdk"
+
+DISTUTILS_BUILD_ARGS += "--cpp_implementation"
+DISTUTILS_INSTALL_ARGS += "--cpp_implementation"
+
+do_compile:prepend:class-native () {
+    export KOKORO_BUILD_NUMBER="1"
+}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pybind11/0001-Do-not-check-pointer-size-when-cross-compiling.patch b/meta-openembedded/meta-python/recipes-devtools/python/python3-pybind11/0001-Do-not-check-pointer-size-when-cross-compiling.patch
deleted file mode 100644
index 761422e..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pybind11/0001-Do-not-check-pointer-size-when-cross-compiling.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From 2e9318f7a70699eed239aee6301d1d0bbd2457ee Mon Sep 17 00:00:00 2001
-From: Philip Balister <philip@balister.org>
-Date: Fri, 10 Jul 2020 10:14:59 -0400
-Subject: [PATCH] Do not check pointer size when cross compiling.
-
-It is reasonable to build for 32 machine on a 64 bit build machine. Prevents:
-| CMake Error at tools/FindPythonLibsNew.cmake:127 (message):
-|   Python config failure: Python is 64-bit, chosen compiler is 32-bit
-
-Signed-off-by: Philip Balister <philip@balister.org>
-Signed-off-by: Leon Anavi <leon.anavi@konsulko.com>
----
- tools/FindPythonLibsNew.cmake | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/tools/FindPythonLibsNew.cmake b/tools/FindPythonLibsNew.cmake
-index 3605aebc..67f4d4a0 100644
---- a/tools/FindPythonLibsNew.cmake
-+++ b/tools/FindPythonLibsNew.cmake
-@@ -156,7 +156,7 @@ list(GET _PYTHON_VALUES 9 PYTHON_MULTIARCH)
- 
- # Make sure the Python has the same pointer-size as the chosen compiler
- # Skip if CMAKE_SIZEOF_VOID_P is not defined
--if(CMAKE_SIZEOF_VOID_P AND (NOT "${PYTHON_SIZEOF_VOID_P}" STREQUAL "${CMAKE_SIZEOF_VOID_P}"))
-+if((NOT CMAKE_CROSSCOMPILING) AND CMAKE_SIZEOF_VOID_P AND (NOT "${PYTHON_SIZEOF_VOID_P}" STREQUAL "${CMAKE_SIZEOF_VOID_P}"))
-   if(PythonLibsNew_FIND_REQUIRED)
-     math(EXPR _PYTHON_BITS "${PYTHON_SIZEOF_VOID_P} * 8")
-     math(EXPR _CMAKE_BITS "${CMAKE_SIZEOF_VOID_P} * 8")
--- 
-2.17.1
-
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pybind11_2.10.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pybind11_2.10.0.bb
new file mode 100644
index 0000000..66aefac
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pybind11_2.10.0.bb
@@ -0,0 +1,39 @@
+SUMMARY = "Seamless operability between C++11 and Python"
+HOMEPAGE = "https://github.com/wjakob/pybind11"
+LICENSE = "BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=774f65abd8a7fe3124be2cdf766cd06f"
+
+DEPENDS = "boost"
+
+SRC_URI = "git://github.com/pybind/pybind11.git;branch=stable;protocol=https \
+           file://0001-Do-not-strip-binaries.patch \
+"
+
+SRCREV = "aa304c9c7d725ffb9d10af08a3b34cb372307020"
+
+S = "${WORKDIR}/git"
+
+BBCLASSEXTEND = "native"
+
+EXTRA_OECMAKE =  "-DPYBIND11_TEST=OFF"
+
+inherit cmake setuptools3 python3native
+
+PIP_INSTALL_DIST_PATH = "${S}/dist"
+PIP_INSTALL_PACKAGE = "pybind11"
+
+do_configure() {
+	cmake_do_configure
+}
+
+do_compile() {
+	setuptools3_do_compile
+	cmake_do_compile
+}
+
+do_install() {
+	setuptools3_do_install
+	cmake_do_install
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pybind11_2.9.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pybind11_2.9.2.bb
deleted file mode 100644
index 433bfd6..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pybind11_2.9.2.bb
+++ /dev/null
@@ -1,39 +0,0 @@
-SUMMARY = "Seamless operability between C++11 and Python"
-HOMEPAGE = "https://github.com/wjakob/pybind11"
-LICENSE = "BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=774f65abd8a7fe3124be2cdf766cd06f"
-
-DEPENDS = "boost"
-
-SRC_URI = "git://github.com/pybind/pybind11.git;branch=stable;protocol=https \
-           file://0001-Do-not-strip-binaries.patch \
-           file://0001-Do-not-check-pointer-size-when-cross-compiling.patch \
-"
-SRCREV = "914c06fb252b6cc3727d0eedab6736e88a3fcb01"
-
-S = "${WORKDIR}/git"
-
-BBCLASSEXTEND = "native"
-
-EXTRA_OECMAKE =  "-DPYBIND11_TEST=OFF"
-
-inherit cmake setuptools3 python3native
-
-PIP_INSTALL_DIST_PATH = "${S}/dist"
-PIP_INSTALL_PACKAGE = "pybind11"
-
-do_configure() {
-	cmake_do_configure
-}
-
-do_compile() {
-	setuptools3_do_compile
-	cmake_do_compile
-}
-
-do_install() {
-	setuptools3_do_install
-	cmake_do_install
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pycares_4.2.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pycares_4.2.1.bb
deleted file mode 100644
index dacaaa7..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pycares_4.2.1.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-SUMMARY = "Python interface for c-ares"
-DESCRIPTION = "\
-pycares is a Python module which provides an interface to c-ares. c-ares is \
-a C library that performs DNS requests and name resolutions asynchronously."
-HOMEPAGE = "https://github.com/saghul/pycares"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=b1538fcaea82ebf2313ed648b96c69b1"
-
-SRC_URI[md5sum] = "92fa9622ba42cb895d598910722e80b5"
-SRC_URI[sha256sum] = "735b4f75fd0f595c4e9184da18cd87737f46bc81a64ea41f4edce2b6b68d46d2"
-
-PYPI_PACKAGE = "pycares"
-
-inherit pypi setuptools3
-
-DEPENDS += "${PYTHON_PN}-cffi-native"
-
-RDEPENDS:${PN} += " \
-    ${PYTHON_PN}-cffi \
-    ${PYTHON_PN}-idna \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pycares_4.2.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pycares_4.2.2.bb
new file mode 100644
index 0000000..d2de4d3
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pycares_4.2.2.bb
@@ -0,0 +1,20 @@
+SUMMARY = "Python interface for c-ares"
+DESCRIPTION = "\
+pycares is a Python module which provides an interface to c-ares. c-ares is \
+a C library that performs DNS requests and name resolutions asynchronously."
+HOMEPAGE = "https://github.com/saghul/pycares"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=b1538fcaea82ebf2313ed648b96c69b1"
+
+SRC_URI[sha256sum] = "e1f57a8004370080694bd6fb969a1ffc9171a59c6824d54f791c1b2e4d298385"
+
+PYPI_PACKAGE = "pycares"
+
+inherit pypi setuptools3
+
+DEPENDS += "${PYTHON_PN}-cffi-native"
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-cffi \
+    ${PYTHON_PN}-idna \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pycodestyle_2.8.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pycodestyle_2.8.0.bb
deleted file mode 100644
index 31720e2..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pycodestyle_2.8.0.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-SUMMARY = "Python style guide checker (formly called pep8)"
-HOMEPAGE = "https://pypi.org/project/pycodestyle"
-LICENSE = "MIT"
-SECTION = "devel/python"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=a8546d0e77f416fb05a26acd89c8b3bd"
-
-SRC_URI[sha256sum] = "eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"
-
-inherit pypi setuptools3
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pycodestyle_2.9.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pycodestyle_2.9.1.bb
new file mode 100644
index 0000000..928e2b2
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pycodestyle_2.9.1.bb
@@ -0,0 +1,11 @@
+SUMMARY = "Python style guide checker (formly called pep8)"
+HOMEPAGE = "https://pypi.org/project/pycodestyle"
+LICENSE = "MIT"
+SECTION = "devel/python"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=a8546d0e77f416fb05a26acd89c8b3bd"
+
+SRC_URI[sha256sum] = "2c9607871d58c76354b697b42f5d57e1ada7d261c261efac224b664affdc5785"
+
+inherit pypi setuptools3
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pydantic_1.9.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pydantic_1.9.1.bb
new file mode 100644
index 0000000..1d113c9
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pydantic_1.9.1.bb
@@ -0,0 +1,11 @@
+SUMMARY = "Data validation and settings management using Python type hinting"
+HOMEPAGE = "https://github.com/samuelcolvin/pydantic"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=2c02ea30650b91528657db64baea1757"
+RDEPENDS:${PN} += "\
+    python3-typing-extensions \
+"
+
+inherit pypi setuptools3
+
+SRC_URI[sha256sum] = "1ed987c3ff29fff7fd8c3ea3a3ea877ad310aae2ef9889a119e22d3f2db0691a"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyflakes_2.4.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyflakes_2.4.0.bb
deleted file mode 100644
index 827ff0b..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyflakes_2.4.0.bb
+++ /dev/null
@@ -1,10 +0,0 @@
-SUMMARY = "passive checker of Python programs"
-HOMEPAGE = "https://github.com/PyCQA/pyflakes"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=690c2d09203dc9e07c4083fc45ea981f"
-
-SRC_URI[sha256sum] = "05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"
-
-inherit pypi setuptools3
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyflakes_2.5.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyflakes_2.5.0.bb
new file mode 100644
index 0000000..5c75ea4
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyflakes_2.5.0.bb
@@ -0,0 +1,10 @@
+SUMMARY = "passive checker of Python programs"
+HOMEPAGE = "https://github.com/PyCQA/pyflakes"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=690c2d09203dc9e07c4083fc45ea981f"
+
+SRC_URI[sha256sum] = "491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"
+
+inherit pypi setuptools3
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyhamcrest_2.0.3.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyhamcrest_2.0.3.bb
deleted file mode 100644
index 5b06e52..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyhamcrest_2.0.3.bb
+++ /dev/null
@@ -1,12 +0,0 @@
-SUMMARY = "Hamcrest framework for matcher objects"
-HOMEPAGE = "https://github.com/hamcrest/PyHamcrest"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=79391bf1501c898472d043f36e960612"
-
-PYPI_PACKAGE = "PyHamcrest"
-
-SRC_URI[sha256sum] = "dfb19cf6d71743e086fbb761ed7faea5aacbc8ec10c17a08b93ecde39192a3db"
-
-inherit pypi python_setuptools_build_meta
-
-RDEPENDS:${PN} += "${PYTHON_PN}-six"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyhamcrest_2.0.4.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyhamcrest_2.0.4.bb
new file mode 100644
index 0000000..888278a
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyhamcrest_2.0.4.bb
@@ -0,0 +1,15 @@
+SUMMARY = "Hamcrest framework for matcher objects"
+HOMEPAGE = "https://github.com/hamcrest/PyHamcrest"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=79391bf1501c898472d043f36e960612"
+
+SRC_URI[sha256sum] = "b5d9ce6b977696286cf232ce2adf8969b4d0b045975b0936ac9005e84e67e9c1"
+
+inherit pypi python_setuptools_build_meta
+
+DEPENDS += "${PYTHON_PN}-hatch-vcs-native"
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-six \
+    ${PYTHON_PN}-numbers \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyperf_2.3.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyperf_2.3.0.bb
deleted file mode 100644
index 5172b0a..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyperf_2.3.0.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "A toolkit to write, run and analyze benchmarks"
-DESCRIPTION = " \
-The Python pyperf module is a toolkit to write, run and analyze benchmarks. \
-Features: \
-    * Simple API to run reliable benchmarks \
-    * Automatically calibrate a benchmark for a time budget. \
-    * Spawn multiple worker processes. \
-    * Compute the mean and standard deviation. \
-    * Detect if a benchmark result seems unstable. \
-    * JSON format to store benchmark results. \
-    * Support multiple units: seconds, bytes and integer. \
-"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=78bc2e6e87c8c61272937b879e6dc2f8"
-
-SRC_URI[sha256sum] = "8a85dd42e067131d5b26b71472336da7f7f4b87ff9c97350d89f5ff0de9adedc"
-
-DEPENDS += "${PYTHON_PN}-six-native"
-
-PYPI_PACKAGE = "pyperf"
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "${PYTHON_PN}-misc"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyperf_2.4.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyperf_2.4.1.bb
new file mode 100644
index 0000000..cb82825
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyperf_2.4.1.bb
@@ -0,0 +1,23 @@
+SUMMARY = "A toolkit to write, run and analyze benchmarks"
+DESCRIPTION = " \
+The Python pyperf module is a toolkit to write, run and analyze benchmarks. \
+Features: \
+    * Simple API to run reliable benchmarks \
+    * Automatically calibrate a benchmark for a time budget. \
+    * Spawn multiple worker processes. \
+    * Compute the mean and standard deviation. \
+    * Detect if a benchmark result seems unstable. \
+    * JSON format to store benchmark results. \
+    * Support multiple units: seconds, bytes and integer. \
+"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=78bc2e6e87c8c61272937b879e6dc2f8"
+
+SRC_URI[sha256sum] = "38cf5e90c56f906a8320ce82a50bfa92c902b93affd72e4dc81580115f355853"
+
+DEPENDS += "${PYTHON_PN}-six-native"
+
+PYPI_PACKAGE = "pyperf"
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += "${PYTHON_PN}-misc ${PYTHON_PN}-statistics"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pythonping_1.1.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pythonping_1.1.1.bb
deleted file mode 100644
index 75cef50..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pythonping_1.1.1.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-SUMMARY = "PythonPing is simple way to ping in Python."
-HOMEPAGE = "https://pypi.org/project/pythonping/"
-SECTION = "devel/python"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://setup.py;beginline=12;endline=12;md5=2d33c00f47720c7e35e1fdb4b9fab027"
-
-SRC_URI[sha256sum] = "0022f6cbe52762e9d596316e3bccb8a3b794355a49c0d788f7228d90f9461cfc"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "python3-io"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pythonping_1.1.3.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pythonping_1.1.3.bb
new file mode 100644
index 0000000..5e016fb
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pythonping_1.1.3.bb
@@ -0,0 +1,11 @@
+SUMMARY = "PythonPing is simple way to ping in Python."
+HOMEPAGE = "https://pypi.org/project/pythonping/"
+SECTION = "devel/python"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://setup.py;beginline=12;endline=12;md5=2d33c00f47720c7e35e1fdb4b9fab027"
+
+SRC_URI[sha256sum] = "3555a03439eb48d5e0e8c201f7c334c1e13b997d744f93453d4d601c0fc8330f"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += "python3-io"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyzmq_23.2.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyzmq_23.2.0.bb
deleted file mode 100644
index 003f35d..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyzmq_23.2.0.bb
+++ /dev/null
@@ -1,49 +0,0 @@
-SUMMARY = "PyZMQ: Python bindings for ZMQ"
-DESCRIPTION = "This package contains Python bindings for ZeroMQ. ZMQ is a lightweight and fast messaging implementation."
-HOMEPAGE = "http://zeromq.org/bindings:python"
-LICENSE = "BSD-3-Clause & LGPL-3.0-only"
-LIC_FILES_CHKSUM = "\
-    file://COPYING.BSD;md5=11c65680f637c3df7f58bbc8d133e96e \
-    file://COPYING.LESSER;md5=12c592fa0bcfff3fb0977b066e9cb69e \
-"
-
-DEPENDS = "python3-packaging-native zeromq"
-
-SRC_URI:append = " \
-    file://club-rpath-out.patch \
-    file://run-ptest \
-"
-SRC_URI[sha256sum] = "a51f12a8719aad9dcfb55d456022f16b90abc8dde7d3ca93ce3120b40e3fa169"
-
-inherit pypi pkgconfig python_setuptools_build_meta ptest
-
-PACKAGES =+ "\
-    ${PN}-test \
-"
-
-FILES:${PN}-test += "\
-    ${libdir}/${PYTHON_DIR}/site-packages/*/tests \
-"
-
-RDEPENDS:${PN} += "\
-        ${PYTHON_PN}-json \
-        ${PYTHON_PN}-multiprocessing \
-"
-
-RDEPENDS:${PN}-ptest += "\
-        ${PN}-test \
-"
-
-do_compile:prepend() {
-    echo [global] > ${S}/setup.cfg
-    echo zmq_prefix = ${STAGING_DIR_HOST} >> ${S}/setup.cfg
-    echo have_sys_un_h = True >> ${S}/setup.cfg
-    echo skip_check_zmq = True >> ${S}/setup.cfg
-    echo libzmq_extension = False >> ${S}/setup.cfg
-    echo no_libzmq_extension = True >> ${S}/setup.cfg
-}
-
-do_install_ptest() {
-        install -d ${D}${PTEST_PATH}/tests
-        cp -rf ${S}/zmq/tests/* ${D}${PTEST_PATH}/tests/
-}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pyzmq_23.2.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyzmq_23.2.1.bb
new file mode 100644
index 0000000..40cb22b
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pyzmq_23.2.1.bb
@@ -0,0 +1,49 @@
+SUMMARY = "PyZMQ: Python bindings for ZMQ"
+DESCRIPTION = "This package contains Python bindings for ZeroMQ. ZMQ is a lightweight and fast messaging implementation."
+HOMEPAGE = "http://zeromq.org/bindings:python"
+LICENSE = "BSD-3-Clause & LGPL-3.0-only"
+LIC_FILES_CHKSUM = "\
+    file://COPYING.BSD;md5=11c65680f637c3df7f58bbc8d133e96e \
+    file://COPYING.LESSER;md5=12c592fa0bcfff3fb0977b066e9cb69e \
+"
+
+DEPENDS = "python3-packaging-native zeromq"
+
+SRC_URI:append = " \
+    file://club-rpath-out.patch \
+    file://run-ptest \
+"
+SRC_URI[sha256sum] = "2b381aa867ece7d0a82f30a0c7f3d4387b7cf2e0697e33efaa5bed6c5784abcd"
+
+inherit pypi pkgconfig python_setuptools_build_meta ptest
+
+PACKAGES =+ "\
+    ${PN}-test \
+"
+
+FILES:${PN}-test += "\
+    ${libdir}/${PYTHON_DIR}/site-packages/*/tests \
+"
+
+RDEPENDS:${PN} += "\
+        ${PYTHON_PN}-json \
+        ${PYTHON_PN}-multiprocessing \
+"
+
+RDEPENDS:${PN}-ptest += "\
+        ${PN}-test \
+"
+
+do_compile:prepend() {
+    echo [global] > ${S}/setup.cfg
+    echo zmq_prefix = ${STAGING_DIR_HOST} >> ${S}/setup.cfg
+    echo have_sys_un_h = True >> ${S}/setup.cfg
+    echo skip_check_zmq = True >> ${S}/setup.cfg
+    echo libzmq_extension = False >> ${S}/setup.cfg
+    echo no_libzmq_extension = True >> ${S}/setup.cfg
+}
+
+do_install_ptest() {
+        install -d ${D}${PTEST_PATH}/tests
+        cp -rf ${S}/zmq/tests/* ${D}${PTEST_PATH}/tests/
+}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-regex_2022.7.24.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-regex_2022.7.24.bb
deleted file mode 100644
index 897aa76..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-regex_2022.7.24.bb
+++ /dev/null
@@ -1,14 +0,0 @@
-SUMMARY = "Alternative regular expression module, to replace re."
-HOMEPAGE = "https://bitbucket.org/mrabarnett/mrab-regex/src"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=7b5751ddd6b643203c31ff873051d069"
-
-inherit pypi setuptools3
-
-SRC_URI[sha256sum] = "fa8a4bc81b15f49c57ede3fd636786c6619179661acf2430fcc387d75bf28d33"
-
-RDEPENDS:${PN} += " \
-	python3-stringold \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-regex_2022.8.17.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-regex_2022.8.17.bb
new file mode 100644
index 0000000..4999fc6
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-regex_2022.8.17.bb
@@ -0,0 +1,14 @@
+SUMMARY = "Alternative regular expression module, to replace re."
+HOMEPAGE = "https://bitbucket.org/mrabarnett/mrab-regex/src"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=7b5751ddd6b643203c31ff873051d069"
+
+inherit pypi setuptools3
+
+SRC_URI[sha256sum] = "5c77eab46f3a2b2cd8bbe06467df783543bf7396df431eb4a144cc4b89e9fb3c"
+
+RDEPENDS:${PN} += " \
+	python3-stringold \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-requests-unixsocket_0.3.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-requests-unixsocket_0.3.0.bb
new file mode 100644
index 0000000..330ab7c
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-requests-unixsocket_0.3.0.bb
@@ -0,0 +1,14 @@
+SUMMARY = "Use requests to talk HTTP via a UNIX domain socket"
+HOMEPAGE = "https://pypi.org/project/requests-unixsocket/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=d2794c0df5b907fdace235a619d80314"
+
+SRC_URI[sha256sum] = "28304283ea9357d45fff58ad5b11e47708cfbf5806817aa59b2a363228ee971e"
+
+PYPI_PACKAGE = "requests-unixsocket"
+
+inherit pypi
+inherit setuptools3
+
+DEPENDS += "python3-pbr-native"
+RDEPENDS_${PN} = "python3-requests python3-urllib3"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-setuptools-declarative-requirements_1.2.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-setuptools-declarative-requirements_1.2.0.bb
deleted file mode 100644
index 9628656..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-setuptools-declarative-requirements_1.2.0.bb
+++ /dev/null
@@ -1,12 +0,0 @@
-SUMMARY = "File support for setuptools declarative setup.cfg"
-HOMEPAGE = "https://pypi.org/project/setuptools-declarative-requirements/"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
-
-SRC_URI[sha256sum] = "d11fdb5ef818c65b20bc241e0f5ef44905a5640b681dae21ba1ac1742dab1fd1"
-
-inherit pypi python_setuptools_build_meta
-
-DEPENDS += "python3-setuptools-scm-native"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-setuptools-declarative-requirements_1.3.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-setuptools-declarative-requirements_1.3.0.bb
new file mode 100644
index 0000000..82bb341
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-setuptools-declarative-requirements_1.3.0.bb
@@ -0,0 +1,12 @@
+SUMMARY = "File support for setuptools declarative setup.cfg"
+HOMEPAGE = "https://pypi.org/project/setuptools-declarative-requirements/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
+
+SRC_URI[sha256sum] = "57a5b9bb9ad350c278e8aa6be4cdebbcd925b9ba71d6a712a178a618cfb898f7"
+
+inherit pypi python_setuptools_build_meta
+
+DEPENDS += "python3-setuptools-scm-native"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-sqlalchemy_1.4.39.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-sqlalchemy_1.4.39.bb
deleted file mode 100644
index 0690a8b..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-sqlalchemy_1.4.39.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-DESCRIPTION = "Python SQL toolkit and Object Relational Mapper that gives \
-application developers the full power and flexibility of SQL"
-HOMEPAGE = "http://www.sqlalchemy.org/"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=f4001d1ca15b69d096fa1b4fd1bdce79"
-
-SRC_URI[sha256sum] = "8194896038753b46b08a0b0ae89a5d80c897fb601dd51e243ed5720f1f155d27"
-
-PYPI_PACKAGE = "SQLAlchemy"
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += " \
-    ${PYTHON_PN}-json \
-    ${PYTHON_PN}-pickle \
-    ${PYTHON_PN}-logging \
-    ${PYTHON_PN}-netclient \
-    ${PYTHON_PN}-numbers \
-    ${PYTHON_PN}-threading \
-    ${PYTHON_PN}-compression \
-    ${PYTHON_PN}-profile \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-sqlalchemy_1.4.40.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-sqlalchemy_1.4.40.bb
new file mode 100644
index 0000000..f61030f
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-sqlalchemy_1.4.40.bb
@@ -0,0 +1,23 @@
+DESCRIPTION = "Python SQL toolkit and Object Relational Mapper that gives \
+application developers the full power and flexibility of SQL"
+HOMEPAGE = "http://www.sqlalchemy.org/"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=f4001d1ca15b69d096fa1b4fd1bdce79"
+
+SRC_URI[sha256sum] = "44a660506080cc975e1dfa5776fe5f6315ddc626a77b50bf0eee18b0389ea265"
+
+PYPI_PACKAGE = "SQLAlchemy"
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-json \
+    ${PYTHON_PN}-pickle \
+    ${PYTHON_PN}-logging \
+    ${PYTHON_PN}-netclient \
+    ${PYTHON_PN}-numbers \
+    ${PYTHON_PN}-threading \
+    ${PYTHON_PN}-compression \
+    ${PYTHON_PN}-profile \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-term_2.3.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-term_2.3.bb
deleted file mode 100644
index 6b62eb0..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-term_2.3.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-SUMMARY = "An enhanced version of the tty module"
-SECTION = "devel/python"
-LICENSE = "Python-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=d90e2d280a4836c607520383d1639be1"
-
-SRC_URI[md5sum] = "ab0c1bce381b1109fe4390c56aa06237"
-SRC_URI[sha256sum] = "3dcc8c212e04700784e5c1c5b601916ba0549ae6025b35b64fd62144899e7180"
-
-PYPI_PACKAGE_EXT = "zip"
-
-inherit pypi setuptools3
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-term_2.4.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-term_2.4.bb
new file mode 100644
index 0000000..bc6a4c3
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-term_2.4.bb
@@ -0,0 +1,12 @@
+SUMMARY = "An enhanced version of the tty module"
+SECTION = "devel/python"
+LICENSE = "Python-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=d90e2d280a4836c607520383d1639be1"
+
+SRC_URI[sha256sum] = "2cca4cf5f83035ca12627c4bbeff2891ad4711666247a790fd8200d73f38c3f0"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} = "\
+    ${PYTHON_PN}-io \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-twitter_4.10.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-twitter_4.10.1.bb
new file mode 100644
index 0000000..b283aef
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-twitter_4.10.1.bb
@@ -0,0 +1,19 @@
+SUMMARY = "Twitter for Python"
+DESCRIPTION = "Python module to support twitter API"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://PKG-INFO;beginline=9;endline=9;md5=8227180126797a0148f94f483f3e1489"
+
+SRC_URI[sha256sum] = "310193775d7fc381abd6f37021a9af27f7e9edfcce5ec51bd73ea5f30c21fa61"
+
+PYPI_PACKAGE = "tweepy"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += "\
+    ${PYTHON_PN}-pip \
+    ${PYTHON_PN}-pysocks \
+    ${PYTHON_PN}-requests \
+    ${PYTHON_PN}-requests-oauthlib \
+    ${PYTHON_PN}-six \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-twitter_4.8.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-twitter_4.8.0.bb
deleted file mode 100644
index 247b4e5..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-twitter_4.8.0.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-SUMMARY = "Twitter for Python"
-DESCRIPTION = "Python module to support twitter API"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://PKG-INFO;beginline=8;endline=8;md5=8227180126797a0148f94f483f3e1489"
-
-SRC_URI[sha256sum] = "8ba5774ac1663b09e5fce1b030daf076f2c9b3ddbf2e7e7ea0bae762e3b1fe3e"
-
-PYPI_PACKAGE = "tweepy"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-pip \
-    ${PYTHON_PN}-pysocks \
-    ${PYTHON_PN}-requests \
-    ${PYTHON_PN}-six \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-typed-ast_1.5.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-typed-ast_1.5.2.bb
deleted file mode 100644
index 55cd78c..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-typed-ast_1.5.2.bb
+++ /dev/null
@@ -1,12 +0,0 @@
-SUMMARY = "Modified fork of CPython's ast module that parses `# type:` comments"
-HOMEPAGE = "https://github.com/python/typed_ast"
-LICENSE = "Apache-2.0 & MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=97f1494e93daf66a5df47118407a4c4f"
-
-PYPI_PACKAGE = "typed_ast"
-
-inherit pypi setuptools3
-
-SRC_URI[sha256sum] = "525a2d4088e70a9f75b08b3f87a51acc9cde640e19cc523c7e41aa355564ae27"
-
-BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-typed-ast_1.5.4.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-typed-ast_1.5.4.bb
new file mode 100644
index 0000000..e6c1746
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-typed-ast_1.5.4.bb
@@ -0,0 +1,12 @@
+SUMMARY = "Modified fork of CPython's ast module that parses `# type:` comments"
+HOMEPAGE = "https://github.com/python/typed_ast"
+LICENSE = "Apache-2.0 & MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=97f1494e93daf66a5df47118407a4c4f"
+
+PYPI_PACKAGE = "typed_ast"
+
+inherit pypi setuptools3
+
+SRC_URI[sha256sum] = "39e21ceb7388e4bb37f4c679d72707ed46c2fbf2a5609b8b8ebc4b067d977df2"
+
+BBCLASSEXTEND = "native"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-werkzeug_2.2.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-werkzeug_2.2.0.bb
deleted file mode 100644
index c9e3b0d..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-werkzeug_2.2.0.bb
+++ /dev/null
@@ -1,40 +0,0 @@
-SUMMARY = "The Swiss Army knife of Python web development"
-DESCRIPTION = "\
-Werkzeug started as simple collection of various utilities for WSGI \
-applications and has become one of the most advanced WSGI utility modules. \
-It includes a powerful debugger, full featured request and response objects, \
-HTTP utilities to handle entity tags, cache control headers, HTTP dates, \
-cookie handling, file uploads, a powerful URL routing system and a bunch \
-of community contributed addon modules."
-HOMEPAGE = "http://werkzeug.pocoo.org/"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE.rst;md5=5dc88300786f1c214c1e9827a5229462"
-
-PYPI_PACKAGE = "Werkzeug"
-
-SRC_URI[sha256sum] = "fe8bcdcef40275ed915fc734c2527a39d705b57a716d4f09e790296abbd16a7f"
-
-inherit pypi setuptools3
-
-CLEANBROKEN = "1"
-
-RDEPENDS:${PN} += " \
-    ${PYTHON_PN}-datetime \
-    ${PYTHON_PN}-difflib \
-    ${PYTHON_PN}-email \
-    ${PYTHON_PN}-html \
-    ${PYTHON_PN}-io \
-    ${PYTHON_PN}-json \
-    ${PYTHON_PN}-logging \
-    ${PYTHON_PN}-netclient \
-    ${PYTHON_PN}-netserver \
-    ${PYTHON_PN}-numbers \
-    ${PYTHON_PN}-pkgutil \
-    ${PYTHON_PN}-pprint \
-    ${PYTHON_PN}-simplejson \
-    ${PYTHON_PN}-threading \
-    ${PYTHON_PN}-unixadmin \
-    ${PYTHON_PN}-misc \
-    ${PYTHON_PN}-profile \
-    ${PYTHON_PN}-markupsafe \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-werkzeug_2.2.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-werkzeug_2.2.2.bb
new file mode 100644
index 0000000..92302f9
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-werkzeug_2.2.2.bb
@@ -0,0 +1,40 @@
+SUMMARY = "The Swiss Army knife of Python web development"
+DESCRIPTION = "\
+Werkzeug started as simple collection of various utilities for WSGI \
+applications and has become one of the most advanced WSGI utility modules. \
+It includes a powerful debugger, full featured request and response objects, \
+HTTP utilities to handle entity tags, cache control headers, HTTP dates, \
+cookie handling, file uploads, a powerful URL routing system and a bunch \
+of community contributed addon modules."
+HOMEPAGE = "http://werkzeug.pocoo.org/"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.rst;md5=5dc88300786f1c214c1e9827a5229462"
+
+PYPI_PACKAGE = "Werkzeug"
+
+SRC_URI[sha256sum] = "7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"
+
+inherit pypi setuptools3
+
+CLEANBROKEN = "1"
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-datetime \
+    ${PYTHON_PN}-difflib \
+    ${PYTHON_PN}-email \
+    ${PYTHON_PN}-html \
+    ${PYTHON_PN}-io \
+    ${PYTHON_PN}-json \
+    ${PYTHON_PN}-logging \
+    ${PYTHON_PN}-netclient \
+    ${PYTHON_PN}-netserver \
+    ${PYTHON_PN}-numbers \
+    ${PYTHON_PN}-pkgutil \
+    ${PYTHON_PN}-pprint \
+    ${PYTHON_PN}-simplejson \
+    ${PYTHON_PN}-threading \
+    ${PYTHON_PN}-unixadmin \
+    ${PYTHON_PN}-misc \
+    ${PYTHON_PN}-profile \
+    ${PYTHON_PN}-markupsafe \
+"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-xmlschema_2.0.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-xmlschema_2.0.1.bb
deleted file mode 100644
index 68148b7..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-xmlschema_2.0.1.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-SUMMARY = "The xmlschema library is an implementation of XML Schema for Python (supports Python 3.6+)."
-HOMEPAGE = "https://github.com/sissaschool/xmlschema"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=47489cb18c469474afeb259ed1d4832f"
-
-SRC_URI[sha256sum] = "1460ba451b4084d4edd031b564f460f5c11b14b20764ce1f64691f8c69e1194d"
-
-PYPI_PACKAGE = "xmlschema"
-inherit pypi setuptools3
-
-DEPENDS += "\
-    ${PYTHON_PN}-elementpath-native \
-"
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-elementpath \
-    ${PYTHON_PN}-modules \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-xmlschema_2.0.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-xmlschema_2.0.2.bb
new file mode 100644
index 0000000..eb9d4f5
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-xmlschema_2.0.2.bb
@@ -0,0 +1,20 @@
+SUMMARY = "The xmlschema library is an implementation of XML Schema for Python (supports Python 3.6+)."
+HOMEPAGE = "https://github.com/sissaschool/xmlschema"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=47489cb18c469474afeb259ed1d4832f"
+
+SRC_URI[sha256sum] = "ce915696b3a819fe0f986824517b9281dbd5626fd2d213363fab40f34edf05bd"
+
+PYPI_PACKAGE = "xmlschema"
+inherit pypi setuptools3
+
+DEPENDS += "\
+    ${PYTHON_PN}-elementpath-native \
+"
+
+RDEPENDS:${PN} += "\
+    ${PYTHON_PN}-elementpath \
+    ${PYTHON_PN}-modules \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-yappi_1.3.5.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-yappi_1.3.5.bb
deleted file mode 100644
index 9b08f75..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-yappi_1.3.5.bb
+++ /dev/null
@@ -1,34 +0,0 @@
-SUMMARY  = "Yet Another Python Profiler"
-HOMEPAGE = "https://github.com/sumerc/yappi"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=71c208c9a4fd864385eb69ad4caa3bee"
-
-SRC_URI[sha256sum] = "f54c25f04aa7c613633b529bffd14e0699a4363f414dc9c065616fd52064a49b"
-
-SRC_URI += " \
-    file://run-ptest \
-    file://0001-Fix-imports-for-ptests.patch \
-"
-
-inherit pypi setuptools3 ptest
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-datetime \
-    ${PYTHON_PN}-pickle \
-    ${PYTHON_PN}-threading \
-"
-
-RDEPENDS:${PN}-ptest += " \
-    ${PYTHON_PN}-gevent \
-    ${PYTHON_PN}-multiprocessing \
-    ${PYTHON_PN}-pytest \
-    ${PYTHON_PN}-profile \
-    ${PYTHON_PN}-zopeinterface \
-"
-
-do_install_ptest() {
-    install -d ${D}${PTEST_PATH}/tests
-    cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
-    cp -f ${S}/yappi/yappi.py ${D}/${PTEST_PATH}/
-}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-yappi_1.3.6.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-yappi_1.3.6.bb
new file mode 100644
index 0000000..2f24443
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-yappi_1.3.6.bb
@@ -0,0 +1,34 @@
+SUMMARY  = "Yet Another Python Profiler"
+HOMEPAGE = "https://github.com/sumerc/yappi"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=71c208c9a4fd864385eb69ad4caa3bee"
+
+SRC_URI[sha256sum] = "0a73c608a2603570a020a32d4369ba744012bc5267f37e5bd8026fb491abba56"
+
+SRC_URI += " \
+    file://run-ptest \
+    file://0001-Fix-imports-for-ptests.patch \
+"
+
+inherit pypi setuptools3 ptest
+
+RDEPENDS:${PN} += "\
+    ${PYTHON_PN}-datetime \
+    ${PYTHON_PN}-pickle \
+    ${PYTHON_PN}-threading \
+"
+
+RDEPENDS:${PN}-ptest += " \
+    ${PYTHON_PN}-gevent \
+    ${PYTHON_PN}-multiprocessing \
+    ${PYTHON_PN}-pytest \
+    ${PYTHON_PN}-profile \
+    ${PYTHON_PN}-zopeinterface \
+"
+
+do_install_ptest() {
+    install -d ${D}${PTEST_PATH}/tests
+    cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
+    cp -f ${S}/yappi/yappi.py ${D}/${PTEST_PATH}/
+}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-yarl_1.7.2.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-yarl_1.7.2.bb
deleted file mode 100644
index 0867d1c..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-yarl_1.7.2.bb
+++ /dev/null
@@ -1,26 +0,0 @@
-SUMMARY = "The module provides handy URL class for url parsing and changing"
-HOMEPAGE = "https://github.com/aio-libs/yarl/"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=e581798a7b985311f29fa3e163ea27ae"
-
-SRC_URI[sha256sum] = "45399b46d60c253327a460e99856752009fcee5f5d3c80b2f7c0cae1c38d56dd"
-
-SRC_URI += "file://run-ptest"
-
-PYPI_PACKAGE = "yarl"
-
-inherit pypi ptest setuptools3
-
-RDEPENDS:${PN} = "\
-    ${PYTHON_PN}-multidict \
-    ${PYTHON_PN}-idna \
-"
-
-RDEPENDS:${PN}-ptest += " \
-    ${PYTHON_PN}-pytest \
-"
-
-do_install_ptest() {
-    install -d ${D}${PTEST_PATH}/tests
-    cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
-}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-yarl_1.8.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-yarl_1.8.1.bb
new file mode 100644
index 0000000..66ee46b
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-yarl_1.8.1.bb
@@ -0,0 +1,27 @@
+SUMMARY = "The module provides handy URL class for url parsing and changing"
+HOMEPAGE = "https://github.com/aio-libs/yarl/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=e581798a7b985311f29fa3e163ea27ae"
+
+SRC_URI[sha256sum] = "af887845b8c2e060eb5605ff72b6f2dd2aab7a761379373fd89d314f4752abbf"
+
+SRC_URI += "file://run-ptest"
+
+PYPI_PACKAGE = "yarl"
+
+inherit pypi ptest setuptools3
+
+RDEPENDS:${PN} = "\
+    ${PYTHON_PN}-multidict \
+    ${PYTHON_PN}-idna \
+    ${PYTHON_PN}-io \
+"
+
+RDEPENDS:${PN}-ptest += " \
+    ${PYTHON_PN}-pytest \
+"
+
+do_install_ptest() {
+    install -d ${D}${PTEST_PATH}/tests
+    cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
+}
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-zeroconf_0.38.7.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-zeroconf_0.38.7.bb
deleted file mode 100644
index 3f67f5f..0000000
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-zeroconf_0.38.7.bb
+++ /dev/null
@@ -1,13 +0,0 @@
-SUMMARY = "Pure Python Multicast DNS Service Discovery Library (Bonjour/Avahi compatible)"
-HOMEPAGE = "https://github.com/jstasiak/python-zeroconf"
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=3bb705b228ea4a14ea2728215b780d80"
-
-SRC_URI[sha256sum] = "eaee2293e5f4e6d249f6155f9d3cca1668cb22b2545995ea72c6a03b4b7706d4"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += " \
-    ${PYTHON_PN}-ifaddr \
-    ${PYTHON_PN}-asyncio \
-"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-zeroconf_0.39.0.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-zeroconf_0.39.0.bb
new file mode 100644
index 0000000..9b15763
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-zeroconf_0.39.0.bb
@@ -0,0 +1,14 @@
+SUMMARY = "Pure Python Multicast DNS Service Discovery Library (Bonjour/Avahi compatible)"
+HOMEPAGE = "https://github.com/jstasiak/python-zeroconf"
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=3bb705b228ea4a14ea2728215b780d80"
+
+SRC_URI[sha256sum] = "7c0d8257b940ee43e637fb560c2f9bd79da0638f37af162eb4f506f7274ef8e4"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-ifaddr \
+    ${PYTHON_PN}-asyncio \
+    ${PYTHON_PN}-async-timeout \
+"
diff --git a/meta-openembedded/meta-python/recipes-extended/python-pyephem/python3-pyephem/0001-Don-t-set-tp_print-on-Python-3.patch b/meta-openembedded/meta-python/recipes-extended/python-pyephem/python3-pyephem/0001-Don-t-set-tp_print-on-Python-3.patch
new file mode 100644
index 0000000..a84d852
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-extended/python-pyephem/python3-pyephem/0001-Don-t-set-tp_print-on-Python-3.patch
@@ -0,0 +1,44 @@
+From 866f7560034e8b7a604432611b3cb2c92e76def9 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 28 Aug 2022 11:03:39 -0700
+Subject: [PATCH] Don't set tp_print on Python 3.
+
+In 3.8 it produces a compilation warning, in earlier versions it is ignored.
+
+Upstream-Status: Submitted [https://github.com/brandon-rhodes/pyephem/pull/245]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ extensions/_libastro.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/extensions/_libastro.c b/extensions/_libastro.c
+index ce07d93..c9ef1e6 100644
+--- a/extensions/_libastro.c
++++ b/extensions/_libastro.c
+@@ -372,7 +372,11 @@ static PyTypeObject AngleType = {
+      sizeof(AngleObject),
+      0,
+      0,				/* tp_dealloc */
++#if PY_MAJOR_VERSION < 3
+      Angle_print,		/* tp_print */
++#else
++     0,				/* reserved in 3.x */
++#endif
+      0,				/* tp_getattr */
+      0,				/* tp_setattr */
+      0,				/* tp_compare */
+@@ -669,7 +673,11 @@ static PyTypeObject DateType = {
+      sizeof(PyFloatObject),
+      0,
+      0,				/* tp_dealloc */
++#if PY_MAJOR_VERSION < 3
+      Date_print,		/* tp_print */
++#else
++     0,				/* tp_print slot is reserved and unused in Python 3 */
++#endif
+      0,				/* tp_getattr */
+      0,				/* tp_setattr */
+      0,				/* tp_compare */
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-python/recipes-extended/python-pyephem/python3-pyephem_4.1.3.bb b/meta-openembedded/meta-python/recipes-extended/python-pyephem/python3-pyephem_4.1.3.bb
index 29697bc..8c4f0eb 100644
--- a/meta-openembedded/meta-python/recipes-extended/python-pyephem/python3-pyephem_4.1.3.bb
+++ b/meta-openembedded/meta-python/recipes-extended/python-pyephem/python3-pyephem_4.1.3.bb
@@ -4,6 +4,7 @@
 LICENSE = "MIT"
 LIC_FILES_CHKSUM = "file://LICENSE;md5=9c930b395b435b00bb13ec83b0c99f40"
 
+SRC_URI += "file://0001-Don-t-set-tp_print-on-Python-3.patch"
 SRC_URI[sha256sum] = "7fa18685981ba528edd504052a9d5212a09aa5bf15c11a734edc6a86e8a8b56a"
 
 PYPI_PACKAGE = "ephem"
diff --git a/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2/0001-make_exports.awk-not-expose-the-path.patch b/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2/0001-make_exports.awk-not-expose-the-path.patch
new file mode 100644
index 0000000..78f23f0
--- /dev/null
+++ b/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2/0001-make_exports.awk-not-expose-the-path.patch
@@ -0,0 +1,32 @@
+From 5b5eae9cdf3bae91756c717349f2f33a31888f24 Mon Sep 17 00:00:00 2001
+From: Mingli Yu <mingli.yu@windriver.com>
+Date: Wed, 3 Aug 2022 12:35:16 +0800
+Subject: [PATCH] make_exports.awk: not expose the path
+
+Don't print the full path in the comment line.
+
+Upstream-Status: Inappropriate [oe specific]
+
+Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
+---
+ build/make_exports.awk | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/build/make_exports.awk b/build/make_exports.awk
+index 1cf0568..44d93c5 100644
+--- a/build/make_exports.awk
++++ b/build/make_exports.awk
+@@ -47,7 +47,9 @@ function push(line) {
+ 
+ function do_output() {
+     printf("/*\n")
+-    printf(" * %s\n", FILENAME)
++    file = FILENAME
++    sub("([^/]*[/])*", "", file)
++    printf(" * %s\n", file)
+     printf(" */\n")
+     
+     for (i = 0; i < stackptr; i++) {
+-- 
+2.25.1
+
diff --git a/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.54.bb b/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.54.bb
index c5f014b..4f30eca 100644
--- a/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.54.bb
+++ b/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.54.bb
@@ -15,6 +15,7 @@
            file://0007-apache2-allow-to-disable-selinux-support.patch \
            file://0008-Fix-perl-install-directory-to-usr-bin.patch \
            file://0009-support-apxs.in-force-destdir-to-be-empty-string.patch \
+           file://0001-make_exports.awk-not-expose-the-path.patch \
           "
 
 SRC_URI:append:class-target = " \
diff --git a/meta-openembedded/meta-webserver/recipes-httpd/monkey/files/0001-fastcgi-Use-value-instead-of-address-of-sin6_port.patch b/meta-openembedded/meta-webserver/recipes-httpd/monkey/files/0001-fastcgi-Use-value-instead-of-address-of-sin6_port.patch
new file mode 100644
index 0000000..f4bab49
--- /dev/null
+++ b/meta-openembedded/meta-webserver/recipes-httpd/monkey/files/0001-fastcgi-Use-value-instead-of-address-of-sin6_port.patch
@@ -0,0 +1,30 @@
+From 7f724bbafbb1e170401dd5de201273ab8c8bc75f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 28 Aug 2022 14:24:02 -0700
+Subject: [PATCH] fastcgi: Use value instead of address of sin6_port
+
+This seems to be wrongly assigned where ipv4 sin_port is
+equated to address of sin6_port and not value of sin6_port
+
+Upstream-Status: Submitted [https://github.com/monkey/monkey/pull/375]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ plugins/fastcgi/fcgi_handler.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/plugins/fastcgi/fcgi_handler.c b/plugins/fastcgi/fcgi_handler.c
+index 9e095e3c..e8e1eec1 100644
+--- a/plugins/fastcgi/fcgi_handler.c
++++ b/plugins/fastcgi/fcgi_handler.c
+@@ -245,7 +245,7 @@ static inline int fcgi_add_param_net(struct fcgi_handler *handler)
+             struct sockaddr_in *s4 = (struct sockaddr_in *)&addr4;   
+             memset(&addr4, 0, sizeof(addr4));
+             addr4.sin_family = AF_INET;
+-            addr4.sin_port = &s->sin6_port;
++            addr4.sin_port = s->sin6_port;
+             memcpy(&addr4.sin_addr.s_addr, 
+                    s->sin6_addr.s6_addr + 12, 
+                    sizeof(addr4.sin_addr.s_addr));
+-- 
+2.37.2
+
diff --git a/meta-openembedded/meta-webserver/recipes-httpd/monkey/monkey_1.6.9.bb b/meta-openembedded/meta-webserver/recipes-httpd/monkey/monkey_1.6.9.bb
index fff406a..5b7e327 100644
--- a/meta-openembedded/meta-webserver/recipes-httpd/monkey/monkey_1.6.9.bb
+++ b/meta-openembedded/meta-webserver/recipes-httpd/monkey/monkey_1.6.9.bb
@@ -8,6 +8,7 @@
 SECTION = "net"
 
 SRC_URI = "http://monkey-project.com/releases/1.6/monkey-${PV}.tar.gz \
+           file://0001-fastcgi-Use-value-instead-of-address-of-sin6_port.patch \
            file://monkey.service \
            file://monkey.init"
 
diff --git a/meta-openembedded/meta-webserver/recipes-webadmin/netdata/netdata_1.35.1.bb b/meta-openembedded/meta-webserver/recipes-webadmin/netdata/netdata_1.35.1.bb
deleted file mode 100644
index 9decb63..0000000
--- a/meta-openembedded/meta-webserver/recipes-webadmin/netdata/netdata_1.35.1.bb
+++ /dev/null
@@ -1,82 +0,0 @@
-SUMMARY = "Real-time performance monitoring"
-DESCRIPTION = "Netdata is high-fidelity infrastructure monitoring and troubleshooting. \
-               Open-source, free, preconfigured, opinionated, and always real-time."
-HOMEPAGE = "https://github.com/netdata/netdata/"
-LICENSE = "GPL-3.0-only"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=fc9b848046ef54b5eaee6071947abd24"
-
-DEPENDS += "libuv util-linux zlib"
-
-SRC_URI = "https://github.com/${BPN}/${BPN}/releases/download/v${PV}/${BPN}-v${PV}.tar.gz \
-"
-SRC_URI[sha256sum] = "587f6cce421015f8e0a527e3964a4de8cc17085c354498150bc3ade21606bbf9"
-
-# default netdata.conf for netdata configuration
-SRC_URI += "file://netdata.conf"
-
-# file for providing systemd service support
-SRC_URI += "file://netdata.service"
-
-UPSTREAM_CHECK_URI = "https://github.com/netdata/netdata/releases"
-
-S = "${WORKDIR}/${BPN}-v${PV}"
-
-# Stop sending anonymous statistics to Google Analytics
-NETDATA_ANONYMOUS ??= "enabled"
-
-inherit pkgconfig autotools-brokensep useradd systemd
-
-LIBS:toolchain-clang:x86 = "-latomic"
-LIBS:riscv64 = "-latomic"
-LIBS:riscv32 = "-latomic"
-LIBS:mips = "-latomic"
-export LIBS
-
-#systemd
-SYSTEMD_PACKAGES = "${PN}"
-SYSTEMD_SERVICE:${PN} = "netdata.service"
-SYSTEMD_AUTO_ENABLE:${PN} = "enable"
-
-#User specific
-USERADD_PACKAGES = "${PN}"
-USERADD_PARAM:${PN} = "--system --no-create-home --home-dir ${localstatedir}/run/netdata --user-group netdata"
-
-PACKAGECONFIG ??= "https"
-PACKAGECONFIG[cloud] = "--enable-cloud, --disable-cloud, json-c"
-PACKAGECONFIG[compression] = "--enable-compression, --disable-compression, lz4"
-PACKAGECONFIG[https] = "--enable-https, --disable-https, openssl"
-
-# ebpf doesn't compile (or detect) the cross compilation well
-EXTRA_OECONF += "--disable-ebpf"
-
-do_install:append() {
-    #set S UID for plugins
-    chmod 4755 ${D}${libexecdir}/netdata/plugins.d/apps.plugin
-
-    if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
-        # Install systemd unit files
-        install -d ${D}${systemd_unitdir}/system
-        install -m 0644 ${WORKDIR}/netdata.service ${D}${systemd_unitdir}/system
-        sed -i -e 's,@@datadir,${datadir_native},g' ${D}${systemd_unitdir}/system/netdata.service
-    fi
-
-    # Install default netdata.conf
-    install -d ${D}${sysconfdir}/netdata
-    install -m 0644 ${WORKDIR}/netdata.conf ${D}${sysconfdir}/netdata/
-    sed -i -e 's,@@sysconfdir,${sysconfdir},g' ${D}${sysconfdir}/netdata/netdata.conf
-    sed -i -e 's,@@libdir,${libexecdir},g' ${D}${sysconfdir}/netdata/netdata.conf
-    sed -i -e 's,@@datadir,${datadir},g' ${D}${sysconfdir}/netdata/netdata.conf
-
-    if [ "${NETDATA_ANONYMOUS}" = "enabled" ]; then
-        touch ${D}${sysconfdir}/netdata/.opt-out-from-anonymous-statistics
-    fi
-
-    install --group netdata --owner netdata --directory ${D}${localstatedir}/cache/netdata
-    install --group netdata --owner netdata --directory ${D}${localstatedir}/lib/netdata
-
-    chown -R netdata:netdata ${D}${datadir}/netdata/web
-}
-
-FILES:${PN} += "${localstatedir}/cache/netdata/ ${localstatedir}/lib/netdata/"
-
-RDEPENDS:${PN} = "bash zlib"
diff --git a/meta-openembedded/meta-webserver/recipes-webadmin/netdata/netdata_1.36.1.bb b/meta-openembedded/meta-webserver/recipes-webadmin/netdata/netdata_1.36.1.bb
new file mode 100644
index 0000000..52d99e7
--- /dev/null
+++ b/meta-openembedded/meta-webserver/recipes-webadmin/netdata/netdata_1.36.1.bb
@@ -0,0 +1,82 @@
+SUMMARY = "Real-time performance monitoring"
+DESCRIPTION = "Netdata is high-fidelity infrastructure monitoring and troubleshooting. \
+               Open-source, free, preconfigured, opinionated, and always real-time."
+HOMEPAGE = "https://github.com/netdata/netdata/"
+LICENSE = "GPL-3.0-only"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=fc9b848046ef54b5eaee6071947abd24"
+
+DEPENDS += "libuv util-linux zlib"
+
+SRC_URI = "https://github.com/${BPN}/${BPN}/releases/download/v${PV}/${BPN}-v${PV}.tar.gz \
+"
+SRC_URI[sha256sum] = "f4a1233112b55e07e2862ffda0416255f0aa4c8e2b16929b76fa7ad6b69fd931"
+
+# default netdata.conf for netdata configuration
+SRC_URI += "file://netdata.conf"
+
+# file for providing systemd service support
+SRC_URI += "file://netdata.service"
+
+UPSTREAM_CHECK_URI = "https://github.com/netdata/netdata/releases"
+
+S = "${WORKDIR}/${BPN}-v${PV}"
+
+# Stop sending anonymous statistics to Google Analytics
+NETDATA_ANONYMOUS ??= "enabled"
+
+inherit pkgconfig autotools-brokensep useradd systemd
+
+LIBS:toolchain-clang:x86 = "-latomic"
+LIBS:riscv64 = "-latomic"
+LIBS:riscv32 = "-latomic"
+LIBS:mips = "-latomic"
+export LIBS
+
+#systemd
+SYSTEMD_PACKAGES = "${PN}"
+SYSTEMD_SERVICE:${PN} = "netdata.service"
+SYSTEMD_AUTO_ENABLE:${PN} = "enable"
+
+#User specific
+USERADD_PACKAGES = "${PN}"
+USERADD_PARAM:${PN} = "--system --no-create-home --home-dir ${localstatedir}/run/netdata --user-group netdata"
+
+PACKAGECONFIG ??= "https"
+PACKAGECONFIG[cloud] = "--enable-cloud, --disable-cloud, json-c"
+PACKAGECONFIG[compression] = "--enable-compression, --disable-compression, lz4"
+PACKAGECONFIG[https] = "--enable-https, --disable-https, openssl"
+
+# ebpf doesn't compile (or detect) the cross compilation well
+EXTRA_OECONF += "--disable-ebpf"
+
+do_install:append() {
+    # set SUID bit for plugins
+    chmod 4755 ${D}${libexecdir}/netdata/plugins.d/apps.plugin
+
+    if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+        # Install systemd unit files
+        install -d ${D}${systemd_unitdir}/system
+        install -m 0644 ${WORKDIR}/netdata.service ${D}${systemd_unitdir}/system
+        sed -i -e 's,@@datadir,${datadir_native},g' ${D}${systemd_unitdir}/system/netdata.service
+    fi
+
+    # Install default netdata.conf
+    install -d ${D}${sysconfdir}/netdata
+    install -m 0644 ${WORKDIR}/netdata.conf ${D}${sysconfdir}/netdata/
+    sed -i -e 's,@@sysconfdir,${sysconfdir},g' ${D}${sysconfdir}/netdata/netdata.conf
+    sed -i -e 's,@@libdir,${libexecdir},g' ${D}${sysconfdir}/netdata/netdata.conf
+    sed -i -e 's,@@datadir,${datadir},g' ${D}${sysconfdir}/netdata/netdata.conf
+
+    if [ "${NETDATA_ANONYMOUS}" = "enabled" ]; then
+        touch ${D}${sysconfdir}/netdata/.opt-out-from-anonymous-statistics
+    fi
+
+    install --group netdata --owner netdata --directory ${D}${localstatedir}/cache/netdata
+    install --group netdata --owner netdata --directory ${D}${localstatedir}/lib/netdata
+
+    chown -R netdata:netdata ${D}${datadir}/netdata/web
+}
+
+FILES:${PN} += "${localstatedir}/cache/netdata/ ${localstatedir}/lib/netdata/"
+
+RDEPENDS:${PN} = "bash zlib"
diff --git a/meta-openembedded/meta-xfce/recipes-xfce/xfce4-panel/xfce4-panel_4.16.3.bb b/meta-openembedded/meta-xfce/recipes-xfce/xfce4-panel/xfce4-panel_4.16.3.bb
index d47f9ce..46d2615 100644
--- a/meta-openembedded/meta-xfce/recipes-xfce/xfce4-panel/xfce4-panel_4.16.3.bb
+++ b/meta-openembedded/meta-xfce/recipes-xfce/xfce4-panel/xfce4-panel_4.16.3.bb
@@ -4,7 +4,7 @@
 LIC_FILES_CHKSUM = "file://COPYING;md5=26a8bd75d8f8498bdbbe64a27791d4ee"
 DEPENDS = "garcon exo gtk+3 cairo virtual/libx11 libxml2 libwnck3 vala-native"
 
-inherit xfce gtk-doc gobject-introspection features_check remove-libtool mime-xdg
+inherit xfce gtk-doc gobject-introspection features_check mime-xdg
 
 # xfce4 depends on libwnck3, gtk+3 and libepoxy need to be built with x11 PACKAGECONFIG.
 # cairo would at least needed to be built with xlib.
diff --git a/meta-raspberrypi/recipes-bsp/bootfiles/rpi-cmdline.bb b/meta-raspberrypi/recipes-bsp/bootfiles/rpi-cmdline.bb
index 413ca4d..6fb3a1b 100644
--- a/meta-raspberrypi/recipes-bsp/bootfiles/rpi-cmdline.bb
+++ b/meta-raspberrypi/recipes-bsp/bootfiles/rpi-cmdline.bb
@@ -62,7 +62,7 @@
     "
 
 do_compile() {
-    echo "${@' '.join('${CMDLINE}'.split())}" > "${WORKDIR}/cmdline.txt"
+    echo "${@' '.join(d.getVar('CMDLINE').split())}" > "${WORKDIR}/cmdline.txt"
 }
 
 do_deploy() {
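
The rpi-cmdline change above swaps inline BitBake expansion ('${CMDLINE}') for an explicit d.getVar('CMDLINE') inside the Python expression, so the value is fetched through the datastore rather than substituted into Python source, where quotes or newlines in CMDLINE could break the expression. A minimal sketch of the whitespace-collapsing idiom, using a stand-in datastore and a hypothetical CMDLINE value:

    # Stand-in for the BitBake datastore; the CMDLINE value is hypothetical.
    class FakeDataStore:
        def __init__(self, values):
            self._values = values
        def getVar(self, name):
            return self._values.get(name, "")

    d = FakeDataStore({"CMDLINE": "console=serial0,115200   console=tty1\n rootwait  "})

    # ' '.join(value.split()) collapses runs of whitespace (including newlines)
    # into single spaces, yielding a one-line cmdline.txt payload.
    print(" ".join(d.getVar("CMDLINE").split()))
    # -> console=serial0,115200 console=tty1 rootwait
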
diff --git a/meta-security/dynamic-layers/meta-python/recipes-security/mfa/python3-privacyidea_3.7.2.bb b/meta-security/dynamic-layers/meta-python/recipes-security/mfa/python3-privacyidea_3.7.2.bb
deleted file mode 100644
index c1e3108..0000000
--- a/meta-security/dynamic-layers/meta-python/recipes-security/mfa/python3-privacyidea_3.7.2.bb
+++ /dev/null
@@ -1,38 +0,0 @@
-SUMMARY = "identity, multifactor authentication (OTP), authorization, audit"
-DESCRIPTION = "privacyIDEA is an open solution for strong two-factor authentication like OTP tokens, SMS, smartphones or SSH keys. Using privacyIDEA you can enhance your existing applications like local login (PAM, Windows Credential Provider), VPN, remote access, SSH connections, access to web sites or web portals with a second factor during authentication. Thus boosting the security of your existing applications."
-
-HOMEPAGE = "http://www.privacyidea.org/"
-LICENSE = "AGPL-3.0-only"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=c0acfa7a8a03b718abee9135bc1a1c55"
-
-PYPI_PACKAGE = "privacyIDEA"
-SRC_URI[sha256sum] = "17cbfdf0212eec94ffb10b3046093cf25af71b41413b6361668685333c5a35a7"
-
-inherit pypi setuptools3
-
-do_install:append () {
-    rm -fr ${D}${libdir}/${PYTHON_DIR}/site-packages/tests
-}
-
-USERADD_PACKAGES = "${PN}"
-GROUPADD_PARAM:${PN} = "--system privacyidea"
-USERADD_PARAM:${PN} = "--system -g privacyidea -o -r -d /opt/${BPN}  \
-    --shell /bin/false privacyidea"
-
-FILES:${PN} += " ${prefix}/etc/privacyidea/* ${prefix}/lib/privacyidea/*"
-
-RDEPENDS:${PN} += " bash perl freeradius-mysql freeradius-utils"
-
-RDEPENDS:${PN} += "python3 python3-alembic python3-babel python3-bcrypt"
-RDEPENDS:${PN} += "python3-beautifulsoup4 python3-cbor2 python3-certifi python3-cffi python3-chardet"
-RDEPENDS:${PN} += "python3-click python3-configobj python3-croniter python3-cryptography python3-defusedxml"
-RDEPENDS:${PN} += "python3-ecdsa  python3-flask python3-flask-babel python3-flask-migrate"
-RDEPENDS:${PN} += "python3-flask-script python3-flask-sqlalchemy python3-flask-versioned"
-RDEPENDS:${PN} += "python3-future python3-httplib2 python3-huey python3-idna python3-ipaddress"
-RDEPENDS:${PN} += "python3-itsdangerous python3-jinja2 python3-ldap python3-lxml python3-mako"
-RDEPENDS:${PN} += "python3-markupsafe python3-netaddr python3-oauth2client python3-passlib python3-pillow"
-RDEPENDS:${PN} += "python3-pyasn1 python3-pyasn1-modules python3-pycparser python3-pyjwt python3-pymysql"
-RDEPENDS:${PN} += "python3-pyopenssl python3-pyrad python3-dateutil python3-editor python3-gnupg"
-RDEPENDS:${PN} += "python3-pytz python3-pyyaml python3-qrcode python3-redis python3-requests python3-rsa"
-RDEPENDS:${PN} += "python3-six python3-smpplib python3-soupsieve python3-soupsieve "
-RDEPENDS:${PN} += "python3-sqlalchemy python3-sqlsoup python3-urllib3 python3-werkzeug"
diff --git a/meta-security/dynamic-layers/meta-python/recipes-security/mfa/python3-privacyidea_3.7.3.bb b/meta-security/dynamic-layers/meta-python/recipes-security/mfa/python3-privacyidea_3.7.3.bb
new file mode 100644
index 0000000..97fa8f9
--- /dev/null
+++ b/meta-security/dynamic-layers/meta-python/recipes-security/mfa/python3-privacyidea_3.7.3.bb
@@ -0,0 +1,38 @@
+SUMMARY = "identity, multifactor authentication (OTP), authorization, audit"
+DESCRIPTION = "privacyIDEA is an open solution for strong two-factor authentication like OTP tokens, SMS, smartphones or SSH keys. Using privacyIDEA you can enhance your existing applications like local login (PAM, Windows Credential Provider), VPN, remote access, SSH connections, access to web sites or web portals with a second factor during authentication. Thus boosting the security of your existing applications."
+
+HOMEPAGE = "http://www.privacyidea.org/"
+LICENSE = "AGPL-3.0-only"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=c0acfa7a8a03b718abee9135bc1a1c55"
+
+PYPI_PACKAGE = "privacyIDEA"
+SRC_URI[sha256sum] = "7b5725d1af004fe3f68d16c2b14be5a3d61c4d265d18cb7d50a9013da0df42d2"
+
+inherit pypi setuptools3
+
+do_install:append () {
+    rm -fr ${D}${libdir}/${PYTHON_DIR}/site-packages/tests
+}
+
+USERADD_PACKAGES = "${PN}"
+GROUPADD_PARAM:${PN} = "--system privacyidea"
+USERADD_PARAM:${PN} = "--system -g privacyidea -o -r -d /opt/${BPN}  \
+    --shell /bin/false privacyidea"
+
+FILES:${PN} += " ${prefix}/etc/privacyidea/* ${prefix}/lib/privacyidea/*"
+
+RDEPENDS:${PN} += " bash perl freeradius-mysql freeradius-utils"
+
+RDEPENDS:${PN} += "python3 python3-alembic python3-babel python3-bcrypt"
+RDEPENDS:${PN} += "python3-beautifulsoup4 python3-cbor2 python3-certifi python3-cffi python3-chardet"
+RDEPENDS:${PN} += "python3-click python3-configobj python3-croniter python3-cryptography python3-defusedxml"
+RDEPENDS:${PN} += "python3-ecdsa  python3-flask python3-flask-babel python3-flask-migrate"
+RDEPENDS:${PN} += "python3-flask-script python3-flask-sqlalchemy python3-flask-versioned"
+RDEPENDS:${PN} += "python3-future python3-httplib2 python3-huey python3-idna python3-ipaddress"
+RDEPENDS:${PN} += "python3-itsdangerous python3-jinja2 python3-ldap python3-lxml python3-mako"
+RDEPENDS:${PN} += "python3-markupsafe python3-netaddr python3-oauth2client python3-passlib python3-pillow"
+RDEPENDS:${PN} += "python3-pyasn1 python3-pyasn1-modules python3-pycparser python3-pyjwt python3-pymysql"
+RDEPENDS:${PN} += "python3-pyopenssl python3-pyrad python3-dateutil python3-editor python3-gnupg"
+RDEPENDS:${PN} += "python3-pytz python3-pyyaml python3-qrcode python3-redis python3-requests python3-rsa"
+RDEPENDS:${PN} += "python3-six python3-smpplib python3-soupsieve python3-soupsieve "
+RDEPENDS:${PN} += "python3-sqlalchemy python3-sqlsoup python3-urllib3 python3-werkzeug"
diff --git a/meta-security/kas/kas-security-base.yml b/meta-security/kas/kas-security-base.yml
index 3bf46db..a594fd7 100644
--- a/meta-security/kas/kas-security-base.yml
+++ b/meta-security/kas/kas-security-base.yml
@@ -39,8 +39,7 @@
     BB_SIGNATURE_HANDLER = "OEEquivHash"
     INHERIT += "buildstats buildstats-summary buildhistory"
     INHERIT += "report-error"
-    INHERIT += "testimage"
-    INHERIT += "rm_work"
+    IMAGE_CLASSES += "testimage"
     BB_NUMBER_THREADS="24"
     BB_NUMBER_PARSE_THREADS="12"
     BB_TASK_NICE_LEVEL = '5'
diff --git a/meta-security/meta-parsec/README.md b/meta-security/meta-parsec/README.md
index f720cd2..99935bc 100644
--- a/meta-security/meta-parsec/README.md
+++ b/meta-security/meta-parsec/README.md
@@ -99,6 +99,7 @@
 - all providers pre-configured in the Parsec config file included in the image.
 - PKCS11 and TPM providers with software backends if softhsm and
   swtpm packages included in the image.
+- TS provider, if Parsec is built with it enabled.
 
 Meta-parsec also contains a recipe for `security-parsec-image` image with Parsec,
 softhsm and swtpm included.
@@ -214,7 +215,7 @@
   The IBM Software TPM service can be used for manual testing of the provider by
 including it into your test image:
 
-    IMAGE_INSTALL:append = " ibmswtpm2 tpm2-tools libtss2 libtss2-tcti-mssim"
+    IMAGE_INSTALL:append = " swtpm tpm2-tools libtss2 libtss2-tcti-mssim"
 
 Inside the running VM:
 - Stop Parsec
diff --git a/meta-security/meta-parsec/lib/oeqa/runtime/cases/parsec.py b/meta-security/meta-parsec/lib/oeqa/runtime/cases/parsec.py
index 11e5572..6be84ba 100644
--- a/meta-security/meta-parsec/lib/oeqa/runtime/cases/parsec.py
+++ b/meta-security/meta-parsec/lib/oeqa/runtime/cases/parsec.py
@@ -12,12 +12,8 @@
 class ParsecTest(OERuntimeTestCase):
     @classmethod
     def setUpClass(cls):
-        cls.tc.target.run('swtpm_ioctl -s --tcp :2322')
         cls.toml_file = '/etc/parsec/config.toml'
-
-    @classmethod
-    def tearDownClass(cls):
-        cls.tc.target.run('swtpm_ioctl -s --tcp :2322')
+        cls.tc.target.run('cp -p %s %s-original' % (cls.toml_file, cls.toml_file))
 
     def setUp(self):
         super(ParsecTest, self).setUp()
@@ -40,6 +36,11 @@
         status, output = self.target.run('cat %s-%s >>%s' % (self.toml_file, provider, self.toml_file))
         os.remove(tmp_path)
 
+    def restore_parsec_config(self):
+        """ Restore original Parsec config """
+        self.target.run('cp -p %s-original %s' % (self.toml_file, self.toml_file))
+        self.target.run(self.parsec_reload)
+
     def check_parsec_providers(self, provider=None, prov_id=None):
         """ Get Parsec providers list and check for one if defined """
 
@@ -58,6 +59,23 @@
         status, output = self.target.run('parsec-cli-tests.sh %s' % ("-%d" % prov_id if prov_id else ""))
         self.assertEqual(status, 0, msg='Parsec CLI tests failed.\n %s' % output)
 
+    def check_packageconfig(self, prov):
+        """ Check that the require provider is included in Parsec """
+        if prov not in self.tc.td['PACKAGECONFIG:pn-parsec-service']:
+            self.skipTest('%s provider is not included in Parsec. Parsec PACKAGECONFIG: "%s"' % \
+                          (prov, self.tc.td['PACKAGECONFIG:pn-parsec-service']))
+
+    def check_packages(self, prov, packages):
+        """ Check for the required packages for Parsec providers software backends """
+        if isinstance(packages, str):
+            need_pkgs = set([packages,])
+        else:
+            need_pkgs = set(packages)
+
+        if not self.tc.image_packages.issuperset(need_pkgs):
+            self.skipTest('%s provider is not configured and packages "%s" are not included in the image' % \
+                          (prov, need_pkgs))
+
     @OEHasPackage(['parsec-service'])
     @OETestDepends(['ssh.SSHTest.test_ssh'])
     def test_all_providers(self):
@@ -84,7 +102,9 @@
                 'mkdir /tmp/myvtpm',
                 'swtpm socket -d --tpmstate dir=/tmp/myvtpm --tpm2 --ctrl type=tcp,port=2322 --server type=tcp,port=2321 --flags not-need-init',
                 'tpm2_startup -c -T "swtpm:port=2321"',
+                'chown -R parsec /tmp/myvtpm',
                 self.parsec_reload,
+                'sleep 5',
                ]
 
         for cmd in cmds:
@@ -92,16 +112,30 @@
             self.assertEqual(status, 0, msg='\n'.join([cmd, output]))
 
     @OEHasPackage(['parsec-service'])
-    @OEHasPackage(['swtpm'])
     @skipIfNotFeature('tpm2','Test parsec_tpm_provider requires tpm2 to be in DISTRO_FEATURES')
-    @OETestDepends(['ssh.SSHTest.test_ssh', 'parsec.ParsecTest.test_all_providers'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
     def test_tpm_provider(self):
         """ Configure and test Parsec TPM provider with swtpm as a backend """
 
+        self.check_packageconfig("TPM")
+
+        reconfigure = False
         prov_id = 3
-        self.configure_tpm_provider()
-        self.check_parsec_providers("TPM", prov_id)
+        try:
+            # Check if the provider is already configured
+            self.check_parsec_providers("TPM", prov_id)
+        except:
+            # Try to test the provider with a software backend
+            self.check_packages("TPM", ['swtpm', 'tpm2-tools'])
+            reconfigure = True
+            self.configure_tpm_provider()
+            self.check_parsec_providers("TPM", prov_id)
+
         self.run_cli_tests(prov_id)
+        self.restore_parsec_config()
+
+        if reconfigure:
+            self.target.run('swtpm_ioctl -s --tcp :2322')
 
     def configure_pkcs11_provider(self):
         """ Create Parsec PKCS11 provider configuration """
@@ -132,12 +166,52 @@
         self.assertEqual(status, 0, msg='Failed to reload Parsec.\n%s' % output)
 
     @OEHasPackage(['parsec-service'])
-    @OEHasPackage(['softhsm'])
-    @OETestDepends(['ssh.SSHTest.test_ssh', 'parsec.ParsecTest.test_all_providers'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
     def test_pkcs11_provider(self):
         """ Configure and test Parsec PKCS11 provider with softhsm as a backend """
 
+        self.check_packageconfig("PKCS11")
         prov_id = 2
-        self.configure_pkcs11_provider()
-        self.check_parsec_providers("PKCS #11", prov_id)
+        try:
+            # Check if the provider is already configured
+            self.check_parsec_providers("PKCS #11", prov_id)
+        except:
+            # Try to test the provider with a software backend
+            self.check_packages("PKCS11", 'softhsm')
+            self.configure_pkcs11_provider()
+            self.check_parsec_providers("PKCS #11", prov_id)
+
         self.run_cli_tests(prov_id)
+        self.restore_parsec_config()
+
+    def configure_TS_provider(self):
+        """ Create Trusted Services provider configuration """
+
+        cfg = [
+                '',
+                '[[provider]]',
+                'name = "trusted-service-provider"',
+                'provider_type = "TrustedService"',
+                'key_info_manager = "sqlite-manager"',
+              ]
+        self.copy_subconfig(cfg, "TS")
+
+        status, output = self.target.run(self.parsec_reload)
+        self.assertEqual(status, 0, msg='Failed to reload Parsec.\n%s' % output)
+
+    @OEHasPackage(['parsec-service'])
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
+    def test_TS_provider(self):
+        """ Configure and test Parsec PKCS11 provider with softhsm as a backend """
+
+        self.check_packageconfig("TS")
+        prov_id = 4
+        try:
+            # Check if the provider is already configured
+            self.check_parsec_providers("Trusted Service", prov_id)
+        except:
+            self.configure_TS_provider()
+            self.check_parsec_providers("Trusted Service", prov_id)
+
+        self.run_cli_tests(prov_id)
+        self.restore_parsec_config()
diff --git a/meta-security/meta-parsec/recipes-parsec/parsec-service/parsec-service_1.0.0.bb b/meta-security/meta-parsec/recipes-parsec/parsec-service/parsec-service_1.0.0.bb
index 84539f9..931abee 100644
--- a/meta-security/meta-parsec/recipes-parsec/parsec-service/parsec-service_1.0.0.bb
+++ b/meta-security/meta-parsec/recipes-parsec/parsec-service/parsec-service_1.0.0.bb
@@ -45,7 +45,7 @@
 do_install () {
     # Binaries
     install -d -m 700 -o parsec -g parsec "${D}${libexecdir}/parsec"
-    install -m 700 -o parsec -g parsec "${WORKDIR}/build/target/${CARGO_TARGET_SUBDIR}/parsec" ${D}${libexecdir}/parsec/parsec
+    install -m 700 -o parsec -g parsec "${B}/target/${CARGO_TARGET_SUBDIR}/parsec" ${D}${libexecdir}/parsec/parsec
 
     # Config file
     install -d -m 700 -o parsec -g parsec "${D}${sysconfdir}/parsec"
@@ -69,9 +69,10 @@
 
 inherit useradd
 USERADD_PACKAGES = "${PN}"
-USERADD_PARAM:${PN} = "-r -g parsec -s /bin/false -d ${localstatedir}/lib/parsec parsec"
 GROUPADD_PARAM:${PN} = "-r parsec"
-GROUPMEMS_PARAM:${PN} = "${@bb.utils.contains('PACKAGECONFIG_CONFARGS', 'tpm-provider', '-a parsec -g tss', '', d)}"
+USERADD_PARAM:${PN} = "-r -g parsec -s /bin/false -d ${localstatedir}/lib/parsec parsec"
+GROUPMEMS_PARAM:${PN} = "${@bb.utils.contains('PACKAGECONFIG_CONFARGS', 'tpm-provider', '-a parsec -g tss ;', '', d)}"
+GROUPMEMS_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG_CONFARGS', 'trusted-service-provider', '-a parsec -g teeclnt', '', d)}"
 
 FILES:${PN} += " \
     ${sysconfdir}/parsec/config.toml \
diff --git a/meta-security/meta-parsec/recipes-parsec/parsec-tool/parsec-tool_0.5.2.bb b/meta-security/meta-parsec/recipes-parsec/parsec-tool/parsec-tool_0.5.2.bb
index 4b053b9..6ecce8e 100644
--- a/meta-security/meta-parsec/recipes-parsec/parsec-tool/parsec-tool_0.5.2.bb
+++ b/meta-security/meta-parsec/recipes-parsec/parsec-tool/parsec-tool_0.5.2.bb
@@ -11,7 +11,7 @@
 
 do_install() {
   install -d ${D}/${bindir}
-  install -m 755 "${B}/target/${TARGET_SYS}/release/parsec-tool" "${D}${bindir}/parsec-tool"
+  install -m 755 "${B}/target/${CARGO_TARGET_SUBDIR}/parsec-tool" "${D}${bindir}/parsec-tool"
   install -m 755 "${S}/tests/parsec-cli-tests.sh" "${D}${bindir}/parsec-cli-tests.sh"
 }
 
diff --git a/meta-security/recipes-core/packagegroup/packagegroup-core-security.bb b/meta-security/recipes-core/packagegroup/packagegroup-core-security.bb
index a12a4c2..22c1245 100644
--- a/meta-security/recipes-core/packagegroup/packagegroup-core-security.bb
+++ b/meta-security/recipes-core/packagegroup/packagegroup-core-security.bb
@@ -28,9 +28,11 @@
 RDEPENDS:packagegroup-security-utils = "\
     bubblewrap \
     checksec \
+    cryptmount \
     ding-libs \
     ecryptfs-utils \
     fscryptctl \
+    glome \
     keyutils \
     nmap \
     pinentry \
@@ -42,8 +44,8 @@
     ${@bb.utils.contains("DISTRO_FEATURES", "pax", "pax-utils packctl", "",d)} \
     "
 
-RDEPENDS:packagegroup-security-utils:append:x86 = "chipsec"
-RDEPENDS:packagegroup-security-utils:append:x86-64 = "chipsec"
+RDEPENDS:packagegroup-security-utils:append:x86 = " chipsec"
+RDEPENDS:packagegroup-security-utils:append:x86-64 = " chipsec"
 RDEPENDS:packagegroup-security-utils:remove:mipsarch = "firejail krill"
 RDEPENDS:packagegroup-security-utils:remove:libc-musl = "krill"
 RDEPENDS:packagegroup-security-utils:remove:riscv64 = "krill"
diff --git a/meta-security/recipes-ids/samhain/files/0001-Don-t-expose-configure-args.patch b/meta-security/recipes-ids/samhain/files/0001-Don-t-expose-configure-args.patch
new file mode 100644
index 0000000..fedbe5b
--- /dev/null
+++ b/meta-security/recipes-ids/samhain/files/0001-Don-t-expose-configure-args.patch
@@ -0,0 +1,44 @@
+From 111b1e8f35e989513d8961a45a806767109f6e1e Mon Sep 17 00:00:00 2001
+From: Mingli Yu <mingli.yu@windriver.com>
+Date: Thu, 11 Aug 2022 17:15:30 +0800
+Subject: [PATCH] Don't expose configure args
+
+Don't expose configure args, to fix the buildpath issue.
+
+Upstream-Status: Inappropriate [oe specific]
+
+Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
+---
+ scripts/samhain.ebuild-light.in | 2 +-
+ scripts/samhain.ebuild.in       | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/scripts/samhain.ebuild-light.in b/scripts/samhain.ebuild-light.in
+index 2b09cdb..b7f7062 100644
+--- a/scripts/samhain.ebuild-light.in
++++ b/scripts/samhain.ebuild-light.in
+@@ -55,7 +55,7 @@ src_compile() {
+ #	      --with-state-dir=/var/lib/${PN} \
+ #	      --with-log-file=/var/log/${PN}.log \
+ 
+-	./configure ${myconf} @mydefargs@ || die
++	./configure ${myconf} mydefargs || die
+         emake || die
+ 
+ 	echo '#!/bin/sh' > ./sstrip
+diff --git a/scripts/samhain.ebuild.in b/scripts/samhain.ebuild.in
+index 635a746..b9a42e7 100644
+--- a/scripts/samhain.ebuild.in
++++ b/scripts/samhain.ebuild.in
+@@ -55,7 +55,7 @@ src_compile() {
+ #	      --with-state-dir=/var/lib/${PN} \
+ #	      --with-log-file=/var/log/${PN}.log \
+ 
+-	./configure ${myconf} @mydefargs@ || die
++	./configure ${myconf} mydefargs || die
+         emake || die
+ 
+ 	echo '#!/bin/sh' > ./sstrip
+-- 
+2.25.1
+
diff --git a/meta-security/recipes-ids/samhain/samhain-standalone.bb b/meta-security/recipes-ids/samhain/samhain-standalone.bb
index 445cb99..b832dc8 100644
--- a/meta-security/recipes-ids/samhain/samhain-standalone.bb
+++ b/meta-security/recipes-ids/samhain/samhain-standalone.bb
@@ -1,6 +1,7 @@
 require samhain.inc
 
 SRC_URI += "file://samhain-not-run-ptest-on-host.patch \
+            file://0001-Don-t-expose-configure-args.patch \
             file://run-ptest \
 "
 
diff --git a/meta-security/recipes-kernel/lkrg/lkrg-module_0.9.4.bb b/meta-security/recipes-kernel/lkrg/lkrg-module_0.9.5.bb
similarity index 100%
rename from meta-security/recipes-kernel/lkrg/lkrg-module_0.9.4.bb
rename to meta-security/recipes-kernel/lkrg/lkrg-module_0.9.5.bb
diff --git a/meta-security/recipes-mac/AppArmor/apparmor_3.0.5.bb b/meta-security/recipes-mac/AppArmor/apparmor_3.0.6.bb
similarity index 100%
rename from meta-security/recipes-mac/AppArmor/apparmor_3.0.5.bb
rename to meta-security/recipes-mac/AppArmor/apparmor_3.0.6.bb
diff --git a/meta-security/recipes-security/cryptmount/cryptmount_5.3.3.bb b/meta-security/recipes-security/cryptmount/cryptmount_5.3.3.bb
new file mode 100644
index 0000000..fb522cb
--- /dev/null
+++ b/meta-security/recipes-security/cryptmount/cryptmount_5.3.3.bb
@@ -0,0 +1,27 @@
+SUMMARY = "Linux encrypted filesystem management tool"
+HOMEPAGE = "http://cryptmount.sourceforge.net/"
+LIC_FILES_CHKSUM = "file://README;beginline=3;endline=4;md5=673a990de93a2c5531a0f13f1c40725a"
+LICENSE = "GPL-2.0-only"
+
+SRC_URI = "https://sourceforge.net/projects/cryptmount/files/${BPN}/${BPN}-5.3/${BPN}-${PV}.tar.gz \
+           file://remove_linux_fs.patch \
+           "
+
+SRC_URI[sha256sum] = "682953ff5ba497d48d6b13e22ca726c98659abd781bb8596bb299640dd255d9b"
+
+inherit autotools-brokensep gettext pkgconfig systemd
+
+EXTRA_OECONF = " --enable-cswap --enable-fsck --enable-argv0switch"
+
+PACKAGECONFIG ?="intl luks gcrypt nls"
+PACKAGECONFIG:append = " ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd', '', d)}"
+
+PACKAGECONFIG[systemd] = "--with-systemd, --without-systemd, systemd"
+PACKAGECONFIG[intl] = "--with-libintl-prefix, --without-libintl-prefix"
+PACKAGECONFIG[gcrypt] = "--with-libgcrypt, --without-libgcrypt, libgcrypt"
+PACKAGECONFIG[luks] = "--enable-luks, --disable-luks, cryptsetup"
+PACKAGECONFIG[nls] = "--enable-nls, --disable-nls, "
+
+SYSTEMD_SERVICE:${PN} = "cryptmount.service"
+
+RDEPENDS:${PN} = "libdevmapper"
diff --git a/meta-security/recipes-security/cryptmount/files/remove_linux_fs.patch b/meta-security/recipes-security/cryptmount/files/remove_linux_fs.patch
new file mode 100644
index 0000000..304b853
--- /dev/null
+++ b/meta-security/recipes-security/cryptmount/files/remove_linux_fs.patch
@@ -0,0 +1,19 @@
+# From glibc 2.36, <linux/mount.h> (included from <linux/fs.h>) and 
+# <sys/mount.h> (included from glibc) are no longer compatible:
+# https://sourceware.org/glibc/wiki/Release/2.36#Usage_of_.3Clinux.2Fmount.h.3E_and_.3Csys.2Fmount.h.3E
+
+Upstream-Status: Pending
+Signed-off-by: Armin Kuster <akuster808@gmail.com>
+
+Index: cryptmount-5.3.3/cryptmount.c
+===================================================================
+--- cryptmount-5.3.3.orig/cryptmount.c
++++ cryptmount-5.3.3/cryptmount.c
+@@ -41,7 +41,6 @@
+ #ifdef HAVE_SYSLOG
+ #  include <syslog.h>
+ #endif
+-#include <linux/fs.h>       /* Beware ordering conflict with sys/mount.h */
+ 
+ 
+ #include "armour.h"
diff --git a/meta-security/recipes-security/glome/glome_git.bb b/meta-security/recipes-security/glome/glome_git.bb
new file mode 100644
index 0000000..12d6d5f
--- /dev/null
+++ b/meta-security/recipes-security/glome/glome_git.bb
@@ -0,0 +1,24 @@
+SUMMARY = "GLOME Login Client"
+HOMEPAGE = "https://github.com/google/glome"
+DESCRIPTION = "GLOME is used to authorize serial console access to Linux machines"
+PV = "0.1+git${SRCPV}"
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
+
+inherit meson pkgconfig
+
+DEPENDS += "openssl"
+
+S = "${WORKDIR}/git"
+SRC_URI = "git://github.com/google/glome.git;branch=master;protocol=https"
+SRCREV = "978ad9fb165f1e382c875f2ce08a1fc4f2ddcf1b"
+
+FILES:${PN} += "${libdir}/security"
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[glome-cli] = "-Dglome-cli=true,-Dglome-cli=false"
+PACKAGECONFIG[pam-glome] = "-Dpam-glome=true,-Dpam-glome=false,libpam"
+
+EXTRA_OEMESON = "-Dtests=false"
+
diff --git a/poky/.templateconf b/poky/.templateconf
index 0fe6f82..faf0348 100644
--- a/poky/.templateconf
+++ b/poky/.templateconf
@@ -1,2 +1,2 @@
 # Template settings
-TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}
+TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf/templates/default}
diff --git a/poky/bitbake/bin/bitbake-layers b/poky/bitbake/bin/bitbake-layers
index 449434d..d4b1d1a 100755
--- a/poky/bitbake/bin/bitbake-layers
+++ b/poky/bitbake/bin/bitbake-layers
@@ -68,11 +68,11 @@
 
         registered = False
         for plugin in plugins:
+            if hasattr(plugin, 'tinfoil_init'):
+                plugin.tinfoil_init(tinfoil)
             if hasattr(plugin, 'register_commands'):
                 registered = True
                 plugin.register_commands(subparsers)
-            if hasattr(plugin, 'tinfoil_init'):
-                plugin.tinfoil_init(tinfoil)
 
         if not registered:
             logger.error("No commands registered - missing plugins?")
diff --git a/poky/bitbake/bin/bitbake-prserv b/poky/bitbake/bin/bitbake-prserv
index 323df66..5be42f3 100755
--- a/poky/bitbake/bin/bitbake-prserv
+++ b/poky/bitbake/bin/bitbake-prserv
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/bin/bitbake-worker b/poky/bitbake/bin/bitbake-worker
index 9d850ec..7be3937 100755
--- a/poky/bitbake/bin/bitbake-worker
+++ b/poky/bitbake/bin/bitbake-worker
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
@@ -455,6 +457,7 @@
         for mc in self.databuilder.mcdata:
             self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
             self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.workerdata["hashservaddr"])
+            self.databuilder.mcdata[mc].setVar("__bbclasstype", "recipe")
 
     def handle_newtaskhashes(self, data):
         self.workerdata["newhashes"] = pickle.loads(data)
diff --git a/poky/bitbake/bin/git-make-shallow b/poky/bitbake/bin/git-make-shallow
index 1d00fbf..d0532c5 100755
--- a/poky/bitbake/bin/git-make-shallow
+++ b/poky/bitbake/bin/git-make-shallow
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/COW.py b/poky/bitbake/lib/bb/COW.py
index 23c22b6..76bc08a 100644
--- a/poky/bitbake/lib/bb/COW.py
+++ b/poky/bitbake/lib/bb/COW.py
@@ -3,6 +3,8 @@
 #
 # Copyright (C) 2006 Tim Ansell
 #
+# SPDX-License-Identifier: GPL-2.0-only
+#
 # Please Note:
 # Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
 # Assign a file to __warn__ to get warnings about slow operations.
diff --git a/poky/bitbake/lib/bb/asyncrpc/__init__.py b/poky/bitbake/lib/bb/asyncrpc/__init__.py
index c2f2b3c..9a85e99 100644
--- a/poky/bitbake/lib/bb/asyncrpc/__init__.py
+++ b/poky/bitbake/lib/bb/asyncrpc/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/asyncrpc/client.py b/poky/bitbake/lib/bb/asyncrpc/client.py
index 3496019..881434d 100644
--- a/poky/bitbake/lib/bb/asyncrpc/client.py
+++ b/poky/bitbake/lib/bb/asyncrpc/client.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/asyncrpc/serv.py b/poky/bitbake/lib/bb/asyncrpc/serv.py
index 585bc12..5cf45f9 100644
--- a/poky/bitbake/lib/bb/asyncrpc/serv.py
+++ b/poky/bitbake/lib/bb/asyncrpc/serv.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/build.py b/poky/bitbake/lib/bb/build.py
index 55f68b9..b8c1099 100644
--- a/poky/bitbake/lib/bb/build.py
+++ b/poky/bitbake/lib/bb/build.py
@@ -20,6 +20,7 @@
 import time
 import re
 import stat
+import datetime
 import bb
 import bb.msg
 import bb.process
@@ -618,7 +619,8 @@
     logorder = os.path.join(tempdir, 'log.task_order')
     try:
         with open(logorder, 'a') as logorderfile:
-            logorderfile.write('{0} ({1}): {2}\n'.format(task, os.getpid(), logbase))
+            timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S.%f")
+            logorderfile.write('{0} {1} ({2}): {3}\n'.format(timestamp, task, os.getpid(), logbase))
     except OSError:
         logger.exception("Opening log file '%s'", logorder)
         pass
diff --git a/poky/bitbake/lib/bb/codeparser.py b/poky/bitbake/lib/bb/codeparser.py
index 3b3c3b4..9d66d3a 100644
--- a/poky/bitbake/lib/bb/codeparser.py
+++ b/poky/bitbake/lib/bb/codeparser.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/compress/_pipecompress.py b/poky/bitbake/lib/bb/compress/_pipecompress.py
index 5de17a8..4a403d6 100644
--- a/poky/bitbake/lib/bb/compress/_pipecompress.py
+++ b/poky/bitbake/lib/bb/compress/_pipecompress.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Helper library to implement streaming compression and decompression using an
diff --git a/poky/bitbake/lib/bb/compress/lz4.py b/poky/bitbake/lib/bb/compress/lz4.py
index 0f6bc51..88b0989 100644
--- a/poky/bitbake/lib/bb/compress/lz4.py
+++ b/poky/bitbake/lib/bb/compress/lz4.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/compress/zstd.py b/poky/bitbake/lib/bb/compress/zstd.py
index 50c4213..cdbbe9d 100644
--- a/poky/bitbake/lib/bb/compress/zstd.py
+++ b/poky/bitbake/lib/bb/compress/zstd.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/cooker.py b/poky/bitbake/lib/bb/cooker.py
index 2adf4d2..1b6ee30 100644
--- a/poky/bitbake/lib/bb/cooker.py
+++ b/poky/bitbake/lib/bb/cooker.py
@@ -402,6 +402,7 @@
         for mc in self.databuilder.mcdata.values():
             mc.renameVar("__depends", "__base_depends")
             self.add_filewatch(mc.getVar("__base_depends", False), self.configwatcher)
+            mc.setVar("__bbclasstype", "recipe")
 
         self.baseconfig_valid = True
         self.parsecache_valid = False
diff --git a/poky/bitbake/lib/bb/cookerdata.py b/poky/bitbake/lib/bb/cookerdata.py
index d54ac93..9706948 100644
--- a/poky/bitbake/lib/bb/cookerdata.py
+++ b/poky/bitbake/lib/bb/cookerdata.py
@@ -254,6 +254,7 @@
         filtered_keys = bb.utils.approved_variables()
         bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
         self.basedata.setVar("BB_ORIGENV", self.savedenv)
+        self.basedata.setVar("__bbclasstype", "global")
 
         if worker:
             self.basedata.setVar("BB_WORKERCONTEXT", "1")
diff --git a/poky/bitbake/lib/bb/daemonize.py b/poky/bitbake/lib/bb/daemonize.py
index 4957bfd..7689404 100644
--- a/poky/bitbake/lib/bb/daemonize.py
+++ b/poky/bitbake/lib/bb/daemonize.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/data.py b/poky/bitbake/lib/bb/data.py
index c09d9b0..53fe348 100644
--- a/poky/bitbake/lib/bb/data.py
+++ b/poky/bitbake/lib/bb/data.py
@@ -441,7 +441,7 @@
 
 def inherits_class(klass, d):
     val = d.getVar('__inherit_cache', False) or []
-    needle = os.path.join('classes', '%s.bbclass' % klass)
+    needle = '/%s.bbclass' % klass
     for v in val:
         if v.endswith(needle):
             return True
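
With classes now split between classes-global/ and classes-recipe/, entries in __inherit_cache can come from any of those directories, so inherits_class() matches on the '/<class>.bbclass' suffix instead of a hard-coded 'classes/' path component. A small sketch of the effect, using made-up cache paths:

    # Made-up __inherit_cache entries; only the suffix match matters here.
    inherit_cache = [
        "/layers/poky/meta/classes-recipe/cmake.bbclass",
        "/layers/poky/meta/classes-global/sanity.bbclass",
    ]

    def inherits_class(klass, cache):
        needle = "/%s.bbclass" % klass
        return any(entry.endswith(needle) for entry in cache)

    print(inherits_class("cmake", inherit_cache))   # True
    print(inherits_class("kernel", inherit_cache))  # False
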
diff --git a/poky/bitbake/lib/bb/exceptions.py b/poky/bitbake/lib/bb/exceptions.py
index ecbad59..801db9c 100644
--- a/poky/bitbake/lib/bb/exceptions.py
+++ b/poky/bitbake/lib/bb/exceptions.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/fetch2/gitsm.py b/poky/bitbake/lib/bb/fetch2/gitsm.py
index c1950e4..25d5db0 100644
--- a/poky/bitbake/lib/bb/fetch2/gitsm.py
+++ b/poky/bitbake/lib/bb/fetch2/gitsm.py
@@ -115,6 +115,9 @@
                     # This has to be a file reference
                     proto = "file"
                     url = "gitsm://" + uris[module]
+            if "{}{}".format(ud.host, ud.path) in url:
+                raise bb.fetch2.FetchError("Submodule refers to the parent repository. This will cause deadlock situation in current version of Bitbake." \
+                                           "Consider using git fetcher instead.")
 
             url += ';protocol=%s' % proto
             url += ";name=%s" % module
diff --git a/poky/bitbake/lib/bb/fetch2/npm.py b/poky/bitbake/lib/bb/fetch2/npm.py
index 8f7c10a..8a179a3 100644
--- a/poky/bitbake/lib/bb/fetch2/npm.py
+++ b/poky/bitbake/lib/bb/fetch2/npm.py
@@ -156,7 +156,7 @@
             raise ParameterError("Invalid 'version' parameter", ud.url)
 
         # Extract the 'registry' part of the url
-        ud.registry = re.sub(r"^npm://", "http://", ud.url.split(";")[0])
+        ud.registry = re.sub(r"^npm://", "https://", ud.url.split(";")[0])
 
         # Using the 'downloadfilename' parameter as local filename
         # or the npm package name.
diff --git a/poky/bitbake/lib/bb/fetch2/osc.py b/poky/bitbake/lib/bb/fetch2/osc.py
index 86f8ddf..495ac8a 100644
--- a/poky/bitbake/lib/bb/fetch2/osc.py
+++ b/poky/bitbake/lib/bb/fetch2/osc.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 """
diff --git a/poky/bitbake/lib/bb/parse/parse_py/BBHandler.py b/poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
index 6841573..18e6868 100644
--- a/poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
+++ b/poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
@@ -44,23 +44,36 @@
     __inherit_cache = d.getVar('__inherit_cache', False) or []
     files = d.expand(files).split()
     for file in files:
-        if not os.path.isabs(file) and not file.endswith(".bbclass"):
-            file = os.path.join('classes', '%s.bbclass' % file)
+        classtype = d.getVar("__bbclasstype", False)
+        origfile = file
+        for t in ["classes-" + classtype, "classes"]:
+            file = origfile
+            if not os.path.isabs(file) and not file.endswith(".bbclass"):
+                file = os.path.join(t, '%s.bbclass' % file)
 
-        if not os.path.isabs(file):
-            bbpath = d.getVar("BBPATH")
-            abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
-            for af in attempts:
-                if af != abs_fn:
-                    bb.parse.mark_dependency(d, af)
-            if abs_fn:
-                file = abs_fn
+            if not os.path.isabs(file):
+                bbpath = d.getVar("BBPATH")
+                abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
+                for af in attempts:
+                    if af != abs_fn:
+                        bb.parse.mark_dependency(d, af)
+                if abs_fn:
+                    file = abs_fn
+
+            if os.path.exists(file):
+                break
+
+        if not os.path.exists(file):
+            raise ParseError("Could not inherit file %s" % (file), fn, lineno)
 
         if not file in __inherit_cache:
             logger.debug("Inheriting %s (from %s:%d)" % (file, fn, lineno))
             __inherit_cache.append( file )
             d.setVar('__inherit_cache', __inherit_cache)
-            include(fn, file, lineno, d, "inherit")
+            try:
+                bb.parse.handle(file, d, True)
+            except (IOError, OSError) as exc:
+                raise ParseError("Could not inherit file %s: %s" % (fn, exc.strerror), fn, lineno)
             __inherit_cache = d.getVar('__inherit_cache', False) or []
 
 def get_statements(filename, absolute_filename, base_name):
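
The reworked inherit handling tries a context-specific class directory first (classes-global when parsing configuration, classes-recipe when parsing recipes) and falls back to the plain classes directory. A minimal sketch of that search order, assuming a simple list of layer directories; resolve_bbclass() is illustrative, not BitBake API:

    import os

    def resolve_bbclass(name, layer_dirs, classtype):
        # classes-<classtype>/<name>.bbclass is preferred; plain classes/
        # remains the backwards-compatible fallback.
        for subdir in ("classes-" + classtype, "classes"):
            for layer in layer_dirs:
                candidate = os.path.join(layer, subdir, name + ".bbclass")
                if os.path.exists(candidate):
                    return candidate
        raise FileNotFoundError("Could not inherit %s" % name)

    # A recipe inheriting "cmake" is resolved against classes-recipe/ first,
    # while a class inherited from a conf file looks in classes-global/ first.
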
diff --git a/poky/bitbake/lib/bb/process.py b/poky/bitbake/lib/bb/process.py
index be2c15a..4c7b6d3 100644
--- a/poky/bitbake/lib/bb/process.py
+++ b/poky/bitbake/lib/bb/process.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/runqueue.py b/poky/bitbake/lib/bb/runqueue.py
index 359b503..48e2540 100644
--- a/poky/bitbake/lib/bb/runqueue.py
+++ b/poky/bitbake/lib/bb/runqueue.py
@@ -168,15 +168,19 @@
         openSUSE /proc/pressure/* files have readable file permissions but when read the error EOPNOTSUPP (Operation not supported)
         is returned.
         """
-        if self.rq.max_cpu_pressure or self.rq.max_io_pressure:
+        if self.rq.max_cpu_pressure or self.rq.max_io_pressure or self.rq.max_memory_pressure:
             try:
-                with open("/proc/pressure/cpu") as cpu_pressure_fds, open("/proc/pressure/io") as io_pressure_fds:
+                with open("/proc/pressure/cpu") as cpu_pressure_fds, \
+                    open("/proc/pressure/io") as io_pressure_fds, \
+                    open("/proc/pressure/memory") as memory_pressure_fds:
+
                     self.prev_cpu_pressure = cpu_pressure_fds.readline().split()[4].split("=")[1]
                     self.prev_io_pressure = io_pressure_fds.readline().split()[4].split("=")[1]
+                    self.prev_memory_pressure = memory_pressure_fds.readline().split()[4].split("=")[1]
                     self.prev_pressure_time = time.time()
                 self.check_pressure = True
             except:
-                bb.warn("The /proc/pressure files can't be read. Continuing build without monitoring pressure")
+                bb.note("The /proc/pressure files can't be read. Continuing build without monitoring pressure")
                 self.check_pressure = False
         else:
             self.check_pressure = False
@@ -184,21 +188,26 @@
     def exceeds_max_pressure(self):
         """
         Monitor the difference in total pressure at least once per second, if
-        BB_PRESSURE_MAX_{CPU|IO} are set, return True if above threshold.
+        BB_PRESSURE_MAX_{CPU|IO|MEMORY} are set, return True if above threshold.
         """
         if self.check_pressure:
-            with open("/proc/pressure/cpu") as cpu_pressure_fds, open("/proc/pressure/io") as io_pressure_fds:
+            with open("/proc/pressure/cpu") as cpu_pressure_fds, \
+                open("/proc/pressure/io") as io_pressure_fds, \
+                open("/proc/pressure/memory") as memory_pressure_fds:
                 # extract "total" from /proc/pressure/{cpu|io}
                 curr_cpu_pressure = cpu_pressure_fds.readline().split()[4].split("=")[1]
                 curr_io_pressure = io_pressure_fds.readline().split()[4].split("=")[1]
+                curr_memory_pressure = memory_pressure_fds.readline().split()[4].split("=")[1]
                 exceeds_cpu_pressure =  self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) > self.rq.max_cpu_pressure
                 exceeds_io_pressure =  self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) > self.rq.max_io_pressure
+                exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) > self.rq.max_memory_pressure
                 now = time.time()
                 if now - self.prev_pressure_time > 1.0:
                     self.prev_cpu_pressure = curr_cpu_pressure
                     self.prev_io_pressure = curr_io_pressure
+                    self.prev_memory_pressure = curr_memory_pressure
                     self.prev_pressure_time = now
-            return (exceeds_cpu_pressure or exceeds_io_pressure)
+            return (exceeds_cpu_pressure or exceeds_io_pressure or exceeds_memory_pressure)
         return False
 
     def next_buildable_task(self):
@@ -1748,6 +1757,7 @@
         self.scheduler = self.cfgData.getVar("BB_SCHEDULER") or "speed"
         self.max_cpu_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_CPU")
         self.max_io_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_IO")
+        self.max_memory_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_MEMORY")
 
         self.sq_buildable = set()
         self.sq_running = set()
@@ -1798,6 +1808,13 @@
             if self.max_io_pressure > upper_limit:
                 bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_IO is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_io_pressure))
 
+        if self.max_memory_pressure:
+            self.max_memory_pressure = float(self.max_memory_pressure)
+            if self.max_memory_pressure < lower_limit:
+                bb.fatal("Invalid BB_PRESSURE_MAX_MEMORY %s, minimum value is %s." % (self.max_memory_pressure, lower_limit))
+            if self.max_memory_pressure > upper_limit:
+                bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_MEMORY is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_io_pressure))
+            
         # List of setscene tasks which we've covered
         self.scenequeue_covered = set()
         # List of tasks which are covered (including setscene ones)
@@ -2237,10 +2254,9 @@
 
         # No more tasks can be run. If we have deferred setscene tasks we should run them.
         if self.sq_deferred:
-            tid = self.sq_deferred.pop(list(self.sq_deferred.keys())[0])
-            logger.warning("Runqeueue deadlocked on deferred tasks, forcing task %s" % tid)
-            if tid not in self.runq_complete:
-                self.sq_task_failoutright(tid)
+            deferred_tid = list(self.sq_deferred.keys())[0]
+            blocking_tid = self.sq_deferred.pop(deferred_tid)
+            logger.warning("Runqeueue deadlocked on deferred tasks, forcing task %s blocked by %s" % (deferred_tid, blocking_tid))
             return True
 
         if self.failed_tids:
@@ -2506,11 +2522,14 @@
 
         if update_tasks:
             self.sqdone = False
-            for tid in [t[0] for t in update_tasks]:
-                h = pending_hash_index(tid, self.rqdata)
-                if h in self.sqdata.hashes and tid != self.sqdata.hashes[h]:
-                    self.sq_deferred[tid] = self.sqdata.hashes[h]
-                    bb.note("Deferring %s after %s" % (tid, self.sqdata.hashes[h]))
+            for mc in sorted(self.sqdata.multiconfigs):
+                for tid in sorted([t[0] for t in update_tasks]):
+                    if mc_from_tid(tid) != mc:
+                        continue
+                    h = pending_hash_index(tid, self.rqdata)
+                    if h in self.sqdata.hashes and tid != self.sqdata.hashes[h]:
+                        self.sq_deferred[tid] = self.sqdata.hashes[h]
+                        bb.note("Deferring %s after %s" % (tid, self.sqdata.hashes[h]))
             update_scenequeue_data([t[0] for t in update_tasks], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
 
         for (tid, harddepfail, origvalid) in update_tasks:
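
The pressure checks read the cumulative "total" stall counter from the "some" line of /proc/pressure/{cpu,io,memory} and compare its growth since the previous sample against BB_PRESSURE_MAX_{CPU,IO,MEMORY}. A small sketch of that parsing step, using an illustrative PSI line and an assumed threshold value:

    # Illustrative line in the standard /proc/pressure format.
    sample = "some avg10=0.00 avg60=0.12 avg300=0.05 total=123456789"

    fields = sample.split()          # ['some', 'avg10=..', 'avg60=..', 'avg300=..', 'total=..']
    curr_total = float(fields[4].split("=")[1])   # stall time in microseconds

    prev_total = 123000000.0          # hypothetical value from the previous sample
    max_pressure = 400000             # e.g. BB_PRESSURE_MAX_CPU = "400000"

    # A new task is held back while the counter grew more than the threshold
    # since the last sample (samples are refreshed at most once per second).
    print((curr_total - prev_total) > max_pressure)   # True
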
diff --git a/poky/bitbake/lib/bb/siggen.py b/poky/bitbake/lib/bb/siggen.py
index 3f3d6df..07bb529 100644
--- a/poky/bitbake/lib/bb/siggen.py
+++ b/poky/bitbake/lib/bb/siggen.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
@@ -425,7 +427,7 @@
                 bb.error("Taskhash mismatch %s versus %s for %s" % (computed_taskhash, self.taskhash[tid], tid))
                 sigfile = sigfile.replace(self.taskhash[tid], computed_taskhash)
 
-        fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
+        fd, tmpfile = bb.utils.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
         try:
             with bb.compress.zstd.open(fd, "wt", encoding="utf-8", num_threads=1) as f:
                 json.dump(data, f, sort_keys=True, separators=(",", ":"), cls=SetEncoder)
diff --git a/poky/bitbake/lib/bb/tests/compression.py b/poky/bitbake/lib/bb/tests/compression.py
index d3ddf67..95af3f9 100644
--- a/poky/bitbake/lib/bb/tests/compression.py
+++ b/poky/bitbake/lib/bb/tests/compression.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/tests/cooker.py b/poky/bitbake/lib/bb/tests/cooker.py
index c82d4b7..9e524ae 100644
--- a/poky/bitbake/lib/bb/tests/cooker.py
+++ b/poky/bitbake/lib/bb/tests/cooker.py
@@ -1,6 +1,8 @@
 #
 # BitBake Tests for cooker.py
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bb/tests/fetch.py b/poky/bitbake/lib/bb/tests/fetch.py
index 7fcf57e..b4ed691 100644
--- a/poky/bitbake/lib/bb/tests/fetch.py
+++ b/poky/bitbake/lib/bb/tests/fetch.py
@@ -11,6 +11,7 @@
 import tempfile
 import collections
 import os
+import signal
 import tarfile
 from bb.fetch2 import URI
 from bb.fetch2 import FetchMethod
@@ -22,6 +23,24 @@
         return unittest.skip("network test")
     return lambda f: f
 
+class TestTimeout(Exception):
+    pass
+
+class Timeout():
+
+    def __init__(self, seconds):
+        self.seconds = seconds
+
+    def handle_timeout(self, signum, frame):
+        raise TestTimeout("Test failed: timeout reached")
+
+    def __enter__(self):
+        signal.signal(signal.SIGALRM, self.handle_timeout)
+        signal.alarm(self.seconds)
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        signal.alarm(0)
+
 class URITest(unittest.TestCase):
     test_uris = {
         "http://www.google.com/index.html" : {
@@ -1172,6 +1191,15 @@
         self.assertTrue(os.path.exists(os.path.join(repo_path, 'edgelet/hsm-sys/azure-iot-hsm-c/deps/utpm/deps/c-utility/testtools/umock-c/deps/ctest/README.md')), msg='Missing submodule checkout')
         self.assertTrue(os.path.exists(os.path.join(repo_path, 'edgelet/hsm-sys/azure-iot-hsm-c/deps/utpm/deps/c-utility/testtools/umock-c/deps/testrunner/readme.md')), msg='Missing submodule checkout')
 
+    @skipIfNoNetwork()
+    def test_git_submodule_reference_to_parent(self):
+        self.recipe_url = "gitsm://github.com/gflags/gflags.git;protocol=https;branch=master"
+        self.d.setVar("SRCREV", "14e1138441bbbb584160cb1c0a0426ec1bac35f1")
+        with Timeout(60):
+            fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+            with self.assertRaises(bb.fetch2.FetchError):
+                fetcher.download()
+
 class SVNTest(FetcherTest):
     def skipIfNoSvn():
         import shutil
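
The new gitsm test wraps the fetch in a SIGALRM-based Timeout context manager so a regression back into the submodule deadlock fails quickly instead of hanging the test run. A self-contained usage sketch of the same pattern (Unix-only; the sleep stands in for a fetch that would hang):

    import signal, time

    class TestTimeout(Exception):
        pass

    class Timeout():
        # Arm SIGALRM on entry, disarm on exit, raise if the body runs too long.
        def __init__(self, seconds):
            self.seconds = seconds
        def handle_timeout(self, signum, frame):
            raise TestTimeout("Test failed: timeout reached")
        def __enter__(self):
            signal.signal(signal.SIGALRM, self.handle_timeout)
            signal.alarm(self.seconds)
        def __exit__(self, exc_type, exc_val, exc_tb):
            signal.alarm(0)

    try:
        with Timeout(1):
            time.sleep(2)   # stand-in for an operation that deadlocks
    except TestTimeout as e:
        print(e)            # Test failed: timeout reached
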
diff --git a/poky/bitbake/lib/bb/tests/parse.py b/poky/bitbake/lib/bb/tests/parse.py
index 1a3b749..ee7f253 100644
--- a/poky/bitbake/lib/bb/tests/parse.py
+++ b/poky/bitbake/lib/bb/tests/parse.py
@@ -164,6 +164,7 @@
     # become unset/disappear.
     #
     def test_parse_classextend_contamination(self):
+        self.d.setVar("__bbclasstype", "recipe")
         cls = self.parsehelper(self.classextend_bbclass, suffix=".bbclass")
         #clsname = os.path.basename(cls.name).replace(".bbclass", "")
         self.classextend = self.classextend.replace("###CLASS###", cls.name)
diff --git a/poky/bitbake/lib/bb/utils.py b/poky/bitbake/lib/bb/utils.py
index 19ed68e..92d44c5 100644
--- a/poky/bitbake/lib/bb/utils.py
+++ b/poky/bitbake/lib/bb/utils.py
@@ -28,6 +28,8 @@
 import collections
 import copy
 import ctypes
+import random
+import tempfile
 from subprocess import getstatusoutput
 from contextlib import contextmanager
 from ctypes import cdll
@@ -429,12 +431,14 @@
     return eval(source, ctx, locals)
 
 @contextmanager
-def fileslocked(files):
+def fileslocked(files, *args, **kwargs):
     """Context manager for locking and unlocking file locks."""
     locks = []
     if files:
         for lockfile in files:
-            locks.append(bb.utils.lockfile(lockfile))
+            l = bb.utils.lockfile(lockfile, *args, **kwargs)
+            if l is not None:
+                locks.append(l)
 
     try:
         yield
@@ -1754,3 +1758,22 @@
             if str(uid) == line_split[2]:
                 return True
     return False
+
+def mkstemp(suffix=None, prefix=None, dir=None, text=False):
+    """
+    Generates a unique filename, independent of time.
+
+    mkstemp() in glibc (at least) generates unique file names based on the
+    current system time. When combined with highly parallel builds, and
+    operating over NFS (e.g. shared sstate/downloads) this can result in
+    conflicts and race conditions.
+
+    This function adds additional entropy to the file name so that a collision
+    is independent of time and thus extremely unlikely.
+    """
+    entropy = "".join(random.choices("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890", k=20))
+    if prefix:
+        prefix = prefix + entropy
+    else:
+        prefix = tempfile.gettempprefix() + entropy
+    return tempfile.mkstemp(suffix=suffix, prefix=prefix, dir=dir, text=text)
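
In bb/utils.py, fileslocked() now forwards extra arguments to lockfile() and
skips locks that could not be taken, and a new mkstemp() wrapper adds random
characters to the file name prefix so that uniqueness no longer depends on the
current time. This avoids collisions when many parallel builders create
temporary files on shared NFS storage. A standalone sketch of the entropy idea
(the wrapper name and "bb-" prefix below are illustrative, not the bb.utils
API)::

    import os
    import random
    import string
    import tempfile

    def mkstemp_with_entropy(suffix=None, prefix=None, dir=None, text=False):
        # 20 random characters make a collision independent of time, which
        # matters for highly parallel builds over shared sstate/downloads.
        entropy = "".join(random.choices(string.ascii_letters + string.digits, k=20))
        prefix = (prefix or tempfile.gettempprefix()) + entropy
        return tempfile.mkstemp(suffix=suffix, prefix=prefix, dir=dir, text=text)

    fd, path = mkstemp_with_entropy(prefix="bb-")
    print(path)
    os.close(fd)
    os.remove(path)
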
diff --git a/poky/bitbake/lib/bblayers/__init__.py b/poky/bitbake/lib/bblayers/__init__.py
index 4e7c09d..78efd29 100644
--- a/poky/bitbake/lib/bblayers/__init__.py
+++ b/poky/bitbake/lib/bblayers/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bblayers/action.py b/poky/bitbake/lib/bblayers/action.py
index 6723e2c..454c251 100644
--- a/poky/bitbake/lib/bblayers/action.py
+++ b/poky/bitbake/lib/bblayers/action.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bblayers/common.py b/poky/bitbake/lib/bblayers/common.py
index 6c76ef3..f7b9cee 100644
--- a/poky/bitbake/lib/bblayers/common.py
+++ b/poky/bitbake/lib/bblayers/common.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bblayers/layerindex.py b/poky/bitbake/lib/bblayers/layerindex.py
index 7936516..0ac8fd2 100644
--- a/poky/bitbake/lib/bblayers/layerindex.py
+++ b/poky/bitbake/lib/bblayers/layerindex.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/bblayers/query.py b/poky/bitbake/lib/bblayers/query.py
index 525d4f0..afd3951 100644
--- a/poky/bitbake/lib/bblayers/query.py
+++ b/poky/bitbake/lib/bblayers/query.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
@@ -55,11 +57,12 @@
         # Check for overlayed .bbclass files
         classes = collections.defaultdict(list)
         for layerdir in self.bblayers:
-            classdir = os.path.join(layerdir, 'classes')
-            if os.path.exists(classdir):
-                for classfile in os.listdir(classdir):
-                    if os.path.splitext(classfile)[1] == '.bbclass':
-                        classes[classfile].append(classdir)
+            for c in ["classes-global", "classes-recipe", "classes"]:
+                classdir = os.path.join(layerdir, c)
+                if os.path.exists(classdir):
+                    for classfile in os.listdir(classdir):
+                        if os.path.splitext(classfile)[1] == '.bbclass':
+                            classes[classfile].append(classdir)
 
         # Locating classes and other files is a bit more complicated than recipes -
         # layer priority is not a factor; instead BitBake uses the first matching
@@ -122,9 +125,14 @@
         if inherits:
             bbpath = str(self.tinfoil.config_data.getVar('BBPATH'))
             for classname in inherits:
-                classfile = 'classes/%s.bbclass' % classname
-                if not bb.utils.which(bbpath, classfile, history=False):
-                    logger.error('No class named %s found in BBPATH', classfile)
+                found = False
+                for c in ["classes-global", "classes-recipe", "classes"]:
+                    cfile = c + '/%s.bbclass' % classname
+                    if bb.utils.which(bbpath, cfile, history=False):
+                        found = True
+                        break
+                if not found:
+                    logger.error('No class named %s found in BBPATH', classname)
                     sys.exit(1)
 
         pkg_pn = self.tinfoil.cooker.recipecaches[mc].pkg_pn
@@ -172,7 +180,7 @@
                     logger.plain("  %s %s%s", layer.ljust(20), ver, skipped)
 
         global_inherit = (self.tinfoil.config_data.getVar('INHERIT') or "").split()
-        cls_re = re.compile('classes/')
+        cls_re = re.compile('classes.*/')
 
         preffiles = []
         show_unique_pn = []
@@ -405,7 +413,7 @@
                     self.check_cross_depends("RRECOMMENDS", layername, f, best, args.filenames, ignore_layers)
 
             # The inherit class
-            cls_re = re.compile('classes/')
+            cls_re = re.compile('classes.*/')
             if f in self.tinfoil.cooker_data.inherits:
                 inherits = self.tinfoil.cooker_data.inherits[f]
                 for cls in inherits:
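
bitbake-layers now searches the split class directories (classes-global/ and
classes-recipe/) in addition to the legacy classes/ directory when reporting
overlayed classes and resolving --inherits queries. A rough standalone sketch
of the lookup (the layer path in the example call is illustrative)::

    import os

    CLASS_DIRS = ["classes-global", "classes-recipe", "classes"]

    def find_bbclass_files(layerdir):
        """Map each .bbclass file name to the class directories providing it."""
        found = {}
        for c in CLASS_DIRS:
            classdir = os.path.join(layerdir, c)
            if not os.path.isdir(classdir):
                continue
            for classfile in os.listdir(classdir):
                if os.path.splitext(classfile)[1] == ".bbclass":
                    found.setdefault(classfile, []).append(classdir)
        return found

    print(find_bbclass_files("/path/to/meta-mylayer"))
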
diff --git a/poky/bitbake/lib/prserv/__init__.py b/poky/bitbake/lib/prserv/__init__.py
index 9961040..38ced81 100644
--- a/poky/bitbake/lib/prserv/__init__.py
+++ b/poky/bitbake/lib/prserv/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/prserv/client.py b/poky/bitbake/lib/prserv/client.py
index a3f19dd..69ab7a4 100644
--- a/poky/bitbake/lib/prserv/client.py
+++ b/poky/bitbake/lib/prserv/client.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/prserv/db.py b/poky/bitbake/lib/prserv/db.py
index 2710d4a..b4bda70 100644
--- a/poky/bitbake/lib/prserv/db.py
+++ b/poky/bitbake/lib/prserv/db.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/prserv/serv.py b/poky/bitbake/lib/prserv/serv.py
index 0a20b92..c686b20 100644
--- a/poky/bitbake/lib/prserv/serv.py
+++ b/poky/bitbake/lib/prserv/serv.py
@@ -1,4 +1,6 @@
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/bitbake/lib/toaster/manage.py b/poky/bitbake/lib/toaster/manage.py
index ae32619..f8de49c 100755
--- a/poky/bitbake/lib/toaster/manage.py
+++ b/poky/bitbake/lib/toaster/manage.py
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright BitBake Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/documentation/README b/poky/documentation/README
index 6f6a8ec..6333f04 100644
--- a/poky/documentation/README
+++ b/poky/documentation/README
@@ -129,6 +129,10 @@
 Inkscape is need to convert SVG graphics to PNG (for EPUB
 export) and to PDF (for PDF export).
 
+Additionally, install the "fncychap.sty" LaTeX style if you want to build PDFs.
+Debian and Ubuntu ship it in the "texlive-latex-extra" package, while RedHat
+distributions and OpenSUSE provide it in the "texlive-fncychap" package, for example.
+
 To build the documentation locally, run:
 
  $ cd documentation
diff --git a/poky/documentation/conf.py b/poky/documentation/conf.py
index 2a01756..07a15ce 100644
--- a/poky/documentation/conf.py
+++ b/poky/documentation/conf.py
@@ -98,7 +98,7 @@
     'yocto_bugs': ('https://bugzilla.yoctoproject.org%s', None),
     'yocto_ab': ('https://autobuilder.yoctoproject.org%s', None),
     'yocto_docs': ('https://docs.yoctoproject.org%s', None),
-    'yocto_git': ('https://git.yoctoproject.org/cgit/cgit.cgi%s', None),
+    'yocto_git': ('https://git.yoctoproject.org%s', None),
     'yocto_sstate': ('http://sstate.yoctoproject.org%s', None),
     'oe_home': ('https://www.openembedded.org%s', None),
     'oe_lists': ('https://lists.openembedded.org%s', None),
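
The yocto_git mapping above is consumed by Sphinx's sphinx.ext.extlinks
extension: each entry maps a role name to a URL pattern containing %s, so a
reference such as :yocto_git:`/poky/log/?h=kirkstone` now expands to
https://git.yoctoproject.org/poky/log/?h=kirkstone rather than the old cgit
URL. A minimal conf.py sketch of the mechanism, assuming only the extlinks
extension is enabled::

    # conf.py -- minimal sketch of an extlinks mapping
    extensions = ['sphinx.ext.extlinks']

    extlinks = {
        # :yocto_git:`/poky/commit/?id=abc` -> https://git.yoctoproject.org/poky/commit/?id=abc
        'yocto_git': ('https://git.yoctoproject.org%s', None),
    }
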
diff --git a/poky/documentation/dev-manual/common-tasks.rst b/poky/documentation/dev-manual/common-tasks.rst
index 8be3c7c..b08a553 100644
--- a/poky/documentation/dev-manual/common-tasks.rst
+++ b/poky/documentation/dev-manual/common-tasks.rst
@@ -6527,7 +6527,7 @@
 
 This command will ask you to confirm the deletions it identifies.
 
-Note::
+.. note::
 
    The duplicated sstate cache files of one package must have the same
    architecture, which means that sstate cache files with multiple
diff --git a/poky/documentation/kernel-dev/common.rst b/poky/documentation/kernel-dev/common.rst
index dc3345a..16ef645 100644
--- a/poky/documentation/kernel-dev/common.rst
+++ b/poky/documentation/kernel-dev/common.rst
@@ -52,8 +52,8 @@
 ":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
 section:
 
-1. *Initialize the BitBake Environment:* Before building an extensible
-   SDK, you need to initialize the BitBake build environment by sourcing
+1. *Initialize the BitBake Environment:*
+   You need to initialize the BitBake build environment by sourcing
    the build environment script (i.e. :ref:`structure-core-script`)::
 
       $ cd poky
@@ -120,67 +120,10 @@
       NOTE: Starting bitbake server...
       $
 
-5. *Build the Extensible SDK:* Use BitBake to build the extensible SDK
-   specifically for use with images to be run using QEMU::
+5. *Build the Clean Image:* The final step in preparing to work on the
+   kernel is to build an initial image using ``bitbake``::
 
-      $ cd poky/build
-      $ bitbake core-image-minimal -c populate_sdk_ext
-
-   Once
-   the build finishes, you can find the SDK installer file (i.e.
-   ``*.sh`` file) in the following directory::
-
-      poky/build/tmp/deploy/sdk
-
-   For this example, the installer file is named
-   ``poky-glibc-x86_64-core-image-minimal-i586-toolchain-ext-&DISTRO;.sh``.
-
-6. *Install the Extensible SDK:* Use the following command to install
-   the SDK. For this example, install the SDK in the default
-   ``poky_sdk`` directory::
-
-      $ cd poky/build/tmp/deploy/sdk
-      $ ./poky-glibc-x86_64-core-image-minimal-i586-toolchain-ext-&DISTRO;.sh
-      Poky (Yocto Project Reference Distro) Extensible SDK installer version &DISTRO;
-      ============================================================================
-      Enter target directory for SDK (default: poky_sdk):
-      You are about to install the SDK to "/home/scottrif/poky_sdk". Proceed [Y/n]? Y
-      Extracting SDK......................................done
-      Setting it up...
-      Extracting buildtools...
-      Preparing build system...
-      Parsing recipes: 100% |#################################################################| Time: 0:00:52
-      Initializing tasks: 100% |############## ###############################################| Time: 0:00:04
-      Checking sstate mirror object availability: 100% |######################################| Time: 0:00:00
-      Parsing recipes: 100% |#################################################################| Time: 0:00:33
-      Initializing tasks: 100% |##############################################################| Time: 0:00:00
-      done
-      SDK has been successfully set up and is ready to be used.
-      Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
-       $ . /home/scottrif/poky_sdk/environment-setup-i586-poky-linux
-
-
-7. *Set Up a New Terminal to Work With the Extensible SDK:* You must set
-   up a new terminal to work with the SDK. You cannot use the same
-   BitBake shell used to build the installer.
-
-   After opening a new shell, run the SDK environment setup script as
-   directed by the output from installing the SDK::
-
-      $ source poky_sdk/environment-setup-i586-poky-linux
-      "SDK environment now set up; additionally you may now run devtool to perform development tasks.
-      Run devtool --help for further details.
-
-   .. note::
-
-      If you get a warning about attempting to use the extensible SDK in
-      an environment set up to run BitBake, you did not use a new shell.
-
-8. *Build the Clean Image:* The final step in preparing to work on the
-   kernel is to build an initial image using ``devtool`` in the new
-   terminal you just set up and initialized for SDK work::
-
-      $ devtool build-image
+      $ bitbake core-image-minimal
       Parsing recipes: 100% |##########################################| Time: 0:00:05
       Parsing of 830 .bb files complete (0 cached, 830 parsed). 1299 targets, 47 skipped, 0 masked, 0 errors.
       WARNING: No packages to add, building image core-image-minimal unmodified
@@ -192,7 +135,6 @@
       NOTE: Executing SetScene Tasks
       NOTE: Executing RunQueue Tasks
       NOTE: Tasks Summary: Attempted 2866 tasks of which 2604 didn't need to be rerun and all succeeded.
-      NOTE: Successfully built core-image-minimal. You can find output files in /home/scottrif/poky_sdk/tmp/deploy/images/qemux86
 
    If you were
    building for actual hardware and not for emulation, you could flash
@@ -202,7 +144,7 @@
    Wiki page.
 
 At this point you have set up to start making modifications to the
-kernel by using the extensible SDK. For a continued example, see the
+kernel. For a continued example, see the
 ":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
 section.
 
@@ -744,7 +686,7 @@
 =====================================
 
 The steps in this procedure show you how you can patch the kernel using
-the extensible SDK and ``devtool``.
+``devtool``.
 
 .. note::
 
@@ -766,8 +708,7 @@
 the ":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``" Section.
 
 1. *Check Out the Kernel Source Files:* First you must use ``devtool``
-   to checkout the kernel source code in its workspace. Be sure you are
-   in the terminal set up to do work with the extensible SDK.
+   to checkout the kernel source code in its workspace.
 
    .. note::
 
@@ -867,7 +808,7 @@
       the results of your ``printk`` statements as part of the output
       when you scroll down the console window.
 
-6. *Stage and commit your changes*: Within your eSDK terminal, change
+6. *Stage and commit your changes*: Change
    your working directory to where you modified the ``calibrate.c`` file
    and use these Git commands to stage and commit your changes::
 
@@ -878,8 +819,7 @@
 
 7. *Export the Patches and Create an Append File:* To export your
    commits as patches and create a ``.bbappend`` file, use the following
-   command in the terminal used to work with the extensible SDK. This
-   example uses the previously established layer named ``meta-mylayer``.
+   command. This example uses the previously established layer named ``meta-mylayer``.
    ::
 
       $ devtool finish linux-yocto ~/meta-mylayer
@@ -907,8 +847,8 @@
 ========================================================
 
 The steps in this procedure show you how you can patch the kernel using
-traditional kernel development (i.e. not using ``devtool`` and the
-extensible SDK as described in the
+traditional kernel development (i.e. not using ``devtool``
+as described in the
 ":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
 section).
 
diff --git a/poky/documentation/kernel-dev/intro.rst b/poky/documentation/kernel-dev/intro.rst
index b9ce7f2..4ff4dc7 100644
--- a/poky/documentation/kernel-dev/intro.rst
+++ b/poky/documentation/kernel-dev/intro.rst
@@ -114,13 +114,13 @@
    a build host ready to use the Yocto Project.
 
 2. *Set Up Your Host Development System for Kernel Development:* It is
-   recommended that you use ``devtool`` and an extensible SDK for kernel
+   recommended that you use ``devtool`` for kernel
    development. Alternatively, you can use traditional kernel
    development methods with the Yocto Project. Either way, there are
    steps you need to take to get the development environment ready.
 
-   Using ``devtool`` and the eSDK requires that you have a clean build
-   of the image and that you are set up with the appropriate eSDK. For
+   Using ``devtool`` requires that you have a clean build
+   of the image. For
    more information, see the
    ":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``"
    section.
@@ -134,7 +134,7 @@
 3. *Make Changes to the Kernel Source Code if applicable:* Modifying the
    kernel does not always mean directly changing source files. However,
    if you have to do this, you make the changes to the files in the
-   eSDK's Build Directory if you are using ``devtool``. For more
+   Yocto's Build Directory if you are using ``devtool``. For more
    information, see the
    ":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
    section.
diff --git a/poky/documentation/migration-guides/release-4.0.rst b/poky/documentation/migration-guides/release-4.0.rst
index 6584963..fe1efae 100644
--- a/poky/documentation/migration-guides/release-4.0.rst
+++ b/poky/documentation/migration-guides/release-4.0.rst
@@ -7,3 +7,4 @@
    release-notes-4.0
    release-notes-4.0.1
    release-notes-4.0.2
+   release-notes-4.0.3
diff --git a/poky/documentation/migration-guides/release-notes-4.0.3.rst b/poky/documentation/migration-guides/release-notes-4.0.3.rst
new file mode 100644
index 0000000..e2a212c
--- /dev/null
+++ b/poky/documentation/migration-guides/release-notes-4.0.3.rst
@@ -0,0 +1,314 @@
+Release notes for Yocto-4.0.3 (Kirkstone)
+-----------------------------------------
+
+Security Fixes in Yocto-4.0.3
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+-  binutils: fix :cve:`2019-1010204`
+-  busybox: fix :cve:`2022-30065`
+-  cups: ignore :cve:`2022-26691`
+-  curl: Fix :cve:`2022-32205`, :cve:`2022-32206`, :cve:`2022-32207` and :cve:`2022-32208`
+-  dpkg: fix :cve:`2022-1664`
+-  ghostscript: fix :cve:`2022-2085`
+-  harfbuzz: fix :cve:`2022-33068`
+-  libtirpc: fix :cve:`2021-46828`
+-  lua: fix :cve:`2022-33099`
+-  nasm: ignore :cve:`2020-18974`
+-  qemu: fix :cve:`2022-35414`
+-  qemu: ignore :cve:`2021-20255` and :cve:`2019-12067`
+-  tiff: fix :cve:`2022-1354`, :cve:`2022-1355`, :cve:`2022-2056`, :cve:`2022-2057` and :cve:`2022-2058`
+-  u-boot: fix :cve:`2022-34835`
+-  unzip: fix :cve:`2022-0529` and :cve:`2022-0530`
+
+
+Fixes in Yocto-4.0.3
+~~~~~~~~~~~~~~~~~~~~
+
+-  alsa-state: correct license
+-  at: take tarballs from debian
+-  base.bbclass: Correct the test for obsolete license exceptions
+-  base/reproducible: Change Source Date Epoch generation methods
+-  bin_package: install into base_prefix
+-  bind: Remove legacy python3 PACKAGECONFIG code
+-  bind: upgrade to 9.18.4
+-  binutils: stable 2.38 branch updates
+-  build-appliance-image: Update to kirkstone head revision
+-  cargo_common.bbclass: enable bitbake vendoring for externalsrc
+-  coreutils: Tweak packaging variable names for coreutils-dev
+-  curl: backport openssl fix CN check error code
+-  cve-check: hook cleanup to the BuildCompleted event, not CookerExit
+-  cve-extra-exclusions: Clean up and ignore three CVEs (2xqemu and nasm)
+-  devtool: finish: handle patching when S points to subdir of a git repo
+-  devtool: ignore pn- overrides when determining SRC_URI overrides
+-  docs: BB_HASHSERVE_UPSTREAM: update to new host
+-  dropbear: break dependency on base package for -dev package
+-  efivar: fix import functionality
+-  encodings: update to 1.0.6
+-  epiphany: upgrade to 42.3
+-  externalsrc.bbclass: support crate fetcher on externalsrc
+-  font-util: update 1.3.2 -> 1.3.3
+-  gcc-runtime: Fix build when using gold
+-  gcc-runtime: Fix missing MLPREFIX in debug mappings
+-  gcc-runtime: Pass -nostartfiles when building dummy libstdc++.so
+-  gcc: Backport a fix for gcc bug 105039
+-  git: upgrade to v2.35.4
+-  glib-2.0: upgrade to 2.72.3
+-  glib-networking: upgrade to 2.72.1
+-  glibc : stable 2.35 branch updates
+-  glibc-tests: Avoid reproducibility issues
+-  glibc-tests: not clear BBCLASSEXTEND
+-  glibc: revert one upstream change to work around broken DEBUG_BUILD build
+-  glibc: stable 2.35 branch updates
+-  gnupg: upgrade to 2.3.7
+-  go: upgrade to v1.17.12
+-  gobject-introspection-data: Disable cache for g-ir-scanner
+-  gperf: Add a patch to work around reproducibility issues
+-  gperf: Switch to upstream patch
+-  gst-devtools: upgrade to 1.20.3
+-  gstreamer1.0-libav: upgrade to 1.20.3
+-  gstreamer1.0-omx: upgrade to 1.20.3
+-  gstreamer1.0-plugins-bad: upgrade to 1.20.3
+-  gstreamer1.0-plugins-base: upgrade to 1.20.3
+-  gstreamer1.0-plugins-good: upgrade to 1.20.3
+-  gstreamer1.0-plugins-ugly: upgrade to 1.20.3
+-  gstreamer1.0-python: upgrade to 1.20.3
+-  gstreamer1.0-rtsp-server: upgrade to 1.20.3
+-  gstreamer1.0-vaapi: upgrade to 1.20.3
+-  gstreamer1.0: upgrade to 1.20.3
+-  gtk-doc: Remove hardcoded buildpath
+-  harfbuzz: Fix compilation with clang
+-  initramfs-framework: move storage mounts to actual rootfs
+-  initscripts: run umountnfs as a KILL script
+-  insane.bbclass: host-user-contaminated: Correct per package home path
+-  insane: Fix buildpaths test to work with special devices
+-  kernel-arch: Fix buildpaths leaking into external module compiles
+-  kernel-devsrc: fix reproducibility and buildpaths QA warning
+-  kernel-devsrc: ppc32: fix reproducibility
+-  kernel-uboot.bbclass: Use vmlinux.initramfs when INITRAMFS_IMAGE_BUNDLE set
+-  kernel.bbclass: pass LD also in savedefconfig
+-  libffi: fix native build being not portable
+-  libgcc: Fix standalone target builds with usrmerge distro feature
+-  libmodule-build-perl: Use env utility to find perl interpreter
+-  libsoup: upgrade to 3.0.7
+-  libuv: upgrade to 1.44.2
+-  linux-firmware: upgrade to 20220708
+-  linux-firwmare: restore WHENCE_CHKSUM variable
+-  linux-yocto-rt/5.15: update to -rt48 (and fix -stable merge)
+-  linux-yocto/5.10: fix build_OID_registry/conmakehash buildpaths warning
+-  linux-yocto/5.10: fix buildpaths issue with gen-mach-types
+-  linux-yocto/5.10: fix buildpaths issue with pnmtologo
+-  linux-yocto/5.10: update to v5.10.135
+-  linux-yocto/5.15: drop obselete GPIO sysfs ABI
+-  linux-yocto/5.15: fix build_OID_registry buildpaths warning
+-  linux-yocto/5.15: fix buildpaths issue with gen-mach-types
+-  linux-yocto/5.15: fix buildpaths issue with pnmtologo
+-  linux-yocto/5.15: fix qemuppc buildpaths warning
+-  linux-yocto/5.15: fix reproducibility issues
+-  linux-yocto/5.15: update to v5.15.59
+-  log4cplus: upgrade to 2.0.8
+-  lttng-modules: Fix build failure for kernel v5.15.58
+-  lttng-modules: upgrade to 2.13.4
+-  lua: Fix multilib buildpath reproducibility issues
+-  mkfontscale: upgrade to 1.2.2
+-  oe-selftest-image: Ensure the image has sftp as well as dropbear
+-  oe-selftest: devtool: test modify git recipe building from a subdir
+-  oeqa/runtime/scp: Disable scp test for dropbear
+-  oeqa/runtime: add test that the kernel has CONFIG_PREEMPT_RT enabled
+-  oeqa/sdk: drop the nativesdk-python 2.x test
+-  openssh: Add openssh-sftp-server to openssh RDEPENDS
+-  openssh: break dependency on base package for -dev package
+-  openssl: update to 3.0.5
+-  package.bbclass: Avoid stripping signed kernel modules in splitdebuginfo
+-  package.bbclass: Fix base directory for debugsource files when using externalsrc
+-  package.bbclass: Fix kernel source handling when not using externalsrc
+-  package_manager/ipk: do not pipe stderr to stdout
+-  packagegroup-core-ssh-dropbear: Add openssh-sftp-server recommendation
+-  patch: handle if S points to a subdirectory of a git repo
+-  perf: fix reproducibility in 5.19+
+-  perf: fix reproduciblity in older releases of Linux
+-  perf: sort-pmuevents: really keep array terminators
+-  perl: don't install Makefile.old into perl-ptest
+-  poky.conf: bump version for 4.0.3
+-  pulseaudio: add m4-native to DEPENDS
+-  python3: Backport patch to fix an issue in subinterpreters
+-  qemu: Add PACKAGECONFIG for brlapi
+-  qemu: Avoid accidental librdmacm linkage
+-  qemu: Avoid accidental libvdeplug linkage
+-  qemu: Fix slirp determinism issue
+-  qemu: add PACKAGECONFIG for capstone
+-  recipetool/devtool: Fix python egg whitespace issues in PACKAGECONFIG
+-  ref-manual: variables: remove sphinx directive from literal block
+-  rootfs-postcommands.bbclass: move host-user-contaminated.txt to ${S}
+-  ruby: add PACKAGECONFIG for capstone
+-  rust: fix issue building cross-canadian tools for aarch64 on x86_64
+-  sanity.bbclass: Add ftps to accepted URI protocols for mirrors sanity
+-  selftest/runtime_test/virgl: Disable for all almalinux
+-  sstatesig: Include all dependencies in SPDX task signatures
+-  strace: set COMPATIBLE_HOST for riscv32
+-  systemd: Added base_bindir into pkg_postinst:udev-hwdb.
+-  udev-extraconf/initrdscripts/parted: Rename mount.blacklist -> mount.ignorelist
+-  udev-extraconf/mount.sh: add LABELs to mountpoints
+-  udev-extraconf/mount.sh: ignore lvm in automount
+-  udev-extraconf/mount.sh: only mount devices on hotplug
+-  udev-extraconf/mount.sh: save mount name in our tmp filecache
+-  udev-extraconf: fix some systemd automount issues
+-  udev-extraconf: force systemd-udevd to use shared MountFlags
+-  udev-extraconf: let automount base directory configurable
+-  udev-extraconf:mount.sh: fix a umount issue
+-  udev-extraconf:mount.sh: fix path mismatching issues
+-  vala: Fix on target wrapper buildpaths issue
+-  vala: upgrade to 0.56.2
+-  vim: upgrade to 9.0.0063
+-  waffle: correctly request wayland-scanner executable
+-  webkitgtk: upgrade to 2.36.4
+-  weston: upgrade to 10.0.1
+-  wic/plugins/rootfs: Fix NameError for 'orig_path'
+-  wic: fix WicError message
+-  wireless-regdb: upgrade to 2022.06.06
+-  xdpyinfo: upgrade to 1.3.3
+-  xev: upgrade to 1.2.5
+-  xf86-input-synaptics: upgrade to 1.9.2
+-  xmodmap: upgrade to 1.0.11
+-  xorg-app: Tweak handling of compression changes in SRC_URI
+-  xserver-xorg: upgrade to 21.1.4
+-  xwayland: upgrade to 22.1.3
+-  yocto-bsps/5.10: fix buildpaths issue with gen-mach-types
+-  yocto-bsps/5.10: fix buildpaths issue with pnmtologo
+-  yocto-bsps/5.15: fix buildpaths issue with gen-mach-types
+-  yocto-bsps/5.15: fix buildpaths issue with pnmtologo
+-  yocto-bsps: buildpaths fixes
+-  yocto-bsps: update to v5.10.130
+-  yocto-bsps: buildpaths fixes
+-  yocto-bsps: update to v5.15.54
+
+
+Known Issues in Yocto-4.0.3
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- N/A
+
+
+Contributors to Yocto-4.0.3
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+-  Ahmed Hossam
+-  Alejandro Hernandez Samaniego
+-  Alex Kiernan
+-  Alexander Kanavin
+-  Bruce Ashfield
+-  Chanho Park
+-  Christoph Lauer
+-  David Bagonyi
+-  Dmitry Baryshkov
+-  He Zhe
+-  Hitendra Prajapati
+-  Jose Quaresma
+-  Joshua Watt
+-  Kai Kang
+-  Khem Raj
+-  Lee Chee Yang
+-  Lucas Stach
+-  Markus Volk
+-  Martin Jansa
+-  Maxime Roussin-Bélanger
+-  Michael Opdenacker
+-  Mihai Lindner
+-  Ming Liu
+-  Mingli Yu
+-  Muhammad Hamza
+-  Naveen
+-  Pascal Bach
+-  Paul Eggleton
+-  Pavel Zhukov
+-  Peter Bergin
+-  Peter Kjellerstedt
+-  Peter Marko
+-  Pgowda
+-  Raju Kumar Pothuraju
+-  Richard Purdie
+-  Robert Joslyn
+-  Ross Burton
+-  Sakib Sajal
+-  Shruthi Ravichandran
+-  Steve Sakoman
+-  Sundeep Kokkonda
+-  Thomas Roos
+-  Tom Hochstein
+-  Wentao Zhang
+-  Yi Zhao
+-  Yue Tao
+-  gr embeter
+-  leimaohui
+-  wangmy
+
+
+Repositories / Downloads for Yocto-4.0.3
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+poky
+
+-  Repository Location: https://git.yoctoproject.org/git/poky
+-  Branch: :yocto_git:`kirkstone </poky/log/?h=kirkstone>`
+-  Tag:  :yocto_git:`yocto-4.0.3 </poky/log/?h=yocto-4.0.3>`
+-  Git Revision: :yocto_git:`387ab5f18b17c3af3e9e30dc58584641a70f359f </poky/commit/?id=387ab5f18b17c3af3e9e30dc58584641a70f359f>`
+-  Release Artefact: poky-387ab5f18b17c3af3e9e30dc58584641a70f359f
+-  sha: fe674186bdb0684313746caa9472134fc19e6f1443c274fe02c06cb1e675b404
+-  Download Locations:
+   http://downloads.yoctoproject.org/releases/yocto/yocto-4.0.3/poky-387ab5f18b17c3af3e9e30dc58584641a70f359f.tar.bz2
+   http://mirrors.kernel.org/yocto/yocto/yocto-4.0.3/poky-387ab5f18b17c3af3e9e30dc58584641a70f359f.tar.bz2
+
+openembedded-core
+
+-  Repository Location: https://git.openembedded.org/openembedded-core
+-  Branch: :oe_git:`kirkstone </openembedded-core/log/?h=kirkstone>`
+-  Tag:  :oe_git:`yocto-4.0.3 </openembedded-core/log/?h=yocto-4.0.3>`
+-  Git Revision: :oe_git:`2cafa6ed5f0aa9df5a120b6353755d56c7c7800d </openembedded-core/commit/?id=2cafa6ed5f0aa9df5a120b6353755d56c7c7800d>`
+-  Release Artefact: oecore-2cafa6ed5f0aa9df5a120b6353755d56c7c7800d
+-  sha: 5181d3e8118c6112936637f01a07308b715e0e3d12c7eba338556747dfcabe92
+-  Download Locations:
+   http://downloads.yoctoproject.org/releases/yocto/yocto-4.0.3/oecore-2cafa6ed5f0aa9df5a120b6353755d56c7c7800d.tar.bz2
+   http://mirrors.kernel.org/yocto/yocto/yocto-4.0.3/oecore-2cafa6ed5f0aa9df5a120b6353755d56c7c7800d.tar.bz2
+
+meta-mingw
+
+-  Repository Location: https://git.yoctoproject.org/git/meta-mingw
+-  Branch: :yocto_git:`kirkstone </meta-mingw/log/?h=kirkstone>`
+-  Tag:  :yocto_git:`yocto-4.0.3 </meta-mingw/log/?h=yocto-4.0.3>`
+-  Git Revision: :yocto_git:`a90614a6498c3345704e9611f2842eb933dc51c1 </meta-mingw/commit/?id=a90614a6498c3345704e9611f2842eb933dc51c1>`
+-  Release Artefact: meta-mingw-a90614a6498c3345704e9611f2842eb933dc51c1
+-  sha: 49f9900bfbbc1c68136f8115b314e95d0b7f6be75edf36a75d9bcd1cca7c6302
+-  Download Locations:
+   http://downloads.yoctoproject.org/releases/yocto/yocto-4.0.3/meta-mingw-a90614a6498c3345704e9611f2842eb933dc51c1.tar.bz2
+   http://mirrors.kernel.org/yocto/yocto/yocto-4.0.3/meta-mingw-a90614a6498c3345704e9611f2842eb933dc51c1.tar.bz2
+
+meta-gplv2
+
+-  Repository Location: https://git.yoctoproject.org/git/meta-gplv2
+-  Branch: :yocto_git:`kirkstone </meta-gplv2/log/?h=kirkstone>`
+-  Tag:  :yocto_git:`yocto-4.0.3 </meta-gplv2/log/?h=yocto-4.0.3>`
+-  Git Revision: :yocto_git:`d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a </meta-gplv2/commit/?id=d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a>`
+-  Release Artefact: meta-gplv2-d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a
+-  sha: c386f59f8a672747dc3d0be1d4234b6039273d0e57933eb87caa20f56b9cca6d
+-  Download Locations:
+   http://downloads.yoctoproject.org/releases/yocto/yocto-4.0.3/meta-gplv2-d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a.tar.bz2
+   http://mirrors.kernel.org/yocto/yocto/yocto-4.0.3/meta-gplv2-d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a.tar.bz2
+
+bitbake
+
+-  Repository Location: https://git.openembedded.org/bitbake
+-  Branch: :oe_git:`2.0 </bitbake/log/?h=2.0>`
+-  Tag:  :oe_git:`yocto-4.0.3 </bitbake/log/?h=yocto-4.0.3>`
+-  Git Revision: :oe_git:`b8fd6f5d9959d27176ea016c249cf6d35ac8ba03 </bitbake/commit/?id=b8fd6f5d9959d27176ea016c249cf6d35ac8ba03>`
+-  Release Artefact: bitbake-b8fd6f5d9959d27176ea016c249cf6d35ac8ba03
+-  sha: 373818b1dee2c502264edf654d6d8f857b558865437f080e02d5ba6bb9e72cc3
+-  Download Locations:
+   http://downloads.yoctoproject.org/releases/yocto/yocto-4.0.3/bitbake-b8fd6f5d9959d27176ea016c249cf6d35ac8ba03.tar.bz2
+   http://mirrors.kernel.org/yocto/yocto/yocto-4.0.3/bitbake-b8fd6f5d9959d27176ea016c249cf6d35ac8ba03.tar.bz2
+
+yocto-docs
+
+-  Repository Location: https://git.yoctoproject.org/git/yocto-docs
+-  Branch: :yocto_git:`kirkstone </yocto-docs/log/?h=kirkstone>`
+-  Tag: :yocto_git:`yocto-4.0.3 </yocto-docs/log/?h=yocto-4.0.3>`
+-  Git Revision: :yocto_git:`d9b3dcf65ef25c06f552482aba460dd16862bf96 </yocto-docs/commit/?id=d9b3dcf65ef25c06f552482aba460dd16862bf96>`
+
diff --git a/poky/documentation/ref-manual/features.rst b/poky/documentation/ref-manual/features.rst
index 17521ac..8dfe29d 100644
--- a/poky/documentation/ref-manual/features.rst
+++ b/poky/documentation/ref-manual/features.rst
@@ -62,6 +62,8 @@
 
 -  *keyboard:* Hardware has a keyboard
 
+-  *numa:* Hardware has non-uniform memory access
+
 -  *pcbios:* Support for booting through BIOS
 
 -  *pci:* Hardware has a PCI bus
diff --git a/poky/documentation/ref-manual/system-requirements.rst b/poky/documentation/ref-manual/system-requirements.rst
index 6cf88f2..62da1bf 100644
--- a/poky/documentation/ref-manual/system-requirements.rst
+++ b/poky/documentation/ref-manual/system-requirements.rst
@@ -45,14 +45,8 @@
 
 -  Fedora 35
 
--  CentOS 7.x
-
--  CentOS 8.x
-
 -  AlmaLinux 8.5
 
--  Debian GNU/Linux 9.x (Stretch)
-
 -  Debian GNU/Linux 10.x (Buster)
 
 -  Debian GNU/Linux 11.x (Bullseye)
@@ -120,12 +114,6 @@
          $ sudo apt build-dep qemu
          $ sudo apt remove oss4-dev
 
-   -  For Debian-8, ``python3-git`` and ``pylint3`` are no longer
-      available via ``apt``.
-      ::
-
-         $ sudo pip3 install GitPython pylint==1.9.5
-
 -  *Essentials:* Packages needed to build an image on a headless system::
 
       $ sudo apt install &UBUNTU_HOST_PACKAGES_ESSENTIAL;
@@ -133,15 +121,9 @@
 -  *Documentation:* Packages needed if you are going to build out the
    Yocto Project documentation manuals::
 
-      $ sudo apt install make python3-pip
+      $ sudo apt install make python3-pip inkscape texlive-latex-extra
       &PIP3_HOST_PACKAGES_DOC;
 
-   .. note::
-
-      It is currently not possible to build out documentation from Debian 8
-      (Jessie) because of outdated ``pip3`` and ``python3``. ``python3-sphinx``
-      is too outdated.
-
 Fedora Packages
 ---------------
 
@@ -156,7 +138,7 @@
 -  *Documentation:* Packages needed if you are going to build out the
    Yocto Project documentation manuals::
 
-      $ sudo dnf install make python3-pip which
+      $ sudo dnf install make python3-pip which inkscape texlive-fncychap
       &PIP3_HOST_PACKAGES_DOC;
 
 openSUSE Packages
@@ -173,42 +155,15 @@
 -  *Documentation:* Packages needed if you are going to build out the
    Yocto Project documentation manuals::
 
-      $ sudo zypper install make python3-pip which
+      $ sudo zypper install make python3-pip which inkscape texlive-fncychap
       &PIP3_HOST_PACKAGES_DOC;
 
 
-CentOS-7 Packages
------------------
+AlmaLinux-8 Packages
+--------------------
 
 Here are the required packages by function given a
-supported CentOS-7 Linux distribution:
-
--  *Essentials:* Packages needed to build an image for a headless
-   system::
-
-      $ sudo yum install &CENTOS7_HOST_PACKAGES_ESSENTIAL;
-
-   .. note::
-
-      -  Extra Packages for Enterprise Linux (i.e. ``epel-release``) is
-         a collection of packages from Fedora built on RHEL/CentOS for
-         easy installation of packages not included in enterprise Linux
-         by default. You need to install these packages separately.
-
-      -  The ``makecache`` command consumes additional Metadata from
-         ``epel-release``.
-
--  *Documentation:* Packages needed if you are going to build out the
-   Yocto Project documentation manuals::
-
-      $ sudo yum install make python3-pip which
-      &PIP3_HOST_PACKAGES_DOC;
-
-CentOS-8 Packages
------------------
-
-Here are the required packages by function given a
-supported CentOS-8 Linux distribution:
+supported AlmaLinux-8 Linux distribution:
 
 -  *Essentials:* Packages needed to build an image for a headless
    system::
@@ -231,7 +186,7 @@
 -  *Documentation:* Packages needed if you are going to build out the
    Yocto Project documentation manuals::
 
-      $ sudo dnf install make python3-pip which
+      $ sudo dnf install make python3-pip which inkscape texlive-fncychap
       &PIP3_HOST_PACKAGES_DOC;
 
 Required Git, tar, Python, make and gcc Versions
diff --git a/poky/documentation/sdk-manual/appendix-customizing.rst b/poky/documentation/sdk-manual/appendix-customizing.rst
index 9a76cc5..23a437e 100644
--- a/poky/documentation/sdk-manual/appendix-customizing.rst
+++ b/poky/documentation/sdk-manual/appendix-customizing.rst
@@ -1,11 +1,17 @@
 .. SPDX-License-Identifier: CC-BY-SA-2.0-UK
 
-******************************
-Customizing the Extensible SDK
-******************************
+***************************************************
+Customizing the Extensible SDK standalone installer
+***************************************************
 
 This appendix describes customizations you can apply to the extensible
-SDK.
+SDK when using in the standalone installer version.
+
+.. note::
+
+   It is also possible to use the Extensible SDK functionality directly in a
+   Yocto build, avoiding separate installer artefacts. Please refer to
+   ":ref:`sdk-manual/extensible:Installing the Extensible SDK`"
 
 Configuring the Extensible SDK
 ==============================
diff --git a/poky/documentation/sdk-manual/appendix-obtain.rst b/poky/documentation/sdk-manual/appendix-obtain.rst
index ece378c..7a09a83 100644
--- a/poky/documentation/sdk-manual/appendix-obtain.rst
+++ b/poky/documentation/sdk-manual/appendix-obtain.rst
@@ -4,8 +4,22 @@
 Obtaining the SDK
 *****************
 
+Working with the SDK components directly in a Yocto build
+=========================================================
+
+Please refer to section
+":ref:`sdk-manual/extensible:Setting up the Extensible SDK environment directly in a Yocto build`"
+
+Note that to use this feature effectively, either a powerful build
+machine or a well-functioning sstate cache infrastructure is required;
+otherwise, significant time may be spent waiting for components to be
+built from source by BitBake.
+
+Working with standalone SDK Installers
+======================================
+
 Locating Pre-Built SDK Installers
-=================================
+---------------------------------
 
 You can use existing, pre-built toolchains by locating and running an
 SDK installer script that ships with the Yocto Project. Using this
@@ -72,7 +86,7 @@
    section for more information.
 
 Building an SDK Installer
-=========================
+-------------------------
 
 As an alternative to locating and downloading an SDK installer, you can
 build the SDK installer. Follow these steps:
diff --git a/poky/documentation/sdk-manual/extensible.rst b/poky/documentation/sdk-manual/extensible.rst
index ed9e43a..0df6519 100644
--- a/poky/documentation/sdk-manual/extensible.rst
+++ b/poky/documentation/sdk-manual/extensible.rst
@@ -41,6 +41,42 @@
 Installing the Extensible SDK
 =============================
 
+Two ways to install the Extensible SDK
+--------------------------------------
+
+The Extensible SDK can be installed in two different ways, each with its own
+pros and cons:
+
+1. *Setting up the Extensible SDK environment directly in a Yocto build*. This
+avoids having to produce, test, distribute and maintain separate SDK installer
+archives, which can get very large. There is only one environment for the regular
+Yocto build and the SDK, and fewer code paths where things can go wrong.
+Updating the SDK is also easier: it simply means updating the Yocto layers with
+git fetch or layer management tooling. The SDK is more extensible as well:
+just run ``bitbake`` again to add more things to the sysroot, or add layers
+if even more things are required.
+
+2. *Setting up the Extensible SDK from a standalone installer*. This has the benefit of
+having a single, self-contained archive that includes all the needed binary artifacts.
+So nothing needs to be rebuilt, and there is no need to provide a well-functioning
+binary artefact cache over the network for developers with underpowered laptops.
+
+Setting up the Extensible SDK environment directly in a Yocto build
+-------------------------------------------------------------------
+
+1. Set up all the needed layers and a Yocto build directory, e.g. a regular Yocto
+   build where ``bitbake`` can be executed.
+
+2. Run::
+    $ bitbake meta-ide-support
+    $ bitbake -c populate_sysroot gtk+3
+    (or any other target or native item that the application developer would need)
+    $ bitbake populate-sysroots
+
+
+Setting up the Extensible SDK from a standalone installer
+---------------------------------------------------------
+
 The first thing you need to do is install the SDK on your :term:`Build
 Host` by running the ``*.sh`` installation script.
 
@@ -136,7 +172,12 @@
 ===================================================
 
 Once you have the SDK installed, you must run the SDK environment setup
-script before you can actually use the SDK. This setup script resides in
+script before you can actually use the SDK.
+
+When using an SDK directly in a Yocto build, you will find the script in
+``tmp/deploy/images/qemux86-64/`` in your build directory.
+
+When using a standalone SDK installer, this setup script resides in
 the directory you chose when you installed the SDK, which is either the
 default ``poky_sdk`` directory or the directory you chose during
 installation.
@@ -154,6 +195,11 @@
    SDK environment now set up; additionally you may now run devtool to perform development tasks.
    Run devtool --help for further details.
 
+When using the environment script directly in a Yocto build, it can
+be run similarly::
+
+   $ source tmp/deploy/images/qemux86-64/environment-setup-core2-64-poky-linux
+
 Running the setup script defines many environment variables needed in
 order to use the SDK (e.g. ``PATH``,
 :term:`CC`,
@@ -1215,9 +1261,23 @@
 You can use the following command to find out::
 
    $ devtool search libGL mesa
+   A free implementation of the OpenGL API
 
-A free implementation of the OpenGL API Once you know the recipe
-(i.e. ``mesa`` in this example), you can install it::
+Once you know the recipe
+(i.e. ``mesa`` in this example), you can install it.
+
+When using the extensible SDK directly in a Yocto build
+-------------------------------------------------------
+
+In this scenario, the Yocto build tooling (e.g. ``bitbake``)
+is directly accessible, and additional items can simply be
+built with it directly::
+
+   $ bitbake mesa
+   $ bitbake populate-sysroots
+
+When using a standalone installer for the Extensible SDK
+--------------------------------------------------------
 
    $ devtool sdk-install mesa
 
diff --git a/poky/documentation/sdk-manual/working-projects.rst b/poky/documentation/sdk-manual/working-projects.rst
index 7f8d9b8..91d8d6a 100644
--- a/poky/documentation/sdk-manual/working-projects.rst
+++ b/poky/documentation/sdk-manual/working-projects.rst
@@ -88,9 +88,13 @@
 
       $ source /opt/poky/&DISTRO;/environment-setup-i586-poky-linux
 
+   Another example is sourcing the environment setup directly in a Yocto
+   build::
+
+      $ source tmp/deploy/images/qemux86-64/environment-setup-core2-64-poky-linux
+
 3. *Create the configure Script:* Use the ``autoreconf`` command to
-   generate the ``configure`` script.
-   ::
+   generate the ``configure`` script::
 
       $ autoreconf
 
@@ -279,6 +283,11 @@
 
       $ source /opt/poky/&DISTRO;/environment-setup-i586-poky-linux
 
+   Another example is sourcing the environment setup directly in a Yocto
+   build::
+
+      $ source tmp/deploy/images/qemux86-64/environment-setup-core2-64-poky-linux
+
 3. *Create the Makefile:* For this example, the Makefile contains
    two lines that can be used to set the :term:`CC` variable. One line is
    identical to the value that is set when you run the SDK environment
diff --git a/poky/meta-poky/conf/distro/include/poky-distro-alt-test-config.inc b/poky/meta-poky/conf/distro/include/poky-distro-alt-test-config.inc
index 0de2013..b42b830 100644
--- a/poky/meta-poky/conf/distro/include/poky-distro-alt-test-config.inc
+++ b/poky/meta-poky/conf/distro/include/poky-distro-alt-test-config.inc
@@ -2,7 +2,7 @@
 DISTRO_FEATURES:append = " pam"
 
 # Use the LTSI Kernel
-PREFERRED_VERSION_linux-yocto = "5.10%"
+PREFERRED_VERSION_linux-yocto = "5.15%"
 
 # Ensure the kernel nfs server is enabled
 KERNEL_FEATURES:append:pn-linux-yocto = " features/nfsd/nfsd-enable.scc"
diff --git a/poky/meta-poky/conf/distro/poky-tiny.conf b/poky/meta-poky/conf/distro/poky-tiny.conf
index 2fe0d47..5772661 100644
--- a/poky/meta-poky/conf/distro/poky-tiny.conf
+++ b/poky/meta-poky/conf/distro/poky-tiny.conf
@@ -44,7 +44,7 @@
 # Distro config is evaluated after the machine config, so we have to explicitly
 # set the kernel provider to override a machine config.
 PREFERRED_PROVIDER_virtual/kernel = "linux-yocto-tiny"
-PREFERRED_VERSION_linux-yocto-tiny ?= "5.15%"
+PREFERRED_VERSION_linux-yocto-tiny ?= "5.19%"
 
 # We can use packagegroup-core-boot, but in the future we may need a new packagegroup-core-tiny
 #POKY_DEFAULT_EXTRA_RDEPENDS += "packagegroup-core-boot"
diff --git a/poky/meta-poky/conf/distro/poky.conf b/poky/meta-poky/conf/distro/poky.conf
index 6625a11..856c885 100644
--- a/poky/meta-poky/conf/distro/poky.conf
+++ b/poky/meta-poky/conf/distro/poky.conf
@@ -19,8 +19,8 @@
 
 DISTRO_FEATURES ?= "${DISTRO_FEATURES_DEFAULT} ${POKY_DEFAULT_DISTRO_FEATURES}"
 
-PREFERRED_VERSION_linux-yocto ?= "5.15%"
-PREFERRED_VERSION_linux-yocto-rt ?= "5.15%"
+PREFERRED_VERSION_linux-yocto ?= "5.19%"
+PREFERRED_VERSION_linux-yocto-rt ?= "5.19%"
 
 SDK_NAME = "${DISTRO}-${TCLIBC}-${SDKMACHINE}-${IMAGE_BASENAME}-${TUNE_PKGARCH}-${MACHINE}"
 SDKPATHINSTALL = "/opt/${DISTRO}/${SDK_VERSION}"
@@ -38,6 +38,7 @@
             ubuntu-18.04 \n \
             ubuntu-20.04 \n \
             ubuntu-21.10 \n \
+            ubuntu-22.04 \n \
             fedora-34 \n \
             fedora-35 \n \
             debian-10 \n \
diff --git a/poky/meta-poky/conf/bblayers.conf.sample b/poky/meta-poky/conf/templates/default/bblayers.conf.sample
similarity index 100%
rename from poky/meta-poky/conf/bblayers.conf.sample
rename to poky/meta-poky/conf/templates/default/bblayers.conf.sample
diff --git a/poky/meta-poky/conf/conf-notes.txt b/poky/meta-poky/conf/templates/default/conf-notes.txt
similarity index 100%
rename from poky/meta-poky/conf/conf-notes.txt
rename to poky/meta-poky/conf/templates/default/conf-notes.txt
diff --git a/poky/meta-poky/conf/local.conf.sample b/poky/meta-poky/conf/templates/default/local.conf.sample
similarity index 100%
rename from poky/meta-poky/conf/local.conf.sample
rename to poky/meta-poky/conf/templates/default/local.conf.sample
diff --git a/poky/meta-poky/conf/local.conf.sample.extended b/poky/meta-poky/conf/templates/default/local.conf.sample.extended
similarity index 100%
rename from poky/meta-poky/conf/local.conf.sample.extended
rename to poky/meta-poky/conf/templates/default/local.conf.sample.extended
diff --git a/poky/meta-poky/conf/site.conf.sample b/poky/meta-poky/conf/templates/default/site.conf.sample
similarity index 100%
rename from poky/meta-poky/conf/site.conf.sample
rename to poky/meta-poky/conf/templates/default/site.conf.sample
diff --git a/poky/meta-selftest/recipes-test/testrpm/files/testfile.txt b/poky/meta-selftest/recipes-test/testrpm/files/testfile.txt
new file mode 100644
index 0000000..c4d7630
--- /dev/null
+++ b/poky/meta-selftest/recipes-test/testrpm/files/testfile.txt
@@ -0,0 +1 @@
+== This file serves the purposes of SRC_URI only
diff --git a/poky/meta-selftest/recipes-test/testrpm/testrpm_0.0.1.bb b/poky/meta-selftest/recipes-test/testrpm/testrpm_0.0.1.bb
new file mode 100644
index 0000000..5e8761a
--- /dev/null
+++ b/poky/meta-selftest/recipes-test/testrpm/testrpm_0.0.1.bb
@@ -0,0 +1,18 @@
+SUMMARY = "Test recipe for testing rpm generated by oe-core"
+LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
+
+LICENSE = "MIT"
+
+SRC_URI = "file://testfile.txt"
+INHIBIT_DEFAULT_DEPS = "1"
+
+do_compile(){
+	echo "testdata" > ${B}/"file with [brackets].txt"
+	echo "testdata" > ${B}/"file with (parentheses).txt"
+}
+
+do_install(){
+	install ${B}/* ${D}/
+}
+
+FILES:${PN} = "*"
diff --git a/poky/meta-skeleton/recipes-kernel/hello-mod/files/hello.c b/poky/meta-skeleton/recipes-kernel/hello-mod/files/hello.c
index 6b73a79..4f73455 100644
--- a/poky/meta-skeleton/recipes-kernel/hello-mod/files/hello.c
+++ b/poky/meta-skeleton/recipes-kernel/hello-mod/files/hello.c
@@ -2,18 +2,7 @@
  *
  *   Copyright (C) 2011  Intel Corporation. All rights reserved.
  *
- *   This program is free software;  you can redistribute it and/or modify
- *   it under the terms of the GNU General Public License as published by
- *   the Free Software Foundation; version 2 of the License.
- *
- *   This program is distributed in the hope that it will be useful,
- *   but WITHOUT ANY WARRANTY;  without even the implied warranty of
- *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See
- *   the GNU General Public License for more details.
- *
- *   You should have received a copy of the GNU General Public License
- *   along with this program;  if not, write to the Free Software
- *   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *   SPDX-License-Identifier: GPL-2.0-only
  *
  *****************************************************************************/
 
diff --git a/poky/meta-skeleton/recipes-skeleton/service/service_0.1.bb b/poky/meta-skeleton/recipes-skeleton/service/service_0.1.bb
index d1d8c5f..912f6b0 100644
--- a/poky/meta-skeleton/recipes-skeleton/service/service_0.1.bb
+++ b/poky/meta-skeleton/recipes-skeleton/service/service_0.1.bb
@@ -9,6 +9,8 @@
 	   file://COPYRIGHT \
 	   "
 
+S = "${WORKDIR}"
+
 do_compile () {
 	${CC} ${CFLAGS} ${LDFLAGS} ${WORKDIR}/skeleton_test.c -o ${WORKDIR}/skeleton-test
 }
diff --git a/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.10.bbappend b/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.10.bbappend
deleted file mode 100644
index bec8319..0000000
--- a/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.10.bbappend
+++ /dev/null
@@ -1,23 +0,0 @@
-KBRANCH:genericx86  = "v5.10/standard/base"
-KBRANCH:genericx86-64  = "v5.10/standard/base"
-KBRANCH:edgerouter = "v5.10/standard/edgerouter"
-KBRANCH:beaglebone-yocto = "v5.10/standard/beaglebone"
-
-KMACHINE:genericx86 ?= "common-pc"
-KMACHINE:genericx86-64 ?= "common-pc-64"
-KMACHINE:beaglebone-yocto ?= "beaglebone"
-
-SRCREV_machine:genericx86 ?= "2883e69e202dc7948c99a7828e192b2b42c2d90a"
-SRCREV_machine:genericx86-64 ?= "2883e69e202dc7948c99a7828e192b2b42c2d90a"
-SRCREV_machine:edgerouter ?= "7c9332d91089ee63581be6cd3e7197c9d3e9a883"
-SRCREV_machine:beaglebone-yocto ?= "3c44f12b9de336579d00ac0105852f4cbf7e8b7d"
-
-COMPATIBLE_MACHINE:genericx86 = "genericx86"
-COMPATIBLE_MACHINE:genericx86-64 = "genericx86-64"
-COMPATIBLE_MACHINE:edgerouter = "edgerouter"
-COMPATIBLE_MACHINE:beaglebone-yocto = "beaglebone-yocto"
-
-LINUX_VERSION:genericx86 = "5.10.130"
-LINUX_VERSION:genericx86-64 = "5.10.130"
-LINUX_VERSION:edgerouter = "5.10.130"
-LINUX_VERSION:beaglebone-yocto = "5.10.130"
diff --git a/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.19.bbappend b/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.19.bbappend
new file mode 100644
index 0000000..ff5070b
--- /dev/null
+++ b/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.19.bbappend
@@ -0,0 +1,23 @@
+KBRANCH:genericx86  = "v5.19/standard/base"
+KBRANCH:genericx86-64  = "v5.19/standard/base"
+KBRANCH:edgerouter = "v5.19/standard/edgerouter"
+KBRANCH:beaglebone-yocto = "v5.19/standard/beaglebone"
+
+KMACHINE:genericx86 ?= "common-pc"
+KMACHINE:genericx86-64 ?= "common-pc-64"
+KMACHINE:beaglebone-yocto ?= "beaglebone"
+
+SRCREV_machine:genericx86 ?= "43e6ab6ed043f4bc8e7cffbb08af86af0bdb5e12"
+SRCREV_machine:genericx86-64 ?= "43e6ab6ed043f4bc8e7cffbb08af86af0bdb5e12"
+SRCREV_machine:edgerouter ?= "43e6ab6ed043f4bc8e7cffbb08af86af0bdb5e12"
+SRCREV_machine:beaglebone-yocto ?= "43e6ab6ed043f4bc8e7cffbb08af86af0bdb5e12"
+
+COMPATIBLE_MACHINE:genericx86 = "genericx86"
+COMPATIBLE_MACHINE:genericx86-64 = "genericx86-64"
+COMPATIBLE_MACHINE:edgerouter = "edgerouter"
+COMPATIBLE_MACHINE:beaglebone-yocto = "beaglebone-yocto"
+
+LINUX_VERSION:genericx86 = "5.19"
+LINUX_VERSION:genericx86-64 = "5.19"
+LINUX_VERSION:edgerouter = "5.19"
+LINUX_VERSION:beaglebone-yocto = "5.19"
diff --git a/poky/meta/classes-global/base.bbclass b/poky/meta/classes-global/base.bbclass
new file mode 100644
index 0000000..8203f54
--- /dev/null
+++ b/poky/meta/classes-global/base.bbclass
@@ -0,0 +1,789 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+BB_DEFAULT_TASK ?= "build"
+CLASSOVERRIDE ?= "class-target"
+
+inherit patch
+inherit staging
+
+inherit mirrors
+inherit utils
+inherit utility-tasks
+inherit logging
+
+OE_EXTRA_IMPORTS ?= ""
+
+OE_IMPORTS += "os sys time oe.path oe.utils oe.types oe.package oe.packagegroup oe.sstatesig oe.lsb oe.cachedpath oe.license oe.qa oe.reproducible oe.rust oe.buildcfg ${OE_EXTRA_IMPORTS}"
+OE_IMPORTS[type] = "list"
+
+PACKAGECONFIG_CONFARGS ??= ""
+
+def oe_import(d):
+    import sys
+
+    bbpath = [os.path.join(dir, "lib") for dir in d.getVar("BBPATH").split(":")]
+    sys.path[0:0] = [dir for dir in bbpath if dir not in sys.path]
+
+    import oe.data
+    for toimport in oe.data.typed_value("OE_IMPORTS", d):
+        try:
+            # Make a python object accessible from the metadata
+            bb.utils._context[toimport.split(".", 1)[0]] = __import__(toimport)
+        except AttributeError as e:
+            bb.error("Error importing OE modules: %s" % str(e))
+    return ""
+
+# We need the oe module name space early (before INHERITs get added)
+OE_IMPORTED := "${@oe_import(d)}"
+
+inherit metadata_scm
+
+def lsb_distro_identifier(d):
+    adjust = d.getVar('LSB_DISTRO_ADJUST')
+    adjust_func = None
+    if adjust:
+        try:
+            adjust_func = globals()[adjust]
+        except KeyError:
+            pass
+    return oe.lsb.distro_identifier(adjust_func)
+
+die() {
+	bbfatal_log "$*"
+}
+
+oe_runmake_call() {
+	bbnote ${MAKE} ${EXTRA_OEMAKE} "$@"
+	${MAKE} ${EXTRA_OEMAKE} "$@"
+}
+
+oe_runmake() {
+	oe_runmake_call "$@" || die "oe_runmake failed"
+}
+
+
+def get_base_dep(d):
+    if d.getVar('INHIBIT_DEFAULT_DEPS', False):
+        return ""
+    return "${BASE_DEFAULT_DEPS}"
+
+BASE_DEFAULT_DEPS = "virtual/${HOST_PREFIX}gcc virtual/${HOST_PREFIX}compilerlibs virtual/libc"
+
+BASEDEPENDS = ""
+BASEDEPENDS:class-target = "${@get_base_dep(d)}"
+BASEDEPENDS:class-nativesdk = "${@get_base_dep(d)}"
+
+DEPENDS:prepend="${BASEDEPENDS} "
+
+FILESPATH = "${@base_set_filespath(["${FILE_DIRNAME}/${BP}", "${FILE_DIRNAME}/${BPN}", "${FILE_DIRNAME}/files"], d)}"
+# THISDIR only works properly with imediate expansion as it has to run
+# in the context of the location its used (:=)
+THISDIR = "${@os.path.dirname(d.getVar('FILE'))}"
+
+def extra_path_elements(d):
+    path = ""
+    elements = (d.getVar('EXTRANATIVEPATH') or "").split()
+    for e in elements:
+        path = path + "${STAGING_BINDIR_NATIVE}/" + e + ":"
+    return path
+
+PATH:prepend = "${@extra_path_elements(d)}"
+
+def get_lic_checksum_file_list(d):
+    filelist = []
+    lic_files = d.getVar("LIC_FILES_CHKSUM") or ''
+    tmpdir = d.getVar("TMPDIR")
+    s = d.getVar("S")
+    b = d.getVar("B")
+    workdir = d.getVar("WORKDIR")
+
+    urls = lic_files.split()
+    for url in urls:
+        # We only care about items that are absolute paths since
+        # any others should be covered by SRC_URI.
+        try:
+            (method, host, path, user, pswd, parm) = bb.fetch.decodeurl(url)
+            if method != "file" or not path:
+                raise bb.fetch.MalformedUrl(url)
+
+            if path[0] == '/':
+                if path.startswith((tmpdir, s, b, workdir)):
+                    continue
+                filelist.append(path + ":" + str(os.path.exists(path)))
+        except bb.fetch.MalformedUrl:
+            bb.fatal(d.getVar('PN') + ": LIC_FILES_CHKSUM contains an invalid URL: " + url)
+    return " ".join(filelist)
+
+def setup_hosttools_dir(dest, toolsvar, d, fatal=True):
+    tools = d.getVar(toolsvar).split()
+    origbbenv = d.getVar("BB_ORIGENV", False)
+    path = origbbenv.getVar("PATH")
+    # Need to ignore our own scripts directories to avoid circular links
+    for p in path.split(":"):
+        if p.endswith("/scripts"):
+            path = path.replace(p, "/ignoreme")
+    bb.utils.mkdirhier(dest)
+    notfound = []
+    for tool in tools:
+        desttool = os.path.join(dest, tool)
+        if not os.path.exists(desttool):
+            # clean up dead symlink
+            if os.path.islink(desttool):
+                os.unlink(desttool)
+            srctool = bb.utils.which(path, tool, executable=True)
+            # gcc/g++ may link to ccache on some hosts, e.g.,
+            # /usr/local/bin/ccache/gcc -> /usr/bin/ccache, then which(gcc)
+            # would return /usr/local/bin/ccache/gcc, but what we need is
+            # /usr/bin/gcc, this code can check and fix that.
+            if "ccache" in srctool:
+                srctool = bb.utils.which(path, tool, executable=True, direction=1)
+            if srctool:
+                os.symlink(srctool, desttool)
+            else:
+                notfound.append(tool)
+
+    if notfound and fatal:
+        bb.fatal("The following required tools (as specified by HOSTTOOLS) appear to be unavailable in PATH, please install them in order to proceed:\n  %s" % " ".join(notfound))
+
+addtask fetch
+do_fetch[dirs] = "${DL_DIR}"
+do_fetch[file-checksums] = "${@bb.fetch.get_checksum_file_list(d)}"
+do_fetch[file-checksums] += " ${@get_lic_checksum_file_list(d)}"
+do_fetch[vardeps] += "SRCREV"
+do_fetch[network] = "1"
+python base_do_fetch() {
+
+    src_uri = (d.getVar('SRC_URI') or "").split()
+    if not src_uri:
+        return
+
+    try:
+        fetcher = bb.fetch2.Fetch(src_uri, d)
+        fetcher.download()
+    except bb.fetch2.BBFetchException as e:
+        bb.fatal("Bitbake Fetcher Error: " + repr(e))
+}
+
+addtask unpack after do_fetch
+do_unpack[dirs] = "${WORKDIR}"
+
+do_unpack[cleandirs] = "${@d.getVar('S') if os.path.normpath(d.getVar('S')) != os.path.normpath(d.getVar('WORKDIR')) else os.path.join('${S}', 'patches')}"
+
+python base_do_unpack() {
+    src_uri = (d.getVar('SRC_URI') or "").split()
+    if not src_uri:
+        return
+
+    try:
+        fetcher = bb.fetch2.Fetch(src_uri, d)
+        fetcher.unpack(d.getVar('WORKDIR'))
+    except bb.fetch2.BBFetchException as e:
+        bb.fatal("Bitbake Fetcher Error: " + repr(e))
+}
+
+SSTATETASKS += "do_deploy_source_date_epoch"
+
+do_deploy_source_date_epoch () {
+    mkdir -p ${SDE_DEPLOYDIR}
+    if [ -e ${SDE_FILE} ]; then
+        echo "Deploying SDE from ${SDE_FILE} -> ${SDE_DEPLOYDIR}."
+        cp -p ${SDE_FILE} ${SDE_DEPLOYDIR}/__source_date_epoch.txt
+    else
+        echo "${SDE_FILE} not found!"
+    fi
+}
+
+python do_deploy_source_date_epoch_setscene () {
+    sstate_setscene(d)
+    bb.utils.mkdirhier(d.getVar('SDE_DIR'))
+    sde_file = os.path.join(d.getVar('SDE_DEPLOYDIR'), '__source_date_epoch.txt')
+    if os.path.exists(sde_file):
+        target = d.getVar('SDE_FILE')
+        bb.debug(1, "Moving setscene SDE file %s -> %s" % (sde_file, target))
+        bb.utils.rename(sde_file, target)
+    else:
+        bb.debug(1, "%s not found!" % sde_file)
+}
+
+do_deploy_source_date_epoch[dirs] = "${SDE_DEPLOYDIR}"
+do_deploy_source_date_epoch[sstate-plaindirs] = "${SDE_DEPLOYDIR}"
+addtask do_deploy_source_date_epoch_setscene
+addtask do_deploy_source_date_epoch before do_configure after do_patch
+
+python create_source_date_epoch_stamp() {
+    # Version: 1
+    source_date_epoch = oe.reproducible.get_source_date_epoch(d, d.getVar('S'))
+    oe.reproducible.epochfile_write(source_date_epoch, d.getVar('SDE_FILE'), d)
+}
+do_unpack[postfuncs] += "create_source_date_epoch_stamp"
+
+def get_source_date_epoch_value(d):
+    return oe.reproducible.epochfile_read(d.getVar('SDE_FILE'), d)
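+
+# get_source_date_epoch_value() is the accessor that configuration elsewhere
+# (not this class) can use to read the stamp written above; illustrative usage:
+#   export SOURCE_DATE_EPOCH ?= "${@get_source_date_epoch_value(d)}"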
+
+def get_layers_branch_rev(d):
+    revisions = oe.buildcfg.get_layer_revisions(d)
+    layers_branch_rev = ["%-20s = \"%s:%s\"" % (r[1], r[2], r[3]) for r in revisions]
+    i = len(layers_branch_rev)-1
+    p1 = layers_branch_rev[i].find("=")
+    s1 = layers_branch_rev[i][p1:]
+    while i > 0:
+        p2 = layers_branch_rev[i-1].find("=")
+        s2= layers_branch_rev[i-1][p2:]
+        if s1 == s2:
+            layers_branch_rev[i-1] = layers_branch_rev[i-1][0:p2]
+            i -= 1
+        else:
+            i -= 1
+            p1 = layers_branch_rev[i].find("=")
+            s1= layers_branch_rev[i][p1:]
+    return layers_branch_rev
+
+
+BUILDCFG_FUNCS ??= "buildcfg_vars get_layers_branch_rev buildcfg_neededvars"
+BUILDCFG_FUNCS[type] = "list"
+
+def buildcfg_vars(d):
+    statusvars = oe.data.typed_value('BUILDCFG_VARS', d)
+    for var in statusvars:
+        value = d.getVar(var)
+        if value is not None:
+            yield '%-20s = "%s"' % (var, value)
+
+def buildcfg_neededvars(d):
+    needed_vars = oe.data.typed_value("BUILDCFG_NEEDEDVARS", d)
+    pesteruser = []
+    for v in needed_vars:
+        val = d.getVar(v)
+        if not val or val == 'INVALID':
+            pesteruser.append(v)
+
+    if pesteruser:
+        bb.fatal('The following variable(s) were not set: %s\nPlease set them directly, or choose a MACHINE or DISTRO that sets them.' % ', '.join(pesteruser))
+
+addhandler base_eventhandler
+base_eventhandler[eventmask] = "bb.event.ConfigParsed bb.event.MultiConfigParsed bb.event.BuildStarted bb.event.RecipePreFinalise bb.event.RecipeParsed"
+python base_eventhandler() {
+    import bb.runqueue
+
+    if isinstance(e, bb.event.ConfigParsed):
+        if not d.getVar("NATIVELSBSTRING", False):
+            d.setVar("NATIVELSBSTRING", lsb_distro_identifier(d))
+        d.setVar("ORIGNATIVELSBSTRING", d.getVar("NATIVELSBSTRING", False))
+        d.setVar('BB_VERSION', bb.__version__)
+
+    # There might be no bb.event.ConfigParsed event if bitbake server is
+    # running, so check bb.event.BuildStarted too to make sure ${HOSTTOOLS_DIR}
+    # exists.
+    if isinstance(e, bb.event.ConfigParsed) or \
+            (isinstance(e, bb.event.BuildStarted) and not os.path.exists(d.getVar('HOSTTOOLS_DIR'))):
+        # Works with the line in layer.conf which changes PATH to point here
+        setup_hosttools_dir(d.getVar('HOSTTOOLS_DIR'), 'HOSTTOOLS', d)
+        setup_hosttools_dir(d.getVar('HOSTTOOLS_DIR'), 'HOSTTOOLS_NONFATAL', d, fatal=False)
+
+    if isinstance(e, bb.event.MultiConfigParsed):
+        # We need to expand SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS in each of the multiconfig data stores
+        # own contexts so the variables get expanded correctly for that arch, then inject back into
+        # the main data store.
+        deps = []
+        for config in e.mcdata:
+            deps.append(e.mcdata[config].getVar("SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS"))
+        deps = " ".join(deps)
+        e.mcdata[''].setVar("SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS", deps)
+
+    if isinstance(e, bb.event.BuildStarted):
+        localdata = bb.data.createCopy(d)
+        statuslines = []
+        for func in oe.data.typed_value('BUILDCFG_FUNCS', localdata):
+            g = globals()
+            if func not in g:
+                bb.warn("Build configuration function '%s' does not exist" % func)
+            else:
+                flines = g[func](localdata)
+                if flines:
+                    statuslines.extend(flines)
+
+        statusheader = d.getVar('BUILDCFG_HEADER')
+        if statusheader:
+            bb.plain('\n%s\n%s\n' % (statusheader, '\n'.join(statuslines)))
+
+    # This code silences warnings where the SDK variables overwrite the
+    # target ones and we'd see duplicate key names overwriting each other
+    # for various PREFERRED_PROVIDERS
+    if isinstance(e, bb.event.RecipePreFinalise):
+        if d.getVar("TARGET_PREFIX") == d.getVar("SDK_PREFIX"):
+            d.delVar("PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}binutils")
+            d.delVar("PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}gcc")
+            d.delVar("PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}g++")
+            d.delVar("PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}compilerlibs")
+
+    if isinstance(e, bb.event.RecipeParsed):
+        #
+        # If we have multiple providers of virtual/X and a PREFERRED_PROVIDER_virtual/X is set,
+        # skip parsing for all the other providers, which means they get uninstalled from the
+        # sysroot since they're now "unreachable". This makes switching virtual/kernel work in
+        # particular.
+        #
+        pn = d.getVar('PN')
+        source_mirror_fetch = d.getVar('SOURCE_MIRROR_FETCH', False)
+        if not source_mirror_fetch:
+            provs = (d.getVar("PROVIDES") or "").split()
+            multiprovidersallowed = (d.getVar("BB_MULTI_PROVIDER_ALLOWED") or "").split()
+            for p in provs:
+                if p.startswith("virtual/") and p not in multiprovidersallowed:
+                    profprov = d.getVar("PREFERRED_PROVIDER_" + p)
+                    if profprov and pn != profprov:
+                        raise bb.parse.SkipRecipe("PREFERRED_PROVIDER_%s set to %s, not %s" % (p, profprov, pn))
+}
+
+CONFIGURESTAMPFILE = "${WORKDIR}/configure.sstate"
+CLEANBROKEN = "0"
+
+addtask configure after do_patch
+do_configure[dirs] = "${B}"
+base_do_configure() {
+	if [ -n "${CONFIGURESTAMPFILE}" -a -e "${CONFIGURESTAMPFILE}" ]; then
+		if [ "`cat ${CONFIGURESTAMPFILE}`" != "${BB_TASKHASH}" ]; then
+			cd ${B}
+			if [ "${CLEANBROKEN}" != "1" -a \( -e Makefile -o -e makefile -o -e GNUmakefile \) ]; then
+				oe_runmake clean
+			fi
+			# -ignore_readdir_race does not work correctly with -delete;
+			# use xargs to avoid spurious build failures
+			find ${B} -ignore_readdir_race -name \*.la -type f -print0 | xargs -0 rm -f
+		fi
+	fi
+	if [ -n "${CONFIGURESTAMPFILE}" ]; then
+		mkdir -p `dirname ${CONFIGURESTAMPFILE}`
+		echo ${BB_TASKHASH} > ${CONFIGURESTAMPFILE}
+	fi
+}
+
+addtask compile after do_configure
+do_compile[dirs] = "${B}"
+base_do_compile() {
+	if [ -e Makefile -o -e makefile -o -e GNUmakefile ]; then
+		oe_runmake || die "make failed"
+	else
+		bbnote "nothing to compile"
+	fi
+}
+
+addtask install after do_compile
+do_install[dirs] = "${B}"
+# Remove and re-create ${D} so that it is guaranteed to be empty
+do_install[cleandirs] = "${D}"
+
+base_do_install() {
+	:
+}
+
+base_do_package() {
+	:
+}
+
+addtask build after do_populate_sysroot
+do_build[noexec] = "1"
+do_build[recrdeptask] += "do_deploy"
+do_build () {
+	:
+}
+
+def set_packagetriplet(d):
+    archs = []
+    tos = []
+    tvs = []
+
+    archs.append(d.getVar("PACKAGE_ARCHS").split())
+    tos.append(d.getVar("TARGET_OS"))
+    tvs.append(d.getVar("TARGET_VENDOR"))
+
+    def settriplet(d, varname, archs, tos, tvs):
+        triplets = []
+        for i in range(len(archs)):
+            for arch in archs[i]:
+                triplets.append(arch + tvs[i] + "-" + tos[i])
+        triplets.reverse()
+        d.setVar(varname, " ".join(triplets))
+
+    settriplet(d, "PKGTRIPLETS", archs, tos, tvs)
+
+    variants = d.getVar("MULTILIB_VARIANTS") or ""
+    for item in variants.split():
+        localdata = bb.data.createCopy(d)
+        overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + item
+        localdata.setVar("OVERRIDES", overrides)
+
+        archs.append(localdata.getVar("PACKAGE_ARCHS").split())
+        tos.append(localdata.getVar("TARGET_OS"))
+        tvs.append(localdata.getVar("TARGET_VENDOR"))
+
+    settriplet(d, "PKGMLTRIPLETS", archs, tos, tvs)
+
+python () {
+    import string, re
+
+    # Handle backfilling
+    oe.utils.features_backfill("DISTRO_FEATURES", d)
+    oe.utils.features_backfill("MACHINE_FEATURES", d)
+
+    if d.getVar("S")[-1] == '/':
+        bb.warn("Recipe %s sets S variable with trailing slash '%s', remove it" % (d.getVar("PN"), d.getVar("S")))
+    if d.getVar("B")[-1] == '/':
+        bb.warn("Recipe %s sets B variable with trailing slash '%s', remove it" % (d.getVar("PN"), d.getVar("B")))
+
+    if os.path.normpath(d.getVar("WORKDIR")) != os.path.normpath(d.getVar("S")):
+        d.appendVar("PSEUDO_IGNORE_PATHS", ",${S}")
+    if os.path.normpath(d.getVar("WORKDIR")) != os.path.normpath(d.getVar("B")):
+        d.appendVar("PSEUDO_IGNORE_PATHS", ",${B}")
+
+    # To add a recipe to the skip list, set:
+    #   SKIP_RECIPE[pn] = "message"
+    pn = d.getVar('PN')
+    skip_msg = d.getVarFlag('SKIP_RECIPE', pn)
+    if skip_msg:
+        bb.debug(1, "Skipping %s %s" % (pn, skip_msg))
+        raise bb.parse.SkipRecipe("Recipe will be skipped because: %s" % (skip_msg))
+
+    # Handle PACKAGECONFIG
+    #
+    # These take the form:
+    #
+    # PACKAGECONFIG ??= "<default options>"
+    # PACKAGECONFIG[foo] = "--enable-foo,--disable-foo,foo_depends,foo_runtime_depends,foo_runtime_recommends,foo_conflict_packageconfig"
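+    #
+    # Illustrative example (hypothetical recipe): with
+    #   PACKAGECONFIG ??= "foo"
+    #   PACKAGECONFIG[foo] = "--enable-foo,--disable-foo,libfoo,foo-daemon"
+    # the code below appends --enable-foo to PACKAGECONFIG_CONFARGS, libfoo to
+    # DEPENDS and foo-daemon to RDEPENDS:${PN}; if "foo" were not selected,
+    # --disable-foo would be appended instead.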
+    pkgconfigflags = d.getVarFlags("PACKAGECONFIG") or {}
+    if pkgconfigflags:
+        pkgconfig = (d.getVar('PACKAGECONFIG') or "").split()
+        pn = d.getVar("PN")
+
+        mlprefix = d.getVar("MLPREFIX")
+
+        def expandFilter(appends, extension, prefix):
+            appends = bb.utils.explode_deps(d.expand(" ".join(appends)))
+            newappends = []
+            for a in appends:
+                if a.endswith("-native") or ("-cross-" in a):
+                    newappends.append(a)
+                elif a.startswith("virtual/"):
+                    subs = a.split("/", 1)[1]
+                    if subs.startswith(prefix):
+                        newappends.append(a + extension)
+                    else:
+                        newappends.append("virtual/" + prefix + subs + extension)
+                else:
+                    if a.startswith(prefix):
+                        newappends.append(a + extension)
+                    else:
+                        newappends.append(prefix + a + extension)
+            return newappends
+
+        def appendVar(varname, appends):
+            if not appends:
+                return
+            if varname.find("DEPENDS") != -1:
+                if bb.data.inherits_class('nativesdk', d) or bb.data.inherits_class('cross-canadian', d) :
+                    appends = expandFilter(appends, "", "nativesdk-")
+                elif bb.data.inherits_class('native', d):
+                    appends = expandFilter(appends, "-native", "")
+                elif mlprefix:
+                    appends = expandFilter(appends, "", mlprefix)
+            varname = d.expand(varname)
+            d.appendVar(varname, " " + " ".join(appends))
+
+        extradeps = []
+        extrardeps = []
+        extrarrecs = []
+        extraconf = []
+        for flag, flagval in sorted(pkgconfigflags.items()):
+            items = flagval.split(",")
+            num = len(items)
+            if num > 6:
+                bb.error("%s: PACKAGECONFIG[%s] Only enable,disable,depend,rdepend,rrecommend,conflict_packageconfig can be specified!"
+                    % (d.getVar('PN'), flag))
+
+            if flag in pkgconfig:
+                if num >= 3 and items[2]:
+                    extradeps.append(items[2])
+                if num >= 4 and items[3]:
+                    extrardeps.append(items[3])
+                if num >= 5 and items[4]:
+                    extrarrecs.append(items[4])
+                if num >= 1 and items[0]:
+                    extraconf.append(items[0])
+            elif num >= 2 and items[1]:
+                    extraconf.append(items[1])
+
+            if num >= 6 and items[5]:
+                conflicts = set(items[5].split())
+                invalid = conflicts.difference(set(pkgconfigflags.keys()))
+                if invalid:
+                    bb.error("%s: PACKAGECONFIG[%s] Invalid conflict package config%s '%s' specified."
+                        % (d.getVar('PN'), flag, 's' if len(invalid) > 1 else '', ' '.join(invalid)))
+
+                if flag in pkgconfig:
+                    intersec = conflicts.intersection(set(pkgconfig))
+                    if intersec:
+                        bb.fatal("%s: PACKAGECONFIG[%s] Conflict package config%s '%s' set in PACKAGECONFIG."
+                            % (d.getVar('PN'), flag, 's' if len(intersec) > 1 else '', ' '.join(intersec)))
+
+        appendVar('DEPENDS', extradeps)
+        appendVar('RDEPENDS:${PN}', extrardeps)
+        appendVar('RRECOMMENDS:${PN}', extrarrecs)
+        appendVar('PACKAGECONFIG_CONFARGS', extraconf)
+
+    pn = d.getVar('PN')
+    license = d.getVar('LICENSE')
+    if license == "INVALID" and pn != "defaultpkgname":
+        bb.fatal('This recipe does not have the LICENSE field set (%s)' % pn)
+
+    if bb.data.inherits_class('license', d):
+        check_license_format(d)
+        unmatched_license_flags = check_license_flags(d)
+        if unmatched_license_flags:
+            if len(unmatched_license_flags) == 1:
+                message = "because it has a restricted license '{0}'. Which is not listed in LICENSE_FLAGS_ACCEPTED".format(unmatched_license_flags[0])
+            else:
+                message = "because it has restricted licenses {0}. Which are not listed in LICENSE_FLAGS_ACCEPTED".format(
+                    ", ".join("'{0}'".format(f) for f in unmatched_license_flags))
+            bb.debug(1, "Skipping %s %s" % (pn, message))
+            raise bb.parse.SkipRecipe(message)
+
+    # If we're building a target package we need to use fakeroot (pseudo)
+    # in order to capture permissions, owners, groups and special files
+    if not bb.data.inherits_class('native', d) and not bb.data.inherits_class('cross', d):
+        d.appendVarFlag('do_prepare_recipe_sysroot', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
+        d.appendVarFlag('do_install', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
+        d.setVarFlag('do_install', 'fakeroot', '1')
+        d.appendVarFlag('do_package', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
+        d.setVarFlag('do_package', 'fakeroot', '1')
+        d.setVarFlag('do_package_setscene', 'fakeroot', '1')
+        d.appendVarFlag('do_package_setscene', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
+        d.setVarFlag('do_devshell', 'fakeroot', '1')
+        d.appendVarFlag('do_devshell', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
+
+    need_machine = d.getVar('COMPATIBLE_MACHINE')
+    if need_machine and not d.getVar('PARSE_ALL_RECIPES', False):
+        import re
+        compat_machines = (d.getVar('MACHINEOVERRIDES') or "").split(":")
+        for m in compat_machines:
+            if re.match(need_machine, m):
+                break
+        else:
+            raise bb.parse.SkipRecipe("incompatible with machine %s (not in COMPATIBLE_MACHINE)" % d.getVar('MACHINE'))
+
+    source_mirror_fetch = d.getVar('SOURCE_MIRROR_FETCH', False) or d.getVar('PARSE_ALL_RECIPES', False)
+    if not source_mirror_fetch:
+        need_host = d.getVar('COMPATIBLE_HOST')
+        if need_host:
+            import re
+            this_host = d.getVar('HOST_SYS')
+            if not re.match(need_host, this_host):
+                raise bb.parse.SkipRecipe("incompatible with host %s (not in COMPATIBLE_HOST)" % this_host)
+
+        bad_licenses = (d.getVar('INCOMPATIBLE_LICENSE') or "").split()
+
+        check_license = False if pn.startswith("nativesdk-") else True
+        for t in ["-native", "-cross-${TARGET_ARCH}", "-cross-initial-${TARGET_ARCH}",
+              "-crosssdk-${SDK_SYS}", "-crosssdk-initial-${SDK_SYS}",
+              "-cross-canadian-${TRANSLATED_TARGET_ARCH}"]:
+            if pn.endswith(d.expand(t)):
+                check_license = False
+        if pn.startswith("gcc-source-"):
+            check_license = False
+
+        if check_license and bad_licenses:
+            bad_licenses = expand_wildcard_licenses(d, bad_licenses)
+
+            exceptions = (d.getVar("INCOMPATIBLE_LICENSE_EXCEPTIONS") or "").split()
+
+            for lic_exception in exceptions:
+                if ":" in lic_exception:
+                    lic_exception = lic_exception.split(":")[1]
+                if lic_exception in oe.license.obsolete_license_list():
+                    bb.fatal("Obsolete license %s used in INCOMPATIBLE_LICENSE_EXCEPTIONS" % lic_exception)
+
+            pkgs = d.getVar('PACKAGES').split()
+            skipped_pkgs = {}
+            unskipped_pkgs = []
+            for pkg in pkgs:
+                remaining_bad_licenses = oe.license.apply_pkg_license_exception(pkg, bad_licenses, exceptions)
+
+                incompatible_lic = incompatible_license(d, remaining_bad_licenses, pkg)
+                if incompatible_lic:
+                    skipped_pkgs[pkg] = incompatible_lic
+                else:
+                    unskipped_pkgs.append(pkg)
+
+            if unskipped_pkgs:
+                for pkg in skipped_pkgs:
+                    bb.debug(1, "Skipping the package %s at do_rootfs because of incompatible license(s): %s" % (pkg, ' '.join(skipped_pkgs[pkg])))
+                    d.setVar('_exclude_incompatible-' + pkg, ' '.join(skipped_pkgs[pkg]))
+                for pkg in unskipped_pkgs:
+                    bb.debug(1, "Including the package %s" % pkg)
+            else:
+                incompatible_lic = incompatible_license(d, bad_licenses)
+                for pkg in skipped_pkgs:
+                    incompatible_lic += skipped_pkgs[pkg]
+                incompatible_lic = sorted(list(set(incompatible_lic)))
+
+                if incompatible_lic:
+                    bb.debug(1, "Skipping recipe %s because of incompatible license(s): %s" % (pn, ' '.join(incompatible_lic)))
+                    raise bb.parse.SkipRecipe("it has incompatible license(s): %s" % ' '.join(incompatible_lic))
+
+    needsrcrev = False
+    srcuri = d.getVar('SRC_URI')
+    for uri_string in srcuri.split():
+        uri = bb.fetch.URI(uri_string)
+        # Also check downloadfilename as the URL path might not be useful for sniffing
+        path = uri.params.get("downloadfilename", uri.path)
+
+        # HTTP/FTP use the wget fetcher
+        if uri.scheme in ("http", "https", "ftp"):
+            d.appendVarFlag('do_fetch', 'depends', ' wget-native:do_populate_sysroot')
+
+        # Svn packages should DEPEND on subversion-native
+        if uri.scheme == "svn":
+            needsrcrev = True
+            d.appendVarFlag('do_fetch', 'depends', ' subversion-native:do_populate_sysroot')
+
+        # Git packages should DEPEND on git-native
+        elif uri.scheme in ("git", "gitsm"):
+            needsrcrev = True
+            d.appendVarFlag('do_fetch', 'depends', ' git-native:do_populate_sysroot')
+
+        # Mercurial packages should DEPEND on mercurial-native
+        elif uri.scheme == "hg":
+            needsrcrev = True
+            d.appendVar("EXTRANATIVEPATH", ' python3-native ')
+            d.appendVarFlag('do_fetch', 'depends', ' mercurial-native:do_populate_sysroot')
+
+        # Perforce packages support SRCREV = "${AUTOREV}"
+        elif uri.scheme == "p4":
+            needsrcrev = True
+
+        # OSC packages should DEPEND on osc-native
+        elif uri.scheme == "osc":
+            d.appendVarFlag('do_fetch', 'depends', ' osc-native:do_populate_sysroot')
+
+        elif uri.scheme == "npm":
+            d.appendVarFlag('do_fetch', 'depends', ' nodejs-native:do_populate_sysroot')
+
+        elif uri.scheme == "repo":
+            needsrcrev = True
+            d.appendVarFlag('do_fetch', 'depends', ' repo-native:do_populate_sysroot')
+
+        # *.lz4 should DEPEND on lz4-native for unpacking
+        if path.endswith('.lz4'):
+            d.appendVarFlag('do_unpack', 'depends', ' lz4-native:do_populate_sysroot')
+
+        # *.zst should DEPEND on zstd-native for unpacking
+        elif path.endswith('.zst'):
+            d.appendVarFlag('do_unpack', 'depends', ' zstd-native:do_populate_sysroot')
+
+        # *.lz should DEPEND on lzip-native for unpacking
+        elif path.endswith('.lz'):
+            d.appendVarFlag('do_unpack', 'depends', ' lzip-native:do_populate_sysroot')
+
+        # *.xz should DEPEND on xz-native for unpacking
+        elif path.endswith('.xz') or path.endswith('.txz'):
+            d.appendVarFlag('do_unpack', 'depends', ' xz-native:do_populate_sysroot')
+
+        # .zip should DEPEND on unzip-native for unpacking
+        elif path.endswith('.zip') or path.endswith('.jar'):
+            d.appendVarFlag('do_unpack', 'depends', ' unzip-native:do_populate_sysroot')
+
+        # Some rpm files may be compressed internally using xz (for example, rpms from Fedora)
+        elif path.endswith('.rpm'):
+            d.appendVarFlag('do_unpack', 'depends', ' xz-native:do_populate_sysroot')
+
+        # *.deb should DEPEND on xz-native for unpacking
+        elif path.endswith('.deb'):
+            d.appendVarFlag('do_unpack', 'depends', ' xz-native:do_populate_sysroot')
+
+    if needsrcrev:
+        d.setVar("SRCPV", "${@bb.fetch2.get_srcrev(d)}")
+
+        # Gather all named SRCREVs to add to the sstate hash calculation
+        # This anonymous python snippet is called multiple times so we
+        # need to be careful to not double up the appends here and cause
+        # the base hash to mismatch the task hash
+        for uri in srcuri.split():
+            parm = bb.fetch.decodeurl(uri)[5]
+            uri_names = parm.get("name", "").split(",")
+            for uri_name in filter(None, uri_names):
+                srcrev_name = "SRCREV_{}".format(uri_name)
+                if srcrev_name not in (d.getVarFlag("do_fetch", "vardeps") or "").split():
+                    d.appendVarFlag("do_fetch", "vardeps", " {}".format(srcrev_name))
+
+    set_packagetriplet(d)
+
+    # 'multimachine' handling
+    mach_arch = d.getVar('MACHINE_ARCH')
+    pkg_arch = d.getVar('PACKAGE_ARCH')
+
+    if (pkg_arch == mach_arch):
+        # Already machine specific - nothing further to do
+        return
+
+    #
+    # We always try to scan SRC_URI for urls with machine overrides
+    # unless the package sets SRC_URI_OVERRIDES_PACKAGE_ARCH=0
+    #
+    override = d.getVar('SRC_URI_OVERRIDES_PACKAGE_ARCH')
+    if override != '0':
+        paths = []
+        fpaths = (d.getVar('FILESPATH') or '').split(':')
+        machine = d.getVar('MACHINE')
+        for p in fpaths:
+            if os.path.basename(p) == machine and os.path.isdir(p):
+                paths.append(p)
+
+        if paths:
+            for s in srcuri.split():
+                if not s.startswith("file://"):
+                    continue
+                fetcher = bb.fetch2.Fetch([s], d)
+                local = fetcher.localpath(s)
+                for mp in paths:
+                    if local.startswith(mp):
+                        #bb.note("overriding PACKAGE_ARCH from %s to %s for %s" % (pkg_arch, mach_arch, pn))
+                        d.setVar('PACKAGE_ARCH', "${MACHINE_ARCH}")
+                        return
+
+    packages = d.getVar('PACKAGES').split()
+    for pkg in packages:
+        pkgarch = d.getVar("PACKAGE_ARCH_%s" % pkg)
+
+        # We could look for != PACKAGE_ARCH here but how to choose
+        # if multiple differences are present?
+        # Look through PACKAGE_ARCHS for the priority order?
+        if pkgarch and pkgarch == mach_arch:
+            d.setVar('PACKAGE_ARCH', "${MACHINE_ARCH}")
+            bb.warn("Recipe %s is marked as only being architecture specific but seems to have machine specific packages?! The recipe may as well mark itself as machine specific directly." % d.getVar("PN"))
+}
+
+addtask cleansstate after do_clean
+python do_cleansstate() {
+        sstate_clean_cachefiles(d)
+}
+addtask cleanall after do_cleansstate
+do_cleansstate[nostamp] = "1"
+
+python do_cleanall() {
+    src_uri = (d.getVar('SRC_URI') or "").split()
+    if not src_uri:
+        return
+
+    try:
+        fetcher = bb.fetch2.Fetch(src_uri, d)
+        fetcher.clean()
+    except bb.fetch2.BBFetchException as e:
+        bb.fatal(str(e))
+}
+do_cleanall[nostamp] = "1"
+
+
+EXPORT_FUNCTIONS do_fetch do_unpack do_configure do_compile do_install do_package
diff --git a/poky/meta/classes-global/buildstats.bbclass b/poky/meta/classes-global/buildstats.bbclass
new file mode 100644
index 0000000..f49a67a
--- /dev/null
+++ b/poky/meta/classes-global/buildstats.bbclass
@@ -0,0 +1,302 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+BUILDSTATS_BASE = "${TMPDIR}/buildstats/"
+
+################################################################################
+# Build statistics gathering.
+#
+# The CPU and Time gathering/tracking functions and bbevent inspiration
+# were written by Christopher Larson.
+#
+################################################################################
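+# Illustrative usage (configuration, not part of this class): the class is
+# typically enabled from local.conf, e.g.
+#   USER_CLASSES += "buildstats"
+# after which per-build and per-task statistics are written under ${BUILDSTATS_BASE}.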
+
+def get_buildprocess_cputime(pid):
+    with open("/proc/%d/stat" % pid, "r") as f:
+        fields = f.readline().rstrip().split()
+    # 13: utime, 14: stime, 15: cutime, 16: cstime
+    return sum(int(field) for field in fields[13:17])
+
+def get_process_cputime(pid):
+    import resource
+    with open("/proc/%d/stat" % pid, "r") as f:
+        fields = f.readline().rstrip().split()
+    stats = { 
+        'utime'  : fields[13],
+        'stime'  : fields[14], 
+        'cutime' : fields[15], 
+        'cstime' : fields[16],  
+    }
+    iostats = {}
+    if os.path.isfile("/proc/%d/io" % pid):
+        with open("/proc/%d/io" % pid, "r") as f:
+            while True:
+                i = f.readline().strip()
+                if not i:
+                    break
+                if not ":" in i:
+                    # an extra line (empty or containing "0") is sometimes
+                    # appended, most probably due to a race condition in the
+                    # kernel while updating the IO stats
+                    break
+                i = i.split(": ")
+                iostats[i[0]] = i[1]
+    resources = resource.getrusage(resource.RUSAGE_SELF)
+    childres = resource.getrusage(resource.RUSAGE_CHILDREN)
+    return stats, iostats, resources, childres
+
+def get_cputime():
+    with open("/proc/stat", "r") as f:
+        fields = f.readline().rstrip().split()[1:]
+    return sum(int(field) for field in fields)
+
+def set_timedata(var, d, server_time):
+    d.setVar(var, server_time)
+
+def get_timedata(var, d, end_time):
+    oldtime = d.getVar(var, False)
+    if oldtime is None:
+        return
+    return end_time - oldtime
+
+def set_buildtimedata(var, d):
+    import time
+    time = time.time()
+    cputime = get_cputime()
+    proctime = get_buildprocess_cputime(os.getpid())
+    d.setVar(var, (time, cputime, proctime))
+
+def get_buildtimedata(var, d):
+    import time
+    timedata = d.getVar(var, False)
+    if timedata is None:
+        return
+    oldtime, oldcpu, oldproc = timedata
+    procdiff = get_buildprocess_cputime(os.getpid()) - oldproc
+    cpudiff = get_cputime() - oldcpu
+    end_time = time.time()
+    timediff = end_time - oldtime
+    if cpudiff > 0:
+        cpuperc = float(procdiff) * 100 / cpudiff
+    else:
+        cpuperc = None
+    return timediff, cpuperc
+
+def write_task_data(status, logfile, e, d):
+    with open(os.path.join(logfile), "a") as f:
+        elapsedtime = get_timedata("__timedata_task", d, e.time)
+        if elapsedtime:
+            f.write(d.expand("${PF}: %s\n" % e.task))
+            f.write(d.expand("Elapsed time: %0.2f seconds\n" % elapsedtime))
+            cpu, iostats, resources, childres = get_process_cputime(os.getpid())
+            if cpu:
+                f.write("utime: %s\n" % cpu['utime'])
+                f.write("stime: %s\n" % cpu['stime'])
+                f.write("cutime: %s\n" % cpu['cutime'])
+                f.write("cstime: %s\n" % cpu['cstime'])
+            for i in iostats:
+                f.write("IO %s: %s\n" % (i, iostats[i]))
+            rusages = ["ru_utime", "ru_stime", "ru_maxrss", "ru_minflt", "ru_majflt", "ru_inblock", "ru_oublock", "ru_nvcsw", "ru_nivcsw"]
+            for i in rusages:
+                f.write("rusage %s: %s\n" % (i, getattr(resources, i)))
+            for i in rusages:
+                f.write("Child rusage %s: %s\n" % (i, getattr(childres, i)))
+        if status == "passed":
+            f.write("Status: PASSED \n")
+        else:
+            f.write("Status: FAILED \n")
+        f.write("Ended: %0.2f \n" % e.time)
+
+def write_host_data(logfile, e, d, type):
+    import subprocess, os, datetime
+    # minimum time allowed for each command to run, in seconds
+    time_threshold = 0.5
+    limit = 10
+    # the total number of commands
+    num_cmds = 0
+    msg = ""
+    if type == "interval":
+        # interval at which data will be logged
+        interval = d.getVar("BB_HEARTBEAT_EVENT", False)
+        if interval is None:
+            bb.warn("buildstats: Collecting host data at intervals failed. Set BB_HEARTBEAT_EVENT=\"<interval>\" in conf/local.conf for the interval at which host data will be logged.")
+            d.setVar("BB_LOG_HOST_STAT_ON_INTERVAL", "0")
+            return
+        interval = int(interval)
+        cmds = d.getVar('BB_LOG_HOST_STAT_CMDS_INTERVAL')
+        msg = "Host Stats: Collecting data at %d second intervals.\n" % interval
+        if cmds is None:
+            d.setVar("BB_LOG_HOST_STAT_ON_INTERVAL", "0")
+            bb.warn("buildstats: Collecting host data at intervals failed. Set BB_LOG_HOST_STAT_CMDS_INTERVAL=\"command1 ; command2 ; ... \" in conf/local.conf.")
+            return
+    if type == "failure":
+        cmds = d.getVar('BB_LOG_HOST_STAT_CMDS_FAILURE')
+        msg = "Host Stats: Collecting data on failure.\n"
+        msg += "Failed at task: " + e.task + "\n"
+        if cmds is None:
+            d.setVar("BB_LOG_HOST_STAT_ON_FAILURE", "0")
+            bb.warn("buildstats: Collecting host data on failure failed. Set BB_LOG_HOST_STAT_CMDS_FAILURE=\"command1 ; command2 ; ... \" in conf/local.conf.")
+            return
+    c_san = []
+    for cmd in cmds.split(";"):
+        if len(cmd) == 0:
+            continue
+        num_cmds += 1
+        c_san.append(cmd)
+    if num_cmds == 0:
+        if type == "interval":
+            d.setVar("BB_LOG_HOST_STAT_ON_INTERVAL", "0")
+        if type == "failure":
+            d.setVar("BB_LOG_HOST_STAT_ON_FAILURE", "0")
+        return
+
+    # return if the interval is not enough to run all commands within the specified BB_HEARTBEAT_EVENT interval
+    if type == "interval":
+        limit = interval / num_cmds
+        if limit <= time_threshold:
+            d.setVar("BB_LOG_HOST_STAT_ON_INTERVAL", "0")
+            bb.warn("buildstats: Collecting host data failed. BB_HEARTBEAT_EVENT interval not enough to run the specified commands. Increase value of BB_HEARTBEAT_EVENT in conf/local.conf.")
+            return
+
+    # set the environment variables 
+    path = d.getVar("PATH")
+    opath = d.getVar("BB_ORIGENV", False).getVar("PATH")
+    ospath = os.environ['PATH']
+    os.environ['PATH'] = path + ":" + opath + ":" + ospath
+    with open(logfile, "a") as f:
+        f.write("Event Time: %f\nDate: %s\n" % (e.time, datetime.datetime.now()))
+        f.write("%s" % msg)
+        for c in c_san:
+            try:
+                output = subprocess.check_output(c.split(), stderr=subprocess.STDOUT, timeout=limit).decode('utf-8')
+            except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError) as err:
+                output = "Error running command: %s\n%s\n" % (c, err)
+            f.write("%s\n%s\n" % (c, output))
+    # reset the environment
+    os.environ['PATH'] = ospath
+
+python run_buildstats () {
+    import bb.build
+    import bb.event
+    import time, subprocess, platform
+
+    bn = d.getVar('BUILDNAME')
+    ########################################################################
+    # bitbake fires HeartbeatEvent even before a build has been
+    # triggered, causing BUILDNAME to be None
+    ########################################################################
+    if bn is not None:
+        bsdir = os.path.join(d.getVar('BUILDSTATS_BASE'), bn)
+        taskdir = os.path.join(bsdir, d.getVar('PF'))
+        if isinstance(e, bb.event.HeartbeatEvent) and bb.utils.to_boolean(d.getVar("BB_LOG_HOST_STAT_ON_INTERVAL")):
+            bb.utils.mkdirhier(bsdir)
+            write_host_data(os.path.join(bsdir, "host_stats_interval"), e, d, "interval")
+
+    if isinstance(e, bb.event.BuildStarted):
+        ########################################################################
+        # If the kernel was not configured to provide I/O statistics, issue
+        # a one time warning.
+        ########################################################################
+        if not os.path.isfile("/proc/%d/io" % os.getpid()):
+            bb.warn("The Linux kernel on your build host was not configured to provide process I/O statistics. (CONFIG_TASK_IO_ACCOUNTING is not set)")
+
+        ########################################################################
+        # at first pass make the buildstats hierarchy and then
+        # set the buildname
+        ########################################################################
+        bb.utils.mkdirhier(bsdir)
+        set_buildtimedata("__timedata_build", d)
+        build_time = os.path.join(bsdir, "build_stats")
+        # write start of build into build_time
+        with open(build_time, "a") as f:
+            host_info = platform.uname()
+            f.write("Host Info: ")
+            for x in host_info:
+                if x:
+                    f.write(x + " ")
+            f.write("\n")
+            f.write("Build Started: %0.2f \n" % d.getVar('__timedata_build', False)[0])
+
+    elif isinstance(e, bb.event.BuildCompleted):
+        build_time = os.path.join(bsdir, "build_stats")
+        with open(build_time, "a") as f:
+            ########################################################################
+            # Write build statistics for the build
+            ########################################################################
+            timedata = get_buildtimedata("__timedata_build", d)
+            if timedata:
+                time, cpu = timedata
+                # write end of build and cpu used into build_time
+                f.write("Elapsed time: %0.2f seconds \n" % (time))
+                if cpu:
+                    f.write("CPU usage: %0.1f%% \n" % cpu)
+
+    if isinstance(e, bb.build.TaskStarted):
+        set_timedata("__timedata_task", d, e.time)
+        bb.utils.mkdirhier(taskdir)
+        # write into the task event file the name and start time
+        with open(os.path.join(taskdir, e.task), "a") as f:
+            f.write("Event: %s \n" % bb.event.getName(e))
+            f.write("Started: %0.2f \n" % e.time)
+
+    elif isinstance(e, bb.build.TaskSucceeded):
+        write_task_data("passed", os.path.join(taskdir, e.task), e, d)
+        if e.task == "do_rootfs":
+            bs = os.path.join(bsdir, "build_stats")
+            with open(bs, "a") as f:
+                rootfs = d.getVar('IMAGE_ROOTFS')
+                if os.path.isdir(rootfs):
+                    try:
+                        rootfs_size = subprocess.check_output(["du", "-sh", rootfs],
+                                stderr=subprocess.STDOUT).decode('utf-8')
+                        f.write("Uncompressed Rootfs size: %s" % rootfs_size)
+                    except subprocess.CalledProcessError as err:
+                        bb.warn("Failed to get rootfs size: %s" % err.output.decode('utf-8'))
+
+    elif isinstance(e, bb.build.TaskFailed):
+        # Can have a failure before TaskStarted so need to mkdir here too
+        bb.utils.mkdirhier(taskdir)
+        write_task_data("failed", os.path.join(taskdir, e.task), e, d)
+        ########################################################################
+        # Let's make things easier and tell people where the build failed in
+        # build_status. We do this here because BuildCompleted triggers no
+        # matter what the status of the build actually is
+        ########################################################################
+        build_status = os.path.join(bsdir, "build_stats")
+        with open(build_status, "a") as f:
+            f.write(d.expand("Failed at: ${PF} at task: %s \n" % e.task))
+        if bb.utils.to_boolean(d.getVar("BB_LOG_HOST_STAT_ON_FAILURE")):
+            write_host_data(os.path.join(bsdir, "host_stats_%s_failure" % e.task), e, d, "failure")
+}
+
+addhandler run_buildstats
+run_buildstats[eventmask] = "bb.event.BuildStarted bb.event.BuildCompleted bb.event.HeartbeatEvent bb.build.TaskStarted bb.build.TaskSucceeded bb.build.TaskFailed"
+
+python runqueue_stats () {
+    import buildstats
+    from bb import event, runqueue
+    # We should not record any samples before the first task has started,
+    # because that's the first activity shown in the process chart.
+    # Besides, at that point we are sure that the build variables
+    # are available that we need to find the output directory.
+    # The persistent SystemStats is stored in the datastore and
+    # closed when the build is done.
+    system_stats = d.getVar('_buildstats_system_stats', False)
+    if not system_stats and isinstance(e, (bb.runqueue.sceneQueueTaskStarted, bb.runqueue.runQueueTaskStarted)):
+        system_stats = buildstats.SystemStats(d)
+        d.setVar('_buildstats_system_stats', system_stats)
+    if system_stats:
+        # Ensure that we sample at important events.
+        done = isinstance(e, bb.event.BuildCompleted)
+        if system_stats.sample(e, force=done):
+            d.setVar('_buildstats_system_stats', system_stats)
+        if done:
+            system_stats.close()
+            d.delVar('_buildstats_system_stats')
+}
+
+addhandler runqueue_stats
+runqueue_stats[eventmask] = "bb.runqueue.sceneQueueTaskStarted bb.runqueue.runQueueTaskStarted bb.event.HeartbeatEvent bb.event.BuildCompleted bb.event.MonitorDiskEvent"
diff --git a/poky/meta/classes-global/debian.bbclass b/poky/meta/classes-global/debian.bbclass
new file mode 100644
index 0000000..7135d74
--- /dev/null
+++ b/poky/meta/classes-global/debian.bbclass
@@ -0,0 +1,156 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Debian package renaming only occurs when a package is built
+# We therefore have to make sure we build all runtime packages
+# before building the current package so that the package's runtime
+# dependencies are correct
+#
+# Custom library package names can be defined by setting
+# DEBIANNAME:<pkgname> to the desired name.
+#
+# Better expressed as: ensure all RDEPENDS packages are built before we package this one.
+# This means we can't have circular RDEPENDS/RRECOMMENDS.
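+#
+# Illustrative recipe-side override (hypothetical package name):
+#   DEBIANNAME:${PN} = "libfoo2"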
+
+AUTO_LIBNAME_PKGS = "${PACKAGES}"
+
+inherit package
+
+DEBIANRDEP = "do_packagedata"
+do_package_write_ipk[deptask] = "${DEBIANRDEP}"
+do_package_write_deb[deptask] = "${DEBIANRDEP}"
+do_package_write_tar[deptask] = "${DEBIANRDEP}"
+do_package_write_rpm[deptask] = "${DEBIANRDEP}"
+do_package_write_ipk[rdeptask] = "${DEBIANRDEP}"
+do_package_write_deb[rdeptask] = "${DEBIANRDEP}"
+do_package_write_tar[rdeptask] = "${DEBIANRDEP}"
+do_package_write_rpm[rdeptask] = "${DEBIANRDEP}"
+
+python () {
+    if not d.getVar("PACKAGES"):
+        d.setVar("DEBIANRDEP", "")
+}
+
+python debian_package_name_hook () {
+    import glob, copy, stat, errno, re, pathlib, subprocess
+
+    pkgdest = d.getVar("PKGDEST")
+    packages = d.getVar('PACKAGES')
+    so_re = re.compile(r"lib.*\.so")
+
+    def socrunch(s):
+        s = s.lower().replace('_', '-')
+        m = re.match(r"^(.*)(.)\.so\.(.*)$", s)
+        if m is None:
+            return None
+        if m.group(2) in '0123456789':
+            bin = '%s%s-%s' % (m.group(1), m.group(2), m.group(3))
+        else:
+            bin = m.group(1) + m.group(2) + m.group(3)
+        dev = m.group(1) + m.group(2)
+        return (bin, dev)
+
+    def isexec(path):
+        try:
+            s = os.stat(path)
+        except (os.error, AttributeError):
+            return 0
+        return (s[stat.ST_MODE] & stat.S_IEXEC)
+
+    def add_rprovides(pkg, d):
+        newpkg = d.getVar('PKG:' + pkg)
+        if newpkg and newpkg != pkg:
+            provs = (d.getVar('RPROVIDES:' + pkg) or "").split()
+            if pkg not in provs:
+                d.appendVar('RPROVIDES:' + pkg, " " + pkg + " (=" + d.getVar("PKGV") + ")")
+
+    def auto_libname(packages, orig_pkg):
+        p = lambda var: pathlib.PurePath(d.getVar(var))
+        libdirs = (p("base_libdir"), p("libdir"))
+        bindirs = (p("base_bindir"), p("base_sbindir"), p("bindir"), p("sbindir"))
+
+        sonames = []
+        has_bins = 0
+        has_libs = 0
+        for f in pkgfiles[orig_pkg]:
+            # This is .../packages-split/orig_pkg/
+            pkgpath = pathlib.PurePath(pkgdest, orig_pkg)
+            # Strip pkgpath off the full path to a file in the package, re-root
+            # so it is absolute, and then get the parent directory of the file.
+            path = pathlib.PurePath("/") / (pathlib.PurePath(f).relative_to(pkgpath).parent)
+            if path in bindirs:
+                has_bins = 1
+            if path in libdirs:
+                has_libs = 1
+                if so_re.match(os.path.basename(f)):
+                    try:
+                        cmd = [d.expand("${TARGET_PREFIX}objdump"), "-p", f]
+                        output = subprocess.check_output(cmd).decode("utf-8")
+                        for m in re.finditer(r"\s+SONAME\s+([^\s]+)", output):
+                            if m.group(1) not in sonames:
+                                sonames.append(m.group(1))
+                    except subprocess.CalledProcessError:
+                        pass
+        bb.debug(1, 'LIBNAMES: pkg %s libs %d bins %d sonames %s' % (orig_pkg, has_libs, has_bins, sonames))
+        soname = None
+        if len(sonames) == 1:
+            soname = sonames[0]
+        elif len(sonames) > 1:
+            lead = d.getVar('LEAD_SONAME')
+            if lead:
+                r = re.compile(lead)
+                filtered = []
+                for s in sonames:
+                    if r.match(s):
+                        filtered.append(s)
+                if len(filtered) == 1:
+                    soname = filtered[0]
+                elif len(filtered) > 1:
+                    bb.note("Multiple matches (%s) for LEAD_SONAME '%s'" % (", ".join(filtered), lead))
+                else:
+                    bb.note("Multiple libraries (%s) found, but LEAD_SONAME '%s' doesn't match any of them" % (", ".join(sonames), lead))
+            else:
+                bb.note("Multiple libraries (%s) found and LEAD_SONAME not defined" % ", ".join(sonames))
+
+        if has_libs and not has_bins and soname:
+            soname_result = socrunch(soname)
+            if soname_result:
+                (pkgname, devname) = soname_result
+                for pkg in packages.split():
+                    if (d.getVar('PKG:' + pkg, False) or d.getVar('DEBIAN_NOAUTONAME:' + pkg, False)):
+                        add_rprovides(pkg, d)
+                        continue
+                    debian_pn = d.getVar('DEBIANNAME:' + pkg, False)
+                    if debian_pn:
+                        newpkg = debian_pn
+                    elif pkg == orig_pkg:
+                        newpkg = pkgname
+                    else:
+                        newpkg = pkg.replace(orig_pkg, devname, 1)
+                    mlpre=d.getVar('MLPREFIX')
+                    if mlpre:
+                        if not newpkg.find(mlpre) == 0:
+                            newpkg = mlpre + newpkg
+                    if newpkg != pkg:
+                        bb.note("debian: renaming %s to %s" % (pkg, newpkg))
+                        d.setVar('PKG:' + pkg, newpkg)
+                        add_rprovides(pkg, d)
+        else:
+            add_rprovides(orig_pkg, d)
+
+    # reversed sort is needed when one package name is a substring of another;
+    # e.g. in ncurses, without the reverse sort we get:
+    # DEBUG: LIBNAMES: pkgname libtic5 devname libtic pkg ncurses-libtic orig_pkg ncurses-libtic debian_pn None newpkg libtic5
+    # and later
+    # DEBUG: LIBNAMES: pkgname libtic5 devname libtic pkg ncurses-libticw orig_pkg ncurses-libtic debian_pn None newpkg libticw
+    # so we need to handle ncurses-libticw->libticw5 before ncurses-libtic->libtic5
+    for pkg in sorted((d.getVar('AUTO_LIBNAME_PKGS') or "").split(), reverse=True):
+        auto_libname(packages, pkg)
+}
+
+EXPORT_FUNCTIONS package_name_hook
+
+DEBIAN_NAMES = "1"
diff --git a/poky/meta/classes-global/devshell.bbclass b/poky/meta/classes-global/devshell.bbclass
new file mode 100644
index 0000000..03af56b
--- /dev/null
+++ b/poky/meta/classes-global/devshell.bbclass
@@ -0,0 +1,166 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit terminal
+
+DEVSHELL = "${SHELL}"
+
+PATH:prepend:task-devshell = "${COREBASE}/scripts/git-intercept:"
+
+python do_devshell () {
+    if d.getVarFlag("do_devshell", "manualfakeroot"):
+       d.prependVar("DEVSHELL", "pseudo ")
+       fakeenv = d.getVar("FAKEROOTENV").split()
+       for f in fakeenv:
+            k = f.split("=")
+            d.setVar(k[0], k[1])
+            d.appendVar("OE_TERMINAL_EXPORTS", " " + k[0])
+       d.delVarFlag("do_devshell", "fakeroot")
+
+    oe_terminal(d.getVar('DEVSHELL'), 'OpenEmbedded Developer Shell', d)
+}
+
+addtask devshell after do_patch do_prepare_recipe_sysroot
+
+# The directory that the terminal starts in
+DEVSHELL_STARTDIR ?= "${S}"
+do_devshell[dirs] = "${DEVSHELL_STARTDIR}"
+do_devshell[nostamp] = "1"
+do_devshell[network] = "1"
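+
+# Illustrative usage: the task is normally invoked directly, e.g.
+#   bitbake <recipe> -c devshell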
+
+# devshell and fakeroot/pseudo need careful handling since only the final
+# command should run under fakeroot emulation; any X connection should
+# be made as the normal user. We therefore carefully construct the environment
+# manually.
+python () {
+    if d.getVarFlag("do_devshell", "fakeroot"):
+       # We need to signal our code that we want fakeroot however we
+       # can't manipulate the environment and variables here yet (see YOCTO #4795)
+       d.setVarFlag("do_devshell", "manualfakeroot", "1")
+       d.delVarFlag("do_devshell", "fakeroot")
+} 
+
+def pydevshell(d):
+
+    import code
+    import select
+    import signal
+    import termios
+
+    m, s = os.openpty()
+    sname = os.ttyname(s)
+
+    def noechoicanon(fd):
+        old = termios.tcgetattr(fd)
+        old[3] = old[3] &~ termios.ECHO &~ termios.ICANON
+        # &~ termios.ISIG
+        termios.tcsetattr(fd, termios.TCSADRAIN, old)
+
+    # No echo or buffering over the pty
+    noechoicanon(s)
+
+    pid = os.fork()
+    if pid:
+        os.close(m)
+        oe_terminal("oepydevshell-internal.py %s %d" % (sname, pid), 'OpenEmbedded Developer PyShell', d)
+        os._exit(0)
+    else:
+        os.close(s)
+
+        os.dup2(m, sys.stdin.fileno())
+        os.dup2(m, sys.stdout.fileno())
+        os.dup2(m, sys.stderr.fileno())
+
+        bb.utils.nonblockingfd(sys.stdout)
+        bb.utils.nonblockingfd(sys.stderr)
+        bb.utils.nonblockingfd(sys.stdin)
+
+        _context = {
+            "os": os,
+            "bb": bb,
+            "time": time,
+            "d": d,
+        }
+
+        ps1 = "pydevshell> "
+        ps2 = "... "
+        buf = []
+        more = False
+
+        i = code.InteractiveInterpreter(locals=_context)
+        print("OE PyShell (PN = %s)\n" % d.getVar("PN"))
+
+        def prompt(more):
+            if more:
+                prompt = ps2
+            else:
+                prompt = ps1
+            sys.stdout.write(prompt)
+            sys.stdout.flush()
+
+        # Restore Ctrl+C since bitbake masks this
+        def signal_handler(signal, frame):
+            raise KeyboardInterrupt
+        signal.signal(signal.SIGINT, signal_handler)
+
+        child = None
+
+        prompt(more)
+        while True:
+            try:
+                try:
+                    (r, _, _) = select.select([sys.stdin], [], [], 1)
+                    if not r:
+                        continue
+                    line = sys.stdin.readline().strip()
+                    if not line:
+                        prompt(more)
+                        continue
+                except EOFError as e:
+                    sys.stdout.write("\n")
+                    sys.stdout.flush()
+                except (OSError, IOError) as e:
+                    if e.errno == 11:
+                        continue
+                    if e.errno == 5:
+                        return
+                    raise
+                else:
+                    if not child:
+                        child = int(line)
+                        continue
+                    buf.append(line)
+                    source = "\n".join(buf)
+                    more = i.runsource(source, "<pyshell>")
+                    if not more:
+                        buf = []
+                    sys.stderr.flush()
+                    prompt(more)
+            except KeyboardInterrupt:
+                i.write("\nKeyboardInterrupt\n")
+                buf = []
+                more = False
+                prompt(more)
+            except SystemExit:
+                # Easiest way to ensure everything exits
+                os.kill(child, signal.SIGTERM)
+                break
+
+python do_pydevshell() {
+    import signal
+
+    try:
+        pydevshell(d)
+    except SystemExit:
+        # Stop the SIGTERM above causing an error exit code
+        return
+    finally:
+        return
+}
+addtask pydevshell after do_patch
+
+do_pydevshell[nostamp] = "1"
+do_pydevshell[network] = "1"
diff --git a/poky/meta/classes-global/insane.bbclass b/poky/meta/classes-global/insane.bbclass
new file mode 100644
index 0000000..db34b4b
--- /dev/null
+++ b/poky/meta/classes-global/insane.bbclass
@@ -0,0 +1,1454 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# BB Class inspired by ebuild.sh
+#
+# This class will test files after installation for certain
+# security issues and other kind of issues.
+#
+# Checks we do:
+#  -Check the ownership and permissions
+#  -Check the RUNTIME path for the $TMPDIR
+#  -Check if .la files wrongly point to workdir
+#  -Check if .pc files wrongly point to workdir
+#  -Check if packages contain .debug directories or .so files
+#   where they should be in -dev or -dbg
+#  -Check if config.log contains traces to broken autoconf tests
+#  -Check for invalid characters (non-utf8) in some package metadata
+#  -Ensure that binaries in base_[bindir|sbindir|libdir] do not link
+#   into exec_prefix
+#  -Check that scripts in base_[bindir|sbindir|libdir] do not reference
+#   files under exec_prefix
+#  -Check if the package name is upper case
+
+# Choose whether a given type of issue is treated as a warning or an error;
+# these lists may have been set by other files (an illustrative override
+# follows the defaults below).
+WARN_QA ?= " libdir xorg-driver-abi buildpaths \
+            textrel incompatible-license files-invalid \
+            infodir build-deps src-uri-bad symlink-to-sysroot multilib \
+            invalid-packageconfig host-user-contaminated uppercase-pn patch-fuzz \
+            mime mime-xdg unlisted-pkg-lics unhandled-features-check \
+            missing-update-alternatives native-last missing-ptest \
+            license-exists license-no-generic license-syntax license-format \
+            license-incompatible license-file-missing obsolete-license \
+            "
+ERROR_QA ?= "dev-so debug-deps dev-deps debug-files arch pkgconfig la \
+            perms dep-cmp pkgvarcheck perm-config perm-line perm-link \
+            split-strip packages-list pkgv-undefined var-undefined \
+            version-going-backwards expanded-d invalid-chars \
+            license-checksum dev-elf file-rdeps configure-unsafe \
+            configure-gettext perllocalpod shebang-size \
+            already-stripped installed-vs-shipped ldflags compile-host-path \
+            install-host-path pn-overrides unknown-configure-option \
+            useless-rpaths rpaths staticdev empty-dirs \
+            "
+# Add usrmerge QA check based on distro feature
+ERROR_QA:append = "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', ' usrmerge', '', d)}"
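+
+# Illustrative configuration-side override (not set here): the severity of a
+# check can be adjusted from distro or local configuration, e.g. to promote
+# patch-fuzz from a warning to an error:
+#   WARN_QA:remove = "patch-fuzz"
+#   ERROR_QA:append = " patch-fuzz"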
+
+FAKEROOT_QA = "host-user-contaminated"
+FAKEROOT_QA[doc] = "QA tests which need to run under fakeroot. If any \
+enabled tests are listed here, the do_package_qa task will run under fakeroot."
+
+ALL_QA = "${WARN_QA} ${ERROR_QA}"
+
+UNKNOWN_CONFIGURE_OPT_IGNORE ?= "--enable-nls --disable-nls --disable-silent-rules --disable-dependency-tracking --with-libtool-sysroot --disable-static"
+
+# This is a list of directories that are expected to be empty.
+QA_EMPTY_DIRS ?= " \
+    /dev/pts \
+    /media \
+    /proc \
+    /run \
+    /tmp \
+    ${localstatedir}/run \
+    ${localstatedir}/volatile \
+"
+# It is possible to specify why a directory is expected to be empty by defining
+# QA_EMPTY_DIRS_RECOMMENDATION:<path>, which will then be included in the error
+# message if the directory is not empty. If it is not specified for a directory,
+# then "but it is expected to be empty" will be used.
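+#
+# Illustrative example (hypothetical reason text):
+#   QA_EMPTY_DIRS_RECOMMENDATION:/run = "but it is expected to be empty because it only holds volatile runtime state"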
+
+def package_qa_clean_path(path, d, pkg=None):
+    """
+    Remove redundant paths from the path for display.  If pkg isn't set then
+    TMPDIR is stripped, otherwise PKGDEST/pkg is stripped.
+    """
+    if pkg:
+        path = path.replace(os.path.join(d.getVar("PKGDEST"), pkg), "/")
+    return path.replace(d.getVar("TMPDIR"), "/").replace("//", "/")
+
+QAPATHTEST[shebang-size] = "package_qa_check_shebang_size"
+def package_qa_check_shebang_size(path, name, d, elf, messages):
+    import stat
+    if os.path.islink(path) or stat.S_ISFIFO(os.stat(path).st_mode) or elf:
+        return
+
+    try:
+        with open(path, 'rb') as f:
+            stanza = f.readline(130)
+    except IOError:
+        return
+
+    if stanza.startswith(b'#!'):
+        # A shebang line is present; check whether it exceeds the size limit
+        try:
+            stanza = stanza.decode("utf-8")
+        except UnicodeDecodeError:
+            #If it is not a text file, it is not a script
+            return
+
+        if len(stanza) > 129:
+            oe.qa.add_message(messages, "shebang-size", "%s: %s maximum shebang size exceeded, the maximum size is 128." % (name, package_qa_clean_path(path, d)))
+            return
+
+QAPATHTEST[libexec] = "package_qa_check_libexec"
+def package_qa_check_libexec(path,name, d, elf, messages):
+
+    # Skip the case where the default is explicitly /usr/libexec
+    libexec = d.getVar('libexecdir')
+    if libexec == "/usr/libexec":
+        return True
+
+    if 'libexec' in path.split(os.path.sep):
+        oe.qa.add_message(messages, "libexec", "%s: %s is using libexec please relocate to %s" % (name, package_qa_clean_path(path, d), libexec))
+        return False
+
+    return True
+
+QAPATHTEST[rpaths] = "package_qa_check_rpath"
+def package_qa_check_rpath(file,name, d, elf, messages):
+    """
+    Check for dangerous RPATHs
+    """
+    if not elf:
+        return
+
+    if os.path.islink(file):
+        return
+
+    bad_dirs = [d.getVar('BASE_WORKDIR'), d.getVar('STAGING_DIR_TARGET')]
+
+    phdrs = elf.run_objdump("-p", d)
+
+    import re
+    rpath_re = re.compile(r"\s+RPATH\s+(.*)")
+    for line in phdrs.split("\n"):
+        m = rpath_re.match(line)
+        if m:
+            rpath = m.group(1)
+            for dir in bad_dirs:
+                if dir in rpath:
+                    oe.qa.add_message(messages, "rpaths", "package %s contains bad RPATH %s in file %s" % (name, rpath, file))
+
+QAPATHTEST[useless-rpaths] = "package_qa_check_useless_rpaths"
+def package_qa_check_useless_rpaths(file, name, d, elf, messages):
+    """
+    Check for RPATHs that are useless but not dangerous
+    """
+    def rpath_eq(a, b):
+        return os.path.normpath(a) == os.path.normpath(b)
+
+    if not elf:
+        return
+
+    if os.path.islink(file):
+        return
+
+    libdir = d.getVar("libdir")
+    base_libdir = d.getVar("base_libdir")
+
+    phdrs = elf.run_objdump("-p", d)
+
+    import re
+    rpath_re = re.compile(r"\s+RPATH\s+(.*)")
+    for line in phdrs.split("\n"):
+        m = rpath_re.match(line)
+        if m:
+            rpath = m.group(1)
+            if rpath_eq(rpath, libdir) or rpath_eq(rpath, base_libdir):
+                # The dynamic linker searches both these places anyway.  There is no point in
+                # looking there again.
+                oe.qa.add_message(messages, "useless-rpaths", "%s: %s contains probably-redundant RPATH %s" % (name, package_qa_clean_path(file, d, name), rpath))
+
+QAPATHTEST[dev-so] = "package_qa_check_dev"
+def package_qa_check_dev(path, name, d, elf, messages):
+    """
+    Check for ".so" library symlinks in non-dev packages
+    """
+
+    if not name.endswith("-dev") and not name.endswith("-dbg") and not name.endswith("-ptest") and not name.startswith("nativesdk-") and path.endswith(".so") and os.path.islink(path):
+        oe.qa.add_message(messages, "dev-so", "non -dev/-dbg/nativesdk- package %s contains symlink .so '%s'" % \
+                 (name, package_qa_clean_path(path, d, name)))
+
+QAPATHTEST[dev-elf] = "package_qa_check_dev_elf"
+def package_qa_check_dev_elf(path, name, d, elf, messages):
+    """
+    Check that -dev doesn't contain real shared libraries.  The test has to
+    check that the file is not a link and is an ELF object as some recipes
+    install link-time .so files that are linker scripts.
+    """
+    if name.endswith("-dev") and path.endswith(".so") and not os.path.islink(path) and elf:
+        oe.qa.add_message(messages, "dev-elf", "-dev package %s contains non-symlink .so '%s'" % \
+                 (name, package_qa_clean_path(path, d, name)))
+
+QAPATHTEST[staticdev] = "package_qa_check_staticdev"
+def package_qa_check_staticdev(path, name, d, elf, messages):
+    """
+    Check for ".a" library in non-staticdev packages
+    There are a number of exceptions to this rule, -pic packages can contain
+    static libraries, the _nonshared.a belong with their -dev packages and
+    libgcc.a, libgcov.a will be skipped in their packages
+    """
+
+    if not name.endswith("-pic") and not name.endswith("-staticdev") and not name.endswith("-ptest") and path.endswith(".a") and not path.endswith("_nonshared.a") and not '/usr/lib/debug-static/' in path and not '/.debug-static/' in path:
+        oe.qa.add_message(messages, "staticdev", "non -staticdev package contains static .a library: %s path '%s'" % \
+                 (name, package_qa_clean_path(path,d, name)))
+
+QAPATHTEST[mime] = "package_qa_check_mime"
+def package_qa_check_mime(path, name, d, elf, messages):
+    """
+    Check if package installs mime types to /usr/share/mime/packages
+    without inheriting mime.bbclass
+    """
+
+    if d.getVar("datadir") + "/mime/packages" in path and path.endswith('.xml') and not bb.data.inherits_class("mime", d):
+        oe.qa.add_message(messages, "mime", "package contains mime types but does not inherit mime: %s path '%s'" % \
+                 (name, package_qa_clean_path(path,d)))
+
+QAPATHTEST[mime-xdg] = "package_qa_check_mime_xdg"
+def package_qa_check_mime_xdg(path, name, d, elf, messages):
+    """
+    Check if package installs desktop file containing MimeType and requires
+    mime-types.bbclass to create /usr/share/applications/mimeinfo.cache
+    """
+
+    if d.getVar("datadir") + "/applications" in path and path.endswith('.desktop') and not bb.data.inherits_class("mime-xdg", d):
+        mime_type_found = False
+        try:
+            with open(path, 'r') as f:
+                for line in f.read().split('\n'):
+                    if 'MimeType' in line:
+                        mime_type_found = True
+                        break
+        except:
+            # At least libreoffice installs symlinks with absolute paths that are dangling here.
+            # We could implement some magic, but for the few (one) affected recipes it is not worth the effort, so just warn:
+            wstr = "%s cannot open %s - is it a symlink with absolute path?\n" % (name, package_qa_clean_path(path,d))
+            wstr += "Please check if (linked) file contains key 'MimeType'.\n"
+            pkgname = name
+            if name == d.getVar('PN'):
+                pkgname = '${PN}'
+            wstr += "If yes: add \'inhert mime-xdg\' and \'MIME_XDG_PACKAGES += \"%s\"\' / if no add \'INSANE_SKIP:%s += \"mime-xdg\"\' to recipe." % (pkgname, pkgname)
+            oe.qa.add_message(messages, "mime-xdg", wstr)
+        if mime_type_found:
+            oe.qa.add_message(messages, "mime-xdg", "package contains desktop file with key 'MimeType' but does not inhert mime-xdg: %s path '%s'" % \
+                    (name, package_qa_clean_path(path,d)))
+
+def package_qa_check_libdir(d):
+    """
+    Check for wrong library installation paths. For instance, catch
+    recipes installing /lib/bar.so when ${base_libdir}="lib32" or
+    installing in /usr/lib64 when ${libdir}="/usr/lib"
+    """
+    import re
+
+    pkgdest = d.getVar('PKGDEST')
+    base_libdir = d.getVar("base_libdir") + os.sep
+    libdir = d.getVar("libdir") + os.sep
+    libexecdir = d.getVar("libexecdir") + os.sep
+    exec_prefix = d.getVar("exec_prefix") + os.sep
+
+    messages = []
+
+    # The re's are purposely fuzzy, as there are some .so.x.y.z files
+    # that don't follow the standard naming convention. The code checks later
+    # that they are actual ELF files
+    lib_re = re.compile(r"^/lib.+\.so(\..+)?$")
+    exec_re = re.compile(r"^%s.*/lib.+\.so(\..+)?$" % exec_prefix)
+
+    for root, dirs, files in os.walk(pkgdest):
+        if root == pkgdest:
+            # Skip subdirectories for any packages with libdir in INSANE_SKIP
+            skippackages = []
+            for package in dirs:
+                if 'libdir' in (d.getVar('INSANE_SKIP:' + package) or "").split():
+                    bb.note("Package %s skipping libdir QA test" % (package))
+                    skippackages.append(package)
+                elif d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-file-directory' and package.endswith("-dbg"):
+                    bb.note("Package %s skipping libdir QA test for PACKAGE_DEBUG_SPLIT_STYLE equals debug-file-directory" % (package))
+                    skippackages.append(package)
+            for package in skippackages:
+                dirs.remove(package)
+        for file in files:
+            full_path = os.path.join(root, file)
+            rel_path = os.path.relpath(full_path, pkgdest)
+            if os.sep in rel_path:
+                package, rel_path = rel_path.split(os.sep, 1)
+                rel_path = os.sep + rel_path
+                if lib_re.match(rel_path):
+                    if base_libdir not in rel_path:
+                        # make sure it's an actual ELF file
+                        elf = oe.qa.ELFFile(full_path)
+                        try:
+                            elf.open()
+                            messages.append("%s: found library in wrong location: %s" % (package, rel_path))
+                        except (oe.qa.NotELFFileError):
+                            pass
+                if exec_re.match(rel_path):
+                    if libdir not in rel_path and libexecdir not in rel_path:
+                        # make sure it's an actual ELF file
+                        elf = oe.qa.ELFFile(full_path)
+                        try:
+                            elf.open()
+                            messages.append("%s: found library in wrong location: %s" % (package, rel_path))
+                        except (oe.qa.NotELFFileError):
+                            pass
+
+    if messages:
+        oe.qa.handle_error("libdir", "\n".join(messages), d)
+
+QAPATHTEST[debug-files] = "package_qa_check_dbg"
+def package_qa_check_dbg(path, name, d, elf, messages):
+    """
+    Check for ".debug" files or directories outside of the dbg package
+    """
+
+    if not "-dbg" in name and not "-ptest" in name:
+        if '.debug' in path.split(os.path.sep):
+            oe.qa.add_message(messages, "debug-files", "non debug package contains .debug directory: %s path %s" % \
+                     (name, package_qa_clean_path(path,d)))
+
+QAPATHTEST[arch] = "package_qa_check_arch"
+def package_qa_check_arch(path,name,d, elf, messages):
+    """
+    Check if archs are compatible
+    """
+    import re, oe.elf
+
+    if not elf:
+        return
+
+    target_os   = d.getVar('HOST_OS')
+    target_arch = d.getVar('HOST_ARCH')
+    provides = d.getVar('PROVIDES')
+    bpn = d.getVar('BPN')
+
+    if target_arch == "allarch":
+        pn = d.getVar('PN')
+        oe.qa.add_message(messages, "arch", pn + ": Recipe inherits the allarch class, but has packaged architecture-specific binaries")
+        return
+
+    # FIXME: Cross packages confuse this check, so just skip them
+    for s in ['cross', 'nativesdk', 'cross-canadian']:
+        if bb.data.inherits_class(s, d):
+            return
+
+    # avoid following links to /usr/bin (e.g. on udev builds)
+    # we will check the files pointed to anyway...
+    if os.path.islink(path):
+        return
+
+    # If this throws an exception, the machine dict in oe.elf needs fixing
+    (machine, osabi, abiversion, littleendian, bits) \
+        = oe.elf.machine_dict(d)[target_os][target_arch]
+
+    # Check the architecture and endianness of the binary
+    is_32 = (("virtual/kernel" in provides) or bb.data.inherits_class("module", d)) and \
+            (target_os == "linux-gnux32" or target_os == "linux-muslx32" or \
+            target_os == "linux-gnu_ilp32" or re.match(r'mips64.*32', d.getVar('DEFAULTTUNE')))
+    is_bpf = (oe.qa.elf_machine_to_string(elf.machine()) == "BPF")
+    if not ((machine == elf.machine()) or is_32 or is_bpf):
+        oe.qa.add_message(messages, "arch", "Architecture did not match (%s, expected %s) in %s" % \
+                 (oe.qa.elf_machine_to_string(elf.machine()), oe.qa.elf_machine_to_string(machine), package_qa_clean_path(path, d, name)))
+    elif not ((bits == elf.abiSize()) or is_32 or is_bpf):
+        oe.qa.add_message(messages, "arch", "Bit size did not match (%d, expected %d) in %s" % \
+                 (elf.abiSize(), bits, package_qa_clean_path(path, d, name)))
+    elif not ((littleendian == elf.isLittleEndian()) or is_bpf):
+        oe.qa.add_message(messages, "arch", "Endiannes did not match (%d, expected %d) in %s" % \
+                 (elf.isLittleEndian(), littleendian, package_qa_clean_path(path,d, name)))
+
+QAPATHTEST[desktop] = "package_qa_check_desktop"
+def package_qa_check_desktop(path, name, d, elf, messages):
+    """
+    Run all desktop files through desktop-file-validate.
+    """
+    if path.endswith(".desktop"):
+        desktop_file_validate = os.path.join(d.getVar('STAGING_BINDIR_NATIVE'),'desktop-file-validate')
+        output = os.popen("%s %s" % (desktop_file_validate, path))
+        # This only produces output on errors
+        for l in output:
+            oe.qa.add_message(messages, "desktop", "Desktop file issue: " + l.strip())
+
+QAPATHTEST[textrel] = "package_qa_textrel"
+def package_qa_textrel(path, name, d, elf, messages):
+    """
+    Check if the binary contains relocations in .text
+    """
+
+    if not elf:
+        return
+
+    if os.path.islink(path):
+        return
+
+    phdrs = elf.run_objdump("-p", d)
+    sane = True
+
+    import re
+    textrel_re = re.compile(r"\s+TEXTREL\s+")
+    for line in phdrs.split("\n"):
+        if textrel_re.match(line):
+            sane = False
+            break
+
+    if not sane:
+        path = package_qa_clean_path(path, d, name)
+        oe.qa.add_message(messages, "textrel", "%s: ELF binary %s has relocations in .text" % (name, path))
+
+QAPATHTEST[ldflags] = "package_qa_hash_style"
+def package_qa_hash_style(path, name, d, elf, messages):
+    """
+    Check if the binary has the right hash style...
+    """
+
+    if not elf:
+        return
+
+    if os.path.islink(path):
+        return
+
+    gnu_hash = "--hash-style=gnu" in d.getVar('LDFLAGS')
+    if not gnu_hash:
+        gnu_hash = "--hash-style=both" in d.getVar('LDFLAGS')
+    if not gnu_hash:
+        return
+
+    sane = False
+    has_syms = False
+
+    phdrs = elf.run_objdump("-p", d)
+
+    # If this binary has symbols, we expect it to have GNU_HASH too.
+    for line in phdrs.split("\n"):
+        if "SYMTAB" in line:
+            has_syms = True
+        if "GNU_HASH" in line or "MIPS_XHASH" in line:
+            sane = True
+        if ("[mips32]" in line or "[mips64]" in line) and d.getVar('TCLIBC') == "musl":
+            sane = True
+    if has_syms and not sane:
+        path = package_qa_clean_path(path, d, name)
+        oe.qa.add_message(messages, "ldflags", "File %s in package %s doesn't have GNU_HASH (didn't pass LDFLAGS?)" % (path, name))
+
+
+QAPATHTEST[buildpaths] = "package_qa_check_buildpaths"
+def package_qa_check_buildpaths(path, name, d, elf, messages):
+    """
+    Check for build paths inside target files and error if paths are not
+    explicitly ignored.
+    """
+    import stat
+
+    # Ignore symlinks/devs/fifos
+    mode = os.lstat(path).st_mode
+    if stat.S_ISLNK(mode) or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode) or stat.S_ISCHR(mode) or stat.S_ISSOCK(mode):
+        return
+
+    tmpdir = bytes(d.getVar('TMPDIR'), encoding="utf-8")
+    with open(path, 'rb') as f:
+        file_content = f.read()
+        if tmpdir in file_content:
+            trimmed = path.replace(os.path.join (d.getVar("PKGDEST"), name), "")
+            oe.qa.add_message(messages, "buildpaths", "File %s in package %s contains reference to TMPDIR" % (trimmed, name))
+
+
+QAPATHTEST[xorg-driver-abi] = "package_qa_check_xorg_driver_abi"
+def package_qa_check_xorg_driver_abi(path, name, d, elf, messages):
+    """
+    Check that all packages containing Xorg drivers have ABI dependencies
+    """
+
+    # Skip dev, dbg or nativesdk packages
+    if name.endswith("-dev") or name.endswith("-dbg") or name.startswith("nativesdk-"):
+        return
+
+    driverdir = d.expand("${libdir}/xorg/modules/drivers/")
+    if driverdir in path and path.endswith(".so"):
+        mlprefix = d.getVar('MLPREFIX') or ''
+        for rdep in bb.utils.explode_deps(d.getVar('RDEPENDS:' + name) or ""):
+            if rdep.startswith("%sxorg-abi-" % mlprefix):
+                return
+        oe.qa.add_message(messages, "xorg-driver-abi", "Package %s contains Xorg driver (%s) but no xorg-abi- dependencies" % (name, os.path.basename(path)))
+
+QAPATHTEST[infodir] = "package_qa_check_infodir"
+def package_qa_check_infodir(path, name, d, elf, messages):
+    """
+    Check that /usr/share/info/dir isn't shipped in a particular package
+    """
+    infodir = d.expand("${infodir}/dir")
+
+    if infodir in path:
+        oe.qa.add_message(messages, "infodir", "The /usr/share/info/dir file is not meant to be shipped in a particular package.")
+
+QAPATHTEST[symlink-to-sysroot] = "package_qa_check_symlink_to_sysroot"
+def package_qa_check_symlink_to_sysroot(path, name, d, elf, messages):
+    """
+    Check that the package doesn't contain any absolute symlinks to the sysroot.
+    """
+    if os.path.islink(path):
+        target = os.readlink(path)
+        if os.path.isabs(target):
+            tmpdir = d.getVar('TMPDIR')
+            if target.startswith(tmpdir):
+                trimmed = path.replace(os.path.join (d.getVar("PKGDEST"), name), "")
+                oe.qa.add_message(messages, "symlink-to-sysroot", "Symlink %s in %s points to TMPDIR" % (trimmed, name))
+
+# Check license variables
+do_populate_lic[postfuncs] += "populate_lic_qa_checksum"
+python populate_lic_qa_checksum() {
+    """
+    Check for changes in the license files.
+    """
+
+    lic_files = d.getVar('LIC_FILES_CHKSUM') or ''
+    lic = d.getVar('LICENSE')
+    pn = d.getVar('PN')
+
+    if lic == "CLOSED":
+        return
+
+    if not lic_files and d.getVar('SRC_URI'):
+        oe.qa.handle_error("license-checksum", pn + ": Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM)", d)
+
+    srcdir = d.getVar('S')
+    corebase_licensefile = d.getVar('COREBASE') + "/LICENSE"
+    for url in lic_files.split():
+        try:
+            (type, host, path, user, pswd, parm) = bb.fetch.decodeurl(url)
+        except bb.fetch.MalformedUrl:
+            oe.qa.handle_error("license-checksum", pn + ": LIC_FILES_CHKSUM contains an invalid URL: " + url, d)
+            continue
+        srclicfile = os.path.join(srcdir, path)
+        if not os.path.isfile(srclicfile):
+            oe.qa.handle_error("license-checksum", pn + ": LIC_FILES_CHKSUM points to an invalid file: " + srclicfile, d)
+            continue
+
+        if (srclicfile == corebase_licensefile):
+            bb.warn("${COREBASE}/LICENSE is not a valid license file, please use '${COMMON_LICENSE_DIR}/MIT' for a MIT License file in LIC_FILES_CHKSUM. This will become an error in the future")
+
+        recipemd5 = parm.get('md5', '')
+        beginline, endline = 0, 0
+        if 'beginline' in parm:
+            beginline = int(parm['beginline'])
+        if 'endline' in parm:
+            endline = int(parm['endline'])
+
+        if (not beginline) and (not endline):
+            md5chksum = bb.utils.md5_file(srclicfile)
+            with open(srclicfile, 'r', errors='replace') as f:
+                license = f.read().splitlines()
+        else:
+            with open(srclicfile, 'rb') as f:
+                import hashlib
+                lineno = 0
+                license = []
+                m = hashlib.new('MD5', usedforsecurity=False)
+                for line in f:
+                    lineno += 1
+                    if (lineno >= beginline):
+                        if ((lineno <= endline) or not endline):
+                            m.update(line)
+                            license.append(line.decode('utf-8', errors='replace').rstrip())
+                        else:
+                            break
+                md5chksum = m.hexdigest()
+        if recipemd5 == md5chksum:
+            bb.note (pn + ": md5 checksum matched for ", url)
+        else:
+            if recipemd5:
+                msg = pn + ": The LIC_FILES_CHKSUM does not match for " + url
+                msg = msg + "\n" + pn + ": The new md5 checksum is " + md5chksum
+                max_lines = int(d.getVar('QA_MAX_LICENSE_LINES') or 20)
+                if not license or license[-1] != '':
+                    # Ensure that our license text ends with a line break
+                    # (will be added with join() below).
+                    license.append('')
+                remove = len(license) - max_lines
+                if remove > 0:
+                    start = max_lines // 2
+                    end = start + remove - 1
+                    del license[start:end]
+                    license.insert(start, '...')
+                msg = msg + "\n" + pn + ": Here is the selected license text:" + \
+                        "\n" + \
+                        "{:v^70}".format(" beginline=%d " % beginline if beginline else "") + \
+                        "\n" + "\n".join(license) + \
+                        "{:^^70}".format(" endline=%d " % endline if endline else "")
+                if beginline:
+                    if endline:
+                        srcfiledesc = "%s (lines %d through to %d)" % (srclicfile, beginline, endline)
+                    else:
+                        srcfiledesc = "%s (beginning on line %d)" % (srclicfile, beginline)
+                elif endline:
+                    srcfiledesc = "%s (ending on line %d)" % (srclicfile, endline)
+                else:
+                    srcfiledesc = srclicfile
+                msg = msg + "\n" + pn + ": Check if the license information has changed in %s to verify that the LICENSE value \"%s\" remains valid" % (srcfiledesc, lic)
+
+            else:
+                msg = pn + ": LIC_FILES_CHKSUM is not specified for " +  url
+                msg = msg + "\n" + pn + ": The md5 checksum is " + md5chksum
+            oe.qa.handle_error("license-checksum", msg, d)
+
+    oe.qa.exit_if_errors(d)
+}
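+
+# The md5/beginline/endline parameters handled above come from the
+# LIC_FILES_CHKSUM entries in a recipe; illustrative examples (file names and
+# checksum values are placeholders, not real data):
+#
+#   LIC_FILES_CHKSUM = "file://COPYING;md5=<md5 of COPYING>"
+#   LIC_FILES_CHKSUM = "file://src/main.c;beginline=3;endline=21;md5=<md5 of those lines>"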
+
+def qa_check_staged(path,d):
+    """
+    Check staged la and pc files for common problems like references to the work
+    directory.
+
+    As this is run after every stage we should be able to find the one
+    responsible for the errors easily even if we look at every .pc and .la file.
+    """
+
+    tmpdir = d.getVar('TMPDIR')
+    workdir = os.path.join(tmpdir, "work")
+    recipesysroot = d.getVar("RECIPE_SYSROOT")
+
+    if bb.data.inherits_class("native", d) or bb.data.inherits_class("cross", d):
+        pkgconfigcheck = workdir
+    else:
+        pkgconfigcheck = tmpdir
+
+    skip = (d.getVar('INSANE_SKIP') or "").split()
+    skip_la = False
+    if 'la' in skip:
+        bb.note("Recipe %s skipping qa checking: la" % d.getVar('PN'))
+        skip_la = True
+
+    skip_pkgconfig = False
+    if 'pkgconfig' in skip:
+        bb.note("Recipe %s skipping qa checking: pkgconfig" % d.getVar('PN'))
+        skip_pkgconfig = True
+
+    skip_shebang_size = False
+    if 'shebang-size' in skip:
+        bb.note("Recipe %s skipping qa checkking: shebang-size" % d.getVar('PN'))
+        skip_shebang_size = True
+
+    # find all .la and .pc files
+    # read the content
+    # and check for stuff that looks wrong
+    for root, dirs, files in os.walk(path):
+        for file in files:
+            path = os.path.join(root,file)
+            if file.endswith(".la") and not skip_la:
+                with open(path) as f:
+                    file_content = f.read()
+                    file_content = file_content.replace(recipesysroot, "")
+                    if workdir in file_content:
+                        error_msg = "%s failed sanity test (workdir) in path %s" % (file,root)
+                        oe.qa.handle_error("la", error_msg, d)
+            elif file.endswith(".pc") and not skip_pkgconfig:
+                with open(path) as f:
+                    file_content = f.read()
+                    file_content = file_content.replace(recipesysroot, "")
+                    if pkgconfigcheck in file_content:
+                        error_msg = "%s failed sanity test (tmpdir) in path %s" % (file,root)
+                        oe.qa.handle_error("pkgconfig", error_msg, d)
+
+            if not skip_shebang_size:
+                errors = {}
+                package_qa_check_shebang_size(path, "", d, None, errors)
+                for e in errors:
+                    oe.qa.handle_error(e, errors[e], d)
+
+
+# Run all package-wide warnfuncs and errorfuncs
+def package_qa_package(warnfuncs, errorfuncs, package, d):
+    warnings = {}
+    errors = {}
+
+    for func in warnfuncs:
+        func(package, d, warnings)
+    for func in errorfuncs:
+        func(package, d, errors)
+
+    for w in warnings:
+        oe.qa.handle_error(w, warnings[w], d)
+    for e in errors:
+        oe.qa.handle_error(e, errors[e], d)
+
+    return len(errors) == 0
+
+# Run all recipe-wide warnfuncs and errorfuncs
+def package_qa_recipe(warnfuncs, errorfuncs, pn, d):
+    warnings = {}
+    errors = {}
+
+    for func in warnfuncs:
+        func(pn, d, warnings)
+    for func in errorfuncs:
+        func(pn, d, errors)
+
+    for w in warnings:
+        oe.qa.handle_error(w, warnings[w], d)
+    for e in errors:
+        oe.qa.handle_error(e, errors[e], d)
+
+    return len(errors) == 0
+
+def prepopulate_objdump_p(elf, d):
+    output = elf.run_objdump("-p", d)
+    return (elf.name, output)
+
+# Walk over all files in a directory and call func
+def package_qa_walk(warnfuncs, errorfuncs, package, d):
+    #if this will throw an exception, then fix the dict above
+    target_os   = d.getVar('HOST_OS')
+    target_arch = d.getVar('HOST_ARCH')
+
+    warnings = {}
+    errors = {}
+    elves = {}
+    for path in pkgfiles[package]:
+            elf = None
+            if os.path.isfile(path):
+                elf = oe.qa.ELFFile(path)
+                try:
+                    elf.open()
+                    elf.close()
+                except oe.qa.NotELFFileError:
+                    elf = None
+            if elf:
+                elves[path] = elf
+
+    results = oe.utils.multiprocess_launch(prepopulate_objdump_p, elves.values(), d, extraargs=(d,))
+    for item in results:
+        elves[item[0]].set_objdump("-p", item[1])
+
+    for path in pkgfiles[package]:
+            if path in elves:
+                elves[path].open()
+            for func in warnfuncs:
+                func(path, package, d, elves.get(path), warnings)
+            for func in errorfuncs:
+                func(path, package, d, elves.get(path), errors)
+            if path in elves:
+                elves[path].close()
+
+    for w in warnings:
+        oe.qa.handle_error(w, warnings[w], d)
+    for e in errors:
+        oe.qa.handle_error(e, errors[e], d)
+
+def package_qa_check_rdepends(pkg, pkgdest, skip, taskdeps, packages, d):
+    # Don't do this check for kernel/module recipes, there aren't too many debug/development
+    # packages and you can get false positives e.g. on kernel-module-lirc-dev
+    if bb.data.inherits_class("kernel", d) or bb.data.inherits_class("module-base", d):
+        return
+
+    if not "-dbg" in pkg and not "packagegroup-" in pkg and not "-image" in pkg:
+        localdata = bb.data.createCopy(d)
+        localdata.setVar('OVERRIDES', localdata.getVar('OVERRIDES') + ':' + pkg)
+
+        # Now check the RDEPENDS
+        rdepends = bb.utils.explode_deps(localdata.getVar('RDEPENDS') or "")
+
+        # Now do the sanity check!!!
+        if "build-deps" not in skip:
+            for rdepend in rdepends:
+                if "-dbg" in rdepend and "debug-deps" not in skip:
+                    error_msg = "%s rdepends on %s" % (pkg,rdepend)
+                    oe.qa.handle_error("debug-deps", error_msg, d)
+                if (not "-dev" in pkg and not "-staticdev" in pkg) and rdepend.endswith("-dev") and "dev-deps" not in skip:
+                    error_msg = "%s rdepends on %s" % (pkg, rdepend)
+                    oe.qa.handle_error("dev-deps", error_msg, d)
+                if rdepend not in packages:
+                    rdep_data = oe.packagedata.read_subpkgdata(rdepend, d)
+                    if rdep_data and 'PN' in rdep_data and rdep_data['PN'] in taskdeps:
+                        continue
+                    if not rdep_data or not 'PN' in rdep_data:
+                        pkgdata_dir = d.getVar("PKGDATA_DIR")
+                        try:
+                            possibles = os.listdir("%s/runtime-rprovides/%s/" % (pkgdata_dir, rdepend))
+                        except OSError:
+                            possibles = []
+                        for p in possibles:
+                            rdep_data = oe.packagedata.read_subpkgdata(p, d)
+                            if rdep_data and 'PN' in rdep_data and rdep_data['PN'] in taskdeps:
+                                break
+                    if rdep_data and 'PN' in rdep_data and rdep_data['PN'] in taskdeps:
+                        continue
+                    if rdep_data and 'PN' in rdep_data:
+                        error_msg = "%s rdepends on %s, but it isn't a build dependency, missing %s in DEPENDS or PACKAGECONFIG?" % (pkg, rdepend, rdep_data['PN'])
+                    else:
+                        error_msg = "%s rdepends on %s, but it isn't a build dependency?" % (pkg, rdepend)
+                    oe.qa.handle_error("build-deps", error_msg, d)
+
+        if "file-rdeps" not in skip:
+            ignored_file_rdeps = set(['/bin/sh', '/usr/bin/env', 'rtld(GNU_HASH)'])
+            if bb.data.inherits_class('nativesdk', d):
+                ignored_file_rdeps |= set(['/bin/bash', '/usr/bin/perl', 'perl'])
+            # For Saving the FILERDEPENDS
+            filerdepends = {}
+            rdep_data = oe.packagedata.read_subpkgdata(pkg, d)
+            for key in rdep_data:
+                if key.startswith("FILERDEPENDS:"):
+                    for subkey in bb.utils.explode_deps(rdep_data[key]):
+                        if subkey not in ignored_file_rdeps and \
+                                not subkey.startswith('perl('):
+                            # We already know it starts with FILERDEPENDS:
+                            filerdepends[subkey] = key[13:]
+
+            if filerdepends:
+                done = rdepends[:]
+                # Add the rprovides of itself
+                if pkg not in done:
+                    done.insert(0, pkg)
+
+                # "python" itself is not a package, but python-core provides it, so
+                # skip checking /usr/bin/python if python is in the rdeps, in
+                # case the recipe has RDEPENDS:pkg = "python".
+                for py in [ d.getVar('MLPREFIX') + "python", "python" ]:
+                    if py in done:
+                        filerdepends.pop("/usr/bin/python",None)
+                        done.remove(py)
+                for rdep in done:
+                    # The file dependencies may contain package names, e.g.,
+                    # perl
+                    filerdepends.pop(rdep,None)
+
+                    # For Saving the FILERPROVIDES, RPROVIDES and FILES_INFO
+                    rdep_data = oe.packagedata.read_subpkgdata(rdep, d)
+                    for key in rdep_data:
+                        if key.startswith("FILERPROVIDES:") or key.startswith("RPROVIDES:"):
+                            for subkey in bb.utils.explode_deps(rdep_data[key]):
+                                filerdepends.pop(subkey,None)
+                        # Add the files list to the rprovides
+                        if key.startswith("FILES_INFO:"):
+                            # Use eval() to turn it into a dict
+                            for subkey in eval(rdep_data[key]):
+                                filerdepends.pop(subkey,None)
+                    if not filerdepends:
+                        # Break if all the file rdepends are met
+                        break
+            if filerdepends:
+                for key in filerdepends:
+                    error_msg = "%s contained in package %s requires %s, but no providers found in RDEPENDS:%s?" % \
+                            (filerdepends[key].replace(":%s" % pkg, "").replace("@underscore@", "_"), pkg, key, pkg)
+                    oe.qa.handle_error("file-rdeps", error_msg, d)
+package_qa_check_rdepends[vardepsexclude] = "OVERRIDES"
+
+def package_qa_check_deps(pkg, pkgdest, d):
+
+    localdata = bb.data.createCopy(d)
+    localdata.setVar('OVERRIDES', pkg)
+
+    def check_valid_deps(var):
+        try:
+            rvar = bb.utils.explode_dep_versions2(localdata.getVar(var) or "")
+        except ValueError as e:
+            bb.fatal("%s:%s: %s" % (var, pkg, e))
+        for dep in rvar:
+            for v in rvar[dep]:
+                if v and not v.startswith(('< ', '= ', '> ', '<= ', '>=')):
+                    error_msg = "%s:%s is invalid: %s (%s)   only comparisons <, =, >, <=, and >= are allowed" % (var, pkg, dep, v)
+                    oe.qa.handle_error("dep-cmp", error_msg, d)
+
+    check_valid_deps('RDEPENDS')
+    check_valid_deps('RRECOMMENDS')
+    check_valid_deps('RSUGGESTS')
+    check_valid_deps('RPROVIDES')
+    check_valid_deps('RREPLACES')
+    check_valid_deps('RCONFLICTS')
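+
+# For reference, version constraints that pass this check use one of the
+# comparisons listed above; package name and version are hypothetical:
+#
+#   RDEPENDS:${PN} = "libfoo (>= 1.2)"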
+
+QAPKGTEST[usrmerge] = "package_qa_check_usrmerge"
+def package_qa_check_usrmerge(pkg, d, messages):
+
+    pkgdest = d.getVar('PKGDEST')
+    pkg_dir = pkgdest + os.sep + pkg + os.sep
+    merged_dirs = ['bin', 'sbin', 'lib'] + d.getVar('MULTILIB_VARIANTS').split()
+    for f in merged_dirs:
+        if os.path.exists(pkg_dir + f) and not os.path.islink(pkg_dir + f):
+            msg = "%s package is not obeying usrmerge distro feature. /%s should be relocated to /usr." % (pkg, f)
+            oe.qa.add_message(messages, "usrmerge", msg)
+            return False
+    return True
+
+QAPKGTEST[perllocalpod] = "package_qa_check_perllocalpod"
+def package_qa_check_perllocalpod(pkg, d, messages):
+    """
+    Check that the recipe didn't ship a perllocal.pod file, which shouldn't be
+    installed in a distribution package.  cpan.bbclass sets NO_PERLLOCAL=1 to
+    handle this for most recipes.
+    """
+    import glob
+    pkgd = oe.path.join(d.getVar('PKGDEST'), pkg)
+    podpath = oe.path.join(pkgd, d.getVar("libdir"), "perl*", "*", "*", "perllocal.pod")
+
+    matches = glob.glob(podpath)
+    if matches:
+        matches = [package_qa_clean_path(path, d, pkg) for path in matches]
+        msg = "%s contains perllocal.pod (%s), should not be installed" % (pkg, " ".join(matches))
+        oe.qa.add_message(messages, "perllocalpod", msg)
+
+QAPKGTEST[expanded-d] = "package_qa_check_expanded_d"
+def package_qa_check_expanded_d(package, d, messages):
+    """
+    Check for the expanded D (${D}) value in pkg_* and FILES
+    variables, warn the user to use it correctly.
+    """
+    sane = True
+    expanded_d = d.getVar('D')
+
+    for var in 'FILES','pkg_preinst', 'pkg_postinst', 'pkg_prerm', 'pkg_postrm':
+        bbvar = d.getVar(var + ":" + package) or ""
+        if expanded_d in bbvar:
+            if var == 'FILES':
+                oe.qa.add_message(messages, "expanded-d", "FILES in %s recipe should not contain the ${D} variable as it references the local build directory not the target filesystem, best solution is to remove the ${D} reference" % package)
+                sane = False
+            else:
+                oe.qa.add_message(messages, "expanded-d", "%s in %s recipe contains ${D}, it should be replaced by $D instead" % (var, package))
+                sane = False
+    return sane
+
+QAPKGTEST[unlisted-pkg-lics] = "package_qa_check_unlisted_pkg_lics"
+def package_qa_check_unlisted_pkg_lics(package, d, messages):
+    """
+    Check that all licenses for a package are among the licenses for the recipe.
+    """
+    pkg_lics = d.getVar('LICENSE:' + package)
+    if not pkg_lics:
+        return True
+
+    recipe_lics_set = oe.license.list_licenses(d.getVar('LICENSE'))
+    package_lics = oe.license.list_licenses(pkg_lics)
+    unlisted = package_lics - recipe_lics_set
+    if unlisted:
+        oe.qa.add_message(messages, "unlisted-pkg-lics",
+                               "LICENSE:%s includes licenses (%s) that are not "
+                               "listed in LICENSE" % (package, ' '.join(unlisted)))
+        return False
+    obsolete = set(oe.license.obsolete_license_list()) & package_lics - recipe_lics_set
+    if obsolete:
+        oe.qa.add_message(messages, "obsolete-license",
+                               "LICENSE:%s includes obsolete licenses %s" % (package, ' '.join(obsolete)))
+        return False
+    return True
+
+QAPKGTEST[empty-dirs] = "package_qa_check_empty_dirs"
+def package_qa_check_empty_dirs(pkg, d, messages):
+    """
+    Check for the existence of files in directories that are expected to be
+    empty.
+    """
+
+    pkgd = oe.path.join(d.getVar('PKGDEST'), pkg)
+    for dir in (d.getVar('QA_EMPTY_DIRS') or "").split():
+        empty_dir = oe.path.join(pkgd, dir)
+        if os.path.exists(empty_dir) and os.listdir(empty_dir):
+            recommendation = (d.getVar('QA_EMPTY_DIRS_RECOMMENDATION:' + dir) or
+                              "but it is expected to be empty")
+            msg = "%s installs files in %s, %s" % (pkg, dir, recommendation)
+            oe.qa.add_message(messages, "empty-dirs", msg)
+
+def package_qa_check_encoding(keys, encode, d):
+    def check_encoding(key, enc):
+        sane = True
+        value = d.getVar(key)
+        if value:
+            try:
+                s = value.encode(enc)
+            except UnicodeError as e:
+                error_msg = "%s has non %s characters" % (key,enc)
+                sane = False
+                oe.qa.handle_error("invalid-chars", error_msg, d)
+        return sane
+
+    for key in keys:
+        sane = check_encoding(key, encode)
+        if not sane:
+            break
+
+HOST_USER_UID := "${@os.getuid()}"
+HOST_USER_GID := "${@os.getgid()}"
+
+QAPATHTEST[host-user-contaminated] = "package_qa_check_host_user"
+def package_qa_check_host_user(path, name, d, elf, messages):
+    """Check for paths outside of /home which are owned by the user running bitbake."""
+
+    if not os.path.lexists(path):
+        return
+
+    dest = d.getVar('PKGDEST')
+    pn = d.getVar('PN')
+    home = os.path.join(dest, name, 'home')
+    if path == home or path.startswith(home + os.sep):
+        return
+
+    try:
+        stat = os.lstat(path)
+    except OSError as exc:
+        import errno
+        if exc.errno != errno.ENOENT:
+            raise
+    else:
+        check_uid = int(d.getVar('HOST_USER_UID'))
+        if stat.st_uid == check_uid:
+            oe.qa.add_message(messages, "host-user-contaminated", "%s: %s is owned by uid %d, which is the same as the user running bitbake. This may be due to host contamination" % (pn, package_qa_clean_path(path, d, name), check_uid))
+            return False
+
+        check_gid = int(d.getVar('HOST_USER_GID'))
+        if stat.st_gid == check_gid:
+            oe.qa.add_message(messages, "host-user-contaminated", "%s: %s is owned by gid %d, which is the same as the user running bitbake. This may be due to host contamination" % (pn, package_qa_clean_path(path, d, name), check_gid))
+            return False
+    return True
+
+QARECIPETEST[unhandled-features-check] = "package_qa_check_unhandled_features_check"
+def package_qa_check_unhandled_features_check(pn, d, messages):
+    if not bb.data.inherits_class('features_check', d):
+        var_set = False
+        for kind in ['DISTRO', 'MACHINE', 'COMBINED']:
+            for var in ['ANY_OF_' + kind + '_FEATURES', 'REQUIRED_' + kind + '_FEATURES', 'CONFLICT_' + kind + '_FEATURES']:
+                if d.getVar(var) is not None or d.hasOverrides(var):
+                    var_set = True
+        if var_set:
+            oe.qa.handle_error("unhandled-features-check", "%s: recipe doesn't inherit features_check" % pn, d)
+
+QARECIPETEST[missing-update-alternatives] = "package_qa_check_missing_update_alternatives"
+def package_qa_check_missing_update_alternatives(pn, d, messages):
+    # Look at all packages and find out if any of them sets the ALTERNATIVE variable
+    # without inheriting the update-alternatives class
+    for pkg in (d.getVar('PACKAGES') or '').split():
+        if d.getVar('ALTERNATIVE:%s' % pkg) and not bb.data.inherits_class('update-alternatives', d):
+            oe.qa.handle_error("missing-update-alternatives", "%s: recipe defines ALTERNATIVE:%s but doesn't inherit update-alternatives. This might fail during do_rootfs later!" % (pn, pkg), d)
+
+# The PACKAGE FUNC to scan each package
+python do_package_qa () {
+    import subprocess
+    import oe.packagedata
+
+    bb.note("DO PACKAGE QA")
+
+    main_lic = d.getVar('LICENSE')
+
+    # Check for obsolete license references in main LICENSE (packages are checked below for any changes)
+    main_licenses = oe.license.list_licenses(d.getVar('LICENSE'))
+    obsolete = set(oe.license.obsolete_license_list()) & main_licenses
+    if obsolete:
+        oe.qa.handle_error("obsolete-license", "Recipe LICENSE includes obsolete licenses %s" % ' '.join(obsolete), d)
+
+    bb.build.exec_func("read_subpackage_metadata", d)
+
+    # Check for non-UTF-8 characters in the recipe's metadata
+    package_qa_check_encoding(['DESCRIPTION', 'SUMMARY', 'LICENSE', 'SECTION'], 'utf-8', d)
+
+    logdir = d.getVar('T')
+    pn = d.getVar('PN')
+
+    # Scan the packages...
+    pkgdest = d.getVar('PKGDEST')
+    packages = set((d.getVar('PACKAGES') or '').split())
+
+    global pkgfiles
+    pkgfiles = {}
+    for pkg in packages:
+        pkgfiles[pkg] = []
+        pkgdir = os.path.join(pkgdest, pkg)
+        for walkroot, dirs, files in os.walk(pkgdir):
+            # Don't walk into top-level CONTROL or DEBIAN directories as these
+            # are temporary directories created by do_package.
+            if walkroot == pkgdir:
+                for control in ("CONTROL", "DEBIAN"):
+                    if control in dirs:
+                        dirs.remove(control)
+            for file in files:
+                pkgfiles[pkg].append(os.path.join(walkroot, file))
+
+    # If there are no packages, there is nothing to scan
+    if not packages:
+        return
+
+    import re
+    # The package name must match the [a-z0-9.+-]+ regular expression
+    pkgname_pattern = re.compile(r"^[a-z0-9.+-]+$")
+
+    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
+    taskdeps = set()
+    for dep in taskdepdata:
+        taskdeps.add(taskdepdata[dep][0])
+
+    def parse_test_matrix(matrix_name):
+        testmatrix = d.getVarFlags(matrix_name) or {}
+        g = globals()
+        warnchecks = []
+        for w in (d.getVar("WARN_QA") or "").split():
+            if w in skip:
+               continue
+            if w in testmatrix and testmatrix[w] in g:
+                warnchecks.append(g[testmatrix[w]])
+
+        errorchecks = []
+        for e in (d.getVar("ERROR_QA") or "").split():
+            if e in skip:
+               continue
+            if e in testmatrix and testmatrix[e] in g:
+                errorchecks.append(g[testmatrix[e]])
+        return warnchecks, errorchecks
+
+    for package in packages:
+        skip = set((d.getVar('INSANE_SKIP') or "").split() +
+                   (d.getVar('INSANE_SKIP:' + package) or "").split())
+        if skip:
+            bb.note("Package %s skipping QA tests: %s" % (package, str(skip)))
+
+        bb.note("Checking Package: %s" % package)
+        # Check package name
+        if not pkgname_pattern.match(package):
+            oe.qa.handle_error("pkgname",
+                    "%s doesn't match the [a-z0-9.+-]+ regex" % package, d)
+
+        warn_checks, error_checks = parse_test_matrix("QAPATHTEST")
+        package_qa_walk(warn_checks, error_checks, package, d)
+
+        warn_checks, error_checks = parse_test_matrix("QAPKGTEST")
+        package_qa_package(warn_checks, error_checks, package, d)
+
+        package_qa_check_rdepends(package, pkgdest, skip, taskdeps, packages, d)
+        package_qa_check_deps(package, pkgdest, d)
+
+    warn_checks, error_checks = parse_test_matrix("QARECIPETEST")
+    package_qa_recipe(warn_checks, error_checks, pn, d)
+
+    if 'libdir' in d.getVar("ALL_QA").split():
+        package_qa_check_libdir(d)
+
+    oe.qa.exit_if_errors(d)
+}
+
+# binutils is used for most checks, so it needs to be set as a dependency.
+# POPULATESYSROOTDEPS is defined in staging class.
+do_package_qa[depends] += "${POPULATESYSROOTDEPS}"
+do_package_qa[vardeps] = "${@bb.utils.contains('ERROR_QA', 'empty-dirs', 'QA_EMPTY_DIRS', '', d)}"
+do_package_qa[vardepsexclude] = "BB_TASKDEPDATA"
+do_package_qa[rdeptask] = "do_packagedata"
+addtask do_package_qa after do_packagedata do_package before do_build
+
+# Add the package specific INSANE_SKIPs to the sstate dependencies
+python() {
+    pkgs = (d.getVar('PACKAGES') or '').split()
+    for pkg in pkgs:
+        d.appendVarFlag("do_package_qa", "vardeps", " INSANE_SKIP:{}".format(pkg))
+}
+
+SSTATETASKS += "do_package_qa"
+do_package_qa[sstate-inputdirs] = ""
+do_package_qa[sstate-outputdirs] = ""
+python do_package_qa_setscene () {
+    sstate_setscene(d)
+}
+addtask do_package_qa_setscene
+
+python do_qa_sysroot() {
+    bb.note("QA checking do_populate_sysroot")
+    sysroot_destdir = d.expand('${SYSROOT_DESTDIR}')
+    for sysroot_dir in d.expand('${SYSROOT_DIRS}').split():
+        qa_check_staged(sysroot_destdir + sysroot_dir, d)
+    oe.qa.exit_with_message_if_errors("do_populate_sysroot for this recipe installed files with QA issues", d)
+}
+do_populate_sysroot[postfuncs] += "do_qa_sysroot"
+
+python do_qa_patch() {
+    import subprocess
+
+    ###########################################################################
+    # Check patch.log for fuzz warnings
+    #
+    # Further information on why we check for patch fuzz warnings:
+    # http://lists.openembedded.org/pipermail/openembedded-core/2018-March/148675.html
+    # https://bugzilla.yoctoproject.org/show_bug.cgi?id=10450
+    ###########################################################################
+
+    logdir = d.getVar('T')
+    patchlog = os.path.join(logdir,"log.do_patch")
+
+    if os.path.exists(patchlog):
+        fuzzheader = '--- Patch fuzz start ---'
+        fuzzfooter = '--- Patch fuzz end ---'
+        statement = "grep -e '%s' %s > /dev/null" % (fuzzheader, patchlog)
+        if subprocess.call(statement, shell=True) == 0:
+            msg = "Fuzz detected:\n\n"
+            fuzzmsg = ""
+            inFuzzInfo = False
+            f = open(patchlog, "r")
+            for line in f:
+                if fuzzheader in line:
+                    inFuzzInfo = True
+                    fuzzmsg = ""
+                elif fuzzfooter in line:
+                    fuzzmsg = fuzzmsg.replace('\n\n', '\n')
+                    msg += fuzzmsg
+                    msg += "\n"
+                    inFuzzInfo = False
+                elif inFuzzInfo and not 'Now at patch' in line:
+                    fuzzmsg += line
+            f.close()
+            msg += "The context lines in the patches can be updated with devtool:\n"
+            msg += "\n"
+            msg += "    devtool modify %s\n" % d.getVar('PN')
+            msg += "    devtool finish --force-patch-refresh %s <layer_path>\n\n" % d.getVar('PN')
+            msg += "Don't forget to review changes done by devtool!\n"
+            if bb.utils.filter('ERROR_QA', 'patch-fuzz', d):
+                bb.error(msg)
+            elif bb.utils.filter('WARN_QA', 'patch-fuzz', d):
+                bb.warn(msg)
+            msg = "Patch log indicates that patches do not apply cleanly."
+            oe.qa.handle_error("patch-fuzz", msg, d)
+
+    # Check if the patch contains a correctly formatted and spelled Upstream-Status
+    import re
+    from oe import patch
+
+    coremeta_path = os.path.join(d.getVar('COREBASE'), 'meta', '')
+    for url in patch.src_patches(d):
+       (_, _, fullpath, _, _, _) = bb.fetch.decodeurl(url)
+
+       # skip patches not in oe-core
+       if not os.path.abspath(fullpath).startswith(coremeta_path):
+           continue
+
+       kinda_status_re = re.compile(r"^.*upstream.*status.*$", re.IGNORECASE | re.MULTILINE)
+       strict_status_re = re.compile(r"^Upstream-Status: (Pending|Submitted|Denied|Accepted|Inappropriate|Backport|Inactive-Upstream)( .+)?$", re.MULTILINE)
+       guidelines = "https://www.openembedded.org/wiki/Commit_Patch_Message_Guidelines#Patch_Header_Recommendations:_Upstream-Status"
+
+       with open(fullpath, encoding='utf-8', errors='ignore') as f:
+           file_content = f.read()
+           match_kinda = kinda_status_re.search(file_content)
+           match_strict = strict_status_re.search(file_content)
+
+           if not match_strict:
+               if match_kinda:
+                   bb.error("Malformed Upstream-Status in patch\n%s\nPlease correct according to %s :\n%s" % (fullpath, guidelines, match_kinda.group(0)))
+               else:
+                   bb.error("Missing Upstream-Status in patch\n%s\nPlease add according to %s ." % (fullpath, guidelines))
+}
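+
+# The strict regex above accepts patch headers such as the following
+# (URLs and commit references are illustrative only):
+#
+#   Upstream-Status: Pending
+#   Upstream-Status: Submitted [https://lists.example.org/patch-thread]
+#   Upstream-Status: Backport [https://example.com/project/commit/abcd1234]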
+
+python do_qa_configure() {
+    import subprocess
+
+    ###########################################################################
+    # Check config.log for cross compile issues
+    ###########################################################################
+
+    configs = []
+    workdir = d.getVar('WORKDIR')
+
+    skip = (d.getVar('INSANE_SKIP') or "").split()
+    skip_configure_unsafe = False
+    if 'configure-unsafe' in skip:
+        bb.note("Recipe %s skipping qa checking: configure-unsafe" % d.getVar('PN'))
+        skip_configure_unsafe = True
+
+    if bb.data.inherits_class('autotools', d) and not skip_configure_unsafe:
+        bb.note("Checking autotools environment for common misconfiguration")
+        for root, dirs, files in os.walk(workdir):
+            statement = "grep -q -F -e 'is unsafe for cross-compilation' %s" % \
+                        os.path.join(root,"config.log")
+            if "config.log" in files:
+                if subprocess.call(statement, shell=True) == 0:
+                    error_msg = """This autoconf log indicates errors, it looked at host include and/or library paths while determining system capabilities.
+Rerun configure task after fixing this."""
+                    oe.qa.handle_error("configure-unsafe", error_msg, d)
+
+            if "configure.ac" in files:
+                configs.append(os.path.join(root,"configure.ac"))
+            if "configure.in" in files:
+                configs.append(os.path.join(root, "configure.in"))
+
+    ###########################################################################
+    # Check gettext configuration and dependencies are correct
+    ###########################################################################
+
+    skip_configure_gettext = False
+    if 'configure-gettext' in skip:
+        bb.note("Recipe %s skipping qa checking: configure-gettext" % d.getVar('PN'))
+        skip_configure_gettext = True
+
+    cnf = d.getVar('EXTRA_OECONF') or ""
+    if not ("gettext" in d.getVar('P') or "gcc-runtime" in d.getVar('P') or \
+            "--disable-nls" in cnf or skip_configure_gettext):
+        ml = d.getVar("MLPREFIX") or ""
+        if bb.data.inherits_class('cross-canadian', d):
+            gt = "nativesdk-gettext"
+        else:
+            gt = "gettext-native"
+        deps = bb.utils.explode_deps(d.getVar('DEPENDS') or "")
+        if gt not in deps:
+            for config in configs:
+                gnu = "grep \"^[[:space:]]*AM_GNU_GETTEXT\" %s >/dev/null" % config
+                if subprocess.call(gnu, shell=True) == 0:
+                    error_msg = "AM_GNU_GETTEXT used but no inherit gettext"
+                    oe.qa.handle_error("configure-gettext", error_msg, d)
+
+    ###########################################################################
+    # Check for unrecognised configure options (against an ignore list)
+    ###########################################################################
+    if bb.data.inherits_class("autotools", d):
+        bb.note("Checking configure output for unrecognised options")
+        try:
+            if bb.data.inherits_class("autotools", d):
+                flag = "WARNING: unrecognized options:"
+                log = os.path.join(d.getVar('B'), 'config.log')
+            output = subprocess.check_output(['grep', '-F', flag, log]).decode("utf-8").replace(', ', ' ').replace('"', '')
+            options = set()
+            for line in output.splitlines():
+                options |= set(line.partition(flag)[2].split())
+            ignore_opts = set(d.getVar("UNKNOWN_CONFIGURE_OPT_IGNORE").split())
+            options -= ignore_opts
+            if options:
+                pn = d.getVar('PN')
+                error_msg = pn + ": configure was passed unrecognised options: " + " ".join(options)
+                oe.qa.handle_error("unknown-configure-option", error_msg, d)
+        except subprocess.CalledProcessError:
+            pass
+
+    # Check invalid PACKAGECONFIG
+    pkgconfig = (d.getVar("PACKAGECONFIG") or "").split()
+    if pkgconfig:
+        pkgconfigflags = d.getVarFlags("PACKAGECONFIG") or {}
+        for pconfig in pkgconfig:
+            if pconfig not in pkgconfigflags:
+                pn = d.getVar('PN')
+                error_msg = "%s: invalid PACKAGECONFIG: %s" % (pn, pconfig)
+                oe.qa.handle_error("invalid-packageconfig", error_msg, d)
+
+    oe.qa.exit_if_errors(d)
+}
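+
+# For reference, the invalid-packageconfig check above fires when PACKAGECONFIG
+# selects a flag that has no corresponding definition; a minimal, purely
+# illustrative definition looks like:
+#
+#   PACKAGECONFIG ??= ""
+#   PACKAGECONFIG[foo] = "--enable-foo,--disable-foo,libfoo"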
+
+def unpack_check_src_uri(pn, d):
+    import re
+
+    skip = (d.getVar('INSANE_SKIP') or "").split()
+    if 'src-uri-bad' in skip:
+        bb.note("Recipe %s skipping qa checking: src-uri-bad" % d.getVar('PN'))
+        return
+
+    if "${PN}" in d.getVar("SRC_URI", False):
+        oe.qa.handle_error("src-uri-bad", "%s: SRC_URI uses PN not BPN" % pn, d)
+
+    for url in d.getVar("SRC_URI").split():
+        # Search for github and gitlab URLs that pull unstable archives (comment for future greppers)
+        if re.search(r"git(hu|la)b\.com/.+/.+/archive/.+", url):
+            oe.qa.handle_error("src-uri-bad", "%s: SRC_URI uses unstable GitHub/GitLab archives, convert recipe to use git protocol" % pn, d)
+
+python do_qa_unpack() {
+    src_uri = d.getVar('SRC_URI')
+    s_dir = d.getVar('S')
+    if src_uri and not os.path.exists(s_dir):
+        bb.warn('%s: the directory %s (%s) pointed to by the S variable doesn\'t exist - please set S within the recipe to point to where the source has been unpacked to' % (d.getVar('PN'), d.getVar('S', False), s_dir))
+
+    unpack_check_src_uri(d.getVar('PN'), d)
+}
+
+# Check for patch fuzz
+do_patch[postfuncs] += "do_qa_patch "
+
+# Check for broken config.log files and for packages requiring gettext
+# that don't have it in DEPENDS.
+#addtask qa_configure after do_configure before do_compile
+do_configure[postfuncs] += "do_qa_configure "
+
+# Check whether S exists.
+do_unpack[postfuncs] += "do_qa_unpack"
+
+python () {
+    import re
+    
+    tests = d.getVar('ALL_QA').split()
+    if "desktop" in tests:
+        d.appendVar("PACKAGE_DEPENDS", " desktop-file-utils-native")
+
+    ###########################################################################
+    # Check various variables
+    ###########################################################################
+
+    # Checking ${FILESEXTRAPATHS}
+    extrapaths = (d.getVar("FILESEXTRAPATHS") or "")
+    if '__default' not in extrapaths.split(":"):
+        msg = "FILESEXTRAPATHS-variable, must always use :prepend (or :append)\n"
+        msg += "type of assignment, and don't forget the colon.\n"
+        msg += "Please assign it with the format of:\n"
+        msg += "  FILESEXTRAPATHS:append := \":${THISDIR}/Your_Files_Path\" or\n"
+        msg += "  FILESEXTRAPATHS:prepend := \"${THISDIR}/Your_Files_Path:\"\n"
+        msg += "in your bbappend file\n\n"
+        msg += "Your incorrect assignment is:\n"
+        msg += "%s\n" % extrapaths
+        bb.warn(msg)
+
+    overrides = d.getVar('OVERRIDES').split(':')
+    pn = d.getVar('PN')
+    if pn in overrides:
+        msg = 'Recipe %s has PN of "%s" which is in OVERRIDES, this can result in unexpected behaviour.' % (d.getVar("FILE"), pn)
+        oe.qa.handle_error("pn-overrides", msg, d)
+    prog = re.compile(r'[A-Z]')
+    if prog.search(pn):
+        oe.qa.handle_error("uppercase-pn", 'PN: %s is upper case, this can result in unexpected behavior.' % pn, d)
+
+    # Some people mistakenly use DEPENDS:${PN} instead of DEPENDS and wonder
+    # why it doesn't work.
+    if (d.getVar(d.expand('DEPENDS:${PN}'))):
+        oe.qa.handle_error("pkgvarcheck", "recipe uses DEPENDS:${PN}, should use DEPENDS", d)
+
+    issues = []
+    if (d.getVar('PACKAGES') or "").split():
+        for dep in (d.getVar('QADEPENDS') or "").split():
+            d.appendVarFlag('do_package_qa', 'depends', " %s:do_populate_sysroot" % dep)
+        for var in 'RDEPENDS', 'RRECOMMENDS', 'RSUGGESTS', 'RCONFLICTS', 'RPROVIDES', 'RREPLACES', 'FILES', 'pkg_preinst', 'pkg_postinst', 'pkg_prerm', 'pkg_postrm', 'ALLOW_EMPTY':
+            if d.getVar(var, False):
+                issues.append(var)
+
+        fakeroot_tests = d.getVar('FAKEROOT_QA').split()
+        if set(tests) & set(fakeroot_tests):
+            d.setVarFlag('do_package_qa', 'fakeroot', '1')
+            d.appendVarFlag('do_package_qa', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
+    else:
+        d.setVarFlag('do_package_qa', 'rdeptask', '')
+    for i in issues:
+        oe.qa.handle_error("pkgvarcheck", "%s: Variable %s is set as not being package specific, please fix this." % (d.getVar("FILE"), i), d)
+
+    if 'native-last' not in (d.getVar('INSANE_SKIP') or "").split():
+        for native_class in ['native', 'nativesdk']:
+            if bb.data.inherits_class(native_class, d):
+
+                inherited_classes = d.getVar('__inherit_cache', False) or []
+                needle = "/" + native_class
+
+                bbclassextend = (d.getVar('BBCLASSEXTEND') or '').split()
+                # BBCLASSEXTEND items are always added in the end
+                skip_classes = bbclassextend
+                if bb.data.inherits_class('native', d) or 'native' in bbclassextend:
+                    # native also inherits nopackages and relocatable bbclasses
+                    skip_classes.extend(['nopackages', 'relocatable'])
+
+                broken_order = []
+                for class_item in reversed(inherited_classes):
+                    if needle not in class_item:
+                        for extend_item in skip_classes:
+                            if '/%s.bbclass' % extend_item in class_item:
+                                break
+                        else:
+                            pn = d.getVar('PN')
+                            broken_order.append(os.path.basename(class_item))
+                    else:
+                        break
+                if broken_order:
+                    oe.qa.handle_error("native-last", "%s: native/nativesdk class is not inherited last, this can result in unexpected behaviour. "
+                                             "Classes inherited after native/nativesdk: %s" % (pn, " ".join(broken_order)), d)
+
+    oe.qa.exit_if_errors(d)
+}
diff --git a/poky/meta/classes-global/license.bbclass b/poky/meta/classes-global/license.bbclass
new file mode 100644
index 0000000..560acb8
--- /dev/null
+++ b/poky/meta/classes-global/license.bbclass
@@ -0,0 +1,426 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Populates LICENSE_DIRECTORY as set in distro config with the license files as set by
+# LIC_FILES_CHKSUM.
+# TODO:
+# - There is a real issue revolving around license naming standards.
+
+LICENSE_DIRECTORY ??= "${DEPLOY_DIR}/licenses"
+LICSSTATEDIR = "${WORKDIR}/license-destdir/"
+
+# Create extra package with license texts and add it to RRECOMMENDS:${PN}
+LICENSE_CREATE_PACKAGE[type] = "boolean"
+LICENSE_CREATE_PACKAGE ??= "0"
+LICENSE_PACKAGE_SUFFIX ??= "-lic"
+LICENSE_FILES_DIRECTORY ??= "${datadir}/licenses/"
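+# Illustrative sketch (values assumed, not prescribed here): a distro or local.conf
+# can opt in with
+#   LICENSE_CREATE_PACKAGE = "1"
+# which creates ${PN}${LICENSE_PACKAGE_SUFFIX} (e.g. foo-lic for a recipe named foo)
+# holding the license texts installed under ${LICENSE_FILES_DIRECTORY}/${PN}.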
+
+addtask populate_lic after do_patch before do_build
+do_populate_lic[dirs] = "${LICSSTATEDIR}/${PN}"
+do_populate_lic[cleandirs] = "${LICSSTATEDIR}"
+
+python do_populate_lic() {
+    """
+    Populate LICENSE_DIRECTORY with licenses.
+    """
+    lic_files_paths = find_license_files(d)
+
+    # The base directory we wrangle licenses to
+    destdir = os.path.join(d.getVar('LICSSTATEDIR'), d.getVar('PN'))
+    copy_license_files(lic_files_paths, destdir)
+    info = get_recipe_info(d)
+    with open(os.path.join(destdir, "recipeinfo"), "w") as f:
+        for key in sorted(info.keys()):
+            f.write("%s: %s\n" % (key, info[key]))
+    oe.qa.exit_if_errors(d)
+}
+
+PSEUDO_IGNORE_PATHS .= ",${@','.join(((d.getVar('COMMON_LICENSE_DIR') or '') + ' ' + (d.getVar('LICENSE_PATH') or '') + ' ' + d.getVar('COREBASE') + '/meta/COPYING').split())}"
+# it would be better to copy them in do_install:append, but find_license_files is python
+python perform_packagecopy:prepend () {
+    enabled = oe.data.typed_value('LICENSE_CREATE_PACKAGE', d)
+    if d.getVar('CLASSOVERRIDE') == 'class-target' and enabled:
+        lic_files_paths = find_license_files(d)
+
+        # LICENSE_FILES_DIRECTORY starts with '/' so os.path.join cannot be used to join D and LICENSE_FILES_DIRECTORY
+        destdir = d.getVar('D') + os.path.join(d.getVar('LICENSE_FILES_DIRECTORY'), d.getVar('PN'))
+        copy_license_files(lic_files_paths, destdir)
+        add_package_and_files(d)
+}
+perform_packagecopy[vardeps] += "LICENSE_CREATE_PACKAGE"
+
+def get_recipe_info(d):
+    info = {}
+    info["PV"] = d.getVar("PV")
+    info["PR"] = d.getVar("PR")
+    info["LICENSE"] = d.getVar("LICENSE")
+    return info
+
+def add_package_and_files(d):
+    packages = d.getVar('PACKAGES')
+    files = d.getVar('LICENSE_FILES_DIRECTORY')
+    pn = d.getVar('PN')
+    pn_lic = "%s%s" % (pn, d.getVar('LICENSE_PACKAGE_SUFFIX', False))
+    if pn_lic in packages.split():
+        bb.warn("%s package already existed in %s." % (pn_lic, pn))
+    else:
+        # first in PACKAGES to be sure that nothing else gets LICENSE_FILES_DIRECTORY
+        d.setVar('PACKAGES', "%s %s" % (pn_lic, packages))
+        d.setVar('FILES:' + pn_lic, files)
+
+def copy_license_files(lic_files_paths, destdir):
+    import shutil
+    import errno
+
+    bb.utils.mkdirhier(destdir)
+    for (basename, path, beginline, endline) in lic_files_paths:
+        try:
+            src = path
+            dst = os.path.join(destdir, basename)
+            if os.path.exists(dst):
+                os.remove(dst)
+            if os.path.islink(src):
+                src = os.path.realpath(src)
+            canlink = os.access(src, os.W_OK) and (os.stat(src).st_dev == os.stat(destdir).st_dev) and beginline is None and endline is None
+            if canlink:
+                try:
+                    os.link(src, dst)
+                except OSError as err:
+                    if err.errno == errno.EXDEV:
+                        # Copy license files if hardlink is not possible even if st_dev is the
+                        # same on source and destination (docker container with device-mapper?)
+                        canlink = False
+                    else:
+                        raise
+                # Only chown if we did hardlink and we're running under pseudo
+                if canlink and os.environ.get('PSEUDO_DISABLED') == '0':
+                    os.chown(dst,0,0)
+            if not canlink:
+                begin_idx = max(0, int(beginline) - 1) if beginline is not None else None
+                end_idx = max(0, int(endline)) if endline is not None else None
+                if begin_idx is None and end_idx is None:
+                    shutil.copyfile(src, dst)
+                else:
+                    with open(src, 'rb') as src_f:
+                        with open(dst, 'wb') as dst_f:
+                            dst_f.write(b''.join(src_f.readlines()[begin_idx:end_idx]))
+
+        except Exception as e:
+            bb.warn("Could not copy license file %s to %s: %s" % (src, dst, e))
+
+def find_license_files(d):
+    """
+    Creates list of files used in LIC_FILES_CHKSUM and generic LICENSE files.
+    """
+    import shutil
+    import oe.license
+    from collections import defaultdict, OrderedDict
+
+    # All the license files for the package
+    lic_files = d.getVar('LIC_FILES_CHKSUM') or ""
+    pn = d.getVar('PN')
+    # The license files are located in S/LIC_FILE_CHECKSUM.
+    srcdir = d.getVar('S')
+    # Directory we store the generic licenses as set in the distro configuration
+    generic_directory = d.getVar('COMMON_LICENSE_DIR')
+    # List of basename, path tuples
+    lic_files_paths = []
+    # hash to keep track of generic license mappings
+    non_generic_lics = {}
+    # Entries from LIC_FILES_CHKSUM
+    lic_chksums = {}
+    license_source_dirs = []
+    license_source_dirs.append(generic_directory)
+    try:
+        additional_lic_dirs = d.getVar('LICENSE_PATH').split()
+        for lic_dir in additional_lic_dirs:
+            license_source_dirs.append(lic_dir)
+    except:
+        pass
+
+    class FindVisitor(oe.license.LicenseVisitor):
+        def visit_Str(self, node):
+            #
+            # Until we figure out what to do with the two modifiers we support
+            # ("or later" = "+" and "with exceptions" = "*"), just strip out
+            # the modifier and use the base license.
+            find_license(node.s.replace("+", "").replace("*", ""))
+            self.generic_visit(node)
+
+        def visit_Constant(self, node):
+            find_license(node.value.replace("+", "").replace("*", ""))
+            self.generic_visit(node)
+
+    def find_license(license_type):
+        try:
+            bb.utils.mkdirhier(gen_lic_dest)
+        except:
+            pass
+        spdx_generic = None
+        license_source = None
+        # If the generic does not exist we need to check to see if there is an SPDX mapping to it,
+        # unless NO_GENERIC_LICENSE is set.
+        for lic_dir in license_source_dirs:
+            if not os.path.isfile(os.path.join(lic_dir, license_type)):
+                if d.getVarFlag('SPDXLICENSEMAP', license_type) != None:
+                    # Great, there is an SPDXLICENSEMAP. We can copy!
+                    bb.debug(1, "We need to use a SPDXLICENSEMAP for %s" % (license_type))
+                    spdx_generic = d.getVarFlag('SPDXLICENSEMAP', license_type)
+                    license_source = lic_dir
+                    break
+            elif os.path.isfile(os.path.join(lic_dir, license_type)):
+                spdx_generic = license_type
+                license_source = lic_dir
+                break
+
+        non_generic_lic = d.getVarFlag('NO_GENERIC_LICENSE', license_type)
+        if spdx_generic and license_source:
+            # We really should copy to generic_ + spdx_generic, however that ends up messing
+            # up the manifest audit. This should be fixed in emit_pkgdata (or we actually go and fix all the recipes).
+
+            lic_files_paths.append(("generic_" + license_type, os.path.join(license_source, spdx_generic),
+                                    None, None))
+
+            # The user may attempt to use NO_GENERIC_LICENSE for a generic license which doesn't
+            # make sense and should not be allowed; warn the user in this case.
+            if d.getVarFlag('NO_GENERIC_LICENSE', license_type):
+                oe.qa.handle_error("license-no-generic",
+                    "%s: %s is a generic license, please don't use NO_GENERIC_LICENSE for it." % (pn, license_type), d)
+
+        elif non_generic_lic and non_generic_lic in lic_chksums:
+            # if NO_GENERIC_LICENSE is set, we copy the license files from the fetched source
+            # of the package rather than the license_source_dirs.
+            lic_files_paths.append(("generic_" + license_type,
+                                    os.path.join(srcdir, non_generic_lic), None, None))
+            non_generic_lics[non_generic_lic] = license_type
+        else:
+            # Explicitly avoid the CLOSED license because this isn't generic
+            if license_type != 'CLOSED':
+                # And here is where we warn people that their licenses are lousy
+                oe.qa.handle_error("license-exists",
+                    "%s: No generic license file exists for: %s in any provider" % (pn, license_type), d)
+            pass
+
+    if not generic_directory:
+        bb.fatal("COMMON_LICENSE_DIR is unset. Please set this in your distro config")
+
+    for url in lic_files.split():
+        try:
+            (method, host, path, user, pswd, parm) = bb.fetch.decodeurl(url)
+            if method != "file" or not path:
+                raise bb.fetch.MalformedUrl()
+        except bb.fetch.MalformedUrl:
+            bb.fatal("%s: LIC_FILES_CHKSUM contains an invalid URL:  %s" % (d.getVar('PF'), url))
+        # We want the license filename and path
+        chksum = parm.get('md5', None)
+        beginline = parm.get('beginline')
+        endline = parm.get('endline')
+        lic_chksums[path] = (chksum, beginline, endline)
+
+    v = FindVisitor()
+    try:
+        v.visit_string(d.getVar('LICENSE'))
+    except oe.license.InvalidLicense as exc:
+        bb.fatal('%s: %s' % (d.getVar('PF'), exc))
+    except SyntaxError:
+        oe.qa.handle_error("license-syntax",
+            "%s: Failed to parse it's LICENSE field." % (d.getVar('PF')), d)
+    # Add files from LIC_FILES_CHKSUM to list of license files
+    lic_chksum_paths = defaultdict(OrderedDict)
+    for path, data in sorted(lic_chksums.items()):
+        lic_chksum_paths[os.path.basename(path)][data] = (os.path.join(srcdir, path), data[1], data[2])
+    for basename, files in lic_chksum_paths.items():
+        if len(files) == 1:
+            # Don't copy again a LICENSE already handled as non-generic
+            if basename in non_generic_lics:
+                continue
+            data = list(files.values())[0]
+            lic_files_paths.append(tuple([basename] + list(data)))
+        else:
+            # If there are multiple different license files with identical
+            # basenames we rename them to <file>.0, <file>.1, ...
+            for i, data in enumerate(files.values()):
+                lic_files_paths.append(tuple(["%s.%d" % (basename, i)] + list(data)))
+
+    return lic_files_paths
+
+def return_spdx(d, license):
+    """
+    This function returns the SPDX mapping of a license if it exists.
+    """
+    return d.getVarFlag('SPDXLICENSEMAP', license)
+
+def canonical_license(d, license):
+    """
+    Return the canonical (SPDX) form of the license if available (so GPLv3
+    becomes GPL-3.0-only) or the passed license if there is no canonical form.
+    """
+    return d.getVarFlag('SPDXLICENSEMAP', license) or license
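+# Illustrative example (assuming the distro provides SPDXLICENSEMAP[GPLv3] = "GPL-3.0-only"):
+# canonical_license(d, "GPLv3") returns "GPL-3.0-only", while an unmapped name is
+# returned unchanged.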
+
+def expand_wildcard_licenses(d, wildcard_licenses):
+    """
+    There are some common wildcard values users may want to use. Support them
+    here.
+    """
+    licenses = set(wildcard_licenses)
+    mapping = {
+        "AGPL-3.0*" : ["AGPL-3.0-only", "AGPL-3.0-or-later"],
+        "GPL-3.0*" : ["GPL-3.0-only", "GPL-3.0-or-later"],
+        "LGPL-3.0*" : ["LGPL-3.0-only", "LGPL-3.0-or-later"],
+    }
+    for k in mapping:
+        if k in wildcard_licenses:
+            licenses.remove(k)
+            for item in mapping[k]:
+                licenses.add(item)
+
+    for l in licenses:
+        if l in oe.license.obsolete_license_list():
+            bb.fatal("Error, %s is an obsolete license, please use an SPDX reference in INCOMPATIBLE_LICENSE" % l)
+        if "*" in l:
+            bb.fatal("Error, %s is an invalid license wildcard entry" % l)
+
+    return list(licenses)
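+# Illustrative example (not part of this class): expand_wildcard_licenses(d, ["GPL-3.0*", "MIT"])
+# is expected to yield GPL-3.0-only, GPL-3.0-or-later and MIT (order not guaranteed,
+# since a set is used internally).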
+
+def incompatible_license_contains(license, truevalue, falsevalue, d):
+    license = canonical_license(d, license)
+    bad_licenses = (d.getVar('INCOMPATIBLE_LICENSE') or "").split()
+    bad_licenses = expand_wildcard_licenses(d, bad_licenses)
+    return truevalue if license in bad_licenses else falsevalue
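+# Illustrative usage (hypothetical package name, not part of this class): a recipe can
+# drop a dependency when its license is blocked, e.g.
+#   RDEPENDS:${PN} += "${@incompatible_license_contains('GPL-3.0-or-later', '', 'some-gplv3-dep', d)}"
+# which evaluates to the empty string when GPL-3.0-or-later is in INCOMPATIBLE_LICENSE.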
+
+def incompatible_pkg_license(d, dont_want_licenses, license):
+    # Handles an "or" or two license sets provided by
+    # flattened_licenses(), pick one that works if possible.
+    def choose_lic_set(a, b):
+        return a if all(oe.license.license_ok(canonical_license(d, lic),
+                            dont_want_licenses) for lic in a) else b
+
+    try:
+        licenses = oe.license.flattened_licenses(license, choose_lic_set)
+    except oe.license.LicenseError as exc:
+        bb.fatal('%s: %s' % (d.getVar('P'), exc))
+
+    incompatible_lic = []
+    for l in licenses:
+        license = canonical_license(d, l)
+        if not oe.license.license_ok(license, dont_want_licenses):
+            incompatible_lic.append(license)
+
+    return sorted(incompatible_lic)
+
+def incompatible_license(d, dont_want_licenses, package=None):
+    """
+    This function checks if a recipe has only incompatible licenses. It also
+    takes the 'or' operator into consideration. dont_want_licenses should be passed
+    as canonical (SPDX) names.
+    """
+    import oe.license
+    license = d.getVar("LICENSE:%s" % package) if package else None
+    if not license:
+        license = d.getVar('LICENSE')
+
+    return incompatible_pkg_license(d, dont_want_licenses, license)
+
+def check_license_flags(d):
+    """
+    This function checks if a recipe has any LICENSE_FLAGS that
+    aren't acceptable.
+
+    If it does, it returns all the LICENSE_FLAGS missing from the list
+    of acceptable license flags, or all of the LICENSE_FLAGS if there
+    is no list of acceptable flags.
+
+    If everything is acceptable, it returns None.
+    """
+
+    def license_flag_matches(flag, acceptlist, pn):
+        """
+        Return True if flag matches something in acceptlist, False if not.
+
+        Before we test a flag against the acceptlist, we append _${PN}
+        to it.  We then try to match that string against the
+        acceptlist.  This covers the normal case, where we expect
+        LICENSE_FLAGS to be a simple string like 'commercial', which
+        the user typically matches exactly in the acceptlist by
+        explicitly appending the package name, e.g. 'commercial_foo'.
+        If we fail the match however, we then split the flag across
+        '_' and append each fragment and test until we either match or
+        run out of fragments.
+        """
+        flag_pn = ("%s_%s" % (flag, pn))
+        for candidate in acceptlist:
+            if flag_pn == candidate:
+                return True
+
+        flag_cur = ""
+        flagments = flag_pn.split("_")
+        flagments.pop() # we've already tested the full string
+        for flagment in flagments:
+            if flag_cur:
+                flag_cur += "_"
+            flag_cur += flagment
+            for candidate in acceptlist:
+                if flag_cur == candidate:
+                    return True
+        return False
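+    # Illustrative example (recipe name "foo" assumed): with LICENSE_FLAGS = "commercial"
+    # the flag is first tested as "commercial_foo"; it matches if LICENSE_FLAGS_ACCEPTED
+    # contains either "commercial_foo" or the "commercial" fragment.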
+
+    def all_license_flags_match(license_flags, acceptlist):
+        """ Return all unmatched flags, None if all flags match """
+        pn = d.getVar('PN')
+        split_acceptlist = acceptlist.split()
+        flags = []
+        for flag in license_flags.split():
+            if not license_flag_matches(flag, split_acceptlist, pn):
+                flags.append(flag)
+        return flags if flags else None
+
+    license_flags = d.getVar('LICENSE_FLAGS')
+    if license_flags:
+        acceptlist = d.getVar('LICENSE_FLAGS_ACCEPTED')
+        if not acceptlist:
+            return license_flags.split()
+        unmatched_flags = all_license_flags_match(license_flags, acceptlist)
+        if unmatched_flags:
+            return unmatched_flags
+    return None
+
+def check_license_format(d):
+    """
+    This function checks that LICENSE is well formed:
+        it validates the operators used in LICENSE and ensures that
+        license names are separated by valid operators rather than plain spaces.
+    """
+    pn = d.getVar('PN')
+    licenses = d.getVar('LICENSE')
+    from oe.license import license_operator, license_operator_chars, license_pattern
+
+    elements = list(filter(lambda x: x.strip(), license_operator.split(licenses)))
+    for pos, element in enumerate(elements):
+        if license_pattern.match(element):
+            if pos > 0 and license_pattern.match(elements[pos - 1]):
+                oe.qa.handle_error('license-format',
+                        '%s: LICENSE value "%s" has an invalid format - license names ' \
+                        'must be separated by the following characters to indicate ' \
+                        'the license selection: %s' %
+                        (pn, licenses, license_operator_chars), d)
+        elif not license_operator.match(element):
+            oe.qa.handle_error('license-format',
+                    '%s: LICENSE value "%s" has an invalid separator "%s" that is not ' \
+                    'in the valid list of separators (%s)' %
+                    (pn, licenses, element, license_operator_chars), d)
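+# Illustrative example (not part of this class): LICENSE = "MIT & GPL-2.0-only" passes this
+# check, while LICENSE = "MIT GPL-2.0-only" triggers the license-format error because the
+# two names are not separated by one of the valid operator characters (&, |, parentheses).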
+
+SSTATETASKS += "do_populate_lic"
+do_populate_lic[sstate-inputdirs] = "${LICSSTATEDIR}"
+do_populate_lic[sstate-outputdirs] = "${LICENSE_DIRECTORY}/"
+
+IMAGE_CLASSES:append = " license_image"
+
+python do_populate_lic_setscene () {
+    sstate_setscene(d)
+}
+addtask do_populate_lic_setscene
diff --git a/poky/meta/classes-global/logging.bbclass b/poky/meta/classes-global/logging.bbclass
new file mode 100644
index 0000000..ce03abf
--- /dev/null
+++ b/poky/meta/classes-global/logging.bbclass
@@ -0,0 +1,107 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# The following logging mechanisms are to be used in bash functions of recipes.
+# They are intended to map one to one in intention and output format with the
+# python recipe logging functions of a similar naming convention: bb.plain(),
+# bb.note(), etc.
+
+LOGFIFO = "${T}/fifo.${@os.getpid()}"
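+
+# Illustrative usage from a recipe's shell task (hypothetical task body, not part of
+# this class):
+#   do_install:append() {
+#       bbnote "installing extra documentation"
+#       [ -e "${D}${sysconfdir}/legacy.conf" ] && bbwarn "legacy configuration detected"
+#   }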
+
+# Print the output exactly as it is passed in. Typically used for output of
+# tasks that should be seen on the console. Use sparingly.
+# Output: logs console
+bbplain() {
+	if [ -p ${LOGFIFO} ] ; then
+		printf "%b\0" "bbplain $*" > ${LOGFIFO}
+	else
+		echo "$*"
+	fi
+}
+
+# Notify the user of a noteworthy condition. 
+# Output: logs
+bbnote() {
+	if [ -p ${LOGFIFO} ] ; then
+		printf "%b\0" "bbnote $*" > ${LOGFIFO}
+	else
+		echo "NOTE: $*"
+	fi
+}
+
+# Print a warning to the log. Warnings are non-fatal, and do not
+# indicate a build failure.
+# Output: logs console
+bbwarn() {
+	if [ -p ${LOGFIFO} ] ; then
+		printf "%b\0" "bbwarn $*" > ${LOGFIFO}
+	else
+		echo "WARNING: $*"
+	fi
+}
+
+# Print an error to the log. Errors are non-fatal in that the build can
+# continue, but they do indicate a build failure.
+# Output: logs console
+bberror() {
+	if [ -p ${LOGFIFO} ] ; then
+		printf "%b\0" "bberror $*" > ${LOGFIFO}
+	else
+		echo "ERROR: $*"
+	fi
+}
+
+# Print a fatal error to the log. Fatal errors indicate build failure
+# and halt the build, exiting with an error code.
+# Output: logs console
+bbfatal() {
+	if [ -p ${LOGFIFO} ] ; then
+		printf "%b\0" "bbfatal $*" > ${LOGFIFO}
+	else
+		echo "ERROR: $*"
+	fi
+	exit 1
+}
+
+# Like bbfatal, except prevents the suppression of the error log by
+# bitbake's UI.
+# Output: logs console
+bbfatal_log() {
+	if [ -p ${LOGFIFO} ] ; then
+		printf "%b\0" "bbfatal_log $*" > ${LOGFIFO}
+	else
+		echo "ERROR: $*"
+	fi
+	exit 1
+}
+
+# Print debug messages. These are appropriate for progress checkpoint
+# messages to the logs. Depending on the debug log level, they may also
+# go to the console.
+# Output: logs console
+# Usage: bbdebug 1 "first level debug message"
+#        bbdebug 2 "second level debug message"
+bbdebug() {
+	USAGE='Usage: bbdebug [123] "message"'
+	if [ $# -lt 2 ]; then
+		bbfatal "$USAGE"
+	fi
+	
+	# Strip off the debug level and ensure it is an integer
+	DBGLVL=$1; shift
+	NONDIGITS=$(echo "$DBGLVL" | tr -d "[:digit:]")
+	if [ "$NONDIGITS" ]; then
+		bbfatal "$USAGE"
+	fi
+
+	# All debug output is printed to the logs
+	if [ -p ${LOGFIFO} ] ; then
+		printf "%b\0" "bbdebug $DBGLVL $*" > ${LOGFIFO}
+	else
+		echo "DEBUG: $*"
+	fi
+}
+
diff --git a/poky/meta/classes-global/mirrors.bbclass b/poky/meta/classes-global/mirrors.bbclass
new file mode 100644
index 0000000..9643b31
--- /dev/null
+++ b/poky/meta/classes-global/mirrors.bbclass
@@ -0,0 +1,95 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+MIRRORS += "\
+${DEBIAN_MIRROR}	http://snapshot.debian.org/archive/debian/20180310T215105Z/pool \
+${DEBIAN_MIRROR}	http://snapshot.debian.org/archive/debian-archive/20120328T092752Z/debian/pool \
+${DEBIAN_MIRROR}	http://snapshot.debian.org/archive/debian-archive/20110127T084257Z/debian/pool \
+${DEBIAN_MIRROR}	http://snapshot.debian.org/archive/debian-archive/20090802T004153Z/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.de.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.au.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.cl.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.hr.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.fi.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.hk.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.hu.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.ie.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.it.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.jp.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.no.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.pl.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.ro.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.si.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.es.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.se.debian.org/debian/pool \
+${DEBIAN_MIRROR}	http://ftp.tr.debian.org/debian/pool \
+${GNU_MIRROR}	https://mirrors.kernel.org/gnu \
+${KERNELORG_MIRROR}	http://www.kernel.org/pub \
+${GNUPG_MIRROR}	ftp://ftp.gnupg.org/gcrypt \
+${GNUPG_MIRROR}	ftp://ftp.franken.de/pub/crypt/mirror/ftp.gnupg.org/gcrypt \
+${GNUPG_MIRROR}	ftp://mirrors.dotsrc.org/gcrypt \
+ftp://dante.ctan.org/tex-archive ftp://ftp.fu-berlin.de/tex/CTAN \
+ftp://dante.ctan.org/tex-archive http://sunsite.sut.ac.jp/pub/archives/ctan/ \
+ftp://dante.ctan.org/tex-archive http://ctan.unsw.edu.au/ \
+ftp://ftp.gnutls.org/gcrypt/gnutls ${GNUPG_MIRROR}/gnutls \
+http://ftp.info-zip.org/pub/infozip/src/ ftp://sunsite.icm.edu.pl/pub/unix/archiving/info-zip/src/ \
+http://www.mirrorservice.org/sites/lsof.itap.purdue.edu/pub/tools/unix/lsof/ http://www.mirrorservice.org/sites/lsof.itap.purdue.edu/pub/tools/unix/lsof/OLD/ \
+${APACHE_MIRROR}  http://www.us.apache.org/dist \
+${APACHE_MIRROR}  http://archive.apache.org/dist \
+http://downloads.sourceforge.net/watchdog/ http://fossies.org/linux/misc/ \
+${SAVANNAH_GNU_MIRROR} http://download-mirror.savannah.gnu.org/releases \
+${SAVANNAH_NONGNU_MIRROR} http://download-mirror.savannah.nongnu.org/releases \
+ftp://sourceware.org/pub http://mirrors.kernel.org/sourceware \
+ftp://sourceware.org/pub http://gd.tuwien.ac.at/gnu/sourceware \
+ftp://sourceware.org/pub http://ftp.gwdg.de/pub/linux/sources.redhat.com/sourceware \
+cvs://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
+svn://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
+git://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
+gitsm://.*/.*   http://downloads.yoctoproject.org/mirror/sources/ \
+hg://.*/.*      http://downloads.yoctoproject.org/mirror/sources/ \
+bzr://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
+p4://.*/.*      http://downloads.yoctoproject.org/mirror/sources/ \
+osc://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
+https?://.*/.*  http://downloads.yoctoproject.org/mirror/sources/ \
+ftp://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
+npm://.*/?.*    http://downloads.yoctoproject.org/mirror/sources/ \
+cvs://.*/.*     http://sources.openembedded.org/ \
+svn://.*/.*     http://sources.openembedded.org/ \
+git://.*/.*     http://sources.openembedded.org/ \
+gitsm://.*/.*   http://sources.openembedded.org/ \
+hg://.*/.*      http://sources.openembedded.org/ \
+bzr://.*/.*     http://sources.openembedded.org/ \
+p4://.*/.*      http://sources.openembedded.org/ \
+osc://.*/.*     http://sources.openembedded.org/ \
+https?://.*/.*  http://sources.openembedded.org/ \
+ftp://.*/.*     http://sources.openembedded.org/ \
+npm://.*/?.*    http://sources.openembedded.org/ \
+${CPAN_MIRROR}  http://cpan.metacpan.org/ \
+${CPAN_MIRROR}  http://search.cpan.org/CPAN/ \
+https?://downloads.yoctoproject.org/releases/uninative/ https://mirrors.kernel.org/yocto/uninative/ \
+https?://downloads.yoctoproject.org/mirror/sources/ https://mirrors.kernel.org/yocto-sources/ \
+"
+
+# Use MIRRORS to provide git repo fallbacks using the https protocol, for cases
+# where git native protocol fetches may fail due to local firewall rules, etc.
+
+MIRRORS += "\
+git://salsa.debian.org/.*     git://salsa.debian.org/PATH;protocol=https \
+git://git.gnome.org/.*        git://gitlab.gnome.org/GNOME/PATH;protocol=https \
+git://.*/.*                   git://HOST/PATH;protocol=https \
+git://.*/.*                   git://HOST/git/PATH;protocol=https \
+"
+
+# Switch glibc and binutils recipes to use shallow clones as they're large and this
+# improves user experience whilst allowing the flexibility of git urls in the recipes
+BB_GIT_SHALLOW:pn-binutils = "1"
+BB_GIT_SHALLOW:pn-binutils-cross-${TARGET_ARCH} = "1"
+BB_GIT_SHALLOW:pn-binutils-cross-canadian-${TRANSLATED_TARGET_ARCH} = "1"
+BB_GIT_SHALLOW:pn-binutils-cross-testsuite = "1"
+BB_GIT_SHALLOW:pn-binutils-crosssdk-${SDK_SYS} = "1"
+BB_GIT_SHALLOW:pn-glibc = "1"
+PREMIRRORS += "git://sourceware.org/git/glibc.git https://downloads.yoctoproject.org/mirror/sources/ \
+              git://sourceware.org/git/binutils-gdb.git https://downloads.yoctoproject.org/mirror/sources/"
diff --git a/poky/meta/classes-global/package.bbclass b/poky/meta/classes-global/package.bbclass
new file mode 100644
index 0000000..2d985d8
--- /dev/null
+++ b/poky/meta/classes-global/package.bbclass
@@ -0,0 +1,2546 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# Packaging process
+#
+# Executive summary: This class iterates over the functions listed in PACKAGEFUNCS,
+# taking D and splitting it up into the packages listed in PACKAGES, placing the
+# resulting output in PKGDEST.
+#
+# There are the following default steps but PACKAGEFUNCS can be extended:
+#
+# a) package_convert_pr_autoinc - convert AUTOINC in PKGV to ${PRSERV_PV_AUTOINC}
+#
+# b) perform_packagecopy - Copy D into PKGD
+#
+# c) package_do_split_locales - Split out the locale files, updates FILES and PACKAGES
+#
+# d) split_and_strip_files - split the files into runtime and debug and strip them.
+#    Debug files include debug info split, and associated sources that end up in -dbg packages
+#
+# e) fixup_perms - Fix up permissions in the package before we split it.
+#
+# f) populate_packages - Split the files in PKGD into separate packages in PKGDEST/<pkgname>
+#    Also triggers the binary stripping code to put files in -dbg packages.
+#
+# g) package_do_filedeps - Collect perfile run-time dependency metadata
+#    The data is stored in FILER{PROVIDES,DEPENDS}_file_pkg variables with
+#    a list of affected files in FILER{PROVIDES,DEPENDS}FLIST_pkg
+#
+# h) package_do_shlibs - Look at the shared libraries generated and automatically add any
+#    dependencies found. Also stores the package name so anyone else using this library
+#    knows which package to depend on.
+#
+# i) package_do_pkgconfig - Keep track of which packages need and provide which .pc files
+#
+# j) read_shlibdeps - Reads the stored shlibs information into the metadata
+#
+# k) package_depchains - Adds automatic dependencies to -dbg and -dev packages
+#
+# l) emit_pkgdata - saves the packaging data into PKGDATA_DIR for use in later
+#    packaging steps
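+#
+# Illustrative sketch (hypothetical function name, not part of this class): a recipe
+# or class can extend the list with its own step, e.g.
+#   PACKAGEFUNCS:append = " my_extra_package_step"
+#   python my_extra_package_step () {
+#       bb.note("custom packaging step for %s" % d.getVar("PN"))
+#   }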
+
+inherit packagedata
+inherit chrpath
+inherit package_pkgdata
+inherit insane
+
+PKGD    = "${WORKDIR}/package"
+PKGDEST = "${WORKDIR}/packages-split"
+
+LOCALE_SECTION ?= ''
+
+ALL_MULTILIB_PACKAGE_ARCHS = "${@all_multilib_tune_values(d, 'PACKAGE_ARCHS')}"
+
+# rpm is used for the per-file dependency identification
+# dwarfsrcfiles is used to determine the list of debug source files
+PACKAGE_DEPENDS += "rpm-native dwarfsrcfiles-native"
+
+
+# If your postinstall can execute at rootfs creation time rather than on
+# target but depends on a native/cross tool in order to execute, you need to
+# list that tool in PACKAGE_WRITE_DEPS. Target package dependencies belong
+# in the package dependencies as normal, this is just for native/cross support
+# tools at rootfs build time.
+PACKAGE_WRITE_DEPS ??= ""
+
+def legitimize_package_name(s):
+    """
+    Make sure package names are legitimate strings
+    """
+    import re
+
+    def fixutf(m):
+        cp = m.group(1)
+        if cp:
+            return ('\\u%s' % cp).encode('latin-1').decode('unicode_escape')
+
+    # Handle unicode codepoints encoded as <U0123>, as in glibc locale files.
+    s = re.sub(r'<U([0-9A-Fa-f]{1,4})>', fixutf, s)
+
+    # Remaining package name validity fixes
+    return s.lower().replace('_', '-').replace('@', '+').replace(',', '+').replace('/', '-')
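+
+# Illustrative example (hypothetical input): legitimize_package_name("Foo_Bar@2,5/a")
+# returns "foo-bar+2+5-a"; glibc-style "<U00E9>" codepoints are decoded first.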
+
+def do_split_packages(d, root, file_regex, output_pattern, description, postinst=None, recursive=False, hook=None, extra_depends=None, aux_files_pattern=None, postrm=None, allow_dirs=False, prepend=False, match_path=False, aux_files_pattern_verbatim=None, allow_links=False, summary=None):
+    """
+    Used in .bb files to split up dynamically generated subpackages of a
+    given package, usually plugins or modules.
+
+    Arguments:
+    root           -- the path in which to search
+    file_regex     -- regular expression to match searched files. Use
+                      parentheses () to mark the part of this expression
+                      that should be used to derive the module name (to be
+                      substituted where %s is used in other function
+                      arguments as noted below)
+    output_pattern -- pattern to use for the package names. Must include %s.
+    description    -- description to set for each package. Must include %s.
+    postinst       -- postinstall script to use for all packages (as a
+                      string)
+    recursive      -- True to perform a recursive search - default False
+    hook           -- a hook function to be called for every match. The
+                      function will be called with the following arguments
+                      (in the order listed):
+                        f: full path to the file/directory match
+                        pkg: the package name
+                        file_regex: as above
+                        output_pattern: as above
+                        modulename: the module name derived using file_regex
+    extra_depends  -- extra runtime dependencies (RDEPENDS) to be set for
+                      all packages. The default value of None causes a
+                      dependency on the main package (${PN}) - if you do
+                      not want this, pass '' for this parameter.
+    aux_files_pattern -- extra item(s) to be added to FILES for each
+                      package. Can be a single string item or a list of
+                      strings for multiple items.  Must include %s.
+    postrm         -- postrm script to use for all packages (as a string)
+    allow_dirs     -- True allow directories to be matched - default False
+    prepend        -- if True, prepend created packages to PACKAGES instead
+                      of the default False which appends them
+    match_path     -- match file_regex on the whole relative path to the
+                      root rather than just the file name
+    aux_files_pattern_verbatim -- extra item(s) to be added to FILES for
+                      each package, using the actual derived module name
+                      rather than converting it to something legal for a
+                      package name. Can be a single string item or a list
+                      of strings for multiple items. Must include %s.
+    allow_links    -- True to allow symlinks to be matched - default False
+    summary        -- Summary to set for each package. Must include %s;
+                      defaults to description if not set.
+
+    """
+
+    dvar = d.getVar('PKGD')
+    root = d.expand(root)
+    output_pattern = d.expand(output_pattern)
+    extra_depends = d.expand(extra_depends)
+
+    # If the root directory doesn't exist, don't error out later but silently do
+    # no splitting.
+    if not os.path.exists(dvar + root):
+        return []
+
+    ml = d.getVar("MLPREFIX")
+    if ml:
+        if not output_pattern.startswith(ml):
+            output_pattern = ml + output_pattern
+
+        newdeps = []
+        for dep in (extra_depends or "").split():
+            if dep.startswith(ml):
+                newdeps.append(dep)
+            else:
+                newdeps.append(ml + dep)
+        if newdeps:
+            extra_depends = " ".join(newdeps)
+
+
+    packages = d.getVar('PACKAGES').split()
+    split_packages = set()
+
+    if postinst:
+        postinst = '#!/bin/sh\n' + postinst + '\n'
+    if postrm:
+        postrm = '#!/bin/sh\n' + postrm + '\n'
+    if not recursive:
+        objs = os.listdir(dvar + root)
+    else:
+        objs = []
+        for walkroot, dirs, files in os.walk(dvar + root):
+            for file in files:
+                relpath = os.path.join(walkroot, file).replace(dvar + root + '/', '', 1)
+                if relpath:
+                    objs.append(relpath)
+
+    if extra_depends == None:
+        extra_depends = d.getVar("PN")
+
+    if not summary:
+        summary = description
+
+    for o in sorted(objs):
+        import re, stat
+        if match_path:
+            m = re.match(file_regex, o)
+        else:
+            m = re.match(file_regex, os.path.basename(o))
+
+        if not m:
+            continue
+        f = os.path.join(dvar + root, o)
+        mode = os.lstat(f).st_mode
+        if not (stat.S_ISREG(mode) or (allow_links and stat.S_ISLNK(mode)) or (allow_dirs and stat.S_ISDIR(mode))):
+            continue
+        on = legitimize_package_name(m.group(1))
+        pkg = output_pattern % on
+        split_packages.add(pkg)
+        if not pkg in packages:
+            if prepend:
+                packages = [pkg] + packages
+            else:
+                packages.append(pkg)
+        oldfiles = d.getVar('FILES:' + pkg)
+        newfile = os.path.join(root, o)
+        # These names will be passed through glob() so if the filename actually
+        # contains * or ? (rare, but possible) we need to handle that specially
+        newfile = newfile.replace('*', '[*]')
+        newfile = newfile.replace('?', '[?]')
+        if not oldfiles:
+            the_files = [newfile]
+            if aux_files_pattern:
+                if type(aux_files_pattern) is list:
+                    for fp in aux_files_pattern:
+                        the_files.append(fp % on)
+                else:
+                    the_files.append(aux_files_pattern % on)
+            if aux_files_pattern_verbatim:
+                if type(aux_files_pattern_verbatim) is list:
+                    for fp in aux_files_pattern_verbatim:
+                        the_files.append(fp % m.group(1))
+                else:
+                    the_files.append(aux_files_pattern_verbatim % m.group(1))
+            d.setVar('FILES:' + pkg, " ".join(the_files))
+        else:
+            d.setVar('FILES:' + pkg, oldfiles + " " + newfile)
+        if extra_depends != '':
+            d.appendVar('RDEPENDS:' + pkg, ' ' + extra_depends)
+        if not d.getVar('DESCRIPTION:' + pkg):
+            d.setVar('DESCRIPTION:' + pkg, description % on)
+        if not d.getVar('SUMMARY:' + pkg):
+            d.setVar('SUMMARY:' + pkg, summary % on)
+        if postinst:
+            d.setVar('pkg_postinst:' + pkg, postinst)
+        if postrm:
+            d.setVar('pkg_postrm:' + pkg, postrm)
+        if callable(hook):
+            hook(f, pkg, file_regex, output_pattern, m.group(1))
+
+    d.setVar('PACKAGES', ' '.join(packages))
+    return list(split_packages)
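+
+# Illustrative sketch (hypothetical recipe, names assumed): splitting plugins out of
+# ${libdir}/myapp/plugins into per-plugin packages could look like
+#   python populate_packages:prepend () {
+#       do_split_packages(d, d.expand('${libdir}/myapp/plugins'),
+#                         r'^lib(.*)\.so$', 'myapp-plugin-%s',
+#                         'MyApp plugin for %s', extra_depends='')
+#   }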
+
+PACKAGE_DEPENDS += "file-native"
+
+python () {
+    if d.getVar('PACKAGES') != '':
+        deps = ""
+        for dep in (d.getVar('PACKAGE_DEPENDS') or "").split():
+            deps += " %s:do_populate_sysroot" % dep
+        if d.getVar('PACKAGE_MINIDEBUGINFO') == '1':
+            deps += ' xz-native:do_populate_sysroot'
+        d.appendVarFlag('do_package', 'depends', deps)
+
+        # shlibs requires any DEPENDS to have already packaged for the *.list files
+        d.appendVarFlag('do_package', 'deptask', " do_packagedata")
+}
+
+# Get a list of files from file vars by searching files under current working directory
+# The list contains symlinks, directories and normal files.
+def files_from_filevars(filevars):
+    import os,glob
+    cpath = oe.cachedpath.CachedPath()
+    files = []
+    for f in filevars:
+        if os.path.isabs(f):
+            f = '.' + f
+        if not f.startswith("./"):
+            f = './' + f
+        globbed = glob.glob(f)
+        if globbed:
+            if [ f ] != globbed:
+                files += globbed
+                continue
+        files.append(f)
+
+    symlink_paths = []
+    for ind, f in enumerate(files):
+        # Handle directory symlinks. Truncate path to the lowest level symlink
+        parent = ''
+        for dirname in f.split('/')[:-1]:
+            parent = os.path.join(parent, dirname)
+            if dirname == '.':
+                continue
+            if cpath.islink(parent):
+                bb.warn("FILES contains file '%s' which resides under a "
+                        "directory symlink. Please fix the recipe and use the "
+                        "real path for the file." % f[1:])
+                symlink_paths.append(f)
+                files[ind] = parent
+                f = parent
+                break
+
+        if not cpath.islink(f):
+            if cpath.isdir(f):
+                newfiles = [ os.path.join(f,x) for x in os.listdir(f) ]
+                if newfiles:
+                    files += newfiles
+
+    return files, symlink_paths
+
+# Called in package_<rpm,ipk,deb>.bbclass to get the correct list of configuration files
+def get_conffiles(pkg, d):
+    pkgdest = d.getVar('PKGDEST')
+    root = os.path.join(pkgdest, pkg)
+    cwd = os.getcwd()
+    os.chdir(root)
+
+    conffiles = d.getVar('CONFFILES:%s' % pkg);
+    if conffiles == None:
+        conffiles = d.getVar('CONFFILES')
+    if conffiles == None:
+        conffiles = ""
+    conffiles = conffiles.split()
+    conf_orig_list = files_from_filevars(conffiles)[0]
+
+    # Remove links and directories from conf_orig_list to get conf_list which only contains normal files
+    conf_list = []
+    for f in conf_orig_list:
+        if os.path.isdir(f):
+            continue
+        if os.path.islink(f):
+            continue
+        if not os.path.exists(f):
+            continue
+        conf_list.append(f)
+
+    # Remove the leading './'
+    for i in range(0, len(conf_list)):
+        conf_list[i] = conf_list[i][1:]
+
+    os.chdir(cwd)
+    return conf_list
+
+def checkbuildpath(file, d):
+    tmpdir = d.getVar('TMPDIR')
+    with open(file) as f:
+        file_content = f.read()
+        if tmpdir in file_content:
+            return True
+
+    return False
+
+def parse_debugsources_from_dwarfsrcfiles_output(dwarfsrcfiles_output):
+    debugfiles = {}
+
+    for line in dwarfsrcfiles_output.splitlines():
+        if line.startswith("\t"):
+            debugfiles[os.path.normpath(line.split()[0])] = ""
+
+    return debugfiles.keys()
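+
+# Illustrative sketch of the expected dwarfsrcfiles output (paths assumed): unindented
+# lines name the object/binary and tab-indented lines name its source files, e.g.
+#   /usr/bin/foo
+#   <TAB>../foo-1.0/src/main.c
+# and only the tab-indented paths are collected (normalised and de-duplicated).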
+
+def source_info(file, d, fatal=True):
+    import subprocess
+
+    cmd = ["dwarfsrcfiles", file]
+    try:
+        output = subprocess.check_output(cmd, universal_newlines=True, stderr=subprocess.STDOUT)
+        retval = 0
+    except subprocess.CalledProcessError as exc:
+        output = exc.output
+        retval = exc.returncode
+
+    # 255 means a specific file wasn't fully parsed to get the debug file list, which is not a fatal failure
+    if retval != 0 and retval != 255:
+        msg = "dwarfsrcfiles failed with exit code %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else "")
+        if fatal:
+            bb.fatal(msg)
+        bb.note(msg)
+
+    debugsources = parse_debugsources_from_dwarfsrcfiles_output(output)
+
+    return list(debugsources)
+
+def splitdebuginfo(file, dvar, dv, d):
+    # Function to split a single file into two components, one is the stripped
+    # target system binary, the other contains any debugging information. The
+    # two files are linked to reference each other.
+    #
+    # return a mapping of files:debugsources
+
+    import stat
+    import subprocess
+
+    src = file[len(dvar):]
+    dest = dv["libdir"] + os.path.dirname(src) + dv["dir"] + "/" + os.path.basename(src) + dv["append"]
+    debugfile = dvar + dest
+    sources = []
+
+    if file.endswith(".ko") and file.find("/lib/modules/") != -1:
+        if oe.package.is_kernel_module_signed(file):
+            bb.debug(1, "Skip strip on signed module %s" % file)
+            return (file, sources)
+
+    # Split the file...
+    bb.utils.mkdirhier(os.path.dirname(debugfile))
+    #bb.note("Split %s -> %s" % (file, debugfile))
+    # Only store off the hard link reference if we successfully split!
+
+    dvar = d.getVar('PKGD')
+    objcopy = d.getVar("OBJCOPY")
+
+    newmode = None
+    if not os.access(file, os.W_OK) or os.access(file, os.R_OK):
+        origmode = os.stat(file)[stat.ST_MODE]
+        newmode = origmode | stat.S_IWRITE | stat.S_IREAD
+        os.chmod(file, newmode)
+
+    # We need to extract the debug src information here...
+    if dv["srcdir"]:
+        sources = source_info(file, d)
+
+    bb.utils.mkdirhier(os.path.dirname(debugfile))
+
+    subprocess.check_output([objcopy, '--only-keep-debug', file, debugfile], stderr=subprocess.STDOUT)
+
+    # Set the debuglink to have the view of the file path on the target
+    subprocess.check_output([objcopy, '--add-gnu-debuglink', debugfile, file], stderr=subprocess.STDOUT)
+
+    if newmode:
+        os.chmod(file, origmode)
+
+    return (file, sources)
+
+def splitstaticdebuginfo(file, dvar, dv, d):
+    # Unlike the function above, there is no way to split a static library into
+    # two components.  So to get similar results we will copy the unmodified
+    # static library (containing the debug symbols) into a new directory.
+    # We will then strip (preserving symbols) the static library in the
+    # typical location.
+    #
+    # return a mapping of files:debugsources
+
+    import stat
+    import shutil
+
+    src = file[len(dvar):]
+    dest = dv["staticlibdir"] + os.path.dirname(src) + dv["staticdir"] + "/" + os.path.basename(src) + dv["staticappend"]
+    debugfile = dvar + dest
+    sources = []
+
+    # Copy the file...
+    bb.utils.mkdirhier(os.path.dirname(debugfile))
+    #bb.note("Copy %s -> %s" % (file, debugfile))
+
+    dvar = d.getVar('PKGD')
+
+    newmode = None
+    if not os.access(file, os.W_OK) or os.access(file, os.R_OK):
+        origmode = os.stat(file)[stat.ST_MODE]
+        newmode = origmode | stat.S_IWRITE | stat.S_IREAD
+        os.chmod(file, newmode)
+
+    # We need to extract the debug src information here...
+    if dv["srcdir"]:
+        sources = source_info(file, d)
+
+    bb.utils.mkdirhier(os.path.dirname(debugfile))
+
+    # Copy the unmodified item to the debug directory
+    shutil.copy2(file, debugfile)
+
+    if newmode:
+        os.chmod(file, origmode)
+
+    return (file, sources)
+
+def inject_minidebuginfo(file, dvar, dv, d):
+    # Extract just the symbols from debuginfo into minidebuginfo,
+    # compress it with xz and inject it back into the binary in a .gnu_debugdata section.
+    # https://sourceware.org/gdb/onlinedocs/gdb/MiniDebugInfo.html
+
+    import subprocess
+
+    readelf = d.getVar('READELF')
+    nm = d.getVar('NM')
+    objcopy = d.getVar('OBJCOPY')
+
+    minidebuginfodir = d.expand('${WORKDIR}/minidebuginfo')
+
+    src = file[len(dvar):]
+    dest = dv["libdir"] + os.path.dirname(src) + dv["dir"] + "/" + os.path.basename(src) + dv["append"]
+    debugfile = dvar + dest
+    minidebugfile = minidebuginfodir + src + '.minidebug'
+    bb.utils.mkdirhier(os.path.dirname(minidebugfile))
+
+    # If we didn't produce debuginfo for any reason, we can't produce minidebuginfo either
+    # so skip it.
+    if not os.path.exists(debugfile):
+        bb.debug(1, 'ELF file {} has no debuginfo, skipping minidebuginfo injection'.format(file))
+        return
+
+    # Find non-allocated PROGBITS, NOTE, and NOBITS sections in the debuginfo.
+    # We will exclude all of these from minidebuginfo to save space.
+    remove_section_names = []
+    for line in subprocess.check_output([readelf, '-W', '-S', debugfile], universal_newlines=True).splitlines():
+        fields = line.split()
+        if len(fields) < 8:
+            continue
+        name = fields[0]
+        type = fields[1]
+        flags = fields[7]
+        # .debug_ sections will be removed by objcopy -S so no need to explicitly remove them
+        if name.startswith('.debug_'):
+            continue
+        if 'A' not in flags and type in ['PROGBITS', 'NOTE', 'NOBITS']:
+            remove_section_names.append(name)
+
+    # List dynamic symbols in the binary. We can exclude these from minidebuginfo
+    # because they are always present in the binary.
+    dynsyms = set()
+    for line in subprocess.check_output([nm, '-D', file, '--format=posix', '--defined-only'], universal_newlines=True).splitlines():
+        dynsyms.add(line.split()[0])
+
+    # Find all function symbols from debuginfo which aren't in the dynamic symbols table.
+    # These are the ones we want to keep in minidebuginfo.
+    keep_symbols_file = minidebugfile + '.symlist'
+    found_any_symbols = False
+    with open(keep_symbols_file, 'w') as f:
+        for line in subprocess.check_output([nm, debugfile, '--format=sysv', '--defined-only'], universal_newlines=True).splitlines():
+            fields = line.split('|')
+            if len(fields) < 7:
+                continue
+            name = fields[0].strip()
+            type = fields[3].strip()
+            if type == 'FUNC' and name not in dynsyms:
+                f.write('{}\n'.format(name))
+                found_any_symbols = True
+
+    if not found_any_symbols:
+        bb.debug(1, 'ELF file {} contains no symbols, skipping minidebuginfo injection'.format(file))
+        return
+
+    bb.utils.remove(minidebugfile)
+    bb.utils.remove(minidebugfile + '.xz')
+
+    subprocess.check_call([objcopy, '-S'] +
+                          ['--remove-section={}'.format(s) for s in remove_section_names] +
+                          ['--keep-symbols={}'.format(keep_symbols_file), debugfile, minidebugfile])
+
+    subprocess.check_call(['xz', '--keep', minidebugfile])
+
+    subprocess.check_call([objcopy, '--add-section', '.gnu_debugdata={}.xz'.format(minidebugfile), file])
+
+def copydebugsources(debugsrcdir, sources, d):
+    # The debug src information written out to sourcefile is further processed
+    # and copied to the destination here.
+
+    import stat
+    import subprocess
+
+    if debugsrcdir and sources:
+        sourcefile = d.expand("${WORKDIR}/debugsources.list")
+        bb.utils.remove(sourcefile)
+
+        # filenames are null-separated - this is an artefact of the previous use
+        # of rpm's debugedit, which was writing them out that way, and the code elsewhere
+        # is still assuming that.
+        debuglistoutput = '\0'.join(sources) + '\0'
+        with open(sourcefile, 'a') as sf:
+            sf.write(debuglistoutput)
+
+        dvar = d.getVar('PKGD')
+        strip = d.getVar("STRIP")
+        objcopy = d.getVar("OBJCOPY")
+        workdir = d.getVar("WORKDIR")
+        sdir = d.getVar("S")
+        cflags = d.expand("${CFLAGS}")
+
+        prefixmap = {}
+        for flag in cflags.split():
+            if not flag.startswith("-fdebug-prefix-map"):
+                continue
+            if "recipe-sysroot" in flag:
+                continue
+            flag = flag.split("=")
+            prefixmap[flag[1]] = flag[2]
+
+        nosuchdir = []
+        basepath = dvar
+        for p in debugsrcdir.split("/"):
+            basepath = basepath + "/" + p
+            if not cpath.exists(basepath):
+                nosuchdir.append(basepath)
+        bb.utils.mkdirhier(basepath)
+        cpath.updatecache(basepath)
+
+        for pmap in prefixmap:
+            # Ignore files from the recipe sysroots (target and native)
+            cmd =  "LC_ALL=C ; sort -z -u '%s' | egrep -v -z '((<internal>|<built-in>)$|/.*recipe-sysroot.*/)' | " % sourcefile
+            # We need to ignore files that are not actually ours
+            # we do this by only paying attention to items from this package
+            cmd += "fgrep -zw '%s' | " % prefixmap[pmap]
+            # Remove prefix in the source paths
+            cmd += "sed 's#%s/##g' | " % (prefixmap[pmap])
+            cmd += "(cd '%s' ; cpio -pd0mlL --no-preserve-owner '%s%s' 2>/dev/null)" % (pmap, dvar, prefixmap[pmap])
+
+            try:
+                subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
+            except subprocess.CalledProcessError:
+                # Can "fail" if internal headers/transient sources are attempted
+                pass
+            # cpio seems to have a bug with -lL together and symbolic links are just copied, not dereferenced.
+            # Work around this by manually finding and copying any symbolic links that made it through.
+            cmd = "find %s%s -type l -print0 -delete | sed s#%s%s/##g | (cd '%s' ; cpio -pd0mL --no-preserve-owner '%s%s')" % \
+                    (dvar, prefixmap[pmap], dvar, prefixmap[pmap], pmap, dvar, prefixmap[pmap])
+            subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
+
+        # debugsources.list may be polluted from the host if we used externalsrc,
+        # cpio uses copy-pass and may have just created a directory structure
+        # matching the one from the host; if that's the case move those files to
+        # debugsrcdir to avoid host contamination.
+        # Empty dir structure will be deleted in the next step.
+
+        # Same check as above for externalsrc
+        if workdir not in sdir:
+            if os.path.exists(dvar + debugsrcdir + sdir):
+                cmd = "mv %s%s%s/* %s%s" % (dvar, debugsrcdir, sdir, dvar,debugsrcdir)
+                subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
+
+        # The copy by cpio may have resulted in some empty directories!  Remove these
+        cmd = "find %s%s -empty -type d -delete" % (dvar, debugsrcdir)
+        subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
+
+        # Also remove debugsrcdir if its empty
+        for p in nosuchdir[::-1]:
+            if os.path.exists(p) and not os.listdir(p):
+                os.rmdir(p)
+
+#
+# Package data handling routines
+#
+
+def get_package_mapping (pkg, basepkg, d, depversions=None):
+    import oe.packagedata
+
+    data = oe.packagedata.read_subpkgdata(pkg, d)
+    key = "PKG:%s" % pkg
+
+    if key in data:
+        if bb.data.inherits_class('allarch', d) and bb.data.inherits_class('packagegroup', d) and pkg != data[key]:
+            bb.error("An allarch packagegroup shouldn't depend on packages which are dynamically renamed (%s to %s)" % (pkg, data[key]))
+        # Have to avoid undoing the write_extra_pkgs(global_variants...)
+        if bb.data.inherits_class('allarch', d) and not d.getVar('MULTILIB_VARIANTS') \
+            and data[key] == basepkg:
+            return pkg
+        if depversions == []:
+            # Avoid returning a mapping if the renamed package rprovides its original name
+            rprovkey = "RPROVIDES:%s" % pkg
+            if rprovkey in data:
+                if pkg in bb.utils.explode_dep_versions2(data[rprovkey]):
+                    bb.note("%s rprovides %s, not replacing the latter" % (data[key], pkg))
+                    return pkg
+        # Do map to rewritten package name
+        return data[key]
+
+    return pkg
+
+def get_package_additional_metadata (pkg_type, d):
+    base_key = "PACKAGE_ADD_METADATA"
+    for key in ("%s_%s" % (base_key, pkg_type.upper()), base_key):
+        if d.getVar(key, False) is None:
+            continue
+        d.setVarFlag(key, "type", "list")
+        if d.getVarFlag(key, "separator") is None:
+            d.setVarFlag(key, "separator", "\\n")
+        metadata_fields = [field.strip() for field in oe.data.typed_value(key, d)]
+        return "\n".join(metadata_fields).strip()
+
+def runtime_mapping_rename (varname, pkg, d):
+    #bb.note("%s before: %s" % (varname, d.getVar(varname)))
+
+    new_depends = {}
+    deps = bb.utils.explode_dep_versions2(d.getVar(varname) or "")
+    for depend, depversions in deps.items():
+        new_depend = get_package_mapping(depend, pkg, d, depversions)
+        if depend != new_depend:
+            bb.note("package name mapping done: %s -> %s" % (depend, new_depend))
+        new_depends[new_depend] = deps[depend]
+
+    d.setVar(varname, bb.utils.join_deps(new_depends, commasep=False))
+
+    #bb.note("%s after: %s" % (varname, d.getVar(varname)))
+
+#
+# Used by do_packagedata (and possibly other routines post do_package)
+#
+
+PRSERV_ACTIVE = "${@bool(d.getVar("PRSERV_HOST"))}"
+PRSERV_ACTIVE[vardepvalue] = "${PRSERV_ACTIVE}"
+package_get_auto_pr[vardepsexclude] = "BB_TASKDEPDATA"
+package_get_auto_pr[vardeps] += "PRSERV_ACTIVE"
+python package_get_auto_pr() {
+    import oe.prservice
+
+    def get_do_package_hash(pn):
+        if d.getVar("BB_RUNTASK") != "do_package":
+            taskdepdata = d.getVar("BB_TASKDEPDATA", False)
+            for dep in taskdepdata:
+                if taskdepdata[dep][1] == "do_package" and taskdepdata[dep][0] == pn:
+                    return taskdepdata[dep][6]
+        return None
+
+    # Support per recipe PRSERV_HOST
+    pn = d.getVar('PN')
+    host = d.getVar("PRSERV_HOST_" + pn)
+    if not (host is None):
+        d.setVar("PRSERV_HOST", host)
+
+    pkgv = d.getVar("PKGV")
+
+    # PR Server not active, handle AUTOINC
+    if not d.getVar('PRSERV_HOST'):
+        d.setVar("PRSERV_PV_AUTOINC", "0")
+        return
+
+    auto_pr = None
+    pv = d.getVar("PV")
+    version = d.getVar("PRAUTOINX")
+    pkgarch = d.getVar("PACKAGE_ARCH")
+    checksum = get_do_package_hash(pn)
+
+    # If do_package isn't in the dependencies, we can't get the checksum...
+    if not checksum:
+        bb.warn('Task %s requested do_package unihash, but it was not available.' % d.getVar('BB_RUNTASK'))
+        #taskdepdata = d.getVar("BB_TASKDEPDATA", False)
+        #for dep in taskdepdata:
+        #    bb.warn('%s:%s = %s' % (taskdepdata[dep][0], taskdepdata[dep][1], taskdepdata[dep][6]))
+        return
+
+    if d.getVar('PRSERV_LOCKDOWN'):
+        auto_pr = d.getVar('PRAUTO_' + version + '_' + pkgarch) or d.getVar('PRAUTO_' + version) or None
+        if auto_pr is None:
+            bb.fatal("Can NOT get PRAUTO from lockdown exported file")
+        d.setVar('PRAUTO',str(auto_pr))
+        return
+
+    try:
+        conn = oe.prservice.prserv_make_conn(d)
+        if conn is not None:
+            if "AUTOINC" in pkgv:
+                srcpv = bb.fetch2.get_srcrev(d)
+                base_ver = "AUTOINC-%s" % version[:version.find(srcpv)]
+                value = conn.getPR(base_ver, pkgarch, srcpv)
+                d.setVar("PRSERV_PV_AUTOINC", str(value))
+
+            auto_pr = conn.getPR(version, pkgarch, checksum)
+            conn.close()
+    except Exception as e:
+        bb.fatal("Can NOT get PRAUTO, exception %s" %  str(e))
+    if auto_pr is None:
+        bb.fatal("Can NOT get PRAUTO from remote PR service")
+    d.setVar('PRAUTO',str(auto_pr))
+}
+
+#
+# Package functions suitable for inclusion in PACKAGEFUNCS
+#
+
+python package_convert_pr_autoinc() {
+    pkgv = d.getVar("PKGV")
+
+    # Adjust pkgv as necessary...
+    if 'AUTOINC' in pkgv:
+        d.setVar("PKGV", pkgv.replace("AUTOINC", "${PRSERV_PV_AUTOINC}"))
+
+    # Change PRSERV_PV_AUTOINC and EXTENDPRAUTO usage to special values
+    d.setVar('PRSERV_PV_AUTOINC', '@PRSERV_PV_AUTOINC@')
+    d.setVar('EXTENDPRAUTO', '@EXTENDPRAUTO@')
+}
+
+LOCALEBASEPN ??= "${PN}"
+
+python package_do_split_locales() {
+    if (d.getVar('PACKAGE_NO_LOCALE') == '1'):
+        bb.debug(1, "package requested not splitting locales")
+        return
+
+    packages = (d.getVar('PACKAGES') or "").split()
+
+    datadir = d.getVar('datadir')
+    if not datadir:
+        bb.note("datadir not defined")
+        return
+
+    dvar = d.getVar('PKGD')
+    pn = d.getVar('LOCALEBASEPN')
+
+    if pn + '-locale' in packages:
+        packages.remove(pn + '-locale')
+
+    localedir = os.path.join(dvar + datadir, 'locale')
+
+    if not cpath.isdir(localedir):
+        bb.debug(1, "No locale files in this package")
+        return
+
+    locales = os.listdir(localedir)
+
+    summary = d.getVar('SUMMARY') or pn
+    description = d.getVar('DESCRIPTION') or ""
+    locale_section = d.getVar('LOCALE_SECTION')
+    mlprefix = d.getVar('MLPREFIX') or ""
+    for l in sorted(locales):
+        ln = legitimize_package_name(l)
+        pkg = pn + '-locale-' + ln
+        packages.append(pkg)
+        d.setVar('FILES:' + pkg, os.path.join(datadir, 'locale', l))
+        d.setVar('RRECOMMENDS:' + pkg, '%svirtual-locale-%s' % (mlprefix, ln))
+        d.setVar('RPROVIDES:' + pkg, '%s-locale %s%s-translation' % (pn, mlprefix, ln))
+        d.setVar('SUMMARY:' + pkg, '%s - %s translations' % (summary, l))
+        d.setVar('DESCRIPTION:' + pkg, '%s  This package contains language translation files for the %s locale.' % (description, l))
+        if locale_section:
+            d.setVar('SECTION:' + pkg, locale_section)
+
+    d.setVar('PACKAGES', ' '.join(packages))
+
+    # Disabled by RP 18/06/07
+    # Wildcards aren't supported in debian
+    # They break with ipkg since glibc-locale* will mean that
+    # glibc-localedata-translit* won't install as a dependency
+    # for some other package which breaks meta-toolchain
+    # Probably breaks since virtual-locale- isn't provided anywhere
+    #rdep = (d.getVar('RDEPENDS:%s' % pn) or "").split()
+    #rdep.append('%s-locale*' % pn)
+    #d.setVar('RDEPENDS:%s' % pn, ' '.join(rdep))
+}
+
+python perform_packagecopy () {
+    import subprocess
+    import shutil
+
+    dest = d.getVar('D')
+    dvar = d.getVar('PKGD')
+
+    # Start package population by taking a copy of the installed
+    # files to operate on
+    # Preserve sparse files and hard links
+    cmd = 'tar --exclude=./sysroot-only -cf - -C %s -p -S . | tar -xf - -C %s' % (dest, dvar)
+    subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
+
+    # replace RPATHs for the nativesdk binaries, to make them relocatable
+    if bb.data.inherits_class('nativesdk', d) or bb.data.inherits_class('cross-canadian', d):
+        rpath_replace (dvar, d)
+}
+perform_packagecopy[cleandirs] = "${PKGD}"
+perform_packagecopy[dirs] = "${PKGD}"
+
+# We generate a master list of directories to process. We start by
+# seeding this list with reasonable defaults, then load from
+# the fs-perms.txt files.
+python fixup_perms () {
+    import pwd, grp
+
+    # init using a string with the same format as a line, as documented in
+    # the fs-perms.txt file
+    # <path> <mode> <uid> <gid> <walk> <fmode> <fuid> <fgid>
+    # <path> link <link target>
+    #
+    # __str__ can be used to print out an entry in the input format
+    #
+    # if fs_perms_entry.path is None:
+    #    an error occurred
+    # if fs_perms_entry.link, you can retrieve:
+    #    fs_perms_entry.path = path
+    #    fs_perms_entry.link = target of link
+    # if not fs_perms_entry.link, you can retrieve:
+    #    fs_perms_entry.path = path
+    #    fs_perms_entry.mode = expected dir mode or None
+    #    fs_perms_entry.uid = expected uid or -1
+    #    fs_perms_entry.gid = expected gid or -1
+    #    fs_perms_entry.walk = 'true' or something else
+    #    fs_perms_entry.fmode = expected file mode or None
+    #    fs_perms_entry.fuid = expected file uid or -1
+    #    fs_perms_entry.fgid = expected file gid or -1
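+    #
+    # Illustrative example lines only (not taken from any shipped fs-perms.txt),
+    # following the documented formats above:
+    #    ${localstatedir}/run    link /run
+    #    /usr/share/doc          0755 root root false - - -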
+    class fs_perms_entry():
+        def __init__(self, line):
+            lsplit = line.split()
+            if len(lsplit) == 3 and lsplit[1].lower() == "link":
+                self._setlink(lsplit[0], lsplit[2])
+            elif len(lsplit) == 8:
+                self._setdir(lsplit[0], lsplit[1], lsplit[2], lsplit[3], lsplit[4], lsplit[5], lsplit[6], lsplit[7])
+            else:
+                msg = "Fixup Perms: invalid config line %s" % line
+                oe.qa.handle_error("perm-config", msg, d)
+                self.path = None
+                self.link = None
+
+        def _setdir(self, path, mode, uid, gid, walk, fmode, fuid, fgid):
+            self.path = os.path.normpath(path)
+            self.link = None
+            self.mode = self._procmode(mode)
+            self.uid  = self._procuid(uid)
+            self.gid  = self._procgid(gid)
+            self.walk = walk.lower()
+            self.fmode = self._procmode(fmode)
+            self.fuid = self._procuid(fuid)
+            self.fgid = self._procgid(fgid)
+
+        def _setlink(self, path, link):
+            self.path = os.path.normpath(path)
+            self.link = link
+
+        def _procmode(self, mode):
+            if not mode or (mode and mode == "-"):
+                return None
+            else:
+                return int(mode,8)
+
+        # Note uid/gid -1 has special significance in os.lchown
+        def _procuid(self, uid):
+            if uid is None or uid == "-":
+                return -1
+            elif uid.isdigit():
+                return int(uid)
+            else:
+                return pwd.getpwnam(uid).pw_uid
+
+        def _procgid(self, gid):
+            if gid is None or gid == "-":
+                return -1
+            elif gid.isdigit():
+                return int(gid)
+            else:
+                return grp.getgrnam(gid).gr_gid
+
+        # Use for debugging the entries
+        def __str__(self):
+            if self.link:
+                return "%s link %s" % (self.path, self.link)
+            else:
+                mode = "-"
+                if self.mode:
+                    mode = "0%o" % self.mode
+                fmode = "-"
+                if self.fmode:
+                    fmode = "0%o" % self.fmode
+                uid = self._mapugid(self.uid)
+                gid = self._mapugid(self.gid)
+                fuid = self._mapugid(self.fuid)
+                fgid = self._mapugid(self.fgid)
+                return "%s %s %s %s %s %s %s %s" % (self.path, mode, uid, gid, self.walk, fmode, fuid, fgid)
+
+        def _mapugid(self, id):
+            if id is None or id == -1:
+                return "-"
+            else:
+                return "%d" % id
+
+    # Fix the permission, owner and group of path
+    def fix_perms(path, mode, uid, gid, dir):
+        if mode and not os.path.islink(path):
+            #bb.note("Fixup Perms: chmod 0%o %s" % (mode, dir))
+            os.chmod(path, mode)
+        # -1 is a special value that means don't change the uid/gid
+        # if they are BOTH -1, don't bother to lchown
+        if not (uid == -1 and gid == -1):
+            #bb.note("Fixup Perms: lchown %d:%d %s" % (uid, gid, dir))
+            os.lchown(path, uid, gid)
+
+    # Return a list of configuration files based on either the default
+    # files/fs-perms.txt or the contents of FILESYSTEM_PERMS_TABLES
+    # paths are resolved via BBPATH
+    def get_fs_perms_list(d):
+        str = ""
+        bbpath = d.getVar('BBPATH')
+        fs_perms_tables = d.getVar('FILESYSTEM_PERMS_TABLES') or ""
+        for conf_file in fs_perms_tables.split():
+            confpath = bb.utils.which(bbpath, conf_file)
+            if confpath:
+                str += " %s" % bb.utils.which(bbpath, conf_file)
+            else:
+                bb.warn("cannot find %s specified in FILESYSTEM_PERMS_TABLES" % conf_file)
+        return str
+
+
+
+    dvar = d.getVar('PKGD')
+
+    fs_perms_table = {}
+    fs_link_table = {}
+
+    # By default all of the standard directories specified in
+    # bitbake.conf will get 0755 root:root.
+    target_path_vars = [    'base_prefix',
+                'prefix',
+                'exec_prefix',
+                'base_bindir',
+                'base_sbindir',
+                'base_libdir',
+                'datadir',
+                'sysconfdir',
+                'servicedir',
+                'sharedstatedir',
+                'localstatedir',
+                'infodir',
+                'mandir',
+                'docdir',
+                'bindir',
+                'sbindir',
+                'libexecdir',
+                'libdir',
+                'includedir',
+                'oldincludedir' ]
+
+    for path in target_path_vars:
+        dir = d.getVar(path) or ""
+        if dir == "":
+            continue
+        fs_perms_table[dir] = fs_perms_entry(d.expand("%s 0755 root root false - - -" % (dir)))
+
+    # Now we actually load from the configuration files
+    for conf in get_fs_perms_list(d).split():
+        if not os.path.exists(conf):
+            continue
+        with open(conf) as f:
+            for line in f:
+                if line.startswith('#'):
+                    continue
+                lsplit = line.split()
+                if len(lsplit) == 0:
+                    continue
+                if len(lsplit) != 8 and not (len(lsplit) == 3 and lsplit[1].lower() == "link"):
+                    msg = "Fixup perms: %s invalid line: %s" % (conf, line)
+                    oe.qa.handle_error("perm-line", msg, d)
+                    continue
+                entry = fs_perms_entry(d.expand(line))
+                if entry and entry.path:
+                    if entry.link:
+                        fs_link_table[entry.path] = entry
+                        if entry.path in fs_perms_table:
+                            fs_perms_table.pop(entry.path)
+                    else:
+                        fs_perms_table[entry.path] = entry
+                        if entry.path in fs_link_table:
+                            fs_link_table.pop(entry.path)
+
+    # Debug -- list out in-memory table
+    #for dir in fs_perms_table:
+    #    bb.note("Fixup Perms: %s: %s" % (dir, str(fs_perms_table[dir])))
+    #for link in fs_link_table:
+    #    bb.note("Fixup Perms: %s: %s" % (link, str(fs_link_table[link])))
+
+    # We process links first, so we can go back and fixup directory ownership
+    # for any newly created directories
+    # Process in sorted order so /run gets created before /run/lock, etc.
+    for entry in sorted(fs_link_table.values(), key=lambda x: x.link):
+        link = entry.link
+        dir = entry.path
+        origin = dvar + dir
+        if not (cpath.exists(origin) and cpath.isdir(origin) and not cpath.islink(origin)):
+            continue
+
+        if link[0] == "/":
+            target = dvar + link
+            ptarget = link
+        else:
+            target = os.path.join(os.path.dirname(origin), link)
+            ptarget = os.path.join(os.path.dirname(dir), link)
+        if os.path.exists(target):
+            msg = "Fixup Perms: Unable to correct directory link, target already exists: %s -> %s" % (dir, ptarget)
+            oe.qa.handle_error("perm-link", msg, d)
+            continue
+
+        # Create path to move directory to, move it, and then set up the symlink
+        bb.utils.mkdirhier(os.path.dirname(target))
+        #bb.note("Fixup Perms: Rename %s -> %s" % (dir, ptarget))
+        bb.utils.rename(origin, target)
+        #bb.note("Fixup Perms: Link %s -> %s" % (dir, link))
+        os.symlink(link, origin)
+
+    for dir in fs_perms_table:
+        origin = dvar + dir
+        if not (cpath.exists(origin) and cpath.isdir(origin)):
+            continue
+
+        fix_perms(origin, fs_perms_table[dir].mode, fs_perms_table[dir].uid, fs_perms_table[dir].gid, dir)
+
+        if fs_perms_table[dir].walk == 'true':
+            for root, dirs, files in os.walk(origin):
+                for dr in dirs:
+                    each_dir = os.path.join(root, dr)
+                    fix_perms(each_dir, fs_perms_table[dir].mode, fs_perms_table[dir].uid, fs_perms_table[dir].gid, dir)
+                for f in files:
+                    each_file = os.path.join(root, f)
+                    fix_perms(each_file, fs_perms_table[dir].fmode, fs_perms_table[dir].fuid, fs_perms_table[dir].fgid, dir)
+}
+
+def package_debug_vars(d):
+    # We default to '.debug' style
+    if d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-file-directory':
+        # Single debug-file-directory style debug info
+        debug_vars = {
+            "append": ".debug",
+            "staticappend": "",
+            "dir": "",
+            "staticdir": "",
+            "libdir": "/usr/lib/debug",
+            "staticlibdir": "/usr/lib/debug-static",
+            "srcdir": "/usr/src/debug",
+        }
+    elif d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-without-src':
+        # Original OE-core, a.k.a. ".debug", style debug info, but without sources in /usr/src/debug
+        debug_vars = {
+            "append": "",
+            "staticappend": "",
+            "dir": "/.debug",
+            "staticdir": "/.debug-static",
+            "libdir": "",
+            "staticlibdir": "",
+            "srcdir": "",
+        }
+    elif d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-with-srcpkg':
+        debug_vars = {
+            "append": "",
+            "staticappend": "",
+            "dir": "/.debug",
+            "staticdir": "/.debug-static",
+            "libdir": "",
+            "staticlibdir": "",
+            "srcdir": "/usr/src/debug",
+        }
+    else:
+        # Original OE-core, a.k.a. ".debug", style debug info
+        debug_vars = {
+            "append": "",
+            "staticappend": "",
+            "dir": "/.debug",
+            "staticdir": "/.debug-static",
+            "libdir": "",
+            "staticlibdir": "",
+            "srcdir": "/usr/src/debug",
+        }
+
+    return debug_vars
+
+python split_and_strip_files () {
+    import stat, errno
+    import subprocess
+
+    dvar = d.getVar('PKGD')
+    pn = d.getVar('PN')
+    hostos = d.getVar('HOST_OS')
+
+    oldcwd = os.getcwd()
+    os.chdir(dvar)
+
+    dv = package_debug_vars(d)
+
+    #
+    # First let's figure out all of the files we may have to process ... do this only once!
+    #
+    elffiles = {}
+    symlinks = {}
+    staticlibs = []
+    inodes = {}
+    libdir = os.path.abspath(dvar + os.sep + d.getVar("libdir"))
+    baselibdir = os.path.abspath(dvar + os.sep + d.getVar("base_libdir"))
+    skipfiles = (d.getVar("INHIBIT_PACKAGE_STRIP_FILES") or "").split()
+    if (d.getVar('INHIBIT_PACKAGE_STRIP') != '1' or \
+            d.getVar('INHIBIT_PACKAGE_DEBUG_SPLIT') != '1'):
+        checkelf = {}
+        checkelflinks = {}
+        for root, dirs, files in cpath.walk(dvar):
+            for f in files:
+                file = os.path.join(root, f)
+
+                # Skip debug files
+                if dv["append"] and file.endswith(dv["append"]):
+                    continue
+                if dv["dir"] and dv["dir"] in os.path.dirname(file[len(dvar):]):
+                    continue
+
+                if file in skipfiles:
+                    continue
+
+                if oe.package.is_static_lib(file):
+                    staticlibs.append(file)
+                    continue
+
+                try:
+                    ltarget = cpath.realpath(file, dvar, False)
+                    s = cpath.lstat(ltarget)
+                except OSError as e:
+                    (err, strerror) = e.args
+                    if err != errno.ENOENT:
+                        raise
+                    # Skip broken symlinks
+                    continue
+                if not s:
+                    continue
+                # Check it's an executable
+                if (s[stat.ST_MODE] & stat.S_IXUSR) or (s[stat.ST_MODE] & stat.S_IXGRP) \
+                        or (s[stat.ST_MODE] & stat.S_IXOTH) \
+                        or ((file.startswith(libdir) or file.startswith(baselibdir)) \
+                        and (".so" in f or ".node" in f)) \
+                        or (f.startswith('vmlinux') or ".ko" in f):
+
+                    if cpath.islink(file):
+                        checkelflinks[file] = ltarget
+                        continue
+                    # Use a reference of device ID and inode number to identify files
+                    file_reference = "%d_%d" % (s.st_dev, s.st_ino)
+                    checkelf[file] = (file, file_reference)
+
+        results = oe.utils.multiprocess_launch(oe.package.is_elf, checkelflinks.values(), d)
+        results_map = {}
+        for (ltarget, elf_file) in results:
+            results_map[ltarget] = elf_file
+        for file in checkelflinks:
+            ltarget = checkelflinks[file]
+            # If it's a symlink, and points to an ELF file, we capture the readlink target
+            if results_map[ltarget]:
+                target = os.readlink(file)
+                #bb.note("Sym: %s (%d)" % (ltarget, results_map[ltarget]))
+                symlinks[file] = target
+
+        results = oe.utils.multiprocess_launch(oe.package.is_elf, checkelf.keys(), d)
+
+        # Sort results by file path. This ensures that the files are always
+        # processed in the same order, which is important to make sure builds
+        # are reproducible when dealing with hardlinks
+        results.sort(key=lambda x: x[0])
+
+        for (file, elf_file) in results:
+            # It's a file (or hardlink), not a link
+            # ...but is it ELF, and is it already stripped?
+            if elf_file & 1:
+                if elf_file & 2:
+                    if 'already-stripped' in (d.getVar('INSANE_SKIP:' + pn) or "").split():
+                        bb.note("Skipping file %s from %s for already-stripped QA test" % (file[len(dvar):], pn))
+                    else:
+                        msg = "File '%s' from %s was already stripped, this will prevent future debugging!" % (file[len(dvar):], pn)
+                        oe.qa.handle_error("already-stripped", msg, d)
+                    continue
+
+                # At this point we have an unstripped ELF file. We need to:
+                #  a) Make sure any file we strip is not hardlinked to anything else outside this tree
+                #  b) Only strip any hardlinked file once (no races)
+                #  c) Track any hardlinks between files so that we can reconstruct matching debug file hardlinks
+
+                # Use a reference of device ID and inode number to identify files
+                file_reference = checkelf[file][1]
+                if file_reference in inodes:
+                    os.unlink(file)
+                    os.link(inodes[file_reference][0], file)
+                    inodes[file_reference].append(file)
+                else:
+                    inodes[file_reference] = [file]
+                    # break hardlink
+                    bb.utils.break_hardlinks(file)
+                    elffiles[file] = elf_file
+                # We modified the file, so clear the cache
+                cpath.updatecache(file)
+
+    def strip_pkgd_prefix(f):
+        nonlocal dvar
+
+        if f.startswith(dvar):
+            return f[len(dvar):]
+
+        return f
+
+    #
+    # First let's process debug splitting
+    #
+    if (d.getVar('INHIBIT_PACKAGE_DEBUG_SPLIT') != '1'):
+        results = oe.utils.multiprocess_launch(splitdebuginfo, list(elffiles), d, extraargs=(dvar, dv, d))
+
+        if dv["srcdir"] and not hostos.startswith("mingw"):
+            if (d.getVar('PACKAGE_DEBUG_STATIC_SPLIT') == '1'):
+                results = oe.utils.multiprocess_launch(splitstaticdebuginfo, staticlibs, d, extraargs=(dvar, dv, d))
+            else:
+                for file in staticlibs:
+                    results.append( (file,source_info(file, d)) )
+
+        d.setVar("PKGDEBUGSOURCES", {strip_pkgd_prefix(f): sorted(s) for f, s in results})
+
+        sources = set()
+        for r in results:
+            sources.update(r[1])
+
+        # Hardlink our debug symbols to the other hardlink copies
+        for ref in inodes:
+            if len(inodes[ref]) == 1:
+                continue
+
+            target = inodes[ref][0][len(dvar):]
+            for file in inodes[ref][1:]:
+                src = file[len(dvar):]
+                dest = dv["libdir"] + os.path.dirname(src) + dv["dir"] + "/" + os.path.basename(target) + dv["append"]
+                fpath = dvar + dest
+                ftarget = dvar + dv["libdir"] + os.path.dirname(target) + dv["dir"] + "/" + os.path.basename(target) + dv["append"]
+                bb.utils.mkdirhier(os.path.dirname(fpath))
+                # Only one hardlink of separated debug info file in each directory
+                if not os.access(fpath, os.R_OK):
+                    #bb.note("Link %s -> %s" % (fpath, ftarget))
+                    os.link(ftarget, fpath)
+
+        # Create symlinks for all cases we were able to split symbols
+        for file in symlinks:
+            src = file[len(dvar):]
+            dest = dv["libdir"] + os.path.dirname(src) + dv["dir"] + "/" + os.path.basename(src) + dv["append"]
+            fpath = dvar + dest
+            # Skip it if the target doesn't exist
+            try:
+                s = os.stat(fpath)
+            except OSError as e:
+                (err, strerror) = e.args
+                if err != errno.ENOENT:
+                    raise
+                continue
+
+            ltarget = symlinks[file]
+            lpath = os.path.dirname(ltarget)
+            lbase = os.path.basename(ltarget)
+            ftarget = ""
+            if lpath and lpath != ".":
+                ftarget += lpath + dv["dir"] + "/"
+            ftarget += lbase + dv["append"]
+            if lpath.startswith(".."):
+                ftarget = os.path.join("..", ftarget)
+            bb.utils.mkdirhier(os.path.dirname(fpath))
+            #bb.note("Symlink %s -> %s" % (fpath, ftarget))
+            os.symlink(ftarget, fpath)
+
+        # Process the dv["srcdir"] if requested...
+        # This copies and places the referenced sources for later debugging...
+        copydebugsources(dv["srcdir"], sources, d)
+    #
+    # End of debug splitting
+    #
+
+    #
+    # Now let's go back over things and strip them
+    #
+    if (d.getVar('INHIBIT_PACKAGE_STRIP') != '1'):
+        strip = d.getVar("STRIP")
+        sfiles = []
+        for file in elffiles:
+            elf_file = int(elffiles[file])
+            #bb.note("Strip %s" % file)
+            sfiles.append((file, elf_file, strip))
+        if (d.getVar('PACKAGE_STRIP_STATIC') == '1' or d.getVar('PACKAGE_DEBUG_STATIC_SPLIT') == '1'):
+            for f in staticlibs:
+                sfiles.append((f, 16, strip))
+
+        oe.utils.multiprocess_launch(oe.package.runstrip, sfiles, d)
+
+    # Build "minidebuginfo" and reinject it back into the stripped binaries
+    if d.getVar('PACKAGE_MINIDEBUGINFO') == '1':
+        oe.utils.multiprocess_launch(inject_minidebuginfo, list(elffiles), d,
+                                     extraargs=(dvar, dv, d))
+
+    #
+    # End of strip
+    #
+    os.chdir(oldcwd)
+}
+
+python populate_packages () {
+    import glob, re
+
+    workdir = d.getVar('WORKDIR')
+    outdir = d.getVar('DEPLOY_DIR')
+    dvar = d.getVar('PKGD')
+    packages = d.getVar('PACKAGES').split()
+    pn = d.getVar('PN')
+
+    bb.utils.mkdirhier(outdir)
+    os.chdir(dvar)
+
+    autodebug = not (d.getVar("NOAUTOPACKAGEDEBUG") or False)
+
+    split_source_package = (d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-with-srcpkg')
+
+    # If debug-with-srcpkg mode is enabled then add the source package if it
+    # doesn't exist and add the source file contents to the source package.
+    if split_source_package:
+        src_package_name = ('%s-src' % d.getVar('PN'))
+        if not src_package_name in packages:
+            packages.append(src_package_name)
+        d.setVar('FILES:%s' % src_package_name, '/usr/src/debug')
+
+    # Sanity check PACKAGES for duplicates
+    # This check should be moved to sanity.bbclass once we have the infrastructure
+    package_dict = {}
+
+    for i, pkg in enumerate(packages):
+        if pkg in package_dict:
+            msg = "%s is listed in PACKAGES multiple times, this leads to packaging errors." % pkg
+            oe.qa.handle_error("packages-list", msg, d)
+        # Ensure the source package gets the chance to pick up the source files
+        # before the debug package by ordering it first in PACKAGES. Whether it
+        # actually picks up any source files is controlled by
+        # PACKAGE_DEBUG_SPLIT_STYLE.
+        elif pkg.endswith("-src"):
+            package_dict[pkg] = (10, i)
+        elif autodebug and pkg.endswith("-dbg"):
+            package_dict[pkg] = (30, i)
+        else:
+            package_dict[pkg] = (50, i)
+    packages = sorted(package_dict.keys(), key=package_dict.get)
+    d.setVar('PACKAGES', ' '.join(packages))
+    pkgdest = d.getVar('PKGDEST')
+
+    seen = []
+
+    # os.mkdir masks the permissions with umask so we have to unset it first
+    oldumask = os.umask(0)
+
+    debug = []
+    for root, dirs, files in cpath.walk(dvar):
+        dir = root[len(dvar):]
+        if not dir:
+            dir = os.sep
+        for f in (files + dirs):
+            path = "." + os.path.join(dir, f)
+            if "/.debug/" in path or "/.debug-static/" in path or path.endswith("/.debug"):
+                debug.append(path)
+
+    for pkg in packages:
+        root = os.path.join(pkgdest, pkg)
+        bb.utils.mkdirhier(root)
+
+        filesvar = d.getVar('FILES:%s' % pkg) or ""
+        if "//" in filesvar:
+            msg = "FILES variable for package %s contains '//' which is invalid. Attempting to fix this but you should correct the metadata.\n" % pkg
+            oe.qa.handle_error("files-invalid", msg, d)
+            filesvar = filesvar.replace("//", "/")
+
+        origfiles = filesvar.split()
+        files, symlink_paths = files_from_filevars(origfiles)
+
+        if autodebug and pkg.endswith("-dbg"):
+            files.extend(debug)
+
+        for file in files:
+            if (not cpath.islink(file)) and (not cpath.exists(file)):
+                continue
+            if file in seen:
+                continue
+            seen.append(file)
+
+            def mkdir(src, dest, p):
+                src = os.path.join(src, p)
+                dest = os.path.join(dest, p)
+                fstat = cpath.stat(src)
+                os.mkdir(dest)
+                os.chmod(dest, fstat.st_mode)
+                os.chown(dest, fstat.st_uid, fstat.st_gid)
+                if p not in seen:
+                    seen.append(p)
+                cpath.updatecache(dest)
+
+            def mkdir_recurse(src, dest, paths):
+                if cpath.exists(dest + '/' + paths):
+                    return
+                while paths.startswith("./"):
+                    paths = paths[2:]
+                p = "."
+                for c in paths.split("/"):
+                    p = os.path.join(p, c)
+                    if not cpath.exists(os.path.join(dest, p)):
+                        mkdir(src, dest, p)
+
+            if cpath.isdir(file) and not cpath.islink(file):
+                mkdir_recurse(dvar, root, file)
+                continue
+
+            mkdir_recurse(dvar, root, os.path.dirname(file))
+            fpath = os.path.join(root,file)
+            if not cpath.islink(file):
+                os.link(file, fpath)
+                continue
+            ret = bb.utils.copyfile(file, fpath)
+            if ret is False or ret == 0:
+                bb.fatal("File population failed")
+
+        # Check if symlink paths exist
+        for file in symlink_paths:
+            if not os.path.exists(os.path.join(root,file)):
+                bb.fatal("File '%s' cannot be packaged into '%s' because its "
+                         "parent directory structure does not exist. One of "
+                         "its parent directories is a symlink whose target "
+                         "directory is not included in the package." %
+                         (file, pkg))
+
+    os.umask(oldumask)
+    os.chdir(workdir)
+
+    # Handle excluding packages with incompatible licenses
+    package_list = []
+    for pkg in packages:
+        licenses = d.getVar('_exclude_incompatible-' + pkg)
+        if licenses:
+            msg = "Excluding %s from packaging as it has incompatible license(s): %s" % (pkg, licenses)
+            oe.qa.handle_error("incompatible-license", msg, d)
+        else:
+            package_list.append(pkg)
+    d.setVar('PACKAGES', ' '.join(package_list))
+
+    unshipped = []
+    for root, dirs, files in cpath.walk(dvar):
+        dir = root[len(dvar):]
+        if not dir:
+            dir = os.sep
+        for f in (files + dirs):
+            path = os.path.join(dir, f)
+            if ('.' + path) not in seen:
+                unshipped.append(path)
+
+    if unshipped != []:
+        msg = pn + ": Files/directories were installed but not shipped in any package:"
+        if "installed-vs-shipped" in (d.getVar('INSANE_SKIP:' + pn) or "").split():
+            bb.note("Package %s skipping QA tests: installed-vs-shipped" % pn)
+        else:
+            for f in unshipped:
+                msg = msg + "\n  " + f
+            msg = msg + "\nPlease set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.\n"
+            msg = msg + "%s: %d installed and not shipped files." % (pn, len(unshipped))
+            oe.qa.handle_error("installed-vs-shipped", msg, d)
+}
+populate_packages[dirs] = "${D}"
+
+python package_fixsymlinks () {
+    import errno
+    pkgdest = d.getVar('PKGDEST')
+    packages = d.getVar("PACKAGES", False).split()
+
+    dangling_links = {}
+    pkg_files = {}
+    for pkg in packages:
+        dangling_links[pkg] = []
+        pkg_files[pkg] = []
+        inst_root = os.path.join(pkgdest, pkg)
+        for path in pkgfiles[pkg]:
+                rpath = path[len(inst_root):]
+                pkg_files[pkg].append(rpath)
+                rtarget = cpath.realpath(path, inst_root, True, assume_dir = True)
+                if not cpath.lexists(rtarget):
+                    dangling_links[pkg].append(os.path.normpath(rtarget[len(inst_root):]))
+
+    newrdepends = {}
+    for pkg in dangling_links:
+        for l in dangling_links[pkg]:
+            found = False
+            bb.debug(1, "%s contains dangling link %s" % (pkg, l))
+            for p in packages:
+                if l in pkg_files[p]:
+                        found = True
+                        bb.debug(1, "target found in %s" % p)
+                        if p == pkg:
+                            break
+                        if pkg not in newrdepends:
+                            newrdepends[pkg] = []
+                        newrdepends[pkg].append(p)
+                        break
+            if found == False:
+                bb.note("%s contains dangling symlink to %s" % (pkg, l))
+
+    for pkg in newrdepends:
+        rdepends = bb.utils.explode_dep_versions2(d.getVar('RDEPENDS:' + pkg) or "")
+        for p in newrdepends[pkg]:
+            if p not in rdepends:
+                rdepends[p] = []
+        d.setVar('RDEPENDS:' + pkg, bb.utils.join_deps(rdepends, commasep=False))
+}
+
+
+python package_package_name_hook() {
+    """
+    A package_name_hook function can be used to rewrite the package names by
+    changing PKG.  For an example, see debian.bbclass.
+    """
+    pass
+}
+
+EXPORT_FUNCTIONS package_name_hook
+
+
+PKGDESTWORK = "${WORKDIR}/pkgdata"
+
+PKGDATA_VARS = "PN PE PV PR PKGE PKGV PKGR LICENSE DESCRIPTION SUMMARY RDEPENDS RPROVIDES RRECOMMENDS RSUGGESTS RREPLACES RCONFLICTS SECTION PKG ALLOW_EMPTY FILES CONFFILES FILES_INFO PACKAGE_ADD_METADATA pkg_postinst pkg_postrm pkg_preinst pkg_prerm"
+
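+# A rough sketch of the per-package file emit_pkgdata() writes to
+# ${PKGDESTWORK}/runtime/<pkg>; names and values are purely illustrative:
+#   PKG:example-pkg: example-pkg
+#   FILES_INFO:example-pkg: {"/usr/bin/example": 12345}
+#   PKGSIZE:example-pkg: 12345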
+python emit_pkgdata() {
+    from glob import glob
+    import json
+    import bb.compress.zstd
+
+    def process_postinst_on_target(pkg, mlprefix):
+        pkgval = d.getVar('PKG:%s' % pkg)
+        if pkgval is None:
+            pkgval = pkg
+
+        defer_fragment = """
+if [ -n "$D" ]; then
+    $INTERCEPT_DIR/postinst_intercept delay_to_first_boot %s mlprefix=%s
+    exit 0
+fi
+""" % (pkgval, mlprefix)
+
+        postinst = d.getVar('pkg_postinst:%s' % pkg)
+        postinst_ontarget = d.getVar('pkg_postinst_ontarget:%s' % pkg)
+
+        if postinst_ontarget:
+            bb.debug(1, 'adding deferred pkg_postinst_ontarget() to pkg_postinst() for %s' % pkg)
+            if not postinst:
+                postinst = '#!/bin/sh\n'
+            postinst += defer_fragment
+            postinst += postinst_ontarget
+            d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+    def add_set_e_to_scriptlets(pkg):
+        for scriptlet_name in ('pkg_preinst', 'pkg_postinst', 'pkg_prerm', 'pkg_postrm'):
+            scriptlet = d.getVar('%s:%s' % (scriptlet_name, pkg))
+            if scriptlet:
+                scriptlet_split = scriptlet.split('\n')
+                if scriptlet_split[0].startswith("#!"):
+                    scriptlet = scriptlet_split[0] + "\nset -e\n" + "\n".join(scriptlet_split[1:])
+                else:
+                    scriptlet = "set -e\n" + "\n".join(scriptlet_split[0:])
+            d.setVar('%s:%s' % (scriptlet_name, pkg), scriptlet)
+
+    def write_if_exists(f, pkg, var):
+        def encode(str):
+            import codecs
+            c = codecs.getencoder("unicode_escape")
+            return c(str)[0].decode("latin1")
+
+        val = d.getVar('%s:%s' % (var, pkg))
+        if val:
+            f.write('%s:%s: %s\n' % (var, pkg, encode(val)))
+            return val
+        val = d.getVar('%s' % (var))
+        if val:
+            f.write('%s: %s\n' % (var, encode(val)))
+        return val
+
+    def write_extra_pkgs(variants, pn, packages, pkgdatadir):
+        for variant in variants:
+            with open("%s/%s-%s" % (pkgdatadir, variant, pn), 'w') as fd:
+                fd.write("PACKAGES: %s\n" % ' '.join(
+                            map(lambda pkg: '%s-%s' % (variant, pkg), packages.split())))
+
+    def write_extra_runtime_pkgs(variants, packages, pkgdatadir):
+        for variant in variants:
+            for pkg in packages.split():
+                ml_pkg = "%s-%s" % (variant, pkg)
+                subdata_file = "%s/runtime/%s" % (pkgdatadir, ml_pkg)
+                with open(subdata_file, 'w') as fd:
+                    fd.write("PKG:%s: %s" % (ml_pkg, pkg))
+
+    packages = d.getVar('PACKAGES')
+    pkgdest = d.getVar('PKGDEST')
+    pkgdatadir = d.getVar('PKGDESTWORK')
+
+    data_file = pkgdatadir + d.expand("/${PN}")
+    with open(data_file, 'w') as fd:
+        fd.write("PACKAGES: %s\n" % packages)
+
+    pkgdebugsource = d.getVar("PKGDEBUGSOURCES") or []
+
+    pn = d.getVar('PN')
+    global_variants = (d.getVar('MULTILIB_GLOBAL_VARIANTS') or "").split()
+    variants = (d.getVar('MULTILIB_VARIANTS') or "").split()
+
+    if bb.data.inherits_class('kernel', d) or bb.data.inherits_class('module-base', d):
+        write_extra_pkgs(variants, pn, packages, pkgdatadir)
+
+    if bb.data.inherits_class('allarch', d) and not variants \
+        and not bb.data.inherits_class('packagegroup', d):
+        write_extra_pkgs(global_variants, pn, packages, pkgdatadir)
+
+    workdir = d.getVar('WORKDIR')
+
+    for pkg in packages.split():
+        pkgval = d.getVar('PKG:%s' % pkg)
+        if pkgval is None:
+            pkgval = pkg
+            d.setVar('PKG:%s' % pkg, pkg)
+
+        extended_data = {
+            "files_info": {}
+        }
+
+        pkgdestpkg = os.path.join(pkgdest, pkg)
+        files = {}
+        files_extra = {}
+        total_size = 0
+        seen = set()
+        for f in pkgfiles[pkg]:
+            fpath = os.sep + os.path.relpath(f, pkgdestpkg)
+
+            fstat = os.lstat(f)
+            files[fpath] = fstat.st_size
+
+            extended_data["files_info"].setdefault(fpath, {})
+            extended_data["files_info"][fpath]['size'] = fstat.st_size
+
+            if fstat.st_ino not in seen:
+                seen.add(fstat.st_ino)
+                total_size += fstat.st_size
+
+            if fpath in pkgdebugsource:
+                extended_data["files_info"][fpath]['debugsrc'] = pkgdebugsource[fpath]
+                del pkgdebugsource[fpath]
+
+        d.setVar('FILES_INFO:' + pkg , json.dumps(files, sort_keys=True))
+
+        process_postinst_on_target(pkg, d.getVar("MLPREFIX"))
+        add_set_e_to_scriptlets(pkg)
+
+        subdata_file = pkgdatadir + "/runtime/%s" % pkg
+        with open(subdata_file, 'w') as sf:
+            for var in (d.getVar('PKGDATA_VARS') or "").split():
+                val = write_if_exists(sf, pkg, var)
+
+            write_if_exists(sf, pkg, 'FILERPROVIDESFLIST')
+            for dfile in sorted((d.getVar('FILERPROVIDESFLIST:' + pkg) or "").split()):
+                write_if_exists(sf, pkg, 'FILERPROVIDES:' + dfile)
+
+            write_if_exists(sf, pkg, 'FILERDEPENDSFLIST')
+            for dfile in sorted((d.getVar('FILERDEPENDSFLIST:' + pkg) or "").split()):
+                write_if_exists(sf, pkg, 'FILERDEPENDS:' + dfile)
+
+            sf.write('%s:%s: %d\n' % ('PKGSIZE', pkg, total_size))
+
+        subdata_extended_file = pkgdatadir + "/extended/%s.json.zstd" % pkg
+        num_threads = int(d.getVar("BB_NUMBER_THREADS"))
+        with bb.compress.zstd.open(subdata_extended_file, "wt", encoding="utf-8", num_threads=num_threads) as f:
+            json.dump(extended_data, f, sort_keys=True, separators=(",", ":"))
+
+        # Symlinks needed for rprovides lookup
+        rprov = d.getVar('RPROVIDES:%s' % pkg) or d.getVar('RPROVIDES')
+        if rprov:
+            for p in bb.utils.explode_deps(rprov):
+                subdata_sym = pkgdatadir + "/runtime-rprovides/%s/%s" % (p, pkg)
+                bb.utils.mkdirhier(os.path.dirname(subdata_sym))
+                oe.path.symlink("../../runtime/%s" % pkg, subdata_sym, True)
+
+        allow_empty = d.getVar('ALLOW_EMPTY:%s' % pkg)
+        if not allow_empty:
+            allow_empty = d.getVar('ALLOW_EMPTY')
+        root = "%s/%s" % (pkgdest, pkg)
+        os.chdir(root)
+        g = glob('*')
+        if g or allow_empty == "1":
+            # Symlinks needed for reverse lookups (from the final package name)
+            subdata_sym = pkgdatadir + "/runtime-reverse/%s" % pkgval
+            oe.path.symlink("../runtime/%s" % pkg, subdata_sym, True)
+
+            packagedfile = pkgdatadir + '/runtime/%s.packaged' % pkg
+            open(packagedfile, 'w').close()
+
+    if bb.data.inherits_class('kernel', d) or bb.data.inherits_class('module-base', d):
+        write_extra_runtime_pkgs(variants, packages, pkgdatadir)
+
+    if bb.data.inherits_class('allarch', d) and not variants \
+        and not bb.data.inherits_class('packagegroup', d):
+        write_extra_runtime_pkgs(global_variants, packages, pkgdatadir)
+
+}
+emit_pkgdata[dirs] = "${PKGDESTWORK}/runtime ${PKGDESTWORK}/runtime-reverse ${PKGDESTWORK}/runtime-rprovides ${PKGDESTWORK}/extended"
+emit_pkgdata[vardepsexclude] = "BB_NUMBER_THREADS"
+
+ldconfig_postinst_fragment() {
+if [ x"$D" = "x" ]; then
+	if [ -x /sbin/ldconfig ]; then /sbin/ldconfig ; fi
+fi
+}
+
+RPMDEPS = "${STAGING_LIBDIR_NATIVE}/rpm/rpmdeps --alldeps --define '__font_provides %{nil}'"
+
+# Collect perfile run-time dependency metadata
+# Output:
+#  FILERPROVIDESFLIST:pkg - list of all files w/ deps
+#  FILERPROVIDES:filepath:pkg - per file dep
+#
+#  FILERDEPENDSFLIST:pkg - list of all files w/ deps
+#  FILERDEPENDS:filepath:pkg - per file dep
+
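+# Purely illustrative shapes for the variables above (hypothetical file and
+# package names; the dependency strings come from rpmdeps):
+#   FILERDEPENDSFLIST:example-pkg = "/usr/bin/example"
+#   FILERDEPENDS:/usr/bin/example:example-pkg = "libc.so.6"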
+python package_do_filedeps() {
+    if d.getVar('SKIP_FILEDEPS') == '1':
+        return
+
+    pkgdest = d.getVar('PKGDEST')
+    packages = d.getVar('PACKAGES')
+    rpmdeps = d.getVar('RPMDEPS')
+
+    def chunks(files, n):
+        return [files[i:i+n] for i in range(0, len(files), n)]
+
+    pkglist = []
+    for pkg in packages.split():
+        if d.getVar('SKIP_FILEDEPS:' + pkg) == '1':
+            continue
+        if pkg.endswith('-dbg') or pkg.endswith('-doc') or pkg.find('-locale-') != -1 or pkg.find('-localedata-') != -1 or pkg.find('-gconv-') != -1 or pkg.find('-charmap-') != -1 or pkg.startswith('kernel-module-') or pkg.endswith('-src'):
+            continue
+        for files in chunks(pkgfiles[pkg], 100):
+            pkglist.append((pkg, files, rpmdeps, pkgdest))
+
+    processed = oe.utils.multiprocess_launch(oe.package.filedeprunner, pkglist, d)
+
+    provides_files = {}
+    requires_files = {}
+
+    for result in processed:
+        (pkg, provides, requires) = result
+
+        if pkg not in provides_files:
+            provides_files[pkg] = []
+        if pkg not in requires_files:
+            requires_files[pkg] = []
+
+        for file in sorted(provides):
+            provides_files[pkg].append(file)
+            key = "FILERPROVIDES:" + file + ":" + pkg
+            d.appendVar(key, " " + " ".join(provides[file]))
+
+        for file in sorted(requires):
+            requires_files[pkg].append(file)
+            key = "FILERDEPENDS:" + file + ":" + pkg
+            d.appendVar(key, " " + " ".join(requires[file]))
+
+    for pkg in requires_files:
+        d.setVar("FILERDEPENDSFLIST:" + pkg, " ".join(sorted(requires_files[pkg])))
+    for pkg in provides_files:
+        d.setVar("FILERPROVIDESFLIST:" + pkg, " ".join(sorted(provides_files[pkg])))
+}
+
+SHLIBSDIRS = "${WORKDIR_PKGDATA}/${MLPREFIX}shlibs2"
+SHLIBSWORKDIR = "${PKGDESTWORK}/${MLPREFIX}shlibs2"
+
+python package_do_shlibs() {
+    import itertools
+    import re, pipes
+    import subprocess
+
+    exclude_shlibs = d.getVar('EXCLUDE_FROM_SHLIBS', False)
+    if exclude_shlibs:
+        bb.note("not generating shlibs")
+        return
+
+    lib_re = re.compile(r"^.*\.so")
+    libdir_re = re.compile(r".*/%s$" % d.getVar('baselib'))
+
+    packages = d.getVar('PACKAGES')
+
+    shlib_pkgs = []
+    exclusion_list = d.getVar("EXCLUDE_PACKAGES_FROM_SHLIBS")
+    if exclusion_list:
+        for pkg in packages.split():
+            if pkg not in exclusion_list.split():
+                shlib_pkgs.append(pkg)
+            else:
+                bb.note("not generating shlibs for %s" % pkg)
+    else:
+        shlib_pkgs = packages.split()
+
+    hostos = d.getVar('HOST_OS')
+
+    workdir = d.getVar('WORKDIR')
+
+    ver = d.getVar('PKGV')
+    if not ver:
+        msg = "PKGV not defined"
+        oe.qa.handle_error("pkgv-undefined", msg, d)
+        return
+
+    pkgdest = d.getVar('PKGDEST')
+
+    shlibswork_dir = d.getVar('SHLIBSWORKDIR')
+
+    def linux_so(file, pkg, pkgver, d):
+        needs_ldconfig = False
+        needed = set()
+        sonames = set()
+        renames = []
+        ldir = os.path.dirname(file).replace(pkgdest + "/" + pkg, '')
+        cmd = d.getVar('OBJDUMP') + " -p " + pipes.quote(file) + " 2>/dev/null"
+        fd = os.popen(cmd)
+        lines = fd.readlines()
+        fd.close()
+        rpath = tuple()
+        for l in lines:
+            m = re.match(r"\s+RPATH\s+([^\s]*)", l)
+            if m:
+                rpaths = m.group(1).replace("$ORIGIN", ldir).split(":")
+                rpath = tuple(map(os.path.normpath, rpaths))
+        for l in lines:
+            m = re.match(r"\s+NEEDED\s+([^\s]*)", l)
+            if m:
+                dep = m.group(1)
+                if dep not in needed:
+                    needed.add((dep, file, rpath))
+            m = re.match(r"\s+SONAME\s+([^\s]*)", l)
+            if m:
+                this_soname = m.group(1)
+                prov = (this_soname, ldir, pkgver)
+                if not prov in sonames:
+                    # if library is private (only used by package) then do not build shlib for it
+                    import fnmatch
+                    if not private_libs or len([i for i in private_libs if fnmatch.fnmatch(this_soname, i)]) == 0:
+                        sonames.add(prov)
+                if libdir_re.match(os.path.dirname(file)):
+                    needs_ldconfig = True
+                if needs_ldconfig and snap_symlinks and (os.path.basename(file) != this_soname):
+                    renames.append((file, os.path.join(os.path.dirname(file), this_soname)))
+        return (needs_ldconfig, needed, sonames, renames)
+
+    def darwin_so(file, needed, sonames, renames, pkgver):
+        if not os.path.exists(file):
+            return
+        ldir = os.path.dirname(file).replace(pkgdest + "/" + pkg, '')
+
+        def get_combinations(base):
+            #
+            # Given a base library name, find all combinations of this split by "." and "-"
+            #
+            combos = []
+            options = base.split(".")
+            for i in range(1, len(options) + 1):
+                combos.append(".".join(options[0:i]))
+            options = base.split("-")
+            for i in range(1, len(options) + 1):
+                combos.append("-".join(options[0:i]))
+            return combos
+
+        if (file.endswith('.dylib') or file.endswith('.so')) and not pkg.endswith('-dev') and not pkg.endswith('-dbg') and not pkg.endswith('-src'):
+            # Drop suffix
+            name = os.path.basename(file).rsplit(".",1)[0]
+            # Find all combinations
+            combos = get_combinations(name)
+            for combo in combos:
+                if not combo in sonames:
+                    prov = (combo, ldir, pkgver)
+                    sonames.add(prov)
+        if file.endswith('.dylib') or file.endswith('.so'):
+            rpath = []
+            p = subprocess.Popen([d.expand("${HOST_PREFIX}otool"), '-l', file], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+            out, err = p.communicate()
+            # If returned successfully, process stdout for results
+            if p.returncode == 0:
+                for l in out.split("\n"):
+                    l = l.strip()
+                    if l.startswith('path '):
+                        rpath.append(l.split()[1])
+
+        p = subprocess.Popen([d.expand("${HOST_PREFIX}otool"), '-L', file], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+        out, err = p.communicate()
+        # If returned successfully, process stdout for results
+        if p.returncode == 0:
+            for l in out.split("\n"):
+                l = l.strip()
+                if not l or l.endswith(":"):
+                    continue
+                if "is not an object file" in l:
+                    continue
+                name = os.path.basename(l.split()[0]).rsplit(".", 1)[0]
+                if name and name not in needed[pkg]:
+                    needed[pkg].add((name, file, tuple()))
+
+    def mingw_dll(file, needed, sonames, renames, pkgver):
+        if not os.path.exists(file):
+            return
+
+        if file.endswith(".dll"):
+            # assume all dlls are shared objects provided by the package
+            sonames.add((os.path.basename(file), os.path.dirname(file).replace(pkgdest + "/" + pkg, ''), pkgver))
+
+        if (file.endswith(".dll") or file.endswith(".exe")):
+            # use objdump to search for "DLL Name: .*\.dll"
+            p = subprocess.Popen([d.expand("${HOST_PREFIX}objdump"), "-p", file], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+            out, err = p.communicate()
+            # process the output, grabbing all .dll names
+            if p.returncode == 0:
+                for m in re.finditer(r"DLL Name: (.*?\.dll)$", out.decode(), re.MULTILINE | re.IGNORECASE):
+                    dllname = m.group(1)
+                    if dllname:
+                        needed[pkg].add((dllname, file, tuple()))
+
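+    # PACKAGE_SNAP_LIB_SYMLINKS = "1" requests that libraries be renamed to their
+    # SONAME; the renames are collected by linux_so() above and applied further below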
+    if d.getVar('PACKAGE_SNAP_LIB_SYMLINKS') == "1":
+        snap_symlinks = True
+    else:
+        snap_symlinks = False
+
+    needed = {}
+
+    shlib_provider = oe.package.read_shlib_providers(d)
+
+    for pkg in shlib_pkgs:
+        private_libs = d.getVar('PRIVATE_LIBS:' + pkg) or d.getVar('PRIVATE_LIBS') or ""
+        private_libs = private_libs.split()
+        needs_ldconfig = False
+        bb.debug(2, "calculating shlib provides for %s" % pkg)
+
+        pkgver = d.getVar('PKGV:' + pkg)
+        if not pkgver:
+            pkgver = d.getVar('PV_' + pkg)
+        if not pkgver:
+            pkgver = ver
+
+        needed[pkg] = set()
+        sonames = set()
+        renames = []
+        linuxlist = []
+        for file in pkgfiles[pkg]:
+                soname = None
+                if cpath.islink(file):
+                    continue
+                if hostos == "darwin" or hostos == "darwin8":
+                    darwin_so(file, needed, sonames, renames, pkgver)
+                elif hostos.startswith("mingw"):
+                    mingw_dll(file, needed, sonames, renames, pkgver)
+                elif os.access(file, os.X_OK) or lib_re.match(file):
+                    linuxlist.append(file)
+
+        if linuxlist:
+            results = oe.utils.multiprocess_launch(linux_so, linuxlist, d, extraargs=(pkg, pkgver, d))
+            for r in results:
+                ldconfig = r[0]
+                needed[pkg] |= r[1]
+                sonames |= r[2]
+                renames.extend(r[3])
+                needs_ldconfig = needs_ldconfig or ldconfig
+
+        for (old, new) in renames:
+            bb.note("Renaming %s to %s" % (old, new))
+            bb.utils.rename(old, new)
+            pkgfiles[pkg].remove(old)
+
+        shlibs_file = os.path.join(shlibswork_dir, pkg + ".list")
+        if len(sonames):
+            with open(shlibs_file, 'w') as fd:
+                for s in sorted(sonames):
+                    if s[0] in shlib_provider and s[1] in shlib_provider[s[0]]:
+                        (old_pkg, old_pkgver) = shlib_provider[s[0]][s[1]]
+                        if old_pkg != pkg:
+                            bb.warn('%s-%s was registered as shlib provider for %s, changing it to %s-%s because it was built later' % (old_pkg, old_pkgver, s[0], pkg, pkgver))
+                    bb.debug(1, 'registering %s-%s as shlib provider for %s' % (pkg, pkgver, s[0]))
+                    fd.write(s[0] + ':' + s[1] + ':' + s[2] + '\n')
+                    if s[0] not in shlib_provider:
+                        shlib_provider[s[0]] = {}
+                    shlib_provider[s[0]][s[1]] = (pkg, pkgver)
+        if needs_ldconfig:
+            bb.debug(1, 'adding ldconfig call to postinst for %s' % pkg)
+            postinst = d.getVar('pkg_postinst:%s' % pkg)
+            if not postinst:
+                postinst = '#!/bin/sh\n'
+            postinst += d.getVar('ldconfig_postinst_fragment')
+            d.setVar('pkg_postinst:%s' % pkg, postinst)
+        bb.debug(1, 'LIBNAMES: pkg %s sonames %s' % (pkg, sonames))
+
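+    # ASSUME_SHLIBS entries have the form <library>:<provider-package>[_<version>],
+    # e.g. (illustrative only) ASSUME_SHLIBS = "libexample.so.1:example-provider_1.0"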
+    assumed_libs = d.getVar('ASSUME_SHLIBS')
+    if assumed_libs:
+        libdir = d.getVar("libdir")
+        for e in assumed_libs.split():
+            l, dep_pkg = e.split(":")
+            lib_ver = None
+            dep_pkg = dep_pkg.rsplit("_", 1)
+            if len(dep_pkg) == 2:
+                lib_ver = dep_pkg[1]
+            dep_pkg = dep_pkg[0]
+            if l not in shlib_provider:
+                shlib_provider[l] = {}
+            shlib_provider[l][libdir] = (dep_pkg, lib_ver)
+
+    libsearchpath = [d.getVar('libdir'), d.getVar('base_libdir')]
+
+    for pkg in shlib_pkgs:
+        bb.debug(2, "calculating shlib requirements for %s" % pkg)
+
+        private_libs = d.getVar('PRIVATE_LIBS:' + pkg) or d.getVar('PRIVATE_LIBS') or ""
+        private_libs = private_libs.split()
+
+        deps = list()
+        for n in needed[pkg]:
+            # if n is in private libraries, don't try to search a provider for it
+            # this could cause problems if, for example, some abc.bb provides a private
+            # /opt/abc/lib/libfoo.so.1 and contains /usr/bin/abc depending on the system library libfoo.so.1,
+            # but skipping it is still a better alternative than providing our own
+            # version and then adding a runtime dependency on the same system library
+            import fnmatch
+            if private_libs and len([i for i in private_libs if fnmatch.fnmatch(n[0], i)]) > 0:
+                bb.debug(2, '%s: Dependency %s covered by PRIVATE_LIBS' % (pkg, n[0]))
+                continue
+            if n[0] in shlib_provider.keys():
+                shlib_provider_map = shlib_provider[n[0]]
+                matches = set()
+                for p in itertools.chain(list(n[2]), sorted(shlib_provider_map.keys()), libsearchpath):
+                    if p in shlib_provider_map:
+                        matches.add(p)
+                if len(matches) > 1:
+                    matchpkgs = ', '.join([shlib_provider_map[match][0] for match in matches])
+                    bb.error("%s: Multiple shlib providers for %s: %s (used by files: %s)" % (pkg, n[0], matchpkgs, n[1]))
+                elif len(matches) == 1:
+                    (dep_pkg, ver_needed) = shlib_provider_map[matches.pop()]
+
+                    bb.debug(2, '%s: Dependency %s requires package %s (used by files: %s)' % (pkg, n[0], dep_pkg, n[1]))
+
+                    if dep_pkg == pkg:
+                        continue
+
+                    if ver_needed:
+                        dep = "%s (>= %s)" % (dep_pkg, ver_needed)
+                    else:
+                        dep = dep_pkg
+                    if not dep in deps:
+                        deps.append(dep)
+                    continue
+            bb.note("Couldn't find shared library provider for %s, used by files: %s" % (n[0], n[1]))
+
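+        # Each line written below is one runtime dependency expression, e.g.
+        # "libz1 (>= 1.2.11)" (illustrative package name and version).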
+        deps_file = os.path.join(pkgdest, pkg + ".shlibdeps")
+        if os.path.exists(deps_file):
+            os.remove(deps_file)
+        if deps:
+            with open(deps_file, 'w') as fd:
+                for dep in sorted(deps):
+                    fd.write(dep + '\n')
+}
+
+python package_do_pkgconfig () {
+    import re
+
+    packages = d.getVar('PACKAGES')
+    workdir = d.getVar('WORKDIR')
+    pkgdest = d.getVar('PKGDEST')
+
+    shlibs_dirs = d.getVar('SHLIBSDIRS').split()
+    shlibswork_dir = d.getVar('SHLIBSWORKDIR')
+
+    pc_re = re.compile(r'(.*)\.pc$')
+    var_re = re.compile(r'(.*)=(.*)')
+    field_re = re.compile(r'(.*): (.*)')
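+    # Illustrative .pc lines these patterns match: "prefix=/usr" (var_re) and
+    # "Requires: glib-2.0" (field_re); only the Requires field is used below.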
+
+    pkgconfig_provided = {}
+    pkgconfig_needed = {}
+    for pkg in packages.split():
+        pkgconfig_provided[pkg] = []
+        pkgconfig_needed[pkg] = []
+        for file in sorted(pkgfiles[pkg]):
+                m = pc_re.match(file)
+                if m:
+                    pd = bb.data.init()
+                    name = m.group(1)
+                    pkgconfig_provided[pkg].append(os.path.basename(name))
+                    if not os.access(file, os.R_OK):
+                        continue
+                    with open(file, 'r') as f:
+                        lines = f.readlines()
+                    for l in lines:
+                        m = var_re.match(l)
+                        if m:
+                            name = m.group(1)
+                            val = m.group(2)
+                            pd.setVar(name, pd.expand(val))
+                            continue
+                        m = field_re.match(l)
+                        if m:
+                            hdr = m.group(1)
+                            exp = pd.expand(m.group(2))
+                            if hdr == 'Requires':
+                                pkgconfig_needed[pkg] += exp.replace(',', ' ').split()
+
+    for pkg in packages.split():
+        pkgs_file = os.path.join(shlibswork_dir, pkg + ".pclist")
+        if pkgconfig_provided[pkg] != []:
+            with open(pkgs_file, 'w') as f:
+                for p in sorted(pkgconfig_provided[pkg]):
+                    f.write('%s\n' % p)
+
+    # Go from least to most specific since the last one found wins
+    for dir in reversed(shlibs_dirs):
+        if not os.path.exists(dir):
+            continue
+        for file in sorted(os.listdir(dir)):
+            m = re.match(r'^(.*)\.pclist$', file)
+            if m:
+                pkg = m.group(1)
+                with open(os.path.join(dir, file)) as fd:
+                    lines = fd.readlines()
+                pkgconfig_provided[pkg] = []
+                for l in lines:
+                    pkgconfig_provided[pkg].append(l.rstrip())
+
+    for pkg in packages.split():
+        deps = []
+        for n in pkgconfig_needed[pkg]:
+            found = False
+            for k in pkgconfig_provided.keys():
+                if n in pkgconfig_provided[k]:
+                    if k != pkg and not (k in deps):
+                        deps.append(k)
+                    found = True
+            if not found:
+                bb.note("couldn't find pkgconfig module '%s' in any package" % n)
+        deps_file = os.path.join(pkgdest, pkg + ".pcdeps")
+        if len(deps):
+            with open(deps_file, 'w') as fd:
+                for dep in deps:
+                    fd.write(dep + '\n')
+}
+
+def read_libdep_files(d):
+    pkglibdeps = {}
+    packages = d.getVar('PACKAGES').split()
+    for pkg in packages:
+        pkglibdeps[pkg] = {}
+        for extension in ".shlibdeps", ".pcdeps", ".clilibdeps":
+            depsfile = d.expand("${PKGDEST}/" + pkg + extension)
+            if os.access(depsfile, os.R_OK):
+                with open(depsfile) as fd:
+                    lines = fd.readlines()
+                for l in lines:
+                    l = l.rstrip()
+                    deps = bb.utils.explode_dep_versions2(l)
+                    for dep in deps:
+                        if not dep in pkglibdeps[pkg]:
+                            pkglibdeps[pkg][dep] = deps[dep]
+    return pkglibdeps
+
+python read_shlibdeps () {
+    pkglibdeps = read_libdep_files(d)
+
+    packages = d.getVar('PACKAGES').split()
+    for pkg in packages:
+        rdepends = bb.utils.explode_dep_versions2(d.getVar('RDEPENDS:' + pkg) or "")
+        for dep in sorted(pkglibdeps[pkg]):
+            # Add the dep if it's not already there, or if no comparison is set
+            if dep not in rdepends:
+                rdepends[dep] = []
+            for v in pkglibdeps[pkg][dep]:
+                if v not in rdepends[dep]:
+                    rdepends[dep].append(v)
+        d.setVar('RDEPENDS:' + pkg, bb.utils.join_deps(rdepends, commasep=False))
+}
+
+python package_depchains() {
+    """
+    For a given set of prefix and postfix modifiers, make those packages
+    RRECOMMENDS on the corresponding packages for its RDEPENDS.
+
+    Example:  If package A depends upon package B, and A's .bb emits an
+    A-dev package, this would make A-dev Recommends: B-dev.
+
+    If only one of a given suffix is specified, it will take the RRECOMMENDS
+    based on the RDEPENDS of *all* other packages. If more than one of a given
+    suffix is specified, it will only use the RDEPENDS of the single parent
+    package.
+    """
+
+    packages  = d.getVar('PACKAGES')
+    postfixes = (d.getVar('DEPCHAIN_POST') or '').split()
+    prefixes  = (d.getVar('DEPCHAIN_PRE') or '').split()
+
+    def pkg_adddeprrecs(pkg, base, suffix, getname, depends, d):
+
+        #bb.note('depends for %s is %s' % (base, depends))
+        rreclist = bb.utils.explode_dep_versions2(d.getVar('RRECOMMENDS:' + pkg) or "")
+
+        for depend in sorted(depends):
+            if depend.find('-native') != -1 or depend.find('-cross') != -1 or depend.startswith('virtual/'):
+                #bb.note("Skipping %s" % depend)
+                continue
+            if depend.endswith('-dev'):
+                depend = depend[:-4]
+            if depend.endswith('-dbg'):
+                depend = depend[:-4]
+            pkgname = getname(depend, suffix)
+            #bb.note("Adding %s for %s" % (pkgname, depend))
+            if pkgname not in rreclist and pkgname != pkg:
+                rreclist[pkgname] = []
+
+        #bb.note('setting: RRECOMMENDS:%s=%s' % (pkg, ' '.join(rreclist)))
+        d.setVar('RRECOMMENDS:%s' % pkg, bb.utils.join_deps(rreclist, commasep=False))
+
+    def pkg_addrrecs(pkg, base, suffix, getname, rdepends, d):
+
+        #bb.note('rdepends for %s is %s' % (base, rdepends))
+        rreclist = bb.utils.explode_dep_versions2(d.getVar('RRECOMMENDS:' + pkg) or "")
+
+        for depend in sorted(rdepends):
+            if depend.find('virtual-locale-') != -1:
+                #bb.note("Skipping %s" % depend)
+                continue
+            if depend.endswith('-dev'):
+                depend = depend[:-4]
+            if depend.endswith('-dbg'):
+                depend = depend[:-4]
+            pkgname = getname(depend, suffix)
+            #bb.note("Adding %s for %s" % (pkgname, depend))
+            if pkgname not in rreclist and pkgname != pkg:
+                rreclist[pkgname] = []
+
+        #bb.note('setting: RRECOMMENDS:%s=%s' % (pkg, ' '.join(rreclist)))
+        d.setVar('RRECOMMENDS:%s' % pkg, bb.utils.join_deps(rreclist, commasep=False))
+
+    def add_dep(list, dep):
+        if dep not in list:
+            list.append(dep)
+
+    depends = []
+    for dep in bb.utils.explode_deps(d.getVar('DEPENDS') or ""):
+        add_dep(depends, dep)
+
+    rdepends = []
+    for pkg in packages.split():
+        for dep in bb.utils.explode_deps(d.getVar('RDEPENDS:' + pkg) or ""):
+            add_dep(rdepends, dep)
+
+    #bb.note('rdepends is %s' % rdepends)
+
+    def post_getname(name, suffix):
+        return '%s%s' % (name, suffix)
+    def pre_getname(name, suffix):
+        return '%s%s' % (suffix, name)
+
+    pkgs = {}
+    for pkg in packages.split():
+        for postfix in postfixes:
+            if pkg.endswith(postfix):
+                if not postfix in pkgs:
+                    pkgs[postfix] = {}
+                pkgs[postfix][pkg] = (pkg[:-len(postfix)], post_getname)
+
+        for prefix in prefixes:
+            if pkg.startswith(prefix):
+                if not prefix in pkgs:
+                    pkgs[prefix] = {}
+                pkgs[prefix][pkg] = (pkg[:-len(prefix)], pre_getname)
+
+    if "-dbg" in pkgs:
+        pkglibdeps = read_libdep_files(d)
+        pkglibdeplist = []
+        for pkg in pkglibdeps:
+            for k in pkglibdeps[pkg]:
+                add_dep(pkglibdeplist, k)
+        dbgdefaultdeps = ((d.getVar('DEPCHAIN_DBGDEFAULTDEPS') == '1') or (bb.data.inherits_class('packagegroup', d)))
+
+    for suffix in pkgs:
+        for pkg in pkgs[suffix]:
+            if d.getVarFlag('RRECOMMENDS:' + pkg, 'nodeprrecs'):
+                continue
+            (base, func) = pkgs[suffix][pkg]
+            if suffix == "-dev":
+                pkg_adddeprrecs(pkg, base, suffix, func, depends, d)
+            elif suffix == "-dbg":
+                if not dbgdefaultdeps:
+                    pkg_addrrecs(pkg, base, suffix, func, pkglibdeplist, d)
+                    continue
+            if len(pkgs[suffix]) == 1:
+                pkg_addrrecs(pkg, base, suffix, func, rdepends, d)
+            else:
+                rdeps = []
+                for dep in bb.utils.explode_deps(d.getVar('RDEPENDS:' + base) or ""):
+                    add_dep(rdeps, dep)
+                pkg_addrrecs(pkg, base, suffix, func, rdeps, d)
+}
+
+# Since bitbake can't determine which variables are accessed during package
+# iteration, we need to list them here:
+PACKAGEVARS = "FILES RDEPENDS RRECOMMENDS SUMMARY DESCRIPTION RSUGGESTS RPROVIDES RCONFLICTS PKG ALLOW_EMPTY pkg_postinst pkg_postrm pkg_postinst_ontarget INITSCRIPT_NAME INITSCRIPT_PARAMS DEBIAN_NOAUTONAME ALTERNATIVE PKGE PKGV PKGR USERADD_PARAM GROUPADD_PARAM CONFFILES SYSTEMD_SERVICE LICENSE SECTION pkg_preinst pkg_prerm RREPLACES GROUPMEMS_PARAM SYSTEMD_AUTO_ENABLE SKIP_FILEDEPS PRIVATE_LIBS PACKAGE_ADD_METADATA"
+
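+# gen_packagevar() expands each listed variable into its per-package forms, e.g. for
+# PACKAGES = "foo foo-dev" it yields FILES, FILES:foo, FILES:foo-dev and so on
+# (illustrative), plus an _exclude_incompatible-<pkg> marker per package, so that
+# task signatures track per-package overrides.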
+def gen_packagevar(d, pkgvars="PACKAGEVARS"):
+    ret = []
+    pkgs = (d.getVar("PACKAGES") or "").split()
+    vars = (d.getVar(pkgvars) or "").split()
+    for v in vars:
+        ret.append(v)
+    for p in pkgs:
+        for v in vars:
+            ret.append(v + ":" + p)
+
+        # Ensure that changes to INCOMPATIBLE_LICENSE re-run do_package for
+        # affected recipes.
+        ret.append('_exclude_incompatible-%s' % p)
+    return " ".join(ret)
+
+PACKAGE_PREPROCESS_FUNCS ?= ""
+# Functions for setting up PKGD
+PACKAGEBUILDPKGD ?= " \
+                package_prepare_pkgdata \
+                perform_packagecopy \
+                ${PACKAGE_PREPROCESS_FUNCS} \
+                split_and_strip_files \
+                fixup_perms \
+                "
+# Functions which split PKGD up into separate packages
+PACKAGESPLITFUNCS ?= " \
+                package_do_split_locales \
+                populate_packages"
+# Functions which process metadata based on split packages
+PACKAGEFUNCS += " \
+                package_fixsymlinks \
+                package_name_hook \
+                package_do_filedeps \
+                package_do_shlibs \
+                package_do_pkgconfig \
+                read_shlibdeps \
+                package_depchains \
+                emit_pkgdata"
+
+python do_package () {
+    # Change the following version to cause sstate to invalidate the package
+    # cache.  This is useful if an item this class depends on changes in a
+    # way that the output of this class changes.  rpmdeps is a good example
+    # as any change to rpmdeps requires this to be rerun.
+    # PACKAGE_BBCLASS_VERSION = "4"
+
+    # Init cachedpath
+    global cpath
+    cpath = oe.cachedpath.CachedPath()
+
+    ###########################################################################
+    # Sanity test the setup
+    ###########################################################################
+
+    packages = (d.getVar('PACKAGES') or "").split()
+    if len(packages) < 1:
+        bb.debug(1, "No packages to build, skipping do_package")
+        return
+
+    workdir = d.getVar('WORKDIR')
+    outdir = d.getVar('DEPLOY_DIR')
+    dest = d.getVar('D')
+    dvar = d.getVar('PKGD')
+    pn = d.getVar('PN')
+
+    if not workdir or not outdir or not dest or not dvar or not pn:
+        msg = "WORKDIR, DEPLOY_DIR, D, PN and PKGD all must be defined, unable to package"
+        oe.qa.handle_error("var-undefined", msg, d)
+        return
+
+    bb.build.exec_func("package_convert_pr_autoinc", d)
+
+    ###########################################################################
+    # Optimisations
+    ###########################################################################
+
+    # Continually expanding complex expressions is inefficient, particularly
+    # when we write to the datastore and invalidate the expansion cache. This
+    # code pre-expands some frequently used variables
+
+    def expandVar(x, d):
+        d.setVar(x, d.getVar(x))
+
+    for x in 'PN', 'PV', 'BPN', 'TARGET_SYS', 'EXTENDPRAUTO':
+        expandVar(x, d)
+
+    ###########################################################################
+    # Setup PKGD (from D)
+    ###########################################################################
+
+    for f in (d.getVar('PACKAGEBUILDPKGD') or '').split():
+        bb.build.exec_func(f, d)
+
+    ###########################################################################
+    # Split up PKGD into PKGDEST
+    ###########################################################################
+
+    cpath = oe.cachedpath.CachedPath()
+
+    for f in (d.getVar('PACKAGESPLITFUNCS') or '').split():
+        bb.build.exec_func(f, d)
+
+    ###########################################################################
+    # Process PKGDEST
+    ###########################################################################
+
+    # Build global list of files in each split package
+    global pkgfiles
+    pkgfiles = {}
+    packages = d.getVar('PACKAGES').split()
+    pkgdest = d.getVar('PKGDEST')
+    for pkg in packages:
+        pkgfiles[pkg] = []
+        for walkroot, dirs, files in cpath.walk(pkgdest + "/" + pkg):
+            for file in files:
+                pkgfiles[pkg].append(walkroot + os.sep + file)
+
+    for f in (d.getVar('PACKAGEFUNCS') or '').split():
+        bb.build.exec_func(f, d)
+
+    oe.qa.exit_if_errors(d)
+}
+
+do_package[dirs] = "${SHLIBSWORKDIR} ${D}"
+do_package[vardeps] += "${PACKAGEBUILDPKGD} ${PACKAGESPLITFUNCS} ${PACKAGEFUNCS} ${@gen_packagevar(d)}"
+addtask package after do_install
+
+SSTATETASKS += "do_package"
+do_package[cleandirs] = "${PKGDEST} ${PKGDESTWORK}"
+do_package[sstate-plaindirs] = "${PKGD} ${PKGDEST} ${PKGDESTWORK}"
+do_package_setscene[dirs] = "${STAGING_DIR}"
+
+python do_package_setscene () {
+    sstate_setscene(d)
+}
+addtask do_package_setscene
+
+# Copy from PKGDESTWORK to a temporary directory, as that directory can be cleaned by both
+# do_package_setscene and do_packagedata_setscene, leading to races
+python do_packagedata () {
+    bb.build.exec_func("package_get_auto_pr", d)
+
+    src = d.expand("${PKGDESTWORK}")
+    dest = d.expand("${WORKDIR}/pkgdata-pdata-input")
+    oe.path.copyhardlinktree(src, dest)
+
+    bb.build.exec_func("packagedata_translate_pr_autoinc", d)
+}
+do_packagedata[cleandirs] += "${WORKDIR}/pkgdata-pdata-input"
+
+# Translate the EXTENDPRAUTO and AUTOINC to the final values
+packagedata_translate_pr_autoinc() {
+    find ${WORKDIR}/pkgdata-pdata-input -type f | xargs --no-run-if-empty \
+        sed -e 's,@PRSERV_PV_AUTOINC@,${PRSERV_PV_AUTOINC},g' \
+            -e 's,@EXTENDPRAUTO@,${EXTENDPRAUTO},g' -i
+}
+
+addtask packagedata before do_build after do_package
+
+SSTATETASKS += "do_packagedata"
+do_packagedata[sstate-inputdirs] = "${WORKDIR}/pkgdata-pdata-input"
+do_packagedata[sstate-outputdirs] = "${PKGDATA_DIR}"
+do_packagedata[stamp-extra-info] = "${MACHINE_ARCH}"
+
+python do_packagedata_setscene () {
+    sstate_setscene(d)
+}
+addtask do_packagedata_setscene
+
+#
+# Helper functions for the package writing classes
+#
+
+def mapping_rename_hook(d):
+    """
+    Rewrite variables to account for package renaming in things
+    like debian.bbclass or manual PKG variable name changes
+    """
+    pkg = d.getVar("PKG")
+    runtime_mapping_rename("RDEPENDS", pkg, d)
+    runtime_mapping_rename("RRECOMMENDS", pkg, d)
+    runtime_mapping_rename("RSUGGESTS", pkg, d)
diff --git a/poky/meta/classes-global/package_deb.bbclass b/poky/meta/classes-global/package_deb.bbclass
new file mode 100644
index 0000000..ec7e10d
--- /dev/null
+++ b/poky/meta/classes-global/package_deb.bbclass
@@ -0,0 +1,329 @@
+#
+# Copyright 2006-2008 OpenedHand Ltd.
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit package
+
+IMAGE_PKGTYPE ?= "deb"
+
+DPKG_BUILDCMD ??= "dpkg-deb"
+
+DPKG_ARCH ?= "${@debian_arch_map(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES'))}"
+DPKG_ARCH[vardepvalue] = "${DPKG_ARCH}"
+
+PKGWRITEDIRDEB = "${WORKDIR}/deploy-debs"
+
+APTCONF_TARGET = "${WORKDIR}"
+
+APT_ARGS = "${@['', '--no-install-recommends'][d.getVar("NO_RECOMMENDATIONS") == "1"]}"
+
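+# Map TARGET_ARCH plus TUNE_FEATURES to the Debian architecture name, e.g. x86_64 -> amd64,
+# aarch64 -> arm64, or arm with callconvention-hard -> armhf (summary of the cases below).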
+def debian_arch_map(arch, tune):
+    tune_features = tune.split()
+    if arch == "allarch":
+        return "all"
+    if arch in ["i586", "i686"]:
+        return "i386"
+    if arch == "x86_64":
+        if "mx32" in tune_features:
+            return "x32"
+        return "amd64"
+    if arch.startswith("mips"):
+        endian = ["el", ""]["bigendian" in tune_features]
+        if "n64" in tune_features:
+            return "mips64" + endian
+        if "n32" in tune_features:
+            return "mipsn32" + endian
+        return "mips" + endian
+    if arch == "powerpc":
+        return arch + ["", "spe"]["spe" in tune_features]
+    if arch == "aarch64":
+        return "arm64"
+    if arch == "arm":
+        return arch + ["el", "hf"]["callconvention-hard" in tune_features]
+    return arch
+
+python do_package_deb () {
+    packages = d.getVar('PACKAGES')
+    if not packages:
+        bb.debug(1, "PACKAGES not defined, nothing to package")
+        return
+
+    tmpdir = d.getVar('TMPDIR')
+    if os.access(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"),os.R_OK):
+        os.unlink(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"))
+
+    oe.utils.multiprocess_launch(deb_write_pkg, packages.split(), d, extraargs=(d,))
+}
+do_package_deb[vardeps] += "deb_write_pkg"
+do_package_deb[vardepsexclude] = "BB_NUMBER_THREADS"
+
+def deb_write_pkg(pkg, d):
+    import re, copy
+    import textwrap
+    import subprocess
+    import collections
+    import codecs
+
+    outdir = d.getVar('PKGWRITEDIRDEB')
+    pkgdest = d.getVar('PKGDEST')
+
+    def cleanupcontrol(root):
+        for p in ['CONTROL', 'DEBIAN']:
+            p = os.path.join(root, p)
+            if os.path.exists(p):
+                bb.utils.prunedir(p)
+
+    localdata = bb.data.createCopy(d)
+    root = "%s/%s" % (pkgdest, pkg)
+
+    lf = bb.utils.lockfile(root + ".lock")
+    try:
+
+        localdata.setVar('ROOT', '')
+        localdata.setVar('ROOT_%s' % pkg, root)
+        pkgname = localdata.getVar('PKG:%s' % pkg)
+        if not pkgname:
+            pkgname = pkg
+        localdata.setVar('PKG', pkgname)
+
+        localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + pkg)
+
+        basedir = os.path.join(os.path.dirname(root))
+
+        pkgoutdir = os.path.join(outdir, localdata.getVar('PACKAGE_ARCH'))
+        bb.utils.mkdirhier(pkgoutdir)
+
+        os.chdir(root)
+        cleanupcontrol(root)
+        from glob import glob
+        g = glob('*')
+        if not g and localdata.getVar('ALLOW_EMPTY', False) != "1":
+            bb.note("Not creating empty archive for %s-%s-%s" % (pkg, localdata.getVar('PKGV'), localdata.getVar('PKGR')))
+            return
+
+        controldir = os.path.join(root, 'DEBIAN')
+        bb.utils.mkdirhier(controldir)
+        os.chmod(controldir, 0o755)
+
+        ctrlfile = codecs.open(os.path.join(controldir, 'control'), 'w', 'utf-8')
+
+        fields = []
+        pe = d.getVar('PKGE')
+        if pe and int(pe) > 0:
+            fields.append(["Version: %s:%s-%s\n", ['PKGE', 'PKGV', 'PKGR']])
+        else:
+            fields.append(["Version: %s-%s\n", ['PKGV', 'PKGR']])
+        fields.append(["Description: %s\n", ['DESCRIPTION']])
+        fields.append(["Section: %s\n", ['SECTION']])
+        fields.append(["Priority: %s\n", ['PRIORITY']])
+        fields.append(["Maintainer: %s\n", ['MAINTAINER']])
+        fields.append(["Architecture: %s\n", ['DPKG_ARCH']])
+        fields.append(["OE: %s\n", ['PN']])
+        fields.append(["PackageArch: %s\n", ['PACKAGE_ARCH']])
+        if d.getVar('HOMEPAGE'):
+            fields.append(["Homepage: %s\n", ['HOMEPAGE']])
+
+        # Package, Version, Maintainer, Description - mandatory
+        # Section, Priority, Essential, Architecture, Source, Depends, Pre-Depends, Recommends, Suggests, Conflicts, Replaces, Provides - Optional
+
+
+        def pullData(l, d):
+            l2 = []
+            for i in l:
+                data = d.getVar(i)
+                if data is None:
+                    raise KeyError(i)
+                if i == 'DPKG_ARCH' and d.getVar('PACKAGE_ARCH') == 'all':
+                    data = 'all'
+                elif i == 'PACKAGE_ARCH' or i == 'DPKG_ARCH':
+                   # Fields in the deb control file don't allow the `_'
+                   # character, so change the arch's `_' to `-', e.g.
+                   # `x86_64' becomes `x86-64'.
+                   data = data.replace('_', '-')
+                l2.append(data)
+            return l2
+
+        ctrlfile.write("Package: %s\n" % pkgname)
+        if d.getVar('PACKAGE_ARCH') == "all":
+            ctrlfile.write("Multi-Arch: foreign\n")
+        # check for required fields
+        for (c, fs) in fields:
+            # Special behavior for description...
+            if 'DESCRIPTION' in fs:
+                 summary = localdata.getVar('SUMMARY') or localdata.getVar('DESCRIPTION') or "."
+                 ctrlfile.write('Description: %s\n' % summary)
+                 description = localdata.getVar('DESCRIPTION') or "."
+                 description = textwrap.dedent(description).strip()
+                 if '\\n' in description:
+                     # Manually indent
+                     for t in description.split('\\n'):
+                         ctrlfile.write(' %s\n' % (t.strip() or '.'))
+                 else:
+                     # Auto indent
+                     ctrlfile.write('%s\n' % textwrap.fill(description.strip(), width=74, initial_indent=' ', subsequent_indent=' '))
+
+            else:
+                 ctrlfile.write(c % tuple(pullData(fs, localdata)))
+
+        # more fields
+
+        custom_fields_chunk = get_package_additional_metadata("deb", localdata)
+        if custom_fields_chunk:
+            ctrlfile.write(custom_fields_chunk)
+            ctrlfile.write("\n")
+
+        mapping_rename_hook(localdata)
+
+        def debian_cmp_remap(var):
+            # dpkg does not allow for '(', ')' or ':' in a dependency name
+            # Replace any instances of them with '__'
+            #
+            # In debian '>' and '<' do not mean what they appear to mean
+            #   '<' = less or equal
+            #   '>' = greater or equal
+            # adjust these to the '<<' and '>>' equivalents
+            # Also, "=" specifiers only work if they have the PR in, so 1.2.3 != 1.2.3-r0
+            # so to avoid issues, map this to ">= 1.2.3 << 1.2.3.0"
+            for dep in list(var.keys()):
+                if '(' in dep or '/' in dep:
+                    newdep = re.sub(r'[(:)/]', '__', dep)
+                    if newdep.startswith("__"):
+                        newdep = "A" + newdep
+                    if newdep != dep:
+                        var[newdep] = var[dep]
+                        del var[dep]
+            for dep in var:
+                for i, v in enumerate(var[dep]):
+                    if (v or "").startswith("< "):
+                        var[dep][i] = var[dep][i].replace("< ", "<< ")
+                    elif (v or "").startswith("> "):
+                        var[dep][i] = var[dep][i].replace("> ", ">> ")
+                    elif (v or "").startswith("= ") and "-r" not in v:
+                        ver = var[dep][i].replace("= ", "")
+                        var[dep][i] = var[dep][i].replace("= ", ">= ")
+                        var[dep].append("<< " + ver + ".0")
+
+        rdepends = bb.utils.explode_dep_versions2(localdata.getVar("RDEPENDS") or "")
+        debian_cmp_remap(rdepends)
+        for dep in list(rdepends.keys()):
+                if dep == pkg:
+                        del rdepends[dep]
+                        continue
+                if '*' in dep:
+                        del rdepends[dep]
+        rrecommends = bb.utils.explode_dep_versions2(localdata.getVar("RRECOMMENDS") or "")
+        debian_cmp_remap(rrecommends)
+        for dep in list(rrecommends.keys()):
+                if '*' in dep:
+                        del rrecommends[dep]
+        rsuggests = bb.utils.explode_dep_versions2(localdata.getVar("RSUGGESTS") or "")
+        debian_cmp_remap(rsuggests)
+        # Deliberately drop version information here, not wanted/supported by deb
+        rprovides = dict.fromkeys(bb.utils.explode_dep_versions2(localdata.getVar("RPROVIDES") or ""), [])
+        # Remove file paths if any from rprovides, debian does not support custom providers
+        for key in list(rprovides.keys()):
+            if key.startswith('/'):
+                del rprovides[key]
+        rprovides = collections.OrderedDict(sorted(rprovides.items(), key=lambda x: x[0]))
+        debian_cmp_remap(rprovides)
+        rreplaces = bb.utils.explode_dep_versions2(localdata.getVar("RREPLACES") or "")
+        debian_cmp_remap(rreplaces)
+        rconflicts = bb.utils.explode_dep_versions2(localdata.getVar("RCONFLICTS") or "")
+        debian_cmp_remap(rconflicts)
+        if rdepends:
+            ctrlfile.write("Depends: %s\n" % bb.utils.join_deps(rdepends))
+        if rsuggests:
+            ctrlfile.write("Suggests: %s\n" % bb.utils.join_deps(rsuggests))
+        if rrecommends:
+            ctrlfile.write("Recommends: %s\n" % bb.utils.join_deps(rrecommends))
+        if rprovides:
+            ctrlfile.write("Provides: %s\n" % bb.utils.join_deps(rprovides))
+        if rreplaces:
+            ctrlfile.write("Replaces: %s\n" % bb.utils.join_deps(rreplaces))
+        if rconflicts:
+            ctrlfile.write("Conflicts: %s\n" % bb.utils.join_deps(rconflicts))
+        ctrlfile.close()
+
+        for script in ["preinst", "postinst", "prerm", "postrm"]:
+            scriptvar = localdata.getVar('pkg_%s' % script)
+            if not scriptvar:
+                continue
+            scriptvar = scriptvar.strip()
+            scriptfile = open(os.path.join(controldir, script), 'w')
+
+            if scriptvar.startswith("#!"):
+                pos = scriptvar.find("\n") + 1
+                scriptfile.write(scriptvar[:pos])
+            else:
+                pos = 0
+                scriptfile.write("#!/bin/sh\n")
+
+            # Prevent the prerm/postrm scripts from being run during an upgrade
+            if script in ('prerm', 'postrm'):
+                scriptfile.write('[ "$1" != "upgrade" ] || exit 0\n')
+
+            scriptfile.write(scriptvar[pos:])
+            scriptfile.write('\n')
+            scriptfile.close()
+            os.chmod(os.path.join(controldir, script), 0o755)
+
+        conffiles_str = ' '.join(get_conffiles(pkg, d))
+        if conffiles_str:
+            conffiles = open(os.path.join(controldir, 'conffiles'), 'w')
+            for f in conffiles_str.split():
+                if os.path.exists(oe.path.join(root, f)):
+                    conffiles.write('%s\n' % f)
+            conffiles.close()
+
+        os.chdir(basedir)
+        subprocess.check_output("PATH=\"%s\" %s -b %s %s" % (localdata.getVar("PATH"), localdata.getVar("DPKG_BUILDCMD"),
+                                                             root, pkgoutdir),
+                                stderr=subprocess.STDOUT,
+                                shell=True)
+
+    finally:
+        cleanupcontrol(root)
+        bb.utils.unlockfile(lf)
+
+# Otherwise allarch packages may change depending on override configuration
+deb_write_pkg[vardepsexclude] = "OVERRIDES"
+
+# Have to list any variables referenced as X_<pkg> that aren't in pkgdata here
+DEBEXTRAVARS = "PKGV PKGR PKGV DESCRIPTION SECTION PRIORITY MAINTAINER DPKG_ARCH PN HOMEPAGE PACKAGE_ADD_METADATA_DEB"
+do_package_write_deb[vardeps] += "${@gen_packagevar(d, 'DEBEXTRAVARS')}"
+
+SSTATETASKS += "do_package_write_deb"
+do_package_write_deb[sstate-inputdirs] = "${PKGWRITEDIRDEB}"
+do_package_write_deb[sstate-outputdirs] = "${DEPLOY_DIR_DEB}"
+
+python do_package_write_deb_setscene () {
+    tmpdir = d.getVar('TMPDIR')
+
+    if os.access(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"),os.R_OK):
+        os.unlink(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"))
+
+    sstate_setscene(d)
+}
+addtask do_package_write_deb_setscene
+
+python () {
+    if d.getVar('PACKAGES') != '':
+        deps = ' dpkg-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot'
+        d.appendVarFlag('do_package_write_deb', 'depends', deps)
+        d.setVarFlag('do_package_write_deb', 'fakeroot', "1")
+}
+
+python do_package_write_deb () {
+    bb.build.exec_func("read_subpackage_metadata", d)
+    bb.build.exec_func("do_package_deb", d)
+}
+do_package_write_deb[dirs] = "${PKGWRITEDIRDEB}"
+do_package_write_deb[cleandirs] = "${PKGWRITEDIRDEB}"
+do_package_write_deb[depends] += "${@oe.utils.build_depends_string(d.getVar('PACKAGE_WRITE_DEPS'), 'do_populate_sysroot')}"
+addtask package_write_deb after do_packagedata do_package do_deploy_source_date_epoch before do_build
+do_build[rdeptask] += "do_package_write_deb"
+
+PACKAGEINDEXDEPS += "dpkg-native:do_populate_sysroot"
+PACKAGEINDEXDEPS += "apt-native:do_populate_sysroot"
diff --git a/poky/meta/classes-global/package_ipk.bbclass b/poky/meta/classes-global/package_ipk.bbclass
new file mode 100644
index 0000000..c43592a
--- /dev/null
+++ b/poky/meta/classes-global/package_ipk.bbclass
@@ -0,0 +1,292 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit package
+
+IMAGE_PKGTYPE ?= "ipk"
+
+IPKGCONF_TARGET = "${WORKDIR}/opkg.conf"
+IPKGCONF_SDK =  "${WORKDIR}/opkg-sdk.conf"
+IPKGCONF_SDK_TARGET = "${WORKDIR}/opkg-sdk-target.conf"
+
+PKGWRITEDIRIPK = "${WORKDIR}/deploy-ipks"
+
+# Program to be used to build opkg packages
+OPKGBUILDCMD ??= 'opkg-build -Z xz -a "${XZ_DEFAULTS}"'
+
+OPKG_ARGS += "--force_postinstall --prefer-arch-to-version"
+OPKG_ARGS += "${@['', '--no-install-recommends'][d.getVar("NO_RECOMMENDATIONS") == "1"]}"
+OPKG_ARGS += "${@['', '--add-exclude ' + ' --add-exclude '.join((d.getVar('PACKAGE_EXCLUDE') or "").split())][(d.getVar("PACKAGE_EXCLUDE") or "").strip() != ""]}"
+
+OPKGLIBDIR ??= "${localstatedir}/lib"
+
+python do_package_ipk () {
+    workdir = d.getVar('WORKDIR')
+    outdir = d.getVar('PKGWRITEDIRIPK')
+    tmpdir = d.getVar('TMPDIR')
+    pkgdest = d.getVar('PKGDEST')
+    if not workdir or not outdir or not tmpdir:
+        bb.error("Variables incorrectly set, unable to package")
+        return
+
+    packages = d.getVar('PACKAGES')
+    if not packages or packages == '':
+        bb.debug(1, "No packages; nothing to do")
+        return
+
+    # We're about to add new packages, so the package index needs to be rebuilt;
+    # remove the appropriate stamp file.
+    if os.access(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"), os.R_OK):
+        os.unlink(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"))
+
+    oe.utils.multiprocess_launch(ipk_write_pkg, packages.split(), d, extraargs=(d,))
+}
+do_package_ipk[vardeps] += "ipk_write_pkg"
+do_package_ipk[vardepsexclude] = "BB_NUMBER_THREADS"
+
+def ipk_write_pkg(pkg, d):
+    import re, copy
+    import subprocess
+    import textwrap
+    import collections
+    import glob
+
+    def cleanupcontrol(root):
+        for p in ['CONTROL', 'DEBIAN']:
+            p = os.path.join(root, p)
+            if os.path.exists(p):
+                bb.utils.prunedir(p)
+
+    outdir = d.getVar('PKGWRITEDIRIPK')
+    pkgdest = d.getVar('PKGDEST')
+    recipesource = os.path.basename(d.getVar('FILE'))
+
+    localdata = bb.data.createCopy(d)
+    root = "%s/%s" % (pkgdest, pkg)
+
+    lf = bb.utils.lockfile(root + ".lock")
+    try:
+        localdata.setVar('ROOT', '')
+        localdata.setVar('ROOT_%s' % pkg, root)
+        pkgname = localdata.getVar('PKG:%s' % pkg)
+        if not pkgname:
+            pkgname = pkg
+        localdata.setVar('PKG', pkgname)
+
+        localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + pkg)
+
+        basedir = os.path.join(os.path.dirname(root))
+        arch = localdata.getVar('PACKAGE_ARCH')
+
+        if localdata.getVar('IPK_HIERARCHICAL_FEED', False) == "1":
+            # Spread packages across subdirectories so each isn't too crowded
+            if pkgname.startswith('lib'):
+                pkg_prefix = 'lib' + pkgname[3]
+            else:
+                pkg_prefix = pkgname[0]
+
+            # Keep -dbg, -dev, -doc, -staticdev, -locale and -locale-* packages
+            # together. These package suffixes are taken from the definitions of
+            # PACKAGES and PACKAGES_DYNAMIC in meta/conf/bitbake.conf
+            if pkgname[-4:] in ('-dbg', '-dev', '-doc'):
+                pkg_subdir = pkgname[:-4]
+            elif pkgname.endswith('-staticdev'):
+                pkg_subdir = pkgname[:-10]
+            elif pkgname.endswith('-locale'):
+                pkg_subdir = pkgname[:-7]
+            elif '-locale-' in pkgname:
+                pkg_subdir = pkgname[:pkgname.find('-locale-')]
+            else:
+                pkg_subdir = pkgname
+
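+            # Illustrative: with IPK_HIERARCHICAL_FEED enabled a package named "libfoo1-dev"
+            # would be written under <outdir>/<arch>/libf/libfoo1.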
+            pkgoutdir = "%s/%s/%s/%s" % (outdir, arch, pkg_prefix, pkg_subdir)
+        else:
+            pkgoutdir = "%s/%s" % (outdir, arch)
+
+        bb.utils.mkdirhier(pkgoutdir)
+        os.chdir(root)
+        cleanupcontrol(root)
+        g = glob.glob('*')
+        if not g and localdata.getVar('ALLOW_EMPTY', False) != "1":
+            bb.note("Not creating empty archive for %s-%s-%s" % (pkg, localdata.getVar('PKGV'), localdata.getVar('PKGR')))
+            return
+
+        controldir = os.path.join(root, 'CONTROL')
+        bb.utils.mkdirhier(controldir)
+        ctrlfile = open(os.path.join(controldir, 'control'), 'w')
+
+        fields = []
+        pe = d.getVar('PKGE')
+        if pe and int(pe) > 0:
+            fields.append(["Version: %s:%s-%s\n", ['PKGE', 'PKGV', 'PKGR']])
+        else:
+            fields.append(["Version: %s-%s\n", ['PKGV', 'PKGR']])
+        fields.append(["Description: %s\n", ['DESCRIPTION']])
+        fields.append(["Section: %s\n", ['SECTION']])
+        fields.append(["Priority: %s\n", ['PRIORITY']])
+        fields.append(["Maintainer: %s\n", ['MAINTAINER']])
+        fields.append(["License: %s\n", ['LICENSE']])
+        fields.append(["Architecture: %s\n", ['PACKAGE_ARCH']])
+        fields.append(["OE: %s\n", ['PN']])
+        if d.getVar('HOMEPAGE'):
+            fields.append(["Homepage: %s\n", ['HOMEPAGE']])
+
+        def pullData(l, d):
+            l2 = []
+            for i in l:
+                l2.append(d.getVar(i))
+            return l2
+
+        ctrlfile.write("Package: %s\n" % pkgname)
+        # check for required fields
+        for (c, fs) in fields:
+            for f in fs:
+                if localdata.getVar(f, False) is None:
+                    raise KeyError(f)
+            # Special behavior for description...
+            if 'DESCRIPTION' in fs:
+                summary = localdata.getVar('SUMMARY') or localdata.getVar('DESCRIPTION') or "."
+                ctrlfile.write('Description: %s\n' % summary)
+                description = localdata.getVar('DESCRIPTION') or "."
+                description = textwrap.dedent(description).strip()
+                if '\\n' in description:
+                    # Manually indent: multiline description includes a leading space
+                    for t in description.split('\\n'):
+                        ctrlfile.write(' %s\n' % (t.strip() or ' .'))
+                else:
+                    # Auto indent
+                    ctrlfile.write('%s\n' % textwrap.fill(description, width=74, initial_indent=' ', subsequent_indent=' '))
+            else:
+                ctrlfile.write(c % tuple(pullData(fs, localdata)))
+
+        custom_fields_chunk = get_package_additional_metadata("ipk", localdata)
+        if custom_fields_chunk is not None:
+            ctrlfile.write(custom_fields_chunk)
+            ctrlfile.write("\n")
+
+        mapping_rename_hook(localdata)
+
+        def debian_cmp_remap(var):
+            # In debian '>' and '<' do not mean what they appear to mean
+            #   '<' = less or equal
+            #   '>' = greater or equal
+            # adjust these to the '<<' and '>>' equivalents
+            # Also, "=" specifiers only work if they have the PR in, so 1.2.3 != 1.2.3-r0
+            # so to avoid issues, map this to ">= 1.2.3 << 1.2.3.0"
+            for dep in var:
+                for i, v in enumerate(var[dep]):
+                    if (v or "").startswith("< "):
+                        var[dep][i] = var[dep][i].replace("< ", "<< ")
+                    elif (v or "").startswith("> "):
+                        var[dep][i] = var[dep][i].replace("> ", ">> ")
+                    elif (v or "").startswith("= ") and "-r" not in v:
+                        ver = var[dep][i].replace("= ", "")
+                        var[dep][i] = var[dep][i].replace("= ", ">= ")
+                        var[dep].append("<< " + ver + ".0")
+
+        rdepends = bb.utils.explode_dep_versions2(localdata.getVar("RDEPENDS") or "")
+        debian_cmp_remap(rdepends)
+        rrecommends = bb.utils.explode_dep_versions2(localdata.getVar("RRECOMMENDS") or "")
+        debian_cmp_remap(rrecommends)
+        rsuggests = bb.utils.explode_dep_versions2(localdata.getVar("RSUGGESTS") or "")
+        debian_cmp_remap(rsuggests)
+        # Deliberately drop version information here, not wanted/supported by ipk
+        rprovides = dict.fromkeys(bb.utils.explode_dep_versions2(localdata.getVar("RPROVIDES") or ""), [])
+        rprovides = collections.OrderedDict(sorted(rprovides.items(), key=lambda x: x[0]))
+        debian_cmp_remap(rprovides)
+        rreplaces = bb.utils.explode_dep_versions2(localdata.getVar("RREPLACES") or "")
+        debian_cmp_remap(rreplaces)
+        rconflicts = bb.utils.explode_dep_versions2(localdata.getVar("RCONFLICTS") or "")
+        debian_cmp_remap(rconflicts)
+
+        if rdepends:
+            ctrlfile.write("Depends: %s\n" % bb.utils.join_deps(rdepends))
+        if rsuggests:
+            ctrlfile.write("Suggests: %s\n" % bb.utils.join_deps(rsuggests))
+        if rrecommends:
+            ctrlfile.write("Recommends: %s\n" % bb.utils.join_deps(rrecommends))
+        if rprovides:
+            ctrlfile.write("Provides: %s\n" % bb.utils.join_deps(rprovides))
+        if rreplaces:
+            ctrlfile.write("Replaces: %s\n" % bb.utils.join_deps(rreplaces))
+        if rconflicts:
+            ctrlfile.write("Conflicts: %s\n" % bb.utils.join_deps(rconflicts))
+        ctrlfile.write("Source: %s\n" % recipesource)
+        ctrlfile.close()
+
+        for script in ["preinst", "postinst", "prerm", "postrm"]:
+            scriptvar = localdata.getVar('pkg_%s' % script)
+            if not scriptvar:
+                continue
+            scriptfile = open(os.path.join(controldir, script), 'w')
+            scriptfile.write(scriptvar)
+            scriptfile.close()
+            os.chmod(os.path.join(controldir, script), 0o755)
+
+        conffiles_str = ' '.join(get_conffiles(pkg, d))
+        if conffiles_str:
+            conffiles = open(os.path.join(controldir, 'conffiles'), 'w')
+            for f in conffiles_str.split():
+                if os.path.exists(oe.path.join(root, f)):
+                    conffiles.write('%s\n' % f)
+            conffiles.close()
+
+        os.chdir(basedir)
+        subprocess.check_output("PATH=\"%s\" %s %s %s" % (localdata.getVar("PATH"),
+                                                          d.getVar("OPKGBUILDCMD"), pkg, pkgoutdir),
+                                stderr=subprocess.STDOUT,
+                                shell=True)
+
+        if d.getVar('IPK_SIGN_PACKAGES') == '1':
+            ipkver = "%s-%s" % (localdata.getVar('PKGV'), localdata.getVar('PKGR'))
+            ipk_to_sign = "%s/%s_%s_%s.ipk" % (pkgoutdir, pkgname, ipkver, localdata.getVar('PACKAGE_ARCH'))
+            sign_ipk(d, ipk_to_sign)
+
+    finally:
+        cleanupcontrol(root)
+        bb.utils.unlockfile(lf)
+
+# Have to list any variables referenced as X_<pkg> that aren't in pkgdata here
+IPKEXTRAVARS = "PRIORITY MAINTAINER PACKAGE_ARCH HOMEPAGE PACKAGE_ADD_METADATA_IPK"
+ipk_write_pkg[vardeps] += "${@gen_packagevar(d, 'IPKEXTRAVARS')}"
+
+# Otherwise allarch packages may change depending on override configuration
+ipk_write_pkg[vardepsexclude] = "OVERRIDES"
+
+
+SSTATETASKS += "do_package_write_ipk"
+do_package_write_ipk[sstate-inputdirs] = "${PKGWRITEDIRIPK}"
+do_package_write_ipk[sstate-outputdirs] = "${DEPLOY_DIR_IPK}"
+
+python do_package_write_ipk_setscene () {
+    tmpdir = d.getVar('TMPDIR')
+
+    if os.access(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"), os.R_OK):
+        os.unlink(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"))
+
+    sstate_setscene(d)
+}
+addtask do_package_write_ipk_setscene
+
+python () {
+    if d.getVar('PACKAGES') != '':
+        deps = ' opkg-utils-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot xz-native:do_populate_sysroot'
+        d.appendVarFlag('do_package_write_ipk', 'depends', deps)
+        d.setVarFlag('do_package_write_ipk', 'fakeroot', "1")
+}
+
+python do_package_write_ipk () {
+    bb.build.exec_func("read_subpackage_metadata", d)
+    bb.build.exec_func("do_package_ipk", d)
+}
+do_package_write_ipk[dirs] = "${PKGWRITEDIRIPK}"
+do_package_write_ipk[cleandirs] = "${PKGWRITEDIRIPK}"
+do_package_write_ipk[depends] += "${@oe.utils.build_depends_string(d.getVar('PACKAGE_WRITE_DEPS'), 'do_populate_sysroot')}"
+addtask package_write_ipk after do_packagedata do_package do_deploy_source_date_epoch before do_build
+do_build[rdeptask] += "do_package_write_ipk"
+
+PACKAGEINDEXDEPS += "opkg-utils-native:do_populate_sysroot"
+PACKAGEINDEXDEPS += "opkg-native:do_populate_sysroot"
diff --git a/poky/meta/classes-global/package_pkgdata.bbclass b/poky/meta/classes-global/package_pkgdata.bbclass
new file mode 100644
index 0000000..f653bd9
--- /dev/null
+++ b/poky/meta/classes-global/package_pkgdata.bbclass
@@ -0,0 +1,173 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+WORKDIR_PKGDATA = "${WORKDIR}/pkgdata-sysroot"
+
+def package_populate_pkgdata_dir(pkgdatadir, d):
+    import glob
+
+    postinsts = []
+    seendirs = set()
+    stagingdir = d.getVar("PKGDATA_DIR")
+    pkgarchs = ['${MACHINE_ARCH}']
+    pkgarchs = pkgarchs + list(reversed(d.getVar("PACKAGE_EXTRA_ARCHS").split()))
+    pkgarchs.append('allarch')
+
+    bb.utils.mkdirhier(pkgdatadir)
+    for pkgarch in pkgarchs:
+        for manifest in glob.glob(d.expand("${SSTATE_MANIFESTS}/manifest-%s-*.packagedata" % pkgarch)):
+            with open(manifest, "r") as f:
+                for l in f:
+                    l = l.strip()
+                    dest = l.replace(stagingdir, "")
+                    if l.endswith("/"):
+                        staging_copydir(l, pkgdatadir, dest, seendirs)
+                        continue
+                    try:
+                        staging_copyfile(l, pkgdatadir, dest, postinsts, seendirs)
+                    except FileExistsError:
+                        continue
+
+python package_prepare_pkgdata() {
+    import copy
+    import glob
+
+    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
+    mytaskname = d.getVar("BB_RUNTASK")
+    if mytaskname.endswith("_setscene"):
+        mytaskname = mytaskname.replace("_setscene", "")
+    workdir = d.getVar("WORKDIR")
+    pn = d.getVar("PN")
+    stagingdir = d.getVar("PKGDATA_DIR")
+    pkgdatadir = d.getVar("WORKDIR_PKGDATA")
+
+    # Detect bitbake -b usage
+    nodeps = d.getVar("BB_LIMITEDDEPS") or False
+    if nodeps:
+        package_populate_pkgdata_dir(pkgdatadir, d)
+        return
+
+    start = None
+    configuredeps = []
+    for dep in taskdepdata:
+        data = taskdepdata[dep]
+        if data[1] == mytaskname and data[0] == pn:
+            start = dep
+            break
+    if start is None:
+        bb.fatal("Couldn't find ourself in BB_TASKDEPDATA?")
+
+    # We need to figure out which sysroot files we need to expose to this task.
+    # This needs to match what would get restored from sstate, which is controlled
+    # ultimately by calls from bitbake to setscene_depvalid().
+    # That function expects a setscene dependency tree. We build a dependency tree
+    # condensed to inter-sstate task dependencies, similar to that used by setscene
+    # tasks. We can then call into setscene_depvalid() and decide
+    # which dependencies we can "see" and should expose in the recipe specific sysroot.
+    setscenedeps = copy.deepcopy(taskdepdata)
+
+    start = set([start])
+
+    sstatetasks = d.getVar("SSTATETASKS").split()
+    # Add recipe specific tasks referenced by setscene_depvalid()
+    sstatetasks.append("do_stash_locale")
+
+    # If start is an sstate task (like do_package) we need to add in its direct dependencies
+    # else the code below won't recurse into them.
+    for dep in set(start):
+        for dep2 in setscenedeps[dep][3]:
+            start.add(dep2)
+        start.remove(dep)
+
+    # Create collapsed do_populate_sysroot -> do_populate_sysroot tree
+    for dep in taskdepdata:
+        data = setscenedeps[dep]
+        if data[1] not in sstatetasks:
+            for dep2 in setscenedeps:
+                data2 = setscenedeps[dep2]
+                if dep in data2[3]:
+                    data2[3].update(setscenedeps[dep][3])
+                    data2[3].remove(dep)
+            if dep in start:
+                start.update(setscenedeps[dep][3])
+                start.remove(dep)
+            del setscenedeps[dep]
+
+    # Remove circular references
+    for dep in setscenedeps:
+        if dep in setscenedeps[dep][3]:
+            setscenedeps[dep][3].remove(dep)
+
+    # Direct dependencies should be present and can be depended upon
+    for dep in set(start):
+        if setscenedeps[dep][1] == "do_packagedata":
+            if dep not in configuredeps:
+                configuredeps.append(dep)
+
+    msgbuf = []
+    # Call into setscene_depvalid for each sub-dependency and only copy sysroot files
+    # for ones that would be restored from sstate.
+    done = list(start)
+    next = list(start)
+    while next:
+        new = []
+        for dep in next:
+            data = setscenedeps[dep]
+            for datadep in data[3]:
+                if datadep in done:
+                    continue
+                taskdeps = {}
+                taskdeps[dep] = setscenedeps[dep][:2]
+                taskdeps[datadep] = setscenedeps[datadep][:2]
+                retval = setscene_depvalid(datadep, taskdeps, [], d, msgbuf)
+                done.append(datadep)
+                new.append(datadep)
+                if retval:
+                    msgbuf.append("Skipping setscene dependency %s" % datadep)
+                    continue
+                if datadep not in configuredeps and setscenedeps[datadep][1] == "do_packagedata":
+                    configuredeps.append(datadep)
+                    msgbuf.append("Adding dependency on %s" % setscenedeps[datadep][0])
+                else:
+                    msgbuf.append("Following dependency on %s" % setscenedeps[datadep][0])
+        next = new
+
+    # This logging is too verbose for day to day use sadly
+    #bb.debug(2, "\n".join(msgbuf))
+
+    seendirs = set()
+    postinsts = []
+    multilibs = {}
+    manifests = {}
+
+    msg_adding = []
+
+    for dep in configuredeps:
+        c = setscenedeps[dep][0]
+        msg_adding.append(c)
+
+        manifest, d2 = oe.sstatesig.find_sstate_manifest(c, setscenedeps[dep][2], "packagedata", d, multilibs)
+        destsysroot = pkgdatadir
+
+        if manifest:
+            targetdir = destsysroot
+            with open(manifest, "r") as f:
+                manifests[dep] = manifest
+                for l in f:
+                    l = l.strip()
+                    dest = targetdir + l.replace(stagingdir, "")
+                    if l.endswith("/"):
+                        staging_copydir(l, targetdir, dest, seendirs)
+                        continue
+                    staging_copyfile(l, targetdir, dest, postinsts, seendirs)
+
+    bb.note("Installed into pkgdata-sysroot: %s" % str(msg_adding))
+
+}
+package_prepare_pkgdata[cleandirs] = "${WORKDIR_PKGDATA}"
+package_prepare_pkgdata[vardepsexclude] += "MACHINE_ARCH PACKAGE_EXTRA_ARCHS SDK_ARCH BUILD_ARCH SDK_OS BB_TASKDEPDATA SSTATETASKS"
+
+
diff --git a/poky/meta/classes-global/package_rpm.bbclass b/poky/meta/classes-global/package_rpm.bbclass
new file mode 100644
index 0000000..81a2060
--- /dev/null
+++ b/poky/meta/classes-global/package_rpm.bbclass
@@ -0,0 +1,755 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit package
+
+IMAGE_PKGTYPE ?= "rpm"
+
+RPM="rpm"
+RPMBUILD="rpmbuild"
+
+PKGWRITEDIRRPM = "${WORKDIR}/deploy-rpms"
+
+# Maintaining the per-file dependencies has significant overhead when writing the
+# packages. When set, this value merges them for efficiency.
+MERGEPERFILEDEPS = "1"
+
+# Filter dependencies based on a provided function.
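+# Illustrative: filter_deps("foo (>= 1.0) /bin/sh", lambda dep: not dep.startswith("/"))
+# returns "foo (>= 1.0)", dropping the absolute-path dependency.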
+def filter_deps(var, f):
+    import collections
+
+    depends_dict = bb.utils.explode_dep_versions2(var)
+    newdeps_dict = collections.OrderedDict()
+    for dep in depends_dict:
+        if f(dep):
+            newdeps_dict[dep] = depends_dict[dep]
+    return bb.utils.join_deps(newdeps_dict, commasep=False)
+
+# Filter out absolute paths (typically /bin/sh and /usr/bin/env) and any perl
+# dependencies for nativesdk packages.
+def filter_nativesdk_deps(srcname, var):
+    if var and srcname.startswith("nativesdk-"):
+        var = filter_deps(var, lambda dep: not dep.startswith('/') and dep != 'perl' and not dep.startswith('perl('))
+    return var
+
+# Construct per file dependencies file
+def write_rpm_perfiledata(srcname, d):
+    workdir = d.getVar('WORKDIR')
+    packages = d.getVar('PACKAGES')
+    pkgd = d.getVar('PKGD')
+
+    def dump_filerdeps(varname, outfile, d):
+        outfile.write("#!/usr/bin/env python3\n\n")
+        outfile.write("# Dependency table\n")
+        outfile.write('deps = {\n')
+        for pkg in packages.split():
+            dependsflist_key = 'FILE' + varname + 'FLIST' + ":" + pkg
+            dependsflist = (d.getVar(dependsflist_key) or "")
+            for dfile in dependsflist.split():
+                key = "FILE" + varname + ":" + dfile + ":" + pkg
+                deps = filter_nativesdk_deps(srcname, d.getVar(key) or "")
+                depends_dict = bb.utils.explode_dep_versions(deps)
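+                # dfile is the file path with awkward characters escaped by do_package
+                # (e.g. "_" stored as "@underscore@"); undo the escaping here to recover
+                # the real on-disk path used as the lookup key.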
+                file = dfile.replace("@underscore@", "_")
+                file = file.replace("@closebrace@", "]")
+                file = file.replace("@openbrace@", "[")
+                file = file.replace("@tab@", "\t")
+                file = file.replace("@space@", " ")
+                file = file.replace("@at@", "@")
+                outfile.write('"' + pkgd + file + '" : "')
+                for dep in depends_dict:
+                    ver = depends_dict[dep]
+                    if dep and ver:
+                        ver = ver.replace("(","")
+                        ver = ver.replace(")","")
+                        outfile.write(dep + " " + ver + " ")
+                    else:
+                        outfile.write(dep + " ")
+                outfile.write('",\n')
+        outfile.write('}\n\n')
+        outfile.write("import sys\n")
+        outfile.write("while 1:\n")
+        outfile.write("\tline = sys.stdin.readline().strip()\n")
+        outfile.write("\tif not line:\n")
+        outfile.write("\t\tsys.exit(0)\n")
+        outfile.write("\tif line in deps:\n")
+        outfile.write("\t\tprint(deps[line] + '\\n')\n")
+
+    # OE-core dependencies a.k.a. RPM requires
+    outdepends = workdir + "/" + srcname + ".requires"
+
+    dependsfile = open(outdepends, 'w')
+
+    dump_filerdeps('RDEPENDS', dependsfile, d)
+
+    dependsfile.close()
+    os.chmod(outdepends, 0o755)
+
+    # OE-core / RPM Provides
+    outprovides = workdir + "/" + srcname + ".provides"
+
+    providesfile = open(outprovides, 'w')
+
+    dump_filerdeps('RPROVIDES', providesfile, d)
+
+    providesfile.close()
+    os.chmod(outprovides, 0o755)
+
+    return (outdepends, outprovides)
+
+
+python write_specfile () {
+    import oe.packagedata
+
+    # append information for logs and patches to %prep
+    def add_prep(d,spec_files_bottom):
+        if d.getVarFlag('ARCHIVER_MODE', 'srpm') == '1' and bb.data.inherits_class('archiver', d):
+            spec_files_bottom.append('%%prep -n %s' % d.getVar('PN') )
+            spec_files_bottom.append('%s' % "echo \"include logs and patches, Please check them in SOURCES\"")
+            spec_files_bottom.append('')
+
+    # append the name of tarball to key word 'SOURCE' in xxx.spec.
+    def tail_source(d):
+        if d.getVarFlag('ARCHIVER_MODE', 'srpm') == '1' and bb.data.inherits_class('archiver', d):
+            ar_outdir = d.getVar('ARCHIVER_OUTDIR')
+            if not os.path.exists(ar_outdir):
+                return
+            source_list = os.listdir(ar_outdir)
+            source_number = 0
+            for source in source_list:
+                # do_deploy_archives may have already run (from sstate), meaning a .src.rpm
+                # may already exist in ARCHIVER_OUTDIR, so skip it if present.
+                if source.endswith(".src.rpm"):
+                    continue
+                # rpmbuild doesn't need root permission, but it does need to
+                # know each file's user and group name; when working in
+                # fakeroot the only user and group available is "root".
+                f = os.path.join(ar_outdir, source)
+                os.chown(f, 0, 0)
+                spec_preamble_top.append('Source%s: %s' % (source_number, source))
+                source_number += 1
+
+    # In RPM, dependencies are of the format: pkg <>= Epoch:Version-Release
+    # This format is similar to OE, however there are restrictions on the
+    # characters that can be in a field.  In the Version field, "-"
+    # characters are not allowed.  "-" is allowed in the Release field.
+    #
+    # We translate the "-" in the version to a "+", by loading the PKGV
+    # from the dependent recipe, replacing the - with a +, and then using
+    # that value to do a replace inside of this recipe's dependencies.
+    # This preserves the "-" separator between the version and release, as
+    # well as any "-" characters inside of the release field.
+    #
+    # All of this has to happen BEFORE the mapping_rename_hook as
+    # after renaming we cannot look up the dependencies in the packagedata
+    # store.
+    def translate_vers(varname, d):
+        depends = d.getVar(varname)
+        if depends:
+            depends_dict = bb.utils.explode_dep_versions2(depends)
+            newdeps_dict = {}
+            for dep in depends_dict:
+                verlist = []
+                for ver in depends_dict[dep]:
+                    if '-' in ver:
+                        subd = oe.packagedata.read_subpkgdata_dict(dep, d)
+                        if 'PKGV' in subd:
+                            pv = subd['PV']
+                            pkgv = subd['PKGV']
+                            reppv = pkgv.replace('-', '+')
+                            ver = ver.replace(pv, reppv).replace(pkgv, reppv)
+                        if 'PKGR' in subd:
+                            # Make sure PKGR rather than PR in ver
+                            pr = '-' + subd['PR']
+                            pkgr = '-' + subd['PKGR']
+                            if pkgr not in ver:
+                                ver = ver.replace(pr, pkgr)
+                        verlist.append(ver)
+                    else:
+                        verlist.append(ver)
+                newdeps_dict[dep] = verlist
+            depends = bb.utils.join_deps(newdeps_dict)
+            d.setVar(varname, depends.strip())
+
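As a concrete example of the translation above: if the dependent recipe has PKGV "2.0-rc1" and this recipe records "foo (>= 2.0-rc1-r3)" in RDEPENDS, the constraint becomes "foo (>= 2.0+rc1-r3)", leaving "-" only as the Version-Release separator RPM expects. The core of the rewrite, as plain string handling with hypothetical values in place of the pkgdata lookup:

    # Illustrative only: the '-' -> '+' rewrite translate_vers() applies per version.
    pkgv = "2.0-rc1"                  # PKGV of the dependency, read from pkgdata
    reppv = pkgv.replace('-', '+')    # "2.0+rc1"
    ver = ">= 2.0-rc1-r3"             # constraint stored in this recipe's RDEPENDS
    ver = ver.replace(pkgv, reppv)    # ">= 2.0+rc1-r3"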
+    # We need to change the style the dependency from BB to RPM
+    # This needs to happen AFTER the mapping_rename_hook
+    def print_deps(variable, tag, array, d):
+        depends = variable
+        if depends:
+            depends_dict = bb.utils.explode_dep_versions2(depends)
+            for dep in depends_dict:
+                for ver in depends_dict[dep]:
+                    ver = ver.replace('(', '')
+                    ver = ver.replace(')', '')
+                    array.append("%s: %s %s" % (tag, dep, ver))
+                if not len(depends_dict[dep]):
+                    array.append("%s: %s" % (tag, dep))
+
+    def walk_files(walkpath, target, conffiles, dirfiles):
+        # We can race against the ipk/deb backends which create CONTROL or DEBIAN
+        # directories when packaging. We just ignore these files, which are created
+        # in packages-split/ and not package/.
+        # We also have the odd situation where the CONTROL/DEBIAN directory can be
+        # removed in the middle of the walk; the isdir() test would then fail and the
+        # walk code would assume it's a file, hence we check for the names in files too.
+        for rootpath, dirs, files in os.walk(walkpath):
+            path = rootpath.replace(walkpath, "")
+            if path.endswith("DEBIAN") or path.endswith("CONTROL"):
+                continue
+            path = path.replace("%", "%%%%%%%%")
+
+            # Treat all symlinks to directories as normal files.
+            # os.walk() lists them as directories.
+            def move_to_files(dir):
+                if os.path.islink(os.path.join(rootpath, dir)):
+                    files.append(dir)
+                    return True
+                else:
+                    return False
+            dirs[:] = [dir for dir in dirs if not move_to_files(dir)]
+
+            # Directory handling can happen in two ways: either DIRFILES is not set at all,
+            # in which case we fall back to the older behaviour of packages owning all their
+            # directories, or DIRFILES is set and is handled in the else branch below.
+            if dirfiles is None:
+                for dir in dirs:
+                    if dir == "CONTROL" or dir == "DEBIAN":
+                        continue
+                    dir = dir.replace("%", "%%%%%%%%")
+                    # All packages own the directories their files are in...
+                    target.append('%dir "' + path + '/' + dir + '"')
+            else:
+                # Packages own only empty directories or directories explicitly
+                # listed in DIRFILES. This prevents overlapping security
+                # permissions between packages.
+                if path and not files and not dirs:
+                    target.append('%dir "' + path + '"')
+                elif path and path in dirfiles:
+                    target.append('%dir "' + path + '"')
+
+            for file in files:
+                if file == "CONTROL" or file == "DEBIAN":
+                    continue
+                file = file.replace("%", "%%%%%%%%")
+                if conffiles.count(path + '/' + file):
+                    target.append('%config "' + path + '/' + file + '"')
+                else:
+                    target.append('"' + path + '/' + file + '"')
+
+    # Prevent the prerm/postrm scripts from being run during an upgrade
+    def wrap_uninstall(scriptvar):
+        scr = scriptvar.strip()
+        if scr.startswith("#!"):
+            pos = scr.find("\n") + 1
+        else:
+            pos = 0
+        scr = scr[:pos] + 'if [ "$1" = "0" ] ; then\n' + scr[pos:] + '\nfi'
+        return scr
+
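RPM passes the number of package instances that will remain to %preun/%postun as $1, so "0" means the package is really being removed rather than upgraded; the wrapper above uses that to skip the OE prerm/postrm body on upgrades. A hypothetical pkg_prerm and the scriptlet it becomes:

    # Illustrative only: an example input scriptlet ...
    scriptvar = "#!/bin/sh\nupdate-alternatives --remove editor /usr/bin/foo\n"

    # ... and the %preun body wrap_uninstall(scriptvar) returns for it:
    #   #!/bin/sh
    #   if [ "$1" = "0" ] ; then
    #   update-alternatives --remove editor /usr/bin/foo
    #   fi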
+    def get_perfile(varname, pkg, d):
+        deps = []
+        dependsflist_key = 'FILE' + varname + 'FLIST' + ":" + pkg
+        dependsflist = (d.getVar(dependsflist_key) or "")
+        for dfile in dependsflist.split():
+            key = "FILE" + varname + ":" + dfile + ":" + pkg
+            depends = d.getVar(key)
+            if depends:
+                deps.append(depends)
+        return " ".join(deps)
+
+    def append_description(spec_preamble, text):
+        """
+        Add the description to the spec file.
+        """
+        import textwrap
+        dedent_text = textwrap.dedent(text).strip()
+        # Bitbake saves "\n" as "\\n"
+        if '\\n' in dedent_text:
+            for t in dedent_text.split('\\n'):
+                spec_preamble.append(t.strip())
+        else:
+            spec_preamble.append('%s' % textwrap.fill(dedent_text, width=75))
+
+    packages = d.getVar('PACKAGES')
+    if not packages or packages == '':
+        bb.debug(1, "No packages; nothing to do")
+        return
+
+    pkgdest = d.getVar('PKGDEST')
+    if not pkgdest:
+        bb.fatal("No PKGDEST")
+
+    outspecfile = d.getVar('OUTSPECFILE')
+    if not outspecfile:
+        bb.fatal("No OUTSPECFILE")
+
+    # Construct the SPEC file...
+    srcname    = d.getVar('PN')
+    localdata = bb.data.createCopy(d)
+    localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + srcname)
+    srcsummary = (localdata.getVar('SUMMARY') or localdata.getVar('DESCRIPTION') or ".")
+    srcversion = localdata.getVar('PKGV').replace('-', '+')
+    srcrelease = localdata.getVar('PKGR')
+    srcepoch   = (localdata.getVar('PKGE') or "")
+    srclicense = localdata.getVar('LICENSE')
+    srcsection = localdata.getVar('SECTION')
+    srcmaintainer  = localdata.getVar('MAINTAINER')
+    srchomepage    = localdata.getVar('HOMEPAGE')
+    srcdescription = localdata.getVar('DESCRIPTION') or "."
+    srccustomtagschunk = get_package_additional_metadata("rpm", localdata)
+
+    srcdepends     = d.getVar('DEPENDS')
+    srcrdepends    = ""
+    srcrrecommends = ""
+    srcrsuggests   = ""
+    srcrprovides   = ""
+    srcrreplaces   = ""
+    srcrconflicts  = ""
+    srcrobsoletes  = ""
+
+    srcrpreinst  = []
+    srcrpostinst = []
+    srcrprerm    = []
+    srcrpostrm   = []
+
+    spec_preamble_top = []
+    spec_preamble_bottom = []
+
+    spec_scriptlets_top = []
+    spec_scriptlets_bottom = []
+
+    spec_files_top = []
+    spec_files_bottom = []
+
+    perfiledeps = (d.getVar("MERGEPERFILEDEPS") or "0") == "0"
+    extra_pkgdata = (d.getVar("RPM_EXTRA_PKGDATA") or "0") == "1"
+
+    for pkg in packages.split():
+        localdata = bb.data.createCopy(d)
+
+        root = "%s/%s" % (pkgdest, pkg)
+
+        localdata.setVar('ROOT', '')
+        localdata.setVar('ROOT_%s' % pkg, root)
+        pkgname = localdata.getVar('PKG:%s' % pkg)
+        if not pkgname:
+            pkgname = pkg
+        localdata.setVar('PKG', pkgname)
+
+        localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + pkg)
+
+        conffiles = get_conffiles(pkg, d)
+        dirfiles = localdata.getVar('DIRFILES')
+        if dirfiles is not None:
+            dirfiles = dirfiles.split()
+
+        splitname    = pkgname
+
+        splitsummary = (localdata.getVar('SUMMARY') or localdata.getVar('DESCRIPTION') or ".")
+        splitversion = (localdata.getVar('PKGV') or "").replace('-', '+')
+        splitrelease = (localdata.getVar('PKGR') or "")
+        splitepoch   = (localdata.getVar('PKGE') or "")
+        splitlicense = (localdata.getVar('LICENSE') or "")
+        splitsection = (localdata.getVar('SECTION') or "")
+        splitdescription = (localdata.getVar('DESCRIPTION') or ".")
+        splitcustomtagschunk = get_package_additional_metadata("rpm", localdata)
+
+        translate_vers('RDEPENDS', localdata)
+        translate_vers('RRECOMMENDS', localdata)
+        translate_vers('RSUGGESTS', localdata)
+        translate_vers('RPROVIDES', localdata)
+        translate_vers('RREPLACES', localdata)
+        translate_vers('RCONFLICTS', localdata)
+
+        # Map the dependencies into their final form
+        mapping_rename_hook(localdata)
+
+        splitrdepends    = localdata.getVar('RDEPENDS') or ""
+        splitrrecommends = localdata.getVar('RRECOMMENDS') or ""
+        splitrsuggests   = localdata.getVar('RSUGGESTS') or ""
+        splitrprovides   = localdata.getVar('RPROVIDES') or ""
+        splitrreplaces   = localdata.getVar('RREPLACES') or ""
+        splitrconflicts  = localdata.getVar('RCONFLICTS') or ""
+        splitrobsoletes  = ""
+
+        splitrpreinst  = localdata.getVar('pkg_preinst')
+        splitrpostinst = localdata.getVar('pkg_postinst')
+        splitrprerm    = localdata.getVar('pkg_prerm')
+        splitrpostrm   = localdata.getVar('pkg_postrm')
+
+
+        if not perfiledeps:
+            # Add in summary of per file dependencies
+            splitrdepends = splitrdepends + " " + get_perfile('RDEPENDS', pkg, d)
+            splitrprovides = splitrprovides + " " + get_perfile('RPROVIDES', pkg, d)
+
+        splitrdepends = filter_nativesdk_deps(srcname, splitrdepends)
+
+        # Gather special src/first package data
+        if srcname == splitname:
+            archiving = d.getVarFlag('ARCHIVER_MODE', 'srpm') == '1' and \
+                        bb.data.inherits_class('archiver', d)
+            if archiving and srclicense != splitlicense:
+                bb.warn("The SRPM produced may not have the correct overall source license in the License tag. This is due to the LICENSE for the primary package and SRPM conflicting.")
+
+            srclicense     = splitlicense
+            srcrdepends    = splitrdepends
+            srcrrecommends = splitrrecommends
+            srcrsuggests   = splitrsuggests
+            srcrprovides   = splitrprovides
+            srcrreplaces   = splitrreplaces
+            srcrconflicts  = splitrconflicts
+
+            srcrpreinst    = splitrpreinst
+            srcrpostinst   = splitrpostinst
+            srcrprerm      = splitrprerm
+            srcrpostrm     = splitrpostrm
+
+            file_list = []
+            walk_files(root, file_list, conffiles, dirfiles)
+            if not file_list and localdata.getVar('ALLOW_EMPTY', False) != "1":
+                bb.note("Not creating empty RPM package for %s" % splitname)
+            else:
+                spec_files_top.append('%files')
+                if extra_pkgdata:
+                    package_rpm_extra_pkgdata(splitname, spec_files_top, localdata)
+                spec_files_top.append('%defattr(-,-,-,-)')
+                if file_list:
+                    bb.note("Creating RPM package for %s" % splitname)
+                    spec_files_top.extend(file_list)
+                else:
+                    bb.note("Creating empty RPM package for %s" % splitname)
+                spec_files_top.append('')
+            continue
+
+        # Process subpackage data
+        spec_preamble_bottom.append('%%package -n %s' % splitname)
+        spec_preamble_bottom.append('Summary: %s' % splitsummary)
+        if srcversion != splitversion:
+            spec_preamble_bottom.append('Version: %s' % splitversion)
+        if srcrelease != splitrelease:
+            spec_preamble_bottom.append('Release: %s' % splitrelease)
+        if srcepoch != splitepoch:
+            spec_preamble_bottom.append('Epoch: %s' % splitepoch)
+        spec_preamble_bottom.append('License: %s' % splitlicense)
+        spec_preamble_bottom.append('Group: %s' % splitsection)
+
+        if srccustomtagschunk != splitcustomtagschunk:
+            spec_preamble_bottom.append(splitcustomtagschunk)
+
+        # Replaces == Obsoletes && Provides
+        robsoletes = bb.utils.explode_dep_versions2(splitrobsoletes)
+        rprovides = bb.utils.explode_dep_versions2(splitrprovides)
+        rreplaces = bb.utils.explode_dep_versions2(splitrreplaces)
+        for dep in rreplaces:
+            if not dep in robsoletes:
+                robsoletes[dep] = rreplaces[dep]
+            if not dep in rprovides:
+                rprovides[dep] = rreplaces[dep]
+        splitrobsoletes = bb.utils.join_deps(robsoletes, commasep=False)
+        splitrprovides = bb.utils.join_deps(rprovides, commasep=False)
+
+        print_deps(splitrdepends, "Requires", spec_preamble_bottom, d)
+        if splitrpreinst:
+            print_deps(splitrdepends, "Requires(pre)", spec_preamble_bottom, d)
+        if splitrpostinst:
+            print_deps(splitrdepends, "Requires(post)", spec_preamble_bottom, d)
+        if splitrprerm:
+            print_deps(splitrdepends, "Requires(preun)", spec_preamble_bottom, d)
+        if splitrpostrm:
+            print_deps(splitrdepends, "Requires(postun)", spec_preamble_bottom, d)
+
+        print_deps(splitrrecommends, "Recommends", spec_preamble_bottom, d)
+        print_deps(splitrsuggests,  "Suggests", spec_preamble_bottom, d)
+        print_deps(splitrprovides,  "Provides", spec_preamble_bottom, d)
+        print_deps(splitrobsoletes, "Obsoletes", spec_preamble_bottom, d)
+        print_deps(splitrconflicts,  "Conflicts", spec_preamble_bottom, d)
+
+        spec_preamble_bottom.append('')
+
+        spec_preamble_bottom.append('%%description -n %s' % splitname)
+        append_description(spec_preamble_bottom, splitdescription)
+
+        spec_preamble_bottom.append('')
+
+        # Now process scriptlets
+        if splitrpreinst:
+            spec_scriptlets_bottom.append('%%pre -n %s' % splitname)
+            spec_scriptlets_bottom.append('# %s - preinst' % splitname)
+            spec_scriptlets_bottom.append(splitrpreinst)
+            spec_scriptlets_bottom.append('')
+        if splitrpostinst:
+            spec_scriptlets_bottom.append('%%post -n %s' % splitname)
+            spec_scriptlets_bottom.append('# %s - postinst' % splitname)
+            spec_scriptlets_bottom.append(splitrpostinst)
+            spec_scriptlets_bottom.append('')
+        if splitrprerm:
+            spec_scriptlets_bottom.append('%%preun -n %s' % splitname)
+            spec_scriptlets_bottom.append('# %s - prerm' % splitname)
+            scriptvar = wrap_uninstall(splitrprerm)
+            spec_scriptlets_bottom.append(scriptvar)
+            spec_scriptlets_bottom.append('')
+        if splitrpostrm:
+            spec_scriptlets_bottom.append('%%postun -n %s' % splitname)
+            spec_scriptlets_bottom.append('# %s - postrm' % splitname)
+            scriptvar = wrap_uninstall(splitrpostrm)
+            spec_scriptlets_bottom.append(scriptvar)
+            spec_scriptlets_bottom.append('')
+
+        # Now process files
+        file_list = []
+        walk_files(root, file_list, conffiles, dirfiles)
+        if not file_list and localdata.getVar('ALLOW_EMPTY', False) != "1":
+            bb.note("Not creating empty RPM package for %s" % splitname)
+        else:
+            spec_files_bottom.append('%%files -n %s' % splitname)
+            if extra_pkgdata:
+                package_rpm_extra_pkgdata(splitname, spec_files_bottom, localdata)
+            spec_files_bottom.append('%defattr(-,-,-,-)')
+            if file_list:
+                bb.note("Creating RPM package for %s" % splitname)
+                spec_files_bottom.extend(file_list)
+            else:
+                bb.note("Creating empty RPM package for %s" % splitname)
+            spec_files_bottom.append('')
+
+        del localdata
+    
+    add_prep(d,spec_files_bottom)
+    spec_preamble_top.append('Summary: %s' % srcsummary)
+    spec_preamble_top.append('Name: %s' % srcname)
+    spec_preamble_top.append('Version: %s' % srcversion)
+    spec_preamble_top.append('Release: %s' % srcrelease)
+    if srcepoch and srcepoch.strip() != "":
+        spec_preamble_top.append('Epoch: %s' % srcepoch)
+    spec_preamble_top.append('License: %s' % srclicense)
+    spec_preamble_top.append('Group: %s' % srcsection)
+    spec_preamble_top.append('Packager: %s' % srcmaintainer)
+    if srchomepage:
+        spec_preamble_top.append('URL: %s' % srchomepage)
+    if srccustomtagschunk:
+        spec_preamble_top.append(srccustomtagschunk)
+    tail_source(d)
+
+    # Replaces == Obsoletes && Provides
+    robsoletes = bb.utils.explode_dep_versions2(srcrobsoletes)
+    rprovides = bb.utils.explode_dep_versions2(srcrprovides)
+    rreplaces = bb.utils.explode_dep_versions2(srcrreplaces)
+    for dep in rreplaces:
+        if not dep in robsoletes:
+            robsoletes[dep] = rreplaces[dep]
+        if not dep in rprovides:
+            rprovides[dep] = rreplaces[dep]
+    srcrobsoletes = bb.utils.join_deps(robsoletes, commasep=False)
+    srcrprovides = bb.utils.join_deps(rprovides, commasep=False)
+
+    print_deps(srcdepends, "BuildRequires", spec_preamble_top, d)
+    print_deps(srcrdepends, "Requires", spec_preamble_top, d)
+    if srcrpreinst:
+        print_deps(srcrdepends, "Requires(pre)", spec_preamble_top, d)
+    if srcrpostinst:
+        print_deps(srcrdepends, "Requires(post)", spec_preamble_top, d)
+    if srcrprerm:
+        print_deps(srcrdepends, "Requires(preun)", spec_preamble_top, d)
+    if srcrpostrm:
+        print_deps(srcrdepends, "Requires(postun)", spec_preamble_top, d)
+
+    print_deps(srcrrecommends, "Recommends", spec_preamble_top, d)
+    print_deps(srcrsuggests, "Suggests", spec_preamble_top, d)
+    print_deps(srcrprovides, "Provides", spec_preamble_top, d)
+    print_deps(srcrobsoletes, "Obsoletes", spec_preamble_top, d)
+    print_deps(srcrconflicts, "Conflicts", spec_preamble_top, d)
+
+    spec_preamble_top.append('')
+
+    spec_preamble_top.append('%description')
+    append_description(spec_preamble_top, srcdescription)
+
+    spec_preamble_top.append('')
+
+    if srcrpreinst:
+        spec_scriptlets_top.append('%pre')
+        spec_scriptlets_top.append('# %s - preinst' % srcname)
+        spec_scriptlets_top.append(srcrpreinst)
+        spec_scriptlets_top.append('')
+    if srcrpostinst:
+        spec_scriptlets_top.append('%post')
+        spec_scriptlets_top.append('# %s - postinst' % srcname)
+        spec_scriptlets_top.append(srcrpostinst)
+        spec_scriptlets_top.append('')
+    if srcrprerm:
+        spec_scriptlets_top.append('%preun')
+        spec_scriptlets_top.append('# %s - prerm' % srcname)
+        scriptvar = wrap_uninstall(srcrprerm)
+        spec_scriptlets_top.append(scriptvar)
+        spec_scriptlets_top.append('')
+    if srcrpostrm:
+        spec_scriptlets_top.append('%postun')
+        spec_scriptlets_top.append('# %s - postrm' % srcname)
+        scriptvar = wrap_uninstall(srcrpostrm)
+        spec_scriptlets_top.append(scriptvar)
+        spec_scriptlets_top.append('')
+
+    # Write the SPEC file
+    specfile = open(outspecfile, 'w')
+
+    # RPMSPEC_PREAMBLE is a way to add arbitrary text to the top
+    # of the generated spec file
+    external_preamble = d.getVar("RPMSPEC_PREAMBLE")
+    if external_preamble:
+        specfile.write(external_preamble + "\n")
+
+    for line in spec_preamble_top:
+        specfile.write(line + "\n")
+
+    for line in spec_preamble_bottom:
+        specfile.write(line + "\n")
+
+    for line in spec_scriptlets_top:
+        specfile.write(line + "\n")
+
+    for line in spec_scriptlets_bottom:
+        specfile.write(line + "\n")
+
+    for line in spec_files_top:
+        specfile.write(line + "\n")
+
+    for line in spec_files_bottom:
+        specfile.write(line + "\n")
+
+    specfile.close()
+}
+# Otherwise allarch packages may change depending on override configuration
+write_specfile[vardepsexclude] = "OVERRIDES"
+
+# Have to list any variables referenced as X_<pkg> that aren't in pkgdata here
+RPMEXTRAVARS = "PACKAGE_ADD_METADATA_RPM"
+write_specfile[vardeps] += "${@gen_packagevar(d, 'RPMEXTRAVARS')}"
+
+python do_package_rpm () {
+    workdir = d.getVar('WORKDIR')
+    tmpdir = d.getVar('TMPDIR')
+    pkgd = d.getVar('PKGD')
+    pkgdest = d.getVar('PKGDEST')
+    if not workdir or not pkgd or not tmpdir:
+        bb.error("Variables incorrectly set, unable to package")
+        return
+
+    packages = d.getVar('PACKAGES')
+    if not packages or packages == '':
+        bb.debug(1, "No packages; nothing to do")
+        return
+
+    # Construct the spec file...
+    # If the spec file already exists and has not been stored into
+    # pseudo's files.db, it may cause the rpmbuild of the src.rpm to fail,
+    # so remove it before running rpmbuild.
+    srcname    = d.getVar('PN')
+    outspecfile = workdir + "/" + srcname + ".spec"
+    if os.path.isfile(outspecfile):
+        os.remove(outspecfile)
+    d.setVar('OUTSPECFILE', outspecfile)
+    bb.build.exec_func('write_specfile', d)
+
+    perfiledeps = (d.getVar("MERGEPERFILEDEPS") or "0") == "0"
+    if perfiledeps:
+        outdepends, outprovides = write_rpm_perfiledata(srcname, d)
+
+    # Setup the rpmbuild arguments...
+    rpmbuild = d.getVar('RPMBUILD')
+    targetsys = d.getVar('TARGET_SYS')
+    targetvendor = d.getVar('HOST_VENDOR')
+
+    # Too many places in dnf stack assume that arch-independent packages are "noarch".
+    # Let's not fight against this.
+    package_arch = (d.getVar('PACKAGE_ARCH') or "").replace("-", "_")
+    if package_arch == "all":
+        package_arch = "noarch"
+
+    sdkpkgsuffix = (d.getVar('SDKPKGSUFFIX') or "nativesdk").replace("-", "_")
+    d.setVar('PACKAGE_ARCH_EXTEND', package_arch)
+    pkgwritedir = d.expand('${PKGWRITEDIRRPM}/${PACKAGE_ARCH_EXTEND}')
+    d.setVar('RPM_PKGWRITEDIR', pkgwritedir)
+    bb.debug(1, 'PKGWRITEDIR: %s' % d.getVar('RPM_PKGWRITEDIR'))
+    pkgarch = d.expand('${PACKAGE_ARCH_EXTEND}${HOST_VENDOR}-linux')
+    bb.utils.mkdirhier(pkgwritedir)
+    os.chmod(pkgwritedir, 0o755)
+
+    cmd = rpmbuild
+    cmd = cmd + " --noclean --nodeps --short-circuit --target " + pkgarch + " --buildroot " + pkgd
+    cmd = cmd + " --define '_topdir " + workdir + "' --define '_rpmdir " + pkgwritedir + "'"
+    cmd = cmd + " --define '_builddir " + d.getVar('B') + "'"
+    cmd = cmd + " --define '_build_name_fmt %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm'"
+    cmd = cmd + " --define '_use_internal_dependency_generator 0'"
+    cmd = cmd + " --define '_binaries_in_noarch_packages_terminate_build 0'"
+    cmd = cmd + " --define '_build_id_links none'"
+    cmd = cmd + " --define '_binary_payload w19T%d.zstdio'" % int(d.getVar("ZSTD_THREADS"))
+    cmd = cmd + " --define '_source_payload w19T%d.zstdio'" % int(d.getVar("ZSTD_THREADS"))
+    cmd = cmd + " --define 'clamp_mtime_to_source_date_epoch 1'"
+    cmd = cmd + " --define 'use_source_date_epoch_as_buildtime 1'"
+    cmd = cmd + " --define '_buildhost reproducible'"
+    cmd = cmd + " --define '__font_provides %{nil}'"
+    if perfiledeps:
+        cmd = cmd + " --define '__find_requires " + outdepends + "'"
+        cmd = cmd + " --define '__find_provides " + outprovides + "'"
+    else:
+        cmd = cmd + " --define '__find_requires %{nil}'"
+        cmd = cmd + " --define '__find_provides %{nil}'"
+    cmd = cmd + " --define '_unpackaged_files_terminate_build 0'"
+    cmd = cmd + " --define 'debug_package %{nil}'"
+    cmd = cmd + " --define '_tmppath " + workdir + "'"
+    if d.getVarFlag('ARCHIVER_MODE', 'srpm') == '1' and bb.data.inherits_class('archiver', d):
+        cmd = cmd + " --define '_sourcedir " + d.getVar('ARCHIVER_OUTDIR') + "'"
+        cmdsrpm = cmd + " --define '_srcrpmdir " + d.getVar('ARCHIVER_RPMOUTDIR') + "'"
+        cmdsrpm = cmdsrpm + " -bs " + outspecfile
+        # Build the .src.rpm
+        d.setVar('SBUILDSPEC', cmdsrpm + "\n")
+        d.setVarFlag('SBUILDSPEC', 'func', '1')
+        bb.build.exec_func('SBUILDSPEC', d)
+    cmd = cmd + " -bb " + outspecfile
+
+    # rpm 4 creates various empty directories in _topdir, let's clean them up
+    cleanupcmd = "rm -rf %s/BUILDROOT %s/SOURCES %s/SPECS %s/SRPMS" % (workdir, workdir, workdir, workdir)
+
+    # Build the rpm package!
+    d.setVar('BUILDSPEC', cmd + "\n" + cleanupcmd + "\n")
+    d.setVarFlag('BUILDSPEC', 'func', '1')
+    bb.build.exec_func('BUILDSPEC', d)
+
+    if d.getVar('RPM_SIGN_PACKAGES') == '1':
+        bb.build.exec_func("sign_rpm", d)
+}
+
+python () {
+    if d.getVar('PACKAGES') != '':
+        deps = ' rpm-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot'
+        d.appendVarFlag('do_package_write_rpm', 'depends', deps)
+        d.setVarFlag('do_package_write_rpm', 'fakeroot', '1')
+}
+
+SSTATETASKS += "do_package_write_rpm"
+do_package_write_rpm[sstate-inputdirs] = "${PKGWRITEDIRRPM}"
+do_package_write_rpm[sstate-outputdirs] = "${DEPLOY_DIR_RPM}"
+# Take a shared lock, we can write multiple packages at the same time...
+# but we need to stop the rootfs/solver from running while we do...
+do_package_write_rpm[sstate-lockfile-shared] += "${DEPLOY_DIR_RPM}/rpm.lock"
+
+python do_package_write_rpm_setscene () {
+    sstate_setscene(d)
+}
+addtask do_package_write_rpm_setscene
+
+python do_package_write_rpm () {
+    bb.build.exec_func("read_subpackage_metadata", d)
+    bb.build.exec_func("do_package_rpm", d)
+}
+
+do_package_write_rpm[dirs] = "${PKGWRITEDIRRPM}"
+do_package_write_rpm[cleandirs] = "${PKGWRITEDIRRPM}"
+do_package_write_rpm[depends] += "${@oe.utils.build_depends_string(d.getVar('PACKAGE_WRITE_DEPS'), 'do_populate_sysroot')}"
+addtask package_write_rpm after do_packagedata do_package do_deploy_source_date_epoch before do_build
+do_build[rdeptask] += "do_package_write_rpm"
+
+PACKAGEINDEXDEPS += "rpm-native:do_populate_sysroot"
+PACKAGEINDEXDEPS += "createrepo-c-native:do_populate_sysroot"
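One detail worth spelling out from the two "Replaces == Obsoletes && Provides" blocks in write_specfile above: every RREPLACES entry is folded into both the Obsoletes and Provides sets before print_deps() renders them, so RPM both retires the old package name and keeps satisfying anything that still depends on it. A sketch of the merge using plain dicts in the shape bb.utils.explode_dep_versions2() returns ({dep: [constraints]}), with hypothetical package names:

    # Illustrative only: mirrors the rreplaces/robsoletes/rprovides merge above.
    robsoletes = {}                      # starts empty (splitrobsoletes is initialised to "")
    rprovides  = {"foo": []}             # from RPROVIDES
    rreplaces  = {"oldfoo": ["< 2.0"]}   # from RREPLACES

    for dep in rreplaces:
        if dep not in robsoletes:
            robsoletes[dep] = rreplaces[dep]
        if dep not in rprovides:
            rprovides[dep] = rreplaces[dep]

    # The generated spec then carries both:
    #   Obsoletes: oldfoo < 2.0
    #   Provides:  oldfoo < 2.0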
diff --git a/poky/meta/classes-global/package_tar.bbclass b/poky/meta/classes-global/package_tar.bbclass
new file mode 100644
index 0000000..de995f9
--- /dev/null
+++ b/poky/meta/classes-global/package_tar.bbclass
@@ -0,0 +1,77 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit package
+
+IMAGE_PKGTYPE ?= "tar"
+
+python do_package_tar () {
+    import subprocess
+
+    oldcwd = os.getcwd()
+
+    workdir = d.getVar('WORKDIR')
+    if not workdir:
+        bb.error("WORKDIR not defined, unable to package")
+        return
+
+    outdir = d.getVar('DEPLOY_DIR_TAR')
+    if not outdir:
+        bb.error("DEPLOY_DIR_TAR not defined, unable to package")
+        return
+
+    dvar = d.getVar('D')
+    if not dvar:
+        bb.error("D not defined, unable to package")
+        return
+
+    packages = d.getVar('PACKAGES')
+    if not packages:
+        bb.debug(1, "PACKAGES not defined, nothing to package")
+        return
+
+    pkgdest = d.getVar('PKGDEST')
+
+    bb.utils.mkdirhier(outdir)
+    bb.utils.mkdirhier(dvar)
+
+    for pkg in packages.split():
+        localdata = bb.data.createCopy(d)
+        root = "%s/%s" % (pkgdest, pkg)
+
+        overrides = localdata.getVar('OVERRIDES', False)
+        localdata.setVar('OVERRIDES', '%s:%s' % (overrides, pkg))
+
+        bb.utils.mkdirhier(root)
+        basedir = os.path.dirname(root)
+        tarfn = localdata.expand("${DEPLOY_DIR_TAR}/${PKG}-${PKGV}-${PKGR}.tar.gz")
+        os.chdir(root)
+        dlist = os.listdir(root)
+        if not dlist:
+            bb.note("Not creating empty archive for %s-%s-%s" % (pkg, localdata.getVar('PKGV'), localdata.getVar('PKGR')))
+            continue
+        args = "tar -cz --exclude=CONTROL --exclude=DEBIAN -f".split()
+        ret = subprocess.call(args + [tarfn] + dlist)
+        if ret != 0:
+            bb.error("Creation of tar %s failed." % tarfn)
+
+    os.chdir(oldcwd)
+}
+
+python () {
+    if d.getVar('PACKAGES') != '':
+        deps = ' tar-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot'
+        d.appendVarFlag('do_package_write_tar', 'depends', deps)
+        d.setVarFlag('do_package_write_tar', 'fakeroot', "1")
+}
+
+
+python do_package_write_tar () {
+    bb.build.exec_func("read_subpackage_metadata", d)
+    bb.build.exec_func("do_package_tar", d)
+}
+do_package_write_tar[dirs] = "${D}"
+addtask package_write_tar before do_build after do_packagedata do_package
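For each non-empty package, the loop in do_package_tar above changes into that package's split root and archives its contents with gzip compression. Roughly, per package, with hypothetical paths standing in for the PKGDEST and DEPLOY_DIR_TAR expansions:

    # Illustrative only: one iteration of the loop in do_package_tar.
    import os, subprocess

    root  = "/build/tmp/work/foo/1.0-r0/packages-split/foo"   # ${PKGDEST}/foo
    tarfn = "/build/tmp/deploy/tar/foo-1.0-r0.tar.gz"         # ${DEPLOY_DIR_TAR}/${PKG}-${PKGV}-${PKGR}.tar.gz

    os.chdir(root)
    args = "tar -cz --exclude=CONTROL --exclude=DEBIAN -f".split()
    if subprocess.call(args + [tarfn] + os.listdir(root)) != 0:
        raise RuntimeError("Creation of tar %s failed." % tarfn)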
diff --git a/poky/meta/classes-global/packagedata.bbclass b/poky/meta/classes-global/packagedata.bbclass
new file mode 100644
index 0000000..9f72c01
--- /dev/null
+++ b/poky/meta/classes-global/packagedata.bbclass
@@ -0,0 +1,40 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+python read_subpackage_metadata () {
+    import oe.packagedata
+
+    vars = {
+        "PN" : d.getVar('PN'), 
+        "PE" : d.getVar('PE'), 
+        "PV" : d.getVar('PV'),
+        "PR" : d.getVar('PR'),
+    }
+
+    data = oe.packagedata.read_pkgdata(vars["PN"], d)
+
+    for key in data.keys():
+        d.setVar(key, data[key])
+
+    for pkg in d.getVar('PACKAGES').split():
+        sdata = oe.packagedata.read_subpkgdata(pkg, d)
+        for key in sdata.keys():
+            if key in vars:
+                if sdata[key] != vars[key]:
+                    if key == "PN":
+                        bb.fatal("Recipe %s is trying to create package %s which was already written by recipe %s. This will cause corruption, please resolve this and only provide the package from one recipe or the other or only build one of the recipes." % (vars[key], pkg, sdata[key]))
+                    bb.fatal("Recipe %s is trying to change %s from '%s' to '%s'. This will cause do_package_write_* failures since the incorrect data will be used and they will be unable to find the right workdir." % (vars["PN"], key, vars[key], sdata[key]))
+                continue
+            #
+            # If we set unsuffixed variables here there is a chance they could clobber override versions
+            # of that variable, e.g. DESCRIPTION could clobber DESCRIPTION:<pkgname>
+            # We therefore don't clobber for the unsuffixed variable versions
+            #
+            if key.endswith(":" + pkg):
+                d.setVar(key, sdata[key])
+            else:
+                d.setVar(key, sdata[key], parsing=True)
+}
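The endswith() check above is what keeps pkgdata from clobbering overrides: keys already suffixed with the package name are written back with a plain setVar(), while unsuffixed keys are set with parsing=True so an existing DESCRIPTION:<pkgname> style override is not clobbered. A small sketch of the dispatch with hypothetical keys:

    # Illustrative only: which setVar() path a pkgdata key takes for package "foo-dev".
    pkg = "foo-dev"
    sdata = {
        "PKGSIZE:foo-dev": "4096",            # suffixed   -> d.setVar(key, value)
        "DESCRIPTION": "Development files",   # unsuffixed -> d.setVar(key, value, parsing=True)
    }
    for key, value in sdata.items():
        if key.endswith(":" + pkg):
            print("plain setVar:", key)
        else:
            print("setVar with parsing=True:", key)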
diff --git a/poky/meta/classes-global/patch.bbclass b/poky/meta/classes-global/patch.bbclass
new file mode 100644
index 0000000..e3157c7
--- /dev/null
+++ b/poky/meta/classes-global/patch.bbclass
@@ -0,0 +1,171 @@
+# Copyright (C) 2006  OpenedHand LTD
+#
+# SPDX-License-Identifier: MIT
+
+# Point to an empty file so any user's custom settings don't break things
+QUILTRCFILE ?= "${STAGING_ETCDIR_NATIVE}/quiltrc"
+
+PATCHDEPENDENCY = "${PATCHTOOL}-native:do_populate_sysroot"
+
+# There is a bug in patch 2.7.3 and earlier where index lines
+# in patches can change file modes when they shouldn't:
+# http://git.savannah.gnu.org/cgit/patch.git/patch/?id=82b800c9552a088a241457948219d25ce0a407a4
+# This leaks into debug sources in particular. Add the dependency
+# to target recipes to avoid this problem until we can rely on 2.7.4 or later.
+PATCHDEPENDENCY:append:class-target = " patch-replacement-native:do_populate_sysroot"
+
+PATCH_GIT_USER_NAME ?= "OpenEmbedded"
+PATCH_GIT_USER_EMAIL ?= "oe.patch@oe"
+
+inherit terminal
+
+python () {
+    if d.getVar('PATCHTOOL') == 'git' and d.getVar('PATCH_COMMIT_FUNCTIONS') == '1':
+        extratasks = bb.build.tasksbetween('do_unpack', 'do_patch', d)
+        try:
+            extratasks.remove('do_unpack')
+        except ValueError:
+            # For some recipes do_unpack doesn't exist, ignore it
+            pass
+
+        d.appendVarFlag('do_patch', 'prefuncs', ' patch_task_patch_prefunc')
+        for task in extratasks:
+            d.appendVarFlag(task, 'postfuncs', ' patch_task_postfunc')
+}
+
+python patch_task_patch_prefunc() {
+    # Prefunc for do_patch
+    srcsubdir = d.getVar('S')
+
+    workdir = os.path.abspath(d.getVar('WORKDIR'))
+    testsrcdir = os.path.abspath(srcsubdir)
+    if (testsrcdir + os.sep).startswith(workdir + os.sep):
+        # Double-check that either workdir or S or some directory in-between is a git repository
+        found = False
+        while testsrcdir != workdir:
+            if os.path.exists(os.path.join(testsrcdir, '.git')):
+                found = True
+                break
+            if testsrcdir == workdir:
+                break
+            testsrcdir = os.path.dirname(testsrcdir)
+        if not found:
+            bb.fatal('PATCHTOOL = "git" set for source tree that is not a git repository. Refusing to continue as that may result in commits being made in your metadata repository.')
+
+    patchdir = os.path.join(srcsubdir, 'patches')
+    if os.path.exists(patchdir):
+        if os.listdir(patchdir):
+            d.setVar('PATCH_HAS_PATCHES_DIR', '1')
+        else:
+            os.rmdir(patchdir)
+}
+
+python patch_task_postfunc() {
+    # Postfunc for task functions between do_unpack and do_patch
+    import oe.patch
+    import shutil
+    func = d.getVar('BB_RUNTASK')
+    srcsubdir = d.getVar('S')
+
+    if os.path.exists(srcsubdir):
+        if func == 'do_patch':
+            haspatches = (d.getVar('PATCH_HAS_PATCHES_DIR') == '1')
+            patchdir = os.path.join(srcsubdir, 'patches')
+            if os.path.exists(patchdir):
+                shutil.rmtree(patchdir)
+                if haspatches:
+                    stdout, _ = bb.process.run('git status --porcelain patches', cwd=srcsubdir)
+                    if stdout:
+                        bb.process.run('git checkout patches', cwd=srcsubdir)
+        stdout, _ = bb.process.run('git status --porcelain .', cwd=srcsubdir)
+        if stdout:
+            useroptions = []
+            oe.patch.GitApplyTree.gitCommandUserOptions(useroptions, d=d)
+            bb.process.run('git add .; git %s commit -a -m "Committing changes from %s\n\n%s"' % (' '.join(useroptions), func, oe.patch.GitApplyTree.ignore_commit_prefix + ' - from %s' % func), cwd=srcsubdir)
+}
+
+def src_patches(d, all=False, expand=True):
+    import oe.patch
+    return oe.patch.src_patches(d, all, expand)
+
+def should_apply(parm, d):
+    """Determine if we should apply the given patch"""
+    import oe.patch
+    return oe.patch.should_apply(parm, d)
+
+should_apply[vardepsexclude] = "DATE SRCDATE"
+
+python patch_do_patch() {
+    import oe.patch
+
+    patchsetmap = {
+        "patch": oe.patch.PatchTree,
+        "quilt": oe.patch.QuiltTree,
+        "git": oe.patch.GitApplyTree,
+    }
+
+    cls = patchsetmap[d.getVar('PATCHTOOL') or 'quilt']
+
+    resolvermap = {
+        "noop": oe.patch.NOOPResolver,
+        "user": oe.patch.UserResolver,
+    }
+
+    rcls = resolvermap[d.getVar('PATCHRESOLVE') or 'user']
+
+    classes = {}
+
+    s = d.getVar('S')
+
+    os.putenv('PATH', d.getVar('PATH'))
+
+    # We must use one TMPDIR per process so that the "patch" processes
+    # don't generate the same temp file name.
+
+    import tempfile
+    process_tmpdir = tempfile.mkdtemp()
+    os.environ['TMPDIR'] = process_tmpdir
+
+    for patch in src_patches(d):
+        _, _, local, _, _, parm = bb.fetch.decodeurl(patch)
+
+        if "patchdir" in parm:
+            patchdir = parm["patchdir"]
+            if not os.path.isabs(patchdir):
+                patchdir = os.path.join(s, patchdir)
+            if not os.path.isdir(patchdir):
+                bb.fatal("Target directory '%s' not found, patchdir '%s' is incorrect in patch file '%s'" %
+                    (patchdir, parm["patchdir"], parm['patchname']))
+        else:
+            patchdir = s
+
+        if not patchdir in classes:
+            patchset = cls(patchdir, d)
+            resolver = rcls(patchset, oe_terminal)
+            classes[patchdir] = (patchset, resolver)
+            patchset.Clean()
+        else:
+            patchset, resolver = classes[patchdir]
+
+        bb.note("Applying patch '%s' (%s)" % (parm['patchname'], oe.path.format_display(local, d)))
+        try:
+            patchset.Import({"file":local, "strippath": parm['striplevel']}, True)
+        except Exception as exc:
+            bb.utils.remove(process_tmpdir, True)
+            bb.fatal("Importing patch '%s' with striplevel '%s'\n%s" % (parm['patchname'], parm['striplevel'], repr(exc).replace("\\n", "\n")))
+        try:
+            resolver.Resolve()
+        except bb.BBHandledException as e:
+            bb.utils.remove(process_tmpdir, True)
+            bb.fatal("Applying patch '%s' on target directory '%s'\n%s" % (parm['patchname'], patchdir, repr(e).replace("\\n", "\n")))
+
+    bb.utils.remove(process_tmpdir, True)
+    del os.environ['TMPDIR']
+}
+patch_do_patch[vardepsexclude] = "PATCHRESOLVE"
+
+addtask patch after do_unpack
+do_patch[dirs] = "${WORKDIR}"
+do_patch[depends] = "${PATCHDEPENDENCY}"
+
+EXPORT_FUNCTIONS do_patch
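patch_do_patch selects its backend entirely from two variables: PATCHTOOL picks the patchset class (defaulting to quilt) and PATCHRESOLVE picks the resolver (defaulting to user), and one (patchset, resolver) pair is created and cached per patch directory so Clean() runs only once for each tree. A stripped-down sketch of that selection, with dummy stand-ins for the oe.patch classes:

    # Illustrative only: the PATCHTOOL/PATCHRESOLVE lookup from patch_do_patch.
    class PatchTree: pass
    class QuiltTree: pass
    class GitApplyTree: pass
    class NOOPResolver: pass
    class UserResolver: pass

    patchsetmap = {"patch": PatchTree, "quilt": QuiltTree, "git": GitApplyTree}
    resolvermap = {"noop": NOOPResolver, "user": UserResolver}

    def pick(patchtool, patchresolve):
        cls = patchsetmap[patchtool or 'quilt']      # unknown PATCHTOOL -> KeyError
        rcls = resolvermap[patchresolve or 'user']
        return cls, rcls

    print(pick("git", None))    # (GitApplyTree, UserResolver)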
diff --git a/poky/meta/classes-global/sanity.bbclass b/poky/meta/classes-global/sanity.bbclass
new file mode 100644
index 0000000..4a403a2
--- /dev/null
+++ b/poky/meta/classes-global/sanity.bbclass
@@ -0,0 +1,1029 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# Sanity check the users setup for common misconfigurations
+#
+
+SANITY_REQUIRED_UTILITIES ?= "patch diffstat git bzip2 tar \
+    gzip gawk chrpath wget cpio perl file which"
+
+def bblayers_conf_file(d):
+    return os.path.join(d.getVar('TOPDIR'), 'conf/bblayers.conf')
+
+def sanity_conf_read(fn):
+    with open(fn, 'r') as f:
+        lines = f.readlines()
+    return lines
+
+def sanity_conf_find_line(pattern, lines):
+    import re
+    return next(((index, line)
+        for index, line in enumerate(lines)
+        if re.search(pattern, line)), (None, None))
+
+def sanity_conf_update(fn, lines, version_var_name, new_version):
+    index, line = sanity_conf_find_line(r"^%s" % version_var_name, lines)
+    lines[index] = '%s = "%d"\n' % (version_var_name, new_version)
+    with open(fn, "w") as f:
+        f.write(''.join(lines))
+
+# Functions added to this variable MUST throw a NotImplementedError exception unless 
+# they successfully changed the config version in the config file. Exceptions
+# are used since exec_func doesn't handle return values.
+BBLAYERS_CONF_UPDATE_FUNCS += " \
+    conf/bblayers.conf:LCONF_VERSION:LAYER_CONF_VERSION:oecore_update_bblayers \
+    conf/local.conf:CONF_VERSION:LOCALCONF_VERSION:oecore_update_localconf \
+    conf/site.conf:SCONF_VERSION:SITE_CONF_VERSION:oecore_update_siteconf \
+"
+
+SANITY_DIFF_TOOL ?= "meld"
+
+SANITY_LOCALCONF_SAMPLE ?= "${COREBASE}/meta*/conf/local.conf.sample"
+python oecore_update_localconf() {
+    # Check we are using a valid local.conf
+    current_conf  = d.getVar('CONF_VERSION')
+    conf_version =  d.getVar('LOCALCONF_VERSION')
+
+    failmsg = """Your version of local.conf was generated from an older/newer version of 
+local.conf.sample and there have been updates made to this file. Please compare the two 
+files and merge any changes before continuing.
+
+Matching the version numbers will remove this message.
+
+\"${SANITY_DIFF_TOOL} conf/local.conf ${SANITY_LOCALCONF_SAMPLE}\" 
+
+is a good way to visualise the changes."""
+    failmsg = d.expand(failmsg)
+
+    raise NotImplementedError(failmsg)
+}
+
+SANITY_SITECONF_SAMPLE ?= "${COREBASE}/meta*/conf/site.conf.sample"
+python oecore_update_siteconf() {
+    # If we have a site.conf, check it's valid
+    current_sconf = d.getVar('SCONF_VERSION')
+    sconf_version = d.getVar('SITE_CONF_VERSION')
+
+    failmsg = """Your version of site.conf was generated from an older version of 
+site.conf.sample and there have been updates made to this file. Please compare the two 
+files and merge any changes before continuing.
+
+Matching the version numbers will remove this message.
+
+\"${SANITY_DIFF_TOOL} conf/site.conf ${SANITY_SITECONF_SAMPLE}\" 
+
+is a good way to visualise the changes."""
+    failmsg = d.expand(failmsg)
+
+    raise NotImplementedError(failmsg)
+}
+
+SANITY_BBLAYERCONF_SAMPLE ?= "${COREBASE}/meta*/conf/bblayers.conf.sample"
+python oecore_update_bblayers() {
+    # bblayers.conf is out of date, so see if we can resolve that
+
+    current_lconf = int(d.getVar('LCONF_VERSION'))
+    lconf_version = int(d.getVar('LAYER_CONF_VERSION'))
+
+    failmsg = """Your version of bblayers.conf has the wrong LCONF_VERSION (has ${LCONF_VERSION}, expecting ${LAYER_CONF_VERSION}).
+Please compare your file against bblayers.conf.sample and merge any changes before continuing.
+"${SANITY_DIFF_TOOL} conf/bblayers.conf ${SANITY_BBLAYERCONF_SAMPLE}" 
+
+is a good way to visualise the changes."""
+    failmsg = d.expand(failmsg)
+
+    if not current_lconf:
+        raise NotImplementedError(failmsg)
+
+    lines = []
+
+    if current_lconf < 4:
+        raise NotImplementedError(failmsg)
+
+    bblayers_fn = bblayers_conf_file(d)
+    lines = sanity_conf_read(bblayers_fn)
+
+    if current_lconf == 4 and lconf_version > 4:
+        topdir_var = '$' + '{TOPDIR}'
+        index, bbpath_line = sanity_conf_find_line('BBPATH', lines)
+        if bbpath_line:
+            start = bbpath_line.find('"')
+            if start != -1 and (len(bbpath_line) != (start + 1)):
+                if bbpath_line[start + 1] == '"':
+                    lines[index] = (bbpath_line[:start + 1] +
+                                    topdir_var + bbpath_line[start + 1:])
+                else:
+                    if not topdir_var in bbpath_line:
+                        lines[index] = (bbpath_line[:start + 1] +
+                                    topdir_var + ':' + bbpath_line[start + 1:])
+            else:
+                raise NotImplementedError(failmsg)
+        else:
+            index, bbfiles_line = sanity_conf_find_line('BBFILES', lines)
+            if bbfiles_line:
+                lines.insert(index, 'BBPATH = "' + topdir_var + '"\n')
+            else:
+                raise NotImplementedError(failmsg)
+
+        current_lconf += 1
+        sanity_conf_update(bblayers_fn, lines, 'LCONF_VERSION', current_lconf)
+        bb.note("Your conf/bblayers.conf has been automatically updated.")
+        return
+
+    elif current_lconf == 5 and lconf_version > 5:
+        # Null update, to avoid issues with people switching between poky and other distros
+        current_lconf = 6
+        sanity_conf_update(bblayers_fn, lines, 'LCONF_VERSION', current_lconf)
+        bb.note("Your conf/bblayers.conf has been automatically updated.")
+        return
+
+
+    elif current_lconf == 6 and lconf_version > 6:
+        # Handle rename of meta-yocto -> meta-poky
+        # This marks the start of separate version numbers but code is needed in OE-Core
+        # for the migration, one last time.
+        layers = d.getVar('BBLAYERS').split()
+        layers = [ os.path.basename(path) for path in layers ]
+        if 'meta-yocto' in layers:
+            found = False
+            while True:
+                index, meta_yocto_line = sanity_conf_find_line(r'.*meta-yocto[\'"\s\n]', lines)
+                if meta_yocto_line:
+                    lines[index] = meta_yocto_line.replace('meta-yocto', 'meta-poky')
+                    found = True
+                else:
+                    break
+            if not found:
+                raise NotImplementedError(failmsg)
+            index, meta_yocto_line = sanity_conf_find_line('LCONF_VERSION.*\n', lines)
+            if meta_yocto_line:
+                lines[index] = 'POKY_BBLAYERS_CONF_VERSION = "1"\n'
+            else:
+                raise NotImplementedError(failmsg)
+            with open(bblayers_fn, "w") as f:
+                f.write(''.join(lines))
+            bb.note("Your conf/bblayers.conf has been automatically updated.")
+            return
+        current_lconf += 1
+        sanity_conf_update(bblayers_fn, lines, 'LCONF_VERSION', current_lconf)
+        bb.note("Your conf/bblayers.conf has been automatically updated.")
+        return
+
+    raise NotImplementedError(failmsg)
+}
+
+def raise_sanity_error(msg, d, network_error=False):
+    if d.getVar("SANITY_USE_EVENTS") == "1":
+        try:
+            bb.event.fire(bb.event.SanityCheckFailed(msg, network_error), d)
+        except TypeError:
+            bb.event.fire(bb.event.SanityCheckFailed(msg), d)
+        return
+
+    bb.fatal(""" OE-core's config sanity checker detected a potential misconfiguration.
+    Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
+    Following is the list of potential problems / advisories:
+    
+    %s""" % msg)
+
+# Check a single tune for validity.
+def check_toolchain_tune(data, tune, multilib):
+    tune_errors = []
+    if not tune:
+        return "No tuning found for %s multilib." % multilib
+    localdata = bb.data.createCopy(data)
+    if multilib != "default":
+        # Apply the overrides so we can look at the details.
+        overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + multilib
+        localdata.setVar("OVERRIDES", overrides)
+    bb.debug(2, "Sanity-checking tuning '%s' (%s) features:" % (tune, multilib))
+    features = (localdata.getVar("TUNE_FEATURES:tune-%s" % tune) or "").split()
+    if not features:
+        return "Tuning '%s' has no defined features, and cannot be used." % tune
+    valid_tunes = localdata.getVarFlags('TUNEVALID') or {}
+    conflicts = localdata.getVarFlags('TUNECONFLICTS') or {}
+    # [doc] is the documentation for the variable, not a real feature
+    if 'doc' in valid_tunes:
+        del valid_tunes['doc']
+    if 'doc' in conflicts:
+        del conflicts['doc']
+    for feature in features:
+        if feature in conflicts:
+            for conflict in conflicts[feature].split():
+                if conflict in features:
+                    tune_errors.append("Feature '%s' conflicts with '%s'." %
+                        (feature, conflict))
+        if feature in valid_tunes:
+            bb.debug(2, "  %s: %s" % (feature, valid_tunes[feature]))
+        else:
+            tune_errors.append("Feature '%s' is not defined." % feature)
+    if tune_errors:
+        return "Tuning '%s' has the following errors:\n" % tune + '\n'.join(tune_errors)
+
+def check_toolchain(data):
+    tune_error_set = []
+    deftune = data.getVar("DEFAULTTUNE")
+    tune_errors = check_toolchain_tune(data, deftune, 'default')
+    if tune_errors:
+        tune_error_set.append(tune_errors)
+
+    multilibs = (data.getVar("MULTILIB_VARIANTS") or "").split()
+    global_multilibs = (data.getVar("MULTILIB_GLOBAL_VARIANTS") or "").split()
+
+    if multilibs:
+        seen_libs = []
+        seen_tunes = []
+        for lib in multilibs:
+            if lib in seen_libs:
+                tune_error_set.append("The multilib '%s' appears more than once." % lib)
+            else:
+                seen_libs.append(lib)
+            if not lib in global_multilibs:
+                tune_error_set.append("Multilib %s is not present in MULTILIB_GLOBAL_VARIANTS" % lib)
+            tune = data.getVar("DEFAULTTUNE:virtclass-multilib-%s" % lib)
+            if tune in seen_tunes:
+                tune_error_set.append("The tuning '%s' appears in more than one multilib." % tune)
+            else:
+                seen_libs.append(tune)
+            if tune == deftune:
+                tune_error_set.append("Multilib '%s' (%s) is also the default tuning." % (lib, deftune))
+            else:
+                tune_errors = check_toolchain_tune(data, tune, lib)
+            if tune_errors:
+                tune_error_set.append(tune_errors)
+    if tune_error_set:
+        return "Toolchain tunings invalid:\n" + '\n'.join(tune_error_set) + "\n"
+
+    return ""
+
+def check_conf_exists(fn, data):
+    bbpath = []
+    fn = data.expand(fn)
+    vbbpath = data.getVar("BBPATH", False)
+    if vbbpath:
+        bbpath += vbbpath.split(":")
+    for p in bbpath:
+        currname = os.path.join(data.expand(p), fn)
+        if os.access(currname, os.R_OK):
+            return True
+    return False
+
+def check_create_long_filename(filepath, pathname):
+    import string, random
+    testfile = os.path.join(filepath, ''.join(random.choice(string.ascii_letters) for x in range(200)))
+    try:
+        if not os.path.exists(filepath):
+            bb.utils.mkdirhier(filepath)
+        f = open(testfile, "w")
+        f.close()
+        os.remove(testfile)
+    except IOError as e:
+        import errno
+        err, strerror = e.args
+        if err == errno.ENAMETOOLONG:
+            return "Failed to create a file with a long name in %s. Please use a filesystem that does not unreasonably limit filename length.\n" % pathname
+        else:
+            return "Failed to create a file in %s: %s.\n" % (pathname, strerror)
+    except OSError as e:
+        errno, strerror = e.args
+        return "Failed to create %s directory in which to run long name sanity check: %s.\n" % (pathname, strerror)
+    return ""
+
+def check_path_length(filepath, pathname, limit):
+    if len(filepath) > limit:
+        return "The length of %s is longer than %s, this would cause unexpected errors, please use a shorter path.\n" % (pathname, limit)
+    return ""
+
+def get_filesystem_id(path):
+    import subprocess
+    try:
+        return subprocess.check_output(["stat", "-f", "-c", "%t", path]).decode('utf-8').strip()
+    except subprocess.CalledProcessError:
+        bb.warn("Can't get filesystem id of: %s" % path)
+        return None
+
+# Check that the path isn't located on nfs.
+def check_not_nfs(path, name):
+    # The nfs' filesystem id is 6969
+    if get_filesystem_id(path) == "6969":
+        return "The %s: %s can't be located on nfs.\n" % (name, path)
+    return ""
+
+# Check that the path is on a case-sensitive file system
+def check_case_sensitive(path, name):
+    import tempfile
+    with tempfile.NamedTemporaryFile(prefix='TmP', dir=path) as tmp_file:
+        if os.path.exists(tmp_file.name.lower()):
+            return "The %s (%s) can't be on a case-insensitive file system.\n" % (name, path)
+        return ""
+
+# Check that path isn't a broken symlink
+def check_symlink(lnk, data):
+    if os.path.islink(lnk) and not os.path.exists(lnk):
+        raise_sanity_error("%s is a broken symlink." % lnk, data)
+
+def check_connectivity(d):
+    # URI's to check can be set in the CONNECTIVITY_CHECK_URIS variable
+    # using the same syntax as for SRC_URI. If the variable is not set
+    # the check is skipped
+    test_uris = (d.getVar('CONNECTIVITY_CHECK_URIS') or "").split()
+    retval = ""
+
+    bbn = d.getVar('BB_NO_NETWORK')
+    if bbn not in (None, '0', '1'):
+        return 'BB_NO_NETWORK should be "0" or "1", but it is "%s"' % bbn
+
+    # Only check connectivity if network enabled and the
+    # CONNECTIVITY_CHECK_URIS are set
+    network_enabled = not (bbn == '1')
+    check_enabled = len(test_uris)
+    if check_enabled and network_enabled:
+        # Take a copy of the data store and unset MIRRORS and PREMIRRORS
+        data = bb.data.createCopy(d)
+        data.delVar('PREMIRRORS')
+        data.delVar('MIRRORS')
+        try:
+            fetcher = bb.fetch2.Fetch(test_uris, data)
+            fetcher.checkstatus()
+        except Exception as err:
+            # Allow the message to be configured so that users can be
+            # pointed to a support mechanism.
+            msg = data.getVar('CONNECTIVITY_CHECK_MSG') or ""
+            if len(msg) == 0:
+                msg = "%s.\n" % err
+                msg += "    Please ensure your host's network is configured correctly.\n"
+                msg += "    Please ensure CONNECTIVITY_CHECK_URIS is correct and specified URIs are available.\n"
+                msg += "    If your ISP or network is blocking the above URL,\n"
+                msg += "    try with another domain name, for example by setting:\n"
+                msg += "    CONNECTIVITY_CHECK_URIS = \"https://www.example.com/\""
+                msg += "    You could also set BB_NO_NETWORK = \"1\" to disable network\n"
+                msg += "    access if all required sources are on local disk.\n"
+            retval = msg
+
+    return retval
+
+def check_supported_distro(sanity_data):
+    from fnmatch import fnmatch
+
+    tested_distros = sanity_data.getVar('SANITY_TESTED_DISTROS')
+    if not tested_distros:
+        return
+
+    try:
+        distro = oe.lsb.distro_identifier()
+    except Exception:
+        distro = None
+
+    if not distro:
+        bb.warn('Host distribution could not be determined; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.')
+
+    for supported in [x.strip() for x in tested_distros.split('\\n')]:
+        if fnmatch(distro, supported):
+            return
+
+    bb.warn('Host distribution "%s" has not been validated with this version of the build system; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.' % distro)
+
+# Checks we should only make if MACHINE is set correctly
+def check_sanity_validmachine(sanity_data):
+    messages = ""
+
+    # Check TUNE_ARCH is set
+    if sanity_data.getVar('TUNE_ARCH') == 'INVALID':
+        messages = messages + 'TUNE_ARCH is unset. Please ensure your MACHINE configuration includes a valid tune configuration file which will set this correctly.\n'
+
+    # Check TARGET_OS is set
+    if sanity_data.getVar('TARGET_OS') == 'INVALID':
+        messages = messages + 'Please set TARGET_OS directly, or choose a MACHINE or DISTRO that does so.\n'
+
+    # Check that we don't have duplicate entries in PACKAGE_ARCHS & that TUNE_PKGARCH is in PACKAGE_ARCHS
+    pkgarchs = sanity_data.getVar('PACKAGE_ARCHS')
+    tunepkg = sanity_data.getVar('TUNE_PKGARCH')
+    defaulttune = sanity_data.getVar('DEFAULTTUNE')
+    tunefound = False
+    seen = {}
+    dups = []
+
+    for pa in pkgarchs.split():
+        if seen.get(pa, 0) == 1:
+            dups.append(pa)
+        else:
+            seen[pa] = 1
+        if pa == tunepkg:
+            tunefound = True
+
+    if len(dups):
+        messages = messages + "Error, the PACKAGE_ARCHS variable contains duplicates. The following archs are listed more than once: %s" % " ".join(dups)
+
+    if tunefound == False:
+        messages = messages + "Error, the PACKAGE_ARCHS variable (%s) for DEFAULTTUNE (%s) does not contain TUNE_PKGARCH (%s)." % (pkgarchs, defaulttune, tunepkg)
+
+    return messages
+
+# Patch before 2.7 can't handle all the features in git-style diffs.  Some
+# patches may incorrectly apply, and others won't apply at all.
+def check_patch_version(sanity_data):
+    import re, subprocess
+
+    try:
+        result = subprocess.check_output(["patch", "--version"], stderr=subprocess.STDOUT).decode('utf-8')
+        version = re.search(r"[0-9.]+", result.splitlines()[0]).group()
+        if bb.utils.vercmp_string_op(version, "2.7", "<"):
+            return "Your version of patch is older than 2.7 and has bugs which will break builds. Please install a newer version of patch.\n"
+        else:
+            return None
+    except subprocess.CalledProcessError as e:
+        return "Unable to execute patch --version, exit code %d:\n%s\n" % (e.returncode, e.output)
+
+# Glibc needs make 4.0 or later, we may as well match at this point
+def check_make_version(sanity_data):
+    import subprocess
+
+    try:
+        result = subprocess.check_output(['make', '--version'], stderr=subprocess.STDOUT).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        return "Unable to execute make --version, exit code %d\n%s\n" % (e.returncode, e.output)
+    version = result.split()[2]
+    if bb.utils.vercmp_string_op(version, "4.0", "<"):
+        return "Please install a make version of 4.0 or later.\n"
+
+    if bb.utils.vercmp_string_op(version, "4.2.1", "=="):
+        distro = oe.lsb.distro_identifier()
+        if "ubuntu" in distro or "debian" in distro or "linuxmint" in distro:
+            return None
+        return "make version 4.2.1 is known to have issues on CentOS/openSUSE and other systems not derived from Debian/Ubuntu. Please use a buildtools-make-tarball or a newer version of make.\n"
+    return None
+
+
+# Check if we're running on WSL (Windows Subsystem for Linux).
+# WSLv1 is known not to work, but WSLv2 should work properly as
+# long as the VHDX file is optimized often; let the user know
+# upfront.
+# More information on installing WSLv2 at:
+# https://docs.microsoft.com/en-us/windows/wsl/wsl2-install
+def check_wsl(d):
+    with open("/proc/version", "r") as f:
+        verdata = f.readlines()
+    for l in verdata:
+        if "Microsoft" in l:
+            return "OpenEmbedded doesn't work under WSLv1, please upgrade to WSLv2 if you want to run builds on Windows"
+        elif "microsoft" in l:
+            bb.warn("You are running bitbake under WSLv2, this works properly but you should optimize your VHDX file eventually to avoid running out of storage space")
+    return None
+
+# Require at least gcc version 7.5.
+#
+# This can be fixed on CentOS-7 with devtoolset-6+
+# https://www.softwarecollections.org/en/scls/rhscl/devtoolset-6/
+#
+# A less invasive fix is with scripts/install-buildtools (or with user
+# built buildtools-extended-tarball)
+#
+def check_gcc_version(sanity_data):
+    import subprocess
+    
+    build_cc, version = oe.utils.get_host_compiler_version(sanity_data)
+    if build_cc.strip() == "gcc":
+        if bb.utils.vercmp_string_op(version, "7.5", "<"):
+            return "Your version of gcc is older than 7.5 and will break builds. Please install a newer version of gcc (you could use the project's buildtools-extended-tarball or use scripts/install-buildtools).\n"
+    return None
+
+# Tar versions 1.24 and onwards handle overwriting symlinks correctly
+# but earlier versions do not; this needs to work properly for sstate
+# Version 1.28 is needed so opkg-build works correctly when reproducible builds are enabled
+def check_tar_version(sanity_data):
+    import subprocess
+    try:
+        result = subprocess.check_output(["tar", "--version"], stderr=subprocess.STDOUT).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        return "Unable to execute tar --version, exit code %d\n%s\n" % (e.returncode, e.output)
+    version = result.split()[3]
+    if bb.utils.vercmp_string_op(version, "1.28", "<"):
+        return "Your version of tar is older than 1.28 and does not have the support needed to enable reproducible builds. Please install a newer version of tar (you could use the project's buildtools-tarball from our last release or use scripts/install-buildtools).\n"
+    return None
+
+# We use git parameters and functionality only found in 1.7.8 or later
+# The kernel tools assume git >= 1.8.3.1 (verified needed > 1.7.9.5) see #6162 
+# The git fetcher also had workarounds for git < 1.7.9.2 which we've dropped
+def check_git_version(sanity_data):
+    import subprocess
+    try:
+        result = subprocess.check_output(["git", "--version"], stderr=subprocess.DEVNULL).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        return "Unable to execute git --version, exit code %d\n%s\n" % (e.returncode, e.output)
+    version = result.split()[2]
+    if bb.utils.vercmp_string_op(version, "1.8.3.1", "<"):
+        return "Your version of git is older than 1.8.3.1 and has bugs which will break builds. Please install a newer version of git.\n"
+    return None
+
+# Check the required perl modules which may not be installed by default
+def check_perl_modules(sanity_data):
+    import subprocess
+    ret = ""
+    modules = ( "Text::ParseWords", "Thread::Queue", "Data::Dumper" )
+    errresult = ''
+    for m in modules:
+        try:
+            subprocess.check_output(["perl", "-e", "use %s" % m])
+        except subprocess.CalledProcessError as e:
+            errresult += bytes.decode(e.output)
+            ret += "%s " % m
+    if ret:
+        return "Required perl module(s) not found: %s\n\n%s\n" % (ret, errresult)
+    return None
+
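+# Each BBLAYERS_CONF_UPDATE_FUNCS entry is colon-separated:
+# <conf file>:<current version variable>:<required version variable>:<update function>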
+def sanity_check_conffiles(d):
+    funcs = d.getVar('BBLAYERS_CONF_UPDATE_FUNCS').split()
+    for func in funcs:
+        conffile, current_version, required_version, func = func.split(":")
+        if check_conf_exists(conffile, d) and d.getVar(current_version) is not None and \
+                d.getVar(current_version) != d.getVar(required_version):
+            try:
+                bb.build.exec_func(func, d)
+            except NotImplementedError as e:
+                bb.fatal(str(e))
+            d.setVar("BB_INVALIDCONF", True)
+
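+# Clean out the sstate manifests, stamps and workdirs recorded in the
+# per-BUILD_ARCH index files; used by the TMPDIR ABI 14 -> 15 transition below.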
+def drop_v14_cross_builds(d):
+    import glob
+    indexes = glob.glob(d.expand("${SSTATE_MANIFESTS}/index-${BUILD_ARCH}_*"))
+    for i in indexes:
+        with open(i, "r") as f:
+            lines = f.readlines()
+            for l in reversed(lines):
+                try:
+                    (stamp, manifest, workdir) = l.split()
+                except ValueError:
+                    bb.fatal("Invalid line '%s' in sstate manifest '%s'" % (l, i))
+                for m in glob.glob(manifest + ".*"):
+                    if m.endswith(".postrm"):
+                        continue
+                    sstate_clean_manifest(m, d)
+                bb.utils.remove(stamp + "*")
+                bb.utils.remove(workdir, recurse = True)
+
+def sanity_handle_abichanges(status, d):
+    #
+    # Check the 'ABI' of TMPDIR
+    #
+    import subprocess
+
+    current_abi = d.getVar('OELAYOUT_ABI')
+    abifile = d.getVar('SANITY_ABIFILE')
+    if os.path.exists(abifile):
+        with open(abifile, "r") as f:
+            abi = f.read().strip()
+        if not abi.isdigit():
+            with open(abifile, "w") as f:
+                f.write(current_abi)
+        elif int(abi) <= 11 and current_abi == "12":
+            status.addresult("The layout of TMPDIR changed for Recipe Specific Sysroots.\nConversion doesn't make sense and this change will rebuild everything so please delete TMPDIR (%s).\n" % d.getVar("TMPDIR"))
+        elif int(abi) <= 13 and current_abi == "14":
+            status.addresult("TMPDIR changed to include path filtering from the pseudo database.\nIt is recommended to use a clean TMPDIR with the new pseudo path filtering so TMPDIR (%s) would need to be removed to continue.\n" % d.getVar("TMPDIR"))
+        elif int(abi) == 14 and current_abi == "15":
+            drop_v14_cross_builds(d)
+            with open(abifile, "w") as f:
+                f.write(current_abi)
+        elif (abi != current_abi):
+            # Code to convert from one ABI to another could go here if possible.
+            status.addresult("Error, TMPDIR has changed its layout version number (%s to %s) and you need to either rebuild, revert or adjust it at your own risk.\n" % (abi, current_abi))
+    else:
+        with open(abifile, "w") as f:
+            f.write(current_abi)
+
+def check_sanity_sstate_dir_change(sstate_dir, data):
+    # Sanity checks to be done when the value of SSTATE_DIR changes
+
+    # Check that SSTATE_DIR isn't on a filesystem with limited filename length (eg. eCryptFS)
+    testmsg = ""
+    if sstate_dir != "":
+        testmsg = check_create_long_filename(sstate_dir, "SSTATE_DIR")
+        # If we don't have permissions to SSTATE_DIR, suggest the user set it as an SSTATE_MIRRORS
+        try:
+            err = testmsg.split(': ')[1].strip()
+            if err == "Permission denied.":
+                testmsg = testmsg + "You could try using %s in SSTATE_MIRRORS rather than as an SSTATE_CACHE.\n" % (sstate_dir)
+        except IndexError:
+            pass
+    return testmsg
+
+def check_sanity_version_change(status, d):
+    # Sanity checks to be done when SANITY_VERSION or NATIVELSBSTRING changes
+    # In other words, these tests run once in a given build directory and then 
+    # never again until the sanity version or host distribution id/version changes.
+
+    # Check the python install is complete. Examples that are often removed in
+    # minimal installations: glib-2.0-native requires xml.parsers.expat and icu
+    # requires distutils.sysconfig.
+    try:
+        import xml.parsers.expat
+        import distutils.sysconfig
+    except ImportError as e:
+        status.addresult('Your Python 3 is not a full install. Please install the module %s (see the Getting Started guide for further information).\n' % e.name)
+
+    status.addresult(check_gcc_version(d))
+    status.addresult(check_make_version(d))
+    status.addresult(check_patch_version(d))
+    status.addresult(check_tar_version(d))
+    status.addresult(check_git_version(d))
+    status.addresult(check_perl_modules(d))
+    status.addresult(check_wsl(d))
+
+    missing = ""
+
+    if not check_app_exists("${MAKE}", d):
+        missing = missing + "GNU make,"
+
+    if not check_app_exists('${BUILD_CC}', d):
+        missing = missing + "C Compiler (%s)," % d.getVar("BUILD_CC")
+
+    if not check_app_exists('${BUILD_CXX}', d):
+        missing = missing + "C++ Compiler (%s)," % d.getVar("BUILD_CXX")
+
+    required_utilities = d.getVar('SANITY_REQUIRED_UTILITIES')
+
+    for util in required_utilities.split():
+        if not check_app_exists(util, d):
+            missing = missing + "%s," % util
+
+    if missing:
+        missing = missing.rstrip(',')
+        status.addresult("Please install the following missing utilities: %s\n" % missing)
+
+    assume_provided = d.getVar('ASSUME_PROVIDED').split()
+    # Check user doesn't have ASSUME_PROVIDED = instead of += in local.conf
+    if "diffstat-native" not in assume_provided:
+        status.addresult('Please use ASSUME_PROVIDED +=, not ASSUME_PROVIDED = in your local.conf\n')
+
+    # Check that TMPDIR isn't on a filesystem with limited filename length (eg. eCryptFS)
+    import stat
+    tmpdir = d.getVar('TMPDIR')
+    status.addresult(check_create_long_filename(tmpdir, "TMPDIR"))
+    tmpdirmode = os.stat(tmpdir).st_mode
+    if (tmpdirmode & stat.S_ISGID):
+        status.addresult("TMPDIR is setgid, please don't build in a setgid directory")
+    if (tmpdirmode & stat.S_ISUID):
+        status.addresult("TMPDIR is setuid, please don't build in a setuid directory")
+
+    # Check that a user isn't building in a path in PSEUDO_IGNORE_PATHS
+    pseudoignorepaths = d.getVar('PSEUDO_IGNORE_PATHS', expand=True).split(",")
+    workdir = d.getVar('WORKDIR', expand=True)
+    for i in pseudoignorepaths:
+        if i and workdir.startswith(i):
+            status.addresult("You are building in a path included in PSEUDO_IGNORE_PATHS " + str(i) + " please locate the build outside this path.\n")
+
+    # Check if PSEUDO_IGNORE_PATHS and paths under pseudo control overlap
+    pseudoignorepaths = d.getVar('PSEUDO_IGNORE_PATHS', expand=True).split(",")
+    pseudo_control_dir = "${D},${PKGD},${PKGDEST},${IMAGEROOTFS},${SDK_OUTPUT}"
+    pseudocontroldir = d.expand(pseudo_control_dir).split(",")
+    for i in pseudoignorepaths:
+        for j in pseudocontroldir:
+            if i and j:
+                if j.startswith(i):
+                    status.addresult("A path included in PSEUDO_IGNORE_PATHS " + str(i) + " and the path " + str(j) + " overlap and this will break pseudo permission and ownership tracking. Please set the path " + str(j) + " to a different directory which does not overlap with pseudo controlled directories. \n")
+
+    # Some third-party software apparently relies on chmod etc. being suid root (!!)
+    import stat
+    suid_check_bins = "chown chmod mknod".split()
+    for bin_cmd in suid_check_bins:
+        bin_path = bb.utils.which(os.environ["PATH"], bin_cmd)
+        if bin_path:
+            bin_stat = os.stat(bin_path)
+            if bin_stat.st_uid == 0 and bin_stat.st_mode & stat.S_ISUID:
+                status.addresult('%s has the setuid bit set. This interferes with pseudo and may cause other issues that break the build process.\n' % bin_path)
+
+    # Check that we can fetch from various network transports
+    netcheck = check_connectivity(d)
+    status.addresult(netcheck)
+    if netcheck:
+        status.network_error = True
+
+    nolibs = d.getVar('NO32LIBS')
+    if not nolibs:
+        lib32path = '/lib'
+        if os.path.exists('/lib64') and ( os.path.islink('/lib64') or os.path.islink('/lib') ):
+           lib32path = '/lib32'
+
+        if os.path.exists('%s/libc.so.6' % lib32path) and not os.path.exists('/usr/include/gnu/stubs-32.h'):
+            status.addresult("You have a 32-bit libc, but no 32-bit headers.  You must install the 32-bit libc headers.\n")
+
+    bbpaths = d.getVar('BBPATH').split(":")
+    if ("." in bbpaths or "./" in bbpaths or "" in bbpaths):
+        status.addresult("BBPATH references the current directory, either through "    \
+                "an empty entry, a './' or a '.'.\n\t This is unsafe and means your "\
+                "layer configuration is adding empty elements to BBPATH.\n\t "\
+                "Please check your layer.conf files and other BBPATH "        \
+                "settings to remove the current working directory "           \
+                "references.\n" \
+                "Parsed BBPATH is " + str(bbpaths))
+
+    oes_bb_conf = d.getVar('OES_BITBAKE_CONF')
+    if not oes_bb_conf:
+        status.addresult('You are not using the OpenEmbedded version of conf/bitbake.conf. This means your environment is misconfigured, in particular check BBPATH.\n')
+
+    # The length of TMPDIR can't be longer than 410
+    status.addresult(check_path_length(tmpdir, "TMPDIR", 410))
+
+    # Check that TMPDIR isn't located on nfs
+    status.addresult(check_not_nfs(tmpdir, "TMPDIR"))
+
+    # Check for case-insensitive file systems (such as Linux in Docker on
+    # macOS with default HFS+ file system)
+    status.addresult(check_case_sensitive(tmpdir, "TMPDIR"))
+
+def sanity_check_locale(d):
+    """
+    Currently bitbake switches locale to en_US.UTF-8 so check that this locale actually exists.
+    """
+    import locale
+    try:
+        locale.setlocale(locale.LC_ALL, "en_US.UTF-8")
+    except locale.Error:
+        raise_sanity_error("Your system needs to support the en_US.UTF-8 locale.", d)
+
+def check_sanity_everybuild(status, d):
+    import os, stat
+    # Sanity tests which test the user's environment so need to run at each build
+    # (or are so cheap it makes sense to always run them).
+
+    if os.getuid() == 0:
+        raise_sanity_error("Do not use Bitbake as root.", d)
+
+    # Check the Python version, we now have a minimum of Python 3.6
+    import sys
+    if sys.hexversion < 0x030600F0:
+        status.addresult('The system requires at least Python 3.6 to run. Please update your Python interpreter.\n')
+
+    # Check the bitbake version meets minimum requirements
+    minversion = d.getVar('BB_MIN_VERSION')
+    if bb.utils.vercmp_string_op(bb.__version__, minversion, "<"):
+        status.addresult('Bitbake version %s is required and version %s was found\n' % (minversion, bb.__version__))
+
+    sanity_check_locale(d)
+
+    paths = d.getVar('PATH').split(":")
+    if "." in paths or "./" in paths or "" in paths:
+        status.addresult("PATH contains '.', './' or '' (empty element), which will break the build, please remove this.\nParsed PATH is " + str(paths) + "\n")
+
+    # Check if bitbake is present in the PATH environment variable
+    bb_check = bb.utils.which(d.getVar('PATH'), 'bitbake')
+    if not bb_check:
+        bb.warn("bitbake binary is not found in PATH, did you source the script?")
+
+    # Check whether the 'inherit' directive is used in a conf file; in
+    # configuration files the uppercase INHERIT variable should be used instead
+    inherit = d.getVar('inherit')
+    if inherit:
+        status.addresult("Please don't use the inherit directive in your local.conf. The directive is only supposed to be used in classes and recipes to inherit bbclasses. In configuration files, INHERIT should be used instead.\n")
+
+    # Check that the DISTRO is valid, if set
+    # (take into account that a distro configuration may rename DISTRO)
+    distro = d.getVar('DISTRO')
+    if distro and distro != "nodistro":
+        if not ( check_conf_exists("conf/distro/${DISTRO}.conf", d) or check_conf_exists("conf/distro/include/${DISTRO}.inc", d) ):
+            status.addresult("DISTRO '%s' not found. Please set a valid DISTRO in your local.conf\n" % d.getVar("DISTRO"))
+
+    # Check that these variables don't use tilde-expansion as we don't do that
+    for v in ("TMPDIR", "DL_DIR", "SSTATE_DIR"):
+        if d.getVar(v).startswith("~"):
+            status.addresult("%s uses ~ but Bitbake will not expand this, use an absolute path or variables." % v)
+
+    # Check that DL_DIR is set, exists and is writable. In theory, we should never even hit the check if DL_DIR isn't 
+    # set, since so much relies on it being set.
+    dldir = d.getVar('DL_DIR')
+    if not dldir:
+        status.addresult("DL_DIR is not set. Your environment is misconfigured, check that DL_DIR is set, and if the directory exists, that it is writable. \n")
+    if os.path.exists(dldir) and not os.access(dldir, os.W_OK):
+        status.addresult("DL_DIR: %s exists but you do not appear to have write access to it. \n" % dldir)
+    check_symlink(dldir, d)
+
+    # Check that the MACHINE is valid, if it is set
+    machinevalid = True
+    if d.getVar('MACHINE'):
+        if not check_conf_exists("conf/machine/${MACHINE}.conf", d):
+            status.addresult('MACHINE=%s is invalid. Please set a valid MACHINE in your local.conf, environment or other configuration file.\n' % (d.getVar('MACHINE')))
+            machinevalid = False
+        else:
+            status.addresult(check_sanity_validmachine(d))
+    else:
+        status.addresult('Please set a MACHINE in your local.conf or environment\n')
+        machinevalid = False
+    if machinevalid:
+        status.addresult(check_toolchain(d))
+
+    # Check that the SDKMACHINE is valid, if it is set
+    if d.getVar('SDKMACHINE'):
+        if not check_conf_exists("conf/machine-sdk/${SDKMACHINE}.conf", d):
+            status.addresult('Specified SDKMACHINE value is not valid\n')
+        elif d.getVar('SDK_ARCH', False) == "${BUILD_ARCH}":
+            status.addresult('SDKMACHINE is set, but SDK_ARCH has not been changed as a result - SDKMACHINE may have been set too late (e.g. in the distro configuration)\n')
+
+    # If SDK_VENDOR looks like "-my-sdk" then the triples are badly formed so fail early
+    sdkvendor = d.getVar("SDK_VENDOR")
+    if not (sdkvendor.startswith("-") and sdkvendor.count("-") == 1):
+        status.addresult("SDK_VENDOR should be of the form '-foosdk' with a single dash; found '%s'\n" % sdkvendor)
+
+    check_supported_distro(d)
+
+    omask = os.umask(0o022)
+    if omask & 0o755:
+        status.addresult("Please use a umask which allows a+rx and u+rwx\n")
+    os.umask(omask)
+
+    if d.getVar('TARGET_ARCH') == "arm":
+        # This path is no longer user-readable in modern (very recent) Linux
+        try:
+            if os.path.exists("/proc/sys/vm/mmap_min_addr"):
+                f = open("/proc/sys/vm/mmap_min_addr", "r")
+                try:
+                    if (int(f.read().strip()) > 65536):
+                        status.addresult("/proc/sys/vm/mmap_min_addr is not <= 65536. This will cause problems with qemu so please fix the value (as root).\n\nTo fix this in later reboots, set vm.mmap_min_addr = 65536 in /etc/sysctl.conf.\n")
+                finally:
+                    f.close()
+        except:
+            pass
+
+    for checkdir in ['COREBASE', 'TMPDIR']:
+        val = d.getVar(checkdir)
+        if val.find('..') != -1:
+            status.addresult("Error, you have '..' in your %s directory path. Please ensure the variable contains an absolute path as this can break some recipe builds in obtuse ways." % checkdir)
+        if val.find('+') != -1:
+            status.addresult("Error, you have an invalid character (+) in your %s directory path. Please move the installation to a directory which doesn't include any + characters." % checkdir)
+        if val.find('@') != -1:
+            status.addresult("Error, you have an invalid character (@) in your %s directory path. Please move the installation to a directory which doesn't include any @ characters." % checkdir)
+        if val.find(' ') != -1:
+            status.addresult("Error, you have a space in your %s directory path. Please move the installation to a directory which doesn't include a space since autotools doesn't support this." % checkdir)
+        if val.find('%') != -1:
+            status.addresult("Error, you have an invalid character (%%) in your %s directory path which causes problems with python string formatting. Please move the installation to a directory which doesn't include any %% characters." % checkdir)
+
+    # Check the format of MIRRORS, PREMIRRORS and SSTATE_MIRRORS
+    import re
+    mirror_vars = ['MIRRORS', 'PREMIRRORS', 'SSTATE_MIRRORS']
+    protocols = ['http', 'ftp', 'file', 'https', \
+                 'git', 'gitsm', 'hg', 'osc', 'p4', 'svn', \
+                 'bzr', 'cvs', 'npm', 'sftp', 'ssh', 's3', 'az', 'ftps']
+    for mirror_var in mirror_vars:
+        mirrors = (d.getVar(mirror_var) or '').replace('\\n', ' ').split()
+
+        # Split into pairs
+        if len(mirrors) % 2 != 0:
+            bb.warn('Invalid mirror variable value for %s: %s, should contain paired members.' % (mirror_var, str(mirrors)))
+            continue
+        mirrors = list(zip(*[iter(mirrors)]*2))
+
+        for mirror_entry in mirrors:
+            pattern, mirror = mirror_entry
+
+            decoded = bb.fetch2.decodeurl(pattern)
+            try:
+                pattern_scheme = re.compile(decoded[0])
+            except re.error as exc:
+                bb.warn('Invalid scheme regex (%s) in %s; %s' % (pattern, mirror_var, mirror_entry))
+                continue
+
+            if not any(pattern_scheme.match(protocol) for protocol in protocols):
+                bb.warn('Invalid protocol (%s) in %s: %s' % (decoded[0], mirror_var, mirror_entry))
+                continue
+
+            if not any(mirror.startswith(protocol + '://') for protocol in protocols):
+                bb.warn('Invalid protocol in %s: %s' % (mirror_var, mirror_entry))
+                continue
+
+            if mirror.startswith('file://'):
+                import urllib
+                check_symlink(urllib.parse.urlparse(mirror).path, d)
+                # SSTATE_MIRRORS entries may end with a /PATH string
+                if mirror.endswith('/PATH'):
+                    # remove the trailing /PATH from the mirror URL to get a
+                    # working base directory path
+                    mirror_base = urllib.parse.urlparse(mirror[:-1*len('/PATH')]).path
+                    check_symlink(mirror_base, d)
+
+    # Check sstate mirrors aren't being used with a local hash server and no remote
+    hashserv = d.getVar("BB_HASHSERVE")
+    if d.getVar("SSTATE_MIRRORS") and hashserv and hashserv.startswith("unix://") and not d.getVar("BB_HASHSERVE_UPSTREAM"):
+        bb.warn("You are using a local hash equivalence server but have configured an sstate mirror. This will likely mean no sstate will match from the mirror. You may wish to disable the hash equivalence use (BB_HASHSERVE), or use a hash equivalence server alongside the sstate mirror.")
+
+    # Check that TMPDIR hasn't changed location since the last time we were run
+    tmpdir = d.getVar('TMPDIR')
+    checkfile = os.path.join(tmpdir, "saved_tmpdir")
+    if os.path.exists(checkfile):
+        with open(checkfile, "r") as f:
+            saved_tmpdir = f.read().strip()
+            if (saved_tmpdir != tmpdir):
+                status.addresult("Error, TMPDIR has changed location. You need to either move it back to %s or delete it and rebuild\n" % saved_tmpdir)
+    else:
+        bb.utils.mkdirhier(tmpdir)
+        # Remove setuid, setgid and sticky bits from TMPDIR
+        try:
+            os.chmod(tmpdir, os.stat(tmpdir).st_mode & ~ stat.S_ISUID)
+            os.chmod(tmpdir, os.stat(tmpdir).st_mode & ~ stat.S_ISGID)
+            os.chmod(tmpdir, os.stat(tmpdir).st_mode & ~ stat.S_ISVTX)
+        except OSError as exc:
+            bb.warn("Unable to chmod TMPDIR: %s" % exc)
+        with open(checkfile, "w") as f:
+            f.write(tmpdir)
+
+    # If /bin/sh is a symlink, check that it points to dash or bash
+    if os.path.islink('/bin/sh'):
+        real_sh = os.path.realpath('/bin/sh')
+        # Due to update-alternatives, the shell name may take various
+        # forms, such as /bin/dash, bin/bash, /bin/bash.bash ...
+        if '/dash' not in real_sh and '/bash' not in real_sh:
+            status.addresult("Error, /bin/sh links to %s, must be dash or bash\n" % real_sh)
+
+def check_sanity(sanity_data):
+    class SanityStatus(object):
+        def __init__(self):
+            self.messages = ""
+            self.network_error = False
+
+        def addresult(self, message):
+            if message:
+                self.messages = self.messages + message
+
+    status = SanityStatus()
+
+    tmpdir = sanity_data.getVar('TMPDIR')
+    sstate_dir = sanity_data.getVar('SSTATE_DIR')
+
+    check_symlink(sstate_dir, sanity_data)
+
+    # Check saved sanity info
+    last_sanity_version = 0
+    last_tmpdir = ""
+    last_sstate_dir = ""
+    last_nativelsbstr = ""
+    sanityverfile = sanity_data.expand("${TOPDIR}/cache/sanity_info")
+    if os.path.exists(sanityverfile):
+        with open(sanityverfile, 'r') as f:
+            for line in f:
+                if line.startswith('SANITY_VERSION'):
+                    last_sanity_version = int(line.split()[1])
+                if line.startswith('TMPDIR'):
+                    last_tmpdir = line.split()[1]
+                if line.startswith('SSTATE_DIR'):
+                    last_sstate_dir = line.split()[1]
+                if line.startswith('NATIVELSBSTRING'):
+                    last_nativelsbstr = line.split()[1]
+
+    check_sanity_everybuild(status, sanity_data)
+    
+    sanity_version = int(sanity_data.getVar('SANITY_VERSION') or 1)
+    network_error = False
+    # NATIVELSBSTRING var may have been overridden with "universal", so
+    # get actual host distribution id and version
+    nativelsbstr = lsb_distro_identifier(sanity_data)
+    if last_sanity_version < sanity_version or last_nativelsbstr != nativelsbstr: 
+        check_sanity_version_change(status, sanity_data)
+        status.addresult(check_sanity_sstate_dir_change(sstate_dir, sanity_data))
+    else: 
+        if last_sstate_dir != sstate_dir:
+            status.addresult(check_sanity_sstate_dir_change(sstate_dir, sanity_data))
+
+    if os.path.exists(os.path.dirname(sanityverfile)) and not status.messages:
+        with open(sanityverfile, 'w') as f:
+            f.write("SANITY_VERSION %s\n" % sanity_version) 
+            f.write("TMPDIR %s\n" % tmpdir) 
+            f.write("SSTATE_DIR %s\n" % sstate_dir) 
+            f.write("NATIVELSBSTRING %s\n" % nativelsbstr) 
+
+    sanity_handle_abichanges(status, sanity_data)
+
+    if status.messages != "":
+        raise_sanity_error(sanity_data.expand(status.messages), sanity_data, status.network_error)
+
+# Create a copy of the datastore and finalise it to ensure appends and 
+# overrides are set - the datastore has yet to be finalised at ConfigParsed
+def copy_data(e):
+    sanity_data = bb.data.createCopy(e.data)
+    sanity_data.finalize()
+    return sanity_data
+
+addhandler config_reparse_eventhandler
+config_reparse_eventhandler[eventmask] = "bb.event.ConfigParsed"
+python config_reparse_eventhandler() {
+    sanity_check_conffiles(e.data)
+}
+
+addhandler check_sanity_eventhandler
+check_sanity_eventhandler[eventmask] = "bb.event.SanityCheck bb.event.NetworkTest"
+python check_sanity_eventhandler() {
+    if bb.event.getName(e) == "SanityCheck":
+        sanity_data = copy_data(e)
+        check_sanity(sanity_data)
+        if e.generateevents:
+            sanity_data.setVar("SANITY_USE_EVENTS", "1")
+        bb.event.fire(bb.event.SanityCheckPassed(), e.data)
+    elif bb.event.getName(e) == "NetworkTest":
+        sanity_data = copy_data(e)
+        if e.generateevents:
+            sanity_data.setVar("SANITY_USE_EVENTS", "1")
+        bb.event.fire(bb.event.NetworkTestFailed() if check_connectivity(sanity_data) else bb.event.NetworkTestPassed(), e.data)
+
+    return
+}
diff --git a/poky/meta/classes-global/sstate.bbclass b/poky/meta/classes-global/sstate.bbclass
new file mode 100644
index 0000000..cd77c58
--- /dev/null
+++ b/poky/meta/classes-global/sstate.bbclass
@@ -0,0 +1,1364 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+SSTATE_VERSION = "10"
+
+SSTATE_ZSTD_CLEVEL ??= "8"
+
+SSTATE_MANIFESTS ?= "${TMPDIR}/sstate-control"
+SSTATE_MANFILEPREFIX = "${SSTATE_MANIFESTS}/manifest-${SSTATE_MANMACH}-${PN}"
+
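+# Construct the sstate object filename for a task from the colon-separated
+# sstate spec, the task (uni)hash and the task name, shortening the purely
+# informational spec fields if the result would exceed the filename length limit.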
+def generate_sstatefn(spec, hash, taskname, siginfo, d):
+    if taskname is None:
+        return ""
+    extension = ".tar.zst"
+    # 8 chars reserved for siginfo
+    limit = 254 - 8
+    if siginfo:
+        limit = 254
+        extension = ".tar.zst.siginfo"
+    if not hash:
+        hash = "INVALID"
+    fn = spec + hash + "_" + taskname + extension
+    # If the filename is too long, attempt to reduce it
+    if len(fn) > limit:
+        components = spec.split(":")
+        # Fields 0,5,6 are mandatory, 1 is most useful, 2,3,4 are just for information
+        # 7 is for the separators
+        avail = (limit - len(hash + "_" + taskname + extension) - len(components[0]) - len(components[1]) - len(components[5]) - len(components[6]) - 7) // 3
+        components[2] = components[2][:avail]
+        components[3] = components[3][:avail]
+        components[4] = components[4][:avail]
+        spec = ":".join(components)
+        fn = spec + hash + "_" + taskname + extension
+        if len(fn) > limit:
+            bb.fatal("Unable to reduce sstate name to less than 255 characters")
+    return hash[:2] + "/" + hash[2:4] + "/" + fn
+
+SSTATE_PKGARCH    = "${PACKAGE_ARCH}"
+SSTATE_PKGSPEC    = "sstate:${PN}:${PACKAGE_ARCH}${TARGET_VENDOR}-${TARGET_OS}:${PV}:${PR}:${SSTATE_PKGARCH}:${SSTATE_VERSION}:"
+SSTATE_SWSPEC     = "sstate:${PN}::${PV}:${PR}::${SSTATE_VERSION}:"
+SSTATE_PKGNAME    = "${SSTATE_EXTRAPATH}${@generate_sstatefn(d.getVar('SSTATE_PKGSPEC'), d.getVar('BB_UNIHASH'), d.getVar('SSTATE_CURRTASK'), False, d)}"
+SSTATE_PKG        = "${SSTATE_DIR}/${SSTATE_PKGNAME}"
+SSTATE_EXTRAPATH   = ""
+SSTATE_EXTRAPATHWILDCARD = ""
+SSTATE_PATHSPEC   = "${SSTATE_DIR}/${SSTATE_EXTRAPATHWILDCARD}*/*/${SSTATE_PKGSPEC}*_${SSTATE_PATH_CURRTASK}.tar.zst*"
+
+# explicitly make PV depend on the evaluated value of the PV variable
+PV[vardepvalue] = "${PV}"
+
+# We don't want the sstate to depend on things like the distro string
+# of the system, we let the sstate paths take care of this.
+SSTATE_EXTRAPATH[vardepvalue] = ""
+SSTATE_EXTRAPATHWILDCARD[vardepvalue] = ""
+
+# For multilib rpm the allarch packagegroup files can overwrite each other (in theory they're identical)
+SSTATE_ALLOW_OVERLAP_FILES = "${DEPLOY_DIR}/licenses/"
+# Avoid docbook/sgml catalog warnings for now
+SSTATE_ALLOW_OVERLAP_FILES += "${STAGING_ETCDIR_NATIVE}/sgml ${STAGING_DATADIR_NATIVE}/sgml"
+# sdk-provides-dummy-nativesdk and nativesdk-buildtools-perl-dummy overlap for different SDKMACHINE
+SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_RPM}/sdk_provides_dummy_nativesdk/ ${DEPLOY_DIR_IPK}/sdk-provides-dummy-nativesdk/"
+SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_RPM}/buildtools_dummy_nativesdk/ ${DEPLOY_DIR_IPK}/buildtools-dummy-nativesdk/"
+# target-sdk-provides-dummy overlaps because allarch is disabled when multilib is used
+SSTATE_ALLOW_OVERLAP_FILES += "${COMPONENTS_DIR}/sdk-provides-dummy-target/ ${DEPLOY_DIR_RPM}/sdk_provides_dummy_target/ ${DEPLOY_DIR_IPK}/sdk-provides-dummy-target/"
+# Archive the sources for many architectures in one deploy folder
+SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_SRC}"
+# ovmf/grub-efi/systemd-boot/intel-microcode multilib recipes can generate identical overlapping files
+SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_IMAGE}/ovmf"
+SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_IMAGE}/grub-efi"
+SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_IMAGE}/systemd-boot"
+SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_IMAGE}/microcode"
+
+SSTATE_SCAN_FILES ?= "*.la *-config *_config postinst-*"
+SSTATE_SCAN_CMD ??= 'find ${SSTATE_BUILDDIR} \( -name "${@"\" -o -name \"".join(d.getVar("SSTATE_SCAN_FILES").split())}" \) -type f'
+SSTATE_SCAN_CMD_NATIVE ??= 'grep -Irl -e ${RECIPE_SYSROOT} -e ${RECIPE_SYSROOT_NATIVE} -e ${HOSTTOOLS_DIR} ${SSTATE_BUILDDIR}'
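+# Each SSTATE_HASHEQUIV_FILEMAP entry is colon-separated:
+# <sstate task>:<output file glob>:<path or regex- prefixed pattern to strip
+# from matching files when computing the task output hash for hash equivalence>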
+SSTATE_HASHEQUIV_FILEMAP ?= " \
+    populate_sysroot:*/postinst-useradd-*:${TMPDIR} \
+    populate_sysroot:*/postinst-useradd-*:${COREBASE} \
+    populate_sysroot:*/postinst-useradd-*:regex-\s(PATH|PSEUDO_IGNORE_PATHS|HOME|LOGNAME|OMP_NUM_THREADS|USER)=.*\s \
+    populate_sysroot:*/crossscripts/*:${TMPDIR} \
+    populate_sysroot:*/crossscripts/*:${COREBASE} \
+    "
+
+BB_HASHFILENAME = "False ${SSTATE_PKGSPEC} ${SSTATE_SWSPEC}"
+
+SSTATE_ARCHS = " \
+    ${BUILD_ARCH} \
+    ${BUILD_ARCH}_${ORIGNATIVELSBSTRING} \
+    ${BUILD_ARCH}_${SDK_ARCH}_${SDK_OS} \
+    ${SDK_ARCH}_${SDK_OS} \
+    ${SDK_ARCH}_${PACKAGE_ARCH} \
+    allarch \
+    ${PACKAGE_ARCH} \
+    ${PACKAGE_EXTRA_ARCHS} \
+    ${MACHINE_ARCH}"
+SSTATE_ARCHS[vardepsexclude] = "ORIGNATIVELSBSTRING"
+
+SSTATE_MANMACH ?= "${SSTATE_PKGARCH}"
+
+SSTATECREATEFUNCS += "sstate_hardcode_path"
+SSTATECREATEFUNCS[vardeps] = "SSTATE_SCAN_FILES"
+SSTATEPOSTCREATEFUNCS = ""
+SSTATEPREINSTFUNCS = ""
+SSTATEPOSTUNPACKFUNCS = "sstate_hardcode_path_unpack"
+SSTATEPOSTINSTFUNCS = ""
+EXTRA_STAGING_FIXMES ?= "HOSTTOOLS_DIR"
+
+# Check whether sstate exists for tasks that support sstate and are in the
+# locked signatures file.
+SIGGEN_LOCKEDSIGS_SSTATE_EXISTS_CHECK ?= 'error'
+
+# Check whether the task's computed hash matches the task's hash in the
+# locked signatures file.
+SIGGEN_LOCKEDSIGS_TASKSIG_CHECK ?= "error"
+
+# The GnuPG key ID and passphrase to use to sign sstate archives (or unset to
+# not sign)
+SSTATE_SIG_KEY ?= ""
+SSTATE_SIG_PASSPHRASE ?= ""
+# Whether to verify the GnuPG signatures when extracting sstate archives
+SSTATE_VERIFY_SIG ?= "0"
+# List of signatures to consider valid.
+SSTATE_VALID_SIGS ??= ""
+SSTATE_VALID_SIGS[vardepvalue] = ""
+
+SSTATE_HASHEQUIV_METHOD ?= "oe.sstatesig.OEOuthashBasic"
+SSTATE_HASHEQUIV_METHOD[doc] = "The fully-qualified function used to calculate \
+    the output hash for a task, which in turn is used to determine equivalency. \
+    "
+
+SSTATE_HASHEQUIV_REPORT_TASKDATA ?= "0"
+SSTATE_HASHEQUIV_REPORT_TASKDATA[doc] = "Report additional useful data to the \
+    hash equivalency server, such as PN, PV, taskname, etc. This information \
+    is very useful for developers looking at task data, but may leak sensitive \
+    data if the equivalence server is public. \
+    "
+
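+# Set SSTATE_PKGARCH (and the native sstate path handling) according to the
+# recipe class, and hook the sstate pre/post functions into each SSTATETASKS task.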
+python () {
+    if bb.data.inherits_class('native', d):
+        d.setVar('SSTATE_PKGARCH', d.getVar('BUILD_ARCH', False))
+    elif bb.data.inherits_class('crosssdk', d):
+        d.setVar('SSTATE_PKGARCH', d.expand("${BUILD_ARCH}_${SDK_ARCH}_${SDK_OS}"))
+    elif bb.data.inherits_class('cross', d):
+        d.setVar('SSTATE_PKGARCH', d.expand("${BUILD_ARCH}"))
+    elif bb.data.inherits_class('nativesdk', d):
+        d.setVar('SSTATE_PKGARCH', d.expand("${SDK_ARCH}_${SDK_OS}"))
+    elif bb.data.inherits_class('cross-canadian', d):
+        d.setVar('SSTATE_PKGARCH', d.expand("${SDK_ARCH}_${PACKAGE_ARCH}"))
+    elif bb.data.inherits_class('allarch', d) and d.getVar("PACKAGE_ARCH") == "all":
+        d.setVar('SSTATE_PKGARCH', "allarch")
+    else:
+        d.setVar('SSTATE_MANMACH', d.expand("${PACKAGE_ARCH}"))
+
+    if bb.data.inherits_class('native', d) or bb.data.inherits_class('crosssdk', d) or bb.data.inherits_class('cross', d):
+        d.setVar('SSTATE_EXTRAPATH', "${NATIVELSBSTRING}/")
+        d.setVar('BB_HASHFILENAME', "True ${SSTATE_PKGSPEC} ${SSTATE_SWSPEC}")
+        d.setVar('SSTATE_EXTRAPATHWILDCARD', "${NATIVELSBSTRING}/")
+
+    unique_tasks = sorted(set((d.getVar('SSTATETASKS') or "").split()))
+    d.setVar('SSTATETASKS', " ".join(unique_tasks))
+    for task in unique_tasks:
+        d.prependVarFlag(task, 'prefuncs', "sstate_task_prefunc ")
+        d.appendVarFlag(task, 'postfuncs', " sstate_task_postfunc")
+        d.setVarFlag(task, 'network', '1')
+        d.setVarFlag(task + "_setscene", 'network', '1')
+}
+
+def sstate_init(task, d):
+    ss = {}
+    ss['task'] = task
+    ss['dirs'] = []
+    ss['plaindirs'] = []
+    ss['lockfiles'] = []
+    ss['lockfiles-shared'] = []
+    return ss
+
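+# Build the shared state dictionary for a task from its sstate-inputdirs,
+# sstate-outputdirs, sstate-plaindirs, sstate-lockfile(-shared),
+# sstate-interceptfuncs and sstate-fixmedir varflags.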
+def sstate_state_fromvars(d, task = None):
+    if task is None:
+        task = d.getVar('BB_CURRENTTASK')
+        if not task:
+            bb.fatal("sstate code running without task context?!")
+        task = task.replace("_setscene", "")
+
+    if task.startswith("do_"):
+        task = task[3:]
+    inputs = (d.getVarFlag("do_" + task, 'sstate-inputdirs') or "").split()
+    outputs = (d.getVarFlag("do_" + task, 'sstate-outputdirs') or "").split()
+    plaindirs = (d.getVarFlag("do_" + task, 'sstate-plaindirs') or "").split()
+    lockfiles = (d.getVarFlag("do_" + task, 'sstate-lockfile') or "").split()
+    lockfilesshared = (d.getVarFlag("do_" + task, 'sstate-lockfile-shared') or "").split()
+    interceptfuncs = (d.getVarFlag("do_" + task, 'sstate-interceptfuncs') or "").split()
+    fixmedir = d.getVarFlag("do_" + task, 'sstate-fixmedir') or ""
+    if not task or len(inputs) != len(outputs):
+        bb.fatal("sstate variables not setup correctly?!")
+
+    if task == "populate_lic":
+        d.setVar("SSTATE_PKGSPEC", "${SSTATE_SWSPEC}")
+        d.setVar("SSTATE_EXTRAPATH", "")
+        d.setVar('SSTATE_EXTRAPATHWILDCARD', "")
+
+    ss = sstate_init(task, d)
+    for i in range(len(inputs)):
+        sstate_add(ss, inputs[i], outputs[i], d)
+    ss['lockfiles'] = lockfiles
+    ss['lockfiles-shared'] = lockfilesshared
+    ss['plaindirs'] = plaindirs
+    ss['interceptfuncs'] = interceptfuncs
+    ss['fixmedir'] = fixmedir
+    return ss
+
+def sstate_add(ss, source, dest, d):
+    if not source.endswith("/"):
+        source = source + "/"
+    if not dest.endswith("/"):
+        dest = dest + "/"
+    source = os.path.normpath(source)
+    dest = os.path.normpath(dest)
+    srcbase = os.path.basename(source)
+    ss['dirs'].append([srcbase, source, dest])
+    return ss
+
+def sstate_install(ss, d):
+    import oe.path
+    import oe.sstatesig
+    import subprocess
+
+    sharedfiles = []
+    shareddirs = []
+    bb.utils.mkdirhier(d.expand("${SSTATE_MANIFESTS}"))
+
+    sstateinst = d.expand("${WORKDIR}/sstate-install-%s/" % ss['task'])
+
+    manifest, d2 = oe.sstatesig.sstate_get_manifest_filename(ss['task'], d)
+
+    if os.access(manifest, os.R_OK):
+        bb.fatal("Package already staged (%s)?!" % manifest)
+
+    d.setVar("SSTATE_INST_POSTRM", manifest + ".postrm")
+
+    locks = []
+    for lock in ss['lockfiles-shared']:
+        locks.append(bb.utils.lockfile(lock, True))
+    for lock in ss['lockfiles']:
+        locks.append(bb.utils.lockfile(lock))
+
+    for state in ss['dirs']:
+        bb.debug(2, "Staging files from %s to %s" % (state[1], state[2]))
+        for walkroot, dirs, files in os.walk(state[1]):
+            for file in files:
+                srcpath = os.path.join(walkroot, file)
+                dstpath = srcpath.replace(state[1], state[2])
+                #bb.debug(2, "Staging %s to %s" % (srcpath, dstpath))
+                sharedfiles.append(dstpath)
+            for dir in dirs:
+                srcdir = os.path.join(walkroot, dir)
+                dstdir = srcdir.replace(state[1], state[2])
+                #bb.debug(2, "Staging %s to %s" % (srcdir, dstdir))
+                if os.path.islink(srcdir):
+                    sharedfiles.append(dstdir)
+                    continue
+                if not dstdir.endswith("/"):
+                    dstdir = dstdir + "/"
+                shareddirs.append(dstdir)
+
+    # Check the file list for conflicts against files which already exist
+    overlap_allowed = (d.getVar("SSTATE_ALLOW_OVERLAP_FILES") or "").split()
+    match = []
+    for f in sharedfiles:
+        if os.path.exists(f) and not os.path.islink(f):
+            f = os.path.normpath(f)
+            realmatch = True
+            for w in overlap_allowed:
+                w = os.path.normpath(w)
+                if f.startswith(w):
+                    realmatch = False
+                    break
+            if realmatch:
+                match.append(f)
+                sstate_search_cmd = "grep -rlF '%s' %s --exclude=master.list | sed -e 's:^.*/::'" % (f, d.expand("${SSTATE_MANIFESTS}"))
+                search_output = subprocess.Popen(sstate_search_cmd, shell=True, stdout=subprocess.PIPE).communicate()[0]
+                if search_output:
+                    match.append("  (matched in %s)" % search_output.decode('utf-8').rstrip())
+                else:
+                    match.append("  (not matched to any task)")
+    if match:
+        bb.error("The recipe %s is trying to install files into a shared " \
+          "area when those files already exist. Those files and their manifest " \
+          "location are:\n  %s\nPlease verify which recipe should provide the " \
+          "above files.\n\nThe build has stopped, as continuing in this scenario WILL " \
+          "break things - if not now, possibly in the future (we've seen builds fail " \
+          "several months later). If the system knew how to recover from this " \
+          "automatically it would, however there are several different scenarios " \
+          "which can result in this and we don't know which one this is. It may be " \
+          "you have switched providers of something like virtual/kernel (e.g. from " \
+          "linux-yocto to linux-yocto-dev), in that case you need to execute the " \
+          "clean task for both recipes and it will resolve this error. It may be " \
+          "you changed DISTRO_FEATURES from systemd to udev or vice versa. Cleaning " \
+          "those recipes should again resolve this error, however switching " \
+          "DISTRO_FEATURES on an existing build directory is not supported - you " \
+          "should really clean out tmp and rebuild (reusing sstate should be safe). " \
+          "It could be the overlapping files detected are harmless in which case " \
+          "adding them to SSTATE_ALLOW_OVERLAP_FILES may be the correct solution. It could " \
+          "also be your build is including two different conflicting versions of " \
+          "things (e.g. bluez 4 and bluez 5) and the correct solution for that would " \
+          "be to resolve the conflict. If in doubt, please ask on the mailing list, " \
+          "sharing the error and filelist above." % \
+          (d.getVar('PN'), "\n  ".join(match)))
+        bb.fatal("If the above message is too much, the simpler version is you're advised to wipe out tmp and rebuild (reusing sstate is fine). That will likely fix things in most (but not all) cases.")
+
+    if ss['fixmedir'] and os.path.exists(ss['fixmedir'] + "/fixmepath.cmd"):
+        sharedfiles.append(ss['fixmedir'] + "/fixmepath.cmd")
+        sharedfiles.append(ss['fixmedir'] + "/fixmepath")
+
+    # Write out the manifest
+    f = open(manifest, "w")
+    for file in sharedfiles:
+        f.write(file + "\n")
+
+    # We want to ensure that directories appear at the end of the manifest
+    # so that when we test to see if they should be deleted any contents
+    # added by the task will have been removed first.
+    dirs = sorted(shareddirs, key=len)
+    # Must remove children first, which will have a longer path than the parent
+    for di in reversed(dirs):
+        f.write(di + "\n")
+    f.close()
+
+    # Append to the list of manifests for this PACKAGE_ARCH
+
+    i = d2.expand("${SSTATE_MANIFESTS}/index-${SSTATE_MANMACH}")
+    l = bb.utils.lockfile(i + ".lock")
+    filedata = d.getVar("STAMP") + " " + d2.getVar("SSTATE_MANFILEPREFIX") + " " + d.getVar("WORKDIR") + "\n"
+    manifests = []
+    if os.path.exists(i):
+        with open(i, "r") as f:
+            manifests = f.readlines()
+    # We append new entries, we don't remove older entries which may have the same
+    # manifest name but different versions from stamp/workdir. See below.
+    if filedata not in manifests:
+        with open(i, "a+") as f:
+            f.write(filedata)
+    bb.utils.unlockfile(l)
+
+    # Run the actual file install
+    for state in ss['dirs']:
+        if os.path.exists(state[1]):
+            oe.path.copyhardlinktree(state[1], state[2])
+
+    for postinst in (d.getVar('SSTATEPOSTINSTFUNCS') or '').split():
+        # All hooks should run in the SSTATE_INSTDIR
+        bb.build.exec_func(postinst, d, (sstateinst,))
+
+    for lock in locks:
+        bb.utils.unlockfile(lock)
+
+sstate_install[vardepsexclude] += "SSTATE_ALLOW_OVERLAP_FILES SSTATE_MANMACH SSTATE_MANFILEPREFIX"
+sstate_install[vardeps] += "${SSTATEPOSTINSTFUNCS}"
+
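+# Fetch (if needed), optionally verify and then unpack the sstate archive for
+# the given task; returns False if no usable package is available.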
+def sstate_installpkg(ss, d):
+    from oe.gpg_sign import get_signer
+
+    sstateinst = d.expand("${WORKDIR}/sstate-install-%s/" % ss['task'])
+    d.setVar("SSTATE_CURRTASK", ss['task'])
+    sstatefetch = d.getVar('SSTATE_PKGNAME')
+    sstatepkg = d.getVar('SSTATE_PKG')
+
+    if not os.path.exists(sstatepkg):
+        pstaging_fetch(sstatefetch, d)
+
+    if not os.path.isfile(sstatepkg):
+        bb.note("Sstate package %s does not exist" % sstatepkg)
+        return False
+
+    sstate_clean(ss, d)
+
+    d.setVar('SSTATE_INSTDIR', sstateinst)
+
+    if bb.utils.to_boolean(d.getVar("SSTATE_VERIFY_SIG"), False):
+        if not os.path.isfile(sstatepkg + '.sig'):
+            bb.warn("No signature file for sstate package %s, skipping acceleration..." % sstatepkg)
+            return False
+        signer = get_signer(d, 'local')
+        if not signer.verify(sstatepkg + '.sig', d.getVar("SSTATE_VALID_SIGS")):
+            bb.warn("Cannot verify signature on sstate package %s, skipping acceleration..." % sstatepkg)
+            return False
+
+    # Empty the sstateinst directory to ensure it's clean
+    if os.path.exists(sstateinst):
+        oe.path.remove(sstateinst)
+    bb.utils.mkdirhier(sstateinst)
+
+    sstateinst = d.getVar("SSTATE_INSTDIR")
+    d.setVar('SSTATE_FIXMEDIR', ss['fixmedir'])
+
+    for f in (d.getVar('SSTATEPREINSTFUNCS') or '').split() + ['sstate_unpack_package']:
+        # All hooks should run in the SSTATE_INSTDIR
+        bb.build.exec_func(f, d, (sstateinst,))
+
+    return sstate_installpkgdir(ss, d)
+
+def sstate_installpkgdir(ss, d):
+    import oe.path
+    import subprocess
+
+    sstateinst = d.getVar("SSTATE_INSTDIR")
+    d.setVar('SSTATE_FIXMEDIR', ss['fixmedir'])
+
+    for f in (d.getVar('SSTATEPOSTUNPACKFUNCS') or '').split():
+        # All hooks should run in the SSTATE_INSTDIR
+        bb.build.exec_func(f, d, (sstateinst,))
+
+    def prepdir(dir):
+        # remove dir if it exists, ensure any parent directories do exist
+        if os.path.exists(dir):
+            oe.path.remove(dir)
+        bb.utils.mkdirhier(dir)
+        oe.path.remove(dir)
+
+    for state in ss['dirs']:
+        prepdir(state[1])
+        bb.utils.rename(sstateinst + state[0], state[1])
+    sstate_install(ss, d)
+
+    for plain in ss['plaindirs']:
+        workdir = d.getVar('WORKDIR')
+        sharedworkdir = os.path.join(d.getVar('TMPDIR'), "work-shared")
+        src = sstateinst + "/" + plain.replace(workdir, '')
+        if sharedworkdir in plain:
+            src = sstateinst + "/" + plain.replace(sharedworkdir, '')
+        dest = plain
+        bb.utils.mkdirhier(src)
+        prepdir(dest)
+        bb.utils.rename(src, dest)
+
+    return True
+
+python sstate_hardcode_path_unpack () {
+    # Fixup hardcoded paths
+    #
+    # Note: The logic below must match the reverse logic in
+    # sstate_hardcode_path(d)
+    import subprocess
+
+    sstateinst = d.getVar('SSTATE_INSTDIR')
+    sstatefixmedir = d.getVar('SSTATE_FIXMEDIR')
+    fixmefn = sstateinst + "fixmepath"
+    if os.path.isfile(fixmefn):
+        staging_target = d.getVar('RECIPE_SYSROOT')
+        staging_host = d.getVar('RECIPE_SYSROOT_NATIVE')
+
+        if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross-canadian', d):
+            sstate_sed_cmd = "sed -i -e 's:FIXMESTAGINGDIRHOST:%s:g'" % (staging_host)
+        elif bb.data.inherits_class('cross', d) or bb.data.inherits_class('crosssdk', d):
+            sstate_sed_cmd = "sed -i -e 's:FIXMESTAGINGDIRTARGET:%s:g; s:FIXMESTAGINGDIRHOST:%s:g'" % (staging_target, staging_host)
+        else:
+            sstate_sed_cmd = "sed -i -e 's:FIXMESTAGINGDIRTARGET:%s:g'" % (staging_target)
+
+        extra_staging_fixmes = d.getVar('EXTRA_STAGING_FIXMES') or ''
+        for fixmevar in extra_staging_fixmes.split():
+            fixme_path = d.getVar(fixmevar)
+            sstate_sed_cmd += " -e 's:FIXME_%s:%s:g'" % (fixmevar, fixme_path)
+
+        # Add sstateinst to each filename in fixmepath, use xargs to efficiently call sed
+        sstate_hardcode_cmd = "sed -e 's:^:%s:g' %s | xargs %s" % (sstateinst, fixmefn, sstate_sed_cmd)
+
+        # Defer do_populate_sysroot relocation command
+        if sstatefixmedir:
+            bb.utils.mkdirhier(sstatefixmedir)
+            with open(sstatefixmedir + "/fixmepath.cmd", "w") as f:
+                sstate_hardcode_cmd = sstate_hardcode_cmd.replace(fixmefn, sstatefixmedir + "/fixmepath")
+                sstate_hardcode_cmd = sstate_hardcode_cmd.replace(sstateinst, "FIXMEFINALSSTATEINST")
+                sstate_hardcode_cmd = sstate_hardcode_cmd.replace(staging_host, "FIXMEFINALSSTATEHOST")
+                sstate_hardcode_cmd = sstate_hardcode_cmd.replace(staging_target, "FIXMEFINALSSTATETARGET")
+                f.write(sstate_hardcode_cmd)
+            bb.utils.copyfile(fixmefn, sstatefixmedir + "/fixmepath")
+            return
+
+        bb.note("Replacing fixme paths in sstate package: %s" % (sstate_hardcode_cmd))
+        subprocess.check_call(sstate_hardcode_cmd, shell=True)
+
+        # Need to remove this or we'd copy it into the target directory and may
+        # conflict with another writer
+        os.remove(fixmefn)
+}
+
+def sstate_clean_cachefile(ss, d):
+    import oe.path
+
+    if d.getVarFlag('do_%s' % ss['task'], 'task'):
+        d.setVar("SSTATE_PATH_CURRTASK", ss['task'])
+        sstatepkgfile = d.getVar('SSTATE_PATHSPEC')
+        bb.note("Removing %s" % sstatepkgfile)
+        oe.path.remove(sstatepkgfile)
+
+def sstate_clean_cachefiles(d):
+    for task in (d.getVar('SSTATETASKS') or "").split():
+        ld = d.createCopy()
+        ss = sstate_state_fromvars(ld, task)
+        sstate_clean_cachefile(ss, ld)
+
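+# Remove every file and directory recorded in an sstate manifest, run any
+# accompanying .postrm script, then delete the manifest itself.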
+def sstate_clean_manifest(manifest, d, canrace=False, prefix=None):
+    import oe.path
+
+    mfile = open(manifest)
+    entries = mfile.readlines()
+    mfile.close()
+
+    for entry in entries:
+        entry = entry.strip()
+        if prefix and not entry.startswith("/"):
+            entry = prefix + "/" + entry
+        bb.debug(2, "Removing manifest: %s" % entry)
+        # We can race against another package populating directories as we're removing them
+        # so we ignore errors here.
+        try:
+            if entry.endswith("/"):
+                if os.path.islink(entry[:-1]):
+                    os.remove(entry[:-1])
+                elif os.path.exists(entry) and len(os.listdir(entry)) == 0 and not canrace:
+                    # Removing directories whilst builds are in progress exposes a race. Only
+                    # do it in contexts where it is safe to do so.
+                    os.rmdir(entry[:-1])
+            else:
+                os.remove(entry)
+        except OSError:
+            pass
+
+    postrm = manifest + ".postrm"
+    if os.path.exists(manifest + ".postrm"):
+        import subprocess
+        os.chmod(postrm, 0o755)
+        subprocess.check_call(postrm, shell=True)
+        oe.path.remove(postrm)
+
+    oe.path.remove(manifest)
+
+def sstate_clean(ss, d):
+    import oe.path
+    import glob
+
+    d2 = d.createCopy()
+    stamp_clean = d.getVar("STAMPCLEAN")
+    extrainf = d.getVarFlag("do_" + ss['task'], 'stamp-extra-info')
+    if extrainf:
+        d2.setVar("SSTATE_MANMACH", extrainf)
+        wildcard_stfile = "%s.do_%s*.%s" % (stamp_clean, ss['task'], extrainf)
+    else:
+        wildcard_stfile = "%s.do_%s*" % (stamp_clean, ss['task'])
+
+    manifest = d2.expand("${SSTATE_MANFILEPREFIX}.%s" % ss['task'])
+
+    if os.path.exists(manifest):
+        locks = []
+        for lock in ss['lockfiles-shared']:
+            locks.append(bb.utils.lockfile(lock))
+        for lock in ss['lockfiles']:
+            locks.append(bb.utils.lockfile(lock))
+
+        sstate_clean_manifest(manifest, d, canrace=True)
+
+        for lock in locks:
+            bb.utils.unlockfile(lock)
+
+    # Remove the current and previous stamps, but keep the sigdata.
+    #
+    # The glob() matches do_task* which may match multiple tasks, for
+    # example: do_package and do_package_write_ipk, so we need to
+    # exactly match *.do_task.* and *.do_task_setscene.*
+    rm_stamp = '.do_%s.' % ss['task']
+    rm_setscene = '.do_%s_setscene.' % ss['task']
+    # For BB_SIGNATURE_HANDLER = "noop"
+    rm_nohash = ".do_%s" % ss['task']
+    for stfile in glob.glob(wildcard_stfile):
+        # Keep the sigdata
+        if ".sigdata." in stfile or ".sigbasedata." in stfile:
+            continue
+        # Preserve taint files in the stamps directory
+        if stfile.endswith('.taint'):
+            continue
+        if rm_stamp in stfile or rm_setscene in stfile or \
+                stfile.endswith(rm_nohash):
+            oe.path.remove(stfile)
+
+sstate_clean[vardepsexclude] = "SSTATE_MANFILEPREFIX"
+
+CLEANFUNCS += "sstate_cleanall"
+
+python sstate_cleanall() {
+    bb.note("Removing shared state for package %s" % d.getVar('PN'))
+
+    manifest_dir = d.getVar('SSTATE_MANIFESTS')
+    if not os.path.exists(manifest_dir):
+        return
+
+    tasks = d.getVar('SSTATETASKS').split()
+    for name in tasks:
+        ld = d.createCopy()
+        shared_state = sstate_state_fromvars(ld, name)
+        sstate_clean(shared_state, ld)
+}
+
+python sstate_hardcode_path () {
+    import subprocess, platform
+
+    # Need to remove hardcoded paths and fix these when we install the
+    # staging packages.
+    #
+    # Note: the logic in this function needs to match the reverse logic
+    # in sstate_installpkg(ss, d)
+
+    staging_target = d.getVar('RECIPE_SYSROOT')
+    staging_host = d.getVar('RECIPE_SYSROOT_NATIVE')
+    sstate_builddir = d.getVar('SSTATE_BUILDDIR')
+
+    sstate_sed_cmd = "sed -i -e 's:%s:FIXMESTAGINGDIRHOST:g'" % staging_host
+    if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross-canadian', d):
+        sstate_grep_cmd = "grep -l -e '%s'" % (staging_host)
+    elif bb.data.inherits_class('cross', d) or bb.data.inherits_class('crosssdk', d):
+        sstate_grep_cmd = "grep -l -e '%s' -e '%s'" % (staging_target, staging_host)
+        sstate_sed_cmd += " -e 's:%s:FIXMESTAGINGDIRTARGET:g'" % staging_target
+    else:
+        sstate_grep_cmd = "grep -l -e '%s' -e '%s'" % (staging_target, staging_host)
+        sstate_sed_cmd += " -e 's:%s:FIXMESTAGINGDIRTARGET:g'" % staging_target
+
+    extra_staging_fixmes = d.getVar('EXTRA_STAGING_FIXMES') or ''
+    for fixmevar in extra_staging_fixmes.split():
+        fixme_path = d.getVar(fixmevar)
+        sstate_sed_cmd += " -e 's:%s:FIXME_%s:g'" % (fixme_path, fixmevar)
+        sstate_grep_cmd += " -e '%s'" % (fixme_path)
+
+    fixmefn =  sstate_builddir + "fixmepath"
+
+    sstate_scan_cmd = d.getVar('SSTATE_SCAN_CMD')
+    sstate_filelist_cmd = "tee %s" % (fixmefn)
+
+    # fixmepath file needs relative paths, drop sstate_builddir prefix
+    sstate_filelist_relative_cmd = "sed -i -e 's:^%s::g' %s" % (sstate_builddir, fixmefn)
+
+    xargs_no_empty_run_cmd = '--no-run-if-empty'
+    if platform.system() == 'Darwin':
+        xargs_no_empty_run_cmd = ''
+
+    # Limit the fixpaths and sed operations based on the initial grep search
+    # This has the side effect of making sure the vfs cache is hot
+    sstate_hardcode_cmd = "%s | xargs %s | %s | xargs %s %s" % (sstate_scan_cmd, sstate_grep_cmd, sstate_filelist_cmd, xargs_no_empty_run_cmd, sstate_sed_cmd)
+
+    bb.note("Removing hardcoded paths from sstate package: '%s'" % (sstate_hardcode_cmd))
+    subprocess.check_output(sstate_hardcode_cmd, shell=True, cwd=sstate_builddir)
+
+    # If the fixmefn is empty, remove it.
+    if os.stat(fixmefn).st_size == 0:
+        os.remove(fixmefn)
+    else:
+        bb.note("Replacing absolute paths in fixmepath file: '%s'" % (sstate_filelist_relative_cmd))
+        subprocess.check_output(sstate_filelist_relative_cmd, shell=True)
+}
+
+def sstate_package(ss, d):
+    import oe.path
+    import time
+
+    tmpdir = d.getVar('TMPDIR')
+
+    fixtime = False
+    if ss['task'] == "package":
+        fixtime = True
+
+    def fixtimestamp(root, path):
+        f = os.path.join(root, path)
+        if os.lstat(f).st_mtime > sde:
+            os.utime(f, (sde, sde), follow_symlinks=False)
+
+    sstatebuild = d.expand("${WORKDIR}/sstate-build-%s/" % ss['task'])
+    sde = int(d.getVar("SOURCE_DATE_EPOCH") or time.time())
+    d.setVar("SSTATE_CURRTASK", ss['task'])
+    bb.utils.remove(sstatebuild, recurse=True)
+    bb.utils.mkdirhier(sstatebuild)
+    for state in ss['dirs']:
+        if not os.path.exists(state[1]):
+            continue
+        srcbase = state[0].rstrip("/").rsplit('/', 1)[0]
+        # Find absolute symlinks and raise an error for them. We could attempt to
+        # relocate them, but it's not clear what the symlink should be relative to in
+        # this context. We could add that markup to sstate tasks, but there aren't many
+        # of these so it is better to just avoid them entirely.
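+        # For example (hypothetical path), a link pointing at an absolute location
+        # under TMPDIR such as ".../tmp/work/.../image/usr/lib/libfoo.so" would
+        # error here and should be replaced with an equivalent relative link.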
+        for walkroot, dirs, files in os.walk(state[1]):
+            for file in files + dirs:
+                if fixtime:
+                    fixtimestamp(walkroot, file)
+                srcpath = os.path.join(walkroot, file)
+                if not os.path.islink(srcpath):
+                    continue
+                link = os.readlink(srcpath)
+                if not os.path.isabs(link):
+                    continue
+                if not link.startswith(tmpdir):
+                    continue
+                bb.error("sstate found an absolute path symlink %s pointing at %s. Please replace this with a relative link." % (srcpath, link))
+        bb.debug(2, "Preparing tree %s for packaging at %s" % (state[1], sstatebuild + state[0]))
+        bb.utils.rename(state[1], sstatebuild + state[0])
+
+    workdir = d.getVar('WORKDIR')
+    sharedworkdir = os.path.join(d.getVar('TMPDIR'), "work-shared")
+    for plain in ss['plaindirs']:
+        pdir = plain.replace(workdir, sstatebuild)
+        if sharedworkdir in plain:
+            pdir = plain.replace(sharedworkdir, sstatebuild)
+        bb.utils.mkdirhier(plain)
+        bb.utils.mkdirhier(pdir)
+        bb.utils.rename(plain, pdir)
+        if fixtime:
+            fixtimestamp(pdir, "")
+            for walkroot, dirs, files in os.walk(pdir):
+                for file in files + dirs:
+                    fixtimestamp(walkroot, file)
+
+    d.setVar('SSTATE_BUILDDIR', sstatebuild)
+    d.setVar('SSTATE_INSTDIR', sstatebuild)
+
+    if d.getVar('SSTATE_SKIP_CREATION') == '1':
+        return
+
+    sstate_create_package = ['sstate_report_unihash', 'sstate_create_package']
+    if d.getVar('SSTATE_SIG_KEY'):
+        sstate_create_package.append('sstate_sign_package')
+
+    for f in (d.getVar('SSTATECREATEFUNCS') or '').split() + \
+             sstate_create_package + \
+             (d.getVar('SSTATEPOSTCREATEFUNCS') or '').split():
+        # All hooks should run in SSTATE_BUILDDIR.
+        bb.build.exec_func(f, d, (sstatebuild,))
+
+    # SSTATE_PKG may have been changed by sstate_report_unihash
+    siginfo = d.getVar('SSTATE_PKG') + ".siginfo"
+    if not os.path.exists(siginfo):
+        bb.siggen.dump_this_task(siginfo, d)
+    else:
+        try:
+            os.utime(siginfo, None)
+        except PermissionError:
+            pass
+        except OSError as e:
+            # Handle read-only file systems gracefully
+            import errno
+            if e.errno != errno.EROFS:
+                raise e
+
+    return
+
+sstate_package[vardepsexclude] += "SSTATE_SIG_KEY"
+
+def pstaging_fetch(sstatefetch, d):
+    import bb.fetch2
+
+    # Only try and fetch if the user has configured a mirror
+    mirrors = d.getVar('SSTATE_MIRRORS')
+    if not mirrors:
+        return
+
+    # Copy the data object and override DL_DIR and SRC_URI
+    localdata = bb.data.createCopy(d)
+
+    dldir = localdata.expand("${SSTATE_DIR}")
+    bb.utils.mkdirhier(dldir)
+
+    localdata.delVar('MIRRORS')
+    localdata.setVar('FILESPATH', dldir)
+    localdata.setVar('DL_DIR', dldir)
+    localdata.setVar('PREMIRRORS', mirrors)
+    localdata.setVar('SRCPV', d.getVar('SRCPV'))
+
+    # if BB_NO_NETWORK is set but we also have SSTATE_MIRROR_ALLOW_NETWORK,
+    # we'll want to allow network access for the current set of fetches.
+    if bb.utils.to_boolean(localdata.getVar('BB_NO_NETWORK')) and \
+            bb.utils.to_boolean(localdata.getVar('SSTATE_MIRROR_ALLOW_NETWORK')):
+        localdata.delVar('BB_NO_NETWORK')
+
+    # Try a fetch from the sstate mirror, if it fails just return and
+    # we will build the package
+    uris = ['file://{0};downloadfilename={0}'.format(sstatefetch),
+            'file://{0}.siginfo;downloadfilename={0}.siginfo'.format(sstatefetch)]
+    if bb.utils.to_boolean(d.getVar("SSTATE_VERIFY_SIG"), False):
+        uris += ['file://{0}.sig;downloadfilename={0}.sig'.format(sstatefetch)]
+
+    for srcuri in uris:
+        localdata.setVar('SRC_URI', srcuri)
+        try:
+            fetcher = bb.fetch2.Fetch([srcuri], localdata, cache=False)
+            fetcher.checkstatus()
+            fetcher.download()
+
+        except bb.fetch2.BBFetchException:
+            pass
+
+pstaging_fetch[vardepsexclude] += "SRCPV"
+
+
+def sstate_setscene(d):
+    shared_state = sstate_state_fromvars(d)
+    accelerate = sstate_installpkg(shared_state, d)
+    if not accelerate:
+        msg = "No sstate archive obtainable, will run full task instead."
+        bb.warn(msg)
+        raise bb.BBHandledException(msg)
+
+python sstate_task_prefunc () {
+    shared_state = sstate_state_fromvars(d)
+    sstate_clean(shared_state, d)
+}
+sstate_task_prefunc[dirs] = "${WORKDIR}"
+
+python sstate_task_postfunc () {
+    shared_state = sstate_state_fromvars(d)
+
+    for intercept in shared_state['interceptfuncs']:
+        bb.build.exec_func(intercept, d, (d.getVar("WORKDIR"),))
+
+    omask = os.umask(0o002)
+    if omask != 0o002:
+       bb.note("Using umask 0o002 (not %0o) for sstate packaging" % omask)
+    sstate_package(shared_state, d)
+    os.umask(omask)
+
+    sstateinst = d.getVar("SSTATE_INSTDIR")
+    d.setVar('SSTATE_FIXMEDIR', shared_state['fixmedir'])
+
+    sstate_installpkgdir(shared_state, d)
+
+    bb.utils.remove(d.getVar("SSTATE_BUILDDIR"), recurse=True)
+}
+sstate_task_postfunc[dirs] = "${WORKDIR}"
+
+
+#
+# Shell function to generate a sstate package from a directory
+# set as SSTATE_BUILDDIR. Will be run from within SSTATE_BUILDDIR.
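+#
+# The package is a zstd (or pzstd) compressed tar written to a temporary file
+# and then hardlinked into place, so two builders racing on the same object
+# cannot leave a half-written ${SSTATE_PKG} behind.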
+#
+sstate_create_package () {
+	# Exit early if it already exists
+	if [ -e ${SSTATE_PKG} ]; then
+		touch ${SSTATE_PKG} 2>/dev/null || true
+		return
+	fi
+
+	mkdir --mode=0775 -p `dirname ${SSTATE_PKG}`
+	TFILE=`mktemp ${SSTATE_PKG}.XXXXXXXX`
+
+	OPT="-cS"
+	ZSTD="zstd -${SSTATE_ZSTD_CLEVEL} -T${ZSTD_THREADS}"
+	# Use pzstd if available
+	if [ -x "$(command -v pzstd)" ]; then
+		ZSTD="pzstd -${SSTATE_ZSTD_CLEVEL} -p ${ZSTD_THREADS}"
+	fi
+
+	# Need to handle empty directories
+	if [ "$(ls -A)" ]; then
+		set +e
+		tar -I "$ZSTD" $OPT -f $TFILE *
+		ret=$?
+		if [ $ret -ne 0 ] && [ $ret -ne 1 ]; then
+			exit 1
+		fi
+		set -e
+	else
+		tar -I "$ZSTD" $OPT --file=$TFILE --files-from=/dev/null
+	fi
+	chmod 0664 $TFILE
+	# Skip if it was already created by some other process
+	if [ -h ${SSTATE_PKG} ] && [ ! -e ${SSTATE_PKG} ]; then
+		# There is a symbolic link, but it links to nothing.
+		# Forcefully replace it with the new file.
+		ln -f $TFILE ${SSTATE_PKG} || true
+	elif [ ! -e ${SSTATE_PKG} ]; then
+		# Move into place using ln to attempt an atomic op.
+		# Abort if it already exists
+		ln $TFILE ${SSTATE_PKG} || true
+	else
+		touch ${SSTATE_PKG} 2>/dev/null || true
+	fi
+	rm $TFILE
+}
+
+python sstate_sign_package () {
+    from oe.gpg_sign import get_signer
+
+
+    signer = get_signer(d, 'local')
+    sstate_pkg = d.getVar('SSTATE_PKG')
+    if os.path.exists(sstate_pkg + '.sig'):
+        os.unlink(sstate_pkg + '.sig')
+    signer.detach_sign(sstate_pkg, d.getVar('SSTATE_SIG_KEY', False), None,
+                       d.getVar('SSTATE_SIG_PASSPHRASE'), armor=False)
+}
+
+python sstate_report_unihash() {
+    report_unihash = getattr(bb.parse.siggen, 'report_unihash', None)
+
+    if report_unihash:
+        ss = sstate_state_fromvars(d)
+        report_unihash(os.getcwd(), ss['task'], d)
+}
+
+#
+# Shell function to decompress and prepare a package for installation
+# Will be run from within SSTATE_INSTDIR.
+#
+sstate_unpack_package () {
+	ZSTD="zstd -T${ZSTD_THREADS}"
+	# Use pzstd if available
+	if [ -x "$(command -v pzstd)" ]; then
+		ZSTD="pzstd -p ${ZSTD_THREADS}"
+	fi
+
+	tar -I "$ZSTD" -xvpf ${SSTATE_PKG}
+	# update .siginfo atime on local/NFS mirror if it is a symbolic link
+	[ ! -h ${SSTATE_PKG}.siginfo ] || [ ! -e ${SSTATE_PKG}.siginfo ] || touch -a ${SSTATE_PKG}.siginfo 2>/dev/null || true
+	# update each symbolic link instead of any referenced file
+	touch --no-dereference ${SSTATE_PKG} 2>/dev/null || true
+	[ ! -e ${SSTATE_PKG}.sig ] || touch --no-dereference ${SSTATE_PKG}.sig 2>/dev/null || true
+	[ ! -e ${SSTATE_PKG}.siginfo ] || touch --no-dereference ${SSTATE_PKG}.siginfo 2>/dev/null || true
+}
+
+BB_HASHCHECK_FUNCTION = "sstate_checkhashes"
+
+def sstate_checkhashes(sq_data, d, siginfo=False, currentcount=0, summary=True, **kwargs):
+    found = set()
+    missed = set()
+
+    def gethash(task):
+        return sq_data['unihash'][task]
+
+    def getpathcomponents(task, d):
+        # Magic data from BB_HASHFILENAME
+        splithashfn = sq_data['hashfn'][task].split(" ")
+        spec = splithashfn[1]
+        if splithashfn[0] == "True":
+            extrapath = d.getVar("NATIVELSBSTRING") + "/"
+        else:
+            extrapath = ""
+
+        tname = bb.runqueue.taskname_from_tid(task)[3:]
+
+        if tname in ["fetch", "unpack", "patch", "populate_lic", "preconfigure"] and splithashfn[2]:
+            spec = splithashfn[2]
+            extrapath = ""
+
+        return spec, extrapath, tname
+
+    def getsstatefile(tid, siginfo, d):
+        spec, extrapath, tname = getpathcomponents(tid, d)
+        return extrapath + generate_sstatefn(spec, gethash(tid), tname, siginfo, d)
+
+    for tid in sq_data['hash']:
+
+        sstatefile = d.expand("${SSTATE_DIR}/" + getsstatefile(tid, siginfo, d))
+
+        if os.path.exists(sstatefile):
+            found.add(tid)
+            bb.debug(2, "SState: Found valid sstate file %s" % sstatefile)
+        else:
+            missed.add(tid)
+            bb.debug(2, "SState: Looked for but didn't find file %s" % sstatefile)
+
+    foundLocal = len(found)
+    mirrors = d.getVar("SSTATE_MIRRORS")
+    if mirrors:
+        # Copy the data object and override DL_DIR and SRC_URI
+        localdata = bb.data.createCopy(d)
+
+        dldir = localdata.expand("${SSTATE_DIR}")
+        localdata.delVar('MIRRORS')
+        localdata.setVar('FILESPATH', dldir)
+        localdata.setVar('DL_DIR', dldir)
+        localdata.setVar('PREMIRRORS', mirrors)
+
+        bb.debug(2, "SState using premirror of: %s" % mirrors)
+
+        # if BB_NO_NETWORK is set but we also have SSTATE_MIRROR_ALLOW_NETWORK,
+        # we'll want to allow network access for the current set of fetches.
+        if bb.utils.to_boolean(localdata.getVar('BB_NO_NETWORK')) and \
+                bb.utils.to_boolean(localdata.getVar('SSTATE_MIRROR_ALLOW_NETWORK')):
+            localdata.delVar('BB_NO_NETWORK')
+
+        from bb.fetch2 import FetchConnectionCache
+        def checkstatus_init():
+            while not connection_cache_pool.full():
+                connection_cache_pool.put(FetchConnectionCache())
+
+        def checkstatus_end():
+            while not connection_cache_pool.empty():
+                connection_cache = connection_cache_pool.get()
+                connection_cache.close_connections()
+
+        def checkstatus(arg):
+            (tid, sstatefile) = arg
+
+            connection_cache = connection_cache_pool.get()
+            localdata2 = bb.data.createCopy(localdata)
+            srcuri = "file://" + sstatefile
+            localdata2.setVar('SRC_URI', srcuri)
+            bb.debug(2, "SState: Attempting to fetch %s" % srcuri)
+
+            import traceback
+
+            try:
+                fetcher = bb.fetch2.Fetch(srcuri.split(), localdata2,
+                            connection_cache=connection_cache)
+                fetcher.checkstatus()
+                bb.debug(2, "SState: Successful fetch test for %s" % srcuri)
+                found.add(tid)
+                missed.remove(tid)
+            except bb.fetch2.FetchError as e:
+                bb.debug(2, "SState: Unsuccessful fetch test for %s (%s)\n%s" % (srcuri, repr(e), traceback.format_exc()))
+            except Exception as e:
+                bb.error("SState: cannot test %s: %s\n%s" % (srcuri, repr(e), traceback.format_exc()))
+
+            connection_cache_pool.put(connection_cache)
+
+            if progress:
+                bb.event.fire(bb.event.ProcessProgress(msg, len(tasklist) - thread_worker.tasks.qsize()), d)
+
+        tasklist = []
+        for tid in missed:
+            sstatefile = d.expand(getsstatefile(tid, siginfo, d))
+            tasklist.append((tid, sstatefile))
+
+        if tasklist:
+            nproc = min(int(d.getVar("BB_NUMBER_THREADS")), len(tasklist))
+
+            progress = len(tasklist) >= 100
+            if progress:
+                msg = "Checking sstate mirror object availability"
+                bb.event.fire(bb.event.ProcessStarted(msg, len(tasklist)), d)
+
+            # Have to setup the fetcher environment here rather than in each thread as it would race
+            fetcherenv = bb.fetch2.get_fetcher_environment(d)
+            with bb.utils.environment(**fetcherenv):
+                bb.event.enable_threadlock()
+                import concurrent.futures
+                from queue import Queue
+                connection_cache_pool = Queue(nproc)
+                checkstatus_init()
+                with concurrent.futures.ThreadPoolExecutor(max_workers=nproc) as executor:
+                    executor.map(checkstatus, tasklist.copy())
+                checkstatus_end()
+                bb.event.disable_threadlock()
+
+            if progress:
+                bb.event.fire(bb.event.ProcessFinished(msg), d)
+
+    inheritlist = d.getVar("INHERIT")
+    if "toaster" in inheritlist:
+        evdata = {'missed': [], 'found': []};
+        for tid in missed:
+            sstatefile = d.expand(getsstatefile(tid, False, d))
+            evdata['missed'].append((bb.runqueue.fn_from_tid(tid), bb.runqueue.taskname_from_tid(tid), gethash(tid), sstatefile ) )
+        for tid in found:
+            sstatefile = d.expand(getsstatefile(tid, False, d))
+            evdata['found'].append((bb.runqueue.fn_from_tid(tid), bb.runqueue.taskname_from_tid(tid), gethash(tid), sstatefile ) )
+        bb.event.fire(bb.event.MetadataEvent("MissedSstate", evdata), d)
+
+    if summary:
+        # Print some summary statistics about the current task completion and how much sstate
+        # reuse there was. Avoid divide by zero errors.
+        total = len(sq_data['hash'])
+        complete = 0
+        if currentcount:
+            complete = (len(found) + currentcount) / (total + currentcount) * 100
+        match = 0
+        if total:
+            match = len(found) / total * 100
+        bb.plain("Sstate summary: Wanted %d Local %d Mirrors %d Missed %d Current %d (%d%% match, %d%% complete)" %
+            (total, foundLocal, len(found)-foundLocal, len(missed), currentcount, match, complete))
+
+    if hasattr(bb.parse.siggen, "checkhashes"):
+        bb.parse.siggen.checkhashes(sq_data, missed, found, d)
+
+    return found
+setscene_depvalid[vardepsexclude] = "SSTATE_EXCLUDEDEPS_SYSROOT"
+
+BB_SETSCENE_DEPVALID = "setscene_depvalid"
+
+def setscene_depvalid(task, taskdependees, notneeded, d, log=None):
+    # taskdependees is a dict of tasks which depend on task, each being a 3 item list of [PN, TASKNAME, FILENAME]
+    # task is included in taskdependees too
+    # Return - False - We need this dependency
+    #        - True - We can skip this dependency
+    import re
+
+    def logit(msg, log):
+        if log is not None:
+            log.append(msg)
+        else:
+            bb.debug(2, msg)
+
+    logit("Considering setscene task: %s" % (str(taskdependees[task])), log)
+
+    directtasks = ["do_populate_lic", "do_deploy_source_date_epoch", "do_shared_workdir", "do_stash_locale", "do_gcc_stash_builddir", "do_create_spdx"]
+
+    def isNativeCross(x):
+        return x.endswith("-native") or "-cross-" in x or "-crosssdk" in x or x.endswith("-cross")
+
+    # We only need to trigger deploy_source_date_epoch through direct dependencies
+    if taskdependees[task][1] in directtasks:
+        return True
+
+    # We only need to trigger packagedata through direct dependencies
+    # but need to preserve packagedata on packagedata links
+    if taskdependees[task][1] == "do_packagedata":
+        for dep in taskdependees:
+            if taskdependees[dep][1] == "do_packagedata":
+                return False
+        return True
+
+    for dep in taskdependees:
+        logit("  considering dependency: %s" % (str(taskdependees[dep])), log)
+        if task == dep:
+            continue
+        if dep in notneeded:
+            continue
+        # do_package_write_* and do_package doesn't need do_package
+        if taskdependees[task][1] == "do_package" and taskdependees[dep][1] in ['do_package', 'do_package_write_deb', 'do_package_write_ipk', 'do_package_write_rpm', 'do_packagedata', 'do_package_qa']:
+            continue
+        # do_package_write_* need do_populate_sysroot as they're mainly postinstall dependencies
+        if taskdependees[task][1] == "do_populate_sysroot" and taskdependees[dep][1] in ['do_package_write_deb', 'do_package_write_ipk', 'do_package_write_rpm']:
+            return False
+        # do_package/packagedata/package_qa/deploy don't need do_populate_sysroot
+        if taskdependees[task][1] == "do_populate_sysroot" and taskdependees[dep][1] in ['do_package', 'do_packagedata', 'do_package_qa', 'do_deploy']:
+            continue
+        # Native/Cross packages don't exist and are noexec anyway
+        if isNativeCross(taskdependees[dep][0]) and taskdependees[dep][1] in ['do_package_write_deb', 'do_package_write_ipk', 'do_package_write_rpm', 'do_packagedata', 'do_package', 'do_package_qa']:
+            continue
+
+        # This is due to the [depends] in useradd.bbclass complicating matters
+        # The logic *is* reversed here due to the way hard setscene dependencies are injected
+        if (taskdependees[task][1] == 'do_package' or taskdependees[task][1] == 'do_populate_sysroot') and taskdependees[dep][0].endswith(('shadow-native', 'shadow-sysroot', 'base-passwd', 'pseudo-native')) and taskdependees[dep][1] == 'do_populate_sysroot':
+            continue
+
+        # Consider sysroot depending on sysroot tasks
+        if taskdependees[task][1] == 'do_populate_sysroot' and taskdependees[dep][1] == 'do_populate_sysroot':
+            # Allow excluding certain recursive dependencies. If a recipe needs it should add a
+            # specific dependency itself, rather than relying on one of its dependees to pull
+            # them in.
+            # See also http://lists.openembedded.org/pipermail/openembedded-core/2018-January/146324.html
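+            # SSTATE_EXCLUDEDEPS_SYSROOT entries take the form
+            # "dependee-regex->dependency-regex"; e.g. a hypothetical
+            # ".*->rarely-needed-native" would stop rarely-needed-native's
+            # sysroot being pulled in transitively by every recipe.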
+            not_needed = False
+            excludedeps = d.getVar('_SSTATE_EXCLUDEDEPS_SYSROOT')
+            if excludedeps is None:
+                # Cache the regular expressions for speed
+                excludedeps = []
+                for excl in (d.getVar('SSTATE_EXCLUDEDEPS_SYSROOT') or "").split():
+                    excludedeps.append((re.compile(excl.split('->', 1)[0]), re.compile(excl.split('->', 1)[1])))
+                d.setVar('_SSTATE_EXCLUDEDEPS_SYSROOT', excludedeps)
+            for excl in excludedeps:
+                if excl[0].match(taskdependees[dep][0]):
+                    if excl[1].match(taskdependees[task][0]):
+                        not_needed = True
+                        break
+            if not_needed:
+                continue
+            # For meta-extsdk-toolchain we want all sysroot dependencies
+            if taskdependees[dep][0] == 'meta-extsdk-toolchain':
+                return False
+            # Native/Cross populate_sysroot need their dependencies
+            if isNativeCross(taskdependees[task][0]) and isNativeCross(taskdependees[dep][0]):
+                return False
+            # Target populate_sysroot depended on by cross tools need to be installed
+            if isNativeCross(taskdependees[dep][0]):
+                return False
+            # Native/cross tools depended upon by target sysroot are not needed
+            # Add an exception for shadow-native as required by useradd.bbclass
+            if isNativeCross(taskdependees[task][0]) and taskdependees[task][0] != 'shadow-native':
+                continue
+            # Target populate_sysroot need their dependencies
+            return False
+
+        if taskdependees[dep][1] in directtasks:
+            continue
+
+        # Safe fallthrough default
+        logit(" Default setscene dependency fall through due to dependency: %s" % (str(taskdependees[dep])), log)
+        return False
+    return True
+
+addhandler sstate_eventhandler
+sstate_eventhandler[eventmask] = "bb.build.TaskSucceeded"
+python sstate_eventhandler() {
+    d = e.data
+    writtensstate = d.getVar('SSTATE_CURRTASK')
+    if not writtensstate:
+        taskname = d.getVar("BB_RUNTASK")[3:]
+        spec = d.getVar('SSTATE_PKGSPEC')
+        swspec = d.getVar('SSTATE_SWSPEC')
+        if taskname in ["fetch", "unpack", "patch", "populate_lic", "preconfigure"] and swspec:
+            d.setVar("SSTATE_PKGSPEC", "${SSTATE_SWSPEC}")
+            d.setVar("SSTATE_EXTRAPATH", "")
+        d.setVar("SSTATE_CURRTASK", taskname)
+        siginfo = d.getVar('SSTATE_PKG') + ".siginfo"
+        if not os.path.exists(siginfo):
+            bb.siggen.dump_this_task(siginfo, d)
+        else:
+            try:
+                os.utime(siginfo, None)
+            except PermissionError:
+                pass
+            except OSError as e:
+                # Handle read-only file systems gracefully
+                import errno
+                if e.errno != errno.EROFS:
+                    raise e
+
+}
+
+SSTATE_PRUNE_OBSOLETEWORKDIR ?= "1"
+
+#
+# Event handler which removes manifests and stamp files for recipes which are no
+# longer 'reachable' in a build where they once were. 'Reachable' refers to
+# whether a recipe is parsed, so recipes in a layer which was removed would no
+# longer be reachable. Switching between systemd and sysvinit, where recipes
+# become skipped, would be another example.
+#
+# Also optionally removes the workdir of those tasks/recipes
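+#
+# Each line of the per-arch ${SSTATE_MANIFESTS}/index-<arch> file holds three
+# whitespace-separated fields, "<stamp> <manifest> <workdir>" (see the split()
+# below); only the newest entry for a given manifest is treated as valid.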
+#
+addhandler sstate_eventhandler_reachablestamps
+sstate_eventhandler_reachablestamps[eventmask] = "bb.event.ReachableStamps"
+python sstate_eventhandler_reachablestamps() {
+    import glob
+    d = e.data
+    stamps = e.stamps.values()
+    removeworkdir = (d.getVar("SSTATE_PRUNE_OBSOLETEWORKDIR", False) == "1")
+    preservestampfile = d.expand('${SSTATE_MANIFESTS}/preserve-stamps')
+    preservestamps = []
+    if os.path.exists(preservestampfile):
+        with open(preservestampfile, 'r') as f:
+            preservestamps = f.readlines()
+    seen = []
+
+    # The machine index contains all the stamps this machine has ever seen in this build directory.
+    # We should only remove things which this machine once accessed but no longer does.
+    machineindex = set()
+    bb.utils.mkdirhier(d.expand("${SSTATE_MANIFESTS}"))
+    mi = d.expand("${SSTATE_MANIFESTS}/index-machine-${MACHINE}")
+    if os.path.exists(mi):
+        with open(mi, "r") as f:
+            machineindex = set(line.strip() for line in f.readlines())
+
+    for a in sorted(list(set(d.getVar("SSTATE_ARCHS").split()))):
+        toremove = []
+        i = d.expand("${SSTATE_MANIFESTS}/index-" + a)
+        if not os.path.exists(i):
+            continue
+        manseen = set()
+        ignore = []
+        with open(i, "r") as f:
+            lines = f.readlines()
+            for l in reversed(lines):
+                try:
+                    (stamp, manifest, workdir) = l.split()
+                    # The index may have multiple entries for the same manifest as the code above only appends
+                    # new entries and there may be an entry with matching manifest but differing version in stamp/workdir.
+                    # The last entry in the list is the valid one, any earlier entries with matching manifests
+                    # should be ignored.
+                    if manifest in manseen:
+                        ignore.append(l)
+                        continue
+                    manseen.add(manifest)
+                    if stamp not in stamps and stamp not in preservestamps and stamp in machineindex:
+                        toremove.append(l)
+                        if stamp not in seen:
+                            bb.debug(2, "Stamp %s is not reachable, removing related manifests" % stamp)
+                            seen.append(stamp)
+                except ValueError:
+                    bb.fatal("Invalid line '%s' in sstate manifest '%s'" % (l, i))
+
+        if toremove:
+            msg = "Removing %d recipes from the %s sysroot" % (len(toremove), a)
+            bb.event.fire(bb.event.ProcessStarted(msg, len(toremove)), d)
+
+            removed = 0
+            for r in toremove:
+                (stamp, manifest, workdir) = r.split()
+                for m in glob.glob(manifest + ".*"):
+                    if m.endswith(".postrm"):
+                        continue
+                    sstate_clean_manifest(m, d)
+                bb.utils.remove(stamp + "*")
+                if removeworkdir:
+                    bb.utils.remove(workdir, recurse = True)
+                lines.remove(r)
+                removed = removed + 1
+                bb.event.fire(bb.event.ProcessProgress(msg, removed), d)
+
+            bb.event.fire(bb.event.ProcessFinished(msg), d)
+
+        with open(i, "w") as f:
+            for l in lines:
+                if l in ignore:
+                    continue
+                f.write(l)
+    machineindex |= set(stamps)
+    with open(mi, "w") as f:
+        for l in machineindex:
+            f.write(l + "\n")
+
+    if preservestamps:
+        os.remove(preservestampfile)
+}
+
+
+#
+# Bitbake can generate an event showing which setscene tasks are 'stale',
+# i.e. which ones will be rerun. These are ones where a stamp file is present but
+# it is stale (e.g. the taskhash doesn't match). With that list we can go through
+# the manifests for matching tasks and "uninstall" those manifests now. We do
+# this now rather than mid build since the distribution of files between sstate
+# objects may have changed, new tasks may run first and if those new tasks overlap
+# with the stale tasks, we'd see overlapping files messages and failures. Thankfully
+# removing these files is fast.
+#
+addhandler sstate_eventhandler_stalesstate
+sstate_eventhandler_stalesstate[eventmask] = "bb.event.StaleSetSceneTasks"
+python sstate_eventhandler_stalesstate() {
+    d = e.data
+    tasks = e.tasks
+
+    bb.utils.mkdirhier(d.expand("${SSTATE_MANIFESTS}"))
+
+    for a in list(set(d.getVar("SSTATE_ARCHS").split())):
+        toremove = []
+        i = d.expand("${SSTATE_MANIFESTS}/index-" + a)
+        if not os.path.exists(i):
+            continue
+        with open(i, "r") as f:
+            lines = f.readlines()
+            for l in lines:
+                try:
+                    (stamp, manifest, workdir) = l.split()
+                    for tid in tasks:
+                        for s in tasks[tid]:
+                            if s.startswith(stamp):
+                                taskname = bb.runqueue.taskname_from_tid(tid)[3:]
+                                manname = manifest + "." + taskname
+                                if os.path.exists(manname):
+                                    bb.debug(2, "Sstate for %s is stale, removing related manifest %s" % (tid, manname))
+                                    toremove.append((manname, tid, tasks[tid]))
+                                    break
+                except ValueError:
+                    bb.fatal("Invalid line '%s' in sstate manifest '%s'" % (l, i))
+
+        if toremove:
+            msg = "Removing %d stale sstate objects for arch %s" % (len(toremove), a)
+            bb.event.fire(bb.event.ProcessStarted(msg, len(toremove)), d)
+
+            removed = 0
+            for (manname, tid, stamps) in toremove:
+                sstate_clean_manifest(manname, d)
+                for stamp in stamps:
+                    bb.utils.remove(stamp)
+                removed = removed + 1
+                bb.event.fire(bb.event.ProcessProgress(msg, removed), d)
+
+            bb.event.fire(bb.event.ProcessFinished(msg), d)
+}
diff --git a/poky/meta/classes-global/staging.bbclass b/poky/meta/classes-global/staging.bbclass
new file mode 100644
index 0000000..5a1f43d
--- /dev/null
+++ b/poky/meta/classes-global/staging.bbclass
@@ -0,0 +1,690 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# These directories will be staged in the sysroot
+SYSROOT_DIRS = " \
+    ${includedir} \
+    ${libdir} \
+    ${base_libdir} \
+    ${nonarch_base_libdir} \
+    ${datadir} \
+    /sysroot-only \
+"
+
+# These directories are also staged in the sysroot when they contain files that
+# are usable on the build system
+SYSROOT_DIRS_NATIVE = " \
+    ${bindir} \
+    ${sbindir} \
+    ${base_bindir} \
+    ${base_sbindir} \
+    ${libexecdir} \
+    ${sysconfdir} \
+    ${localstatedir} \
+"
+SYSROOT_DIRS:append:class-native = " ${SYSROOT_DIRS_NATIVE}"
+SYSROOT_DIRS:append:class-cross = " ${SYSROOT_DIRS_NATIVE}"
+SYSROOT_DIRS:append:class-crosssdk = " ${SYSROOT_DIRS_NATIVE}"
+
+# These directories will not be staged in the sysroot
+SYSROOT_DIRS_IGNORE = " \
+    ${mandir} \
+    ${docdir} \
+    ${infodir} \
+    ${datadir}/X11/locale \
+    ${datadir}/applications \
+    ${datadir}/bash-completion \
+    ${datadir}/fonts \
+    ${datadir}/gtk-doc/html \
+    ${datadir}/installed-tests \
+    ${datadir}/locale \
+    ${datadir}/pixmaps \
+    ${datadir}/terminfo \
+    ${libdir}/${BPN}/ptest \
+"
+
+sysroot_stage_dir() {
+	src="$1"
+	dest="$2"
+	# if the src doesn't exist don't do anything
+	if [ ! -d "$src" ]; then
+		 return
+	fi
+
+	mkdir -p "$dest"
+	rdest=$(realpath --relative-to="$src" "$dest")
+	(
+		cd $src
+		find . -print0 | cpio --null -pdlu $rdest
+	)
+}
+
+sysroot_stage_dirs() {
+	from="$1"
+	to="$2"
+
+	for dir in ${SYSROOT_DIRS}; do
+		sysroot_stage_dir "$from$dir" "$to$dir"
+	done
+
+	# Remove directories we do not care about
+	for dir in ${SYSROOT_DIRS_IGNORE}; do
+		rm -rf "$to$dir"
+	done
+}
+
+sysroot_stage_all() {
+	sysroot_stage_dirs ${D} ${SYSROOT_DESTDIR}
+}
+
+python sysroot_strip () {
+    inhibit_sysroot = d.getVar('INHIBIT_SYSROOT_STRIP')
+    if inhibit_sysroot and oe.types.boolean(inhibit_sysroot):
+        return
+
+    dstdir = d.getVar('SYSROOT_DESTDIR')
+    pn = d.getVar('PN')
+    libdir = d.getVar("libdir")
+    base_libdir = d.getVar("base_libdir")
+    qa_already_stripped = 'already-stripped' in (d.getVar('INSANE_SKIP:' + pn) or "").split()
+    strip_cmd = d.getVar("STRIP")
+
+    oe.package.strip_execs(pn, dstdir, strip_cmd, libdir, base_libdir, d,
+                           qa_already_stripped=qa_already_stripped)
+}
+
+do_populate_sysroot[dirs] = "${SYSROOT_DESTDIR}"
+
+addtask populate_sysroot after do_install
+
+SYSROOT_PREPROCESS_FUNCS ?= ""
+SYSROOT_DESTDIR = "${WORKDIR}/sysroot-destdir"
+
+python do_populate_sysroot () {
+    # SYSROOT 'version' 2
+    bb.build.exec_func("sysroot_stage_all", d)
+    bb.build.exec_func("sysroot_strip", d)
+    for f in (d.getVar('SYSROOT_PREPROCESS_FUNCS') or '').split():
+        bb.build.exec_func(f, d)
+    pn = d.getVar("PN")
+    multiprov = d.getVar("BB_MULTI_PROVIDER_ALLOWED").split()
+    provdir = d.expand("${SYSROOT_DESTDIR}${base_prefix}/sysroot-providers/")
+    bb.utils.mkdirhier(provdir)
+    for p in d.getVar("PROVIDES").split():
+        if p in multiprov:
+            continue
+        p = p.replace("/", "_")
+        with open(provdir + p, "w") as f:
+            f.write(pn)
+}
+
+do_populate_sysroot[vardeps] += "${SYSROOT_PREPROCESS_FUNCS}"
+do_populate_sysroot[vardepsexclude] += "BB_MULTI_PROVIDER_ALLOWED"
+
+POPULATESYSROOTDEPS = ""
+POPULATESYSROOTDEPS:class-target = "virtual/${MLPREFIX}${HOST_PREFIX}binutils:do_populate_sysroot"
+POPULATESYSROOTDEPS:class-nativesdk = "virtual/${HOST_PREFIX}binutils-crosssdk:do_populate_sysroot"
+do_populate_sysroot[depends] += "${POPULATESYSROOTDEPS}"
+
+SSTATETASKS += "do_populate_sysroot"
+do_populate_sysroot[cleandirs] = "${SYSROOT_DESTDIR}"
+do_populate_sysroot[sstate-inputdirs] = "${SYSROOT_DESTDIR}"
+do_populate_sysroot[sstate-outputdirs] = "${COMPONENTS_DIR}/${PACKAGE_ARCH}/${PN}"
+do_populate_sysroot[sstate-fixmedir] = "${COMPONENTS_DIR}/${PACKAGE_ARCH}/${PN}"
+
+python do_populate_sysroot_setscene () {
+    sstate_setscene(d)
+}
+addtask do_populate_sysroot_setscene
+
+def staging_copyfile(c, target, dest, postinsts, seendirs):
+    import errno
+
+    destdir = os.path.dirname(dest)
+    if destdir not in seendirs:
+        bb.utils.mkdirhier(destdir)
+        seendirs.add(destdir)
+    if "/usr/bin/postinst-" in c:
+        postinsts.append(dest)
+    if os.path.islink(c):
+        linkto = os.readlink(c)
+        if os.path.lexists(dest):
+            if not os.path.islink(dest):
+                raise OSError(errno.EEXIST, "Link %s already exists as a file" % dest, dest)
+            if os.readlink(dest) == linkto:
+                return dest
+            raise OSError(errno.EEXIST, "Link %s already exists to a different location? (%s vs %s)" % (dest, os.readlink(dest), linkto), dest)
+        os.symlink(linkto, dest)
+        #bb.warn(c)
+    else:
+        try:
+            os.link(c, dest)
+        except OSError as err:
+            if err.errno == errno.EXDEV:
+                bb.utils.copyfile(c, dest)
+            else:
+                raise
+    return dest
+
+def staging_copydir(c, target, dest, seendirs):
+    if dest not in seendirs:
+        bb.utils.mkdirhier(dest)
+        seendirs.add(dest)
+
+def staging_processfixme(fixme, target, recipesysroot, recipesysrootnative, d):
+    import subprocess
+
+    if not fixme:
+        return
+    cmd = "sed -e 's:^[^/]*/:%s/:g' %s | xargs sed -i -e 's:FIXMESTAGINGDIRTARGET:%s:g; s:FIXMESTAGINGDIRHOST:%s:g'" % (target, " ".join(fixme), recipesysroot, recipesysrootnative)
+    for fixmevar in ['PSEUDO_SYSROOT', 'HOSTTOOLS_DIR', 'PKGDATA_DIR', 'PSEUDO_LOCALSTATEDIR', 'LOGFIFO']:
+        fixme_path = d.getVar(fixmevar)
+        cmd += " -e 's:FIXME_%s:%s:g'" % (fixmevar, fixme_path)
+    bb.debug(2, cmd)
+    subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
+
+
+def staging_populate_sysroot_dir(targetsysroot, nativesysroot, native, d):
+    import glob
+    import subprocess
+    import errno
+
+    fixme = []
+    postinsts = []
+    seendirs = set()
+    stagingdir = d.getVar("STAGING_DIR")
+    if native:
+        pkgarchs = ['${BUILD_ARCH}', '${BUILD_ARCH}_*']
+        targetdir = nativesysroot
+    else:
+        pkgarchs = ['${MACHINE_ARCH}']
+        pkgarchs = pkgarchs + list(reversed(d.getVar("PACKAGE_EXTRA_ARCHS").split()))
+        pkgarchs.append('allarch')
+        targetdir = targetsysroot
+
+    bb.utils.mkdirhier(targetdir)
+    for pkgarch in pkgarchs:
+        for manifest in glob.glob(d.expand("${SSTATE_MANIFESTS}/manifest-%s-*.populate_sysroot" % pkgarch)):
+            if manifest.endswith("-initial.populate_sysroot"):
+                # skip libgcc-initial due to file overlap
+                continue
+            if not native and (manifest.endswith("-native.populate_sysroot") or "nativesdk-" in manifest):
+                continue
+            if native and not (manifest.endswith("-native.populate_sysroot") or manifest.endswith("-cross.populate_sysroot") or "-cross-" in manifest):
+                continue
+            tmanifest = targetdir + "/" + os.path.basename(manifest)
+            if os.path.exists(tmanifest):
+                continue
+            try:
+                os.link(manifest, tmanifest)
+            except OSError as err:
+                if err.errno == errno.EXDEV:
+                    bb.utils.copyfile(manifest, tmanifest)
+                else:
+                    raise
+            with open(manifest, "r") as f:
+                for l in f:
+                    l = l.strip()
+                    if l.endswith("/fixmepath"):
+                        fixme.append(l)
+                        continue
+                    if l.endswith("/fixmepath.cmd"):
+                        continue
+                    dest = l.replace(stagingdir, "")
+                    dest = targetdir + "/" + "/".join(dest.split("/")[3:])
+                    if l.endswith("/"):
+                        staging_copydir(l, targetdir, dest, seendirs)
+                        continue
+                    try:
+                        staging_copyfile(l, targetdir, dest, postinsts, seendirs)
+                    except FileExistsError:
+                        continue
+
+    staging_processfixme(fixme, targetdir, targetsysroot, nativesysroot, d)
+    for p in postinsts:
+        subprocess.check_output(p, shell=True, stderr=subprocess.STDOUT)
+
+#
+# Manifests here are complicated. The main sysroot area has the unpacked sstate
+# which is unrelocated and tracked by the main sstate manifests. Each recipe
+# specific sysroot has manifests for each dependency that is installed there.
+# The task hash is used to tell whether the data needs to be reinstalled. We
+# use a symlink to point to the currently installed hash. There is also a
+# "complete" stamp file which is used to mark if installation completed. If
+# something fails (e.g. a postinst), this won't get written and we would
+# remove and reinstall the dependency. This also means partially installed
+# dependencies should get cleaned up correctly.
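+#
+# For illustration (hypothetical recipe and hash): a dependency such as "zlib"
+# appears under <recipe sysroot native>/installeddeps/ as a "zlib" symlink
+# pointing at "zlib.<taskhash>" (the per-hash manifest of installed files),
+# plus a "zlib.complete" link that is only created once installation finished.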
+#
+
+python extend_recipe_sysroot() {
+    import copy
+    import subprocess
+    import errno
+    import collections
+    import glob
+
+    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
+    mytaskname = d.getVar("BB_RUNTASK")
+    if mytaskname.endswith("_setscene"):
+        mytaskname = mytaskname.replace("_setscene", "")
+    workdir = d.getVar("WORKDIR")
+    #bb.warn(str(taskdepdata))
+    pn = d.getVar("PN")
+    stagingdir = d.getVar("STAGING_DIR")
+    sharedmanifests = d.getVar("COMPONENTS_DIR") + "/manifests"
+    recipesysroot = d.getVar("RECIPE_SYSROOT")
+    recipesysrootnative = d.getVar("RECIPE_SYSROOT_NATIVE")
+
+    # Detect bitbake -b usage
+    nodeps = d.getVar("BB_LIMITEDDEPS") or False
+    if nodeps:
+        lock = bb.utils.lockfile(recipesysroot + "/sysroot.lock")
+        staging_populate_sysroot_dir(recipesysroot, recipesysrootnative, True, d)
+        staging_populate_sysroot_dir(recipesysroot, recipesysrootnative, False, d)
+        bb.utils.unlockfile(lock)
+        return
+
+    start = None
+    configuredeps = []
+    owntaskdeps = []
+    for dep in taskdepdata:
+        data = taskdepdata[dep]
+        if data[1] == mytaskname and data[0] == pn:
+            start = dep
+        elif data[0] == pn:
+            owntaskdeps.append(data[1])
+    if start is None:
+        bb.fatal("Couldn't find ourself in BB_TASKDEPDATA?")
+
+    # We need to figure out which sysroot files we need to expose to this task.
+    # This needs to match what would get restored from sstate, which is controlled
+    # ultimately by calls from bitbake to setscene_depvalid().
+    # That function expects a setscene dependency tree. We build a dependency tree
+    # condensed to inter-sstate task dependencies, similar to that used by setscene
+    # tasks. We can then call into setscene_depvalid() and decide
+    # which dependencies we can "see" and should expose in the recipe specific sysroot.
+    setscenedeps = copy.deepcopy(taskdepdata)
+
+    start = set([start])
+
+    sstatetasks = d.getVar("SSTATETASKS").split()
+    # Add recipe specific tasks referenced by setscene_depvalid()
+    sstatetasks.append("do_stash_locale")
+    sstatetasks.append("do_deploy")
+
+    def print_dep_tree(deptree):
+        data = ""
+        for dep in deptree:
+            deps = "    " + "\n    ".join(deptree[dep][3]) + "\n"
+            data = data + "%s:\n  %s\n  %s\n%s  %s\n  %s\n" % (deptree[dep][0], deptree[dep][1], deptree[dep][2], deps, deptree[dep][4], deptree[dep][5])
+        return data
+
+    #bb.note("Full dep tree is:\n%s" % print_dep_tree(taskdepdata))
+
+    #bb.note(" start2 is %s" % str(start))
+
+    # If start is an sstate task (like do_package) we need to add in its direct dependencies
+    # else the code below won't recurse into them.
+    for dep in set(start):
+        for dep2 in setscenedeps[dep][3]:
+            start.add(dep2)
+        start.remove(dep)
+
+    #bb.note(" start3 is %s" % str(start))
+
+    # Create collapsed do_populate_sysroot -> do_populate_sysroot tree
+    for dep in taskdepdata:
+        data = setscenedeps[dep]
+        if data[1] not in sstatetasks:
+            for dep2 in setscenedeps:
+                data2 = setscenedeps[dep2]
+                if dep in data2[3]:
+                    data2[3].update(setscenedeps[dep][3])
+                    data2[3].remove(dep)
+            if dep in start:
+                start.update(setscenedeps[dep][3])
+                start.remove(dep)
+            del setscenedeps[dep]
+
+    # Remove circular references
+    for dep in setscenedeps:
+        if dep in setscenedeps[dep][3]:
+            setscenedeps[dep][3].remove(dep)
+
+    #bb.note("Computed dep tree is:\n%s" % print_dep_tree(setscenedeps))
+    #bb.note(" start is %s" % str(start))
+
+    # Direct dependencies should be present and can be depended upon
+    for dep in sorted(set(start)):
+        if setscenedeps[dep][1] == "do_populate_sysroot":
+            if dep not in configuredeps:
+                configuredeps.append(dep)
+    bb.note("Direct dependencies are %s" % str(configuredeps))
+    #bb.note(" or %s" % str(start))
+
+    msgbuf = []
+    # Call into setscene_depvalid for each sub-dependency and only copy sysroot files
+    # for ones that would be restored from sstate.
+    done = list(start)
+    next = list(start)
+    while next:
+        new = []
+        for dep in next:
+            data = setscenedeps[dep]
+            for datadep in data[3]:
+                if datadep in done:
+                    continue
+                taskdeps = {}
+                taskdeps[dep] = setscenedeps[dep][:2]
+                taskdeps[datadep] = setscenedeps[datadep][:2]
+                retval = setscene_depvalid(datadep, taskdeps, [], d, msgbuf)
+                if retval:
+                    msgbuf.append("Skipping setscene dependency %s for installation into the sysroot" % datadep)
+                    continue
+                done.append(datadep)
+                new.append(datadep)
+                if datadep not in configuredeps and setscenedeps[datadep][1] == "do_populate_sysroot":
+                    configuredeps.append(datadep)
+                    msgbuf.append("Adding dependency on %s" % setscenedeps[datadep][0])
+                else:
+                    msgbuf.append("Following dependency on %s" % setscenedeps[datadep][0])
+        next = new
+
+    # This logging is too verbose for day to day use sadly
+    #bb.debug(2, "\n".join(msgbuf))
+
+    depdir = recipesysrootnative + "/installeddeps"
+    bb.utils.mkdirhier(depdir)
+    bb.utils.mkdirhier(sharedmanifests)
+
+    lock = bb.utils.lockfile(recipesysroot + "/sysroot.lock")
+
+    fixme = {}
+    seendirs = set()
+    postinsts = []
+    multilibs = {}
+    manifests = {}
+    # All files that we're going to be installing, to find conflicts.
+    fileset = {}
+
+    invalidate_tasks = set()
+    for f in os.listdir(depdir):
+        removed = []
+        if not f.endswith(".complete"):
+            continue
+        f = depdir + "/" + f
+        if os.path.islink(f) and not os.path.exists(f):
+            bb.note("%s no longer exists, removing from sysroot" % f)
+            lnk = os.readlink(f.replace(".complete", ""))
+            sstate_clean_manifest(depdir + "/" + lnk, d, canrace=True, prefix=workdir)
+            os.unlink(f)
+            os.unlink(f.replace(".complete", ""))
+            removed.append(os.path.basename(f.replace(".complete", "")))
+
+        # If we've removed files from the sysroot above, the task that installed them may still
+        # have a stamp file present for the task. This is probably invalid right now but may become
+        # valid again if the user were to change configuration back for example. Since we've removed
+        # the files a task might need, remove the stamp file too to force it to rerun.
+        # YOCTO #14790
+        if removed:
+            for i in glob.glob(depdir + "/index.*"):
+                if i.endswith("." + mytaskname):
+                    continue
+                with open(i, "r") as f:
+                    for l in f:
+                        if l.startswith("TaskDeps:"):
+                            continue
+                        l = l.strip()
+                        if l in removed:
+                            invalidate_tasks.add(i.rsplit(".", 1)[1])
+                            break
+    for t in invalidate_tasks:
+        bb.note("Invalidating stamps for task %s" % t)
+        bb.build.clean_stamp(t, d)
+
+    installed = []
+    for dep in configuredeps:
+        c = setscenedeps[dep][0]
+        if mytaskname in ["do_sdk_depends", "do_populate_sdk_ext"] and c.endswith("-initial"):
+            bb.note("Skipping initial setscene dependency %s for installation into the sysroot" % c)
+            continue
+        installed.append(c)
+
+    # We want to remove anything which this task previously installed but is no longer a dependency
+    taskindex = depdir + "/" + "index." + mytaskname
+    if os.path.exists(taskindex):
+        potential = []
+        with open(taskindex, "r") as f:
+            for l in f:
+                l = l.strip()
+                if l not in installed:
+                    fl = depdir + "/" + l
+                    if not os.path.exists(fl):
+                        # Was likely already uninstalled
+                        continue
+                    potential.append(l)
+        # We need to ensure no other task needs this dependency. We hold the sysroot
+        # lock so we can search the indexes to check.
+        if potential:
+            for i in glob.glob(depdir + "/index.*"):
+                if i.endswith("." + mytaskname):
+                    continue
+                with open(i, "r") as f:
+                    for l in f:
+                        if l.startswith("TaskDeps:"):
+                            prevtasks = l.split()[1:]
+                            if mytaskname in prevtasks:
+                                # We're a dependency of this task so we can clear items out of the sysroot
+                                break
+                        l = l.strip()
+                        if l in potential:
+                            potential.remove(l)
+        for l in potential:
+            fl = depdir + "/" + l
+            bb.note("Task %s no longer depends on %s, removing from sysroot" % (mytaskname, l))
+            lnk = os.readlink(fl)
+            sstate_clean_manifest(depdir + "/" + lnk, d, canrace=True, prefix=workdir)
+            os.unlink(fl)
+            os.unlink(fl + ".complete")
+
+    msg_exists = []
+    msg_adding = []
+
+    # Handle all removals first since files may move between recipes
+    for dep in configuredeps:
+        c = setscenedeps[dep][0]
+        if c not in installed:
+            continue
+        taskhash = setscenedeps[dep][5]
+        taskmanifest = depdir + "/" + c + "." + taskhash
+
+        if os.path.exists(depdir + "/" + c):
+            lnk = os.readlink(depdir + "/" + c)
+            if lnk == c + "." + taskhash and os.path.exists(depdir + "/" + c + ".complete"):
+                continue
+            else:
+                bb.note("%s exists in sysroot, but is stale (%s vs. %s), removing." % (c, lnk, c + "." + taskhash))
+                sstate_clean_manifest(depdir + "/" + lnk, d, canrace=True, prefix=workdir)
+                os.unlink(depdir + "/" + c)
+                if os.path.lexists(depdir + "/" + c + ".complete"):
+                    os.unlink(depdir + "/" + c + ".complete")
+        elif os.path.lexists(depdir + "/" + c):
+            os.unlink(depdir + "/" + c)
+
+    binfiles = {}
+    # Now handle installs
+    for dep in configuredeps:
+        c = setscenedeps[dep][0]
+        if c not in installed:
+            continue
+        taskhash = setscenedeps[dep][5]
+        taskmanifest = depdir + "/" + c + "." + taskhash
+
+        if os.path.exists(depdir + "/" + c):
+            lnk = os.readlink(depdir + "/" + c)
+            if lnk == c + "." + taskhash and os.path.exists(depdir + "/" + c + ".complete"):
+                msg_exists.append(c)
+                continue
+
+        msg_adding.append(c)
+
+        os.symlink(c + "." + taskhash, depdir + "/" + c)
+
+        manifest, d2 = oe.sstatesig.find_sstate_manifest(c, setscenedeps[dep][2], "populate_sysroot", d, multilibs)
+        if d2 is not d:
+            # If we don't do this, the recipe sysroot will be placed in the wrong WORKDIR for multilibs
+            # We need a consistent WORKDIR for the image
+            d2.setVar("WORKDIR", d.getVar("WORKDIR"))
+        destsysroot = d2.getVar("RECIPE_SYSROOT")
+        # We put allarch recipes into the default sysroot
+        if manifest and "allarch" in manifest:
+            destsysroot = d.getVar("RECIPE_SYSROOT")
+
+        native = False
+        if c.endswith("-native") or "-cross-" in c or "-crosssdk" in c:
+            native = True
+
+        if manifest:
+            newmanifest = collections.OrderedDict()
+            targetdir = destsysroot
+            if native:
+                targetdir = recipesysrootnative
+            if targetdir not in fixme:
+                fixme[targetdir] = []
+            fm = fixme[targetdir]
+
+            with open(manifest, "r") as f:
+                manifests[dep] = manifest
+                for l in f:
+                    l = l.strip()
+                    if l.endswith("/fixmepath"):
+                        fm.append(l)
+                        continue
+                    if l.endswith("/fixmepath.cmd"):
+                        continue
+                    dest = l.replace(stagingdir, "")
+                    dest = "/" + "/".join(dest.split("/")[3:])
+                    newmanifest[l] = targetdir + dest
+
+                    # Check if files have already been installed by another
+                    # recipe and abort if they have, explaining what recipes are
+                    # conflicting.
+                    hashname = targetdir + dest
+                    if not hashname.endswith("/"):
+                        if hashname in fileset:
+                            bb.fatal("The file %s is installed by both %s and %s, aborting" % (dest, c, fileset[hashname]))
+                        else:
+                            fileset[hashname] = c
+
+            # Having multiple identical manifests in each sysroot eats disk space, so
+            # create a shared pool of them and hardlink if we can.
+            # We create the manifest in advance so that if something fails during installation,
+            # or the build is interrupted, a subsequent execution can clean up.
+            sharedm = sharedmanifests + "/" + os.path.basename(taskmanifest)
+            if not os.path.exists(sharedm):
+                smlock = bb.utils.lockfile(sharedm + ".lock")
+                # Can race here. You'd think it just means we may not end up with all copies hardlinked to each other
+                # but python can lose file handles so we need to do this under a lock.
+                if not os.path.exists(sharedm):
+                    with open(sharedm, 'w') as m:
+                       for l in newmanifest:
+                           dest = newmanifest[l]
+                           m.write(dest.replace(workdir + "/", "") + "\n")
+                bb.utils.unlockfile(smlock)
+            try:
+                os.link(sharedm, taskmanifest)
+            except OSError as err:
+                if err.errno == errno.EXDEV:
+                    bb.utils.copyfile(sharedm, taskmanifest)
+                else:
+                    raise
+            # Finally actually install the files
+            for l in newmanifest:
+                    dest = newmanifest[l]
+                    if l.endswith("/"):
+                        staging_copydir(l, targetdir, dest, seendirs)
+                        continue
+                    if "/bin/" in l or "/sbin/" in l:
+                        # defer /*bin/* files until last in case they need libs
+                        binfiles[l] = (targetdir, dest)
+                    else:
+                        staging_copyfile(l, targetdir, dest, postinsts, seendirs)
+
+    # Handle deferred binfiles
+    for l in binfiles:
+        (targetdir, dest) = binfiles[l]
+        staging_copyfile(l, targetdir, dest, postinsts, seendirs)
+
+    bb.note("Installed into sysroot: %s" % str(msg_adding))
+    bb.note("Skipping as already exists in sysroot: %s" % str(msg_exists))
+
+    for f in fixme:
+        staging_processfixme(fixme[f], f, recipesysroot, recipesysrootnative, d)
+
+    for p in postinsts:
+        subprocess.check_output(p, shell=True, stderr=subprocess.STDOUT)
+
+    for dep in manifests:
+        c = setscenedeps[dep][0]
+        os.symlink(manifests[dep], depdir + "/" + c + ".complete")
+
+    with open(taskindex, "w") as f:
+        f.write("TaskDeps: " + " ".join(owntaskdeps) + "\n")
+        for l in sorted(installed):
+            f.write(l + "\n")
+
+    bb.utils.unlockfile(lock)
+}
+extend_recipe_sysroot[vardepsexclude] += "MACHINE_ARCH PACKAGE_EXTRA_ARCHS SDK_ARCH BUILD_ARCH SDK_OS BB_TASKDEPDATA"
+
+do_prepare_recipe_sysroot[deptask] = "do_populate_sysroot"
+python do_prepare_recipe_sysroot () {
+    bb.build.exec_func("extend_recipe_sysroot", d)
+}
+addtask do_prepare_recipe_sysroot before do_configure after do_fetch
+
+python staging_taskhandler() {
+    bbtasks = e.tasklist
+    for task in bbtasks:
+        deps = d.getVarFlag(task, "depends")
+        if task == "do_configure" or (deps and "populate_sysroot" in deps):
+            d.prependVarFlag(task, "prefuncs", "extend_recipe_sysroot ")
+}
+staging_taskhandler[eventmask] = "bb.event.RecipeTaskPreProcess"
+addhandler staging_taskhandler
+
+
+#
+# Target build output, stored in do_populate_sysroot or do_package can depend
+# not only upon direct dependencies but also indirect ones. A good example is
+# linux-libc-headers. The toolchain depends on this but most target recipes do
+# not. There are some headers which are not used by the toolchain build and do
+# not change the toolchain task output, hence the task hashes can change without
+# changing the sysroot output of that recipe yet they can influence others.
+#
+# A specific example is rtc.h which can change rtcwake.c in util-linux but is not
+# used in the glibc or gcc build. To account for this, we need to account for the
+# populate_sysroot hashes in the task output hashes.
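+#
+# Concretely, the function below folds one sorted "recipe: unihash" line per
+# qualifying do_populate_sysroot dependency into the task signature via
+# HASHEQUIV_EXTRA_SIGDATA.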
+#
+python target_add_sysroot_deps () {
+    current_task = "do_" + d.getVar("BB_CURRENTTASK")
+    if current_task not in ["do_populate_sysroot", "do_package"]:
+        return
+
+    pn = d.getVar("PN")
+    if pn.endswith("-native"):
+        return
+
+    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
+    deps = {}
+    for dep in taskdepdata.values():
+        if dep[1] == "do_populate_sysroot" and not dep[0].endswith(("-native", "-initial")) and "-cross-" not in dep[0] and dep[0] != pn:
+            deps[dep[0]] = dep[6]
+
+    d.setVar("HASHEQUIV_EXTRA_SIGDATA", "\n".join("%s: %s" % (k, deps[k]) for k in sorted(deps.keys())))
+}
+SSTATECREATEFUNCS += "target_add_sysroot_deps"
+
diff --git a/poky/meta/classes-global/uninative.bbclass b/poky/meta/classes-global/uninative.bbclass
new file mode 100644
index 0000000..42c5f8f
--- /dev/null
+++ b/poky/meta/classes-global/uninative.bbclass
@@ -0,0 +1,179 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+UNINATIVE_LOADER ?= "${UNINATIVE_STAGING_DIR}-uninative/${BUILD_ARCH}-linux/lib/${@bb.utils.contains('BUILD_ARCH', 'x86_64', 'ld-linux-x86-64.so.2', '', d)}${@bb.utils.contains('BUILD_ARCH', 'i686', 'ld-linux.so.2', '', d)}${@bb.utils.contains('BUILD_ARCH', 'aarch64', 'ld-linux-aarch64.so.1', '', d)}${@bb.utils.contains('BUILD_ARCH', 'ppc64le', 'ld64.so.2', '', d)}"
+UNINATIVE_STAGING_DIR ?= "${STAGING_DIR}"
+
+UNINATIVE_URL ?= "unset"
+UNINATIVE_TARBALL ?= "${BUILD_ARCH}-nativesdk-libc-${UNINATIVE_VERSION}.tar.xz"
+# Example checksums
+#UNINATIVE_CHECKSUM[aarch64] = "dead"
+#UNINATIVE_CHECKSUM[i686] = "dead"
+#UNINATIVE_CHECKSUM[x86_64] = "dead"
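+# The checksums and UNINATIVE_URL are normally provided by a distro include such
+# as yocto-uninative.inc rather than set by hand, e.g. (illustrative values):
+#   UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/${UNINATIVE_VERSION}/"
+#   UNINATIVE_CHECKSUM[x86_64] ?= "<sha256 of the x86_64 uninative tarball>"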
+UNINATIVE_DLDIR ?= "${DL_DIR}/uninative/"
+
+# Enabling uninative will change the following variables so they need to go on the parsing ignored variables list to prevent multiple recipe parsing
+BB_HASHCONFIG_IGNORE_VARS += "NATIVELSBSTRING SSTATEPOSTUNPACKFUNCS BUILD_LDFLAGS"
+
+addhandler uninative_event_fetchloader
+uninative_event_fetchloader[eventmask] = "bb.event.BuildStarted"
+
+addhandler uninative_event_enable
+uninative_event_enable[eventmask] = "bb.event.ConfigParsed"
+
+python uninative_event_fetchloader() {
+    """
+    This event fires on the parent and will try to fetch the tarball if the
+    loader isn't already present.
+    """
+
+    chksum = d.getVarFlag("UNINATIVE_CHECKSUM", d.getVar("BUILD_ARCH"))
+    if not chksum:
+        bb.fatal("Uninative selected but not configured correctly, please set UNINATIVE_CHECKSUM[%s]" % d.getVar("BUILD_ARCH"))
+
+    loader = d.getVar("UNINATIVE_LOADER")
+    loaderchksum = loader + ".chksum"
+    if os.path.exists(loader) and os.path.exists(loaderchksum):
+        with open(loaderchksum, "r") as f:
+            readchksum = f.read().strip()
+        if readchksum == chksum:
+            return
+
+    import subprocess
+    try:
+        # Save and restore cwd as Fetch.download() does a chdir()
+        olddir = os.getcwd()
+
+        tarball = d.getVar("UNINATIVE_TARBALL")
+        tarballdir = os.path.join(d.getVar("UNINATIVE_DLDIR"), chksum)
+        tarballpath = os.path.join(tarballdir, tarball)
+
+        if not os.path.exists(tarballpath + ".done"):
+            bb.utils.mkdirhier(tarballdir)
+            if d.getVar("UNINATIVE_URL") == "unset":
+                bb.fatal("Uninative selected but not configured, please set UNINATIVE_URL")
+
+            localdata = bb.data.createCopy(d)
+            localdata.setVar('FILESPATH', "")
+            localdata.setVar('DL_DIR', tarballdir)
+            # Our games with path manipulation of DL_DIR mean standard PREMIRRORS don't work
+            # and we can't easily put 'chksum' into the url path from a url parameter with
+            # the current fetcher url handling
+            premirrors = bb.fetch2.mirror_from_string(localdata.getVar("PREMIRRORS"))
+            for line in premirrors:
+                try:
+                    (find, replace) = line
+                except ValueError:
+                    continue
+                if find.startswith("http"):
+                    localdata.appendVar("PREMIRRORS", " ${UNINATIVE_URL}${UNINATIVE_TARBALL} %s/uninative/%s/${UNINATIVE_TARBALL}" % (replace, chksum))
+
+            srcuri = d.expand("${UNINATIVE_URL}${UNINATIVE_TARBALL};sha256sum=%s" % chksum)
+            bb.note("Fetching uninative binary shim %s (will check PREMIRRORS first)" % srcuri)
+
+            fetcher = bb.fetch2.Fetch([srcuri], localdata, cache=False)
+            fetcher.download()
+            localpath = fetcher.localpath(srcuri)
+            if localpath != tarballpath and os.path.exists(localpath) and not os.path.exists(tarballpath):
+                # Follow the symlink behavior from the bitbake fetch2.
+                # This will cover the case where an existing symlink is broken
+                # as well as if there are two processes trying to create it
+                # at the same time.
+                if os.path.islink(tarballpath):
+                    # Broken symbolic link
+                    os.unlink(tarballpath)
+
+                # Deal with two processes trying to make symlink at once
+                try:
+                    os.symlink(localpath, tarballpath)
+                except FileExistsError:
+                    pass
+
+        # ldd output is "ldd (Ubuntu GLIBC 2.23-0ubuntu10) 2.23", extract the last field from the first line
+        glibcver = subprocess.check_output(["ldd", "--version"]).decode('utf-8').split('\n')[0].split()[-1]
+        if bb.utils.vercmp_string(d.getVar("UNINATIVE_MAXGLIBCVERSION"), glibcver) < 0:
+            raise RuntimeError("Your host glibc version (%s) is newer than that in uninative (%s). Disabling uninative so that sstate is not corrupted." % (glibcver, d.getVar("UNINATIVE_MAXGLIBCVERSION")))
+
+        cmd = d.expand("\
+mkdir -p ${UNINATIVE_STAGING_DIR}-uninative; \
+cd ${UNINATIVE_STAGING_DIR}-uninative; \
+tar -xJf ${UNINATIVE_DLDIR}/%s/${UNINATIVE_TARBALL}; \
+${UNINATIVE_STAGING_DIR}-uninative/relocate_sdk.py \
+  ${UNINATIVE_STAGING_DIR}-uninative/${BUILD_ARCH}-linux \
+  ${UNINATIVE_LOADER} \
+  ${UNINATIVE_LOADER} \
+  ${UNINATIVE_STAGING_DIR}-uninative/${BUILD_ARCH}-linux/${bindir_native}/patchelf-uninative \
+  ${UNINATIVE_STAGING_DIR}-uninative/${BUILD_ARCH}-linux${base_libdir_native}/libc*.so*" % chksum)
+        subprocess.check_output(cmd, shell=True)
+
+        with open(loaderchksum, "w") as f:
+            f.write(chksum)
+
+        enable_uninative(d)
+
+    except RuntimeError as e:
+        bb.warn(str(e))
+    except bb.fetch2.BBFetchException as exc:
+        bb.warn("Disabling uninative as unable to fetch uninative tarball: %s" % str(exc))
+        bb.warn("To build your own uninative loader, please bitbake uninative-tarball and set UNINATIVE_TARBALL appropriately.")
+    except subprocess.CalledProcessError as exc:
+        bb.warn("Disabling uninative as unable to install uninative tarball: %s" % str(exc))
+        bb.warn("To build your own uninative loader, please bitbake uninative-tarball and set UNINATIVE_TARBALL appropriately.")
+    finally:
+        os.chdir(olddir)
+}
+
+python uninative_event_enable() {
+    """
+    This event handler is called in the workers and is responsible for setting
+    up uninative if a loader is found.
+    """
+    enable_uninative(d)
+}
+
+def enable_uninative(d):
+    loader = d.getVar("UNINATIVE_LOADER")
+    if os.path.exists(loader):
+        bb.debug(2, "Enabling uninative")
+        d.setVar("NATIVELSBSTRING", "universal%s" % oe.utils.host_gcc_version(d))
+        d.appendVar("SSTATEPOSTUNPACKFUNCS", " uninative_changeinterp")
+        d.appendVarFlag("SSTATEPOSTUNPACKFUNCS", "vardepvalueexclude", "| uninative_changeinterp")
+        d.appendVar("BUILD_LDFLAGS", " -Wl,--allow-shlib-undefined -Wl,--dynamic-linker=${UNINATIVE_LOADER}")
+        d.appendVarFlag("BUILD_LDFLAGS", "vardepvalueexclude", "| -Wl,--allow-shlib-undefined -Wl,--dynamic-linker=${UNINATIVE_LOADER}")
+        d.appendVarFlag("BUILD_LDFLAGS", "vardepsexclude", "UNINATIVE_LOADER")
+        d.prependVar("PATH", "${STAGING_DIR}-uninative/${BUILD_ARCH}-linux${bindir_native}:")
+
+python uninative_changeinterp () {
+    import subprocess
+    import stat
+    import oe.qa
+
+    if not (bb.data.inherits_class('native', d) or bb.data.inherits_class('crosssdk', d) or bb.data.inherits_class('cross', d)):
+        return
+
+    sstateinst = d.getVar('SSTATE_INSTDIR')
+    for walkroot, dirs, files in os.walk(sstateinst):
+        for file in files:
+            if file.endswith(".so") or ".so." in file:
+                continue
+            f = os.path.join(walkroot, file)
+            if os.path.islink(f):
+                continue
+            s = os.stat(f)
+            if not ((s[stat.ST_MODE] & stat.S_IXUSR) or (s[stat.ST_MODE] & stat.S_IXGRP) or (s[stat.ST_MODE] & stat.S_IXOTH)):
+                continue
+            elf = oe.qa.ELFFile(f)
+            try:
+                elf.open()
+            except oe.qa.NotELFFileError:
+                continue
+            if not elf.isDynamic():
+                continue
+
+            os.chmod(f, s[stat.ST_MODE] | stat.S_IWUSR)
+            subprocess.check_output(("patchelf-uninative", "--set-interpreter", d.getVar("UNINATIVE_LOADER"), f), stderr=subprocess.STDOUT)
+            os.chmod(f, s[stat.ST_MODE])
+}
diff --git a/poky/meta/classes-global/utility-tasks.bbclass b/poky/meta/classes-global/utility-tasks.bbclass
new file mode 100644
index 0000000..ae2da33
--- /dev/null
+++ b/poky/meta/classes-global/utility-tasks.bbclass
@@ -0,0 +1,60 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+addtask listtasks
+do_listtasks[nostamp] = "1"
+python do_listtasks() {
+    taskdescs = {}
+    maxlen = 0
+    for e in d.keys():
+        if d.getVarFlag(e, 'task'):
+            maxlen = max(maxlen, len(e))
+            if e.endswith('_setscene'):
+                desc = "%s (setscene version)" % (d.getVarFlag(e[:-9], 'doc') or '')
+            else:
+                desc = d.getVarFlag(e, 'doc') or ''
+            taskdescs[e] = desc
+
+    tasks = sorted(taskdescs.keys())
+    for taskname in tasks:
+        bb.plain("%s  %s" % (taskname.ljust(maxlen), taskdescs[taskname]))
+}
+
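+# Space-separated list of extra functions for do_clean to execute (see the loop below).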
+CLEANFUNCS ?= ""
+
+T:task-clean = "${LOG_DIR}/cleanlogs/${PN}"
+addtask clean
+do_clean[nostamp] = "1"
+python do_clean() {
+    """clear the build and temp directories"""
+    dir = d.expand("${WORKDIR}")
+    bb.note("Removing " + dir)
+    oe.path.remove(dir)
+
+    dir = "%s.*" % d.getVar('STAMP')
+    bb.note("Removing " + dir)
+    oe.path.remove(dir)
+
+    for f in (d.getVar('CLEANFUNCS') or '').split():
+        bb.build.exec_func(f, d)
+}
+
+addtask checkuri
+do_checkuri[nostamp] = "1"
+do_checkuri[network] = "1"
+python do_checkuri() {
+    src_uri = (d.getVar('SRC_URI') or "").split()
+    if len(src_uri) == 0:
+        return
+
+    try:
+        fetcher = bb.fetch2.Fetch(src_uri, d)
+        fetcher.checkstatus()
+    except bb.fetch2.BBFetchException as e:
+        bb.fatal(str(e))
+}
+
+
diff --git a/poky/meta/classes-global/utils.bbclass b/poky/meta/classes-global/utils.bbclass
new file mode 100644
index 0000000..8d797ff
--- /dev/null
+++ b/poky/meta/classes-global/utils.bbclass
@@ -0,0 +1,369 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+oe_soinstall() {
+	# Purpose: Install shared library file and
+	#          create the necessary links
+	# Example: oe_soinstall libfoo.so.1.2.3 ${D}${libdir}
+	libname=`basename $1`
+	case "$libname" in
+	    *.so)
+	        bbfatal "oe_soinstall: Shared library must have a versioned filename (e.g. libfoo.so.1.2.3)"
+	        ;;
+	esac
+	install -m 755 $1 $2/$libname
+	sonamelink=`${HOST_PREFIX}readelf -d $1 |grep 'Library soname:' |sed -e 's/.*\[\(.*\)\].*/\1/'`
+	if [ -z $sonamelink ]; then
+		bbfatal "oe_soinstall: $libname is missing ELF tag 'SONAME'."
+	fi
+	solink=`echo $libname | sed -e 's/\.so\..*/.so/'`
+	ln -sf $libname $2/$sonamelink
+	ln -sf $libname $2/$solink
+}
+
+oe_libinstall() {
+	# Purpose: Install a library, in all its forms
+	# Example
+	#
+	# oe_libinstall libltdl ${STAGING_LIBDIR}/
+	# oe_libinstall -C src/libblah libblah ${D}/${libdir}/
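+	#
+	# Options understood (see the argument handling below): -C <dir> to change
+	# into <dir> first, -s for silent operation, -a to require the static
+	# library, -so to require the shared library.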
+	dir=""
+	libtool=""
+	silent=""
+	require_static=""
+	require_shared=""
+	while [ "$#" -gt 0 ]; do
+		case "$1" in
+		-C)
+			shift
+			dir="$1"
+			;;
+		-s)
+			silent=1
+			;;
+		-a)
+			require_static=1
+			;;
+		-so)
+			require_shared=1
+			;;
+		-*)
+			bbfatal "oe_libinstall: unknown option: $1"
+			;;
+		*)
+			break;
+			;;
+		esac
+		shift
+	done
+
+	libname="$1"
+	shift
+	destpath="$1"
+	if [ -z "$destpath" ]; then
+		bbfatal "oe_libinstall: no destination path specified"
+	fi
+
+	__runcmd () {
+		if [ -z "$silent" ]; then
+			echo >&2 "oe_libinstall: $*"
+		fi
+		$*
+	}
+
+	if [ -z "$dir" ]; then
+		dir=`pwd`
+	fi
+
+	dotlai=$libname.lai
+
+	# Sanity check that the libname.lai is unique
+	number_of_files=`(cd $dir; find . -name "$dotlai") | wc -l`
+	if [ $number_of_files -gt 1 ]; then
+		bbfatal "oe_libinstall: $dotlai is not unique in $dir"
+	fi
+
+
+	dir=$dir`(cd $dir;find . -name "$dotlai") | sed "s/^\.//;s/\/$dotlai\$//;q"`
+	olddir=`pwd`
+	__runcmd cd $dir
+
+	lafile=$libname.la
+
+	# If such a file doesn't exist, try to cut the version suffix
+	if [ ! -f "$lafile" ]; then
+		libname1=`echo "$libname" | sed 's/-[0-9.]*$//'`
+		lafile1=$libname1.la
+		if [ -f "$lafile1" ]; then
+			libname=$libname1
+			lafile=$lafile1
+		fi
+	fi
+
+	if [ -f "$lafile" ]; then
+		# libtool archive
+		eval `cat $lafile|grep "^library_names="`
+		libtool=1
+	else
+		library_names="$libname.so* $libname.dll.a $libname.*.dylib"
+	fi
+
+	__runcmd install -d $destpath/
+	dota=$libname.a
+	if [ -f "$dota" -o -n "$require_static" ]; then
+		rm -f $destpath/$dota
+		__runcmd install -m 0644 $dota $destpath/
+	fi
+	if [ -f "$dotlai" -a -n "$libtool" ]; then
+		rm -f $destpath/$libname.la
+		__runcmd install -m 0644 $dotlai $destpath/$libname.la
+	fi
+
+	for name in $library_names; do
+		files=`eval echo $name`
+		for f in $files; do
+			if [ ! -e "$f" ]; then
+				if [ -n "$libtool" ]; then
+					bbfatal "oe_libinstall: $dir/$f not found."
+				fi
+			elif [ -L "$f" ]; then
+				__runcmd cp -P "$f" $destpath/
+			elif [ ! -L "$f" ]; then
+				libfile="$f"
+				rm -f $destpath/$libfile
+				__runcmd install -m 0755 $libfile $destpath/
+			fi
+		done
+	done
+
+	if [ -z "$libfile" ]; then
+		if  [ -n "$require_shared" ]; then
+			bbfatal "oe_libinstall: unable to locate shared library"
+		fi
+	elif [ -z "$libtool" ]; then
+		# special case hack for non-libtool .so.#.#.# links
+		baselibfile=`basename "$libfile"`
+		if (echo $baselibfile | grep -qE '^lib.*\.so\.[0-9.]*$'); then
+			sonamelink=`${HOST_PREFIX}readelf -d $libfile |grep 'Library soname:' |sed -e 's/.*\[\(.*\)\].*/\1/'`
+			solink=`echo $baselibfile | sed -e 's/\.so\..*/.so/'`
+			if [ -n "$sonamelink" -a x"$baselibfile" != x"$sonamelink" ]; then
+				__runcmd ln -sf $baselibfile $destpath/$sonamelink
+			fi
+			__runcmd ln -sf $baselibfile $destpath/$solink
+		fi
+	fi
+
+	__runcmd cd "$olddir"
+}
+
+create_cmdline_wrapper () {
+	# Create a wrapper script where commandline options are needed
+	#
+	# These are useful to work around relocation issues, by passing extra options
+	# to a program
+	#
+	# Usage: create_cmdline_wrapper FILENAME <extra-options>
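+	#
+	# Example (illustrative, not from a real recipe):
+	#   create_cmdline_wrapper ${D}${bindir}/foo --foo-datadir=${datadir}/foo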
+
+	cmd=$1
+	shift
+
+	echo "Generating wrapper script for $cmd"
+
+	mv $cmd $cmd.real
+	cmdname=`basename $cmd`
+	dirname=`dirname $cmd`
+	cmdoptions=$@
+	if [ "${base_prefix}" != "" ]; then
+		relpath=`python3 -c "import os; print(os.path.relpath('${D}${base_prefix}', '$dirname'))"`
+		cmdoptions=`echo $@ | sed -e "s:${base_prefix}:\\$realdir/$relpath:g"`
+	fi
+	cat <<END >$cmd
+#!/bin/bash
+realpath=\`readlink -fn \$0\`
+realdir=\`dirname \$realpath\`
+exec -a \$realdir/$cmdname \$realdir/$cmdname.real $cmdoptions "\$@"
+END
+	chmod +x $cmd
+}
+
+create_cmdline_shebang_wrapper () {
+	# Create a wrapper script where commandline options are needed
+	#
+	# These are useful to work around shebang relocation issues, where shebangs are too
+	# long or have arguments in them, thus preventing them from using the /usr/bin/env
+	# shebang
+	#
+	# Usage: create_cmdline_shebang_wrapper FILENAME <extra-options>
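+	#
+	# Example (illustrative): create_cmdline_shebang_wrapper ${D}${bindir}/foo-script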
+
+	cmd=$1
+	shift
+
+	echo "Generating wrapper script for $cmd"
+
+	# Strip #! and get remaining interpreter + arg
+	argument="$(sed -ne 's/^#! *//p;q' $cmd)"
+	# strip the shebang from the real script as we do not want it to be usable anyway
+	tail -n +2 $cmd > $cmd.real
+	chown --reference=$cmd $cmd.real
+	chmod --reference=$cmd $cmd.real
+	rm -f $cmd
+	cmdname=$(basename $cmd)
+	dirname=$(dirname $cmd)
+	cmdoptions=$@
+	if [ "${base_prefix}" != "" ]; then
+		relpath=`python3 -c "import os; print(os.path.relpath('${D}${base_prefix}', '$dirname'))"`
+		cmdoptions=`echo $@ | sed -e "s:${base_prefix}:\\$realdir/$relpath:g"`
+	fi
+	cat <<END >$cmd
+#!/usr/bin/env bash
+realpath=\`readlink -fn \$0\`
+realdir=\`dirname \$realpath\`
+exec -a \$realdir/$cmdname $argument \$realdir/$cmdname.real $cmdoptions "\$@"
+END
+	chmod +x $cmd
+}
+
+create_wrapper () {
+	# Create a wrapper script where extra environment variables are needed
+	#
+	# These are useful to work around relocation issues, by setting environment
+	# variables which point to paths in the filesystem.
+	#
+	# Usage: create_wrapper FILENAME [[VAR=VALUE]..]
+
+	cmd=$1
+	shift
+
+	echo "Generating wrapper script for $cmd"
+
+	mv $cmd $cmd.real
+	cmdname=`basename $cmd`
+	dirname=`dirname $cmd`
+	exportstring=$@
+	if [ "${base_prefix}" != "" ]; then
+		relpath=`python3 -c "import os; print(os.path.relpath('${D}${base_prefix}', '$dirname'))"`
+		exportstring=`echo $@ | sed -e "s:${base_prefix}:\\$realdir/$relpath:g"`
+	fi
+	cat <<END >$cmd
+#!/bin/bash
+realpath=\`readlink -fn \$0\`
+realdir=\`dirname \$realpath\`
+export $exportstring
+exec -a "\$0" \$realdir/$cmdname.real "\$@"
+END
+	chmod +x $cmd
+}
+
+# Copy files/directories from $1 to $2 but using hardlinks
+# (preserve symlinks)
+hardlinkdir () {
+	from=$1
+	to=$2
+	(cd $from; find . -print0 | cpio --null -pdlu $to)
+}
+
+
+def check_app_exists(app, d):
+    app = d.expand(app).split()[0].strip()
+    path = d.getVar('PATH')
+    return bool(bb.utils.which(path, app))
+
+def explode_deps(s):
+    return bb.utils.explode_deps(s)
+
+def base_set_filespath(path, d):
+    filespath = []
+    extrapaths = (d.getVar("FILESEXTRAPATHS") or "")
+    # Remove default flag which was used for checking
+    extrapaths = extrapaths.replace("__default:", "")
+    # Don't prepend empty strings to the path list
+    if extrapaths != "":
+        path = extrapaths.split(":") + path
+    # The ":" ensures we have an 'empty' override
+    overrides = (":" + (d.getVar("FILESOVERRIDES") or "")).split(":")
+    overrides.reverse()
+    for o in overrides:
+        for p in path:
+            if p != "":
+                filespath.append(os.path.join(p, o))
+    return ":".join(filespath)
+
+def extend_variants(d, var, extend, delim=':'):
+    """Return a string of all bb class extend variants for the given extend"""
+    variants = []
+    whole = d.getVar(var) or ""
+    for ext in whole.split():
+        eext = ext.split(delim)
+        if len(eext) > 1 and eext[0] == extend:
+            variants.append(eext[1])
+    return " ".join(variants)
+
+def multilib_pkg_extend(d, pkg):
+    variants = (d.getVar("MULTILIB_VARIANTS") or "").split()
+    if not variants:
+        return pkg
+    pkgs = pkg
+    for v in variants:
+        pkgs = pkgs + " " + v + "-" + pkg
+    return pkgs
+
+def get_multilib_datastore(variant, d):
+    return oe.utils.get_multilib_datastore(variant, d)
+
+def all_multilib_tune_values(d, var, unique = True, need_split = True, delim = ' '):
+    """Return a string of all ${var} in all multilib tune configuration"""
+    values = []
+    variants = (d.getVar("MULTILIB_VARIANTS") or "").split() + ['']
+    for item in variants:
+        localdata = get_multilib_datastore(item, d)
+        # We need WORKDIR to be consistent with the original datastore
+        localdata.setVar("WORKDIR", d.getVar("WORKDIR"))
+        value = localdata.getVar(var) or ""
+        if value != "":
+            if need_split:
+                for item in value.split(delim):
+                    values.append(item)
+            else:
+                values.append(value)
+    if unique:
+        #we do this to keep order as much as possible
+        ret = []
+        for value in values:
+            if not value in ret:
+                ret.append(value)
+    else:
+        ret = values
+    return " ".join(ret)
+
+def all_multilib_tune_list(vars, d):
+    """
+    Return a list of ${VAR} for each variable VAR in vars from each
+    multilib tune configuration.
+    It is safe to call this from a multilib recipe/context as it can
+    figure out the original tune and remove the multilib overrides.
+    """
+    values = {}
+    for v in vars:
+        values[v] = []
+    values['ml'] = ['']
+
+    variants = (d.getVar("MULTILIB_VARIANTS") or "").split() + ['']
+    for item in variants:
+        localdata = get_multilib_datastore(item, d)
+        for v in vars:
+            values[v].append(localdata.getVar(v))
+        values['ml'].append(item)
+    return values
+all_multilib_tune_list[vardepsexclude] = "OVERRIDES"
+
+# If the user hasn't set up their name/email, set some defaults
+check_git_config() {
+	if ! git config user.email > /dev/null ; then
+		git config --local user.email "${PATCH_GIT_USER_EMAIL}"
+	fi
+	if ! git config user.name > /dev/null ; then
+		git config --local user.name "${PATCH_GIT_USER_NAME}"
+	fi
+}
diff --git a/poky/meta/classes-recipe/allarch.bbclass b/poky/meta/classes-recipe/allarch.bbclass
new file mode 100644
index 0000000..9138f40
--- /dev/null
+++ b/poky/meta/classes-recipe/allarch.bbclass
@@ -0,0 +1,71 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# This class is used for architecture independent recipes/data files (usually scripts)
+#
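+# A recipe opts in simply by inheriting the class:
+#   inherit allarch
+#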
+
+python allarch_package_arch_handler () {
+    if bb.data.inherits_class("native", d) or bb.data.inherits_class("nativesdk", d) \
+        or bb.data.inherits_class("crosssdk", d):
+        return
+
+    variants = d.getVar("MULTILIB_VARIANTS")
+    if not variants:
+        d.setVar("PACKAGE_ARCH", "all" )
+}
+
+addhandler allarch_package_arch_handler
+allarch_package_arch_handler[eventmask] = "bb.event.RecipePreFinalise"
+
+python () {
+    # Allow this class to be included but overridden - only set
+    # the values if we're still "all" package arch.
+    if d.getVar("PACKAGE_ARCH") == "all":
+        # No need for virtual/libc or a cross compiler
+        d.setVar("INHIBIT_DEFAULT_DEPS","1")
+
+        # Set these to a common set of values; we shouldn't be using them other than for WORKDIR directory
+        # naming anyway
+        d.setVar("baselib", "lib")
+        d.setVar("TARGET_ARCH", "allarch")
+        d.setVar("TARGET_OS", "linux")
+        d.setVar("TARGET_CC_ARCH", "none")
+        d.setVar("TARGET_LD_ARCH", "none")
+        d.setVar("TARGET_AS_ARCH", "none")
+        d.setVar("TARGET_FPU", "")
+        d.setVar("TARGET_PREFIX", "")
+        # Expand PACKAGE_EXTRA_ARCHS since the staging code needs this
+        # (this removes any dependencies from the hash perspective)
+        d.setVar("PACKAGE_EXTRA_ARCHS", d.getVar("PACKAGE_EXTRA_ARCHS"))
+        d.setVar("SDK_ARCH", "none")
+        d.setVar("SDK_CC_ARCH", "none")
+        d.setVar("TARGET_CPPFLAGS", "none")
+        d.setVar("TARGET_CFLAGS", "none")
+        d.setVar("TARGET_CXXFLAGS", "none")
+        d.setVar("TARGET_LDFLAGS", "none")
+        d.setVar("POPULATESYSROOTDEPS", "")
+
+        # Avoid this being unnecessarily different due to nuances of
+        # the target machine that aren't important for "all" arch
+        # packages.
+        d.setVar("LDFLAGS", "")
+
+        # No need to do shared library processing or debug symbol handling
+        d.setVar("EXCLUDE_FROM_SHLIBS", "1")
+        d.setVar("INHIBIT_PACKAGE_DEBUG_SPLIT", "1")
+        d.setVar("INHIBIT_PACKAGE_STRIP", "1")
+
+        # These multilib values shouldn't change allarch packages so exclude them
+        d.appendVarFlag("emit_pkgdata", "vardepsexclude", " MULTILIB_VARIANTS")
+        d.appendVarFlag("write_specfile", "vardepsexclude", " MULTILIBS")
+        d.appendVarFlag("do_package", "vardepsexclude", " package_do_shlibs")
+    elif bb.data.inherits_class('packagegroup', d) and not bb.data.inherits_class('nativesdk', d):
+        bb.error("Please ensure recipe %s sets PACKAGE_ARCH before inherit packagegroup" % d.getVar("FILE"))
+}
+
+def qemu_wrapper_cmdline(data, rootfs_path, library_paths):
+    return 'false'
diff --git a/poky/meta/classes-recipe/autotools-brokensep.bbclass b/poky/meta/classes-recipe/autotools-brokensep.bbclass
new file mode 100644
index 0000000..a0fb4b7
--- /dev/null
+++ b/poky/meta/classes-recipe/autotools-brokensep.bbclass
@@ -0,0 +1,11 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Autotools class for recipes where separate build dir doesn't work
+# Ideally we should fix software so it does work. Standard autotools supports
+# this.
+inherit autotools
+B = "${S}"
diff --git a/poky/meta/classes-recipe/autotools.bbclass b/poky/meta/classes-recipe/autotools.bbclass
new file mode 100644
index 0000000..a4c1c4b
--- /dev/null
+++ b/poky/meta/classes-recipe/autotools.bbclass
@@ -0,0 +1,260 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+def get_autotools_dep(d):
+    if d.getVar('INHIBIT_AUTOTOOLS_DEPS'):
+        return ''
+
+    pn = d.getVar('PN')
+    deps = ''
+
+    if pn in ['autoconf-native', 'automake-native']:
+        return deps
+    deps += 'autoconf-native automake-native '
+
+    if not pn in ['libtool', 'libtool-native'] and not pn.endswith("libtool-cross"):
+        deps += 'libtool-native '
+        if not bb.data.inherits_class('native', d) \
+                        and not bb.data.inherits_class('nativesdk', d) \
+                        and not bb.data.inherits_class('cross', d) \
+                        and not d.getVar('INHIBIT_DEFAULT_DEPS'):
+            deps += 'libtool-cross '
+
+    return deps
+
+
+DEPENDS:prepend = "${@get_autotools_dep(d)} "
+
+inherit siteinfo
+
+# Space separated list of shell scripts with variables defined to supply test
+# results for autoconf tests we cannot run at build time.
+# The value of this variable is filled in in a prefunc because it depends on
+# the contents of the sysroot.
+export CONFIG_SITE
+
+acpaths ?= "default"
+EXTRA_AUTORECONF = "--exclude=autopoint --exclude=gtkdocize"
+
+export lt_cv_sys_lib_dlsearch_path_spec = "${libdir} ${base_libdir}"
+
+# When building tools for use at build-time it's recommended for the build
+# system to use these variables when cross-compiling.
+# (http://sources.redhat.com/autobook/autobook/autobook_270.html)
+export CPP_FOR_BUILD = "${BUILD_CPP}"
+export CPPFLAGS_FOR_BUILD = "${BUILD_CPPFLAGS}"
+
+export CC_FOR_BUILD = "${BUILD_CC}"
+export CFLAGS_FOR_BUILD = "${BUILD_CFLAGS}"
+
+export CXX_FOR_BUILD = "${BUILD_CXX}"
+export CXXFLAGS_FOR_BUILD="${BUILD_CXXFLAGS}"
+
+export LD_FOR_BUILD = "${BUILD_LD}"
+export LDFLAGS_FOR_BUILD = "${BUILD_LDFLAGS}"
+
+def append_libtool_sysroot(d):
+    # Only supply libtool sysroot option for non-native packages
+    if not bb.data.inherits_class('native', d):
+        return '--with-libtool-sysroot=${STAGING_DIR_HOST}'
+    return ""
+
+CONFIGUREOPTS = " --build=${BUILD_SYS} \
+		  --host=${HOST_SYS} \
+		  --target=${TARGET_SYS} \
+		  --prefix=${prefix} \
+		  --exec_prefix=${exec_prefix} \
+		  --bindir=${bindir} \
+		  --sbindir=${sbindir} \
+		  --libexecdir=${libexecdir} \
+		  --datadir=${datadir} \
+		  --sysconfdir=${sysconfdir} \
+		  --sharedstatedir=${sharedstatedir} \
+		  --localstatedir=${localstatedir} \
+		  --libdir=${libdir} \
+		  --includedir=${includedir} \
+		  --oldincludedir=${oldincludedir} \
+		  --infodir=${infodir} \
+		  --mandir=${mandir} \
+		  --disable-silent-rules \
+		  ${CONFIGUREOPT_DEPTRACK} \
+		  ${@append_libtool_sysroot(d)}"
+CONFIGUREOPT_DEPTRACK ?= "--disable-dependency-tracking"
+
+CACHED_CONFIGUREVARS ?= ""
+
+AUTOTOOLS_SCRIPT_PATH ?= "${S}"
+CONFIGURE_SCRIPT ?= "${AUTOTOOLS_SCRIPT_PATH}/configure"
+
+AUTOTOOLS_AUXDIR ?= "${AUTOTOOLS_SCRIPT_PATH}"
+
+oe_runconf () {
+	# Use relative path to avoid buildpaths in files
+	cfgscript_name="`basename ${CONFIGURE_SCRIPT}`"
+	cfgscript=`python3 -c "import os; print(os.path.relpath(os.path.dirname('${CONFIGURE_SCRIPT}'), '.'))"`/$cfgscript_name
+	if [ -x "$cfgscript" ] ; then
+		bbnote "Running $cfgscript ${CONFIGUREOPTS} ${EXTRA_OECONF} $@"
+		if ! CONFIG_SHELL=${CONFIG_SHELL-/bin/bash} ${CACHED_CONFIGUREVARS} $cfgscript ${CONFIGUREOPTS} ${EXTRA_OECONF} "$@"; then
+			bbnote "The following config.log files may provide further information."
+			bbnote `find ${B} -ignore_readdir_race -type f -name config.log`
+			bbfatal_log "configure failed"
+		fi
+	else
+		bbfatal "no configure script found at $cfgscript"
+	fi
+}
+
+CONFIGURESTAMPFILE = "${WORKDIR}/configure.sstate"
+
+autotools_preconfigure() {
+	if [ -n "${CONFIGURESTAMPFILE}" -a -e "${CONFIGURESTAMPFILE}" ]; then
+		if [ "`cat ${CONFIGURESTAMPFILE}`" != "${BB_TASKHASH}" ]; then
+			if [ "${S}" != "${B}" ]; then
+				echo "Previously configured separate build directory detected, cleaning ${B}"
+				rm -rf ${B}
+				mkdir -p ${B}
+			else
+				# At least remove the .la files since automake won't automatically
+				# regenerate them even if CFLAGS/LDFLAGS are different
+				cd ${S}
+				if [ "${CLEANBROKEN}" != "1" -a \( -e Makefile -o -e makefile -o -e GNUmakefile \) ]; then
+					oe_runmake clean
+				fi
+				find ${S} -ignore_readdir_race -name \*.la -delete
+			fi
+		fi
+	fi
+}
+
+autotools_postconfigure(){
+	if [ -n "${CONFIGURESTAMPFILE}" ]; then
+		mkdir -p `dirname ${CONFIGURESTAMPFILE}`
+		echo ${BB_TASKHASH} > ${CONFIGURESTAMPFILE}
+	fi
+}
+
+EXTRACONFFUNCS ??= ""
+
+EXTRA_OECONF:append = " ${PACKAGECONFIG_CONFARGS}"
+
+do_configure[prefuncs] += "autotools_preconfigure autotools_aclocals ${EXTRACONFFUNCS}"
+do_compile[prefuncs] += "autotools_aclocals"
+do_install[prefuncs] += "autotools_aclocals"
+do_configure[postfuncs] += "autotools_postconfigure"
+
+ACLOCALDIR = "${STAGING_DATADIR}/aclocal"
+ACLOCALEXTRAPATH = ""
+ACLOCALEXTRAPATH:class-target = " -I ${STAGING_DATADIR_NATIVE}/aclocal/"
+ACLOCALEXTRAPATH:class-nativesdk = " -I ${STAGING_DATADIR_NATIVE}/aclocal/"
+
+python autotools_aclocals () {
+    sitefiles, searched = siteinfo_get_files(d, sysrootcache=True)
+    d.setVar("CONFIG_SITE", " ".join(sitefiles))
+}
+
+do_configure[file-checksums] += "${@' '.join(siteinfo_get_files(d, sysrootcache=False)[1])}"
+
+CONFIGURE_FILES = "${S}/configure.in ${S}/configure.ac ${S}/config.h.in ${S}/acinclude.m4 Makefile.am"
+
+autotools_do_configure() {
+	# WARNING: gross hack follows:
+	# An autotools built package generally needs these scripts, however only
+	# automake or libtoolize actually install the current versions of them.
+	# This is a problem in builds that do not use libtool or automake, in the case
+	# where we -need- the latest version of these scripts.  e.g. running a build
+	# for a package whose autotools are old, on an x86_64 machine, which the old
+	# config.sub does not support.  Work around this by installing them manually
+	# regardless.
+
+	PRUNE_M4=""
+
+	for ac in `find ${S} -ignore_readdir_race -name configure.in -o -name configure.ac`; do
+		rm -f `dirname $ac`/configure
+	done
+	if [ -e ${AUTOTOOLS_SCRIPT_PATH}/configure.in -o -e ${AUTOTOOLS_SCRIPT_PATH}/configure.ac ]; then
+		olddir=`pwd`
+		cd ${AUTOTOOLS_SCRIPT_PATH}
+		mkdir -p ${ACLOCALDIR}
+		ACLOCAL="aclocal --system-acdir=${ACLOCALDIR}/"
+		if [ x"${acpaths}" = xdefault ]; then
+			acpaths=
+			for i in `find ${AUTOTOOLS_SCRIPT_PATH} -ignore_readdir_race -maxdepth 2 -name \*.m4|grep -v 'aclocal.m4'| \
+				grep -v 'acinclude.m4' | sed -e 's,\(.*/\).*$,\1,'|sort -u`; do
+				acpaths="$acpaths -I $i"
+			done
+		else
+			acpaths="${acpaths}"
+		fi
+		acpaths="$acpaths ${ACLOCALEXTRAPATH}"
+		AUTOV=`automake --version | sed -e '1{s/.* //;s/\.[0-9]\+$//};q'`
+		automake --version
+		echo "AUTOV is $AUTOV"
+		if [ -d ${STAGING_DATADIR_NATIVE}/aclocal-$AUTOV ]; then
+			ACLOCAL="$ACLOCAL --automake-acdir=${STAGING_DATADIR_NATIVE}/aclocal-$AUTOV"
+		fi
+		# autoreconf is too shy to overwrite aclocal.m4 if it doesn't look
+		# like it was auto-generated.  Work around this by blowing it away
+		# by hand, unless the package specifically asked not to run aclocal.
+		if ! echo ${EXTRA_AUTORECONF} | grep -q "aclocal"; then
+			rm -f aclocal.m4
+		fi
+		if [ -e configure.in ]; then
+			CONFIGURE_AC=configure.in
+		else
+			CONFIGURE_AC=configure.ac
+		fi
+		if grep -q "^[[:space:]]*AM_GLIB_GNU_GETTEXT" $CONFIGURE_AC; then
+			if grep -q "sed.*POTFILES" $CONFIGURE_AC; then
+				: do nothing -- we still have an old unmodified configure.ac
+			else
+				bbnote Executing glib-gettextize --force --copy
+				echo "no" | glib-gettextize --force --copy
+			fi
+		elif [ "${BPN}" != "gettext" ] && grep -q "^[[:space:]]*AM_GNU_GETTEXT" $CONFIGURE_AC; then
+			# We'd call gettextize here if it wasn't so broken...
+			cp ${STAGING_DATADIR_NATIVE}/gettext/config.rpath ${AUTOTOOLS_AUXDIR}/
+			if [ -d ${S}/po/ ]; then
+				cp -f ${STAGING_DATADIR_NATIVE}/gettext/po/Makefile.in.in ${S}/po/
+				if [ ! -e ${S}/po/remove-potcdate.sin ]; then
+					cp ${STAGING_DATADIR_NATIVE}/gettext/po/remove-potcdate.sin ${S}/po/
+				fi
+			fi
+			PRUNE_M4="$PRUNE_M4 gettext.m4 iconv.m4 lib-ld.m4 lib-link.m4 lib-prefix.m4 nls.m4 po.m4 progtest.m4"
+		fi
+		mkdir -p m4
+
+		for i in $PRUNE_M4; do
+			find ${S} -ignore_readdir_race -name $i -delete
+		done
+
+		bbnote Executing ACLOCAL=\"$ACLOCAL\" autoreconf -Wcross --verbose --install --force ${EXTRA_AUTORECONF} $acpaths
+		ACLOCAL="$ACLOCAL" autoreconf -Wcross -Wno-obsolete --verbose --install --force ${EXTRA_AUTORECONF} $acpaths || die "autoreconf execution failed."
+		cd $olddir
+	fi
+	if [ -e ${CONFIGURE_SCRIPT} ]; then
+		oe_runconf
+	else
+		bbnote "nothing to configure"
+	fi
+}
+
+autotools_do_compile() {
+	oe_runmake
+}
+
+autotools_do_install() {
+	oe_runmake 'DESTDIR=${D}' install
+	# Info dir listing isn't interesting at this point so remove it if it exists.
+	if [ -e "${D}${infodir}/dir" ]; then
+		rm -f ${D}${infodir}/dir
+	fi
+}
+
+inherit siteconfig
+
+EXPORT_FUNCTIONS do_configure do_compile do_install
+
+B = "${WORKDIR}/build"
diff --git a/poky/meta/classes-recipe/baremetal-image.bbclass b/poky/meta/classes-recipe/baremetal-image.bbclass
new file mode 100644
index 0000000..d3377a9
--- /dev/null
+++ b/poky/meta/classes-recipe/baremetal-image.bbclass
@@ -0,0 +1,137 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Baremetal image class
+#
+# This class is meant to be inherited by recipes for baremetal/RTOS applications
+# It contains code that would be used by all of them, every recipe just needs to
+# override certain variables.
+#
+# For scalability purposes, code within this class focuses on the "image" wiring
+# to satisfy the OpenEmbedded image creation and testing infrastructure.
+#
+# See meta-skeleton for a working example.
+
+## Emulate image.bbclass
+# Handle inherits of any of the image classes we need
+IMAGE_CLASSES ??= ""
+IMGCLASSES = " ${IMAGE_CLASSES}"
+inherit ${IMGCLASSES}
+# Set defaults to satisfy IMAGE_FEATURES check
+IMAGE_FEATURES ?= ""
+IMAGE_FEATURES[type] = "list"
+IMAGE_FEATURES[validitems] += ""
+
+# Toolchain should be baremetal or newlib based.
+# TCLIBC="baremetal" or TCLIBC="newlib"
+COMPATIBLE_HOST:libc-musl:class-target = "null"
+COMPATIBLE_HOST:libc-glibc:class-target = "null"
+
+
+inherit rootfs-postcommands
+
+# Set some defaults, but these should be overridden by each recipe if required
+IMGDEPLOYDIR ?= "${WORKDIR}/deploy-${PN}-image-complete"
+BAREMETAL_BINNAME ?= "hello_baremetal_${MACHINE}"
+IMAGE_LINK_NAME ?= "baremetal-helloworld-image-${MACHINE}"
+IMAGE_NAME_SUFFIX ?= ""
+
+do_rootfs[dirs] = "${IMGDEPLOYDIR} ${DEPLOY_DIR_IMAGE}"
+
+do_image(){
+    install ${D}/${base_libdir}/firmware/${BAREMETAL_BINNAME}.bin ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.bin
+    install ${D}/${base_libdir}/firmware/${BAREMETAL_BINNAME}.elf ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.elf
+}
+
+do_image_complete(){
+    :
+}
+
+python do_rootfs(){
+    from oe.utils import execute_pre_post_process
+    from pathlib import Path
+
+    # Write empty manifest file to satisfy test infrastructure
+    deploy_dir = d.getVar('IMGDEPLOYDIR')
+    link_name = d.getVar('IMAGE_LINK_NAME')
+    manifest_name = d.getVar('IMAGE_MANIFEST')
+
+    Path(manifest_name).touch()
+    if os.path.exists(manifest_name) and link_name:
+        manifest_link = deploy_dir + "/" + link_name + ".manifest"
+        if manifest_link != manifest_name:
+            if os.path.lexists(manifest_link):
+                os.remove(manifest_link)
+            os.symlink(os.path.basename(manifest_name), manifest_link)
+    # A lot of postprocess commands assume the existence of rootfs/etc
+    sysconfdir = d.getVar("IMAGE_ROOTFS") + d.getVar('sysconfdir')
+    bb.utils.mkdirhier(sysconfdir)
+
+    execute_pre_post_process(d, d.getVar('ROOTFS_POSTPROCESS_COMMAND'))
+}
+
+
+# Ensure binaries, manifest and qemuboot config are populated in DEPLOY_DIR_IMAGE
+do_image_complete[dirs] = "${TOPDIR}"
+SSTATETASKS += "do_image_complete"
+SSTATE_SKIP_CREATION:task-image-complete = '1'
+do_image_complete[sstate-inputdirs] = "${IMGDEPLOYDIR}"
+do_image_complete[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
+do_image_complete[stamp-extra-info] = "${MACHINE_ARCH}"
+addtask do_image_complete after do_image before do_build
+
+python do_image_complete_setscene () {
+    sstate_setscene(d)
+}
+addtask do_image_complete_setscene
+
+# QEMU generic Baremetal/RTOS parameters
+QB_DEFAULT_KERNEL ?= "${IMAGE_LINK_NAME}.bin"
+QB_MEM ?= "-m 256"
+QB_DEFAULT_FSTYPE ?= "bin"
+QB_DTB ?= ""
+QB_OPT_APPEND:append = " -nographic"
+
+# RISC-V tunes set the BIOS; unset it here and instruct QEMU to
+# ignore the BIOS and boot from -kernel
+QB_DEFAULT_BIOS:qemuriscv64 = ""
+QB_DEFAULT_BIOS:qemuriscv32 = ""
+QB_OPT_APPEND:append:qemuriscv64 = " -bios none"
+QB_OPT_APPEND:append:qemuriscv32 = " -bios none"
+
+
+# Use the medium-any code model for the RISC-V 64 bit implementation,
+# since medlow can only access addresses below 0x80000000 and RAM
+# starts at 0x80000000 on RISC-V 64
+# Keep RISC-V 32 using -mcmodel=medlow (symbols lie between -2GB:2GB)
+CFLAGS:append:qemuriscv64 = " -mcmodel=medany"
+
+
+# This next part is necessary to trick the build system into thinking
+# it's building an image recipe so it generates the qemuboot.conf
+addtask do_rootfs before do_image after do_install
+addtask do_image after do_rootfs before do_image_complete
+addtask do_image_complete after do_image before do_build
+inherit qemuboot
+
+# Based on image.bbclass to make sure we build qemu
+python(){
+    # do_addto_recipe_sysroot doesn't exist for all recipes, but we need it to have
+    # /usr/bin on recipe-sysroot (qemu) populated
+    # The do_addto_recipe_sysroot dependency is coming from EXTRA_IMAGEDEPENDS now,
+    # we just need to add the logic to add its dependency to do_image.
+    def extraimage_getdepends(task):
+        deps = ""
+        for dep in (d.getVar('EXTRA_IMAGEDEPENDS') or "").split():
+        # Make sure we only add it for qemu
+            if 'qemu' in dep:
+                if ":" in dep:
+                    deps += " %s " % (dep)
+                else:
+                    deps += " %s:%s" % (dep, task)
+        return deps
+    d.appendVarFlag('do_image', 'depends', extraimage_getdepends('do_populate_sysroot')) 
+}
diff --git a/poky/meta/classes-recipe/bash-completion.bbclass b/poky/meta/classes-recipe/bash-completion.bbclass
new file mode 100644
index 0000000..b656e76
--- /dev/null
+++ b/poky/meta/classes-recipe/bash-completion.bbclass
@@ -0,0 +1,13 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
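+# Recipes that ship bash completion files just inherit this class; the
+# completions are then split out into the ${PN}-bash-completion package.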
+DEPENDS:append:class-target = " bash-completion"
+
+PACKAGES += "${PN}-bash-completion"
+
+FILES:${PN}-bash-completion = "${datadir}/bash-completion ${sysconfdir}/bash_completion.d"
+
+RDEPENDS:${PN}-bash-completion = "bash-completion"
diff --git a/poky/meta/classes-recipe/bin_package.bbclass b/poky/meta/classes-recipe/bin_package.bbclass
new file mode 100644
index 0000000..3a1befc
--- /dev/null
+++ b/poky/meta/classes-recipe/bin_package.bbclass
@@ -0,0 +1,42 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Common variable and task for the binary package recipe.
+# Basic principle:
+# * The files have been unpacked to ${S} by base.bbclass
+# * Skip do_configure and do_compile
+# * Use do_install to install the files to ${D}
+#
+# Note:
+# The "subdir" parameter in the SRC_URI is useful when the input package
+# is rpm, ipk, deb and so on, for example:
+#
+# SRC_URI = "http://foo.com/foo-1.0-r1.i586.rpm;subdir=foo-1.0"
+#
+# Then the files would be unpacked to ${WORKDIR}/foo-1.0, otherwise
+# they would be in ${WORKDIR}.
+#
+
+# Skip the unwanted steps
+do_configure[noexec] = "1"
+do_compile[noexec] = "1"
+
+# Install the files to ${D}
+bin_package_do_install () {
+    # Do it carefully
+    [ -d "${S}" ] || exit 1
+    if [ -z "$(ls -A ${S})" ]; then
+        bbfatal bin_package has nothing to install. Be sure the SRC_URI unpacks into S.
+    fi
+    cd ${S}
+    install -d ${D}${base_prefix}
+    tar --no-same-owner --exclude='./patches' --exclude='./.pc' -cpf - . \
+        | tar --no-same-owner -xpf - -C ${D}${base_prefix}
+}
+
+FILES:${PN} = "/"
+
+EXPORT_FUNCTIONS do_install
diff --git a/poky/meta/classes-recipe/binconfig-disabled.bbclass b/poky/meta/classes-recipe/binconfig-disabled.bbclass
new file mode 100644
index 0000000..cbe2078
--- /dev/null
+++ b/poky/meta/classes-recipe/binconfig-disabled.bbclass
@@ -0,0 +1,36 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# Class to disable binconfig files instead of installing them
+#
+
+# The list of scripts which should be disabled.
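+# A recipe using this class would set something like (illustrative):
+#   BINCONFIG = "${bindir}/foo-config"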
+BINCONFIG ?= ""
+
+FILES:${PN}-dev += "${bindir}/*-config"
+
+do_install:append () {
+	for x in ${BINCONFIG}; do
+		# Make the disabled script emit invalid parameters for those configure
+		# scripts which call it without checking the return code.
+		echo "#!/bin/sh" > ${D}$x
+		echo "echo 'ERROR: $x should not be used, use an alternative such as pkg-config' >&2" >> ${D}$x
+		echo "echo '--should-not-have-used-$x'" >> ${D}$x
+		echo "exit 1" >> ${D}$x
+		chmod +x ${D}$x
+	done
+}
+
+SYSROOT_PREPROCESS_FUNCS += "binconfig_disabled_sysroot_preprocess"
+
+binconfig_disabled_sysroot_preprocess () {
+	for x in ${BINCONFIG}; do
+		configname=`basename $x`
+		install -d ${SYSROOT_DESTDIR}${bindir_crossscripts}
+		install ${D}$x ${SYSROOT_DESTDIR}${bindir_crossscripts}
+	done
+}
diff --git a/poky/meta/classes-recipe/binconfig.bbclass b/poky/meta/classes-recipe/binconfig.bbclass
new file mode 100644
index 0000000..427dba7
--- /dev/null
+++ b/poky/meta/classes-recipe/binconfig.bbclass
@@ -0,0 +1,60 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+FILES:${PN}-dev += "${bindir}/*-config"
+ 
+# The namespaces can clash here, hence the two-step replace
+def get_binconfig_mangle(d):
+    s = "-e ''"
+    if not bb.data.inherits_class('native', d):
+        optional_quote = r"\(\"\?\)"
+        s += " -e 's:=%s${base_libdir}:=\\1OEBASELIBDIR:;'" % optional_quote
+        s += " -e 's:=%s${libdir}:=\\1OELIBDIR:;'" % optional_quote
+        s += " -e 's:=%s${includedir}:=\\1OEINCDIR:;'" % optional_quote
+        s += " -e 's:=%s${datadir}:=\\1OEDATADIR:'" % optional_quote
+        s += " -e 's:=%s${prefix}/:=\\1OEPREFIX/:'" % optional_quote
+        s += " -e 's:=%s${exec_prefix}/:=\\1OEEXECPREFIX/:'" % optional_quote
+        s += " -e 's:-L${libdir}:-LOELIBDIR:;'"
+        s += " -e 's:-I${includedir}:-IOEINCDIR:;'"
+        s += " -e 's:-L${WORKDIR}:-LOELIBDIR:'"
+        s += " -e 's:-I${WORKDIR}:-IOEINCDIR:'"
+        s += " -e 's:OEBASELIBDIR:${STAGING_BASELIBDIR}:;'"
+        s += " -e 's:OELIBDIR:${STAGING_LIBDIR}:;'"
+        s += " -e 's:OEINCDIR:${STAGING_INCDIR}:;'"
+        s += " -e 's:OEDATADIR:${STAGING_DATADIR}:'"
+        s += " -e 's:OEPREFIX:${STAGING_DIR_HOST}${prefix}:'"
+        s += " -e 's:OEEXECPREFIX:${STAGING_DIR_HOST}${exec_prefix}:'"
+        if d.getVar("OE_BINCONFIG_EXTRA_MANGLE", False):
+            s += d.getVar("OE_BINCONFIG_EXTRA_MANGLE")
+
+    return s
+
+BINCONFIG_GLOB ?= "*-config"
+
+PACKAGE_PREPROCESS_FUNCS += "binconfig_package_preprocess"
+
+binconfig_package_preprocess () {
+	for config in `find ${PKGD} -type f -name '${BINCONFIG_GLOB}'`; do
+		sed -i \
+		    -e 's:${STAGING_BASELIBDIR}:${base_libdir}:g;' \
+		    -e 's:${STAGING_LIBDIR}:${libdir}:g;' \
+		    -e 's:${STAGING_INCDIR}:${includedir}:g;' \
+		    -e 's:${STAGING_DATADIR}:${datadir}:' \
+		    -e 's:${STAGING_DIR_HOST}${prefix}:${prefix}:' \
+                    $config
+	done
+}
+
+SYSROOT_PREPROCESS_FUNCS += "binconfig_sysroot_preprocess"
+
+binconfig_sysroot_preprocess () {
+	for config in `find ${S} -type f -name '${BINCONFIG_GLOB}'` `find ${B} -type f -name '${BINCONFIG_GLOB}'`; do
+		configname=`basename $config`
+		install -d ${SYSROOT_DESTDIR}${bindir_crossscripts}
+		sed ${@get_binconfig_mangle(d)} $config > ${SYSROOT_DESTDIR}${bindir_crossscripts}/$configname
+		chmod u+x ${SYSROOT_DESTDIR}${bindir_crossscripts}/$configname
+	done
+}
diff --git a/poky/meta/classes-recipe/cargo.bbclass b/poky/meta/classes-recipe/cargo.bbclass
new file mode 100644
index 0000000..d1e8351
--- /dev/null
+++ b/poky/meta/classes-recipe/cargo.bbclass
@@ -0,0 +1,97 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+##
+## Purpose:
+## This class is used by any recipes that are built using
+## Cargo.
+
+inherit cargo_common
+inherit rust-target-config
+
+# the binary we will use
+CARGO = "cargo"
+
+# We need cargo to compile for the target
+BASEDEPENDS:append = " cargo-native"
+
+# Ensure we get the right rust variant
+DEPENDS:append:class-target = " rust-native ${RUSTLIB_DEP}"
+DEPENDS:append:class-nativesdk = " rust-native ${RUSTLIB_DEP}"
+DEPENDS:append:class-native = " rust-native"
+
+# Enable build separation
+B = "${WORKDIR}/build"
+
+# In case something fails in the build process, give a bit more feedback on
+# where the issue occurred
+export RUST_BACKTRACE = "1"
+
+# The directory of the Cargo.toml relative to the root directory; by default
+# assume there's a Cargo.toml directly in the root directory
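+# (e.g. a hypothetical recipe whose Cargo.toml lives in a "rust/" subdirectory
+# of ${S} would set CARGO_SRC_DIR = "rust")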
+CARGO_SRC_DIR ??= ""
+
+# The actual path to the Cargo.toml
+MANIFEST_PATH ??= "${S}/${CARGO_SRC_DIR}/Cargo.toml"
+
+RUSTFLAGS ??= ""
+BUILD_MODE = "${@['--release', ''][d.getVar('DEBUG_BUILD') == '1']}"
+CARGO_BUILD_FLAGS = "-v --target ${RUST_HOST_SYS} ${BUILD_MODE} --manifest-path=${MANIFEST_PATH}"
+
+# This is based on the content of CARGO_BUILD_FLAGS and generally will need to
+# change if CARGO_BUILD_FLAGS changes.
+BUILD_DIR = "${@['release', 'debug'][d.getVar('DEBUG_BUILD') == '1']}"
+CARGO_TARGET_SUBDIR="${RUST_HOST_SYS}/${BUILD_DIR}"
+oe_cargo_build () {
+	export RUSTFLAGS="${RUSTFLAGS}"
+	bbnote "Using rust targets from ${RUST_TARGET_PATH}"
+	bbnote "cargo = $(which ${CARGO})"
+	bbnote "rustc = $(which ${RUSTC})"
+	bbnote "${CARGO} build ${CARGO_BUILD_FLAGS} $@"
+	"${CARGO}" build ${CARGO_BUILD_FLAGS} "$@"
+}
+
+do_compile[progress] = "outof:\s+(\d+)/(\d+)"
+cargo_do_compile () {
+	oe_cargo_fix_env
+	oe_cargo_build
+}
+
+cargo_do_install () {
+	local have_installed=false
+	for tgt in "${B}/target/${CARGO_TARGET_SUBDIR}/"*; do
+		case $tgt in
+		*.so|*.rlib)
+			install -d "${D}${rustlibdir}"
+			install -m755 "$tgt" "${D}${rustlibdir}"
+			have_installed=true
+			;;
+		*examples)
+			if [ -d "$tgt" ]; then
+				for example in "$tgt/"*; do
+					if [ -f "$example" ] && [ -x "$example" ]; then
+						install -d "${D}${bindir}"
+						install -m755 "$example" "${D}${bindir}"
+						have_installed=true
+					fi
+				done
+			fi
+			;;
+		*)
+			if [ -f "$tgt" ] && [ -x "$tgt" ]; then
+				install -d "${D}${bindir}"
+				install -m755 "$tgt" "${D}${bindir}"
+				have_installed=true
+			fi
+			;;
+		esac
+	done
+	if ! $have_installed; then
+		die "Did not find anything to install"
+	fi
+}
+
+EXPORT_FUNCTIONS do_compile do_install
diff --git a/poky/meta/classes-recipe/cargo_common.bbclass b/poky/meta/classes-recipe/cargo_common.bbclass
new file mode 100644
index 0000000..dea0fbe
--- /dev/null
+++ b/poky/meta/classes-recipe/cargo_common.bbclass
@@ -0,0 +1,139 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+##
+## Purpose:
+## This class is to support building with cargo. It
+## must be different than cargo.bbclass because Rust
+## now builds with Cargo but cannot use cargo.bbclass
+## due to dependencies and assumptions in cargo.bbclass
+## that Rust & Cargo are already installed. So this
+## is used by cargo.bbclass and Rust
+##
+
+# add crate fetch support
+inherit rust-common
+
+# Where we download our registry and dependencies to
+export CARGO_HOME = "${WORKDIR}/cargo_home"
+
+# The pkg-config-rs library used by cargo build scripts disables itself when
+# cross compiling unless this is defined. We set up pkg-config appropriately
+# for cross compilation, so tell it we know better than it.
+export PKG_CONFIG_ALLOW_CROSS = "1"
+
+# Don't instruct cargo to use crates downloaded by bitbake. Some rust packages,
+# for example the rust compiler itself, come with their own vendored sources.
+# Specifying two [source.crates-io] will not work.
+CARGO_DISABLE_BITBAKE_VENDORING ?= "0"
+
+# Used by libstd-rs to point to the vendor dir included in rustc src
+CARGO_VENDORING_DIRECTORY ?= "${CARGO_HOME}/bitbake"
+
+CARGO_RUST_TARGET_CCLD ?= "${RUST_TARGET_CCLD}"
+cargo_common_do_configure () {
+	mkdir -p ${CARGO_HOME}/bitbake
+
+	cat <<- EOF > ${CARGO_HOME}/config
+	# EXTRA_OECARGO_PATHS
+	paths = [
+	$(for p in ${EXTRA_OECARGO_PATHS}; do echo \"$p\",; done)
+	]
+	EOF
+
+	cat <<- EOF >> ${CARGO_HOME}/config
+
+	# Local mirror vendored by bitbake
+	[source.bitbake]
+	directory = "${CARGO_VENDORING_DIRECTORY}"
+	EOF
+
+	if [ ${CARGO_DISABLE_BITBAKE_VENDORING} = "0" ]; then
+		cat <<- EOF >> ${CARGO_HOME}/config
+
+		[source.crates-io]
+		replace-with = "bitbake"
+		local-registry = "/nonexistant"
+		EOF
+	fi
+
+	cat <<- EOF >> ${CARGO_HOME}/config
+
+	[http]
+	# Multiplexing can't be enabled because http2 can't be enabled
+	# in curl-native without dependency loops
+	multiplexing = false
+
+	# Ignore the hard coded and incorrect path to certificates
+	cainfo = "${STAGING_ETCDIR_NATIVE}/ssl/certs/ca-certificates.crt"
+
+	EOF
+
+	cat <<- EOF >> ${CARGO_HOME}/config
+
+	# HOST_SYS
+	[target.${RUST_HOST_SYS}]
+	linker = "${CARGO_RUST_TARGET_CCLD}"
+	EOF
+
+	if [ "${RUST_HOST_SYS}" != "${RUST_BUILD_SYS}" ]; then
+		cat <<- EOF >> ${CARGO_HOME}/config
+
+		# BUILD_SYS
+		[target.${RUST_BUILD_SYS}]
+		linker = "${RUST_BUILD_CCLD}"
+		EOF
+	fi
+
+	if [ "${RUST_TARGET_SYS}" != "${RUST_BUILD_SYS}" -a "${RUST_TARGET_SYS}" != "${RUST_HOST_SYS}" ]; then
+		cat <<- EOF >> ${CARGO_HOME}/config
+
+		# TARGET_SYS
+		[target.${RUST_TARGET_SYS}]
+		linker = "${RUST_TARGET_CCLD}"
+		EOF
+	fi
+
+	# Put build output in build directory preferred by bitbake instead of
+	# inside source directory unless they are the same
+	if [ "${B}" != "${S}" ]; then
+		cat <<- EOF >> ${CARGO_HOME}/config
+
+		[build]
+		# Use out of tree build destination to avoid polluting the source tree
+		target-dir = "${B}/target"
+		EOF
+	fi
+
+	cat <<- EOF >> ${CARGO_HOME}/config
+
+	[term]
+	progress.when = 'always'
+	progress.width = 80
+	EOF
+}
+
+oe_cargo_fix_env () {
+	export CC="${RUST_TARGET_CC}"
+	export CXX="${RUST_TARGET_CXX}"
+	export CFLAGS="${CFLAGS}"
+	export CXXFLAGS="${CXXFLAGS}"
+	export AR="${AR}"
+	export TARGET_CC="${RUST_TARGET_CC}"
+	export TARGET_CXX="${RUST_TARGET_CXX}"
+	export TARGET_CFLAGS="${CFLAGS}"
+	export TARGET_CXXFLAGS="${CXXFLAGS}"
+	export TARGET_AR="${AR}"
+	export HOST_CC="${RUST_BUILD_CC}"
+	export HOST_CXX="${RUST_BUILD_CXX}"
+	export HOST_CFLAGS="${BUILD_CFLAGS}"
+	export HOST_CXXFLAGS="${BUILD_CXXFLAGS}"
+	export HOST_AR="${BUILD_AR}"
+}
+
+EXTRA_OECARGO_PATHS ??= ""
+
+EXPORT_FUNCTIONS do_configure
diff --git a/poky/meta/classes-recipe/cmake.bbclass b/poky/meta/classes-recipe/cmake.bbclass
new file mode 100644
index 0000000..554b948
--- /dev/null
+++ b/poky/meta/classes-recipe/cmake.bbclass
@@ -0,0 +1,223 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Path to the CMake file to process.
+OECMAKE_SOURCEPATH ??= "${S}"
+
+DEPENDS:prepend = "cmake-native "
+B = "${WORKDIR}/build"
+
+# What CMake generator to use.
+# The supported options are "Unix Makefiles" or "Ninja".
+OECMAKE_GENERATOR ?= "Ninja"
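+# Recipes that cannot build with Ninja can override this, e.g.:
+#   OECMAKE_GENERATOR = "Unix Makefiles"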
+
+python() {
+    generator = d.getVar("OECMAKE_GENERATOR")
+    if "Unix Makefiles" in generator:
+        args = "-G '" + generator +  "' -DCMAKE_MAKE_PROGRAM=" + d.getVar("MAKE")
+        d.setVar("OECMAKE_GENERATOR_ARGS", args)
+        d.setVarFlag("do_compile", "progress", "percent")
+    elif "Ninja" in generator:
+        args = "-G '" + generator + "' -DCMAKE_MAKE_PROGRAM=ninja"
+        d.appendVar("DEPENDS", " ninja-native")
+        d.setVar("OECMAKE_GENERATOR_ARGS", args)
+        d.setVarFlag("do_compile", "progress", r"outof:^\[(\d+)/(\d+)\]\s+")
+    else:
+        bb.fatal("Unknown CMake Generator %s" % generator)
+}
+OECMAKE_AR ?= "${AR}"
+
+# Compiler flags
+OECMAKE_C_FLAGS ?= "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS} ${CFLAGS}"
+OECMAKE_CXX_FLAGS ?= "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS} ${CXXFLAGS}"
+OECMAKE_C_FLAGS_RELEASE ?= "-DNDEBUG"
+OECMAKE_CXX_FLAGS_RELEASE ?= "-DNDEBUG"
+OECMAKE_C_LINK_FLAGS ?= "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS} ${CPPFLAGS} ${LDFLAGS}"
+OECMAKE_CXX_LINK_FLAGS ?= "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS} ${CXXFLAGS} ${LDFLAGS}"
+
+def oecmake_map_compiler(compiler, d):
+    args = d.getVar(compiler).split()
+    if args[0] == "ccache":
+        return args[1], args[0]
+    return args[0], ""
+
+# C/C++ Compiler (without cpu arch/tune arguments)
+OECMAKE_C_COMPILER ?= "${@oecmake_map_compiler('CC', d)[0]}"
+OECMAKE_C_COMPILER_LAUNCHER ?= "${@oecmake_map_compiler('CC', d)[1]}"
+OECMAKE_CXX_COMPILER ?= "${@oecmake_map_compiler('CXX', d)[0]}"
+OECMAKE_CXX_COMPILER_LAUNCHER ?= "${@oecmake_map_compiler('CXX', d)[1]}"
+
+# clear compiler vars for allarch to avoid sig hash difference
+OECMAKE_C_COMPILER_allarch = ""
+OECMAKE_C_COMPILER_LAUNCHER_allarch = ""
+OECMAKE_CXX_COMPILER_allarch = ""
+OECMAKE_CXX_COMPILER_LAUNCHER_allarch = ""
+
+OECMAKE_RPATH ?= ""
+OECMAKE_PERLNATIVE_DIR ??= ""
+OECMAKE_EXTRA_ROOT_PATH ?= ""
+
+OECMAKE_FIND_ROOT_PATH_MODE_PROGRAM = "ONLY"
+OECMAKE_FIND_ROOT_PATH_MODE_PROGRAM:class-native = "BOTH"
+
+EXTRA_OECMAKE:append = " ${PACKAGECONFIG_CONFARGS}"
+
+export CMAKE_BUILD_PARALLEL_LEVEL
+CMAKE_BUILD_PARALLEL_LEVEL:task-compile = "${@oe.utils.parallel_make(d, False)}"
+CMAKE_BUILD_PARALLEL_LEVEL:task-install = "${@oe.utils.parallel_make(d, True)}"
+
+OECMAKE_TARGET_COMPILE ?= "all"
+OECMAKE_TARGET_INSTALL ?= "install"
+
+def map_host_os_to_system_name(host_os):
+    if host_os.startswith('mingw'):
+        return 'Windows'
+    if host_os.startswith('linux'):
+        return 'Linux'
+    return host_os
+
+# CMake expects target architectures in the format of uname(2),
+# which do not always match TARGET_ARCH, so all the necessary
+# conversions should happen here.
+def map_host_arch_to_uname_arch(host_arch):
+    if host_arch == "powerpc":
+        return "ppc"
+    if host_arch == "powerpc64le":
+        return "ppc64le"
+    if host_arch == "powerpc64":
+        return "ppc64"
+    return host_arch
+
+cmake_do_generate_toolchain_file() {
+	if [ "${BUILD_SYS}" = "${HOST_SYS}" ]; then
+		cmake_crosscompiling="set( CMAKE_CROSSCOMPILING FALSE )"
+	fi
+	cat > ${WORKDIR}/toolchain.cmake <<EOF
+# CMake system name must be something like "Linux".
+# This is important for cross-compiling.
+$cmake_crosscompiling
+set( CMAKE_SYSTEM_NAME ${@map_host_os_to_system_name(d.getVar('HOST_OS'))} )
+set( CMAKE_SYSTEM_PROCESSOR ${@map_host_arch_to_uname_arch(d.getVar('HOST_ARCH'))} )
+set( CMAKE_C_COMPILER ${OECMAKE_C_COMPILER} )
+set( CMAKE_CXX_COMPILER ${OECMAKE_CXX_COMPILER} )
+set( CMAKE_C_COMPILER_LAUNCHER ${OECMAKE_C_COMPILER_LAUNCHER} )
+set( CMAKE_CXX_COMPILER_LAUNCHER ${OECMAKE_CXX_COMPILER_LAUNCHER} )
+set( CMAKE_ASM_COMPILER ${OECMAKE_C_COMPILER} )
+find_program( CMAKE_AR ${OECMAKE_AR} DOC "Archiver" REQUIRED )
+
+set( CMAKE_C_FLAGS "${OECMAKE_C_FLAGS}" CACHE STRING "CFLAGS" )
+set( CMAKE_CXX_FLAGS "${OECMAKE_CXX_FLAGS}" CACHE STRING "CXXFLAGS" )
+set( CMAKE_ASM_FLAGS "${OECMAKE_C_FLAGS}" CACHE STRING "ASM FLAGS" )
+set( CMAKE_C_FLAGS_RELEASE "${OECMAKE_C_FLAGS_RELEASE}" CACHE STRING "Additional CFLAGS for release" )
+set( CMAKE_CXX_FLAGS_RELEASE "${OECMAKE_CXX_FLAGS_RELEASE}" CACHE STRING "Additional CXXFLAGS for release" )
+set( CMAKE_ASM_FLAGS_RELEASE "${OECMAKE_C_FLAGS_RELEASE}" CACHE STRING "Additional ASM FLAGS for release" )
+set( CMAKE_C_LINK_FLAGS "${OECMAKE_C_LINK_FLAGS}" CACHE STRING "LDFLAGS" )
+set( CMAKE_CXX_LINK_FLAGS "${OECMAKE_CXX_LINK_FLAGS}" CACHE STRING "LDFLAGS" )
+
+# only search in the paths provided so cmake doesn't pick
+# up libraries and tools from the native build machine
+set( CMAKE_FIND_ROOT_PATH ${STAGING_DIR_HOST} ${STAGING_DIR_NATIVE} ${CROSS_DIR} ${OECMAKE_PERLNATIVE_DIR} ${OECMAKE_EXTRA_ROOT_PATH} ${EXTERNAL_TOOLCHAIN} ${HOSTTOOLS_DIR})
+set( CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY )
+set( CMAKE_FIND_ROOT_PATH_MODE_PROGRAM ${OECMAKE_FIND_ROOT_PATH_MODE_PROGRAM} )
+set( CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY )
+set( CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY )
+set( CMAKE_PROGRAM_PATH "/" )
+
+# Use qt.conf settings
+set( ENV{QT_CONF_PATH} ${WORKDIR}/qt.conf )
+
+# We need to set the rpath to the correct directory as cmake does not provide any
+# directory as rpath by default
+set( CMAKE_INSTALL_RPATH ${OECMAKE_RPATH} )
+
+# Use RPATHs relative to build directory for reproducibility
+set( CMAKE_BUILD_RPATH_USE_ORIGIN ON )
+
+# Use our cmake modules
+list(APPEND CMAKE_MODULE_PATH "${STAGING_DATADIR}/cmake/Modules/")
+
+# add for non /usr/lib libdir, e.g. /usr/lib64
+set( CMAKE_LIBRARY_PATH ${libdir} ${base_libdir})
+
+# add include dir to implicit includes in case it differs from /usr/include
+list(APPEND CMAKE_C_IMPLICIT_INCLUDE_DIRECTORIES ${includedir})
+list(APPEND CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES ${includedir})
+
+EOF
+}
+
+addtask generate_toolchain_file after do_patch before do_configure
+
+CONFIGURE_FILES = "CMakeLists.txt"
+
+do_configure[cleandirs] = "${@d.getVar('B') if d.getVar('S') != d.getVar('B') else ''}"
+
+cmake_do_configure() {
+	if [ "${OECMAKE_BUILDPATH}" ]; then
+		bbnote "cmake.bbclass no longer uses OECMAKE_BUILDPATH.  The default behaviour is now out-of-tree builds with B=WORKDIR/build."
+	fi
+
+	if [ "${S}" = "${B}" ]; then
+		find ${B} -name CMakeFiles -or -name Makefile -or -name cmake_install.cmake -or -name CMakeCache.txt -delete
+	fi
+
+	# Just like autotools, cmake can use a site file to cache results that need generated binaries to run
+	if [ -e ${WORKDIR}/site-file.cmake ] ; then
+		oecmake_sitefile="-C ${WORKDIR}/site-file.cmake"
+	else
+		oecmake_sitefile=
+	fi
+
+	cmake \
+	  ${OECMAKE_GENERATOR_ARGS} \
+	  $oecmake_sitefile \
+	  ${OECMAKE_SOURCEPATH} \
+	  -DCMAKE_INSTALL_PREFIX:PATH=${prefix} \
+	  -DCMAKE_INSTALL_BINDIR:PATH=${@os.path.relpath(d.getVar('bindir'), d.getVar('prefix') + '/')} \
+	  -DCMAKE_INSTALL_SBINDIR:PATH=${@os.path.relpath(d.getVar('sbindir'), d.getVar('prefix') + '/')} \
+	  -DCMAKE_INSTALL_LIBEXECDIR:PATH=${@os.path.relpath(d.getVar('libexecdir'), d.getVar('prefix') + '/')} \
+	  -DCMAKE_INSTALL_SYSCONFDIR:PATH=${sysconfdir} \
+	  -DCMAKE_INSTALL_SHAREDSTATEDIR:PATH=${@os.path.relpath(d.getVar('sharedstatedir'), d.getVar('prefix') + '/')} \
+	  -DCMAKE_INSTALL_LOCALSTATEDIR:PATH=${localstatedir} \
+	  -DCMAKE_INSTALL_LIBDIR:PATH=${@os.path.relpath(d.getVar('libdir'), d.getVar('prefix') + '/')} \
+	  -DCMAKE_INSTALL_INCLUDEDIR:PATH=${@os.path.relpath(d.getVar('includedir'), d.getVar('prefix') + '/')} \
+	  -DCMAKE_INSTALL_DATAROOTDIR:PATH=${@os.path.relpath(d.getVar('datadir'), d.getVar('prefix') + '/')} \
+	  -DPYTHON_EXECUTABLE:PATH=${PYTHON} \
+	  -DPython_EXECUTABLE:PATH=${PYTHON} \
+	  -DPython3_EXECUTABLE:PATH=${PYTHON} \
+	  -DLIB_SUFFIX=${@d.getVar('baselib').replace('lib', '')} \
+	  -DCMAKE_INSTALL_SO_NO_EXE=0 \
+	  -DCMAKE_TOOLCHAIN_FILE=${WORKDIR}/toolchain.cmake \
+	  -DCMAKE_NO_SYSTEM_FROM_IMPORTED=1 \
+	  -DCMAKE_EXPORT_NO_PACKAGE_REGISTRY=ON \
+	  -DFETCHCONTENT_FULLY_DISCONNECTED=ON \
+	  ${EXTRA_OECMAKE} \
+	  -Wno-dev
+}
+
+# To disable verbose cmake logs for a given recipe, or globally via config metadata (e.g. local.conf),
+# add the following:
+#
+# CMAKE_VERBOSE = ""
+#
+
+CMAKE_VERBOSE ??= "VERBOSE=1"
+
+# Then run do_compile again
+cmake_runcmake_build() {
+	bbnote ${DESTDIR:+DESTDIR=${DESTDIR} }${CMAKE_VERBOSE} cmake --build '${B}' "$@" -- ${EXTRA_OECMAKE_BUILD}
+	eval ${DESTDIR:+DESTDIR=${DESTDIR} }${CMAKE_VERBOSE} cmake --build '${B}' "$@" -- ${EXTRA_OECMAKE_BUILD}
+}
+
+cmake_do_compile()  {
+	cmake_runcmake_build --target ${OECMAKE_TARGET_COMPILE}
+}
+
+cmake_do_install() {
+	DESTDIR='${D}' cmake_runcmake_build --target ${OECMAKE_TARGET_INSTALL}
+}
+
+EXPORT_FUNCTIONS do_configure do_compile do_install do_generate_toolchain_file
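As a point of reference for the class above, a consuming recipe normally only inherits cmake and passes project-specific options through EXTRA_OECMAKE; generator selection, the toolchain file and the install path mapping are all handled by cmake_do_configure(). A minimal sketch follows (the source location and option name are invented for illustration, and the usual license/checksum metadata is omitted):

    SRC_URI = "git://example.com/hello-cmake.git;protocol=https;branch=main"
    SRCREV = "${AUTOREV}"
    S = "${WORKDIR}/git"

    inherit cmake

    # Passed straight through to the cmake invocation in cmake_do_configure()
    EXTRA_OECMAKE = "-DENABLE_EXAMPLE_FEATURE=ON"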
diff --git a/poky/meta/classes-recipe/cml1.bbclass b/poky/meta/classes-recipe/cml1.bbclass
new file mode 100644
index 0000000..b790913
--- /dev/null
+++ b/poky/meta/classes-recipe/cml1.bbclass
@@ -0,0 +1,107 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# returns all the elements from the src uri that are .cfg files
+def find_cfgs(d):
+    sources=src_patches(d, True)
+    sources_list=[]
+    for s in sources:
+        if s.endswith('.cfg'):
+            sources_list.append(s)
+
+    return sources_list
+
+cml1_do_configure() {
+	set -e
+	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
+	yes '' | oe_runmake oldconfig
+}
+
+EXPORT_FUNCTIONS do_configure
+addtask configure after do_unpack do_patch before do_compile
+
+inherit terminal
+
+OE_TERMINAL_EXPORTS += "HOST_EXTRACFLAGS HOSTLDFLAGS TERMINFO CROSS_CURSES_LIB CROSS_CURSES_INC"
+HOST_EXTRACFLAGS = "${BUILD_CFLAGS} ${BUILD_LDFLAGS}"
+HOSTLDFLAGS = "${BUILD_LDFLAGS}"
+CROSS_CURSES_LIB = "-lncurses -ltinfo"
+CROSS_CURSES_INC = '-DCURSES_LOC="<curses.h>"'
+TERMINFO = "${STAGING_DATADIR_NATIVE}/terminfo"
+
+KCONFIG_CONFIG_COMMAND ??= "menuconfig"
+KCONFIG_CONFIG_ROOTDIR ??= "${B}"
+python do_menuconfig() {
+    import shutil
+
+    config = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config")
+    configorig = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config.orig")
+
+    try:
+        mtime = os.path.getmtime(config)
+        shutil.copy(config, configorig)
+    except OSError:
+        mtime = 0
+
+    # setup native pkg-config variables (kconfig scripts call pkg-config directly, cannot generically be overridden to pkg-config-native)
+    d.setVar("PKG_CONFIG_DIR", "${STAGING_DIR_NATIVE}${libdir_native}/pkgconfig")
+    d.setVar("PKG_CONFIG_PATH", "${PKG_CONFIG_DIR}:${STAGING_DATADIR_NATIVE}/pkgconfig")
+    d.setVar("PKG_CONFIG_LIBDIR", "${PKG_CONFIG_DIR}")
+    d.setVarFlag("PKG_CONFIG_SYSROOT_DIR", "unexport", "1")
+    # ensure that environment variables are overwritten with this task's 'd' values
+    d.appendVar("OE_TERMINAL_EXPORTS", " PKG_CONFIG_DIR PKG_CONFIG_PATH PKG_CONFIG_LIBDIR PKG_CONFIG_SYSROOT_DIR")
+
+    oe_terminal("sh -c \"make %s; if [ \\$? -ne 0 ]; then echo 'Command failed.'; printf 'Press any key to continue... '; read r; fi\"" % d.getVar('KCONFIG_CONFIG_COMMAND'),
+                d.getVar('PN') + ' Configuration', d)
+
+    # FIXME this check can be removed when the minimum bitbake version has been bumped
+    if hasattr(bb.build, 'write_taint'):
+        try:
+            newmtime = os.path.getmtime(config)
+        except OSError:
+            newmtime = 0
+
+        if newmtime > mtime:
+            bb.note("Configuration changed, recompile will be forced")
+            bb.build.write_taint('do_compile', d)
+}
+do_menuconfig[depends] += "ncurses-native:do_populate_sysroot"
+do_menuconfig[nostamp] = "1"
+do_menuconfig[dirs] = "${KCONFIG_CONFIG_ROOTDIR}"
+addtask menuconfig after do_configure
+
+python do_diffconfig() {
+    import shutil
+    import subprocess
+
+    workdir = d.getVar('WORKDIR')
+    fragment = workdir + '/fragment.cfg'
+    configorig = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config.orig")
+    config = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config")
+
+    try:
+        md5newconfig = bb.utils.md5_file(configorig)
+        md5config = bb.utils.md5_file(config)
+        isdiff = md5newconfig != md5config
+    except IOError as e:
+        bb.fatal("No config files found. Did you do menuconfig ?\n%s" % e)
+
+    if isdiff:
+        statement = 'diff --unchanged-line-format= --old-line-format= --new-line-format="%L" ' + configorig + ' ' + config + '>' + fragment
+        subprocess.call(statement, shell=True)
+        # No need to check the exit code; diff returns non-zero when the
+        # files differ, which is exactly what we expect here.
+        shutil.copy(configorig, config)
+
+        bb.plain("Config fragment has been dumped into:\n %s" % fragment)
+    else:
+        if os.path.exists(fragment):
+            os.unlink(fragment)
+}
+
+do_diffconfig[nostamp] = "1"
+do_diffconfig[dirs] = "${KCONFIG_CONFIG_ROOTDIR}"
+addtask diffconfig
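For context, the two tasks above provide the usual kconfig workflow: do_menuconfig opens the configuration UI in a terminal and do_diffconfig captures the resulting delta as a fragment. Assuming a kconfig-based recipe such as linux-yocto, the flow looks roughly like:

    $ bitbake linux-yocto -c menuconfig    # edit .config under ${KCONFIG_CONFIG_ROOTDIR}
    $ bitbake linux-yocto -c diffconfig    # write the changes to ${WORKDIR}/fragment.cfg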
diff --git a/poky/meta/classes-recipe/compress_doc.bbclass b/poky/meta/classes-recipe/compress_doc.bbclass
new file mode 100644
index 0000000..d603caf
--- /dev/null
+++ b/poky/meta/classes-recipe/compress_doc.bbclass
@@ -0,0 +1,269 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Compress man pages in ${mandir} and info pages in ${infodir}
+#
+# 1. The doc will be compressed to gz format by default.
+#
+# 2. Docs already compressed in a format that is listed in ${DOC_COMPRESS_LIST}
+# but differs from ${DOC_COMPRESS} are automatically re-compressed to the
+# ${DOC_COMPRESS} format
+#
+# 3. It is easy to add a new compression type by editing
+# local.conf, such as:
+# DOC_COMPRESS_LIST:append = ' abc'
+# DOC_COMPRESS = 'abc'
+# DOC_COMPRESS_CMD[abc] = 'abc compress cmd ***'
+# DOC_DECOMPRESS_CMD[abc] = 'abc decompress cmd ***'
+
+# All supported compression policy
+DOC_COMPRESS_LIST ?= "gz xz bz2"
+
+# Compression policy, must be one of ${DOC_COMPRESS_LIST}
+DOC_COMPRESS ?= "gz"
+
+# Compression shell command
+DOC_COMPRESS_CMD[gz] ?= 'gzip -v -9 -n'
+DOC_COMPRESS_CMD[bz2] ?= "bzip2 -v -9"
+DOC_COMPRESS_CMD[xz] ?= "xz -v"
+
+# Decompression shell command
+DOC_DECOMPRESS_CMD[gz] ?= 'gunzip -v'
+DOC_DECOMPRESS_CMD[bz2] ?= "bunzip2 -v"
+DOC_DECOMPRESS_CMD[xz] ?= "unxz -v"
+
+PACKAGE_PREPROCESS_FUNCS += "package_do_compress_doc compress_doc_updatealternatives"
+python package_do_compress_doc() {
+    compress_mode = d.getVar('DOC_COMPRESS')
+    compress_list = (d.getVar('DOC_COMPRESS_LIST') or '').split()
+    if compress_mode not in compress_list:
+        bb.fatal('Compression policy %s not supported (not listed in %s)\n' % (compress_mode, compress_list))
+
+    dvar = d.getVar('PKGD')
+    compress_cmds = {}
+    decompress_cmds = {}
+    for mode in compress_list:
+        compress_cmds[mode] = d.getVarFlag('DOC_COMPRESS_CMD', mode)
+        decompress_cmds[mode] = d.getVarFlag('DOC_DECOMPRESS_CMD', mode)
+
+    mandir = os.path.abspath(dvar + os.sep + d.getVar("mandir"))
+    if os.path.exists(mandir):
+        # Decompress doc files whose format is not compress_mode
+        decompress_doc(mandir, compress_mode, decompress_cmds)
+        compress_doc(mandir, compress_mode, compress_cmds)
+
+    infodir = os.path.abspath(dvar + os.sep + d.getVar("infodir"))
+    if os.path.exists(infodir):
+        # Decompress doc files whose format is not compress_mode
+        decompress_doc(infodir, compress_mode, decompress_cmds)
+        compress_doc(infodir, compress_mode, compress_cmds)
+}
+
+def _get_compress_format(file, compress_format_list):
+    for compress_format in compress_format_list:
+        compress_suffix = '.' + compress_format
+        if file.endswith(compress_suffix):
+            return compress_format
+
+    return ''
+
+# Collect hardlinks into a dict; each entry in the dict lists hardlinks
+# which point to the same doc file, e.g.:
+# {hardlink10: [hardlink11, hardlink12], ...}
+# Here hardlink10, hardlink11 and hardlink12 are the same file.
+def _collect_hardlink(hardlink_dict, file):
+    for hardlink in hardlink_dict:
+        # Add to the existing hardlink entry
+        if os.path.samefile(hardlink, file):
+            hardlink_dict[hardlink].append(file)
+            return hardlink_dict
+
+    hardlink_dict[file] = []
+    return hardlink_dict
+
+def _process_hardlink(hardlink_dict, compress_mode, shell_cmds, decompress=False):
+    import subprocess
+    for target in hardlink_dict:
+        if decompress:
+            compress_format = _get_compress_format(target, shell_cmds.keys())
+            cmd = "%s -f %s" % (shell_cmds[compress_format], target)
+            bb.note('decompress hardlink %s' % target)
+        else:
+            cmd = "%s -f %s" % (shell_cmds[compress_mode], target)
+            bb.note('compress hardlink %s' % target)
+        (retval, output) = subprocess.getstatusoutput(cmd)
+        if retval:
+            bb.warn("de/compress file failed %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
+            return
+
+        for hardlink_dup in hardlink_dict[target]:
+            if decompress:
+                # Remove compress suffix
+                compress_suffix = '.' + compress_format
+                new_hardlink = hardlink_dup[:-len(compress_suffix)]
+                new_target = target[:-len(compress_suffix)]
+            else:
+                # Append compress suffix
+                compress_suffix = '.' + compress_mode
+                new_hardlink = hardlink_dup + compress_suffix
+                new_target = target + compress_suffix
+
+            bb.note('hardlink %s-->%s' % (new_hardlink, new_target))
+            if not os.path.exists(new_hardlink):
+                os.link(new_target, new_hardlink)
+            if os.path.exists(hardlink_dup):
+                os.unlink(hardlink_dup)
+
+def _process_symlink(file, compress_format, decompress=False):
+    compress_suffix = '.' + compress_format
+    if decompress:
+        # Remove compress suffix
+        new_linkname = file[:-len(compress_suffix)]
+        new_source = os.readlink(file)[:-len(compress_suffix)]
+    else:
+        # Append compress suffix
+        new_linkname = file + compress_suffix
+        new_source = os.readlink(file) + compress_suffix
+
+    bb.note('symlink %s-->%s' % (new_linkname, new_source))
+    if not os.path.exists(new_linkname):
+        os.symlink(new_source, new_linkname)
+
+    os.unlink(file)
+
+def _is_info(file):
+    flags = '.info .info-'.split()
+    for flag in flags:
+        if flag in os.path.basename(file):
+            return True
+
+    return False
+
+def _is_man(file):
+    import re
+
+    # This refers to the MANSECT variable in man(1.6g)'s man.config:
+    # ".1:.1p:.8:.2:.3:.3p:.4:.5:.6:.7:.9:.0p:.tcl:.n:.l:.p:.o"
+    # The name must not start with '.' and must contain one of the above colon-separated elements
+    p = re.compile(r'[^\.]+\.([1-9lnop]|0p|tcl)')
+    if p.search(file):
+        return True
+
+    return False
+
+def _is_compress_doc(file, compress_format_list):
+    compress_format = _get_compress_format(file, compress_format_list)
+    compress_suffix = '.' + compress_format
+    if file.endswith(compress_suffix):
+        # Remove the compress suffix
+        uncompress_file = file[:-len(compress_suffix)]
+        if _is_info(uncompress_file) or _is_man(uncompress_file):
+            return True, compress_format
+
+    return False, ''
+
+def compress_doc(topdir, compress_mode, compress_cmds):
+    import subprocess
+    hardlink_dict = {}
+    for root, dirs, files in os.walk(topdir):
+        for f in files:
+            file = os.path.join(root, f)
+            if os.path.isdir(file):
+                continue
+
+            if _is_info(file) or _is_man(file):
+                # Symlink
+                if os.path.islink(file):
+                    _process_symlink(file, compress_mode)
+                # Hardlink
+                elif os.lstat(file).st_nlink > 1:
+                    _collect_hardlink(hardlink_dict, file)
+                # Normal file
+                elif os.path.isfile(file):
+                    cmd = "%s %s" % (compress_cmds[compress_mode], file)
+                    (retval, output) = subprocess.getstatusoutput(cmd)
+                    if retval:
+                        bb.warn("compress failed %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
+                        continue
+                    bb.note('compress file %s' % file)
+
+    _process_hardlink(hardlink_dict, compress_mode, compress_cmds)
+
+# Decompress doc files whose format is not compress_mode
+def decompress_doc(topdir, compress_mode, decompress_cmds):
+    import subprocess
+    hardlink_dict = {}
+    decompress = True
+    for root, dirs, files in os.walk(topdir):
+        for f in files:
+            file = os.path.join(root, f)
+            if os.path.isdir(file):
+                continue
+
+            res, compress_format = _is_compress_doc(file, decompress_cmds.keys())
+            # Decompress files whose format is not compress_mode
+            if res and compress_mode!=compress_format:
+                # Symlink
+                if os.path.islink(file):
+                    _process_symlink(file, compress_format, decompress)
+                # Hardlink
+                elif os.lstat(file).st_nlink > 1:
+                    _collect_hardlink(hardlink_dict, file)
+                # Normal file
+                elif os.path.isfile(file):
+                    cmd = "%s %s" % (decompress_cmds[compress_format], file)
+                    (retval, output) = subprocess.getstatusoutput(cmd)
+                    if retval:
+                        bb.warn("decompress failed %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
+                        continue
+                    bb.note('decompress file %s' % file)
+
+    _process_hardlink(hardlink_dict, compress_mode, decompress_cmds, decompress)
+
+python compress_doc_updatealternatives () {
+    if not bb.data.inherits_class('update-alternatives', d):
+        return
+
+    mandir = d.getVar("mandir")
+    infodir = d.getVar("infodir")
+    compress_mode = d.getVar('DOC_COMPRESS')
+    for pkg in (d.getVar('PACKAGES') or "").split():
+        old_names = (d.getVar('ALTERNATIVE:%s' % pkg) or "").split()
+        new_names = []
+        for old_name in old_names:
+            old_link     = d.getVarFlag('ALTERNATIVE_LINK_NAME', old_name)
+            old_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, old_name) or \
+                d.getVarFlag('ALTERNATIVE_TARGET', old_name) or \
+                d.getVar('ALTERNATIVE_TARGET_%s' % pkg) or \
+                d.getVar('ALTERNATIVE_TARGET') or \
+                old_link
+            # Sometimes old_target is specified as relative to the link name.
+            old_target   = os.path.join(os.path.dirname(old_link), old_target)
+
+            # Adjust the update-alternatives entries that point at compressed docs
+            if mandir in old_target or infodir in old_target:
+                new_name = old_name + '.' + compress_mode
+                new_link = old_link + '.' + compress_mode
+                new_target = old_target + '.' + compress_mode
+                d.delVarFlag('ALTERNATIVE_LINK_NAME', old_name)
+                d.setVarFlag('ALTERNATIVE_LINK_NAME', new_name, new_link)
+                if d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, old_name):
+                    d.delVarFlag('ALTERNATIVE_TARGET_%s' % pkg, old_name)
+                    d.setVarFlag('ALTERNATIVE_TARGET_%s' % pkg, new_name, new_target)
+                elif d.getVarFlag('ALTERNATIVE_TARGET', old_name):
+                    d.delVarFlag('ALTERNATIVE_TARGET', old_name)
+                    d.setVarFlag('ALTERNATIVE_TARGET', new_name, new_target)
+                elif d.getVar('ALTERNATIVE_TARGET_%s' % pkg):
+                    d.setVar('ALTERNATIVE_TARGET_%s' % pkg, new_target)
+                elif d.getVar('ALTERNATIVE_TARGET'):
+                    d.setVar('ALTERNATIVE_TARGET', new_target)
+
+                new_names.append(new_name)
+
+        if new_names:
+            d.setVar('ALTERNATIVE:%s' % pkg, ' '.join(new_names))
+}
+
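As the header comments note, the policy is driven entirely by variables, so switching the default gz compression to xz from local.conf (or a distro config) is a one-line change; any new format added to DOC_COMPRESS_LIST also needs matching DOC_COMPRESS_CMD/DOC_DECOMPRESS_CMD flags as shown above.

    DOC_COMPRESS = "xz"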
diff --git a/poky/meta/classes-recipe/core-image.bbclass b/poky/meta/classes-recipe/core-image.bbclass
new file mode 100644
index 0000000..4b5f2c9
--- /dev/null
+++ b/poky/meta/classes-recipe/core-image.bbclass
@@ -0,0 +1,82 @@
+# Common code for generating core reference images
+#
+# Copyright (C) 2007-2011 Linux Foundation
+#
+# SPDX-License-Identifier: MIT
+
+# IMAGE_FEATURES control content of the core reference images
+# 
+# By default we install packagegroup-core-boot and packagegroup-base-extended packages;
+# this gives us a working (console-only) rootfs.
+#
+# Available IMAGE_FEATURES:
+#
+# - weston              - Weston Wayland compositor
+# - x11                 - X server
+# - x11-base            - X server with minimal environment
+# - x11-sato            - OpenedHand Sato environment
+# - tools-debug         - debugging tools
+# - eclipse-debug       - Eclipse remote debugging support
+# - tools-profile       - profiling tools
+# - tools-testapps      - tools usable to make some device tests
+# - tools-sdk           - SDK (C/C++ compiler, autotools, etc.)
+# - nfs-server          - NFS server
+# - nfs-client          - NFS client
+# - ssh-server-dropbear - SSH server (dropbear)
+# - ssh-server-openssh  - SSH server (openssh)
+# - hwcodecs            - Install hardware acceleration codecs
+# - package-management  - installs package management tools and preserves the package manager database
+# - debug-tweaks        - makes an image suitable for development, e.g. allowing passwordless root logins
+#   - empty-root-password
+#   - allow-empty-password
+#   - allow-root-login
+#   - post-install-logging
+# - serial-autologin-root - with 'empty-root-password': autologin 'root' on the serial console
+# - dev-pkgs            - development packages (headers, etc.) for all installed packages in the rootfs
+# - dbg-pkgs            - debug symbol packages for all installed packages in the rootfs
+# - lic-pkgs            - license packages for all installed packages in the rootfs, requires
+#                         LICENSE_CREATE_PACKAGE="1" to be set when building packages too
+# - doc-pkgs            - documentation packages for all installed packages in the rootfs
+# - bash-completion-pkgs - bash-completion packages for recipes using bash-completion bbclass
+# - ptest-pkgs          - ptest packages for all ptest-enabled recipes
+# - read-only-rootfs    - tweaks an image to support read-only rootfs
+# - stateless-rootfs    - systemctl-native not run, image populated by systemd at runtime
+# - splash              - bootup splash screen
+#
+FEATURE_PACKAGES_weston = "packagegroup-core-weston"
+FEATURE_PACKAGES_x11 = "packagegroup-core-x11"
+FEATURE_PACKAGES_x11-base = "packagegroup-core-x11-base"
+FEATURE_PACKAGES_x11-sato = "packagegroup-core-x11-sato"
+FEATURE_PACKAGES_tools-debug = "packagegroup-core-tools-debug"
+FEATURE_PACKAGES_eclipse-debug = "packagegroup-core-eclipse-debug"
+FEATURE_PACKAGES_tools-profile = "packagegroup-core-tools-profile"
+FEATURE_PACKAGES_tools-testapps = "packagegroup-core-tools-testapps"
+FEATURE_PACKAGES_tools-sdk = "packagegroup-core-sdk packagegroup-core-standalone-sdk-target"
+FEATURE_PACKAGES_nfs-server = "packagegroup-core-nfs-server"
+FEATURE_PACKAGES_nfs-client = "packagegroup-core-nfs-client"
+FEATURE_PACKAGES_ssh-server-dropbear = "packagegroup-core-ssh-dropbear"
+FEATURE_PACKAGES_ssh-server-openssh = "packagegroup-core-ssh-openssh"
+FEATURE_PACKAGES_hwcodecs = "${MACHINE_HWCODECS}"
+
+
+# IMAGE_FEATURES_REPLACES_foo = 'bar1 bar2'
+# Including image feature foo would replace the image features bar1 and bar2
+IMAGE_FEATURES_REPLACES_ssh-server-openssh = "ssh-server-dropbear"
+
+# IMAGE_FEATURES_CONFLICTS_foo = 'bar1 bar2'
+# An error exception would be raised if both image features foo and bar1(or bar2) are included
+
+MACHINE_HWCODECS ??= ""
+
+CORE_IMAGE_BASE_INSTALL = '\
+    packagegroup-core-boot \
+    packagegroup-base-extended \
+    \
+    ${CORE_IMAGE_EXTRA_INSTALL} \
+    '
+
+CORE_IMAGE_EXTRA_INSTALL ?= ""
+
+IMAGE_INSTALL ?= "${CORE_IMAGE_BASE_INSTALL}"
+
+inherit image
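A minimal image recipe built on this class only needs to inherit it and pick features; the packagegroups above are pulled in through CORE_IMAGE_BASE_INSTALL. Illustrative sketch (the image summary and feature choice are examples, not taken from this diff):

    SUMMARY = "Console-only image with an SSH server"
    IMAGE_FEATURES += "ssh-server-dropbear"

    inherit core-image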
diff --git a/poky/meta/classes-recipe/cpan-base.bbclass b/poky/meta/classes-recipe/cpan-base.bbclass
new file mode 100644
index 0000000..1db0a4d
--- /dev/null
+++ b/poky/meta/classes-recipe/cpan-base.bbclass
@@ -0,0 +1,33 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# cpan-base provides various perl-related information needed for building
+# cpan modules
+#
+FILES:${PN} += "${libdir}/perl5 ${datadir}/perl5"
+
+DEPENDS  += "${@["perl", "perl-native"][(bb.data.inherits_class('native', d))]}"
+RDEPENDS:${PN} += "${@["perl", ""][(bb.data.inherits_class('native', d))]}"
+
+inherit perl-version
+
+def is_target(d):
+    if not bb.data.inherits_class('native', d):
+        return "yes"
+    return "no"
+
+PERLLIBDIRS = "${libdir}/perl5"
+PERLLIBDIRS:class-native = "${libdir}/perl5"
+
+def cpan_upstream_check_pattern(d):
+    for x in (d.getVar('SRC_URI') or '').split(' '):
+        if x.startswith("https://cpan.metacpan.org"):
+            _pattern = x.split('/')[-1].replace(d.getVar('PV'), r'(?P<pver>\d+.\d+)')
+            return _pattern
+    return ''
+
+UPSTREAM_CHECK_REGEX ?= "${@cpan_upstream_check_pattern(d)}"
diff --git a/poky/meta/classes-recipe/cpan.bbclass b/poky/meta/classes-recipe/cpan.bbclass
new file mode 100644
index 0000000..bb76a5b
--- /dev/null
+++ b/poky/meta/classes-recipe/cpan.bbclass
@@ -0,0 +1,71 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# This is for perl modules that use the old Makefile.PL build system
+#
+inherit cpan-base perlnative
+
+EXTRA_CPANFLAGS ?= ""
+EXTRA_PERLFLAGS ?= ""
+
+# Env var which tells perl if it should use host (no) or target (yes) settings
+export PERLCONFIGTARGET = "${@is_target(d)}"
+
+# Env var which tells perl where the perl include files are
+export PERL_INC = "${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/${@get_perl_version(d)}/${@get_perl_arch(d)}/CORE"
+export PERL_LIB = "${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/${@get_perl_version(d)}"
+export PERL_ARCHLIB = "${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/${@get_perl_version(d)}/${@get_perl_arch(d)}"
+export PERLHOSTLIB = "${STAGING_LIBDIR_NATIVE}/perl5/${@get_perl_version(d)}/"
+export PERLHOSTARCHLIB = "${STAGING_LIBDIR_NATIVE}/perl5/${@get_perl_version(d)}/${@get_perl_hostarch(d)}/"
+
+cpan_do_configure () {
+	yes '' | perl ${EXTRA_PERLFLAGS} Makefile.PL INSTALLDIRS=vendor NO_PERLLOCAL=1 NO_PACKLIST=1 PERL=$(which perl) ${EXTRA_CPANFLAGS}
+
+	# Makefile.PLs can exit with success without generating a
+	# Makefile, e.g. in cases of missing configure time
+	# dependencies. This is considered a best practice by
+	# cpantesters.org. See:
+	#  * http://wiki.cpantesters.org/wiki/CPANAuthorNotes
+	#  * http://www.nntp.perl.org/group/perl.qa/2008/08/msg11236.html
+	[ -e Makefile ] || bbfatal "No Makefile was generated by Makefile.PL"
+
+	if [ "${BUILD_SYS}" != "${HOST_SYS}" ]; then
+		. ${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/config.sh
+		# Use find since there can be a Makefile generated for each Makefile.PL
+		for f in `find -name Makefile.PL`; do
+			f2=`echo $f | sed -e 's/.PL//'`
+			test -f $f2 || continue
+			sed -i -e "s:\(PERL_ARCHLIB = \).*:\1${PERL_ARCHLIB}:" \
+				-e 's/perl.real/perl/' \
+				-e "s|^\(CCFLAGS =.*\)|\1 ${CFLAGS}|" \
+				$f2
+		done
+	fi
+}
+
+do_configure:append:class-target() {
+       find . -name Makefile | xargs sed -E -i \
+           -e 's:LD_RUN_PATH ?= ?"?[^"]*"?::g'
+}
+
+do_configure:append:class-nativesdk() {
+       find . -name Makefile | xargs sed -E -i \
+           -e 's:LD_RUN_PATH ?= ?"?[^"]*"?::g'
+}
+
+cpan_do_compile () {
+	oe_runmake PASTHRU_INC="${CFLAGS}" LD="${CCLD}"
+}
+
+cpan_do_install () {
+	oe_runmake DESTDIR="${D}" install_vendor
+	for PERLSCRIPT in `grep -rIEl '#! *${bindir}/perl-native.*/perl' ${D}`; do
+		sed -i -e 's|${bindir}/perl-native.*/perl|/usr/bin/env nativeperl|' $PERLSCRIPT
+	done
+}
+
+EXPORT_FUNCTIONS do_configure do_compile do_install
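A typical consumer of this class is a recipe for a Makefile.PL-based CPAN distribution; the module name below is made up purely for illustration (license/checksum metadata omitted):

    SUMMARY = "Example Perl module built with Makefile.PL"
    SRC_URI = "https://cpan.metacpan.org/authors/id/E/EX/EXAMPLE/Foo-Bar-${PV}.tar.gz"
    S = "${WORKDIR}/Foo-Bar-${PV}"

    inherit cpan

    BBCLASSEXTEND = "native"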
diff --git a/poky/meta/classes-recipe/cpan_build.bbclass b/poky/meta/classes-recipe/cpan_build.bbclass
new file mode 100644
index 0000000..026859b
--- /dev/null
+++ b/poky/meta/classes-recipe/cpan_build.bbclass
@@ -0,0 +1,47 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# This is for perl modules that use the new Build.PL build system
+#
+inherit cpan-base perlnative
+
+EXTRA_CPAN_BUILD_FLAGS ?= ""
+
+# Env var which tells perl if it should use host (no) or target (yes) settings
+export PERLCONFIGTARGET = "${@is_target(d)}"
+export PERL_ARCHLIB = "${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/${@get_perl_version(d)}/${@get_perl_arch(d)}"
+export PERLHOSTLIB = "${STAGING_LIBDIR_NATIVE}/perl5/${@get_perl_version(d)}/"
+export PERLHOSTARCHLIB = "${STAGING_LIBDIR_NATIVE}/perl5/${@get_perl_version(d)}/${@get_perl_hostarch(d)}/"
+export LD = "${CCLD}"
+
+cpan_build_do_configure () {
+	if [ "${@is_target(d)}" = "yes" ]; then
+		# build for target
+		. ${STAGING_LIBDIR}/perl5/config.sh
+	fi
+
+	perl Build.PL --installdirs vendor --destdir ${D} \
+			${EXTRA_CPAN_BUILD_FLAGS}
+
+	# Build.PLs can exit with success without generating a
+	# Build, e.g. in cases of missing configure time
+	# dependencies. This is considered a best practice by
+	# cpantesters.org. See:
+	#  * http://wiki.cpantesters.org/wiki/CPANAuthorNotes
+	#  * http://www.nntp.perl.org/group/perl.qa/2008/08/msg11236.html
+	[ -e Build ] || bbfatal "No Build was generated by Build.PL"
+}
+
+cpan_build_do_compile () {
+        perl Build --perl "${bindir}/perl" verbose=1
+}
+
+cpan_build_do_install () {
+	perl Build install --destdir ${D}
+}
+
+EXPORT_FUNCTIONS do_configure do_compile do_install
diff --git a/poky/meta/classes-recipe/cross-canadian.bbclass b/poky/meta/classes-recipe/cross-canadian.bbclass
new file mode 100644
index 0000000..1670217
--- /dev/null
+++ b/poky/meta/classes-recipe/cross-canadian.bbclass
@@ -0,0 +1,200 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+# NOTE - When using this class the user is responsible for ensuring that
+# TRANSLATED_TARGET_ARCH is added into PN. This ensures that if the TARGET_ARCH
+# is changed, another nativesdk xxx-canadian-cross can be installed
+#
+
+
+# SDK packages are built either explicitly by the user,
+# or indirectly via dependency.  No need to be in 'world'.
+EXCLUDE_FROM_WORLD = "1"
+NATIVESDKLIBC ?= "libc-glibc"
+LIBCOVERRIDE = ":${NATIVESDKLIBC}"
+CLASSOVERRIDE = "class-cross-canadian"
+STAGING_BINDIR_TOOLCHAIN = "${STAGING_DIR_NATIVE}${bindir_native}/${SDK_ARCH}${SDK_VENDOR}-${SDK_OS}:${STAGING_DIR_NATIVE}${bindir_native}/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
+
+#
+# Update BASE_PACKAGE_ARCH and PACKAGE_ARCHS
+#
+PACKAGE_ARCH = "${SDK_ARCH}-${SDKPKGSUFFIX}"
+BASECANADIANEXTRAOS ?= "linux-musl"
+CANADIANEXTRAOS = "${BASECANADIANEXTRAOS}"
+CANADIANEXTRAVENDOR = ""
+MODIFYTOS ??= "1"
+python () {
+    archs = d.getVar('PACKAGE_ARCHS').split()
+    sdkarchs = []
+    for arch in archs:
+        sdkarchs.append(arch + '-${SDKPKGSUFFIX}')
+    d.setVar('PACKAGE_ARCHS', " ".join(sdkarchs))
+
+    # Allow the following code segment to be disabled, e.g. meta-environment
+    if d.getVar("MODIFYTOS") != "1":
+        return
+
+    if d.getVar("TCLIBC") in [ 'baremetal', 'newlib' ]:
+        return
+
+    tos = d.getVar("TARGET_OS")
+    tos_known = ["mingw32"]
+    extralibcs = [""]
+    if "musl" in d.getVar("BASECANADIANEXTRAOS"):
+        extralibcs.append("musl")
+    if "android" in tos:
+        extralibcs.append("android")
+    for variant in ["", "spe", "x32", "eabi", "n32", "_ilp32"]:
+        for libc in extralibcs:
+            entry = "linux"
+            if variant and libc:
+                entry = entry + "-" + libc + variant
+            elif variant:
+                entry = entry + "-gnu" + variant
+            elif libc:
+                entry = entry + "-" + libc
+            tos_known.append(entry)
+    if tos not in tos_known:
+        bb.fatal("Building cross-candian for an unknown TARGET_SYS (%s), please update cross-canadian.bbclass" % d.getVar("TARGET_SYS"))
+
+    for n in ["PROVIDES", "DEPENDS"]:
+        d.setVar(n, d.getVar(n))
+    d.setVar("STAGING_BINDIR_TOOLCHAIN", d.getVar("STAGING_BINDIR_TOOLCHAIN"))
+    for prefix in ["AR", "AS", "DLLTOOL", "CC", "CXX", "GCC", "LD", "LIPO", "NM", "OBJDUMP", "RANLIB", "STRIP", "WINDRES"]:
+        n = prefix + "_FOR_TARGET"
+        d.setVar(n, d.getVar(n))
+    # This is a bit ugly. We need to zero LIBC/ABI extension which will change TARGET_OS
+    # however we need the old value in some variables. We expand those here first.
+    tarch = d.getVar("TARGET_ARCH")
+    if tarch == "x86_64":
+        d.setVar("LIBCEXTENSION", "")
+        d.setVar("ABIEXTENSION", "")
+        d.appendVar("CANADIANEXTRAOS", " linux-gnux32")
+        for extraos in d.getVar("BASECANADIANEXTRAOS").split():
+            d.appendVar("CANADIANEXTRAOS", " " + extraos + "x32")
+    elif tarch == "powerpc":
+        # PowerPC can build "linux" and "linux-gnuspe"
+        d.setVar("LIBCEXTENSION", "")
+        d.setVar("ABIEXTENSION", "")
+        d.appendVar("CANADIANEXTRAOS", " linux-gnuspe")
+        for extraos in d.getVar("BASECANADIANEXTRAOS").split():
+            d.appendVar("CANADIANEXTRAOS", " " + extraos + "spe")
+    elif tarch == "mips64":
+        d.appendVar("CANADIANEXTRAOS", " linux-gnun32")
+        for extraos in d.getVar("BASECANADIANEXTRAOS").split():
+            d.appendVar("CANADIANEXTRAOS", " " + extraos + "n32")
+    if tarch == "arm" or tarch == "armeb":
+        d.appendVar("CANADIANEXTRAOS", " linux-gnueabi linux-musleabi")
+        d.setVar("TARGET_OS", "linux-gnueabi")
+    else:
+        d.setVar("TARGET_OS", "linux")
+
+    # Also need to handle multilib target vendors
+    vendors = d.getVar("CANADIANEXTRAVENDOR")
+    if not vendors:
+        vendors = all_multilib_tune_values(d, 'TARGET_VENDOR')
+    origvendor = d.getVar("TARGET_VENDOR_MULTILIB_ORIGINAL")
+    if origvendor:
+        d.setVar("TARGET_VENDOR", origvendor)
+        if origvendor not in vendors.split():
+            vendors = origvendor + " " + vendors
+    d.setVar("CANADIANEXTRAVENDOR", vendors)
+}
+MULTIMACH_TARGET_SYS = "${PACKAGE_ARCH}${HOST_VENDOR}-${HOST_OS}"
+
+INHIBIT_DEFAULT_DEPS = "1"
+
+STAGING_DIR_HOST = "${RECIPE_SYSROOT}"
+
+TOOLCHAIN_OPTIONS = " --sysroot=${RECIPE_SYSROOT}"
+
+PATH:append = ":${TMPDIR}/sysroots/${HOST_ARCH}/${bindir_cross}"
+PKGHIST_DIR = "${TMPDIR}/pkghistory/${HOST_ARCH}-${SDKPKGSUFFIX}${HOST_VENDOR}-${HOST_OS}/"
+
+HOST_ARCH = "${SDK_ARCH}"
+HOST_VENDOR = "${SDK_VENDOR}"
+HOST_OS = "${SDK_OS}"
+HOST_PREFIX = "${SDK_PREFIX}"
+HOST_CC_ARCH = "${SDK_CC_ARCH}"
+HOST_LD_ARCH = "${SDK_LD_ARCH}"
+HOST_AS_ARCH = "${SDK_AS_ARCH}"
+
+#assign DPKG_ARCH
+DPKG_ARCH = "${@debian_arch_map(d.getVar('SDK_ARCH'), '')}"
+
+CPPFLAGS = "${BUILDSDK_CPPFLAGS}"
+CFLAGS = "${BUILDSDK_CFLAGS}"
+CXXFLAGS = "${BUILDSDK_CFLAGS}"
+LDFLAGS = "${BUILDSDK_LDFLAGS} \
+           -Wl,-rpath-link,${STAGING_LIBDIR}/.. \
+           -Wl,-rpath,${libdir}/.. "
+
+#
+# We need chrpath >= 0.14 to ensure we can deal with 32 and 64 bit
+# binaries
+#
+DEPENDS:append = " chrpath-replacement-native"
+EXTRANATIVEPATH += "chrpath-native"
+
+# Path mangling needed by the cross packaging
+# Note that we use := here to ensure that libdir and includedir are
+# target paths.
+target_base_prefix := "${base_prefix}"
+target_prefix := "${prefix}"
+target_exec_prefix := "${exec_prefix}"
+target_base_libdir = "${target_base_prefix}/${baselib}"
+target_libdir = "${target_exec_prefix}/${baselib}"
+target_includedir := "${includedir}"
+
+# Change to place files in SDKPATH
+base_prefix = "${SDKPATHNATIVE}"
+prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
+exec_prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
+bindir = "${exec_prefix}/bin/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
+sbindir = "${bindir}"
+base_bindir = "${bindir}"
+base_sbindir = "${bindir}"
+libdir = "${exec_prefix}/lib/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
+libexecdir = "${exec_prefix}/libexec/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
+
+FILES:${PN} = "${prefix}"
+
+export PKG_CONFIG_DIR = "${STAGING_DIR_HOST}${exec_prefix}/lib/pkgconfig"
+export PKG_CONFIG_SYSROOT_DIR = "${STAGING_DIR_HOST}"
+
+do_populate_sysroot[stamp-extra-info] = ""
+do_packagedata[stamp-extra-info] = ""
+
+USE_NLS = "${SDKUSE_NLS}"
+
+# We have to use TARGET_ARCH but we care about the absolute value
+# and not any particular tune that is enabled.
+TARGET_ARCH[vardepsexclude] = "TUNE_ARCH"
+
+PKGDATA_DIR = "${PKGDATA_DIR_SDK}"
+# If MLPREFIX is set by multilib code, shlibs
+# points to the wrong place so force it
+SHLIBSDIRS = "${PKGDATA_DIR}/nativesdk-shlibs2"
+SHLIBSWORKDIR = "${PKGDATA_DIR}/nativesdk-shlibs2"
+
+cross_canadian_bindirlinks () {
+	for i in linux ${CANADIANEXTRAOS}
+	do
+		for v in ${CANADIANEXTRAVENDOR}
+		do
+			d=${D}${bindir}/../${TARGET_ARCH}$v-$i
+			if [ -d $d ];
+			then
+			    continue
+			fi
+			install -d $d
+			for j in `ls ${D}${bindir}`
+			do
+				p=${TARGET_ARCH}$v-$i-`echo $j | sed -e s,${TARGET_PREFIX},,`
+				ln -s ../${TARGET_SYS}/$j $d/$p
+			done
+		done
+	done
+}
diff --git a/poky/meta/classes-recipe/cross.bbclass b/poky/meta/classes-recipe/cross.bbclass
new file mode 100644
index 0000000..93de9a5
--- /dev/null
+++ b/poky/meta/classes-recipe/cross.bbclass
@@ -0,0 +1,103 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit relocatable
+
+# Cross packages are built indirectly via dependency,
+# no need for them to be a direct target of 'world'
+EXCLUDE_FROM_WORLD = "1"
+
+CLASSOVERRIDE = "class-cross"
+PACKAGES = ""
+PACKAGES_DYNAMIC = ""
+PACKAGES_DYNAMIC:class-native = ""
+
+HOST_ARCH = "${BUILD_ARCH}"
+HOST_VENDOR = "${BUILD_VENDOR}"
+HOST_OS = "${BUILD_OS}"
+HOST_PREFIX = "${BUILD_PREFIX}"
+HOST_CC_ARCH = "${BUILD_CC_ARCH}"
+HOST_LD_ARCH = "${BUILD_LD_ARCH}"
+HOST_AS_ARCH = "${BUILD_AS_ARCH}"
+
+# No strip sysroot when DEBUG_BUILD is enabled
+INHIBIT_SYSROOT_STRIP ?= "${@oe.utils.vartrue('DEBUG_BUILD', '1', '', d)}"
+
+export lt_cv_sys_lib_dlsearch_path_spec = "${libdir} ${base_libdir} /lib /lib64 /usr/lib /usr/lib64"
+
+STAGING_DIR_HOST = "${RECIPE_SYSROOT_NATIVE}"
+
+PACKAGE_ARCH = "${BUILD_ARCH}"
+
+MULTIMACH_TARGET_SYS = "${BUILD_ARCH}${BUILD_VENDOR}-${BUILD_OS}"
+
+export PKG_CONFIG_DIR = "${exec_prefix}/lib/pkgconfig"
+export PKG_CONFIG_SYSROOT_DIR = ""
+
+TARGET_CPPFLAGS = ""
+TARGET_CFLAGS = ""
+TARGET_CXXFLAGS = ""
+TARGET_LDFLAGS = ""
+
+CPPFLAGS = "${BUILD_CPPFLAGS}"
+CFLAGS = "${BUILD_CFLAGS}"
+CXXFLAGS = "${BUILD_CFLAGS}"
+LDFLAGS = "${BUILD_LDFLAGS}"
+
+TOOLCHAIN_OPTIONS = ""
+
+# This class encodes staging paths into its scripts data so it can only be
+# reused if we manipulate the paths.
+SSTATE_SCAN_CMD ?= "${SSTATE_SCAN_CMD_NATIVE}"
+
+# Path mangling needed by the cross packaging
+# Note that we use := here to ensure that libdir and includedir are
+# target paths.
+target_base_prefix := "${root_prefix}"
+target_prefix := "${prefix}"
+target_exec_prefix := "${exec_prefix}"
+target_base_libdir = "${target_base_prefix}/${baselib}"
+target_libdir = "${target_exec_prefix}/${baselib}"
+target_includedir := "${includedir}"
+
+# Overrides for paths
+CROSS_TARGET_SYS_DIR = "${TARGET_SYS}"
+prefix = "${STAGING_DIR_NATIVE}${prefix_native}"
+base_prefix = "${STAGING_DIR_NATIVE}"
+exec_prefix = "${STAGING_DIR_NATIVE}${prefix_native}"
+bindir = "${exec_prefix}/bin/${CROSS_TARGET_SYS_DIR}"
+sbindir = "${bindir}"
+base_bindir = "${bindir}"
+base_sbindir = "${bindir}"
+libdir = "${exec_prefix}/lib/${CROSS_TARGET_SYS_DIR}"
+libexecdir = "${exec_prefix}/libexec/${CROSS_TARGET_SYS_DIR}"
+
+do_populate_sysroot[sstate-inputdirs] = "${SYSROOT_DESTDIR}/${STAGING_DIR_NATIVE}/"
+do_packagedata[stamp-extra-info] = ""
+
+USE_NLS = "no"
+
+export CC = "${BUILD_CC}"
+export CXX = "${BUILD_CXX}"
+export FC = "${BUILD_FC}"
+export CPP = "${BUILD_CPP}"
+export LD = "${BUILD_LD}"
+export CCLD = "${BUILD_CCLD}"
+export AR = "${BUILD_AR}"
+export AS = "${BUILD_AS}"
+export RANLIB = "${BUILD_RANLIB}"
+export STRIP = "${BUILD_STRIP}"
+export NM = "${BUILD_NM}"
+
+inherit nopackages
+
+python do_addto_recipe_sysroot () {
+    bb.build.exec_func("extend_recipe_sysroot", d)
+}
+addtask addto_recipe_sysroot after do_populate_sysroot
+do_addto_recipe_sysroot[deptask] = "do_populate_sysroot"
+
+PATH:prepend = "${COREBASE}/scripts/cross-intercept:"
diff --git a/poky/meta/classes-recipe/crosssdk.bbclass b/poky/meta/classes-recipe/crosssdk.bbclass
new file mode 100644
index 0000000..824b1bc
--- /dev/null
+++ b/poky/meta/classes-recipe/crosssdk.bbclass
@@ -0,0 +1,57 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit cross
+
+CLASSOVERRIDE = "class-crosssdk"
+NATIVESDKLIBC ?= "libc-glibc"
+LIBCOVERRIDE = ":${NATIVESDKLIBC}"
+MACHINEOVERRIDES = ""
+PACKAGE_ARCH = "${SDK_ARCH}"
+
+python () {
+    # set TUNE_PKGARCH to SDK_ARCH
+    d.setVar('TUNE_PKGARCH', d.getVar('SDK_ARCH'))
+    # Set features here to prevent appends and distro features backfill
+    # from modifying nativesdk distro features
+    features = set(d.getVar("DISTRO_FEATURES_NATIVESDK").split())
+    filtered = set(bb.utils.filter("DISTRO_FEATURES", d.getVar("DISTRO_FEATURES_FILTER_NATIVESDK"), d).split())
+    d.setVar("DISTRO_FEATURES", " ".join(sorted(features | filtered)))
+}
+
+STAGING_BINDIR_TOOLCHAIN = "${STAGING_DIR_NATIVE}${bindir_native}/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
+
+# This class encodes staging paths into its scripts data so it can only be
+# reused if we manipulate the paths.
+SSTATE_SCAN_CMD ?= "${SSTATE_SCAN_CMD_NATIVE}"
+
+TARGET_ARCH = "${SDK_ARCH}"
+TARGET_VENDOR = "${SDK_VENDOR}"
+TARGET_OS = "${SDK_OS}"
+TARGET_PREFIX = "${SDK_PREFIX}"
+TARGET_CC_ARCH = "${SDK_CC_ARCH}"
+TARGET_LD_ARCH = "${SDK_LD_ARCH}"
+TARGET_AS_ARCH = "${SDK_AS_ARCH}"
+TARGET_CPPFLAGS = ""
+TARGET_CFLAGS = ""
+TARGET_CXXFLAGS = ""
+TARGET_LDFLAGS = ""
+TARGET_FPU = ""
+
+
+target_libdir = "${SDKPATHNATIVE}${libdir_nativesdk}"
+target_includedir = "${SDKPATHNATIVE}${includedir_nativesdk}"
+target_base_libdir = "${SDKPATHNATIVE}${base_libdir_nativesdk}"
+target_prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
+target_exec_prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
+baselib = "lib"
+
+do_packagedata[stamp-extra-info] = ""
+
+# Need to force this to ensure consistency across architectures
+EXTRA_OECONF_GCC_FLOAT = ""
+
+USE_NLS = "no"
diff --git a/poky/meta/classes-recipe/deploy.bbclass b/poky/meta/classes-recipe/deploy.bbclass
new file mode 100644
index 0000000..f56fe98
--- /dev/null
+++ b/poky/meta/classes-recipe/deploy.bbclass
@@ -0,0 +1,18 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+DEPLOYDIR = "${WORKDIR}/deploy-${PN}"
+SSTATETASKS += "do_deploy"
+do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"
+do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
+
+python do_deploy_setscene () {
+    sstate_setscene(d)
+}
+addtask do_deploy_setscene
+do_deploy[dirs] = "${B}"
+do_deploy[cleandirs] = "${DEPLOYDIR}"
+do_deploy[stamp-extra-info] = "${MACHINE_ARCH}"
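Recipes use this class by defining a do_deploy task that copies build artifacts into ${DEPLOYDIR}; sstate then publishes them into ${DEPLOY_DIR_IMAGE}. A minimal sketch, with a hypothetical artifact name:

    inherit deploy

    do_deploy() {
        install -m 0644 ${B}/firmware.bin ${DEPLOYDIR}/
    }
    addtask deploy after do_compile before do_build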
diff --git a/poky/meta/classes-recipe/devicetree.bbclass b/poky/meta/classes-recipe/devicetree.bbclass
new file mode 100644
index 0000000..ac1d284
--- /dev/null
+++ b/poky/meta/classes-recipe/devicetree.bbclass
@@ -0,0 +1,154 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This bbclass implements device tree compilation for user-provided device tree
+# sources. The compilation of the device tree sources is the same as the kernel
+# device tree compilation process, this includes being able to include sources
+# from the kernel such as soc dtsi files or header files such as gpio.h. In
+# addition to device trees this bbclass also handles compilation of device tree
+# overlays.
+#
+# The output of this class behaves similarly to how kernel-devicetree.bbclass
+# operates in that the output files are installed into /boot/devicetree.
+# However, this class deliberately separates the deployed device trees into the
+# 'devicetree' subdirectory. This prevents clashes with the kernel-devicetree
+# output. Additionally the device trees are populated into the sysroot for
+# access via the sysroot from within other recipes.
+
+SECTION ?= "bsp"
+
+# The default inclusion of kernel device tree includes and headers means that
+# device trees built with them are at least GPL-2.0-only (and in some cases dual
+# licensed). Default to GPL-2.0-only if the recipe does not specify a license.
+LICENSE ?= "GPL-2.0-only"
+LIC_FILES_CHKSUM ?= "file://${COMMON_LICENSE_DIR}/GPL-2.0-only;md5=801f80980d171dd6425610833a22dbe6"
+
+INHIBIT_DEFAULT_DEPS = "1"
+DEPENDS += "dtc-native"
+
+inherit deploy kernel-arch
+
+COMPATIBLE_MACHINE ?= "^$"
+
+PROVIDES = "virtual/dtb"
+
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+SYSROOT_DIRS += "/boot/devicetree"
+FILES:${PN} = "/boot/devicetree/*.dtb /boot/devicetree/*.dtbo"
+
+S = "${WORKDIR}"
+B = "${WORKDIR}/build"
+
+# Default kernel includes, these represent what are normally used for in-kernel
+# sources.
+KERNEL_INCLUDE ??= " \
+        ${STAGING_KERNEL_DIR}/arch/${ARCH}/boot/dts \
+        ${STAGING_KERNEL_DIR}/arch/${ARCH}/boot/dts/* \
+        ${STAGING_KERNEL_DIR}/scripts/dtc/include-prefixes \
+        "
+
+DT_INCLUDE[doc] = "Search paths to be made available to both the device tree compiler and preprocessor for inclusion."
+DT_INCLUDE ?= "${DT_FILES_PATH} ${KERNEL_INCLUDE}"
+DT_FILES_PATH[doc] = "Defaults to source directory, can be used to select dts files that are not in source (e.g. generated)."
+DT_FILES_PATH ?= "${S}"
+
+DT_PADDING_SIZE[doc] = "Size of padding on the device tree blob, used as extra space typically for additional properties during boot."
+DT_PADDING_SIZE ??= "0x3000"
+DT_RESERVED_MAP[doc] = "Number of reserved map entries."
+DT_RESERVED_MAP ??= "8"
+DT_BOOT_CPU[doc] = "The boot cpu, defaults to 0"
+DT_BOOT_CPU ??= "0"
+
+DTC_FLAGS ?= "-R ${DT_RESERVED_MAP} -b ${DT_BOOT_CPU}"
+DTC_PPFLAGS ?= "-nostdinc -undef -D__DTS__ -x assembler-with-cpp"
+DTC_BFLAGS ?= "-p ${DT_PADDING_SIZE} -@"
+DTC_OFLAGS ?= "-p 0 -@ -H epapr"
+
+python () {
+    if d.getVar("KERNEL_INCLUDE"):
+        # auto add dependency on kernel tree, but only if kernel include paths
+        # are specified.
+        d.appendVarFlag("do_compile", "depends", " virtual/kernel:do_configure")
+}
+
+def expand_includes(varname, d):
+    import glob
+    includes = set()
+    # expand all includes with glob
+    for i in (d.getVar(varname) or "").split():
+        for g in glob.glob(i):
+            if os.path.isdir(g): # only add directories to include path
+                includes.add(g)
+    return includes
+
+def devicetree_source_is_overlay(path):
+    # determine if a dts file is an overlay by checking if it uses "/plugin/;"
+    with open(path, "r") as f:
+        for i in f:
+            if i.startswith("/plugin/;"):
+                return True
+    return False
+
+def devicetree_compile(dtspath, includes, d):
+    import subprocess
+    dts = os.path.basename(dtspath)
+    dtname = os.path.splitext(dts)[0]
+    bb.note("Processing {0} [{1}]".format(dtname, dts))
+
+    # preprocess
+    ppargs = d.getVar("BUILD_CPP").split()
+    ppargs += (d.getVar("DTC_PPFLAGS") or "").split()
+    for i in includes:
+        ppargs.append("-I{0}".format(i))
+    ppargs += ["-o", "{0}.pp".format(dts), dtspath]
+    bb.note("Running {0}".format(" ".join(ppargs)))
+    subprocess.run(ppargs, check = True)
+
+    # determine if the file is an overlay or not (using the preprocessed file)
+    isoverlay = devicetree_source_is_overlay("{0}.pp".format(dts))
+
+    # compile
+    dtcargs = ["dtc"] + (d.getVar("DTC_FLAGS") or "").split()
+    if isoverlay:
+        dtcargs += (d.getVar("DTC_OFLAGS") or "").split()
+    else:
+        dtcargs += (d.getVar("DTC_BFLAGS") or "").split()
+    for i in includes:
+        dtcargs += ["-i", i]
+    dtcargs += ["-o", "{0}.{1}".format(dtname, "dtbo" if isoverlay else "dtb")]
+    dtcargs += ["-I", "dts", "-O", "dtb", "{0}.pp".format(dts)]
+    bb.note("Running {0}".format(" ".join(dtcargs)))
+    subprocess.run(dtcargs, check = True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+
+python devicetree_do_compile() {
+    includes = expand_includes("DT_INCLUDE", d)
+    listpath = d.getVar("DT_FILES_PATH")
+    for dts in os.listdir(listpath):
+        dtspath = os.path.join(listpath, dts)
+        try:
+            if not(os.path.isfile(dtspath)) or not(dts.endswith(".dts") or devicetree_source_is_overlay(dtspath)):
+                continue # skip non-.dts files and non-overlay files
+        except:
+            continue # skip if can't determine if overlay
+        devicetree_compile(dtspath, includes, d)
+}
+
+devicetree_do_install() {
+    for DTB_FILE in `ls *.dtb *.dtbo`; do
+        install -Dm 0644 ${B}/${DTB_FILE} ${D}/boot/devicetree/${DTB_FILE}
+    done
+}
+
+devicetree_do_deploy() {
+    for DTB_FILE in `ls *.dtb *.dtbo`; do
+        install -Dm 0644 ${B}/${DTB_FILE} ${DEPLOYDIR}/devicetree/${DTB_FILE}
+    done
+}
+addtask deploy before do_build after do_install
+
+EXPORT_FUNCTIONS do_compile do_install do_deploy
+
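In practice a BSP recipe using this class mainly provides the sources: every .dts file found under DT_FILES_PATH (which defaults to ${S}, i.e. ${WORKDIR} here) is preprocessed, compiled and deployed. Sketch with invented file and machine names:

    SUMMARY = "Custom device trees and overlays for my-machine"
    SRC_URI = "file://my-board.dts file://my-overlay.dts"
    COMPATIBLE_MACHINE = "my-machine"

    inherit devicetree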
diff --git a/poky/meta/classes-recipe/devupstream.bbclass b/poky/meta/classes-recipe/devupstream.bbclass
new file mode 100644
index 0000000..1529cc8
--- /dev/null
+++ b/poky/meta/classes-recipe/devupstream.bbclass
@@ -0,0 +1,61 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Class for use in BBCLASSEXTEND to make it easier to have a single recipe that
+# can build both stable tarballs and snapshots from upstream source
+# repositories.
+#
+# Usage:
+# BBCLASSEXTEND = "devupstream:target"
+# SRC_URI:class-devupstream = "git://git.example.com/example;branch=master"
+# SRCREV:class-devupstream = "abcdef"
+#
+# If the first entry in SRC_URI is a git: URL then S is rewritten to
+# WORKDIR/git.
+#
+# There are a few caveats that remain to be solved:
+# - You can't build native or nativesdk recipes using for example
+#   devupstream:native, you can only build target recipes.
+# - If the fetcher requires native tools (such as subversion-native) then
+#   bitbake won't be able to add them automatically.
+
+python devupstream_virtclass_handler () {
+    # Do nothing if this is inherited, as it's for BBCLASSEXTEND
+    if "devupstream" not in (d.getVar('BBCLASSEXTEND') or ""):
+        bb.error("Don't inherit devupstream, use BBCLASSEXTEND")
+        return
+
+    variant = d.getVar("BBEXTENDVARIANT")
+    if variant not in ("target", "native"):
+        bb.error("Unsupported variant %s. Pass the variant when using devupstream, for example devupstream:target" % variant)
+        return
+
+    # Development releases are never preferred by default
+    d.setVar("DEFAULT_PREFERENCE", "-1")
+
+    src_uri = d.getVar("SRC_URI:class-devupstream") or d.getVar("SRC_URI")
+    uri = bb.fetch2.URI(src_uri.split()[0])
+
+    if uri.scheme == "git" and not d.getVar("S:class-devupstream"):
+        d.setVar("S", "${WORKDIR}/git")
+
+    # Modify the PV if the recipe hasn't already overridden it
+    pv = d.getVar("PV")
+    proto_marker = "+" + uri.scheme
+    if proto_marker not in pv and not d.getVar("PV:class-devupstream"):
+        d.setVar("PV", pv + proto_marker + "${SRCPV}")
+
+    if variant == "native":
+        pn = d.getVar("PN")
+        d.setVar("PN", "%s-native" % (pn))
+        fn = d.getVar("FILE")
+        bb.parse.BBHandler.inherit("native", fn, 0, d)
+
+    d.appendVar("CLASSOVERRIDE", ":class-devupstream")
+}
+
+addhandler devupstream_virtclass_handler
+devupstream_virtclass_handler[eventmask] = "bb.event.RecipePreFinalise"
diff --git a/poky/meta/classes-recipe/distro_features_check.bbclass b/poky/meta/classes-recipe/distro_features_check.bbclass
new file mode 100644
index 0000000..1f2674f
--- /dev/null
+++ b/poky/meta/classes-recipe/distro_features_check.bbclass
@@ -0,0 +1,13 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Temporarily provide fallback to the old name of the class
+
+python __anonymous() {
+    bb.warn("distro_features_check.bbclass is deprecated, please use features_check.bbclass instead")
+}
+
+inherit features_check
diff --git a/poky/meta/classes-recipe/dos2unix.bbclass b/poky/meta/classes-recipe/dos2unix.bbclass
new file mode 100644
index 0000000..18e89b1
--- /dev/null
+++ b/poky/meta/classes-recipe/dos2unix.bbclass
@@ -0,0 +1,20 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Class used to convert all CRLF line terminators to LF.
+# Some projects are developed/maintained on Windows and therefore
+# use CRLF line terminators (rather than the LF used on Linux),
+# which can cause annoying patching errors during
+# git push/checkout processes.
+
+do_convert_crlf_to_lf[depends] += "dos2unix-native:do_populate_sysroot"
+
+# Convert CRLF line terminators to LF
+do_convert_crlf_to_lf () {
+	find ${S} -type f -exec dos2unix {} \;
+}
+
+addtask convert_crlf_to_lf after do_unpack before do_patch
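Usage is a single line in the affected recipe; the class then runs do_convert_crlf_to_lf between do_unpack and do_patch:

    inherit dos2unix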
diff --git a/poky/meta/classes-recipe/externalsrc.bbclass b/poky/meta/classes-recipe/externalsrc.bbclass
new file mode 100644
index 0000000..51dbe9e
--- /dev/null
+++ b/poky/meta/classes-recipe/externalsrc.bbclass
@@ -0,0 +1,269 @@
+# Copyright (C) 2012 Linux Foundation
+# Author: Richard Purdie
+# Some code and influence taken from srctree.bbclass:
+# Copyright (C) 2009 Chris Larson <clarson@kergoth.com>
+#
+# SPDX-License-Identifier: MIT
+#
+# externalsrc.bbclass enables use of an existing source tree, usually external to
+# the build system to build a piece of software rather than the usual fetch/unpack/patch
+# process.
+#
+# To use, add externalsrc to the global inherit and set EXTERNALSRC to point at the
+# directory you want to use containing the sources e.g. from local.conf for a recipe
+# called "myrecipe" you would do:
+#
+# INHERIT += "externalsrc"
+# EXTERNALSRC:pn-myrecipe = "/path/to/my/source/tree"
+#
+# In order to make this class work for both target and native versions (or with
+# multilibs/cross or other BBCLASSEXTEND variants), B is set to point to a separate
+# directory under the work directory (split source and build directories). This is
+# the default, but the build directory can be set to the source directory if
+# circumstances dictate by setting EXTERNALSRC_BUILD to the same value, e.g.:
+#
+# EXTERNALSRC_BUILD:pn-myrecipe = "/path/to/my/source/tree"
+#
+
+SRCTREECOVEREDTASKS ?= "do_patch do_unpack do_fetch"
+EXTERNALSRC_SYMLINKS ?= "oe-workdir:${WORKDIR} oe-logs:${T}"
+
+python () {
+    externalsrc = d.getVar('EXTERNALSRC')
+    externalsrcbuild = d.getVar('EXTERNALSRC_BUILD')
+
+    if externalsrc and not externalsrc.startswith("/"):
+        bb.error("EXTERNALSRC must be an absolute path")
+    if externalsrcbuild and not externalsrcbuild.startswith("/"):
+        bb.error("EXTERNALSRC_BUILD must be an absolute path")
+
+    # If this is the base recipe and EXTERNALSRC is set for it or any of its
+    # derivatives, then enable BB_DONT_CACHE to force the recipe to always be
+    # re-parsed so that the file-checksums function for do_compile is run every
+    # time.
+    bpn = d.getVar('BPN')
+    classextend = (d.getVar('BBCLASSEXTEND') or '').split()
+    if bpn == d.getVar('PN') or not classextend:
+        if (externalsrc or
+                ('native' in classextend and
+                 d.getVar('EXTERNALSRC:pn-%s-native' % bpn)) or
+                ('nativesdk' in classextend and
+                 d.getVar('EXTERNALSRC:pn-nativesdk-%s' % bpn)) or
+                ('cross' in classextend and
+                 d.getVar('EXTERNALSRC:pn-%s-cross' % bpn))):
+            d.setVar('BB_DONT_CACHE', '1')
+
+    if externalsrc:
+        import oe.recipeutils
+        import oe.path
+
+        d.setVar('S', externalsrc)
+        if externalsrcbuild:
+            d.setVar('B', externalsrcbuild)
+        else:
+            d.setVar('B', '${WORKDIR}/${BPN}-${PV}/')
+
+        local_srcuri = []
+        fetch = bb.fetch2.Fetch((d.getVar('SRC_URI') or '').split(), d)
+        for url in fetch.urls:
+            url_data = fetch.ud[url]
+            parm = url_data.parm
+            if (url_data.type == 'file' or
+                    url_data.type == 'npmsw' or url_data.type == 'crate' or
+                    'type' in parm and parm['type'] == 'kmeta'):
+                local_srcuri.append(url)
+
+        d.setVar('SRC_URI', ' '.join(local_srcuri))
+
+        # Dummy value because the default function can't be called with blank SRC_URI
+        d.setVar('SRCPV', '999')
+
+        if d.getVar('CONFIGUREOPT_DEPTRACK') == '--disable-dependency-tracking':
+            d.setVar('CONFIGUREOPT_DEPTRACK', '')
+
+        tasks = filter(lambda k: d.getVarFlag(k, "task"), d.keys())
+
+        for task in tasks:
+            if task.endswith("_setscene"):
+                # sstate is never going to work for external source trees, disable it
+                bb.build.deltask(task, d)
+            elif os.path.realpath(d.getVar('S')) == os.path.realpath(d.getVar('B')):
+                # Since configure will likely touch ${S}, take a lock so that only one task has access at a time
+                d.appendVarFlag(task, "lockfiles", " ${S}/singletask.lock")
+
+            for funcname in [task, "base_" + task, "kernel_" + task]:
+                # We do not want our source to be wiped out, ever (kernel.bbclass does this for do_clean)
+                cleandirs = oe.recipeutils.split_var_value(d.getVarFlag(funcname, 'cleandirs', False) or '')
+                setvalue = False
+                for cleandir in cleandirs[:]:
+                    if oe.path.is_path_parent(externalsrc, d.expand(cleandir)):
+                        cleandirs.remove(cleandir)
+                        setvalue = True
+                if setvalue:
+                    d.setVarFlag(funcname, 'cleandirs', ' '.join(cleandirs))
+
+        fetch_tasks = ['do_fetch', 'do_unpack']
+        # If we deltask do_patch, there's no dependency to ensure do_unpack gets run, so add one
+        # Note that we cannot use d.appendVarFlag() here because deps is expected to be a list object, not a string
+        d.setVarFlag('do_configure', 'deps', (d.getVarFlag('do_configure', 'deps', False) or []) + ['do_unpack'])
+
+        for task in d.getVar("SRCTREECOVEREDTASKS").split():
+            if local_srcuri and task in fetch_tasks:
+                continue
+            bb.build.deltask(task, d)
+            if task == 'do_unpack':
+                # The reproducible build create_source_date_epoch_stamp function must
+                # be run after the source is available and before the
+                # do_deploy_source_date_epoch task.  In the normal case, it's attached
+                # to do_unpack as a postfunc, but since we removed do_unpack (above)
+                # we need to move the function elsewhere.  The easiest thing to do is
+                # move it into the prefuncs of the do_deploy_source_date_epoch task.
+                # This is safe, as externalsrc runs with the source already unpacked.
+                d.prependVarFlag('do_deploy_source_date_epoch', 'prefuncs', 'create_source_date_epoch_stamp ')
+
+        d.prependVarFlag('do_compile', 'prefuncs', "externalsrc_compile_prefunc ")
+        d.prependVarFlag('do_configure', 'prefuncs', "externalsrc_configure_prefunc ")
+
+        d.setVarFlag('do_compile', 'file-checksums', '${@srctree_hash_files(d)}')
+        d.setVarFlag('do_configure', 'file-checksums', '${@srctree_configure_hash_files(d)}')
+
+        # We don't want the workdir to go away
+        d.appendVar('RM_WORK_EXCLUDE', ' ' + d.getVar('PN'))
+
+        bb.build.addtask('do_buildclean',
+                         'do_clean' if d.getVar('S') == d.getVar('B') else None,
+                         None, d)
+
+        # If B=S the same builddir is used even for different architectures.
+        # Thus, use a shared CONFIGURESTAMPFILE and STAMP directory so that
+        # change of do_configure task hash is correctly detected and stamps are
+        # invalidated if e.g. MACHINE changes.
+        if d.getVar('S') == d.getVar('B'):
+            configstamp = '${TMPDIR}/work-shared/${PN}/${EXTENDPE}${PV}-${PR}/configure.sstate'
+            d.setVar('CONFIGURESTAMPFILE', configstamp)
+            d.setVar('STAMP', '${STAMPS_DIR}/work-shared/${PN}/${EXTENDPE}${PV}-${PR}')
+            d.setVar('STAMPCLEAN', '${STAMPS_DIR}/work-shared/${PN}/*-*')
+}
+
+python externalsrc_configure_prefunc() {
+    s_dir = d.getVar('S')
+    # Create desired symlinks
+    symlinks = (d.getVar('EXTERNALSRC_SYMLINKS') or '').split()
+    newlinks = []
+    for symlink in symlinks:
+        symsplit = symlink.split(':', 1)
+        lnkfile = os.path.join(s_dir, symsplit[0])
+        if len(symsplit) > 1:
+            target = d.expand(symsplit[1])
+            if os.path.islink(lnkfile):
+                # Link already exists, leave it if it points to the right location already
+                if os.readlink(lnkfile) == target:
+                    continue
+                os.unlink(lnkfile)
+            elif os.path.exists(lnkfile):
+                # File/dir exists with same name as link, just leave it alone
+                continue
+            os.symlink(target, lnkfile)
+            newlinks.append(symsplit[0])
+    # Hide the symlinks from git
+    try:
+        git_exclude_file = os.path.join(s_dir, '.git/info/exclude')
+        if os.path.exists(git_exclude_file):
+            with open(git_exclude_file, 'r+') as efile:
+                elines = efile.readlines()
+                for link in newlinks:
+                    if link in elines or '/'+link in elines:
+                        continue
+                    efile.write('/' + link + '\n')
+    except IOError as ioe:
+        bb.note('Failed to hide EXTERNALSRC_SYMLINKS from git')
+}
+
+python externalsrc_compile_prefunc() {
+    # Make it obvious that this is happening, since forgetting about it could lead to much confusion
+    bb.plain('NOTE: %s: compiling from external source tree %s' % (d.getVar('PN'), d.getVar('EXTERNALSRC')))
+}
+
+do_buildclean[dirs] = "${S} ${B}"
+do_buildclean[nostamp] = "1"
+do_buildclean[doc] = "Call 'make clean' or equivalent in ${B}"
+externalsrc_do_buildclean() {
+	if [ -e Makefile -o -e makefile -o -e GNUmakefile ]; then
+		rm -f ${@' '.join([x.split(':')[0] for x in (d.getVar('EXTERNALSRC_SYMLINKS') or '').split()])}
+		if [ "${CLEANBROKEN}" != "1" ]; then
+			oe_runmake clean || die "make failed"
+		fi
+	else
+		bbnote "nothing to do - no makefile found"
+	fi
+}
+
+def srctree_hash_files(d, srcdir=None):
+    import shutil
+    import subprocess
+    import tempfile
+    import hashlib
+
+    s_dir = srcdir or d.getVar('EXTERNALSRC')
+    git_dir = None
+
+    try:
+        git_dir = os.path.join(s_dir,
+            subprocess.check_output(['git', '-C', s_dir, 'rev-parse', '--git-dir'], stderr=subprocess.DEVNULL).decode("utf-8").rstrip())
+        top_git_dir = os.path.join(s_dir, subprocess.check_output(['git', '-C', d.getVar("TOPDIR"), 'rev-parse', '--git-dir'],
+            stderr=subprocess.DEVNULL).decode("utf-8").rstrip())
+        if git_dir == top_git_dir:
+            git_dir = None
+    except subprocess.CalledProcessError:
+        pass
+
+    ret = " "
+    if git_dir is not None:
+        oe_hash_file = os.path.join(git_dir, 'oe-devtool-tree-sha1-%s' % d.getVar('PN'))
+        with tempfile.NamedTemporaryFile(prefix='oe-devtool-index') as tmp_index:
+            # Clone index
+            shutil.copyfile(os.path.join(git_dir, 'index'), tmp_index.name)
+            # Update our custom index
+            env = os.environ.copy()
+            env['GIT_INDEX_FILE'] = tmp_index.name
+            subprocess.check_output(['git', 'add', '-A', '.'], cwd=s_dir, env=env)
+            git_sha1 = subprocess.check_output(['git', 'write-tree'], cwd=s_dir, env=env).decode("utf-8")
+            submodule_helper = subprocess.check_output(['git', 'submodule--helper', 'list'], cwd=s_dir, env=env).decode("utf-8")
+            for line in submodule_helper.splitlines():
+                module_dir = os.path.join(s_dir, line.rsplit(maxsplit=1)[1])
+                if os.path.isdir(module_dir):
+                    proc = subprocess.Popen(['git', 'add', '-A', '.'], cwd=module_dir, env=env, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
+                    proc.communicate()
+                    proc = subprocess.Popen(['git', 'write-tree'], cwd=module_dir, env=env, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
+                    stdout, _ = proc.communicate()
+                    git_sha1 += stdout.decode("utf-8")
+            sha1 = hashlib.sha1(git_sha1.encode("utf-8")).hexdigest()
+        with open(oe_hash_file, 'w') as fobj:
+            fobj.write(sha1)
+        ret = oe_hash_file + ':True'
+    else:
+        ret = s_dir + '/*:True'
+    return ret
+
+def srctree_configure_hash_files(d):
+    """
+    Get the list of files that should trigger do_configure to re-execute,
+    based on the value of CONFIGURE_FILES
+    """
+    in_files = (d.getVar('CONFIGURE_FILES') or '').split()
+    out_items = []
+    search_files = []
+    for entry in in_files:
+        if entry.startswith('/'):
+            out_items.append('%s:%s' % (entry, os.path.exists(entry)))
+        else:
+            search_files.append(entry)
+    if search_files:
+        s_dir = d.getVar('EXTERNALSRC')
+        for root, _, files in os.walk(s_dir):
+            for f in files:
+                if f in search_files:
+                    out_items.append('%s:True' % os.path.join(root, f))
+    return ' '.join(out_items)
+
+EXPORT_FUNCTIONS do_buildclean
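
For illustration, the same mechanism can also be enabled from a per-recipe bbappend during development instead of globally via INHERIT in local.conf; the recipe name and paths below are hypothetical:

    # myrecipe.bbappend (illustrative)
    inherit externalsrc
    EXTERNALSRC = "/home/user/checkouts/myrecipe"
    EXTERNALSRC_BUILD = "/home/user/checkouts/myrecipe/build"
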
diff --git a/poky/meta/classes-recipe/features_check.bbclass b/poky/meta/classes-recipe/features_check.bbclass
new file mode 100644
index 0000000..163a7bc
--- /dev/null
+++ b/poky/meta/classes-recipe/features_check.bbclass
@@ -0,0 +1,57 @@
+# Allow checking of required and conflicting features
+#
+# xxx = [DISTRO,MACHINE,COMBINED,IMAGE]
+#
+# ANY_OF_xxx_FEATURES:        ensure at least one item on this list is included
+#                             in xxx_FEATURES.
+# REQUIRED_xxx_FEATURES:      ensure every item on this list is included
+#                             in xxx_FEATURES.
+# CONFLICT_xxx_FEATURES:      ensure no item in this list is included in
+#                             xxx_FEATURES.
+#
+# Copyright 2019 (C) Texas Instruments Inc.
+# Copyright 2013 (C) O.S. Systems Software LTDA.
+#
+# SPDX-License-Identifier: MIT
+
+
+python () {
+    if d.getVar('PARSE_ALL_RECIPES', False):
+        return
+
+    unused = True
+
+    for kind in ['DISTRO', 'MACHINE', 'COMBINED', 'IMAGE']:
+        if d.getVar('ANY_OF_' + kind + '_FEATURES') is None and not d.hasOverrides('ANY_OF_' + kind + '_FEATURES') and \
+           d.getVar('REQUIRED_' + kind + '_FEATURES') is None and not d.hasOverrides('REQUIRED_' + kind + '_FEATURES') and \
+           d.getVar('CONFLICT_' + kind + '_FEATURES') is None and not d.hasOverrides('CONFLICT_' + kind + '_FEATURES'):
+            continue
+
+        unused = False
+
+        # Assume at least one var is set.
+        features = set((d.getVar(kind + '_FEATURES') or '').split())
+
+        any_of_features = set((d.getVar('ANY_OF_' + kind + '_FEATURES') or '').split())
+        if any_of_features:
+            if set.isdisjoint(any_of_features, features):
+                raise bb.parse.SkipRecipe("one of '%s' needs to be in %s_FEATURES"
+                    % (' '.join(any_of_features), kind))
+
+        required_features = set((d.getVar('REQUIRED_' + kind + '_FEATURES') or '').split())
+        if required_features:
+            missing = set.difference(required_features, features)
+            if missing:
+                raise bb.parse.SkipRecipe("missing required %s feature%s '%s' (not in %s_FEATURES)"
+                    % (kind.lower(), 's' if len(missing) > 1 else '', ' '.join(missing), kind))
+
+        conflict_features = set((d.getVar('CONFLICT_' + kind + '_FEATURES') or '').split())
+        if conflict_features:
+            conflicts = set.intersection(conflict_features, features)
+            if conflicts:
+                raise bb.parse.SkipRecipe("conflicting %s feature%s '%s' (in %s_FEATURES)"
+                    % (kind.lower(), 's' if len(conflicts) > 1 else '', ' '.join(conflicts), kind))
+
+    if unused:
+        bb.warn("Recipe inherits features_check but doesn't use it")
+}
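
As a sketch of how these checks are consumed, a recipe that only makes sense on a graphical, systemd-based distro might contain the following (the feature names are examples only):

    inherit features_check
    ANY_OF_DISTRO_FEATURES = "x11 wayland"
    REQUIRED_DISTRO_FEATURES = "systemd"
    CONFLICT_DISTRO_FEATURES = "directfb"
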
diff --git a/poky/meta/classes-recipe/fontcache.bbclass b/poky/meta/classes-recipe/fontcache.bbclass
new file mode 100644
index 0000000..0d496b7
--- /dev/null
+++ b/poky/meta/classes-recipe/fontcache.bbclass
@@ -0,0 +1,63 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# This class will generate the proper postinst/postrm scriptlets for font
+# packages.
+#
+
+PACKAGE_WRITE_DEPS += "qemu-native"
+inherit qemu
+
+FONT_PACKAGES ??= "${PN}"
+FONT_EXTRA_RDEPENDS ?= "${MLPREFIX}fontconfig-utils"
+FONTCONFIG_CACHE_DIR ?= "${localstatedir}/cache/fontconfig"
+FONTCONFIG_CACHE_PARAMS ?= "-v"
+# You can change this to e.g. FC_DEBUG=16 to debug fc-cache issues.
+# Something has to be set, because qemuwrapper uses this variable after -E.
+# Multiple variables aren't allowed, because for qemu they are separated by a
+# comma, while in the -n "$D" case they should be separated by a space.
+FONTCONFIG_CACHE_ENV ?= "FC_DEBUG=1"
+fontcache_common() {
+if [ -n "$D" ] ; then
+	$INTERCEPT_DIR/postinst_intercept update_font_cache ${PKG} mlprefix=${MLPREFIX} binprefix=${MLPREFIX} \
+		'bindir="${bindir}"' \
+		'libdir="${libdir}"' \
+		'libexecdir="${libexecdir}"' \
+		'base_libdir="${base_libdir}"' \
+		'fontconfigcachedir="${FONTCONFIG_CACHE_DIR}"' \
+		'fontconfigcacheparams="${FONTCONFIG_CACHE_PARAMS}"' \
+		'fontconfigcacheenv="${FONTCONFIG_CACHE_ENV}"'
+else
+	${FONTCONFIG_CACHE_ENV} fc-cache ${FONTCONFIG_CACHE_PARAMS}
+fi
+}
+
+python () {
+    font_pkgs = d.getVar('FONT_PACKAGES').split()
+    deps = d.getVar("FONT_EXTRA_RDEPENDS")
+
+    for pkg in font_pkgs:
+        if deps: d.appendVar('RDEPENDS:' + pkg, ' '+deps)
+}
+
+python add_fontcache_postinsts() {
+    for pkg in d.getVar('FONT_PACKAGES').split():
+        bb.note("adding fonts postinst and postrm scripts to %s" % pkg)
+        postinst = d.getVar('pkg_postinst:%s' % pkg) or d.getVar('pkg_postinst')
+        if not postinst:
+            postinst = '#!/bin/sh\n'
+        postinst += d.getVar('fontcache_common')
+        d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+        postrm = d.getVar('pkg_postrm:%s' % pkg) or d.getVar('pkg_postrm')
+        if not postrm:
+            postrm = '#!/bin/sh\n'
+        postrm += d.getVar('fontcache_common')
+        d.setVar('pkg_postrm:%s' % pkg, postrm)
+}
+
+PACKAGEFUNCS =+ "add_fontcache_postinsts"
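
A hypothetical font recipe would use the class roughly as follows (the package split and install path are illustrative only):

    inherit fontcache
    # only the packages listed in FONT_PACKAGES get the cache scriptlets
    FONT_PACKAGES = "${PN}"
    FILES:${PN} = "${datadir}/fonts/truetype"
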
diff --git a/poky/meta/classes-recipe/fs-uuid.bbclass b/poky/meta/classes-recipe/fs-uuid.bbclass
new file mode 100644
index 0000000..a9e7eb8
--- /dev/null
+++ b/poky/meta/classes-recipe/fs-uuid.bbclass
@@ -0,0 +1,30 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Extract UUID from ${ROOTFS}, which must have been built
+# by the time that this function gets called. Only works
+# on ext file systems and depends on tune2fs.
+def get_rootfs_uuid(d):
+    import subprocess
+    rootfs = d.getVar('ROOTFS')
+    output = subprocess.check_output(['tune2fs', '-l', rootfs])
+    for line in output.split('\n'):
+        if line.startswith('Filesystem UUID:'):
+            uuid = line.split()[-1]
+            bb.note('UUID of %s: %s' % (rootfs, uuid))
+            return uuid
+    bb.fatal('Could not determine filesystem UUID of %s' % rootfs)
+
+# Replace the special <<uuid-of-rootfs>> inside a string (like the
+# root= APPEND string in a syslinux.cfg or systemd-boot entry) with the
+# actual UUID of the rootfs. Does nothing if the special string
+# is not used.
+def replace_rootfs_uuid(d, string):
+    UUID_PLACEHOLDER = '<<uuid-of-rootfs>>'
+    if UUID_PLACEHOLDER in string:
+        uuid = get_rootfs_uuid(d)
+        string = string.replace(UUID_PLACEHOLDER, uuid)
+    return string
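
As a sketch, a machine or image configuration that uses one of the bootloader config classes (which call replace_rootfs_uuid() on their root/append strings) can rely on the placeholder; the values shown are illustrative:

    GRUB_ROOT = "root=UUID=<<uuid-of-rootfs>>"
    APPEND = "rootwait console=ttyS0,115200"
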
diff --git a/poky/meta/classes-recipe/gconf.bbclass b/poky/meta/classes-recipe/gconf.bbclass
new file mode 100644
index 0000000..b81851b
--- /dev/null
+++ b/poky/meta/classes-recipe/gconf.bbclass
@@ -0,0 +1,77 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+DEPENDS += "gconf"
+PACKAGE_WRITE_DEPS += "gconf-native"
+
+# These are for when gconftool is used natively and the prefix isn't necessarily
+# the sysroot.  TODO: replicate the postinst logic for -native packages going
+# into sysroot as they won't be running their own install-time schema
+# registration (disabled below) nor the postinst script (as they don't happen).
+export GCONF_SCHEMA_INSTALL_SOURCE = "xml:merged:${STAGING_DIR_NATIVE}${sysconfdir}/gconf/gconf.xml.defaults"
+export GCONF_BACKEND_DIR = "${STAGING_LIBDIR_NATIVE}/GConf/2"
+
+# Disable install-time schema registration as we're a packaging system so this
+# happens in the postinst script, not at install time.  Set both the configure
+# script option and the traditional environment variable just to make sure.
+EXTRA_OECONF += "--disable-schemas-install"
+export GCONF_DISABLE_MAKEFILE_SCHEMA_INSTALL = "1"
+
+gconf_postinst() {
+if [ "x$D" != "x" ]; then
+	export GCONF_CONFIG_SOURCE="xml::$D${sysconfdir}/gconf/gconf.xml.defaults"
+else
+	export GCONF_CONFIG_SOURCE=`gconftool-2 --get-default-source`
+fi
+
+SCHEMA_LOCATION=$D/etc/gconf/schemas
+for SCHEMA in ${SCHEMA_FILES}; do
+	if [ -e $SCHEMA_LOCATION/$SCHEMA ]; then
+		HOME=$D/root gconftool-2 \
+			--makefile-install-rule $SCHEMA_LOCATION/$SCHEMA > /dev/null
+	fi
+done
+}
+
+gconf_prerm() {
+SCHEMA_LOCATION=/etc/gconf/schemas
+for SCHEMA in ${SCHEMA_FILES}; do
+	if [ -e $SCHEMA_LOCATION/$SCHEMA ]; then
+		HOME=/root GCONF_CONFIG_SOURCE=`gconftool-2 --get-default-source` \
+			gconftool-2 \
+			--makefile-uninstall-rule $SCHEMA_LOCATION/$SCHEMA > /dev/null
+	fi
+done
+}
+
+python populate_packages:append () {
+    import re
+    packages = d.getVar('PACKAGES').split()
+    pkgdest =  d.getVar('PKGDEST')
+    
+    for pkg in packages:
+        schema_dir = '%s/%s/etc/gconf/schemas' % (pkgdest, pkg)
+        schemas = []
+        schema_re = re.compile(r".*\.schemas$")
+        if os.path.exists(schema_dir):
+            for f in os.listdir(schema_dir):
+                if schema_re.match(f):
+                    schemas.append(f)
+        if schemas != []:
+            bb.note("adding gconf postinst and prerm scripts to %s" % pkg)
+            d.setVar('SCHEMA_FILES', " ".join(schemas))
+            postinst = d.getVar('pkg_postinst:%s' % pkg)
+            if not postinst:
+                postinst = '#!/bin/sh\n'
+            postinst += d.getVar('gconf_postinst')
+            d.setVar('pkg_postinst:%s' % pkg, postinst)
+            prerm = d.getVar('pkg_prerm:%s' % pkg)
+            if not prerm:
+                prerm = '#!/bin/sh\n'
+            prerm += d.getVar('gconf_prerm')
+            d.setVar('pkg_prerm:%s' % pkg, prerm)
+            d.appendVar("RDEPENDS:%s" % pkg, ' ' + d.getVar('MLPREFIX', False) + 'gconf')
+}
diff --git a/poky/meta/classes-recipe/gettext.bbclass b/poky/meta/classes-recipe/gettext.bbclass
new file mode 100644
index 0000000..c313885
--- /dev/null
+++ b/poky/meta/classes-recipe/gettext.bbclass
@@ -0,0 +1,28 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+def gettext_dependencies(d):
+    if d.getVar('INHIBIT_DEFAULT_DEPS') and not oe.utils.inherits(d, 'cross-canadian'):
+        return ""
+    if d.getVar('USE_NLS') == 'no':
+        return "gettext-minimal-native"
+    return "gettext-native"
+
+def gettext_oeconf(d):
+    if d.getVar('USE_NLS') == 'no':
+        return '--disable-nls'
+    # Remove the NLS bits if USE_NLS is no or INHIBIT_DEFAULT_DEPS is set
+    if d.getVar('INHIBIT_DEFAULT_DEPS') and not oe.utils.inherits(d, 'cross-canadian'):
+        return '--disable-nls'
+    return "--enable-nls"
+
+BASEDEPENDS:append = " ${@gettext_dependencies(d)}"
+EXTRA_OECONF:append = " ${@gettext_oeconf(d)}"
+
+# Without this, msgfmt from gettext-native will not find ITS files
+# provided by target recipes (for example, polkit.its).
+GETTEXTDATADIRS:append:class-target = ":${STAGING_DATADIR}/gettext"
+export GETTEXTDATADIRS
diff --git a/poky/meta/classes-recipe/gi-docgen.bbclass b/poky/meta/classes-recipe/gi-docgen.bbclass
new file mode 100644
index 0000000..8b7eaac
--- /dev/null
+++ b/poky/meta/classes-recipe/gi-docgen.bbclass
@@ -0,0 +1,30 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# gi-docgen is a new gnome documentation generator, which
+# seems to be a successor to gtk-doc:
+# https://gitlab.gnome.org/GNOME/gi-docgen
+
+# This variable is set to True if api-documentation is in
+# DISTRO_FEATURES, and False otherwise.
+GIDOCGEN_ENABLED ?= "${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', 'True', 'False', d)}"
+# When building native recipes, disable gi-docgen, as it is not necessary,
+# pulls in additional dependencies, and makes build times longer
+GIDOCGEN_ENABLED:class-native = "False"
+GIDOCGEN_ENABLED:class-nativesdk = "False"
+
+# meson: default option name to enable/disable gi-docgen. This matches most
+# projects' configuration. If in doubt, check meson_options.txt in the
+# project's source tree.
+GIDOCGEN_MESON_OPTION ?= 'gtk_doc'
+GIDOCGEN_MESON_ENABLE_FLAG ?= 'true'
+GIDOCGEN_MESON_DISABLE_FLAG ?= 'false'
+
+# Auto enable/disable based on GIDOCGEN_ENABLED
+EXTRA_OEMESON:prepend = "-D${GIDOCGEN_MESON_OPTION}=${@bb.utils.contains('GIDOCGEN_ENABLED', 'True', '${GIDOCGEN_MESON_ENABLE_FLAG}', '${GIDOCGEN_MESON_DISABLE_FLAG}', d)} "
+
+DEPENDS:append = "${@' gi-docgen-native gi-docgen' if d.getVar('GIDOCGEN_ENABLED') == 'True' else ''}"
+
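
A recipe whose meson project names its documentation option differently would, as a sketch, override the default like this (the option name shown is hypothetical; check the project's meson_options.txt):

    inherit gi-docgen
    GIDOCGEN_MESON_OPTION = "documentation"
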
diff --git a/poky/meta/classes-recipe/gio-module-cache.bbclass b/poky/meta/classes-recipe/gio-module-cache.bbclass
new file mode 100644
index 0000000..d12e03c
--- /dev/null
+++ b/poky/meta/classes-recipe/gio-module-cache.bbclass
@@ -0,0 +1,44 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+PACKAGE_WRITE_DEPS += "qemu-native"
+inherit qemu
+
+GIO_MODULE_PACKAGES ??= "${PN}"
+
+gio_module_cache_common() {
+if [ "x$D" != "x" ]; then
+    $INTERCEPT_DIR/postinst_intercept update_gio_module_cache ${PKG} \
+            mlprefix=${MLPREFIX} \
+            binprefix=${MLPREFIX} \
+            libdir=${libdir} \
+            libexecdir=${libexecdir} \
+            base_libdir=${base_libdir} \
+            bindir=${bindir}
+else
+    ${libexecdir}/${MLPREFIX}gio-querymodules ${libdir}/gio/modules/
+fi
+}
+
+python populate_packages:append () {
+    packages = d.getVar('GIO_MODULE_PACKAGES').split()
+
+    for pkg in packages:
+        bb.note("adding gio-module-cache postinst and postrm scripts to %s" % pkg)
+
+        postinst = d.getVar('pkg_postinst:%s' % pkg)
+        if not postinst:
+            postinst = '#!/bin/sh\n'
+        postinst += d.getVar('gio_module_cache_common')
+        d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+        postrm = d.getVar('pkg_postrm:%s' % pkg)
+        if not postrm:
+            postrm = '#!/bin/sh\n'
+        postrm += d.getVar('gio_module_cache_common')
+        d.setVar('pkg_postrm:%s' % pkg, postrm)
+}
+
diff --git a/poky/meta/classes-recipe/glide.bbclass b/poky/meta/classes-recipe/glide.bbclass
new file mode 100644
index 0000000..21b48fa
--- /dev/null
+++ b/poky/meta/classes-recipe/glide.bbclass
@@ -0,0 +1,15 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Handle Glide Vendor Package Management use
+#
+# Copyright 2018 (C) O.S. Systems Software LTDA.
+
+DEPENDS:append = " glide-native"
+
+do_compile:prepend() {
+    ( cd ${B}/src/${GO_IMPORT} && glide install )
+}
diff --git a/poky/meta/classes-recipe/gnomebase.bbclass b/poky/meta/classes-recipe/gnomebase.bbclass
new file mode 100644
index 0000000..805daaf
--- /dev/null
+++ b/poky/meta/classes-recipe/gnomebase.bbclass
@@ -0,0 +1,37 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+def gnome_verdir(v):
+    return ".".join(v.split(".")[:-1])
+
+
+GNOME_COMPRESS_TYPE ?= "xz"
+SECTION ?= "x11/gnome"
+GNOMEBN ?= "${BPN}"
+SRC_URI = "${GNOME_MIRROR}/${GNOMEBN}/${@gnome_verdir("${PV}")}/${GNOMEBN}-${PV}.tar.${GNOME_COMPRESS_TYPE};name=archive"
+
+FILES:${PN} += "${datadir}/application-registry  \
+                ${datadir}/mime-info \
+                ${datadir}/mime/packages \
+                ${datadir}/mime/application \
+                ${datadir}/gnome-2.0 \
+                ${datadir}/polkit* \
+                ${datadir}/GConf \
+                ${datadir}/glib-2.0/schemas \
+                ${datadir}/appdata \
+                ${datadir}/icons \
+"
+
+FILES:${PN}-doc += "${datadir}/devhelp"
+
+GNOMEBASEBUILDCLASS ??= "autotools"
+inherit ${GNOMEBASEBUILDCLASS} pkgconfig
+
+do_install:append() {
+	rm -rf ${D}${localstatedir}/lib/scrollkeeper/*
+	rm -rf ${D}${localstatedir}/scrollkeeper/*
+	rm -f ${D}${datadir}/applications/*.cache
+}
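
A minimal, hypothetical GNOME-style recipe built with meson could look like the following; gnome_verdir() strips the last version component, so e.g. "42.1" becomes "42" when the download URL is assembled:

    GNOMEBASEBUILDCLASS = "meson"
    inherit gnomebase
    # with PV = "42.1" this fetches ${GNOME_MIRROR}/${BPN}/42/${BPN}-42.1.tar.xz
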
diff --git a/poky/meta/classes-recipe/go-mod.bbclass b/poky/meta/classes-recipe/go-mod.bbclass
new file mode 100644
index 0000000..927746a
--- /dev/null
+++ b/poky/meta/classes-recipe/go-mod.bbclass
@@ -0,0 +1,26 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Handle Go Modules support
+#
+# When using Go Modules, the current working directory MUST be at or below
+# the location of the 'go.mod' file when the go tool is used, and there is no
+# way to tell it to look elsewhere.  It will automatically look upwards for the
+# file, but not downwards.
+#
+# To support this use case, we provide the `GO_WORKDIR` variable, which defaults
+# to `GO_IMPORT` but allows for easy override.
+#
+# Copyright 2020 (C) O.S. Systems Software LTDA.
+
+# The '-modcacherw' option ensures we have write access to the cached objects so
+# we avoid errors during clean task as well as when removing the TMPDIR.
+GOBUILDFLAGS:append = " -modcacherw"
+
+inherit go
+
+GO_WORKDIR ?= "${GO_IMPORT}"
+do_compile[dirs] += "${B}/src/${GO_WORKDIR}"
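
As a hedged sketch, a module-aware Go recipe only needs to declare its import path and inherit the class; GO_WORKDIR then defaults to GO_IMPORT. The import path and branch below are hypothetical, and SRCREV, LICENSE and LIC_FILES_CHKSUM are omitted for brevity:

    GO_IMPORT = "github.com/example/hello"
    SRC_URI = "git://${GO_IMPORT};protocol=https;branch=main"
    inherit go-mod
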
diff --git a/poky/meta/classes-recipe/go-ptest.bbclass b/poky/meta/classes-recipe/go-ptest.bbclass
new file mode 100644
index 0000000..54fcbb5
--- /dev/null
+++ b/poky/meta/classes-recipe/go-ptest.bbclass
@@ -0,0 +1,60 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit go ptest
+
+do_compile_ptest_base() {
+	export TMPDIR="${GOTMPDIR}"
+	rm -f ${B}/.go_compiled_tests.list
+	go_list_package_tests | while read pkg; do
+		cd ${B}/src/$pkg
+		${GO} test ${GOPTESTBUILDFLAGS} $pkg
+		find . -mindepth 1 -maxdepth 1 -type f -name '*.test' -exec echo $pkg/{} \; | \
+			sed -e's,/\./,/,'>> ${B}/.go_compiled_tests.list
+	done
+	do_compile_ptest
+}
+
+do_compile_ptest_base[dirs] =+ "${GOTMPDIR}"
+
+go_make_ptest_wrapper() {
+	cat >${D}${PTEST_PATH}/run-ptest <<EOF
+#!/bin/sh
+RC=0
+run_test() (
+    cd "\$1"
+    ((((./\$2 ${GOPTESTFLAGS}; echo \$? >&3) | sed -r -e"s,^(PASS|SKIP|FAIL)\$,\\1: \$1/\$2," >&4) 3>&1) | (read rc; exit \$rc)) 4>&1
+    exit \$?)
+EOF
+
+}
+
+do_install_ptest_base() {
+	test -f "${B}/.go_compiled_tests.list" || exit 0
+	install -d ${D}${PTEST_PATH}
+	go_stage_testdata
+	go_make_ptest_wrapper
+	havetests=""
+	while read test; do
+		testdir=`dirname $test`
+		testprog=`basename $test`
+		install -d ${D}${PTEST_PATH}/$testdir
+		install -m 0755 ${B}/src/$test ${D}${PTEST_PATH}/$test
+	echo "run_test $testdir $testprog || RC=1" >> ${D}${PTEST_PATH}/run-ptest
+		havetests="yes"
+	done < ${B}/.go_compiled_tests.list
+	if [ -n "$havetests" ]; then
+		echo "exit \$RC" >> ${D}${PTEST_PATH}/run-ptest
+		chmod +x ${D}${PTEST_PATH}/run-ptest
+	else
+		rm -rf ${D}${PTEST_PATH}
+	fi
+	do_install_ptest
+	chown -R root:root ${D}${PTEST_PATH}
+}
+
+INSANE_SKIP:${PN}-ptest += "ldflags"
+
diff --git a/poky/meta/classes-recipe/go.bbclass b/poky/meta/classes-recipe/go.bbclass
new file mode 100644
index 0000000..6b97484
--- /dev/null
+++ b/poky/meta/classes-recipe/go.bbclass
@@ -0,0 +1,170 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit goarch
+inherit linuxloader
+
+GO_PARALLEL_BUILD ?= "${@oe.utils.parallel_make_argument(d, '-p %d')}"
+
+export GODEBUG = "gocachehash=1"
+
+GOROOT:class-native = "${STAGING_LIBDIR_NATIVE}/go"
+GOROOT:class-nativesdk = "${STAGING_DIR_TARGET}${libdir}/go"
+GOROOT = "${STAGING_LIBDIR}/go"
+export GOROOT
+export GOROOT_FINAL = "${libdir}/go"
+export GOCACHE = "${B}/.cache"
+
+export GOARCH = "${TARGET_GOARCH}"
+export GOOS = "${TARGET_GOOS}"
+export GOHOSTARCH="${BUILD_GOARCH}"
+export GOHOSTOS="${BUILD_GOOS}"
+
+GOARM[export] = "0"
+GOARM:arm:class-target = "${TARGET_GOARM}"
+GOARM:arm:class-target[export] = "1"
+
+GO386[export] = "0"
+GO386:x86:class-target = "${TARGET_GO386}"
+GO386:x86:class-target[export] = "1"
+
+GOMIPS[export] = "0"
+GOMIPS:mips:class-target = "${TARGET_GOMIPS}"
+GOMIPS:mips:class-target[export] = "1"
+
+DEPENDS_GOLANG:class-target = "virtual/${TUNE_PKGARCH}-go virtual/${TARGET_PREFIX}go-runtime"
+DEPENDS_GOLANG:class-native = "go-native"
+DEPENDS_GOLANG:class-nativesdk = "virtual/${TARGET_PREFIX}go-crosssdk virtual/${TARGET_PREFIX}go-runtime"
+
+DEPENDS:append = " ${DEPENDS_GOLANG}"
+
+GO_LINKSHARED ?= "${@'-linkshared' if d.getVar('GO_DYNLINK') else ''}"
+GO_RPATH_LINK = "${@'-Wl,-rpath-link=${STAGING_DIR_TARGET}${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
+GO_RPATH = "${@'-r ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
+GO_RPATH:class-native = "${@'-r ${STAGING_LIBDIR_NATIVE}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
+GO_RPATH_LINK:class-native = "${@'-Wl,-rpath-link=${STAGING_LIBDIR_NATIVE}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
+GO_EXTLDFLAGS ?= "${HOST_CC_ARCH}${TOOLCHAIN_OPTIONS} ${GO_RPATH_LINK} ${LDFLAGS}"
+GO_LINKMODE ?= ""
+GO_LINKMODE:class-nativesdk = "--linkmode=external"
+GO_LINKMODE:class-native = "--linkmode=external"
+GO_EXTRA_LDFLAGS ?= ""
+GO_LINUXLOADER ?= "-I ${@get_linuxloader(d)}"
+# Use system loader. If uninative is used, the uninative loader will be patched automatically
+GO_LINUXLOADER:class-native = ""
+GO_LDFLAGS ?= '-ldflags="${GO_RPATH} ${GO_LINKMODE} ${GO_LINUXLOADER} ${GO_EXTRA_LDFLAGS} -extldflags '${GO_EXTLDFLAGS}'"'
+export GOBUILDFLAGS ?= "-v ${GO_LDFLAGS} -trimpath"
+export GOPATH_OMIT_IN_ACTIONID ?= "1"
+export GOPTESTBUILDFLAGS ?= "${GOBUILDFLAGS} -c"
+export GOPTESTFLAGS ?= ""
+GOBUILDFLAGS:prepend:task-compile = "${GO_PARALLEL_BUILD} "
+
+export GO = "${HOST_PREFIX}go"
+GOTOOLDIR = "${STAGING_LIBDIR_NATIVE}/${TARGET_SYS}/go/pkg/tool/${BUILD_GOTUPLE}"
+GOTOOLDIR:class-native = "${STAGING_LIBDIR_NATIVE}/go/pkg/tool/${BUILD_GOTUPLE}"
+export GOTOOLDIR
+
+export CGO_ENABLED ?= "1"
+export CGO_CFLAGS ?= "${CFLAGS}"
+export CGO_CPPFLAGS ?= "${CPPFLAGS}"
+export CGO_CXXFLAGS ?= "${CXXFLAGS}"
+export CGO_LDFLAGS ?= "${LDFLAGS}"
+
+GO_INSTALL ?= "${GO_IMPORT}/..."
+GO_INSTALL_FILTEROUT ?= "${GO_IMPORT}/vendor/"
+
+B = "${WORKDIR}/build"
+export GOPATH = "${B}"
+export GOENV = "off"
+export GOTMPDIR ?= "${WORKDIR}/build-tmp"
+GOTMPDIR[vardepvalue] = ""
+
+python go_do_unpack() {
+    src_uri = (d.getVar('SRC_URI') or "").split()
+    if len(src_uri) == 0:
+        return
+
+    fetcher = bb.fetch2.Fetch(src_uri, d)
+    for url in fetcher.urls:
+        if fetcher.ud[url].type == 'git':
+            if fetcher.ud[url].parm.get('destsuffix') is None:
+                s_dirname = os.path.basename(d.getVar('S'))
+                fetcher.ud[url].parm['destsuffix'] = os.path.join(s_dirname, 'src', d.getVar('GO_IMPORT')) + '/'
+    fetcher.unpack(d.getVar('WORKDIR'))
+}
+
+go_list_packages() {
+	${GO} list -f '{{.ImportPath}}' ${GOBUILDFLAGS} ${GO_INSTALL} | \
+		egrep -v '${GO_INSTALL_FILTEROUT}'
+}
+
+go_list_package_tests() {
+	${GO} list -f '{{.ImportPath}} {{.TestGoFiles}}' ${GOBUILDFLAGS} ${GO_INSTALL} | \
+		grep -v '\[\]$' | \
+		egrep -v '${GO_INSTALL_FILTEROUT}' | \
+		awk '{ print $1 }'
+}
+
+go_do_configure() {
+	ln -snf ${S}/src ${B}/
+}
+do_configure[dirs] =+ "${GOTMPDIR}"
+
+go_do_compile() {
+	export TMPDIR="${GOTMPDIR}"
+	if [ -n "${GO_INSTALL}" ]; then
+		if [ -n "${GO_LINKSHARED}" ]; then
+			${GO} install ${GOBUILDFLAGS} `go_list_packages`
+			rm -rf ${B}/bin
+		fi
+		${GO} install ${GO_LINKSHARED} ${GOBUILDFLAGS} `go_list_packages`
+	fi
+}
+do_compile[dirs] =+ "${GOTMPDIR}"
+do_compile[cleandirs] = "${B}/bin ${B}/pkg"
+
+go_do_install() {
+	install -d ${D}${libdir}/go/src/${GO_IMPORT}
+	tar -C ${S}/src/${GO_IMPORT} -cf - --exclude-vcs --exclude '*.test' --exclude 'testdata' . | \
+		tar -C ${D}${libdir}/go/src/${GO_IMPORT} --no-same-owner -xf -
+	tar -C ${B} -cf - --exclude-vcs --exclude '*.test' --exclude 'testdata' pkg | \
+		tar -C ${D}${libdir}/go --no-same-owner -xf -
+
+	if [ -n "`ls ${B}/${GO_BUILD_BINDIR}/`" ]; then
+		install -d ${D}${bindir}
+		install -m 0755 ${B}/${GO_BUILD_BINDIR}/* ${D}${bindir}/
+	fi
+}
+
+go_stage_testdata() {
+	oldwd="$PWD"
+	cd ${S}/src
+	find ${GO_IMPORT} -depth -type d -name testdata | while read d; do
+		if echo "$d" | grep -q '/vendor/'; then
+			continue
+		fi
+		parent=`dirname $d`
+		install -d ${D}${PTEST_PATH}/$parent
+		cp --preserve=mode,timestamps -R $d ${D}${PTEST_PATH}/$parent/
+	done
+	cd "$oldwd"
+}
+
+EXPORT_FUNCTIONS do_unpack do_configure do_compile do_install
+
+FILES:${PN}-dev = "${libdir}/go/src"
+FILES:${PN}-staticdev = "${libdir}/go/pkg"
+
+INSANE_SKIP:${PN} += "ldflags"
+
+# Add -buildmode=pie to GOBUILDFLAGS to satisfy "textrel" QA checking, but mips
+# doesn't support -buildmode=pie, so skip the QA checking for mips/rv32 and its
+# variants.
+python() {
+    if 'mips' in d.getVar('TARGET_ARCH') or 'riscv32' in d.getVar('TARGET_ARCH'):
+        d.appendVar('INSANE_SKIP:%s' % d.getVar('PN'), " textrel")
+    else:
+        d.appendVar('GOBUILDFLAGS', ' -buildmode=pie')
+}
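
For a plain (non-module) Go recipe, a minimal sketch looks like the following; GO_INSTALL defaults to "${GO_IMPORT}/..." and is narrowed here only as an example (import path is hypothetical):

    GO_IMPORT = "github.com/example/tool"
    inherit go
    GO_INSTALL = "${GO_IMPORT}/cmd/tool"
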
diff --git a/poky/meta/classes-recipe/goarch.bbclass b/poky/meta/classes-recipe/goarch.bbclass
new file mode 100644
index 0000000..61ead30
--- /dev/null
+++ b/poky/meta/classes-recipe/goarch.bbclass
@@ -0,0 +1,122 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+BUILD_GOOS = "${@go_map_os(d.getVar('BUILD_OS'), d)}"
+BUILD_GOARCH = "${@go_map_arch(d.getVar('BUILD_ARCH'), d)}"
+BUILD_GOTUPLE = "${BUILD_GOOS}_${BUILD_GOARCH}"
+HOST_GOOS = "${@go_map_os(d.getVar('HOST_OS'), d)}"
+HOST_GOARCH = "${@go_map_arch(d.getVar('HOST_ARCH'), d)}"
+HOST_GOARM = "${@go_map_arm(d.getVar('HOST_ARCH'), d)}"
+HOST_GO386 = "${@go_map_386(d.getVar('HOST_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
+HOST_GOMIPS = "${@go_map_mips(d.getVar('HOST_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
+HOST_GOARM:class-native = "7"
+HOST_GO386:class-native = "sse2"
+HOST_GOMIPS:class-native = "hardfloat"
+HOST_GOTUPLE = "${HOST_GOOS}_${HOST_GOARCH}"
+TARGET_GOOS = "${@go_map_os(d.getVar('TARGET_OS'), d)}"
+TARGET_GOARCH = "${@go_map_arch(d.getVar('TARGET_ARCH'), d)}"
+TARGET_GOARM = "${@go_map_arm(d.getVar('TARGET_ARCH'), d)}"
+TARGET_GO386 = "${@go_map_386(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
+TARGET_GOMIPS = "${@go_map_mips(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
+TARGET_GOARM:class-native = "7"
+TARGET_GO386:class-native = "sse2"
+TARGET_GOMIPS:class-native = "hardfloat"
+TARGET_GOTUPLE = "${TARGET_GOOS}_${TARGET_GOARCH}"
+GO_BUILD_BINDIR = "${@['bin/${HOST_GOTUPLE}','bin'][d.getVar('BUILD_GOTUPLE') == d.getVar('HOST_GOTUPLE')]}"
+
+# Use the MACHINEOVERRIDES to map ARM CPU architecture passed to GO via GOARM.
+# This is combined with *_ARCH to set HOST_GOARM and TARGET_GOARM.
+BASE_GOARM = ''
+BASE_GOARM:armv7ve = '7'
+BASE_GOARM:armv7a = '7'
+BASE_GOARM:armv6 = '6'
+BASE_GOARM:armv5 = '5'
+
+# Go supports dynamic linking on a limited set of architectures.
+# See the supportsDynlink function in go/src/cmd/compile/internal/gc/main.go
+GO_DYNLINK = ""
+GO_DYNLINK:arm ?= "1"
+GO_DYNLINK:aarch64 ?= "1"
+GO_DYNLINK:x86 ?= "1"
+GO_DYNLINK:x86-64 ?= "1"
+GO_DYNLINK:powerpc64 ?= "1"
+GO_DYNLINK:powerpc64le ?= "1"
+GO_DYNLINK:class-native ?= ""
+GO_DYNLINK:class-nativesdk = ""
+
+# define here because everybody inherits this class
+#
+COMPATIBLE_HOST:linux-gnux32 = "null"
+COMPATIBLE_HOST:linux-muslx32 = "null"
+COMPATIBLE_HOST:powerpc = "null"
+COMPATIBLE_HOST:powerpc64 = "null"
+COMPATIBLE_HOST:mipsarchn32 = "null"
+
+ARM_INSTRUCTION_SET:armv4 = "arm"
+ARM_INSTRUCTION_SET:armv5 = "arm"
+ARM_INSTRUCTION_SET:armv6 = "arm"
+
+TUNE_CCARGS:remove = "-march=mips32r2"
+SECURITY_NOPIE_CFLAGS ??= ""
+
+# go can't be built with ccache:
+# gcc: fatal error: no input files
+CCACHE_DISABLE ?= "1"
+
+def go_map_arch(a, d):
+    import re
+    if re.match('i.86', a):
+        return '386'
+    elif a == 'x86_64':
+        return 'amd64'
+    elif re.match('arm.*', a):
+        return 'arm'
+    elif re.match('aarch64.*', a):
+        return 'arm64'
+    elif re.match('mips64el.*', a):
+        return 'mips64le'
+    elif re.match('mips64.*', a):
+        return 'mips64'
+    elif a == 'mips':
+        return 'mips'
+    elif a == 'mipsel':
+        return 'mipsle'
+    elif re.match('p(pc|owerpc)(64le)', a):
+        return 'ppc64le'
+    elif re.match('p(pc|owerpc)(64)', a):
+        return 'ppc64'
+    elif a == 'riscv64':
+        return 'riscv64'
+    else:
+        raise bb.parse.SkipRecipe("Unsupported CPU architecture: %s" % a)
+
+def go_map_arm(a, d):
+    if a.startswith("arm"):
+        return d.getVar('BASE_GOARM')
+    return ''
+
+def go_map_386(a, f, d):
+    import re
+    if re.match('i.86', a):
+        if ('core2' in f) or ('corei7' in f):
+            return 'sse2'
+        else:
+            return 'softfloat'
+    return ''
+
+def go_map_mips(a, f, d):
+    import re
+    if a == 'mips' or a == 'mipsel':
+        if 'fpu-hard' in f:
+            return 'hardfloat'
+        else:
+            return 'softfloat'
+    return ''
+
+def go_map_os(o, d):
+    if o.startswith('linux'):
+        return 'linux'
+    return o
diff --git a/poky/meta/classes-recipe/gobject-introspection-data.bbclass b/poky/meta/classes-recipe/gobject-introspection-data.bbclass
new file mode 100644
index 0000000..7f522a1
--- /dev/null
+++ b/poky/meta/classes-recipe/gobject-introspection-data.bbclass
@@ -0,0 +1,18 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This variable is set to True if gobject-introspection-data is in
+# DISTRO_FEATURES and qemu-usermode is in MACHINE_FEATURES, and False otherwise.
+#
+# It should be used in recipes to determine whether introspection data should be built,
+# so that qemu use can be avoided when necessary.
+GI_DATA_ENABLED ?= "${@bb.utils.contains('DISTRO_FEATURES', 'gobject-introspection-data', \
+                      bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', 'True', 'False', d), 'False', d)}"
+
+do_compile:prepend() {
+    # This prevents g-ir-scanner from writing cache data to $HOME
+    export GI_SCANNER_DISABLE_CACHE=1
+}
diff --git a/poky/meta/classes-recipe/gobject-introspection.bbclass b/poky/meta/classes-recipe/gobject-introspection.bbclass
new file mode 100644
index 0000000..0c7b7d2
--- /dev/null
+++ b/poky/meta/classes-recipe/gobject-introspection.bbclass
@@ -0,0 +1,61 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Inherit this class in recipes to enable building their introspection files
+
+# python3native is inherited to prevent introspection tools being run with
+# host's python 3 (they need to be run with native python 3)
+#
+# This also sets up autoconf-based recipes to build introspection data (or not),
+# depending on distro and machine features (see gobject-introspection-data class).
+inherit python3native gobject-introspection-data
+
+# meson: default option name to enable/disable introspection. This matches most
+# projects' configuration. If in doubt, check meson_options.txt in the
+# project's source tree.
+GIR_MESON_OPTION ?= 'introspection'
+GIR_MESON_ENABLE_FLAG ?= 'true'
+GIR_MESON_DISABLE_FLAG ?= 'false'
+
+# Define g-i options such that they can be disabled completely when GIR_MESON_OPTION is empty
+GIRMESONTARGET = "-D${GIR_MESON_OPTION}=${@bb.utils.contains('GI_DATA_ENABLED', 'True', '${GIR_MESON_ENABLE_FLAG}', '${GIR_MESON_DISABLE_FLAG}', d)} "
+GIRMESONBUILD = "-D${GIR_MESON_OPTION}=${GIR_MESON_DISABLE_FLAG} "
+# Auto enable/disable based on GI_DATA_ENABLED
+EXTRA_OECONF:prepend:class-target = "${@bb.utils.contains('GI_DATA_ENABLED', 'True', '--enable-introspection', '--disable-introspection', d)} "
+EXTRA_OEMESON:prepend:class-target = "${@['', '${GIRMESONTARGET}'][d.getVar('GIR_MESON_OPTION') != '']}"
+# When building native recipes, disable introspection, as it is not necessary,
+# pulls in additional dependencies, and makes build times longer
+EXTRA_OECONF:prepend:class-native = "--disable-introspection "
+EXTRA_OECONF:prepend:class-nativesdk = "--disable-introspection "
+EXTRA_OEMESON:prepend:class-native = "${@['', '${GIRMESONBUILD}'][d.getVar('GIR_MESON_OPTION') != '']}"
+EXTRA_OEMESON:prepend:class-nativesdk = "${@['', '${GIRMESONBUILD}'][d.getVar('GIR_MESON_OPTION') != '']}"
+
+# Generating introspection data depends on a combination of native and target
+# introspection tools, and qemu to run the target tools.
+DEPENDS:append:class-target = " gobject-introspection gobject-introspection-native qemu-native"
+
+# Even though introspection is disabled on -native, gobject-introspection package is still
+# needed for m4 macros.
+DEPENDS:append:class-native = " gobject-introspection-native"
+DEPENDS:append:class-nativesdk = " gobject-introspection-native"
+
+# This is used by introspection tools to find .gir includes
+export XDG_DATA_DIRS = "${STAGING_DATADIR}:${STAGING_LIBDIR}"
+
+do_configure:prepend:class-target () {
+    # introspection.m4 pre-packaged with upstream tarballs does not yet
+    # have our fixes
+    mkdir -p ${S}/m4
+    cp ${STAGING_DIR_TARGET}/${datadir}/aclocal/introspection.m4 ${S}/m4
+}
+
+# .typelib files are needed at runtime and so they go to the main package (so
+# they'll be together with libraries they support).
+FILES:${PN}:append = " ${libdir}/girepository-*/*.typelib" 
+    
+# .gir files go to dev package, as they're needed for developing (but not for
+# running) things that depends on introspection.
+FILES:${PN}-dev:append = " ${datadir}/gir-*/*.gir ${libdir}/gir-*/*.gir"
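
A recipe whose meson option is not literally 'introspection' would, as a sketch, override the default like this (the option name below is hypothetical; check the project's meson_options.txt):

    inherit gobject-introspection
    GIR_MESON_OPTION = "gir"
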
diff --git a/poky/meta/classes-recipe/grub-efi-cfg.bbclass b/poky/meta/classes-recipe/grub-efi-cfg.bbclass
new file mode 100644
index 0000000..52e85a3
--- /dev/null
+++ b/poky/meta/classes-recipe/grub-efi-cfg.bbclass
@@ -0,0 +1,122 @@
+# grub-efi.bbclass
+# Copyright (c) 2011, Intel Corporation.
+#
+# SPDX-License-Identifier: MIT
+
+# Provide grub-efi specific functions for building bootable images.
+
+# External variables
+# ${INITRD} - indicates a list of filesystem images to concatenate and use as an initrd (optional)
+# ${ROOTFS} - indicates a filesystem image to include as the root filesystem (optional)
+# ${GRUB_GFXSERIAL} - set this to 1 to have graphics and serial in the boot menu
+# ${LABELS} - a list of targets for the automatic config
+# ${APPEND} - an override list of append strings for each label
+# ${GRUB_OPTS} - additional options to add to the config, ';' delimited (optional)
+# ${GRUB_TIMEOUT} - timeout before executing the default label (optional)
+# ${GRUB_ROOT} - grub's root device.
+
+GRUB_SERIAL ?= "console=ttyS0,115200"
+GRUB_CFG_VM = "${S}/grub_vm.cfg"
+GRUB_CFG_LIVE = "${S}/grub_live.cfg"
+GRUB_TIMEOUT ?= "10"
+#FIXME: build this from the machine config
+GRUB_OPTS ?= "serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
+
+GRUB_ROOT ?= "${ROOT}"
+APPEND ?= ""
+
+# Uses MACHINE specific KERNEL_IMAGETYPE
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+# Need UUID utility code.
+inherit fs-uuid
+
+python build_efi_cfg() {
+    import sys
+
+    workdir = d.getVar('WORKDIR')
+    if not workdir:
+        bb.error("WORKDIR not defined, unable to package")
+        return
+
+    gfxserial = d.getVar('GRUB_GFXSERIAL') or ""
+
+    labels = d.getVar('LABELS')
+    if not labels:
+        bb.debug(1, "LABELS not defined, nothing to do")
+        return
+
+    if labels == []:
+        bb.debug(1, "No labels, nothing to do")
+        return
+
+    cfile = d.getVar('GRUB_CFG')
+    if not cfile:
+        bb.fatal('Unable to read GRUB_CFG')
+
+    try:
+        cfgfile = open(cfile, 'w')
+    except OSError:
+        bb.fatal('Unable to open %s' % cfile)
+
+    cfgfile.write('# Automatically created by OE\n')
+
+    opts = d.getVar('GRUB_OPTS')
+    if opts:
+        for opt in opts.split(';'):
+            cfgfile.write('%s\n' % opt)
+
+    cfgfile.write('default=%s\n' % (labels.split()[0]))
+
+    timeout = d.getVar('GRUB_TIMEOUT')
+    if timeout:
+        cfgfile.write('timeout=%s\n' % timeout)
+    else:
+        cfgfile.write('timeout=50\n')
+
+    root = d.getVar('GRUB_ROOT')
+    if not root:
+        bb.fatal('GRUB_ROOT not defined')
+
+    if gfxserial == "1":
+        btypes = [ [ " graphics console", "" ],
+            [ " serial console", d.getVar('GRUB_SERIAL') or "" ] ]
+    else:
+        btypes = [ [ "", "" ] ]
+
+    for label in labels.split():
+        localdata = d.createCopy()
+
+        overrides = localdata.getVar('OVERRIDES')
+        if not overrides:
+            bb.fatal('OVERRIDES not defined')
+
+        localdata.setVar('OVERRIDES', 'grub_' + label + ':' + overrides)
+
+        for btype in btypes:
+            cfgfile.write('\nmenuentry \'%s%s\'{\n' % (label, btype[0]))
+            lb = label
+            if label == "install":
+                lb = "install-efi"
+            kernel = localdata.getVar('KERNEL_IMAGETYPE')
+            cfgfile.write('linux /%s LABEL=%s' % (kernel, lb))
+
+            cfgfile.write(' %s' % replace_rootfs_uuid(d, root))
+
+            append = localdata.getVar('APPEND')
+            initrd = localdata.getVar('INITRD')
+
+            if append:
+                append = replace_rootfs_uuid(d, append)
+                cfgfile.write(' %s' % (append))
+
+            cfgfile.write(' %s' % btype[1])
+            cfgfile.write('\n')
+
+            if initrd:
+                cfgfile.write('initrd /initrd')
+            cfgfile.write('\n}\n')
+
+    cfgfile.close()
+}
+build_efi_cfg[vardepsexclude] += "OVERRIDES"
diff --git a/poky/meta/classes-recipe/grub-efi.bbclass b/poky/meta/classes-recipe/grub-efi.bbclass
new file mode 100644
index 0000000..4afd121
--- /dev/null
+++ b/poky/meta/classes-recipe/grub-efi.bbclass
@@ -0,0 +1,14 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit grub-efi-cfg
+require conf/image-uefi.conf
+
+efi_populate() {
+	efi_populate_common "$1" grub-efi
+
+	install -m 0644 ${GRUB_CFG} ${DEST}${EFIDIR}/grub.cfg
+}
diff --git a/poky/meta/classes-recipe/gsettings.bbclass b/poky/meta/classes-recipe/gsettings.bbclass
new file mode 100644
index 0000000..adb027e
--- /dev/null
+++ b/poky/meta/classes-recipe/gsettings.bbclass
@@ -0,0 +1,48 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# A bbclass to handle installed GSettings (glib) schemas, updating the compiled
+# form on package install and remove.
+#
+# The compiled schemas are platform-agnostic, so we can depend on
+# glib-2.0-native for the native tool and run the postinst script when the
+# rootfs builds to save a little time on first boot.
+
+# TODO use a trigger so that this runs once per package operation run
+
+GSETTINGS_PACKAGE ?= "${PN}"
+
+python __anonymous() {
+    pkg = d.getVar("GSETTINGS_PACKAGE")
+    if pkg:
+        d.appendVar("PACKAGE_WRITE_DEPS", " glib-2.0-native")
+        d.appendVar("RDEPENDS:" + pkg, " ${MLPREFIX}glib-2.0-utils")
+        d.appendVar("FILES:" + pkg, " ${datadir}/glib-2.0/schemas")
+}
+
+gsettings_postinstrm () {
+	glib-compile-schemas $D${datadir}/glib-2.0/schemas
+}
+
+python populate_packages:append () {
+    pkg = d.getVar('GSETTINGS_PACKAGE')
+    if pkg:
+        bb.note("adding gsettings postinst scripts to %s" % pkg)
+
+        postinst = d.getVar('pkg_postinst:%s' % pkg) or d.getVar('pkg_postinst')
+        if not postinst:
+            postinst = '#!/bin/sh\n'
+        postinst += d.getVar('gsettings_postinstrm')
+        d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+        bb.note("adding gsettings postrm scripts to %s" % pkg)
+
+        postrm = d.getVar('pkg_postrm:%s' % pkg) or d.getVar('pkg_postrm')
+        if not postrm:
+            postrm = '#!/bin/sh\n'
+        postrm += d.getVar('gsettings_postinstrm')
+        d.setVar('pkg_postrm:%s' % pkg, postrm)
+}
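
A hypothetical recipe that ships its schemas in a separate package would use the class like this (the package name is illustrative):

    inherit gsettings
    GSETTINGS_PACKAGE = "${PN}-schemas"
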
diff --git a/poky/meta/classes-recipe/gtk-doc.bbclass b/poky/meta/classes-recipe/gtk-doc.bbclass
new file mode 100644
index 0000000..68fa2cc
--- /dev/null
+++ b/poky/meta/classes-recipe/gtk-doc.bbclass
@@ -0,0 +1,89 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Helper class to pull in the right gtk-doc dependencies and configure
+# gtk-doc to enable or disable documentation building (which requires the
+# use of usermode qemu).
+
+# This variable is set to True if api-documentation is in
+# DISTRO_FEATURES and qemu-usermode is in MACHINE_FEATURES, and False otherwise.
+#
+# It should be used in recipes to determine whether gtk-doc based documentation should be built,
+# so that qemu use can be avoided when necessary.
+GTKDOC_ENABLED:class-native = "False"
+GTKDOC_ENABLED ?= "${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', \
+                      bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', 'True', 'False', d), 'False', d)}"
+
+# meson: default option name to enable/disable gtk-doc. This matches most
+# projects' configuration. If in doubt, check meson_options.txt in the
+# project's source tree.
+GTKDOC_MESON_OPTION ?= 'docs'
+GTKDOC_MESON_ENABLE_FLAG ?= 'true'
+GTKDOC_MESON_DISABLE_FLAG ?= 'false'
+
+# Auto enable/disable based on GTKDOC_ENABLED
+EXTRA_OECONF:prepend:class-target = "${@bb.utils.contains('GTKDOC_ENABLED', 'True', '--enable-gtk-doc --enable-gtk-doc-html --disable-gtk-doc-pdf', \
+                                                                                    '--disable-gtk-doc', d)} "
+EXTRA_OEMESON:prepend:class-target = "-D${GTKDOC_MESON_OPTION}=${@bb.utils.contains('GTKDOC_ENABLED', 'True', '${GTKDOC_MESON_ENABLE_FLAG}', '${GTKDOC_MESON_DISABLE_FLAG}', d)} "
+
+# When building native recipes, disable gtkdoc, as it is not necessary,
+# pulls in additional dependencies, and makes build times longer
+EXTRA_OECONF:prepend:class-native = "--disable-gtk-doc "
+EXTRA_OECONF:prepend:class-nativesdk = "--disable-gtk-doc "
+EXTRA_OEMESON:prepend:class-native = "-D${GTKDOC_MESON_OPTION}=${GTKDOC_MESON_DISABLE_FLAG} "
+EXTRA_OEMESON:prepend:class-nativesdk = "-D${GTKDOC_MESON_OPTION}=${GTKDOC_MESON_DISABLE_FLAG} "
+
+# Even though gtkdoc is disabled on -native, gtk-doc package is still
+# needed for m4 macros.
+DEPENDS:append = " gtk-doc-native"
+
+# The documentation directory, where the infrastructure will be copied.
+# gtkdocize has a default of "." so to handle out-of-tree builds set this to $S.
+GTKDOC_DOCDIR ?= "${S}"
+
+export STAGING_DIR_HOST
+
+inherit python3native pkgconfig qemu
+DEPENDS:append = "${@' qemu-native' if d.getVar('GTKDOC_ENABLED') == 'True' else ''}"
+
+do_configure:prepend () {
+	# Need to use ||true as this is only needed if configure.ac both exists
+	# and uses GTK_DOC_CHECK.
+	gtkdocize --srcdir ${S} --docdir ${GTKDOC_DOCDIR} || true
+}
+
+do_compile:prepend:class-target () {
+    if [ ${GTKDOC_ENABLED} = True ]; then
+        # Write out a qemu wrapper that will be given to gtkdoc-scangobj so that it
+        # can run target helper binaries through that.
+        qemu_binary="${@qemu_wrapper_cmdline(d, '$STAGING_DIR_HOST', ['\\$GIR_EXTRA_LIBS_PATH','$STAGING_DIR_HOST/${libdir}','$STAGING_DIR_HOST/${base_libdir}'])}"
+        cat > ${B}/gtkdoc-qemuwrapper << EOF
+#!/bin/sh
+# Use a modules directory which doesn't exist so we don't load random things
+# which may then get deleted (or their dependencies) and potentially segfault
+export GIO_MODULE_DIR=${STAGING_LIBDIR}/gio/modules-dummy
+
+GIR_EXTRA_LIBS_PATH=\`find ${B} -name *.so -printf "%h\n"|sort|uniq| tr '\n' ':'\`\$GIR_EXTRA_LIBS_PATH
+GIR_EXTRA_LIBS_PATH=\`find ${B} -name .libs| tr '\n' ':'\`\$GIR_EXTRA_LIBS_PATH
+
+# meson sets this wrongly (only to libs in build-dir), qemu_wrapper_cmdline() and GIR_EXTRA_LIBS_PATH take care of it properly
+unset LD_LIBRARY_PATH
+
+if [ -d ".libs" ]; then
+    $qemu_binary ".libs/\$@"
+else
+    $qemu_binary "\$@"
+fi
+
+if [ \$? -ne 0 ]; then
+    echo "If the above error message is about missing .so libraries, then setting up GIR_EXTRA_LIBS_PATH in the recipe should help."
+    echo "(typically like this: GIR_EXTRA_LIBS_PATH=\"$""{B}/something/.libs\" )"
+    exit 1
+fi
+EOF
+        chmod +x ${B}/gtkdoc-qemuwrapper
+    fi
+}
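
As a sketch, a meson-based recipe whose documentation option is named differently from the 'docs' default would set (the option name is illustrative):

    inherit gtk-doc
    GTKDOC_MESON_OPTION = "gtk_doc"
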
diff --git a/poky/meta/classes-recipe/gtk-icon-cache.bbclass b/poky/meta/classes-recipe/gtk-icon-cache.bbclass
new file mode 100644
index 0000000..17c7eb7
--- /dev/null
+++ b/poky/meta/classes-recipe/gtk-icon-cache.bbclass
@@ -0,0 +1,95 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+FILES:${PN} += "${datadir}/icons/hicolor"
+
+GTKIC_VERSION ??= '3'
+
+GTKPN = "${@ 'gtk4' if d.getVar('GTKIC_VERSION') == '4' else 'gtk+3' }"
+GTKIC_CMD = "${@ 'gtk4-update-icon-cache' if d.getVar('GTKIC_VERSION') == '4' else 'gtk-update-icon-cache-3.0.0' }"
+
+# gtk+3/gtk4 require GTK3DISTROFEATURES; depending on them makes every recipe
+# that inherits this class require GTK3DISTROFEATURES as well
+inherit features_check
+ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
+
+DEPENDS +=" ${@ '' if d.getVar('BPN') == 'hicolor-icon-theme' else 'hicolor-icon-theme' } \
+            ${@ '' if d.getVar('BPN') == 'gdk-pixbuf' else 'gdk-pixbuf' } \
+            ${@ '' if d.getVar('BPN') == d.getVar('GTKPN') else d.getVar('GTKPN') } \
+            ${GTKPN}-native \
+"
+
+PACKAGE_WRITE_DEPS += "${GTKPN}-native gdk-pixbuf-native"
+
+gtk_icon_cache_postinst() {
+if [ "x$D" != "x" ]; then
+	$INTERCEPT_DIR/postinst_intercept update_gtk_icon_cache ${PKG} \
+		mlprefix=${MLPREFIX} \
+		libdir_native=${libdir_native}
+else
+
+	# Update the pixbuf loaders in case they haven't been registered yet
+	${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders --update-cache
+
+	for icondir in /usr/share/icons/* ; do
+		if [ -d $icondir ] ; then
+			${GTKIC_CMD} -fqt  $icondir
+		fi
+	done
+fi
+}
+
+gtk_icon_cache_postrm() {
+if [ "x$D" != "x" ]; then
+	$INTERCEPT_DIR/postinst_intercept update_gtk_icon_cache ${PKG} \
+		mlprefix=${MLPREFIX} \
+		libdir=${libdir}
+else
+	for icondir in /usr/share/icons/* ; do
+		if [ -d $icondir ] ; then
+			${GTKIC_CMD} -qt  $icondir
+		fi
+	done
+fi
+}
+
+python populate_packages:append () {
+    packages = d.getVar('PACKAGES').split()
+    pkgdest =  d.getVar('PKGDEST')
+    
+    for pkg in packages:
+        icon_dir = '%s/%s/%s/icons' % (pkgdest, pkg, d.getVar('datadir'))
+        if not os.path.exists(icon_dir):
+            continue
+
+        bb.note("adding hicolor-icon-theme dependency to %s" % pkg)
+        rdepends = ' ' + d.getVar('MLPREFIX', False) + "hicolor-icon-theme"
+        d.appendVar('RDEPENDS:%s' % pkg, rdepends)
+
+        #gtk_icon_cache_postinst depend on gdk-pixbuf and gtk+3/gtk4
+        bb.note("adding gdk-pixbuf dependency to %s" % pkg)
+        rdepends = ' ' + d.getVar('MLPREFIX', False) + "gdk-pixbuf"
+        d.appendVar('RDEPENDS:%s' % pkg, rdepends)
+
+        bb.note("adding %s dependency to %s" % (d.getVar('GTKPN'), pkg))
+        rdepends = ' ' + d.getVar('MLPREFIX', False) + d.getVar('GTKPN')
+        d.appendVar('RDEPENDS:%s' % pkg, rdepends)
+
+        bb.note("adding gtk-icon-cache postinst and postrm scripts to %s" % pkg)
+
+        postinst = d.getVar('pkg_postinst:%s' % pkg)
+        if not postinst:
+            postinst = '#!/bin/sh\n'
+        postinst += d.getVar('gtk_icon_cache_postinst')
+        d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+        postrm = d.getVar('pkg_postrm:%s' % pkg)
+        if not postrm:
+            postrm = '#!/bin/sh\n'
+        postrm += d.getVar('gtk_icon_cache_postrm')
+        d.setVar('pkg_postrm:%s' % pkg, postrm)
+}
+
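
A GTK4 application recipe that installs icons would, as a sketch, select the gtk4 tooling explicitly:

    inherit gtk-icon-cache
    GTKIC_VERSION = "4"
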
diff --git a/poky/meta/classes-recipe/gtk-immodules-cache.bbclass b/poky/meta/classes-recipe/gtk-immodules-cache.bbclass
new file mode 100644
index 0000000..8fbe1dd
--- /dev/null
+++ b/poky/meta/classes-recipe/gtk-immodules-cache.bbclass
@@ -0,0 +1,82 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This class will update the inputmethod module cache for virtual keyboards
+#
+# Usage: Set GTKIMMODULES_PACKAGES to the packages that need to update the inputmethod modules
+
+PACKAGE_WRITE_DEPS += "qemu-native"
+
+inherit qemu
+
+GTKIMMODULES_PACKAGES ?= "${PN}"
+
+gtk_immodule_cache_postinst() {
+if [ "x$D" != "x" ]; then
+    $INTERCEPT_DIR/postinst_intercept update_gtk_immodules_cache ${PKG} \
+            mlprefix=${MLPREFIX} \
+            binprefix=${MLPREFIX} \
+            libdir=${libdir} \
+            libexecdir=${libexecdir} \
+            base_libdir=${base_libdir} \
+            bindir=${bindir}
+else
+    if [ ! -z `which gtk-query-immodules-2.0` ]; then
+        gtk-query-immodules-2.0 > ${libdir}/gtk-2.0/2.10.0/immodules.cache
+    fi
+    if [ ! -z `which gtk-query-immodules-3.0` ]; then
+        mkdir -p ${libdir}/gtk-3.0/3.0.0
+        gtk-query-immodules-3.0 > ${libdir}/gtk-3.0/3.0.0/immodules.cache
+    fi
+fi
+}
+
+gtk_immodule_cache_postrm() {
+if [ "x$D" != "x" ]; then
+    $INTERCEPT_DIR/postinst_intercept update_gtk_immodules_cache ${PKG} \
+            mlprefix=${MLPREFIX} \
+            binprefix=${MLPREFIX} \
+            libdir=${libdir} \
+            libexecdir=${libexecdir} \
+            base_libdir=${base_libdir} \
+            bindir=${bindir}
+else
+    if [ ! -z `which gtk-query-immodules-2.0` ]; then
+        gtk-query-immodules-2.0 > ${libdir}/gtk-2.0/2.10.0/immodules.cache
+    fi
+    if [ ! -z `which gtk-query-immodules-3.0` ]; then
+        gtk-query-immodules-3.0 > ${libdir}/gtk-3.0/3.0.0/immodules.cache
+    fi
+fi
+}
+
+python populate_packages:append () {
+    gtkimmodules_pkgs = d.getVar('GTKIMMODULES_PACKAGES').split()
+
+    for pkg in gtkimmodules_pkgs:
+            bb.note("adding gtk-immodule-cache postinst and postrm scripts to %s" % pkg)
+
+            postinst = d.getVar('pkg_postinst:%s' % pkg)
+            if not postinst:
+                postinst = '#!/bin/sh\n'
+            postinst += d.getVar('gtk_immodule_cache_postinst')
+            d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+            postrm = d.getVar('pkg_postrm:%s' % pkg)
+            if not postrm:
+                postrm = '#!/bin/sh\n'
+            postrm += d.getVar('gtk_immodule_cache_postrm')
+            d.setVar('pkg_postrm:%s' % pkg, postrm)
+}
+
+python __anonymous() {
+    if not bb.data.inherits_class('native', d) and not bb.data.inherits_class('cross', d):
+        gtkimmodules_check = d.getVar('GTKIMMODULES_PACKAGES', False)
+        if not gtkimmodules_check:
+            bb_filename = d.getVar('FILE', False)
+            bb.fatal("ERROR: %s inherits gtk-immodules-cache but doesn't set GTKIMMODULES_PACKAGES" % bb_filename)
+}
+
diff --git a/poky/meta/classes-recipe/image-artifact-names.bbclass b/poky/meta/classes-recipe/image-artifact-names.bbclass
new file mode 100644
index 0000000..5c4e746
--- /dev/null
+++ b/poky/meta/classes-recipe/image-artifact-names.bbclass
@@ -0,0 +1,28 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+##################################################################
+# Specific image creation and rootfs population info.
+##################################################################
+
+IMAGE_BASENAME ?= "${PN}"
+IMAGE_VERSION_SUFFIX ?= "-${DATETIME}"
+IMAGE_VERSION_SUFFIX[vardepsexclude] += "DATETIME SOURCE_DATE_EPOCH"
+IMAGE_NAME ?= "${IMAGE_BASENAME}-${MACHINE}${IMAGE_VERSION_SUFFIX}"
+IMAGE_LINK_NAME ?= "${IMAGE_BASENAME}-${MACHINE}"
+
+# IMAGE_NAME is the base name for everything produced when building images.
+# The actual image that contains the rootfs has an additional suffix (.rootfs
+# by default) followed by additional suffixes which describe the format (.ext4,
+# .ext4.xz, etc.).
+IMAGE_NAME_SUFFIX ??= ".rootfs"
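+# As an illustration (assumed values): with IMAGE_BASENAME "core-image-minimal",
+# MACHINE "qemux86-64" and an ext4 rootfs, the deployed artifact ends up named
+# something like core-image-minimal-qemux86-64-20220101120000.rootfs.ext4.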
+
+python () {
+    if bb.data.inherits_class('deploy', d) and d.getVar("IMAGE_VERSION_SUFFIX") == "-${DATETIME}":
+        import datetime
+        d.setVar("IMAGE_VERSION_SUFFIX", "-" + datetime.datetime.fromtimestamp(int(d.getVar("SOURCE_DATE_EPOCH")), datetime.timezone.utc).strftime('%Y%m%d%H%M%S'))
+        d.setVarFlag("IMAGE_VERSION_SUFFIX", "vardepvalue", "")
+}
diff --git a/poky/meta/classes-recipe/image-combined-dbg.bbclass b/poky/meta/classes-recipe/image-combined-dbg.bbclass
new file mode 100644
index 0000000..dcf1968
--- /dev/null
+++ b/poky/meta/classes-recipe/image-combined-dbg.bbclass
@@ -0,0 +1,15 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+IMAGE_PREPROCESS_COMMAND:append = " combine_dbg_image; "
+
+combine_dbg_image () {
+        if [ "${IMAGE_GEN_DEBUGFS}" = "1" -a -e ${IMAGE_ROOTFS}-dbg ]; then
+                # copy target files into -dbg rootfs, so it can be used for
+                # debug purposes directly
+                tar -C ${IMAGE_ROOTFS} -cf - . | tar -C ${IMAGE_ROOTFS}-dbg -xf -
+        fi
+}
diff --git a/poky/meta/classes-recipe/image-container.bbclass b/poky/meta/classes-recipe/image-container.bbclass
new file mode 100644
index 0000000..d24b030
--- /dev/null
+++ b/poky/meta/classes-recipe/image-container.bbclass
@@ -0,0 +1,27 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+ROOTFS_BOOTSTRAP_INSTALL = ""
+IMAGE_TYPES_MASKED += "container"
+IMAGE_TYPEDEP:container = "tar.bz2"
+
+python __anonymous() {
+    if "container" in d.getVar("IMAGE_FSTYPES") and \
+       d.getVar("IMAGE_CONTAINER_NO_DUMMY") != "1" and \
+       "linux-dummy" not in d.getVar("PREFERRED_PROVIDER_virtual/kernel"):
+        msg = '"container" is in IMAGE_FSTYPES, but ' \
+              'PREFERRED_PROVIDER_virtual/kernel is not "linux-dummy". ' \
+              'Unless a particular kernel is needed, using linux-dummy will ' \
+              'prevent a kernel from being built, which can reduce ' \
+              'build times. If you don\'t want to use "linux-dummy", set ' \
+              '"IMAGE_CONTAINER_NO_DUMMY" to "1".'
+
+        # Raising skip recipe was Paul's clever idea. It causes the error to
+        # only be shown for the recipes actually requested to build, rather
+        # than bb.fatal which would appear for all recipes inheriting the
+        # class.
+        raise bb.parse.SkipRecipe(msg)
+}
diff --git a/poky/meta/classes-recipe/image-live.bbclass b/poky/meta/classes-recipe/image-live.bbclass
new file mode 100644
index 0000000..1034acc
--- /dev/null
+++ b/poky/meta/classes-recipe/image-live.bbclass
@@ -0,0 +1,265 @@
+# Copyright (C) 2004, Advanced Micro Devices, Inc.
+#
+# SPDX-License-Identifier: MIT
+
+# Creates a bootable image using syslinux, your kernel and an optional
+# initrd
+
+#
+# End result is two things:
+#
+# 1. A .hddimg file which is an msdos filesystem containing syslinux, a kernel,
+# an initrd and a rootfs image. These can be written to hard disks directly and
+# also booted on USB flash disks (write them there with dd).
+#
+# 2. A CD .iso image
+
+# The boot process is that the initrd boots and processes which label was
+# selected in syslinux. Actions based on the label are then performed (e.g.
+# installing to an hdd)
+
+# External variables (also used by syslinux.bbclass)
+# ${INITRD} - indicates a list of filesystem images to concatenate and use as an initrd (optional)
+# ${HDDIMG_ID} - FAT image volume-id
+# ${ROOTFS} - indicates a filesystem image to include as the root filesystem (optional)
+
+inherit live-vm-common image-artifact-names
+
+do_bootimg[depends] += "dosfstools-native:do_populate_sysroot \
+                        mtools-native:do_populate_sysroot \
+                        cdrtools-native:do_populate_sysroot \
+                        virtual/kernel:do_deploy \
+                        ${MLPREFIX}syslinux:do_populate_sysroot \
+                        syslinux-native:do_populate_sysroot \
+                        ${@'%s:do_image_%s' % (d.getVar('PN'), d.getVar('LIVE_ROOTFS_TYPE').replace('-', '_')) if d.getVar('ROOTFS') else ''} \
+                        "
+
+
+LABELS_LIVE ?= "boot install"
+ROOT_LIVE ?= "root=/dev/ram0"
+INITRD_IMAGE_LIVE ?= "${MLPREFIX}core-image-minimal-initramfs"
+INITRD_LIVE ?= "${DEPLOY_DIR_IMAGE}/${INITRD_IMAGE_LIVE}-${MACHINE}.${INITRAMFS_FSTYPES}"
+
+LIVE_ROOTFS_TYPE ?= "ext4"
+ROOTFS ?= "${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.${LIVE_ROOTFS_TYPE}"
+
+IMAGE_TYPEDEP:live = "${LIVE_ROOTFS_TYPE}"
+IMAGE_TYPEDEP:iso = "${LIVE_ROOTFS_TYPE}"
+IMAGE_TYPEDEP:hddimg = "${LIVE_ROOTFS_TYPE}"
+IMAGE_TYPES_MASKED += "live hddimg iso"
+
+python() {
+    image_b = d.getVar('IMAGE_BASENAME')
+    initrd_i = d.getVar('INITRD_IMAGE_LIVE')
+    if image_b == initrd_i:
+        bb.error('INITRD_IMAGE_LIVE %s cannot use the live, hddimg or iso image types.' % initrd_i)
+        bb.fatal('Check IMAGE_FSTYPES and INITRAMFS_FSTYPES settings.')
+    elif initrd_i:
+        d.appendVarFlag('do_bootimg', 'depends', ' %s:do_image_complete' % initrd_i)
+}
+
+HDDDIR = "${S}/hddimg"
+ISODIR = "${S}/iso"
+EFIIMGDIR = "${S}/efi_img"
+COMPACT_ISODIR = "${S}/iso.z"
+
+ISOLINUXDIR ?= "/isolinux"
+ISO_BOOTIMG = "isolinux/isolinux.bin"
+ISO_BOOTCAT = "isolinux/boot.cat"
+MKISOFS_OPTIONS = "-no-emul-boot -boot-load-size 4 -boot-info-table"
+
+BOOTIMG_VOLUME_ID   ?= "boot"
+BOOTIMG_EXTRA_SPACE ?= "512"
+
+populate_live() {
+    populate_kernel $1
+	if [ -s "${ROOTFS}" ]; then
+		install -m 0644 ${ROOTFS} $1/rootfs.img
+	fi
+}
+
+build_iso() {
+	# Only create an ISO if we have an INITRD and the live or iso image type was selected
+	if [ -z "${INITRD}" ] || [ "${@bb.utils.contains_any('IMAGE_FSTYPES', 'live iso', '1', '0', d)}" != "1" ]; then
+		bbnote "ISO image will not be created."
+		return
+	fi
+	# ${INITRD} is a list of multiple filesystem images
+	for fs in ${INITRD}
+	do
+		if [ ! -s "$fs" ]; then
+			bbwarn "ISO image will not be created. $fs is invalid."
+			return
+		fi
+	done
+
+	populate_live ${ISODIR}
+
+	if [ "${PCBIOS}" = "1" ]; then
+		syslinux_iso_populate ${ISODIR}
+	fi
+	if [ "${EFI}" = "1" ]; then
+		efi_iso_populate ${ISODIR}
+		build_fat_img ${EFIIMGDIR} ${ISODIR}/efi.img
+	fi
+
+	# EFI only
+	if [ "${PCBIOS}" != "1" ] && [ "${EFI}" = "1" ] ; then
+		# Work around a bug in isohybrid where it requires isolinux.bin
+		# in the boot catalog, even though it is not used
+		mkdir -p ${ISODIR}/${ISOLINUXDIR}
+		install -m 0644 ${STAGING_DATADIR}/syslinux/isolinux.bin ${ISODIR}${ISOLINUXDIR}
+	fi
+
+	# We used to have support for zisofs; this is a relic of that
+	mkisofs_compress_opts="-r"
+
+	# Check the size of ${ISODIR}/rootfs.img and use mkisofs -iso-level 3
+	# when it exceeds 3.8GB. The specification limit is 4GB - 1 byte, so we
+	# need to leave some space for other files.
+	mkisofs_iso_level=""
+
+	if [ -n "${ROOTFS}" ] && [ -s "${ROOTFS}" ]; then
+		rootfs_img_size=`stat -c '%s' ${ISODIR}/rootfs.img`
+		# 4080218931 = 3.8 * 1024 * 1024 * 1024
+		if [ $rootfs_img_size -gt 4080218931 ]; then
+			bbnote "${ISODIR}/rootfs.img exceeds 3.8GB, using '-iso-level 3' for mkisofs"
+			mkisofs_iso_level="-iso-level 3"
+		fi
+	fi
+
+	if [ "${PCBIOS}" = "1" ] && [ "${EFI}" != "1" ] ; then
+		# PCBIOS only media
+		mkisofs -V ${BOOTIMG_VOLUME_ID} \
+		        -o ${IMGDEPLOYDIR}/${IMAGE_NAME}.iso \
+			-b ${ISO_BOOTIMG} -c ${ISO_BOOTCAT} \
+			$mkisofs_compress_opts \
+			${MKISOFS_OPTIONS} $mkisofs_iso_level ${ISODIR}
+	else
+		# EFI only OR EFI+PCBIOS
+		mkisofs -A ${BOOTIMG_VOLUME_ID} -V ${BOOTIMG_VOLUME_ID} \
+		        -o ${IMGDEPLOYDIR}/${IMAGE_NAME}.iso \
+			-b ${ISO_BOOTIMG} -c ${ISO_BOOTCAT} \
+			$mkisofs_compress_opts ${MKISOFS_OPTIONS} $mkisofs_iso_level \
+			-eltorito-alt-boot -eltorito-platform efi \
+			-b efi.img -no-emul-boot \
+			${ISODIR}
+		isohybrid_args="-u"
+	fi
+
+	isohybrid $isohybrid_args ${IMGDEPLOYDIR}/${IMAGE_NAME}.iso
+}
+
+build_fat_img() {
+	FATSOURCEDIR=$1
+	FATIMG=$2
+
+	# Calculate the size required for the final image including the
+	# data and filesystem overhead.
+	# Sectors: 512 bytes
+	#  Blocks: 1024 bytes
+
+	# Determine the sector count just for the data
+	SECTORS=$(expr $(du --apparent-size -ks ${FATSOURCEDIR} | cut -f 1) \* 2)
+
+	# Account for the filesystem overhead. This includes directory
+	# entries in the clusters as well as the FAT itself.
+	# Assumptions:
+	#   FAT32 (12 or 16 may be selected by mkdosfs, but the extra
+	#   padding will be minimal on those smaller images and not
+	#   worth the logic here to calculate the smaller FAT sizes)
+	#   < 16 entries per directory
+	#   8.3 filenames only
+
+	# 32 bytes per dir entry
+	DIR_BYTES=$(expr $(find ${FATSOURCEDIR} | tail -n +2 | wc -l) \* 32)
+	# 32 bytes for every end-of-directory dir entry
+	DIR_BYTES=$(expr $DIR_BYTES + $(expr $(find ${FATSOURCEDIR} -type d | tail -n +2 | wc -l) \* 32))
+	# 4 bytes per FAT entry per sector of data
+	FAT_BYTES=$(expr $SECTORS \* 4)
+	# 4 bytes per FAT entry per end-of-cluster list
+	FAT_BYTES=$(expr $FAT_BYTES + $(expr $(find ${FATSOURCEDIR} -type d | tail -n +2 | wc -l) \* 4))
+
+	# Use a ceiling function to determine FS overhead in sectors
+	DIR_SECTORS=$(expr $(expr $DIR_BYTES + 511) / 512)
+	# There are two FATs on the image
+	FAT_SECTORS=$(expr $(expr $(expr $FAT_BYTES + 511) / 512) \* 2)
+	SECTORS=$(expr $SECTORS + $(expr $DIR_SECTORS + $FAT_SECTORS))
+
+	# Determine the final size in blocks accounting for some padding
+	BLOCKS=$(expr $(expr $SECTORS / 2) + ${BOOTIMG_EXTRA_SPACE})
+
+	# mkdosfs will sometimes use FAT16 when it is not appropriate,
+	# resulting in a boot failure from SYSLINUX. Use FAT32 for
+	# images larger than 512MB, otherwise let mkdosfs decide.
+	if [ $(expr $BLOCKS / 1024) -gt 512 ]; then
+		FATSIZE="-F 32"
+	fi
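+
+	# Worked example with assumed numbers: 100 MiB (102400 KiB) of file data in
+	# a flat directory gives SECTORS = 204800, FAT_BYTES ~= 819200 and therefore
+	# FAT_SECTORS = 3200, so BLOCKS ~= (204800 + 3200) / 2 + 512 = 104512.
+	# That is roughly 102 MB, well below 512, so mkdosfs picks the FAT size itself.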
+
+	# mkdosfs will fail if ${FATIMG} exists. Since we are creating a
+	# new image, it is safe to delete any previous image.
+	if [ -e ${FATIMG} ]; then
+		rm ${FATIMG}
+	fi
+
+	if [ -z "${HDDIMG_ID}" ]; then
+		mkdosfs ${FATSIZE} -n ${BOOTIMG_VOLUME_ID} ${MKDOSFS_EXTRAOPTS} -C ${FATIMG} \
+			${BLOCKS}
+	else
+		mkdosfs ${FATSIZE} -n ${BOOTIMG_VOLUME_ID} ${MKDOSFS_EXTRAOPTS} -C ${FATIMG} \
+		${BLOCKS} -i ${HDDIMG_ID}
+	fi
+
+	# Copy FATSOURCEDIR recursively into the image file directly
+	mcopy -i ${FATIMG} -s ${FATSOURCEDIR}/* ::/
+}
+
+build_hddimg() {
+	# Create an HDD image
+	if [ "${@bb.utils.contains_any('IMAGE_FSTYPES', 'live hddimg', '1', '0', d)}" = "1" ] ; then
+		populate_live ${HDDDIR}
+
+		if [ "${PCBIOS}" = "1" ]; then
+			syslinux_hddimg_populate ${HDDDIR}
+		fi
+		if [ "${EFI}" = "1" ]; then
+			efi_hddimg_populate ${HDDDIR}
+		fi
+
+		# Check the size of ${HDDDIR}/rootfs.img, error out if it
+		# exceeds 4GB, it is the single file's max size of FAT fs.
+		if [ -f ${HDDDIR}/rootfs.img ]; then
+			rootfs_img_size=`stat -c '%s' ${HDDDIR}/rootfs.img`
+			max_size=`expr 4 \* 1024 \* 1024 \* 1024`
+			if [ $rootfs_img_size -ge $max_size ]; then
+				bberror "${HDDDIR}/rootfs.img rootfs size is greater than or equal to 4GB,"
+				bberror "and this doesn't work on a FAT filesystem. You can either:"
+				bberror "1) Reduce the size of rootfs.img, or"
+				bbfatal "2) Use wic, vmdk, vhd, vhdx or vdi instead of hddimg\n"
+			fi
+		fi
+
+		build_fat_img ${HDDDIR} ${IMGDEPLOYDIR}/${IMAGE_NAME}.hddimg
+
+		if [ "${PCBIOS}" = "1" ]; then
+			syslinux_hddimg_install
+		fi
+
+		chmod 644 ${IMGDEPLOYDIR}/${IMAGE_NAME}.hddimg
+	fi
+}
+
+python do_bootimg() {
+    set_live_vm_vars(d, 'LIVE')
+    if d.getVar("PCBIOS") == "1":
+        bb.build.exec_func('build_syslinux_cfg', d)
+    if d.getVar("EFI") == "1":
+        bb.build.exec_func('build_efi_cfg', d)
+    bb.build.exec_func('build_hddimg', d)
+    bb.build.exec_func('build_iso', d)
+    bb.build.exec_func('create_symlinks', d)
+}
+do_bootimg[subimages] = "hddimg iso"
+do_bootimg[imgsuffix] = "."
+
+addtask bootimg before do_image_complete after do_rootfs
diff --git a/poky/meta/classes-recipe/image-postinst-intercepts.bbclass b/poky/meta/classes-recipe/image-postinst-intercepts.bbclass
new file mode 100644
index 0000000..fc15926
--- /dev/null
+++ b/poky/meta/classes-recipe/image-postinst-intercepts.bbclass
@@ -0,0 +1,29 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Gather existing and candidate postinst intercepts from BBPATH
+POSTINST_INTERCEPTS_DIR ?= "${COREBASE}/scripts/postinst-intercepts"
+POSTINST_INTERCEPTS_PATHS ?= "${@':'.join('%s/postinst-intercepts' % p for p in '${BBPATH}'.split(':'))}:${POSTINST_INTERCEPTS_DIR}"
+
+python find_intercepts() {
+    intercepts = {}
+    search_paths = []
+    paths = d.getVar('POSTINST_INTERCEPTS_PATHS').split(':')
+    overrides = (':' + d.getVar('FILESOVERRIDES')).split(':') + ['']
+    search_paths = [os.path.join(p, op) for p in paths for op in overrides]
+    searched = oe.path.which_wild('*', ':'.join(search_paths), candidates=True)
+    files, chksums = [], []
+    for pathname, candidates in searched:
+        if os.path.isfile(pathname):
+            files.append(pathname)
+            chksums.append('%s:True' % pathname)
+            chksums.extend('%s:False' % c for c in candidates[:-1])
+
+    d.setVar('POSTINST_INTERCEPT_CHECKSUMS', ' '.join(chksums))
+    d.setVar('POSTINST_INTERCEPTS', ' '.join(files))
+}
+find_intercepts[eventmask] += "bb.event.RecipePreFinalise"
+addhandler find_intercepts
diff --git a/poky/meta/classes-recipe/image.bbclass b/poky/meta/classes-recipe/image.bbclass
new file mode 100644
index 0000000..e387645
--- /dev/null
+++ b/poky/meta/classes-recipe/image.bbclass
@@ -0,0 +1,684 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+IMAGE_CLASSES ??= ""
+
+# rootfs bootstrap install
+# warning -  image-container resets this
+ROOTFS_BOOTSTRAP_INSTALL = "run-postinsts"
+
+# Handle inherits of any of the image classes we need
+IMGCLASSES = "rootfs_${IMAGE_PKGTYPE} image_types ${IMAGE_CLASSES}"
+# Only Linux SDKs support populate_sdk_ext, fall back to populate_sdk_base
+# in the non-Linux SDK_OS case, such as mingw32
+IMGCLASSES += "${@['populate_sdk_base', 'populate_sdk_ext']['linux' in d.getVar("SDK_OS")]}"
+IMGCLASSES += "${@bb.utils.contains_any('IMAGE_FSTYPES', 'live iso hddimg', 'image-live', '', d)}"
+IMGCLASSES += "${@bb.utils.contains('IMAGE_FSTYPES', 'container', 'image-container', '', d)}"
+IMGCLASSES += "image_types_wic"
+IMGCLASSES += "rootfs-postcommands"
+IMGCLASSES += "image-postinst-intercepts"
+IMGCLASSES += "overlayfs-etc"
+inherit ${IMGCLASSES}
+
+TOOLCHAIN_TARGET_TASK += "${PACKAGE_INSTALL}"
+TOOLCHAIN_TARGET_TASK_ATTEMPTONLY += "${PACKAGE_INSTALL_ATTEMPTONLY}"
+POPULATE_SDK_POST_TARGET_COMMAND += "rootfs_sysroot_relativelinks; "
+
+LICENSE ?= "MIT"
+PACKAGES = ""
+DEPENDS += "${@' '.join(["%s-qemuwrapper-cross" % m for m in d.getVar("MULTILIB_VARIANTS").split()])} qemuwrapper-cross depmodwrapper-cross cross-localedef-native"
+RDEPENDS += "${PACKAGE_INSTALL} ${LINGUAS_INSTALL} ${IMAGE_INSTALL_DEBUGFS}"
+RRECOMMENDS += "${PACKAGE_INSTALL_ATTEMPTONLY}"
+PATH:prepend = "${@":".join(all_multilib_tune_values(d, 'STAGING_BINDIR_CROSS').split())}:"
+
+INHIBIT_DEFAULT_DEPS = "1"
+
+# IMAGE_FEATURES may contain any available package group
+IMAGE_FEATURES ?= ""
+IMAGE_FEATURES[type] = "list"
+IMAGE_FEATURES[validitems] += "debug-tweaks read-only-rootfs read-only-rootfs-delayed-postinsts stateless-rootfs empty-root-password allow-empty-password allow-root-login serial-autologin-root post-install-logging overlayfs-etc"
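+# For example (illustrative), an image recipe or local.conf might add:
+#   IMAGE_FEATURES += "debug-tweaks splash"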
+
+# Generate companion debugfs?
+IMAGE_GEN_DEBUGFS ?= "0"
+
+# These packages will be installed as additional into debug rootfs
+IMAGE_INSTALL_DEBUGFS ?= ""
+
+# These packages will be removed from a read-only rootfs after all other
+# packages have been installed
+ROOTFS_RO_UNNEEDED ??= "update-rc.d base-passwd shadow ${VIRTUAL-RUNTIME_update-alternatives} ${ROOTFS_BOOTSTRAP_INSTALL}"
+
+# packages to install from features
+FEATURE_INSTALL = "${@' '.join(oe.packagegroup.required_packages(oe.data.typed_value('IMAGE_FEATURES', d), d))}"
+FEATURE_INSTALL[vardepvalue] = "${FEATURE_INSTALL}"
+FEATURE_INSTALL_OPTIONAL = "${@' '.join(oe.packagegroup.optional_packages(oe.data.typed_value('IMAGE_FEATURES', d), d))}"
+FEATURE_INSTALL_OPTIONAL[vardepvalue] = "${FEATURE_INSTALL_OPTIONAL}"
+
+# Define some very basic feature package groups
+FEATURE_PACKAGES_package-management = "${ROOTFS_PKGMANAGE}"
+SPLASH ?= "${@bb.utils.contains("MACHINE_FEATURES", "screen", "psplash", "", d)}"
+FEATURE_PACKAGES_splash = "${SPLASH}"
+
+IMAGE_INSTALL_COMPLEMENTARY = '${@complementary_globs("IMAGE_FEATURES", d)}'
+
+def check_image_features(d):
+    valid_features = (d.getVarFlag('IMAGE_FEATURES', 'validitems') or "").split()
+    valid_features += d.getVarFlags('COMPLEMENTARY_GLOB').keys()
+    for var in d:
+        if var.startswith("FEATURE_PACKAGES_"):
+            valid_features.append(var[17:])
+    valid_features.sort()
+
+    features = set(oe.data.typed_value('IMAGE_FEATURES', d))
+    for feature in features:
+        if feature not in valid_features:
+            if bb.utils.contains('EXTRA_IMAGE_FEATURES', feature, True, False, d):
+                raise bb.parse.SkipRecipe("'%s' in IMAGE_FEATURES (added via EXTRA_IMAGE_FEATURES) is not a valid image feature. Valid features: %s" % (feature, ' '.join(valid_features)))
+            else:
+                raise bb.parse.SkipRecipe("'%s' in IMAGE_FEATURES is not a valid image feature. Valid features: %s" % (feature, ' '.join(valid_features)))
+
+IMAGE_INSTALL ?= ""
+IMAGE_INSTALL[type] = "list"
+export PACKAGE_INSTALL ?= "${IMAGE_INSTALL} ${ROOTFS_BOOTSTRAP_INSTALL} ${FEATURE_INSTALL}"
+PACKAGE_INSTALL_ATTEMPTONLY ?= "${FEATURE_INSTALL_OPTIONAL}"
+
+IMGDEPLOYDIR = "${WORKDIR}/deploy-${PN}-image-complete"
+
+# Images are generally built explicitly, do not need to be part of world.
+EXCLUDE_FROM_WORLD = "1"
+
+USE_DEVFS ?= "1"
+USE_DEPMOD ?= "1"
+
+PID = "${@os.getpid()}"
+
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+LDCONFIGDEPEND ?= "ldconfig-native:do_populate_sysroot"
+LDCONFIGDEPEND:libc-musl = ""
+
+# This is needed to have depmod data in PKGDATA_DIR,
+# but if you're building small initramfs image
+# e.g. to include it in your kernel, you probably
+# don't want this dependency, which is causing dependency loop
+KERNELDEPMODDEPEND ?= "virtual/kernel:do_packagedata"
+
+do_rootfs[depends] += " \
+    makedevs-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot ${LDCONFIGDEPEND} \
+    virtual/update-alternatives-native:do_populate_sysroot update-rc.d-native:do_populate_sysroot \
+    ${KERNELDEPMODDEPEND} \
+"
+do_rootfs[recrdeptask] += "do_packagedata"
+
+def rootfs_command_variables(d):
+    return ['ROOTFS_POSTPROCESS_COMMAND','ROOTFS_PREPROCESS_COMMAND','ROOTFS_POSTINSTALL_COMMAND','ROOTFS_POSTUNINSTALL_COMMAND','OPKG_PREPROCESS_COMMANDS','OPKG_POSTPROCESS_COMMANDS','IMAGE_POSTPROCESS_COMMAND',
+            'IMAGE_PREPROCESS_COMMAND','RPM_PREPROCESS_COMMANDS','RPM_POSTPROCESS_COMMANDS','DEB_PREPROCESS_COMMANDS','DEB_POSTPROCESS_COMMANDS']
+
+python () {
+    variables = rootfs_command_variables(d)
+    for var in variables:
+        if d.getVar(var, False):
+            d.setVarFlag(var, 'func', '1')
+}
+
+def rootfs_variables(d):
+    from oe.rootfs import variable_depends
+    variables = ['IMAGE_DEVICE_TABLE','IMAGE_DEVICE_TABLES','BUILD_IMAGES_FROM_FEEDS','IMAGE_TYPES_MASKED','IMAGE_ROOTFS_ALIGNMENT','IMAGE_OVERHEAD_FACTOR','IMAGE_ROOTFS_SIZE','IMAGE_ROOTFS_EXTRA_SPACE',
+                 'IMAGE_ROOTFS_MAXSIZE','IMAGE_NAME','IMAGE_LINK_NAME','IMAGE_MANIFEST','DEPLOY_DIR_IMAGE','IMAGE_FSTYPES','IMAGE_INSTALL_COMPLEMENTARY','IMAGE_LINGUAS', 'IMAGE_LINGUAS_COMPLEMENTARY', 'IMAGE_LOCALES_ARCHIVE',
+                 'MULTILIBRE_ALLOW_REP','MULTILIB_TEMP_ROOTFS','MULTILIB_VARIANTS','MULTILIBS','ALL_MULTILIB_PACKAGE_ARCHS','MULTILIB_GLOBAL_VARIANTS','BAD_RECOMMENDATIONS','NO_RECOMMENDATIONS',
+                 'PACKAGE_ARCHS','PACKAGE_CLASSES','TARGET_VENDOR','TARGET_ARCH','TARGET_OS','OVERRIDES','BBEXTENDVARIANT','FEED_DEPLOYDIR_BASE_URI','INTERCEPT_DIR','USE_DEVFS',
+                 'CONVERSIONTYPES', 'IMAGE_GEN_DEBUGFS', 'ROOTFS_RO_UNNEEDED', 'IMGDEPLOYDIR', 'PACKAGE_EXCLUDE_COMPLEMENTARY', 'REPRODUCIBLE_TIMESTAMP_ROOTFS', 'IMAGE_INSTALL_DEBUGFS']
+    variables.extend(rootfs_command_variables(d))
+    variables.extend(variable_depends(d))
+    return " ".join(variables)
+
+do_rootfs[vardeps] += "${@rootfs_variables(d)}"
+
+# This is needed to have the kernel image in DEPLOY_DIR.
+# This follows many common use cases and user expectations.
+# But if you are building an image which doesn't need the kernel image at all,
+# you can unset this variable manually.
+KERNEL_DEPLOY_DEPEND ?= "virtual/kernel:do_deploy"
+do_build[depends] += "${KERNEL_DEPLOY_DEPEND}"
+
+
+python () {
+    def extraimage_getdepends(task):
+        deps = ""
+        for dep in (d.getVar('EXTRA_IMAGEDEPENDS') or "").split():
+            if ":" in dep:
+                deps += " %s " % (dep)
+            else:
+                deps += " %s:%s" % (dep, task)
+        return deps
+
+    d.appendVarFlag('do_image_complete', 'depends', extraimage_getdepends('do_populate_sysroot'))
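+    # Illustrative example (assumed value): EXTRA_IMAGEDEPENDS = "u-boot grub:do_deploy"
+    # expands above to "u-boot:do_populate_sysroot grub:do_deploy".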
+
+    deps = " " + imagetypes_getdepends(d)
+    d.appendVarFlag('do_rootfs', 'depends', deps)
+
+    # Process IMAGE_FEATURES; we must do this before runtime_mapping_rename
+    # Check for replaced image features
+    features = set(oe.data.typed_value('IMAGE_FEATURES', d))
+    remain_features = features.copy()
+    for feature in features:
+        replaces = set((d.getVar("IMAGE_FEATURES_REPLACES_%s" % feature) or "").split())
+        remain_features -= replaces
+
+    # Check for conflicting image features
+    for feature in remain_features:
+        conflicts = set((d.getVar("IMAGE_FEATURES_CONFLICTS_%s" % feature) or "").split())
+        temp = conflicts & remain_features
+        if temp:
+            bb.fatal("%s contains conflicting IMAGE_FEATURES %s %s" % (d.getVar('PN'), feature, ' '.join(list(temp))))
+
+    d.setVar('IMAGE_FEATURES', ' '.join(sorted(list(remain_features))))
+
+    check_image_features(d)
+}
+
+IMAGE_POSTPROCESS_COMMAND ?= ""
+
+# some default locales
+IMAGE_LINGUAS ?= "de-de fr-fr en-gb"
+
+LINGUAS_INSTALL ?= "${@" ".join(map(lambda s: "locale-base-%s" % s, d.getVar('IMAGE_LINGUAS').split()))}"
+
+# per default create a locale archive
+IMAGE_LOCALES_ARCHIVE ?= '1'
+
+# Prefer image, but use the fallback files for lookups if the image ones
+# aren't yet available.
+PSEUDO_PASSWD = "${IMAGE_ROOTFS}:${STAGING_DIR_NATIVE}"
+
+PSEUDO_IGNORE_PATHS .= ",${WORKDIR}/intercept_scripts,${WORKDIR}/oe-rootfs-repo,${WORKDIR}/sstate-build-image_complete"
+
+PACKAGE_EXCLUDE ??= ""
+PACKAGE_EXCLUDE[type] = "list"
+
+fakeroot python do_rootfs () {
+    from oe.rootfs import create_rootfs
+    from oe.manifest import create_manifest
+    import logging
+
+    logger = d.getVar('BB_TASK_LOGGER', False)
+    if logger:
+        logcatcher = bb.utils.LogCatcher()
+        logger.addHandler(logcatcher)
+    else:
+        logcatcher = None
+
+    # NOTE: if you add, remove or significantly refactor the stages of this
+    # process then you should recalculate the weightings here. This is quite
+    # easy to do - just change the MultiStageProgressReporter line temporarily
+    # to pass debug=True as the last parameter and you'll get a printout of
+    # the weightings as well as a map to the lines where next_stage() was
+    # called. Of course this isn't critical, but it helps to keep the progress
+    # reporting accurate.
+    stage_weights = [1, 203, 354, 186, 65, 4228, 1, 353, 49, 330, 382, 23, 1]
+    progress_reporter = bb.progress.MultiStageProgressReporter(d, stage_weights)
+    progress_reporter.next_stage()
+
+    # Handle package exclusions
+    excl_pkgs = d.getVar("PACKAGE_EXCLUDE").split()
+    inst_pkgs = d.getVar("PACKAGE_INSTALL").split()
+    inst_attempt_pkgs = d.getVar("PACKAGE_INSTALL_ATTEMPTONLY").split()
+
+    d.setVar('PACKAGE_INSTALL_ORIG', ' '.join(inst_pkgs))
+    d.setVar('PACKAGE_INSTALL_ATTEMPTONLY', ' '.join(inst_attempt_pkgs))
+
+    for pkg in excl_pkgs:
+        if pkg in inst_pkgs:
+            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_pkgs))
+            inst_pkgs.remove(pkg)
+
+        if pkg in inst_attempt_pkgs:
+            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL_ATTEMPTONLY (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_pkgs))
+            inst_attempt_pkgs.remove(pkg)
+
+    d.setVar("PACKAGE_INSTALL", ' '.join(inst_pkgs))
+    d.setVar("PACKAGE_INSTALL_ATTEMPTONLY", ' '.join(inst_attempt_pkgs))
+
+    # Ensure we handle package name remapping
+    # We have to delay the runtime_mapping_rename until just before rootfs runs
+    # otherwise, the multilib renaming could step in and squash any fixups that
+    # may have occurred.
+    pn = d.getVar('PN')
+    runtime_mapping_rename("PACKAGE_INSTALL", pn, d)
+    runtime_mapping_rename("PACKAGE_INSTALL_ATTEMPTONLY", pn, d)
+    runtime_mapping_rename("BAD_RECOMMENDATIONS", pn, d)
+
+    # Generate the initial manifest
+    create_manifest(d)
+
+    progress_reporter.next_stage()
+
+    # generate rootfs
+    d.setVarFlag('REPRODUCIBLE_TIMESTAMP_ROOTFS', 'export', '1')
+    create_rootfs(d, progress_reporter=progress_reporter, logcatcher=logcatcher)
+
+    progress_reporter.finish()
+}
+do_rootfs[dirs] = "${TOPDIR}"
+do_rootfs[cleandirs] += "${IMAGE_ROOTFS} ${IMGDEPLOYDIR} ${S}"
+do_rootfs[file-checksums] += "${POSTINST_INTERCEPT_CHECKSUMS}"
+addtask rootfs after do_prepare_recipe_sysroot
+
+fakeroot python do_image () {
+    from oe.utils import execute_pre_post_process
+
+    d.setVarFlag('REPRODUCIBLE_TIMESTAMP_ROOTFS', 'export', '1')
+    pre_process_cmds = d.getVar("IMAGE_PREPROCESS_COMMAND")
+
+    execute_pre_post_process(d, pre_process_cmds)
+}
+do_image[dirs] = "${TOPDIR}"
+addtask do_image after do_rootfs
+
+fakeroot python do_image_complete () {
+    from oe.utils import execute_pre_post_process
+
+    post_process_cmds = d.getVar("IMAGE_POSTPROCESS_COMMAND")
+
+    execute_pre_post_process(d, post_process_cmds)
+}
+do_image_complete[dirs] = "${TOPDIR}"
+SSTATETASKS += "do_image_complete"
+SSTATE_SKIP_CREATION:task-image-complete = '1'
+do_image_complete[sstate-inputdirs] = "${IMGDEPLOYDIR}"
+do_image_complete[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
+do_image_complete[stamp-extra-info] = "${MACHINE_ARCH}"
+addtask do_image_complete after do_image before do_build
+python do_image_complete_setscene () {
+    sstate_setscene(d)
+}
+addtask do_image_complete_setscene
+
+# Add image-level QA/sanity checks to IMAGE_QA_COMMANDS
+#
+# IMAGE_QA_COMMANDS += " \
+#     image_check_everything_ok \
+# "
+# This task runs all functions in IMAGE_QA_COMMANDS after the rootfs
+# construction has completed in order to validate the resulting image.
+#
+# The functions should use ${IMAGE_ROOTFS} to find the unpacked rootfs
+# directory, which if QA passes will be the basis for the images.
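+#
+# A minimal sketch of such a check (illustrative; the function body and the
+# path it tests are assumptions, not part of this class):
+#
+# python image_check_everything_ok () {
+#     if not os.path.isdir(os.path.join(d.getVar('IMAGE_ROOTFS'), 'etc')):
+#         raise oe.utils.ImageQAFailed("missing /etc in the rootfs", "image_check_everything_ok")
+# }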
+fakeroot python do_image_qa () {
+    from oe.utils import ImageQAFailed
+
+    qa_cmds = (d.getVar('IMAGE_QA_COMMANDS') or '').split()
+    qamsg = ""
+
+    for cmd in qa_cmds:
+        try:
+            bb.build.exec_func(cmd, d)
+        except oe.utils.ImageQAFailed as e:
+            qamsg = qamsg + '\tImage QA function %s failed: %s\n' % (e.name, e.description)
+        except Exception as e:
+            qamsg = qamsg + '\tImage QA function %s failed\n' % cmd
+
+    if qamsg:
+        imgname = d.getVar('IMAGE_NAME')
+        bb.fatal("QA errors found whilst validating image: %s\n%s" % (imgname, qamsg))
+}
+addtask do_image_qa after do_rootfs before do_image
+
+SSTATETASKS += "do_image_qa"
+SSTATE_SKIP_CREATION:task-image-qa = '1'
+do_image_qa[sstate-inputdirs] = ""
+do_image_qa[sstate-outputdirs] = ""
+python do_image_qa_setscene () {
+    sstate_setscene(d)
+}
+addtask do_image_qa_setscene
+
+def setup_debugfs_variables(d):
+    d.appendVar('IMAGE_ROOTFS', '-dbg')
+    if d.getVar('IMAGE_LINK_NAME'):
+        d.appendVar('IMAGE_LINK_NAME', '-dbg')
+    d.appendVar('IMAGE_NAME','-dbg')
+    d.setVar('IMAGE_BUILDING_DEBUGFS', 'true')
+    debugfs_image_fstypes = d.getVar('IMAGE_FSTYPES_DEBUGFS')
+    if debugfs_image_fstypes:
+        d.setVar('IMAGE_FSTYPES', debugfs_image_fstypes)
+
+python setup_debugfs () {
+    setup_debugfs_variables(d)
+}
+
+python () {
+    vardeps = set()
+    # We allow CONVERSIONTYPES to have duplicates. That avoids breaking
+    # derived distros when OE-core or some other layer independently adds
+    # the same type. There is still only one command for each type, but
+    # presumably the commands will do the same when the type is the same,
+    # even when added in different places.
+    #
+    # Without de-duplication, gen_conversion_cmds() below
+    # would create the same compression command multiple times.
+    ctypes = set(d.getVar('CONVERSIONTYPES').split())
+    old_overrides = d.getVar('OVERRIDES', False)
+
+    def _image_base_type(type):
+        basetype = type
+        for ctype in ctypes:
+            if type.endswith("." + ctype):
+                basetype = type[:-len("." + ctype)]
+                break
+
+        if basetype != type:
+            # New base type itself might be generated by a conversion command.
+            basetype = _image_base_type(basetype)
+
+        return basetype
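+
+    # For example (illustrative): with the default CONVERSIONTYPES,
+    # _image_base_type("ext4.gz.sha256sum") strips ".sha256sum" and then ".gz"
+    # and returns "ext4".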
+
+    basetypes = {}
+    alltypes = d.getVar('IMAGE_FSTYPES').split()
+    typedeps = {}
+
+    if d.getVar('IMAGE_GEN_DEBUGFS') == "1":
+        debugfs_fstypes = d.getVar('IMAGE_FSTYPES_DEBUGFS').split()
+        for t in debugfs_fstypes:
+            alltypes.append("debugfs_" + t)
+
+    def _add_type(t):
+        baset = _image_base_type(t)
+        input_t = t
+        if baset not in basetypes:
+            basetypes[baset]= []
+        if t not in basetypes[baset]:
+            basetypes[baset].append(t)
+        debug = ""
+        if t.startswith("debugfs_"):
+            t = t[8:]
+            debug = "debugfs_"
+        deps = (d.getVar('IMAGE_TYPEDEP:' + t) or "").split()
+        vardeps.add('IMAGE_TYPEDEP:' + t)
+        if baset not in typedeps:
+            typedeps[baset] = set()
+        deps = [debug + dep for dep in deps]
+        for dep in deps:
+            if dep not in alltypes:
+                alltypes.append(dep)
+            _add_type(dep)
+            basedep = _image_base_type(dep)
+            typedeps[baset].add(basedep)
+
+        if baset != input_t:
+            _add_type(baset)
+
+    for t in alltypes[:]:
+        _add_type(t)
+
+    d.appendVarFlag('do_image', 'vardeps', ' '.join(vardeps))
+
+    maskedtypes = (d.getVar('IMAGE_TYPES_MASKED') or "").split()
+    maskedtypes = [dbg + t for t in maskedtypes for dbg in ("", "debugfs_")]
+
+    for t in basetypes:
+        vardeps = set()
+        cmds = []
+        subimages = []
+        realt = t
+
+        if t in maskedtypes:
+            continue
+
+        localdata = bb.data.createCopy(d)
+        debug = ""
+        if t.startswith("debugfs_"):
+            setup_debugfs_variables(localdata)
+            debug = "setup_debugfs "
+            realt = t[8:]
+        localdata.setVar('OVERRIDES', '%s:%s' % (realt, old_overrides))
+        localdata.setVar('type', realt)
+        # Delete DATETIME so we don't expand any references to it now
+        # This means the task's hash can be stable rather than having hardcoded
+        # date/time values. It will get expanded at execution time.
+        # Similarly TMPDIR since otherwise we see QA stamp comparison problems
+        # Expand PV else it can trigger get_srcrev which can fail due to these variables being unset
+        localdata.setVar('PV', d.getVar('PV'))
+        localdata.delVar('DATETIME')
+        localdata.delVar('DATE')
+        localdata.delVar('TMPDIR')
+        localdata.delVar('IMAGE_VERSION_SUFFIX')
+        vardepsexclude = (d.getVarFlag('IMAGE_CMD:' + realt, 'vardepsexclude', True) or '').split()
+        for dep in vardepsexclude:
+            localdata.delVar(dep)
+
+        image_cmd = localdata.getVar("IMAGE_CMD")
+        vardeps.add('IMAGE_CMD:' + realt)
+        if image_cmd:
+            cmds.append("\t" + image_cmd)
+        else:
+            bb.fatal("No IMAGE_CMD defined for IMAGE_FSTYPES entry '%s' - possibly invalid type name or missing support class" % t)
+        cmds.append(localdata.expand("\tcd ${IMGDEPLOYDIR}"))
+
+        # Since a copy of IMAGE_CMD:xxx will be inlined within do_image_xxx,
+        # prevent a redundant copy of IMAGE_CMD:xxx being emitted as a function.
+        d.delVarFlag('IMAGE_CMD:' + realt, 'func')
+
+        rm_tmp_images = set()
+        def gen_conversion_cmds(bt):
+            for ctype in sorted(ctypes):
+                if bt.endswith("." + ctype):
+                    type = bt[0:-len(ctype) - 1]
+                    if type.startswith("debugfs_"):
+                        type = type[8:]
+                    # Create input image first.
+                    gen_conversion_cmds(type)
+                    localdata.setVar('type', type)
+                    cmd = "\t" + localdata.getVar("CONVERSION_CMD:" + ctype)
+                    if cmd not in cmds:
+                        cmds.append(cmd)
+                    vardeps.add('CONVERSION_CMD:' + ctype)
+                    subimage = type + "." + ctype
+                    if subimage not in subimages:
+                        subimages.append(subimage)
+                    if type not in alltypes:
+                        rm_tmp_images.add(localdata.expand("${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"))
+
+        for bt in basetypes[t]:
+            gen_conversion_cmds(bt)
+
+        localdata.setVar('type', realt)
+        if t not in alltypes:
+            rm_tmp_images.add(localdata.expand("${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"))
+        else:
+            subimages.append(realt)
+
+        # Clean up after applying all conversion commands. Some of them might
+        # use the same input, therefore we cannot delete sooner without applying
+        # some complex dependency analysis.
+        for image in sorted(rm_tmp_images):
+            cmds.append("\trm " + image)
+
+        after = 'do_image'
+        for dep in typedeps[t]:
+            after += ' do_image_%s' % dep.replace("-", "_").replace(".", "_")
+
+        task = "do_image_%s" % t.replace("-", "_").replace(".", "_")
+
+        d.setVar(task, '\n'.join(cmds))
+        d.setVarFlag(task, 'func', '1')
+        d.setVarFlag(task, 'fakeroot', '1')
+
+        d.appendVarFlag(task, 'prefuncs', ' ' + debug + ' set_image_size')
+        d.prependVarFlag(task, 'postfuncs', 'create_symlinks ')
+        d.appendVarFlag(task, 'subimages', ' ' + ' '.join(subimages))
+        d.appendVarFlag(task, 'vardeps', ' ' + ' '.join(vardeps))
+        d.appendVarFlag(task, 'vardepsexclude', ' DATETIME DATE ' + ' '.join(vardepsexclude))
+
+        bb.debug(2, "Adding task %s before %s, after %s" % (task, 'do_image_complete', after))
+        bb.build.addtask(task, 'do_image_complete', after, d)
+}
+
+#
+# Compute the rootfs size
+#
+def get_rootfs_size(d):
+    import subprocess, oe.utils
+
+    rootfs_alignment = int(d.getVar('IMAGE_ROOTFS_ALIGNMENT'))
+    overhead_factor = float(d.getVar('IMAGE_OVERHEAD_FACTOR'))
+    rootfs_req_size = int(d.getVar('IMAGE_ROOTFS_SIZE'))
+    rootfs_extra_space = eval(d.getVar('IMAGE_ROOTFS_EXTRA_SPACE'))
+    rootfs_maxsize = d.getVar('IMAGE_ROOTFS_MAXSIZE')
+    image_fstypes = d.getVar('IMAGE_FSTYPES') or ''
+    initramfs_fstypes = d.getVar('INITRAMFS_FSTYPES') or ''
+    initramfs_maxsize = d.getVar('INITRAMFS_MAXSIZE')
+
+    size_kb = oe.utils.directory_size(d.getVar("IMAGE_ROOTFS")) / 1024
+
+    base_size = size_kb * overhead_factor
+    bb.debug(1, '%f = %d * %f' % (base_size, size_kb, overhead_factor))
+    base_size2 = max(base_size, rootfs_req_size) + rootfs_extra_space
+    bb.debug(1, '%f = max(%f, %d)[%f] + %d' % (base_size2, base_size, rootfs_req_size, max(base_size, rootfs_req_size), rootfs_extra_space))
+
+    base_size = base_size2
+    if base_size != int(base_size):
+        base_size = int(base_size + 1)
+    else:
+        base_size = int(base_size)
+    bb.debug(1, '%f = int(%f)' % (base_size, base_size2))
+
+    base_size_saved = base_size
+    base_size += rootfs_alignment - 1
+    base_size -= base_size % rootfs_alignment
+    bb.debug(1, '%d = aligned(%d)' % (base_size, base_size_saved))
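+
+    # Worked example with assumed values: a 100000 kB rootfs with the default
+    # IMAGE_OVERHEAD_FACTOR of 1.3, IMAGE_ROOTFS_SIZE of 65536 and no extra
+    # space gives base_size = 130000 kB; with an assumed IMAGE_ROOTFS_ALIGNMENT
+    # of 4096 this is then rounded up to 131072 kB.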
+
+    # Do not check image size of the debugfs image. This is not supposed
+    # to be deployed, etc. so it doesn't make sense to limit the size
+    # of the debug.
+    if (d.getVar('IMAGE_BUILDING_DEBUGFS') or "") == "true":
+        bb.debug(1, 'returning debugfs size %d' % (base_size))
+        return base_size
+
+    # Check the rootfs size against IMAGE_ROOTFS_MAXSIZE (if set)
+    if rootfs_maxsize:
+        rootfs_maxsize_int = int(rootfs_maxsize)
+        if base_size > rootfs_maxsize_int:
+            bb.fatal("The rootfs size %d(K) exceeds IMAGE_ROOTFS_MAXSIZE: %d(K)" % \
+                (base_size, rootfs_maxsize_int))
+
+    # Check the initramfs size against INITRAMFS_MAXSIZE (if set)
+    if image_fstypes == initramfs_fstypes != ''  and initramfs_maxsize:
+        initramfs_maxsize_int = int(initramfs_maxsize)
+        if base_size > initramfs_maxsize_int:
+            bb.error("The initramfs size %d(K) exceeds INITRAMFS_MAXSIZE: %d(K)" % \
+                (base_size, initramfs_maxsize_int))
+            bb.error("You can set INITRAMFS_MAXSIZE a larger value. Usually, it should")
+            bb.fatal("be less than 1/2 of ram size, or you may fail to boot it.\n")
+
+    bb.debug(1, 'returning %d' % (base_size))
+    return base_size
+
+python set_image_size () {
+        rootfs_size = get_rootfs_size(d)
+        d.setVar('ROOTFS_SIZE', str(rootfs_size))
+        d.setVarFlag('ROOTFS_SIZE', 'export', '1')
+}
+
+#
+# Create symlinks to the newly created image
+#
+python create_symlinks() {
+
+    deploy_dir = d.getVar('IMGDEPLOYDIR')
+    img_name = d.getVar('IMAGE_NAME')
+    link_name = d.getVar('IMAGE_LINK_NAME')
+    manifest_name = d.getVar('IMAGE_MANIFEST')
+    taskname = d.getVar("BB_CURRENTTASK")
+    subimages = (d.getVarFlag("do_" + taskname, 'subimages', False) or "").split()
+    imgsuffix = d.getVarFlag("do_" + taskname, 'imgsuffix') or d.expand("${IMAGE_NAME_SUFFIX}.")
+
+    if not link_name:
+        return
+    for type in subimages:
+        dst = os.path.join(deploy_dir, link_name + "." + type)
+        src = img_name + imgsuffix + type
+        if os.path.exists(os.path.join(deploy_dir, src)):
+            bb.note("Creating symlink: %s -> %s" % (dst, src))
+            if os.path.islink(dst):
+                os.remove(dst)
+            os.symlink(src, dst)
+        else:
+            bb.note("Skipping symlink, source does not exist: %s -> %s" % (dst, src))
+}
+
+MULTILIBRE_ALLOW_REP =. "${base_bindir}|${base_sbindir}|${bindir}|${sbindir}|${libexecdir}|${sysconfdir}|${nonarch_base_libdir}/udev|/lib/modules/[^/]*/modules.*|"
+MULTILIB_CHECK_FILE = "${WORKDIR}/multilib_check.py"
+MULTILIB_TEMP_ROOTFS = "${WORKDIR}/multilib"
+
+do_fetch[noexec] = "1"
+do_unpack[noexec] = "1"
+do_patch[noexec] = "1"
+do_configure[noexec] = "1"
+do_compile[noexec] = "1"
+do_install[noexec] = "1"
+deltask do_populate_lic
+deltask do_populate_sysroot
+do_package[noexec] = "1"
+deltask do_package_qa
+deltask do_packagedata
+deltask do_package_write_ipk
+deltask do_package_write_deb
+deltask do_package_write_rpm
+
+# Prepare the root links to point to the /usr counterparts.
+create_merged_usr_symlinks() {
+    root="$1"
+    install -d $root${base_bindir} $root${base_sbindir} $root${base_libdir}
+    ln -rs $root${base_bindir} $root/bin
+    ln -rs $root${base_sbindir} $root/sbin
+    ln -rs $root${base_libdir} $root/${baselib}
+
+    if [ "${nonarch_base_libdir}" != "${base_libdir}" ]; then
+       install -d $root${nonarch_base_libdir}
+       ln -rs $root${nonarch_base_libdir} $root/lib
+    fi
+
+    # create base links for multilibs
+    multi_libdirs="${@d.getVar('MULTILIB_VARIANTS')}"
+    for d in $multi_libdirs; do
+        install -d $root${exec_prefix}/$d
+        ln -rs $root${exec_prefix}/$d $root/$d
+    done
+}
+
+create_merged_usr_symlinks_rootfs() {
+    create_merged_usr_symlinks ${IMAGE_ROOTFS}
+}
+
+create_merged_usr_symlinks_sdk() {
+    create_merged_usr_symlinks ${SDK_OUTPUT}${SDKTARGETSYSROOT}
+}
+
+ROOTFS_PREPROCESS_COMMAND += "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', 'create_merged_usr_symlinks_rootfs; ', '',d)}"
+POPULATE_SDK_PRE_TARGET_COMMAND += "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', 'create_merged_usr_symlinks_sdk; ', '',d)}"
+
+reproducible_final_image_task () {
+    if [ "$REPRODUCIBLE_TIMESTAMP_ROOTFS" = "" ]; then
+        REPRODUCIBLE_TIMESTAMP_ROOTFS=`git -C "${COREBASE}" log -1 --pretty=%ct 2>/dev/null` || true
+        if [ "$REPRODUCIBLE_TIMESTAMP_ROOTFS" = "" ]; then
+            REPRODUCIBLE_TIMESTAMP_ROOTFS=`stat -c%Y ${@bb.utils.which(d.getVar("BBPATH"), "conf/bitbake.conf")}`
+        fi
+    fi
+    # Set mtime of all files to a reproducible value
+    bbnote "reproducible_final_image_task: mtime set to $REPRODUCIBLE_TIMESTAMP_ROOTFS"
+    find  ${IMAGE_ROOTFS} -print0 | xargs -0 touch -h  --date=@$REPRODUCIBLE_TIMESTAMP_ROOTFS
+}
+
+systemd_preset_all () {
+    if [ -e ${IMAGE_ROOTFS}${root_prefix}/lib/systemd/systemd ]; then
+	systemctl --root="${IMAGE_ROOTFS}" --preset-mode=enable-only preset-all
+    fi
+}
+
+IMAGE_PREPROCESS_COMMAND:append = " ${@ 'systemd_preset_all;' if bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d) and not bb.utils.contains('IMAGE_FEATURES', 'stateless-rootfs', True, False, d) else ''} reproducible_final_image_task; "
+
+CVE_PRODUCT = ""
diff --git a/poky/meta/classes-recipe/image_types.bbclass b/poky/meta/classes-recipe/image_types.bbclass
new file mode 100644
index 0000000..764e6a5
--- /dev/null
+++ b/poky/meta/classes-recipe/image_types.bbclass
@@ -0,0 +1,363 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# The default alignment of the rootfs size is 1KiB. In case you're using the
+# SD card emulation of a QEMU system simulator you may set this value to 2048
+# (2MiB alignment).
+IMAGE_ROOTFS_ALIGNMENT ?= "1"
+
+def imagetypes_getdepends(d):
+    def adddep(depstr, deps):
+        for d in (depstr or "").split():
+            # Add task dependency if not already present
+            if ":" not in d:
+                d += ":do_populate_sysroot"
+            deps.add(d)
+
+    # Take a type in the form of foo.bar.car and split it into the items
+    # needed for the image deps "foo", and the conversion deps ["bar", "car"]
+    def split_types(typestring):
+        types = typestring.split(".")
+        return types[0], types[1:]
+
+    fstypes = set((d.getVar('IMAGE_FSTYPES') or "").split())
+    fstypes |= set((d.getVar('IMAGE_FSTYPES_DEBUGFS') or "").split())
+
+    deprecated = set()
+    deps = set()
+    for typestring in fstypes:
+        basetype, resttypes = split_types(typestring)
+
+        var = "IMAGE_DEPENDS_%s" % basetype
+        if d.getVar(var) is not None:
+            deprecated.add(var)
+
+        for typedepends in (d.getVar("IMAGE_TYPEDEP:%s" % basetype) or "").split():
+            base, rest = split_types(typedepends)
+            resttypes += rest
+
+            var = "IMAGE_DEPENDS_%s" % base
+            if d.getVar(var) is not None:
+                deprecated.add(var)
+
+        for ctype in resttypes:
+            adddep(d.getVar("CONVERSION_DEPENDS_%s" % ctype), deps)
+            adddep(d.getVar("COMPRESS_DEPENDS_%s" % ctype), deps)
+
+    if deprecated:
+        bb.fatal('Deprecated variable(s) found: "%s". '
+                 'Use do_image_<type>[depends] += "<recipe>:<task>" instead' % ', '.join(deprecated))
+
+    # Sort the set so that ordering is consistent
+    return " ".join(sorted(deps))
+
+XZ_COMPRESSION_LEVEL ?= "-9"
+XZ_INTEGRITY_CHECK ?= "crc32"
+
+ZIP_COMPRESSION_LEVEL ?= "-9"
+
+ZSTD_COMPRESSION_LEVEL ?= "-3"
+
+JFFS2_SUM_EXTRA_ARGS ?= ""
+IMAGE_CMD:jffs2 = "mkfs.jffs2 --root=${IMAGE_ROOTFS} --faketime --output=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.jffs2 ${EXTRA_IMAGECMD}"
+
+IMAGE_CMD:cramfs = "mkfs.cramfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cramfs ${EXTRA_IMAGECMD}"
+
+oe_mkext234fs () {
+	fstype=$1
+	extra_imagecmd=""
+
+	if [ $# -gt 1 ]; then
+		shift
+		extra_imagecmd=$@
+	fi
+
+	# If generating an empty image, the size of the sparse block should be large
+	# enough to allocate an ext4 filesystem using 4096 bytes per inode; this is
+	# about 60K, so dd needs a minimum count of 60, with bs=1024 (bytes per IO)
+	eval local COUNT=\"0\"
+	eval local MIN_COUNT=\"60\"
+	if [ $ROOTFS_SIZE -lt $MIN_COUNT ]; then
+		eval COUNT=\"$MIN_COUNT\"
+	fi
+	# Create a sparse image block
+	bbdebug 1 Executing "dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype seek=$ROOTFS_SIZE count=$COUNT bs=1024"
+	dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype seek=$ROOTFS_SIZE count=$COUNT bs=1024
+	bbdebug 1 "Actual Rootfs size:  `du -s ${IMAGE_ROOTFS}`"
+	bbdebug 1 "Actual Partition size: `stat -c '%s' ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype`"
+	bbdebug 1 Executing "mkfs.$fstype -F $extra_imagecmd ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype -d ${IMAGE_ROOTFS}"
+	mkfs.$fstype -F $extra_imagecmd ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype -d ${IMAGE_ROOTFS}
+	# Error codes 0-3 indicate successful operation of fsck (no errors or errors corrected)
+	fsck.$fstype -pvfD ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype || [ $? -le 3 ]
+}
+
+IMAGE_CMD:ext2 = "oe_mkext234fs ext2 ${EXTRA_IMAGECMD}"
+IMAGE_CMD:ext3 = "oe_mkext234fs ext3 ${EXTRA_IMAGECMD}"
+IMAGE_CMD:ext4 = "oe_mkext234fs ext4 ${EXTRA_IMAGECMD}"
+
+MIN_BTRFS_SIZE ?= "16384"
+IMAGE_CMD:btrfs () {
+	size=${ROOTFS_SIZE}
+	if [ ${size} -lt ${MIN_BTRFS_SIZE} ] ; then
+		size=${MIN_BTRFS_SIZE}
+		bbwarn "Rootfs size is too small for BTRFS. Filesystem will be extended to ${size}K"
+	fi
+	dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs seek=${size} count=0 bs=1024
+	mkfs.btrfs ${EXTRA_IMAGECMD} -r ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs
+}
+
+oe_mksquashfs () {
+    local comp=$1
+    local suffix=$2
+
+    # Use the bitbake reproducible timestamp instead of the hardcoded squashfs one
+    export SOURCE_DATE_EPOCH=$(stat -c '%Y' ${IMAGE_ROOTFS})
+    mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs${comp:+-}${suffix:-$comp} ${EXTRA_IMAGECMD} -noappend ${comp:+-comp }$comp
+}
+IMAGE_CMD:squashfs = "oe_mksquashfs"
+IMAGE_CMD:squashfs-xz = "oe_mksquashfs xz"
+IMAGE_CMD:squashfs-lzo = "oe_mksquashfs lzo"
+IMAGE_CMD:squashfs-lz4 = "oe_mksquashfs lz4"
+IMAGE_CMD:squashfs-zst = "oe_mksquashfs zstd zst"
+
+IMAGE_CMD:erofs = "mkfs.erofs ${EXTRA_IMAGECMD} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.erofs ${IMAGE_ROOTFS}"
+IMAGE_CMD:erofs-lz4 = "mkfs.erofs -zlz4 ${EXTRA_IMAGECMD} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.erofs-lz4 ${IMAGE_ROOTFS}"
+IMAGE_CMD:erofs-lz4hc = "mkfs.erofs -zlz4hc ${EXTRA_IMAGECMD} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.erofs-lz4hc ${IMAGE_ROOTFS}"
+
+
+IMAGE_CMD_TAR ?= "tar"
+# ignore return code 1 "file changed as we read it" as other tasks (e.g. do_image_wic) may be hardlinking the rootfs
+IMAGE_CMD:tar = "${IMAGE_CMD_TAR} --sort=name --format=posix --numeric-owner -cf ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.tar -C ${IMAGE_ROOTFS} . || [ $? -eq 1 ]"
+
+do_image_cpio[cleandirs] += "${WORKDIR}/cpio_append"
+IMAGE_CMD:cpio () {
+	(cd ${IMAGE_ROOTFS} && find . | sort | cpio --reproducible -o -H newc >${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cpio)
+	# We only need the /init symlink if we're building the real
+	# image. The -dbg image doesn't need it! By being clever
+	# about this we also avoid 'touch' below failing, as it
+	# might be trying to touch /sbin/init on the host since both
+	# the normal and the -dbg image share the same WORKDIR
+	if [ "${IMAGE_BUILDING_DEBUGFS}" != "true" ]; then
+		if [ ! -L ${IMAGE_ROOTFS}/init ] && [ ! -e ${IMAGE_ROOTFS}/init ]; then
+			if [ -L ${IMAGE_ROOTFS}/sbin/init ] || [ -e ${IMAGE_ROOTFS}/sbin/init ]; then
+				ln -sf /sbin/init ${WORKDIR}/cpio_append/init
+			else
+				touch ${WORKDIR}/cpio_append/init
+			fi
+			(cd  ${WORKDIR}/cpio_append && echo ./init | cpio -oA -H newc -F ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cpio)
+		fi
+	fi
+}
+
+UBI_VOLNAME ?= "${MACHINE}-rootfs"
+UBI_VOLTYPE ?= "dynamic"
+UBI_IMGTYPE ?= "ubifs"
+
+write_ubi_config() {
+	if [ -z "$1" ]; then
+		local vname=""
+	else
+		local vname="_$1"
+	fi
+
+	cat <<EOF > ubinize${vname}-${IMAGE_NAME}.cfg
+[ubifs]
+mode=ubi
+image=${IMGDEPLOYDIR}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.${UBI_IMGTYPE}
+vol_id=0
+vol_type=${UBI_VOLTYPE}
+vol_name=${UBI_VOLNAME}
+vol_flags=autoresize
+EOF
+}
+
+multiubi_mkfs() {
+	local mkubifs_args="$1"
+	local ubinize_args="$2"
+
+	# Fail with a clear error message if the required UBI/UBIFS arguments are not set.
+	if [ -z "$mkubifs_args" ] || [ -z "$ubinize_args" ]; then
+		bbfatal "MKUBIFS_ARGS and UBINIZE_ARGS have to be set, see http://www.linux-mtd.infradead.org/faq/ubifs.html for details"
+	fi
+
+	write_ubi_config "$3"
+
+	if [ -n "$vname" ]; then
+		mkfs.ubifs -r ${IMAGE_ROOTFS} -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs ${mkubifs_args}
+	fi
+	ubinize -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubi ${ubinize_args} ubinize${vname}-${IMAGE_NAME}.cfg
+
+	# Cleanup cfg file
+	mv ubinize${vname}-${IMAGE_NAME}.cfg ${IMGDEPLOYDIR}/
+
+	# Create own symlinks for 'named' volumes
+	if [ -n "$vname" ]; then
+		cd ${IMGDEPLOYDIR}
+		if [ -e ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs ]; then
+			ln -sf ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs \
+			${IMAGE_LINK_NAME}${vname}.ubifs
+		fi
+		if [ -e ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubi ]; then
+			ln -sf ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubi \
+			${IMAGE_LINK_NAME}${vname}.ubi
+		fi
+		cd -
+	fi
+}
+
+IMAGE_CMD:multiubi () {
+	# Split MKUBIFS_ARGS_<name> and UBINIZE_ARGS_<name>
+	for name in ${MULTIUBI_BUILD}; do
+		eval local mkubifs_args=\"\$MKUBIFS_ARGS_${name}\"
+		eval local ubinize_args=\"\$UBINIZE_ARGS_${name}\"
+
+		multiubi_mkfs "${mkubifs_args}" "${ubinize_args}" "${name}"
+	done
+}
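+
+# Illustrative usage (assumed values): to build two named UBI volumes, set e.g.
+#   MULTIUBI_BUILD = "data boot"
+#   MKUBIFS_ARGS_data = "-m 2048 -e 126976 -c 2047"
+#   UBINIZE_ARGS_data = "-m 2048 -p 128KiB -s 512"
+# (and likewise for "boot").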
+
+IMAGE_CMD:ubi () {
+	multiubi_mkfs "${MKUBIFS_ARGS}" "${UBINIZE_ARGS}"
+}
+IMAGE_TYPEDEP:ubi = "${UBI_IMGTYPE}"
+
+IMAGE_CMD:ubifs = "mkfs.ubifs -r ${IMAGE_ROOTFS} -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.ubifs ${MKUBIFS_ARGS}"
+
+MIN_F2FS_SIZE ?= "524288"
+IMAGE_CMD:f2fs () {
+	# We need to add additional smarts here for devices smaller than 1.5G.
+	# We need to scale appropriately between 40M -> 1.5G as the "overprovision
+	# ratio" goes down as the device gets bigger (70% -> 4.5%); below about
+	# 500M the standard IMAGE_OVERHEAD_FACTOR does not work, so add additional
+	# space here when under 500M
+	size=${ROOTFS_SIZE}
+	if [ ${size} -lt ${MIN_F2FS_SIZE} ] ; then
+		size=${MIN_F2FS_SIZE}
+		bbwarn "Rootfs size is too small for F2FS. Filesystem will be extended to ${size}K"
+	fi
+	dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.f2fs seek=${size} count=0 bs=1024
+	mkfs.f2fs ${EXTRA_IMAGECMD} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.f2fs
+	sload.f2fs -f ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.f2fs
+}
+
+EXTRA_IMAGECMD = ""
+
+inherit siteinfo kernel-arch image-artifact-names
+
+JFFS2_ENDIANNESS ?= "${@oe.utils.conditional('SITEINFO_ENDIANNESS', 'le', '-l', '-b', d)}"
+JFFS2_ERASEBLOCK ?= "0x40000"
+EXTRA_IMAGECMD:jffs2 ?= "--pad ${JFFS2_ENDIANNESS} --eraseblock=${JFFS2_ERASEBLOCK} --no-cleanmarkers"
+
+# Change these if you want default mkfs behavior (i.e. create minimal inode number)
+EXTRA_IMAGECMD:ext2 ?= "-i 4096"
+EXTRA_IMAGECMD:ext3 ?= "-i 4096"
+EXTRA_IMAGECMD:ext4 ?= "-i 4096"
+EXTRA_IMAGECMD:btrfs ?= "-n 4096 --shrink"
+EXTRA_IMAGECMD:f2fs ?= ""
+
+do_image_cpio[depends] += "cpio-native:do_populate_sysroot"
+do_image_jffs2[depends] += "mtd-utils-native:do_populate_sysroot"
+do_image_cramfs[depends] += "util-linux-native:do_populate_sysroot"
+do_image_ext2[depends] += "e2fsprogs-native:do_populate_sysroot"
+do_image_ext3[depends] += "e2fsprogs-native:do_populate_sysroot"
+do_image_ext4[depends] += "e2fsprogs-native:do_populate_sysroot"
+do_image_btrfs[depends] += "btrfs-tools-native:do_populate_sysroot"
+do_image_squashfs[depends] += "squashfs-tools-native:do_populate_sysroot"
+do_image_squashfs_xz[depends] += "squashfs-tools-native:do_populate_sysroot"
+do_image_squashfs_lzo[depends] += "squashfs-tools-native:do_populate_sysroot"
+do_image_squashfs_lz4[depends] += "squashfs-tools-native:do_populate_sysroot"
+do_image_squashfs_zst[depends] += "squashfs-tools-native:do_populate_sysroot"
+do_image_ubi[depends] += "mtd-utils-native:do_populate_sysroot"
+do_image_ubifs[depends] += "mtd-utils-native:do_populate_sysroot"
+do_image_multiubi[depends] += "mtd-utils-native:do_populate_sysroot"
+do_image_f2fs[depends] += "f2fs-tools-native:do_populate_sysroot"
+do_image_erofs[depends] += "erofs-utils-native:do_populate_sysroot"
+do_image_erofs_lz4[depends] += "erofs-utils-native:do_populate_sysroot"
+do_image_erofs_lz4hc[depends] += "erofs-utils-native:do_populate_sysroot"
+
+# This variable is available to request which values are suitable for IMAGE_FSTYPES
+IMAGE_TYPES = " \
+    jffs2 jffs2.sum \
+    cramfs \
+    ext2 ext2.gz ext2.bz2 ext2.lzma \
+    ext3 ext3.gz \
+    ext4 ext4.gz \
+    btrfs \
+    squashfs squashfs-xz squashfs-lzo squashfs-lz4 squashfs-zst \
+    ubi ubifs multiubi \
+    tar tar.gz tar.bz2 tar.xz tar.lz4 tar.zst \
+    cpio cpio.gz cpio.xz cpio.lzma cpio.lz4 cpio.zst \
+    wic wic.gz wic.bz2 wic.lzma wic.zst \
+    container \
+    f2fs \
+    erofs erofs-lz4 erofs-lz4hc \
+"
+# These image types are x86 specific as they need syslinux
+IMAGE_TYPES:append:x86 = " hddimg iso"
+IMAGE_TYPES:append:x86-64 = " hddimg iso"
+
+# Compression is a special case of conversion. The old variable
+# names are still supported for backward-compatibility. When defining
+# new compression or conversion commands, use CONVERSIONTYPES and
+# CONVERSION_CMD/DEPENDS.
+COMPRESSIONTYPES ?= ""
+
+CONVERSIONTYPES = "gz bz2 lzma xz lz4 lzo zip zst sum md5sum sha1sum sha224sum sha256sum sha384sum sha512sum bmap u-boot vmdk vhd vhdx vdi qcow2 base64 gzsync zsync ${COMPRESSIONTYPES}"
+CONVERSION_CMD:lzma = "lzma -k -f -7 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
+CONVERSION_CMD:gz = "gzip -f -9 -n -c --rsyncable ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.gz"
+CONVERSION_CMD:bz2 = "pbzip2 -f -k ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
+CONVERSION_CMD:xz = "xz -f -k -c ${XZ_COMPRESSION_LEVEL} ${XZ_DEFAULTS} --check=${XZ_INTEGRITY_CHECK} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.xz"
+CONVERSION_CMD:lz4 = "lz4 -9 -z -l ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.lz4"
+CONVERSION_CMD:lzo = "lzop -9 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
+CONVERSION_CMD:zip = "zip ${ZIP_COMPRESSION_LEVEL} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.zip ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
+CONVERSION_CMD:zst = "zstd -f -k -T0 -c ${ZSTD_COMPRESSION_LEVEL} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.zst"
+CONVERSION_CMD:sum = "sumtool -i ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} -o ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sum ${JFFS2_SUM_EXTRA_ARGS}"
+CONVERSION_CMD:md5sum = "md5sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.md5sum"
+CONVERSION_CMD:sha1sum = "sha1sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha1sum"
+CONVERSION_CMD:sha224sum = "sha224sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha224sum"
+CONVERSION_CMD:sha256sum = "sha256sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha256sum"
+CONVERSION_CMD:sha384sum = "sha384sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha384sum"
+CONVERSION_CMD:sha512sum = "sha512sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha512sum"
+CONVERSION_CMD:bmap = "bmaptool create ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} -o ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.bmap"
+CONVERSION_CMD:u-boot = "mkimage -A ${UBOOT_ARCH} -O linux -T ramdisk -C none -n ${IMAGE_NAME} -d ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.u-boot"
+CONVERSION_CMD:vmdk = "qemu-img convert -O vmdk ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vmdk"
+CONVERSION_CMD:vhdx = "qemu-img convert -O vhdx -o subformat=dynamic ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vhdx"
+CONVERSION_CMD:vhd = "qemu-img convert -O vpc -o subformat=fixed ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vhd"
+CONVERSION_CMD:vdi = "qemu-img convert -O vdi ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vdi"
+CONVERSION_CMD:qcow2 = "qemu-img convert -O qcow2 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.qcow2"
+CONVERSION_CMD:base64 = "base64 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.base64"
+CONVERSION_CMD:zsync = "zsyncmake_curl ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
+CONVERSION_CMD:gzsync = "zsyncmake_curl -z ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
+CONVERSION_DEPENDS_lzma = "xz-native"
+CONVERSION_DEPENDS_gz = "pigz-native"
+CONVERSION_DEPENDS_bz2 = "pbzip2-native"
+CONVERSION_DEPENDS_xz = "xz-native"
+CONVERSION_DEPENDS_lz4 = "lz4-native"
+CONVERSION_DEPENDS_lzo = "lzop-native"
+CONVERSION_DEPENDS_zip = "zip-native"
+CONVERSION_DEPENDS_zst = "zstd-native"
+CONVERSION_DEPENDS_sum = "mtd-utils-native"
+CONVERSION_DEPENDS_bmap = "bmap-tools-native"
+CONVERSION_DEPENDS_u-boot = "u-boot-tools-native"
+CONVERSION_DEPENDS_vmdk = "qemu-system-native"
+CONVERSION_DEPENDS_vdi = "qemu-system-native"
+CONVERSION_DEPENDS_qcow2 = "qemu-system-native"
+CONVERSION_DEPENDS_base64 = "coreutils-native"
+CONVERSION_DEPENDS_vhdx = "qemu-system-native"
+CONVERSION_DEPENDS_vhd = "qemu-system-native"
+CONVERSION_DEPENDS_zsync = "zsync-curl-native"
+CONVERSION_DEPENDS_gzsync = "zsync-curl-native"
+
+RUNNABLE_IMAGE_TYPES ?= "ext2 ext3 ext4"
+RUNNABLE_MACHINE_PATTERNS ?= "qemu"
+
+DEPLOYABLE_IMAGE_TYPES ?= "hddimg iso"
+
+# The IMAGE_TYPES_MASKED variable will be used to mask out from the IMAGE_FSTYPES,
+# images that will not be built at do_rootfs time: vmdk, vhd, vhdx, vdi, qcow2, hddimg, iso, etc.
+IMAGE_TYPES_MASKED ?= ""
+
+# bmap requires python3 to be in the PATH
+EXTRANATIVEPATH += "${@'python3-native' if d.getVar('IMAGE_FSTYPES').find('.bmap') else ''}"
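For context, conversions are requested by suffixing a base type in IMAGE_FSTYPES; each suffix resolves to the matching CONVERSION_CMD:<type> above and suffixes can be chained. A minimal local.conf sketch (the particular types are only an example):

    IMAGE_FSTYPES += "ext4.gz ext4.gz.sha256sum"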
diff --git a/poky/meta/classes-recipe/image_types_wic.bbclass b/poky/meta/classes-recipe/image_types_wic.bbclass
new file mode 100644
index 0000000..c339b9b
--- /dev/null
+++ b/poky/meta/classes-recipe/image_types_wic.bbclass
@@ -0,0 +1,190 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# The WICVARS variable defines the list of bitbake variables used in the wic code.
+# Variables from this list are written to the <image>.env file.
+WICVARS ?= "\
+	APPEND \
+	ASSUME_PROVIDED \
+	BBLAYERS \
+	DEPLOY_DIR_IMAGE \
+	FAKEROOTCMD \
+	HOSTTOOLS_DIR \
+	IMAGE_BASENAME \
+	IMAGE_BOOT_FILES \
+	IMAGE_EFI_BOOT_FILES \
+	IMAGE_LINK_NAME \
+	IMAGE_ROOTFS \
+	IMGDEPLOYDIR \
+	INITRAMFS_FSTYPES \
+	INITRAMFS_IMAGE \
+	INITRAMFS_IMAGE_BUNDLE \
+	INITRAMFS_LINK_NAME \
+	INITRD \
+	INITRD_LIVE \
+	ISODIR \
+	KERNEL_IMAGETYPE \
+	MACHINE \
+	PSEUDO_IGNORE_PATHS \
+	RECIPE_SYSROOT_NATIVE \
+	ROOTFS_SIZE \
+	STAGING_DATADIR \
+	STAGING_DIR \
+	STAGING_DIR_HOST \
+	STAGING_LIBDIR \
+	TARGET_SYS \
+"
+
+inherit ${@bb.utils.contains('INITRAMFS_IMAGE_BUNDLE', '1', 'kernel-artifact-names', '', d)}
+
+WKS_FILE ??= "${IMAGE_BASENAME}.${MACHINE}.wks"
+WKS_FILES ?= "${WKS_FILE} ${IMAGE_BASENAME}.wks"
+WKS_SEARCH_PATH ?= "${THISDIR}:${@':'.join('%s/wic' % p for p in '${BBPATH}'.split(':'))}:${@':'.join('%s/scripts/lib/wic/canned-wks' % l for l in '${BBPATH}:${COREBASE}'.split(':'))}"
+WKS_FULL_PATH = "${@wks_search(d.getVar('WKS_FILES').split(), d.getVar('WKS_SEARCH_PATH')) or ''}"
+
+def wks_search(files, search_path):
+    for f in files:
+        if os.path.isabs(f):
+            if os.path.exists(f):
+                return f
+        else:
+            searched = bb.utils.which(search_path, f)
+            if searched:
+                return searched
+
+WIC_CREATE_EXTRA_ARGS ?= ""
+
+IMAGE_CMD:wic () {
+	out="${IMGDEPLOYDIR}/${IMAGE_NAME}"
+	build_wic="${WORKDIR}/build-wic"
+	tmp_wic="${WORKDIR}/tmp-wic"
+	wks="${WKS_FULL_PATH}"
+	if [ -e "$tmp_wic" ]; then
+		# Ensure we don't have any junk leftover from a previously interrupted
+		# do_image_wic execution
+		rm -rf "$tmp_wic"
+	fi
+	if [ -z "$wks" ]; then
+		bbfatal "No kickstart files from WKS_FILES were found: ${WKS_FILES}. Please set WKS_FILE or WKS_FILES appropriately."
+	fi
+	BUILDDIR="${TOPDIR}" PSEUDO_UNLOAD=1 wic create "$wks" --vars "${STAGING_DIR}/${MACHINE}/imgdata/" -e "${IMAGE_BASENAME}" -o "$build_wic/" -w "$tmp_wic" ${WIC_CREATE_EXTRA_ARGS}
+	mv "$build_wic/$(basename "${wks%.wks}")"*.direct "$out${IMAGE_NAME_SUFFIX}.wic"
+}
+IMAGE_CMD:wic[vardepsexclude] = "WKS_FULL_PATH WKS_FILES TOPDIR"
+do_image_wic[cleandirs] = "${WORKDIR}/build-wic"
+
+PSEUDO_IGNORE_PATHS .= ",${WORKDIR}/build-wic"
+
+# Rebuild when the wks file or vars in WICVARS change
+USING_WIC = "${@bb.utils.contains_any('IMAGE_FSTYPES', 'wic ' + ' '.join('wic.%s' % c for c in '${CONVERSIONTYPES}'.split()), '1', '', d)}"
+WKS_FILE_CHECKSUM = "${@'${WKS_FULL_PATH}:%s' % os.path.exists('${WKS_FULL_PATH}') if '${USING_WIC}' else ''}"
+do_image_wic[file-checksums] += "${WKS_FILE_CHECKSUM}"
+do_image_wic[depends] += "${@' '.join('%s-native:do_populate_sysroot' % r for r in ('parted', 'gptfdisk', 'dosfstools', 'mtools'))}"
+
+# We ensure all artifacts are deployed (e.g. virtual/bootloader)
+do_image_wic[recrdeptask] += "do_deploy"
+do_image_wic[deptask] += "do_image_complete"
+
+WKS_FILE_DEPENDS_DEFAULT = '${@bb.utils.contains_any("BUILD_ARCH", [ 'x86_64', 'i686' ], "syslinux-native", "",d)}'
+WKS_FILE_DEPENDS_DEFAULT += "bmap-tools-native cdrtools-native btrfs-tools-native squashfs-tools-native e2fsprogs-native erofs-utils-native"
+# Unified kernel images need objcopy
+WKS_FILE_DEPENDS_DEFAULT += "virtual/${MLPREFIX}${TARGET_PREFIX}binutils"
+WKS_FILE_DEPENDS_BOOTLOADERS = ""
+WKS_FILE_DEPENDS_BOOTLOADERS:x86 = "syslinux grub-efi systemd-boot os-release"
+WKS_FILE_DEPENDS_BOOTLOADERS:x86-64 = "syslinux grub-efi systemd-boot os-release"
+WKS_FILE_DEPENDS_BOOTLOADERS:x86-x32 = "syslinux grub-efi"
+
+WKS_FILE_DEPENDS ??= "${WKS_FILE_DEPENDS_DEFAULT} ${WKS_FILE_DEPENDS_BOOTLOADERS}"
+
+DEPENDS += "${@ '${WKS_FILE_DEPENDS}' if d.getVar('USING_WIC') else '' }"
+
+python do_write_wks_template () {
+    """Write out expanded template contents to WKS_FULL_PATH."""
+    import re
+
+    template_body = d.getVar('_WKS_TEMPLATE')
+
+    # Remove any remnant variable references left behind by the expansion
+    # due to undefined variables
+    expand_var_regexp = re.compile(r"\${[^{}@\n\t :]+}")
+    while True:
+        new_body = re.sub(expand_var_regexp, '', template_body)
+        if new_body == template_body:
+            break
+        else:
+            template_body = new_body
+
+    wks_file = d.getVar('WKS_FULL_PATH')
+    with open(wks_file, 'w') as f:
+        f.write(template_body)
+    f.close()
+    # Copy the finalized wks file to the deploy directory for later use
+    depdir = d.getVar('IMGDEPLOYDIR')
+    basename = d.getVar('IMAGE_BASENAME')
+    bb.utils.copyfile(wks_file, "%s/%s" % (depdir, basename + '-' + os.path.basename(wks_file)))
+}
+
+do_flush_pseudodb() {
+	${FAKEROOTENV} ${FAKEROOTCMD} -S
+}
+
+python () {
+    if d.getVar('USING_WIC'):
+        wks_file_u = d.getVar('WKS_FULL_PATH', False)
+        wks_file = d.expand(wks_file_u)
+        base, ext = os.path.splitext(wks_file)
+        if ext == '.in' and os.path.exists(wks_file):
+            wks_out_file = os.path.join(d.getVar('WORKDIR'), os.path.basename(base))
+            d.setVar('WKS_FULL_PATH', wks_out_file)
+            d.setVar('WKS_TEMPLATE_PATH', wks_file_u)
+            d.setVar('WKS_FILE_CHECKSUM', '${WKS_TEMPLATE_PATH}:True')
+
+            # We need to re-parse each time the file changes, and bitbake
+            # needs to be told about that explicitly.
+            bb.parse.mark_dependency(d, wks_file)
+
+            try:
+                with open(wks_file, 'r') as f:
+                    body = f.read()
+            except (IOError, OSError) as exc:
+                pass
+            else:
+                # Previously, I used expandWithRefs to get the dependency list
+                # and add it to WICVARS, but there's no point re-parsing the
+                # file in process_wks_template as well, so just put it in
+                # a variable and let the metadata deal with the deps.
+                d.setVar('_WKS_TEMPLATE', body)
+                bb.build.addtask('do_write_wks_template', 'do_image_wic', 'do_image', d)
+        bb.build.addtask('do_image_wic', 'do_image_complete', None, d)
+}
+
+#
+# Write environment variables used by wic
+# to tmp/sysroots/<machine>/imgdata/<image>.env
+#
+python do_rootfs_wicenv () {
+    wicvars = d.getVar('WICVARS')
+    if not wicvars:
+        return
+
+    stdir = d.getVar('STAGING_DIR')
+    outdir = os.path.join(stdir, d.getVar('MACHINE'), 'imgdata')
+    bb.utils.mkdirhier(outdir)
+    basename = d.getVar('IMAGE_BASENAME')
+    with open(os.path.join(outdir, basename) + '.env', 'w') as envf:
+        for var in wicvars.split():
+            value = d.getVar(var)
+            if value:
+                envf.write('%s="%s"\n' % (var, value.strip()))
+    envf.close()
+    # Copy the .env file to the deploy directory for later use with standalone wic
+    depdir = d.getVar('IMGDEPLOYDIR')
+    bb.utils.copyfile(os.path.join(outdir, basename) + '.env', os.path.join(depdir, basename) + '.env')
+}
+addtask do_flush_pseudodb after do_rootfs before do_image do_image_qa
+addtask do_rootfs_wicenv after do_image before do_image_wic
+do_rootfs_wicenv[vardeps] += "${WICVARS}"
+do_rootfs_wicenv[prefuncs] = 'set_image_size'
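For context, the wic type is normally enabled from the image or machine configuration; a minimal sketch, assuming a hypothetical my-board.wks kickstart file found via WKS_SEARCH_PATH, would be:

    IMAGE_FSTYPES += "wic wic.bmap"
    WKS_FILE = "my-board.wks"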
diff --git a/poky/meta/classes-recipe/kernel-arch.bbclass b/poky/meta/classes-recipe/kernel-arch.bbclass
new file mode 100644
index 0000000..6f5d3bd
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel-arch.bbclass
@@ -0,0 +1,74 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# set the ARCH environment variable for kernel compilation (including
+# modules). return value must match one of the architecture directories
+# in the kernel source "arch" directory
+#
+
+valid_archs = "alpha cris ia64 \
+               i386 x86 \
+               m68knommu m68k ppc powerpc powerpc64 ppc64  \
+               sparc sparc64 \
+               arm aarch64 \
+               m32r mips \
+               sh sh64 um h8300   \
+               parisc s390  v850 \
+               avr32 blackfin \
+               microblaze \
+               nios2 arc riscv xtensa"
+
+def map_kernel_arch(a, d):
+    import re
+
+    valid_archs = d.getVar('valid_archs').split()
+
+    if   re.match('(i.86|athlon|x86.64)$', a):  return 'x86'
+    elif re.match('arceb$', a):                 return 'arc'
+    elif re.match('armeb$', a):                 return 'arm'
+    elif re.match('aarch64$', a):               return 'arm64'
+    elif re.match('aarch64_be$', a):            return 'arm64'
+    elif re.match('aarch64_ilp32$', a):         return 'arm64'
+    elif re.match('aarch64_be_ilp32$', a):      return 'arm64'
+    elif re.match('mips(isa|)(32|64|)(r6|)(el|)$', a):      return 'mips'
+    elif re.match('mcf', a):                    return 'm68k'
+    elif re.match('riscv(32|64|)(eb|)$', a):    return 'riscv'
+    elif re.match('p(pc|owerpc)(|64)', a):      return 'powerpc'
+    elif re.match('sh(3|4)$', a):               return 'sh'
+    elif re.match('bfin', a):                   return 'blackfin'
+    elif re.match('microblazee[bl]', a):        return 'microblaze'
+    elif a in valid_archs:                      return a
+    else:
+        if not d.getVar("TARGET_OS").startswith("linux"):
+            return a
+        bb.error("cannot map '%s' to a linux kernel architecture" % a)
+
+export ARCH = "${@map_kernel_arch(d.getVar('TARGET_ARCH'), d)}"
+
+def map_uboot_arch(a, d):
+    import re
+
+    if   re.match('p(pc|owerpc)(|64)', a): return 'ppc'
+    elif re.match('i.86$', a): return 'x86'
+    return a
+
+export UBOOT_ARCH = "${@map_uboot_arch(d.getVar('ARCH'), d)}"
+
+# Set TARGET_??_KERNEL_ARCH in the machine .conf to set architecture
+# specific options necessary for building the kernel and modules.
+TARGET_CC_KERNEL_ARCH ?= ""
+HOST_CC_KERNEL_ARCH ?= "${TARGET_CC_KERNEL_ARCH}"
+TARGET_LD_KERNEL_ARCH ?= ""
+HOST_LD_KERNEL_ARCH ?= "${TARGET_LD_KERNEL_ARCH}"
+TARGET_AR_KERNEL_ARCH ?= ""
+HOST_AR_KERNEL_ARCH ?= "${TARGET_AR_KERNEL_ARCH}"
+
+KERNEL_CC = "${CCACHE}${HOST_PREFIX}gcc ${HOST_CC_KERNEL_ARCH} -fuse-ld=bfd ${DEBUG_PREFIX_MAP} -fdebug-prefix-map=${STAGING_KERNEL_DIR}=${KERNEL_SRC_PATH} -fdebug-prefix-map=${STAGING_KERNEL_BUILDDIR}=${KERNEL_SRC_PATH}"
+KERNEL_LD = "${CCACHE}${HOST_PREFIX}ld.bfd ${HOST_LD_KERNEL_ARCH}"
+KERNEL_AR = "${CCACHE}${HOST_PREFIX}ar ${HOST_AR_KERNEL_ARCH}"
+TOOLCHAIN = "gcc"
+
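For context, map_kernel_arch() translates the OE TARGET_ARCH into the kernel's arch/ directory name (e.g. aarch64 becomes arm64), and the *_KERNEL_ARCH variables let a machine pass extra tool flags to kernel and module builds only. A hypothetical machine .conf fragment, where the flag value is just a placeholder:

    TARGET_CC_KERNEL_ARCH = "-mcpu=cortex-a72"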
diff --git a/poky/meta/classes-recipe/kernel-artifact-names.bbclass b/poky/meta/classes-recipe/kernel-artifact-names.bbclass
new file mode 100644
index 0000000..311075c
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel-artifact-names.bbclass
@@ -0,0 +1,37 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+##################################################################
+# Specific kernel creation info
+# for recipes/bbclasses which need to reuse some of the kernel
+# artifacts, but aren't kernel recipes themselves
+##################################################################
+
+inherit image-artifact-names
+
+KERNEL_ARTIFACT_NAME ?= "${PKGE}-${PKGV}-${PKGR}-${MACHINE}${IMAGE_VERSION_SUFFIX}"
+KERNEL_ARTIFACT_LINK_NAME ?= "${MACHINE}"
+KERNEL_ARTIFACT_BIN_EXT ?= ".bin"
+
+KERNEL_IMAGE_NAME ?= "${KERNEL_ARTIFACT_NAME}"
+KERNEL_IMAGE_LINK_NAME ?= "${KERNEL_ARTIFACT_LINK_NAME}"
+KERNEL_IMAGE_BIN_EXT ?= "${KERNEL_ARTIFACT_BIN_EXT}"
+KERNEL_IMAGETYPE_SYMLINK ?= "1"
+
+KERNEL_DTB_NAME ?= "${KERNEL_ARTIFACT_NAME}"
+KERNEL_DTB_LINK_NAME ?= "${KERNEL_ARTIFACT_LINK_NAME}"
+KERNEL_DTB_BIN_EXT ?= "${KERNEL_ARTIFACT_BIN_EXT}"
+
+KERNEL_FIT_NAME ?= "${KERNEL_ARTIFACT_NAME}"
+KERNEL_FIT_LINK_NAME ?= "${KERNEL_ARTIFACT_LINK_NAME}"
+KERNEL_FIT_BIN_EXT ?= "${KERNEL_ARTIFACT_BIN_EXT}"
+
+MODULE_TARBALL_NAME ?= "${KERNEL_ARTIFACT_NAME}"
+MODULE_TARBALL_LINK_NAME ?= "${KERNEL_ARTIFACT_LINK_NAME}"
+MODULE_TARBALL_DEPLOY ?= "1"
+
+INITRAMFS_NAME ?= "initramfs-${KERNEL_ARTIFACT_NAME}"
+INITRAMFS_LINK_NAME ?= "initramfs-${KERNEL_ARTIFACT_LINK_NAME}"
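For reference, with the defaults above a deployed kernel artifact carries the package version, machine and datestamp in its name, alongside a machine-named symlink; a rough illustration, with placeholder version and datestamp:

    bzImage-5.15.32-r0-qemux86-64-20220101000000.bin     versioned artifact
    bzImage-qemux86-64.bin                                symlink from KERNEL_ARTIFACT_LINK_NAME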
diff --git a/poky/meta/classes-recipe/kernel-devicetree.bbclass b/poky/meta/classes-recipe/kernel-devicetree.bbclass
new file mode 100644
index 0000000..b2117de
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel-devicetree.bbclass
@@ -0,0 +1,119 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Support for device tree generation
+python () {
+    if not bb.data.inherits_class('nopackages', d):
+        d.appendVar("PACKAGES", " ${KERNEL_PACKAGE_NAME}-devicetree")
+        if d.getVar('KERNEL_DEVICETREE_BUNDLE') == '1':
+            d.appendVar("PACKAGES", " ${KERNEL_PACKAGE_NAME}-image-zimage-bundle")
+}
+
+FILES:${KERNEL_PACKAGE_NAME}-devicetree = "/${KERNEL_IMAGEDEST}/*.dtb /${KERNEL_IMAGEDEST}/*.dtbo"
+FILES:${KERNEL_PACKAGE_NAME}-image-zimage-bundle = "/${KERNEL_IMAGEDEST}/zImage-*.dtb.bin"
+
+# Generate kernel+devicetree bundle
+KERNEL_DEVICETREE_BUNDLE ?= "0"
+
+# dtc flags passed via DTC_FLAGS env variable
+KERNEL_DTC_FLAGS ?= ""
+
+normalize_dtb () {
+	dtb="$1"
+	if echo $dtb | grep -q '/dts/'; then
+		bbwarn "$dtb contains the full path to the dts file, but only the dtb name should be used."
+		dtb=`basename $dtb | sed 's,\.dts$,.dtb,g'`
+	fi
+	echo "$dtb"
+}
+
+get_real_dtb_path_in_kernel () {
+	dtb="$1"
+	dtb_path="${B}/arch/${ARCH}/boot/dts/$dtb"
+	if [ ! -e "$dtb_path" ]; then
+		dtb_path="${B}/arch/${ARCH}/boot/$dtb"
+	fi
+	echo "$dtb_path"
+}
+
+do_configure:append() {
+	if [ "${KERNEL_DEVICETREE_BUNDLE}" = "1" ]; then
+		if echo ${KERNEL_IMAGETYPE_FOR_MAKE} | grep -q 'zImage'; then
+			case "${ARCH}" in
+				"arm")
+				config="${B}/.config"
+				if ! grep -q 'CONFIG_ARM_APPENDED_DTB=y' $config; then
+					bbwarn 'CONFIG_ARM_APPENDED_DTB is NOT enabled in the kernel. Enabling it to allow the kernel to boot with the Device Tree appended!'
+					sed -i "/CONFIG_ARM_APPENDED_DTB[ =]/d" $config
+					echo "CONFIG_ARM_APPENDED_DTB=y" >> $config
+					echo "# CONFIG_ARM_ATAG_DTB_COMPAT is not set" >> $config
+				fi
+				;;
+				*)
+				bberror "KERNEL_DEVICETREE_BUNDLE is not supported for ${ARCH}. Currently it is only supported for 'ARM'."
+			esac
+		else
+			bberror 'The KERNEL_DEVICETREE_BUNDLE requires the KERNEL_IMAGETYPE to contain zImage.'
+		fi
+	fi
+}
+
+do_compile:append() {
+	if [ -n "${KERNEL_DTC_FLAGS}" ]; then
+		export DTC_FLAGS="${KERNEL_DTC_FLAGS}"
+	fi
+
+	for dtbf in ${KERNEL_DEVICETREE}; do
+		dtb=`normalize_dtb "$dtbf"`
+		oe_runmake $dtb CC="${KERNEL_CC} $cc_extra " LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS}
+	done
+}
+
+do_install:append() {
+	for dtbf in ${KERNEL_DEVICETREE}; do
+		dtb=`normalize_dtb "$dtbf"`
+		dtb_ext=${dtb##*.}
+		dtb_base_name=`basename $dtb .$dtb_ext`
+		dtb_path=`get_real_dtb_path_in_kernel "$dtb"`
+		install -m 0644 $dtb_path ${D}/${KERNEL_IMAGEDEST}/$dtb_base_name.$dtb_ext
+	done
+}
+
+do_deploy:append() {
+	for dtbf in ${KERNEL_DEVICETREE}; do
+		dtb=`normalize_dtb "$dtbf"`
+		dtb_ext=${dtb##*.}
+		dtb_base_name=`basename $dtb .$dtb_ext`
+		install -d $deployDir
+		install -m 0644 ${D}/${KERNEL_IMAGEDEST}/$dtb_base_name.$dtb_ext $deployDir/$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext
+		if [ "${KERNEL_IMAGETYPE_SYMLINK}" = "1" ] ; then
+			ln -sf $dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext $deployDir/$dtb_base_name.$dtb_ext
+		fi
+		if [ -n "${KERNEL_DTB_LINK_NAME}" ] ; then
+			ln -sf $dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext $deployDir/$dtb_base_name-${KERNEL_DTB_LINK_NAME}.$dtb_ext
+		fi
+		for type in ${KERNEL_IMAGETYPE_FOR_MAKE}; do
+			if [ "$type" = "zImage" ] && [ "${KERNEL_DEVICETREE_BUNDLE}" = "1" ]; then
+				cat ${D}/${KERNEL_IMAGEDEST}/$type \
+					$deployDir/$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext \
+					> $deployDir/$type-$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT}
+				if [ -n "${KERNEL_DTB_LINK_NAME}" ]; then
+					ln -sf $type-$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT} \
+						$deployDir/$type-$dtb_base_name-${KERNEL_DTB_LINK_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT}
+				fi
+				if [ -e "${KERNEL_OUTPUT_DIR}/${type}.initramfs" ]; then
+					cat ${KERNEL_OUTPUT_DIR}/${type}.initramfs \
+						$deployDir/$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext \
+						>  $deployDir/${type}-${INITRAMFS_NAME}-$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT}
+					if [ -n "${KERNEL_DTB_LINK_NAME}" ]; then
+						ln -sf ${type}-${INITRAMFS_NAME}-$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT} \
+							$deployDir/${type}-${INITRAMFS_NAME}-$dtb_base_name-${KERNEL_DTB_LINK_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT}
+					fi
+				fi
+			fi
+		done
+	done
+}
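For context, the class builds and deploys whatever is listed in KERNEL_DEVICETREE, and with KERNEL_DEVICETREE_BUNDLE it concatenates the zImage with each dtb (ARM only, as enforced in do_configure above). A hypothetical machine .conf sketch, with made-up dtb names:

    KERNEL_DEVICETREE = "vendor-board-a.dtb vendor-board-b.dtb"
    KERNEL_DTC_FLAGS = "-@"
    KERNEL_DEVICETREE_BUNDLE = "1"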
diff --git a/poky/meta/classes-recipe/kernel-fitimage.bbclass b/poky/meta/classes-recipe/kernel-fitimage.bbclass
new file mode 100644
index 0000000..107914e
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel-fitimage.bbclass
@@ -0,0 +1,811 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit kernel-uboot kernel-artifact-names uboot-sign
+
+def get_fit_replacement_type(d):
+    kerneltypes = d.getVar('KERNEL_IMAGETYPES') or ""
+    replacementtype = ""
+    if 'fitImage' in kerneltypes.split():
+        uarch = d.getVar("UBOOT_ARCH")
+        if uarch == "arm64":
+            replacementtype = "Image"
+        elif uarch == "riscv":
+            replacementtype = "Image"
+        elif uarch == "mips":
+            replacementtype = "vmlinuz.bin"
+        elif uarch == "x86":
+            replacementtype = "bzImage"
+        elif uarch == "microblaze":
+            replacementtype = "linux.bin"
+        else:
+            replacementtype = "zImage"
+    return replacementtype
+
+KERNEL_IMAGETYPE_REPLACEMENT ?= "${@get_fit_replacement_type(d)}"
+DEPENDS:append = " ${@'u-boot-tools-native dtc-native' if 'fitImage' in (d.getVar('KERNEL_IMAGETYPES') or '').split() else ''}"
+
+python __anonymous () {
+        # Override the KERNEL_IMAGETYPE_FOR_MAKE variable, which is internal
+        # to kernel.bbclass. We have to override it, since we pack zImage
+        # (at least for now) into the fitImage.
+        typeformake = d.getVar("KERNEL_IMAGETYPE_FOR_MAKE") or ""
+        if 'fitImage' in typeformake.split():
+            d.setVar('KERNEL_IMAGETYPE_FOR_MAKE', typeformake.replace('fitImage', d.getVar('KERNEL_IMAGETYPE_REPLACEMENT')))
+
+        image = d.getVar('INITRAMFS_IMAGE')
+        if image:
+            d.appendVarFlag('do_assemble_fitimage_initramfs', 'depends', ' ${INITRAMFS_IMAGE}:do_image_complete')
+
+        ubootenv = d.getVar('UBOOT_ENV')
+        if ubootenv:
+            d.appendVarFlag('do_assemble_fitimage', 'depends', ' virtual/bootloader:do_populate_sysroot')
+
+        #check if there are any dtb providers
+        providerdtb = d.getVar("PREFERRED_PROVIDER_virtual/dtb")
+        if providerdtb:
+            d.appendVarFlag('do_assemble_fitimage', 'depends', ' virtual/dtb:do_populate_sysroot')
+            d.appendVarFlag('do_assemble_fitimage_initramfs', 'depends', ' virtual/dtb:do_populate_sysroot')
+            d.setVar('EXTERNAL_KERNEL_DEVICETREE', "${RECIPE_SYSROOT}/boot/devicetree")
+
+        # Verified boot will sign the fitImage and append the public key to
+        # U-Boot dtb. We ensure the U-Boot dtb is deployed before assembling
+        # the fitImage:
+        if d.getVar('UBOOT_SIGN_ENABLE') == "1" and d.getVar('UBOOT_DTB_BINARY'):
+            uboot_pn = d.getVar('PREFERRED_PROVIDER_u-boot') or 'u-boot'
+            d.appendVarFlag('do_assemble_fitimage', 'depends', ' %s:do_populate_sysroot' % uboot_pn)
+            if d.getVar('INITRAMFS_IMAGE_BUNDLE') == "1":
+                d.appendVarFlag('do_assemble_fitimage_initramfs', 'depends', ' %s:do_populate_sysroot' % uboot_pn)
+}
+
+
+# Description string
+FIT_DESC ?= "Kernel fitImage for ${DISTRO_NAME}/${PV}/${MACHINE}"
+
+# Sign individual images as well
+FIT_SIGN_INDIVIDUAL ?= "0"
+
+FIT_CONF_PREFIX ?= "conf-"
+FIT_CONF_PREFIX[doc] = "Prefix to use for FIT configuration node name"
+
+FIT_SUPPORTED_INITRAMFS_FSTYPES ?= "cpio.lz4 cpio.lzo cpio.lzma cpio.xz cpio.zst cpio.gz ext2.gz cpio"
+
+# Keys used to individually sign image nodes.
+# The keys used to sign image nodes must be different from those used to sign
+# configuration nodes, otherwise the "required" property, from
+# UBOOT_DTB_BINARY, will be set to "conf", because "conf" prevails over "image".
+# Then image signature checking will not be mandatory and no error will be
+# raised in case of failure.
+# UBOOT_SIGN_IMG_KEYNAME = "dev2" # key name in keydir (e.g. "dev2.crt", "dev2.key")
+
+#
+# Emit the fitImage ITS header
+#
+# $1 ... .its filename
+fitimage_emit_fit_header() {
+	cat << EOF >> $1
+/dts-v1/;
+
+/ {
+        description = "${FIT_DESC}";
+        #address-cells = <1>;
+EOF
+}
+
+#
+# Emit the fitImage section bits
+#
+# $1 ... .its filename
+# $2 ... Section bit type: imagestart - image section start
+#                          confstart  - configuration section start
+#                          sectend    - section end
+#                          fitend     - fitimage end
+#
+fitimage_emit_section_maint() {
+	case $2 in
+	imagestart)
+		cat << EOF >> $1
+
+        images {
+EOF
+	;;
+	confstart)
+		cat << EOF >> $1
+
+        configurations {
+EOF
+	;;
+	sectend)
+		cat << EOF >> $1
+	};
+EOF
+	;;
+	fitend)
+		cat << EOF >> $1
+};
+EOF
+	;;
+	esac
+}
+
+#
+# Emit the fitImage ITS kernel section
+#
+# $1 ... .its filename
+# $2 ... Image counter
+# $3 ... Path to kernel image
+# $4 ... Compression type
+fitimage_emit_section_kernel() {
+
+	kernel_csum="${FIT_HASH_ALG}"
+	kernel_sign_algo="${FIT_SIGN_ALG}"
+	kernel_sign_keyname="${UBOOT_SIGN_IMG_KEYNAME}"
+
+	ENTRYPOINT="${UBOOT_ENTRYPOINT}"
+	if [ -n "${UBOOT_ENTRYSYMBOL}" ]; then
+		ENTRYPOINT=`${HOST_PREFIX}nm vmlinux | \
+			awk '$3=="${UBOOT_ENTRYSYMBOL}" {print "0x"$1;exit}'`
+	fi
+
+	cat << EOF >> $1
+                kernel-$2 {
+                        description = "Linux kernel";
+                        data = /incbin/("$3");
+                        type = "${UBOOT_MKIMAGE_KERNEL_TYPE}";
+                        arch = "${UBOOT_ARCH}";
+                        os = "linux";
+                        compression = "$4";
+                        load = <${UBOOT_LOADADDRESS}>;
+                        entry = <$ENTRYPOINT>;
+                        hash-1 {
+                                algo = "$kernel_csum";
+                        };
+                };
+EOF
+
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${FIT_SIGN_INDIVIDUAL}" = "1" -a -n "$kernel_sign_keyname" ] ; then
+		sed -i '$ d' $1
+		cat << EOF >> $1
+                        signature-1 {
+                                algo = "$kernel_csum,$kernel_sign_algo";
+                                key-name-hint = "$kernel_sign_keyname";
+                        };
+                };
+EOF
+	fi
+}
+
+#
+# Emit the fitImage ITS DTB section
+#
+# $1 ... .its filename
+# $2 ... Image counter
+# $3 ... Path to DTB image
+fitimage_emit_section_dtb() {
+
+	dtb_csum="${FIT_HASH_ALG}"
+	dtb_sign_algo="${FIT_SIGN_ALG}"
+	dtb_sign_keyname="${UBOOT_SIGN_IMG_KEYNAME}"
+
+	dtb_loadline=""
+	dtb_ext=${DTB##*.}
+	if [ "${dtb_ext}" = "dtbo" ]; then
+		if [ -n "${UBOOT_DTBO_LOADADDRESS}" ]; then
+			dtb_loadline="load = <${UBOOT_DTBO_LOADADDRESS}>;"
+		fi
+	elif [ -n "${UBOOT_DTB_LOADADDRESS}" ]; then
+		dtb_loadline="load = <${UBOOT_DTB_LOADADDRESS}>;"
+	fi
+	cat << EOF >> $1
+                fdt-$2 {
+                        description = "Flattened Device Tree blob";
+                        data = /incbin/("$3");
+                        type = "flat_dt";
+                        arch = "${UBOOT_ARCH}";
+                        compression = "none";
+                        $dtb_loadline
+                        hash-1 {
+                                algo = "$dtb_csum";
+                        };
+                };
+EOF
+
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${FIT_SIGN_INDIVIDUAL}" = "1" -a -n "$dtb_sign_keyname" ] ; then
+		sed -i '$ d' $1
+		cat << EOF >> $1
+                        signature-1 {
+                                algo = "$dtb_csum,$dtb_sign_algo";
+                                key-name-hint = "$dtb_sign_keyname";
+                        };
+                };
+EOF
+	fi
+}
+
+#
+# Emit the fitImage ITS u-boot script section
+#
+# $1 ... .its filename
+# $2 ... Image counter
+# $3 ... Path to boot script image
+fitimage_emit_section_boot_script() {
+
+	bootscr_csum="${FIT_HASH_ALG}"
+	bootscr_sign_algo="${FIT_SIGN_ALG}"
+	bootscr_sign_keyname="${UBOOT_SIGN_IMG_KEYNAME}"
+
+        cat << EOF >> $1
+                bootscr-$2 {
+                        description = "U-boot script";
+                        data = /incbin/("$3");
+                        type = "script";
+                        arch = "${UBOOT_ARCH}";
+                        compression = "none";
+                        hash-1 {
+                                algo = "$bootscr_csum";
+                        };
+                };
+EOF
+
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${FIT_SIGN_INDIVIDUAL}" = "1" -a -n "$bootscr_sign_keyname" ] ; then
+		sed -i '$ d' $1
+		cat << EOF >> $1
+                        signature-1 {
+                                algo = "$bootscr_csum,$bootscr_sign_algo";
+                                key-name-hint = "$bootscr_sign_keyname";
+                        };
+                };
+EOF
+	fi
+}
+
+#
+# Emit the fitImage ITS setup section
+#
+# $1 ... .its filename
+# $2 ... Image counter
+# $3 ... Path to setup image
+fitimage_emit_section_setup() {
+
+	setup_csum="${FIT_HASH_ALG}"
+
+	cat << EOF >> $1
+                setup-$2 {
+                        description = "Linux setup.bin";
+                        data = /incbin/("$3");
+                        type = "x86_setup";
+                        arch = "${UBOOT_ARCH}";
+                        os = "linux";
+                        compression = "none";
+                        load = <0x00090000>;
+                        entry = <0x00090000>;
+                        hash-1 {
+                                algo = "$setup_csum";
+                        };
+                };
+EOF
+}
+
+#
+# Emit the fitImage ITS ramdisk section
+#
+# $1 ... .its filename
+# $2 ... Image counter
+# $3 ... Path to ramdisk image
+fitimage_emit_section_ramdisk() {
+
+	ramdisk_csum="${FIT_HASH_ALG}"
+	ramdisk_sign_algo="${FIT_SIGN_ALG}"
+	ramdisk_sign_keyname="${UBOOT_SIGN_IMG_KEYNAME}"
+	ramdisk_loadline=""
+	ramdisk_entryline=""
+
+	if [ -n "${UBOOT_RD_LOADADDRESS}" ]; then
+		ramdisk_loadline="load = <${UBOOT_RD_LOADADDRESS}>;"
+	fi
+	if [ -n "${UBOOT_RD_ENTRYPOINT}" ]; then
+		ramdisk_entryline="entry = <${UBOOT_RD_ENTRYPOINT}>;"
+	fi
+
+	cat << EOF >> $1
+                ramdisk-$2 {
+                        description = "${INITRAMFS_IMAGE}";
+                        data = /incbin/("$3");
+                        type = "ramdisk";
+                        arch = "${UBOOT_ARCH}";
+                        os = "linux";
+                        compression = "none";
+                        $ramdisk_loadline
+                        $ramdisk_entryline
+                        hash-1 {
+                                algo = "$ramdisk_csum";
+                        };
+                };
+EOF
+
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${FIT_SIGN_INDIVIDUAL}" = "1" -a -n "$ramdisk_sign_keyname" ] ; then
+		sed -i '$ d' $1
+		cat << EOF >> $1
+                        signature-1 {
+                                algo = "$ramdisk_csum,$ramdisk_sign_algo";
+                                key-name-hint = "$ramdisk_sign_keyname";
+                        };
+                };
+EOF
+	fi
+}
+
+#
+# Emit the fitImage ITS configuration section
+#
+# $1 ... .its filename
+# $2 ... Linux kernel ID
+# $3 ... DTB image name
+# $4 ... ramdisk ID
+# $5 ... u-boot script ID
+# $6 ... config ID
+# $7 ... default flag
+fitimage_emit_section_config() {
+
+	conf_csum="${FIT_HASH_ALG}"
+	conf_sign_algo="${FIT_SIGN_ALG}"
+	conf_padding_algo="${FIT_PAD_ALG}"
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" ] ; then
+		conf_sign_keyname="${UBOOT_SIGN_KEYNAME}"
+	fi
+
+	its_file="$1"
+	kernel_id="$2"
+	dtb_image="$3"
+	ramdisk_id="$4"
+	bootscr_id="$5"
+	config_id="$6"
+	default_flag="$7"
+
+	# Test if we have any DTBs at all
+	sep=""
+	conf_desc=""
+	conf_node="${FIT_CONF_PREFIX}"
+	kernel_line=""
+	fdt_line=""
+	ramdisk_line=""
+	bootscr_line=""
+	setup_line=""
+	default_line=""
+
+	# The conf node name is selected based on the dtb ID if it is present,
+	# otherwise it's selected based on the kernel ID
+	if [ -n "$dtb_image" ]; then
+		conf_node=$conf_node$dtb_image
+	else
+		conf_node=$conf_node$kernel_id
+	fi
+
+	if [ -n "$kernel_id" ]; then
+		conf_desc="Linux kernel"
+		sep=", "
+		kernel_line="kernel = \"kernel-$kernel_id\";"
+	fi
+
+	if [ -n "$dtb_image" ]; then
+		conf_desc="$conf_desc${sep}FDT blob"
+		sep=", "
+		fdt_line="fdt = \"fdt-$dtb_image\";"
+	fi
+
+	if [ -n "$ramdisk_id" ]; then
+		conf_desc="$conf_desc${sep}ramdisk"
+		sep=", "
+		ramdisk_line="ramdisk = \"ramdisk-$ramdisk_id\";"
+	fi
+
+	if [ -n "$bootscr_id" ]; then
+		conf_desc="$conf_desc${sep}u-boot script"
+		sep=", "
+		bootscr_line="bootscr = \"bootscr-$bootscr_id\";"
+	fi
+
+	if [ -n "$config_id" ]; then
+		conf_desc="$conf_desc${sep}setup"
+		setup_line="setup = \"setup-$config_id\";"
+	fi
+
+	if [ "$default_flag" = "1" ]; then
+		# The default node is selected based on the dtb ID if it is present,
+		# otherwise it's selected based on the kernel ID
+		if [ -n "$dtb_image" ]; then
+			default_line="default = \"${FIT_CONF_PREFIX}$dtb_image\";"
+		else
+			default_line="default = \"${FIT_CONF_PREFIX}$kernel_id\";"
+		fi
+	fi
+
+	cat << EOF >> $its_file
+                $default_line
+                $conf_node {
+                        description = "$default_flag $conf_desc";
+                        $kernel_line
+                        $fdt_line
+                        $ramdisk_line
+                        $bootscr_line
+                        $setup_line
+                        hash-1 {
+                                algo = "$conf_csum";
+                        };
+EOF
+
+	if [ -n "$conf_sign_keyname" ] ; then
+
+		sign_line="sign-images = "
+		sep=""
+
+		if [ -n "$kernel_id" ]; then
+			sign_line="$sign_line${sep}\"kernel\""
+			sep=", "
+		fi
+
+		if [ -n "$dtb_image" ]; then
+			sign_line="$sign_line${sep}\"fdt\""
+			sep=", "
+		fi
+
+		if [ -n "$ramdisk_id" ]; then
+			sign_line="$sign_line${sep}\"ramdisk\""
+			sep=", "
+		fi
+
+		if [ -n "$bootscr_id" ]; then
+			sign_line="$sign_line${sep}\"bootscr\""
+			sep=", "
+		fi
+
+		if [ -n "$config_id" ]; then
+			sign_line="$sign_line${sep}\"setup\""
+		fi
+
+		sign_line="$sign_line;"
+
+		cat << EOF >> $its_file
+                        signature-1 {
+                                algo = "$conf_csum,$conf_sign_algo";
+                                key-name-hint = "$conf_sign_keyname";
+                                padding = "$conf_padding_algo";
+                                $sign_line
+                        };
+EOF
+	fi
+
+	cat << EOF >> $its_file
+                };
+EOF
+}
+
+#
+# Assemble fitImage
+#
+# $1 ... .its filename
+# $2 ... fitImage name
+# $3 ... include ramdisk
+fitimage_assemble() {
+	kernelcount=1
+	dtbcount=""
+	DTBS=""
+	ramdiskcount=$3
+	setupcount=""
+	bootscr_id=""
+	rm -f $1 arch/${ARCH}/boot/$2
+
+	if [ -n "${UBOOT_SIGN_IMG_KEYNAME}" -a "${UBOOT_SIGN_KEYNAME}" = "${UBOOT_SIGN_IMG_KEYNAME}" ]; then
+		bbfatal "Keys used to sign images and configuration nodes must be different."
+	fi
+
+	fitimage_emit_fit_header $1
+
+	#
+	# Step 1: Prepare a kernel image section.
+	#
+	fitimage_emit_section_maint $1 imagestart
+
+	uboot_prep_kimage
+	fitimage_emit_section_kernel $1 $kernelcount linux.bin "$linux_comp"
+
+	#
+	# Step 2: Prepare a DTB image section
+	#
+
+	if [ -n "${KERNEL_DEVICETREE}" ]; then
+		dtbcount=1
+		for DTB in ${KERNEL_DEVICETREE}; do
+			if echo $DTB | grep -q '/dts/'; then
+				bbwarn "$DTB contains the full path to the dts file, but only the dtb name should be used."
+				DTB=`basename $DTB | sed 's,\.dts$,.dtb,g'`
+			fi
+
+			# Skip ${DTB} if it's also provided in ${EXTERNAL_KERNEL_DEVICETREE}
+			if [ -n "${EXTERNAL_KERNEL_DEVICETREE}" ] && [ -s ${EXTERNAL_KERNEL_DEVICETREE}/${DTB} ]; then
+				continue
+			fi
+
+			DTB_PATH="arch/${ARCH}/boot/dts/$DTB"
+			if [ ! -e "$DTB_PATH" ]; then
+				DTB_PATH="arch/${ARCH}/boot/$DTB"
+			fi
+
+			DTB=$(echo "$DTB" | tr '/' '_')
+
+			# Skip DTB if we've picked it up previously
+			echo "$DTBS" | tr ' ' '\n' | grep -xq "$DTB" && continue
+
+			DTBS="$DTBS $DTB"
+			fitimage_emit_section_dtb $1 $DTB $DTB_PATH
+		done
+	fi
+
+	if [ -n "${EXTERNAL_KERNEL_DEVICETREE}" ]; then
+		dtbcount=1
+		for DTB in $(find "${EXTERNAL_KERNEL_DEVICETREE}" \( -name '*.dtb' -o -name '*.dtbo' \) -printf '%P\n' | sort); do
+			DTB=$(echo "$DTB" | tr '/' '_')
+
+			# Skip DTB if we've picked it up previously
+			echo "$DTBS" | tr ' ' '\n' | grep -xq "$DTB" && continue
+
+			DTBS="$DTBS $DTB"
+			fitimage_emit_section_dtb $1 $DTB "${EXTERNAL_KERNEL_DEVICETREE}/$DTB"
+		done
+	fi
+
+	#
+	# Step 3: Prepare a u-boot script section
+	#
+
+	if [ -n "${UBOOT_ENV}" ] && [ -d "${STAGING_DIR_HOST}/boot" ]; then
+		if [ -e "${STAGING_DIR_HOST}/boot/${UBOOT_ENV_BINARY}" ]; then
+			cp ${STAGING_DIR_HOST}/boot/${UBOOT_ENV_BINARY} ${B}
+			bootscr_id="${UBOOT_ENV_BINARY}"
+			fitimage_emit_section_boot_script $1 "$bootscr_id" ${UBOOT_ENV_BINARY}
+		else
+			bbwarn "${STAGING_DIR_HOST}/boot/${UBOOT_ENV_BINARY} not found."
+		fi
+	fi
+
+	#
+	# Step 4: Prepare a setup section. (For x86)
+	#
+	if [ -e arch/${ARCH}/boot/setup.bin ]; then
+		setupcount=1
+		fitimage_emit_section_setup $1 $setupcount arch/${ARCH}/boot/setup.bin
+	fi
+
+	#
+	# Step 5: Prepare a ramdisk section.
+	#
+	if [ "x${ramdiskcount}" = "x1" ] && [ "${INITRAMFS_IMAGE_BUNDLE}" != "1" ]; then
+		# Find and use the first initramfs image archive type we find
+		found=
+		for img in ${FIT_SUPPORTED_INITRAMFS_FSTYPES}; do
+			initramfs_path="${DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.$img"
+			if [ -e "$initramfs_path" ]; then
+				bbnote "Found initramfs image: $initramfs_path"
+				found=true
+				fitimage_emit_section_ramdisk $1 "$ramdiskcount" "$initramfs_path"
+				break
+			else
+				bbnote "Did not find initramfs image: $initramfs_path"
+			fi
+		done
+
+		if [ -z "$found" ]; then
+			bbfatal "Could not find a valid initramfs type for ${INITRAMFS_IMAGE_NAME}, the supported types are: ${FIT_SUPPORTED_INITRAMFS_FSTYPES}"
+		fi
+	fi
+
+	fitimage_emit_section_maint $1 sectend
+
+	# Force the first Kernel and DTB in the default config
+	kernelcount=1
+	if [ -n "$dtbcount" ]; then
+		dtbcount=1
+	fi
+
+	#
+	# Step 6: Prepare a configurations section
+	#
+	fitimage_emit_section_maint $1 confstart
+
+	# kernel-fitimage.bbclass currently only supports a single kernel (no more,
+	# no less) to be added to the FIT image along with 0 or more device trees
+	# and 0 or 1 ramdisk.
+	# It is also possible to include an initramfs bundle (kernel and rootfs in
+	# one binary). When the initramfs bundle is used, the ramdisk is disabled.
+	# If a device tree is to be part of the FIT image, the default configuration
+	# is selected based on the dtbcount. If no dtb is present, the default
+	# configuration is selected based on the kernelcount.
+	if [ -n "$DTBS" ]; then
+		i=1
+		for DTB in ${DTBS}; do
+			dtb_ext=${DTB##*.}
+			if [ "$dtb_ext" = "dtbo" ]; then
+				fitimage_emit_section_config $1 "" "$DTB" "" "$bootscr_id" "" "`expr $i = $dtbcount`"
+			else
+				fitimage_emit_section_config $1 $kernelcount "$DTB" "$ramdiskcount" "$bootscr_id" "$setupcount" "`expr $i = $dtbcount`"
+			fi
+			i=`expr $i + 1`
+		done
+	else
+		defaultconfigcount=1
+		fitimage_emit_section_config $1 $kernelcount "" "$ramdiskcount" "$bootscr_id"  "$setupcount" $defaultconfigcount
+	fi
+
+	fitimage_emit_section_maint $1 sectend
+
+	fitimage_emit_section_maint $1 fitend
+
+	#
+	# Step 7: Assemble the image
+	#
+	${UBOOT_MKIMAGE} \
+		${@'-D "${UBOOT_MKIMAGE_DTCOPTS}"' if len('${UBOOT_MKIMAGE_DTCOPTS}') else ''} \
+		-f $1 \
+		arch/${ARCH}/boot/$2
+
+	#
+	# Step 8: Sign the image and add public key to U-Boot dtb
+	#
+	if [ "x${UBOOT_SIGN_ENABLE}" = "x1" ] ; then
+		add_key_to_u_boot=""
+		if [ -n "${UBOOT_DTB_BINARY}" ]; then
+			# The u-boot.dtb is a symlink to UBOOT_DTB_IMAGE, so we need to copy
+			# both of them, without dereferencing the symlink.
+			cp -P ${STAGING_DATADIR}/u-boot*.dtb ${B}
+			add_key_to_u_boot="-K ${B}/${UBOOT_DTB_BINARY}"
+		fi
+		${UBOOT_MKIMAGE_SIGN} \
+			${@'-D "${UBOOT_MKIMAGE_DTCOPTS}"' if len('${UBOOT_MKIMAGE_DTCOPTS}') else ''} \
+			-F -k "${UBOOT_SIGN_KEYDIR}" \
+			$add_key_to_u_boot \
+			-r arch/${ARCH}/boot/$2 \
+			${UBOOT_MKIMAGE_SIGN_ARGS}
+	fi
+}
+
+do_assemble_fitimage() {
+	if echo ${KERNEL_IMAGETYPES} | grep -wq "fitImage"; then
+		cd ${B}
+		fitimage_assemble fit-image.its fitImage ""
+	fi
+}
+
+addtask assemble_fitimage before do_install after do_compile
+
+do_assemble_fitimage_initramfs() {
+	if echo ${KERNEL_IMAGETYPES} | grep -wq "fitImage" && \
+		test -n "${INITRAMFS_IMAGE}" ; then
+		cd ${B}
+		if [ "${INITRAMFS_IMAGE_BUNDLE}" = "1" ]; then
+			fitimage_assemble fit-image-${INITRAMFS_IMAGE}.its fitImage ""
+		else
+			fitimage_assemble fit-image-${INITRAMFS_IMAGE}.its fitImage-${INITRAMFS_IMAGE} 1
+		fi
+	fi
+}
+
+addtask assemble_fitimage_initramfs before do_deploy after do_bundle_initramfs
+
+do_kernel_generate_rsa_keys() {
+	if [ "${UBOOT_SIGN_ENABLE}" = "0" ] && [ "${FIT_GENERATE_KEYS}" = "1" ]; then
+		bbwarn "FIT_GENERATE_KEYS is set to 1 even though UBOOT_SIGN_ENABLE is set to 0. The keys will not be generated as they won't be used."
+	fi
+
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" ] && [ "${FIT_GENERATE_KEYS}" = "1" ]; then
+
+		# Generate keys to sign configuration nodes, only if they don't already exist
+		if [ ! -f "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".key ] || \
+			[ ! -f "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".crt ]; then
+
+			# make directory if it does not already exist
+			mkdir -p "${UBOOT_SIGN_KEYDIR}"
+
+			bbnote "Generating RSA private key for signing fitImage"
+			openssl genrsa ${FIT_KEY_GENRSA_ARGS} -out \
+				"${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".key \
+			"${FIT_SIGN_NUMBITS}"
+
+			bbnote "Generating certificate for signing fitImage"
+			openssl req ${FIT_KEY_REQ_ARGS} "${FIT_KEY_SIGN_PKCS}" \
+				-key "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".key \
+				-out "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".crt
+		fi
+
+		# Generate keys to sign image nodes, only if they don't already exist
+		if [ ! -f "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".key ] || \
+			[ ! -f "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".crt ]; then
+
+			# make directory if it does not already exist
+			mkdir -p "${UBOOT_SIGN_KEYDIR}"
+
+			bbnote "Generating RSA private key for signing fitImage"
+			openssl genrsa ${FIT_KEY_GENRSA_ARGS} -out \
+				"${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".key \
+			"${FIT_SIGN_NUMBITS}"
+
+			bbnote "Generating certificate for signing fitImage"
+			openssl req ${FIT_KEY_REQ_ARGS} "${FIT_KEY_SIGN_PKCS}" \
+				-key "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".key \
+				-out "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".crt
+		fi
+	fi
+}
+
+addtask kernel_generate_rsa_keys before do_assemble_fitimage after do_compile
+
+kernel_do_deploy[vardepsexclude] = "DATETIME"
+kernel_do_deploy:append() {
+	# Update deploy directory
+	if echo ${KERNEL_IMAGETYPES} | grep -wq "fitImage"; then
+
+		if [ "${INITRAMFS_IMAGE_BUNDLE}" != "1" ]; then
+			bbnote "Copying fit-image.its source file..."
+			install -m 0644 ${B}/fit-image.its "$deployDir/fitImage-its-${KERNEL_FIT_NAME}.its"
+			if [ -n "${KERNEL_FIT_LINK_NAME}" ] ; then
+				ln -snf fitImage-its-${KERNEL_FIT_NAME}.its "$deployDir/fitImage-its-${KERNEL_FIT_LINK_NAME}"
+			fi
+
+			bbnote "Copying linux.bin file..."
+			install -m 0644 ${B}/linux.bin $deployDir/fitImage-linux.bin-${KERNEL_FIT_NAME}${KERNEL_FIT_BIN_EXT}
+			if [ -n "${KERNEL_FIT_LINK_NAME}" ] ; then
+				ln -snf fitImage-linux.bin-${KERNEL_FIT_NAME}${KERNEL_FIT_BIN_EXT} "$deployDir/fitImage-linux.bin-${KERNEL_FIT_LINK_NAME}"
+			fi
+		fi
+
+		if [ -n "${INITRAMFS_IMAGE}" ]; then
+			bbnote "Copying fit-image-${INITRAMFS_IMAGE}.its source file..."
+			install -m 0644 ${B}/fit-image-${INITRAMFS_IMAGE}.its "$deployDir/fitImage-its-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_NAME}.its"
+			if [ -n "${KERNEL_FIT_LINK_NAME}" ] ; then
+				ln -snf fitImage-its-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_NAME}.its "$deployDir/fitImage-its-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_LINK_NAME}"
+			fi
+
+			if [ "${INITRAMFS_IMAGE_BUNDLE}" != "1" ]; then
+				bbnote "Copying fitImage-${INITRAMFS_IMAGE} file..."
+				install -m 0644 ${B}/arch/${ARCH}/boot/fitImage-${INITRAMFS_IMAGE} "$deployDir/fitImage-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_NAME}${KERNEL_FIT_BIN_EXT}"
+				if [ -n "${KERNEL_FIT_LINK_NAME}" ] ; then
+					ln -snf fitImage-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_NAME}${KERNEL_FIT_BIN_EXT} "$deployDir/fitImage-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_LINK_NAME}"
+				fi
+			fi
+		fi
+	fi
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" -o "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
+	   [ -n "${UBOOT_DTB_BINARY}" ] ; then
+		# UBOOT_DTB_IMAGE is a real file, but we can't use
+		# ${UBOOT_DTB_IMAGE} since it contains ${PV}, which is meant
+		# for u-boot, while we are in the kernel environment here.
+		install -m 0644 ${B}/u-boot-${MACHINE}*.dtb "$deployDir/"
+	fi
+	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${UBOOT_BINARY}" -a -n "${SPL_DTB_BINARY}" ] ; then
+		# If we're also creating and/or signing the u-boot fit, now we need to
+		# deploy it and its .its file, as well as u-boot-spl.dtb
+		install -m 0644 ${B}/u-boot-spl-${MACHINE}*.dtb "$deployDir/"
+		bbnote "Copying u-boot-fitImage file..."
+		install -m 0644 ${B}/u-boot-fitImage-* "$deployDir/"
+		bbnote "Copying u-boot-its file..."
+		install -m 0644 ${B}/u-boot-its-* "$deployDir/"
+	fi
+}
+
+# The function below performs the following in case of initramfs bundles:
+# - Removes do_assemble_fitimage. FIT generation is done through
+#   do_assemble_fitimage_initramfs. do_assemble_fitimage is not needed
+#   and should not be part of the tasks to be executed.
+# - Since do_kernel_generate_rsa_keys is inserted by default
+#   between do_compile and do_assemble_fitimage, this is
+#   not suitable in case of initramfs bundles.  do_kernel_generate_rsa_keys
+#   should be between do_bundle_initramfs and do_assemble_fitimage_initramfs.
+python () {
+    if d.getVar('INITRAMFS_IMAGE_BUNDLE') == "1":
+        bb.build.deltask('do_assemble_fitimage', d)
+        bb.build.deltask('kernel_generate_rsa_keys', d)
+        bb.build.addtask('kernel_generate_rsa_keys', 'do_assemble_fitimage_initramfs', 'do_bundle_initramfs', d)
+}
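For context, this class only acts when fitImage appears in KERNEL_IMAGETYPES and is normally pulled in via KERNEL_CLASSES from kernel.bbclass. A minimal signed-fitImage sketch, with placeholder key directory and key names:

    KERNEL_CLASSES += "kernel-fitimage"
    KERNEL_IMAGETYPES += "fitImage"
    UBOOT_SIGN_ENABLE = "1"
    UBOOT_SIGN_KEYDIR = "/path/to/keys"
    UBOOT_SIGN_KEYNAME = "dev"
    UBOOT_SIGN_IMG_KEYNAME = "dev2"
    FIT_SIGN_INDIVIDUAL = "1"
    FIT_GENERATE_KEYS = "1"

Note that UBOOT_SIGN_IMG_KEYNAME must differ from UBOOT_SIGN_KEYNAME, as enforced by the bbfatal check in fitimage_assemble above.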
diff --git a/poky/meta/classes-recipe/kernel-grub.bbclass b/poky/meta/classes-recipe/kernel-grub.bbclass
new file mode 100644
index 0000000..2325e63
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel-grub.bbclass
@@ -0,0 +1,111 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# While installing an rpm to update the kernel on a deployed target, it will
+# update the boot area and the boot menu with the new kernel as the priority,
+# but still allow you to fall back to the original kernel.
+#
+# - In kernel-image's preinstall scriptlet, it backs up the original kernel to
+#   avoid a probable conflict with the new one.
+#
+# - In kernel-image's postinstall scriptlet, it modifies grub's config file to
+#   set the new kernel as the boot priority.
+#
+
+python __anonymous () {
+    import re
+
+    preinst = '''
+	# Parsing confliction
+	[ -f "$D/boot/grub/menu.list" ] && grubcfg="$D/boot/grub/menu.list"
+	[ -f "$D/boot/grub/grub.cfg" ] && grubcfg="$D/boot/grub/grub.cfg"
+	if [ -n "$grubcfg" ]; then
+		# Dereference the symlink to avoid a conflict with the new kernel name.
+		if grep -q "/KERNEL_IMAGETYPE \+root=" $grubcfg; then
+			if [ -L "$D/boot/KERNEL_IMAGETYPE" ]; then
+				kimage=`realpath $D/boot/KERNEL_IMAGETYPE 2>/dev/null`
+				if [ -f "$D$kimage" ]; then
+					sed -i "s:KERNEL_IMAGETYPE \+root=:${kimage##*/} root=:" $grubcfg
+				fi
+			fi
+		fi
+
+		# Rename old kernel if it conflicts with new kernel name.
+		if grep -q "/KERNEL_IMAGETYPE-${KERNEL_VERSION} \+root=" $grubcfg; then
+			if [ -f "$D/boot/KERNEL_IMAGETYPE-${KERNEL_VERSION}" ]; then
+				timestamp=`date +%s`
+				kimage="$D/boot/KERNEL_IMAGETYPE-${KERNEL_VERSION}-$timestamp-back"
+				sed -i "s:KERNEL_IMAGETYPE-${KERNEL_VERSION} \+root=:${kimage##*/} root=:" $grubcfg
+				mv "$D/boot/KERNEL_IMAGETYPE-${KERNEL_VERSION}" "$kimage"
+			fi
+		fi
+	fi
+'''
+
+    postinst = '''
+	get_new_grub_cfg() {
+		grubcfg="$1"
+		old_image="$2"
+		title="Update KERNEL_IMAGETYPE-${KERNEL_VERSION}-${PV}"
+		if [ "${grubcfg##*/}" = "grub.cfg" ]; then
+			rootfs=`grep " *linux \+[^ ]\+ \+root=" $grubcfg -m 1 | \
+				 sed "s#${old_image}#${old_image%/*}/KERNEL_IMAGETYPE-${KERNEL_VERSION}#"`
+
+			echo "menuentry \"$title\" {"
+			echo "    set root=(hd0,1)"
+			echo "$rootfs"
+			echo "}"
+		elif [ "${grubcfg##*/}" = "menu.list" ]; then
+			rootfs=`grep "kernel \+[^ ]\+ \+root=" $grubcfg -m 1 | \
+				 sed "s#${old_image}#${old_image%/*}/KERNEL_IMAGETYPE-${KERNEL_VERSION}#"`
+
+			echo "default 0"
+			echo "timeout 30"
+			echo "title $title"
+			echo "root  (hd0,0)"
+			echo "$rootfs"
+		fi
+	}
+
+	get_old_grub_cfg() {
+		grubcfg="$1"
+		if [ "${grubcfg##*/}" = "grub.cfg" ]; then
+			cat "$grubcfg"
+		elif [ "${grubcfg##*/}" = "menu.list" ]; then
+			sed -e '/^default/d' -e '/^timeout/d' "$grubcfg"
+		fi
+	}
+
+	if [ -f "$D/boot/grub/grub.cfg" ]; then
+		grubcfg="$D/boot/grub/grub.cfg"
+		old_image=`grep ' *linux \+[^ ]\+ \+root=' -m 1 "$grubcfg" | awk '{print $2}'`
+	elif [ -f "$D/boot/grub/menu.list" ]; then
+		grubcfg="$D/boot/grub/menu.list"
+		old_image=`grep '^kernel \+[^ ]\+ \+root=' -m 1 "$grubcfg" | awk '{print $2}'`
+	fi
+
+	# Don't update grubcfg on first install, when the old bzImage doesn't exist yet.
+	if [ -f "$D/boot/${old_image##*/}" ]; then
+		grubcfgtmp="$grubcfg.tmp"
+		get_new_grub_cfg "$grubcfg" "$old_image"  > $grubcfgtmp
+		get_old_grub_cfg "$grubcfg" >> $grubcfgtmp
+		mv $grubcfgtmp $grubcfg
+		echo "Caution! Updating the kernel may affect kernel modules!"
+	fi
+'''
+
+    imagetypes = d.getVar('KERNEL_IMAGETYPES')
+    imagetypes = re.sub(r'\.gz$', '', imagetypes)
+
+    for type in imagetypes.split():
+        typelower = type.lower()
+        preinst_append = preinst.replace('KERNEL_IMAGETYPE', type)
+        postinst_prepend = postinst.replace('KERNEL_IMAGETYPE', type)
+        d.setVar('pkg_preinst:kernel-image-' + typelower + ':append', preinst_append)
+        d.setVar('pkg_postinst:kernel-image-' + typelower + ':prepend', postinst_prepend)
+}
+
diff --git a/poky/meta/classes-recipe/kernel-module-split.bbclass b/poky/meta/classes-recipe/kernel-module-split.bbclass
new file mode 100644
index 0000000..1b4c864
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel-module-split.bbclass
@@ -0,0 +1,197 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+pkg_postinst:modules () {
+if [ -z "$D" ]; then
+	depmod -a ${KERNEL_VERSION}
+else
+	# image.bbclass will call depmodwrapper after everything is installed,
+	# no need to do it here as well
+	:
+fi
+}
+
+pkg_postrm:modules () {
+if [ -z "$D" ]; then
+	depmod -a ${KERNEL_VERSION}
+else
+	depmodwrapper -a -b $D ${KERNEL_VERSION}
+fi
+}
+
+autoload_postinst_fragment() {
+if [ x"$D" = "x" ]; then
+	modprobe %s || true
+fi
+}
+
+PACKAGE_WRITE_DEPS += "kmod-native depmodwrapper-cross"
+
+do_install:append() {
+	install -d ${D}${sysconfdir}/modules-load.d/ ${D}${sysconfdir}/modprobe.d/
+}
+
+KERNEL_SPLIT_MODULES ?= "1"
+PACKAGESPLITFUNCS:prepend = "split_kernel_module_packages "
+
+KERNEL_MODULES_META_PACKAGE ?= "${@ d.getVar("KERNEL_PACKAGE_NAME") or "kernel" }-modules"
+
+KERNEL_MODULE_PACKAGE_PREFIX ?= ""
+KERNEL_MODULE_PACKAGE_SUFFIX ?= "-${KERNEL_VERSION}"
+KERNEL_MODULE_PROVIDE_VIRTUAL ?= "1"
+
+python split_kernel_module_packages () {
+    import re
+
+    modinfoexp = re.compile("([^=]+)=(.*)")
+
+    def extract_modinfo(file):
+        import tempfile, subprocess
+        tempfile.tempdir = d.getVar("WORKDIR")
+        compressed = re.match( r'.*\.(gz|xz|zst)$', file)
+        tf = tempfile.mkstemp()
+        tmpfile = tf[1]
+        if compressed:
+            tmpkofile = tmpfile + ".ko"
+            if compressed.group(1) == 'gz':
+                cmd = "gunzip -dc %s > %s" % (file, tmpkofile)
+                subprocess.check_call(cmd, shell=True)
+            elif compressed.group(1) == 'xz':
+                cmd = "xz -dc %s > %s" % (file, tmpkofile)
+                subprocess.check_call(cmd, shell=True)
+            elif compressed.group(1) == 'zst':
+                cmd = "zstd -dc %s > %s" % (file, tmpkofile)
+                subprocess.check_call(cmd, shell=True)
+            else:
+                # raising a plain string is invalid in Python 3; report the error properly
+                bb.fatal("Cannot decompress '%s'" % file)
+            cmd = "%sobjcopy -j .modinfo -O binary %s %s" % (d.getVar("HOST_PREFIX") or "", tmpkofile, tmpfile)
+        else:
+            cmd = "%sobjcopy -j .modinfo -O binary %s %s" % (d.getVar("HOST_PREFIX") or "", file, tmpfile)
+        subprocess.check_call(cmd, shell=True)
+        # errors='replace': Some old kernel versions contain invalid utf-8 characters in mod descriptions (like 0xf6, 'ö')
+        f = open(tmpfile, errors='replace')
+        l = f.read().split("\000")
+        f.close()
+        os.close(tf[0])
+        os.unlink(tmpfile)
+        if compressed:
+            os.unlink(tmpkofile)
+        vals = {}
+        for i in l:
+            m = modinfoexp.match(i)
+            if not m:
+                continue
+            vals[m.group(1)] = m.group(2)
+        return vals
+
+    def frob_metadata(file, pkg, pattern, format, basename):
+        vals = extract_modinfo(file)
+
+        dvar = d.getVar('PKGD')
+
+        # If autoloading is requested, output /etc/modules-load.d/<name>.conf and append
+        # appropriate modprobe commands to the postinst
+        autoloadlist = (d.getVar("KERNEL_MODULE_AUTOLOAD") or "").split()
+        autoload = d.getVar('module_autoload_%s' % basename)
+        if autoload and autoload == basename:
+            bb.warn("module_autoload_%s was replaced by KERNEL_MODULE_AUTOLOAD for cases where basename == module name, please drop it" % basename)
+        if autoload and basename not in autoloadlist:
+            bb.warn("module_autoload_%s is defined but '%s' isn't included in KERNEL_MODULE_AUTOLOAD, please add it there" % (basename, basename))
+        if basename in autoloadlist:
+            name = '%s/etc/modules-load.d/%s.conf' % (dvar, basename)
+            f = open(name, 'w')
+            if autoload:
+                for m in autoload.split():
+                    f.write('%s\n' % m)
+            else:
+                f.write('%s\n' % basename)
+            f.close()
+            postinst = d.getVar('pkg_postinst:%s' % pkg)
+            if not postinst:
+                bb.fatal("pkg_postinst:%s not defined" % pkg)
+            postinst += d.getVar('autoload_postinst_fragment') % (autoload or basename)
+            d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+        # Write out any modconf fragment
+        modconflist = (d.getVar("KERNEL_MODULE_PROBECONF") or "").split()
+        modconf = d.getVar('module_conf_%s' % basename)
+        if modconf and basename in modconflist:
+            name = '%s/etc/modprobe.d/%s.conf' % (dvar, basename)
+            f = open(name, 'w')
+            f.write("%s\n" % modconf)
+            f.close()
+        elif modconf:
+            bb.error("Please ensure module %s is listed in KERNEL_MODULE_PROBECONF since module_conf_%s is set" % (basename, basename))
+
+        files = d.getVar('FILES:%s' % pkg)
+        files = "%s /etc/modules-load.d/%s.conf /etc/modprobe.d/%s.conf" % (files, basename, basename)
+        d.setVar('FILES:%s' % pkg, files)
+
+        conffiles = d.getVar('CONFFILES:%s' % pkg)
+        conffiles = "%s /etc/modules-load.d/%s.conf /etc/modprobe.d/%s.conf" % (conffiles, basename, basename)
+        d.setVar('CONFFILES:%s' % pkg, conffiles)
+
+        if "description" in vals:
+            old_desc = d.getVar('DESCRIPTION:' + pkg) or ""
+            d.setVar('DESCRIPTION:' + pkg, old_desc + "; " + vals["description"])
+
+        rdepends = bb.utils.explode_dep_versions2(d.getVar('RDEPENDS:' + pkg) or "")
+        modinfo_deps = []
+        if "depends" in vals and vals["depends"] != "":
+            for dep in vals["depends"].split(","):
+                on = legitimize_package_name(dep)
+                dependency_pkg = format % on
+                modinfo_deps.append(dependency_pkg)
+        for dep in modinfo_deps:
+            if not dep in rdepends:
+                rdepends[dep] = []
+        d.setVar('RDEPENDS:' + pkg, bb.utils.join_deps(rdepends, commasep=False))
+
+        # Avoid automatic -dev recommendations for modules ending with -dev.
+        d.setVarFlag('RRECOMMENDS:' + pkg, 'nodeprrecs', 1)
+
+        # Provide virtual package without postfix
+        providevirt = d.getVar('KERNEL_MODULE_PROVIDE_VIRTUAL')
+        if providevirt == "1":
+            postfix = format.split('%s')[1]
+            d.setVar('RPROVIDES:' + pkg, pkg.replace(postfix, ''))
+
+    kernel_package_name = d.getVar("KERNEL_PACKAGE_NAME") or "kernel"
+    kernel_version = d.getVar("KERNEL_VERSION")
+
+    metapkg = d.getVar('KERNEL_MODULES_META_PACKAGE')
+    splitmods = d.getVar('KERNEL_SPLIT_MODULES')
+    postinst = d.getVar('pkg_postinst:modules')
+    postrm = d.getVar('pkg_postrm:modules')
+
+    if splitmods != '1':
+        etcdir = d.getVar('sysconfdir')
+        d.appendVar('FILES:' + metapkg, '%s/modules-load.d/ %s/modprobe.d/ %s/modules/' % (etcdir, etcdir, d.getVar("nonarch_base_libdir")))
+        d.appendVar('pkg_postinst:%s' % metapkg, postinst)
+        d.prependVar('pkg_postrm:%s' % metapkg, postrm)
+        return
+
+    module_regex = r'^(.*)\.k?o(?:\.(gz|xz|zst))?$'
+
+    module_pattern_prefix = d.getVar('KERNEL_MODULE_PACKAGE_PREFIX')
+    module_pattern_suffix = d.getVar('KERNEL_MODULE_PACKAGE_SUFFIX')
+    module_pattern = module_pattern_prefix + kernel_package_name + '-module-%s' + module_pattern_suffix
+
+    modules = do_split_packages(d, root='${nonarch_base_libdir}/modules', file_regex=module_regex, output_pattern=module_pattern, description='%s kernel module', postinst=postinst, postrm=postrm, recursive=True, hook=frob_metadata, extra_depends='%s-%s' % (kernel_package_name, kernel_version))
+    if modules:
+        d.appendVar('RDEPENDS:' + metapkg, ' '+' '.join(modules))
+
+    # If modules-load.d and modprobe.d are empty at this point, remove them to
+    # avoid warnings. os.rmdir only raises an OSError if the (empty)
+    # directory cannot be removed.
+    dvar = d.getVar('PKGD')
+    for dir in ["%s/etc/modprobe.d" % (dvar), "%s/etc/modules-load.d" % (dvar), "%s/etc" % (dvar)]:
+        if len(os.listdir(dir)) == 0:
+            os.rmdir(dir)
+}
+
+do_package[vardeps] += '${@" ".join(map(lambda s: "module_conf_" + s, (d.getVar("KERNEL_MODULE_PROBECONF") or "").split()))}'
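
The autoload and modprobe handling in the class above is driven entirely by variables set in a machine, distro, or kernel recipe configuration. As a minimal sketch of how those hooks are typically exercised (the module name and option below are illustrative placeholders, not part of this change), a configuration fragment might look like:

    # Generate /etc/modules-load.d/example_mod.conf so the module loads at boot
    KERNEL_MODULE_AUTOLOAD += "example_mod"

    # Ship a matching /etc/modprobe.d/example_mod.conf; the module must also be
    # listed in KERNEL_MODULE_PROBECONF or the class reports an error
    KERNEL_MODULE_PROBECONF += "example_mod"
    module_conf_example_mod = "options example_mod debug=1"

    # Optionally keep every module in the single meta package instead of
    # splitting out one package per .ko
    KERNEL_SPLIT_MODULES = "0"
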
diff --git a/poky/meta/classes-recipe/kernel-uboot.bbclass b/poky/meta/classes-recipe/kernel-uboot.bbclass
new file mode 100644
index 0000000..4aab026
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel-uboot.bbclass
@@ -0,0 +1,49 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# fitImage kernel compression algorithm
+FIT_KERNEL_COMP_ALG ?= "gzip"
+FIT_KERNEL_COMP_ALG_EXTENSION ?= ".gz"
+
+# Kernel image type passed to mkimage (e.g. kernel, kernel_noload, ...)
+UBOOT_MKIMAGE_KERNEL_TYPE ?= "kernel"
+
+uboot_prep_kimage() {
+	if [ -e arch/${ARCH}/boot/compressed/vmlinux ]; then
+		vmlinux_path="arch/${ARCH}/boot/compressed/vmlinux"
+		linux_suffix=""
+		linux_comp="none"
+	elif [ -e arch/${ARCH}/boot/vmlinuz.bin ]; then
+		rm -f linux.bin
+		cp -l arch/${ARCH}/boot/vmlinuz.bin linux.bin
+		vmlinux_path=""
+		linux_suffix=""
+		linux_comp="none"
+	else
+		vmlinux_path="vmlinux"
+		# Use vmlinux.initramfs for linux.bin when INITRAMFS_IMAGE_BUNDLE set
+		# As per the implementation in kernel.bbclass.
+		# See do_bundle_initramfs function
+		if [ "${INITRAMFS_IMAGE_BUNDLE}" = "1" ] && [ -e vmlinux.initramfs ]; then
+			vmlinux_path="vmlinux.initramfs"
+		fi
+		linux_suffix="${FIT_KERNEL_COMP_ALG_EXTENSION}"
+		linux_comp="${FIT_KERNEL_COMP_ALG}"
+	fi
+
+	[ -n "${vmlinux_path}" ] && ${OBJCOPY} -O binary -R .note -R .comment -S "${vmlinux_path}" linux.bin
+
+	if [ "${linux_comp}" != "none" ] ; then
+		if [ "${linux_comp}" = "gzip" ] ; then
+			gzip -9 linux.bin
+		elif [ "${linux_comp}" = "lzo" ] ; then
+			lzop -9 linux.bin
+		fi
+		mv -f "linux.bin${linux_suffix}" linux.bin
+	fi
+
+	echo "${linux_comp}"
+}
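
uboot_prep_kimage() strips vmlinux down to a raw linux.bin and compresses it according to FIT_KERNEL_COMP_ALG, echoing the chosen algorithm for the caller. A minimal sketch of how a machine configuration could override the gzip default (values are illustrative) is:

    # Switch the fitImage/uImage kernel payload from gzip to lzo
    FIT_KERNEL_COMP_ALG = "lzo"
    FIT_KERNEL_COMP_ALG_EXTENSION = ".lzo"

    # Pass a different image type to mkimage, e.g. for a no-load kernel
    UBOOT_MKIMAGE_KERNEL_TYPE = "kernel_noload"
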
diff --git a/poky/meta/classes-recipe/kernel-uimage.bbclass b/poky/meta/classes-recipe/kernel-uimage.bbclass
new file mode 100644
index 0000000..1a599e6
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel-uimage.bbclass
@@ -0,0 +1,41 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit kernel-uboot
+
+python __anonymous () {
+    if "uImage" in d.getVar('KERNEL_IMAGETYPES'):
+        depends = d.getVar("DEPENDS")
+        depends = "%s u-boot-tools-native" % depends
+        d.setVar("DEPENDS", depends)
+
+        # Override the KERNEL_IMAGETYPE_FOR_MAKE variable, which is internal
+        # to kernel.bbclass. We override it here because uImage should be
+        # built by the kernel build system only if KEEPUIMAGE == yes.
+        # Otherwise, we pack the compressed vmlinux into the uImage
+        # ourselves.
+        if d.getVar("KEEPUIMAGE") != 'yes':
+            typeformake = d.getVar("KERNEL_IMAGETYPE_FOR_MAKE") or ""
+            if "uImage" in typeformake.split():
+                d.setVar('KERNEL_IMAGETYPE_FOR_MAKE', typeformake.replace('uImage', 'vmlinux'))
+
+            # Enable building of uImage with mkimage
+            bb.build.addtask('do_uboot_mkimage', 'do_install', 'do_kernel_link_images', d)
+}
+
+do_uboot_mkimage[dirs] += "${B}"
+do_uboot_mkimage() {
+	uboot_prep_kimage
+
+	ENTRYPOINT=${UBOOT_ENTRYPOINT}
+	if [ -n "${UBOOT_ENTRYSYMBOL}" ]; then
+		ENTRYPOINT=`${HOST_PREFIX}nm ${B}/vmlinux | \
+			awk '$3=="${UBOOT_ENTRYSYMBOL}" {print "0x"$1;exit}'`
+	fi
+
+	uboot-mkimage -A ${UBOOT_ARCH} -O linux -T ${UBOOT_MKIMAGE_KERNEL_TYPE} -C "${linux_comp}" -a ${UBOOT_LOADADDRESS} -e $ENTRYPOINT -n "${DISTRO_NAME}/${PV}/${MACHINE}" -d linux.bin ${B}/arch/${ARCH}/boot/uImage
+	rm -f linux.bin
+}
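
Whether the class packs the compressed vmlinux itself or defers to the kernel's own uImage target is controlled by KEEPUIMAGE, together with the U-Boot address variables defined in kernel.bbclass further below. A minimal sketch of the relevant machine settings (addresses are illustrative) is:

    KERNEL_IMAGETYPES += "uImage"

    # Let the kernel build system produce uImage directly instead of the
    # do_uboot_mkimage task above
    KEEPUIMAGE = "yes"

    # Used by do_uboot_mkimage when KEEPUIMAGE is not "yes"
    UBOOT_ENTRYPOINT = "80008000"
    UBOOT_LOADADDRESS = "80008000"
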
diff --git a/poky/meta/classes-recipe/kernel-yocto.bbclass b/poky/meta/classes-recipe/kernel-yocto.bbclass
new file mode 100644
index 0000000..8eda0dc
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel-yocto.bbclass
@@ -0,0 +1,732 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# remove tasks that modify the source tree in case externalsrc is inherited
+SRCTREECOVEREDTASKS += "do_validate_branches do_kernel_configcheck do_kernel_checkout do_fetch do_unpack do_patch"
+PATCH_GIT_USER_EMAIL ?= "kernel-yocto@oe"
+PATCH_GIT_USER_NAME ?= "OpenEmbedded"
+
+# The distro or local.conf should set this, but if nobody cares...
+LINUX_KERNEL_TYPE ??= "standard"
+
+# KMETA ?= ""
+KBRANCH ?= "master"
+KMACHINE ?= "${MACHINE}"
+SRCREV_FORMAT ?= "meta_machine"
+
+# LEVELS:
+#   0: no reporting
+#   1: report options that are specified, but not in the final config
+#   2: report options that are not hardware related, but set by a BSP
+KCONF_AUDIT_LEVEL ?= "1"
+KCONF_BSP_AUDIT_LEVEL ?= "0"
+KMETA_AUDIT ?= "yes"
+KMETA_AUDIT_WERROR ?= ""
+
+# returns local (absolute) path names for all valid patches in the
+# src_uri
+def find_patches(d,subdir):
+    patches = src_patches(d)
+    patch_list=[]
+    for p in patches:
+        _, _, local, _, _, parm = bb.fetch.decodeurl(p)
+        # if patchdir has been passed, we won't be able to apply it so skip
+        # the patch for now, and special processing happens later
+        patchdir = ''
+        if "patchdir" in parm:
+            patchdir = parm["patchdir"]
+        if subdir:
+            if subdir == patchdir:
+                patch_list.append(local)
+        else:
+            # skip the patch if a patchdir was supplied, it won't be handled
+            # properly
+            if not patchdir:
+                patch_list.append(local)
+
+    return patch_list
+
+# returns all the elements from the src uri that are .scc files
+def find_sccs(d):
+    sources=src_patches(d, True)
+    sources_list=[]
+    for s in sources:
+        base, ext = os.path.splitext(os.path.basename(s))
+        if ext and ext in [".scc", ".cfg"]:
+            sources_list.append(s)
+        elif base and 'defconfig' in base:
+            sources_list.append(s)
+
+    return sources_list
+
+# check the SRC_URI for "kmeta" typed git repositories. Return the name of
+# the repository as it will be found in WORKDIR
+def find_kernel_feature_dirs(d):
+    feature_dirs=[]
+    fetch = bb.fetch2.Fetch([], d)
+    for url in fetch.urls:
+        urldata = fetch.ud[url]
+        parm = urldata.parm
+        type=""
+        if "type" in parm:
+            type = parm["type"]
+        if "destsuffix" in parm:
+            destdir = parm["destsuffix"]
+            if type == "kmeta":
+                feature_dirs.append(destdir)
+
+    return feature_dirs
+
+# find the master/machine source branch. In the same way that the fetcher processes
+# git repositories in the SRC_URI, we take the first repo found, first branch.
+def get_machine_branch(d, default):
+    fetch = bb.fetch2.Fetch([], d)
+    for url in fetch.urls:
+        urldata = fetch.ud[url]
+        parm = urldata.parm
+        if "branch" in parm:
+            branches = urldata.parm.get("branch").split(',')
+            btype = urldata.parm.get("type")
+            if btype != "kmeta":
+                return branches[0]
+
+    return default
+
+# returns a list of all directories that are on FILESEXTRAPATHS (and
+# hence available to the build) that contain .scc or .cfg files
+def get_dirs_with_fragments(d):
+    extrapaths = []
+    extrafiles = []
+    extrapathsvalue = (d.getVar("FILESEXTRAPATHS") or "")
+    # Remove default flag which was used for checking
+    extrapathsvalue = extrapathsvalue.replace("__default:", "")
+    extrapaths = extrapathsvalue.split(":")
+    for path in extrapaths:
+        if path + ":True" not in extrafiles:
+            extrafiles.append(path + ":" + str(os.path.exists(path)))
+
+    return " ".join(extrafiles)
+
+do_kernel_metadata() {
+	set +e
+
+	if [ -n "$1" ]; then
+		mode="$1"
+	else
+		mode="patch"
+	fi
+
+	cd ${S}
+	export KMETA=${KMETA}
+
+	bbnote "do_kernel_metadata: for summary/debug, set KCONF_AUDIT_LEVEL > 0"
+
+	# if kernel tools are available in-tree, they are preferred
+	# and are placed on the path before any external tools, unless
+	# the external tools flag is set, in which case we do nothing.
+	if [ -f "${S}/scripts/util/configme" ]; then
+		if [ -z "${EXTERNAL_KERNEL_TOOLS}" ]; then
+			PATH=${S}/scripts/util:${PATH}
+		fi
+	fi
+
+	# In a similar manner to the kernel itself:
+	#
+	#   defconfig: $(obj)/conf
+	#   ifeq ($(KBUILD_DEFCONFIG),)
+	#	$< --defconfig $(Kconfig)
+	#   else
+	#	@echo "*** Default configuration is based on '$(KBUILD_DEFCONFIG)'"
+	#	$(Q)$< --defconfig=arch/$(SRCARCH)/configs/$(KBUILD_DEFCONFIG) $(Kconfig)
+	#   endif
+	#
+	# If a defconfig is specified via the KBUILD_DEFCONFIG variable, we copy it
+	# from the source tree, into a common location and normalized "defconfig" name,
+	# where the rest of the process will include and incorporate it into the build
+	#
+	# If the fetcher has already placed a defconfig in WORKDIR (from the SRC_URI),
+	# we don't overwrite it, but instead warn the user that SRC_URI defconfigs take
+	# precedence.
+	#
+	if [ -n "${KBUILD_DEFCONFIG}" ]; then
+		if [ -f "${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG}" ]; then
+			if [ -f "${WORKDIR}/defconfig" ]; then
+				# If the two defconfigs are different, warn that we overwrote the
+				# one already placed in WORKDIR
+				cmp "${WORKDIR}/defconfig" "${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG}"
+				if [ $? -ne 0 ]; then
+					bbdebug 1 "detected SRC_URI or unpatched defconfig in WORKDIR. ${KBUILD_DEFCONFIG} copied over it"
+				fi
+				cp -f ${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG} ${WORKDIR}/defconfig
+			else
+				cp -f ${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG} ${WORKDIR}/defconfig
+			fi
+			in_tree_defconfig="${WORKDIR}/defconfig"
+		else
+			bbfatal "A KBUILD_DEFCONFIG '${KBUILD_DEFCONFIG}' was specified, but not present in the source tree (${S}/arch/${ARCH}/configs/)"
+		fi
+	fi
+
+	if [ "$mode" = "patch" ]; then
+		# was anyone trying to patch the kernel meta data? We need to do
+		# this here, since the scc commands migrate the .cfg fragments to the
+		# kernel source tree, where they'll be used later.
+		check_git_config
+		patches="${@" ".join(find_patches(d,'kernel-meta'))}"
+		for p in $patches; do
+		    (
+			cd ${WORKDIR}/kernel-meta
+			git am -s $p
+		    )
+		done
+	fi
+
+	sccs_from_src_uri="${@" ".join(find_sccs(d))}"
+	patches="${@" ".join(find_patches(d,''))}"
+	feat_dirs="${@" ".join(find_kernel_feature_dirs(d))}"
+
+	# a quick check to make sure we don't have duplicate defconfigs. If
+	# there's a defconfig in the SRC_URI, did we also have one from the
+	# KBUILD_DEFCONFIG processing above?
+	src_uri_defconfig=$(echo $sccs_from_src_uri | awk '(match($0, "defconfig") != 0) { print $0 }' RS=' ')
+	# drop any defconfigs from the src_uri variable; we captured one just above here if it existed
+	sccs_from_src_uri=$(echo $sccs_from_src_uri | awk '(match($0, "defconfig") == 0) { print $0 }' RS=' ')
+
+	if [ -n "$in_tree_defconfig" ]; then
+		sccs_defconfig=$in_tree_defconfig
+		if [ -n "$src_uri_defconfig" ]; then
+			bbwarn "[NOTE]: defconfig was supplied both via KBUILD_DEFCONFIG and SRC_URI. Dropping SRC_URI entry $src_uri_defconfig"
+		fi
+	else
+		# if we didn't have an in-tree one, make our defconfig the one
+		# from the src_uri. Note: there may not have been one from the
+		# src_uri, so this can be an empty variable.
+		sccs_defconfig=$src_uri_defconfig
+	fi
+	sccs="$sccs_from_src_uri"
+
+	# check for feature directories/repos/branches that were part of the
+	# SRC_URI. If they were supplied, we convert them into include directives
+	# for the update part of the process
+	for f in ${feat_dirs}; do
+		if [ -d "${WORKDIR}/$f/meta" ]; then
+			includes="$includes -I${WORKDIR}/$f/kernel-meta"
+		elif [ -d "${WORKDIR}/../oe-local-files/$f" ]; then
+			includes="$includes -I${WORKDIR}/../oe-local-files/$f"
+		elif [ -d "${WORKDIR}/$f" ]; then
+			includes="$includes -I${WORKDIR}/$f"
+		fi
+	done
+	for s in ${sccs} ${patches}; do
+		sdir=$(dirname $s)
+		includes="$includes -I${sdir}"
+		# if a SRC_URI passed patch or .scc has a subdir of "kernel-meta",
+		# then we add it to the search path
+		if [ -d "${sdir}/kernel-meta" ]; then
+			includes="$includes -I${sdir}/kernel-meta"
+		fi
+	done
+
+	# expand kernel features into their full path equivalents
+	bsp_definition=$(spp ${includes} --find -DKMACHINE=${KMACHINE} -DKTYPE=${LINUX_KERNEL_TYPE})
+	if [ -z "$bsp_definition" ]; then
+		if [ -z "$sccs_defconfig" ]; then
+			bbfatal_log "Could not locate BSP definition for ${KMACHINE}/${LINUX_KERNEL_TYPE} and no defconfig was provided"
+		fi
+	else
+		# if the bsp definition has "define KMETA_EXTERNAL_BSP t",
+		# then we need to set a flag that will instruct the next
+		# steps to use the BSP as both configuration and patches.
+		grep -q KMETA_EXTERNAL_BSP $bsp_definition
+		if [ $? -eq 0 ]; then
+		    KMETA_EXTERNAL_BSPS="t"
+		fi
+	fi
+	meta_dir=$(kgit --meta)
+
+	KERNEL_FEATURES_FINAL=""
+	if [ -n "${KERNEL_FEATURES}" ]; then
+		for feature in ${KERNEL_FEATURES}; do
+			feature_found=f
+			for d in $includes; do
+				path_to_check=$(echo $d | sed 's/^-I//')
+				if [ "$feature_found" = "f" ] && [ -e "$path_to_check/$feature" ]; then
+				    feature_found=t
+				fi
+			done
+			if [ "$feature_found" = "f" ]; then
+				if [ -n "${KERNEL_DANGLING_FEATURES_WARN_ONLY}" ]; then
+				    bbwarn "Feature '$feature' not found, but KERNEL_DANGLING_FEATURES_WARN_ONLY is set"
+				    bbwarn "This may cause runtime issues, dropping feature and allowing configuration to continue"
+				else
+				    bberror "Feature '$feature' not found, this will cause configuration failures."
+				    bberror "Check the SRC_URI for meta-data repositories or directories that may be missing"
+				    bbfatal_log "Set KERNEL_DANGLING_FEATURES_WARN_ONLY to ignore this issue"
+				fi
+			else
+				KERNEL_FEATURES_FINAL="$KERNEL_FEATURES_FINAL $feature"
+			fi
+		done
+	fi
+
+	if [ "$mode" = "config" ]; then
+		# run1: pull all the configuration fragments, no matter where they come from
+		elements="`echo -n ${bsp_definition} $sccs_defconfig ${sccs} ${patches} $KERNEL_FEATURES_FINAL`"
+		if [ -n "${elements}" ]; then
+			echo "${bsp_definition}" > ${S}/${meta_dir}/bsp_definition
+			scc --force -o ${S}/${meta_dir}:cfg,merge,meta ${includes} $sccs_defconfig $bsp_definition $sccs $patches $KERNEL_FEATURES_FINAL
+			if [ $? -ne 0 ]; then
+				bbfatal_log "Could not generate configuration queue for ${KMACHINE}."
+			fi
+		fi
+	fi
+
+	# if KMETA_EXTERNAL_BSPS has been set, or it has been detected from
+	# the bsp definition, then we inject the bsp_definition into the
+	# patch phase below. We'll piggyback on the sccs variable.
+	if [ -n "${KMETA_EXTERNAL_BSPS}" ]; then
+		sccs="${bsp_definition} ${sccs}"
+	fi
+
+	if [ "$mode" = "patch" ]; then
+		# run2: only generate patches for elements that have been passed on the SRC_URI
+		elements="`echo -n ${sccs} ${patches} $KERNEL_FEATURES_FINAL`"
+		if [ -n "${elements}" ]; then
+			scc --force -o ${S}/${meta_dir}:patch --cmds patch ${includes} ${sccs} ${patches} $KERNEL_FEATURES_FINAL
+			if [ $? -ne 0 ]; then
+				bbfatal_log "Could not generate configuration queue for ${KMACHINE}."
+			fi
+		fi
+	fi
+
+	if [ ${KCONF_AUDIT_LEVEL} -gt 0 ]; then
+		bbnote "kernel meta data summary for ${KMACHINE} (${LINUX_KERNEL_TYPE}):"
+		bbnote "======================================================================"
+		if [ -n "${KMETA_EXTERNAL_BSPS}" ]; then
+			bbnote "Non kernel-cache (external) bsp"
+		fi
+		bbnote "BSP entry point / definition: $bsp_definition"
+		if [ -n "$in_tree_defconfig" ]; then
+			bbnote "KBUILD_DEFCONFIG: ${KBUILD_DEFCONFIG}"
+		fi
+		bbnote "Fragments from SRC_URI: $sccs_from_src_uri"
+		bbnote "KERNEL_FEATURES: $KERNEL_FEATURES_FINAL"
+		bbnote "Final scc/cfg list: $sccs_defconfig $bsp_definition $sccs $KERNEL_FEATURES_FINAL"
+	fi
+
+	set -e
+}
+
+do_patch() {
+	set +e
+	cd ${S}
+
+	check_git_config
+	meta_dir=$(kgit --meta)
+	(cd ${meta_dir}; ln -sf patch.queue series)
+	if [ -f "${meta_dir}/series" ]; then
+		kgit_extra_args=""
+		if [ "${KERNEL_DEBUG_TIMESTAMPS}" != "1" ]; then
+		    kgit_extra_args="--commit-sha author"
+		fi
+		kgit-s2q --gen -v $kgit_extra_args --patches .kernel-meta/
+		if [ $? -ne 0 ]; then
+			bberror "Could not apply patches for ${KMACHINE}."
+			bbfatal_log "Patch failures can be resolved in the linux source directory ${S}"
+		fi
+	fi
+
+	if [ -f "${meta_dir}/merge.queue" ]; then
+		# we need to merge all these branches
+		for b in $(cat ${meta_dir}/merge.queue); do
+			git show-ref --verify --quiet refs/heads/${b}
+			if [ $? -eq 0 ]; then
+				bbnote "Merging branch ${b}"
+				git merge -q --no-ff -m "Merge branch ${b}" ${b}
+			else
+				bbfatal "branch ${b} does not exist, cannot merge"
+			fi
+		done
+	fi
+
+	set -e
+}
+
+do_kernel_checkout() {
+	set +e
+
+	source_dir=`echo ${S} | sed 's%/$%%'`
+	source_workdir="${WORKDIR}/git"
+	if [ -d "${WORKDIR}/git/" ]; then
+		# case: git repository
+		# if S is WORKDIR/git, then we shouldn't be moving or deleting the tree.
+		if [ "${source_dir}" != "${source_workdir}" ]; then
+			if [ -d "${source_workdir}/.git" ]; then
+				# regular git repository with .git
+				rm -rf ${S}
+				mv ${WORKDIR}/git ${S}
+			else
+				# create source for bare cloned git repository
+				git clone ${WORKDIR}/git ${S}
+				rm -rf ${WORKDIR}/git
+			fi
+		fi
+		cd ${S}
+
+		# convert any remote branches to local tracking ones
+		for i in `git branch -a --no-color | grep remotes | grep -v HEAD`; do
+			b=`echo $i | cut -d' ' -f2 | sed 's%remotes/origin/%%'`;
+			git show-ref --quiet --verify -- "refs/heads/$b"
+			if [ $? -ne 0 ]; then
+				git branch $b $i > /dev/null
+			fi
+		done
+
+		# Create a working tree copy of the kernel by checking out a branch
+		machine_branch="${@ get_machine_branch(d, "${KBRANCH}" )}"
+
+		# checkout and clobber any unimportant files
+		git checkout -f ${machine_branch}
+	else
+		# case: we have no git repository at all.
+		# To support low bandwidth options for building the kernel, we'll just
+		# convert the tree to a git repo and let the rest of the process work unchanged.
+
+		# if ${S} hasn't been set to the proper subdirectory, a default of "linux" is
+		# used, but we can't initialize that empty directory. So check it and throw a
+		# clear error.
+
+		cd ${S}
+		if [ ! -f "Makefile" ]; then
+			bberror "S is not set to the linux source directory. Check "
+			bbfatal "the recipe and set S to the proper extracted subdirectory"
+		fi
+		rm -f .gitignore
+		git init
+		check_git_config
+		git add .
+		git commit -q -m "baseline commit: creating repo for ${PN}-${PV}"
+		git clean -d -f
+	fi
+
+	set -e
+}
+do_kernel_checkout[dirs] = "${S} ${WORKDIR}"
+
+addtask kernel_checkout before do_kernel_metadata after do_symlink_kernsrc
+addtask kernel_metadata after do_validate_branches do_unpack before do_patch
+do_kernel_metadata[depends] = "kern-tools-native:do_populate_sysroot"
+do_kernel_metadata[file-checksums] = " ${@get_dirs_with_fragments(d)}"
+do_validate_branches[depends] = "kern-tools-native:do_populate_sysroot"
+
+do_kernel_configme[depends] += "virtual/${TARGET_PREFIX}binutils:do_populate_sysroot"
+do_kernel_configme[depends] += "virtual/${TARGET_PREFIX}gcc:do_populate_sysroot"
+do_kernel_configme[depends] += "bc-native:do_populate_sysroot bison-native:do_populate_sysroot"
+do_kernel_configme[depends] += "kern-tools-native:do_populate_sysroot"
+do_kernel_configme[dirs] += "${S} ${B}"
+do_kernel_configme() {
+	do_kernel_metadata config
+
+	# translate the kconfig_mode into something that merge_config.sh
+	# understands
+	case ${KCONFIG_MODE} in
+		*allnoconfig)
+			config_flags="-n"
+			;;
+		*alldefconfig)
+			config_flags=""
+			;;
+		*)
+			if [ -f ${WORKDIR}/defconfig ]; then
+				config_flags="-n"
+			fi
+			;;
+	esac
+
+	cd ${S}
+
+	meta_dir=$(kgit --meta)
+	configs="$(scc --configs -o ${meta_dir})"
+	if [ $? -ne 0 ]; then
+		bberror "${configs}"
+		bbfatal_log "Could not find configuration queue (${meta_dir}/config.queue)"
+	fi
+
+	CFLAGS="${CFLAGS} ${TOOLCHAIN_OPTIONS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" HOSTCPP="${BUILD_CPP}" CC="${KERNEL_CC}" LD="${KERNEL_LD}" ARCH=${ARCH} merge_config.sh -O ${B} ${config_flags} ${configs} > ${meta_dir}/cfg/merge_config_build.log 2>&1
+	if [ $? -ne 0 -o ! -f ${B}/.config ]; then
+		bberror "Could not generate a .config for ${KMACHINE}-${LINUX_KERNEL_TYPE}"
+		if [ ${KCONF_AUDIT_LEVEL} -gt 1 ]; then
+			bbfatal_log "`cat ${meta_dir}/cfg/merge_config_build.log`"
+		else
+			bbfatal_log "Details can be found at: ${S}/${meta_dir}/cfg/merge_config_build.log"
+		fi
+	fi
+
+	if [ ! -z "${LINUX_VERSION_EXTENSION}" ]; then
+		echo "# Global settings from linux recipe" >> ${B}/.config
+		echo "CONFIG_LOCALVERSION="\"${LINUX_VERSION_EXTENSION}\" >> ${B}/.config
+	fi
+}
+
+addtask kernel_configme before do_configure after do_patch
+addtask config_analysis
+
+do_config_analysis[depends] = "virtual/kernel:do_configure"
+do_config_analysis[depends] += "kern-tools-native:do_populate_sysroot"
+
+CONFIG_AUDIT_FILE ?= "${WORKDIR}/config-audit.txt"
+CONFIG_ANALYSIS_FILE ?= "${WORKDIR}/config-analysis.txt"
+
+python do_config_analysis() {
+    import re, string, sys, subprocess
+
+    s = d.getVar('S')
+
+    env = os.environ.copy()
+    env['PATH'] = "%s:%s%s" % (d.getVar('PATH'), s, "/scripts/util/")
+    env['LD'] = d.getVar('KERNEL_LD')
+    env['CC'] = d.getVar('KERNEL_CC')
+    env['ARCH'] = d.getVar('ARCH')
+    env['srctree'] = s
+
+    # read specific symbols from the kernel recipe or from local.conf
+    # i.e.: CONFIG_ANALYSIS:pn-linux-yocto-dev = 'NF_CONNTRACK LOCALVERSION'
+    config = d.getVar( 'CONFIG_ANALYSIS' )
+    if not config:
+        config = [ "" ]
+    else:
+        config = config.split()
+
+    for c in config:
+        for action in ["analysis","audit"]:
+            if action == "analysis":
+                try:
+                    analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--blame', c], cwd=s, env=env ).decode('utf-8')
+                except subprocess.CalledProcessError as e:
+                    bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
+
+                outfile = d.getVar( 'CONFIG_ANALYSIS_FILE' )
+
+            if action == "audit":
+                try:
+                    analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--summary', '--extended', '--sanity', c], cwd=s, env=env ).decode('utf-8')
+                except subprocess.CalledProcessError as e:
+                    bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
+
+                outfile = d.getVar( 'CONFIG_AUDIT_FILE' )
+
+            if c:
+                outdir = os.path.dirname( outfile )
+                outname = os.path.basename( outfile )
+                outfile = outdir + '/'+ c + '-' + outname
+
+            if config and os.path.isfile(outfile):
+                os.remove(outfile)
+
+            with open(outfile, 'w+') as f:
+                f.write( analysis )
+
+            bb.warn( "Configuration {} executed, see: {} for details".format(action,outfile ))
+            if c:
+                bb.warn( analysis )
+}
+
+python do_kernel_configcheck() {
+    import re, string, sys, subprocess
+
+    s = d.getVar('S')
+
+    # if KMETA isn't set globally by a recipe using this routine, use kgit to
+    # locate or create the meta directory. Otherwise, kconf_check is not
+    # passed a valid meta-series for processing
+    kmeta = d.getVar("KMETA")
+    if not kmeta or not os.path.exists('{}/{}'.format(s,kmeta)):
+        kmeta = subprocess.check_output(['kgit', '--meta'], cwd=d.getVar('S')).decode('utf-8').rstrip()
+
+    env = os.environ.copy()
+    env['PATH'] = "%s:%s%s" % (d.getVar('PATH'), s, "/scripts/util/")
+    env['LD'] = d.getVar('KERNEL_LD')
+    env['CC'] = d.getVar('KERNEL_CC')
+    env['ARCH'] = d.getVar('ARCH')
+    env['srctree'] = s
+
+    try:
+        configs = subprocess.check_output(['scc', '--configs', '-o', s + '/.kernel-meta'], env=env).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        bb.fatal( "Cannot gather config fragments for audit: %s" % e.output.decode("utf-8") )
+
+    config_check_visibility = int(d.getVar("KCONF_AUDIT_LEVEL") or 0)
+    bsp_check_visibility = int(d.getVar("KCONF_BSP_AUDIT_LEVEL") or 0)
+    kmeta_audit_werror = d.getVar("KMETA_AUDIT_WERROR") or ""
+    warnings_detected = False
+
+    # if config check visibility is "1", that's the lowest level of audit. So
+    # we add the --classify option to the run, since classification will
+    # streamline the output to only report options that could be boot issues,
+    # or are otherwise required for proper operation.
+    extra_params = ""
+    if config_check_visibility == 1:
+        extra_params = "--classify"
+
+    # category #1: mismatches
+    try:
+        analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--mismatches', extra_params], cwd=s, env=env ).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
+
+    if analysis:
+        outfile = "{}/{}/cfg/mismatch.txt".format( s, kmeta )
+        if os.path.isfile(outfile):
+            os.remove(outfile)
+        with open(outfile, 'w+') as f:
+            f.write( analysis )
+
+        if config_check_visibility and os.stat(outfile).st_size > 0:
+            with open (outfile, "r") as myfile:
+                results = myfile.read()
+                bb.warn( "[kernel config]: specified values did not make it into the kernel's final configuration:\n\n%s" % results)
+                warnings_detected = True
+
+    # category #2: invalid fragment elements
+    extra_params = ""
+    if bsp_check_visibility > 1:
+        extra_params = "--strict"
+    try:
+        analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--invalid', extra_params], cwd=s, env=env ).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
+
+    if analysis:
+        outfile = "{}/{}/cfg/invalid.txt".format(s,kmeta)
+        if os.path.isfile(outfile):
+            os.remove(outfile)
+        with open(outfile, 'w+') as f:
+            f.write( analysis )
+
+        if bsp_check_visibility and os.stat(outfile).st_size > 0:
+            with open (outfile, "r") as myfile:
+                results = myfile.read()
+                bb.warn( "[kernel config]: This BSP contains fragments with warnings:\n\n%s" % results)
+                warnings_detected = True
+
+    # category #3: redefined options (this is pretty verbose and is debug only)
+    try:
+        analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--sanity'], cwd=s, env=env ).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
+
+    if analysis:
+        outfile = "{}/{}/cfg/redefinition.txt".format(s,kmeta)
+        if os.path.isfile(outfile):
+            os.remove(outfile)
+        with open(outfile, 'w+') as f:
+            f.write( analysis )
+
+        # if the audit level is greater than two, we report if a fragment has overridden
+        # a value from a base fragment. This is really only used for new kernel introduction
+        if bsp_check_visibility > 2 and os.stat(outfile).st_size > 0:
+            with open (outfile, "r") as myfile:
+                results = myfile.read()
+                bb.warn( "[kernel config]: This BSP has configuration options defined in more than one config, with differing values:\n\n%s" % results)
+                warnings_detected = True
+
+    if warnings_detected and kmeta_audit_werror:
+        bb.fatal( "configuration warnings detected, werror is set, promoting to fatal" )
+}
+
+# Ensure that the branches (BSP and meta) are on the locations specified by
+# their SRCREV values. If they are NOT on the right commits, the branches
+# are corrected to the proper commit.
+do_validate_branches() {
+	set +e
+	cd ${S}
+
+	machine_branch="${@ get_machine_branch(d, "${KBRANCH}" )}"
+	machine_srcrev="${SRCREV_machine}"
+
+	# if SRCREV is AUTOREV it shows up as AUTOINC; there's nothing to
+	# check and we can exit early
+	if [ "${machine_srcrev}" = "AUTOINC" ]; then
+	    linux_yocto_dev='${@oe.utils.conditional("PREFERRED_PROVIDER_virtual/kernel", "linux-yocto-dev", "1", "", d)}'
+	    if [ -n "$linux_yocto_dev" ]; then
+		git checkout -q -f ${machine_branch}
+		ver=$(grep "^VERSION =" ${S}/Makefile | sed s/.*=\ *//)
+		patchlevel=$(grep "^PATCHLEVEL =" ${S}/Makefile | sed s/.*=\ *//)
+		sublevel=$(grep "^SUBLEVEL =" ${S}/Makefile | sed s/.*=\ *//)
+		kver="$ver.$patchlevel"
+		bbnote "dev kernel: performing version -> branch -> SRCREV validation"
+		bbnote "dev kernel: recipe version ${LINUX_VERSION}, src version: $kver"
+		echo "${LINUX_VERSION}" | grep -q $kver
+		if [ $? -ne 0 ]; then
+		    version="$(echo ${LINUX_VERSION} | sed 's/\+.*$//g')"
+		    versioned_branch="v$version/$machine_branch"
+
+		    machine_branch=$versioned_branch
+		    force_srcrev="$(git rev-parse $machine_branch 2> /dev/null)"
+		    if [ $? -ne 0 ]; then
+			bbfatal "kernel version mismatch detected, and no valid branch $machine_branch detected"
+		    fi
+
+		    bbnote "dev kernel: adjusting branch to $machine_branch, srcrev to: $force_srcrev"
+		fi
+	    else
+		bbnote "SRCREV validation is not required for AUTOREV"
+	    fi
+	elif [ "${machine_srcrev}" = "" ]; then
+		if [ "${SRCREV}" != "AUTOINC" ] && [ "${SRCREV}" != "INVALID" ]; then
+		       # SRCREV_machine_<MACHINE> was not set. This means that a custom recipe
+		       # that doesn't use the SRCREV_FORMAT "machine_meta" is being built. In
+		       # this case, we need to reset to the given SRCREV before heading to patching
+		       bbnote "custom recipe is being built, forcing SRCREV to ${SRCREV}"
+		       force_srcrev="${SRCREV}"
+		fi
+	else
+		git cat-file -t ${machine_srcrev} > /dev/null
+		if [ $? -ne 0 ]; then
+			bberror "${machine_srcrev} is not a valid commit ID."
+			bbfatal_log "The kernel source tree may be out of sync"
+		fi
+		force_srcrev=${machine_srcrev}
+	fi
+
+	git checkout -q -f ${machine_branch}
+	if [ -n "${force_srcrev}" ]; then
+		# see if the branch we are about to patch has been properly reset to the defined
+		# SRCREV; if not, we reset it.
+		branch_head=`git rev-parse HEAD`
+		if [ "${force_srcrev}" != "${branch_head}" ]; then
+			current_branch=`git rev-parse --abbrev-ref HEAD`
+			git branch "$current_branch-orig"
+			git reset --hard ${force_srcrev}
+			# We've checked out HEAD, make sure we cleanup kgit-s2q fence post check
+			# so the patches are applied as expected otherwise no patching
+			# would be done in some corner cases.
+			kgit-s2q --clean
+		fi
+	fi
+
+	set -e
+}
+
+OE_TERMINAL_EXPORTS += "KBUILD_OUTPUT"
+KBUILD_OUTPUT = "${B}"
+
+python () {
+    # If diffconfig is available, ensure it runs after kernel_configme
+    if 'do_diffconfig' in d:
+        bb.build.addtask('do_diffconfig', None, 'do_kernel_configme', d)
+
+    externalsrc = d.getVar('EXTERNALSRC')
+    if externalsrc:
+        # If we deltask do_patch, do_kernel_configme is left without
+        # dependencies and runs too early
+        d.setVarFlag('do_kernel_configme', 'deps', (d.getVarFlag('do_kernel_configme', 'deps', False) or []) + ['do_unpack'])
+}
+
+# extra tasks
+addtask kernel_version_sanity_check after do_kernel_metadata do_kernel_checkout before do_compile
+addtask validate_branches before do_patch after do_kernel_checkout
+addtask kernel_configcheck after do_configure before do_compile
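
In practice this class is driven from a linux-yocto style recipe or bbappend that points KBRANCH/KMACHINE at the right branch and feeds configuration fragments and features in through the SRC_URI. A minimal sketch (branch, machine, and fragment names are illustrative) might be:

    KBRANCH = "v5.15/standard/base"
    KMACHINE:myboard = "myboard"

    SRC_URI += "file://myboard.cfg \
                file://myboard.scc \
               "

    # Pull in a feature from the kernel-cache; dangling feature names are fatal
    # unless KERNEL_DANGLING_FEATURES_WARN_ONLY is set
    KERNEL_FEATURES:append = " features/netfilter/netfilter.scc"

    # Raise the audit verbosity described at the top of the class
    KCONF_AUDIT_LEVEL = "2"
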
diff --git a/poky/meta/classes-recipe/kernel.bbclass b/poky/meta/classes-recipe/kernel.bbclass
new file mode 100644
index 0000000..de1b80d
--- /dev/null
+++ b/poky/meta/classes-recipe/kernel.bbclass
@@ -0,0 +1,823 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit linux-kernel-base kernel-module-split
+
+COMPATIBLE_HOST = ".*-linux"
+
+KERNEL_PACKAGE_NAME ??= "kernel"
+KERNEL_DEPLOYSUBDIR ??= "${@ "" if (d.getVar("KERNEL_PACKAGE_NAME") == "kernel") else d.getVar("KERNEL_PACKAGE_NAME") }"
+
+PROVIDES += "virtual/kernel"
+DEPENDS += "virtual/${TARGET_PREFIX}binutils virtual/${TARGET_PREFIX}gcc kmod-native bc-native bison-native"
+DEPENDS += "${@bb.utils.contains("INITRAMFS_FSTYPES", "cpio.lzo", "lzop-native", "", d)}"
+DEPENDS += "${@bb.utils.contains("INITRAMFS_FSTYPES", "cpio.lz4", "lz4-native", "", d)}"
+DEPENDS += "${@bb.utils.contains("INITRAMFS_FSTYPES", "cpio.zst", "zstd-native", "", d)}"
+PACKAGE_WRITE_DEPS += "depmodwrapper-cross"
+
+do_deploy[depends] += "depmodwrapper-cross:do_populate_sysroot gzip-native:do_populate_sysroot"
+do_clean[depends] += "make-mod-scripts:do_clean"
+
+CVE_PRODUCT ?= "linux_kernel"
+
+S = "${STAGING_KERNEL_DIR}"
+B = "${WORKDIR}/build"
+KBUILD_OUTPUT = "${B}"
+OE_TERMINAL_EXPORTS += "KBUILD_OUTPUT"
+
+# we include gcc above, we don't need virtual/libc
+INHIBIT_DEFAULT_DEPS = "1"
+
+KERNEL_IMAGETYPE ?= "zImage"
+INITRAMFS_IMAGE ?= ""
+INITRAMFS_IMAGE_NAME ?= "${@['${INITRAMFS_IMAGE}-${MACHINE}', ''][d.getVar('INITRAMFS_IMAGE') == '']}"
+INITRAMFS_TASK ?= ""
+INITRAMFS_IMAGE_BUNDLE ?= ""
+INITRAMFS_DEPLOY_DIR_IMAGE ?= "${DEPLOY_DIR_IMAGE}"
+INITRAMFS_MULTICONFIG ?= ""
+
+# KERNEL_VERSION is extracted from source code. It is evaluated as
+# None for the first parsing, since the code has not been fetched.
+# After the code is fetched, it will be evaluated as real version
+# number and cause kernel to be rebuilt. To avoid this, make
+# KERNEL_VERSION_NAME and KERNEL_VERSION_PKG_NAME depend on
+# LINUX_VERSION which is a constant.
+KERNEL_VERSION_NAME = "${@d.getVar('KERNEL_VERSION') or ""}"
+KERNEL_VERSION_NAME[vardepvalue] = "${LINUX_VERSION}"
+KERNEL_VERSION_PKG_NAME = "${@legitimize_package_name(d.getVar('KERNEL_VERSION'))}"
+KERNEL_VERSION_PKG_NAME[vardepvalue] = "${LINUX_VERSION}"
+
+python __anonymous () {
+    pn = d.getVar("PN")
+    kpn = d.getVar("KERNEL_PACKAGE_NAME")
+
+    # XXX Remove this after bug 11905 is resolved
+    #  FILES:${KERNEL_PACKAGE_NAME}-dev doesn't expand correctly
+    if kpn == pn:
+        bb.warn("Some packages (E.g. *-dev) might be missing due to "
+                "bug 11905 (variable KERNEL_PACKAGE_NAME == PN)")
+
+    # The default kernel recipe builds in a shared location defined by
+    # bitbake/distro confs: STAGING_KERNEL_DIR and STAGING_KERNEL_BUILDDIR.
+    # Set these variables to directories under ${WORKDIR} in alternate
+    # kernel recipes (I.e. where KERNEL_PACKAGE_NAME != kernel) so that they
+    # may build in parallel with the default kernel without clobbering.
+    if kpn != "kernel":
+        workdir = d.getVar("WORKDIR")
+        sourceDir = os.path.join(workdir, 'kernel-source')
+        artifactsDir = os.path.join(workdir, 'kernel-build-artifacts')
+        d.setVar("STAGING_KERNEL_DIR", sourceDir)
+        d.setVar("STAGING_KERNEL_BUILDDIR", artifactsDir)
+
+    # Merge KERNEL_IMAGETYPE and KERNEL_ALT_IMAGETYPE into KERNEL_IMAGETYPES
+    type = d.getVar('KERNEL_IMAGETYPE') or ""
+    alttype = d.getVar('KERNEL_ALT_IMAGETYPE') or ""
+    types = d.getVar('KERNEL_IMAGETYPES') or ""
+    if type not in types.split():
+        types = (type + ' ' + types).strip()
+    if alttype not in types.split():
+        types = (alttype + ' ' + types).strip()
+    d.setVar('KERNEL_IMAGETYPES', types)
+
+    # KERNEL_IMAGETYPES may contain a mixture of image types supported directly
+    # by the kernel build system and types which are created by post-processing
+    # the output of the kernel build system (e.g. compressing vmlinux ->
+    # vmlinux.gz in kernel_do_transform_kernel()).
+    # KERNEL_IMAGETYPE_FOR_MAKE should contain only image types supported
+    # directly by the kernel build system.
+    if not d.getVar('KERNEL_IMAGETYPE_FOR_MAKE'):
+        typeformake = set()
+        for type in types.split():
+            if type == 'vmlinux.gz':
+                type = 'vmlinux'
+            typeformake.add(type)
+
+        d.setVar('KERNEL_IMAGETYPE_FOR_MAKE', ' '.join(sorted(typeformake)))
+
+    kname = d.getVar('KERNEL_PACKAGE_NAME') or "kernel"
+    imagedest = d.getVar('KERNEL_IMAGEDEST')
+
+    for type in types.split():
+        if bb.data.inherits_class('nopackages', d):
+            continue
+        typelower = type.lower()
+        d.appendVar('PACKAGES', ' %s-image-%s' % (kname, typelower))
+        d.setVar('FILES:' + kname + '-image-' + typelower, '/' + imagedest + '/' + type + '-${KERNEL_VERSION_NAME}' + ' /' + imagedest + '/' + type)
+        d.appendVar('RDEPENDS:%s-image' % kname, ' %s-image-%s (= ${EXTENDPKGV})' % (kname, typelower))
+        splitmods = d.getVar("KERNEL_SPLIT_MODULES")
+        if splitmods != '1':
+            d.appendVar('RDEPENDS:%s-image' % kname, ' %s-modules (= ${EXTENDPKGV})' % kname)
+            d.appendVar('RDEPENDS:%s-image-%s' % (kname, typelower), ' %s-modules-${KERNEL_VERSION_PKG_NAME} (= ${EXTENDPKGV})' % kname)
+            d.setVar('PKG:%s-modules' % kname, '%s-modules-${KERNEL_VERSION_PKG_NAME}' % kname)
+            d.appendVar('RPROVIDES:%s-modules' % kname, '%s-modules-${KERNEL_VERSION_PKG_NAME}' % kname)
+
+        d.setVar('PKG:%s-image-%s' % (kname,typelower), '%s-image-%s-${KERNEL_VERSION_PKG_NAME}' % (kname, typelower))
+        d.setVar('ALLOW_EMPTY:%s-image-%s' % (kname, typelower), '1')
+        d.prependVar('pkg_postinst:%s-image-%s' % (kname,typelower), """set +e
+if [ -n "$D" ]; then
+    ln -sf %s-${KERNEL_VERSION} $D/${KERNEL_IMAGEDEST}/%s > /dev/null 2>&1
+else
+    ln -sf %s-${KERNEL_VERSION} ${KERNEL_IMAGEDEST}/%s > /dev/null 2>&1
+    if [ $? -ne 0 ]; then
+        echo "Filesystem on ${KERNEL_IMAGEDEST}/ doesn't support symlinks, falling back to copied image (%s)."
+        install -m 0644 ${KERNEL_IMAGEDEST}/%s-${KERNEL_VERSION} ${KERNEL_IMAGEDEST}/%s
+    fi
+fi
+set -e
+""" % (type, type, type, type, type, type, type))
+        d.setVar('pkg_postrm:%s-image-%s' % (kname,typelower), """set +e
+if [ -f "${KERNEL_IMAGEDEST}/%s" -o -L "${KERNEL_IMAGEDEST}/%s" ]; then
+    rm -f ${KERNEL_IMAGEDEST}/%s  > /dev/null 2>&1
+fi
+set -e
+""" % (type, type, type))
+
+
+    image = d.getVar('INITRAMFS_IMAGE')
+    # If the INITRAMFS_IMAGE is set but the INITRAMFS_IMAGE_BUNDLE is set to 0,
+    # the do_bundle_initramfs does nothing, but the INITRAMFS_IMAGE is built
+    # standalone for use by wic and other tools.
+    if image:
+        if d.getVar('INITRAMFS_MULTICONFIG'):
+            d.appendVarFlag('do_bundle_initramfs', 'mcdepends', ' mc::${INITRAMFS_MULTICONFIG}:${INITRAMFS_IMAGE}:do_image_complete')
+        else:
+            d.appendVarFlag('do_bundle_initramfs', 'depends', ' ${INITRAMFS_IMAGE}:do_image_complete')
+    if image and bb.utils.to_boolean(d.getVar('INITRAMFS_IMAGE_BUNDLE')):
+        bb.build.addtask('do_transform_bundled_initramfs', 'do_deploy', 'do_bundle_initramfs', d)
+
+    # NOTE: setting INITRAMFS_TASK is for backward compatibility
+    #       The preferred method is to set INITRAMFS_IMAGE, because
+    #       this INITRAMFS_TASK has circular dependency problems
+    #       if the initramfs requires kernel modules
+    image_task = d.getVar('INITRAMFS_TASK')
+    if image_task:
+        d.appendVarFlag('do_configure', 'depends', ' ${INITRAMFS_TASK}')
+}
+
+# Here we pull in all various kernel image types which we support.
+#
+# In case you're wondering why kernel.bbclass inherits the other image
+# types instead of the other way around, the reason for that is to
+# maintain compatibility with various currently existing meta-layers.
+# By pulling in the various kernel image types here, we retain the
+# original behavior of kernel.bbclass, so no meta-layers should get
+# broken.
+#
+# KERNEL_CLASSES by default pulls in kernel-uimage.bbclass, since this
+# used to be the default behavior when only uImage was supported. This
+# variable can be appended by users who implement support for new kernel
+# image types.
+
+KERNEL_CLASSES ?= " kernel-uimage "
+inherit ${KERNEL_CLASSES}
+
+# Old style kernels may set ${S} = ${WORKDIR}/git for example
+# We need to move these over to STAGING_KERNEL_DIR. We can't just
+# create the symlink in advance as the git fetcher can't cope with
+# the symlink.
+do_unpack[cleandirs] += " ${S} ${STAGING_KERNEL_DIR} ${B} ${STAGING_KERNEL_BUILDDIR}"
+do_clean[cleandirs] += " ${S} ${STAGING_KERNEL_DIR} ${B} ${STAGING_KERNEL_BUILDDIR}"
+python do_symlink_kernsrc () {
+    s = d.getVar("S")
+    if s[-1] == '/':
+        # drop trailing slash, so that os.symlink(kernsrc, s) doesn't use s as directory name and fail
+        s=s[:-1]
+    kernsrc = d.getVar("STAGING_KERNEL_DIR")
+    if s != kernsrc:
+        bb.utils.mkdirhier(kernsrc)
+        bb.utils.remove(kernsrc, recurse=True)
+        if d.getVar("EXTERNALSRC"):
+            # With EXTERNALSRC S will not be wiped so we can symlink to it
+            os.symlink(s, kernsrc)
+        else:
+            import shutil
+            shutil.move(s, kernsrc)
+            os.symlink(kernsrc, s)
+}
+# do_patch is normally ordered before do_configure, but
+# externalsrc.bbclass deletes do_patch, breaking the dependency of
+# do_configure on do_symlink_kernsrc.
+addtask symlink_kernsrc before do_patch do_configure after do_unpack
+
+inherit kernel-arch deploy
+
+PACKAGES_DYNAMIC += "^${KERNEL_PACKAGE_NAME}-module-.*"
+PACKAGES_DYNAMIC += "^${KERNEL_PACKAGE_NAME}-image-.*"
+PACKAGES_DYNAMIC += "^${KERNEL_PACKAGE_NAME}-firmware-.*"
+
+export OS = "${TARGET_OS}"
+export CROSS_COMPILE = "${TARGET_PREFIX}"
+export KBUILD_BUILD_VERSION = "1"
+export KBUILD_BUILD_USER ?= "oe-user"
+export KBUILD_BUILD_HOST ?= "oe-host"
+
+KERNEL_RELEASE ?= "${KERNEL_VERSION}"
+
+# The directory where the built kernel lies in the kernel tree
+KERNEL_OUTPUT_DIR ?= "arch/${ARCH}/boot"
+KERNEL_IMAGEDEST ?= "boot"
+
+#
+# configuration
+#
+export CMDLINE_CONSOLE = "console=${@d.getVar("KERNEL_CONSOLE") or "ttyS0"}"
+
+KERNEL_VERSION = "${@get_kernelversion_headers('${B}')}"
+
+# kernels are generally machine specific
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+# U-Boot support
+UBOOT_ENTRYPOINT ?= "20008000"
+UBOOT_LOADADDRESS ?= "${UBOOT_ENTRYPOINT}"
+
+# Some Linux kernel configurations need additional parameters on the command line
+KERNEL_EXTRA_ARGS ?= ""
+
+EXTRA_OEMAKE += ' CC="${KERNEL_CC}" LD="${KERNEL_LD}"'
+EXTRA_OEMAKE += ' HOSTCC="${BUILD_CC}" HOSTCFLAGS="${BUILD_CFLAGS}" HOSTLDFLAGS="${BUILD_LDFLAGS}" HOSTCPP="${BUILD_CPP}"'
+EXTRA_OEMAKE += ' HOSTCXX="${BUILD_CXX}" HOSTCXXFLAGS="${BUILD_CXXFLAGS}" PAHOLE=false'
+
+KERNEL_ALT_IMAGETYPE ??= ""
+
+copy_initramfs() {
+	echo "Copying initramfs into ./usr ..."
+	# In case the directory is not created yet from the first pass compile:
+	mkdir -p ${B}/usr
+	# Find and use the first initramfs image archive type we find
+	rm -f ${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
+	for img in cpio cpio.gz cpio.lz4 cpio.lzo cpio.lzma cpio.xz cpio.zst; do
+		if [ -e "${INITRAMFS_DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.$img" ]; then
+			cp ${INITRAMFS_DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.$img ${B}/usr/.
+			case $img in
+			*gz)
+				echo "gzip decompressing image"
+				gunzip -f ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
+				break
+				;;
+			*lz4)
+				echo "lz4 decompressing image"
+				lz4 -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img ${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
+				break
+				;;
+			*lzo)
+				echo "lzo decompressing image"
+				lzop -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
+				break
+				;;
+			*lzma)
+				echo "lzma decompressing image"
+				lzma -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
+				break
+				;;
+			*xz)
+				echo "xz decompressing image"
+				xz -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
+				break
+				;;
+			*zst)
+				echo "zst decompressing image"
+				zstd -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
+				break
+				;;
+			esac
+			break
+		fi
+	done
+	# Verify that the above loop found an initramfs, fail otherwise
+	[ -f ${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio ] && echo "Finished copy of initramfs into ./usr" || die "Could not find any ${INITRAMFS_DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.cpio(.gz|.lz4|.lzo|.lzma|.xz|.zst) for bundling; INITRAMFS_IMAGE_NAME might be wrong."
+}
+
+do_bundle_initramfs () {
+	if [ ! -z "${INITRAMFS_IMAGE}" -a x"${INITRAMFS_IMAGE_BUNDLE}" = x1 ]; then
+		echo "Creating a kernel image with a bundled initramfs..."
+		copy_initramfs
+		# Backing up the kernel image relies on its type (regular file or symbolic link)
+		tmp_path=""
+		for imageType in ${KERNEL_IMAGETYPE_FOR_MAKE} ; do
+			if [ -h ${KERNEL_OUTPUT_DIR}/$imageType ] ; then
+				linkpath=`readlink -n ${KERNEL_OUTPUT_DIR}/$imageType`
+				realpath=`readlink -fn ${KERNEL_OUTPUT_DIR}/$imageType`
+				mv -f $realpath $realpath.bak
+				tmp_path=$tmp_path" "$imageType"#"$linkpath"#"$realpath
+			elif [ -f ${KERNEL_OUTPUT_DIR}/$imageType ]; then
+				mv -f ${KERNEL_OUTPUT_DIR}/$imageType ${KERNEL_OUTPUT_DIR}/$imageType.bak
+				tmp_path=$tmp_path" "$imageType"##"
+			fi
+		done
+		use_alternate_initrd=CONFIG_INITRAMFS_SOURCE=${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
+		kernel_do_compile
+		# Restoring kernel image
+		for tp in $tmp_path ; do
+			imageType=`echo $tp|cut -d "#" -f 1`
+			linkpath=`echo $tp|cut -d "#" -f 2`
+			realpath=`echo $tp|cut -d "#" -f 3`
+			if [ -n "$realpath" ]; then
+				mv -f $realpath $realpath.initramfs
+				mv -f $realpath.bak $realpath
+				ln -sf $linkpath.initramfs ${B}/${KERNEL_OUTPUT_DIR}/$imageType.initramfs
+			else
+				mv -f ${KERNEL_OUTPUT_DIR}/$imageType ${KERNEL_OUTPUT_DIR}/$imageType.initramfs
+				mv -f ${KERNEL_OUTPUT_DIR}/$imageType.bak ${KERNEL_OUTPUT_DIR}/$imageType
+			fi
+		done
+	fi
+}
+do_bundle_initramfs[dirs] = "${B}"
+
+kernel_do_transform_bundled_initramfs() {
+        # vmlinux.gz is not built by kernel
+	if (echo "${KERNEL_IMAGETYPES}" | grep -wq "vmlinux\.gz"); then
+		gzip -9cn < ${KERNEL_OUTPUT_DIR}/vmlinux.initramfs > ${KERNEL_OUTPUT_DIR}/vmlinux.gz.initramfs
+        fi
+}
+do_transform_bundled_initramfs[dirs] = "${B}"
+
+python do_devshell:prepend () {
+    os.environ["LDFLAGS"] = ''
+}
+
+addtask bundle_initramfs after do_install before do_deploy
+
+KERNEL_DEBUG_TIMESTAMPS ??= "0"
+
+kernel_do_compile() {
+	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
+
+	# set up native pkg-config variables (kconfig scripts call pkg-config directly, cannot generically be overridden to pkg-config-native)
+	export PKG_CONFIG_DIR="${STAGING_DIR_NATIVE}${libdir_native}/pkgconfig"
+	export PKG_CONFIG_PATH="$PKG_CONFIG_DIR:${STAGING_DATADIR_NATIVE}/pkgconfig"
+	export PKG_CONFIG_LIBDIR="$PKG_CONFIG_DIR"
+	export PKG_CONFIG_SYSROOT_DIR=""
+
+	if [ "${KERNEL_DEBUG_TIMESTAMPS}" != "1" ]; then
+		# kernel sources do not use do_unpack, so SOURCE_DATE_EPOCH may not
+		# be set....
+		if [ "${SOURCE_DATE_EPOCH}" = "" -o "${SOURCE_DATE_EPOCH}" = "0" ]; then
+			# The source directory is not necessarily a git repository, so we
+			# specify the git-dir to ensure that git does not query a
+			# repository in any parent directory.
+			SOURCE_DATE_EPOCH=`git --git-dir="${S}/.git" log -1 --pretty=%ct 2>/dev/null || echo "${REPRODUCIBLE_TIMESTAMP_ROOTFS}"`
+		fi
+
+		ts=`LC_ALL=C date -d @$SOURCE_DATE_EPOCH`
+		export KBUILD_BUILD_TIMESTAMP="$ts"
+		export KCONFIG_NOTIMESTAMP=1
+		bbnote "KBUILD_BUILD_TIMESTAMP: $ts"
+	fi
+	# The $use_alternate_initrd is only set from
+	# do_bundle_initramfs(). This variable is specifically for the
+	# case where we are making a second pass at the kernel
+	# compilation and we want to force the kernel build to use a
+	# different initramfs image.  The way to do that in the kernel
+	# is to specify:
+	# make ...args... CONFIG_INITRAMFS_SOURCE=some_other_initramfs.cpio
+	if [ "$use_alternate_initrd" = "" ] && [ "${INITRAMFS_TASK}" != "" ] ; then
+		# The old style way of copying a prebuilt image and building it
+		# is turned on via INITRAMFS_TASK != ""
+		copy_initramfs
+		use_alternate_initrd=CONFIG_INITRAMFS_SOURCE=${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
+	fi
+	for typeformake in ${KERNEL_IMAGETYPE_FOR_MAKE} ; do
+		oe_runmake ${typeformake} ${KERNEL_EXTRA_ARGS} $use_alternate_initrd
+	done
+}
+
+kernel_do_transform_kernel() {
+	# vmlinux.gz is not built by kernel
+	if (echo "${KERNEL_IMAGETYPES}" | grep -wq "vmlinux\.gz"); then
+		mkdir -p "${KERNEL_OUTPUT_DIR}"
+		gzip -9cn < ${B}/vmlinux > "${KERNEL_OUTPUT_DIR}/vmlinux.gz"
+	fi
+}
+do_transform_kernel[dirs] = "${B}"
+addtask transform_kernel after do_compile before do_install
+
+do_compile_kernelmodules() {
+	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
+	if [ "${KERNEL_DEBUG_TIMESTAMPS}" != "1" ]; then
+		# kernel sources do not use do_unpack, so SOURCE_DATE_EPOCH may not
+		# be set....
+		if [ "${SOURCE_DATE_EPOCH}" = "" -o "${SOURCE_DATE_EPOCH}" = "0" ]; then
+			# The source directory is not necessarily a git repository, so we
+			# specify the git-dir to ensure that git does not query a
+			# repository in any parent directory.
+			SOURCE_DATE_EPOCH=`git --git-dir="${S}/.git" log -1 --pretty=%ct 2>/dev/null || echo "${REPRODUCIBLE_TIMESTAMP_ROOTFS}"`
+		fi
+
+		ts=`LC_ALL=C date -d @$SOURCE_DATE_EPOCH`
+		export KBUILD_BUILD_TIMESTAMP="$ts"
+		export KCONFIG_NOTIMESTAMP=1
+		bbnote "KBUILD_BUILD_TIMESTAMP: $ts"
+	fi
+	if (grep -q -i -e '^CONFIG_MODULES=y$' ${B}/.config); then
+		oe_runmake -C ${B} ${PARALLEL_MAKE} modules ${KERNEL_EXTRA_ARGS}
+
+		# Module.symvers gets updated during the
+		# building of the kernel modules. We need to
+		# update this in the shared workdir since some
+		# external kernel modules have a dependency on
+		# other kernel modules and will look at this
+		# file to do symbol lookups
+		cp ${B}/Module.symvers ${STAGING_KERNEL_BUILDDIR}/
+		# 5.10+ kernels have module.lds that we need to copy for external module builds
+		if [ -e "${B}/scripts/module.lds" ]; then
+			install -Dm 0644 ${B}/scripts/module.lds ${STAGING_KERNEL_BUILDDIR}/scripts/module.lds
+		fi
+	else
+		bbnote "no modules to compile"
+	fi
+}
+addtask compile_kernelmodules after do_compile before do_strip
+
+kernel_do_install() {
+	#
+	# First install the modules
+	#
+	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
+	if (grep -q -i -e '^CONFIG_MODULES=y$' .config); then
+		oe_runmake DEPMOD=echo MODLIB=${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION} INSTALL_FW_PATH=${D}${nonarch_base_libdir}/firmware modules_install
+		rm "${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}/build"
+		rm "${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}/source"
+		# If the kernel/ directory is empty, remove it to prevent QA issues
+		rmdir --ignore-fail-on-non-empty "${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}/kernel"
+	else
+		bbnote "no modules to install"
+	fi
+
+	#
+	# Install various kernel output (zImage, map file, config, module support files)
+	#
+	install -d ${D}/${KERNEL_IMAGEDEST}
+
+	#
+	# When including an initramfs bundle inside a FIT image, the fitImage is created after the install task
+	# by do_assemble_fitimage_initramfs.
+	# This happens after the generation of the initramfs bundle (done by do_bundle_initramfs).
+	# So, at the level of the install task we should not try to install the fitImage; it has not been
+	# generated yet.
+	# After the generation of the fitImage, the deploy task copies the fitImage from the build directory to
+	# the deploy folder.
+	#
+
+	for imageType in ${KERNEL_IMAGETYPES} ; do
+		if [ $imageType != "fitImage" ] || [ "${INITRAMFS_IMAGE_BUNDLE}" != "1" ] ; then
+			install -m 0644 ${KERNEL_OUTPUT_DIR}/$imageType ${D}/${KERNEL_IMAGEDEST}/$imageType-${KERNEL_VERSION}
+		fi
+	done
+
+	install -m 0644 System.map ${D}/${KERNEL_IMAGEDEST}/System.map-${KERNEL_VERSION}
+	install -m 0644 .config ${D}/${KERNEL_IMAGEDEST}/config-${KERNEL_VERSION}
+	install -m 0644 vmlinux ${D}/${KERNEL_IMAGEDEST}/vmlinux-${KERNEL_VERSION}
+	[ -e Module.symvers ] && install -m 0644 Module.symvers ${D}/${KERNEL_IMAGEDEST}/Module.symvers-${KERNEL_VERSION}
+	install -d ${D}${sysconfdir}/modules-load.d
+	install -d ${D}${sysconfdir}/modprobe.d
+}
+
+# Must be run no earlier than after do_kernel_checkout or else the Makefile won't be in ${S}/Makefile
+do_kernel_version_sanity_check() {
+	if [ "x${KERNEL_VERSION_SANITY_SKIP}" = "x1" ]; then
+		exit 0
+	fi
+
+	# The Makefile determines the kernel version shown at runtime
+	# Don't use KERNEL_VERSION because the headers it grabs the version from aren't generated until do_compile
+	VERSION=$(grep "^VERSION =" ${S}/Makefile | sed s/.*=\ *//)
+	PATCHLEVEL=$(grep "^PATCHLEVEL =" ${S}/Makefile | sed s/.*=\ *//)
+	SUBLEVEL=$(grep "^SUBLEVEL =" ${S}/Makefile | sed s/.*=\ *//)
+	EXTRAVERSION=$(grep "^EXTRAVERSION =" ${S}/Makefile | sed s/.*=\ *//)
+
+	# Build a string for regex and a plain version string
+	reg="^${VERSION}\.${PATCHLEVEL}"
+	vers="${VERSION}.${PATCHLEVEL}"
+	if [ -n "${SUBLEVEL}" ]; then
+		# Ignoring a SUBLEVEL of zero is fine
+		if [ "${SUBLEVEL}" = "0" ]; then
+			reg="${reg}(\.${SUBLEVEL})?"
+		else
+			reg="${reg}\.${SUBLEVEL}"
+			vers="${vers}.${SUBLEVEL}"
+		fi
+	fi
+	vers="${vers}${EXTRAVERSION}"
+	reg="${reg}${EXTRAVERSION}"
+
+	if [ -z `echo ${PV} | grep -E "${reg}"` ]; then
+		bbfatal "Package Version (${PV}) does not match the version of the kernel being built (${vers}). Please update the PV variable to match the kernel source or set KERNEL_VERSION_SANITY_SKIP=\"1\" in your recipe."
+	fi
+	exit 0
+}
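+# For illustration: with a kernel Makefile reporting VERSION = 5, PATCHLEVEL = 15,
+# SUBLEVEL = 2 and an empty EXTRAVERSION, PV must match 5.15.2; a SUBLEVEL of 0 may
+# be omitted from PV (e.g. both "5.15" and "5.15.0" pass the check).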
+
+addtask shared_workdir after do_compile before do_compile_kernelmodules
+addtask shared_workdir_setscene
+
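+# The setscene variant always fails so that do_shared_workdir is never restored
+# from sstate and the shared kernel build directory is populated by a real run.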
+do_shared_workdir_setscene () {
+	exit 1
+}
+
+emit_depmod_pkgdata() {
+	# Stash data for depmod
+	install -d ${PKGDESTWORK}/${KERNEL_PACKAGE_NAME}-depmod/
+	echo "${KERNEL_VERSION}" > ${PKGDESTWORK}/${KERNEL_PACKAGE_NAME}-depmod/${KERNEL_PACKAGE_NAME}-abiversion
+	cp ${B}/System.map ${PKGDESTWORK}/${KERNEL_PACKAGE_NAME}-depmod/System.map-${KERNEL_VERSION}
+}
+
+PACKAGEFUNCS += "emit_depmod_pkgdata"
+
+do_shared_workdir[cleandirs] += " ${STAGING_KERNEL_BUILDDIR}"
+do_shared_workdir () {
+	cd ${B}
+
+	kerneldir=${STAGING_KERNEL_BUILDDIR}
+	install -d $kerneldir
+
+	#
+	# Store the kernel version in sysroots for module-base.bbclass
+	#
+
+	echo "${KERNEL_VERSION}" > $kerneldir/${KERNEL_PACKAGE_NAME}-abiversion
+
+	# Copy files required for module builds
+	cp System.map $kerneldir/System.map-${KERNEL_VERSION}
+	[ -e Module.symvers ] && cp Module.symvers $kerneldir/
+	cp .config $kerneldir/
+	mkdir -p $kerneldir/include/config
+	cp include/config/kernel.release $kerneldir/include/config/kernel.release
+	if [ -e certs/signing_key.x509 ]; then
+		# The signing_key.* files are stored in the certs/ dir in
+		# newer Linux kernels
+		mkdir -p $kerneldir/certs
+		cp certs/signing_key.* $kerneldir/certs/
+	elif [ -e signing_key.priv ]; then
+		cp signing_key.* $kerneldir/
+	fi
+
+	# We can also copy over all the generated files and avoid special cases
+	# like version.h, but we've opted to keep this small until file creep starts
+	# to happen
+	if [ -e include/linux/version.h ]; then
+		mkdir -p $kerneldir/include/linux
+		cp include/linux/version.h $kerneldir/include/linux/version.h
+	fi
+
+	# As of Linux kernel version 3.0.1, the clean target removes
+	# arch/powerpc/lib/crtsavres.o which is present in
+	# KBUILD_LDFLAGS_MODULE, making it required to build external modules.
+	if [ ${ARCH} = "powerpc" ]; then
+		if [ -e arch/powerpc/lib/crtsavres.o ]; then
+			mkdir -p $kerneldir/arch/powerpc/lib/
+			cp arch/powerpc/lib/crtsavres.o $kerneldir/arch/powerpc/lib/crtsavres.o
+		fi
+	fi
+
+	if [ -d include/generated ]; then
+		mkdir -p $kerneldir/include/generated/
+		cp -fR include/generated/* $kerneldir/include/generated/
+	fi
+
+	if [ -d arch/${ARCH}/include/generated ]; then
+		mkdir -p $kerneldir/arch/${ARCH}/include/generated/
+		cp -fR arch/${ARCH}/include/generated/* $kerneldir/arch/${ARCH}/include/generated/
+	fi
+
+	if (grep -q -i -e '^CONFIG_UNWINDER_ORC=y$' $kerneldir/.config); then
+		# With CONFIG_UNWINDER_ORC (the default in 4.14), objtool is required for
+		# out-of-tree modules to be able to generate object files.
+		if [ -x tools/objtool/objtool ]; then
+			mkdir -p ${kerneldir}/tools/objtool
+			cp tools/objtool/objtool ${kerneldir}/tools/objtool/
+		fi
+	fi
+}
+
+# We don't need to stage anything, not even the modules/firmware, since those would clash with linux-firmware
+sysroot_stage_all () {
+	:
+}
+
+KERNEL_CONFIG_COMMAND ?= "oe_runmake_call -C ${S} O=${B} olddefconfig || oe_runmake -C ${S} O=${B} oldnoconfig"
+
+python check_oldest_kernel() {
+    oldest_kernel = d.getVar('OLDEST_KERNEL')
+    kernel_version = d.getVar('KERNEL_VERSION')
+    tclibc = d.getVar('TCLIBC')
+    if tclibc == 'glibc':
+        kernel_version = kernel_version.split('-', 1)[0]
+        if oldest_kernel and kernel_version:
+            if bb.utils.vercmp_string(kernel_version, oldest_kernel) < 0:
+                bb.warn('%s: OLDEST_KERNEL is "%s" but the version of the kernel you are building is "%s" - therefore %s as built may not be compatible with this kernel. Either set OLDEST_KERNEL to an older version, or build a newer kernel.' % (d.getVar('PN'), oldest_kernel, kernel_version, tclibc))
+}
+
+check_oldest_kernel[vardepsexclude] += "OLDEST_KERNEL KERNEL_VERSION"
+do_configure[prefuncs] += "check_oldest_kernel"
+
+kernel_do_configure() {
+	# fixes extra + in /lib/modules/2.6.37+
+	# $ scripts/setlocalversion . => +
+	# $ make kernelversion => 2.6.37
+	# $ make kernelrelease => 2.6.37+
+	touch ${B}/.scmversion ${S}/.scmversion
+
+	if [ "${S}" != "${B}" ] && [ -f "${S}/.config" ] && [ ! -f "${B}/.config" ]; then
+		mv "${S}/.config" "${B}/.config"
+	fi
+
+	# Copy defconfig to .config if .config does not exist. This allows
+	# recipes to manage the .config themselves in do_configure:prepend().
+	if [ -f "${WORKDIR}/defconfig" ] && [ ! -f "${B}/.config" ]; then
+		cp "${WORKDIR}/defconfig" "${B}/.config"
+	fi
+
+	${KERNEL_CONFIG_COMMAND}
+}
+
+do_savedefconfig() {
+	bbplain "Saving defconfig to:\n${B}/defconfig"
+	oe_runmake -C ${B} savedefconfig
+}
+do_savedefconfig[nostamp] = "1"
+addtask savedefconfig after do_configure
+
+inherit cml1
+
+# Need LD, HOSTLDFLAGS and more for config operations
+KCONFIG_CONFIG_COMMAND:append = " ${EXTRA_OEMAKE}"
+
+EXPORT_FUNCTIONS do_compile do_transform_kernel do_transform_bundled_initramfs do_install do_configure
+
+# kernel-base becomes kernel-${KERNEL_VERSION}
+# kernel-image becomes kernel-image-${KERNEL_VERSION}
+PACKAGES = "${KERNEL_PACKAGE_NAME} ${KERNEL_PACKAGE_NAME}-base ${KERNEL_PACKAGE_NAME}-vmlinux ${KERNEL_PACKAGE_NAME}-image ${KERNEL_PACKAGE_NAME}-dev ${KERNEL_PACKAGE_NAME}-modules ${KERNEL_PACKAGE_NAME}-dbg"
+FILES:${PN} = ""
+FILES:${KERNEL_PACKAGE_NAME}-base = "${nonarch_base_libdir}/modules/${KERNEL_VERSION}/modules.order ${nonarch_base_libdir}/modules/${KERNEL_VERSION}/modules.builtin ${nonarch_base_libdir}/modules/${KERNEL_VERSION}/modules.builtin.modinfo"
+FILES:${KERNEL_PACKAGE_NAME}-image = ""
+FILES:${KERNEL_PACKAGE_NAME}-dev = "/${KERNEL_IMAGEDEST}/System.map* /${KERNEL_IMAGEDEST}/Module.symvers* /${KERNEL_IMAGEDEST}/config* ${KERNEL_SRC_PATH} ${nonarch_base_libdir}/modules/${KERNEL_VERSION}/build"
+FILES:${KERNEL_PACKAGE_NAME}-vmlinux = "/${KERNEL_IMAGEDEST}/vmlinux-${KERNEL_VERSION_NAME}"
+FILES:${KERNEL_PACKAGE_NAME}-modules = ""
+FILES:${KERNEL_PACKAGE_NAME}-dbg = "/usr/lib/debug /usr/src/debug"
+RDEPENDS:${KERNEL_PACKAGE_NAME} = "${KERNEL_PACKAGE_NAME}-base (= ${EXTENDPKGV})"
+# Allow machines to override this dependency if kernel image files are
+# not wanted in images as standard
+RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base ?= "${KERNEL_PACKAGE_NAME}-image (= ${EXTENDPKGV})"
+PKG:${KERNEL_PACKAGE_NAME}-image = "${KERNEL_PACKAGE_NAME}-image-${@legitimize_package_name(d.getVar('KERNEL_VERSION'))}"
+RDEPENDS:${KERNEL_PACKAGE_NAME}-image += "${@oe.utils.conditional('KERNEL_IMAGETYPE', 'vmlinux', '${KERNEL_PACKAGE_NAME}-vmlinux (= ${EXTENDPKGV})', '', d)}"
+PKG:${KERNEL_PACKAGE_NAME}-base = "${KERNEL_PACKAGE_NAME}-${@legitimize_package_name(d.getVar('KERNEL_VERSION'))}"
+RPROVIDES:${KERNEL_PACKAGE_NAME}-base += "${KERNEL_PACKAGE_NAME}-${KERNEL_VERSION}"
+ALLOW_EMPTY:${KERNEL_PACKAGE_NAME} = "1"
+ALLOW_EMPTY:${KERNEL_PACKAGE_NAME}-base = "1"
+ALLOW_EMPTY:${KERNEL_PACKAGE_NAME}-image = "1"
+ALLOW_EMPTY:${KERNEL_PACKAGE_NAME}-modules = "1"
+DESCRIPTION:${KERNEL_PACKAGE_NAME}-modules = "Kernel modules meta package"
+
+pkg_postinst:${KERNEL_PACKAGE_NAME}-base () {
+	if [ ! -e "$D/lib/modules/${KERNEL_VERSION}" ]; then
+		mkdir -p $D/lib/modules/${KERNEL_VERSION}
+	fi
+	if [ -n "$D" ]; then
+		depmodwrapper -a -b $D ${KERNEL_VERSION}
+	else
+		depmod -a ${KERNEL_VERSION}
+	fi
+}
+
+PACKAGESPLITFUNCS:prepend = "split_kernel_packages "
+
+python split_kernel_packages () {
+    do_split_packages(d, root='${nonarch_base_libdir}/firmware', file_regex=r'^(.*)\.(bin|fw|cis|csp|dsp)$', output_pattern='${KERNEL_PACKAGE_NAME}-firmware-%s', description='Firmware for %s', recursive=True, extra_depends='')
+}
+
+# Many scripts want to look in arch/$arch/boot for the bootable
+# image. This poses a problem for vmlinux and vmlinuz based
+# booting. This task arranges to have vmlinux and vmlinuz appear
+# in the normalized directory location.
+do_kernel_link_images() {
+	if [ ! -d "${B}/arch/${ARCH}/boot" ]; then
+		mkdir ${B}/arch/${ARCH}/boot
+	fi
+	cd ${B}/arch/${ARCH}/boot
+	ln -sf ../../../vmlinux
+	if [ -f ../../../vmlinuz ]; then
+		ln -sf ../../../vmlinuz
+	fi
+	if [ -f ../../../vmlinuz.bin ]; then
+		ln -sf ../../../vmlinuz.bin
+	fi
+	if [ -f ../../../vmlinux.64 ]; then
+		ln -sf ../../../vmlinux.64
+	fi
+}
+addtask kernel_link_images after do_compile before do_strip
+
+python do_strip() {
+    import shutil
+
+    strip = d.getVar('STRIP')
+    extra_sections = d.getVar('KERNEL_IMAGE_STRIP_EXTRA_SECTIONS')
+    kernel_image = d.getVar('B') + "/" + d.getVar('KERNEL_OUTPUT_DIR') + "/vmlinux"
+
+    if (extra_sections and kernel_image.find(d.getVar('KERNEL_IMAGEDEST') + '/vmlinux') != -1):
+        kernel_image_stripped = kernel_image + ".stripped"
+        shutil.copy2(kernel_image, kernel_image_stripped)
+        oe.package.runstrip((kernel_image_stripped, 8, strip, extra_sections))
+        bb.debug(1, "KERNEL_IMAGE_STRIP_EXTRA_SECTIONS is set, stripping sections: " + \
+            extra_sections)
+}
+do_strip[dirs] = "${B}"
+
+addtask strip before do_sizecheck after do_kernel_link_images
+
+# Support checking the kernel size since some kernels need to reside in partitions
+# with a fixed length, or there is a limit on transferring the kernel to memory.
+# If more than one image type is enabled, warn on any that don't fit but only fail
+# if none fit.
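+# For illustration, a machine configuration could set KERNEL_IMAGE_MAXSIZE = "8192"
+# (kilobytes): oversized image types only produce a warning, and the build fails
+# only if no enabled image type fits within that limit.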
+do_sizecheck() {
+	if [ ! -z "${KERNEL_IMAGE_MAXSIZE}" ]; then
+		invalid=`echo ${KERNEL_IMAGE_MAXSIZE} | sed 's/[0-9]//g'`
+		if [ -n "$invalid" ]; then
+			die "Invalid KERNEL_IMAGE_MAXSIZE: ${KERNEL_IMAGE_MAXSIZE}, should be an integer (The unit is Kbytes)"
+		fi
+		at_least_one_fits=
+		for imageType in ${KERNEL_IMAGETYPES} ; do
+			size=`du -ks ${B}/${KERNEL_OUTPUT_DIR}/$imageType | awk '{print $1}'`
+			if [ $size -gt ${KERNEL_IMAGE_MAXSIZE} ]; then
+				bbwarn "This kernel $imageType (size=$size(K) > ${KERNEL_IMAGE_MAXSIZE}(K)) is too big for your device."
+			else
+				at_least_one_fits=y
+			fi
+		done
+		if [ -z "$at_least_one_fits" ]; then
+			die "All kernel images are too big for your device. Please reduce the size of the kernel by making more of it modular."
+		fi
+	fi
+}
+do_sizecheck[dirs] = "${B}"
+
+addtask sizecheck before do_install after do_strip
+
+inherit kernel-artifact-names
+
+kernel_do_deploy() {
+	deployDir="${DEPLOYDIR}"
+	if [ -n "${KERNEL_DEPLOYSUBDIR}" ]; then
+		deployDir="${DEPLOYDIR}/${KERNEL_DEPLOYSUBDIR}"
+		mkdir "$deployDir"
+	fi
+
+	for imageType in ${KERNEL_IMAGETYPES} ; do
+		baseName=$imageType-${KERNEL_IMAGE_NAME}
+
+		if [ -s ${KERNEL_OUTPUT_DIR}/$imageType.stripped ] ; then
+			install -m 0644 ${KERNEL_OUTPUT_DIR}/$imageType.stripped $deployDir/$baseName${KERNEL_IMAGE_BIN_EXT}
+		else
+			install -m 0644 ${KERNEL_OUTPUT_DIR}/$imageType $deployDir/$baseName${KERNEL_IMAGE_BIN_EXT}
+		fi
+		if [ -n "${KERNEL_IMAGE_LINK_NAME}" ] ; then
+			ln -sf $baseName${KERNEL_IMAGE_BIN_EXT} $deployDir/$imageType-${KERNEL_IMAGE_LINK_NAME}${KERNEL_IMAGE_BIN_EXT}
+		fi
+		if [ "${KERNEL_IMAGETYPE_SYMLINK}" = "1" ] ; then
+			ln -sf $baseName${KERNEL_IMAGE_BIN_EXT} $deployDir/$imageType
+		fi
+	done
+
+	if [ ${MODULE_TARBALL_DEPLOY} = "1" ] && (grep -q -i -e '^CONFIG_MODULES=y$' .config); then
+		mkdir -p ${D}${root_prefix}/lib
+		if [ -n "${SOURCE_DATE_EPOCH}" ]; then
+			TAR_ARGS="--sort=name --clamp-mtime --mtime=@${SOURCE_DATE_EPOCH}"
+		else
+			TAR_ARGS=""
+		fi
+		TAR_ARGS="$TAR_ARGS --owner=0 --group=0"
+		tar $TAR_ARGS -cv -C ${D}${root_prefix} lib | gzip -9n > $deployDir/modules-${MODULE_TARBALL_NAME}.tgz
+
+		if [ -n "${MODULE_TARBALL_LINK_NAME}" ] ; then
+			ln -sf modules-${MODULE_TARBALL_NAME}.tgz $deployDir/modules-${MODULE_TARBALL_LINK_NAME}.tgz
+		fi
+	fi
+
+	if [ ! -z "${INITRAMFS_IMAGE}" -a x"${INITRAMFS_IMAGE_BUNDLE}" = x1 ]; then
+		for imageType in ${KERNEL_IMAGETYPES} ; do
+			if [ "$imageType" = "fitImage" ] ; then
+				continue
+			fi
+			initramfsBaseName=$imageType-${INITRAMFS_NAME}
+			install -m 0644 ${KERNEL_OUTPUT_DIR}/$imageType.initramfs $deployDir/$initramfsBaseName${KERNEL_IMAGE_BIN_EXT}
+			if [ -n "${INITRAMFS_LINK_NAME}" ] ; then
+				ln -sf $initramfsBaseName${KERNEL_IMAGE_BIN_EXT} $deployDir/$imageType-${INITRAMFS_LINK_NAME}${KERNEL_IMAGE_BIN_EXT}
+			fi
+		done
+	fi
+}
+
+# We deploy to filenames that include PKGV and PKGR, read the saved data to
+# ensure we get the right values for both
+do_deploy[prefuncs] += "read_subpackage_metadata"
+
+addtask deploy after do_populate_sysroot do_packagedata
+
+EXPORT_FUNCTIONS do_deploy
+
+# Add Device Tree support
+inherit kernel-devicetree
diff --git a/poky/meta/classes-recipe/kernelsrc.bbclass b/poky/meta/classes-recipe/kernelsrc.bbclass
new file mode 100644
index 0000000..a32882a
--- /dev/null
+++ b/poky/meta/classes-recipe/kernelsrc.bbclass
@@ -0,0 +1,16 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+S = "${STAGING_KERNEL_DIR}"
+deltask do_fetch
+deltask do_unpack
+do_patch[depends] += "virtual/kernel:do_shared_workdir"
+do_patch[noexec] = "1"
+do_package[depends] += "virtual/kernel:do_populate_sysroot"
+KERNEL_VERSION = "${@get_kernelversion_file("${STAGING_KERNEL_BUILDDIR}")}"
+
+inherit linux-kernel-base
+
diff --git a/poky/meta/classes-recipe/lib_package.bbclass b/poky/meta/classes-recipe/lib_package.bbclass
new file mode 100644
index 0000000..6d11015
--- /dev/null
+++ b/poky/meta/classes-recipe/lib_package.bbclass
@@ -0,0 +1,12 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+#
+# ${PN}-bin is defined in bitbake.conf
+#
+# We need to allow the other packages to be greedy with what they
+# want out of /usr/bin and /usr/sbin before ${PN}-bin gets greedy.
+# 
+PACKAGE_BEFORE_PN = "${PN}-bin"
diff --git a/poky/meta/classes-recipe/libc-package.bbclass b/poky/meta/classes-recipe/libc-package.bbclass
new file mode 100644
index 0000000..de3d422
--- /dev/null
+++ b/poky/meta/classes-recipe/libc-package.bbclass
@@ -0,0 +1,390 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# This class knows how to package up [e]glibc. It's shared since prebuilt binary toolchains
+# may need packaging and it's pointless to duplicate this code.
+#
+# Caller should set GLIBC_INTERNAL_USE_BINARY_LOCALE to one of:
+#  "compile" - Use QEMU to generate the binary locale files
+#  "precompiled" - The binary locale files are pregenerated and already present
+#  "ondevice" - The device will build the locale files upon first boot through the postinst
+
+GLIBC_INTERNAL_USE_BINARY_LOCALE ?= "ondevice"
+
+GLIBC_SPLIT_LC_PACKAGES ?= "0"
+
+python __anonymous () {
+    enabled = d.getVar("ENABLE_BINARY_LOCALE_GENERATION")
+
+    pn = d.getVar("PN")
+    if pn.endswith("-initial"):
+        enabled = False
+
+    if enabled and int(enabled):
+        import re
+
+        target_arch = d.getVar("TARGET_ARCH")
+        binary_arches = d.getVar("BINARY_LOCALE_ARCHES") or ""
+        use_cross_localedef = d.getVar("LOCALE_GENERATION_WITH_CROSS-LOCALEDEF") or ""
+
+        for regexp in binary_arches.split(" "):
+            r = re.compile(regexp)
+
+            if r.match(target_arch):
+                depends = d.getVar("DEPENDS")
+                if use_cross_localedef == "1" :
+                    depends = "%s cross-localedef-native" % depends
+                else:
+                    depends = "%s qemu-native" % depends
+                d.setVar("DEPENDS", depends)
+                d.setVar("GLIBC_INTERNAL_USE_BINARY_LOCALE", "compile")
+                break
+}
+
+# Try to fix compile failures when charsets/locales/locale-code are disabled
+PACKAGE_NO_GCONV ?= "0"
+
+OVERRIDES:append = ":${TARGET_ARCH}-${TARGET_OS}"
+
+locale_base_postinst_ontarget() {
+localedef --inputfile=${datadir}/i18n/locales/%s --charmap=%s %s
+}
+
+locale_base_postrm() {
+#!/bin/sh
+localedef --delete-from-archive --inputfile=${datadir}/locales/%s --charmap=%s %s
+}
+
+LOCALETREESRC ?= "${PKGD}"
+
+do_prep_locale_tree() {
+	treedir=${WORKDIR}/locale-tree
+	rm -rf $treedir
+	mkdir -p $treedir/${base_bindir} $treedir/${base_libdir} $treedir/${datadir} $treedir/${localedir}
+	tar -cf - -C ${LOCALETREESRC}${datadir} -p i18n | tar -xf - -C $treedir/${datadir}
+	# unzip to avoid parsing errors
+	for i in $treedir/${datadir}/i18n/charmaps/*gz; do 
+		gunzip $i
+	done
+	# The extract pattern "./l*.so*" is carefully selected so that it will
+	# match ld*.so and lib*.so*, but not any files in the gconv directory
+	# (if it exists). This makes sure we only unpack the files we need.
+	# This is important in case usrmerge is set in DISTRO_FEATURES, which
+	# means ${base_libdir} == ${libdir}.
+	tar -cf - -C ${LOCALETREESRC}${base_libdir} -p . | tar -xf - -C $treedir/${base_libdir} --wildcards './l*.so*'
+	if [ -f ${STAGING_LIBDIR_NATIVE}/libgcc_s.* ]; then
+		tar -cf - -C ${STAGING_LIBDIR_NATIVE} -p libgcc_s.* | tar -xf - -C $treedir/${base_libdir}
+	fi
+	install -m 0755 ${LOCALETREESRC}${bindir}/localedef $treedir/${base_bindir}
+}
+
+do_collect_bins_from_locale_tree() {
+	treedir=${WORKDIR}/locale-tree
+
+	parent=$(dirname ${localedir})
+	mkdir -p ${PKGD}/$parent
+	tar -cf - -C $treedir/$parent -p $(basename ${localedir}) | tar -xf - -C ${PKGD}$parent
+
+	# Finalize tree by changing all duplicate files into hard links
+	cross-localedef-hardlink -c -v ${WORKDIR}/locale-tree
+}
+
+inherit qemu
+
+python package_do_split_gconvs () {
+    import re
+    if (d.getVar('PACKAGE_NO_GCONV') == '1'):
+        bb.note("package requested not splitting gconvs")
+        return
+
+    if not d.getVar('PACKAGES'):
+        return
+
+    mlprefix = d.getVar("MLPREFIX") or ""
+
+    bpn = d.getVar('BPN')
+    libdir = d.getVar('libdir')
+    if not libdir:
+        bb.error("libdir not defined")
+        return
+    datadir = d.getVar('datadir')
+    if not datadir:
+        bb.error("datadir not defined")
+        return
+
+    gconv_libdir = oe.path.join(libdir, "gconv")
+    charmap_dir = oe.path.join(datadir, "i18n", "charmaps")
+    locales_dir = oe.path.join(datadir, "i18n", "locales")
+    binary_locales_dir = d.getVar('localedir')
+
+    def calc_gconv_deps(fn, pkg, file_regex, output_pattern, group):
+        deps = []
+        f = open(fn, "rb")
+        c_re = re.compile(r'^copy "(.*)"')
+        i_re = re.compile(r'^include "(\w+)".*')
+        for l in f.readlines():
+            l = l.decode("latin-1")
+            m = c_re.match(l) or i_re.match(l)
+            if m:
+                dp = legitimize_package_name('%s%s-gconv-%s' % (mlprefix, bpn, m.group(1)))
+                if not dp in deps:
+                    deps.append(dp)
+        f.close()
+        if deps != []:
+            d.setVar('RDEPENDS:%s' % pkg, " ".join(deps))
+        if bpn != 'glibc':
+            d.setVar('RPROVIDES:%s' % pkg, pkg.replace(bpn, 'glibc'))
+
+    do_split_packages(d, gconv_libdir, file_regex=r'^(.*)\.so$', output_pattern=bpn+'-gconv-%s', \
+        description='gconv module for character set %s', hook=calc_gconv_deps, \
+        extra_depends=bpn+'-gconv')
+
+    def calc_charmap_deps(fn, pkg, file_regex, output_pattern, group):
+        deps = []
+        f = open(fn, "rb")
+        c_re = re.compile(r'^copy "(.*)"')
+        i_re = re.compile(r'^include "(\w+)".*')
+        for l in f.readlines():
+            l = l.decode("latin-1")
+            m = c_re.match(l) or i_re.match(l)
+            if m:
+                dp = legitimize_package_name('%s%s-charmap-%s' % (mlprefix, bpn, m.group(1)))
+                if not dp in deps:
+                    deps.append(dp)
+        f.close()
+        if deps != []:
+            d.setVar('RDEPENDS:%s' % pkg, " ".join(deps))
+        if bpn != 'glibc':
+            d.setVar('RPROVIDES:%s' % pkg, pkg.replace(bpn, 'glibc'))
+
+    do_split_packages(d, charmap_dir, file_regex=r'^(.*)\.gz$', output_pattern=bpn+'-charmap-%s', \
+        description='character map for %s encoding', hook=calc_charmap_deps, extra_depends='')
+
+    def calc_locale_deps(fn, pkg, file_regex, output_pattern, group):
+        deps = []
+        f = open(fn, "rb")
+        c_re = re.compile(r'^copy "(.*)"')
+        i_re = re.compile(r'^include "(\w+)".*')
+        for l in f.readlines():
+            l = l.decode("latin-1")
+            m = c_re.match(l) or i_re.match(l)
+            if m:
+                dp = legitimize_package_name(mlprefix+bpn+'-localedata-%s' % m.group(1))
+                if not dp in deps:
+                    deps.append(dp)
+        f.close()
+        if deps != []:
+            d.setVar('RDEPENDS:%s' % pkg, " ".join(deps))
+        if bpn != 'glibc':
+            d.setVar('RPROVIDES:%s' % pkg, pkg.replace(bpn, 'glibc'))
+
+    do_split_packages(d, locales_dir, file_regex=r'(.*)', output_pattern=bpn+'-localedata-%s', \
+        description='locale definition for %s', hook=calc_locale_deps, extra_depends='')
+    d.setVar('PACKAGES', d.getVar('PACKAGES', False) + ' ' + d.getVar('MLPREFIX', False) + bpn + '-gconv')
+
+    use_bin = d.getVar("GLIBC_INTERNAL_USE_BINARY_LOCALE")
+
+    dot_re = re.compile(r"(.*)\.(.*)")
+
+    # Read in supported locales and associated encodings
+    supported = {}
+    with open(oe.path.join(d.getVar('WORKDIR'), "SUPPORTED")) as f:
+        for line in f.readlines():
+            try:
+                locale, charset = line.rstrip().split()
+            except ValueError:
+                continue
+            supported[locale] = charset
+
+    # The GLIBC_GENERATE_LOCALES variable specifies which locales are to be generated. Empty or "all" means all locales
+    to_generate = d.getVar('GLIBC_GENERATE_LOCALES')
+    if not to_generate or to_generate == 'all':
+        to_generate = sorted(supported.keys())
+    else:
+        to_generate = to_generate.split()
+        for locale in to_generate:
+            if locale not in supported:
+                if '.' in locale:
+                    charset = locale.split('.')[1]
+                else:
+                    charset = 'UTF-8'
+                    bb.warn("Unsupported locale '%s', assuming encoding '%s'" % (locale, charset))
+                supported[locale] = charset
+
+    def output_locale_source(name, pkgname, locale, encoding):
+        d.setVar('RDEPENDS:%s' % pkgname, '%slocaledef %s-localedata-%s %s-charmap-%s' % \
+        (mlprefix, mlprefix+bpn, legitimize_package_name(locale), mlprefix+bpn, legitimize_package_name(encoding)))
+        d.setVar('pkg_postinst_ontarget:%s' % pkgname, d.getVar('locale_base_postinst_ontarget') \
+        % (locale, encoding, locale))
+        d.setVar('pkg_postrm:%s' % pkgname, d.getVar('locale_base_postrm') % \
+        (locale, encoding, locale))
+
+    def output_locale_binary_rdepends(name, pkgname, locale, encoding):
+        dep = legitimize_package_name('%s-binary-localedata-%s' % (bpn, name))
+        lcsplit = d.getVar('GLIBC_SPLIT_LC_PACKAGES')
+        if lcsplit and int(lcsplit):
+            d.appendVar('PACKAGES', ' ' + dep)
+            d.setVar('ALLOW_EMPTY:%s' % dep, '1')
+        d.setVar('RDEPENDS:%s' % pkgname, mlprefix + dep)
+
+    commands = {}
+
+    def output_locale_binary(name, pkgname, locale, encoding):
+        treedir = oe.path.join(d.getVar("WORKDIR"), "locale-tree")
+        ldlibdir = oe.path.join(treedir, d.getVar("base_libdir"))
+        path = d.getVar("PATH")
+        i18npath = oe.path.join(treedir, datadir, "i18n")
+        gconvpath = oe.path.join(treedir, "iconvdata")
+        outputpath = oe.path.join(treedir, binary_locales_dir)
+
+        use_cross_localedef = d.getVar("LOCALE_GENERATION_WITH_CROSS-LOCALEDEF") or "0"
+        if use_cross_localedef == "1":
+            target_arch = d.getVar('TARGET_ARCH')
+            locale_arch_options = { \
+                "arc":     " --uint32-align=4 --little-endian ", \
+                "arceb":   " --uint32-align=4 --big-endian ",    \
+                "arm":     " --uint32-align=4 --little-endian ", \
+                "armeb":   " --uint32-align=4 --big-endian ",    \
+                "aarch64": " --uint32-align=4 --little-endian ",    \
+                "aarch64_be": " --uint32-align=4 --big-endian ",    \
+                "sh4":     " --uint32-align=4 --big-endian ",    \
+                "powerpc": " --uint32-align=4 --big-endian ",    \
+                "powerpc64": " --uint32-align=4 --big-endian ",  \
+                "powerpc64le": " --uint32-align=4 --little-endian ",  \
+                "mips":    " --uint32-align=4 --big-endian ",    \
+                "mipsisa32r6":    " --uint32-align=4 --big-endian ",    \
+                "mips64":  " --uint32-align=4 --big-endian ",    \
+                "mipsisa64r6":  " --uint32-align=4 --big-endian ",    \
+                "mipsel":  " --uint32-align=4 --little-endian ", \
+                "mipsisa32r6el":  " --uint32-align=4 --little-endian ", \
+                "mips64el":" --uint32-align=4 --little-endian ", \
+                "mipsisa64r6el":" --uint32-align=4 --little-endian ", \
+                "riscv64": " --uint32-align=4 --little-endian ", \
+                "riscv32": " --uint32-align=4 --little-endian ", \
+                "i586":    " --uint32-align=4 --little-endian ", \
+                "i686":    " --uint32-align=4 --little-endian ", \
+                "x86_64":  " --uint32-align=4 --little-endian "  }
+
+            if target_arch in locale_arch_options:
+                localedef_opts = locale_arch_options[target_arch]
+            else:
+                bb.error("locale_arch_options not found for target_arch=" + target_arch)
+                bb.fatal("unknown arch:" + target_arch + " for locale_arch_options")
+
+            localedef_opts += " --force --no-hard-links --no-archive --prefix=%s \
+                --inputfile=%s/%s/i18n/locales/%s --charmap=%s %s/%s" \
+                % (treedir, treedir, datadir, locale, encoding, outputpath, name)
+
+            cmd = "PATH=\"%s\" I18NPATH=\"%s\" GCONV_PATH=\"%s\" cross-localedef %s" % \
+                (path, i18npath, gconvpath, localedef_opts)
+        else: # earlier, slower qemu way
+            qemu = qemu_target_binary(d) 
+            localedef_opts = "--force --no-hard-links --no-archive --prefix=%s \
+                --inputfile=%s/i18n/locales/%s --charmap=%s %s" \
+                % (treedir, datadir, locale, encoding, name)
+
+            qemu_options = d.getVar('QEMU_OPTIONS')
+
+            cmd = "PSEUDO_RELOADED=YES PATH=\"%s\" I18NPATH=\"%s\" %s -L %s \
+                -E LD_LIBRARY_PATH=%s %s %s${base_bindir}/localedef %s" % \
+                (path, i18npath, qemu, treedir, ldlibdir, qemu_options, treedir, localedef_opts)
+
+        commands["%s/%s" % (outputpath, name)] = cmd
+
+        bb.note("generating locale %s (%s)" % (locale, encoding))
+
+    def output_locale(name, locale, encoding):
+        pkgname = d.getVar('MLPREFIX', False) + 'locale-base-' + legitimize_package_name(name)
+        d.setVar('ALLOW_EMPTY:%s' % pkgname, '1')
+        d.setVar('PACKAGES', '%s %s' % (pkgname, d.getVar('PACKAGES')))
+        rprovides = ' %svirtual-locale-%s' % (mlprefix, legitimize_package_name(name))
+        m = re.match(r"(.*)_(.*)", name)
+        if m:
+            rprovides += ' %svirtual-locale-%s' % (mlprefix, m.group(1))
+        d.setVar('RPROVIDES:%s' % pkgname, rprovides)
+
+        if use_bin == "compile":
+            output_locale_binary_rdepends(name, pkgname, locale, encoding)
+            output_locale_binary(name, pkgname, locale, encoding)
+        elif use_bin == "precompiled":
+            output_locale_binary_rdepends(name, pkgname, locale, encoding)
+        else:
+            output_locale_source(name, pkgname, locale, encoding)
+
+    if use_bin == "compile":
+        bb.note("preparing tree for binary locale generation")
+        bb.build.exec_func("do_prep_locale_tree", d)
+
+    utf8_only = int(d.getVar('LOCALE_UTF8_ONLY') or 0)
+    utf8_is_default = int(d.getVar('LOCALE_UTF8_IS_DEFAULT') or 0)
+
+    encodings = {}
+    for locale in to_generate:
+        charset = supported[locale]
+        if utf8_only and charset != 'UTF-8':
+            continue
+
+        m = dot_re.match(locale)
+        if m:
+            base = m.group(1)
+        else:
+            base = locale
+
+        # Non-precompiled locales may be renamed so that the default
+        # (non-suffixed) encoding is always UTF-8, i.e., instead of en_US and
+        # en_US.UTF-8, we have en_US and en_US.ISO-8859-1. This implicitly
+        # contradicts SUPPORTED.
+        if use_bin == "precompiled" or not utf8_is_default:
+            output_locale(locale, base, charset)
+        else:
+            if charset == 'UTF-8':
+                output_locale(base, base, charset)
+            else:
+                output_locale('%s.%s' % (base, charset), base, charset)
+
+    def metapkg_hook(file, pkg, pattern, format, basename):
+        name = basename.split('/', 1)[0]
+        metapkg = legitimize_package_name('%s-binary-localedata-%s' % (mlprefix+bpn, name))
+        d.appendVar('RDEPENDS:%s' % metapkg, ' ' + pkg)
+
+    if use_bin == "compile":
+        makefile = oe.path.join(d.getVar("WORKDIR"), "locale-tree", "Makefile")
+        with open(makefile, "w") as m:
+            m.write("all: %s\n\n" % " ".join(commands.keys()))
+            total = len(commands)
+            for i, (maketarget, makerecipe) in enumerate(commands.items()):
+                m.write(maketarget + ":\n")
+                m.write("\t@echo 'Progress %d/%d'\n" % (i, total))
+                m.write("\t" + makerecipe + "\n\n")
+        d.setVar("EXTRA_OEMAKE", "-C %s ${PARALLEL_MAKE}" % (os.path.dirname(makefile)))
+        d.setVarFlag("oe_runmake", "progress", r"outof:Progress\s(\d+)/(\d+)")
+        bb.note("Executing binary locale generation makefile")
+        bb.build.exec_func("oe_runmake", d)
+        bb.note("collecting binary locales from locale tree")
+        bb.build.exec_func("do_collect_bins_from_locale_tree", d)
+
+    if use_bin in ('compile', 'precompiled'):
+        lcsplit = d.getVar('GLIBC_SPLIT_LC_PACKAGES')
+        if lcsplit and int(lcsplit):
+            do_split_packages(d, binary_locales_dir, file_regex=r'^(.*/LC_\w+)', \
+                output_pattern=bpn+'-binary-localedata-%s', \
+                description='binary locale definition for %s', recursive=True,
+                hook=metapkg_hook, extra_depends='', allow_dirs=True, match_path=True)
+        else:
+            do_split_packages(d, binary_locales_dir, file_regex=r'(.*)', \
+                output_pattern=bpn+'-binary-localedata-%s', \
+                description='binary locale definition for %s', extra_depends='', allow_dirs=True)
+    else:
+        bb.note("generation of binary locales disabled. this may break i18n!")
+
+}
+
+# We want to do this indirection so that we can safely 'return'
+# from the called function even though we're prepending
+python populate_packages:prepend () {
+    bb.build.exec_func('package_do_split_gconvs', d)
+}
diff --git a/poky/meta/classes-recipe/license_image.bbclass b/poky/meta/classes-recipe/license_image.bbclass
new file mode 100644
index 0000000..b60d6e4
--- /dev/null
+++ b/poky/meta/classes-recipe/license_image.bbclass
@@ -0,0 +1,295 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+ROOTFS_LICENSE_DIR = "${IMAGE_ROOTFS}/usr/share/common-licenses"
+
+# This requires LICENSE_CREATE_PACKAGE=1 to work too
+COMPLEMENTARY_GLOB[lic-pkgs] = "*-lic"
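+# For illustration, an image would typically opt in with LICENSE_CREATE_PACKAGE = "1"
+# and IMAGE_FEATURES += "lic-pkgs" in its configuration.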
+
+python() {
+    if not oe.data.typed_value('LICENSE_CREATE_PACKAGE', d):
+        features = set(oe.data.typed_value('IMAGE_FEATURES', d))
+        if 'lic-pkgs' in features:
+            bb.error("'lic-pkgs' in IMAGE_FEATURES but LICENSE_CREATE_PACKAGE not enabled to generate -lic packages")
+}
+
+python write_package_manifest() {
+    # Get list of installed packages
+    license_image_dir = d.expand('${LICENSE_DIRECTORY}/${IMAGE_NAME}')
+    bb.utils.mkdirhier(license_image_dir)
+    from oe.rootfs import image_list_installed_packages
+    from oe.utils import format_pkg_list
+
+    pkgs = image_list_installed_packages(d)
+    output = format_pkg_list(pkgs)
+    with open(os.path.join(license_image_dir, 'package.manifest'), "w+") as package_manifest:
+        package_manifest.write(output)
+}
+
+python license_create_manifest() {
+    import oe.packagedata
+    from oe.rootfs import image_list_installed_packages
+
+    build_images_from_feeds = d.getVar('BUILD_IMAGES_FROM_FEEDS')
+    if build_images_from_feeds == "1":
+        return 0
+
+    pkg_dic = {}
+    for pkg in sorted(image_list_installed_packages(d)):
+        pkg_info = os.path.join(d.getVar('PKGDATA_DIR'),
+                                'runtime-reverse', pkg)
+        pkg_name = os.path.basename(os.readlink(pkg_info))
+
+        pkg_dic[pkg_name] = oe.packagedata.read_pkgdatafile(pkg_info)
+        if not "LICENSE" in pkg_dic[pkg_name].keys():
+            pkg_lic_name = "LICENSE:" + pkg_name
+            pkg_dic[pkg_name]["LICENSE"] = pkg_dic[pkg_name][pkg_lic_name]
+
+    rootfs_license_manifest = os.path.join(d.getVar('LICENSE_DIRECTORY'),
+                        d.getVar('IMAGE_NAME'), 'license.manifest')
+    write_license_files(d, rootfs_license_manifest, pkg_dic, rootfs=True)
+}
+
+def write_license_files(d, license_manifest, pkg_dic, rootfs=True):
+    import re
+    import stat
+
+    bad_licenses = (d.getVar("INCOMPATIBLE_LICENSE") or "").split()
+    bad_licenses = expand_wildcard_licenses(d, bad_licenses)
+
+    exceptions = (d.getVar("INCOMPATIBLE_LICENSE_EXCEPTIONS") or "").split()
+    with open(license_manifest, "w") as license_file:
+        for pkg in sorted(pkg_dic):
+            remaining_bad_licenses = oe.license.apply_pkg_license_exception(pkg, bad_licenses, exceptions)
+            incompatible_licenses = incompatible_pkg_license(d, remaining_bad_licenses, pkg_dic[pkg]["LICENSE"])
+            if incompatible_licenses:
+                bb.fatal("Package %s cannot be installed into the image because it has incompatible license(s): %s" %(pkg, ' '.join(incompatible_licenses)))
+            else:
+                incompatible_licenses = incompatible_pkg_license(d, bad_licenses, pkg_dic[pkg]["LICENSE"])
+                if incompatible_licenses:
+                    oe.qa.handle_error('license-incompatible', "Including %s with incompatible license(s) %s into the image, because it has been allowed by exception list." %(pkg, ' '.join(incompatible_licenses)), d)
+            try:
+                (pkg_dic[pkg]["LICENSE"], pkg_dic[pkg]["LICENSES"]) = \
+                    oe.license.manifest_licenses(pkg_dic[pkg]["LICENSE"],
+                    remaining_bad_licenses, canonical_license, d)
+            except oe.license.LicenseError as exc:
+                bb.fatal('%s: %s' % (d.getVar('P'), exc))
+
+            if not "IMAGE_MANIFEST" in pkg_dic[pkg]:
+                # Rootfs manifest
+                license_file.write("PACKAGE NAME: %s\n" % pkg)
+                license_file.write("PACKAGE VERSION: %s\n" % pkg_dic[pkg]["PV"])
+                license_file.write("RECIPE NAME: %s\n" % pkg_dic[pkg]["PN"])
+                license_file.write("LICENSE: %s\n\n" % pkg_dic[pkg]["LICENSE"])
+
+                # If the package doesn't contain any files, that is, its size is 0, the license
+                # isn't relevant as far as the final image is concerned, so doing a license check
+                # doesn't make much sense; skip it.
+                if pkg_dic[pkg]["PKGSIZE:%s" % pkg] == "0":
+                    continue
+            else:
+                # Image manifest
+                license_file.write("RECIPE NAME: %s\n" % pkg_dic[pkg]["PN"])
+                license_file.write("VERSION: %s\n" % pkg_dic[pkg]["PV"])
+                license_file.write("LICENSE: %s\n" % pkg_dic[pkg]["LICENSE"])
+                license_file.write("FILES: %s\n\n" % pkg_dic[pkg]["FILES"])
+
+            for lic in pkg_dic[pkg]["LICENSES"]:
+                lic_file = os.path.join(d.getVar('LICENSE_DIRECTORY'),
+                                        pkg_dic[pkg]["PN"], "generic_%s" % 
+                                        re.sub(r'\+', '', lic))
+                # Explicitly skip the CLOSED license because it isn't a generic license
+                if lic == "CLOSED":
+                   continue
+
+                if not os.path.exists(lic_file):
+                    oe.qa.handle_error('license-file-missing',
+                                       "The license listed %s was not in the "\
+                                       "licenses collected for recipe %s"
+                                       % (lic, pkg_dic[pkg]["PN"]), d)
+    oe.qa.exit_if_errors(d)
+
+    # Two options here:
+    # - Just copy the manifest
+    # - Copy the manifest and the license directories
+    # With both options set we see a 0.5 MB increase in core-image-minimal
+    copy_lic_manifest = d.getVar('COPY_LIC_MANIFEST')
+    copy_lic_dirs = d.getVar('COPY_LIC_DIRS')
+    if rootfs and copy_lic_manifest == "1":
+        rootfs_license_dir = d.getVar('ROOTFS_LICENSE_DIR')
+        bb.utils.mkdirhier(rootfs_license_dir)
+        rootfs_license_manifest = os.path.join(rootfs_license_dir,
+                os.path.split(license_manifest)[1])
+        if not os.path.exists(rootfs_license_manifest):
+            oe.path.copyhardlink(license_manifest, rootfs_license_manifest)
+
+        if copy_lic_dirs == "1":
+            for pkg in sorted(pkg_dic):
+                pkg_rootfs_license_dir = os.path.join(rootfs_license_dir, pkg)
+                bb.utils.mkdirhier(pkg_rootfs_license_dir)
+                pkg_license_dir = os.path.join(d.getVar('LICENSE_DIRECTORY'),
+                                            pkg_dic[pkg]["PN"]) 
+
+                pkg_manifest_licenses = [canonical_license(d, lic) \
+                        for lic in pkg_dic[pkg]["LICENSES"]]
+
+                licenses = os.listdir(pkg_license_dir)
+                for lic in licenses:
+                    pkg_license = os.path.join(pkg_license_dir, lic)
+                    pkg_rootfs_license = os.path.join(pkg_rootfs_license_dir, lic)
+
+                    if re.match(r"^generic_.*$", lic):
+                        generic_lic = canonical_license(d,
+                                re.search(r"^generic_(.*)$", lic).group(1))
+
+                        # Do not copy the generic license into the package if it isn't
+                        # declared in the LICENSES of the package.
+                        if not re.sub(r'\+$', '', generic_lic) in \
+                                [re.sub(r'\+', '', lic) for lic in \
+                                 pkg_manifest_licenses]:
+                            continue
+
+                        if oe.license.license_ok(generic_lic,
+                                bad_licenses) == False:
+                            continue
+
+                        # Make sure we use only canonical name for the license file
+                        generic_lic_file = "generic_%s" % generic_lic
+                        rootfs_license = os.path.join(rootfs_license_dir, generic_lic_file)
+                        if not os.path.exists(rootfs_license):
+                            oe.path.copyhardlink(pkg_license, rootfs_license)
+
+                        if not os.path.exists(pkg_rootfs_license):
+                            os.symlink(os.path.join('..', generic_lic_file), pkg_rootfs_license)
+                    else:
+                        if (oe.license.license_ok(canonical_license(d,
+                                lic), bad_licenses) == False or
+                                os.path.exists(pkg_rootfs_license)):
+                            continue
+
+                        oe.path.copyhardlink(pkg_license, pkg_rootfs_license)
+            # Fixup file ownership and permissions
+            for walkroot, dirs, files in os.walk(rootfs_license_dir):
+                for f in files:
+                    p = os.path.join(walkroot, f)
+                    os.lchown(p, 0, 0)
+                    if not os.path.islink(p):
+                        os.chmod(p, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)
+                for dir in dirs:
+                    p = os.path.join(walkroot, dir)
+                    os.lchown(p, 0, 0)
+                    os.chmod(p, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH)
+
+
+
+def license_deployed_manifest(d):
+    """
+    Write the license manifest for the deployed recipes.
+    The deployed recipes usually include the bootloader
+    and extra files to boot the target.
+    """
+
+    dep_dic = {}
+    man_dic = {}
+    lic_dir = d.getVar("LICENSE_DIRECTORY")
+
+    dep_dic = get_deployed_dependencies(d)
+    for dep in dep_dic.keys():
+        man_dic[dep] = {}
+        # It is necessary to mark that this will be used for the image manifest
+        man_dic[dep]["IMAGE_MANIFEST"] = True
+        man_dic[dep]["PN"] = dep
+        man_dic[dep]["FILES"] = \
+            " ".join(get_deployed_files(dep_dic[dep]))
+        with open(os.path.join(lic_dir, dep, "recipeinfo"), "r") as f:
+            for line in f.readlines():
+                key,val = line.split(": ", 1)
+                man_dic[dep][key] = val[:-1]
+
+    lic_manifest_dir = os.path.join(d.getVar('LICENSE_DIRECTORY'),
+                                    d.getVar('IMAGE_NAME'))
+    bb.utils.mkdirhier(lic_manifest_dir)
+    image_license_manifest = os.path.join(lic_manifest_dir, 'image_license.manifest')
+    write_license_files(d, image_license_manifest, man_dic, rootfs=False)
+
+    link_name = d.getVar('IMAGE_LINK_NAME')
+    if link_name:
+        lic_manifest_symlink_dir = os.path.join(d.getVar('LICENSE_DIRECTORY'),
+                                    link_name)
+        # remove old symlink
+        if os.path.islink(lic_manifest_symlink_dir):
+            os.unlink(lic_manifest_symlink_dir)
+
+        # create the image dir symlink
+        if lic_manifest_dir != lic_manifest_symlink_dir:
+            os.symlink(lic_manifest_dir, lic_manifest_symlink_dir)
+
+def get_deployed_dependencies(d):
+    """
+    Get all the deployed dependencies of an image
+    """
+
+    deploy = {}
+    # Get all the dependencies for the current task (rootfs).
+    taskdata = d.getVar("BB_TASKDEPDATA", False)
+    pn = d.getVar("PN", True)
+    depends = list(set([dep[0] for dep
+                    in list(taskdata.values())
+                    if not dep[0].endswith("-native") and not dep[0] == pn]))
+
+    # To verify what was deployed, this checks the rootfs dependencies against
+    # the SSTATE_MANIFESTS for the "deploy" task.
+    # The manifest file name contains the arch. Because we are not running
+    # in the recipe context it is necessary to check every arch used.
+    sstate_manifest_dir = d.getVar("SSTATE_MANIFESTS")
+    archs = list(set(d.getVar("SSTATE_ARCHS").split()))
+    for dep in depends:
+        for arch in archs:
+            sstate_manifest_file = os.path.join(sstate_manifest_dir,
+                    "manifest-%s-%s.deploy" % (arch, dep))
+            if os.path.exists(sstate_manifest_file):
+                deploy[dep] = sstate_manifest_file
+                break
+
+    return deploy
+get_deployed_dependencies[vardepsexclude] = "BB_TASKDEPDATA"
+
+def get_deployed_files(man_file):
+    """
+    Get the files deployed from the sstate manifest
+    """
+
+    dep_files = []
+    excluded_files = []
+    with open(man_file, "r") as manifest:
+        all_files = manifest.read()
+    for f in all_files.splitlines():
+        if ((not (os.path.islink(f) or os.path.isdir(f))) and
+                not os.path.basename(f) in excluded_files):
+            dep_files.append(os.path.basename(f))
+    return dep_files
+
+ROOTFS_POSTPROCESS_COMMAND:prepend = "write_package_manifest; license_create_manifest; "
+do_rootfs[recrdeptask] += "do_populate_lic"
+
+python do_populate_lic_deploy() {
+    license_deployed_manifest(d)
+    oe.qa.exit_if_errors(d)
+}
+
+addtask populate_lic_deploy before do_build after do_image_complete
+do_populate_lic_deploy[recrdeptask] += "do_populate_lic do_deploy"
+
+python license_qa_dead_symlink() {
+    import os
+
+    for root, dirs, files in os.walk(d.getVar('ROOTFS_LICENSE_DIR')):
+        for file in files:
+            full_path = root + "/" + file
+            if os.path.islink(full_path) and not os.path.exists(full_path):
+                bb.error("broken symlink: " + full_path)
+}
+IMAGE_QA_COMMANDS += "license_qa_dead_symlink"
diff --git a/poky/meta/classes-recipe/linux-dummy.bbclass b/poky/meta/classes-recipe/linux-dummy.bbclass
new file mode 100644
index 0000000..9291533
--- /dev/null
+++ b/poky/meta/classes-recipe/linux-dummy.bbclass
@@ -0,0 +1,31 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+python __anonymous () {
+    if d.getVar('PREFERRED_PROVIDER_virtual/kernel') == 'linux-dummy':
+        # copy parts of the code from kernel.bbclass
+        kname = d.getVar('KERNEL_PACKAGE_NAME') or "kernel"
+
+        # set an empty package of kernel-devicetree
+        d.appendVar('PACKAGES', ' %s-devicetree' % kname)
+        d.setVar('ALLOW_EMPTY:%s-devicetree' % kname, '1')
+
+        # Merge KERNEL_IMAGETYPE and KERNEL_ALT_IMAGETYPE into KERNEL_IMAGETYPES
+        type = d.getVar('KERNEL_IMAGETYPE') or ""
+        alttype = d.getVar('KERNEL_ALT_IMAGETYPE') or ""
+        types = d.getVar('KERNEL_IMAGETYPES') or ""
+        if type not in types.split():
+            types = (type + ' ' + types).strip()
+        if alttype not in types.split():
+            types = (alttype + ' ' + types).strip()
+
+        # set empty packages of kernel-image-*
+        for type in types.split():
+            typelower = type.lower()
+            d.appendVar('PACKAGES', ' %s-image-%s' % (kname, typelower))
+            d.setVar('ALLOW_EMPTY:%s-image-%s' % (kname, typelower), '1')
+}
+
diff --git a/poky/meta/classes-recipe/linux-kernel-base.bbclass b/poky/meta/classes-recipe/linux-kernel-base.bbclass
new file mode 100644
index 0000000..cb2212c
--- /dev/null
+++ b/poky/meta/classes-recipe/linux-kernel-base.bbclass
@@ -0,0 +1,47 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# parse kernel ABI version out of <linux/version.h>
+def get_kernelversion_headers(p):
+    import re
+
+    fn = p + '/include/linux/utsrelease.h'
+    if not os.path.isfile(fn):
+        # after 2.6.33-rc1
+        fn = p + '/include/generated/utsrelease.h'
+    if not os.path.isfile(fn):
+        fn = p + '/include/linux/version.h'
+
+    try:
+        f = open(fn, 'r')
+    except IOError:
+        return None
+
+    l = f.readlines()
+    f.close()
+    r = re.compile("#define UTS_RELEASE \"(.*)\"")
+    for s in l:
+        m = r.match(s)
+        if m:
+            return m.group(1)
+    return None
+
+
+def get_kernelversion_file(p):
+    fn = p + '/kernel-abiversion'
+
+    try:
+        with open(fn, 'r') as f:
+            return f.readlines()[0].strip()
+    except IOError:
+        return None
+
+def linux_module_packages(s, d):
+    suffix = ""
+    return " ".join(map(lambda s: "kernel-module-%s%s" % (s.lower().replace('_', '-').replace('@', '+'), suffix), s.split()))
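+# For illustration: linux_module_packages("snd_usb_audio ath9k", d) maps to
+# "kernel-module-snd-usb-audio kernel-module-ath9k".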
+
+# that's all
+
diff --git a/poky/meta/classes-recipe/linuxloader.bbclass b/poky/meta/classes-recipe/linuxloader.bbclass
new file mode 100644
index 0000000..1dfb95e
--- /dev/null
+++ b/poky/meta/classes-recipe/linuxloader.bbclass
@@ -0,0 +1,82 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+def get_musl_loader_arch(d):
+    import re
+    ldso_arch = "NotSupported"
+
+    targetarch = d.getVar("TARGET_ARCH")
+    if targetarch.startswith("microblaze"):
+        ldso_arch = "microblaze${@bb.utils.contains('TUNE_FEATURES', 'bigendian', '', 'el', d)}"
+    elif targetarch.startswith("mips"):
+        ldso_arch = "mips${ABIEXTENSION}${MIPSPKGSFX_BYTE}${MIPSPKGSFX_R6}${MIPSPKGSFX_ENDIAN}${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}"
+    elif targetarch == "powerpc":
+        ldso_arch = "powerpc${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}"
+    elif targetarch.startswith("powerpc64"):
+        ldso_arch = "powerpc64${@bb.utils.contains('TUNE_FEATURES', 'bigendian', '', 'le', d)}"
+    elif targetarch == "x86_64":
+        ldso_arch = "x86_64"
+    elif re.search("i.86", targetarch):
+        ldso_arch = "i386"
+    elif targetarch.startswith("arm"):
+        ldso_arch = "arm${ARMPKGSFX_ENDIAN}${ARMPKGSFX_EABI}"
+    elif targetarch.startswith("aarch64"):
+        ldso_arch = "aarch64${ARMPKGSFX_ENDIAN_64}"
+    elif targetarch.startswith("riscv64"):
+        ldso_arch = "riscv64${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}"
+    elif targetarch.startswith("riscv32"):
+        ldso_arch = "riscv32${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}"
+    return ldso_arch
+
+def get_musl_loader(d):
+    import re
+    return "/lib/ld-musl-" + get_musl_loader_arch(d) + ".so.1"
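+# For illustration, a little-endian aarch64 musl target resolves to
+# /lib/ld-musl-aarch64.so.1 (ARMPKGSFX_ENDIAN_64 expands to nothing for little endian).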
+
+def get_glibc_loader(d):
+    import re
+
+    dynamic_loader = "NotSupported"
+    targetarch = d.getVar("TARGET_ARCH")
+    if targetarch in ["powerpc", "microblaze"]:
+        dynamic_loader = "${base_libdir}/ld.so.1"
+    elif targetarch in ["mipsisa32r6el", "mipsisa32r6", "mipsisa64r6el", "mipsisa64r6"]:
+        dynamic_loader = "${base_libdir}/ld-linux-mipsn8.so.1"
+    elif targetarch.startswith("mips"):
+        dynamic_loader = "${base_libdir}/ld.so.1"
+    elif targetarch == "powerpc64le":
+        dynamic_loader = "${base_libdir}/ld64.so.2"
+    elif targetarch == "powerpc64":
+        dynamic_loader = "${base_libdir}/ld64.so.1"
+    elif targetarch == "x86_64":
+        dynamic_loader = "${base_libdir}/ld-linux-x86-64.so.2"
+    elif re.search("i.86", targetarch):
+        dynamic_loader = "${base_libdir}/ld-linux.so.2"
+    elif targetarch == "arm":
+        dynamic_loader = "${base_libdir}/ld-linux${@['-armhf', ''][d.getVar('TARGET_FPU') == 'soft']}.so.3"
+    elif targetarch.startswith("aarch64"):
+        dynamic_loader = "${base_libdir}/ld-linux-aarch64${ARMPKGSFX_ENDIAN_64}.so.1"
+    elif targetarch.startswith("riscv64"):
+        dynamic_loader = "${base_libdir}/ld-linux-riscv64-lp64${@['d', ''][d.getVar('TARGET_FPU') == 'soft']}.so.1"
+    elif targetarch.startswith("riscv32"):
+        dynamic_loader = "${base_libdir}/ld-linux-riscv32-ilp32${@['d', ''][d.getVar('TARGET_FPU') == 'soft']}.so.1"
+    return dynamic_loader
+
+def get_linuxloader(d):
+    overrides = d.getVar("OVERRIDES").split(":")
+
+    if "libc-baremetal" in overrides:
+        return "NotSupported"
+
+    if "libc-musl" in overrides:
+        dynamic_loader = get_musl_loader(d)
+    else:
+        dynamic_loader = get_glibc_loader(d)
+    return dynamic_loader
+
+get_linuxloader[vardepvalue] = "${@get_linuxloader(d)}"
+get_musl_loader[vardepvalue] = "${@get_musl_loader(d)}"
+get_musl_loader_arch[vardepvalue] = "${@get_musl_loader_arch(d)}"
+get_glibc_loader[vardepvalue] = "${@get_glibc_loader(d)}"
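+# For illustration, a recipe inheriting this class can reference the loader path
+# inline, e.g. something like APP_LDFLAGS += "-Wl,--dynamic-linker=${@get_linuxloader(d)}"
+# (the variable and linker flag here are only an example, not taken from a specific recipe).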
diff --git a/poky/meta/classes-recipe/live-vm-common.bbclass b/poky/meta/classes-recipe/live-vm-common.bbclass
new file mode 100644
index 0000000..b619f3a
--- /dev/null
+++ b/poky/meta/classes-recipe/live-vm-common.bbclass
@@ -0,0 +1,100 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Some of the variables for the vm and live images conflict; this function
+# is used to fix the problem.
+def set_live_vm_vars(d, suffix):
+    vars = ['GRUB_CFG', 'SYSLINUX_CFG', 'ROOT', 'LABELS', 'INITRD']
+    for var in vars:
+        var_with_suffix = var + '_' + suffix
+        if d.getVar(var):
+            bb.warn('Found potential conflicted var %s, please use %s rather than %s' % \
+                (var, var_with_suffix, var))
+        elif d.getVar(var_with_suffix):
+            d.setVar(var, d.getVar(var_with_suffix))
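+# For example, assuming the live image code passes suffix "LIVE", this copies
+# GRUB_CFG_LIVE into GRUB_CFG when GRUB_CFG is unset, and warns if GRUB_CFG is
+# already set directly.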
+
+
+EFI = "${@bb.utils.contains("MACHINE_FEATURES", "efi", "1", "0", d)}"
+EFI_PROVIDER ?= "grub-efi"
+EFI_CLASS = "${@bb.utils.contains("MACHINE_FEATURES", "efi", "${EFI_PROVIDER}", "", d)}"
+
+MKDOSFS_EXTRAOPTS ??= "-S 512"
+
+# Include legacy boot if MACHINE_FEATURES includes "pcbios" or if it does not
+# contain "efi". This way legacy is supported by default if neither is
+# specified, maintaining the original behavior.
+def pcbios(d):
+    pcbios = bb.utils.contains("MACHINE_FEATURES", "pcbios", "1", "0", d)
+    if pcbios == "0":
+        pcbios = bb.utils.contains("MACHINE_FEATURES", "efi", "0", "1", d)
+    return pcbios
+
+PCBIOS = "${@pcbios(d)}"
+PCBIOS_CLASS = "${@['','syslinux'][d.getVar('PCBIOS') == '1']}"
+
+# efi_populate_common DEST BOOTLOADER
+efi_populate_common() {
+        # DEST must be the root of the image so that EFIDIR is not
+        # nested under a top level directory.
+        DEST=$1
+
+        install -d ${DEST}${EFIDIR}
+
+        install -m 0644 ${DEPLOY_DIR_IMAGE}/$2-${EFI_BOOT_IMAGE} ${DEST}${EFIDIR}/${EFI_BOOT_IMAGE}
+        EFIPATH=$(echo "${EFIDIR}" | sed 's/\//\\/g')
+        printf 'fs0:%s\%s\n' "$EFIPATH" "${EFI_BOOT_IMAGE}" >${DEST}/startup.nsh
+}
+
+efi_iso_populate() {
+        iso_dir=$1
+        efi_populate $iso_dir
+        # Build an EFI directory to create efi.img
+        mkdir -p ${EFIIMGDIR}/${EFIDIR}
+        cp $iso_dir/${EFIDIR}/* ${EFIIMGDIR}${EFIDIR}
+        cp $iso_dir/${KERNEL_IMAGETYPE} ${EFIIMGDIR}
+
+        EFIPATH=$(echo "${EFIDIR}" | sed 's/\//\\/g')
+        printf 'fs0:%s\%s\n' "$EFIPATH" "${EFI_BOOT_IMAGE}" >${EFIIMGDIR}/startup.nsh
+
+        if [ -f "$iso_dir/initrd" ] ; then
+                cp $iso_dir/initrd ${EFIIMGDIR}
+        fi
+}
+
+efi_hddimg_populate() {
+	efi_populate $1
+}
+
+inherit ${EFI_CLASS}
+inherit ${PCBIOS_CLASS}
+
+populate_kernel() {
+	dest=$1
+	install -d $dest
+
+	# Install bzImage, initrd, and rootfs.img in DEST for all loaders to use.
+	bbnote "Trying to install ${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} as $dest/${KERNEL_IMAGETYPE}"
+	if [ -e ${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} ]; then
+		install -m 0644 ${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} $dest/${KERNEL_IMAGETYPE}
+	else
+		bbwarn "${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} doesn't exist"
+	fi
+
+	# initrd is made by concatenating multiple filesystem images
+	if [ -n "${INITRD}" ]; then
+		rm -f $dest/initrd
+		for fs in ${INITRD}
+		do
+			if [ -s "$fs" ]; then
+				cat $fs >> $dest/initrd
+			else
+				bbfatal "$fs is invalid. initrd image creation failed."
+			fi
+		done
+		chmod 0644 $dest/initrd
+	fi
+}
+
diff --git a/poky/meta/classes-recipe/manpages.bbclass b/poky/meta/classes-recipe/manpages.bbclass
new file mode 100644
index 0000000..693fb53
--- /dev/null
+++ b/poky/meta/classes-recipe/manpages.bbclass
@@ -0,0 +1,51 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Inherit this class to enable or disable building and installation of manpages
+# depending on whether 'api-documentation' is in DISTRO_FEATURES. Such building
+# tends to pull in the entire XML stack and other tools, so it's not enabled
+# by default.
+PACKAGECONFIG:append:class-target = " ${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', 'manpages', '', d)}"
+
+inherit qemu
+
+# Manual files are usually packaged in ${PN}-doc, except for man-pages
+MAN_PKG ?= "${PN}-doc"
+
+# only add man-db to RDEPENDS when manual files are built and installed
+RDEPENDS:${MAN_PKG} += "${@bb.utils.contains('PACKAGECONFIG', 'manpages', 'man-db', '', d)}"
+
+pkg_postinst:${MAN_PKG}:append () {
+	# only update manual page index caches when manual files are built and installed
+	if ${@bb.utils.contains('PACKAGECONFIG', 'manpages', 'true', 'false', d)}; then
+		if test -n "$D"; then
+			if ${@bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', 'true', 'false', d)}; then
+				sed "s:\(\s\)/:\1$D/:g" $D${sysconfdir}/man_db.conf | ${@qemu_run_binary(d, '$D', '${bindir}/mandb')} -C - -u -q $D${mandir}
+				chown -R root:root $D${mandir}
+
+				mkdir -p $D${localstatedir}/cache/man
+				cd $D${mandir}
+				find . -name index.db | while read index; do
+					mkdir -p $D${localstatedir}/cache/man/$(dirname ${index})
+					mv ${index} $D${localstatedir}/cache/man/${index}
+					chown man:man $D${localstatedir}/cache/man/${index}
+				done
+				cd -
+			else
+				$INTERCEPT_DIR/postinst_intercept delay_to_first_boot ${PKG} mlprefix=${MLPREFIX}
+			fi
+		else
+			mandb -q
+		fi
+	fi
+}
+
+pkg_postrm:${MAN_PKG}:append () {
+	# only update manual page index caches when manual files are built and installed
+	if ${@bb.utils.contains('PACKAGECONFIG', 'manpages', 'true', 'false', d)}; then
+		mandb -q
+	fi
+}
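
A minimal sketch of how a recipe would opt into this class (hypothetical recipe; the actual configure switches and tool dependencies depend on the package's build system):

    # hypothetical recipe snippet
    inherit manpages
    # map the 'manpages' PACKAGECONFIG onto the package's own configure options
    PACKAGECONFIG[manpages] = "--enable-man,--disable-man,libxslt-native docbook-xsl-stylesheets-native"
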
diff --git a/poky/meta/classes-recipe/meson-routines.bbclass b/poky/meta/classes-recipe/meson-routines.bbclass
new file mode 100644
index 0000000..6086fce
--- /dev/null
+++ b/poky/meta/classes-recipe/meson-routines.bbclass
@@ -0,0 +1,57 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit siteinfo
+
+def meson_array(var, d):
+    items = d.getVar(var).split()
+    return repr(items[0] if len(items) == 1 else items)
+
+# Map our ARCH values to what Meson expects:
+# http://mesonbuild.com/Reference-tables.html#cpu-families
+def meson_cpu_family(var, d):
+    import re
+    arch = d.getVar(var)
+    if arch == 'powerpc':
+        return 'ppc'
+    elif arch == 'powerpc64' or arch == 'powerpc64le':
+        return 'ppc64'
+    elif arch == 'armeb':
+        return 'arm'
+    elif arch == 'aarch64_be':
+        return 'aarch64'
+    elif arch == 'mipsel':
+        return 'mips'
+    elif arch == 'mips64el':
+        return 'mips64'
+    elif re.match(r"i[3-6]86", arch):
+        return "x86"
+    elif arch == "microblazeel":
+        return "microblaze"
+    else:
+        return arch
+
+# Map our OS values to what Meson expects:
+# https://mesonbuild.com/Reference-tables.html#operating-system-names
+def meson_operating_system(var, d):
+    os = d.getVar(var)
+    if "mingw" in os:
+        return "windows"
+    # avoid e.g. 'linux-gnueabi'
+    elif "linux" in os:
+        return "linux"
+    else:
+        return os
+
+def meson_endian(prefix, d):
+    arch, os = d.getVar(prefix + "_ARCH"), d.getVar(prefix + "_OS")
+    sitedata = siteinfo_data_for_machine(arch, os, d)
+    if "endian-little" in sitedata:
+        return "little"
+    elif "endian-big" in sitedata:
+        return "big"
+    else:
+        bb.fatal("Cannot determine endianism for %s-%s" % (arch, os))
diff --git a/poky/meta/classes-recipe/meson.bbclass b/poky/meta/classes-recipe/meson.bbclass
new file mode 100644
index 0000000..765e81b
--- /dev/null
+++ b/poky/meta/classes-recipe/meson.bbclass
@@ -0,0 +1,179 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit python3native meson-routines qemu
+
+DEPENDS:append = " meson-native ninja-native"
+
+EXEWRAPPER_ENABLED:class-native = "False"
+EXEWRAPPER_ENABLED:class-nativesdk = "False"
+EXEWRAPPER_ENABLED ?= "${@bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', 'True', 'False', d)}"
+DEPENDS:append = "${@' qemu-native' if d.getVar('EXEWRAPPER_ENABLED') == 'True' else ''}"
+
+# As Meson enforces out-of-tree builds we can just use cleandirs
+B = "${WORKDIR}/build"
+do_configure[cleandirs] = "${B}"
+
+# Where the meson.build build configuration is
+MESON_SOURCEPATH = "${S}"
+
+def noprefix(var, d):
+    return d.getVar(var).replace(d.getVar('prefix') + '/', '', 1)
+
+MESON_BUILDTYPE ?= "${@oe.utils.vartrue('DEBUG_BUILD', 'debug', 'plain', d)}"
+MESON_BUILDTYPE[vardeps] += "DEBUG_BUILD"
+MESONOPTS = " --prefix ${prefix} \
+              --buildtype ${MESON_BUILDTYPE} \
+              --bindir ${@noprefix('bindir', d)} \
+              --sbindir ${@noprefix('sbindir', d)} \
+              --datadir ${@noprefix('datadir', d)} \
+              --libdir ${@noprefix('libdir', d)} \
+              --libexecdir ${@noprefix('libexecdir', d)} \
+              --includedir ${@noprefix('includedir', d)} \
+              --mandir ${@noprefix('mandir', d)} \
+              --infodir ${@noprefix('infodir', d)} \
+              --sysconfdir ${sysconfdir} \
+              --localstatedir ${localstatedir} \
+              --sharedstatedir ${sharedstatedir} \
+              --wrap-mode nodownload \
+              --native-file ${WORKDIR}/meson.native"
+
+EXTRA_OEMESON:append = " ${PACKAGECONFIG_CONFARGS}"
+
+MESON_CROSS_FILE = ""
+MESON_CROSS_FILE:class-target = "--cross-file ${WORKDIR}/meson.cross"
+MESON_CROSS_FILE:class-nativesdk = "--cross-file ${WORKDIR}/meson.cross"
+
+# Needed to set up qemu wrapper below
+export STAGING_DIR_HOST
+
+def rust_tool(d, target_var):
+    rustc = d.getVar('RUSTC')
+    if not rustc:
+        return ""
+    cmd = [rustc, "--target", d.getVar(target_var)] + d.getVar("RUSTFLAGS").split()
+    return "rust = %s" % repr(cmd)
+
+addtask write_config before do_configure
+do_write_config[vardeps] += "CC CXX LD AR NM STRIP READELF CFLAGS CXXFLAGS LDFLAGS RUSTC RUSTFLAGS"
+do_write_config() {
+    # This needs to be Python to split the args into single-element lists
+    cat >${WORKDIR}/meson.cross <<EOF
+[binaries]
+c = ${@meson_array('CC', d)}
+cpp = ${@meson_array('CXX', d)}
+cython = 'cython3'
+ar = ${@meson_array('AR', d)}
+nm = ${@meson_array('NM', d)}
+strip = ${@meson_array('STRIP', d)}
+readelf = ${@meson_array('READELF', d)}
+objcopy = ${@meson_array('OBJCOPY', d)}
+pkgconfig = 'pkg-config'
+llvm-config = 'llvm-config${LLVMVERSION}'
+cups-config = 'cups-config'
+g-ir-scanner = '${STAGING_BINDIR}/g-ir-scanner-wrapper'
+g-ir-compiler = '${STAGING_BINDIR}/g-ir-compiler-wrapper'
+${@rust_tool(d, "HOST_SYS")}
+${@"exe_wrapper = '${WORKDIR}/meson-qemuwrapper'" if d.getVar('EXEWRAPPER_ENABLED') == 'True' else ""}
+
+[built-in options]
+c_args = ${@meson_array('CFLAGS', d)}
+c_link_args = ${@meson_array('LDFLAGS', d)}
+cpp_args = ${@meson_array('CXXFLAGS', d)}
+cpp_link_args = ${@meson_array('LDFLAGS', d)}
+
+[properties]
+needs_exe_wrapper = true
+
+[host_machine]
+system = '${@meson_operating_system('HOST_OS', d)}'
+cpu_family = '${@meson_cpu_family('HOST_ARCH', d)}'
+cpu = '${HOST_ARCH}'
+endian = '${@meson_endian('HOST', d)}'
+
+[target_machine]
+system = '${@meson_operating_system('TARGET_OS', d)}'
+cpu_family = '${@meson_cpu_family('TARGET_ARCH', d)}'
+cpu = '${TARGET_ARCH}'
+endian = '${@meson_endian('TARGET', d)}'
+EOF
+
+    cat >${WORKDIR}/meson.native <<EOF
+[binaries]
+c = ${@meson_array('BUILD_CC', d)}
+cpp = ${@meson_array('BUILD_CXX', d)}
+cython = 'cython3'
+ar = ${@meson_array('BUILD_AR', d)}
+nm = ${@meson_array('BUILD_NM', d)}
+strip = ${@meson_array('BUILD_STRIP', d)}
+readelf = ${@meson_array('BUILD_READELF', d)}
+objcopy = ${@meson_array('BUILD_OBJCOPY', d)}
+pkgconfig = 'pkg-config-native'
+${@rust_tool(d, "BUILD_SYS")}
+
+[built-in options]
+c_args = ${@meson_array('BUILD_CFLAGS', d)}
+c_link_args = ${@meson_array('BUILD_LDFLAGS', d)}
+cpp_args = ${@meson_array('BUILD_CXXFLAGS', d)}
+cpp_link_args = ${@meson_array('BUILD_LDFLAGS', d)}
+EOF
+}
+
+do_write_config:append:class-target() {
+    # Write out a qemu wrapper that will be used as exe_wrapper so that meson
+    # can run target helper binaries through that.
+    qemu_binary="${@qemu_wrapper_cmdline(d, '$STAGING_DIR_HOST', ['$STAGING_DIR_HOST/${libdir}','$STAGING_DIR_HOST/${base_libdir}'])}"
+    cat > ${WORKDIR}/meson-qemuwrapper << EOF
+#!/bin/sh
+# Use a modules directory which doesn't exist so we don't load random things
+# which may then get deleted (or their dependencies) and potentially segfault
+export GIO_MODULE_DIR=${STAGING_LIBDIR}/gio/modules-dummy
+
+# meson sets this wrongly (only to libs in build-dir), qemu_wrapper_cmdline() and GIR_EXTRA_LIBS_PATH take care of it properly
+unset LD_LIBRARY_PATH
+
+$qemu_binary "\$@"
+EOF
+    chmod +x ${WORKDIR}/meson-qemuwrapper
+}
+
+# Tell externalsrc that changes to this file require a reconfigure
+CONFIGURE_FILES = "meson.build"
+
+meson_do_configure() {
+    # Meson requires this to be 'bfd', 'lld' or 'gold' from 0.53 onwards
+    # https://github.com/mesonbuild/meson/commit/ef9aeb188ea2bc7353e59916c18901cde90fa2b3
+    unset LD
+
+    # Work around "Meson fails if /tmp is mounted with noexec #2972"
+    mkdir -p "${B}/meson-private/tmp"
+    export TMPDIR="${B}/meson-private/tmp"
+    bbnote Executing meson ${EXTRA_OEMESON}...
+    if ! meson ${MESONOPTS} "${MESON_SOURCEPATH}" "${B}" ${MESON_CROSS_FILE} ${EXTRA_OEMESON}; then
+        bbfatal_log meson failed
+    fi
+}
+
+python meson_do_qa_configure() {
+    import re
+    warn_re = re.compile(r"^WARNING: Cross property (.+) is using default value (.+)$", re.MULTILINE)
+    with open(d.expand("${B}/meson-logs/meson-log.txt")) as logfile:
+        log = logfile.read()
+    for (prop, value) in warn_re.findall(log):
+        bb.warn("Meson cross property %s used without explicit assignment, defaulting to %s" % (prop, value))
+}
+do_configure[postfuncs] += "meson_do_qa_configure"
+
+do_compile[progress] = "outof:^\[(\d+)/(\d+)\]\s+"
+meson_do_compile() {
+    ninja -v ${PARALLEL_MAKE}
+}
+
+meson_do_install() {
+    DESTDIR='${D}' ninja -v ${PARALLEL_MAKEINST} install
+}
+
+EXPORT_FUNCTIONS do_configure do_compile do_install
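
For reference, a recipe consuming this class typically only inherits it and passes extra options via EXTRA_OEMESON; a minimal, hypothetical example (source URI and option names are illustrative only):

    # hypothetical recipe snippet
    SRC_URI = "https://example.org/hello-1.0.tar.xz"
    inherit meson pkgconfig
    # forwarded to 'meson setup' on top of MESONOPTS and the cross/native files
    EXTRA_OEMESON = "-Ddocs=false"
    PACKAGECONFIG[gtk] = "-Dgtk=true,-Dgtk=false,gtk+3"
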
diff --git a/poky/meta/classes-recipe/mime-xdg.bbclass b/poky/meta/classes-recipe/mime-xdg.bbclass
new file mode 100644
index 0000000..cbdcb4c
--- /dev/null
+++ b/poky/meta/classes-recipe/mime-xdg.bbclass
@@ -0,0 +1,78 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+# This class creates MIME <-> application associations based on the
+# 'MimeType' entry in *.desktop files
+#
+
+DEPENDS += "desktop-file-utils"
+PACKAGE_WRITE_DEPS += "desktop-file-utils-native"
+DESKTOPDIR = "${datadir}/applications"
+
+# There are recipes out there installing their .desktop files as absolute
+# symlinks. For us these are dangling and cannot easily be introspected for
+# "MimeType". By adding package names to MIME_XDG_PACKAGES, the packager can
+# force proper update-desktop-database handling. Note that all introspection
+# is skipped when MIME_XDG_PACKAGES is not empty.
+MIME_XDG_PACKAGES ?= ""
+
+mime_xdg_postinst() {
+if [ "x$D" != "x" ]; then
+	$INTERCEPT_DIR/postinst_intercept update_desktop_database ${PKG} \
+		mlprefix=${MLPREFIX} \
+		desktop_dir=${DESKTOPDIR}
+else
+	update-desktop-database $D${DESKTOPDIR}
+fi
+}
+
+mime_xdg_postrm() {
+if [ "x$D" != "x" ]; then
+	$INTERCEPT_DIR/postinst_intercept update_desktop_database ${PKG} \
+		mlprefix=${MLPREFIX} \
+		desktop_dir=${DESKTOPDIR}
+else
+	update-desktop-database $D${DESKTOPDIR}
+fi
+}
+
+python populate_packages:append () {
+    packages = d.getVar('PACKAGES').split()
+    pkgdest =  d.getVar('PKGDEST')
+    desktop_base = d.getVar('DESKTOPDIR')
+    forced_mime_xdg_pkgs = (d.getVar('MIME_XDG_PACKAGES') or '').split()
+
+    for pkg in packages:
+        desktops_with_mime_found = pkg in forced_mime_xdg_pkgs
+        if d.getVar('MIME_XDG_PACKAGES') == '':
+            desktop_dir = '%s/%s%s' % (pkgdest, pkg, desktop_base)
+            if os.path.exists(desktop_dir):
+                for df in os.listdir(desktop_dir):
+                    if df.endswith('.desktop'):
+                        try:
+                            with open(desktop_dir + '/'+ df, 'r') as f:
+                                for line in f.read().split('\n'):
+                                    if 'MimeType' in line:
+                                        desktops_with_mime_found = True
+                                        break;
+                        except:
+                            bb.warn('Could not open %s. Set MIME_XDG_PACKAGES in recipe or add mime-xdg to INSANE_SKIP.' % desktop_dir + '/'+ df)
+                    if desktops_with_mime_found:
+                        break
+        if desktops_with_mime_found:
+            bb.note("adding mime-xdg postinst and postrm scripts to %s" % pkg)
+            postinst = d.getVar('pkg_postinst:%s' % pkg)
+            if not postinst:
+                postinst = '#!/bin/sh\n'
+            postinst += d.getVar('mime_xdg_postinst')
+            d.setVar('pkg_postinst:%s' % pkg, postinst)
+            postrm = d.getVar('pkg_postrm:%s' % pkg)
+            if not postrm:
+                postrm = '#!/bin/sh\n'
+            postrm += d.getVar('mime_xdg_postrm')
+            d.setVar('pkg_postrm:%s' % pkg, postrm)
+            bb.note("adding desktop-file-utils dependency to %s" % pkg)
+            d.appendVar('RDEPENDS:' + pkg, " " + d.getVar('MLPREFIX')+"desktop-file-utils")
+}
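
As noted in the header comment, introspection can be bypassed for packages whose .desktop files cannot be inspected; a hedged example of forcing the postinst/postrm handling from a recipe:

    # hypothetical recipe snippet
    inherit mime-xdg
    # .desktop file is installed as a symlink, so force database handling here
    MIME_XDG_PACKAGES = "${PN}"
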
diff --git a/poky/meta/classes-recipe/mime.bbclass b/poky/meta/classes-recipe/mime.bbclass
new file mode 100644
index 0000000..9b13f62
--- /dev/null
+++ b/poky/meta/classes-recipe/mime.bbclass
@@ -0,0 +1,76 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# This class is used by recipes installing mime types
+#
+
+DEPENDS += "${@bb.utils.contains('BPN', 'shared-mime-info', '', 'shared-mime-info', d)}"
+PACKAGE_WRITE_DEPS += "shared-mime-info-native"
+MIMEDIR = "${datadir}/mime"
+
+mime_postinst() {
+if [ "x$D" != "x" ]; then
+	$INTERCEPT_DIR/postinst_intercept update_mime_database ${PKG} \
+		mlprefix=${MLPREFIX} \
+		mimedir=${MIMEDIR}
+else
+	echo "Updating MIME database... this may take a while."
+	update-mime-database $D${MIMEDIR}
+fi
+}
+
+mime_postrm() {
+if [ "x$D" != "x" ]; then
+	$INTERCEPT_DIR/postinst_intercept update_mime_database ${PKG} \
+		mlprefix=${MLPREFIX} \
+		mimedir=${MIMEDIR}
+else
+	echo "Updating MIME database... this may take a while."
+	# $D${MIMEDIR}/packages belongs to the shared-mime-info-data package;
+	# packages like libfm-mime depend on shared-mime-info-data.
+	# After shared-mime-info-data is uninstalled, $D${MIMEDIR}/packages
+	# is removed, but update-mime-database needs this dir to update the
+	# database, so as a workaround create one and remove it later
+	if [ ! -d $D${MIMEDIR}/packages ]; then
+		mkdir -p $D${MIMEDIR}/packages
+		update-mime-database $D${MIMEDIR}
+		rmdir --ignore-fail-on-non-empty $D${MIMEDIR}/packages
+	else
+		update-mime-database $D${MIMEDIR}
+fi
+fi
+}
+
+python populate_packages:append () {
+    packages = d.getVar('PACKAGES').split()
+    pkgdest =  d.getVar('PKGDEST')
+    mimedir = d.getVar('MIMEDIR')
+
+    for pkg in packages:
+        mime_packages_dir = '%s/%s%s/packages' % (pkgdest, pkg, mimedir)
+        mimes_types_found = False
+        if os.path.exists(mime_packages_dir):
+            for f in os.listdir(mime_packages_dir):
+                if f.endswith('.xml'):
+                    mimes_types_found = True
+                    break
+        if mimes_types_found:
+            bb.note("adding mime postinst and postrm scripts to %s" % pkg)
+            postinst = d.getVar('pkg_postinst:%s' % pkg)
+            if not postinst:
+                postinst = '#!/bin/sh\n'
+            postinst += d.getVar('mime_postinst')
+            d.setVar('pkg_postinst:%s' % pkg, postinst)
+            postrm = d.getVar('pkg_postrm:%s' % pkg)
+            if not postrm:
+                postrm = '#!/bin/sh\n'
+            postrm += d.getVar('mime_postrm')
+            d.setVar('pkg_postrm:%s' % pkg, postrm)
+            if pkg != 'shared-mime-info-data':
+                bb.note("adding shared-mime-info-data dependency to %s" % pkg)
+                d.appendVar('RDEPENDS:' + pkg, " " + d.getVar('MLPREFIX')+"shared-mime-info-data")
+}
diff --git a/poky/meta/classes-recipe/module-base.bbclass b/poky/meta/classes-recipe/module-base.bbclass
new file mode 100644
index 0000000..094b563
--- /dev/null
+++ b/poky/meta/classes-recipe/module-base.bbclass
@@ -0,0 +1,27 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit kernel-arch
+
+# We do the dependency this way because the output is not preserved
+# in sstate, so we must force do_compile to run (once).
+do_configure[depends] += "make-mod-scripts:do_compile"
+
+export OS = "${TARGET_OS}"
+export CROSS_COMPILE = "${TARGET_PREFIX}"
+
+# This points to the build artefacts from the main kernel build
+# such as .config and System.map
+# Confusingly it is not the module build output (which is ${B}) but
+# we didn't pick the name.
+export KBUILD_OUTPUT = "${STAGING_KERNEL_BUILDDIR}"
+
+export KERNEL_VERSION = "${@oe.utils.read_file('${STAGING_KERNEL_BUILDDIR}/kernel-abiversion')}"
+KERNEL_OBJECT_SUFFIX = ".ko"
+
+# kernel modules are generally machine specific
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+
diff --git a/poky/meta/classes-recipe/module.bbclass b/poky/meta/classes-recipe/module.bbclass
new file mode 100644
index 0000000..d52d5e3
--- /dev/null
+++ b/poky/meta/classes-recipe/module.bbclass
@@ -0,0 +1,80 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit module-base kernel-module-split pkgconfig
+
+EXTRA_OEMAKE += "KERNEL_SRC=${STAGING_KERNEL_DIR}"
+
+MODULES_INSTALL_TARGET ?= "modules_install"
+MODULES_MODULE_SYMVERS_LOCATION ?= ""
+
+python __anonymous () {
+    depends = d.getVar('DEPENDS')
+    extra_symbols = []
+    for dep in depends.split():
+        if dep.startswith("kernel-module-"):
+            extra_symbols.append("${STAGING_INCDIR}/" + dep + "/Module.symvers")
+    d.setVar('KBUILD_EXTRA_SYMBOLS', " ".join(extra_symbols))
+}
+
+python do_devshell:prepend () {
+    os.environ['CFLAGS'] = ''
+    os.environ['CPPFLAGS'] = ''
+    os.environ['CXXFLAGS'] = ''
+    os.environ['LDFLAGS'] = ''
+
+    os.environ['KERNEL_PATH'] = d.getVar('STAGING_KERNEL_DIR')
+    os.environ['KERNEL_SRC'] = d.getVar('STAGING_KERNEL_DIR')
+    os.environ['KERNEL_VERSION'] = d.getVar('KERNEL_VERSION')
+    os.environ['CC'] = d.getVar('KERNEL_CC')
+    os.environ['LD'] = d.getVar('KERNEL_LD')
+    os.environ['AR'] = d.getVar('KERNEL_AR')
+    os.environ['O'] = d.getVar('STAGING_KERNEL_BUILDDIR')
+    kbuild_extra_symbols = d.getVar('KBUILD_EXTRA_SYMBOLS')
+    if kbuild_extra_symbols:
+        os.environ['KBUILD_EXTRA_SYMBOLS'] = kbuild_extra_symbols
+    else:
+        os.environ['KBUILD_EXTRA_SYMBOLS'] = ''
+}
+
+module_do_compile() {
+	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
+	oe_runmake KERNEL_PATH=${STAGING_KERNEL_DIR}   \
+		   KERNEL_VERSION=${KERNEL_VERSION}    \
+		   CC="${KERNEL_CC}" LD="${KERNEL_LD}" \
+		   AR="${KERNEL_AR}" \
+	           O=${STAGING_KERNEL_BUILDDIR} \
+		   KBUILD_EXTRA_SYMBOLS="${KBUILD_EXTRA_SYMBOLS}" \
+		   ${MAKE_TARGETS}
+}
+
+module_do_install() {
+	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
+	oe_runmake DEPMOD=echo MODLIB="${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}" \
+	           INSTALL_FW_PATH="${D}${nonarch_base_libdir}/firmware" \
+	           CC="${KERNEL_CC}" LD="${KERNEL_LD}" \
+	           O=${STAGING_KERNEL_BUILDDIR} \
+	           ${MODULES_INSTALL_TARGET}
+
+	if [ ! -e "${B}/${MODULES_MODULE_SYMVERS_LOCATION}/Module.symvers" ] ; then
+		bbwarn "Module.symvers not found in ${B}/${MODULES_MODULE_SYMVERS_LOCATION}"
+		bbwarn "Please consider setting MODULES_MODULE_SYMVERS_LOCATION to a"
+		bbwarn "directory below B to get correct inter-module dependencies"
+	else
+		install -Dm0644 "${B}/${MODULES_MODULE_SYMVERS_LOCATION}"/Module.symvers ${D}${includedir}/${BPN}/Module.symvers
+		# Module.symvers contains an absolute path to the build directory.
+		# While it doesn't actually seem to matter which path is specified,
+		# clear it out to avoid confusion
+		sed -e 's:${B}/::g' -i ${D}${includedir}/${BPN}/Module.symvers
+	fi
+}
+
+EXPORT_FUNCTIONS do_compile do_install
+
+# add all split modules to PN RDEPENDS, PN can be empty now
+KERNEL_MODULES_META_PACKAGE = "${PN}"
+FILES:${PN} = ""
+ALLOW_EMPTY:${PN} = "1"
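
An out-of-tree kernel module recipe built on this class usually looks like the following sketch (hypothetical name, license and sources; license checksum omitted):

    # hello-mod_1.0.bb (hypothetical)
    SUMMARY = "Example out-of-tree kernel module"
    LICENSE = "GPL-2.0-only"
    inherit module
    SRC_URI = "file://Makefile file://hello.c"
    S = "${WORKDIR}"
    # module_do_compile/module_do_install drive the kernel Kbuild via oe_runmake,
    # so MAKE_TARGETS/MODULES_INSTALL_TARGET only need overriding in unusual cases
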
diff --git a/poky/meta/classes-recipe/multilib_header.bbclass b/poky/meta/classes-recipe/multilib_header.bbclass
new file mode 100644
index 0000000..33f7e02
--- /dev/null
+++ b/poky/meta/classes-recipe/multilib_header.bbclass
@@ -0,0 +1,58 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit siteinfo
+
+# If applicable on the architecture, this routine will rename the header and
+# add a unique identifier to the name for the ABI/bitsize that is being used.
+# A wrapper will be generated for the architecture that knows how to call
+# all of the ABI variants for that given architecture.
+#
+oe_multilib_header() {
+
+	case ${HOST_OS} in
+	*-musl*)
+		return
+		;;
+	*)
+	esac
+        # For MIPS: "n32" is a special case, which needs to be
+        # distinct from both 64-bit and 32-bit.
+        case ${TARGET_ARCH} in
+        mips*)  case "${MIPSPKGSFX_ABI}" in
+                "-n32")
+                       ident=n32   
+                       ;;
+                *)     
+                       ident=${SITEINFO_BITS}
+                       ;;
+                esac
+                ;;
+        *)      ident=${SITEINFO_BITS}
+        esac
+	for each_header in "$@" ; do
+	   if [ ! -f "${D}/${includedir}/$each_header" ]; then
+	      bberror "oe_multilib_header: Unable to find header $each_header."
+	      continue
+	   fi
+	   stem=$(echo $each_header | sed 's#\.h$##')
+	   # if mips64/n32 set ident to n32
+	   mv ${D}/${includedir}/$each_header ${D}/${includedir}/${stem}-${ident}.h
+
+	   sed -e "s#ENTER_HEADER_FILENAME_HERE#${stem}#g" ${COREBASE}/scripts/multilib_header_wrapper.h > ${D}/${includedir}/$each_header
+	done
+}
+
+# Dependencies on arch variables like MIPSPKGSFX_ABI can be problematic.
+# We don't need multilib headers for native builds so brute force things.
+oe_multilib_header:class-native () {
+	return
+}
+
+# Nor do we need multilib headers for nativesdk builds.
+oe_multilib_header:class-nativesdk () {
+	return
+}
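
Recipes that install bitsize-dependent headers call the helper from do_install; a minimal, hypothetical usage (header name is illustrative):

    # hypothetical recipe snippet
    inherit multilib_header
    do_install:append() {
        # renames e.g. foo-config.h to foo-config-32.h/-64.h and installs a wrapper
        oe_multilib_header foo-config.h
    }
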
diff --git a/poky/meta/classes-recipe/multilib_script.bbclass b/poky/meta/classes-recipe/multilib_script.bbclass
new file mode 100644
index 0000000..7011526
--- /dev/null
+++ b/poky/meta/classes-recipe/multilib_script.bbclass
@@ -0,0 +1,40 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# Recipe needs to set MULTILIB_SCRIPTS in the form <pkgname>:<scriptname>, e.g.
+# MULTILIB_SCRIPTS = "${PN}-dev:${bindir}/file1 ${PN}:${base_bindir}/file2"
+# to indicate which script files to process from which packages.
+#
+
+inherit update-alternatives
+
+MULTILIB_SUFFIX = "${@d.getVar('base_libdir',1).split('/')[-1]}"
+
+PACKAGE_PREPROCESS_FUNCS += "multilibscript_rename"
+
+multilibscript_rename() {
+	:
+}
+
+python () {
+    # Do nothing if multilib isn't being used
+    if not d.getVar("MULTILIB_VARIANTS"):
+        return
+    # Do nothing for native/cross
+    if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross', d):
+        return
+
+    for entry in (d.getVar("MULTILIB_SCRIPTS", False) or "").split():
+        pkg, script = entry.split(":")
+        epkg = d.expand(pkg)
+        scriptname = os.path.basename(script)
+        d.appendVar("ALTERNATIVE:" + epkg, " " + scriptname + " ")
+        d.setVarFlag("ALTERNATIVE_LINK_NAME", scriptname, script)
+        d.setVarFlag("ALTERNATIVE_TARGET", scriptname, script + "-${MULTILIB_SUFFIX}")
+        d.appendVar("multilibscript_rename",  "\n	mv ${PKGD}" + script + " ${PKGD}" + script + "-${MULTILIB_SUFFIX}")
+        d.appendVar("FILES:" + epkg, " " + script + "-${MULTILIB_SUFFIX}")
+}
diff --git a/poky/meta/classes-recipe/native.bbclass b/poky/meta/classes-recipe/native.bbclass
new file mode 100644
index 0000000..61ad053
--- /dev/null
+++ b/poky/meta/classes-recipe/native.bbclass
@@ -0,0 +1,236 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# We want native packages to be relocatable
+inherit relocatable
+
+# Native packages are built indirectly via dependency,
+# no need for them to be a direct target of 'world'
+EXCLUDE_FROM_WORLD = "1"
+
+PACKAGE_ARCH = "${BUILD_ARCH}"
+
+# used by cmake class
+OECMAKE_RPATH = "${libdir}"
+OECMAKE_RPATH:class-native = "${libdir}"
+
+TARGET_ARCH = "${BUILD_ARCH}"
+TARGET_OS = "${BUILD_OS}"
+TARGET_VENDOR = "${BUILD_VENDOR}"
+TARGET_PREFIX = "${BUILD_PREFIX}"
+TARGET_CC_ARCH = "${BUILD_CC_ARCH}"
+TARGET_LD_ARCH = "${BUILD_LD_ARCH}"
+TARGET_AS_ARCH = "${BUILD_AS_ARCH}"
+TARGET_CPPFLAGS = "${BUILD_CPPFLAGS}"
+TARGET_CFLAGS = "${BUILD_CFLAGS}"
+TARGET_CXXFLAGS = "${BUILD_CXXFLAGS}"
+TARGET_LDFLAGS = "${BUILD_LDFLAGS}"
+TARGET_FPU = ""
+TUNE_FEATURES = ""
+ABIEXTENSION = ""
+
+HOST_ARCH = "${BUILD_ARCH}"
+HOST_OS = "${BUILD_OS}"
+HOST_VENDOR = "${BUILD_VENDOR}"
+HOST_PREFIX = "${BUILD_PREFIX}"
+HOST_CC_ARCH = "${BUILD_CC_ARCH}"
+HOST_LD_ARCH = "${BUILD_LD_ARCH}"
+HOST_AS_ARCH = "${BUILD_AS_ARCH}"
+
+CPPFLAGS = "${BUILD_CPPFLAGS}"
+CFLAGS = "${BUILD_CFLAGS}"
+CXXFLAGS = "${BUILD_CXXFLAGS}"
+LDFLAGS = "${BUILD_LDFLAGS}"
+
+STAGING_BINDIR = "${STAGING_BINDIR_NATIVE}"
+STAGING_BINDIR_CROSS = "${STAGING_BINDIR_NATIVE}"
+
+# native pkg doesn't need the TOOLCHAIN_OPTIONS.
+TOOLCHAIN_OPTIONS = ""
+
+# Don't build ptest natively
+PTEST_ENABLED = "0"
+
+# Don't use site files for native builds
+export CONFIG_SITE = "${COREBASE}/meta/site/native"
+
+# set the compiler as well. It could have been set to something else
+export CC = "${BUILD_CC}"
+export CXX = "${BUILD_CXX}"
+export FC = "${BUILD_FC}"
+export CPP = "${BUILD_CPP}"
+export LD = "${BUILD_LD}"
+export CCLD = "${BUILD_CCLD}"
+export AR = "${BUILD_AR}"
+export AS = "${BUILD_AS}"
+export RANLIB = "${BUILD_RANLIB}"
+export STRIP = "${BUILD_STRIP}"
+export NM = "${BUILD_NM}"
+
+# Path prefixes
+base_prefix = "${STAGING_DIR_NATIVE}"
+prefix = "${STAGING_DIR_NATIVE}${prefix_native}"
+exec_prefix = "${STAGING_DIR_NATIVE}${prefix_native}"
+
+bindir = "${STAGING_BINDIR_NATIVE}"
+sbindir = "${STAGING_SBINDIR_NATIVE}"
+base_libdir = "${STAGING_LIBDIR_NATIVE}"
+libdir = "${STAGING_LIBDIR_NATIVE}"
+includedir = "${STAGING_INCDIR_NATIVE}"
+sysconfdir = "${STAGING_ETCDIR_NATIVE}"
+datadir = "${STAGING_DATADIR_NATIVE}"
+
+baselib = "lib"
+
+export lt_cv_sys_lib_dlsearch_path_spec = "${libdir} ${base_libdir} /lib /lib64 /usr/lib /usr/lib64"
+
+NATIVE_PACKAGE_PATH_SUFFIX ?= ""
+bindir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
+sbindir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
+base_libdir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
+libdir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
+libexecdir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
+
+do_populate_sysroot[sstate-inputdirs] = "${SYSROOT_DESTDIR}/${STAGING_DIR_NATIVE}/"
+do_populate_sysroot[sstate-outputdirs] = "${COMPONENTS_DIR}/${PACKAGE_ARCH}/${PN}"
+
+# Since we actually install these in situ there is no staging prefix
+STAGING_DIR_HOST = ""
+STAGING_DIR_TARGET = ""
+PKG_CONFIG_DIR = "${libdir}/pkgconfig"
+
+EXTRA_NATIVE_PKGCONFIG_PATH ?= ""
+PKG_CONFIG_PATH .= "${EXTRA_NATIVE_PKGCONFIG_PATH}"
+PKG_CONFIG_SYSROOT_DIR = ""
+PKG_CONFIG_SYSTEM_LIBRARY_PATH[unexport] = "1"
+PKG_CONFIG_SYSTEM_INCLUDE_PATH[unexport] = "1"
+
+# we don't want libc-*libc to kick in for native recipes
+LIBCOVERRIDE = ""
+CLASSOVERRIDE = "class-native"
+MACHINEOVERRIDES = ""
+MACHINE_FEATURES = ""
+
+PATH:prepend = "${COREBASE}/scripts/native-intercept:"
+
+# This class encodes staging paths into its scripts data so it can only be
+# reused if we manipulate the paths.
+SSTATE_SCAN_CMD ?= "${SSTATE_SCAN_CMD_NATIVE}"
+
+# No strip sysroot when DEBUG_BUILD is enabled
+INHIBIT_SYSROOT_STRIP ?= "${@oe.utils.vartrue('DEBUG_BUILD', '1', '', d)}"
+
+python native_virtclass_handler () {
+    pn = e.data.getVar("PN")
+    if not pn.endswith("-native"):
+        return
+    bpn = e.data.getVar("BPN")
+
+    # Set features here to prevent appends and distro features backfill
+    # from modifying native distro features
+    features = set(d.getVar("DISTRO_FEATURES_NATIVE").split())
+    filtered = set(bb.utils.filter("DISTRO_FEATURES", d.getVar("DISTRO_FEATURES_FILTER_NATIVE"), d).split())
+    d.setVar("DISTRO_FEATURES", " ".join(sorted(features | filtered)))
+
+    classextend = e.data.getVar('BBCLASSEXTEND') or ""
+    if "native" not in classextend:
+        return
+
+    def map_dependencies(varname, d, suffix = "", selfref=True):
+        if suffix:
+            varname = varname + ":" + suffix
+        deps = d.getVar(varname)
+        if not deps:
+            return
+        deps = bb.utils.explode_deps(deps)
+        newdeps = []
+        for dep in deps:
+            if dep == pn:
+                if not selfref:
+                    continue
+                newdeps.append(dep)
+            elif "-cross-" in dep:
+                newdeps.append(dep.replace("-cross", "-native"))
+            elif not dep.endswith("-native"):
+                # Replace ${PN} with ${BPN} in the dependency to make sure
+                # dependencies on, e.g., ${PN}-foo become ${BPN}-foo-native
+                # rather than ${BPN}-native-foo-native.
+                newdeps.append(dep.replace(pn, bpn) + "-native")
+            else:
+                newdeps.append(dep)
+        d.setVar(varname, " ".join(newdeps), parsing=True)
+
+    map_dependencies("DEPENDS", e.data, selfref=False)
+    for pkg in e.data.getVar("PACKAGES", False).split():
+        map_dependencies("RDEPENDS", e.data, pkg)
+        map_dependencies("RRECOMMENDS", e.data, pkg)
+        map_dependencies("RSUGGESTS", e.data, pkg)
+        map_dependencies("RPROVIDES", e.data, pkg)
+        map_dependencies("RREPLACES", e.data, pkg)
+    map_dependencies("PACKAGES", e.data)
+
+    provides = e.data.getVar("PROVIDES")
+    nprovides = []
+    for prov in provides.split():
+        if prov.find(pn) != -1:
+            nprovides.append(prov)
+        elif not prov.endswith("-native"):
+            nprovides.append(prov + "-native")
+        else:
+            nprovides.append(prov)
+    e.data.setVar("PROVIDES", ' '.join(nprovides))
+
+
+}
+
+addhandler native_virtclass_handler
+native_virtclass_handler[eventmask] = "bb.event.RecipePreFinalise"
+
+python do_addto_recipe_sysroot () {
+    bb.build.exec_func("extend_recipe_sysroot", d)
+}
+addtask addto_recipe_sysroot after do_populate_sysroot
+do_addto_recipe_sysroot[deptask] = "do_populate_sysroot"
+
+inherit nopackages
+
+do_packagedata[stamp-extra-info] = ""
+
+USE_NLS = "no"
+
+RECIPERDEPTASK = "do_populate_sysroot"
+do_populate_sysroot[rdeptask] = "${RECIPERDEPTASK}"
+
+#
+# Native task outputs are directly run on the target (host) system after being
+# built. Even if the output of this recipe doesn't change, a change in one of
+# its dependencies may cause a change in the output it generates (e.g. rpm
+# output depends on the output of its dependent zstd library).
+#
+# This can cause poor interactions with hash equivalence, since this recipe's
+# output-changing dependency is "hidden" and downstream tasks only see that this
+# recipe has the same outhash and is therefore equivalent. This can result in
+# different output in different cases.
+#
+# To resolve this, unhide the output-changing dependency by adding its unihash
+# to this task's outhash calculation. Unfortunately, we don't specifically know
+# which dependencies are output-changing, so we have to add all of them.
+#
+python native_add_do_populate_sysroot_deps () {
+    current_task = "do_" + d.getVar("BB_CURRENTTASK")
+    if current_task != "do_populate_sysroot":
+        return
+
+    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
+    pn = d.getVar("PN")
+    deps = {
+        dep[0]:dep[6] for dep in taskdepdata.values() if
+            dep[1] == current_task and dep[0] != pn
+    }
+
+    d.setVar("HASHEQUIV_EXTRA_SIGDATA", "\n".join("%s: %s" % (k, deps[k]) for k in sorted(deps.keys())))
+}
+SSTATECREATEFUNCS += "native_add_do_populate_sysroot_deps"
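
Most recipes do not inherit native.bbclass directly; they advertise a native variant through BBCLASSEXTEND and the native_virtclass_handler above rewrites PN, DEPENDS and related variables. A hedged sketch:

    # hypothetical recipe snippet
    DEPENDS = "zlib"
    # generates both 'foo' and 'foo-native'; dependencies are remapped to their
    # -native counterparts by native_virtclass_handler
    BBCLASSEXTEND = "native"
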
diff --git a/poky/meta/classes-recipe/nativesdk.bbclass b/poky/meta/classes-recipe/nativesdk.bbclass
new file mode 100644
index 0000000..08288fd
--- /dev/null
+++ b/poky/meta/classes-recipe/nativesdk.bbclass
@@ -0,0 +1,124 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# SDK packages are built either explicitly by the user,
+# or indirectly via dependency.  No need to be in 'world'.
+EXCLUDE_FROM_WORLD = "1"
+
+STAGING_BINDIR_TOOLCHAIN = "${STAGING_DIR_NATIVE}${bindir_native}/${SDK_ARCH}${SDK_VENDOR}-${SDK_OS}"
+
+# libc for the SDK can be different to that of the target
+NATIVESDKLIBC ?= "libc-glibc"
+LIBCOVERRIDE = ":${NATIVESDKLIBC}"
+CLASSOVERRIDE = "class-nativesdk"
+MACHINEOVERRIDES = ""
+MACHINE_FEATURES = ""
+
+MULTILIBS = ""
+
+# we need consistent staging dir whether or not multilib is enabled
+STAGING_DIR_HOST = "${WORKDIR}/recipe-sysroot"
+STAGING_DIR_TARGET = "${WORKDIR}/recipe-sysroot"
+RECIPE_SYSROOT = "${WORKDIR}/recipe-sysroot"
+
+#
+# Update PACKAGE_ARCH and PACKAGE_ARCHS
+#
+PACKAGE_ARCH = "${SDK_ARCH}-${SDKPKGSUFFIX}"
+PACKAGE_ARCHS = "${SDK_PACKAGE_ARCHS}"
+
+#
+# We need chrpath >= 0.14 to ensure we can deal with 32 and 64 bit
+# binaries
+#
+DEPENDS:append = " chrpath-replacement-native"
+EXTRANATIVEPATH += "chrpath-native"
+
+PKGDATA_DIR = "${PKGDATA_DIR_SDK}"
+
+HOST_ARCH = "${SDK_ARCH}"
+HOST_VENDOR = "${SDK_VENDOR}"
+HOST_OS = "${SDK_OS}"
+HOST_PREFIX = "${SDK_PREFIX}"
+HOST_CC_ARCH = "${SDK_CC_ARCH}"
+HOST_LD_ARCH = "${SDK_LD_ARCH}"
+HOST_AS_ARCH = "${SDK_AS_ARCH}"
+#HOST_SYS = "${HOST_ARCH}${TARGET_VENDOR}-${HOST_OS}"
+
+TARGET_ARCH = "${SDK_ARCH}"
+TARGET_VENDOR = "${SDK_VENDOR}"
+TARGET_OS = "${SDK_OS}"
+TARGET_PREFIX = "${SDK_PREFIX}"
+TARGET_CC_ARCH = "${SDK_CC_ARCH}"
+TARGET_LD_ARCH = "${SDK_LD_ARCH}"
+TARGET_AS_ARCH = "${SDK_AS_ARCH}"
+TARGET_CPPFLAGS = "${BUILDSDK_CPPFLAGS}"
+TARGET_CFLAGS = "${BUILDSDK_CFLAGS}"
+TARGET_CXXFLAGS = "${BUILDSDK_CXXFLAGS}"
+TARGET_LDFLAGS = "${BUILDSDK_LDFLAGS}"
+TARGET_FPU = ""
+EXTRA_OECONF_GCC_FLOAT = ""
+TUNE_FEATURES = ""
+
+CPPFLAGS = "${BUILDSDK_CPPFLAGS}"
+CFLAGS = "${BUILDSDK_CFLAGS}"
+CXXFLAGS = "${BUILDSDK_CXXFLAGS}"
+LDFLAGS = "${BUILDSDK_LDFLAGS}"
+
+# Change to place files in SDKPATH
+base_prefix = "${SDKPATHNATIVE}"
+prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
+exec_prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
+baselib = "lib"
+sbindir = "${bindir}"
+
+export PKG_CONFIG_DIR = "${STAGING_DIR_HOST}${libdir}/pkgconfig"
+export PKG_CONFIG_SYSROOT_DIR = "${STAGING_DIR_HOST}"
+
+python nativesdk_virtclass_handler () {
+    pn = e.data.getVar("PN")
+    if not (pn.endswith("-nativesdk") or pn.startswith("nativesdk-")):
+        return
+
+    # Set features here to prevent appends and distro features backfill
+    # from modifying nativesdk distro features
+    features = set(d.getVar("DISTRO_FEATURES_NATIVESDK").split())
+    filtered = set(bb.utils.filter("DISTRO_FEATURES", d.getVar("DISTRO_FEATURES_FILTER_NATIVESDK"), d).split())
+    d.setVar("DISTRO_FEATURES", " ".join(sorted(features | filtered)))
+
+    e.data.setVar("MLPREFIX", "nativesdk-")
+    e.data.setVar("PN", "nativesdk-" + e.data.getVar("PN").replace("-nativesdk", "").replace("nativesdk-", ""))
+}
+
+python () {
+    pn = d.getVar("PN")
+    if not pn.startswith("nativesdk-"):
+        return
+
+    import oe.classextend
+
+    clsextend = oe.classextend.NativesdkClassExtender("nativesdk", d)
+    clsextend.rename_packages()
+    clsextend.rename_package_variables((d.getVar("PACKAGEVARS") or "").split())
+
+    clsextend.map_depends_variable("DEPENDS")
+    clsextend.map_packagevars()
+    clsextend.map_variable("PROVIDES")
+    clsextend.map_regexp_variable("PACKAGES_DYNAMIC")
+    d.setVar("LIBCEXTENSION", "")
+    d.setVar("ABIEXTENSION", "")
+}
+
+addhandler nativesdk_virtclass_handler
+nativesdk_virtclass_handler[eventmask] = "bb.event.RecipePreFinalise"
+
+do_packagedata[stamp-extra-info] = ""
+
+USE_NLS = "${SDKUSE_NLS}"
+
+OLDEST_KERNEL = "${SDK_OLDEST_KERNEL}"
+
+PATH:prepend = "${COREBASE}/scripts/nativesdk-intercept:"
diff --git a/poky/meta/classes-recipe/nopackages.bbclass b/poky/meta/classes-recipe/nopackages.bbclass
new file mode 100644
index 0000000..9ea7273
--- /dev/null
+++ b/poky/meta/classes-recipe/nopackages.bbclass
@@ -0,0 +1,19 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+deltask do_package
+deltask do_package_write_rpm
+deltask do_package_write_ipk
+deltask do_package_write_deb
+deltask do_package_write_tar
+deltask do_package_qa
+deltask do_packagedata
+deltask do_package_setscene
+deltask do_package_write_rpm_setscene
+deltask do_package_write_ipk_setscene
+deltask do_package_write_deb_setscene
+deltask do_package_qa_setscene
+deltask do_packagedata_setscene
diff --git a/poky/meta/classes-recipe/npm.bbclass b/poky/meta/classes-recipe/npm.bbclass
new file mode 100644
index 0000000..20350ce
--- /dev/null
+++ b/poky/meta/classes-recipe/npm.bbclass
@@ -0,0 +1,338 @@
+# Copyright (C) 2020 Savoir-Faire Linux
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# This bbclass builds and installs an npm package to the target. The package
+# source files should be fetched in the calling recipe by using the SRC_URI
+# variable. The ${S} variable should be updated depending on your fetcher.
+#
+# Usage:
+#  SRC_URI = "..."
+#  inherit npm
+#
+# Optional variables:
+#  NPM_ARCH:
+#       Override the auto-generated npm architecture.
+#
+#  NPM_INSTALL_DEV:
+#       Set to 1 to also install devDependencies.
+
+inherit python3native
+
+DEPENDS:prepend = "nodejs-native nodejs-oe-cache-native "
+RDEPENDS:${PN}:append:class-target = " nodejs"
+
+EXTRA_OENPM = ""
+
+NPM_INSTALL_DEV ?= "0"
+
+NPM_NODEDIR ?= "${RECIPE_SYSROOT_NATIVE}${prefix_native}"
+
+## must match mapping in nodejs.bb (openembedded-meta)
+def map_nodejs_arch(a, d):
+    import re
+
+    if   re.match('i.86$', a): return 'ia32'
+    elif re.match('x86_64$', a): return 'x64'
+    elif re.match('aarch64$', a): return 'arm64'
+    elif re.match('(powerpc64|powerpc64le|ppc64le)$', a): return 'ppc64'
+    elif re.match('powerpc$', a): return 'ppc'
+    return a
+
+NPM_ARCH ?= "${@map_nodejs_arch(d.getVar("TARGET_ARCH"), d)}"
+
+NPM_PACKAGE = "${WORKDIR}/npm-package"
+NPM_CACHE = "${WORKDIR}/npm-cache"
+NPM_BUILD = "${WORKDIR}/npm-build"
+NPM_REGISTRY = "${WORKDIR}/npm-registry"
+
+def npm_global_configs(d):
+    """Get the npm global configuration"""
+    configs = []
+    # Ensure no network access is done
+    configs.append(("offline", "true"))
+    configs.append(("proxy", "http://invalid"))
+    configs.append(("fund", False))
+    configs.append(("audit", False))
+    # Configure the cache directory
+    configs.append(("cache", d.getVar("NPM_CACHE")))
+    return configs
+
+## 'npm pack' runs the 'prepare' and 'prepack' scripts. Support for
+## 'ignore-scripts', which prevents this behavior, has been removed
+## from nodejs 16. Use a simple 'tar' instead.
+def npm_pack(env, srcdir, workdir):
+    """Emulate 'npm pack' on a specified directory"""
+    import subprocess
+    import os
+    import json
+
+    src = os.path.join(srcdir, 'package.json')
+    with open(src) as f:
+        j = json.load(f)
+
+    # base does not really matter and is for documentation purposes
+    # only.  But the 'version' part must exist because other parts of
+    # the bbclass rely on it.
+    base = j['name'].split('/')[-1]
+    tarball = os.path.join(workdir, "%s-%s.tgz" % (base, j['version']));
+
+    # TODO: real 'npm pack' does not include directories while 'tar'
+    # does.  But this does not seem to matter...
+    subprocess.run(['tar', 'czf', tarball,
+                    '--exclude', './node-modules',
+                    '--exclude-vcs',
+                    '--transform', 's,^\./,package/,',
+                    '--mtime', '1985-10-26T08:15:00.000Z',
+                    '.'],
+                   check = True, cwd = srcdir)
+
+    return (tarball, j)
+
+python npm_do_configure() {
+    """
+    Step one: configure the npm cache and the main npm package
+
+    All dependencies have been fetched and patched in the source directory.
+    They have to be packed (this removes unneeded files) and added to the npm
+    cache to be available for the next step.
+
+    The main package and its associated manifest file and shrinkwrap file have
+    to be configured to take into account these cached dependencies.
+    """
+    import base64
+    import copy
+    import json
+    import re
+    import shlex
+    import stat
+    import tempfile
+    from bb.fetch2.npm import NpmEnvironment
+    from bb.fetch2.npm import npm_unpack
+    from bb.fetch2.npmsw import foreach_dependencies
+    from bb.progress import OutOfProgressHandler
+    from oe.npm_registry import NpmRegistry
+
+    bb.utils.remove(d.getVar("NPM_CACHE"), recurse=True)
+    bb.utils.remove(d.getVar("NPM_PACKAGE"), recurse=True)
+
+    env = NpmEnvironment(d, configs=npm_global_configs(d))
+    registry = NpmRegistry(d.getVar('NPM_REGISTRY'), d.getVar('NPM_CACHE'))
+
+    def _npm_cache_add(tarball, pkg):
+        """Add tarball to local registry and register it in the
+           cache"""
+        registry.add_pkg(tarball, pkg)
+
+    def _npm_integrity(tarball):
+        """Return the npm integrity of a specified tarball"""
+        sha512 = bb.utils.sha512_file(tarball)
+        return "sha512-" + base64.b64encode(bytes.fromhex(sha512)).decode()
+
+    def _npmsw_dependency_dict(orig, deptree):
+        """
+        Return the sub dictionary in the 'orig' dictionary corresponding to the
+        'deptree' dependency tree. This function follows the shrinkwrap file
+        format.
+        """
+        ptr = orig
+        for dep in deptree:
+            if "dependencies" not in ptr:
+                ptr["dependencies"] = {}
+            ptr = ptr["dependencies"]
+            if dep not in ptr:
+                ptr[dep] = {}
+            ptr = ptr[dep]
+        return ptr
+
+    # Manage the manifest file and shrinkwrap files
+    orig_manifest_file = d.expand("${S}/package.json")
+    orig_shrinkwrap_file = d.expand("${S}/npm-shrinkwrap.json")
+    cached_manifest_file = d.expand("${NPM_PACKAGE}/package.json")
+    cached_shrinkwrap_file = d.expand("${NPM_PACKAGE}/npm-shrinkwrap.json")
+
+    with open(orig_manifest_file, "r") as f:
+        orig_manifest = json.load(f)
+
+    cached_manifest = copy.deepcopy(orig_manifest)
+    cached_manifest.pop("dependencies", None)
+    cached_manifest.pop("devDependencies", None)
+
+    has_shrinkwrap_file = True
+
+    try:
+        with open(orig_shrinkwrap_file, "r") as f:
+            orig_shrinkwrap = json.load(f)
+    except IOError:
+        has_shrinkwrap_file = False
+
+    if has_shrinkwrap_file:
+       cached_shrinkwrap = copy.deepcopy(orig_shrinkwrap)
+       cached_shrinkwrap.pop("dependencies", None)
+
+    # Manage the dependencies
+    progress = OutOfProgressHandler(d, r"^(\d+)/(\d+)$")
+    progress_total = 1 # also count the main package
+    progress_done = 0
+
+    def _count_dependency(name, params, deptree):
+        nonlocal progress_total
+        progress_total += 1
+
+    def _cache_dependency(name, params, deptree):
+        destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
+        destsuffix = os.path.join(*destsubdirs)
+        with tempfile.TemporaryDirectory() as tmpdir:
+            # Add the dependency to the npm cache
+            destdir = os.path.join(d.getVar("S"), destsuffix)
+            (tarball, pkg) = npm_pack(env, destdir, tmpdir)
+            _npm_cache_add(tarball, pkg)
+            # Add its signature to the cached shrinkwrap
+            dep = _npmsw_dependency_dict(cached_shrinkwrap, deptree)
+            dep["version"] = pkg['version']
+            dep["integrity"] = _npm_integrity(tarball)
+            if params.get("dev", False):
+                dep["dev"] = True
+            # Display progress
+            nonlocal progress_done
+            progress_done += 1
+            progress.write("%d/%d" % (progress_done, progress_total))
+
+    dev = bb.utils.to_boolean(d.getVar("NPM_INSTALL_DEV"), False)
+
+    if has_shrinkwrap_file:
+        foreach_dependencies(orig_shrinkwrap, _count_dependency, dev)
+        foreach_dependencies(orig_shrinkwrap, _cache_dependency, dev)
+
+    # Configure the main package
+    with tempfile.TemporaryDirectory() as tmpdir:
+        (tarball, _) = npm_pack(env, d.getVar("S"), tmpdir)
+        npm_unpack(tarball, d.getVar("NPM_PACKAGE"), d)
+
+    # Configure the cached manifest file and cached shrinkwrap file
+    def _update_manifest(depkey):
+        for name in orig_manifest.get(depkey, {}):
+            version = cached_shrinkwrap["dependencies"][name]["version"]
+            if depkey not in cached_manifest:
+                cached_manifest[depkey] = {}
+            cached_manifest[depkey][name] = version
+
+    if has_shrinkwrap_file:
+        _update_manifest("dependencies")
+
+    if dev:
+        if has_shrinkwrap_file:
+            _update_manifest("devDependencies")
+
+    os.chmod(cached_manifest_file, os.stat(cached_manifest_file).st_mode | stat.S_IWUSR)
+    with open(cached_manifest_file, "w") as f:
+        json.dump(cached_manifest, f, indent=2)
+
+    if has_shrinkwrap_file:
+        with open(cached_shrinkwrap_file, "w") as f:
+            json.dump(cached_shrinkwrap, f, indent=2)
+}
+
+python npm_do_compile() {
+    """
+    Step two: install the npm package
+
+    Use the configured main package and the cached dependencies to run the
+    installation process. The installation is done in a directory which is
+    not the destination directory yet.
+
+    A combination of 'npm pack' and 'npm install' is used to ensure that the
+    installed files are actual copies instead of symbolic links (which is the
+    default npm behavior).
+    """
+    import shlex
+    import tempfile
+    from bb.fetch2.npm import NpmEnvironment
+
+    bb.utils.remove(d.getVar("NPM_BUILD"), recurse=True)
+
+    with tempfile.TemporaryDirectory() as tmpdir:
+        args = []
+        configs = npm_global_configs(d)
+
+        if bb.utils.to_boolean(d.getVar("NPM_INSTALL_DEV"), False):
+            configs.append(("also", "development"))
+        else:
+            configs.append(("only", "production"))
+
+        # Report as many logs as possible for debugging purpose
+        configs.append(("loglevel", "silly"))
+
+        # Configure the installation to be done globally in the build directory
+        configs.append(("global", "true"))
+        configs.append(("prefix", d.getVar("NPM_BUILD")))
+
+        # Add node-gyp configuration
+        configs.append(("arch", d.getVar("NPM_ARCH")))
+        configs.append(("release", "true"))
+        configs.append(("nodedir", d.getVar("NPM_NODEDIR")))
+        configs.append(("python", d.getVar("PYTHON")))
+
+        env = NpmEnvironment(d, configs)
+
+        # Add node-pre-gyp configuration
+        args.append(("target_arch", d.getVar("NPM_ARCH")))
+        args.append(("build-from-source", "true"))
+
+        # Pack and install the main package
+        (tarball, _) = npm_pack(env, d.getVar("NPM_PACKAGE"), tmpdir)
+        cmd = "npm install %s %s" % (shlex.quote(tarball), d.getVar("EXTRA_OENPM"))
+        env.run(cmd, args=args)
+}
+
+npm_do_install() {
+    # Step three: final install
+    #
+    # The previous installation has to be filtered to remove some extra files.
+
+    rm -rf ${D}
+
+    # Copy the entire lib and bin directories
+    install -d ${D}/${nonarch_libdir}
+    cp --no-preserve=ownership --recursive ${NPM_BUILD}/lib/. ${D}/${nonarch_libdir}
+
+    if [ -d "${NPM_BUILD}/bin" ]
+    then
+        install -d ${D}/${bindir}
+        cp --no-preserve=ownership --recursive ${NPM_BUILD}/bin/. ${D}/${bindir}
+    fi
+
+    # If the package (or its dependencies) uses node-gyp to build native addons,
+    # object files, static libraries or other temporary files can be hidden in
+    # the lib directory. To reduce the package size and to avoid QA issues
+    # (staticdev with static library files) these files must be removed.
+    local GYP_REGEX=".*/build/Release/[^/]*.node"
+
+    # Remove any node-gyp directory in ${D} to remove temporary build files
+    for GYP_D_FILE in $(find ${D} -regex "${GYP_REGEX}")
+    do
+        local GYP_D_DIR=${GYP_D_FILE%/Release/*}
+
+        rm --recursive --force ${GYP_D_DIR}
+    done
+
+    # Copy only the node-gyp release files
+    for GYP_B_FILE in $(find ${NPM_BUILD} -regex "${GYP_REGEX}")
+    do
+        local GYP_D_FILE=${D}/${prefix}/${GYP_B_FILE#${NPM_BUILD}}
+
+        install -d ${GYP_D_FILE%/*}
+        install -m 755 ${GYP_B_FILE} ${GYP_D_FILE}
+    done
+
+    # Remove the shrinkwrap file which does not need to be packed
+    rm -f ${D}/${nonarch_libdir}/node_modules/*/npm-shrinkwrap.json
+    rm -f ${D}/${nonarch_libdir}/node_modules/@*/*/npm-shrinkwrap.json
+}
+
+FILES:${PN} += " \
+    ${bindir} \
+    ${nonarch_libdir} \
+"
+
+EXPORT_FUNCTIONS do_configure do_compile do_install
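
A typical consumer pairs this class with the npm/npmsw fetchers; the exact SRC_URI options depend on the fetcher and package, so treat this as a hedged sketch (hypothetical package, checksums omitted):

    # hypothetical recipe snippet
    SRC_URI = " \
        npm://registry.npmjs.org/;package=left-pad;version=1.3.0 \
        npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
    "
    S = "${WORKDIR}/npm"
    inherit npm
    # set to 1 to also install devDependencies
    NPM_INSTALL_DEV = "0"
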
diff --git a/poky/meta/classes-recipe/packagegroup.bbclass b/poky/meta/classes-recipe/packagegroup.bbclass
new file mode 100644
index 0000000..6f17fc7
--- /dev/null
+++ b/poky/meta/classes-recipe/packagegroup.bbclass
@@ -0,0 +1,67 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Class for packagegroup (package group) recipes
+
+# By default, only the packagegroup package itself is in PACKAGES.
+# -dbg and -dev flavours are handled by the anonfunc below.
+# This means that packagegroup recipes used to build multiple packagegroup
+# packages have to modify PACKAGES after inheriting packagegroup.bbclass.
+PACKAGES = "${PN}"
+
+# By default, packagegroup packages do not depend on a certain architecture.
+# Only if dependencies are modified by MACHINE_FEATURES does PACKAGE_ARCH
+# need to be set to MACHINE_ARCH before inheriting packagegroup.bbclass
+PACKAGE_ARCH ?= "all"
+
+# Fully expanded - so it applies the overrides as well
+PACKAGE_ARCH_EXPANDED := "${PACKAGE_ARCH}"
+
+LICENSE ?= "MIT"
+
+inherit ${@oe.utils.ifelse(d.getVar('PACKAGE_ARCH_EXPANDED') == 'all', 'allarch', '')}
+
+# This automatically adds -dbg and -dev flavours of all PACKAGES
+# to the list. Their dependencies (RRECOMMENDS) are handled as usual
+# by package_depchains in a following step.
+# Also mark all packages as ALLOW_EMPTY
+python () {
+    packages = d.getVar('PACKAGES').split()
+    if d.getVar('PACKAGEGROUP_DISABLE_COMPLEMENTARY') != '1':
+        types = ['', '-dbg', '-dev']
+        if bb.utils.contains('DISTRO_FEATURES', 'ptest', True, False, d):
+            types.append('-ptest')
+        packages = [pkg + suffix for pkg in packages
+                    for suffix in types]
+        d.setVar('PACKAGES', ' '.join(packages))
+    for pkg in packages:
+        d.setVar('ALLOW_EMPTY:%s' % pkg, '1')
+}
+
+# We don't want to look at shared library dependencies for the
+# dbg packages
+DEPCHAIN_DBGDEFAULTDEPS = "1"
+
+# We only need the packaging tasks - disable the rest
+deltask do_fetch
+deltask do_unpack
+deltask do_patch
+deltask do_configure
+deltask do_compile
+deltask do_install
+deltask do_populate_sysroot
+
+INHIBIT_DEFAULT_DEPS = "1"
+
+python () {
+    if bb.data.inherits_class('nativesdk', d):
+        return
+    initman = d.getVar("VIRTUAL-RUNTIME_init_manager")
+    if initman and initman in ['sysvinit', 'systemd'] and not bb.utils.contains('DISTRO_FEATURES', initman, True, False, d):
+        bb.fatal("Please ensure that your setting of VIRTUAL-RUNTIME_init_manager (%s) matches the entries enabled in DISTRO_FEATURES" % initman)
+}
+
+CVE_PRODUCT = ""
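
For context, a packagegroup recipe is usually just a list of runtime dependencies; a minimal, hypothetical example:

    # packagegroup-example.bb (hypothetical)
    SUMMARY = "Example package group"
    inherit packagegroup
    # -dbg/-dev (and -ptest when enabled) variants are added automatically
    RDEPENDS:${PN} = "dropbear rsync"
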
diff --git a/poky/meta/classes-recipe/perl-version.bbclass b/poky/meta/classes-recipe/perl-version.bbclass
new file mode 100644
index 0000000..269ac9e
--- /dev/null
+++ b/poky/meta/classes-recipe/perl-version.bbclass
@@ -0,0 +1,72 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+PERL_OWN_DIR = ""
+
+# Determine the staged version of perl from the perl configuration file
+# Assign vardepvalue, because otherwise signature is changed before and after
+# perl is built (from None to real version in config.sh).
+get_perl_version[vardepvalue] = "${PERL_OWN_DIR}"
+def get_perl_version(d):
+    import re
+    cfg = d.expand('${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/config.sh')
+    try:
+        f = open(cfg, 'r')
+    except IOError:
+        return None
+    l = f.readlines();
+    f.close();
+    r = re.compile(r"^version='(\d*\.\d*\.\d*)'")
+    for s in l:
+        m = r.match(s)
+        if m:
+            return m.group(1)
+    return None
+
+PERLVERSION := "${@get_perl_version(d)}"
+PERLVERSION[vardepvalue] = ""
+
+
+# Determine the staged arch of perl from the perl configuration file
+# Assign vardepvalue, because otherwise signature is changed before and after
+# perl is built (from None to real version in config.sh).
+def get_perl_arch(d):
+    import re
+    cfg = d.expand('${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/config.sh')
+    try:
+        f = open(cfg, 'r')
+    except IOError:
+        return None
+    l = f.readlines();
+    f.close();
+    r = re.compile("^archname='([^']*)'")
+    for s in l:
+        m = r.match(s)
+        if m:
+            return m.group(1)
+    return None
+
+PERLARCH := "${@get_perl_arch(d)}"
+PERLARCH[vardepvalue] = ""
+
+# Determine the staged arch of perl-native from the perl configuration file
+# Assign vardepvalue, because otherwise signature is changed before and after
+# perl is built (from None to real version in config.sh).
+def get_perl_hostarch(d):
+    import re
+    cfg = d.expand('${STAGING_LIBDIR_NATIVE}/perl5/config.sh')
+    try:
+        f = open(cfg, 'r')
+    except IOError:
+        return None
+    l = f.readlines();
+    f.close();
+    r = re.compile("^archname='([^']*)'")
+    for s in l:
+        m = r.match(s)
+        if m:
+            return m.group(1)
+    return None
diff --git a/poky/meta/classes-recipe/perlnative.bbclass b/poky/meta/classes-recipe/perlnative.bbclass
new file mode 100644
index 0000000..d56ec4a
--- /dev/null
+++ b/poky/meta/classes-recipe/perlnative.bbclass
@@ -0,0 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+EXTRANATIVEPATH += "perl-native"
+DEPENDS += "perl-native"
+OECMAKE_PERLNATIVE_DIR = "${STAGING_BINDIR_NATIVE}/perl-native"
diff --git a/poky/meta/classes-recipe/pixbufcache.bbclass b/poky/meta/classes-recipe/pixbufcache.bbclass
new file mode 100644
index 0000000..107e388
--- /dev/null
+++ b/poky/meta/classes-recipe/pixbufcache.bbclass
@@ -0,0 +1,69 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# This class will generate the proper postinst/postrm scriptlets for pixbuf
+# packages.
+#
+
+DEPENDS:append:class-target = " qemu-native"
+inherit qemu
+
+PIXBUF_PACKAGES ??= "${PN}"
+
+PACKAGE_WRITE_DEPS += "qemu-native gdk-pixbuf-native"
+
+pixbufcache_common() {
+if [ "x$D" != "x" ]; then
+	$INTERCEPT_DIR/postinst_intercept update_pixbuf_cache ${PKG} mlprefix=${MLPREFIX} binprefix=${MLPREFIX} libdir=${libdir} \
+		bindir=${bindir} base_libdir=${base_libdir}
+else
+
+	# Update the pixbuf loaders in case they haven't been registered yet
+	${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders --update-cache
+
+	if [ -x ${bindir}/gtk-update-icon-cache ] && [ -d ${datadir}/icons ]; then
+		for icondir in /usr/share/icons/*; do
+			if [ -d ${icondir} ]; then
+				gtk-update-icon-cache -t -q ${icondir}
+			fi
+		done
+	fi
+fi
+}
+
+python populate_packages:append() {
+    pixbuf_pkgs = d.getVar('PIXBUF_PACKAGES').split()
+
+    for pkg in pixbuf_pkgs:
+        bb.note("adding pixbuf postinst and postrm scripts to %s" % pkg)
+        postinst = d.getVar('pkg_postinst:%s' % pkg) or d.getVar('pkg_postinst')
+        if not postinst:
+            postinst = '#!/bin/sh\n'
+        postinst += d.getVar('pixbufcache_common')
+        d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+        postrm = d.getVar('pkg_postrm:%s' % pkg) or d.getVar('pkg_postrm')
+        if not postrm:
+            postrm = '#!/bin/sh\n'
+        postrm += d.getVar('pixbufcache_common')
+        d.setVar('pkg_postrm:%s' % pkg, postrm)
+}
+
+gdkpixbuf_complete() {
+GDK_PIXBUF_FATAL_LOADER=1 ${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders --update-cache || exit 1
+}
+
+DEPENDS:append:class-native = " gdk-pixbuf-native"
+SYSROOT_PREPROCESS_FUNCS:append:class-native = " pixbufcache_sstate_postinst"
+
+pixbufcache_sstate_postinst() {
+	mkdir -p ${SYSROOT_DESTDIR}${bindir}
+	dest=${SYSROOT_DESTDIR}${bindir}/postinst-${PN}
+        echo '#!/bin/sh' > $dest
+	echo "${gdkpixbuf_complete}" >> $dest
+	chmod 0755 $dest
+}
diff --git a/poky/meta/classes-recipe/pkgconfig.bbclass b/poky/meta/classes-recipe/pkgconfig.bbclass
new file mode 100644
index 0000000..1e1f382
--- /dev/null
+++ b/poky/meta/classes-recipe/pkgconfig.bbclass
@@ -0,0 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+DEPENDS:prepend = "pkgconfig-native "
+
diff --git a/poky/meta/classes-recipe/populate_sdk.bbclass b/poky/meta/classes-recipe/populate_sdk.bbclass
new file mode 100644
index 0000000..caeef5d
--- /dev/null
+++ b/poky/meta/classes-recipe/populate_sdk.bbclass
@@ -0,0 +1,13 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# The majority of populate_sdk is located in populate_sdk_base
+# This chunk simply facilitates compatibility with SDK-only recipes.
+
+inherit populate_sdk_base
+
+addtask populate_sdk after do_install before do_build
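+# Illustrative invocation for an image recipe (assuming core-image-minimal is
+# available in the build configuration):
+#   bitbake core-image-minimal -c populate_sdk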
+
diff --git a/poky/meta/classes-recipe/populate_sdk_base.bbclass b/poky/meta/classes-recipe/populate_sdk_base.bbclass
new file mode 100644
index 0000000..0be108a
--- /dev/null
+++ b/poky/meta/classes-recipe/populate_sdk_base.bbclass
@@ -0,0 +1,384 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+PACKAGES = ""
+
+inherit image-postinst-intercepts image-artifact-names
+
+# Wildcards specifying complementary packages to install for every package that has been explicitly
+# installed into the rootfs
+COMPLEMENTARY_GLOB[dev-pkgs] = '*-dev'
+COMPLEMENTARY_GLOB[staticdev-pkgs] = '*-staticdev'
+COMPLEMENTARY_GLOB[doc-pkgs] = '*-doc'
+COMPLEMENTARY_GLOB[dbg-pkgs] = '*-dbg'
+COMPLEMENTARY_GLOB[src-pkgs] = '*-src'
+COMPLEMENTARY_GLOB[ptest-pkgs] = '*-ptest'
+COMPLEMENTARY_GLOB[bash-completion-pkgs] = '*-bash-completion'
+
+def complementary_globs(featurevar, d):
+    all_globs = d.getVarFlags('COMPLEMENTARY_GLOB')
+    globs = []
+    features = set((d.getVar(featurevar) or '').split())
+    for name, glob in all_globs.items():
+        if name in features:
+            globs.append(glob)
+    return ' '.join(globs)
+
+SDKIMAGE_FEATURES ??= "dev-pkgs dbg-pkgs src-pkgs ${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', 'doc-pkgs', '', d)}"
+SDKIMAGE_INSTALL_COMPLEMENTARY = '${@complementary_globs("SDKIMAGE_FEATURES", d)}'
+SDKIMAGE_INSTALL_COMPLEMENTARY[vardeps] += "SDKIMAGE_FEATURES"
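+# Worked example (illustrative): with SDKIMAGE_FEATURES = "dev-pkgs dbg-pkgs",
+# complementary_globs("SDKIMAGE_FEATURES", d) returns the globs "*-dev" and
+# "*-dbg", so the -dev and -dbg variants of every explicitly installed package
+# are pulled into the SDK target sysroot.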
+
+PACKAGE_ARCHS:append:task-populate-sdk = " sdk-provides-dummy-target"
+SDK_PACKAGE_ARCHS += "sdk-provides-dummy-${SDKPKGSUFFIX}"
+
+# List of locales to install, or "all" for all of them, or unset for none.
+SDKIMAGE_LINGUAS ?= "all"
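+# Illustrative restriction to a couple of locales (e.g. in local.conf):
+#   SDKIMAGE_LINGUAS = "en-us en-gb"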
+
+inherit rootfs_${IMAGE_PKGTYPE}
+
+SDK_DIR = "${WORKDIR}/sdk"
+SDK_OUTPUT = "${SDK_DIR}/image"
+SDK_DEPLOY = "${DEPLOY_DIR}/sdk"
+
+SDKDEPLOYDIR = "${WORKDIR}/${SDKMACHINE}-deploy-${PN}-populate-sdk"
+
+B:task-populate-sdk = "${SDK_DIR}"
+
+SDKTARGETSYSROOT = "${SDKPATH}/sysroots/${REAL_MULTIMACH_TARGET_SYS}"
+
+SDK_TOOLCHAIN_LANGS ??= ""
+SDK_TOOLCHAIN_LANGS:remove:sdkmingw32 = "rust"
+# libstd-rs doesn't build for mips n32 (compiler constraint errors)
+SDK_TOOLCHAIN_LANGS:remove:mipsarchn32 = "rust"
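+# Illustrative: a distro can also ship Go and Rust cross toolchains in the SDK
+# with
+#   SDK_TOOLCHAIN_LANGS:append = " go rust"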
+
+TOOLCHAIN_HOST_TASK ?= " \
+    nativesdk-packagegroup-sdk-host \
+    packagegroup-cross-canadian-${MACHINE} \
+    ${@bb.utils.contains('SDK_TOOLCHAIN_LANGS', 'go', 'packagegroup-go-cross-canadian-${MACHINE}', '', d)} \
+    ${@bb.utils.contains('SDK_TOOLCHAIN_LANGS', 'rust', 'packagegroup-rust-cross-canadian-${MACHINE}', '', d)} \
+"
+TOOLCHAIN_HOST_TASK_ATTEMPTONLY ?= ""
+TOOLCHAIN_TARGET_TASK ?= " \
+    ${@multilib_pkg_extend(d, 'packagegroup-core-standalone-sdk-target')} \
+    ${@bb.utils.contains('SDK_TOOLCHAIN_LANGS', 'go', multilib_pkg_extend(d, 'packagegroup-go-sdk-target'), '', d)} \
+    ${@bb.utils.contains('SDK_TOOLCHAIN_LANGS', 'rust', multilib_pkg_extend(d, 'libstd-rs'), '', d)} \
+    target-sdk-provides-dummy \
+"
+TOOLCHAIN_TARGET_TASK_ATTEMPTONLY ?= ""
+TOOLCHAIN_OUTPUTNAME ?= "${SDK_NAME}-toolchain-${SDK_VERSION}"
+
+# Default suffix for the archived SDK
+SDK_ARCHIVE_TYPE ?= "tar.xz"
+SDK_XZ_COMPRESSION_LEVEL ?= "-9"
+SDK_XZ_OPTIONS ?= "${XZ_DEFAULTS} ${SDK_XZ_COMPRESSION_LEVEL}"
+
+# Support different SDK archive types according to SDK_ARCHIVE_TYPE; currently zip and tar.xz are supported
+python () {
+    if d.getVar('SDK_ARCHIVE_TYPE') == 'zip':
+       d.setVar('SDK_ARCHIVE_DEPENDS', 'zip-native')
+       # SDK_ARCHIVE_CMD is used to generate the archived SDK ${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE} from the input dir ${SDK_OUTPUT}/${SDKPATH} into the output dir ${SDKDEPLOYDIR}.
+       # It is recommended to cd into the input dir first to avoid embedding build paths in the archive
+       d.setVar('SDK_ARCHIVE_CMD', 'cd ${SDK_OUTPUT}/${SDKPATH}; zip -r -y ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE} .')
+    else:
+       d.setVar('SDK_ARCHIVE_DEPENDS', 'xz-native')
+       d.setVar('SDK_ARCHIVE_CMD', 'cd ${SDK_OUTPUT}/${SDKPATH}; tar ${SDKTAROPTS} -cf - . | xz ${SDK_XZ_OPTIONS} > ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE}')
+}
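+# Illustrative usage (e.g. in local.conf): produce a zip SDK instead of the
+# default tar.xz archive:
+#   SDK_ARCHIVE_TYPE = "zip"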
+
+SDK_RDEPENDS = "${TOOLCHAIN_TARGET_TASK} ${TOOLCHAIN_HOST_TASK}"
+SDK_DEPENDS = "virtual/fakeroot-native ${SDK_ARCHIVE_DEPENDS} cross-localedef-native nativesdk-qemuwrapper-cross ${@' '.join(["%s-qemuwrapper-cross" % m for m in d.getVar("MULTILIB_VARIANTS").split()])} qemuwrapper-cross"
+PATH:prepend = "${WORKDIR}/recipe-sysroot/${SDKPATHNATIVE}${bindir}/crossscripts:${@":".join(all_multilib_tune_values(d, 'STAGING_BINDIR_CROSS').split())}:"
+SDK_DEPENDS += "nativesdk-glibc-locale"
+
+# We want REAL_MULTIMACH_TARGET_SYS to point to TUNE_PKGARCH, not PACKAGE_ARCH,
+# as the latter could be set to MACHINE_ARCH
+REAL_MULTIMACH_TARGET_SYS = "${TUNE_PKGARCH}${TARGET_VENDOR}-${TARGET_OS}"
+
+PID = "${@os.getpid()}"
+
+EXCLUDE_FROM_WORLD = "1"
+
+SDK_PACKAGING_FUNC ?= "create_shar"
+SDK_PRE_INSTALL_COMMAND ?= ""
+SDK_POST_INSTALL_COMMAND ?= ""
+SDK_RELOCATE_AFTER_INSTALL ?= "1"
+
+SDKEXTPATH ??= "~/${@d.getVar('DISTRO')}_sdk"
+SDK_TITLE ??= "${@d.getVar('DISTRO_NAME') or d.getVar('DISTRO')} SDK"
+
+SDK_TARGET_MANIFEST = "${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.target.manifest"
+SDK_HOST_MANIFEST = "${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.host.manifest"
+SDK_EXT_TARGET_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.target.manifest"
+SDK_EXT_HOST_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.host.manifest"
+
+SDK_PRUNE_SYSROOT_DIRS ?= "/dev"
+
+python write_target_sdk_manifest () {
+    from oe.sdk import sdk_list_installed_packages
+    from oe.utils import format_pkg_list
+    sdkmanifestdir = os.path.dirname(d.getVar("SDK_TARGET_MANIFEST"))
+    pkgs = sdk_list_installed_packages(d, True)
+    if not os.path.exists(sdkmanifestdir):
+        bb.utils.mkdirhier(sdkmanifestdir)
+    with open(d.getVar('SDK_TARGET_MANIFEST'), 'w') as output:
+        output.write(format_pkg_list(pkgs, 'ver'))
+}
+
+sdk_prune_dirs () {
+    for d in ${SDK_PRUNE_SYSROOT_DIRS}; do
+        rm -rf ${SDK_OUTPUT}${SDKTARGETSYSROOT}$d
+    done
+}
+
+python write_sdk_test_data() {
+    from oe.data import export2json
+    testdata = "%s/%s.testdata.json" % (d.getVar('SDKDEPLOYDIR'), d.getVar('TOOLCHAIN_OUTPUTNAME'))
+    bb.utils.mkdirhier(os.path.dirname(testdata))
+    export2json(d, testdata)
+}
+
+python write_host_sdk_manifest () {
+    from oe.sdk import sdk_list_installed_packages
+    from oe.utils import format_pkg_list
+    sdkmanifestdir = os.path.dirname(d.getVar("SDK_HOST_MANIFEST"))
+    pkgs = sdk_list_installed_packages(d, False)
+    if not os.path.exists(sdkmanifestdir):
+        bb.utils.mkdirhier(sdkmanifestdir)
+    with open(d.getVar('SDK_HOST_MANIFEST'), 'w') as output:
+        output.write(format_pkg_list(pkgs, 'ver'))
+}
+
+POPULATE_SDK_POST_TARGET_COMMAND:append = " write_sdk_test_data ; "
+POPULATE_SDK_POST_TARGET_COMMAND:append:task-populate-sdk  = " write_target_sdk_manifest; sdk_prune_dirs; "
+POPULATE_SDK_POST_HOST_COMMAND:append:task-populate-sdk = " write_host_sdk_manifest; "
+
+SDK_PACKAGING_COMMAND = "${@'${SDK_PACKAGING_FUNC};' if '${SDK_PACKAGING_FUNC}' else ''}"
+SDK_POSTPROCESS_COMMAND = " create_sdk_files; check_sdk_sysroots; archive_sdk; ${SDK_PACKAGING_COMMAND} "
+
+def populate_sdk_common(d):
+    from oe.sdk import populate_sdk
+    from oe.manifest import create_manifest, Manifest
+
+    # Handle package exclusions
+    excl_pkgs = (d.getVar("PACKAGE_EXCLUDE") or "").split()
+    inst_pkgs = (d.getVar("PACKAGE_INSTALL") or "").split()
+    inst_attempt_pkgs = (d.getVar("PACKAGE_INSTALL_ATTEMPTONLY") or "").split()
+
+    d.setVar('PACKAGE_INSTALL_ORIG', ' '.join(inst_pkgs))
+    d.setVar('PACKAGE_INSTALL_ATTEMPTONLY', ' '.join(inst_attempt_pkgs))
+
+    for pkg in excl_pkgs:
+        if pkg in inst_pkgs:
+            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_pkgs))
+            inst_pkgs.remove(pkg)
+
+        if pkg in inst_attempt_pkgs:
+            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL_ATTEMPTONLY (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_pkgs))
+            inst_attempt_pkgs.remove(pkg)
+
+    d.setVar("PACKAGE_INSTALL", ' '.join(inst_pkgs))
+    d.setVar("PACKAGE_INSTALL_ATTEMPTONLY", ' '.join(inst_attempt_pkgs))
+
+    pn = d.getVar('PN')
+    runtime_mapping_rename("TOOLCHAIN_TARGET_TASK", pn, d)
+    runtime_mapping_rename("TOOLCHAIN_TARGET_TASK_ATTEMPTONLY", pn, d)
+
+    ld = bb.data.createCopy(d)
+    ld.setVar("PKGDATA_DIR", "${STAGING_DIR}/${SDK_ARCH}-${SDKPKGSUFFIX}${SDK_VENDOR}-${SDK_OS}/pkgdata")
+    runtime_mapping_rename("TOOLCHAIN_HOST_TASK", pn, ld)
+    runtime_mapping_rename("TOOLCHAIN_HOST_TASK_ATTEMPTONLY", pn, ld)
+    d.setVar("TOOLCHAIN_HOST_TASK", ld.getVar("TOOLCHAIN_HOST_TASK"))
+    d.setVar("TOOLCHAIN_HOST_TASK_ATTEMPTONLY", ld.getVar("TOOLCHAIN_HOST_TASK_ATTEMPTONLY"))
+    
+    # create target/host SDK manifests
+    create_manifest(d, manifest_dir=d.getVar('SDK_DIR'),
+                    manifest_type=Manifest.MANIFEST_TYPE_SDK_HOST)
+    create_manifest(d, manifest_dir=d.getVar('SDK_DIR'),
+                    manifest_type=Manifest.MANIFEST_TYPE_SDK_TARGET)
+
+    populate_sdk(d)
+
+fakeroot python do_populate_sdk() {
+    populate_sdk_common(d)
+}
+SSTATETASKS += "do_populate_sdk"
+SSTATE_SKIP_CREATION:task-populate-sdk = '1'
+do_populate_sdk[cleandirs] = "${SDKDEPLOYDIR}"
+do_populate_sdk[sstate-inputdirs] = "${SDKDEPLOYDIR}"
+do_populate_sdk[sstate-outputdirs] = "${SDK_DEPLOY}"
+do_populate_sdk[stamp-extra-info] = "${MACHINE_ARCH}${SDKMACHINE}"
+python do_populate_sdk_setscene () {
+    sstate_setscene(d)
+}
+addtask do_populate_sdk_setscene
+
+PSEUDO_IGNORE_PATHS .= ",${SDKDEPLOYDIR},${WORKDIR}/oe-sdk-repo,${WORKDIR}/sstate-build-populate_sdk"
+
+fakeroot create_sdk_files() {
+	cp ${COREBASE}/scripts/relocate_sdk.py ${SDK_OUTPUT}/${SDKPATH}/
+
+	# Replace the ##DEFAULT_INSTALL_DIR## with the correct pattern.
+	# Escape special characters like '+' and '.' in the SDKPATH
+	escaped_sdkpath=$(echo ${SDKPATH} |sed -e "s:[\+\.]:\\\\\\\\\0:g")
+	sed -i -e "s:##DEFAULT_INSTALL_DIR##:$escaped_sdkpath:" ${SDK_OUTPUT}/${SDKPATH}/relocate_sdk.py
+
+       mkdir -p ${SDK_OUTPUT}/${SDKPATHNATIVE}${sysconfdir}/
+       echo '${SDKPATHNATIVE}${libdir_nativesdk}
+${SDKPATHNATIVE}${base_libdir_nativesdk}
+include /etc/ld.so.conf' > ${SDK_OUTPUT}/${SDKPATHNATIVE}${sysconfdir}/ld.so.conf
+}
+
+python check_sdk_sysroots() {
+    # Fails build if there are broken or dangling symlinks in SDK sysroots
+
+    if d.getVar('CHECK_SDK_SYSROOTS') != '1':
+        # disabled, bail out
+        return
+
+    def norm_path(path):
+        return os.path.abspath(path)
+
+    # Get scan root
+    SCAN_ROOT = norm_path("%s/%s/sysroots/" % (d.getVar('SDK_OUTPUT'),
+                                               d.getVar('SDKPATH')))
+
+    bb.note('Checking SDK sysroots at ' + SCAN_ROOT)
+
+    def check_symlink(linkPath):
+        if not os.path.islink(linkPath):
+            return
+
+        linkDirPath = os.path.dirname(linkPath)
+
+        targetPath = os.readlink(linkPath)
+        if not os.path.isabs(targetPath):
+            targetPath = os.path.join(linkDirPath, targetPath)
+        targetPath = norm_path(targetPath)
+
+        if SCAN_ROOT != os.path.commonprefix( [SCAN_ROOT, targetPath] ):
+            bb.error("Escaping symlink {0!s} --> {1!s}".format(linkPath, targetPath))
+            return
+
+        if not os.path.exists(targetPath):
+            bb.error("Broken symlink {0!s} --> {1!s}".format(linkPath, targetPath))
+            return
+
+        if os.path.isdir(targetPath):
+            dir_walk(targetPath)
+
+    def walk_error_handler(e):
+        bb.error(str(e))
+
+    def dir_walk(rootDir):
+        for dirPath,subDirEntries,fileEntries in os.walk(rootDir, followlinks=False, onerror=walk_error_handler):
+            entries = subDirEntries + fileEntries
+            for e in entries:
+                ePath = os.path.join(dirPath, e)
+                check_symlink(ePath)
+
+    # start
+    dir_walk(SCAN_ROOT)
+}
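+# The sysroot symlink check above is opt-in; enable it with (illustrative,
+# e.g. in local.conf):
+#   CHECK_SDK_SYSROOTS = "1"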
+
+SDKTAROPTS = "--owner=root --group=root"
+
+fakeroot archive_sdk() {
+	# Package it up
+	mkdir -p ${SDKDEPLOYDIR}
+	${SDK_ARCHIVE_CMD}
+}
+
+TOOLCHAIN_SHAR_EXT_TMPL ?= "${COREBASE}/meta/files/toolchain-shar-extract.sh"
+TOOLCHAIN_SHAR_REL_TMPL ?= "${COREBASE}/meta/files/toolchain-shar-relocate.sh"
+
+fakeroot create_shar() {
+	# copy in the template shar extractor script
+	cp ${TOOLCHAIN_SHAR_EXT_TMPL} ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
+
+	rm -f ${T}/pre_install_command ${T}/post_install_command
+
+	if [ "${SDK_RELOCATE_AFTER_INSTALL}" = "1" ] ; then
+		cp ${TOOLCHAIN_SHAR_REL_TMPL} ${T}/post_install_command
+	fi
+	cat << "EOF" >> ${T}/pre_install_command
+${SDK_PRE_INSTALL_COMMAND}
+EOF
+
+	cat << "EOF" >> ${T}/post_install_command
+${SDK_POST_INSTALL_COMMAND}
+EOF
+	sed -i -e '/@SDK_PRE_INSTALL_COMMAND@/r ${T}/pre_install_command' \
+		-e '/@SDK_POST_INSTALL_COMMAND@/r ${T}/post_install_command' \
+		${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
+
+	# substitute variables
+	sed -i -e 's#@SDK_ARCH@#${SDK_ARCH}#g' \
+		-e 's#@SDKPATH@#${SDKPATH}#g' \
+		-e 's#@SDKPATHINSTALL@#${SDKPATHINSTALL}#g' \
+		-e 's#@SDKEXTPATH@#${SDKEXTPATH}#g' \
+		-e 's#@OLDEST_KERNEL@#${SDK_OLDEST_KERNEL}#g' \
+		-e 's#@REAL_MULTIMACH_TARGET_SYS@#${REAL_MULTIMACH_TARGET_SYS}#g' \
+		-e 's#@SDK_TITLE@#${@d.getVar("SDK_TITLE").replace('&', '\\&')}#g' \
+		-e 's#@SDK_VERSION@#${SDK_VERSION}#g' \
+		-e '/@SDK_PRE_INSTALL_COMMAND@/d' \
+		-e '/@SDK_POST_INSTALL_COMMAND@/d' \
+		-e 's#@SDK_GCC_VER@#${@oe.utils.host_gcc_version(d, taskcontextonly=True)}#g' \
+		-e 's#@SDK_ARCHIVE_TYPE@#${SDK_ARCHIVE_TYPE}#g' \
+		${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
+
+	# add execution permission
+	chmod +x ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
+
+	# append the SDK tarball
+	cat ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE} >> ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
+
+	# delete the old tarball, we don't need it anymore
+	rm ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE}
+}
+
+populate_sdk_log_check() {
+	for target in $*
+	do
+		lf_path="`dirname ${BB_LOGFILE}`/log.do_$target.${PID}"
+
+		echo "log_check: Using $lf_path as logfile"
+
+		if [ -e "$lf_path" ]; then
+			${IMAGE_PKGTYPE}_log_check $target $lf_path
+		else
+			echo "Cannot find logfile [$lf_path]"
+		fi
+		echo "Logfile is clean"
+	done
+}
+
+def sdk_command_variables(d):
+    return ['OPKG_PREPROCESS_COMMANDS','OPKG_POSTPROCESS_COMMANDS','POPULATE_SDK_POST_HOST_COMMAND','POPULATE_SDK_PRE_TARGET_COMMAND','POPULATE_SDK_POST_TARGET_COMMAND','SDK_POSTPROCESS_COMMAND','RPM_PREPROCESS_COMMANDS','RPM_POSTPROCESS_COMMANDS']
+
+def sdk_variables(d):
+    variables = ['BUILD_IMAGES_FROM_FEEDS','SDK_OS','SDK_OUTPUT','SDKPATHNATIVE','SDKTARGETSYSROOT','SDK_DIR','SDK_VENDOR','SDKIMAGE_INSTALL_COMPLEMENTARY','SDK_PACKAGE_ARCHS','SDK_OUTPUT',
+                 'SDKTARGETSYSROOT','MULTILIB_VARIANTS','MULTILIBS','ALL_MULTILIB_PACKAGE_ARCHS','MULTILIB_GLOBAL_VARIANTS','BAD_RECOMMENDATIONS','NO_RECOMMENDATIONS','PACKAGE_ARCHS',
+                 'PACKAGE_CLASSES','TARGET_VENDOR','TARGET_VENDOR','TARGET_ARCH','TARGET_OS','BBEXTENDVARIANT','FEED_DEPLOYDIR_BASE_URI', 'PACKAGE_EXCLUDE_COMPLEMENTARY', 'IMAGE_INSTALL_DEBUGFS']
+    variables.extend(sdk_command_variables(d))
+    return " ".join(variables)
+
+do_populate_sdk[vardeps] += "${@sdk_variables(d)}"
+
+python () {
+    variables = sdk_command_variables(d)
+    for var in variables:
+        if d.getVar(var, False):
+            d.setVarFlag(var, 'func', '1')
+}
+
+do_populate_sdk[file-checksums] += "${TOOLCHAIN_SHAR_REL_TMPL}:True \
+                                    ${TOOLCHAIN_SHAR_EXT_TMPL}:True"
+
+do_populate_sdk[dirs] = "${PKGDATA_DIR} ${TOPDIR}"
+do_populate_sdk[depends] += "${@' '.join([x + ':do_populate_sysroot' for x in d.getVar('SDK_DEPENDS').split()])}  ${@d.getVarFlag('do_rootfs', 'depends', False)}"
+do_populate_sdk[rdepends] = "${@' '.join([x + ':do_package_write_${IMAGE_PKGTYPE} ' + x + ':do_packagedata' for x in d.getVar('SDK_RDEPENDS').split()])}"
+do_populate_sdk[recrdeptask] += "do_packagedata do_package_write_rpm do_package_write_ipk do_package_write_deb"
+do_populate_sdk[file-checksums] += "${POSTINST_INTERCEPT_CHECKSUMS}"
+addtask populate_sdk
diff --git a/poky/meta/classes-recipe/populate_sdk_ext.bbclass b/poky/meta/classes-recipe/populate_sdk_ext.bbclass
new file mode 100644
index 0000000..925cb31
--- /dev/null
+++ b/poky/meta/classes-recipe/populate_sdk_ext.bbclass
@@ -0,0 +1,843 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Extensible SDK
+
+inherit populate_sdk_base
+
+# Used to override TOOLCHAIN_HOST_TASK in the eSDK case
+TOOLCHAIN_HOST_TASK_ESDK = " \
+    meta-environment-extsdk-${MACHINE} \
+    "
+
+SDK_RELOCATE_AFTER_INSTALL:task-populate-sdk-ext = "0"
+
+SDK_EXT = ""
+SDK_EXT:task-populate-sdk-ext = "-ext"
+
+# Options are full or minimal
+SDK_EXT_TYPE ?= "full"
+SDK_INCLUDE_PKGDATA ?= "0"
+SDK_INCLUDE_TOOLCHAIN ?= "${@'1' if d.getVar('SDK_EXT_TYPE') == 'full' else '0'}"
+SDK_INCLUDE_NATIVESDK ?= "0"
+SDK_INCLUDE_BUILDTOOLS ?= '1'
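+# Illustrative local.conf settings for a smaller eSDK that fetches most
+# artifacts from an sstate mirror but still ships the cross-toolchain:
+#   SDK_EXT_TYPE = "minimal"
+#   SDK_INCLUDE_TOOLCHAIN = "1"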
+
+SDK_RECRDEP_TASKS ?= ""
+SDK_CUSTOM_TEMPLATECONF ?= "0"
+
+ESDK_LOCALCONF_ALLOW ?= ""
+ESDK_LOCALCONF_REMOVE ?= "CONF_VERSION \
+                             BB_NUMBER_THREADS \
+                             BB_NUMBER_PARSE_THREADS \
+                             PARALLEL_MAKE \
+                             PRSERV_HOST \
+                             SSTATE_MIRRORS \
+                             DL_DIR \
+                             SSTATE_DIR \
+                             TMPDIR \
+                             BB_SERVER_TIMEOUT \
+                            "
+ESDK_CLASS_INHERIT_DISABLE ?= "buildhistory icecc"
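+# A distro can filter additional site-specific variables out of the generated
+# eSDK local.conf (MY_SITE_MIRROR is a hypothetical example):
+#   ESDK_LOCALCONF_REMOVE:append = " MY_SITE_MIRROR"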
+SDK_UPDATE_URL ?= ""
+
+SDK_TARGETS ?= "${PN}"
+
+def get_sdk_install_targets(d, images_only=False):
+    sdk_install_targets = ''
+    if images_only or d.getVar('SDK_EXT_TYPE') != 'minimal':
+        sdk_install_targets = d.getVar('SDK_TARGETS')
+
+        depd = d.getVar('BB_TASKDEPDATA', False)
+        tasklist = bb.build.tasksbetween('do_image_complete', 'do_build', d)
+        tasklist.remove('do_build')
+        for v in depd.values():
+            if v[1] in tasklist:
+                if v[0] not in sdk_install_targets:
+                    sdk_install_targets += ' {}'.format(v[0])
+
+    if not images_only:
+        if d.getVar('SDK_INCLUDE_PKGDATA') == '1':
+            sdk_install_targets += ' meta-world-pkgdata:do_allpackagedata'
+        if d.getVar('SDK_INCLUDE_TOOLCHAIN') == '1':
+            sdk_install_targets += ' meta-extsdk-toolchain:do_populate_sysroot'
+
+    return sdk_install_targets
+
+get_sdk_install_targets[vardepsexclude] = "BB_TASKDEPDATA"
+
+OE_INIT_ENV_SCRIPT ?= "oe-init-build-env"
+
+# The files from COREBASE that you want preserved in the copy of COREBASE
+# placed into the SDK. This allows someone's own setup scripts in COREBASE,
+# as well as untracked files, to be preserved.
+COREBASE_FILES ?= " \
+    oe-init-build-env \
+    scripts \
+    LICENSE \
+    .templateconf \
+"
+
+SDK_DIR:task-populate-sdk-ext = "${WORKDIR}/sdk-ext"
+B:task-populate-sdk-ext = "${SDK_DIR}"
+TOOLCHAINEXT_OUTPUTNAME ?= "${SDK_NAME}-toolchain-ext-${SDK_VERSION}"
+TOOLCHAIN_OUTPUTNAME:task-populate-sdk-ext = "${TOOLCHAINEXT_OUTPUTNAME}"
+
+SDK_EXT_TARGET_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.target.manifest"
+SDK_EXT_HOST_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.host.manifest"
+
+python write_target_sdk_ext_manifest () {
+    from oe.sdk import get_extra_sdkinfo
+    sstate_dir = d.expand('${SDK_OUTPUT}/${SDKPATH}/sstate-cache')
+    extra_info = get_extra_sdkinfo(sstate_dir)
+
+    target = d.getVar('TARGET_SYS')
+    target_multimach = d.getVar('MULTIMACH_TARGET_SYS')
+    real_target_multimach = d.getVar('REAL_MULTIMACH_TARGET_SYS')
+
+    pkgs = {}
+    os.makedirs(os.path.dirname(d.getVar('SDK_EXT_TARGET_MANIFEST')), exist_ok=True)
+    with open(d.getVar('SDK_EXT_TARGET_MANIFEST'), 'w') as f:
+        for fn in extra_info['filesizes']:
+            info = fn.split(':')
+            if info[2] in (target, target_multimach, real_target_multimach) \
+                    or info[5] == 'allarch':
+                if not info[1] in pkgs:
+                    f.write("%s %s %s\n" % (info[1], info[2], info[3]))
+                    pkgs[info[1]] = {}
+}
+python write_host_sdk_ext_manifest () {
+    from oe.sdk import get_extra_sdkinfo
+    sstate_dir = d.expand('${SDK_OUTPUT}/${SDKPATH}/sstate-cache')
+    extra_info = get_extra_sdkinfo(sstate_dir)
+    host = d.getVar('BUILD_SYS')
+    with open(d.getVar('SDK_EXT_HOST_MANIFEST'), 'w') as f:
+        for fn in extra_info['filesizes']:
+            info = fn.split(':')
+            if info[2] == host:
+                f.write("%s %s %s\n" % (info[1], info[2], info[3]))
+}
+
+SDK_POSTPROCESS_COMMAND:append:task-populate-sdk-ext = "write_target_sdk_ext_manifest; write_host_sdk_ext_manifest; "    
+
+SDK_TITLE:task-populate-sdk-ext = "${@d.getVar('DISTRO_NAME') or d.getVar('DISTRO')} Extensible SDK"
+
+def clean_esdk_builddir(d, sdkbasepath):
+    """Clean up traces of the fake build for create_filtered_tasklist()"""
+    import shutil
+    cleanpaths = ['cache', 'tmp']
+    for pth in cleanpaths:
+        fullpth = os.path.join(sdkbasepath, pth)
+        if os.path.isdir(fullpth):
+            shutil.rmtree(fullpth)
+        elif os.path.isfile(fullpth):
+            os.remove(fullpth)
+
+def create_filtered_tasklist(d, sdkbasepath, tasklistfile, conf_initpath):
+    """
+    Create a filtered list of tasks. Also double-checks that the build system
+    within the SDK basically works and required sstate artifacts are available.
+    """
+    import tempfile
+    import shutil
+    import oe.copy_buildsystem
+
+    # Create a temporary build directory that we can pass to the env setup script
+    shutil.copyfile(sdkbasepath + '/conf/local.conf', sdkbasepath + '/conf/local.conf.bak')
+    try:
+        with open(sdkbasepath + '/conf/local.conf', 'a') as f:
+            # Force the use of sstate from the build system
+            f.write('\nSSTATE_DIR:forcevariable = "%s"\n' % d.getVar('SSTATE_DIR'))
+            f.write('SSTATE_MIRRORS:forcevariable = "file://universal/(.*) file://universal-4.9/\\1 file://universal-4.9/(.*) file://universal-4.8/\\1"\n')
+            # Ensure TMPDIR is the default so that clean_esdk_builddir() can delete it
+            f.write('TMPDIR:forcevariable = "${TOPDIR}/tmp"\n')
+            f.write('TCLIBCAPPEND:forcevariable = ""\n')
+            # Drop uninative if the build isn't using it (or else NATIVELSBSTRING will
+            # be different and we won't be able to find our native sstate)
+            if not bb.data.inherits_class('uninative', d):
+                f.write('INHERIT:remove = "uninative"\n')
+
+        # Unfortunately the default SDKPATH (or even a custom value) may contain characters that bitbake
+        # will not allow in its COREBASE path, so we need to rename the directory temporarily
+        temp_sdkbasepath = d.getVar('SDK_OUTPUT') + '/tmp-renamed-sdk'
+        # Delete any existing temp dir
+        try:
+            shutil.rmtree(temp_sdkbasepath)
+        except FileNotFoundError:
+            pass
+        bb.utils.rename(sdkbasepath, temp_sdkbasepath)
+        cmdprefix = '. %s .; ' % conf_initpath
+        logfile = d.getVar('WORKDIR') + '/tasklist_bb_log.txt'
+        try:
+            oe.copy_buildsystem.check_sstate_task_list(d, get_sdk_install_targets(d), tasklistfile, cmdprefix=cmdprefix, cwd=temp_sdkbasepath, logfile=logfile)
+        except bb.process.ExecutionError as e:
+            msg = 'Failed to generate filtered task list for extensible SDK:\n%s' %  e.stdout.rstrip()
+            if 'attempted to execute unexpectedly and should have been setscened' in e.stdout:
+                msg += '\n----------\n\nNOTE: "attempted to execute unexpectedly and should have been setscened" errors indicate this may be caused by missing sstate artifacts that were likely produced in earlier builds, but have been subsequently deleted for some reason.\n'
+            bb.fatal(msg)
+        bb.utils.rename(temp_sdkbasepath, sdkbasepath)
+        # Clean out residue of running bitbake, which check_sstate_task_list()
+        # will effectively do
+        clean_esdk_builddir(d, sdkbasepath)
+    finally:
+        localconf = sdkbasepath + '/conf/local.conf'
+        if os.path.exists(localconf + '.bak'):
+            os.replace(localconf + '.bak', localconf)
+
+python copy_buildsystem () {
+    import re
+    import shutil
+    import glob
+    import oe.copy_buildsystem
+
+    oe_init_env_script = d.getVar('OE_INIT_ENV_SCRIPT')
+
+    conf_bbpath = ''
+    conf_initpath = ''
+    core_meta_subdir = ''
+
+    # Copy in all metadata layers + bitbake (as repositories)
+    buildsystem = oe.copy_buildsystem.BuildSystem('extensible SDK', d)
+    baseoutpath = d.getVar('SDK_OUTPUT') + '/' + d.getVar('SDKPATH')
+
+    # Check if a custom templateconf path is set
+    use_custom_templateconf = d.getVar('SDK_CUSTOM_TEMPLATECONF')
+
+    # Determine if we're building a derivative extensible SDK (from devtool build-sdk)
+    derivative = (d.getVar('SDK_DERIVATIVE') or '') == '1'
+    if derivative:
+        workspace_name = 'orig-workspace'
+    else:
+        workspace_name = None
+
+    corebase, sdkbblayers = buildsystem.copy_bitbake_and_layers(baseoutpath + '/layers', workspace_name)
+    conf_bbpath = os.path.join('layers', corebase, 'bitbake')
+
+    for path in os.listdir(baseoutpath + '/layers'):
+        relpath = os.path.join('layers', path, oe_init_env_script)
+        if os.path.exists(os.path.join(baseoutpath, relpath)):
+            conf_initpath = relpath
+
+        relpath = os.path.join('layers', path, 'scripts', 'devtool')
+        if os.path.exists(os.path.join(baseoutpath, relpath)):
+            scriptrelpath = os.path.dirname(relpath)
+
+        relpath = os.path.join('layers', path, 'meta')
+        if os.path.exists(os.path.join(baseoutpath, relpath, 'lib', 'oe')):
+            core_meta_subdir = relpath
+
+    d.setVar('oe_init_build_env_path', conf_initpath)
+    d.setVar('scriptrelpath', scriptrelpath)
+
+    # Write out config file for devtool
+    import configparser
+    config = configparser.ConfigParser()
+    config.add_section('General')
+    config.set('General', 'bitbake_subdir', conf_bbpath)
+    config.set('General', 'init_path', conf_initpath)
+    config.set('General', 'core_meta_subdir', core_meta_subdir)
+    config.add_section('SDK')
+    config.set('SDK', 'sdk_targets', d.getVar('SDK_TARGETS'))
+    updateurl = d.getVar('SDK_UPDATE_URL')
+    if updateurl:
+        config.set('SDK', 'updateserver', updateurl)
+    bb.utils.mkdirhier(os.path.join(baseoutpath, 'conf'))
+    with open(os.path.join(baseoutpath, 'conf', 'devtool.conf'), 'w') as f:
+        config.write(f)
+
+    unlockedsigs =  os.path.join(baseoutpath, 'conf', 'unlocked-sigs.inc')
+    with open(unlockedsigs, 'w') as f:
+        pass
+
+    # Create a layer for new recipes / appends
+    bbpath = d.getVar('BBPATH')
+    env = os.environ.copy()
+    env['PYTHONDONTWRITEBYTECODE'] = '1'
+    bb.process.run(['devtool', '--bbpath', bbpath, '--basepath', baseoutpath, 'create-workspace', '--create-only', os.path.join(baseoutpath, 'workspace')], env=env)
+
+    # Create bblayers.conf
+    bb.utils.mkdirhier(baseoutpath + '/conf')
+    with open(baseoutpath + '/conf/bblayers.conf', 'w') as f:
+        f.write('# WARNING: this configuration has been automatically generated and in\n')
+        f.write('# most cases should not be edited. If you need more flexibility than\n')
+        f.write('# this configuration provides, it is strongly suggested that you set\n')
+        f.write('# up a proper instance of the full build system and use that instead.\n\n')
+
+        # LCONF_VERSION may not be set, for example when using meta-poky
+        # so don't error if it isn't found
+        lconf_version = d.getVar('LCONF_VERSION', False)
+        if lconf_version is not None:
+            f.write('LCONF_VERSION = "%s"\n\n' % lconf_version)
+
+        f.write('BBPATH = "$' + '{TOPDIR}"\n')
+        f.write('SDKBASEMETAPATH = "$' + '{TOPDIR}"\n')
+        f.write('BBLAYERS := " \\\n')
+        for layerrelpath in sdkbblayers:
+            f.write('    $' + '{SDKBASEMETAPATH}/layers/%s \\\n' % layerrelpath)
+        f.write('    $' + '{SDKBASEMETAPATH}/workspace \\\n')
+        f.write('    "\n')
+
+    # Copy uninative tarball
+    # For now this is where uninative.bbclass expects the tarball
+    if bb.data.inherits_class('uninative', d):
+        uninative_file = d.expand('${UNINATIVE_DLDIR}/' + d.getVarFlag("UNINATIVE_CHECKSUM", d.getVar("BUILD_ARCH")) + '/${UNINATIVE_TARBALL}')
+        uninative_checksum = bb.utils.sha256_file(uninative_file)
+        uninative_outdir = '%s/downloads/uninative/%s' % (baseoutpath, uninative_checksum)
+        bb.utils.mkdirhier(uninative_outdir)
+        shutil.copy(uninative_file, uninative_outdir)
+
+    env_passthrough = (d.getVar('BB_ENV_PASSTHROUGH_ADDITIONS') or '').split()
+    env_passthrough_values = {}
+
+    # Create local.conf
+    builddir = d.getVar('TOPDIR')
+    if derivative and os.path.exists(builddir + '/conf/site.conf'):
+        shutil.copyfile(builddir + '/conf/site.conf', baseoutpath + '/conf/site.conf')
+    if derivative and os.path.exists(builddir + '/conf/auto.conf'):
+        shutil.copyfile(builddir + '/conf/auto.conf', baseoutpath + '/conf/auto.conf')
+    if derivative:
+        shutil.copyfile(builddir + '/conf/local.conf', baseoutpath + '/conf/local.conf')
+    else:
+        local_conf_allowed = (d.getVar('ESDK_LOCALCONF_ALLOW') or '').split()
+        local_conf_remove = (d.getVar('ESDK_LOCALCONF_REMOVE') or '').split()
+        def handle_var(varname, origvalue, op, newlines):
+            if varname in local_conf_remove or (origvalue.strip().startswith('/') and not varname in local_conf_allowed):
+                newlines.append('# Removed original setting of %s\n' % varname)
+                return None, op, 0, True
+            else:
+                if varname in env_passthrough:
+                    env_passthrough_values[varname] = origvalue
+                return origvalue, op, 0, True
+        varlist = ['[^#=+ ]*']
+        oldlines = []
+        if os.path.exists(builddir + '/conf/site.conf'):
+            with open(builddir + '/conf/site.conf', 'r') as f:
+                oldlines += f.readlines()
+        if os.path.exists(builddir + '/conf/auto.conf'):
+            with open(builddir + '/conf/auto.conf', 'r') as f:
+                oldlines += f.readlines()
+        if os.path.exists(builddir + '/conf/local.conf'):
+            with open(builddir + '/conf/local.conf', 'r') as f:
+                oldlines += f.readlines()
+        (updated, newlines) = bb.utils.edit_metadata(oldlines, varlist, handle_var)
+
+        with open(baseoutpath + '/conf/local.conf', 'w') as f:
+            f.write('# WARNING: this configuration has been automatically generated and in\n')
+            f.write('# most cases should not be edited. If you need more flexibility than\n')
+            f.write('# this configuration provides, it is strongly suggested that you set\n')
+            f.write('# up a proper instance of the full build system and use that instead.\n\n')
+            for line in newlines:
+                if line.strip() and not line.startswith('#'):
+                    f.write(line)
+            # Write a newline just in case there's none at the end of the original
+            f.write('\n')
+
+            f.write('TMPDIR = "${TOPDIR}/tmp"\n')
+            f.write('TCLIBCAPPEND = ""\n')
+            f.write('DL_DIR = "${TOPDIR}/downloads"\n')
+
+            if bb.data.inherits_class('uninative', d):
+               f.write('INHERIT += "%s"\n' % 'uninative')
+               f.write('UNINATIVE_CHECKSUM[%s] = "%s"\n\n' % (d.getVar('BUILD_ARCH'), uninative_checksum))
+            f.write('CONF_VERSION = "%s"\n\n' % d.getVar('CONF_VERSION', False))
+
+            # Some classes are not suitable for SDK, remove them from INHERIT
+            f.write('INHERIT:remove = "%s"\n' % d.getVar('ESDK_CLASS_INHERIT_DISABLE', False))
+
+            # Bypass the default connectivity check if any
+            f.write('CONNECTIVITY_CHECK_URIS = ""\n\n')
+
+            # This warning will come out if reverse dependencies for a task
+            # don't have sstate as well as the task itself. We already know
+            # this will be the case for the extensible sdk, so turn off the
+            # warning.
+            f.write('SIGGEN_LOCKEDSIGS_SSTATE_EXISTS_CHECK = "none"\n\n')
+
+            # Warn if the sigs in the locked-signature file don't match
+            # the sig computed from the metadata.
+            f.write('SIGGEN_LOCKEDSIGS_TASKSIG_CHECK = "warn"\n\n')
+
+            # We want to be able to set this without a full reparse
+            f.write('BB_HASHCONFIG_IGNORE_VARS:append = " SIGGEN_UNLOCKED_RECIPES"\n\n')
+
+            # Set up which tasks are ignored for run on install
+            f.write('BB_SETSCENE_ENFORCE_IGNORE_TASKS = "%:* *:do_shared_workdir *:do_rm_work wic-tools:* *:do_addto_recipe_sysroot"\n\n')
+
+            # Hide the config information from bitbake output (since it's fixed within the SDK)
+            f.write('BUILDCFG_HEADER = ""\n\n')
+
+            # Write METADATA_REVISION
+            f.write('METADATA_REVISION = "%s"\n\n' % d.getVar('METADATA_REVISION'))
+
+            f.write('# Provide a flag to indicate we are in the EXT_SDK Context\n')
+            f.write('WITHIN_EXT_SDK = "1"\n\n')
+
+            # Map gcc-dependent uninative sstate cache for installer usage
+            f.write('SSTATE_MIRRORS += " file://universal/(.*) file://universal-4.9/\\1 file://universal-4.9/(.*) file://universal-4.8/\\1"\n\n')
+
+            if d.getVar("PRSERV_HOST"):
+                # Override this, we now include PR data, so it should only point to the local database
+                f.write('PRSERV_HOST = "localhost:0"\n\n')
+
+            # Allow additional config through sdk-extra.conf
+            fn = bb.cookerdata.findConfigFile('sdk-extra.conf', d)
+            if fn:
+                with open(fn, 'r') as xf:
+                    for line in xf:
+                        f.write(line)
+
+            # If you define a sdk_extraconf() function then it can contain additional config
+            # (Though this is awkward; sdk-extra.conf should probably be used instead)
+            extraconf = (d.getVar('sdk_extraconf') or '').strip()
+            if extraconf:
+                # Strip off any leading / trailing spaces
+                for line in extraconf.splitlines():
+                    f.write(line.strip() + '\n')
+
+            f.write('require conf/locked-sigs.inc\n')
+            f.write('require conf/unlocked-sigs.inc\n')
+
+    # Copy multiple configurations if they exist in the users config directory
+    if d.getVar('BBMULTICONFIG') is not None:
+        bb.utils.mkdirhier(os.path.join(baseoutpath, 'conf', 'multiconfig'))
+        for mc in d.getVar('BBMULTICONFIG').split():
+            dest_stub = "/conf/multiconfig/%s.conf" % (mc,)
+            if os.path.exists(builddir + dest_stub):
+                shutil.copyfile(builddir + dest_stub, baseoutpath + dest_stub)
+
+    cachedir = os.path.join(baseoutpath, 'cache')
+    bb.utils.mkdirhier(cachedir)
+    bb.parse.siggen.copy_unitaskhashes(cachedir)
+
+    # If PR Service is in use, we need to export this as well
+    bb.note('Do we have a pr database?')
+    if d.getVar("PRSERV_HOST"):
+        bb.note('Writing PR database...')
+        # Based on the code in classes/prexport.bbclass
+        import oe.prservice
+        #dump meta info of tables
+        localdata = d.createCopy()
+        localdata.setVar('PRSERV_DUMPOPT_COL', "1")
+        localdata.setVar('PRSERV_DUMPDIR', os.path.join(baseoutpath, 'conf'))
+        localdata.setVar('PRSERV_DUMPFILE', '${PRSERV_DUMPDIR}/prserv.inc')
+
+        bb.note('PR Database write to %s' % (localdata.getVar('PRSERV_DUMPFILE')))
+
+        retval = oe.prservice.prserv_dump_db(localdata)
+        if not retval:
+            bb.error("prexport_handler: export failed!")
+            return
+        (metainfo, datainfo) = retval
+        oe.prservice.prserv_export_tofile(localdata, metainfo, datainfo, True)
+
+    # Use templateconf.cfg file from builddir if exists
+    if os.path.exists(builddir + '/conf/templateconf.cfg') and use_custom_templateconf == '1':
+        shutil.copyfile(builddir + '/conf/templateconf.cfg', baseoutpath + '/conf/templateconf.cfg')
+    else:
+        # Write a templateconf.cfg
+        with open(baseoutpath + '/conf/templateconf.cfg', 'w') as f:
+            f.write('meta/conf/templates/default\n')
+        os.makedirs(os.path.join(baseoutpath, core_meta_subdir, 'conf/templates/default'), exist_ok=True)
+
+    # Ensure any variables set from the external environment (by way of
+    # BB_ENV_PASSTHROUGH_ADDITIONS) are set in the SDK's configuration
+    extralines = []
+    for name, value in env_passthrough_values.items():
+        actualvalue = d.getVar(name) or ''
+        if value != actualvalue:
+            extralines.append('%s = "%s"\n' % (name, actualvalue))
+    if extralines:
+        with open(baseoutpath + '/conf/local.conf', 'a') as f:
+            f.write('\n')
+            f.write('# Extra settings from environment:\n')
+            for line in extralines:
+                f.write(line)
+            f.write('\n')
+
+    # Filter the locked signatures file to just the sstate tasks we are interested in
+    excluded_targets = get_sdk_install_targets(d, images_only=True)
+    sigfile = d.getVar('WORKDIR') + '/locked-sigs.inc'
+    lockedsigs_pruned = baseoutpath + '/conf/locked-sigs.inc'
+    # nativesdk-only sigfile to merge into locked-sigs.inc
+    sdk_include_nativesdk = (d.getVar("SDK_INCLUDE_NATIVESDK") == '1')
+    nativesigfile = d.getVar('WORKDIR') + '/locked-sigs_nativesdk.inc'
+    nativesigfile_pruned = d.getVar('WORKDIR') + '/locked-sigs_nativesdk_pruned.inc'
+
+    if sdk_include_nativesdk:
+        oe.copy_buildsystem.prune_lockedsigs([],
+                                             excluded_targets.split(),
+                                             nativesigfile,
+                                             True,
+                                             nativesigfile_pruned)
+
+        oe.copy_buildsystem.merge_lockedsigs([],
+                                             sigfile,
+                                             nativesigfile_pruned,
+                                             sigfile)
+
+    oe.copy_buildsystem.prune_lockedsigs([],
+                                         excluded_targets.split(),
+                                         sigfile,
+                                         False,
+                                         lockedsigs_pruned)
+
+    sstate_out = baseoutpath + '/sstate-cache'
+    bb.utils.remove(sstate_out, True)
+
+    # uninative.bbclass sets NATIVELSBSTRING to 'universal%s' % oe.utils.host_gcc_version(d)
+    fixedlsbstring = "universal%s" % oe.utils.host_gcc_version(d)
+
+    sdk_include_toolchain = (d.getVar('SDK_INCLUDE_TOOLCHAIN') == '1')
+    sdk_ext_type = d.getVar('SDK_EXT_TYPE')
+    if (sdk_ext_type != 'minimal' or sdk_include_toolchain or derivative) and not sdk_include_nativesdk:
+        # Create the filtered task list used to generate the sstate cache shipped with the SDK
+        tasklistfn = d.getVar('WORKDIR') + '/tasklist.txt'
+        create_filtered_tasklist(d, baseoutpath, tasklistfn, conf_initpath)
+    else:
+        tasklistfn = None
+
+
+    cachedir = os.path.join(baseoutpath, 'cache')
+    bb.utils.mkdirhier(cachedir)
+    bb.parse.siggen.copy_unitaskhashes(cachedir)
+
+    # Add packagedata if enabled
+    if d.getVar('SDK_INCLUDE_PKGDATA') == '1':
+        lockedsigs_base = d.getVar('WORKDIR') + '/locked-sigs-base.inc'
+        lockedsigs_copy = d.getVar('WORKDIR') + '/locked-sigs-copy.inc'
+        shutil.move(lockedsigs_pruned, lockedsigs_base)
+        oe.copy_buildsystem.merge_lockedsigs(['do_packagedata'],
+                                             lockedsigs_base,
+                                             d.getVar('STAGING_DIR_HOST') + '/world-pkgdata/locked-sigs-pkgdata.inc',
+                                             lockedsigs_pruned,
+                                             lockedsigs_copy)
+
+    if sdk_include_toolchain:
+        lockedsigs_base = d.getVar('WORKDIR') + '/locked-sigs-base2.inc'
+        lockedsigs_toolchain = d.expand("${STAGING_DIR}/${TUNE_PKGARCH}/meta-extsdk-toolchain/locked-sigs/locked-sigs-extsdk-toolchain.inc")
+        shutil.move(lockedsigs_pruned, lockedsigs_base)
+        oe.copy_buildsystem.merge_lockedsigs([],
+                                             lockedsigs_base,
+                                             lockedsigs_toolchain,
+                                             lockedsigs_pruned)
+        oe.copy_buildsystem.create_locked_sstate_cache(lockedsigs_toolchain,
+                                                       d.getVar('SSTATE_DIR'),
+                                                       sstate_out, d,
+                                                       fixedlsbstring,
+                                                       filterfile=tasklistfn)
+
+    if sdk_ext_type == 'minimal':
+        if derivative:
+            # Assume the user is not going to set up an additional sstate
+            # mirror, thus we need to copy the additional artifacts (from
+            # workspace recipes) into the derivative SDK
+            lockedsigs_orig = d.getVar('TOPDIR') + '/conf/locked-sigs.inc'
+            if os.path.exists(lockedsigs_orig):
+                lockedsigs_extra = d.getVar('WORKDIR') + '/locked-sigs-extra.inc'
+                oe.copy_buildsystem.merge_lockedsigs(None,
+                                                     lockedsigs_orig,
+                                                     lockedsigs_pruned,
+                                                     None,
+                                                     lockedsigs_extra)
+                oe.copy_buildsystem.create_locked_sstate_cache(lockedsigs_extra,
+                                                               d.getVar('SSTATE_DIR'),
+                                                               sstate_out, d,
+                                                               fixedlsbstring,
+                                                               filterfile=tasklistfn)
+    else:
+        oe.copy_buildsystem.create_locked_sstate_cache(lockedsigs_pruned,
+                                                       d.getVar('SSTATE_DIR'),
+                                                       sstate_out, d,
+                                                       fixedlsbstring,
+                                                       filterfile=tasklistfn)
+
+    # We don't need sstate do_package files
+    for root, dirs, files in os.walk(sstate_out):
+        for name in files:
+            if name.endswith("_package.tar.zst"):
+                f = os.path.join(root, name)
+                os.remove(f)
+
+    # Write manifest file
+    # Note: at the moment we cannot include the env setup script here to keep
+    # it updated, since it gets modified during SDK installation (see
+    # sdk_ext_postinst() below), so the checksum we take here would always
+    # be different.
+    manifest_file_list = ['conf/*']
+    if d.getVar('BBMULTICONFIG') is not None:
+        manifest_file_list.append('conf/multiconfig/*')
+
+    esdk_manifest_excludes = (d.getVar('ESDK_MANIFEST_EXCLUDES') or '').split()
+    esdk_manifest_excludes_list = []
+    for exclude_item in esdk_manifest_excludes:
+        esdk_manifest_excludes_list += glob.glob(os.path.join(baseoutpath, exclude_item))
+    manifest_file = os.path.join(baseoutpath, 'conf', 'sdk-conf-manifest')
+    with open(manifest_file, 'w') as f:
+        for item in manifest_file_list:
+            for fn in glob.glob(os.path.join(baseoutpath, item)):
+                if fn == manifest_file or os.path.isdir(fn):
+                    continue
+                if fn in esdk_manifest_excludes_list:
+                    continue
+                chksum = bb.utils.sha256_file(fn)
+                f.write('%s\t%s\n' % (chksum, os.path.relpath(fn, baseoutpath)))
+}
+
+def get_current_buildtools(d):
+    """Get the file name of the current buildtools installer"""
+    import glob
+    btfiles = glob.glob(os.path.join(d.getVar('SDK_DEPLOY'), '*-buildtools-nativesdk-standalone-*.sh'))
+    btfiles.sort(key=os.path.getctime)
+    return os.path.basename(btfiles[-1])
+
+def get_sdk_required_utilities(buildtools_fn, d):
+    """Find required utilities that aren't provided by the buildtools"""
+    sanity_required_utilities = (d.getVar('SANITY_REQUIRED_UTILITIES') or '').split()
+    sanity_required_utilities.append(d.expand('${BUILD_PREFIX}gcc'))
+    sanity_required_utilities.append(d.expand('${BUILD_PREFIX}g++'))
+    if buildtools_fn:
+        buildtools_installer = os.path.join(d.getVar('SDK_DEPLOY'), buildtools_fn)
+        filelist, _ = bb.process.run('%s -l' % buildtools_installer)
+    else:
+        buildtools_installer = None
+        filelist = ""
+    localdata = bb.data.createCopy(d)
+    localdata.setVar('SDKPATH', '.')
+    sdkpathnative = localdata.getVar('SDKPATHNATIVE')
+    sdkbindirs = [localdata.getVar('bindir_nativesdk'),
+                  localdata.getVar('sbindir_nativesdk'),
+                  localdata.getVar('base_bindir_nativesdk'),
+                  localdata.getVar('base_sbindir_nativesdk')]
+    for line in filelist.splitlines():
+        splitline = line.split()
+        if len(splitline) > 5:
+            fn = splitline[5]
+            if not fn.startswith('./'):
+                fn = './%s' % fn
+            if fn.startswith(sdkpathnative):
+                relpth = '/' + os.path.relpath(fn, sdkpathnative)
+                for bindir in sdkbindirs:
+                    if relpth.startswith(bindir):
+                        relpth = os.path.relpath(relpth, bindir)
+                        if relpth in sanity_required_utilities:
+                            sanity_required_utilities.remove(relpth)
+                        break
+    return ' '.join(sanity_required_utilities)
+
+install_tools() {
+	install -d ${SDK_OUTPUT}/${SDKPATHNATIVE}${bindir_nativesdk}
+	scripts="devtool recipetool oe-find-native-sysroot runqemu* wic"
+	for script in $scripts; do
+		for scriptfn in `find ${SDK_OUTPUT}/${SDKPATH}/${scriptrelpath} -maxdepth 1 -executable -name "$script"`; do
+			targetscriptfn="${SDK_OUTPUT}/${SDKPATHNATIVE}${bindir_nativesdk}/$(basename $scriptfn)"
+			test -e ${targetscriptfn} || ln -rs ${scriptfn} ${targetscriptfn}
+		done
+	done
+	# We can't use the same method as above because files in the sysroot won't exist at this point
+	# (they get populated from sstate on installation)
+	unfsd_path="${SDK_OUTPUT}/${SDKPATHNATIVE}${bindir_nativesdk}/unfsd"
+	if [ "${SDK_INCLUDE_TOOLCHAIN}" = "1" -a ! -e $unfsd_path ] ; then
+		binrelpath=${@os.path.relpath(d.getVar('STAGING_BINDIR_NATIVE'), d.getVar('TMPDIR'))}
+		ln -rs ${SDK_OUTPUT}/${SDKPATH}/tmp/$binrelpath/unfsd $unfsd_path
+	fi
+	touch ${SDK_OUTPUT}/${SDKPATH}/.devtoolbase
+
+	# find latest buildtools-tarball and install it
+	if [ -n "${SDK_BUILDTOOLS_INSTALLER}" ]; then
+		install ${SDK_DEPLOY}/${SDK_BUILDTOOLS_INSTALLER} ${SDK_OUTPUT}/${SDKPATH}
+	fi
+
+	install -m 0644 ${COREBASE}/meta/files/ext-sdk-prepare.py ${SDK_OUTPUT}/${SDKPATH}
+}
+do_populate_sdk_ext[file-checksums] += "${COREBASE}/meta/files/ext-sdk-prepare.py:True"
+
+sdk_ext_preinst() {
+	# Since bitbake won't run as root it doesn't make sense to try and install
+	# the extensible sdk as root.
+	if [ "`id -u`" = "0" ]; then
+		echo "ERROR: The extensible sdk cannot be installed as root."
+		exit 1
+	fi
+	if ! command -v locale > /dev/null; then
+		echo "ERROR: The installer requires the locale command, please install it first"
+		exit 1
+	fi
+	# Check the LC_ALL setting configured above
+	canonicalised_locale=`echo $LC_ALL | sed 's/UTF-8/utf8/'`
+	if ! locale -a | grep -q $canonicalised_locale ; then
+		echo "ERROR: the installer requires the $LC_ALL locale to be installed (but not selected), please install it first"
+		exit 1
+	fi
+	# The relocation script used by buildtools installer requires python
+	if ! command -v python3 > /dev/null; then
+		echo "ERROR: The installer requires python3, please install it first"
+		exit 1
+	fi
+	missing_utils=""
+	for util in ${SDK_REQUIRED_UTILITIES}; do
+		if ! command -v $util > /dev/null; then
+			missing_utils="$missing_utils $util"
+		fi
+	done
+	if [ -n "$missing_utils" ] ; then
+		echo "ERROR: the SDK requires the following missing utilities, please install them: $missing_utils"
+		exit 1
+	fi
+	SDK_EXTENSIBLE="1"
+	if [ "$publish" = "1" ] && [ "${SDK_EXT_TYPE}" = "minimal" ] ; then
+		EXTRA_TAR_OPTIONS="$EXTRA_TAR_OPTIONS --exclude=sstate-cache"
+	fi
+}
+SDK_PRE_INSTALL_COMMAND:task-populate-sdk-ext = "${sdk_ext_preinst}"
+
+# FIXME this preparation should be done as part of the SDK construction
+sdk_ext_postinst() {
+	printf "\nExtracting buildtools...\n"
+	cd $target_sdk_dir
+	env_setup_script="$target_sdk_dir/environment-setup-${REAL_MULTIMACH_TARGET_SYS}"
+        if [ -n "${SDK_BUILDTOOLS_INSTALLER}" ]; then
+		printf "buildtools\ny" | ./${SDK_BUILDTOOLS_INSTALLER} > buildtools.log || { printf 'ERROR: buildtools installation failed:\n' ; cat buildtools.log ; echo "printf 'ERROR: this SDK was not fully installed and needs reinstalling\n'" >> $env_setup_script ; exit 1 ; }
+
+		# Delete the buildtools tar file since it won't be used again
+		rm -f ./${SDK_BUILDTOOLS_INSTALLER}
+		# We don't need the log either since it succeeded
+		rm -f buildtools.log
+
+		# Make sure when the user sets up the environment, they also get
+		# the buildtools-tarball tools in their path.
+		echo "# Save and reset OECORE_NATIVE_SYSROOT as buildtools may change it" >> $env_setup_script
+		echo "SAVED=\"\$OECORE_NATIVE_SYSROOT\"" >> $env_setup_script
+		echo ". $target_sdk_dir/buildtools/environment-setup*" >> $env_setup_script
+		echo "OECORE_NATIVE_SYSROOT=\"\$SAVED\"" >> $env_setup_script
+	fi
+
+	# Allow bitbake environment setup to be run as part of this SDK.
+	echo "export OE_SKIP_SDK_CHECK=1" >> $env_setup_script
+	# Work around runqemu not knowing how to get this information within the eSDK
+	echo "export DEPLOY_DIR_IMAGE=$target_sdk_dir/tmp/${@os.path.relpath(d.getVar('DEPLOY_DIR_IMAGE'), d.getVar('TMPDIR'))}" >> $env_setup_script
+
+	# Another small hack: we only need this in the path for devtool,
+	# so put it at the end of $PATH.
+	echo "export PATH=$target_sdk_dir/sysroots/${SDK_SYS}${bindir_nativesdk}:\$PATH" >> $env_setup_script
+
+	echo "printf 'SDK environment now set up; additionally you may now run devtool to perform development tasks.\nRun devtool --help for further details.\n'" >> $env_setup_script
+
+	# Warn if trying to use external bitbake and the ext SDK together
+	echo "(which bitbake > /dev/null 2>&1 && echo 'WARNING: attempting to use the extensible SDK in an environment set up to run bitbake - this may lead to unexpected results. Please source this script in a new shell session instead.') || true" >> $env_setup_script
+
+	if [ "$prepare_buildsystem" != "no" -a -n "${SDK_BUILDTOOLS_INSTALLER}" ]; then
+		printf "Preparing build system...\n"
+		# dash which is /bin/sh on Ubuntu will not preserve the
+		# current working directory when first run, nor will it set $1 when
+		# sourcing a script. That is why this has to look so ugly.
+		LOGFILE="$target_sdk_dir/preparing_build_system.log"
+		sh -c ". buildtools/environment-setup* > $LOGFILE && cd $target_sdk_dir/`dirname ${oe_init_build_env_path}` && set $target_sdk_dir && . $target_sdk_dir/${oe_init_build_env_path} $target_sdk_dir >> $LOGFILE && python3 $target_sdk_dir/ext-sdk-prepare.py $LOGFILE '${SDK_INSTALL_TARGETS}'" || { echo "printf 'ERROR: this SDK was not fully installed and needs reinstalling\n'" >> $env_setup_script ; exit 1 ; }
+	fi
+	if [ -e $target_sdk_dir/ext-sdk-prepare.py ]; then
+		rm $target_sdk_dir/ext-sdk-prepare.py
+	fi
+	echo done
+}
+
+SDK_POST_INSTALL_COMMAND:task-populate-sdk-ext = "${sdk_ext_postinst}"
+
+SDK_POSTPROCESS_COMMAND:prepend:task-populate-sdk-ext = "copy_buildsystem; install_tools; "
+
+SDK_INSTALL_TARGETS = ""
+fakeroot python do_populate_sdk_ext() {
+    # FIXME hopefully we can remove this restriction at some point, but uninative
+    # currently forces this upon us
+    if d.getVar('SDK_ARCH') != d.getVar('BUILD_ARCH'):
+        bb.fatal('The extensible SDK can currently only be built for the same architecture as the machine being built on - SDK_ARCH is set to %s (likely via setting SDKMACHINE) which is different from the architecture of the build machine (%s). Unable to continue.' % (d.getVar('SDK_ARCH'), d.getVar('BUILD_ARCH')))
+
+    # FIXME hopefully we can remove this restriction at some point, but the eSDK
+    # can only be built for the primary (default) multiconfig
+    if d.getVar('BB_CURRENT_MC') != 'default':
+        bb.fatal('The extensible SDK can currently only be built for the default multiconfig.  Currently trying to build for %s.' % d.getVar('BB_CURRENT_MC'))
+
+    # eSDK dependencies don't use the traditional variables and things don't work properly if they are set
+    d.setVar("TOOLCHAIN_HOST_TASK", "${TOOLCHAIN_HOST_TASK_ESDK}")
+    d.setVar("TOOLCHAIN_TARGET_TASK", "")
+
+    d.setVar('SDK_INSTALL_TARGETS', get_sdk_install_targets(d))
+    if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1':
+        buildtools_fn = get_current_buildtools(d)
+    else:
+        buildtools_fn = None
+    d.setVar('SDK_REQUIRED_UTILITIES', get_sdk_required_utilities(buildtools_fn, d))
+    d.setVar('SDK_BUILDTOOLS_INSTALLER', buildtools_fn)
+    d.setVar('SDKDEPLOYDIR', '${SDKEXTDEPLOYDIR}')
+    # ESDKs have a libc from the buildtools so ensure we don't ship linguas twice
+    d.delVar('SDKIMAGE_LINGUAS')
+    if d.getVar("SDK_INCLUDE_NATIVESDK") == '1':
+        generate_nativesdk_lockedsigs(d)
+    populate_sdk_common(d)
+}
+
+def generate_nativesdk_lockedsigs(d):
+    import oe.copy_buildsystem
+    sigfile = d.getVar('WORKDIR') + '/locked-sigs_nativesdk.inc'
+    oe.copy_buildsystem.generate_locked_sigs(sigfile, d)
+
+def get_ext_sdk_depends(d):
+    # Note: the deps varflag is a list not a string, so we need to specify expand=False
+    deps = d.getVarFlag('do_image_complete', 'deps', False)
+    pn = d.getVar('PN')
+    deplist = ['%s:%s' % (pn, dep) for dep in deps]
+    tasklist = bb.build.tasksbetween('do_image_complete', 'do_build', d)
+    tasklist.append('do_rootfs')
+    for task in tasklist:
+        deplist.extend((d.getVarFlag(task, 'depends') or '').split())
+    return ' '.join(deplist)
+
+python do_sdk_depends() {
+    # We have to do this separately in its own task so we avoid recursing into
+    # dependencies we don't need to (e.g. buildtools-tarball) and bringing those
+    # into the SDK's sstate-cache
+    import oe.copy_buildsystem
+    sigfile = d.getVar('WORKDIR') + '/locked-sigs.inc'
+    oe.copy_buildsystem.generate_locked_sigs(sigfile, d)
+}
+addtask sdk_depends
+
+do_sdk_depends[dirs] = "${WORKDIR}"
+do_sdk_depends[depends] = "${@get_ext_sdk_depends(d)} meta-extsdk-toolchain:do_populate_sysroot"
+do_sdk_depends[recrdeptask] = "${@d.getVarFlag('do_populate_sdk', 'recrdeptask', False)}"
+do_sdk_depends[recrdeptask] += "do_populate_lic do_package_qa do_populate_sysroot do_deploy ${SDK_RECRDEP_TASKS}"
+do_sdk_depends[rdepends] = "${@' '.join([x + ':do_package_write_${IMAGE_PKGTYPE} ' + x + ':do_packagedata' for x in d.getVar('TOOLCHAIN_HOST_TASK_ESDK').split()])}"
+
+do_populate_sdk_ext[dirs] = "${@d.getVarFlag('do_populate_sdk', 'dirs', False)}"
+
+do_populate_sdk_ext[depends] = "${@d.getVarFlag('do_populate_sdk', 'depends', False)} \
+                                ${@'buildtools-tarball:do_populate_sdk' if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1' else ''} \
+                                ${@'meta-world-pkgdata:do_collect_packagedata' if d.getVar('SDK_INCLUDE_PKGDATA') == '1' else ''} \
+                                ${@'meta-extsdk-toolchain:do_locked_sigs' if d.getVar('SDK_INCLUDE_TOOLCHAIN') == '1' else ''}"
+
+# We must avoid depending on do_build here if rm_work.bbclass is active,
+# because otherwise do_rm_work may run before do_populate_sdk_ext itself.
+# We can't mark do_populate_sdk_ext and do_sdk_depends as having to
+# run before do_rm_work, because then they would also run as part
+# of normal builds.
+do_populate_sdk_ext[rdepends] += "${@' '.join([x + ':' + (d.getVar('RM_WORK_BUILD_WITHOUT') or 'do_build') for x in d.getVar('SDK_TARGETS').split()])}"
+
+# Make sure code changes can result in rebuild
+do_populate_sdk_ext[vardeps] += "copy_buildsystem \
+                                 sdk_ext_postinst"
+
+# Since any change in the metadata of any layer should cause a rebuild of the
+# SDK (since the layers are put in the SDK), set the task to nostamp so it
+# always runs.
+do_populate_sdk_ext[nostamp] = "1"
+
+SDKEXTDEPLOYDIR = "${WORKDIR}/deploy-${PN}-populate-sdk-ext"
+
+SSTATETASKS += "do_populate_sdk_ext"
+SSTATE_SKIP_CREATION:task-populate-sdk-ext = '1'
+do_populate_sdk_ext[cleandirs] = "${SDKEXTDEPLOYDIR}"
+do_populate_sdk_ext[sstate-inputdirs] = "${SDKEXTDEPLOYDIR}"
+do_populate_sdk_ext[sstate-outputdirs] = "${SDK_DEPLOY}"
+do_populate_sdk_ext[stamp-extra-info] = "${MACHINE_ARCH}"
+
+addtask populate_sdk_ext after do_sdk_depends
diff --git a/poky/meta/classes-recipe/ptest-gnome.bbclass b/poky/meta/classes-recipe/ptest-gnome.bbclass
new file mode 100644
index 0000000..d4ad22d
--- /dev/null
+++ b/poky/meta/classes-recipe/ptest-gnome.bbclass
@@ -0,0 +1,14 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit ptest
+
+EXTRA_OECONF:append = " ${@bb.utils.contains('PTEST_ENABLED', '1', '--enable-installed-tests', '--disable-installed-tests', d)}"
+
+FILES:${PN}-ptest += "${libexecdir}/installed-tests/ \
+                      ${datadir}/installed-tests/"
+
+RDEPENDS:${PN}-ptest += "gnome-desktop-testing"
diff --git a/poky/meta/classes-recipe/ptest-perl.bbclass b/poky/meta/classes-recipe/ptest-perl.bbclass
new file mode 100644
index 0000000..c283fdd
--- /dev/null
+++ b/poky/meta/classes-recipe/ptest-perl.bbclass
@@ -0,0 +1,36 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit ptest
+
+FILESEXTRAPATHS:prepend := "${COREBASE}/meta/files:"
+
+SRC_URI += "file://ptest-perl/run-ptest"
+
+do_install_ptest_perl() {
+	install -d ${D}${PTEST_PATH}
+	if [ ! -f ${D}${PTEST_PATH}/run-ptest ]; then
+		install -m 0755 ${WORKDIR}/ptest-perl/run-ptest ${D}${PTEST_PATH}
+	fi
+	cp -r ${B}/t ${D}${PTEST_PATH}
+	chown -R root:root ${D}${PTEST_PATH}
+}
+
+FILES:${PN}-ptest:prepend = "${PTEST_PATH}/t/* ${PTEST_PATH}/run-ptest "
+
+RDEPENDS:${PN}-ptest:prepend = "perl "
+
+addtask install_ptest_perl after do_install_ptest_base before do_package
+
+python () {
+    if not bb.data.inherits_class('native', d) and not bb.data.inherits_class('cross', d):
+        d.setVarFlag('do_install_ptest_perl', 'fakeroot', '1')
+
+    # Remove all '*ptest_perl' tasks when ptest is not enabled
+    if not(d.getVar('PTEST_ENABLED') == "1"):
+        for i in ['do_install_ptest_perl']:
+            bb.build.deltask(i, d)
+}
diff --git a/poky/meta/classes-recipe/ptest.bbclass b/poky/meta/classes-recipe/ptest.bbclass
new file mode 100644
index 0000000..0383206
--- /dev/null
+++ b/poky/meta/classes-recipe/ptest.bbclass
@@ -0,0 +1,142 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+SUMMARY:${PN}-ptest ?= "${SUMMARY} - Package test files"
+DESCRIPTION:${PN}-ptest ?= "${DESCRIPTION}  \
+This package contains a test directory ${PTEST_PATH} for package test purposes."
+
+PTEST_PATH ?= "${libdir}/${BPN}/ptest"
+PTEST_BUILD_HOST_FILES ?= "Makefile"
+PTEST_BUILD_HOST_PATTERN ?= ""
+PTEST_PARALLEL_MAKE ?= "${PARALLEL_MAKE}"
+PTEST_PARALLEL_MAKEINST ?= "${PARALLEL_MAKEINST}"
+EXTRA_OEMAKE:prepend:task-compile-ptest-base = "${PTEST_PARALLEL_MAKE} "
+EXTRA_OEMAKE:prepend:task-install-ptest-base = "${PTEST_PARALLEL_MAKEINST} "
+
+FILES:${PN}-ptest += "${PTEST_PATH}"
+SECTION:${PN}-ptest = "devel"
+ALLOW_EMPTY:${PN}-ptest = "1"
+PTEST_ENABLED = "${@bb.utils.contains('DISTRO_FEATURES', 'ptest', '1', '0', d)}"
+PTEST_ENABLED:class-native = ""
+PTEST_ENABLED:class-nativesdk = ""
+PTEST_ENABLED:class-cross-canadian = ""
+RDEPENDS:${PN}-ptest += "${PN}"
+RDEPENDS:${PN}-ptest:class-native = ""
+RDEPENDS:${PN}-ptest:class-nativesdk = ""
+RRECOMMENDS:${PN}-ptest += "ptest-runner"
+
+PACKAGES =+ "${@bb.utils.contains('PTEST_ENABLED', '1', '${PN}-ptest', '', d)}"
+
+require conf/distro/include/ptest-packagelists.inc
+
+do_configure_ptest() {
+    :
+}
+
+do_configure_ptest_base() {
+    do_configure_ptest
+}
+
+do_compile_ptest() {
+    :
+}
+
+do_compile_ptest_base() {
+    do_compile_ptest
+}
+
+do_install_ptest() {
+    :
+}
+
+do_install_ptest_base() {
+    if [ -f ${WORKDIR}/run-ptest ]; then
+        install -D ${WORKDIR}/run-ptest ${D}${PTEST_PATH}/run-ptest
+    fi
+    if grep -q install-ptest: Makefile; then
+        oe_runmake DESTDIR=${D}${PTEST_PATH} install-ptest
+    fi
+    do_install_ptest
+    chown -R root:root ${D}${PTEST_PATH}
+
+    # Strip build host paths from any installed Makefile
+    for filename in ${PTEST_BUILD_HOST_FILES}; do
+        for installed_ptest_file in $(find ${D}${PTEST_PATH} -type f -name $filename); do
+            bbnote "Stripping host paths from: $installed_ptest_file"
+            sed -e 's#${HOSTTOOLS_DIR}/*##g' \
+                -e 's#${WORKDIR}/*=#.=#g' \
+                -e 's#${WORKDIR}/*##g' \
+                -i $installed_ptest_file
+            if [ -n "${PTEST_BUILD_HOST_PATTERN}" ]; then
+               sed -E '/${PTEST_BUILD_HOST_PATTERN}/d' \
+                   -i $installed_ptest_file
+            fi
+        done
+    done
+}
+
+PTEST_BINDIR_PKGD_PATH = "${PKGD}${PTEST_PATH}/bin"
+
+# This function needs to run after apply_update_alternative_renames because the
+# aforementioned function will update the ALTERNATIVE_LINK_NAME flag. Append is
+# used here to make this function run as late as possible.
+PACKAGE_PREPROCESS_FUNCS:append = "${@bb.utils.contains('PTEST_BINDIR', '1', \
+                                    bb.utils.contains('PTEST_ENABLED', '1', ' ptest_update_alternatives', '', d), '', d)}"
+
+python ptest_update_alternatives() {
+    """
+    This function will generate the symlinks in the PTEST_BINDIR_PKGD_PATH
+    to match the renamed binaries by update-alternatives.
+    """
+
+    if not bb.data.inherits_class('update-alternatives', d) \
+           or not update_alternatives_enabled(d):
+        return
+
+    bb.note("Generating symlinks for ptest")
+    bin_paths = { d.getVar("bindir"), d.getVar("base_bindir"),
+                   d.getVar("sbindir"), d.getVar("base_sbindir") }
+    ptest_bindir = d.getVar("PTEST_BINDIR_PKGD_PATH")
+    os.mkdir(ptest_bindir)
+    for pkg in (d.getVar('PACKAGES') or "").split():
+        alternatives = update_alternatives_alt_targets(d, pkg)
+        for alt_name, alt_link, alt_target, _ in alternatives:
+            # Some alternatives are for man pages,
+            # check if the alternative is in PATH
+            if os.path.dirname(alt_link) in bin_paths:
+                os.symlink(alt_target, os.path.join(ptest_bindir, alt_name))
+}
+
+do_configure_ptest_base[dirs] = "${B}"
+do_compile_ptest_base[dirs] = "${B}"
+do_install_ptest_base[dirs] = "${B}"
+do_install_ptest_base[cleandirs] = "${D}${PTEST_PATH}"
+
+addtask configure_ptest_base after do_configure before do_compile
+addtask compile_ptest_base   after do_compile   before do_install
+addtask install_ptest_base   after do_install   before do_package do_populate_sysroot
+
+python () {
+    if not bb.data.inherits_class('native', d) and not bb.data.inherits_class('cross', d):
+        d.setVarFlag('do_install_ptest_base', 'fakeroot', '1')
+        d.setVarFlag('do_install_ptest_base', 'umask', '022')
+
+    # Remove all '*ptest_base' tasks when ptest is not enabled
+    if not(d.getVar('PTEST_ENABLED') == "1"):
+        for i in ['do_configure_ptest_base', 'do_compile_ptest_base', 'do_install_ptest_base']:
+            bb.build.deltask(i, d)
+}
+
+QARECIPETEST[missing-ptest] = "package_qa_check_missing_ptest"
+def package_qa_check_missing_ptest(pn, d, messages):
+    # This checks that ptest package is actually included
+    # in standard oe-core ptest images - only for oe-core recipes
+    if not 'meta/recipes' in d.getVar('FILE') or not(d.getVar('PTEST_ENABLED') == "1"):
+        return
+
+    enabled_ptests = " ".join([d.getVar('PTESTS_FAST'), d.getVar('PTESTS_SLOW'), d.getVar('PTESTS_PROBLEMS')]).split()
+    if (pn + "-ptest").replace(d.getVar('MLPREFIX'), '') not in enabled_ptests:
+        oe.qa.handle_error("missing-ptest", "supports ptests but is not included in oe-core's ptest-packagelists.inc", d)
diff --git a/poky/meta/classes-recipe/pypi.bbclass b/poky/meta/classes-recipe/pypi.bbclass
new file mode 100644
index 0000000..aab04c6
--- /dev/null
+++ b/poky/meta/classes-recipe/pypi.bbclass
@@ -0,0 +1,34 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+def pypi_package(d):
+    bpn = d.getVar('BPN')
+    if bpn.startswith('python-'):
+        return bpn[7:]
+    elif bpn.startswith('python3-'):
+        return bpn[8:]
+    return bpn
+
+PYPI_PACKAGE ?= "${@pypi_package(d)}"
+PYPI_PACKAGE_EXT ?= "tar.gz"
+PYPI_ARCHIVE_NAME ?= "${PYPI_PACKAGE}-${PV}.${PYPI_PACKAGE_EXT}"
+
+def pypi_src_uri(d):
+    package = d.getVar('PYPI_PACKAGE')
+    archive_name = d.getVar('PYPI_ARCHIVE_NAME')
+    return 'https://files.pythonhosted.org/packages/source/%s/%s/%s' % (package[0], package, archive_name)
+
+PYPI_SRC_URI ?= "${@pypi_src_uri(d)}"
+
+HOMEPAGE ?= "https://pypi.python.org/pypi/${PYPI_PACKAGE}/"
+SECTION = "devel/python"
+SRC_URI:prepend = "${PYPI_SRC_URI} "
+S = "${WORKDIR}/${PYPI_PACKAGE}-${PV}"
+
+UPSTREAM_CHECK_URI ?= "https://pypi.org/project/${PYPI_PACKAGE}/"
+UPSTREAM_CHECK_REGEX ?= "/${PYPI_PACKAGE}/(?P<pver>(\d+[\.\-_]*)+)/"
+
+CVE_PRODUCT ?= "python:${PYPI_PACKAGE}"
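+
+# Illustrative example (a hypothetical recipe, not part of this class): a
+# recipe named python3-foo_1.0.bb could simply add, with the checksum left
+# as a placeholder:
+#
+#   inherit pypi python_setuptools_build_meta
+#   SRC_URI[sha256sum] = "<fill in>"
+#
+# The class would then derive PYPI_PACKAGE = "foo" and fetch
+# https://files.pythonhosted.org/packages/source/f/foo/foo-1.0.tar.gz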
diff --git a/poky/meta/classes-recipe/python3-dir.bbclass b/poky/meta/classes-recipe/python3-dir.bbclass
new file mode 100644
index 0000000..912c672
--- /dev/null
+++ b/poky/meta/classes-recipe/python3-dir.bbclass
@@ -0,0 +1,11 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+PYTHON_BASEVERSION = "3.10"
+PYTHON_ABI = ""
+PYTHON_DIR = "python${PYTHON_BASEVERSION}"
+PYTHON_PN = "python3"
+PYTHON_SITEPACKAGES_DIR = "${libdir}/${PYTHON_DIR}/site-packages"
diff --git a/poky/meta/classes-recipe/python3native.bbclass b/poky/meta/classes-recipe/python3native.bbclass
new file mode 100644
index 0000000..654a002
--- /dev/null
+++ b/poky/meta/classes-recipe/python3native.bbclass
@@ -0,0 +1,30 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit python3-dir
+
+PYTHON="${STAGING_BINDIR_NATIVE}/python3-native/python3"
+EXTRANATIVEPATH += "python3-native"
+DEPENDS:append = " python3-native "
+
+# python-config and other scripts are using sysconfig modules
+# which we patch to access these variables
+export STAGING_INCDIR
+export STAGING_LIBDIR
+
+# Packages can use
+# find_package(PythonInterp REQUIRED)
+# find_package(PythonLibs REQUIRED)
+# which ends up using libs/includes from build host
+# Therefore pre-empt that effort
+export PYTHON_LIBRARY="${STAGING_LIBDIR}/lib${PYTHON_DIR}${PYTHON_ABI}.so"
+export PYTHON_INCLUDE_DIR="${STAGING_INCDIR}/${PYTHON_DIR}${PYTHON_ABI}"
+
+# suppress host user's site-packages dirs.
+export PYTHONNOUSERSITE = "1"
+
+# autoconf macros will use their internal default preference otherwise
+export PYTHON
diff --git a/poky/meta/classes-recipe/python3targetconfig.bbclass b/poky/meta/classes-recipe/python3targetconfig.bbclass
new file mode 100644
index 0000000..3f89e5e
--- /dev/null
+++ b/poky/meta/classes-recipe/python3targetconfig.bbclass
@@ -0,0 +1,35 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit python3native
+
+EXTRA_PYTHON_DEPENDS ?= ""
+EXTRA_PYTHON_DEPENDS:class-target = "python3"
+DEPENDS:append = " ${EXTRA_PYTHON_DEPENDS}"
+
+do_configure:prepend:class-target() {
+        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
+}
+
+do_compile:prepend:class-target() {
+        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
+}
+
+do_install:prepend:class-target() {
+        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
+}
+
+do_configure:prepend:class-nativesdk() {
+        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
+}
+
+do_compile:prepend:class-nativesdk() {
+        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
+}
+
+do_install:prepend:class-nativesdk() {
+        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
+}
diff --git a/poky/meta/classes-recipe/python_flit_core.bbclass b/poky/meta/classes-recipe/python_flit_core.bbclass
new file mode 100644
index 0000000..a0b1feb
--- /dev/null
+++ b/poky/meta/classes-recipe/python_flit_core.bbclass
@@ -0,0 +1,14 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit python_pep517 python3native python3-dir setuptools3-base
+
+DEPENDS += "python3 python3-flit-core-native"
+
+python_flit_core_do_manual_build () {
+    cd ${PEP517_SOURCE_PATH}
+    nativepython3 -m flit_core.wheel --outdir ${PEP517_WHEEL_PATH} .
+}
diff --git a/poky/meta/classes-recipe/python_hatchling.bbclass b/poky/meta/classes-recipe/python_hatchling.bbclass
new file mode 100644
index 0000000..b9e6582
--- /dev/null
+++ b/poky/meta/classes-recipe/python_hatchling.bbclass
@@ -0,0 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit python_pep517 python3native python3-dir setuptools3-base
+
+DEPENDS += "python3-hatchling-native"
diff --git a/poky/meta/classes-recipe/python_pep517.bbclass b/poky/meta/classes-recipe/python_pep517.bbclass
new file mode 100644
index 0000000..202dde0
--- /dev/null
+++ b/poky/meta/classes-recipe/python_pep517.bbclass
@@ -0,0 +1,60 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Common infrastructure for Python packages that use PEP-517 compliant packaging.
+# https://www.python.org/dev/peps/pep-0517/
+#
+# This class will build a wheel in do_compile, and use pypa/installer to install
+# it in do_install.
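+#
+# Recipes normally inherit one of the backend classes built on top of this
+# one (python_setuptools_build_meta, python_flit_core, python_poetry_core,
+# python_hatchling) rather than inheriting python_pep517 directly.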
+
+DEPENDS:append = " python3-picobuild-native python3-installer-native"
+
+# Where to execute the build process from
+PEP517_SOURCE_PATH ?= "${S}"
+
+# The directory where wheels will be written
+PEP517_WHEEL_PATH ?= "${WORKDIR}/dist"
+
+PEP517_PICOBUILD_OPTS ?= ""
+
+# The interpreter to use for installed scripts
+PEP517_INSTALL_PYTHON = "python3"
+PEP517_INSTALL_PYTHON:class-native = "nativepython3"
+
+# pypa/installer option to control the bytecode compilation
+INSTALL_WHEEL_COMPILE_BYTECODE ?= "--compile-bytecode=0"
+
+# PEP517 doesn't have a specific configure step, so set an empty do_configure to avoid
+# running base_do_configure.
+python_pep517_do_configure () {
+    :
+}
+
+# When we have Python 3.11 we can parse pyproject.toml to determine the build
+# API entry point directly
+python_pep517_do_compile () {
+    nativepython3 -m picobuild --source ${PEP517_SOURCE_PATH} --dest ${PEP517_WHEEL_PATH} --wheel ${PEP517_PICOBUILD_OPTS}
+}
+do_compile[cleandirs] += "${PEP517_WHEEL_PATH}"
+
+python_pep517_do_install () {
+    COUNT=$(find ${PEP517_WHEEL_PATH} -name '*.whl' | wc -l)
+    if test $COUNT -eq 0; then
+        bbfatal No wheels found in ${PEP517_WHEEL_PATH}
+    elif test $COUNT -gt 1; then
+        bbfatal More than one wheel found in ${PEP517_WHEEL_PATH}, this should not happen
+    fi
+
+    nativepython3 -m installer ${INSTALL_WHEEL_COMPILE_BYTECODE} --interpreter "${USRBINPATH}/env ${PEP517_INSTALL_PYTHON}" --destdir=${D} ${PEP517_WHEEL_PATH}/*.whl
+}
+
+# A manual do_install that just uses unzip for bootstrapping purposes. Callers should DEPEND on unzip-native.
+python_pep517_do_bootstrap_install () {
+    install -d ${D}${PYTHON_SITEPACKAGES_DIR}
+    unzip -d ${D}${PYTHON_SITEPACKAGES_DIR} ${PEP517_WHEEL_PATH}/*.whl
+}
+
+EXPORT_FUNCTIONS do_configure do_compile do_install
diff --git a/poky/meta/classes-recipe/python_poetry_core.bbclass b/poky/meta/classes-recipe/python_poetry_core.bbclass
new file mode 100644
index 0000000..c7dc5d0
--- /dev/null
+++ b/poky/meta/classes-recipe/python_poetry_core.bbclass
@@ -0,0 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit python_pep517 python3native setuptools3-base
+
+DEPENDS += "python3-poetry-core-native"
diff --git a/poky/meta/classes-recipe/python_pyo3.bbclass b/poky/meta/classes-recipe/python_pyo3.bbclass
new file mode 100644
index 0000000..9a32eac
--- /dev/null
+++ b/poky/meta/classes-recipe/python_pyo3.bbclass
@@ -0,0 +1,36 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# This class helps make sure that Python extensions built with PyO3
+# and setuptools_rust properly set up the environment for cross compilation
+#
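+#
+# A typical consumer is a recipe that inherits python_setuptools3_rust
+# (which combines this class with setuptools3) rather than inheriting
+# python_pyo3 directly.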
+
+inherit cargo python3-dir siteinfo
+
+export PYO3_CROSS="1"
+export PYO3_CROSS_PYTHON_VERSION="${PYTHON_BASEVERSION}"
+export PYO3_CROSS_LIB_DIR="${STAGING_LIBDIR}"
+export CARGO_BUILD_TARGET="${RUST_HOST_SYS}"
+export RUSTFLAGS
+export PYO3_PYTHON="${PYTHON}"
+export PYO3_CONFIG_FILE="${WORKDIR}/pyo3.config"
+
+python_pyo3_do_configure () {
+    cat > ${WORKDIR}/pyo3.config << EOF
+implementation=CPython
+version=${PYTHON_BASEVERSION}
+shared=true
+abi3=false
+lib_name=${PYTHON_DIR}
+lib_dir=${STAGING_LIBDIR}
+pointer_width=${SITEINFO_BITS}
+build_flags=WITH_THREAD
+suppress_build_script_link_lines=false
+EOF
+}
+
+EXPORT_FUNCTIONS do_configure
diff --git a/poky/meta/classes-recipe/python_setuptools3_rust.bbclass b/poky/meta/classes-recipe/python_setuptools3_rust.bbclass
new file mode 100644
index 0000000..d6ce2ed
--- /dev/null
+++ b/poky/meta/classes-recipe/python_setuptools3_rust.bbclass
@@ -0,0 +1,17 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit python_pyo3 setuptools3
+
+DEPENDS += "python3-setuptools-rust-native"
+
+python_setuptools3_rust_do_configure() {
+    python_pyo3_do_configure
+    cargo_common_do_configure
+    setuptools3_do_configure
+}
+
+EXPORT_FUNCTIONS do_configure
diff --git a/poky/meta/classes-recipe/python_setuptools_build_meta.bbclass b/poky/meta/classes-recipe/python_setuptools_build_meta.bbclass
new file mode 100644
index 0000000..4c84d1e
--- /dev/null
+++ b/poky/meta/classes-recipe/python_setuptools_build_meta.bbclass
@@ -0,0 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit setuptools3-base python_pep517
+
+DEPENDS += "python3-setuptools-native python3-wheel-native"
diff --git a/poky/meta/classes-recipe/qemu.bbclass b/poky/meta/classes-recipe/qemu.bbclass
new file mode 100644
index 0000000..874b151
--- /dev/null
+++ b/poky/meta/classes-recipe/qemu.bbclass
@@ -0,0 +1,77 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# This class contains functions for recipes that need QEMU or test for its
+# existence.
+#
+
+def qemu_target_binary(data):
+    package_arch = data.getVar("PACKAGE_ARCH")
+    qemu_target_binary = (data.getVar("QEMU_TARGET_BINARY_%s" % package_arch) or "")
+    if qemu_target_binary:
+        return qemu_target_binary
+
+    target_arch = data.getVar("TARGET_ARCH")
+    if target_arch in ("i486", "i586", "i686"):
+        target_arch = "i386"
+    elif target_arch == "powerpc":
+        target_arch = "ppc"
+    elif target_arch == "powerpc64":
+        target_arch = "ppc64"
+    elif target_arch == "powerpc64le":
+        target_arch = "ppc64le"
+
+    return "qemu-" + target_arch
+
+def qemu_wrapper_cmdline(data, rootfs_path, library_paths):
+    import string
+
+    qemu_binary = qemu_target_binary(data)
+    if qemu_binary == "qemu-allarch":
+        qemu_binary = "qemuwrapper"
+
+    qemu_options = data.getVar("QEMU_OPTIONS")    
+
+    return "PSEUDO_UNLOAD=1 " + qemu_binary + " " + qemu_options + " -L " + rootfs_path\
+            + " -E LD_LIBRARY_PATH=" + ":".join(library_paths) + " "
+
+# The next function returns a string containing the command needed to run a
+# certain binary through qemu. For example, in order to make a certain
+# postinstall scriptlet run at do_rootfs time when running the postinstall is
+# architecture dependent, we can run it through qemu. In the postinstall
+# scriptlet, we could use the following:
+#
+# ${@qemu_run_binary(d, '$D', '/usr/bin/test_app')} [test_app arguments]
+#
+def qemu_run_binary(data, rootfs_path, binary):
+    libdir = rootfs_path + data.getVar("libdir", False)
+    base_libdir = rootfs_path + data.getVar("base_libdir", False)
+
+    return qemu_wrapper_cmdline(data, rootfs_path, [libdir, base_libdir]) + rootfs_path + binary
+
+# QEMU_EXTRAOPTIONS is not meant to be used directly; the suffixes are
+# PACKAGE_ARCH values, *NOT* overrides.
+# In some cases (e.g. ppc) simply being arch specific (apparently) isn't good
+# enough and a PACKAGE_ARCH specific -cpu option is needed (hence we have to do
+# this dance). For others (e.g. arm) a -cpu option is not necessary, since the
+# qemu-arm default CPU supports all required architecture levels.
+
+QEMU_OPTIONS = "-r ${OLDEST_KERNEL} ${@d.getVar("QEMU_EXTRAOPTIONS_%s" % d.getVar('PACKAGE_ARCH')) or ""}"
+QEMU_OPTIONS[vardeps] += "QEMU_EXTRAOPTIONS_${PACKAGE_ARCH}"
+
+QEMU_EXTRAOPTIONS_ppce500v2 = " -cpu e500v2"
+QEMU_EXTRAOPTIONS_ppce500mc = " -cpu e500mc"
+QEMU_EXTRAOPTIONS_ppce5500 = " -cpu e500mc"
+QEMU_EXTRAOPTIONS_ppc64e5500 = " -cpu e500mc"
+QEMU_EXTRAOPTIONS_ppce6500 = " -cpu e500mc"
+QEMU_EXTRAOPTIONS_ppc64e6500 = " -cpu e500mc"
+QEMU_EXTRAOPTIONS_ppc7400 = " -cpu 7400"
+QEMU_EXTRAOPTIONS_powerpc64le = " -cpu POWER9"
+# Some packages, e.g. fwupd, set PACKAGE_ARCH = MACHINE_ARCH and use meson, which
+# needs the right options for usermode qemu
+QEMU_EXTRAOPTIONS_qemuppc = " -cpu 7400"
+QEMU_EXTRAOPTIONS_qemuppc64 = " -cpu POWER9"
diff --git a/poky/meta/classes-recipe/qemuboot.bbclass b/poky/meta/classes-recipe/qemuboot.bbclass
new file mode 100644
index 0000000..018c000
--- /dev/null
+++ b/poky/meta/classes-recipe/qemuboot.bbclass
@@ -0,0 +1,171 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Help runqemu boot the target board. "QB" means QEMU Boot. The following
+# vars can be set in conf files, such as <bsp.conf>, so that the image can
+# be booted by runqemu:
+#
+# QB_SYSTEM_NAME: qemu name, e.g., "qemu-system-i386"
+#
+# QB_OPT_APPEND: options to append to qemu, e.g., "-device usb-mouse"
+#
+# QB_DEFAULT_KERNEL: default kernel to boot, e.g., "bzImage"
+#
+# QB_DEFAULT_FSTYPE: default FSTYPE to boot, e.g., "ext4"
+#
+# QB_MEM: memory, e.g., "-m 512"
+#
+# QB_MACHINE: qemu machine, e.g., "-machine virt"
+#
+# QB_CPU: qemu cpu, e.g., "-cpu qemu32"
+#
+# QB_CPU_KVM: similar to QB_CPU, but used when KVM is enabled, e.g., '-cpu kvm64',
+#             set it when KVM is supported.
+#
+# QB_SMP: amount of CPU cores inside qemu guest, each mapped to a thread on the host,
+#             e.g. "-smp 8".
+#
+# QB_KERNEL_CMDLINE_APPEND: options to append to kernel's -append
+#                           option, e.g., "console=ttyS0 console=tty"
+#
+# QB_DTB: qemu dtb name
+#
+# QB_AUDIO_DRV: qemu audio driver, e.g., "alsa", set it when audio is supported
+#
+# QB_AUDIO_OPT: qemu audio option, e.g., "-device AC97", used
+#               when QB_AUDIO_DRV is set.
+#
+# QB_RNG: Pass-through for the host random number generator, it can speed up boot
+#         in system mode, where the system is experiencing entropy starvation
+#
+# QB_KERNEL_ROOT: kernel's root, e.g., /dev/vda
+#                 By default "/dev/vda rw" gets passed to the kernel.
+#                 To mount the rootfs read-only QB_KERNEL_ROOT can be set to e.g. "/dev/vda ro".
+#
+# QB_NETWORK_DEVICE: network device, e.g., "-device virtio-net-pci,netdev=net0,mac=@MAC@",
+#                    it needs work with QB_TAP_OPT and QB_SLIRP_OPT.
+#                    Note, runqemu will replace @MAC@ with a predefined mac, you can set
+#                    a custom one, but that may cause conflicts when multiple qemus are
+#                    running on the same host.
+#                    Note: If more than one interface of type -device virtio-net-device gets added,
+#                          QB_NETWORK_DEVICE:prepend might be used, since Qemu enumerates the eth*
+#                          devices in reverse order to -device arguments.
+#
+# QB_TAP_OPT: network option for 'tap' mode, e.g.,
+#             "-netdev tap,id=net0,ifname=@TAP@,script=no,downscript=no"
+#              Note, runqemu will replace "@TAP@" with the one which is used, such as tap0, tap1 ...
+#
+# QB_SLIRP_OPT: network option for SLIRP mode, e.g., -netdev user,id=net0"
+#
+# QB_CMDLINE_IP_SLIRP: If QB_NETWORK_DEVICE adds more than one network interface to qemu, usually the
+#                      ip= kernel command line argument needs to be changed accordingly. Details are documented
+#                      in the kernel documentation https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt
+#                      Example to configure only the first interface: "ip=eth0:dhcp"
+# QB_CMDLINE_IP_TAP: This parameter is similar to the QB_CMDLINE_IP_SLIRP parameter. Since the tap interface requires
+#                    static IP configuration, the @CLIENT@ and @GATEWAY@ placeholders are replaced by the IP and the gateway
+#                    address of the qemu guest by runqemu.
+#                    Example: "ip=192.168.7.@CLIENT@::192.168.7.@GATEWAY@:255.255.255.0::eth0"
+#
+# QB_ROOTFS_OPT: used as rootfs, e.g.,
+#               "-drive id=disk0,file=@ROOTFS@,if=none,format=raw -device virtio-blk-device,drive=disk0"
+#              Note, runqemu will replace "@ROOTFS@" with the one which is used, such as core-image-minimal-qemuarm64.ext4.
+#
+# QB_SERIAL_OPT: serial port, e.g., "-serial mon:stdio"
+#
+# QB_TCPSERIAL_OPT: tcp serial port option, e.g.,
+#                   " -device virtio-serial-device -chardev socket,id=virtcon,port=@PORT@,host=127.0.0.1 -device virtconsole,chardev=virtcon"
+#                   Note, runqemu will replace "@PORT@" with the port number which is used.
+#
+# QB_ROOTFS_EXTRA_OPT: extra options to be appended to the rootfs device in case there is none specified by QB_ROOTFS_OPT.
+#                      Can be used to automatically determine the image from the other variables
+#                      but define things like 'bootindex' when booting from EFI or 'readonly' when using squashfs
+#                      without the need to specify a dedicated qemu configuration
+#
+# QB_GRAPHICS: QEMU video card type (e.g. "-vga std")
+#
+# Usage:
+# IMAGE_CLASSES += "qemuboot"
+# See "runqemu help" for more info
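+#
+# Illustrative example (hypothetical values for a machine .conf; these are
+# not defaults set by this class):
+#   QB_SYSTEM_NAME = "qemu-system-aarch64"
+#   QB_MACHINE = "-machine virt"
+#   QB_CPU = "-cpu cortex-a57"
+#   QB_MEM = "-m 1024"
+#   QB_KERNEL_CMDLINE_APPEND = "console=ttyAMA0"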
+
+QB_MEM ?= "-m 256"
+QB_SMP ?= ""
+QB_SERIAL_OPT ?= "-serial mon:stdio -serial null"
+QB_DEFAULT_KERNEL ?= "${KERNEL_IMAGETYPE}"
+QB_DEFAULT_FSTYPE ?= "ext4"
+QB_RNG ?= "-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0"
+QB_OPT_APPEND ?= ""
+QB_NETWORK_DEVICE ?= "-device virtio-net-pci,netdev=net0,mac=@MAC@"
+QB_CMDLINE_IP_SLIRP ?= "ip=dhcp"
+QB_CMDLINE_IP_TAP ?= "ip=192.168.7.@CLIENT@::192.168.7.@GATEWAY@:255.255.255.0::eth0:off:8.8.8.8"
+QB_ROOTFS_EXTRA_OPT ?= ""
+QB_GRAPHICS ?= ""
+
+# This should be kept aligned with ROOT_VM
+QB_DRIVE_TYPE ?= "/dev/sd"
+
+inherit image-artifact-names
+
+# Create qemuboot.conf
+addtask do_write_qemuboot_conf after do_rootfs before do_image
+
+def qemuboot_vars(d):
+    build_vars = ['MACHINE', 'TUNE_ARCH', 'DEPLOY_DIR_IMAGE',
+                'KERNEL_IMAGETYPE', 'IMAGE_NAME', 'IMAGE_LINK_NAME',
+                'STAGING_DIR_NATIVE', 'STAGING_BINDIR_NATIVE',
+                'STAGING_DIR_HOST', 'SERIAL_CONSOLES', 'UNINATIVE_LOADER']
+    return build_vars + [k for k in d.keys() if k.startswith('QB_')]
+
+do_write_qemuboot_conf[vardeps] += "${@' '.join(qemuboot_vars(d))}"
+do_write_qemuboot_conf[vardepsexclude] += "TOPDIR"
+python do_write_qemuboot_conf() {
+    import configparser
+
+    qemuboot = "%s/%s.qemuboot.conf" % (d.getVar('IMGDEPLOYDIR'), d.getVar('IMAGE_NAME'))
+    if d.getVar('IMAGE_LINK_NAME'):
+        qemuboot_link = "%s/%s.qemuboot.conf" % (d.getVar('IMGDEPLOYDIR'), d.getVar('IMAGE_LINK_NAME'))
+    else:
+        qemuboot_link = ""
+    finalpath = d.getVar("DEPLOY_DIR_IMAGE")
+    topdir = d.getVar('TOPDIR')
+    cf = configparser.ConfigParser()
+    cf.add_section('config_bsp')
+    for k in sorted(qemuboot_vars(d)):
+        if ":" in k:
+            continue
+        # qemu-helper-native sysroot is not removed by rm_work and
+        # contains all tools required by runqemu
+        if k == 'STAGING_BINDIR_NATIVE':
+            val = os.path.join(d.getVar('BASE_WORKDIR'), d.getVar('BUILD_SYS'),
+                               'qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/')
+        else:
+            val = d.getVar(k)
+        if val is None:
+            continue
+        # we only want to write out relative paths so that we can relocate images
+        # and still run them
+        if val.startswith(topdir):
+            val = os.path.relpath(val, finalpath)
+        cf.set('config_bsp', k, '%s' % val)
+
+    # QB_DEFAULT_KERNEL's value of KERNEL_IMAGETYPE is the name of a symlink
+    # to the kernel file, which hinders relocatability of the qb conf.
+    # Read the link and replace it with the full filename of the target.
+    kernel_link = os.path.join(d.getVar('DEPLOY_DIR_IMAGE'), d.getVar('QB_DEFAULT_KERNEL'))
+    kernel = os.path.realpath(kernel_link)
+    # we only want to write out relative paths so that we can relocate images
+    # and still run them
+    kernel = os.path.relpath(kernel, finalpath)
+    cf.set('config_bsp', 'QB_DEFAULT_KERNEL', kernel)
+
+    bb.utils.mkdirhier(os.path.dirname(qemuboot))
+    with open(qemuboot, 'w') as f:
+        cf.write(f)
+
+    if qemuboot_link and qemuboot_link != qemuboot:
+        if os.path.lexists(qemuboot_link):
+           os.remove(qemuboot_link)
+        os.symlink(os.path.basename(qemuboot), qemuboot_link)
+}
diff --git a/poky/meta/classes-recipe/rootfs-postcommands.bbclass b/poky/meta/classes-recipe/rootfs-postcommands.bbclass
new file mode 100644
index 0000000..215e38e
--- /dev/null
+++ b/poky/meta/classes-recipe/rootfs-postcommands.bbclass
@@ -0,0 +1,471 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Zap the root password if debug-tweaks and empty-root-password features are not enabled
+ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'empty-root-password' ], "", "zap_empty_root_password; ",d)}'
+
+# Allow dropbear/openssh to accept logins from accounts with an empty password string if debug-tweaks or allow-empty-password is enabled
+ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'allow-empty-password' ], "ssh_allow_empty_password; ", "",d)}'
+
+# Allow dropbear/openssh to accept root logins if debug-tweaks or allow-root-login is enabled
+ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'allow-root-login' ], "ssh_allow_root_login; ", "",d)}'
+
+# Autologin the root user on the serial console, if empty-root-password and serial-autologin-root are active
+ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("IMAGE_FEATURES", [ 'empty-root-password', 'serial-autologin-root' ], "serial_autologin_root; ", "",d)}'
+
+# Enable postinst logging if debug-tweaks or post-install-logging is enabled
+ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'post-install-logging' ], "postinst_enable_logging; ", "",d)}'
+
+# Create /etc/timestamp during image construction to give a reasonably sane default time setting
+ROOTFS_POSTPROCESS_COMMAND += "rootfs_update_timestamp; "
+
+# Tweak the mount options for rootfs in /etc/fstab if read-only-rootfs is enabled
+ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("IMAGE_FEATURES", "read-only-rootfs", "read_only_rootfs_hook; ", "",d)}'
+
+# We also need to do the same for the kernel boot parameters,
+# otherwise kernel or initramfs end up mounting the rootfs read/write
+# (the default) if supported by the underlying storage.
+#
+# We do this with :append because the default value might get set later with ?=
+# and we don't want to disable such a default by setting a value here.
+APPEND:append = '${@bb.utils.contains("IMAGE_FEATURES", "read-only-rootfs", " ro", "", d)}'
+
+# Generates test data file with data store variables expanded in json format
+ROOTFS_POSTPROCESS_COMMAND += "write_image_test_data; "
+
+# Write manifest
+IMAGE_MANIFEST = "${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.manifest"
+ROOTFS_POSTUNINSTALL_COMMAND =+ "write_image_manifest ; "
+# Set default postinst log file
+POSTINST_LOGFILE ?= "${localstatedir}/log/postinstall.log"
+# Set default target for systemd images
+SYSTEMD_DEFAULT_TARGET ?= '${@bb.utils.contains_any("IMAGE_FEATURES", [ "x11-base", "weston" ], "graphical.target", "multi-user.target", d)}'
+ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("DISTRO_FEATURES", "systemd", "set_systemd_default_target; systemd_create_users;", "", d)}'
+
+ROOTFS_POSTPROCESS_COMMAND += 'empty_var_volatile;'
+
+ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("DISTRO_FEATURES", "overlayfs", "overlayfs_qa_check; overlayfs_postprocess;", "", d)}'
+
+inherit image-artifact-names
+
+# Sort the user and group entries in /etc by ID in order to make the content
+# deterministic. Package installs are not deterministic, causing the ordering
+# of entries to change between builds. In case this isn't desired,
+# the command can be overridden.
+#
+# Note that useradd-staticids.bbclass has to be used to ensure that
+# the numeric IDs of dynamically created entries remain stable.
+#
+# We want this to run as late as possible, in particular after
+# systemd_sysusers_create and set_user_group. Using :append is not
+# enough for that, set_user_group is added that way and would end
+# up running after us.
+SORT_PASSWD_POSTPROCESS_COMMAND ??= " tidy_shadowutils_files; "
+python () {
+    d.appendVar('ROOTFS_POSTPROCESS_COMMAND', '${SORT_PASSWD_POSTPROCESS_COMMAND}')
+    d.appendVar('ROOTFS_POSTPROCESS_COMMAND', 'rootfs_reproducible;')
+}
+
+systemd_create_users () {
+	for conffile in ${IMAGE_ROOTFS}/usr/lib/sysusers.d/*.conf; do
+		[ -e $conffile ] || continue
+		grep -v "^#" $conffile | sed -e '/^$/d' | while read type name id comment; do
+		if [ "$type" = "u" ]; then
+			useradd_params="--shell /sbin/nologin"
+			[ "$id" != "-" ] && useradd_params="$useradd_params --uid $id"
+			[ "$comment" != "-" ] && useradd_params="$useradd_params --comment $comment"
+			useradd_params="$useradd_params --system $name"
+			eval useradd --root ${IMAGE_ROOTFS} $useradd_params || true
+		elif [ "$type" = "g" ]; then
+			groupadd_params=""
+			[ "$id" != "-" ] && groupadd_params="$groupadd_params --gid $id"
+			groupadd_params="$groupadd_params --system $name"
+			eval groupadd --root ${IMAGE_ROOTFS} $groupadd_params || true
+		elif [ "$type" = "m" ]; then
+			group=$id
+			eval groupadd --root ${IMAGE_ROOTFS} --system $group || true
+			eval useradd --root ${IMAGE_ROOTFS} --shell /sbin/nologin --system $name --no-user-group || true
+			eval usermod --root ${IMAGE_ROOTFS} -a -G $group $name
+		fi
+		done
+	done
+}
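+
+# For reference, the sysusers.d lines parsed above have the form
+# "<type> <name> <id> <comment>"; illustrative entries (not shipped by this
+# class) might look like:
+#   u messagebus -     "D-Bus message daemon"
+#   g input      -
+#   m foo        input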
+
+#
+# A hook function to support read-only-rootfs IMAGE_FEATURES
+#
+read_only_rootfs_hook () {
+	# Tweak the mount option and fs_passno for rootfs in fstab
+	if [ -f ${IMAGE_ROOTFS}/etc/fstab ]; then
+		sed -i -e '/^[#[:space:]]*\/dev\/root/{s/defaults/ro/;s/\([[:space:]]*[[:digit:]]\)\([[:space:]]*\)[[:digit:]]$/\1\20/}' ${IMAGE_ROOTFS}/etc/fstab
+	fi
+
+	# Tweak the "mount -o remount,rw /" command in busybox-inittab inittab
+	if [ -f ${IMAGE_ROOTFS}/etc/inittab ]; then
+		sed -i 's|/bin/mount -o remount,rw /|/bin/mount -o remount,ro /|' ${IMAGE_ROOTFS}/etc/inittab
+	fi
+
+	# If we're using openssh and the /etc/ssh directory has no pre-generated keys,
+	# we should configure openssh to use the configuration file /etc/ssh/sshd_config_readonly
+	# and the keys under /var/run/ssh.
+	if [ -d ${IMAGE_ROOTFS}/etc/ssh ]; then
+		if [ -e ${IMAGE_ROOTFS}/etc/ssh/ssh_host_rsa_key ]; then
+			echo "SYSCONFDIR=\${SYSCONFDIR:-/etc/ssh}" >> ${IMAGE_ROOTFS}/etc/default/ssh
+			echo "SSHD_OPTS=" >> ${IMAGE_ROOTFS}/etc/default/ssh
+		else
+			echo "SYSCONFDIR=\${SYSCONFDIR:-/var/run/ssh}" >> ${IMAGE_ROOTFS}/etc/default/ssh
+			echo "SSHD_OPTS='-f /etc/ssh/sshd_config_readonly'" >> ${IMAGE_ROOTFS}/etc/default/ssh
+		fi
+	fi
+
+	# Also tweak the key location for dropbear in the same way.
+	if [ -d ${IMAGE_ROOTFS}/etc/dropbear ]; then
+		if [ ! -e ${IMAGE_ROOTFS}/etc/dropbear/dropbear_rsa_host_key ]; then
+			echo "DROPBEAR_RSAKEY_DIR=/var/lib/dropbear" >> ${IMAGE_ROOTFS}/etc/default/dropbear
+		fi
+	fi
+
+	if ${@bb.utils.contains("DISTRO_FEATURES", "sysvinit", "true", "false", d)}; then
+		# Change the value of ROOTFS_READ_ONLY in /etc/default/rcS to yes
+		if [ -e ${IMAGE_ROOTFS}/etc/default/rcS ]; then
+			sed -i 's/ROOTFS_READ_ONLY=no/ROOTFS_READ_ONLY=yes/' ${IMAGE_ROOTFS}/etc/default/rcS
+		fi
+		# Run populate-volatile.sh at rootfs time to set up basic files
+		# and directories to support read-only rootfs.
+		if [ -x ${IMAGE_ROOTFS}/etc/init.d/populate-volatile.sh ]; then
+			${IMAGE_ROOTFS}/etc/init.d/populate-volatile.sh
+		fi
+	fi
+
+	if ${@bb.utils.contains("DISTRO_FEATURES", "systemd", "true", "false", d)}; then
+	# Create machine-id
+	# 20:12 < mezcalero> koen: you have three options: a) run systemd-machine-id-setup at install time, b) have / read-only and an empty file there (for stateless) and c) boot with / writable
+		touch ${IMAGE_ROOTFS}${sysconfdir}/machine-id
+	fi
+}
+
+#
+# This function disallows empty root passwords
+#
+zap_empty_root_password () {
+	if [ -e ${IMAGE_ROOTFS}/etc/shadow ]; then
+		sed -i 's%^root::%root:*:%' ${IMAGE_ROOTFS}/etc/shadow
+	fi
+	if [ -e ${IMAGE_ROOTFS}/etc/passwd ]; then
+		sed -i 's%^root::%root:*:%' ${IMAGE_ROOTFS}/etc/passwd
+	fi
+}
+
+#
+# allow dropbear/openssh to accept logins from accounts with an empty password string
+#
+ssh_allow_empty_password () {
+	for config in sshd_config sshd_config_readonly; do
+		if [ -e ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config ]; then
+			sed -i 's/^[#[:space:]]*PermitEmptyPasswords.*/PermitEmptyPasswords yes/' ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config
+		fi
+	done
+
+	if [ -e ${IMAGE_ROOTFS}${sbindir}/dropbear ] ; then
+		if grep -q DROPBEAR_EXTRA_ARGS ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear 2>/dev/null ; then
+			if ! grep -q "DROPBEAR_EXTRA_ARGS=.*-B" ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear ; then
+				sed -i 's/^DROPBEAR_EXTRA_ARGS="*\([^"]*\)"*/DROPBEAR_EXTRA_ARGS="\1 -B"/' ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear
+			fi
+		else
+			printf '\nDROPBEAR_EXTRA_ARGS="-B"\n' >> ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear
+		fi
+	fi
+
+	if [ -d ${IMAGE_ROOTFS}${sysconfdir}/pam.d ] ; then
+		for f in `find ${IMAGE_ROOTFS}${sysconfdir}/pam.d/* -type f -exec test -e {} \; -print`
+		do
+			sed -i 's/nullok_secure/nullok/' $f
+		done
+	fi
+}
+
+#
+# allow dropbear/openssh to accept root logins
+#
+ssh_allow_root_login () {
+	for config in sshd_config sshd_config_readonly; do
+		if [ -e ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config ]; then
+			sed -i 's/^[#[:space:]]*PermitRootLogin.*/PermitRootLogin yes/' ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config
+		fi
+	done
+
+	if [ -e ${IMAGE_ROOTFS}${sbindir}/dropbear ] ; then
+		if grep -q DROPBEAR_EXTRA_ARGS ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear 2>/dev/null ; then
+			sed -i '/^DROPBEAR_EXTRA_ARGS=/ s/-w//' ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear
+		fi
+	fi
+}
+
+#
+# Autologin the 'root' user on the serial terminal,
+# if 'empty-root-password' AND 'serial-autologin-root' are enabled
+#
+serial_autologin_root () {
+	if ${@bb.utils.contains("DISTRO_FEATURES", "sysvinit", "true", "false", d)}; then
+		# add autologin option to util-linux getty only
+		sed -i 's/options="/&--autologin root /' \
+			"${IMAGE_ROOTFS}${base_bindir}/start_getty"
+	elif ${@bb.utils.contains("DISTRO_FEATURES", "systemd", "true", "false", d)}; then
+		if [ -e ${IMAGE_ROOTFS}${systemd_system_unitdir}/serial-getty@.service ]; then
+			sed -i '/^\s*ExecStart\b/ s/getty /&--autologin root /' \
+				"${IMAGE_ROOTFS}${systemd_system_unitdir}/serial-getty@.service"
+		fi
+	fi
+}
+
+python tidy_shadowutils_files () {
+    import rootfspostcommands
+    rootfspostcommands.tidy_shadowutils_files(d.expand('${IMAGE_ROOTFS}${sysconfdir}'))
+}
+
+python sort_passwd () {
+    """
+    Deprecated in favour of tidy_shadowutils_files.
+    """
+    import rootfspostcommands
+    bb.warn('[sort_passwd] You are using a deprecated function for '
+        'SORT_PASSWD_POSTPROCESS_COMMAND. The default one is now called '
+        '"tidy_shadowutils_files".')
+    rootfspostcommands.tidy_shadowutils_files(d.expand('${IMAGE_ROOTFS}${sysconfdir}'))
+}
+
+#
+# Enable postinst logging
+#
+postinst_enable_logging () {
+	mkdir -p ${IMAGE_ROOTFS}${sysconfdir}/default
+	echo "POSTINST_LOGGING=1" >> ${IMAGE_ROOTFS}${sysconfdir}/default/postinst
+	echo "LOGFILE=${POSTINST_LOGFILE}" >> ${IMAGE_ROOTFS}${sysconfdir}/default/postinst
+}
+
+#
+# Modify systemd default target
+#
+set_systemd_default_target () {
+	if [ -d ${IMAGE_ROOTFS}${sysconfdir}/systemd/system -a -e ${IMAGE_ROOTFS}${systemd_system_unitdir}/${SYSTEMD_DEFAULT_TARGET} ]; then
+		ln -sf ${systemd_system_unitdir}/${SYSTEMD_DEFAULT_TARGET} ${IMAGE_ROOTFS}${sysconfdir}/systemd/system/default.target
+	fi
+}
+
+# If /var/volatile is not empty, we have seen problems where programs such as the
+# journal make assumptions based on the contents of /var/volatile. The journal
+# would then write to /var/volatile before it was mounted, thus hiding the
+# items previously written.
+#
+# This change is to attempt to fix those types of issues in a way that doesn't
+# affect users that may not be using /var/volatile.
+empty_var_volatile () {
+	if [ -e ${IMAGE_ROOTFS}/etc/fstab ]; then
+		match=`awk '$1 !~ "#" && $2 ~ /\/var\/volatile/{print $2}' ${IMAGE_ROOTFS}/etc/fstab 2> /dev/null`
+		if [ -n "$match" ]; then
+			find ${IMAGE_ROOTFS}/var/volatile -mindepth 1 -delete
+		fi
+	fi
+}
+
+# Turn any symbolic /sbin/init link into a file
+remove_init_link () {
+	if [ -h ${IMAGE_ROOTFS}/sbin/init ]; then
+		LINKFILE=${IMAGE_ROOTFS}`readlink ${IMAGE_ROOTFS}/sbin/init`
+		rm ${IMAGE_ROOTFS}/sbin/init
+		cp $LINKFILE ${IMAGE_ROOTFS}/sbin/init
+	fi
+}
+
+make_zimage_symlink_relative () {
+	if [ -L ${IMAGE_ROOTFS}/boot/zImage ]; then
+		(cd ${IMAGE_ROOTFS}/boot/ && for i in `ls zImage-* | sort`; do ln -sf $i zImage; done)
+	fi
+}
+
+python write_image_manifest () {
+    from oe.rootfs import image_list_installed_packages
+    from oe.utils import format_pkg_list
+
+    deploy_dir = d.getVar('IMGDEPLOYDIR')
+    link_name = d.getVar('IMAGE_LINK_NAME')
+    manifest_name = d.getVar('IMAGE_MANIFEST')
+
+    if not manifest_name:
+        return
+
+    pkgs = image_list_installed_packages(d)
+    with open(manifest_name, 'w+') as image_manifest:
+        image_manifest.write(format_pkg_list(pkgs, "ver"))
+
+    if os.path.exists(manifest_name) and link_name:
+        manifest_link = deploy_dir + "/" + link_name + ".manifest"
+        if manifest_link != manifest_name:
+            if os.path.lexists(manifest_link):
+                os.remove(manifest_link)
+            os.symlink(os.path.basename(manifest_name), manifest_link)
+}
+
+# Can be used to create /etc/timestamp during image construction to give a reasonably
+# sane default time setting
+rootfs_update_timestamp () {
+	if [ "${REPRODUCIBLE_TIMESTAMP_ROOTFS}" != "" ]; then
+		# Convert UTC into %4Y%2m%2d%2H%2M%2S
+		sformatted=`date -u -d @${REPRODUCIBLE_TIMESTAMP_ROOTFS} +%4Y%2m%2d%2H%2M%2S`
+	else
+		sformatted=`date -u +%4Y%2m%2d%2H%2M%2S`
+	fi
+	echo $sformatted > ${IMAGE_ROOTFS}/etc/timestamp
+	bbnote "rootfs_update_timestamp: set /etc/timestamp to $sformatted"
+}
+
+# Prevent X from being started
+rootfs_no_x_startup () {
+	if [ -f ${IMAGE_ROOTFS}/etc/init.d/xserver-nodm ]; then
+		chmod a-x ${IMAGE_ROOTFS}/etc/init.d/xserver-nodm
+	fi
+}
+
+rootfs_trim_schemas () {
+	for schema in ${IMAGE_ROOTFS}/etc/gconf/schemas/*.schemas
+	do
+		# Need this in case no files exist
+		if [ -e $schema ]; then
+			oe-trim-schemas $schema > $schema.new
+			mv $schema.new $schema
+		fi
+	done
+}
+
+rootfs_check_host_user_contaminated () {
+	contaminated="${S}/host-user-contaminated.txt"
+	HOST_USER_UID="$(PSEUDO_UNLOAD=1 id -u)"
+	HOST_USER_GID="$(PSEUDO_UNLOAD=1 id -g)"
+
+	find "${IMAGE_ROOTFS}" -path "${IMAGE_ROOTFS}/home" -prune -o \
+	    -user "$HOST_USER_UID" -print -o -group "$HOST_USER_GID" -print >"$contaminated"
+
+	sed -e "s,${IMAGE_ROOTFS},," $contaminated | while read line; do
+		bbwarn "Path in the rootfs is owned by the same user or group as the user running bitbake:" $line `ls -lan ${IMAGE_ROOTFS}/$line`
+	done
+
+	if [ -s "$contaminated" ]; then
+		bbwarn "/etc/passwd:" `cat ${IMAGE_ROOTFS}/etc/passwd`
+		bbwarn "/etc/group:" `cat ${IMAGE_ROOTFS}/etc/group`
+	fi
+}
+
+# Make any absolute links in a sysroot relative
+rootfs_sysroot_relativelinks () {
+	sysroot-relativelinks.py ${SDK_OUTPUT}/${SDKTARGETSYSROOT}
+}
+
+# Generated test data json file
+python write_image_test_data() {
+    from oe.data import export2json
+
+    deploy_dir = d.getVar('IMGDEPLOYDIR')
+    link_name = d.getVar('IMAGE_LINK_NAME')
+    testdata_name = os.path.join(deploy_dir, "%s.testdata.json" % d.getVar('IMAGE_NAME'))
+
+    searchString = "%s/"%(d.getVar("TOPDIR")).replace("//","/")
+    export2json(d, testdata_name, searchString=searchString, replaceString="")
+
+    if os.path.exists(testdata_name) and link_name:
+        testdata_link = os.path.join(deploy_dir, "%s.testdata.json" % link_name)
+        if testdata_link != testdata_name:
+            if os.path.lexists(testdata_link):
+                os.remove(testdata_link)
+            os.symlink(os.path.basename(testdata_name), testdata_link)
+}
+write_image_test_data[vardepsexclude] += "TOPDIR"
+
+# Check for unsatisfied recommendations (RRECOMMENDS)
+python rootfs_log_check_recommends() {
+    log_path = d.expand("${T}/log.do_rootfs")
+    with open(log_path, 'r') as log:
+        for line in log:
+            if 'log_check' in line:
+                continue
+
+            if 'unsatisfied recommendation for' in line:
+                bb.warn('[log_check] %s: %s' % (d.getVar('PN'), line))
+}
+
+# Perform any additional adjustments needed to make the rootfs binary reproducible
+rootfs_reproducible () {
+	if [ "${REPRODUCIBLE_TIMESTAMP_ROOTFS}" != "" ]; then
+		# Convert UTC into %4Y%2m%2d%2H%2M%2S
+		sformatted=`date -u -d @${REPRODUCIBLE_TIMESTAMP_ROOTFS} +%4Y%2m%2d%2H%2M%2S`
+		echo $sformatted > ${IMAGE_ROOTFS}/etc/version
+		bbnote "rootfs_reproducible: set /etc/version to $sformatted"
+
+		if [ -d ${IMAGE_ROOTFS}${sysconfdir}/gconf ]; then
+			find ${IMAGE_ROOTFS}${sysconfdir}/gconf -name '%gconf.xml' -print0 | xargs -0r \
+			sed -i -e 's@\bmtime="[0-9][0-9]*"@mtime="'${REPRODUCIBLE_TIMESTAMP_ROOTFS}'"@g'
+		fi
+	fi
+}
+
+# Perform a dumb check for unit existence, not its validity
+python overlayfs_qa_check() {
+    from oe.overlayfs import mountUnitName
+
+    overlayMountPoints = d.getVarFlags("OVERLAYFS_MOUNT_POINT") or {}
+    imagepath = d.getVar("IMAGE_ROOTFS")
+    sysconfdir = d.getVar("sysconfdir")
+    searchpaths = [oe.path.join(imagepath, sysconfdir, "systemd", "system"),
+                   oe.path.join(imagepath, d.getVar("systemd_system_unitdir"))]
+    fstabpath = oe.path.join(imagepath, sysconfdir, "fstab")
+
+    if not any(os.path.exists(path) for path in [*searchpaths, fstabpath]):
+        return
+
+    fstabDevices = []
+    if os.path.isfile(fstabpath):
+        with open(fstabpath, 'r') as f:
+            for line in f:
+                if line[0] == '#':
+                    continue
+                path = line.split(maxsplit=2)
+                if len(path) > 2:
+                    fstabDevices.append(path[1])
+
+    allUnitExist = True
+    for mountPoint in overlayMountPoints:
+        qaSkip = (d.getVarFlag("OVERLAYFS_QA_SKIP", mountPoint) or "").split()
+        if "mount-configured" in qaSkip:
+            continue
+
+        mountPath = d.getVarFlag('OVERLAYFS_MOUNT_POINT', mountPoint)
+        if mountPath in fstabDevices:
+            continue
+
+        mountUnit = mountUnitName(mountPath)
+        if any(os.path.isfile(oe.path.join(dirpath, mountUnit))
+               for dirpath in searchpaths):
+            continue
+
+        bb.warn(f'Mount path {mountPath} not found in fstab and unit '
+                f'{mountUnit} not found in systemd unit directories.')
+        bb.warn(f'Skip this check by setting OVERLAYFS_QA_SKIP[{mountPoint}] = '
+                '"mount-configured"')
+        allUnitExist = False
+
+    if not allUnitExist:
+        bb.fatal('Not all mount paths and units are installed in the image')
+}
+
+python overlayfs_postprocess() {
+    import shutil
+
+    # install helper script
+    helperScriptName = "overlayfs-create-dirs.sh"
+    helperScriptSource = oe.path.join(d.getVar("COREBASE"), "meta/files", helperScriptName)
+    helperScriptDest = oe.path.join(d.getVar("IMAGE_ROOTFS"), "/usr/sbin/", helperScriptName)
+    shutil.copyfile(helperScriptSource, helperScriptDest)
+    os.chmod(helperScriptDest, 0o755)
+}
diff --git a/poky/meta/classes-recipe/rootfs_deb.bbclass b/poky/meta/classes-recipe/rootfs_deb.bbclass
new file mode 100644
index 0000000..c5c6426
--- /dev/null
+++ b/poky/meta/classes-recipe/rootfs_deb.bbclass
@@ -0,0 +1,41 @@
+#
+# Copyright 2006-2007 Openedhand Ltd.
+#
+# SPDX-License-Identifier: MIT
+#
+
+ROOTFS_PKGMANAGE = "dpkg apt"
+
+do_rootfs[depends] += "dpkg-native:do_populate_sysroot apt-native:do_populate_sysroot"
+do_populate_sdk[depends] += "dpkg-native:do_populate_sysroot apt-native:do_populate_sysroot bzip2-native:do_populate_sysroot"
+do_rootfs[recrdeptask] += "do_package_write_deb do_package_qa"
+do_rootfs[vardeps] += "PACKAGE_FEED_URIS PACKAGE_FEED_BASE_PATHS PACKAGE_FEED_ARCHS"
+
+do_rootfs[lockfiles] += "${DEPLOY_DIR_DEB}/deb.lock"
+do_populate_sdk[lockfiles] += "${DEPLOY_DIR_DEB}/deb.lock"
+do_populate_sdk_ext[lockfiles] += "${DEPLOY_DIR_DEB}/deb.lock"
+
+python rootfs_deb_bad_recommendations() {
+    if d.getVar("BAD_RECOMMENDATIONS"):
+        bb.warn("Debian package install does not support BAD_RECOMMENDATIONS")
+}
+do_rootfs[prefuncs] += "rootfs_deb_bad_recommendations"
+
+DEB_POSTPROCESS_COMMANDS = ""
+
+opkglibdir = "${localstatedir}/lib/opkg"
+
+python () {
+    # Map SDK_ARCH to Debian's ideas about architectures
+    darch = d.getVar('SDK_ARCH')
+    if darch in ["x86", "i486", "i586", "i686", "pentium"]:
+         d.setVar('DEB_SDK_ARCH', 'i386')
+    elif darch == "x86_64":
+         d.setVar('DEB_SDK_ARCH', 'amd64')
+    elif darch == "arm":
+         d.setVar('DEB_SDK_ARCH', 'armel')
+    elif darch == "aarch64":
+         d.setVar('DEB_SDK_ARCH', 'arm64')
+    else:
+         bb.fatal("Unhandled SDK_ARCH %s" % darch)
+}
diff --git a/poky/meta/classes-recipe/rootfs_ipk.bbclass b/poky/meta/classes-recipe/rootfs_ipk.bbclass
new file mode 100644
index 0000000..a48ad07
--- /dev/null
+++ b/poky/meta/classes-recipe/rootfs_ipk.bbclass
@@ -0,0 +1,44 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# Creates a root filesystem out of IPKs
+#
+# This rootfs can be mounted via root-nfs or it can be put into a cramfs/jffs etc.
+# See image.bbclass for a usage of this.
+#
+
+EXTRAOPKGCONFIG ?= ""
+ROOTFS_PKGMANAGE = "opkg ${EXTRAOPKGCONFIG}"
+
+do_rootfs[depends] += "opkg-native:do_populate_sysroot opkg-utils-native:do_populate_sysroot"
+do_populate_sdk[depends] += "opkg-native:do_populate_sysroot opkg-utils-native:do_populate_sysroot"
+do_rootfs[recrdeptask] += "do_package_write_ipk do_package_qa"
+do_rootfs[vardeps] += "PACKAGE_FEED_URIS PACKAGE_FEED_BASE_PATHS PACKAGE_FEED_ARCHS"
+
+do_rootfs[lockfiles] += "${WORKDIR}/ipk.lock"
+do_populate_sdk[lockfiles] += "${WORKDIR}/sdk-ipk.lock"
+do_populate_sdk_ext[lockfiles] += "${WORKDIR}/sdk-ipk.lock"
+
+OPKG_PREPROCESS_COMMANDS = ""
+
+OPKG_POSTPROCESS_COMMANDS = ""
+
+OPKGLIBDIR ??= "${localstatedir}/lib"
+
+MULTILIBRE_ALLOW_REP = "${OPKGLIBDIR}/opkg|/usr/lib/opkg"
+
+python () {
+
+    if d.getVar('BUILD_IMAGES_FROM_FEEDS'):
+        flags = d.getVarFlag('do_rootfs', 'recrdeptask')
+        flags = flags.replace("do_package_write_ipk", "")
+        flags = flags.replace("do_deploy", "")
+        flags = flags.replace("do_populate_sysroot", "")
+        d.setVarFlag('do_rootfs', 'recrdeptask', flags)
+        d.setVar('OPKG_PREPROCESS_COMMANDS', "")
+        d.setVar('OPKG_POSTPROCESS_COMMANDS', '')
+}
diff --git a/poky/meta/classes-recipe/rootfs_rpm.bbclass b/poky/meta/classes-recipe/rootfs_rpm.bbclass
new file mode 100644
index 0000000..6eccd5a
--- /dev/null
+++ b/poky/meta/classes-recipe/rootfs_rpm.bbclass
@@ -0,0 +1,45 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
+# Creates a root filesystem out of rpm packages
+#
+
+ROOTFS_PKGMANAGE = "rpm dnf"
+
+# dnf is using our custom sysconfig module, and so will fail without these
+export STAGING_INCDIR
+export STAGING_LIBDIR
+
+# Add 100Meg of extra space for dnf
+IMAGE_ROOTFS_EXTRA_SPACE:append = "${@bb.utils.contains("PACKAGE_INSTALL", "dnf", " + 102400", "", d)}"
+
+# Dnf is python based, so be sure python3-native is available to us.
+EXTRANATIVEPATH += "python3-native"
+
+# opkg is needed for update-alternatives
+RPMROOTFSDEPENDS = "rpm-native:do_populate_sysroot \
+    dnf-native:do_populate_sysroot \
+    createrepo-c-native:do_populate_sysroot \
+    opkg-native:do_populate_sysroot"
+
+do_rootfs[depends] += "${RPMROOTFSDEPENDS}"
+do_populate_sdk[depends] += "${RPMROOTFSDEPENDS}"
+
+do_rootfs[recrdeptask] += "do_package_write_rpm do_package_qa"
+do_rootfs[vardeps] += "PACKAGE_FEED_URIS PACKAGE_FEED_BASE_PATHS PACKAGE_FEED_ARCHS"
+
+python () {
+    if d.getVar('BUILD_IMAGES_FROM_FEEDS'):
+        flags = d.getVarFlag('do_rootfs', 'recrdeptask')
+        flags = flags.replace("do_package_write_rpm", "")
+        flags = flags.replace("do_deploy", "")
+        flags = flags.replace("do_populate_sysroot", "")
+        d.setVarFlag('do_rootfs', 'recrdeptask', flags)
+        d.setVar('RPM_PREPROCESS_COMMANDS', '')
+        d.setVar('RPM_POSTPROCESS_COMMANDS', '')
+
+}
diff --git a/poky/meta/classes-recipe/rootfsdebugfiles.bbclass b/poky/meta/classes-recipe/rootfsdebugfiles.bbclass
new file mode 100644
index 0000000..cbcf876
--- /dev/null
+++ b/poky/meta/classes-recipe/rootfsdebugfiles.bbclass
@@ -0,0 +1,47 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This class installs additional files found on the build host
+# directly into the rootfs.
+#
+# One use case is to install a constant ssh host key in
+# an image that gets created for just one machine. This
+# solves two issues:
+# - host key generation on the device can stall when the
+#   kernel has not gathered enough entropy yet (seen in practice
+#   under qemu)
+# - ssh complains by default when the host key changes
+#
+# For dropbear, with the ssh host key stored alongside the local.conf:
+# 1. Extend local.conf:
+#    INHERIT += "rootfsdebugfiles"
+#    ROOTFS_DEBUG_FILES += "${TOPDIR}/conf/dropbear_rsa_host_key ${IMAGE_ROOTFS}/etc/dropbear/dropbear_rsa_host_key ;"
+# 2. Boot the image once, copy the dropbear_rsa_host_key from
+#    the device into your build conf directory.
+# 3. An optional parameter can be used to set the file mode
+#    of the copied target, for instance:
+#    ROOTFS_DEBUG_FILES += "${TOPDIR}/conf/dropbear_rsa_host_key ${IMAGE_ROOTFS}/etc/dropbear/dropbear_rsa_host_key 0600;"
+#    in case the file is required to have a specific mode (it shouldn't be too permissive, for example).
+#
+# Do not use for production images! It bypasses several
+# core build mechanisms (updating the image when one
+# of the files changes, license tracking in the image
+# manifest, ...).
+
+ROOTFS_DEBUG_FILES ?= ""
+ROOTFS_DEBUG_FILES[doc] = "Lists additional files or directories to be installed with 'cp -a' in the format 'source1 target1;source2 target2;...'"
+
+ROOTFS_POSTPROCESS_COMMAND += "rootfs_debug_files;"
+rootfs_debug_files () {
+   #!/bin/sh -e
+   echo "${ROOTFS_DEBUG_FILES}" | sed -e 's/;/\n/g' | while read source target mode; do
+      if [ -e "$source" ]; then
+         mkdir -p $(dirname $target)
+         cp -a $source $target
+         [ -n "$mode" ] && chmod $mode $target
+      fi
+   done
+}
diff --git a/poky/meta/classes-recipe/rust-bin.bbclass b/poky/meta/classes-recipe/rust-bin.bbclass
new file mode 100644
index 0000000..b8e7ef8
--- /dev/null
+++ b/poky/meta/classes-recipe/rust-bin.bbclass
@@ -0,0 +1,154 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit rust
+
+RDEPENDS:${PN}:append:class-target = " ${RUSTLIB_DEP}"
+
+RUSTC_ARCHFLAGS += "-C opt-level=3 -g -L ${STAGING_DIR_HOST}/${rustlibdir} -C linker=${RUST_TARGET_CCLD}"
+EXTRA_OEMAKE += 'RUSTC_ARCHFLAGS="${RUSTC_ARCHFLAGS}"'
+
+# Some libraries overlap with the standard library, but libstd is configured
+# to make it difficult or impossible to use its version. Unfortunately libstd
+# must be explicitly overridden using --extern.
+OVERLAP_LIBS = "\
+    libc \
+    log \
+    getopts \
+    rand \
+"
+def get_overlap_deps(d):
+    deps = d.getVar("DEPENDS").split()
+    overlap_deps = []
+    for o in d.getVar("OVERLAP_LIBS").split():
+        l = len([o for dep in deps if (o + '-rs' in dep)])
+        if l > 0:
+            overlap_deps.append(o)
+    return " ".join(overlap_deps)
+OVERLAP_DEPS = "${@get_overlap_deps(d)}"
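+# For illustration: if DEPENDS contained "libc-rs" and "rand-rs", OVERLAP_DEPS
+# would evaluate to "libc rand", and get_overlap_externs below would pass
+# "--extern libc=.../liblibc-rs.so --extern rand=.../librand-rs.so" (or the
+# corresponding .rlib files) to rustc.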
+
+# Prevents multiple static copies of standard library modules
+# See https://github.com/rust-lang/rust/issues/19680
+RUSTC_PREFER_DYNAMIC = "-C prefer-dynamic"
+RUSTC_FLAGS += "${RUSTC_PREFER_DYNAMIC}"
+
+CRATE_NAME ?= "${@d.getVar('BPN').replace('-rs', '').replace('-', '_')}"
+BINNAME ?= "${BPN}"
+LIBNAME ?= "lib${CRATE_NAME}-rs"
+CRATE_TYPE ?= "dylib"
+BIN_SRC ?= "${S}/src/main.rs"
+LIB_SRC ?= "${S}/src/lib.rs"
+
+rustbindest ?= "${bindir}"
+rustlibdest ?= "${rustlibdir}"
+RUST_RPATH_ABS ?= "${rustlibdir}:${rustlib}"
+
+def relative_rpaths(paths, base):
+    relpaths = set()
+    for p in paths.split(':'):
+        if p == base:
+            relpaths.add('$ORIGIN')
+            continue
+        relpaths.add(os.path.join('$ORIGIN', os.path.relpath(p, base)))
+    return '-rpath=' + ':'.join(relpaths) if len(relpaths) else ''
+
+RUST_LIB_RPATH_FLAGS ?= "${@relative_rpaths(d.getVar('RUST_RPATH_ABS', True), d.getVar('rustlibdest', True))}"
+RUST_BIN_RPATH_FLAGS ?= "${@relative_rpaths(d.getVar('RUST_RPATH_ABS', True), d.getVar('rustbindest', True))}"
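+# Worked example (paths are illustrative): relative_rpaths() turns an absolute
+# library path such as "/usr/lib/rustlib/<triple>/lib" into an $ORIGIN-relative
+# one, so a binary installed in "/usr/bin" gets
+# "-rpath=$ORIGIN/../lib/rustlib/<triple>/lib", while a path equal to the base
+# directory collapses to plain "$ORIGIN".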
+
+def libfilename(d):
+    if d.getVar('CRATE_TYPE', True) == 'dylib':
+        return d.getVar('LIBNAME', True) + '.so'
+    else:
+        return d.getVar('LIBNAME', True) + '.rlib'
+
+def link_args(d, bin):
+    linkargs = []
+    if bin:
+        rpaths = d.getVar('RUST_BIN_RPATH_FLAGS', False)
+    else:
+        rpaths = d.getVar('RUST_LIB_RPATH_FLAGS', False)
+        if d.getVar('CRATE_TYPE', True) == 'dylib':
+            linkargs.append('-soname')
+            linkargs.append(libfilename(d))
+    if len(rpaths):
+        linkargs.append(rpaths)
+    if len(linkargs):
+        return ' '.join(['-Wl,' + arg for arg in linkargs])
+    else:
+        return ''
+
+get_overlap_externs () {
+    externs=
+    for dep in ${OVERLAP_DEPS}; do
+        extern=$(ls ${STAGING_DIR_HOST}/${rustlibdir}/lib$dep-rs.{so,rlib} 2>/dev/null \
+                    | awk '{print $1}');
+        if [ -n "$extern" ]; then
+            externs="$externs --extern $dep=$extern"
+        else
+            echo "$dep in depends but no such library found in ${rustlibdir}!" >&2
+            exit 1
+        fi
+    done
+    echo "$externs"
+}
+
+do_configure () {
+}
+
+oe_runrustc () {
+	bbnote ${RUSTC} ${RUSTC_ARCHFLAGS} ${RUSTC_FLAGS} "$@"
+	"${RUSTC}" ${RUSTC_ARCHFLAGS} ${RUSTC_FLAGS} "$@"
+}
+
+oe_compile_rust_lib () {
+    rm -rf ${LIBNAME}.{rlib,so}
+    local -a link_args
+    if [ -n '${@link_args(d, False)}' ]; then
+        link_args[0]='-C'
+        link_args[1]='link-args=${@link_args(d, False)}'
+    fi
+    oe_runrustc $(get_overlap_externs) \
+        "${link_args[@]}" \
+        ${LIB_SRC} \
+        -o ${@libfilename(d)} \
+        --crate-name=${CRATE_NAME} --crate-type=${CRATE_TYPE} \
+        "$@"
+}
+oe_compile_rust_lib[vardeps] += "get_overlap_externs"
+
+oe_compile_rust_bin () {
+    rm -rf ${BINNAME}
+    local -a link_args
+    if [ -n '${@link_args(d, True)}' ]; then
+        link_args[0]='-C'
+        link_args[1]='link-args=${@link_args(d, True)}'
+    fi
+    oe_runrustc $(get_overlap_externs) \
+        "${link_args[@]}" \
+        ${BIN_SRC} -o ${BINNAME} "$@"
+}
+oe_compile_rust_bin[vardeps] += "get_overlap_externs"
+
+oe_install_rust_lib () {
+    for lib in $(ls ${LIBNAME}.{so,rlib} 2>/dev/null); do
+        echo Installing $lib
+        install -D -m 755 $lib ${D}/${rustlibdest}/$lib
+    done
+}
+
+oe_install_rust_bin () {
+    echo Installing ${BINNAME}
+    install -D -m 755 ${BINNAME} ${D}/${rustbindest}/${BINNAME}
+}
+
+do_rust_bin_fixups() {
+    for f in `find ${PKGD} -name '*.so*'`; do
+        echo "Strip rust note: $f"
+        ${OBJCOPY} -R .note.rustc $f $f
+    done
+}
+PACKAGE_PREPROCESS_FUNCS += "do_rust_bin_fixups"
+
diff --git a/poky/meta/classes-recipe/rust-common.bbclass b/poky/meta/classes-recipe/rust-common.bbclass
new file mode 100644
index 0000000..93bf6c8
--- /dev/null
+++ b/poky/meta/classes-recipe/rust-common.bbclass
@@ -0,0 +1,177 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit python3native
+inherit rust-target-config
+
+# Common variables used by all Rust builds
+export rustlibdir = "${libdir}/rustlib/${RUST_HOST_SYS}/lib"
+FILES:${PN} += "${rustlibdir}/*.so"
+FILES:${PN}-dev += "${rustlibdir}/*.rlib ${rustlibdir}/*.rmeta"
+FILES:${PN}-dbg += "${rustlibdir}/.debug"
+
+RUSTLIB = "-L ${STAGING_DIR_HOST}${rustlibdir}"
+RUST_DEBUG_REMAP = "--remap-path-prefix=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}"
+RUSTFLAGS += "${RUSTLIB} ${RUST_DEBUG_REMAP}"
+RUSTLIB_DEP ?= "libstd-rs"
+RUST_PANIC_STRATEGY ?= "unwind"
+
+def target_is_armv7(d):
+    '''Determine if target is armv7'''
+    # TUNE_FEATURES may include arm* even if the target is not arm
+    # in the case of *-native packages
+    if d.getVar('TARGET_ARCH') != 'arm':
+        return False
+
+    feat = d.getVar('TUNE_FEATURES')
+    feat = frozenset(feat.split())
+    mach_overrides = d.getVar('MACHINEOVERRIDES')
+    mach_overrides = frozenset(mach_overrides.split(':'))
+
+    v7=frozenset(['armv7a', 'armv7r', 'armv7m', 'armv7ve'])
+    if mach_overrides.isdisjoint(v7) and feat.isdisjoint(v7):
+        return False
+    else:
+        return True
+target_is_armv7[vardepvalue] = "${@target_is_armv7(d)}"
+
+# Responsible for taking Yocto triples and converting them to Rust triples
+def rust_base_triple(d, thing):
+    '''
+    Mangle bitbake's *_SYS into something that rust might support (see
+    rust/mk/cfg/* for a list)
+
+    Note that os is assumed to be some linux form
+    '''
+
+    # The llvm-target for armv7 is armv7-unknown-linux-gnueabihf
+    if d.getVar('{}_ARCH'.format(thing)) == d.getVar('TARGET_ARCH') and target_is_armv7(d):
+        arch = "armv7"
+    else:
+        arch = oe.rust.arch_to_rust_arch(d.getVar('{}_ARCH'.format(thing)))
+
+    # When bootstrapping rust-native, BUILD must be the same as upstream snapshot tarballs
+    bpn = d.getVar('BPN')
+    if thing == "BUILD" and bpn in ["rust"]:
+        return arch + "-unknown-linux-gnu"
+
+    vendor = d.getVar('{}_VENDOR'.format(thing))
+
+    # Default to glibc
+    libc = "-gnu"
+    os = d.getVar('{}_OS'.format(thing))
+    # This catches ARM targets and appends the necessary hard float bits
+    if os == "linux-gnueabi" or os == "linux-musleabi":
+        libc = bb.utils.contains('TUNE_FEATURES', 'callconvention-hard', 'hf', '', d)
+    elif "musl" in os:
+        libc = "-musl"
+        os = "linux"
+
+    return arch + vendor + '-' + os + libc
+
+
+# In some cases uname and the toolchain differ on their idea of the arch name
+RUST_BUILD_ARCH = "${@oe.rust.arch_to_rust_arch(d.getVar('BUILD_ARCH'))}"
+
+# Naming explanation
+# Yocto
+# - BUILD_SYS - Yocto triple of the build environment
+# - HOST_SYS - What we're building for in Yocto
+# - TARGET_SYS - What we're building for in Yocto
+#
+# So when building '-native' packages BUILD_SYS == HOST_SYS == TARGET_SYS
+# When building packages for the image HOST_SYS == TARGET_SYS
+# This is a gross oversimplification, as there are other modes, but
+# currently this is all that's supported.
+#
+# Rust
+# - TARGET - the system where the binary will run
+# - HOST - the system where the binary is being built
+#
+# Rust additionally uses two further cases:
+# - undecorated (e.g. CC) - equivalent to TARGET
+# - triple suffix (e.g. CC:x86_64_unknown_linux_gnu) - both
+#   see: https://github.com/alexcrichton/gcc-rs
+# Given the way Rust's internal triples and Yocto triples are mapped together,
+# it's likely best not to use the triple suffix, due to potential confusion.
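+#
+# As a purely illustrative example, for a glibc x86_64 target built with a
+# "-poky" TARGET_VENDOR, RUST_TARGET_SYS below evaluates to
+# "x86_64-poky-linux-gnu"; the corresponding musl target would be
+# "x86_64-poky-linux-musl".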
+
+RUST_BUILD_SYS = "${@rust_base_triple(d, 'BUILD')}"
+RUST_BUILD_SYS[vardepvalue] = "${RUST_BUILD_SYS}"
+RUST_HOST_SYS = "${@rust_base_triple(d, 'HOST')}"
+RUST_HOST_SYS[vardepvalue] = "${RUST_HOST_SYS}"
+RUST_TARGET_SYS = "${@rust_base_triple(d, 'TARGET')}"
+RUST_TARGET_SYS[vardepvalue] = "${RUST_TARGET_SYS}"
+
+# wrappers to get around the fact that Rust needs a single
+# binary but Yocto's compiler and linker commands have
+# arguments. Technically the archiver is always one command but
+# this is necessary for builds that determine the prefix and then
+# use those commands based on the prefix.
+WRAPPER_DIR = "${WORKDIR}/wrapper"
+RUST_BUILD_CC = "${WRAPPER_DIR}/build-rust-cc"
+RUST_BUILD_CXX = "${WRAPPER_DIR}/build-rust-cxx"
+RUST_BUILD_CCLD = "${WRAPPER_DIR}/build-rust-ccld"
+RUST_BUILD_AR = "${WRAPPER_DIR}/build-rust-ar"
+RUST_TARGET_CC = "${WRAPPER_DIR}/target-rust-cc"
+RUST_TARGET_CXX = "${WRAPPER_DIR}/target-rust-cxx"
+RUST_TARGET_CCLD = "${WRAPPER_DIR}/target-rust-ccld"
+RUST_TARGET_AR = "${WRAPPER_DIR}/target-rust-ar"
+
+create_wrapper_rust () {
+	file="$1"
+	shift
+	extras="$1"
+	shift
+
+	cat <<- EOF > "${file}"
+	#!/usr/bin/env python3
+	import os, sys
+	orig_binary = "$@"
+	extras = "${extras}"
+	binary = orig_binary.split()[0]
+	args = orig_binary.split() + sys.argv[1:]
+	if extras:
+	    args.append(extras)
+	os.execvp(binary, args)
+	EOF
+	chmod +x "${file}"
+}
+
+WRAPPER_TARGET_CC = "${CC}"
+WRAPPER_TARGET_CXX = "${CXX}"
+WRAPPER_TARGET_CCLD = "${CCLD}"
+WRAPPER_TARGET_LDFLAGS = "${LDFLAGS}"
+WRAPPER_TARGET_EXTRALD = ""
+WRAPPER_TARGET_AR = "${AR}"
+
+# compiler is used by gcc-rs
+# linker is used by rustc/cargo
+# archiver is used by the build of libstd-rs
+do_rust_create_wrappers () {
+	mkdir -p "${WRAPPER_DIR}"
+
+	# Yocto Build / Rust Host C compiler
+	create_wrapper_rust "${RUST_BUILD_CC}" "" "${BUILD_CC}"
+	# Yocto Build / Rust Host C++ compiler
+	create_wrapper_rust "${RUST_BUILD_CXX}" "" "${BUILD_CXX}"
+	# Yocto Build / Rust Host linker
+	create_wrapper_rust "${RUST_BUILD_CCLD}" "" "${BUILD_CCLD}" "${BUILD_LDFLAGS}"
+	# Yocto Build / Rust Host archiver
+	create_wrapper_rust "${RUST_BUILD_AR}" "" "${BUILD_AR}"
+
+	# Yocto Target / Rust Target C compiler
+	create_wrapper_rust "${RUST_TARGET_CC}" "${WRAPPER_TARGET_EXTRALD}" "${WRAPPER_TARGET_CC}" "${WRAPPER_TARGET_LDFLAGS}"
+	# Yocto Target / Rust Target C++ compiler
+	create_wrapper_rust "${RUST_TARGET_CXX}" "${WRAPPER_TARGET_EXTRALD}" "${WRAPPER_TARGET_CXX}" "${CXXFLAGS}"
+	# Yocto Target / Rust Target linker
+	create_wrapper_rust "${RUST_TARGET_CCLD}" "${WRAPPER_TARGET_EXTRALD}" "${WRAPPER_TARGET_CCLD}" "${WRAPPER_TARGET_LDFLAGS}"
+	# Yocto Target / Rust Target archiver
+	create_wrapper_rust "${RUST_TARGET_AR}" "" "${WRAPPER_TARGET_AR}"
+
+}
+
+addtask rust_create_wrappers before do_configure after do_patch do_prepare_recipe_sysroot
+do_rust_create_wrappers[dirs] += "${WRAPPER_DIR}"
diff --git a/poky/meta/classes-recipe/rust-target-config.bbclass b/poky/meta/classes-recipe/rust-target-config.bbclass
new file mode 100644
index 0000000..9e1d81b
--- /dev/null
+++ b/poky/meta/classes-recipe/rust-target-config.bbclass
@@ -0,0 +1,391 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Right now this is focused on arm-specific tune features.
+# We get away with this for now as one can only use x86-64 as the build host
+# (not arm).
+# Note that TUNE_FEATURES is _always_ referring to the target, so we really
+# don't want to use this for the host/build.
+def llvm_features_from_tune(d):
+    f = []
+    feat = d.getVar('TUNE_FEATURES')
+    if not feat:
+        return []
+    feat = frozenset(feat.split())
+
+    mach_overrides = d.getVar('MACHINEOVERRIDES')
+    mach_overrides = frozenset(mach_overrides.split(':'))
+
+    if 'vfpv4' in feat:
+        f.append("+vfp4")
+    if 'vfpv3' in feat:
+        f.append("+vfp3")
+    if 'vfpv3d16' in feat:
+        f.append("+d16")
+
+    if 'vfpv2' in feat or 'vfp' in feat:
+        f.append("+vfp2")
+
+    if 'neon' in feat:
+        f.append("+neon")
+
+    if 'mips32' in feat:
+        f.append("+mips32")
+
+    if 'mips32r2' in feat:
+        f.append("+mips32r2")
+
+    if target_is_armv7(d):
+        f.append('+v7')
+
+    if ('armv6' in mach_overrides) or ('armv6' in feat):
+        f.append("+v6")
+    if 'armv5te' in feat:
+        f.append("+strict-align")
+        f.append("+v5te")
+    elif 'armv5' in feat:
+        f.append("+strict-align")
+        f.append("+v5")
+
+    if ('armv4' in mach_overrides) or ('armv4' in feat):
+        f.append("+strict-align")
+
+    if 'dsp' in feat:
+        f.append("+dsp")
+
+    if 'thumb' in feat:
+        if d.getVar('ARM_THUMB_OPT') == "thumb":
+            if target_is_armv7(d):
+                f.append('+thumb2')
+            f.append("+thumb-mode")
+
+    if 'cortexa5' in feat:
+        f.append("+a5")
+    if 'cortexa7' in feat:
+        f.append("+a7")
+    if 'cortexa9' in feat:
+        f.append("+a9")
+    if 'cortexa15' in feat:
+        f.append("+a15")
+    if 'cortexa17' in feat:
+        f.append("+a17")
+    if ('riscv64' in feat) or ('riscv32' in feat):
+        f.append("+a,+c,+d,+f,+m")
+    return f
+llvm_features_from_tune[vardepvalue] = "${@llvm_features_from_tune(d)}"
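+# Purely illustrative: with TARGET_ARCH = "arm" and TUNE_FEATURES containing
+# just "armv7a vfpv4 neon", the function above would return
+# ["+vfp4", "+neon", "+v7"], i.e. an LLVM feature string of "+vfp4,+neon,+v7".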
+
+# TARGET_CC_ARCH changes from build/cross/target so it'll do the right thing
+# this should go away when https://github.com/rust-lang/rust/pull/31709 is
+# stable (1.9.0?)
+def llvm_features_from_cc_arch(d):
+    f = []
+    feat = d.getVar('TARGET_CC_ARCH')
+    if not feat:
+        return []
+    feat = frozenset(feat.split())
+
+    if '-mmmx' in feat:
+        f.append("+mmx")
+    if '-msse' in feat:
+        f.append("+sse")
+    if '-msse2' in feat:
+        f.append("+sse2")
+    if '-msse3' in feat:
+        f.append("+sse3")
+    if '-mssse3' in feat:
+        f.append("+ssse3")
+    if '-msse4.1' in feat:
+        f.append("+sse4.1")
+    if '-msse4.2' in feat:
+        f.append("+sse4.2")
+    if '-msse4a' in feat:
+        f.append("+sse4a")
+    if '-mavx' in feat:
+        f.append("+avx")
+    if '-mavx2' in feat:
+        f.append("+avx2")
+
+    return f
+
+def llvm_features_from_target_fpu(d):
+    # TARGET_FPU can be hard or soft. +soft-float tells llvm to use the soft float
+    # ABI. There is no option for hard.
+
+    fpu = d.getVar('TARGET_FPU', True)
+    return ["+soft-float"] if fpu == "soft" else []
+
+def llvm_features(d):
+    return ','.join(llvm_features_from_tune(d) +
+                    llvm_features_from_cc_arch(d) +
+                    llvm_features_from_target_fpu(d))
+
+llvm_features[vardepvalue] = "${@llvm_features(d)}"
+
+## arm-unknown-linux-gnueabihf
+DATA_LAYOUT[arm-eabi] = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64"
+TARGET_ENDIAN[arm-eabi] = "little"
+TARGET_POINTER_WIDTH[arm-eabi] = "32"
+TARGET_C_INT_WIDTH[arm-eabi] = "32"
+MAX_ATOMIC_WIDTH[arm-eabi] = "64"
+FEATURES[arm-eabi] = "+v6,+vfp2"
+
+## armv7-unknown-linux-gnueabihf
+DATA_LAYOUT[armv7-eabi] = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64"
+TARGET_ENDIAN[armv7-eabi] = "little"
+TARGET_POINTER_WIDTH[armv7-eabi] = "32"
+TARGET_C_INT_WIDTH[armv7-eabi] = "32"
+MAX_ATOMIC_WIDTH[armv7-eabi] = "64"
+FEATURES[armv7-eabi] = "+v7,+vfp2,+thumb2"
+
+## aarch64-unknown-linux-{gnu, musl}
+DATA_LAYOUT[aarch64] = "e-m:e-i8:8:32-i16:16:32-i64:64-i128:128-n32:64-S128"
+TARGET_ENDIAN[aarch64] = "little"
+TARGET_POINTER_WIDTH[aarch64] = "64"
+TARGET_C_INT_WIDTH[aarch64] = "32"
+MAX_ATOMIC_WIDTH[aarch64] = "128"
+
+## x86_64-unknown-linux-{gnu, musl}
+DATA_LAYOUT[x86_64] = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
+TARGET_ENDIAN[x86_64] = "little"
+TARGET_POINTER_WIDTH[x86_64] = "64"
+TARGET_C_INT_WIDTH[x86_64] = "32"
+MAX_ATOMIC_WIDTH[x86_64] = "64"
+
+## x86_64-unknown-linux-gnux32
+DATA_LAYOUT[x86_64-x32] = "e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128"
+TARGET_ENDIAN[x86_64-x32] = "little"
+TARGET_POINTER_WIDTH[x86_64-x32] = "32"
+TARGET_C_INT_WIDTH[x86_64-x32] = "32"
+MAX_ATOMIC_WIDTH[x86_64-x32] = "64"
+
+## i686-unknown-linux-{gnu, musl}
+DATA_LAYOUT[i686] = "e-m:e-p:32:32-f64:32:64-f80:32-n8:16:32-S128"
+TARGET_ENDIAN[i686] = "little"
+TARGET_POINTER_WIDTH[i686] = "32"
+TARGET_C_INT_WIDTH[i686] = "32"
+MAX_ATOMIC_WIDTH[i686] = "64"
+
+## XXX: a bit of a hack so qemux86 builds, clone of i686-unknown-linux-{gnu, musl} above
+DATA_LAYOUT[i586] = "e-m:e-p:32:32-f64:32:64-f80:32-n8:16:32-S128"
+TARGET_ENDIAN[i586] = "little"
+TARGET_POINTER_WIDTH[i586] = "32"
+TARGET_C_INT_WIDTH[i586] = "32"
+MAX_ATOMIC_WIDTH[i586] = "64"
+
+## mips-unknown-linux-{gnu, musl}
+DATA_LAYOUT[mips] = "E-m:m-p:32:32-i8:8:32-i16:16:32-i64:64-n32-S64"
+TARGET_ENDIAN[mips] = "big"
+TARGET_POINTER_WIDTH[mips] = "32"
+TARGET_C_INT_WIDTH[mips] = "32"
+MAX_ATOMIC_WIDTH[mips] = "32"
+
+## mipsel-unknown-linux-{gnu, musl}
+DATA_LAYOUT[mipsel] = "e-m:m-p:32:32-i8:8:32-i16:16:32-i64:64-n32-S64"
+TARGET_ENDIAN[mipsel] = "little"
+TARGET_POINTER_WIDTH[mipsel] = "32"
+TARGET_C_INT_WIDTH[mipsel] = "32"
+MAX_ATOMIC_WIDTH[mipsel] = "32"
+
+## mips64-unknown-linux-{gnu, musl}
+DATA_LAYOUT[mips64] = "E-m:e-i8:8:32-i16:16:32-i64:64-n32:64-S128"
+TARGET_ENDIAN[mips64] = "big"
+TARGET_POINTER_WIDTH[mips64] = "64"
+TARGET_C_INT_WIDTH[mips64] = "64"
+MAX_ATOMIC_WIDTH[mips64] = "64"
+
+## mips64-n32-unknown-linux-{gnu, musl}
+DATA_LAYOUT[mips64-n32] = "E-m:e-p:32:32-i8:8:32-i16:16:32-i64:64-n32:64-S128"
+TARGET_ENDIAN[mips64-n32] = "big"
+TARGET_POINTER_WIDTH[mips64-n32] = "32"
+TARGET_C_INT_WIDTH[mips64-n32] = "32"
+MAX_ATOMIC_WIDTH[mips64-n32] = "64"
+
+## mips64el-unknown-linux-{gnu, musl}
+DATA_LAYOUT[mips64el] = "e-m:e-i8:8:32-i16:16:32-i64:64-n32:64-S128"
+TARGET_ENDIAN[mips64el] = "little"
+TARGET_POINTER_WIDTH[mips64el] = "64"
+TARGET_C_INT_WIDTH[mips64el] = "64"
+MAX_ATOMIC_WIDTH[mips64el] = "64"
+
+## powerpc-unknown-linux-{gnu, musl}
+DATA_LAYOUT[powerpc] = "E-m:e-p:32:32-i64:64-n32"
+TARGET_ENDIAN[powerpc] = "big"
+TARGET_POINTER_WIDTH[powerpc] = "32"
+TARGET_C_INT_WIDTH[powerpc] = "32"
+MAX_ATOMIC_WIDTH[powerpc] = "32"
+
+## powerpc64-unknown-linux-{gnu, musl}
+DATA_LAYOUT[powerpc64] = "E-m:e-i64:64-n32:64-S128-v256:256:256-v512:512:512"
+TARGET_ENDIAN[powerpc64] = "big"
+TARGET_POINTER_WIDTH[powerpc64] = "64"
+TARGET_C_INT_WIDTH[powerpc64] = "64"
+MAX_ATOMIC_WIDTH[powerpc64] = "64"
+
+## powerpc64le-unknown-linux-{gnu, musl}
+DATA_LAYOUT[powerpc64le] = "e-m:e-i64:64-n32:64-v256:256:256-v512:512:512"
+TARGET_ENDIAN[powerpc64le] = "little"
+TARGET_POINTER_WIDTH[powerpc64le] = "64"
+TARGET_C_INT_WIDTH[powerpc64le] = "64"
+MAX_ATOMIC_WIDTH[powerpc64le] = "64"
+
+## riscv32-unknown-linux-{gnu, musl}
+DATA_LAYOUT[riscv32] = "e-m:e-p:32:32-i64:64-n32-S128"
+TARGET_ENDIAN[riscv32] = "little"
+TARGET_POINTER_WIDTH[riscv32] = "32"
+TARGET_C_INT_WIDTH[riscv32] = "32"
+MAX_ATOMIC_WIDTH[riscv32] = "32"
+
+## riscv64-unknown-linux-{gnu, musl}
+DATA_LAYOUT[riscv64] = "e-m:e-p:64:64-i64:64-i128:128-n64-S128"
+TARGET_ENDIAN[riscv64] = "little"
+TARGET_POINTER_WIDTH[riscv64] = "64"
+TARGET_C_INT_WIDTH[riscv64] = "64"
+MAX_ATOMIC_WIDTH[riscv64] = "64"
+
+# Convert a normal arch (HOST_ARCH, TARGET_ARCH, BUILD_ARCH, etc) to something
+# rust's internals won't choke on.
+def arch_to_rust_target_arch(arch):
+    if arch == "i586" or arch == "i686":
+        return "x86"
+    elif arch == "mipsel":
+        return "mips"
+    elif arch == "mip64sel":
+        return "mips64"
+    elif arch == "armv7":
+        return "arm"
+    elif arch == "powerpc64le":
+        return "powerpc64"
+    else:
+        return arch
+
+# generates our target CPU value
+def llvm_cpu(d):
+    cpu = d.getVar('PACKAGE_ARCH')
+    target = d.getVar('TRANSLATED_TARGET_ARCH')
+
+    trans = {}
+    trans['corei7-64'] = "corei7"
+    trans['core2-32'] = "core2"
+    trans['x86-64'] = "x86-64"
+    trans['i686'] = "i686"
+    trans['i586'] = "i586"
+    trans['mips64'] = "mips64"
+    trans['mips64el'] = "mips64"
+    trans['riscv64'] = "generic-rv64"
+    trans['riscv32'] = "generic-rv32"
+
+    if target in ["mips", "mipsel", "powerpc"]:
+        feat = frozenset(d.getVar('TUNE_FEATURES').split())
+        if "mips32r2" in feat:
+            trans['mipsel'] = "mips32r2"
+            trans['mips'] = "mips32r2"
+        elif "mips32" in feat:
+            trans['mipsel'] = "mips32"
+            trans['mips'] = "mips32"
+        elif "ppc7400" in feat:
+            trans['powerpc'] = "7400"
+
+    try:
+        return trans[cpu]
+    except:
+        return trans.get(target, "generic")
+
+llvm_cpu[vardepvalue] = "${@llvm_cpu(d)}"
+
+def rust_gen_target(d, thing, wd, arch):
+    import json
+
+    build_sys = d.getVar('BUILD_SYS')
+    target_sys = d.getVar('TARGET_SYS')
+
+    sys = d.getVar('{}_SYS'.format(thing))
+    prefix = d.getVar('{}_PREFIX'.format(thing))
+    rustsys = d.getVar('RUST_{}_SYS'.format(thing))
+
+    abi = None
+    cpu = "generic"
+    features = ""
+
+    # Need to apply the target tuning consistently, but only if the triplet applies to the target
+    # and not in the native case
+    if sys == target_sys and sys != build_sys:
+        abi = d.getVar('ABIEXTENSION')
+        cpu = llvm_cpu(d)
+        if bb.data.inherits_class('native', d):
+            features = ','.join(llvm_features_from_cc_arch(d))
+        else:
+            features = llvm_features(d) or ""
+        # arm and armv7 have different targets in llvm
+        if arch == "arm" and target_is_armv7(d):
+            arch = 'armv7'
+
+    rust_arch = oe.rust.arch_to_rust_arch(arch)
+
+    if abi:
+        arch_abi = "{}-{}".format(rust_arch, abi)
+    else:
+        arch_abi = rust_arch
+
+    features = features or d.getVarFlag('FEATURES', arch_abi) or ""
+    features = features.strip()
+
+    # build tspec
+    tspec = {}
+    tspec['llvm-target'] = rustsys
+    tspec['data-layout'] = d.getVarFlag('DATA_LAYOUT', arch_abi)
+    if tspec['data-layout'] is None:
+        bb.fatal("No rust target defined for %s" % arch_abi)
+    tspec['max-atomic-width'] = int(d.getVarFlag('MAX_ATOMIC_WIDTH', arch_abi))
+    tspec['target-pointer-width'] = d.getVarFlag('TARGET_POINTER_WIDTH', arch_abi)
+    tspec['target-c-int-width'] = d.getVarFlag('TARGET_C_INT_WIDTH', arch_abi)
+    tspec['target-endian'] = d.getVarFlag('TARGET_ENDIAN', arch_abi)
+    tspec['arch'] = arch_to_rust_target_arch(rust_arch)
+    tspec['os'] = "linux"
+    if "musl" in tspec['llvm-target']:
+        tspec['env'] = "musl"
+    else:
+        tspec['env'] = "gnu"
+    if "riscv64" in tspec['llvm-target']:
+        tspec['llvm-abiname'] = "lp64d"
+    if "riscv32" in tspec['llvm-target']:
+        tspec['llvm-abiname'] = "ilp32d"
+    tspec['vendor'] = "unknown"
+    tspec['target-family'] = "unix"
+    tspec['linker'] = "{}{}gcc".format(d.getVar('CCACHE'), prefix)
+    tspec['cpu'] = cpu
+    if features != "":
+        tspec['features'] = features
+    tspec['dynamic-linking'] = True
+    tspec['executables'] = True
+    tspec['linker-is-gnu'] = True
+    tspec['linker-flavor'] = "gcc"
+    tspec['has-rpath'] = True
+    tspec['position-independent-executables'] = True
+    tspec['panic-strategy'] = d.getVar("RUST_PANIC_STRATEGY")
+
+    # write out the target spec json file
+    with open(wd + rustsys + '.json', 'w') as f:
+        json.dump(tspec, f, indent=4)
+
+# These are accounted for in tmpdir path names so don't need to be in the task sig
+rust_gen_target[vardepsexclude] += "ABIEXTENSION llvm_cpu"
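+
+# For illustration only (exact values depend on DISTRO, TUNE and toolchain
+# variables), the spec generated for a glibc riscv64 target with a "-poky"
+# vendor would look roughly like:
+#   {
+#       "llvm-target": "riscv64-poky-linux-gnu",
+#       "data-layout": "e-m:e-p:64:64-i64:64-i128:128-n64-S128",
+#       "max-atomic-width": 64,
+#       "target-pointer-width": "64",
+#       "target-c-int-width": "64",
+#       "target-endian": "little",
+#       "arch": "riscv64",
+#       "os": "linux",
+#       "env": "gnu",
+#       "llvm-abiname": "lp64d",
+#       "cpu": "generic-rv64",
+#       ...
+#   }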
+
+do_rust_gen_targets[vardeps] += "DATA_LAYOUT TARGET_ENDIAN TARGET_POINTER_WIDTH TARGET_C_INT_WIDTH MAX_ATOMIC_WIDTH FEATURES"
+
+RUST_TARGETS_DIR = "${WORKDIR}/rust-targets/"
+export RUST_TARGET_PATH = "${RUST_TARGETS_DIR}"
+
+python do_rust_gen_targets () {
+    wd = d.getVar('RUST_TARGETS_DIR')
+    # Order of BUILD, HOST, TARGET is important in case the files overwrite, most specific last
+    rust_gen_target(d, 'BUILD', wd, d.getVar('BUILD_ARCH'))
+    rust_gen_target(d, 'HOST', wd, d.getVar('HOST_ARCH'))
+    rust_gen_target(d, 'TARGET', wd, d.getVar('TARGET_ARCH'))
+}
+
+addtask rust_gen_targets after do_patch before do_compile
+do_rust_gen_targets[dirs] += "${RUST_TARGETS_DIR}"
+
diff --git a/poky/meta/classes-recipe/rust.bbclass b/poky/meta/classes-recipe/rust.bbclass
new file mode 100644
index 0000000..dae25ca
--- /dev/null
+++ b/poky/meta/classes-recipe/rust.bbclass
@@ -0,0 +1,51 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit rust-common
+
+RUSTC = "rustc"
+
+RUSTC_ARCHFLAGS += "--target=${RUST_HOST_SYS} ${RUSTFLAGS}"
+
+def rust_base_dep(d):
+    # Taken from meta/classes/base.bbclass `base_dep_prepend` and modified to
+    # use rust instead of gcc
+    deps = ""
+    if not d.getVar('INHIBIT_DEFAULT_RUST_DEPS'):
+        if (d.getVar('HOST_SYS') != d.getVar('BUILD_SYS')):
+            deps += " rust-native ${RUSTLIB_DEP}"
+        else:
+            deps += " rust-native"
+    return deps
+
+DEPENDS:append = " ${@rust_base_dep(d)}"
+
+# BUILD_LDFLAGS
+# 	${STAGING_LIBDIR_NATIVE}
+# 	${STAGING_BASE_LIBDIR_NATIVE}
+# BUILDSDK_LDFLAGS
+# 	${STAGING_LIBDIR}
+# 	#{STAGING_DIR_HOST}
+# TARGET_LDFLAGS ?????
+#RUSTC_BUILD_LDFLAGS = "\
+#	--sysroot ${STAGING_DIR_NATIVE} \
+#	-L${STAGING_LIBDIR_NATIVE}	\
+#	-L${STAGING_BASE_LIBDIR_NATIVE}	\
+#"
+
+# XXX: for some reason bitbake sets BUILD_* & TARGET_* but uses the bare
+# variables for HOST. Alias things to make it easier for us.
+HOST_LDFLAGS  ?= "${LDFLAGS}"
+HOST_CFLAGS   ?= "${CFLAGS}"
+HOST_CXXFLAGS ?= "${CXXFLAGS}"
+HOST_CPPFLAGS ?= "${CPPFLAGS}"
+
+rustlib_suffix="${TUNE_ARCH}${TARGET_VENDOR}-${TARGET_OS}/rustlib/${RUST_HOST_SYS}/lib"
+# Native sysroot standard library path
+rustlib_src="${prefix}/lib/${rustlib_suffix}"
+# Host sysroot standard library path
+rustlib="${libdir}/${rustlib_suffix}"
+rustlib:class-native="${libdir}/rustlib/${BUILD_SYS}/lib"
diff --git a/poky/meta/classes-recipe/scons.bbclass b/poky/meta/classes-recipe/scons.bbclass
new file mode 100644
index 0000000..5f0d4a9
--- /dev/null
+++ b/poky/meta/classes-recipe/scons.bbclass
@@ -0,0 +1,34 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit python3native
+
+DEPENDS += "python3-scons-native"
+
+EXTRA_OESCONS ?= ""
+
+do_configure() {
+	if [ -n "${CONFIGURESTAMPFILE}" -a "${S}" = "${B}" ]; then
+		if [ -e "${CONFIGURESTAMPFILE}" -a "`cat ${CONFIGURESTAMPFILE}`" != "${BB_TASKHASH}" -a "${CLEANBROKEN}" != "1" ]; then
+			${STAGING_BINDIR_NATIVE}/scons --directory=${S} --clean PREFIX=${prefix} prefix=${prefix} ${EXTRA_OESCONS}
+		fi
+
+		mkdir -p `dirname ${CONFIGURESTAMPFILE}`
+		echo ${BB_TASKHASH} > ${CONFIGURESTAMPFILE}
+	fi
+}
+
+scons_do_compile() {
+	${STAGING_BINDIR_NATIVE}/scons --directory=${S} ${PARALLEL_MAKE} PREFIX=${prefix} prefix=${prefix} ${EXTRA_OESCONS} || \
+	die "scons build execution failed."
+}
+
+scons_do_install() {
+	${STAGING_BINDIR_NATIVE}/scons --directory=${S} install_root=${D}${prefix} PREFIX=${prefix} prefix=${prefix} ${EXTRA_OESCONS} install || \
+	die "scons install execution failed."
+}
+
+EXPORT_FUNCTIONS do_compile do_install
diff --git a/poky/meta/classes-recipe/setuptools3-base.bbclass b/poky/meta/classes-recipe/setuptools3-base.bbclass
new file mode 100644
index 0000000..21b688c
--- /dev/null
+++ b/poky/meta/classes-recipe/setuptools3-base.bbclass
@@ -0,0 +1,37 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+DEPENDS:append:class-target = " ${PYTHON_PN}-native ${PYTHON_PN}"
+DEPENDS:append:class-nativesdk = " ${PYTHON_PN}-native ${PYTHON_PN}"
+RDEPENDS:${PN}:append:class-target = " ${PYTHON_PN}-core"
+
+export STAGING_INCDIR
+export STAGING_LIBDIR
+
+# LDSHARED is the ld *command* used to create a shared library
+export LDSHARED  = "${CCLD} -shared"
+# LDCXXSHARED is the ld *command* used to create a shared library of C++
+# objects
+export LDCXXSHARED  = "${CXX} -shared"
+# CCSHARED are the C *flags* used to create objects to go into a shared
+# library (module)
+export CCSHARED  = "-fPIC -DPIC"
+# LINKFORSHARED are the flags passed to the $(CC) command that links
+# the python executable
+export LINKFORSHARED = "${SECURITY_CFLAGS} -Xlinker -export-dynamic"
+
+FILES:${PN} += "${libdir}/* ${libdir}/${PYTHON_DIR}/*"
+
+FILES:${PN}-staticdev += "\
+  ${PYTHON_SITEPACKAGES_DIR}/*.a \
+"
+FILES:${PN}-dev += "\
+  ${datadir}/pkgconfig \
+  ${libdir}/pkgconfig \
+  ${PYTHON_SITEPACKAGES_DIR}/*.la \
+"
+inherit python3native python3targetconfig
+
diff --git a/poky/meta/classes-recipe/setuptools3.bbclass b/poky/meta/classes-recipe/setuptools3.bbclass
new file mode 100644
index 0000000..4c6e79e
--- /dev/null
+++ b/poky/meta/classes-recipe/setuptools3.bbclass
@@ -0,0 +1,38 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit setuptools3-base python_pep517
+
+DEPENDS += "python3-setuptools-native python3-wheel-native"
+
+SETUPTOOLS_BUILD_ARGS ?= ""
+
+SETUPTOOLS_SETUP_PATH ?= "${S}"
+
+setuptools3_do_configure() {
+    :
+}
+
+setuptools3_do_compile() {
+        cd ${SETUPTOOLS_SETUP_PATH}
+        NO_FETCH_BUILD=1 \
+        STAGING_INCDIR=${STAGING_INCDIR} \
+        STAGING_LIBDIR=${STAGING_LIBDIR} \
+        ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py \
+        bdist_wheel --verbose --dist-dir ${PEP517_WHEEL_PATH} ${SETUPTOOLS_BUILD_ARGS} || \
+        bbfatal_log "'${PYTHON_PN} setup.py bdist_wheel ${SETUPTOOLS_BUILD_ARGS}' execution failed."
+}
+setuptools3_do_compile[vardepsexclude] = "MACHINE"
+do_compile[cleandirs] += "${PEP517_WHEEL_PATH}"
+
+# This could be removed in the future but some recipes in meta-oe still use it
+setuptools3_do_install() {
+        python_pep517_do_install
+}
+
+EXPORT_FUNCTIONS do_configure do_compile do_install
+
+export LDSHARED="${CCLD} -shared"
diff --git a/poky/meta/classes-recipe/setuptools3_legacy.bbclass b/poky/meta/classes-recipe/setuptools3_legacy.bbclass
new file mode 100644
index 0000000..21748f9
--- /dev/null
+++ b/poky/meta/classes-recipe/setuptools3_legacy.bbclass
@@ -0,0 +1,84 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This class is for packages which use the deprecated setuptools behaviour,
+# specifically custom install tasks which don't work correctly with bdist_wheel.
+# This behaviour is deprecated in setuptools[1] and won't work in the future, so
+# all users of this should consider their options: pure Python modules can use a
+# modern Python tool such as build[2], or packages which are doing more (such as
+# installing init scripts) should use a fully-featured build system such as Meson.
+#
+# [1] https://setuptools.pypa.io/en/latest/history.html#id142
+# [2] https://pypi.org/project/build/
+
+inherit setuptools3-base
+
+B = "${WORKDIR}/build"
+
+SETUPTOOLS_BUILD_ARGS ?= ""
+SETUPTOOLS_INSTALL_ARGS ?= "--root=${D} \
+    --prefix=${prefix} \
+    --install-lib=${PYTHON_SITEPACKAGES_DIR} \
+    --install-data=${datadir}"
+
+SETUPTOOLS_PYTHON = "python3"
+SETUPTOOLS_PYTHON:class-native = "nativepython3"
+
+SETUPTOOLS_SETUP_PATH ?= "${S}"
+
+setuptools3_legacy_do_configure() {
+    :
+}
+
+setuptools3_legacy_do_compile() {
+        cd ${SETUPTOOLS_SETUP_PATH}
+        NO_FETCH_BUILD=1 \
+        STAGING_INCDIR=${STAGING_INCDIR} \
+        STAGING_LIBDIR=${STAGING_LIBDIR} \
+        ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py \
+        build --build-base=${B} ${SETUPTOOLS_BUILD_ARGS} || \
+        bbfatal_log "'${PYTHON_PN} setup.py build ${SETUPTOOLS_BUILD_ARGS}' execution failed."
+}
+setuptools3_legacy_do_compile[vardepsexclude] = "MACHINE"
+
+setuptools3_legacy_do_install() {
+        cd ${SETUPTOOLS_SETUP_PATH}
+        install -d ${D}${PYTHON_SITEPACKAGES_DIR}
+        STAGING_INCDIR=${STAGING_INCDIR} \
+        STAGING_LIBDIR=${STAGING_LIBDIR} \
+        PYTHONPATH=${D}${PYTHON_SITEPACKAGES_DIR} \
+        ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py \
+        build --build-base=${B} install --skip-build ${SETUPTOOLS_INSTALL_ARGS} || \
+        bbfatal_log "'${PYTHON_PN} setup.py install ${SETUPTOOLS_INSTALL_ARGS}' execution failed."
+
+        # support filenames with *spaces*
+        find ${D} -name "*.py" -exec grep -q ${D} {} \; \
+                               -exec sed -i -e s:${D}::g {} \;
+
+        for i in ${D}${bindir}/* ${D}${sbindir}/*; do
+            if [ -f "$i" ]; then
+                sed -i -e s:${PYTHON}:${USRBINPATH}/env\ ${SETUPTOOLS_PYTHON}:g $i
+                sed -i -e s:${STAGING_BINDIR_NATIVE}:${bindir}:g $i
+            fi
+        done
+
+        rm -f ${D}${PYTHON_SITEPACKAGES_DIR}/easy-install.pth
+
+        #
+        # FIXME: Bandaid against wrong datadir computation
+        #
+        if [ -e ${D}${datadir}/share ]; then
+            mv -f ${D}${datadir}/share/* ${D}${datadir}/
+            rmdir ${D}${datadir}/share
+        fi
+}
+setuptools3_legacy_do_install[vardepsexclude] = "MACHINE"
+
+EXPORT_FUNCTIONS do_configure do_compile do_install
+
+export LDSHARED="${CCLD} -shared"
+DEPENDS += "python3-setuptools-native"
+
diff --git a/poky/meta/classes-recipe/siteinfo.bbclass b/poky/meta/classes-recipe/siteinfo.bbclass
new file mode 100644
index 0000000..d31c9b2
--- /dev/null
+++ b/poky/meta/classes-recipe/siteinfo.bbclass
@@ -0,0 +1,232 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This class exists to provide information about the targets that
+# may be needed by other classes and/or recipes. If you add a new
+# target this will probably need to be updated.
+
+#
+# siteinfo_data_for_machine() returns the list of site information keywords
+# for the target "<arch>-<os>", covering:
+# * endianness: "endian-big" or "endian-little"
+# * bit size: "bit-32" or "bit-64"
+# * libc: the C library family in use (e.g. "common-glibc" or "common-musl")
+# plus the target name itself and "common".
+#
+# Architectures or OSes that are not listed here simply contribute no
+# keywords; the anonymous python below fails the build if the endianness or
+# bit size cannot be determined.
+#
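+# For example, absent any SITEINFO_EXTRA_DATAFUNCS additions,
+# siteinfo_data_for_machine("x86_64", "linux-gnu", d) returns
+# ['endian-little', 'common-linux', 'common-glibc', 'bit-64', 'x86_64-linux',
+#  'x86_64-linux-gnu', 'common'].
+#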
+def siteinfo_data_for_machine(arch, os, d):
+    archinfo = {
+        "allarch": "endian-little bit-32", # bogus, but better than special-casing the checks below for allarch
+        "aarch64": "endian-little bit-64 arm-common arm-64",
+        "aarch64_be": "endian-big bit-64 arm-common arm-64",
+        "arc": "endian-little bit-32 arc-common",
+        "arceb": "endian-big bit-32 arc-common",
+        "arm": "endian-little bit-32 arm-common arm-32",
+        "armeb": "endian-big bit-32 arm-common arm-32",
+        "avr32": "endian-big bit-32 avr32-common",
+        "bfin": "endian-little bit-32 bfin-common",
+        "epiphany": "endian-little bit-32",
+        "i386": "endian-little bit-32 ix86-common",
+        "i486": "endian-little bit-32 ix86-common",
+        "i586": "endian-little bit-32 ix86-common",
+        "i686": "endian-little bit-32 ix86-common",
+        "ia64": "endian-little bit-64",
+        "lm32": "endian-big bit-32",
+        "m68k": "endian-big bit-32",
+        "microblaze": "endian-big bit-32 microblaze-common",
+        "microblazeel": "endian-little bit-32 microblaze-common",
+        "mips": "endian-big bit-32 mips-common",
+        "mips64": "endian-big bit-64 mips-common",
+        "mips64el": "endian-little bit-64 mips-common",
+        "mipsisa64r6": "endian-big bit-64 mips-common",
+        "mipsisa64r6el": "endian-little bit-64 mips-common",
+        "mipsel": "endian-little bit-32 mips-common",
+        "mipsisa32r6": "endian-big bit-32 mips-common",
+        "mipsisa32r6el": "endian-little bit-32 mips-common",
+        "powerpc": "endian-big bit-32 powerpc-common",
+        "powerpcle": "endian-little bit-32 powerpc-common",
+        "nios2": "endian-little bit-32 nios2-common",
+        "powerpc64": "endian-big bit-64 powerpc-common",
+        "powerpc64le": "endian-little bit-64 powerpc-common",
+        "ppc": "endian-big bit-32 powerpc-common",
+        "ppc64": "endian-big bit-64 powerpc-common",
+        "ppc64le" : "endian-little bit-64 powerpc-common",
+        "riscv32": "endian-little bit-32 riscv-common",
+        "riscv64": "endian-little bit-64 riscv-common",
+        "sh3": "endian-little bit-32 sh-common",
+        "sh3eb": "endian-big bit-32 sh-common",
+        "sh4": "endian-little bit-32 sh-common",
+        "sh4eb": "endian-big bit-32 sh-common",
+        "sparc": "endian-big bit-32",
+        "viac3": "endian-little bit-32 ix86-common",
+        "x86_64": "endian-little", # bitinfo specified in targetinfo
+    }
+    osinfo = {
+        "darwin": "common-darwin",
+        "darwin9": "common-darwin",
+        "linux": "common-linux common-glibc",
+        "linux-gnu": "common-linux common-glibc",
+        "linux-gnu_ilp32": "common-linux common-glibc",
+        "linux-gnux32": "common-linux common-glibc",
+        "linux-gnun32": "common-linux common-glibc",
+        "linux-gnueabi": "common-linux common-glibc",
+        "linux-gnuspe": "common-linux common-glibc",
+        "linux-musl": "common-linux common-musl",
+        "linux-muslx32": "common-linux common-musl",
+        "linux-musleabi": "common-linux common-musl",
+        "linux-muslspe": "common-linux common-musl",
+        "uclinux-uclibc": "common-uclibc",
+        "cygwin": "common-cygwin",
+        "mingw32": "common-mingw",
+    }
+    targetinfo = {
+        "aarch64-linux-gnu": "aarch64-linux",
+        "aarch64_be-linux-gnu": "aarch64_be-linux",
+        "aarch64-linux-gnu_ilp32": "bit-32 aarch64_be-linux arm-32",
+        "aarch64_be-linux-gnu_ilp32": "bit-32 aarch64_be-linux arm-32",
+        "aarch64-linux-musl": "aarch64-linux",
+        "aarch64_be-linux-musl": "aarch64_be-linux",
+        "arm-linux-gnueabi": "arm-linux",
+        "arm-linux-musleabi": "arm-linux",
+        "armeb-linux-gnueabi": "armeb-linux",
+        "armeb-linux-musleabi": "armeb-linux",
+        "microblazeel-linux" : "microblaze-linux",
+        "microblazeel-linux-musl" : "microblaze-linux",
+        "mips-linux-musl": "mips-linux",
+        "mipsel-linux-musl": "mipsel-linux",
+        "mips64-linux-musl": "mips64-linux",
+        "mips64el-linux-musl": "mips64el-linux",
+        "mips64-linux-gnun32": "mips-linux bit-32",
+        "mips64el-linux-gnun32": "mipsel-linux bit-32",
+        "mipsisa64r6-linux-gnun32": "mipsisa32r6-linux bit-32",
+        "mipsisa64r6el-linux-gnun32": "mipsisa32r6el-linux bit-32",
+        "powerpc-linux": "powerpc32-linux powerpc32-linux-glibc",
+        "powerpc-linux-musl": "powerpc-linux powerpc32-linux powerpc32-linux-musl",
+        "powerpcle-linux": "powerpc32-linux powerpc32-linux-glibc",
+        "powerpcle-linux-musl": "powerpc-linux powerpc32-linux powerpc32-linux-musl",
+        "powerpc-linux-gnuspe": "powerpc-linux powerpc32-linux powerpc32-linux-glibc",
+        "powerpc-linux-muslspe": "powerpc-linux powerpc32-linux powerpc32-linux-musl",
+        "powerpc64-linux-gnuspe": "powerpc-linux powerpc64-linux powerpc64-linux-glibc",
+        "powerpc64-linux-muslspe": "powerpc-linux powerpc64-linux powerpc64-linux-musl",
+        "powerpc64-linux": "powerpc-linux powerpc64-linux powerpc64-linux-glibc",
+        "powerpc64-linux-musl": "powerpc-linux powerpc64-linux powerpc64-linux-musl",
+        "powerpc64le-linux": "powerpc-linux powerpc64-linux powerpc64-linux-glibc",
+        "powerpc64le-linux-musl": "powerpc-linux powerpc64-linux powerpc64-linux-musl",
+        "riscv32-linux": "riscv32-linux",
+        "riscv32-linux-musl": "riscv32-linux",
+        "riscv64-linux": "riscv64-linux",
+        "riscv64-linux-musl": "riscv64-linux",
+        "x86_64-cygwin": "bit-64",
+        "x86_64-darwin": "bit-64",
+        "x86_64-darwin9": "bit-64",
+        "x86_64-linux": "bit-64",
+        "x86_64-linux-musl": "x86_64-linux bit-64",
+        "x86_64-linux-muslx32": "bit-32 ix86-common x32-linux",
+        "x86_64-elf": "bit-64",
+        "x86_64-linux-gnu": "bit-64 x86_64-linux",
+        "x86_64-linux-gnux32": "bit-32 ix86-common x32-linux",
+        "x86_64-mingw32": "bit-64",
+    }
+
+    # Add in any extra user supplied data which may come from a BSP layer, removing the
+    # need to always change this class directly
+    extra_siteinfo = (d.getVar("SITEINFO_EXTRA_DATAFUNCS") or "").split()
+    for m in extra_siteinfo:
+        call = m + "(archinfo, osinfo, targetinfo, d)"
+        locs = { "archinfo" : archinfo, "osinfo" : osinfo, "targetinfo" : targetinfo, "d" : d}
+        archinfo, osinfo, targetinfo = bb.utils.better_eval(call, locs)
+
+    target = "%s-%s" % (arch, os)
+
+    sitedata = []
+    if arch in archinfo:
+        sitedata.extend(archinfo[arch].split())
+    if os in osinfo:
+        sitedata.extend(osinfo[os].split())
+    if target in targetinfo:
+        sitedata.extend(targetinfo[target].split())
+    sitedata.append(target)
+    sitedata.append("common")
+
+    bb.debug(1, "SITE files %s" % sitedata)
+    return sitedata
+
+def siteinfo_data(d):
+    return siteinfo_data_for_machine(d.getVar("HOST_ARCH"), d.getVar("HOST_OS"), d)
+
+python () {
+    sitedata = set(siteinfo_data(d))
+    if "endian-little" in sitedata:
+        d.setVar("SITEINFO_ENDIANNESS", "le")
+    elif "endian-big" in sitedata:
+        d.setVar("SITEINFO_ENDIANNESS", "be")
+    else:
+        bb.error("Unable to determine endianness for architecture '%s'" %
+                 d.getVar("HOST_ARCH"))
+        bb.fatal("Please add your architecture to siteinfo.bbclass")
+
+    if "bit-32" in sitedata:
+        d.setVar("SITEINFO_BITS", "32")
+    elif "bit-64" in sitedata:
+        d.setVar("SITEINFO_BITS", "64")
+    else:
+        bb.error("Unable to determine bit size for architecture '%s'" %
+                 d.getVar("HOST_ARCH"))
+        bb.fatal("Please add your architecture to siteinfo.bbclass")
+}
+
+# Layers with siteconfig need to add a replacement path to this variable so the
+# sstate isn't path specific
+SITEINFO_PATHVARS = "COREBASE"
+
+def siteinfo_get_files(d, sysrootcache=False):
+    sitedata = siteinfo_data(d)
+    sitefiles = []
+    searched = []
+    for path in d.getVar("BBPATH").split(":"):
+        for element in sitedata:
+            filename = os.path.join(path, "site", element)
+            if os.path.exists(filename):
+                searched.append(filename + ":True")
+                sitefiles.append(filename)
+            else:
+                searched.append(filename + ":False")
+
+    # Have to parameterise out hardcoded paths such as COREBASE for the main site files
+    for var in d.getVar("SITEINFO_PATHVARS").split():
+        searched2 = []
+        replace = os.path.normpath(d.getVar(var))
+        for s in searched:
+            searched2.append(s.replace(replace, "${" + var + "}"))
+        searched = searched2
+
+    if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross', d) or bb.data.inherits_class('crosssdk', d):
+        # We need sstate sigs for native/cross not to vary upon arch so we can't depend on the site files.
+        # In future we may want to depend upon all site files?
+        # This would show up as breaking sstatetests.SStateTests.test_sstate_32_64_same_hash for example
+        searched = []
+
+    if not sysrootcache:
+        return sitefiles, searched
+
+    # Now check for siteconfig cache files in sysroots
+    path_siteconfig = d.getVar('SITECONFIG_SYSROOTCACHE')
+    if path_siteconfig and os.path.isdir(path_siteconfig):
+        for i in os.listdir(path_siteconfig):
+            if not i.endswith("_config"):
+                continue
+            filename = os.path.join(path_siteconfig, i)
+            sitefiles.append(filename)
+    return sitefiles, searched
+
+#
+# Make some information available via variables
+#
+SITECONFIG_SYSROOTCACHE = "${STAGING_DATADIR}/${TARGET_SYS}_config_site.d"
diff --git a/poky/meta/classes-recipe/syslinux.bbclass b/poky/meta/classes-recipe/syslinux.bbclass
new file mode 100644
index 0000000..be3b898
--- /dev/null
+++ b/poky/meta/classes-recipe/syslinux.bbclass
@@ -0,0 +1,194 @@
+# syslinux.bbclass
+# Copyright (C) 2004-2006, Advanced Micro Devices, Inc.
+# SPDX-License-Identifier: MIT
+
+# Provide syslinux specific functions for building bootable images.
+
+# External variables
+# ${INITRD} - indicates a list of filesystem images to concatenate and use as an initrd (optional)
+# ${ROOTFS} - indicates a filesystem image to include as the root filesystem (optional)
+# ${AUTO_SYSLINUXMENU} - set this to 1 to enable creating an automatic menu
+# ${LABELS} - a list of targets for the automatic config
+# ${APPEND} - an override list of append strings for each label
+# ${SYSLINUX_OPTS} - additional options to add to the syslinux file ';' delimited
+# ${SYSLINUX_SPLASH} - A background for the vga boot menu if using the boot menu
+# ${SYSLINUX_DEFAULT_CONSOLE} - set to "console=ttyX" to change kernel boot default console
+# ${SYSLINUX_SERIAL} - Set an alternate serial port or turn off serial with empty string
+# ${SYSLINUX_SERIAL_TTY} - Set alternate console=tty... kernel boot argument
+# ${SYSLINUX_KERNEL_ARGS} - Add additional kernel arguments
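+#
+# Illustrative settings in an image recipe or local.conf (values are examples
+# only):
+#   LABELS = "boot install"
+#   APPEND = "rootwait quiet"
+#   SYSLINUX_TIMEOUT = "100"
+#   SYSLINUX_SERIAL = ""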
+
+do_bootimg[depends] += "${MLPREFIX}syslinux:do_populate_sysroot \
+                        syslinux-native:do_populate_sysroot"
+
+ISOLINUXDIR ?= "/isolinux"
+SYSLINUXDIR = "/"
+# The kernel has an internal default console, which you can override with
+# a console=...some_tty...
+SYSLINUX_DEFAULT_CONSOLE ?= ""
+SYSLINUX_SERIAL ?= "0 115200"
+SYSLINUX_SERIAL_TTY ?= "console=ttyS0,115200"
+SYSLINUX_PROMPT ?= "0"
+SYSLINUX_TIMEOUT ?= "50"
+AUTO_SYSLINUXMENU ?= "1"
+SYSLINUX_ALLOWOPTIONS ?= "1"
+SYSLINUX_ROOT ?= "${ROOT}"
+SYSLINUX_CFG_VM  ?= "${S}/syslinux_vm.cfg"
+SYSLINUX_CFG_LIVE ?= "${S}/syslinux_live.cfg"
+APPEND ?= ""
+
+# Need UUID utility code.
+inherit fs-uuid
+
+syslinux_populate() {
+	DEST=$1
+	BOOTDIR=$2
+	CFGNAME=$3
+
+	install -d ${DEST}${BOOTDIR}
+
+	# Install the config files
+	install -m 0644 ${SYSLINUX_CFG} ${DEST}${BOOTDIR}/${CFGNAME}
+	if [ "${AUTO_SYSLINUXMENU}" = 1 ] ; then
+		install -m 0644 ${STAGING_DATADIR}/syslinux/vesamenu.c32 ${DEST}${BOOTDIR}/vesamenu.c32
+		install -m 0444 ${STAGING_DATADIR}/syslinux/libcom32.c32 ${DEST}${BOOTDIR}/libcom32.c32
+		install -m 0444 ${STAGING_DATADIR}/syslinux/libutil.c32 ${DEST}${BOOTDIR}/libutil.c32
+		if [ "${SYSLINUX_SPLASH}" != "" ] ; then
+			install -m 0644 ${SYSLINUX_SPLASH} ${DEST}${BOOTDIR}/splash.lss
+		fi
+	fi
+}
+
+syslinux_iso_populate() {
+	iso_dir=$1
+	syslinux_populate $iso_dir ${ISOLINUXDIR} isolinux.cfg
+	install -m 0644 ${STAGING_DATADIR}/syslinux/isolinux.bin $iso_dir${ISOLINUXDIR}
+	install -m 0644 ${STAGING_DATADIR}/syslinux/ldlinux.c32 $iso_dir${ISOLINUXDIR}
+}
+
+syslinux_hddimg_populate() {
+	hdd_dir=$1
+	syslinux_populate $hdd_dir ${SYSLINUXDIR} syslinux.cfg
+	install -m 0444 ${STAGING_DATADIR}/syslinux/ldlinux.sys $hdd_dir${SYSLINUXDIR}/ldlinux.sys
+}
+
+syslinux_hddimg_install() {
+	syslinux ${IMGDEPLOYDIR}/${IMAGE_NAME}.hddimg
+}
+
+python build_syslinux_cfg () {
+    import copy
+    import sys
+
+    workdir = d.getVar('WORKDIR')
+    if not workdir:
+        bb.error("WORKDIR not defined, unable to package")
+        return
+        
+    labels = d.getVar('LABELS')
+    if not labels:
+        bb.debug(1, "LABELS not defined, nothing to do")
+        return
+    
+    if labels == []:
+        bb.debug(1, "No labels, nothing to do")
+        return
+
+    cfile = d.getVar('SYSLINUX_CFG')
+    if not cfile:
+        bb.fatal('Unable to read SYSLINUX_CFG')
+
+    try:
+        cfgfile = open(cfile, 'w')
+    except OSError:
+        bb.fatal('Unable to open %s' % cfile)
+
+    cfgfile.write('# Automatically created by OE\n')
+
+    opts = d.getVar('SYSLINUX_OPTS')
+
+    if opts:
+        for opt in opts.split(';'):
+            cfgfile.write('%s\n' % opt)
+
+    allowoptions = d.getVar('SYSLINUX_ALLOWOPTIONS')
+    if allowoptions:
+        cfgfile.write('ALLOWOPTIONS %s\n' % allowoptions)
+    else:
+        cfgfile.write('ALLOWOPTIONS 1\n')
+
+    syslinux_default_console = d.getVar('SYSLINUX_DEFAULT_CONSOLE')
+    syslinux_serial_tty = d.getVar('SYSLINUX_SERIAL_TTY')
+    syslinux_serial = d.getVar('SYSLINUX_SERIAL')
+    if syslinux_serial:
+        cfgfile.write('SERIAL %s\n' % syslinux_serial)
+
+    menu = (d.getVar('AUTO_SYSLINUXMENU') == "1")
+
+    if menu and syslinux_serial:
+        cfgfile.write('DEFAULT Graphics console %s\n' % (labels.split()[0]))
+    else:
+        cfgfile.write('DEFAULT %s\n' % (labels.split()[0]))
+
+    timeout = d.getVar('SYSLINUX_TIMEOUT')
+
+    if timeout:
+        cfgfile.write('TIMEOUT %s\n' % timeout)
+    else:
+        cfgfile.write('TIMEOUT 50\n')
+
+    prompt = d.getVar('SYSLINUX_PROMPT')
+    if prompt:
+        cfgfile.write('PROMPT %s\n' % prompt)
+    else:
+        cfgfile.write('PROMPT 1\n')
+
+    if menu:
+        cfgfile.write('ui vesamenu.c32\n')
+        cfgfile.write('menu title Select kernel options and boot kernel\n')
+        cfgfile.write('menu tabmsg Press [Tab] to edit, [Return] to select\n')
+        splash = d.getVar('SYSLINUX_SPLASH')
+        if splash:
+            cfgfile.write('menu background splash.lss\n')
+    
+    for label in labels.split():
+        localdata = bb.data.createCopy(d)
+
+        overrides = localdata.getVar('OVERRIDES')
+        if not overrides:
+            bb.fatal('OVERRIDES not defined')
+
+        localdata.setVar('OVERRIDES', label + ':' + overrides)
+    
+        btypes = [ [ "", syslinux_default_console ] ]
+        if menu and syslinux_serial:
+            btypes = [ [ "Graphics console ", syslinux_default_console  ],
+                [ "Serial console ", syslinux_serial_tty ] ]
+
+        root = d.getVar('SYSLINUX_ROOT')
+        if not root:
+            bb.fatal('SYSLINUX_ROOT not defined')
+
+        kernel = localdata.getVar('KERNEL_IMAGETYPE')
+        for btype in btypes:
+            cfgfile.write('LABEL %s%s\nKERNEL /%s\n' % (btype[0], label, kernel))
+
+            exargs = d.getVar('SYSLINUX_KERNEL_ARGS')
+            if exargs:
+                btype[1] += " " + exargs
+
+            append = localdata.getVar('APPEND')
+            initrd = localdata.getVar('INITRD')
+
+            append = root + " " + append
+            cfgfile.write('APPEND ')
+
+            if initrd:
+                cfgfile.write('initrd=/initrd ')
+
+            cfgfile.write('LABEL=%s '% (label))
+            append = replace_rootfs_uuid(d, append)
+            cfgfile.write('%s %s\n' % (append, btype[1]))
+
+    cfgfile.close()
+}
+build_syslinux_cfg[dirs] = "${S}"
diff --git a/poky/meta/classes-recipe/systemd-boot-cfg.bbclass b/poky/meta/classes-recipe/systemd-boot-cfg.bbclass
new file mode 100644
index 0000000..366dd23
--- /dev/null
+++ b/poky/meta/classes-recipe/systemd-boot-cfg.bbclass
@@ -0,0 +1,77 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+SYSTEMD_BOOT_CFG ?= "${S}/loader.conf"
+SYSTEMD_BOOT_ENTRIES ?= ""
+SYSTEMD_BOOT_TIMEOUT ?= "10"
+
+# Uses MACHINE specific KERNEL_IMAGETYPE
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+# Need UUID utility code.
+inherit fs-uuid
+
+python build_efi_cfg() {
+    s = d.getVar("S")
+    labels = d.getVar('LABELS')
+    if not labels:
+        bb.debug(1, "LABELS not defined, nothing to do")
+        return
+
+    if labels == []:
+        bb.debug(1, "No labels, nothing to do")
+        return
+
+    cfile = d.getVar('SYSTEMD_BOOT_CFG')
+    cdir = os.path.dirname(cfile)
+    if not os.path.exists(cdir):
+        os.makedirs(cdir)
+    try:
+         cfgfile = open(cfile, 'w')
+    except OSError:
+        bb.fatal('Unable to open %s' % cfile)
+
+    cfgfile.write('# Automatically created by OE\n')
+    cfgfile.write('default %s\n' % (labels.split()[0]))
+    timeout = d.getVar('SYSTEMD_BOOT_TIMEOUT')
+    if timeout:
+        cfgfile.write('timeout %s\n' % timeout)
+    else:
+        cfgfile.write('timeout 10\n')
+    cfgfile.close()
+
+    for label in labels.split():
+        localdata = d.createCopy()
+
+        entryfile = "%s/%s.conf" % (s, label)
+        if not os.path.exists(s):
+            os.makedirs(s)
+        d.appendVar("SYSTEMD_BOOT_ENTRIES", " " + entryfile)
+        try:
+            entrycfg = open(entryfile, "w")
+        except OSError:
+            bb.fatal('Unable to open %s' % entryfile)
+
+        entrycfg.write('title %s\n' % label)
+
+        kernel = localdata.getVar("KERNEL_IMAGETYPE")
+        entrycfg.write('linux /%s\n' % kernel)
+
+        append = localdata.getVar('APPEND')
+        initrd = localdata.getVar('INITRD')
+
+        if initrd:
+            entrycfg.write('initrd /initrd\n')
+        lb = label
+        if label == "install":
+            lb = "install-efi"
+        entrycfg.write('options LABEL=%s ' % lb)
+        if append:
+            append = replace_rootfs_uuid(d, append)
+            entrycfg.write('%s' % append)
+        entrycfg.write('\n')
+        entrycfg.close()
+}
diff --git a/poky/meta/classes-recipe/systemd-boot.bbclass b/poky/meta/classes-recipe/systemd-boot.bbclass
new file mode 100644
index 0000000..5aa32dd
--- /dev/null
+++ b/poky/meta/classes-recipe/systemd-boot.bbclass
@@ -0,0 +1,35 @@
+# Copyright (C) 2016 Intel Corporation
+#
+# SPDX-License-Identifier: MIT
+
+# systemd-boot.bbclass - systemd-boot is essentially gummiboot merged into systemd.
+#                        The original standalone gummiboot project is dead and no longer
+#                        maintained.
+#
+# Set EFI_PROVIDER = "systemd-boot" to use systemd-boot on your live images instead of grub-efi
+# (images built by image-live.bbclass)
+
+do_bootimg[depends] += "${MLPREFIX}systemd-boot:do_deploy"
+
+require conf/image-uefi.conf
+# Need UUID utility code.
+inherit fs-uuid
+
+efi_populate() {
+        efi_populate_common "$1" systemd
+
+        # systemd-boot requires these paths for its configuration files;
+        # they are not customizable, so there is no point in new variables
+        install -d ${DEST}/loader
+        install -d ${DEST}/loader/entries
+        install -m 0644 ${SYSTEMD_BOOT_CFG} ${DEST}/loader/loader.conf
+        for i in ${SYSTEMD_BOOT_ENTRIES}; do
+            install -m 0644 ${i} ${DEST}/loader/entries
+        done
+}
+
+efi_iso_populate:append() {
+        cp -r $iso_dir/loader ${EFIIMGDIR}
+}
+
+inherit systemd-boot-cfg
diff --git a/poky/meta/classes-recipe/systemd.bbclass b/poky/meta/classes-recipe/systemd.bbclass
new file mode 100644
index 0000000..f6564c2
--- /dev/null
+++ b/poky/meta/classes-recipe/systemd.bbclass
@@ -0,0 +1,239 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# The list of packages that should have systemd packaging scripts added.  For
+# each entry, optionally set SYSTEMD_SERVICE:[package] to list the service
+# files in this package.  If this variable isn't set, [package].service is used.
+SYSTEMD_PACKAGES ?= "${PN}"
+SYSTEMD_PACKAGES:class-native ?= ""
+SYSTEMD_PACKAGES:class-nativesdk ?= ""
+
+# Whether to enable or disable the services on installation.
+SYSTEMD_AUTO_ENABLE ??= "enable"
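+
+# As an illustrative (hypothetical) example, a recipe shipping a "foo" daemon
+# would typically combine these variables like this:
+#
+#   SYSTEMD_PACKAGES = "${PN}"
+#   SYSTEMD_SERVICE:${PN} = "foo.service foo.socket"
+#   SYSTEMD_AUTO_ENABLE:${PN} = "disable"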
+
+# This class will be included in any recipe that supports systemd init scripts,
+# even if systemd is not in DISTRO_FEATURES.  As such don't make any changes
+# directly but check the DISTRO_FEATURES first.
+python __anonymous() {
+    # If the distro features have systemd but not sysvinit, inhibit update-rc.d
+    # from doing any work so that pure-systemd images don't have redundant init
+    # files.
+    if bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d):
+        d.appendVar("DEPENDS", " systemd-systemctl-native")
+        d.appendVar("PACKAGE_WRITE_DEPS", " systemd-systemctl-native")
+        if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
+            d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
+}
+
+systemd_postinst() {
+if systemctl >/dev/null 2>/dev/null; then
+	OPTS=""
+
+	if [ -n "$D" ]; then
+		OPTS="--root=$D"
+	fi
+
+	if [ "${SYSTEMD_AUTO_ENABLE}" = "enable" ]; then
+		for service in ${SYSTEMD_SERVICE_ESCAPED}; do
+			systemctl ${OPTS} enable "$service"
+		done
+	fi
+
+	if [ -z "$D" ]; then
+		systemctl daemon-reload
+		systemctl preset ${SYSTEMD_SERVICE_ESCAPED}
+
+		if [ "${SYSTEMD_AUTO_ENABLE}" = "enable" ]; then
+			systemctl --no-block restart ${SYSTEMD_SERVICE_ESCAPED}
+		fi
+	fi
+fi
+}
+
+systemd_prerm() {
+if systemctl >/dev/null 2>/dev/null; then
+	if [ -z "$D" ]; then
+		systemctl stop ${SYSTEMD_SERVICE_ESCAPED}
+
+		systemctl disable ${SYSTEMD_SERVICE_ESCAPED}
+	fi
+fi
+}
+
+
+systemd_populate_packages[vardeps] += "systemd_prerm systemd_postinst"
+systemd_populate_packages[vardepsexclude] += "OVERRIDES"
+
+
+python systemd_populate_packages() {
+    import re
+    import shlex
+
+    if not bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d):
+        return
+
+    def get_package_var(d, var, pkg):
+        val = (d.getVar('%s:%s' % (var, pkg)) or "").strip()
+        if val == "":
+            val = (d.getVar(var) or "").strip()
+        return val
+
+    # Check if the systemd packages are already included in PACKAGES
+    def systemd_check_package(pkg_systemd):
+        packages = d.getVar('PACKAGES')
+        if not pkg_systemd in packages.split():
+            bb.error('%s does not appear in package list, please add it' % pkg_systemd)
+
+
+    def systemd_generate_package_scripts(pkg):
+        bb.debug(1, 'adding systemd calls to postinst/postrm for %s' % pkg)
+
+        paths_escaped = ' '.join(shlex.quote(s) for s in d.getVar('SYSTEMD_SERVICE:' + pkg).split())
+        d.setVar('SYSTEMD_SERVICE_ESCAPED:' + pkg, paths_escaped)
+
+        # Add pkg to the overrides so that it finds the SYSTEMD_SERVICE:pkg
+        # variable.
+        localdata = d.createCopy()
+        localdata.prependVar("OVERRIDES", pkg + ":")
+
+        postinst = d.getVar('pkg_postinst:%s' % pkg)
+        if not postinst:
+            postinst = '#!/bin/sh\n'
+        postinst += localdata.getVar('systemd_postinst')
+        d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+        prerm = d.getVar('pkg_prerm:%s' % pkg)
+        if not prerm:
+            prerm = '#!/bin/sh\n'
+        prerm += localdata.getVar('systemd_prerm')
+        d.setVar('pkg_prerm:%s' % pkg, prerm)
+
+
+    # Add files to FILES:*-systemd if they exist and have not already been added
+    def systemd_append_file(pkg_systemd, file_append):
+        appended = False
+        if os.path.exists(oe.path.join(d.getVar("D"), file_append)):
+            var_name = "FILES:" + pkg_systemd
+            files = d.getVar(var_name, False) or ""
+            if file_append not in files.split():
+                d.appendVar(var_name, " " + file_append)
+                appended = True
+        return appended
+
+    # Add systemd files to FILES:*-systemd, parse for Also= and follow the references recursively
+    def systemd_add_files_and_parse(pkg_systemd, path, service, keys):
+        # avoid infinite recursion
+        if systemd_append_file(pkg_systemd, oe.path.join(path, service)):
+            fullpath = oe.path.join(d.getVar("D"), path, service)
+            if service.find('.service') != -1:
+                # for *.service add *@.service
+                service_base = service.replace('.service', '')
+                systemd_add_files_and_parse(pkg_systemd, path, service_base + '@.service', keys)
+            if service.find('.socket') != -1:
+                # for *.socket add *.service and *@.service
+                service_base = service.replace('.socket', '')
+                systemd_add_files_and_parse(pkg_systemd, path, service_base + '.service', keys)
+                systemd_add_files_and_parse(pkg_systemd, path, service_base + '@.service', keys)
+            for key in keys.split():
+                # recurse over all dependencies found in keys ('Also', 'Conflicts', ...) and add them to the package files
+                cmd = "grep %s %s | sed 's,%s=,,g' | tr ',' '\\n'" % (key, shlex.quote(fullpath), key)
+                pipe = os.popen(cmd, 'r')
+                line = pipe.readline()
+                while line:
+                    line = line.replace('\n', '')
+                    systemd_add_files_and_parse(pkg_systemd, path, line, keys)
+                    line = pipe.readline()
+                pipe.close()
+
+    # Check service-files and call systemd_add_files_and_parse for each entry
+    def systemd_check_services():
+        searchpaths = [oe.path.join(d.getVar("sysconfdir"), "systemd", "system"),]
+        searchpaths.append(d.getVar("systemd_system_unitdir"))
+        systemd_packages = d.getVar('SYSTEMD_PACKAGES')
+
+        keys = 'Also'
+        # scan for all in SYSTEMD_SERVICE[]
+        for pkg_systemd in systemd_packages.split():
+            for service in get_package_var(d, 'SYSTEMD_SERVICE', pkg_systemd).split():
+                path_found = ''
+
+                # Deal with adding, for example, 'ifplugd@eth0.service' from
+                # 'ifplugd@.service'
+                base = None
+                at = service.find('@')
+                if at != -1:
+                    ext = service.rfind('.')
+                    base = service[:at] + '@' + service[ext:]
+
+                for path in searchpaths:
+                    if os.path.exists(oe.path.join(d.getVar("D"), path, service)):
+                        path_found = path
+                        break
+                    elif base is not None:
+                        if os.path.exists(oe.path.join(d.getVar("D"), path, base)):
+                            path_found = path
+                            break
+
+                if path_found != '':
+                    systemd_add_files_and_parse(pkg_systemd, path_found, service, keys)
+                else:
+                    bb.fatal("Didn't find service unit '{0}', specified in SYSTEMD_SERVICE:{1}. {2}".format(
+                        service, pkg_systemd, "Also looked for service unit '{0}'.".format(base) if base is not None else ""))
+
+    def systemd_create_presets(pkg, action):
+        presetf = oe.path.join(d.getVar("PKGD"), d.getVar("systemd_unitdir"), "system-preset/98-%s.preset" % pkg)
+        bb.utils.mkdirhier(os.path.dirname(presetf))
+        with open(presetf, 'a') as fd:
+            for service in d.getVar('SYSTEMD_SERVICE:%s' % pkg).split():
+                fd.write("%s %s\n" % (action,service))
+        d.appendVar("FILES:%s" % pkg, ' ' + oe.path.join(d.getVar("systemd_unitdir"), "system-preset/98-%s.preset" % pkg))
+
+    # Run all modifications once when creating package
+    if os.path.exists(d.getVar("D")):
+        for pkg in d.getVar('SYSTEMD_PACKAGES').split():
+            systemd_check_package(pkg)
+            if d.getVar('SYSTEMD_SERVICE:' + pkg):
+                systemd_generate_package_scripts(pkg)
+                action = get_package_var(d, 'SYSTEMD_AUTO_ENABLE', pkg)
+                if action in ("enable", "disable"):
+                    systemd_create_presets(pkg, action)
+                elif action not in ("mask", "preset"):
+                    bb.fatal("SYSTEMD_AUTO_ENABLE:%s '%s' is not 'enable', 'disable', 'mask' or 'preset'" % (pkg, action))
+        systemd_check_services()
+}
+
+PACKAGESPLITFUNCS:prepend = "systemd_populate_packages "
+
+python rm_systemd_unitdir (){
+    import shutil
+    if not bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d):
+        systemd_unitdir = oe.path.join(d.getVar("D"), d.getVar('systemd_unitdir'))
+        if os.path.exists(systemd_unitdir):
+            shutil.rmtree(systemd_unitdir)
+        systemd_libdir = os.path.dirname(systemd_unitdir)
+        if (os.path.exists(systemd_libdir) and not os.listdir(systemd_libdir)):
+            os.rmdir(systemd_libdir)
+}
+
+python rm_sysvinit_initddir (){
+    import shutil
+    sysv_initddir = oe.path.join(d.getVar("D"), (d.getVar('INIT_D_DIR') or "/etc/init.d"))
+
+    if bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d) and \
+        not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d) and \
+        os.path.exists(sysv_initddir):
+        systemd_system_unitdir = oe.path.join(d.getVar("D"), d.getVar('systemd_system_unitdir'))
+
+        # If systemd_system_unitdir contains anything, delete sysv_initddir
+        if (os.path.exists(systemd_system_unitdir) and os.listdir(systemd_system_unitdir)):
+            shutil.rmtree(sysv_initddir)
+}
+
+do_install[postfuncs] += "${RMINITDIR} "
+RMINITDIR:class-target = " rm_sysvinit_initddir rm_systemd_unitdir "
+RMINITDIR:class-nativesdk = " rm_sysvinit_initddir rm_systemd_unitdir "
+RMINITDIR = ""
+
diff --git a/poky/meta/classes-recipe/testimage.bbclass b/poky/meta/classes-recipe/testimage.bbclass
new file mode 100644
index 0000000..8d2fab2
--- /dev/null
+++ b/poky/meta/classes-recipe/testimage.bbclass
@@ -0,0 +1,508 @@
+# Copyright (C) 2013 Intel Corporation
+#
+# SPDX-License-Identifier: MIT
+
+inherit metadata_scm
+inherit image-artifact-names
+
+# testimage.bbclass enables testing of qemu images using python unittests.
+# Most of the tests are commands run on target image over ssh.
+# To use it add testimage to global inherit and call your target image with -c testimage
+# You can try it out like this:
+# - first add IMAGE_CLASSES += "testimage" in local.conf
+# - build a qemu core-image-sato
+# - then bitbake core-image-sato -c testimage. That will run a standard suite of tests.
+#
+# The tests can be run automatically each time an image is built if you set
+# TESTIMAGE_AUTO = "1"
+
+TESTIMAGE_AUTO ??= "0"
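+
+# For example, a minimal local.conf fragment (illustrative values, not part of
+# this patch) that runs a reduced suite automatically on every image build:
+#
+#   IMAGE_CLASSES += "testimage"
+#   TESTIMAGE_AUTO = "1"
+#   TEST_SUITES = "ping ssh ptest"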
+
+# You can set (or append to) TEST_SUITES in local.conf to select the tests
+# which you want to run for your target.
+# The test names are the module names in meta/lib/oeqa/runtime/cases.
+# Each name in TEST_SUITES represents a required test for the image (no skipping allowed).
+# Appending "auto" means that it will try to run all tests that are suitable for the image (each test decides that on its own).
+# Note that order in TEST_SUITES is relevant: tests are run in an order such that
+# tests mentioned in @skipUnlessPassed run before the tests that depend on them,
+# but without such dependencies, tests run in the order in which they are listed
+# in TEST_SUITES.
+#
+# A layer can add its own tests in lib/oeqa/runtime, provided it extends BBPATH as normal in its layer.conf.
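+# For reference, the usual layer.conf boilerplate that extends BBPATH looks like
+# (illustrative, standard layer setup):
+#
+#   BBPATH .= ":${LAYERDIR}"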
+
+# TEST_LOG_DIR contains the ssh command log and may contain information about which command is running, its output and return codes, and, for qemu, a boot log up to login.
+# Booting is handled by this class, and it's not a test in itself.
+# TEST_QEMUBOOT_TIMEOUT can be used to set the maximum time in seconds the launch code will wait for the login prompt.
+# TEST_OVERALL_TIMEOUT can be used to set the maximum time in seconds the tests will be allowed to run (defaults to no limit).
+# TEST_QEMUPARAMS can be used to pass extra parameters to qemu, e.g. "-m 1024" for setting the amount of ram to 1 GB.
+# TEST_RUNQEMUPARAMS can be used to pass extra parameters to runqemu, e.g. "gl" to enable OpenGL acceleration.
+# QEMU_USE_KVM can be set to "" to disable the use of kvm (by default it is enabled if target_arch == build_arch or both of them are x86 archs)
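+#
+# For instance, a local.conf fragment (values are illustrative only) tuning the
+# qemu run could look like:
+#
+#   TEST_QEMUBOOT_TIMEOUT = "1500"
+#   TEST_QEMUPARAMS = "-m 1024"
+#   TEST_RUNQEMUPARAMS = "gl"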
+
+# TESTIMAGE_BOOT_PATTERNS can be used to override certain patterns used to communicate with the target when booting,
+# if a pattern is not specifically present on this variable a default will be used when booting the target.
+# TESTIMAGE_BOOT_PATTERNS[<flag>] overrides the pattern used for that specific flag, where flag comes from a list of accepted flags
+# e.g. normally the system boots and waits for a login prompt (login:); after that it sends the command "root\n" to log in as the root user.
+# If we wanted to log in as the hypothetical "webserver" user, for example, we could set the following:
+# TESTIMAGE_BOOT_PATTERNS = "send_login_user search_login_succeeded"
+# TESTIMAGE_BOOT_PATTERNS[send_login_user] = "webserver\n"
+# TESTIMAGE_BOOT_PATTERNS[search_login_succeeded] = "webserver@[a-zA-Z0-9\-]+:~#"
+# The accepted flags are the following: search_reached_prompt, send_login_user, search_login_succeeded, search_cmd_finished.
+# They are prefixed with either search or send, to indicate whether the pattern is searched for in, or sent to, the target terminal.
+
+TEST_LOG_DIR ?= "${WORKDIR}/testimage"
+
+TEST_EXPORT_DIR ?= "${TMPDIR}/testimage/${PN}"
+TEST_INSTALL_TMP_DIR ?= "${WORKDIR}/testimage/install_tmp"
+TEST_NEEDED_PACKAGES_DIR ?= "${WORKDIR}/testimage/packages"
+TEST_EXTRACTED_DIR ?= "${TEST_NEEDED_PACKAGES_DIR}/extracted"
+TEST_PACKAGED_DIR ?= "${TEST_NEEDED_PACKAGES_DIR}/packaged"
+
+BASICTESTSUITE = "\
+    ping date df ssh scp python perl gi ptest parselogs \
+    logrotate connman systemd oe_syslog pam stap ldd xorg \
+    kernelmodule gcc buildcpio buildlzip buildgalculator \
+    dnf rpm opkg apt weston go rust"
+
+DEFAULT_TEST_SUITES = "${BASICTESTSUITE}"
+
+# musl doesn't support systemtap
+DEFAULT_TEST_SUITES:remove:libc-musl = "stap"
+
+# qemumips is quite slow and has reached the timeout limit several times on the YP build cluster,
+# so mitigate this by removing build tests for qemumips machines.
+MIPSREMOVE ??= "buildcpio buildlzip buildgalculator"
+DEFAULT_TEST_SUITES:remove:qemumips = "${MIPSREMOVE}"
+DEFAULT_TEST_SUITES:remove:qemumips64 = "${MIPSREMOVE}"
+
+TEST_SUITES ?= "${DEFAULT_TEST_SUITES}"
+
+QEMU_USE_KVM ?= "1"
+TEST_QEMUBOOT_TIMEOUT ?= "1000"
+TEST_OVERALL_TIMEOUT ?= ""
+TEST_TARGET ?= "qemu"
+TEST_QEMUPARAMS ?= ""
+TEST_RUNQEMUPARAMS ?= ""
+
+TESTIMAGE_BOOT_PATTERNS ?= ""
+
+TESTIMAGEDEPENDS = ""
+TESTIMAGEDEPENDS:append:qemuall = " qemu-native:do_populate_sysroot qemu-helper-native:do_populate_sysroot qemu-helper-native:do_addto_recipe_sysroot"
+TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'cpio-native:do_populate_sysroot', '', d)}"
+TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'dnf-native:do_populate_sysroot', '', d)}"
+TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'createrepo-c-native:do_populate_sysroot', '', d)}"
+TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'ipk', 'opkg-utils-native:do_populate_sysroot package-index:do_package_index', '', d)}"
+TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'deb', 'apt-native:do_populate_sysroot  package-index:do_package_index', '', d)}"
+
+TESTIMAGELOCK = "${TMPDIR}/testimage.lock"
+TESTIMAGELOCK:qemuall = ""
+
+TESTIMAGE_DUMP_DIR ?= "${LOG_DIR}/runtime-hostdump/"
+
+TESTIMAGE_UPDATE_VARS ?= "DL_DIR WORKDIR DEPLOY_DIR"
+
+testimage_dump_target () {
+    top -bn1
+    ps
+    free
+    df
+    # The next command will export the default gateway IP
+    export DEFAULT_GATEWAY=$(ip route | awk '/default/ { print $3}')
+    ping -c3 $DEFAULT_GATEWAY
+    dmesg
+    netstat -an
+    ip address
+    # Next command will dump logs from /var/log/
+    find /var/log/ -type f 2>/dev/null -exec echo "====================" \; -exec echo {} \; -exec echo "====================" \; -exec cat {} \; -exec echo "" \;
+}
+
+testimage_dump_host () {
+    top -bn1
+    iostat -x -z -N -d -p ALL 20 2
+    ps -ef
+    free
+    df
+    memstat
+    dmesg
+    ip -s link
+    netstat -an
+}
+
+testimage_dump_monitor () {
+    query-status
+    query-block
+    dump-guest-memory {"paging":false,"protocol":"file:%s.img"}
+}
+
+python do_testimage() {
+    testimage_main(d)
+}
+
+addtask testimage
+do_testimage[nostamp] = "1"
+do_testimage[network] = "1"
+do_testimage[depends] += "${TESTIMAGEDEPENDS}"
+do_testimage[lockfiles] += "${TESTIMAGELOCK}"
+
+def testimage_sanity(d):
+    if (d.getVar('TEST_TARGET') == 'simpleremote'
+        and (not d.getVar('TEST_TARGET_IP')
+             or not d.getVar('TEST_SERVER_IP'))):
+        bb.fatal('When TEST_TARGET is set to "simpleremote" '
+                 'TEST_TARGET_IP and TEST_SERVER_IP are needed too.')
+
+def get_testimage_configuration(d, test_type, machine):
+    import platform
+    from oeqa.utils.metadata import get_layers
+    configuration = {'TEST_TYPE': test_type,
+                    'MACHINE': machine,
+                    'DISTRO': d.getVar("DISTRO"),
+                    'IMAGE_BASENAME': d.getVar("IMAGE_BASENAME"),
+                    'IMAGE_PKGTYPE': d.getVar("IMAGE_PKGTYPE"),
+                    'STARTTIME': d.getVar("DATETIME"),
+                    'HOST_DISTRO': oe.lsb.distro_identifier().replace(' ', '-'),
+                    'LAYERS': get_layers(d.getVar("BBLAYERS"))}
+    return configuration
+get_testimage_configuration[vardepsexclude] = "DATETIME"
+
+def get_testimage_json_result_dir(d):
+    json_result_dir = os.path.join(d.getVar("LOG_DIR"), 'oeqa')
+    custom_json_result_dir = d.getVar("OEQA_JSON_RESULT_DIR")
+    if custom_json_result_dir:
+        json_result_dir = custom_json_result_dir
+    return json_result_dir
+
+def get_testimage_result_id(configuration):
+    return '%s_%s_%s_%s' % (configuration['TEST_TYPE'], configuration['IMAGE_BASENAME'], configuration['MACHINE'], configuration['STARTTIME'])
+
+def get_testimage_boot_patterns(d):
+    from collections import defaultdict
+    boot_patterns = defaultdict(str)
+    # Only accept certain values
+    accepted_patterns = ['search_reached_prompt', 'send_login_user', 'search_login_succeeded', 'search_cmd_finished']
+    # Not all patterns need to be overridden, e.g. perhaps we only want to change the user
+    boot_patterns_flags = d.getVarFlags('TESTIMAGE_BOOT_PATTERNS') or {}
+    if boot_patterns_flags:
+        patterns_set = [p for p in boot_patterns_flags.items() if p[0] in d.getVar('TESTIMAGE_BOOT_PATTERNS').split()]
+        for flag, flagval in patterns_set:
+                if flag not in accepted_patterns:
+                    bb.fatal('Testimage: The only accepted boot patterns are: search_reached_prompt,send_login_user, \
+                    search_login_succeeded,search_cmd_finished\n Make sure your TESTIMAGE_BOOT_PATTERNS=%s \
+                    contains an accepted flag.' % d.getVar('TESTIMAGE_BOOT_PATTERNS'))
+                    return
+                # We know boot prompt is searched through in binary format, others might be expressions
+                if flag == 'search_reached_prompt':
+                    boot_patterns[flag] = flagval.encode()
+                else:
+                    boot_patterns[flag] = flagval.encode().decode('unicode-escape')
+    return boot_patterns
+
+
+def testimage_main(d):
+    import os
+    import json
+    import signal
+    import logging
+    import shutil
+
+    from bb.utils import export_proxies
+    from oeqa.runtime.context import OERuntimeTestContext
+    from oeqa.runtime.context import OERuntimeTestContextExecutor
+    from oeqa.core.target.qemu import supported_fstypes
+    from oeqa.core.utils.test import getSuiteCases
+    from oeqa.utils import make_logger_bitbake_compatible
+
+    def sigterm_exception(signum, stackframe):
+        """
+        Catch SIGTERM from worker in order to stop qemu.
+        """
+        os.kill(os.getpid(), signal.SIGINT)
+
+    def handle_test_timeout(timeout):
+        bb.warn("Global test timeout reached (%s seconds), stopping the tests." %(timeout))
+        os.kill(os.getpid(), signal.SIGINT)
+
+    testimage_sanity(d)
+
+    if (d.getVar('IMAGE_PKGTYPE') == 'rpm'
+       and ('dnf' in d.getVar('TEST_SUITES') or 'auto' in d.getVar('TEST_SUITES'))):
+        create_rpm_index(d)
+
+    logger = make_logger_bitbake_compatible(logging.getLogger("BitBake"))
+    pn = d.getVar("PN")
+
+    bb.utils.mkdirhier(d.getVar("TEST_LOG_DIR"))
+
+    image_name = ("%s/%s" % (d.getVar('DEPLOY_DIR_IMAGE'),
+                             d.getVar('IMAGE_LINK_NAME')))
+
+    tdname = "%s.testdata.json" % image_name
+    try:
+        with open(tdname, "r") as f:
+            td = json.load(f)
+    except FileNotFoundError as err:
+        bb.fatal('File %s not found (%s).\nHave you built the image with INHERIT += "testimage" in the conf/local.conf?' % (tdname, err))
+
+    # Some variables need to be updated (mostly paths) with the
+    # ones from the current environment because some tests require them.
+    for var in d.getVar('TESTIMAGE_UPDATE_VARS').split():
+        td[var] = d.getVar(var)
+
+    image_manifest = "%s.manifest" % image_name
+    image_packages = OERuntimeTestContextExecutor.readPackagesManifest(image_manifest)
+
+    extract_dir = d.getVar("TEST_EXTRACTED_DIR")
+
+    # Get machine
+    machine = d.getVar("MACHINE")
+
+    # Get rootfs
+    fstypes = d.getVar('IMAGE_FSTYPES').split()
+    if d.getVar("TEST_TARGET") == "qemu":
+        fstypes = [fs for fs in fstypes if fs in supported_fstypes]
+        if not fstypes:
+            bb.fatal('Unsupported image type built. Add a compatible image to '
+                     'IMAGE_FSTYPES. Supported types: %s' %
+                     ', '.join(supported_fstypes))
+    qfstype = fstypes[0]
+    qdeffstype = d.getVar("QB_DEFAULT_FSTYPE")
+    if qdeffstype:
+        qfstype = qdeffstype
+    rootfs = '%s.%s' % (image_name, qfstype)
+
+    # Get tmpdir (not really used, just for compatibility)
+    tmpdir = d.getVar("TMPDIR")
+
+    # Get deploy_dir_image (not really used, just for compatibility)
+    dir_image = d.getVar("DEPLOY_DIR_IMAGE")
+
+    # Get bootlog
+    bootlog = os.path.join(d.getVar("TEST_LOG_DIR"),
+                           'qemu_boot_log.%s' % d.getVar('DATETIME'))
+
+    # Get display
+    display = d.getVar("BB_ORIGENV").getVar("DISPLAY")
+
+    # Get kernel
+    kernel_name = ('%s-%s.bin' % (d.getVar("KERNEL_IMAGETYPE"), machine))
+    kernel = os.path.join(d.getVar("DEPLOY_DIR_IMAGE"), kernel_name)
+
+    # Get boottime
+    boottime = int(d.getVar("TEST_QEMUBOOT_TIMEOUT"))
+
+    # Get use_kvm
+    kvm = oe.types.qemu_use_kvm(d.getVar('QEMU_USE_KVM'), d.getVar('TARGET_ARCH'))
+
+    # Get OVMF
+    ovmf = d.getVar("QEMU_USE_OVMF")
+
+    slirp = False
+    if d.getVar("QEMU_USE_SLIRP"):
+        slirp = True
+
+    # TODO: We use the current implementation of the qemu runner because of
+    # time constraints; the qemu runner really needs a refactor too.
+    target_kwargs = { 'machine'     : machine,
+                      'rootfs'      : rootfs,
+                      'tmpdir'      : tmpdir,
+                      'dir_image'   : dir_image,
+                      'display'     : display,
+                      'kernel'      : kernel,
+                      'boottime'    : boottime,
+                      'bootlog'     : bootlog,
+                      'kvm'         : kvm,
+                      'slirp'       : slirp,
+                      'dump_dir'    : d.getVar("TESTIMAGE_DUMP_DIR"),
+                      'serial_ports': len(d.getVar("SERIAL_CONSOLES").split()),
+                      'ovmf'        : ovmf,
+                      'tmpfsdir'    : d.getVar("RUNQEMU_TMPFS_DIR"),
+                    }
+
+    if d.getVar("TESTIMAGE_BOOT_PATTERNS"):
+        target_kwargs['boot_patterns'] = get_testimage_boot_patterns(d)
+
+    # hardware controlled targets might need further access
+    target_kwargs['powercontrol_cmd'] = d.getVar("TEST_POWERCONTROL_CMD") or None
+    target_kwargs['powercontrol_extra_args'] = d.getVar("TEST_POWERCONTROL_EXTRA_ARGS") or ""
+    target_kwargs['serialcontrol_cmd'] = d.getVar("TEST_SERIALCONTROL_CMD") or None
+    target_kwargs['serialcontrol_extra_args'] = d.getVar("TEST_SERIALCONTROL_EXTRA_ARGS") or ""
+    target_kwargs['testimage_dump_monitor'] = d.getVar("testimage_dump_monitor") or ""
+    target_kwargs['testimage_dump_target'] = d.getVar("testimage_dump_target") or ""
+
+    def export_ssh_agent(d):
+        import os
+
+        variables = ['SSH_AGENT_PID', 'SSH_AUTH_SOCK']
+        for v in variables:
+            if v not in os.environ.keys():
+                val = d.getVar(v)
+                if val is not None:
+                    os.environ[v] = val
+
+    export_ssh_agent(d)
+
+    # runtime tests use the network to download projects to build
+    export_proxies(d)
+
+    # we need the host dumper in test context
+    host_dumper = OERuntimeTestContextExecutor.getHostDumper(
+        d.getVar("testimage_dump_host"),
+        d.getVar("TESTIMAGE_DUMP_DIR"))
+
+    # the robot dance
+    target = OERuntimeTestContextExecutor.getTarget(
+        d.getVar("TEST_TARGET"), logger, d.getVar("TEST_TARGET_IP"),
+        d.getVar("TEST_SERVER_IP"), **target_kwargs)
+
+    # test context
+    tc = OERuntimeTestContext(td, logger, target, host_dumper,
+                              image_packages, extract_dir)
+
+    # Load tests before starting the target
+    test_paths = get_runtime_paths(d)
+    test_modules = d.getVar('TEST_SUITES').split()
+    if not test_modules:
+        bb.fatal('Empty test suite, please verify TEST_SUITES variable')
+
+    tc.loadTests(test_paths, modules=test_modules)
+
+    suitecases = getSuiteCases(tc.suites)
+    if not suitecases:
+        bb.fatal('Empty test suite, please verify TEST_SUITES variable')
+    else:
+        bb.debug(2, 'test suites:\n\t%s' % '\n\t'.join([str(c) for c in suitecases]))
+
+    package_extraction(d, tc.suites)
+
+    results = None
+    complete = False
+    orig_sigterm_handler = signal.signal(signal.SIGTERM, sigterm_exception)
+    try:
+        # We need to check if runqemu ends unexpectedly
+        # or if the worker send us a SIGTERM
+        tc.target.start(params=d.getVar("TEST_QEMUPARAMS"), runqemuparams=d.getVar("TEST_RUNQEMUPARAMS"))
+        import threading
+        try:
+            threading.Timer(int(d.getVar("TEST_OVERALL_TIMEOUT")), handle_test_timeout, (int(d.getVar("TEST_OVERALL_TIMEOUT")),)).start()
+        except ValueError:
+            pass
+        results = tc.runTests()
+        complete = True
+    except (KeyboardInterrupt, BlockingIOError) as err:
+        if isinstance(err, KeyboardInterrupt):
+            bb.error('testimage interrupted, shutting down...')
+        else:
+            bb.error('runqemu failed, shutting down...')
+        if results:
+            results.stop()
+        results = tc.results
+    finally:
+        signal.signal(signal.SIGTERM, orig_sigterm_handler)
+        tc.target.stop()
+
+    # Show results (if we have them)
+    if results:
+        configuration = get_testimage_configuration(d, 'runtime', machine)
+        results.logDetails(get_testimage_json_result_dir(d),
+                        configuration,
+                        get_testimage_result_id(configuration),
+                        dump_streams=d.getVar('TESTREPORT_FULLLOGS'))
+        results.logSummary(pn)
+
+    # Copy additional logs to tmp/log/oeqa so it's easier to find them
+    targetdir = os.path.join(get_testimage_json_result_dir(d), d.getVar("PN"))
+    os.makedirs(targetdir, exist_ok=True)
+    os.symlink(bootlog, os.path.join(targetdir, os.path.basename(bootlog)))
+    os.symlink(d.getVar("BB_LOGFILE"), os.path.join(targetdir, os.path.basename(d.getVar("BB_LOGFILE") + "." + d.getVar('DATETIME'))))
+
+    if not results or not complete:
+        bb.fatal('%s - FAILED - tests were interrupted during execution, check the logs in %s' % (pn, d.getVar("LOG_DIR")), forcelog=True)
+    if not results.wasSuccessful():
+        bb.fatal('%s - FAILED - also check the logs in %s' % (pn, d.getVar("LOG_DIR")), forcelog=True)
+
+def get_runtime_paths(d):
+    """
+    Returns a list of paths where runtime tests must reside.
+
+    Runtime tests are expected in <LAYER_DIR>/lib/oeqa/runtime/cases/
+    """
+    paths = []
+
+    for layer in d.getVar('BBLAYERS').split():
+        path = os.path.join(layer, 'lib/oeqa/runtime/cases')
+        if os.path.isdir(path):
+            paths.append(path)
+    return paths
+
+def create_index(arg):
+    import subprocess
+
+    index_cmd = arg
+    try:
+        bb.note("Executing '%s' ..." % index_cmd)
+        result = subprocess.check_output(index_cmd,
+                                        stderr=subprocess.STDOUT,
+                                        shell=True)
+        result = result.decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        return("Index creation command '%s' failed with return code "
+               '%d:\n%s' % (e.cmd, e.returncode, e.output.decode("utf-8")))
+    if result:
+        bb.note(result)
+    return None
+
+def create_rpm_index(d):
+    import glob
+    # Index RPMs
+    rpm_createrepo = bb.utils.which(os.getenv('PATH'), "createrepo_c")
+    index_cmds = []
+    archs = (d.getVar('ALL_MULTILIB_PACKAGE_ARCHS') or '').replace('-', '_')
+
+    for arch in archs.split():
+        rpm_dir = os.path.join(d.getVar('DEPLOY_DIR_RPM'), arch)
+        idx_path = os.path.join(d.getVar('WORKDIR'), 'oe-testimage-repo', arch)
+
+        if not os.path.isdir(rpm_dir):
+            continue
+
+        lockfilename = os.path.join(d.getVar('DEPLOY_DIR_RPM'), 'rpm.lock')
+        lf = bb.utils.lockfile(lockfilename, False)
+        oe.path.copyhardlinktree(rpm_dir, idx_path)
+        # Full indexes overload a 256MB image so reduce the number of rpms
+        # in the feed by filtering to specific packages needed by the tests.
+        package_list = glob.glob(idx_path + "*/*.rpm")
+
+        for pkg in package_list:
+            if os.path.basename(pkg).startswith(("curl-ptest")):
+                bb.utils.remove(pkg)
+
+            if not os.path.basename(pkg).startswith(("rpm", "run-postinsts", "busybox", "bash", "update-alternatives", "libc6", "curl", "musl")):
+                bb.utils.remove(pkg)
+
+        bb.utils.unlockfile(lf)
+        cmd = '%s --update -q %s' % (rpm_createrepo, idx_path)
+
+        # Create repodata
+        result = create_index(cmd)
+        if result:
+            bb.fatal('%s' % ('\n'.join(result)))
+
+def package_extraction(d, test_suites):
+    from oeqa.utils.package_manager import find_packages_to_extract
+    from oeqa.utils.package_manager import extract_packages
+
+    bb.utils.remove(d.getVar("TEST_NEEDED_PACKAGES_DIR"), recurse=True)
+    packages = find_packages_to_extract(test_suites)
+    if packages:
+        bb.utils.mkdirhier(d.getVar("TEST_INSTALL_TMP_DIR"))
+        bb.utils.mkdirhier(d.getVar("TEST_PACKAGED_DIR"))
+        bb.utils.mkdirhier(d.getVar("TEST_EXTRACTED_DIR"))
+        extract_packages(d, packages)
+
+testimage_main[vardepsexclude] += "BB_ORIGENV DATETIME"
+
+python () {
+    if oe.types.boolean(d.getVar("TESTIMAGE_AUTO") or "False"):
+        bb.build.addtask("testimage", "do_build", "do_image_complete", d)
+}
+
+inherit testsdk
diff --git a/poky/meta/classes-recipe/testsdk.bbclass b/poky/meta/classes-recipe/testsdk.bbclass
new file mode 100644
index 0000000..fd82e6e
--- /dev/null
+++ b/poky/meta/classes-recipe/testsdk.bbclass
@@ -0,0 +1,52 @@
+# Copyright (C) 2013 - 2016 Intel Corporation
+#
+# SPDX-License-Identifier: MIT
+
+# testsdk.bbclass enables testing for SDK and Extensible SDK
+#
+# To run SDK tests, run the commands:
+# $ bitbake <image-name> -c populate_sdk
+# $ bitbake <image-name> -c testsdk
+#
+# To run eSDK tests, run the commands:
+# $ bitbake <image-name> -c populate_sdk_ext
+# $ bitbake <image-name> -c testsdkext
+#
+# where "<image-name>" is an image like core-image-sato.
+
+TESTSDK_CLASS_NAME ?= "oeqa.sdk.testsdk.TestSDK"
+TESTSDKEXT_CLASS_NAME ?= "oeqa.sdkext.testsdk.TestSDKExt"
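+
+# A layer can point these at its own test implementations, e.g. (hypothetical
+# module/class names):
+#
+#   TESTSDK_CLASS_NAME = "oeqa.sdk.testmysdk.TestMySDK"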
+
+def import_and_run(name, d):
+    import importlib
+
+    class_name = d.getVar(name)
+    if class_name:
+        module, cls = class_name.rsplit('.', 1)
+        m = importlib.import_module(module)
+        c = getattr(m, cls)()
+        c.run(d)
+    else:
+        bb.warn('No tests were run because %s did not define a class' % name)
+
+import_and_run[vardepsexclude] = "DATETIME BB_ORIGENV"
+
+python do_testsdk() {
+    import_and_run('TESTSDK_CLASS_NAME', d)
+}
+addtask testsdk
+do_testsdk[nostamp] = "1"
+do_testsdk[network] = "1"
+
+python do_testsdkext() {
+    import_and_run('TESTSDKEXT_CLASS_NAME', d)
+}
+addtask testsdkext
+do_testsdkext[nostamp] = "1"
+do_testsdkext[network] = "1"
+
+python () {
+    if oe.types.boolean(d.getVar("TESTIMAGE_AUTO") or "False"):
+        bb.build.addtask("testsdk", None, "do_populate_sdk", d)
+        bb.build.addtask("testsdkext", None, "do_populate_sdk_ext", d)
+}
diff --git a/poky/meta/classes-recipe/texinfo.bbclass b/poky/meta/classes-recipe/texinfo.bbclass
new file mode 100644
index 0000000..380247fa
--- /dev/null
+++ b/poky/meta/classes-recipe/texinfo.bbclass
@@ -0,0 +1,24 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This class is inherited by recipes whose upstream packages invoke the
+# texinfo utilities at build-time. Native and cross recipes are made to use the
+# dummy scripts provided by texinfo-dummy-native, for improved performance.
+# Target architecture recipes use the genuine Texinfo utilities. By default,
+# they use the Texinfo utilities on the host system. If you want to use the
+# Texinfo recipe, you can remove texinfo-native from ASSUME_PROVIDED and
+# makeinfo from SANITY_REQUIRED_UTILITIES.
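+#
+# A local.conf fragment for that (illustrative, assumes the texinfo recipe is
+# available in your layers) might be:
+#
+#   ASSUME_PROVIDED:remove = "texinfo-native"
+#   SANITY_REQUIRED_UTILITIES:remove = "makeinfo"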
+
+TEXDEP = "${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', 'texinfo-replacement-native', 'texinfo-dummy-native', d)}"
+TEXDEP:class-native = "texinfo-dummy-native"
+TEXDEP:class-cross = "texinfo-dummy-native"
+TEXDEP:class-crosssdk = "texinfo-dummy-native"
+TEXDEP:class-cross-canadian = "texinfo-dummy-native"
+DEPENDS:append = " ${TEXDEP}"
+
+# libtool-cross doesn't inherit cross
+TEXDEP:pn-libtool-cross = "texinfo-dummy-native"
+
diff --git a/poky/meta/classes-recipe/toolchain-scripts-base.bbclass b/poky/meta/classes-recipe/toolchain-scripts-base.bbclass
new file mode 100644
index 0000000..d24a986
--- /dev/null
+++ b/poky/meta/classes-recipe/toolchain-scripts-base.bbclass
@@ -0,0 +1,17 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This function creates a version information file
+toolchain_create_sdk_version () {
+	local versionfile=$1
+	rm -f $versionfile
+	touch $versionfile
+	echo 'Distro: ${DISTRO}' >> $versionfile
+	echo 'Distro Version: ${DISTRO_VERSION}' >> $versionfile
+	echo 'Metadata Revision: ${METADATA_REVISION}' >> $versionfile
+	echo 'Timestamp: ${DATETIME}' >> $versionfile
+}
+toolchain_create_sdk_version[vardepsexclude] = "DATETIME"
diff --git a/poky/meta/classes-recipe/toolchain-scripts.bbclass b/poky/meta/classes-recipe/toolchain-scripts.bbclass
new file mode 100644
index 0000000..3cc823f
--- /dev/null
+++ b/poky/meta/classes-recipe/toolchain-scripts.bbclass
@@ -0,0 +1,236 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+inherit toolchain-scripts-base siteinfo kernel-arch
+
+# We want to be able to change the value of MULTIMACH_TARGET_SYS, because it
+# doesn't always match our expectations... but we default to the stock value
+REAL_MULTIMACH_TARGET_SYS ?= "${MULTIMACH_TARGET_SYS}"
+TARGET_CC_ARCH:append:libc-musl = " -mmusl"
+
+# default debug prefix map isn't valid in the SDK
+DEBUG_PREFIX_MAP = ""
+
+EXPORT_SDK_PS1 = "${@ 'export PS1=\\"%s\\"' % d.getVar('SDK_PS1') if d.getVar('SDK_PS1') else ''}"
+
+# This function creates an environment-setup-script for use in a deployable SDK
+toolchain_create_sdk_env_script () {
+	# Create environment setup script.  Remember that $SDKTARGETSYSROOT should
+	# only be expanded on the target at runtime.
+	base_sbindir=${10:-${base_sbindir_nativesdk}}
+	base_bindir=${9:-${base_bindir_nativesdk}}
+	sbindir=${8:-${sbindir_nativesdk}}
+	sdkpathnative=${7:-${SDKPATHNATIVE}}
+	prefix=${6:-${prefix_nativesdk}}
+	bindir=${5:-${bindir_nativesdk}}
+	libdir=${4:-${libdir}}
+	sysroot=${3:-${SDKTARGETSYSROOT}}
+	multimach_target_sys=${2:-${REAL_MULTIMACH_TARGET_SYS}}
+	script=${1:-${SDK_OUTPUT}/${SDKPATH}/environment-setup-$multimach_target_sys}
+	rm -f $script
+	touch $script
+
+	echo '# Check for LD_LIBRARY_PATH being set, which can break SDK and generally is a bad practice' >> $script
+	echo '# http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html#AEN80' >> $script
+	echo '# http://xahlee.info/UnixResource_dir/_/ldpath.html' >> $script
+	echo '# Only disable this check if you absolutely know what you are doing!' >> $script
+	echo 'if [ ! -z "$LD_LIBRARY_PATH" ]; then' >> $script
+	echo "    echo \"Your environment is misconfigured, you probably need to 'unset LD_LIBRARY_PATH'\"" >> $script
+	echo "    echo \"but please check why this was set in the first place and that it's safe to unset.\"" >> $script
+	echo '    echo "The SDK will not operate correctly in most cases when LD_LIBRARY_PATH is set."' >> $script
+	echo '    echo "For more references see:"' >> $script
+	echo '    echo "  http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html#AEN80"' >> $script
+	echo '    echo "  http://xahlee.info/UnixResource_dir/_/ldpath.html"' >> $script
+	echo '    return 1' >> $script
+	echo 'fi' >> $script
+
+	echo "${EXPORT_SDK_PS1}" >> $script
+	echo 'export SDKTARGETSYSROOT='"$sysroot" >> $script
+	EXTRAPATH=""
+	for i in ${CANADIANEXTRAOS}; do
+		EXTRAPATH="$EXTRAPATH:$sdkpathnative$bindir/${TARGET_ARCH}${TARGET_VENDOR}-$i"
+	done
+	echo "export PATH=$sdkpathnative$bindir:$sdkpathnative$sbindir:$sdkpathnative$base_bindir:$sdkpathnative$base_sbindir:$sdkpathnative$bindir/../${HOST_SYS}/bin:$sdkpathnative$bindir/${TARGET_SYS}"$EXTRAPATH':$PATH' >> $script
+	echo 'export PKG_CONFIG_SYSROOT_DIR=$SDKTARGETSYSROOT' >> $script
+	echo 'export PKG_CONFIG_PATH=$SDKTARGETSYSROOT'"$libdir"'/pkgconfig:$SDKTARGETSYSROOT'"$prefix"'/share/pkgconfig' >> $script
+	echo 'export CONFIG_SITE=${SDKPATH}/site-config-'"${multimach_target_sys}" >> $script
+	echo "export OECORE_NATIVE_SYSROOT=\"$sdkpathnative\"" >> $script
+	echo 'export OECORE_TARGET_SYSROOT="$SDKTARGETSYSROOT"' >> $script
+	echo "export OECORE_ACLOCAL_OPTS=\"-I $sdkpathnative/usr/share/aclocal\"" >> $script
+	echo 'export OECORE_BASELIB="${baselib}"' >> $script
+	echo 'export OECORE_TARGET_ARCH="${TARGET_ARCH}"' >>$script
+	echo 'export OECORE_TARGET_OS="${TARGET_OS}"' >>$script
+
+	echo 'unset command_not_found_handle' >> $script
+
+	toolchain_shared_env_script
+}
+
+# This function creates an environment-setup-script in B which enables
+# an OE-core IDE to integrate with the build tree
+# Caller must ensure CONFIG_SITE is setup
+toolchain_create_tree_env_script () {
+	script=${B}/environment-setup-${REAL_MULTIMACH_TARGET_SYS}
+	rm -f $script
+	touch $script
+	echo 'standalone_sysroot_target="${STAGING_DIR}/${MACHINE}"' >> $script
+	echo 'standalone_sysroot_native="${STAGING_DIR}/${BUILD_ARCH}"' >> $script
+	echo 'orig=`pwd`; cd ${COREBASE}; . ./oe-init-build-env ${TOPDIR}; cd $orig' >> $script
+	echo 'export PATH=$standalone_sysroot_native/${bindir_native}:$standalone_sysroot_native/${bindir_native}/${TARGET_SYS}:$PATH' >> $script
+	echo 'export PKG_CONFIG_SYSROOT_DIR=$standalone_sysroot_target' >> $script
+	echo 'export PKG_CONFIG_PATH=$standalone_sysroot_target'"$libdir"'/pkgconfig:$standalone_sysroot_target'"$prefix"'/share/pkgconfig' >> $script
+	echo 'export CONFIG_SITE="${CONFIG_SITE}"' >> $script
+	echo 'export SDKTARGETSYSROOT=$standalone_sysroot_target' >> $script
+	echo 'export OECORE_NATIVE_SYSROOT=$standalone_sysroot_native' >> $script
+	echo 'export OECORE_TARGET_SYSROOT=$standalone_sysroot_target' >> $script
+	echo 'export OECORE_ACLOCAL_OPTS="-I $standalone_sysroot_native/usr/share/aclocal"' >> $script
+	echo 'export OECORE_BASELIB="${baselib}"' >> $script
+	echo 'export OECORE_TARGET_ARCH="${TARGET_ARCH}"' >>$script
+	echo 'export OECORE_TARGET_OS="${TARGET_OS}"' >>$script
+
+	toolchain_shared_env_script
+
+	cat >> $script <<EOF
+
+if [ -d "\$OECORE_NATIVE_SYSROOT/${datadir}/post-relocate-setup.d/" ]; then
+	for s in \$OECORE_NATIVE_SYSROOT/${datadir}/post-relocate-setup.d/*; do
+		if [ ! -x \$s ]; then
+			continue
+		fi
+		\$s "\$1"
+		status=\$?
+		if [ \$status != 0 ]; then
+			echo "post-relocate command \"\$s \$1\" failed with status \$status" >&2
+			exit \$status
+		fi
+	done
+fi
+EOF
+}
+
+toolchain_shared_env_script () {
+	echo 'export CC="${TARGET_PREFIX}gcc ${TARGET_CC_ARCH} --sysroot=$SDKTARGETSYSROOT"' >> $script
+	echo 'export CXX="${TARGET_PREFIX}g++ ${TARGET_CC_ARCH} --sysroot=$SDKTARGETSYSROOT"' >> $script
+	echo 'export CPP="${TARGET_PREFIX}gcc -E ${TARGET_CC_ARCH} --sysroot=$SDKTARGETSYSROOT"' >> $script
+	echo 'export AS="${TARGET_PREFIX}as ${TARGET_AS_ARCH}"' >> $script
+	echo 'export LD="${TARGET_PREFIX}ld ${TARGET_LD_ARCH} --sysroot=$SDKTARGETSYSROOT"' >> $script
+	echo 'export GDB=${TARGET_PREFIX}gdb' >> $script
+	echo 'export STRIP=${TARGET_PREFIX}strip' >> $script
+	echo 'export RANLIB=${TARGET_PREFIX}ranlib' >> $script
+	echo 'export OBJCOPY=${TARGET_PREFIX}objcopy' >> $script
+	echo 'export OBJDUMP=${TARGET_PREFIX}objdump' >> $script
+	echo 'export READELF=${TARGET_PREFIX}readelf' >> $script
+	echo 'export AR=${TARGET_PREFIX}ar' >> $script
+	echo 'export NM=${TARGET_PREFIX}nm' >> $script
+	echo 'export M4=m4' >> $script
+	echo 'export TARGET_PREFIX=${TARGET_PREFIX}' >> $script
+	echo 'export CONFIGURE_FLAGS="--target=${TARGET_SYS} --host=${TARGET_SYS} --build=${SDK_ARCH}-linux --with-libtool-sysroot=$SDKTARGETSYSROOT"' >> $script
+	echo 'export CFLAGS="${TARGET_CFLAGS}"' >> $script
+	echo 'export CXXFLAGS="${TARGET_CXXFLAGS}"' >> $script
+	echo 'export LDFLAGS="${TARGET_LDFLAGS}"' >> $script
+	echo 'export CPPFLAGS="${TARGET_CPPFLAGS}"' >> $script
+	echo 'export KCFLAGS="--sysroot=$SDKTARGETSYSROOT"' >> $script
+	echo 'export OECORE_DISTRO_VERSION="${DISTRO_VERSION}"' >> $script
+	echo 'export OECORE_SDK_VERSION="${SDK_VERSION}"' >> $script
+	echo 'export ARCH=${ARCH}' >> $script
+	echo 'export CROSS_COMPILE=${TARGET_PREFIX}' >> $script
+	echo 'export OECORE_TUNE_CCARGS="${TUNE_CCARGS}"' >> $script
+
+    cat >> $script <<EOF
+
+# Append environment subscripts
+if [ -d "\$OECORE_TARGET_SYSROOT/environment-setup.d" ]; then
+    for envfile in \$OECORE_TARGET_SYSROOT/environment-setup.d/*.sh; do
+	    . \$envfile
+    done
+fi
+if [ -d "\$OECORE_NATIVE_SYSROOT/environment-setup.d" ]; then
+    for envfile in \$OECORE_NATIVE_SYSROOT/environment-setup.d/*.sh; do
+	    . \$envfile
+    done
+fi
+EOF
+}
+
+toolchain_create_post_relocate_script() {
+	relocate_script=$1
+	env_dir=$2
+	rm -f $relocate_script
+	touch $relocate_script
+
+	cat >> $relocate_script <<EOF
+if [ -d "${SDKPATHNATIVE}/post-relocate-setup.d/" ]; then
+	# Source top-level SDK env scripts in case they are needed for the relocate
+	# scripts.
+	for env_setup_script in ${env_dir}/environment-setup-*; do
+		. \$env_setup_script
+		status=\$?
+		if [ \$status != 0 ]; then
+			echo "\$0: Failed to source \$env_setup_script with status \$status"
+			exit \$status
+		fi
+
+		for s in ${SDKPATHNATIVE}/post-relocate-setup.d/*; do
+			if [ ! -x \$s ]; then
+				continue
+			fi
+			\$s "\$1"
+			status=\$?
+			if [ \$status != 0 ]; then
+				echo "post-relocate command \"\$s \$1\" failed with status \$status" >&2
+				exit \$status
+			fi
+		done
+	done
+	rm -rf "${SDKPATHNATIVE}/post-relocate-setup.d"
+fi
+EOF
+}
+
+# We get the cached site config at runtime
+TOOLCHAIN_CONFIGSITE_NOCACHE = "${@' '.join(siteinfo_get_files(d)[0])}"
+TOOLCHAIN_CONFIGSITE_SYSROOTCACHE = "${STAGING_DIR}/${MLPREFIX}${MACHINE}/${target_datadir}/${TARGET_SYS}_config_site.d"
+TOOLCHAIN_NEED_CONFIGSITE_CACHE ??= "virtual/${MLPREFIX}libc ncurses"
+DEPENDS += "${TOOLCHAIN_NEED_CONFIGSITE_CACHE}"
+
+# This function creates a site config file
+toolchain_create_sdk_siteconfig () {
+	local siteconfig=$1
+
+	rm -f $siteconfig
+	touch $siteconfig
+
+	for sitefile in ${TOOLCHAIN_CONFIGSITE_NOCACHE} ; do
+		cat $sitefile >> $siteconfig
+	done
+
+	#get cached site config
+	for sitefile in ${TOOLCHAIN_NEED_CONFIGSITE_CACHE}; do
+		# Resolve virtual/* names to the real recipe name using sysroot-providers info
+		case $sitefile in virtual/*)
+			sitefile=`echo $sitefile | tr / _`
+			sitefile=`cat ${STAGING_DIR_TARGET}/sysroot-providers/$sitefile`
+		esac
+
+		if [ -r ${TOOLCHAIN_CONFIGSITE_SYSROOTCACHE}/${sitefile}_config ]; then
+			cat ${TOOLCHAIN_CONFIGSITE_SYSROOTCACHE}/${sitefile}_config >> $siteconfig
+		fi
+	done
+}
+# The immediate expansion above can result in unwanted path dependencies here
+toolchain_create_sdk_siteconfig[vardepsexclude] = "TOOLCHAIN_CONFIGSITE_SYSROOTCACHE"
+
+python __anonymous () {
+    import oe.classextend
+    deps = ""
+    for dep in (d.getVar('TOOLCHAIN_NEED_CONFIGSITE_CACHE') or "").split():
+        deps += " %s:do_populate_sysroot" % dep
+        for variant in (d.getVar('MULTILIB_VARIANTS') or "").split():
+            clsextend = oe.classextend.ClassExtender(variant, d)
+            newdep = clsextend.extend_name(dep)
+            deps += " %s:do_populate_sysroot" % newdep
+    d.appendVarFlag('do_configure', 'depends', deps)
+}
diff --git a/poky/meta/classes-recipe/uboot-config.bbclass b/poky/meta/classes-recipe/uboot-config.bbclass
new file mode 100644
index 0000000..7ab006a
--- /dev/null
+++ b/poky/meta/classes-recipe/uboot-config.bbclass
@@ -0,0 +1,137 @@
+# Handle U-Boot config for a machine
+#
+# The format to specify it, in the machine, is:
+#
+# UBOOT_CONFIG ??= <default>
+# UBOOT_CONFIG[foo] = "config,images,binary"
+#
+# or
+#
+# UBOOT_MACHINE = "config"
+#
+# Copyright 2013, 2014 (C) O.S. Systems Software LTDA.
+#
+# SPDX-License-Identifier: MIT
+
+
+def removesuffix(s, suffix):
+    if suffix and s.endswith(suffix):
+        return s[:-len(suffix)]
+    return s
+
+# Some versions of u-boot use .bin and others use .img.  By default use .bin
+# but enable individual recipes to change this value.
+UBOOT_SUFFIX ??= "bin"
+UBOOT_BINARY ?= "u-boot.${UBOOT_SUFFIX}"
+UBOOT_BINARYNAME ?= "${@os.path.splitext(d.getVar("UBOOT_BINARY"))[0]}"
+UBOOT_IMAGE ?= "${UBOOT_BINARYNAME}-${MACHINE}-${PV}-${PR}.${UBOOT_SUFFIX}"
+UBOOT_SYMLINK ?= "${UBOOT_BINARYNAME}-${MACHINE}.${UBOOT_SUFFIX}"
+UBOOT_MAKE_TARGET ?= "all"
+
+# Output the generated ELF. Some platforms can use the ELF file and load it
+# directly (JTAG booting, QEMU); additionally, the ELF can be used for debugging
+# purposes.
+UBOOT_ELF ?= ""
+UBOOT_ELF_SUFFIX ?= "elf"
+UBOOT_ELF_IMAGE ?= "u-boot-${MACHINE}-${PV}-${PR}.${UBOOT_ELF_SUFFIX}"
+UBOOT_ELF_BINARY ?= "u-boot.${UBOOT_ELF_SUFFIX}"
+UBOOT_ELF_SYMLINK ?= "u-boot-${MACHINE}.${UBOOT_ELF_SUFFIX}"
+
+# Some versions of u-boot build an SPL (Second Program Loader) image that
+# should be packaged along with the u-boot binary as well as placed in the
+# deploy directory.  For those versions they can set the following variables
+# to allow packaging the SPL.
+SPL_SUFFIX ?= ""
+SPL_BINARY ?= ""
+SPL_DELIMITER  ?= "${@'.' if d.getVar("SPL_SUFFIX") else ''}"
+SPL_BINARYFILE ?= "${@os.path.basename(d.getVar("SPL_BINARY"))}"
+SPL_BINARYNAME ?= "${@removesuffix(d.getVar("SPL_BINARYFILE"), "." + d.getVar("SPL_SUFFIX"))}"
+SPL_IMAGE ?= "${SPL_BINARYNAME}-${MACHINE}-${PV}-${PR}${SPL_DELIMITER}${SPL_SUFFIX}"
+SPL_SYMLINK ?= "${SPL_BINARYNAME}-${MACHINE}${SPL_DELIMITER}${SPL_SUFFIX}"
+
+# Additional environment variables or a script can be installed alongside
+# u-boot to be used automatically on boot.  This file, typically 'uEnv.txt'
+# or 'boot.scr', should be packaged along with u-boot as well as placed in the
+# deploy directory.  Machine configurations needing one of these files should
+# include it in the SRC_URI and set the UBOOT_ENV parameter.
+UBOOT_ENV_SUFFIX ?= "txt"
+UBOOT_ENV ?= ""
+UBOOT_ENV_SRC_SUFFIX ?= "cmd"
+UBOOT_ENV_SRC ?= "${UBOOT_ENV}.${UBOOT_ENV_SRC_SUFFIX}"
+UBOOT_ENV_BINARY ?= "${UBOOT_ENV}.${UBOOT_ENV_SUFFIX}"
+UBOOT_ENV_IMAGE ?= "${UBOOT_ENV}-${MACHINE}-${PV}-${PR}.${UBOOT_ENV_SUFFIX}"
+UBOOT_ENV_SYMLINK ?= "${UBOOT_ENV}-${MACHINE}.${UBOOT_ENV_SUFFIX}"
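+
+# As an illustrative example (file name is hypothetical), a BSP could provide a
+# plain-text environment file via a u-boot bbappend:
+#
+#   UBOOT_ENV = "uEnv"
+#   SRC_URI += "file://uEnv.txt"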
+
+# Default name of u-boot initial env, but enable individual recipes to change
+# this value.
+UBOOT_INITIAL_ENV ?= "${PN}-initial-env"
+
+# U-Boot EXTLINUX variables. U-Boot searches for /boot/extlinux/extlinux.conf
+# to find the EXTLINUX conf file.
+UBOOT_EXTLINUX_INSTALL_DIR ?= "/boot/extlinux"
+UBOOT_EXTLINUX_CONF_NAME ?= "extlinux.conf"
+UBOOT_EXTLINUX_SYMLINK ?= "${UBOOT_EXTLINUX_CONF_NAME}-${MACHINE}-${PR}"
+
+# Options for the device tree compiler passed to mkimage '-D' feature:
+UBOOT_MKIMAGE_DTCOPTS ??= ""
+SPL_MKIMAGE_DTCOPTS ??= ""
+
+# mkimage command
+UBOOT_MKIMAGE ?= "uboot-mkimage"
+UBOOT_MKIMAGE_SIGN ?= "${UBOOT_MKIMAGE}"
+
+# Arguments passed to mkimage for signing
+UBOOT_MKIMAGE_SIGN_ARGS ?= ""
+SPL_MKIMAGE_SIGN_ARGS ?= ""
+
+# Options to deploy the u-boot device tree
+UBOOT_DTB ?= ""
+UBOOT_DTB_BINARY ??= ""
+
+python () {
+    ubootmachine = d.getVar("UBOOT_MACHINE")
+    ubootconfigflags = d.getVarFlags('UBOOT_CONFIG')
+    ubootbinary = d.getVar('UBOOT_BINARY')
+    ubootbinaries = d.getVar('UBOOT_BINARIES')
+    # The "doc" varflag is special, we don't want to see it here
+    ubootconfigflags.pop('doc', None)
+    ubootconfig = (d.getVar('UBOOT_CONFIG') or "").split()
+
+    if not ubootmachine and not ubootconfig:
+        PN = d.getVar("PN")
+        FILE = os.path.basename(d.getVar("FILE"))
+        bb.debug(1, "To build %s, see %s for instructions on \
+                 setting up your machine config" % (PN, FILE))
+        raise bb.parse.SkipRecipe("Either UBOOT_MACHINE or UBOOT_CONFIG must be set in the %s machine configuration." % d.getVar("MACHINE"))
+
+    if ubootmachine and ubootconfig:
+        raise bb.parse.SkipRecipe("You cannot use UBOOT_MACHINE and UBOOT_CONFIG at the same time.")
+
+    if ubootconfigflags and ubootbinaries:
+        raise bb.parse.SkipRecipe("You cannot use UBOOT_BINARIES as it is internal to uboot_config.bbclass.")
+
+    if len(ubootconfig) > 0:
+        for config in ubootconfig:
+            found = False
+            for f, v in ubootconfigflags.items():
+                if config == f: 
+                    found = True
+                    items = v.split(',')
+                    if items[0] and len(items) > 3:
+                        raise bb.parse.SkipRecipe('Only config,images,binary can be specified!')
+                    d.appendVar('UBOOT_MACHINE', ' ' + items[0])
+                    # IMAGE_FSTYPES appending
+                    if len(items) > 1 and items[1]:
+                        bb.debug(1, "Appending '%s' to IMAGE_FSTYPES." % items[1])
+                        d.appendVar('IMAGE_FSTYPES', ' ' + items[1])
+                    if len(items) > 2 and items[2]:
+                        bb.debug(1, "Appending '%s' to UBOOT_BINARIES." % items[2])
+                        d.appendVar('UBOOT_BINARIES', ' ' + items[2])
+                    else:
+                        bb.debug(1, "Appending '%s' to UBOOT_BINARIES." % ubootbinary)
+                        d.appendVar('UBOOT_BINARIES', ' ' + ubootbinary)
+                    break
+
+            if not found:
+                raise bb.parse.SkipRecipe("The selected UBOOT_CONFIG key %s has no match in %s." % (ubootconfig, ubootconfigflags.keys()))
+}
diff --git a/poky/meta/classes-recipe/uboot-extlinux-config.bbclass b/poky/meta/classes-recipe/uboot-extlinux-config.bbclass
new file mode 100644
index 0000000..86a7d30
--- /dev/null
+++ b/poky/meta/classes-recipe/uboot-extlinux-config.bbclass
@@ -0,0 +1,158 @@
+# uboot-extlinux-config.bbclass
+#
+# This class allows the generation of extlinux.conf for U-Boot use. The
+# U-Boot support for it exists to allow OpenEmbedded-based products to use
+# the Generic Distribution Configuration specification.
+#
+# External variables:
+#
+# UBOOT_EXTLINUX_CONSOLE           - Set to "console=ttyX" to change kernel boot
+#                                    default console.
+# UBOOT_EXTLINUX_LABELS            - A list of targets for the automatic config.
+# UBOOT_EXTLINUX_KERNEL_ARGS       - Add additional kernel arguments.
+# UBOOT_EXTLINUX_KERNEL_IMAGE      - Kernel image name.
+# UBOOT_EXTLINUX_FDTDIR            - Device tree directory.
+# UBOOT_EXTLINUX_FDT               - Device tree file.
+# UBOOT_EXTLINUX_INITRD            - Indicates a list of filesystem images to
+#                                    concatenate and use as an initrd (optional).
+# UBOOT_EXTLINUX_MENU_DESCRIPTION  - Name to use as description.
+# UBOOT_EXTLINUX_ROOT              - Root kernel cmdline.
+# UBOOT_EXTLINUX_TIMEOUT           - Timeout before DEFAULT selection is made.
+#                                    Measured in 1/10 of a second.
+# UBOOT_EXTLINUX_DEFAULT_LABEL     - Target to be selected by default after
+#                                    the timeout period
+#
+# If there's only one label, the system will boot automatically and no menu will
+# be created. If you want to use more than one label, e.g. linux and alternate,
+# use overrides to set the menu description, console and other variables.
+#
+# Ex:
+#
+# UBOOT_EXTLINUX_LABELS ??= "default fallback"
+#
+# UBOOT_EXTLINUX_DEFAULT_LABEL ??= "Linux Default"
+# UBOOT_EXTLINUX_TIMEOUT ??= "30"
+#
+# UBOOT_EXTLINUX_KERNEL_IMAGE_default ??= "../zImage"
+# UBOOT_EXTLINUX_MENU_DESCRIPTION_default ??= "Linux Default"
+#
+# UBOOT_EXTLINUX_KERNEL_IMAGE_fallback ??= "../zImage-fallback"
+# UBOOT_EXTLINUX_MENU_DESCRIPTION_fallback ??= "Linux Fallback"
+#
+# Results:
+#
+# menu title Select the boot mode
+# TIMEOUT 30
+# DEFAULT Linux Default
+# LABEL Linux Default
+#   KERNEL ../zImage
+#   FDTDIR ../
+#   APPEND root=/dev/mmcblk2p2 rootwait rw console=${console}
+# LABEL Linux Fallback
+#   KERNEL ../zImage-fallback
+#   FDTDIR ../
+#   APPEND root=/dev/mmcblk2p2 rootwait rw console=${console}
+#
+# Copyright (C) 2016, O.S. Systems Software LTDA.  All Rights Reserved
+# SPDX-License-Identifier: MIT
+#
+# The kernel has an internal default console, which you can override with
+# a console=...some_tty...
+UBOOT_EXTLINUX_CONSOLE ??= "console=${console},${baudrate}"
+UBOOT_EXTLINUX_LABELS ??= "linux"
+UBOOT_EXTLINUX_FDT ??= ""
+UBOOT_EXTLINUX_FDTDIR ??= "../"
+UBOOT_EXTLINUX_KERNEL_IMAGE ??= "../${KERNEL_IMAGETYPE}"
+UBOOT_EXTLINUX_KERNEL_ARGS ??= "rootwait rw"
+UBOOT_EXTLINUX_MENU_DESCRIPTION:linux ??= "${DISTRO_NAME}"
+
+UBOOT_EXTLINUX_CONFIG = "${B}/extlinux.conf"
+
+python do_create_extlinux_config() {
+    if d.getVar("UBOOT_EXTLINUX") != "1":
+        return
+
+    if not d.getVar('WORKDIR'):
+        bb.error("WORKDIR not defined, unable to package")
+
+    labels = d.getVar('UBOOT_EXTLINUX_LABELS')
+    if not labels:
+        bb.fatal("UBOOT_EXTLINUX_LABELS not defined, nothing to do")
+
+    if not labels.strip():
+        bb.fatal("No labels, nothing to do")
+
+    cfile = d.getVar('UBOOT_EXTLINUX_CONFIG')
+    if not cfile:
+        bb.fatal('Unable to read UBOOT_EXTLINUX_CONFIG')
+
+    localdata = bb.data.createCopy(d)
+
+    try:
+        with open(cfile, 'w') as cfgfile:
+            cfgfile.write('# Generic Distro Configuration file generated by OpenEmbedded\n')
+
+            if len(labels.split()) > 1:
+                cfgfile.write('menu title Select the boot mode\n')
+
+            timeout =  localdata.getVar('UBOOT_EXTLINUX_TIMEOUT')
+            if timeout:
+                cfgfile.write('TIMEOUT %s\n' % (timeout))
+
+            if len(labels.split()) > 1:
+                default = localdata.getVar('UBOOT_EXTLINUX_DEFAULT_LABEL')
+                if default:
+                    cfgfile.write('DEFAULT %s\n' % (default))
+
+            # Need to deconflict the labels with existing overrides
+            label_overrides = labels.split()
+            default_overrides = localdata.getVar('OVERRIDES').split(':')
+            # We're keeping all the existing overrides that aren't used as a label
+            # an override for that label will be added back in while we're processing that label
+            keep_overrides = list(filter(lambda x: x not in label_overrides, default_overrides))
+
+            for label in labels.split():
+
+                localdata.setVar('OVERRIDES', ':'.join(keep_overrides + [label]))
+
+                extlinux_console = localdata.getVar('UBOOT_EXTLINUX_CONSOLE')
+
+                menu_description = localdata.getVar('UBOOT_EXTLINUX_MENU_DESCRIPTION')
+                if not menu_description:
+                    menu_description = label
+
+                root = localdata.getVar('UBOOT_EXTLINUX_ROOT')
+                if not root:
+                    bb.fatal('UBOOT_EXTLINUX_ROOT not defined')
+
+                kernel_image = localdata.getVar('UBOOT_EXTLINUX_KERNEL_IMAGE')
+                fdtdir = localdata.getVar('UBOOT_EXTLINUX_FDTDIR')
+
+                fdt = localdata.getVar('UBOOT_EXTLINUX_FDT')
+
+                if fdt:
+                    cfgfile.write('LABEL %s\n\tKERNEL %s\n\tFDT %s\n' %
+                                 (menu_description, kernel_image, fdt))
+                elif fdtdir:
+                    cfgfile.write('LABEL %s\n\tKERNEL %s\n\tFDTDIR %s\n' %
+                                 (menu_description, kernel_image, fdtdir))
+                else:
+                    cfgfile.write('LABEL %s\n\tKERNEL %s\n' % (menu_description, kernel_image))
+
+                kernel_args = localdata.getVar('UBOOT_EXTLINUX_KERNEL_ARGS')
+
+                initrd = localdata.getVar('UBOOT_EXTLINUX_INITRD')
+                if initrd:
+                    cfgfile.write('\tINITRD %s\n'% initrd)
+
+                kernel_args = root + " " + kernel_args
+                cfgfile.write('\tAPPEND %s %s\n' % (kernel_args, extlinux_console))
+
+    except OSError:
+        bb.fatal('Unable to open %s' % (cfile))
+}
+UBOOT_EXTLINUX_VARS = "CONSOLE MENU_DESCRIPTION ROOT KERNEL_IMAGE FDTDIR FDT KERNEL_ARGS INITRD"
+do_create_extlinux_config[vardeps] += "${@' '.join(['UBOOT_EXTLINUX_%s_%s' % (v, l) for v in d.getVar('UBOOT_EXTLINUX_VARS').split() for l in d.getVar('UBOOT_EXTLINUX_LABELS').split()])}"
+do_create_extlinux_config[vardepsexclude] += "OVERRIDES"
+
+addtask create_extlinux_config before do_install do_deploy after do_compile
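
A minimal usage sketch for this class (the root device and console values here
are illustrative placeholders, not defaults shipped by the class):

    UBOOT_EXTLINUX = "1"
    UBOOT_EXTLINUX_ROOT = "root=/dev/mmcblk0p2"
    UBOOT_EXTLINUX_LABELS = "linux"
    UBOOT_EXTLINUX_CONSOLE = "console=ttyS0,115200"

With a single label, no menu is generated and the resulting extlinux.conf simply
boots that entry.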
diff --git a/poky/meta/classes-recipe/uboot-sign.bbclass b/poky/meta/classes-recipe/uboot-sign.bbclass
new file mode 100644
index 0000000..debbf23
--- /dev/null
+++ b/poky/meta/classes-recipe/uboot-sign.bbclass
@@ -0,0 +1,505 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This file is part of U-Boot verified boot support and is intended to be
+# inherited from u-boot recipe and from kernel-fitimage.bbclass.
+#
+# The signature procedure requires the user to generate an RSA key and
+# certificate in a directory and to define the following variable:
+#
+#   UBOOT_SIGN_KEYDIR = "/keys/directory"
+#   UBOOT_SIGN_KEYNAME = "dev" # key name in keydir (e.g. "dev.crt", "dev.key")
+#   UBOOT_MKIMAGE_DTCOPTS = "-I dts -O dtb -p 2000"
+#   UBOOT_SIGN_ENABLE = "1"
+#
+# As verified boot depends on fitImage generation, the following is also required:
+#
+#   KERNEL_CLASSES ?= " kernel-fitimage "
+#   KERNEL_IMAGETYPE ?= "fitImage"
+#
+# The signature support is limited to the use of CONFIG_OF_SEPARATE in U-Boot.
+#
+# The task sequence is set as below, using DEPLOY_IMAGE_DIR as a common place to
+# handle the device tree blob:
+#
+# * u-boot:do_install:append
+#   Install UBOOT_DTB_BINARY to datadir, so that the kernel can use it for
+#   signing, and the kernel will deploy UBOOT_DTB_BINARY after signing it.
+#
+# * virtual/kernel:do_assemble_fitimage
+#   Sign the image
+#
+# * u-boot:do_deploy[postfuncs]
+#   Deploy files like UBOOT_DTB_IMAGE, UBOOT_DTB_SYMLINK and others.
+#
+# For more details on signature process, please refer to U-Boot documentation.
+
+# We need some variables from u-boot-config
+inherit uboot-config
+
+# Enable use of a U-Boot fitImage
+UBOOT_FITIMAGE_ENABLE ?= "0"
+
+# Signature activation - these require their respective fitImages
+UBOOT_SIGN_ENABLE ?= "0"
+SPL_SIGN_ENABLE ?= "0"
+
+# Default value for deployment filenames.
+UBOOT_DTB_IMAGE ?= "u-boot-${MACHINE}-${PV}-${PR}.dtb"
+UBOOT_DTB_BINARY ?= "u-boot.dtb"
+UBOOT_DTB_SYMLINK ?= "u-boot-${MACHINE}.dtb"
+UBOOT_NODTB_IMAGE ?= "u-boot-nodtb-${MACHINE}-${PV}-${PR}.bin"
+UBOOT_NODTB_BINARY ?= "u-boot-nodtb.bin"
+UBOOT_NODTB_SYMLINK ?= "u-boot-nodtb-${MACHINE}.bin"
+UBOOT_ITS_IMAGE ?= "u-boot-its-${MACHINE}-${PV}-${PR}"
+UBOOT_ITS ?= "u-boot.its"
+UBOOT_ITS_SYMLINK ?= "u-boot-its-${MACHINE}"
+UBOOT_FITIMAGE_IMAGE ?= "u-boot-fitImage-${MACHINE}-${PV}-${PR}"
+UBOOT_FITIMAGE_BINARY ?= "u-boot-fitImage"
+UBOOT_FITIMAGE_SYMLINK ?= "u-boot-fitImage-${MACHINE}"
+SPL_DIR ?= "spl"
+SPL_DTB_IMAGE ?= "u-boot-spl-${MACHINE}-${PV}-${PR}.dtb"
+SPL_DTB_BINARY ?= "u-boot-spl.dtb"
+SPL_DTB_SYMLINK ?= "u-boot-spl-${MACHINE}.dtb"
+SPL_NODTB_IMAGE ?= "u-boot-spl-nodtb-${MACHINE}-${PV}-${PR}.bin"
+SPL_NODTB_BINARY ?= "u-boot-spl-nodtb.bin"
+SPL_NODTB_SYMLINK ?= "u-boot-spl-nodtb-${MACHINE}.bin"
+
+# U-Boot fitImage description
+UBOOT_FIT_DESC ?= "U-Boot fitImage for ${DISTRO_NAME}/${PV}/${MACHINE}"
+
+# Kernel / U-Boot fitImage Hash Algo
+FIT_HASH_ALG ?= "sha256"
+UBOOT_FIT_HASH_ALG ?= "sha256"
+
+# Kernel / U-Boot fitImage Signature Algo
+FIT_SIGN_ALG ?= "rsa2048"
+UBOOT_FIT_SIGN_ALG ?= "rsa2048"
+
+# Kernel / U-Boot fitImage Padding Algo
+FIT_PAD_ALG ?= "pkcs-1.5"
+
+# Generate keys for signing Kernel / U-Boot fitImage
+FIT_GENERATE_KEYS ?= "0"
+UBOOT_FIT_GENERATE_KEYS ?= "0"
+
+# Size of private keys in number of bits
+FIT_SIGN_NUMBITS ?= "2048"
+UBOOT_FIT_SIGN_NUMBITS ?= "2048"
+
+# args to openssl genrsa (Default is just the public exponent)
+FIT_KEY_GENRSA_ARGS ?= "-F4"
+UBOOT_FIT_KEY_GENRSA_ARGS ?= "-F4"
+
+# args to openssl req (Default is -batch for non interactive mode and
+# -new for new certificate)
+FIT_KEY_REQ_ARGS ?= "-batch -new"
+UBOOT_FIT_KEY_REQ_ARGS ?= "-batch -new"
+
+# Standard format for public key certificate
+FIT_KEY_SIGN_PKCS ?= "-x509"
+UBOOT_FIT_KEY_SIGN_PKCS ?= "-x509"
+
+# Functions in this bbclass can apply to either U-Boot or the kernel,
+# depending on the scenario
+UBOOT_PN = "${@d.getVar('PREFERRED_PROVIDER_u-boot') or 'u-boot'}"
+KERNEL_PN = "${@d.getVar('PREFERRED_PROVIDER_virtual/kernel')}"
+
+# We need u-boot-tools-native if we're creating a U-Boot fitImage
+python() {
+    if d.getVar('UBOOT_FITIMAGE_ENABLE') == '1':
+        depends = d.getVar("DEPENDS")
+        depends = "%s u-boot-tools-native dtc-native" % depends
+        d.setVar("DEPENDS", depends)
+}
+
+concat_dtb_helper() {
+	if [ -e "${UBOOT_DTB_BINARY}" ]; then
+		ln -sf ${UBOOT_DTB_IMAGE} ${DEPLOYDIR}/${UBOOT_DTB_BINARY}
+		ln -sf ${UBOOT_DTB_IMAGE} ${DEPLOYDIR}/${UBOOT_DTB_SYMLINK}
+	fi
+
+	if [ -f "${UBOOT_NODTB_BINARY}" ]; then
+		install ${UBOOT_NODTB_BINARY} ${DEPLOYDIR}/${UBOOT_NODTB_IMAGE}
+		ln -sf ${UBOOT_NODTB_IMAGE} ${DEPLOYDIR}/${UBOOT_NODTB_SYMLINK}
+		ln -sf ${UBOOT_NODTB_IMAGE} ${DEPLOYDIR}/${UBOOT_NODTB_BINARY}
+	fi
+
+	# If we're not using a signed u-boot fit, concatenate SPL w/o DTB & U-Boot DTB
+	# with public key (otherwise it will be deployed by the equivalent
+	# concat_spl_dtb_helper function - cf. kernel-fitimage.bbclass for more details)
+	if [ "${SPL_SIGN_ENABLE}" != "1" ] ; then
+		deployed_uboot_dtb_binary='${DEPLOY_DIR_IMAGE}/${UBOOT_DTB_IMAGE}'
+		if [ "x${UBOOT_SUFFIX}" = "ximg" -o "x${UBOOT_SUFFIX}" = "xrom" ] && \
+			[ -e "$deployed_uboot_dtb_binary" ]; then
+			oe_runmake EXT_DTB=$deployed_uboot_dtb_binary
+			install ${UBOOT_BINARY} ${DEPLOYDIR}/${UBOOT_IMAGE}
+		elif [ -e "${DEPLOYDIR}/${UBOOT_NODTB_IMAGE}" -a -e "$deployed_uboot_dtb_binary" ]; then
+			cd ${DEPLOYDIR}
+			cat ${UBOOT_NODTB_IMAGE} $deployed_uboot_dtb_binary | tee ${B}/${CONFIG_B_PATH}/${UBOOT_BINARY} > ${UBOOT_IMAGE}
+
+			if [ -n "${UBOOT_CONFIG}" ]
+			then
+				i=0
+				j=0
+				for config in ${UBOOT_MACHINE}; do
+					i=$(expr $i + 1);
+					for type in ${UBOOT_CONFIG}; do
+						j=$(expr $j + 1);
+						if [ $j -eq $i ]
+						then
+							cp ${UBOOT_IMAGE} ${B}/${CONFIG_B_PATH}/u-boot-$type.${UBOOT_SUFFIX}
+						fi
+					done
+				done
+			fi
+		else
+			bbwarn "Failure while adding public key to u-boot binary. Verified boot won't be available."
+		fi
+	fi
+}
+
+concat_spl_dtb_helper() {
+
+	# We only deploy symlinks to the u-boot-spl.dtb, as the KERNEL_PN will
+	# be responsible for deploying the real file
+	if [ -e "${SPL_DIR}/${SPL_DTB_BINARY}" ] ; then
+		ln -sf ${SPL_DTB_IMAGE} ${DEPLOYDIR}/${SPL_DTB_SYMLINK}
+		ln -sf ${SPL_DTB_IMAGE} ${DEPLOYDIR}/${SPL_DTB_BINARY}
+	fi
+
+	# Concatenate the SPL nodtb binary and u-boot.dtb
+	deployed_spl_dtb_binary='${DEPLOY_DIR_IMAGE}/${SPL_DTB_IMAGE}'
+	if [ -e "${DEPLOYDIR}/${SPL_NODTB_IMAGE}" -a -e "$deployed_spl_dtb_binary" ] ; then
+		cd ${DEPLOYDIR}
+		cat ${SPL_NODTB_IMAGE} $deployed_spl_dtb_binary | tee ${B}/${CONFIG_B_PATH}/${SPL_BINARY} > ${SPL_IMAGE}
+	else
+		bbwarn "Failure while adding public key to spl binary. Verified U-Boot boot won't be available."
+	fi
+}
+
+
+concat_dtb() {
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${PN}" = "${UBOOT_PN}" -a -n "${UBOOT_DTB_BINARY}" ]; then
+		mkdir -p ${DEPLOYDIR}
+		if [ -n "${UBOOT_CONFIG}" ]; then
+			for config in ${UBOOT_MACHINE}; do
+				CONFIG_B_PATH="$config"
+				cd ${B}/$config
+				concat_dtb_helper
+			done
+		else
+			CONFIG_B_PATH=""
+			cd ${B}
+			concat_dtb_helper
+		fi
+	fi
+}
+
+concat_spl_dtb() {
+	if [ "${SPL_SIGN_ENABLE}" = "1" -a "${PN}" = "${UBOOT_PN}" -a -n "${SPL_DTB_BINARY}" ]; then
+		mkdir -p ${DEPLOYDIR}
+		if [ -n "${UBOOT_CONFIG}" ]; then
+			for config in ${UBOOT_MACHINE}; do
+				CONFIG_B_PATH="$config"
+				cd ${B}/$config
+				concat_spl_dtb_helper
+			done
+		else
+			CONFIG_B_PATH=""
+			cd ${B}
+			concat_spl_dtb_helper
+		fi
+	fi
+}
+
+
+# Install UBOOT_DTB_BINARY to datadir, so that the kernel can use it for
+# signing, and the kernel will deploy UBOOT_DTB_BINARY after signing it.
+install_helper() {
+	if [ -f "${UBOOT_DTB_BINARY}" ]; then
+		# UBOOT_DTB_BINARY is a symlink to UBOOT_DTB_IMAGE, so we
+		# need both of them.
+		install -Dm 0644 ${UBOOT_DTB_BINARY} ${D}${datadir}/${UBOOT_DTB_IMAGE}
+		ln -sf ${UBOOT_DTB_IMAGE} ${D}${datadir}/${UBOOT_DTB_BINARY}
+	else
+		bbwarn "${UBOOT_DTB_BINARY} not found"
+	fi
+}
+
+# Install SPL dtb and u-boot nodtb to datadir,
+install_spl_helper() {
+	if [ -f "${SPL_DIR}/${SPL_DTB_BINARY}" ]; then
+		install -Dm 0644 ${SPL_DIR}/${SPL_DTB_BINARY} ${D}${datadir}/${SPL_DTB_IMAGE}
+		ln -sf ${SPL_DTB_IMAGE} ${D}${datadir}/${SPL_DTB_BINARY}
+	else
+		bbwarn "${SPL_DTB_BINARY} not found"
+	fi
+	if [ -f "${UBOOT_NODTB_BINARY}" ] ; then
+		install -Dm 0644 ${UBOOT_NODTB_BINARY} ${D}${datadir}/${UBOOT_NODTB_IMAGE}
+		ln -sf ${UBOOT_NODTB_IMAGE} ${D}${datadir}/${UBOOT_NODTB_BINARY}
+	else
+		bbwarn "${UBOOT_NODTB_BINARY} not found"
+	fi
+
+	# We need to install a 'stub' u-boot-fitimage + its to datadir,
+	# so that the KERNEL_PN can use the correct filename when
+	# assembling and deploying them
+	touch ${D}/${datadir}/${UBOOT_FITIMAGE_IMAGE}
+	touch ${D}/${datadir}/${UBOOT_ITS_IMAGE}
+}
+
+do_install:append() {
+	if [ "${PN}" = "${UBOOT_PN}" ]; then
+		if [ -n "${UBOOT_CONFIG}" ]; then
+			for config in ${UBOOT_MACHINE}; do
+				cd ${B}/$config
+				if [ "${UBOOT_SIGN_ENABLE}" = "1" -o "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
+					[ -n "${UBOOT_DTB_BINARY}" ]; then
+					install_helper
+				fi
+				if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ]; then
+					install_spl_helper
+				fi
+			done
+		else
+			cd ${B}
+			if [ "${UBOOT_SIGN_ENABLE}" = "1" -o "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
+				[ -n "${UBOOT_DTB_BINARY}" ]; then
+				install_helper
+			fi
+			if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ]; then
+				install_spl_helper
+			fi
+		fi
+	fi
+}
+
+do_uboot_generate_rsa_keys() {
+	if [ "${SPL_SIGN_ENABLE}" = "0" ] && [ "${UBOOT_FIT_GENERATE_KEYS}" = "1" ]; then
+		bbwarn "UBOOT_FIT_GENERATE_KEYS is set to 1 even though SPL_SIGN_ENABLE is set to 0. The keys will not be generated as they won't be used."
+	fi
+
+	if [ "${SPL_SIGN_ENABLE}" = "1" ] && [ "${UBOOT_FIT_GENERATE_KEYS}" = "1" ]; then
+
+		# Generate keys only if they don't already exist
+		if [ ! -f "${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".key ] || \
+			[ ! -f "${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".crt ]; then
+
+			# make directory if it does not already exist
+			mkdir -p "${SPL_SIGN_KEYDIR}"
+
+			echo "Generating RSA private key for signing U-Boot fitImage"
+			openssl genrsa ${UBOOT_FIT_KEY_GENRSA_ARGS} -out \
+				"${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".key \
+				"${UBOOT_FIT_SIGN_NUMBITS}"
+
+			echo "Generating certificate for signing U-Boot fitImage"
+			openssl req ${FIT_KEY_REQ_ARGS} "${UBOOT_FIT_KEY_SIGN_PKCS}" \
+				-key "${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".key \
+				-out "${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".crt
+		fi
+	fi
+
+}
+
+addtask uboot_generate_rsa_keys before do_uboot_assemble_fitimage after do_compile
+
+# Create an ITS file for the U-Boot FIT, for use when
+# we want to sign it so that the SPL can verify it
+uboot_fitimage_assemble() {
+	uboot_its="$1"
+	uboot_nodtb_bin="$2"
+	uboot_dtb="$3"
+	uboot_bin="$4"
+	spl_dtb="$5"
+	uboot_csum="${UBOOT_FIT_HASH_ALG}"
+	uboot_sign_algo="${UBOOT_FIT_SIGN_ALG}"
+	uboot_sign_keyname="${SPL_SIGN_KEYNAME}"
+
+	rm -f $uboot_its $uboot_bin
+
+	# First we create the ITS script
+	cat << EOF >> $uboot_its
+/dts-v1/;
+
+/ {
+    description = "${UBOOT_FIT_DESC}";
+    #address-cells = <1>;
+
+    images {
+        uboot {
+            description = "U-Boot image";
+            data = /incbin/("$uboot_nodtb_bin");
+            type = "standalone";
+            os = "u-boot";
+            arch = "${UBOOT_ARCH}";
+            compression = "none";
+            load = <${UBOOT_LOADADDRESS}>;
+            entry = <${UBOOT_ENTRYPOINT}>;
+EOF
+
+	if [ "${SPL_SIGN_ENABLE}" = "1" ] ; then
+		cat << EOF >> $uboot_its
+            signature {
+                algo = "$uboot_csum,$uboot_sign_algo";
+                key-name-hint = "$uboot_sign_keyname";
+            };
+EOF
+	fi
+
+	cat << EOF >> $uboot_its
+        };
+        fdt {
+            description = "U-Boot FDT";
+            data = /incbin/("$uboot_dtb");
+            type = "flat_dt";
+            arch = "${UBOOT_ARCH}";
+            compression = "none";
+EOF
+
+	if [ "${SPL_SIGN_ENABLE}" = "1" ] ; then
+		cat << EOF >> $uboot_its
+            signature {
+                algo = "$uboot_csum,$uboot_sign_algo";
+                key-name-hint = "$uboot_sign_keyname";
+            };
+EOF
+	fi
+
+	cat << EOF >> $uboot_its
+        };
+    };
+
+    configurations {
+        default = "conf";
+        conf {
+            description = "Boot with signed U-Boot FIT";
+            loadables = "uboot";
+            fdt = "fdt";
+        };
+    };
+};
+EOF
+
+	#
+	# Assemble the U-boot FIT image
+	#
+	${UBOOT_MKIMAGE} \
+		${@'-D "${SPL_MKIMAGE_DTCOPTS}"' if len('${SPL_MKIMAGE_DTCOPTS}') else ''} \
+		-f $uboot_its \
+		$uboot_bin
+
+	if [ "${SPL_SIGN_ENABLE}" = "1" ] ; then
+		#
+		# Sign the U-boot FIT image and add public key to SPL dtb
+		#
+		${UBOOT_MKIMAGE_SIGN} \
+			${@'-D "${SPL_MKIMAGE_DTCOPTS}"' if len('${SPL_MKIMAGE_DTCOPTS}') else ''} \
+			-F -k "${SPL_SIGN_KEYDIR}" \
+			-K "$spl_dtb" \
+			-r $uboot_bin \
+			${SPL_MKIMAGE_SIGN_ARGS}
+	fi
+
+}
+
+do_uboot_assemble_fitimage() {
+	# This function runs in KERNEL_PN context. The reason for that is that we need to
+	# support the scenario where UBOOT_SIGN_ENABLE is placing the Kernel fitImage's
+	# pubkey in the u-boot.dtb file, so that we can use it when building the U-Boot
+	# fitImage itself.
+	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
+	   [ -n "${SPL_DTB_BINARY}" -a "${PN}" = "${KERNEL_PN}" ] ; then
+		if [ "${UBOOT_SIGN_ENABLE}" != "1" ]; then
+			# If we're not signing the Kernel fitImage, that means
+			# we need to copy the u-boot.dtb from staging ourselves
+			cp -P ${STAGING_DATADIR}/u-boot*.dtb ${B}
+		fi
+		# As we are in the kernel context, we need to copy u-boot-spl.dtb from staging first.
+		# Unfortunately, we need to glob on top of ${SPL_DTB_BINARY} since _IMAGE and _SYMLINK
+		# will contain U-boot's PV
+		# Similarly, we need to get the filename for the 'stub' u-boot-fitimage + its in
+		# staging so that we can use it for creating the image with the correct filename
+		# in the KERNEL_PN context.
+		# As for the u-boot.dtb (with fitimage's pubkey), it should come from the dependent
+		# do_assemble_fitimage task
+		cp -P ${STAGING_DATADIR}/u-boot-spl*.dtb ${B}
+		cp -P ${STAGING_DATADIR}/u-boot-nodtb*.bin ${B}
+		rm -rf ${B}/u-boot-fitImage-* ${B}/u-boot-its-*
+		kernel_uboot_fitimage_name=`basename ${STAGING_DATADIR}/u-boot-fitImage-*`
+		kernel_uboot_its_name=`basename ${STAGING_DATADIR}/u-boot-its-*`
+		cd ${B}
+		uboot_fitimage_assemble $kernel_uboot_its_name ${UBOOT_NODTB_BINARY} \
+					${UBOOT_DTB_BINARY} $kernel_uboot_fitimage_name \
+					${SPL_DTB_BINARY}
+	fi
+}
+
+addtask uboot_assemble_fitimage before do_deploy after do_compile
+
+do_deploy:prepend:pn-${UBOOT_PN}() {
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a -n "${UBOOT_DTB_BINARY}" ] ; then
+		concat_dtb
+	fi
+
+	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" ] ; then
+		# Deploy the u-boot-nodtb binary and symlinks...
+		if [ -f "${SPL_DIR}/${SPL_NODTB_BINARY}" ] ; then
+			echo "Copying u-boot-nodtb binary..."
+			install -m 0644 ${SPL_DIR}/${SPL_NODTB_BINARY} ${DEPLOYDIR}/${SPL_NODTB_IMAGE}
+			ln -sf ${SPL_NODTB_IMAGE} ${DEPLOYDIR}/${SPL_NODTB_SYMLINK}
+			ln -sf ${SPL_NODTB_IMAGE} ${DEPLOYDIR}/${SPL_NODTB_BINARY}
+		fi
+
+
+		# We only deploy the symlinks to the uboot-fitImage and uboot-its
+		# images, as the KERNEL_PN will take care of deploying the real file
+		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_FITIMAGE_BINARY}
+		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_FITIMAGE_SYMLINK}
+		ln -sf ${UBOOT_ITS_IMAGE} ${DEPLOYDIR}/${UBOOT_ITS}
+		ln -sf ${UBOOT_ITS_IMAGE} ${DEPLOYDIR}/${UBOOT_ITS_SYMLINK}
+	fi
+
+	if [ "${SPL_SIGN_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ] ; then
+		concat_spl_dtb
+	fi
+
+
+}
+
+do_deploy:append:pn-${UBOOT_PN}() {
+	# If we're creating a u-boot fitImage, point the u-boot.bin
+	# symlink at it since it might get used by image recipes
+	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" ] ; then
+		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_BINARY}
+		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_SYMLINK}
+	fi
+}
+
+python () {
+    if (   (d.getVar('UBOOT_SIGN_ENABLE') == '1'
+            or d.getVar('UBOOT_FITIMAGE_ENABLE') == '1')
+        and d.getVar('PN') == d.getVar('UBOOT_PN')
+        and d.getVar('UBOOT_DTB_BINARY')):
+
+        # Make "bitbake u-boot -cdeploy" deploys the signed u-boot.dtb
+        # and/or the U-Boot fitImage
+        d.appendVarFlag('do_deploy', 'depends', ' %s:do_deploy' % d.getVar('KERNEL_PN'))
+
+    if d.getVar('UBOOT_FITIMAGE_ENABLE') == '1' and d.getVar('PN') == d.getVar('KERNEL_PN'):
+        # As the U-Boot fitImage is created by the KERNEL_PN, we need
+        # to make sure that the u-boot-spl.dtb and u-boot-spl-nodtb.bin
+        # files are in the staging dir for its use
+        d.appendVarFlag('do_uboot_assemble_fitimage', 'depends', ' %s:do_populate_sysroot' % d.getVar('UBOOT_PN'))
+
+        # If the Kernel fitImage is being signed, we need to
+        # create the U-Boot fitImage after it
+        if d.getVar('UBOOT_SIGN_ENABLE') == '1':
+            d.appendVarFlag('do_uboot_assemble_fitimage', 'depends', ' %s:do_assemble_fitimage' % d.getVar('KERNEL_PN'))
+            d.appendVarFlag('do_uboot_assemble_fitimage', 'depends', ' %s:do_assemble_fitimage_initramfs' % d.getVar('KERNEL_PN'))
+
+}
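
For the SPL-verified U-Boot fitImage path handled by this class, a configuration
sketch could look like the following (the key directory and key name are
placeholders; the variable names are the ones consumed above):

    UBOOT_FITIMAGE_ENABLE = "1"
    SPL_SIGN_ENABLE = "1"
    SPL_SIGN_KEYDIR = "/path/to/keys"
    SPL_SIGN_KEYNAME = "spl-dev"
    UBOOT_FIT_GENERATE_KEYS = "1"

With UBOOT_FIT_GENERATE_KEYS set, do_uboot_generate_rsa_keys creates the key and
certificate in SPL_SIGN_KEYDIR if they do not already exist.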
diff --git a/poky/meta/classes-recipe/update-alternatives.bbclass b/poky/meta/classes-recipe/update-alternatives.bbclass
new file mode 100644
index 0000000..970d9bc
--- /dev/null
+++ b/poky/meta/classes-recipe/update-alternatives.bbclass
@@ -0,0 +1,333 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This class is used to help the alternatives system which is useful when
+# multiple sources provide same command. You can use update-alternatives
+# command directly in your recipe, but in most cases this class simplifies
+# that job.
+#
+# To use this class a number of variables should be defined:
+#
+# List all of the alternatives needed by a package:
+# ALTERNATIVE:<pkg> = "name1 name2 name3 ..."
+#
+#   i.e. ALTERNATIVE:busybox = "sh sed test bracket"
+#
+# The pathname of the link
+# ALTERNATIVE_LINK_NAME[name] = "target"
+#
+#   This is the name of the binary once it's been installed onto the runtime.
+#   This name is global to all split packages in this recipe, and should match
+#   other recipes with the same functionality.
+#   i.e. ALTERNATIVE_LINK_NAME[bracket] = "/usr/bin/["
+#
+# NOTE: If ALTERNATIVE_LINK_NAME is not defined, it defaults to ${bindir}/name
+#
+# The default link to create for all targets
+# ALTERNATIVE_TARGET = "target"
+#
+#   This is useful in a multicall binary case
+#   i.e. ALTERNATIVE_TARGET = "/bin/busybox"
+#
+# A non-default link to create for a target
+# ALTERNATIVE_TARGET[name] = "target"
+#
+#   This is the name of the binary as it has been installed by do_install
+#   i.e. ALTERNATIVE_TARGET[sh] = "/bin/bash"
+#
+# A package specific link for a target
+# ALTERNATIVE_TARGET_<pkg>[name] = "target"
+#
+#   This is useful when a recipe provides multiple alternatives for the
+#   same item.
+#
+# NOTE: If ALTERNATIVE_TARGET is not defined, it will inherit the value
+# from ALTERNATIVE_LINK_NAME.
+#
+# NOTE: If the ALTERNATIVE_LINK_NAME and ALTERNATIVE_TARGET are the same,
+# ALTERNATIVE_TARGET will have '.{BPN}' appended to it.  If the file
+# referenced has not been renamed, it will also be renamed.  (This avoids
+# the need to rename alternative files in the do_install step, but still
+# supports it if necessary for some reason.)
+#
+# The default priority for any alternatives
+# ALTERNATIVE_PRIORITY = "priority"
+#
+#   i.e. default is ALTERNATIVE_PRIORITY = "10"
+#
+# The non-default priority for a specific target
+# ALTERNATIVE_PRIORITY[name] = "priority"
+#
+# The package priority for a specific target
+# ALTERNATIVE_PRIORITY_<pkg>[name] = "priority"
+
+ALTERNATIVE_PRIORITY = "10"
+
+# We need special processing for vardeps because it can not work on
+# modified flag values.  So we aggregate the flags into a new variable
+# and include that variable in the set.
+UPDALTVARS  = "ALTERNATIVE ALTERNATIVE_LINK_NAME ALTERNATIVE_TARGET ALTERNATIVE_PRIORITY"
+
+PACKAGE_WRITE_DEPS += "virtual/update-alternatives-native"
+
+def gen_updatealternativesvardeps(d):
+    pkgs = (d.getVar("PACKAGES") or "").split()
+    vars = (d.getVar("UPDALTVARS") or "").split()
+
+    # First compute them for non_pkg versions
+    for v in vars:
+        for flag in sorted((d.getVarFlags(v) or {}).keys()):
+            if flag == "doc" or flag == "vardeps" or flag == "vardepsexp":
+                continue
+            d.appendVar('%s_VARDEPS' % (v), ' %s:%s' % (flag, d.getVarFlag(v, flag, False)))
+
+    for p in pkgs:
+        for v in vars:
+            for flag in sorted((d.getVarFlags("%s_%s" % (v,p)) or {}).keys()):
+                if flag == "doc" or flag == "vardeps" or flag == "vardepsexp":
+                    continue
+                d.appendVar('%s_VARDEPS_%s' % (v,p), ' %s:%s' % (flag, d.getVarFlag('%s_%s' % (v,p), flag, False)))
+
+def ua_extend_depends(d):
+    if not 'virtual/update-alternatives' in d.getVar('PROVIDES'):
+        d.appendVar('DEPENDS', ' virtual/${MLPREFIX}update-alternatives')
+
+def update_alternatives_enabled(d):
+    # Update Alternatives only works on target packages...
+    if bb.data.inherits_class('native', d) or \
+       bb.data.inherits_class('cross', d) or bb.data.inherits_class('crosssdk', d) or \
+       bb.data.inherits_class('cross-canadian', d):
+        return False
+
+    # Disable when targeting mingw32 (no target support)
+    if d.getVar("TARGET_OS") == "mingw32":
+        return False
+
+    return True
+
+python __anonymous() {
+    if not update_alternatives_enabled(d):
+        return
+
+    # compute special vardeps
+    gen_updatealternativesvardeps(d)
+
+    # extend the depends to include virtual/update-alternatives
+    ua_extend_depends(d)
+}
+
+def gen_updatealternativesvars(d):
+    ret = []
+    pkgs = (d.getVar("PACKAGES") or "").split()
+    vars = (d.getVar("UPDALTVARS") or "").split()
+
+    for v in vars:
+        ret.append(v + "_VARDEPS")
+
+    for p in pkgs:
+        for v in vars:
+            ret.append(v + ":" + p)
+            ret.append(v + "_VARDEPS_" + p)
+    return " ".join(ret)
+
+# Now the new stuff, we use a custom function to generate the right values
+populate_packages[vardeps] += "${UPDALTVARS} ${@gen_updatealternativesvars(d)}"
+
+# We need to do the rename after the image creation step, but before
+# the split and strip steps..  PACKAGE_PREPROCESS_FUNCS is the right
+# place for that.
+PACKAGE_PREPROCESS_FUNCS += "apply_update_alternative_renames"
+python apply_update_alternative_renames () {
+    if not update_alternatives_enabled(d):
+        return
+
+    import re
+
+    def update_files(alt_target, alt_target_rename, pkg, d):
+        f = d.getVar('FILES:' + pkg)
+        if f:
+            f = re.sub(r'(^|\s)%s(\s|$)' % re.escape (alt_target), r'\1%s\2' % alt_target_rename, f)
+            d.setVar('FILES:' + pkg, f)
+
+    # Check for deprecated usage...
+    pn = d.getVar('BPN')
+    if d.getVar('ALTERNATIVE_LINKS') != None:
+        bb.fatal('%s: Use of ALTERNATIVE_LINKS/ALTERNATIVE_PATH/ALTERNATIVE_NAME is no longer supported, please convert to the updated syntax, see update-alternatives.bbclass for more info.' % pn)
+
+    # Do actual update alternatives processing
+    pkgdest = d.getVar('PKGD')
+    for pkg in (d.getVar('PACKAGES') or "").split():
+        # If the src == dest, we know we need to rename the dest by appending ${BPN}
+        link_rename = []
+        for alt_name in (d.getVar('ALTERNATIVE:%s' % pkg) or "").split():
+            alt_link     = d.getVarFlag('ALTERNATIVE_LINK_NAME', alt_name)
+            if not alt_link:
+                alt_link = "%s/%s" % (d.getVar('bindir'), alt_name)
+                d.setVarFlag('ALTERNATIVE_LINK_NAME', alt_name, alt_link)
+            if alt_link.startswith(os.path.join(d.getVar('sysconfdir'), 'init.d')):
+                # Managing init scripts does not work (bug #10433), foremost
+                # because of a race with update-rc.d
+                bb.fatal("Using update-alternatives for managing SysV init scripts is not supported")
+
+            alt_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name) or d.getVarFlag('ALTERNATIVE_TARGET', alt_name)
+            alt_target   = alt_target or d.getVar('ALTERNATIVE_TARGET_%s' % pkg) or d.getVar('ALTERNATIVE_TARGET') or alt_link
+            # Sometimes alt_target is specified as relative to the link name.
+            alt_target   = os.path.join(os.path.dirname(alt_link), alt_target)
+
+            # If the link and target are the same name, we need to rename the target.
+            if alt_link == alt_target:
+                src = '%s/%s' % (pkgdest, alt_target)
+                alt_target_rename = '%s.%s' % (alt_target, pn)
+                dest = '%s/%s' % (pkgdest, alt_target_rename)
+                if os.path.lexists(dest):
+                    bb.note('%s: Already renamed: %s' % (pn, alt_target_rename))
+                elif os.path.lexists(src):
+                    if os.path.islink(src):
+                        # Delay rename of links
+                        link_rename.append((alt_target, alt_target_rename))
+                    else:
+                        bb.note('%s: Rename %s -> %s' % (pn, alt_target, alt_target_rename))
+                        bb.utils.rename(src, dest)
+                        update_files(alt_target, alt_target_rename, pkg, d)
+                else:
+                    bb.warn("%s: alternative target (%s or %s) does not exist, skipping..." % (pn, alt_target, alt_target_rename))
+                    continue
+                d.setVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name, alt_target_rename)
+
+        # Process delayed link names
+        # Do these after other renames so we can correct broken links
+        for (alt_target, alt_target_rename) in link_rename:
+            src = '%s/%s' % (pkgdest, alt_target)
+            dest = '%s/%s' % (pkgdest, alt_target_rename)
+            link_target = oe.path.realpath(src, pkgdest, True)
+
+            if os.path.lexists(link_target):
+                # Ok, the link_target exists, we can rename
+                bb.note('%s: Rename (link) %s -> %s' % (pn, alt_target, alt_target_rename))
+                bb.utils.rename(src, dest)
+            else:
+                # Try to resolve the broken link to link.${BPN}
+                link_maybe = '%s.%s' % (os.readlink(src), pn)
+                if os.path.lexists(os.path.join(os.path.dirname(src), link_maybe)):
+                    # Ok, the renamed link target exists.. create a new link, and remove the original
+                    bb.note('%s: Creating new link %s -> %s' % (pn, alt_target_rename, link_maybe))
+                    os.symlink(link_maybe, dest)
+                    os.unlink(src)
+                else:
+                    bb.warn('%s: Unable to resolve dangling symlink: %s' % (pn, alt_target))
+                    continue
+            update_files(alt_target, alt_target_rename, pkg, d)
+}
+
+def update_alternatives_alt_targets(d, pkg):
+    """
+    Returns the update-alternatives metadata for a package.
+
+    The returned format is a list of tuples where the tuple contains:
+    alt_name:     The binary name
+    alt_link:     The path for the binary (Shared by different packages)
+    alt_target:   The path for the renamed binary (Unique per package)
+    alt_priority: The priority of the alt_target
+
+    All the alt_targets will be installed into the sysroot. The alt_link is
+    a symlink pointing to the alt_target with the highest priority.
+    """
+
+    pn = d.getVar('BPN')
+    pkgdest = d.getVar('PKGD')
+    updates = list()
+    for alt_name in (d.getVar('ALTERNATIVE:%s' % pkg) or "").split():
+        alt_link     = d.getVarFlag('ALTERNATIVE_LINK_NAME', alt_name)
+        alt_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name) or \
+                       d.getVarFlag('ALTERNATIVE_TARGET', alt_name) or \
+                       d.getVar('ALTERNATIVE_TARGET_%s' % pkg) or \
+                       d.getVar('ALTERNATIVE_TARGET') or \
+                       alt_link
+        alt_priority = d.getVarFlag('ALTERNATIVE_PRIORITY_%s' % pkg,  alt_name) or \
+                       d.getVarFlag('ALTERNATIVE_PRIORITY',  alt_name) or \
+                       d.getVar('ALTERNATIVE_PRIORITY_%s' % pkg) or  \
+                       d.getVar('ALTERNATIVE_PRIORITY')
+
+        # This shouldn't trigger, as it should have been resolved earlier!
+        if alt_link == alt_target:
+            bb.note('alt_link == alt_target: %s == %s -- correcting, this should not happen!' % (alt_link, alt_target))
+            alt_target = '%s.%s' % (alt_target, pn)
+
+        if not os.path.lexists('%s/%s' % (pkgdest, alt_target)):
+            bb.warn('%s: NOT adding alternative provide %s: %s does not exist' % (pn, alt_link, alt_target))
+            continue
+
+        alt_target = os.path.normpath(alt_target)
+        updates.append( (alt_name, alt_link, alt_target, alt_priority) )
+
+    return updates
+
+PACKAGESPLITFUNCS:prepend = "populate_packages_updatealternatives "
+
+python populate_packages_updatealternatives () {
+    if not update_alternatives_enabled(d):
+        return
+
+    # Do actual update alternatives processing
+    for pkg in (d.getVar('PACKAGES') or "").split():
+        # Create post install/removal scripts
+        alt_setup_links = ""
+        alt_remove_links = ""
+        updates = update_alternatives_alt_targets(d, pkg)
+        for alt_name, alt_link, alt_target, alt_priority in updates:
+            alt_setup_links  += '\tupdate-alternatives --install %s %s %s %s\n' % (alt_link, alt_name, alt_target, alt_priority)
+            alt_remove_links += '\tupdate-alternatives --remove  %s %s\n' % (alt_name, alt_target)
+
+        if alt_setup_links:
+            # RDEPENDS setup
+            provider = d.getVar('VIRTUAL-RUNTIME_update-alternatives')
+            if provider:
+                #bb.note('adding runtime requirement for update-alternatives for %s' % pkg)
+                d.appendVar('RDEPENDS:%s' % pkg, ' ' + d.getVar('MLPREFIX', False) + provider)
+
+            bb.note('adding update-alternatives calls to postinst/prerm for %s' % pkg)
+            bb.note('%s' % alt_setup_links)
+            postinst = d.getVar('pkg_postinst:%s' % pkg)
+            if postinst:
+                postinst = alt_setup_links + postinst
+            else:
+                postinst = '#!/bin/sh\n' + alt_setup_links
+            d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+            bb.note('%s' % alt_remove_links)
+            prerm = d.getVar('pkg_prerm:%s' % pkg) or '#!/bin/sh\n'
+            prerm += alt_remove_links
+            d.setVar('pkg_prerm:%s' % pkg, prerm)
+}
+
+python package_do_filedeps:append () {
+    if update_alternatives_enabled(d):
+        apply_update_alternative_provides(d)
+}
+
+def apply_update_alternative_provides(d):
+    pn = d.getVar('BPN')
+    pkgdest = d.getVar('PKGDEST')
+
+    for pkg in d.getVar('PACKAGES').split():
+        for alt_name in (d.getVar('ALTERNATIVE:%s' % pkg) or "").split():
+            alt_link     = d.getVarFlag('ALTERNATIVE_LINK_NAME', alt_name)
+            alt_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name) or d.getVarFlag('ALTERNATIVE_TARGET', alt_name)
+            alt_target   = alt_target or d.getVar('ALTERNATIVE_TARGET_%s' % pkg) or d.getVar('ALTERNATIVE_TARGET') or alt_link
+
+            if alt_link == alt_target:
+                bb.warn('%s: alt_link == alt_target: %s == %s' % (pn, alt_link, alt_target))
+                alt_target = '%s.%s' % (alt_target, pn)
+
+            if not os.path.lexists('%s/%s/%s' % (pkgdest, pkg, alt_target)):
+                continue
+
+            # Add file provide
+            trans_target = oe.package.file_translate(alt_target)
+            d.appendVar('FILERPROVIDES:%s:%s' % (trans_target, pkg), " " + alt_link)
+            if not trans_target in (d.getVar('FILERPROVIDESFLIST:%s' % pkg) or ""):
+                d.appendVar('FILERPROVIDESFLIST:%s' % pkg, " " + trans_target)
+
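
A short, hypothetical recipe fragment using the variables documented above (the
alternative name and link are illustrative):

    inherit update-alternatives
    ALTERNATIVE:${PN} = "mytool"
    ALTERNATIVE_LINK_NAME[mytool] = "${bindir}/mytool"
    ALTERNATIVE_PRIORITY = "20"

Because the link and the default target are the same here, the class renames the
installed file to mytool.${BPN} and registers it through update-alternatives in
the package postinst.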
diff --git a/poky/meta/classes-recipe/update-rc.d.bbclass b/poky/meta/classes-recipe/update-rc.d.bbclass
new file mode 100644
index 0000000..cb2aaba
--- /dev/null
+++ b/poky/meta/classes-recipe/update-rc.d.bbclass
@@ -0,0 +1,129 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+UPDATERCPN ?= "${PN}"
+
+DEPENDS:append:class-target = "${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', ' update-rc.d initscripts', '', d)}"
+
+UPDATERCD = "update-rc.d"
+UPDATERCD:class-cross = ""
+UPDATERCD:class-native = ""
+UPDATERCD:class-nativesdk = ""
+
+INITSCRIPT_PARAMS ?= "defaults"
+
+INIT_D_DIR = "${sysconfdir}/init.d"
+
+def use_updatercd(d):
+    # If the distro supports both sysvinit and systemd, and the current recipe
+    # supports systemd, only call update-rc.d on rootfs creation or if systemd
+    # is not running. That's because systemctl enable/disable will already call
+    # update-rc.d if it detects initscripts.
+    if bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d) and bb.data.inherits_class('systemd', d):
+        return '[ -n "$D" -o ! -d /run/systemd/system ]'
+    return 'true'
+
+PACKAGE_WRITE_DEPS += "update-rc.d-native"
+
+updatercd_postinst() {
+if ${@use_updatercd(d)} && type update-rc.d >/dev/null 2>/dev/null; then
+	if [ -n "$D" ]; then
+		OPT="-r $D"
+	else
+		OPT="-s"
+	fi
+	update-rc.d $OPT ${INITSCRIPT_NAME} ${INITSCRIPT_PARAMS}
+fi
+}
+
+updatercd_prerm() {
+if ${@use_updatercd(d)} && [ -z "$D" -a -x "${INIT_D_DIR}/${INITSCRIPT_NAME}" ]; then
+	${INIT_D_DIR}/${INITSCRIPT_NAME} stop || :
+fi
+}
+
+updatercd_postrm() {
+if ${@use_updatercd(d)} && type update-rc.d >/dev/null 2>/dev/null; then
+	if [ -n "$D" ]; then
+		OPT="-f -r $D"
+	else
+		OPT="-f"
+	fi
+	update-rc.d $OPT ${INITSCRIPT_NAME} remove
+fi
+}
+
+
+def update_rc_after_parse(d):
+    if d.getVar('INITSCRIPT_PACKAGES', False) == None:
+        if d.getVar('INITSCRIPT_NAME', False) == None:
+            bb.fatal("%s inherits update-rc.d but doesn't set INITSCRIPT_NAME" % d.getVar('FILE', False))
+        if d.getVar('INITSCRIPT_PARAMS', False) == None:
+            bb.fatal("%s inherits update-rc.d but doesn't set INITSCRIPT_PARAMS" % d.getVar('FILE', False))
+
+python __anonymous() {
+    update_rc_after_parse(d)
+}
+
+PACKAGESPLITFUNCS:prepend = "${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'populate_packages_updatercd ', '', d)}"
+PACKAGESPLITFUNCS:remove:class-nativesdk = "populate_packages_updatercd "
+
+populate_packages_updatercd[vardeps] += "updatercd_prerm updatercd_postrm updatercd_postinst"
+populate_packages_updatercd[vardepsexclude] += "OVERRIDES"
+
+python populate_packages_updatercd () {
+    def update_rcd_auto_depend(pkg):
+        import subprocess
+        import os
+        path = d.expand("${D}${INIT_D_DIR}/${INITSCRIPT_NAME}")
+        if not os.path.exists(path):
+            return
+        statement = "grep -q -w '/etc/init.d/functions' %s" % path
+        if subprocess.call(statement, shell=True) == 0:
+            mlprefix = d.getVar('MLPREFIX') or ""
+            d.appendVar('RDEPENDS:' + pkg, ' %sinitd-functions' % (mlprefix))
+
+    def update_rcd_package(pkg):
+        bb.debug(1, 'adding update-rc.d calls to postinst/prerm/postrm for %s' % pkg)
+
+        localdata = bb.data.createCopy(d)
+        overrides = localdata.getVar("OVERRIDES")
+        localdata.setVar("OVERRIDES", "%s:%s" % (pkg, overrides))
+
+        update_rcd_auto_depend(pkg)
+
+        postinst = d.getVar('pkg_postinst:%s' % pkg)
+        if not postinst:
+            postinst = '#!/bin/sh\n'
+        postinst += localdata.getVar('updatercd_postinst')
+        d.setVar('pkg_postinst:%s' % pkg, postinst)
+
+        prerm = d.getVar('pkg_prerm:%s' % pkg)
+        if not prerm:
+            prerm = '#!/bin/sh\n'
+        prerm += localdata.getVar('updatercd_prerm')
+        d.setVar('pkg_prerm:%s' % pkg, prerm)
+
+        postrm = d.getVar('pkg_postrm:%s' % pkg)
+        if not postrm:
+            postrm = '#!/bin/sh\n'
+        postrm += localdata.getVar('updatercd_postrm')
+        d.setVar('pkg_postrm:%s' % pkg, postrm)
+
+        d.appendVar('RRECOMMENDS:' + pkg, " ${MLPREFIX}${UPDATERCD}")
+
+    # Check that this class isn't being inhibited (generally, by
+    # systemd.bbclass) before doing any work.
+    if not d.getVar("INHIBIT_UPDATERCD_BBCLASS"):
+        pkgs = d.getVar('INITSCRIPT_PACKAGES')
+        if pkgs == None:
+            pkgs = d.getVar('UPDATERCPN')
+            packages = (d.getVar('PACKAGES') or "").split()
+            if not pkgs in packages and packages != []:
+                pkgs = packages[0]
+        for pkg in pkgs.split():
+            update_rcd_package(pkg)
+}
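
A usage sketch for a recipe shipping a SysV init script (the script name and
parameters are illustrative):

    inherit update-rc.d
    INITSCRIPT_NAME = "mydaemon"
    INITSCRIPT_PARAMS = "defaults 90 10"

When more than one package ships an init script, INITSCRIPT_PACKAGES can list
them and per-package values can be supplied via overrides.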
diff --git a/poky/meta/classes-recipe/upstream-version-is-even.bbclass b/poky/meta/classes-recipe/upstream-version-is-even.bbclass
new file mode 100644
index 0000000..19587cb
--- /dev/null
+++ b/poky/meta/classes-recipe/upstream-version-is-even.bbclass
@@ -0,0 +1,11 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This class ensures that the upstream version check only
+# accepts even minor versions (i.e. 3.0.x, 3.2.x, 3.4.x, etc.)
+# This scheme is used by Gnome and a number of other projects
+# to signify stable releases vs development releases.
+UPSTREAM_CHECK_REGEX = "[^\d\.](?P<pver>\d+\.(\d*[02468])+(\.\d+)+)\.tar"
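
As an illustration, with this regex a 3.36.4 tarball is accepted as an upstream
version candidate while 3.37.0 (odd minor, i.e. a development release) is
ignored; a recipe opts in simply with:

    inherit upstream-version-is-even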
diff --git a/poky/meta/classes-recipe/vala.bbclass b/poky/meta/classes-recipe/vala.bbclass
new file mode 100644
index 0000000..460ddb3
--- /dev/null
+++ b/poky/meta/classes-recipe/vala.bbclass
@@ -0,0 +1,30 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# Everyone needs vala-native and targets need vala, too,
+# because that is where target builds look for .vapi files.
+#
+VALADEPENDS = ""
+VALADEPENDS:class-target = "vala"
+DEPENDS:append = " vala-native ${VALADEPENDS}"
+
+# Our patched version of Vala looks in STAGING_DATADIR for .vapi files
+export STAGING_DATADIR
+# Upstream Vala >= 0.11 looks in XDG_DATA_DIRS for .vapi files
+export XDG_DATA_DIRS = "${STAGING_DATADIR}:${STAGING_LIBDIR}"
+
+# Package additional files
+FILES:${PN}-dev += "\
+    ${datadir}/vala/vapi/*.vapi \
+    ${datadir}/vala/vapi/*.deps \
+    ${datadir}/gir-1.0 \
+"
+
+# Remove vapigen.m4 that is bundled with tarballs
+# because it does not yet have our cross-compile fixes
+do_configure:prepend() {
+        rm -f ${S}/m4/vapigen.m4
+}
diff --git a/poky/meta/classes-recipe/waf.bbclass b/poky/meta/classes-recipe/waf.bbclass
new file mode 100644
index 0000000..5fa0cc4
--- /dev/null
+++ b/poky/meta/classes-recipe/waf.bbclass
@@ -0,0 +1,81 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# avoids build breaks when using no-static-libs.inc
+DISABLE_STATIC = ""
+
+# What Python interpreter to use.  Defaults to Python 3 but can be
+# overridden if required.
+WAF_PYTHON ?= "python3"
+
+B = "${WORKDIR}/build"
+do_configure[cleandirs] += "${B}"
+
+EXTRA_OECONF:append = " ${PACKAGECONFIG_CONFARGS}"
+
+EXTRA_OEWAF_BUILD ??= ""
+# In most cases, you want to pass the same arguments to `waf build` and `waf
+# install`, but you can override it if necessary
+EXTRA_OEWAF_INSTALL ??= "${EXTRA_OEWAF_BUILD}"
+
+def waflock_hash(d):
+    # Calculates the hash used for the waf lock file. This should include
+    # all of the user controllable inputs passed to waf configure. Note
+    # that the full paths for ${B} and ${S} are used; this is OK and desired
+    # because a change to either of these should create a unique lock file
+    # to prevent collisions.
+    import hashlib
+    h = hashlib.sha512()
+    def update(name):
+        val = d.getVar(name)
+        if val is not None:
+            h.update(val.encode('utf-8'))
+    update('S')
+    update('B')
+    update('prefix')
+    update('EXTRA_OECONF')
+    return h.hexdigest()
+
+# Use WAFLOCK to specify a separate lock file. The build is already
+# sufficiently isolated by setting the output directory, this ensures that
+# bitbake won't step on toes of any other configured context in the source
+# directory (e.g. if the source is coming from externalsrc and was previously
+# configured elsewhere).
+export WAFLOCK = ".lock-waf_oe_${@waflock_hash(d)}_build"
+BB_BASEHASH_IGNORE_VARS += "WAFLOCK"
+
+python waf_preconfigure() {
+    import subprocess
+    subsrcdir = d.getVar('S')
+    python = d.getVar('WAF_PYTHON')
+    wafbin = os.path.join(subsrcdir, 'waf')
+    try:
+        result = subprocess.check_output([python, wafbin, '--version'], cwd=subsrcdir, stderr=subprocess.STDOUT)
+        version = result.decode('utf-8').split()[1]
+        if bb.utils.vercmp_string_op(version, "1.8.7", ">="):
+            d.setVar("WAF_EXTRA_CONF", "--bindir=${bindir} --libdir=${libdir}")
+    except subprocess.CalledProcessError as e:
+        bb.warn("Unable to execute waf --version, exit code %d. Assuming waf version without bindir/libdir support." % e.returncode)
+    except FileNotFoundError:
+        bb.fatal("waf does not exist in %s" % subsrcdir)
+}
+
+do_configure[prefuncs] += "waf_preconfigure"
+
+waf_do_configure() {
+	(cd ${S} && ${WAF_PYTHON} ./waf configure -o ${B} --prefix=${prefix} ${WAF_EXTRA_CONF} ${EXTRA_OECONF})
+}
+
+do_compile[progress] = "outof:^\[\s*(\d+)/\s*(\d+)\]\s+"
+waf_do_compile()  {
+	(cd ${S} && ${WAF_PYTHON} ./waf build ${@oe.utils.parallel_make_argument(d, '-j%d', limit=64)} ${EXTRA_OEWAF_BUILD})
+}
+
+waf_do_install() {
+	(cd ${S} && ${WAF_PYTHON} ./waf install --destdir=${D} ${EXTRA_OEWAF_INSTALL})
+}
+
+EXPORT_FUNCTIONS do_configure do_compile do_install
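
An illustrative fragment for a waf-based recipe (the configure and build options
are placeholders, not defaults):

    inherit waf
    EXTRA_OECONF += "--disable-tests"
    EXTRA_OEWAF_BUILD = "--verbose"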
diff --git a/poky/meta/classes-recipe/xmlcatalog.bbclass b/poky/meta/classes-recipe/xmlcatalog.bbclass
new file mode 100644
index 0000000..5826d0a
--- /dev/null
+++ b/poky/meta/classes-recipe/xmlcatalog.bbclass
@@ -0,0 +1,32 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+DEPENDS = "libxml2-native"
+
+# A whitespace-separated list of XML catalogs to be registered, for example
+# "${sysconfdir}/xml/docbook-xml.xml".
+XMLCATALOGS ?= ""
+
+SYSROOT_PREPROCESS_FUNCS:append = " xmlcatalog_sstate_postinst"
+
+xmlcatalog_complete() {
+	ROOTCATALOG="${STAGING_ETCDIR_NATIVE}/xml/catalog"
+	if [ ! -f $ROOTCATALOG ]; then
+		mkdir --parents $(dirname $ROOTCATALOG)
+		xmlcatalog --noout --create $ROOTCATALOG
+	fi
+	for CATALOG in ${XMLCATALOGS}; do
+		xmlcatalog --noout --add nextCatalog unused file://$CATALOG $ROOTCATALOG
+	done
+}
+
+xmlcatalog_sstate_postinst() {
+	mkdir -p ${SYSROOT_DESTDIR}${bindir}
+	dest=${SYSROOT_DESTDIR}${bindir}/postinst-${PN}-xmlcatalog
+	echo '#!/bin/sh' > $dest
+	echo '${xmlcatalog_complete}' >> $dest
+	chmod 0755 $dest
+}
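
As a sketch, a recipe installing a DocBook XML catalog might register it like
this (the path is the example given in the comment above):

    inherit xmlcatalog
    XMLCATALOGS = "${sysconfdir}/xml/docbook-xml.xml"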
diff --git a/poky/meta/classes/allarch.bbclass b/poky/meta/classes/allarch.bbclass
deleted file mode 100644
index a766a65..0000000
--- a/poky/meta/classes/allarch.bbclass
+++ /dev/null
@@ -1,65 +0,0 @@
-#
-# This class is used for architecture independent recipes/data files (usually scripts)
-#
-
-python allarch_package_arch_handler () {
-    if bb.data.inherits_class("native", d) or bb.data.inherits_class("nativesdk", d) \
-        or bb.data.inherits_class("crosssdk", d):
-        return
-
-    variants = d.getVar("MULTILIB_VARIANTS")
-    if not variants:
-        d.setVar("PACKAGE_ARCH", "all" )
-}
-
-addhandler allarch_package_arch_handler
-allarch_package_arch_handler[eventmask] = "bb.event.RecipePreFinalise"
-
-python () {
-    # Allow this class to be included but overridden - only set
-    # the values if we're still "all" package arch.
-    if d.getVar("PACKAGE_ARCH") == "all":
-        # No need for virtual/libc or a cross compiler
-        d.setVar("INHIBIT_DEFAULT_DEPS","1")
-
-        # Set these to a common set of values, we shouldn't be using them other that for WORKDIR directory
-        # naming anyway
-        d.setVar("baselib", "lib")
-        d.setVar("TARGET_ARCH", "allarch")
-        d.setVar("TARGET_OS", "linux")
-        d.setVar("TARGET_CC_ARCH", "none")
-        d.setVar("TARGET_LD_ARCH", "none")
-        d.setVar("TARGET_AS_ARCH", "none")
-        d.setVar("TARGET_FPU", "")
-        d.setVar("TARGET_PREFIX", "")
-        # Expand PACKAGE_EXTRA_ARCHS since the staging code needs this
-        # (this removes any dependencies from the hash perspective)
-        d.setVar("PACKAGE_EXTRA_ARCHS", d.getVar("PACKAGE_EXTRA_ARCHS"))
-        d.setVar("SDK_ARCH", "none")
-        d.setVar("SDK_CC_ARCH", "none")
-        d.setVar("TARGET_CPPFLAGS", "none")
-        d.setVar("TARGET_CFLAGS", "none")
-        d.setVar("TARGET_CXXFLAGS", "none")
-        d.setVar("TARGET_LDFLAGS", "none")
-        d.setVar("POPULATESYSROOTDEPS", "")
-
-        # Avoid this being unnecessarily different due to nuances of
-        # the target machine that aren't important for "all" arch
-        # packages.
-        d.setVar("LDFLAGS", "")
-
-        # No need to do shared library processing or debug symbol handling
-        d.setVar("EXCLUDE_FROM_SHLIBS", "1")
-        d.setVar("INHIBIT_PACKAGE_DEBUG_SPLIT", "1")
-        d.setVar("INHIBIT_PACKAGE_STRIP", "1")
-
-        # These multilib values shouldn't change allarch packages so exclude them
-        d.appendVarFlag("emit_pkgdata", "vardepsexclude", " MULTILIB_VARIANTS")
-        d.appendVarFlag("write_specfile", "vardepsexclude", " MULTILIBS")
-        d.appendVarFlag("do_package", "vardepsexclude", " package_do_shlibs")
-    elif bb.data.inherits_class('packagegroup', d) and not bb.data.inherits_class('nativesdk', d):
-        bb.error("Please ensure recipe %s sets PACKAGE_ARCH before inherit packagegroup" % d.getVar("FILE"))
-}
-
-def qemu_wrapper_cmdline(data, rootfs_path, library_paths):
-    return 'false'
diff --git a/poky/meta/classes/archiver.bbclass b/poky/meta/classes/archiver.bbclass
index 5da369d..0710c1e 100644
--- a/poky/meta/classes/archiver.bbclass
+++ b/poky/meta/classes/archiver.bbclass
@@ -1,5 +1,9 @@
-# ex:ts=4:sw=4:sts=4:et
-# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 #
 # This bbclass is used for creating archive for:
 #  1) original (or unpacked) source: ARCHIVER_MODE[src] = "original"
@@ -459,7 +463,9 @@
 
 def is_work_shared(d):
     pn = d.getVar('PN')
-    return bb.data.inherits_class('kernel', d) or pn.startswith('gcc-source')
+    return pn.startswith('gcc-source') or \
+        bb.data.inherits_class('kernel', d) or \
+        (bb.data.inherits_class('kernelsrc', d) and d.getVar('S') == d.getVar('STAGING_KERNEL_DIR'))
 
 # Run do_unpack and do_patch
 python do_unpack_and_patch() {
diff --git a/poky/meta/classes/autotools-brokensep.bbclass b/poky/meta/classes/autotools-brokensep.bbclass
deleted file mode 100644
index 71cf97a..0000000
--- a/poky/meta/classes/autotools-brokensep.bbclass
+++ /dev/null
@@ -1,5 +0,0 @@
-# Autotools class for recipes where separate build dir doesn't work
-# Ideally we should fix software so it does work. Standard autotools supports
-# this.
-inherit autotools
-B = "${S}"
diff --git a/poky/meta/classes/autotools.bbclass b/poky/meta/classes/autotools.bbclass
deleted file mode 100644
index 4ab2460..0000000
--- a/poky/meta/classes/autotools.bbclass
+++ /dev/null
@@ -1,254 +0,0 @@
-def get_autotools_dep(d):
-    if d.getVar('INHIBIT_AUTOTOOLS_DEPS'):
-        return ''
-
-    pn = d.getVar('PN')
-    deps = ''
-
-    if pn in ['autoconf-native', 'automake-native']:
-        return deps
-    deps += 'autoconf-native automake-native '
-
-    if not pn in ['libtool', 'libtool-native'] and not pn.endswith("libtool-cross"):
-        deps += 'libtool-native '
-        if not bb.data.inherits_class('native', d) \
-                        and not bb.data.inherits_class('nativesdk', d) \
-                        and not bb.data.inherits_class('cross', d) \
-                        and not d.getVar('INHIBIT_DEFAULT_DEPS'):
-            deps += 'libtool-cross '
-
-    return deps
-
-
-DEPENDS:prepend = "${@get_autotools_dep(d)} "
-
-inherit siteinfo
-
-# Space separated list of shell scripts with variables defined to supply test
-# results for autoconf tests we cannot run at build time.
-# The value of this variable is filled in in a prefunc because it depends on
-# the contents of the sysroot.
-export CONFIG_SITE
-
-acpaths ?= "default"
-EXTRA_AUTORECONF = "--exclude=autopoint --exclude=gtkdocize"
-
-export lt_cv_sys_lib_dlsearch_path_spec = "${libdir} ${base_libdir}"
-
-# When building tools for use at build-time it's recommended for the build
-# system to use these variables when cross-compiling.
-# (http://sources.redhat.com/autobook/autobook/autobook_270.html)
-export CPP_FOR_BUILD = "${BUILD_CPP}"
-export CPPFLAGS_FOR_BUILD = "${BUILD_CPPFLAGS}"
-
-export CC_FOR_BUILD = "${BUILD_CC}"
-export CFLAGS_FOR_BUILD = "${BUILD_CFLAGS}"
-
-export CXX_FOR_BUILD = "${BUILD_CXX}"
-export CXXFLAGS_FOR_BUILD="${BUILD_CXXFLAGS}"
-
-export LD_FOR_BUILD = "${BUILD_LD}"
-export LDFLAGS_FOR_BUILD = "${BUILD_LDFLAGS}"
-
-def append_libtool_sysroot(d):
-    # Only supply libtool sysroot option for non-native packages
-    if not bb.data.inherits_class('native', d):
-        return '--with-libtool-sysroot=${STAGING_DIR_HOST}'
-    return ""
-
-CONFIGUREOPTS = " --build=${BUILD_SYS} \
-		  --host=${HOST_SYS} \
-		  --target=${TARGET_SYS} \
-		  --prefix=${prefix} \
-		  --exec_prefix=${exec_prefix} \
-		  --bindir=${bindir} \
-		  --sbindir=${sbindir} \
-		  --libexecdir=${libexecdir} \
-		  --datadir=${datadir} \
-		  --sysconfdir=${sysconfdir} \
-		  --sharedstatedir=${sharedstatedir} \
-		  --localstatedir=${localstatedir} \
-		  --libdir=${libdir} \
-		  --includedir=${includedir} \
-		  --oldincludedir=${oldincludedir} \
-		  --infodir=${infodir} \
-		  --mandir=${mandir} \
-		  --disable-silent-rules \
-		  ${CONFIGUREOPT_DEPTRACK} \
-		  ${@append_libtool_sysroot(d)}"
-CONFIGUREOPT_DEPTRACK ?= "--disable-dependency-tracking"
-
-CACHED_CONFIGUREVARS ?= ""
-
-AUTOTOOLS_SCRIPT_PATH ?= "${S}"
-CONFIGURE_SCRIPT ?= "${AUTOTOOLS_SCRIPT_PATH}/configure"
-
-AUTOTOOLS_AUXDIR ?= "${AUTOTOOLS_SCRIPT_PATH}"
-
-oe_runconf () {
-	# Use relative path to avoid buildpaths in files
-	cfgscript_name="`basename ${CONFIGURE_SCRIPT}`"
-	cfgscript=`python3 -c "import os; print(os.path.relpath(os.path.dirname('${CONFIGURE_SCRIPT}'), '.'))"`/$cfgscript_name
-	if [ -x "$cfgscript" ] ; then
-		bbnote "Running $cfgscript ${CONFIGUREOPTS} ${EXTRA_OECONF} $@"
-		if ! CONFIG_SHELL=${CONFIG_SHELL-/bin/bash} ${CACHED_CONFIGUREVARS} $cfgscript ${CONFIGUREOPTS} ${EXTRA_OECONF} "$@"; then
-			bbnote "The following config.log files may provide further information."
-			bbnote `find ${B} -ignore_readdir_race -type f -name config.log`
-			bbfatal_log "configure failed"
-		fi
-	else
-		bbfatal "no configure script found at $cfgscript"
-	fi
-}
-
-CONFIGURESTAMPFILE = "${WORKDIR}/configure.sstate"
-
-autotools_preconfigure() {
-	if [ -n "${CONFIGURESTAMPFILE}" -a -e "${CONFIGURESTAMPFILE}" ]; then
-		if [ "`cat ${CONFIGURESTAMPFILE}`" != "${BB_TASKHASH}" ]; then
-			if [ "${S}" != "${B}" ]; then
-				echo "Previously configured separate build directory detected, cleaning ${B}"
-				rm -rf ${B}
-				mkdir -p ${B}
-			else
-				# At least remove the .la files since automake won't automatically
-				# regenerate them even if CFLAGS/LDFLAGS are different
-				cd ${S}
-				if [ "${CLEANBROKEN}" != "1" -a \( -e Makefile -o -e makefile -o -e GNUmakefile \) ]; then
-					oe_runmake clean
-				fi
-				find ${S} -ignore_readdir_race -name \*.la -delete
-			fi
-		fi
-	fi
-}
-
-autotools_postconfigure(){
-	if [ -n "${CONFIGURESTAMPFILE}" ]; then
-		mkdir -p `dirname ${CONFIGURESTAMPFILE}`
-		echo ${BB_TASKHASH} > ${CONFIGURESTAMPFILE}
-	fi
-}
-
-EXTRACONFFUNCS ??= ""
-
-EXTRA_OECONF:append = " ${PACKAGECONFIG_CONFARGS}"
-
-do_configure[prefuncs] += "autotools_preconfigure autotools_aclocals ${EXTRACONFFUNCS}"
-do_compile[prefuncs] += "autotools_aclocals"
-do_install[prefuncs] += "autotools_aclocals"
-do_configure[postfuncs] += "autotools_postconfigure"
-
-ACLOCALDIR = "${STAGING_DATADIR}/aclocal"
-ACLOCALEXTRAPATH = ""
-ACLOCALEXTRAPATH:class-target = " -I ${STAGING_DATADIR_NATIVE}/aclocal/"
-ACLOCALEXTRAPATH:class-nativesdk = " -I ${STAGING_DATADIR_NATIVE}/aclocal/"
-
-python autotools_aclocals () {
-    sitefiles, searched = siteinfo_get_files(d, sysrootcache=True)
-    d.setVar("CONFIG_SITE", " ".join(sitefiles))
-}
-
-do_configure[file-checksums] += "${@' '.join(siteinfo_get_files(d, sysrootcache=False)[1])}"
-
-CONFIGURE_FILES = "${S}/configure.in ${S}/configure.ac ${S}/config.h.in ${S}/acinclude.m4 Makefile.am"
-
-autotools_do_configure() {
-	# WARNING: gross hack follows:
-	# An autotools built package generally needs these scripts, however only
-	# automake or libtoolize actually install the current versions of them.
-	# This is a problem in builds that do not use libtool or automake, in the case
-	# where we -need- the latest version of these scripts.  e.g. running a build
-	# for a package whose autotools are old, on an x86_64 machine, which the old
-	# config.sub does not support.  Work around this by installing them manually
-	# regardless.
-
-	PRUNE_M4=""
-
-	for ac in `find ${S} -ignore_readdir_race -name configure.in -o -name configure.ac`; do
-		rm -f `dirname $ac`/configure
-	done
-	if [ -e ${AUTOTOOLS_SCRIPT_PATH}/configure.in -o -e ${AUTOTOOLS_SCRIPT_PATH}/configure.ac ]; then
-		olddir=`pwd`
-		cd ${AUTOTOOLS_SCRIPT_PATH}
-		mkdir -p ${ACLOCALDIR}
-		ACLOCAL="aclocal --system-acdir=${ACLOCALDIR}/"
-		if [ x"${acpaths}" = xdefault ]; then
-			acpaths=
-			for i in `find ${AUTOTOOLS_SCRIPT_PATH} -ignore_readdir_race -maxdepth 2 -name \*.m4|grep -v 'aclocal.m4'| \
-				grep -v 'acinclude.m4' | sed -e 's,\(.*/\).*$,\1,'|sort -u`; do
-				acpaths="$acpaths -I $i"
-			done
-		else
-			acpaths="${acpaths}"
-		fi
-		acpaths="$acpaths ${ACLOCALEXTRAPATH}"
-		AUTOV=`automake --version | sed -e '1{s/.* //;s/\.[0-9]\+$//};q'`
-		automake --version
-		echo "AUTOV is $AUTOV"
-		if [ -d ${STAGING_DATADIR_NATIVE}/aclocal-$AUTOV ]; then
-			ACLOCAL="$ACLOCAL --automake-acdir=${STAGING_DATADIR_NATIVE}/aclocal-$AUTOV"
-		fi
-		# autoreconf is too shy to overwrite aclocal.m4 if it doesn't look
-		# like it was auto-generated.  Work around this by blowing it away
-		# by hand, unless the package specifically asked not to run aclocal.
-		if ! echo ${EXTRA_AUTORECONF} | grep -q "aclocal"; then
-			rm -f aclocal.m4
-		fi
-		if [ -e configure.in ]; then
-			CONFIGURE_AC=configure.in
-		else
-			CONFIGURE_AC=configure.ac
-		fi
-		if grep -q "^[[:space:]]*AM_GLIB_GNU_GETTEXT" $CONFIGURE_AC; then
-			if grep -q "sed.*POTFILES" $CONFIGURE_AC; then
-				: do nothing -- we still have an old unmodified configure.ac
-			else
-				bbnote Executing glib-gettextize --force --copy
-				echo "no" | glib-gettextize --force --copy
-			fi
-		elif [ "${BPN}" != "gettext" ] && grep -q "^[[:space:]]*AM_GNU_GETTEXT" $CONFIGURE_AC; then
-			# We'd call gettextize here if it wasn't so broken...
-			cp ${STAGING_DATADIR_NATIVE}/gettext/config.rpath ${AUTOTOOLS_AUXDIR}/
-			if [ -d ${S}/po/ ]; then
-				cp -f ${STAGING_DATADIR_NATIVE}/gettext/po/Makefile.in.in ${S}/po/
-				if [ ! -e ${S}/po/remove-potcdate.sin ]; then
-					cp ${STAGING_DATADIR_NATIVE}/gettext/po/remove-potcdate.sin ${S}/po/
-				fi
-			fi
-			PRUNE_M4="$PRUNE_M4 gettext.m4 iconv.m4 lib-ld.m4 lib-link.m4 lib-prefix.m4 nls.m4 po.m4 progtest.m4"
-		fi
-		mkdir -p m4
-
-		for i in $PRUNE_M4; do
-			find ${S} -ignore_readdir_race -name $i -delete
-		done
-
-		bbnote Executing ACLOCAL=\"$ACLOCAL\" autoreconf -Wcross --verbose --install --force ${EXTRA_AUTORECONF} $acpaths
-		ACLOCAL="$ACLOCAL" autoreconf -Wcross -Wno-obsolete --verbose --install --force ${EXTRA_AUTORECONF} $acpaths || die "autoreconf execution failed."
-		cd $olddir
-	fi
-	if [ -e ${CONFIGURE_SCRIPT} ]; then
-		oe_runconf
-	else
-		bbnote "nothing to configure"
-	fi
-}
-
-autotools_do_compile() {
-	oe_runmake
-}
-
-autotools_do_install() {
-	oe_runmake 'DESTDIR=${D}' install
-	# Info dir listing isn't interesting at this point so remove it if it exists.
-	if [ -e "${D}${infodir}/dir" ]; then
-		rm -f ${D}${infodir}/dir
-	fi
-}
-
-inherit siteconfig
-
-EXPORT_FUNCTIONS do_configure do_compile do_install
-
-B = "${WORKDIR}/build"
diff --git a/poky/meta/classes/baremetal-image.bbclass b/poky/meta/classes/baremetal-image.bbclass
deleted file mode 100644
index cb9e250..0000000
--- a/poky/meta/classes/baremetal-image.bbclass
+++ /dev/null
@@ -1,122 +0,0 @@
-# Baremetal image class
-#
-# This class is meant to be inherited by recipes for baremetal/RTOS applications
-# It contains code that would be used by all of them, every recipe just needs to
-# override certain variables.
-#
-# For scalability purposes, code within this class focuses on the "image" wiring
-# to satisfy the OpenEmbedded image creation and testing infrastructure.
-#
-# See meta-skeleton for a working example.
-
-
-# Toolchain should be baremetal or newlib based.
-# TCLIBC="baremetal" or TCLIBC="newlib"
-COMPATIBLE_HOST:libc-musl:class-target = "null"
-COMPATIBLE_HOST:libc-glibc:class-target = "null"
-
-
-inherit rootfs-postcommands
-
-# Set some defaults, but these should be overridden by each recipe if required
-IMGDEPLOYDIR ?= "${WORKDIR}/deploy-${PN}-image-complete"
-BAREMETAL_BINNAME ?= "hello_baremetal_${MACHINE}"
-IMAGE_LINK_NAME ?= "baremetal-helloworld-image-${MACHINE}"
-IMAGE_NAME_SUFFIX ?= ""
-
-do_rootfs[dirs] = "${IMGDEPLOYDIR} ${DEPLOY_DIR_IMAGE}"
-
-do_image(){
-    install ${D}/${base_libdir}/firmware/${BAREMETAL_BINNAME}.bin ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.bin
-    install ${D}/${base_libdir}/firmware/${BAREMETAL_BINNAME}.elf ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.elf
-}
-
-do_image_complete(){
-    :
-}
-
-python do_rootfs(){
-    from oe.utils import execute_pre_post_process
-    from pathlib import Path
-
-    # Write empty manifest file to satisfy test infrastructure
-    deploy_dir = d.getVar('IMGDEPLOYDIR')
-    link_name = d.getVar('IMAGE_LINK_NAME')
-    manifest_name = d.getVar('IMAGE_MANIFEST')
-
-    Path(manifest_name).touch()
-    if os.path.exists(manifest_name) and link_name:
-        manifest_link = deploy_dir + "/" + link_name + ".manifest"
-        if manifest_link != manifest_name:
-            if os.path.lexists(manifest_link):
-                os.remove(manifest_link)
-            os.symlink(os.path.basename(manifest_name), manifest_link)
-    # A lot of postprocess commands assume the existence of rootfs/etc
-    sysconfdir = d.getVar("IMAGE_ROOTFS") + d.getVar('sysconfdir')
-    bb.utils.mkdirhier(sysconfdir)
-
-    execute_pre_post_process(d, d.getVar('ROOTFS_POSTPROCESS_COMMAND'))
-}
-
-
-# Ensure binaries, manifest and qemuboot.conf are populated in DEPLOY_DIR_IMAGE
-do_image_complete[dirs] = "${TOPDIR}"
-SSTATETASKS += "do_image_complete"
-SSTATE_SKIP_CREATION:task-image-complete = '1'
-do_image_complete[sstate-inputdirs] = "${IMGDEPLOYDIR}"
-do_image_complete[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
-do_image_complete[stamp-extra-info] = "${MACHINE_ARCH}"
-addtask do_image_complete after do_image before do_build
-
-python do_image_complete_setscene () {
-    sstate_setscene(d)
-}
-addtask do_image_complete_setscene
-
-# QEMU generic Baremetal/RTOS parameters
-QB_DEFAULT_KERNEL ?= "${IMAGE_LINK_NAME}.bin"
-QB_MEM ?= "-m 256"
-QB_DEFAULT_FSTYPE ?= "bin"
-QB_DTB ?= ""
-QB_OPT_APPEND:append = " -nographic"
-
-# RISC-V tunes set the BIOS; unset it here and instruct QEMU to
-# ignore the BIOS and boot from -kernel
-QB_DEFAULT_BIOS:qemuriscv64 = ""
-QB_DEFAULT_BIOS:qemuriscv32 = ""
-QB_OPT_APPEND:append:qemuriscv64 = " -bios none"
-QB_OPT_APPEND:append:qemuriscv32 = " -bios none"
-
-
-# Use the medium-any code model for the RISC-V 64 bit implementation,
-# since medlow can only access addresses below 0x80000000 and RAM
-# starts at 0x80000000 on RISC-V 64
-# Keep RISC-V 32 using -mcmodel=medlow (symbols lie between -2GB:2GB)
-CFLAGS:append:qemuriscv64 = " -mcmodel=medany"
-
-
-# This next part is necessary to trick the build system into thinking
-# it's building an image recipe so it generates the qemuboot.conf
-addtask do_rootfs before do_image after do_install
-addtask do_image after do_rootfs before do_image_complete
-addtask do_image_complete after do_image before do_build
-inherit qemuboot
-
-# Based on image.bbclass to make sure we build qemu
-python(){
-    # do_addto_recipe_sysroot doesn't exist for all recipes, but we need it to have
-    # /usr/bin on recipe-sysroot (qemu) populated
-    # The do_addto_recipe_sysroot dependency comes from EXTRA_IMAGEDEPENDS now;
-    # we just need to add the logic to add its dependency to do_image.
-    def extraimage_getdepends(task):
-        deps = ""
-        for dep in (d.getVar('EXTRA_IMAGEDEPENDS') or "").split():
-            # Make sure we only add it for qemu
-            if 'qemu' in dep:
-                if ":" in dep:
-                    deps += " %s " % (dep)
-                else:
-                    deps += " %s:%s" % (dep, task)
-        return deps
-    d.appendVarFlag('do_image', 'depends', extraimage_getdepends('do_populate_sysroot')) 
-}
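
As the header comment notes, meta-skeleton carries the working example; a minimal sketch of a recipe using this class (hypothetical names, illustration only) installs its firmware images where do_image expects them:

    SUMMARY = "Bare-metal hello world"
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://LICENSE;md5=<checksum>"

    inherit baremetal-image

    BAREMETAL_BINNAME = "hello_baremetal_${MACHINE}"

    do_install() {
        install -d ${D}${base_libdir}/firmware
        install -m 0755 ${B}/hello.bin ${D}${base_libdir}/firmware/${BAREMETAL_BINNAME}.bin
        install -m 0755 ${B}/hello.elf ${D}${base_libdir}/firmware/${BAREMETAL_BINNAME}.elf
    }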
diff --git a/poky/meta/classes/base.bbclass b/poky/meta/classes/base.bbclass
deleted file mode 100644
index 571b675..0000000
--- a/poky/meta/classes/base.bbclass
+++ /dev/null
@@ -1,783 +0,0 @@
-BB_DEFAULT_TASK ?= "build"
-CLASSOVERRIDE ?= "class-target"
-
-inherit patch
-inherit staging
-
-inherit mirrors
-inherit utils
-inherit utility-tasks
-inherit logging
-
-OE_EXTRA_IMPORTS ?= ""
-
-OE_IMPORTS += "os sys time oe.path oe.utils oe.types oe.package oe.packagegroup oe.sstatesig oe.lsb oe.cachedpath oe.license oe.qa oe.reproducible oe.rust oe.buildcfg ${OE_EXTRA_IMPORTS}"
-OE_IMPORTS[type] = "list"
-
-PACKAGECONFIG_CONFARGS ??= ""
-
-def oe_import(d):
-    import sys
-
-    bbpath = [os.path.join(dir, "lib") for dir in d.getVar("BBPATH").split(":")]
-    sys.path[0:0] = [dir for dir in bbpath if dir not in sys.path]
-
-    import oe.data
-    for toimport in oe.data.typed_value("OE_IMPORTS", d):
-        try:
-            # Make a python object accessible from the metadata
-            bb.utils._context[toimport.split(".", 1)[0]] = __import__(toimport)
-        except AttributeError as e:
-            bb.error("Error importing OE modules: %s" % str(e))
-    return ""
-
-# We need the oe module name space early (before INHERITs get added)
-OE_IMPORTED := "${@oe_import(d)}"
-
-inherit metadata_scm
-
-def lsb_distro_identifier(d):
-    adjust = d.getVar('LSB_DISTRO_ADJUST')
-    adjust_func = None
-    if adjust:
-        try:
-            adjust_func = globals()[adjust]
-        except KeyError:
-            pass
-    return oe.lsb.distro_identifier(adjust_func)
-
-die() {
-	bbfatal_log "$*"
-}
-
-oe_runmake_call() {
-	bbnote ${MAKE} ${EXTRA_OEMAKE} "$@"
-	${MAKE} ${EXTRA_OEMAKE} "$@"
-}
-
-oe_runmake() {
-	oe_runmake_call "$@" || die "oe_runmake failed"
-}
-
-
-def get_base_dep(d):
-    if d.getVar('INHIBIT_DEFAULT_DEPS', False):
-        return ""
-    return "${BASE_DEFAULT_DEPS}"
-
-BASE_DEFAULT_DEPS = "virtual/${HOST_PREFIX}gcc virtual/${HOST_PREFIX}compilerlibs virtual/libc"
-
-BASEDEPENDS = ""
-BASEDEPENDS:class-target = "${@get_base_dep(d)}"
-BASEDEPENDS:class-nativesdk = "${@get_base_dep(d)}"
-
-DEPENDS:prepend="${BASEDEPENDS} "
-
-FILESPATH = "${@base_set_filespath(["${FILE_DIRNAME}/${BP}", "${FILE_DIRNAME}/${BPN}", "${FILE_DIRNAME}/files"], d)}"
-# THISDIR only works properly with immediate expansion as it has to run
-# in the context of the location it's used (:=)
-THISDIR = "${@os.path.dirname(d.getVar('FILE'))}"
-
-def extra_path_elements(d):
-    path = ""
-    elements = (d.getVar('EXTRANATIVEPATH') or "").split()
-    for e in elements:
-        path = path + "${STAGING_BINDIR_NATIVE}/" + e + ":"
-    return path
-
-PATH:prepend = "${@extra_path_elements(d)}"
-
-def get_lic_checksum_file_list(d):
-    filelist = []
-    lic_files = d.getVar("LIC_FILES_CHKSUM") or ''
-    tmpdir = d.getVar("TMPDIR")
-    s = d.getVar("S")
-    b = d.getVar("B")
-    workdir = d.getVar("WORKDIR")
-
-    urls = lic_files.split()
-    for url in urls:
-        # We only care about items that are absolute paths since
-        # any others should be covered by SRC_URI.
-        try:
-            (method, host, path, user, pswd, parm) = bb.fetch.decodeurl(url)
-            if method != "file" or not path:
-                raise bb.fetch.MalformedUrl(url)
-
-            if path[0] == '/':
-                if path.startswith((tmpdir, s, b, workdir)):
-                    continue
-                filelist.append(path + ":" + str(os.path.exists(path)))
-        except bb.fetch.MalformedUrl:
-            bb.fatal(d.getVar('PN') + ": LIC_FILES_CHKSUM contains an invalid URL: " + url)
-    return " ".join(filelist)
-
-def setup_hosttools_dir(dest, toolsvar, d, fatal=True):
-    tools = d.getVar(toolsvar).split()
-    origbbenv = d.getVar("BB_ORIGENV", False)
-    path = origbbenv.getVar("PATH")
-    # Need to ignore our own scripts directories to avoid circular links
-    for p in path.split(":"):
-        if p.endswith("/scripts"):
-            path = path.replace(p, "/ignoreme")
-    bb.utils.mkdirhier(dest)
-    notfound = []
-    for tool in tools:
-        desttool = os.path.join(dest, tool)
-        if not os.path.exists(desttool):
-            # clean up dead symlink
-            if os.path.islink(desttool):
-                os.unlink(desttool)
-            srctool = bb.utils.which(path, tool, executable=True)
-            # gcc/g++ may link to ccache on some hosts, e.g.,
-            # /usr/local/bin/ccache/gcc -> /usr/bin/ccache, then which(gcc)
-            # would return /usr/local/bin/ccache/gcc, but what we need is
-            # /usr/bin/gcc; this code checks for and fixes that.
-            if "ccache" in srctool:
-                srctool = bb.utils.which(path, tool, executable=True, direction=1)
-            if srctool:
-                os.symlink(srctool, desttool)
-            else:
-                notfound.append(tool)
-
-    if notfound and fatal:
-        bb.fatal("The following required tools (as specified by HOSTTOOLS) appear to be unavailable in PATH, please install them in order to proceed:\n  %s" % " ".join(notfound))
-
-addtask fetch
-do_fetch[dirs] = "${DL_DIR}"
-do_fetch[file-checksums] = "${@bb.fetch.get_checksum_file_list(d)}"
-do_fetch[file-checksums] += " ${@get_lic_checksum_file_list(d)}"
-do_fetch[vardeps] += "SRCREV"
-do_fetch[network] = "1"
-python base_do_fetch() {
-
-    src_uri = (d.getVar('SRC_URI') or "").split()
-    if not src_uri:
-        return
-
-    try:
-        fetcher = bb.fetch2.Fetch(src_uri, d)
-        fetcher.download()
-    except bb.fetch2.BBFetchException as e:
-        bb.fatal("Bitbake Fetcher Error: " + repr(e))
-}
-
-addtask unpack after do_fetch
-do_unpack[dirs] = "${WORKDIR}"
-
-do_unpack[cleandirs] = "${@d.getVar('S') if os.path.normpath(d.getVar('S')) != os.path.normpath(d.getVar('WORKDIR')) else os.path.join('${S}', 'patches')}"
-
-python base_do_unpack() {
-    src_uri = (d.getVar('SRC_URI') or "").split()
-    if not src_uri:
-        return
-
-    try:
-        fetcher = bb.fetch2.Fetch(src_uri, d)
-        fetcher.unpack(d.getVar('WORKDIR'))
-    except bb.fetch2.BBFetchException as e:
-        bb.fatal("Bitbake Fetcher Error: " + repr(e))
-}
-
-SSTATETASKS += "do_deploy_source_date_epoch"
-
-do_deploy_source_date_epoch () {
-    mkdir -p ${SDE_DEPLOYDIR}
-    if [ -e ${SDE_FILE} ]; then
-        echo "Deploying SDE from ${SDE_FILE} -> ${SDE_DEPLOYDIR}."
-        cp -p ${SDE_FILE} ${SDE_DEPLOYDIR}/__source_date_epoch.txt
-    else
-        echo "${SDE_FILE} not found!"
-    fi
-}
-
-python do_deploy_source_date_epoch_setscene () {
-    sstate_setscene(d)
-    bb.utils.mkdirhier(d.getVar('SDE_DIR'))
-    sde_file = os.path.join(d.getVar('SDE_DEPLOYDIR'), '__source_date_epoch.txt')
-    if os.path.exists(sde_file):
-        target = d.getVar('SDE_FILE')
-        bb.debug(1, "Moving setscene SDE file %s -> %s" % (sde_file, target))
-        bb.utils.rename(sde_file, target)
-    else:
-        bb.debug(1, "%s not found!" % sde_file)
-}
-
-do_deploy_source_date_epoch[dirs] = "${SDE_DEPLOYDIR}"
-do_deploy_source_date_epoch[sstate-plaindirs] = "${SDE_DEPLOYDIR}"
-addtask do_deploy_source_date_epoch_setscene
-addtask do_deploy_source_date_epoch before do_configure after do_patch
-
-python create_source_date_epoch_stamp() {
-    # Version: 1
-    source_date_epoch = oe.reproducible.get_source_date_epoch(d, d.getVar('S'))
-    oe.reproducible.epochfile_write(source_date_epoch, d.getVar('SDE_FILE'), d)
-}
-do_unpack[postfuncs] += "create_source_date_epoch_stamp"
-
-def get_source_date_epoch_value(d):
-    return oe.reproducible.epochfile_read(d.getVar('SDE_FILE'), d)
-
-def get_layers_branch_rev(d):
-    revisions = oe.buildcfg.get_layer_revisions(d)
-    layers_branch_rev = ["%-20s = \"%s:%s\"" % (r[1], r[2], r[3]) for r in revisions]
-    i = len(layers_branch_rev)-1
-    p1 = layers_branch_rev[i].find("=")
-    s1 = layers_branch_rev[i][p1:]
-    while i > 0:
-        p2 = layers_branch_rev[i-1].find("=")
-        s2= layers_branch_rev[i-1][p2:]
-        if s1 == s2:
-            layers_branch_rev[i-1] = layers_branch_rev[i-1][0:p2]
-            i -= 1
-        else:
-            i -= 1
-            p1 = layers_branch_rev[i].find("=")
-            s1= layers_branch_rev[i][p1:]
-    return layers_branch_rev
-
-
-BUILDCFG_FUNCS ??= "buildcfg_vars get_layers_branch_rev buildcfg_neededvars"
-BUILDCFG_FUNCS[type] = "list"
-
-def buildcfg_vars(d):
-    statusvars = oe.data.typed_value('BUILDCFG_VARS', d)
-    for var in statusvars:
-        value = d.getVar(var)
-        if value is not None:
-            yield '%-20s = "%s"' % (var, value)
-
-def buildcfg_neededvars(d):
-    needed_vars = oe.data.typed_value("BUILDCFG_NEEDEDVARS", d)
-    pesteruser = []
-    for v in needed_vars:
-        val = d.getVar(v)
-        if not val or val == 'INVALID':
-            pesteruser.append(v)
-
-    if pesteruser:
-        bb.fatal('The following variable(s) were not set: %s\nPlease set them directly, or choose a MACHINE or DISTRO that sets them.' % ', '.join(pesteruser))
-
-addhandler base_eventhandler
-base_eventhandler[eventmask] = "bb.event.ConfigParsed bb.event.MultiConfigParsed bb.event.BuildStarted bb.event.RecipePreFinalise bb.event.RecipeParsed"
-python base_eventhandler() {
-    import bb.runqueue
-
-    if isinstance(e, bb.event.ConfigParsed):
-        if not d.getVar("NATIVELSBSTRING", False):
-            d.setVar("NATIVELSBSTRING", lsb_distro_identifier(d))
-        d.setVar("ORIGNATIVELSBSTRING", d.getVar("NATIVELSBSTRING", False))
-        d.setVar('BB_VERSION', bb.__version__)
-
-    # There might be no bb.event.ConfigParsed event if bitbake server is
-    # running, so check bb.event.BuildStarted too to make sure ${HOSTTOOLS_DIR}
-    # exists.
-    if isinstance(e, bb.event.ConfigParsed) or \
-            (isinstance(e, bb.event.BuildStarted) and not os.path.exists(d.getVar('HOSTTOOLS_DIR'))):
-        # Works with the line in layer.conf which changes PATH to point here
-        setup_hosttools_dir(d.getVar('HOSTTOOLS_DIR'), 'HOSTTOOLS', d)
-        setup_hosttools_dir(d.getVar('HOSTTOOLS_DIR'), 'HOSTTOOLS_NONFATAL', d, fatal=False)
-
-    if isinstance(e, bb.event.MultiConfigParsed):
-        # We need to expand SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS in each of the multiconfig data stores
-        # own contexts so the variables get expanded correctly for that arch, then inject back into
-        # the main data store.
-        deps = []
-        for config in e.mcdata:
-            deps.append(e.mcdata[config].getVar("SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS"))
-        deps = " ".join(deps)
-        e.mcdata[''].setVar("SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS", deps)
-
-    if isinstance(e, bb.event.BuildStarted):
-        localdata = bb.data.createCopy(d)
-        statuslines = []
-        for func in oe.data.typed_value('BUILDCFG_FUNCS', localdata):
-            g = globals()
-            if func not in g:
-                bb.warn("Build configuration function '%s' does not exist" % func)
-            else:
-                flines = g[func](localdata)
-                if flines:
-                    statuslines.extend(flines)
-
-        statusheader = d.getVar('BUILDCFG_HEADER')
-        if statusheader:
-            bb.plain('\n%s\n%s\n' % (statusheader, '\n'.join(statuslines)))
-
-    # This code is to silence warnings where the SDK variables overwrite the
-    # target ones and we'd see duplicate key names overwriting each other
-    # for various PREFERRED_PROVIDERS
-    if isinstance(e, bb.event.RecipePreFinalise):
-        if d.getVar("TARGET_PREFIX") == d.getVar("SDK_PREFIX"):
-            d.delVar("PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}binutils")
-            d.delVar("PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}gcc")
-            d.delVar("PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}g++")
-            d.delVar("PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}compilerlibs")
-
-    if isinstance(e, bb.event.RecipeParsed):
-        #
-        # If we have multiple providers of virtual/X and a PREFERRED_PROVIDER_virtual/X is set
-        # skip parsing for all the other providers which will mean they get uninstalled from the
-        # sysroot since they're now "unreachable". This makes switching virtual/kernel work in 
-        # particular.
-        #
-        pn = d.getVar('PN')
-        source_mirror_fetch = d.getVar('SOURCE_MIRROR_FETCH', False)
-        if not source_mirror_fetch:
-            provs = (d.getVar("PROVIDES") or "").split()
-            multiprovidersallowed = (d.getVar("BB_MULTI_PROVIDER_ALLOWED") or "").split()
-            for p in provs:
-                if p.startswith("virtual/") and p not in multiprovidersallowed:
-                    profprov = d.getVar("PREFERRED_PROVIDER_" + p)
-                    if profprov and pn != profprov:
-                        raise bb.parse.SkipRecipe("PREFERRED_PROVIDER_%s set to %s, not %s" % (p, profprov, pn))
-}
-
-CONFIGURESTAMPFILE = "${WORKDIR}/configure.sstate"
-CLEANBROKEN = "0"
-
-addtask configure after do_patch
-do_configure[dirs] = "${B}"
-base_do_configure() {
-	if [ -n "${CONFIGURESTAMPFILE}" -a -e "${CONFIGURESTAMPFILE}" ]; then
-		if [ "`cat ${CONFIGURESTAMPFILE}`" != "${BB_TASKHASH}" ]; then
-			cd ${B}
-			if [ "${CLEANBROKEN}" != "1" -a \( -e Makefile -o -e makefile -o -e GNUmakefile \) ]; then
-				oe_runmake clean
-			fi
-			# -ignore_readdir_race does not work correctly with -delete;
-			# use xargs to avoid spurious build failures
-			find ${B} -ignore_readdir_race -name \*.la -type f -print0 | xargs -0 rm -f
-		fi
-	fi
-	if [ -n "${CONFIGURESTAMPFILE}" ]; then
-		mkdir -p `dirname ${CONFIGURESTAMPFILE}`
-		echo ${BB_TASKHASH} > ${CONFIGURESTAMPFILE}
-	fi
-}
-
-addtask compile after do_configure
-do_compile[dirs] = "${B}"
-base_do_compile() {
-	if [ -e Makefile -o -e makefile -o -e GNUmakefile ]; then
-		oe_runmake || die "make failed"
-	else
-		bbnote "nothing to compile"
-	fi
-}
-
-addtask install after do_compile
-do_install[dirs] = "${B}"
-# Remove and re-create ${D} so that is it guaranteed to be empty
-do_install[cleandirs] = "${D}"
-
-base_do_install() {
-	:
-}
-
-base_do_package() {
-	:
-}
-
-addtask build after do_populate_sysroot
-do_build[noexec] = "1"
-do_build[recrdeptask] += "do_deploy"
-do_build () {
-	:
-}
-
-def set_packagetriplet(d):
-    archs = []
-    tos = []
-    tvs = []
-
-    archs.append(d.getVar("PACKAGE_ARCHS").split())
-    tos.append(d.getVar("TARGET_OS"))
-    tvs.append(d.getVar("TARGET_VENDOR"))
-
-    def settriplet(d, varname, archs, tos, tvs):
-        triplets = []
-        for i in range(len(archs)):
-            for arch in archs[i]:
-                triplets.append(arch + tvs[i] + "-" + tos[i])
-        triplets.reverse()
-        d.setVar(varname, " ".join(triplets))
-
-    settriplet(d, "PKGTRIPLETS", archs, tos, tvs)
-
-    variants = d.getVar("MULTILIB_VARIANTS") or ""
-    for item in variants.split():
-        localdata = bb.data.createCopy(d)
-        overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + item
-        localdata.setVar("OVERRIDES", overrides)
-
-        archs.append(localdata.getVar("PACKAGE_ARCHS").split())
-        tos.append(localdata.getVar("TARGET_OS"))
-        tvs.append(localdata.getVar("TARGET_VENDOR"))
-
-    settriplet(d, "PKGMLTRIPLETS", archs, tos, tvs)
-
-python () {
-    import string, re
-
-    # Handle backfilling
-    oe.utils.features_backfill("DISTRO_FEATURES", d)
-    oe.utils.features_backfill("MACHINE_FEATURES", d)
-
-    if d.getVar("S")[-1] == '/':
-        bb.warn("Recipe %s sets S variable with trailing slash '%s', remove it" % (d.getVar("PN"), d.getVar("S")))
-    if d.getVar("B")[-1] == '/':
-        bb.warn("Recipe %s sets B variable with trailing slash '%s', remove it" % (d.getVar("PN"), d.getVar("B")))
-
-    if os.path.normpath(d.getVar("WORKDIR")) != os.path.normpath(d.getVar("S")):
-        d.appendVar("PSEUDO_IGNORE_PATHS", ",${S}")
-    if os.path.normpath(d.getVar("WORKDIR")) != os.path.normpath(d.getVar("B")):
-        d.appendVar("PSEUDO_IGNORE_PATHS", ",${B}")
-
-    # To add a recipe to the skip list, set:
-    #   SKIP_RECIPE[pn] = "message"
-    pn = d.getVar('PN')
-    skip_msg = d.getVarFlag('SKIP_RECIPE', pn)
-    if skip_msg:
-        bb.debug(1, "Skipping %s %s" % (pn, skip_msg))
-        raise bb.parse.SkipRecipe("Recipe will be skipped because: %s" % (skip_msg))
-
-    # Handle PACKAGECONFIG
-    #
-    # These take the form:
-    #
-    # PACKAGECONFIG ??= "<default options>"
-    # PACKAGECONFIG[foo] = "--enable-foo,--disable-foo,foo_depends,foo_runtime_depends,foo_runtime_recommends,foo_conflict_packageconfig"
-    pkgconfigflags = d.getVarFlags("PACKAGECONFIG") or {}
-    if pkgconfigflags:
-        pkgconfig = (d.getVar('PACKAGECONFIG') or "").split()
-        pn = d.getVar("PN")
-
-        mlprefix = d.getVar("MLPREFIX")
-
-        def expandFilter(appends, extension, prefix):
-            appends = bb.utils.explode_deps(d.expand(" ".join(appends)))
-            newappends = []
-            for a in appends:
-                if a.endswith("-native") or ("-cross-" in a):
-                    newappends.append(a)
-                elif a.startswith("virtual/"):
-                    subs = a.split("/", 1)[1]
-                    if subs.startswith(prefix):
-                        newappends.append(a + extension)
-                    else:
-                        newappends.append("virtual/" + prefix + subs + extension)
-                else:
-                    if a.startswith(prefix):
-                        newappends.append(a + extension)
-                    else:
-                        newappends.append(prefix + a + extension)
-            return newappends
-
-        def appendVar(varname, appends):
-            if not appends:
-                return
-            if varname.find("DEPENDS") != -1:
-                if bb.data.inherits_class('nativesdk', d) or bb.data.inherits_class('cross-canadian', d) :
-                    appends = expandFilter(appends, "", "nativesdk-")
-                elif bb.data.inherits_class('native', d):
-                    appends = expandFilter(appends, "-native", "")
-                elif mlprefix:
-                    appends = expandFilter(appends, "", mlprefix)
-            varname = d.expand(varname)
-            d.appendVar(varname, " " + " ".join(appends))
-
-        extradeps = []
-        extrardeps = []
-        extrarrecs = []
-        extraconf = []
-        for flag, flagval in sorted(pkgconfigflags.items()):
-            items = flagval.split(",")
-            num = len(items)
-            if num > 6:
-                bb.error("%s: PACKAGECONFIG[%s] Only enable,disable,depend,rdepend,rrecommend,conflict_packageconfig can be specified!"
-                    % (d.getVar('PN'), flag))
-
-            if flag in pkgconfig:
-                if num >= 3 and items[2]:
-                    extradeps.append(items[2])
-                if num >= 4 and items[3]:
-                    extrardeps.append(items[3])
-                if num >= 5 and items[4]:
-                    extrarrecs.append(items[4])
-                if num >= 1 and items[0]:
-                    extraconf.append(items[0])
-            elif num >= 2 and items[1]:
-                    extraconf.append(items[1])
-
-            if num >= 6 and items[5]:
-                conflicts = set(items[5].split())
-                invalid = conflicts.difference(set(pkgconfigflags.keys()))
-                if invalid:
-                    bb.error("%s: PACKAGECONFIG[%s] Invalid conflict package config%s '%s' specified."
-                        % (d.getVar('PN'), flag, 's' if len(invalid) > 1 else '', ' '.join(invalid)))
-
-                if flag in pkgconfig:
-                    intersec = conflicts.intersection(set(pkgconfig))
-                    if intersec:
-                        bb.fatal("%s: PACKAGECONFIG[%s] Conflict package config%s '%s' set in PACKAGECONFIG."
-                            % (d.getVar('PN'), flag, 's' if len(intersec) > 1 else '', ' '.join(intersec)))
-
-        appendVar('DEPENDS', extradeps)
-        appendVar('RDEPENDS:${PN}', extrardeps)
-        appendVar('RRECOMMENDS:${PN}', extrarrecs)
-        appendVar('PACKAGECONFIG_CONFARGS', extraconf)
-
-    pn = d.getVar('PN')
-    license = d.getVar('LICENSE')
-    if license == "INVALID" and pn != "defaultpkgname":
-        bb.fatal('This recipe does not have the LICENSE field set (%s)' % pn)
-
-    if bb.data.inherits_class('license', d):
-        check_license_format(d)
-        unmatched_license_flags = check_license_flags(d)
-        if unmatched_license_flags:
-            if len(unmatched_license_flags) == 1:
-                message = "because it has a restricted license '{0}'. Which is not listed in LICENSE_FLAGS_ACCEPTED".format(unmatched_license_flags[0])
-            else:
-                message = "because it has restricted licenses {0}. Which are not listed in LICENSE_FLAGS_ACCEPTED".format(
-                    ", ".join("'{0}'".format(f) for f in unmatched_license_flags))
-            bb.debug(1, "Skipping %s %s" % (pn, message))
-            raise bb.parse.SkipRecipe(message)
-
-    # If we're building a target package we need to use fakeroot (pseudo)
-    # in order to capture permissions, owners, groups and special files
-    if not bb.data.inherits_class('native', d) and not bb.data.inherits_class('cross', d):
-        d.appendVarFlag('do_prepare_recipe_sysroot', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
-        d.appendVarFlag('do_install', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
-        d.setVarFlag('do_install', 'fakeroot', '1')
-        d.appendVarFlag('do_package', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
-        d.setVarFlag('do_package', 'fakeroot', '1')
-        d.setVarFlag('do_package_setscene', 'fakeroot', '1')
-        d.appendVarFlag('do_package_setscene', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
-        d.setVarFlag('do_devshell', 'fakeroot', '1')
-        d.appendVarFlag('do_devshell', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
-
-    need_machine = d.getVar('COMPATIBLE_MACHINE')
-    if need_machine and not d.getVar('PARSE_ALL_RECIPES', False):
-        import re
-        compat_machines = (d.getVar('MACHINEOVERRIDES') or "").split(":")
-        for m in compat_machines:
-            if re.match(need_machine, m):
-                break
-        else:
-            raise bb.parse.SkipRecipe("incompatible with machine %s (not in COMPATIBLE_MACHINE)" % d.getVar('MACHINE'))
-
-    source_mirror_fetch = d.getVar('SOURCE_MIRROR_FETCH', False) or d.getVar('PARSE_ALL_RECIPES', False)
-    if not source_mirror_fetch:
-        need_host = d.getVar('COMPATIBLE_HOST')
-        if need_host:
-            import re
-            this_host = d.getVar('HOST_SYS')
-            if not re.match(need_host, this_host):
-                raise bb.parse.SkipRecipe("incompatible with host %s (not in COMPATIBLE_HOST)" % this_host)
-
-        bad_licenses = (d.getVar('INCOMPATIBLE_LICENSE') or "").split()
-
-        check_license = False if pn.startswith("nativesdk-") else True
-        for t in ["-native", "-cross-${TARGET_ARCH}", "-cross-initial-${TARGET_ARCH}",
-              "-crosssdk-${SDK_SYS}", "-crosssdk-initial-${SDK_SYS}",
-              "-cross-canadian-${TRANSLATED_TARGET_ARCH}"]:
-            if pn.endswith(d.expand(t)):
-                check_license = False
-        if pn.startswith("gcc-source-"):
-            check_license = False
-
-        if check_license and bad_licenses:
-            bad_licenses = expand_wildcard_licenses(d, bad_licenses)
-
-            exceptions = (d.getVar("INCOMPATIBLE_LICENSE_EXCEPTIONS") or "").split()
-
-            for lic_exception in exceptions:
-                if ":" in lic_exception:
-                    lic_exception = lic_exception.split(":")[1]
-                if lic_exception in oe.license.obsolete_license_list():
-                    bb.fatal("Obsolete license %s used in INCOMPATIBLE_LICENSE_EXCEPTIONS" % lic_exception)
-
-            pkgs = d.getVar('PACKAGES').split()
-            skipped_pkgs = {}
-            unskipped_pkgs = []
-            for pkg in pkgs:
-                remaining_bad_licenses = oe.license.apply_pkg_license_exception(pkg, bad_licenses, exceptions)
-
-                incompatible_lic = incompatible_license(d, remaining_bad_licenses, pkg)
-                if incompatible_lic:
-                    skipped_pkgs[pkg] = incompatible_lic
-                else:
-                    unskipped_pkgs.append(pkg)
-
-            if unskipped_pkgs:
-                for pkg in skipped_pkgs:
-                    bb.debug(1, "Skipping the package %s at do_rootfs because of incompatible license(s): %s" % (pkg, ' '.join(skipped_pkgs[pkg])))
-                    d.setVar('_exclude_incompatible-' + pkg, ' '.join(skipped_pkgs[pkg]))
-                for pkg in unskipped_pkgs:
-                    bb.debug(1, "Including the package %s" % pkg)
-            else:
-                incompatible_lic = incompatible_license(d, bad_licenses)
-                for pkg in skipped_pkgs:
-                    incompatible_lic += skipped_pkgs[pkg]
-                incompatible_lic = sorted(list(set(incompatible_lic)))
-
-                if incompatible_lic:
-                    bb.debug(1, "Skipping recipe %s because of incompatible license(s): %s" % (pn, ' '.join(incompatible_lic)))
-                    raise bb.parse.SkipRecipe("it has incompatible license(s): %s" % ' '.join(incompatible_lic))
-
-    needsrcrev = False
-    srcuri = d.getVar('SRC_URI')
-    for uri_string in srcuri.split():
-        uri = bb.fetch.URI(uri_string)
-        # Also check downloadfilename as the URL path might not be useful for sniffing
-        path = uri.params.get("downloadfilename", uri.path)
-
-        # HTTP/FTP use the wget fetcher
-        if uri.scheme in ("http", "https", "ftp"):
-            d.appendVarFlag('do_fetch', 'depends', ' wget-native:do_populate_sysroot')
-
-        # Svn packages should DEPEND on subversion-native
-        if uri.scheme == "svn":
-            needsrcrev = True
-            d.appendVarFlag('do_fetch', 'depends', ' subversion-native:do_populate_sysroot')
-
-        # Git packages should DEPEND on git-native
-        elif uri.scheme in ("git", "gitsm"):
-            needsrcrev = True
-            d.appendVarFlag('do_fetch', 'depends', ' git-native:do_populate_sysroot')
-
-        # Mercurial packages should DEPEND on mercurial-native
-        elif uri.scheme == "hg":
-            needsrcrev = True
-            d.appendVar("EXTRANATIVEPATH", ' python3-native ')
-            d.appendVarFlag('do_fetch', 'depends', ' mercurial-native:do_populate_sysroot')
-
-        # Perforce packages support SRCREV = "${AUTOREV}"
-        elif uri.scheme == "p4":
-            needsrcrev = True
-
-        # OSC packages should DEPEND on osc-native
-        elif uri.scheme == "osc":
-            d.appendVarFlag('do_fetch', 'depends', ' osc-native:do_populate_sysroot')
-
-        elif uri.scheme == "npm":
-            d.appendVarFlag('do_fetch', 'depends', ' nodejs-native:do_populate_sysroot')
-
-        elif uri.scheme == "repo":
-            needsrcrev = True
-            d.appendVarFlag('do_fetch', 'depends', ' repo-native:do_populate_sysroot')
-
-        # *.lz4 should DEPEND on lz4-native for unpacking
-        if path.endswith('.lz4'):
-            d.appendVarFlag('do_unpack', 'depends', ' lz4-native:do_populate_sysroot')
-
-        # *.zst should DEPEND on zstd-native for unpacking
-        elif path.endswith('.zst'):
-            d.appendVarFlag('do_unpack', 'depends', ' zstd-native:do_populate_sysroot')
-
-        # *.lz should DEPEND on lzip-native for unpacking
-        elif path.endswith('.lz'):
-            d.appendVarFlag('do_unpack', 'depends', ' lzip-native:do_populate_sysroot')
-
-        # *.xz should DEPEND on xz-native for unpacking
-        elif path.endswith('.xz') or path.endswith('.txz'):
-            d.appendVarFlag('do_unpack', 'depends', ' xz-native:do_populate_sysroot')
-
-        # .zip should DEPEND on unzip-native for unpacking
-        elif path.endswith('.zip') or path.endswith('.jar'):
-            d.appendVarFlag('do_unpack', 'depends', ' unzip-native:do_populate_sysroot')
-
-        # Some rpm files may be compressed internally using xz (for example, rpms from Fedora)
-        elif path.endswith('.rpm'):
-            d.appendVarFlag('do_unpack', 'depends', ' xz-native:do_populate_sysroot')
-
-        # *.deb should DEPEND on xz-native for unpacking
-        elif path.endswith('.deb'):
-            d.appendVarFlag('do_unpack', 'depends', ' xz-native:do_populate_sysroot')
-
-    if needsrcrev:
-        d.setVar("SRCPV", "${@bb.fetch2.get_srcrev(d)}")
-
-        # Gather all named SRCREVs to add to the sstate hash calculation
-        # This anonymous python snippet is called multiple times so we
-        # need to be careful to not double up the appends here and cause
-        # the base hash to mismatch the task hash
-        for uri in srcuri.split():
-            parm = bb.fetch.decodeurl(uri)[5]
-            uri_names = parm.get("name", "").split(",")
-            for uri_name in filter(None, uri_names):
-                srcrev_name = "SRCREV_{}".format(uri_name)
-                if srcrev_name not in (d.getVarFlag("do_fetch", "vardeps") or "").split():
-                    d.appendVarFlag("do_fetch", "vardeps", " {}".format(srcrev_name))
-
-    set_packagetriplet(d)
-
-    # 'multimachine' handling
-    mach_arch = d.getVar('MACHINE_ARCH')
-    pkg_arch = d.getVar('PACKAGE_ARCH')
-
-    if (pkg_arch == mach_arch):
-        # Already machine specific - nothing further to do
-        return
-
-    #
-    # We always try to scan SRC_URI for urls with machine overrides
-    # unless the package sets SRC_URI_OVERRIDES_PACKAGE_ARCH=0
-    #
-    override = d.getVar('SRC_URI_OVERRIDES_PACKAGE_ARCH')
-    if override != '0':
-        paths = []
-        fpaths = (d.getVar('FILESPATH') or '').split(':')
-        machine = d.getVar('MACHINE')
-        for p in fpaths:
-            if os.path.basename(p) == machine and os.path.isdir(p):
-                paths.append(p)
-
-        if paths:
-            for s in srcuri.split():
-                if not s.startswith("file://"):
-                    continue
-                fetcher = bb.fetch2.Fetch([s], d)
-                local = fetcher.localpath(s)
-                for mp in paths:
-                    if local.startswith(mp):
-                        #bb.note("overriding PACKAGE_ARCH from %s to %s for %s" % (pkg_arch, mach_arch, pn))
-                        d.setVar('PACKAGE_ARCH', "${MACHINE_ARCH}")
-                        return
-
-    packages = d.getVar('PACKAGES').split()
-    for pkg in packages:
-        pkgarch = d.getVar("PACKAGE_ARCH_%s" % pkg)
-
-        # We could look for != PACKAGE_ARCH here but how to choose
-        # if multiple differences are present?
-        # Look through PACKAGE_ARCHS for the priority order?
-        if pkgarch and pkgarch == mach_arch:
-            d.setVar('PACKAGE_ARCH', "${MACHINE_ARCH}")
-            bb.warn("Recipe %s is marked as only being architecture specific but seems to have machine specific packages?! The recipe may as well mark itself as machine specific directly." % d.getVar("PN"))
-}
-
-addtask cleansstate after do_clean
-python do_cleansstate() {
-        sstate_clean_cachefiles(d)
-}
-addtask cleanall after do_cleansstate
-do_cleansstate[nostamp] = "1"
-
-python do_cleanall() {
-    src_uri = (d.getVar('SRC_URI') or "").split()
-    if not src_uri:
-        return
-
-    try:
-        fetcher = bb.fetch2.Fetch(src_uri, d)
-        fetcher.clean()
-    except bb.fetch2.BBFetchException as e:
-        bb.fatal(str(e))
-}
-do_cleanall[nostamp] = "1"
-
-
-EXPORT_FUNCTIONS do_fetch do_unpack do_configure do_compile do_install do_package
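
The PACKAGECONFIG handling in the anonymous python above maps each enabled flag onto configure arguments, DEPENDS, RDEPENDS and RRECOMMENDS. A hedged sketch of the typical recipe-side declaration and a user-side override (feature and recipe names are illustrative):

    # Recipe side
    PACKAGECONFIG ??= "ssl"
    PACKAGECONFIG[ssl]  = "--with-ssl,--without-ssl,openssl"
    PACKAGECONFIG[ldap] = "--enable-ldap,--disable-ldap,openldap,libldap"

    # local.conf or bbappend side, enabling an extra feature for one recipe
    PACKAGECONFIG:append:pn-example = " ldap"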
diff --git a/poky/meta/classes/bash-completion.bbclass b/poky/meta/classes/bash-completion.bbclass
deleted file mode 100644
index 803b2ca..0000000
--- a/poky/meta/classes/bash-completion.bbclass
+++ /dev/null
@@ -1,7 +0,0 @@
-DEPENDS:append:class-target = " bash-completion"
-
-PACKAGES += "${PN}-bash-completion"
-
-FILES:${PN}-bash-completion = "${datadir}/bash-completion ${sysconfdir}/bash_completion.d"
-
-RDEPENDS:${PN}-bash-completion = "bash-completion"
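
A recipe only needs to inherit the class; completion scripts installed under ${datadir}/bash-completion then end up in the ${PN}-bash-completion package. Sketch (hypothetical file name, illustration only):

    inherit bash-completion

    do_install:append() {
        install -Dm 0644 ${WORKDIR}/example.bash \
            ${D}${datadir}/bash-completion/completions/example
    }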
diff --git a/poky/meta/classes/bin_package.bbclass b/poky/meta/classes/bin_package.bbclass
deleted file mode 100644
index f0407e1..0000000
--- a/poky/meta/classes/bin_package.bbclass
+++ /dev/null
@@ -1,40 +0,0 @@
-#
-# ex:ts=4:sw=4:sts=4:et
-# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
-#
-# Common variables and tasks for the binary package recipe.
-# Basic principle:
-# * The files have been unpacked to ${S} by base.bbclass
-# * Skip do_configure and do_compile
-# * Use do_install to install the files to ${D}
-#
-# Note:
-# The "subdir" parameter in the SRC_URI is useful when the input package
-# is rpm, ipk, deb and so on, for example:
-#
-# SRC_URI = "http://foo.com/foo-1.0-r1.i586.rpm;subdir=foo-1.0"
-#
-# Then the files would be unpacked to ${WORKDIR}/foo-1.0, otherwise
-# they would be in ${WORKDIR}.
-#
-
-# Skip the unwanted steps
-do_configure[noexec] = "1"
-do_compile[noexec] = "1"
-
-# Install the files to ${D}
-bin_package_do_install () {
-    # Do it carefully
-    [ -d "${S}" ] || exit 1
-    if [ -z "$(ls -A ${S})" ]; then
-        bbfatal bin_package has nothing to install. Be sure the SRC_URI unpacks into S.
-    fi
-    cd ${S}
-    install -d ${D}${base_prefix}
-    tar --no-same-owner --exclude='./patches' --exclude='./.pc' -cpf - . \
-        | tar --no-same-owner -xpf - -C ${D}${base_prefix}
-}
-
-FILES:${PN} = "/"
-
-EXPORT_FUNCTIONS do_install
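
Following the header comment, a prebuilt-binary recipe typically provides only SRC_URI (reusing the subdir example above) and inherits the class; no configure or compile step runs and the unpacked tree is copied verbatim into ${D}. Sketch, illustration only:

    SUMMARY = "Prebuilt foo package"
    LICENSE = "CLOSED"

    SRC_URI = "http://foo.com/foo-1.0-r1.i586.rpm;subdir=foo-1.0"

    inherit bin_package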
diff --git a/poky/meta/classes/binconfig-disabled.bbclass b/poky/meta/classes/binconfig-disabled.bbclass
deleted file mode 100644
index e8ac41b..0000000
--- a/poky/meta/classes/binconfig-disabled.bbclass
+++ /dev/null
@@ -1,30 +0,0 @@
-#
-# Class to disable binconfig files instead of installing them
-#
-
-# The list of scripts which should be disabled.
-BINCONFIG ?= ""
-
-FILES:${PN}-dev += "${bindir}/*-config"
-
-do_install:append () {
-	for x in ${BINCONFIG}; do
-		# Make the disabled script emit invalid parameters for those configure
-		# scripts which call it without checking the return code.
-		echo "#!/bin/sh" > ${D}$x
-		echo "echo 'ERROR: $x should not be used, use an alternative such as pkg-config' >&2" >> ${D}$x
-		echo "echo '--should-not-have-used-$x'" >> ${D}$x
-		echo "exit 1" >> ${D}$x
-		chmod +x ${D}$x
-	done
-}
-
-SYSROOT_PREPROCESS_FUNCS += "binconfig_disabled_sysroot_preprocess"
-
-binconfig_disabled_sysroot_preprocess () {
-	for x in ${BINCONFIG}; do
-		configname=`basename $x`
-		install -d ${SYSROOT_DESTDIR}${bindir_crossscripts}
-		install ${D}$x ${SYSROOT_DESTDIR}${bindir_crossscripts}
-	done
-}
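
A recipe using this class lists its legacy *-config scripts in BINCONFIG; the class then replaces them with stubs that fail loudly so dependent builds move to pkg-config. Sketch (hypothetical script name):

    inherit binconfig-disabled

    BINCONFIG = "${bindir}/example-config"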
diff --git a/poky/meta/classes/binconfig.bbclass b/poky/meta/classes/binconfig.bbclass
deleted file mode 100644
index 6e0c882..0000000
--- a/poky/meta/classes/binconfig.bbclass
+++ /dev/null
@@ -1,54 +0,0 @@
-FILES:${PN}-dev += "${bindir}/*-config"
- 
-# The namespaces can clash here hence the two step replace
-def get_binconfig_mangle(d):
-    s = "-e ''"
-    if not bb.data.inherits_class('native', d):
-        optional_quote = r"\(\"\?\)"
-        s += " -e 's:=%s${base_libdir}:=\\1OEBASELIBDIR:;'" % optional_quote
-        s += " -e 's:=%s${libdir}:=\\1OELIBDIR:;'" % optional_quote
-        s += " -e 's:=%s${includedir}:=\\1OEINCDIR:;'" % optional_quote
-        s += " -e 's:=%s${datadir}:=\\1OEDATADIR:'" % optional_quote
-        s += " -e 's:=%s${prefix}/:=\\1OEPREFIX/:'" % optional_quote
-        s += " -e 's:=%s${exec_prefix}/:=\\1OEEXECPREFIX/:'" % optional_quote
-        s += " -e 's:-L${libdir}:-LOELIBDIR:;'"
-        s += " -e 's:-I${includedir}:-IOEINCDIR:;'"
-        s += " -e 's:-L${WORKDIR}:-LOELIBDIR:'"
-        s += " -e 's:-I${WORKDIR}:-IOEINCDIR:'"
-        s += " -e 's:OEBASELIBDIR:${STAGING_BASELIBDIR}:;'"
-        s += " -e 's:OELIBDIR:${STAGING_LIBDIR}:;'"
-        s += " -e 's:OEINCDIR:${STAGING_INCDIR}:;'"
-        s += " -e 's:OEDATADIR:${STAGING_DATADIR}:'"
-        s += " -e 's:OEPREFIX:${STAGING_DIR_HOST}${prefix}:'"
-        s += " -e 's:OEEXECPREFIX:${STAGING_DIR_HOST}${exec_prefix}:'"
-        if d.getVar("OE_BINCONFIG_EXTRA_MANGLE", False):
-            s += d.getVar("OE_BINCONFIG_EXTRA_MANGLE")
-
-    return s
-
-BINCONFIG_GLOB ?= "*-config"
-
-PACKAGE_PREPROCESS_FUNCS += "binconfig_package_preprocess"
-
-binconfig_package_preprocess () {
-	for config in `find ${PKGD} -type f -name '${BINCONFIG_GLOB}'`; do
-		sed -i \
-		    -e 's:${STAGING_BASELIBDIR}:${base_libdir}:g;' \
-		    -e 's:${STAGING_LIBDIR}:${libdir}:g;' \
-		    -e 's:${STAGING_INCDIR}:${includedir}:g;' \
-		    -e 's:${STAGING_DATADIR}:${datadir}:' \
-		    -e 's:${STAGING_DIR_HOST}${prefix}:${prefix}:' \
-                    $config
-	done
-}
-
-SYSROOT_PREPROCESS_FUNCS += "binconfig_sysroot_preprocess"
-
-binconfig_sysroot_preprocess () {
-	for config in `find ${S} -type f -name '${BINCONFIG_GLOB}'` `find ${B} -type f -name '${BINCONFIG_GLOB}'`; do
-		configname=`basename $config`
-		install -d ${SYSROOT_DESTDIR}${bindir_crossscripts}
-		sed ${@get_binconfig_mangle(d)} $config > ${SYSROOT_DESTDIR}${bindir_crossscripts}/$configname
-		chmod u+x ${SYSROOT_DESTDIR}${bindir_crossscripts}/$configname
-	done
-}
diff --git a/poky/meta/classes/buildhistory.bbclass b/poky/meta/classes/buildhistory.bbclass
index 4ba9ec8..395f594 100644
--- a/poky/meta/classes/buildhistory.bbclass
+++ b/poky/meta/classes/buildhistory.bbclass
@@ -6,8 +6,10 @@
 # Copyright (C) 2011-2016 Intel Corporation
 # Copyright (C) 2007-2011 Koen Kooi <koen@openembedded.org>
 #
+# SPDX-License-Identifier: MIT
+#
 
-inherit image-artifact-names
+IMAGE_CLASSES += "image-artifact-names"
 
 BUILDHISTORY_FEATURES ?= "image package sdk"
 BUILDHISTORY_DIR ?= "${TOPDIR}/buildhistory"
diff --git a/poky/meta/classes/buildstats-summary.bbclass b/poky/meta/classes/buildstats-summary.bbclass
index f9b241b..12e8f17 100644
--- a/poky/meta/classes/buildstats-summary.bbclass
+++ b/poky/meta/classes/buildstats-summary.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Summarize sstate usage at the end of the build
 python buildstats_summary () {
     import collections
diff --git a/poky/meta/classes/buildstats.bbclass b/poky/meta/classes/buildstats.bbclass
deleted file mode 100644
index 132ecaa..0000000
--- a/poky/meta/classes/buildstats.bbclass
+++ /dev/null
@@ -1,296 +0,0 @@
-BUILDSTATS_BASE = "${TMPDIR}/buildstats/"
-
-################################################################################
-# Build statistics gathering.
-#
-# The CPU and Time gathering/tracking functions and bbevent inspiration
-# were written by Christopher Larson.
-#
-################################################################################
-
-def get_buildprocess_cputime(pid):
-    with open("/proc/%d/stat" % pid, "r") as f:
-        fields = f.readline().rstrip().split()
-    # 13: utime, 14: stime, 15: cutime, 16: cstime
-    return sum(int(field) for field in fields[13:16])
-
-def get_process_cputime(pid):
-    import resource
-    with open("/proc/%d/stat" % pid, "r") as f:
-        fields = f.readline().rstrip().split()
-    stats = { 
-        'utime'  : fields[13],
-        'stime'  : fields[14], 
-        'cutime' : fields[15], 
-        'cstime' : fields[16],  
-    }
-    iostats = {}
-    if os.path.isfile("/proc/%d/io" % pid):
-        with open("/proc/%d/io" % pid, "r") as f:
-            while True:
-                i = f.readline().strip()
-                if not i:
-                    break
-                if not ":" in i:
-                    # one more extra line is appended (empty or containing "0")
-                    # most probably due to a race condition in the kernel while
-                    # updating IO stats
-                    break
-                i = i.split(": ")
-                iostats[i[0]] = i[1]
-    resources = resource.getrusage(resource.RUSAGE_SELF)
-    childres = resource.getrusage(resource.RUSAGE_CHILDREN)
-    return stats, iostats, resources, childres
-
-def get_cputime():
-    with open("/proc/stat", "r") as f:
-        fields = f.readline().rstrip().split()[1:]
-    return sum(int(field) for field in fields)
-
-def set_timedata(var, d, server_time):
-    d.setVar(var, server_time)
-
-def get_timedata(var, d, end_time):
-    oldtime = d.getVar(var, False)
-    if oldtime is None:
-        return
-    return end_time - oldtime
-
-def set_buildtimedata(var, d):
-    import time
-    time = time.time()
-    cputime = get_cputime()
-    proctime = get_buildprocess_cputime(os.getpid())
-    d.setVar(var, (time, cputime, proctime))
-
-def get_buildtimedata(var, d):
-    import time
-    timedata = d.getVar(var, False)
-    if timedata is None:
-        return
-    oldtime, oldcpu, oldproc = timedata
-    procdiff = get_buildprocess_cputime(os.getpid()) - oldproc
-    cpudiff = get_cputime() - oldcpu
-    end_time = time.time()
-    timediff = end_time - oldtime
-    if cpudiff > 0:
-        cpuperc = float(procdiff) * 100 / cpudiff
-    else:
-        cpuperc = None
-    return timediff, cpuperc
-
-def write_task_data(status, logfile, e, d):
-    with open(os.path.join(logfile), "a") as f:
-        elapsedtime = get_timedata("__timedata_task", d, e.time)
-        if elapsedtime:
-            f.write(d.expand("${PF}: %s\n" % e.task))
-            f.write(d.expand("Elapsed time: %0.2f seconds\n" % elapsedtime))
-            cpu, iostats, resources, childres = get_process_cputime(os.getpid())
-            if cpu:
-                f.write("utime: %s\n" % cpu['utime'])
-                f.write("stime: %s\n" % cpu['stime'])
-                f.write("cutime: %s\n" % cpu['cutime'])
-                f.write("cstime: %s\n" % cpu['cstime'])
-            for i in iostats:
-                f.write("IO %s: %s\n" % (i, iostats[i]))
-            rusages = ["ru_utime", "ru_stime", "ru_maxrss", "ru_minflt", "ru_majflt", "ru_inblock", "ru_oublock", "ru_nvcsw", "ru_nivcsw"]
-            for i in rusages:
-                f.write("rusage %s: %s\n" % (i, getattr(resources, i)))
-            for i in rusages:
-                f.write("Child rusage %s: %s\n" % (i, getattr(childres, i)))
-        if status == "passed":
-            f.write("Status: PASSED \n")
-        else:
-            f.write("Status: FAILED \n")
-        f.write("Ended: %0.2f \n" % e.time)
-
-def write_host_data(logfile, e, d, type):
-    import subprocess, os, datetime
-    # minimum time allowed for each command to run, in seconds
-    time_threshold = 0.5
-    limit = 10
-    # the total number of commands
-    num_cmds = 0
-    msg = ""
-    if type == "interval":
-        # interval at which data will be logged
-        interval = d.getVar("BB_HEARTBEAT_EVENT", False)
-        if interval is None:
-            bb.warn("buildstats: Collecting host data at intervals failed. Set BB_HEARTBEAT_EVENT=\"<interval>\" in conf/local.conf for the interval at which host data will be logged.")
-            d.setVar("BB_LOG_HOST_STAT_ON_INTERVAL", "0")
-            return
-        interval = int(interval)
-        cmds = d.getVar('BB_LOG_HOST_STAT_CMDS_INTERVAL')
-        msg = "Host Stats: Collecting data at %d second intervals.\n" % interval
-        if cmds is None:
-            d.setVar("BB_LOG_HOST_STAT_ON_INTERVAL", "0")
-            bb.warn("buildstats: Collecting host data at intervals failed. Set BB_LOG_HOST_STAT_CMDS_INTERVAL=\"command1 ; command2 ; ... \" in conf/local.conf.")
-            return
-    if type == "failure":
-        cmds = d.getVar('BB_LOG_HOST_STAT_CMDS_FAILURE')
-        msg = "Host Stats: Collecting data on failure.\n"
-        msg += "Failed at task: " + e.task + "\n"
-        if cmds is None:
-            d.setVar("BB_LOG_HOST_STAT_ON_FAILURE", "0")
-            bb.warn("buildstats: Collecting host data on failure failed. Set BB_LOG_HOST_STAT_CMDS_FAILURE=\"command1 ; command2 ; ... \" in conf/local.conf.")
-            return
-    c_san = []
-    for cmd in cmds.split(";"):
-        if len(cmd) == 0:
-            continue
-        num_cmds += 1
-        c_san.append(cmd)
-    if num_cmds == 0:
-        if type == "interval":
-            d.setVar("BB_LOG_HOST_STAT_ON_INTERVAL", "0")
-        if type == "failure":
-            d.setVar("BB_LOG_HOST_STAT_ON_FAILURE", "0")
-        return
-
-    # return if the interval is not enough to run all commands within the specified BB_HEARTBEAT_EVENT interval
-    if type == "interval":
-        limit = interval / num_cmds
-        if limit <= time_threshold:
-            d.setVar("BB_LOG_HOST_STAT_ON_INTERVAL", "0")
-            bb.warn("buildstats: Collecting host data failed. BB_HEARTBEAT_EVENT interval not enough to run the specified commands. Increase value of BB_HEARTBEAT_EVENT in conf/local.conf.")
-            return
-
-    # set the environment variables 
-    path = d.getVar("PATH")
-    opath = d.getVar("BB_ORIGENV", False).getVar("PATH")
-    ospath = os.environ['PATH']
-    os.environ['PATH'] = path + ":" + opath + ":" + ospath
-    with open(logfile, "a") as f:
-        f.write("Event Time: %f\nDate: %s\n" % (e.time, datetime.datetime.now()))
-        f.write("%s" % msg)
-        for c in c_san:
-            try:
-                output = subprocess.check_output(c.split(), stderr=subprocess.STDOUT, timeout=limit).decode('utf-8')
-            except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError) as err:
-                output = "Error running command: %s\n%s\n" % (c, err)
-            f.write("%s\n%s\n" % (c, output))
-    # reset the environment
-    os.environ['PATH'] = ospath
-
-python run_buildstats () {
-    import bb.build
-    import bb.event
-    import time, subprocess, platform
-
-    bn = d.getVar('BUILDNAME')
-    ########################################################################
-    # bitbake fires HeartbeatEvent even before a build has been
-    # triggered, causing BUILDNAME to be None
-    ########################################################################
-    if bn is not None:
-        bsdir = os.path.join(d.getVar('BUILDSTATS_BASE'), bn)
-        taskdir = os.path.join(bsdir, d.getVar('PF'))
-        if isinstance(e, bb.event.HeartbeatEvent) and bb.utils.to_boolean(d.getVar("BB_LOG_HOST_STAT_ON_INTERVAL")):
-            bb.utils.mkdirhier(bsdir)
-            write_host_data(os.path.join(bsdir, "host_stats_interval"), e, d, "interval")
-
-    if isinstance(e, bb.event.BuildStarted):
-        ########################################################################
-        # If the kernel was not configured to provide I/O statistics, issue
-        # a one time warning.
-        ########################################################################
-        if not os.path.isfile("/proc/%d/io" % os.getpid()):
-            bb.warn("The Linux kernel on your build host was not configured to provide process I/O statistics. (CONFIG_TASK_IO_ACCOUNTING is not set)")
-
-        ########################################################################
-        # at first pass make the buildstats hierarchy and then
-        # set the buildname
-        ########################################################################
-        bb.utils.mkdirhier(bsdir)
-        set_buildtimedata("__timedata_build", d)
-        build_time = os.path.join(bsdir, "build_stats")
-        # write start of build into build_time
-        with open(build_time, "a") as f:
-            host_info = platform.uname()
-            f.write("Host Info: ")
-            for x in host_info:
-                if x:
-                    f.write(x + " ")
-            f.write("\n")
-            f.write("Build Started: %0.2f \n" % d.getVar('__timedata_build', False)[0])
-
-    elif isinstance(e, bb.event.BuildCompleted):
-        build_time = os.path.join(bsdir, "build_stats")
-        with open(build_time, "a") as f:
-            ########################################################################
-            # Write build statistics for the build
-            ########################################################################
-            timedata = get_buildtimedata("__timedata_build", d)
-            if timedata:
-                time, cpu = timedata
-                # write end of build and cpu used into build_time
-                f.write("Elapsed time: %0.2f seconds \n" % (time))
-                if cpu:
-                    f.write("CPU usage: %0.1f%% \n" % cpu)
-
-    if isinstance(e, bb.build.TaskStarted):
-        set_timedata("__timedata_task", d, e.time)
-        bb.utils.mkdirhier(taskdir)
-        # write into the task event file the name and start time
-        with open(os.path.join(taskdir, e.task), "a") as f:
-            f.write("Event: %s \n" % bb.event.getName(e))
-            f.write("Started: %0.2f \n" % e.time)
-
-    elif isinstance(e, bb.build.TaskSucceeded):
-        write_task_data("passed", os.path.join(taskdir, e.task), e, d)
-        if e.task == "do_rootfs":
-            bs = os.path.join(bsdir, "build_stats")
-            with open(bs, "a") as f:
-                rootfs = d.getVar('IMAGE_ROOTFS')
-                if os.path.isdir(rootfs):
-                    try:
-                        rootfs_size = subprocess.check_output(["du", "-sh", rootfs],
-                                stderr=subprocess.STDOUT).decode('utf-8')
-                        f.write("Uncompressed Rootfs size: %s" % rootfs_size)
-                    except subprocess.CalledProcessError as err:
-                        bb.warn("Failed to get rootfs size: %s" % err.output.decode('utf-8'))
-
-    elif isinstance(e, bb.build.TaskFailed):
-        # Can have a failure before TaskStarted so need to mkdir here too
-        bb.utils.mkdirhier(taskdir)
-        write_task_data("failed", os.path.join(taskdir, e.task), e, d)
-        ########################################################################
-        # Let's make things easier and tell people where the build failed in
-        # build_status. We do this here because BuildCompleted triggers no
-        # matter what the status of the build actually is
-        ########################################################################
-        build_status = os.path.join(bsdir, "build_stats")
-        with open(build_status, "a") as f:
-            f.write(d.expand("Failed at: ${PF} at task: %s \n" % e.task))
-        if bb.utils.to_boolean(d.getVar("BB_LOG_HOST_STAT_ON_FAILURE")):
-            write_host_data(os.path.join(bsdir, "host_stats_%s_failure" % e.task), e, d, "failure")
-}
-
-addhandler run_buildstats
-run_buildstats[eventmask] = "bb.event.BuildStarted bb.event.BuildCompleted bb.event.HeartbeatEvent bb.build.TaskStarted bb.build.TaskSucceeded bb.build.TaskFailed"
-
-python runqueue_stats () {
-    import buildstats
-    from bb import event, runqueue
-    # We should not record any samples before the first task has started,
-    # because that's the first activity shown in the process chart.
-    # Besides, at that point we are sure that the build variables
-    # we need to find the output directory are available.
-    # The persistent SystemStats is stored in the datastore and
-    # closed when the build is done.
-    system_stats = d.getVar('_buildstats_system_stats', False)
-    if not system_stats and isinstance(e, (bb.runqueue.sceneQueueTaskStarted, bb.runqueue.runQueueTaskStarted)):
-        system_stats = buildstats.SystemStats(d)
-        d.setVar('_buildstats_system_stats', system_stats)
-    if system_stats:
-        # Ensure that we sample at important events.
-        done = isinstance(e, bb.event.BuildCompleted)
-        if system_stats.sample(e, force=done):
-            d.setVar('_buildstats_system_stats', system_stats)
-        if done:
-            system_stats.close()
-            d.delVar('_buildstats_system_stats')
-}
-
-addhandler runqueue_stats
-runqueue_stats[eventmask] = "bb.runqueue.sceneQueueTaskStarted bb.runqueue.runQueueTaskStarted bb.event.HeartbeatEvent bb.event.BuildCompleted bb.event.MonitorDiskEvent"
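The extra host-statistics paths handled above are gated by variables read with bb.utils.to_boolean(); a minimal sketch of opting in from conf/local.conf (values illustrative, any true value accepted by to_boolean works):

# conf/local.conf
BB_LOG_HOST_STAT_ON_INTERVAL = "1"
BB_LOG_HOST_STAT_ON_FAILURE = "1"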
diff --git a/poky/meta/classes/cargo.bbclass b/poky/meta/classes/cargo.bbclass
deleted file mode 100644
index 4a780a5..0000000
--- a/poky/meta/classes/cargo.bbclass
+++ /dev/null
@@ -1,90 +0,0 @@
-##
-## Purpose:
-## This class is used by any recipes that are built using
-## Cargo.
-
-inherit cargo_common
-
-# the binary we will use
-CARGO = "cargo"
-
-# We need cargo to compile for the target
-BASEDEPENDS:append = " cargo-native"
-
-# Ensure we get the right rust variant
-DEPENDS:append:class-target = " virtual/${TARGET_PREFIX}rust ${RUSTLIB_DEP}"
-DEPENDS:append:class-nativesdk = " virtual/${TARGET_PREFIX}rust ${RUSTLIB_DEP}"
-DEPENDS:append:class-native = " rust-native"
-
-# Enable build separation
-B = "${WORKDIR}/build"
-
-# In case something fails in the build process, give a bit more feedback on
-# where the issue occurred
-export RUST_BACKTRACE = "1"
-
-# The directory of the Cargo.toml relative to the root directory. By default,
-# assume there's a Cargo.toml directly in the root directory
-CARGO_SRC_DIR ??= ""
-
-# The actual path to the Cargo.toml
-MANIFEST_PATH ??= "${S}/${CARGO_SRC_DIR}/Cargo.toml"
-
-RUSTFLAGS ??= ""
-BUILD_MODE = "${@['--release', ''][d.getVar('DEBUG_BUILD') == '1']}"
-CARGO_BUILD_FLAGS = "-v --target ${HOST_SYS} ${BUILD_MODE} --manifest-path=${MANIFEST_PATH}"
-
-# This is based on the content of CARGO_BUILD_FLAGS and generally will need to
-# change if CARGO_BUILD_FLAGS changes.
-BUILD_DIR = "${@['release', 'debug'][d.getVar('DEBUG_BUILD') == '1']}"
-CARGO_TARGET_SUBDIR="${HOST_SYS}/${BUILD_DIR}"
-oe_cargo_build () {
-	export RUSTFLAGS="${RUSTFLAGS}"
-	export RUST_TARGET_PATH="${RUST_TARGET_PATH}"
-	bbnote "cargo = $(which ${CARGO})"
-	bbnote "rustc = $(which ${RUSTC})"
-	bbnote "${CARGO} build ${CARGO_BUILD_FLAGS} $@"
-	"${CARGO}" build ${CARGO_BUILD_FLAGS} "$@"
-}
-
-do_compile[progress] = "outof:\s+(\d+)/(\d+)"
-cargo_do_compile () {
-	oe_cargo_fix_env
-	oe_cargo_build
-}
-
-cargo_do_install () {
-	local have_installed=false
-	for tgt in "${B}/target/${CARGO_TARGET_SUBDIR}/"*; do
-		case $tgt in
-		*.so|*.rlib)
-			install -d "${D}${rustlibdir}"
-			install -m755 "$tgt" "${D}${rustlibdir}"
-			have_installed=true
-			;;
-		*examples)
-			if [ -d "$tgt" ]; then
-				for example in "$tgt/"*; do
-					if [ -f "$example" ] && [ -x "$example" ]; then
-						install -d "${D}${bindir}"
-						install -m755 "$example" "${D}${bindir}"
-						have_installed=true
-					fi
-				done
-			fi
-			;;
-		*)
-			if [ -f "$tgt" ] && [ -x "$tgt" ]; then
-				install -d "${D}${bindir}"
-				install -m755 "$tgt" "${D}${bindir}"
-				have_installed=true
-			fi
-			;;
-		esac
-	done
-	if ! $have_installed; then
-		die "Did not find anything to install"
-	fi
-}
-
-EXPORT_FUNCTIONS do_compile do_install
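As removed above, the class drives `cargo build` via CARGO_BUILD_FLAGS and installs binaries and libraries from ${CARGO_TARGET_SUBDIR}. A minimal sketch of a consuming recipe; the recipe name, repo URL, checksum and subdirectory are hypothetical, not taken from this patch:

# hello-rs_0.1.0.bb (hypothetical)
SUMMARY = "Example Rust application built with cargo.bbclass"
LICENSE = "MIT"
# placeholder checksum, not a real value
LIC_FILES_CHKSUM = "file://LICENSE;md5=0123456789abcdef0123456789abcdef"

SRC_URI = "git://example.org/hello-rs.git;branch=main;protocol=https"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"

inherit cargo

# Cargo.toml lives in a subdirectory in this hypothetical layout
CARGO_SRC_DIR = "rust/hello"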
diff --git a/poky/meta/classes/cargo_common.bbclass b/poky/meta/classes/cargo_common.bbclass
deleted file mode 100644
index 39f3282..0000000
--- a/poky/meta/classes/cargo_common.bbclass
+++ /dev/null
@@ -1,124 +0,0 @@
-##
-## Purpose:
-## This class is to support building with cargo. It
-## must be different than cargo.bbclass because Rust
-## now builds with Cargo but cannot use cargo.bbclass
-## due to dependencies and assumptions in cargo.bbclass
-## that Rust & Cargo are already installed. So this
-## is used by cargo.bbclass and Rust
-##
-
-# add crate fetch support
-inherit rust-common
-
-# Where we download our registry and dependencies to
-export CARGO_HOME = "${WORKDIR}/cargo_home"
-
-# The pkg-config-rs library used by cargo build scripts disables itself when
-# cross compiling unless this is defined. We set up pkg-config appropriately
-# for cross compilation, so tell it we know better than it.
-export PKG_CONFIG_ALLOW_CROSS = "1"
-
-# Don't instruct cargo to use crates downloaded by bitbake. Some rust packages,
-# for example the rust compiler itself, come with their own vendored sources.
-# Specifying two [source.crates-io] will not work.
-CARGO_DISABLE_BITBAKE_VENDORING ?= "0"
-
-# Used by libstd-rs to point to the vendor dir included in rustc src
-CARGO_VENDORING_DIRECTORY ?= "${CARGO_HOME}/bitbake"
-
-CARGO_RUST_TARGET_CCLD ?= "${RUST_TARGET_CCLD}"
-cargo_common_do_configure () {
-	mkdir -p ${CARGO_HOME}/bitbake
-
-	cat <<- EOF > ${CARGO_HOME}/config
-	# EXTRA_OECARGO_PATHS
-	paths = [
-	$(for p in ${EXTRA_OECARGO_PATHS}; do echo \"$p\",; done)
-	]
-	EOF
-
-	cat <<- EOF >> ${CARGO_HOME}/config
-
-	# Local mirror vendored by bitbake
-	[source.bitbake]
-	directory = "${CARGO_VENDORING_DIRECTORY}"
-	EOF
-
-	if [ ${CARGO_DISABLE_BITBAKE_VENDORING} = "0" ]; then
-		cat <<- EOF >> ${CARGO_HOME}/config
-
-		[source.crates-io]
-		replace-with = "bitbake"
-		local-registry = "/nonexistant"
-		EOF
-	fi
-
-	cat <<- EOF >> ${CARGO_HOME}/config
-
-	[http]
-	# Multiplexing can't be enabled because http2 can't be enabled
-	# in curl-native without dependency loops
-	multiplexing = false
-
-	# Ignore the hard coded and incorrect path to certificates
-	cainfo = "${STAGING_ETCDIR_NATIVE}/ssl/certs/ca-certificates.crt"
-
-	EOF
-
-	cat <<- EOF >> ${CARGO_HOME}/config
-
-	# HOST_SYS
-	[target.${HOST_SYS}]
-	linker = "${CARGO_RUST_TARGET_CCLD}"
-	EOF
-
-	if [ "${HOST_SYS}" != "${BUILD_SYS}" ]; then
-		cat <<- EOF >> ${CARGO_HOME}/config
-
-		# BUILD_SYS
-		[target.${BUILD_SYS}]
-		linker = "${RUST_BUILD_CCLD}"
-		EOF
-	fi
-
-	# Put build output in build directory preferred by bitbake instead of
-	# inside source directory unless they are the same
-	if [ "${B}" != "${S}" ]; then
-		cat <<- EOF >> ${CARGO_HOME}/config
-
-		[build]
-		# Use out of tree build destination to avoid polluting the source tree
-		target-dir = "${B}/target"
-		EOF
-	fi
-
-	cat <<- EOF >> ${CARGO_HOME}/config
-
-	[term]
-	progress.when = 'always'
-	progress.width = 80
-	EOF
-}
-
-oe_cargo_fix_env () {
-	export CC="${RUST_TARGET_CC}"
-	export CXX="${RUST_TARGET_CXX}"
-	export CFLAGS="${CFLAGS}"
-	export CXXFLAGS="${CXXFLAGS}"
-	export AR="${AR}"
-	export TARGET_CC="${RUST_TARGET_CC}"
-	export TARGET_CXX="${RUST_TARGET_CXX}"
-	export TARGET_CFLAGS="${CFLAGS}"
-	export TARGET_CXXFLAGS="${CXXFLAGS}"
-	export TARGET_AR="${AR}"
-	export HOST_CC="${RUST_BUILD_CC}"
-	export HOST_CXX="${RUST_BUILD_CXX}"
-	export HOST_CFLAGS="${BUILD_CFLAGS}"
-	export HOST_CXXFLAGS="${BUILD_CXXFLAGS}"
-	export HOST_AR="${BUILD_AR}"
-}
-
-EXTRA_OECARGO_PATHS ??= ""
-
-EXPORT_FUNCTIONS do_configure
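The variables above are the usual per-recipe override points; a hedged sketch of tuning them (the path value is illustrative):

# In a recipe that ships its own vendored crates (as the rust compiler does):
CARGO_DISABLE_BITBAKE_VENDORING = "1"

# Extra local path dependencies written into the generated ${CARGO_HOME}/config:
EXTRA_OECARGO_PATHS += "${S}/vendor/extra-crate"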
diff --git a/poky/meta/classes/ccache.bbclass b/poky/meta/classes/ccache.bbclass
index 4532894..34becb6 100644
--- a/poky/meta/classes/ccache.bbclass
+++ b/poky/meta/classes/ccache.bbclass
@@ -1,4 +1,10 @@
 #
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
 # Usage:
 # - Enable ccache
 #   Add the following line to a conffile such as conf/local.conf:
diff --git a/poky/meta/classes/ccmake.bbclass b/poky/meta/classes/ccmake.bbclass
index df5134a..c5b4bf6 100644
--- a/poky/meta/classes/ccmake.bbclass
+++ b/poky/meta/classes/ccmake.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 inherit terminal
 
 python do_ccmake() {
diff --git a/poky/meta/classes/chrpath.bbclass b/poky/meta/classes/chrpath.bbclass
index 26b984c..1aecb4d 100644
--- a/poky/meta/classes/chrpath.bbclass
+++ b/poky/meta/classes/chrpath.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 CHRPATH_BIN ?= "chrpath"
 PREPROCESS_RELOCATE_DIRS ?= ""
 
diff --git a/poky/meta/classes/cmake.bbclass b/poky/meta/classes/cmake.bbclass
deleted file mode 100644
index d9bcddb..0000000
--- a/poky/meta/classes/cmake.bbclass
+++ /dev/null
@@ -1,217 +0,0 @@
-# Path to the CMake file to process.
-OECMAKE_SOURCEPATH ??= "${S}"
-
-DEPENDS:prepend = "cmake-native "
-B = "${WORKDIR}/build"
-
-# What CMake generator to use.
-# The supported options are "Unix Makefiles" or "Ninja".
-OECMAKE_GENERATOR ?= "Ninja"
-
-python() {
-    generator = d.getVar("OECMAKE_GENERATOR")
-    if "Unix Makefiles" in generator:
-        args = "-G '" + generator +  "' -DCMAKE_MAKE_PROGRAM=" + d.getVar("MAKE")
-        d.setVar("OECMAKE_GENERATOR_ARGS", args)
-        d.setVarFlag("do_compile", "progress", "percent")
-    elif "Ninja" in generator:
-        args = "-G '" + generator + "' -DCMAKE_MAKE_PROGRAM=ninja"
-        d.appendVar("DEPENDS", " ninja-native")
-        d.setVar("OECMAKE_GENERATOR_ARGS", args)
-        d.setVarFlag("do_compile", "progress", r"outof:^\[(\d+)/(\d+)\]\s+")
-    else:
-        bb.fatal("Unknown CMake Generator %s" % generator)
-}
-OECMAKE_AR ?= "${AR}"
-
-# Compiler flags
-OECMAKE_C_FLAGS ?= "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS} ${CFLAGS}"
-OECMAKE_CXX_FLAGS ?= "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS} ${CXXFLAGS}"
-OECMAKE_C_FLAGS_RELEASE ?= "-DNDEBUG"
-OECMAKE_CXX_FLAGS_RELEASE ?= "-DNDEBUG"
-OECMAKE_C_LINK_FLAGS ?= "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS} ${CPPFLAGS} ${LDFLAGS}"
-OECMAKE_CXX_LINK_FLAGS ?= "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS} ${CXXFLAGS} ${LDFLAGS}"
-
-def oecmake_map_compiler(compiler, d):
-    args = d.getVar(compiler).split()
-    if args[0] == "ccache":
-        return args[1], args[0]
-    return args[0], ""
-
-# C/C++ Compiler (without cpu arch/tune arguments)
-OECMAKE_C_COMPILER ?= "${@oecmake_map_compiler('CC', d)[0]}"
-OECMAKE_C_COMPILER_LAUNCHER ?= "${@oecmake_map_compiler('CC', d)[1]}"
-OECMAKE_CXX_COMPILER ?= "${@oecmake_map_compiler('CXX', d)[0]}"
-OECMAKE_CXX_COMPILER_LAUNCHER ?= "${@oecmake_map_compiler('CXX', d)[1]}"
-
-# clear compiler vars for allarch to avoid sig hash difference
-OECMAKE_C_COMPILER_allarch = ""
-OECMAKE_C_COMPILER_LAUNCHER_allarch = ""
-OECMAKE_CXX_COMPILER_allarch = ""
-OECMAKE_CXX_COMPILER_LAUNCHER_allarch = ""
-
-OECMAKE_RPATH ?= ""
-OECMAKE_PERLNATIVE_DIR ??= ""
-OECMAKE_EXTRA_ROOT_PATH ?= ""
-
-OECMAKE_FIND_ROOT_PATH_MODE_PROGRAM = "ONLY"
-OECMAKE_FIND_ROOT_PATH_MODE_PROGRAM:class-native = "BOTH"
-
-EXTRA_OECMAKE:append = " ${PACKAGECONFIG_CONFARGS}"
-
-export CMAKE_BUILD_PARALLEL_LEVEL
-CMAKE_BUILD_PARALLEL_LEVEL:task-compile = "${@oe.utils.parallel_make(d, False)}"
-CMAKE_BUILD_PARALLEL_LEVEL:task-install = "${@oe.utils.parallel_make(d, True)}"
-
-OECMAKE_TARGET_COMPILE ?= "all"
-OECMAKE_TARGET_INSTALL ?= "install"
-
-def map_host_os_to_system_name(host_os):
-    if host_os.startswith('mingw'):
-        return 'Windows'
-    if host_os.startswith('linux'):
-        return 'Linux'
-    return host_os
-
-# CMake expects target architectures in the format of uname(2),
-# which do not always match TARGET_ARCH, so all the necessary
-# conversions should happen here.
-def map_host_arch_to_uname_arch(host_arch):
-    if host_arch == "powerpc":
-        return "ppc"
-    if host_arch == "powerpc64le":
-        return "ppc64le"
-    if host_arch == "powerpc64":
-        return "ppc64"
-    return host_arch
-
-cmake_do_generate_toolchain_file() {
-	if [ "${BUILD_SYS}" = "${HOST_SYS}" ]; then
-		cmake_crosscompiling="set( CMAKE_CROSSCOMPILING FALSE )"
-	fi
-	cat > ${WORKDIR}/toolchain.cmake <<EOF
-# CMake system name must be something like "Linux".
-# This is important for cross-compiling.
-$cmake_crosscompiling
-set( CMAKE_SYSTEM_NAME ${@map_host_os_to_system_name(d.getVar('HOST_OS'))} )
-set( CMAKE_SYSTEM_PROCESSOR ${@map_host_arch_to_uname_arch(d.getVar('HOST_ARCH'))} )
-set( CMAKE_C_COMPILER ${OECMAKE_C_COMPILER} )
-set( CMAKE_CXX_COMPILER ${OECMAKE_CXX_COMPILER} )
-set( CMAKE_C_COMPILER_LAUNCHER ${OECMAKE_C_COMPILER_LAUNCHER} )
-set( CMAKE_CXX_COMPILER_LAUNCHER ${OECMAKE_CXX_COMPILER_LAUNCHER} )
-set( CMAKE_ASM_COMPILER ${OECMAKE_C_COMPILER} )
-find_program( CMAKE_AR ${OECMAKE_AR} DOC "Archiver" REQUIRED )
-
-set( CMAKE_C_FLAGS "${OECMAKE_C_FLAGS}" CACHE STRING "CFLAGS" )
-set( CMAKE_CXX_FLAGS "${OECMAKE_CXX_FLAGS}" CACHE STRING "CXXFLAGS" )
-set( CMAKE_ASM_FLAGS "${OECMAKE_C_FLAGS}" CACHE STRING "ASM FLAGS" )
-set( CMAKE_C_FLAGS_RELEASE "${OECMAKE_C_FLAGS_RELEASE}" CACHE STRING "Additional CFLAGS for release" )
-set( CMAKE_CXX_FLAGS_RELEASE "${OECMAKE_CXX_FLAGS_RELEASE}" CACHE STRING "Additional CXXFLAGS for release" )
-set( CMAKE_ASM_FLAGS_RELEASE "${OECMAKE_C_FLAGS_RELEASE}" CACHE STRING "Additional ASM FLAGS for release" )
-set( CMAKE_C_LINK_FLAGS "${OECMAKE_C_LINK_FLAGS}" CACHE STRING "LDFLAGS" )
-set( CMAKE_CXX_LINK_FLAGS "${OECMAKE_CXX_LINK_FLAGS}" CACHE STRING "LDFLAGS" )
-
-# only search in the paths provided so cmake doesn't pick
-# up libraries and tools from the native build machine
-set( CMAKE_FIND_ROOT_PATH ${STAGING_DIR_HOST} ${STAGING_DIR_NATIVE} ${CROSS_DIR} ${OECMAKE_PERLNATIVE_DIR} ${OECMAKE_EXTRA_ROOT_PATH} ${EXTERNAL_TOOLCHAIN} ${HOSTTOOLS_DIR})
-set( CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY )
-set( CMAKE_FIND_ROOT_PATH_MODE_PROGRAM ${OECMAKE_FIND_ROOT_PATH_MODE_PROGRAM} )
-set( CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY )
-set( CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY )
-set( CMAKE_PROGRAM_PATH "/" )
-
-# Use qt.conf settings
-set( ENV{QT_CONF_PATH} ${WORKDIR}/qt.conf )
-
-# We need to set the rpath to the correct directory as cmake does not provide any
-# directory as rpath by default
-set( CMAKE_INSTALL_RPATH ${OECMAKE_RPATH} )
-
-# Use RPATHs relative to build directory for reproducibility
-set( CMAKE_BUILD_RPATH_USE_ORIGIN ON )
-
-# Use our cmake modules
-list(APPEND CMAKE_MODULE_PATH "${STAGING_DATADIR}/cmake/Modules/")
-
-# add for non /usr/lib libdir, e.g. /usr/lib64
-set( CMAKE_LIBRARY_PATH ${libdir} ${base_libdir})
-
-# add include dir to implicit includes in case it differs from /usr/include
-list(APPEND CMAKE_C_IMPLICIT_INCLUDE_DIRECTORIES ${includedir})
-list(APPEND CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES ${includedir})
-
-EOF
-}
-
-addtask generate_toolchain_file after do_patch before do_configure
-
-CONFIGURE_FILES = "CMakeLists.txt"
-
-do_configure[cleandirs] = "${@d.getVar('B') if d.getVar('S') != d.getVar('B') else ''}"
-
-cmake_do_configure() {
-	if [ "${OECMAKE_BUILDPATH}" ]; then
-		bbnote "cmake.bbclass no longer uses OECMAKE_BUILDPATH.  The default behaviour is now out-of-tree builds with B=WORKDIR/build."
-	fi
-
-	if [ "${S}" = "${B}" ]; then
-		find ${B} -name CMakeFiles -or -name Makefile -or -name cmake_install.cmake -or -name CMakeCache.txt -delete
-	fi
-
-	# Just like autotools, cmake can use a site file to cache results that need generated binaries to run
-	if [ -e ${WORKDIR}/site-file.cmake ] ; then
-		oecmake_sitefile="-C ${WORKDIR}/site-file.cmake"
-	else
-		oecmake_sitefile=
-	fi
-
-	cmake \
-	  ${OECMAKE_GENERATOR_ARGS} \
-	  $oecmake_sitefile \
-	  ${OECMAKE_SOURCEPATH} \
-	  -DCMAKE_INSTALL_PREFIX:PATH=${prefix} \
-	  -DCMAKE_INSTALL_BINDIR:PATH=${@os.path.relpath(d.getVar('bindir'), d.getVar('prefix') + '/')} \
-	  -DCMAKE_INSTALL_SBINDIR:PATH=${@os.path.relpath(d.getVar('sbindir'), d.getVar('prefix') + '/')} \
-	  -DCMAKE_INSTALL_LIBEXECDIR:PATH=${@os.path.relpath(d.getVar('libexecdir'), d.getVar('prefix') + '/')} \
-	  -DCMAKE_INSTALL_SYSCONFDIR:PATH=${sysconfdir} \
-	  -DCMAKE_INSTALL_SHAREDSTATEDIR:PATH=${@os.path.relpath(d.getVar('sharedstatedir'), d.getVar('prefix') + '/')} \
-	  -DCMAKE_INSTALL_LOCALSTATEDIR:PATH=${localstatedir} \
-	  -DCMAKE_INSTALL_LIBDIR:PATH=${@os.path.relpath(d.getVar('libdir'), d.getVar('prefix') + '/')} \
-	  -DCMAKE_INSTALL_INCLUDEDIR:PATH=${@os.path.relpath(d.getVar('includedir'), d.getVar('prefix') + '/')} \
-	  -DCMAKE_INSTALL_DATAROOTDIR:PATH=${@os.path.relpath(d.getVar('datadir'), d.getVar('prefix') + '/')} \
-	  -DPYTHON_EXECUTABLE:PATH=${PYTHON} \
-	  -DPython_EXECUTABLE:PATH=${PYTHON} \
-	  -DPython3_EXECUTABLE:PATH=${PYTHON} \
-	  -DLIB_SUFFIX=${@d.getVar('baselib').replace('lib', '')} \
-	  -DCMAKE_INSTALL_SO_NO_EXE=0 \
-	  -DCMAKE_TOOLCHAIN_FILE=${WORKDIR}/toolchain.cmake \
-	  -DCMAKE_NO_SYSTEM_FROM_IMPORTED=1 \
-	  -DCMAKE_EXPORT_NO_PACKAGE_REGISTRY=ON \
-	  -DFETCHCONTENT_FULLY_DISCONNECTED=ON \
-	  ${EXTRA_OECMAKE} \
-	  -Wno-dev
-}
-
-# To disable verbose cmake logs for a given recipe, or globally in config metadata
-# such as local.conf, add the following:
-#
-# CMAKE_VERBOSE = ""
-#
-
-CMAKE_VERBOSE ??= "VERBOSE=1"
-
-# Then run do_compile again
-cmake_runcmake_build() {
-	bbnote ${DESTDIR:+DESTDIR=${DESTDIR} }${CMAKE_VERBOSE} cmake --build '${B}' "$@" -- ${EXTRA_OECMAKE_BUILD}
-	eval ${DESTDIR:+DESTDIR=${DESTDIR} }${CMAKE_VERBOSE} cmake --build '${B}' "$@" -- ${EXTRA_OECMAKE_BUILD}
-}
-
-cmake_do_compile()  {
-	cmake_runcmake_build --target ${OECMAKE_TARGET_COMPILE}
-}
-
-cmake_do_install() {
-	DESTDIR='${D}' cmake_runcmake_build --target ${OECMAKE_TARGET_INSTALL}
-}
-
-EXPORT_FUNCTIONS do_configure do_compile do_install do_generate_toolchain_file
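A recipe built with this class usually only inherits it and passes project options through EXTRA_OECMAKE, which cmake_do_configure() appends to the cmake invocation; a sketch under those assumptions (the option names are illustrative):

inherit cmake

# Extra options appended to the cmake invocation in cmake_do_configure()
EXTRA_OECMAKE = "-DENABLE_TESTS=OFF -DBUILD_SHARED_LIBS=ON"

# Switch from the default Ninja generator if a project only supports Makefiles
OECMAKE_GENERATOR = "Unix Makefiles"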
diff --git a/poky/meta/classes/cml1.bbclass b/poky/meta/classes/cml1.bbclass
deleted file mode 100644
index d319d66..0000000
--- a/poky/meta/classes/cml1.bbclass
+++ /dev/null
@@ -1,101 +0,0 @@
-# returns all the elements from the src uri that are .cfg files
-def find_cfgs(d):
-    sources=src_patches(d, True)
-    sources_list=[]
-    for s in sources:
-        if s.endswith('.cfg'):
-            sources_list.append(s)
-
-    return sources_list
-
-cml1_do_configure() {
-	set -e
-	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
-	yes '' | oe_runmake oldconfig
-}
-
-EXPORT_FUNCTIONS do_configure
-addtask configure after do_unpack do_patch before do_compile
-
-inherit terminal
-
-OE_TERMINAL_EXPORTS += "HOST_EXTRACFLAGS HOSTLDFLAGS TERMINFO CROSS_CURSES_LIB CROSS_CURSES_INC"
-HOST_EXTRACFLAGS = "${BUILD_CFLAGS} ${BUILD_LDFLAGS}"
-HOSTLDFLAGS = "${BUILD_LDFLAGS}"
-CROSS_CURSES_LIB = "-lncurses -ltinfo"
-CROSS_CURSES_INC = '-DCURSES_LOC="<curses.h>"'
-TERMINFO = "${STAGING_DATADIR_NATIVE}/terminfo"
-
-KCONFIG_CONFIG_COMMAND ??= "menuconfig"
-KCONFIG_CONFIG_ROOTDIR ??= "${B}"
-python do_menuconfig() {
-    import shutil
-
-    config = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config")
-    configorig = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config.orig")
-
-    try:
-        mtime = os.path.getmtime(config)
-        shutil.copy(config, configorig)
-    except OSError:
-        mtime = 0
-
-    # setup native pkg-config variables (kconfig scripts call pkg-config directly, cannot generically be overridden to pkg-config-native)
-    d.setVar("PKG_CONFIG_DIR", "${STAGING_DIR_NATIVE}${libdir_native}/pkgconfig")
-    d.setVar("PKG_CONFIG_PATH", "${PKG_CONFIG_DIR}:${STAGING_DATADIR_NATIVE}/pkgconfig")
-    d.setVar("PKG_CONFIG_LIBDIR", "${PKG_CONFIG_DIR}")
-    d.setVarFlag("PKG_CONFIG_SYSROOT_DIR", "unexport", "1")
-    # ensure that environment variables are overwritten with this task's 'd' values
-    d.appendVar("OE_TERMINAL_EXPORTS", " PKG_CONFIG_DIR PKG_CONFIG_PATH PKG_CONFIG_LIBDIR PKG_CONFIG_SYSROOT_DIR")
-
-    oe_terminal("sh -c \"make %s; if [ \\$? -ne 0 ]; then echo 'Command failed.'; printf 'Press any key to continue... '; read r; fi\"" % d.getVar('KCONFIG_CONFIG_COMMAND'),
-                d.getVar('PN') + ' Configuration', d)
-
-    # FIXME this check can be removed when the minimum bitbake version has been bumped
-    if hasattr(bb.build, 'write_taint'):
-        try:
-            newmtime = os.path.getmtime(config)
-        except OSError:
-            newmtime = 0
-
-        if newmtime > mtime:
-            bb.note("Configuration changed, recompile will be forced")
-            bb.build.write_taint('do_compile', d)
-}
-do_menuconfig[depends] += "ncurses-native:do_populate_sysroot"
-do_menuconfig[nostamp] = "1"
-do_menuconfig[dirs] = "${KCONFIG_CONFIG_ROOTDIR}"
-addtask menuconfig after do_configure
-
-python do_diffconfig() {
-    import shutil
-    import subprocess
-
-    workdir = d.getVar('WORKDIR')
-    fragment = workdir + '/fragment.cfg'
-    configorig = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config.orig")
-    config = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config")
-
-    try:
-        md5newconfig = bb.utils.md5_file(configorig)
-        md5config = bb.utils.md5_file(config)
-        isdiff = md5newconfig != md5config
-    except IOError as e:
-        bb.fatal("No config files found. Did you do menuconfig ?\n%s" % e)
-
-    if isdiff:
-        statement = 'diff --unchanged-line-format= --old-line-format= --new-line-format="%L" ' + configorig + ' ' + config + '>' + fragment
-        subprocess.call(statement, shell=True)
-        # No need to check the exit code as we know it's going to be
-        # non-zero, but that's what we expect.
-        shutil.copy(configorig, config)
-
-        bb.plain("Config fragment has been dumped into:\n %s" % fragment)
-    else:
-        if os.path.exists(fragment):
-            os.unlink(fragment)
-}
-
-do_diffconfig[nostamp] = "1"
-do_diffconfig[dirs] = "${KCONFIG_CONFIG_ROOTDIR}"
-addtask diffconfig
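The menuconfig and diffconfig tasks defined above are driven from the bitbake command line; for example, against a kernel recipe the usual flow is:

bitbake -c menuconfig virtual/kernel    # edit the .config in a terminal
bitbake -c diffconfig virtual/kernel    # write the delta out as fragment.cfg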
diff --git a/poky/meta/classes/compress_doc.bbclass b/poky/meta/classes/compress_doc.bbclass
deleted file mode 100644
index 379b6c1..0000000
--- a/poky/meta/classes/compress_doc.bbclass
+++ /dev/null
@@ -1,263 +0,0 @@
-# Compress man pages in ${mandir} and info pages in ${infodir}
-#
-# 1. The doc will be compressed to gz format by default.
-#
-# 2. It will automatically re-compress any doc that is found in a
-# ${DOC_COMPRESS_LIST} format other than ${DOC_COMPRESS} into the
-# ${DOC_COMPRESS} format
-#
-# 3. It is easy to add a new compression type by editing
-# local.conf, such as:
-# DOC_COMPRESS_LIST:append = ' abc'
-# DOC_COMPRESS = 'abc'
-# DOC_COMPRESS_CMD[abc] = 'abc compress cmd ***'
-# DOC_DECOMPRESS_CMD[abc] = 'abc decompress cmd ***'
-
-# All supported compression policy
-DOC_COMPRESS_LIST ?= "gz xz bz2"
-
-# Compression policy, must be one of ${DOC_COMPRESS_LIST}
-DOC_COMPRESS ?= "gz"
-
-# Compression shell command
-DOC_COMPRESS_CMD[gz] ?= 'gzip -v -9 -n'
-DOC_COMPRESS_CMD[bz2] ?= "bzip2 -v -9"
-DOC_COMPRESS_CMD[xz] ?= "xz -v"
-
-# Decompression shell command
-DOC_DECOMPRESS_CMD[gz] ?= 'gunzip -v'
-DOC_DECOMPRESS_CMD[bz2] ?= "bunzip2 -v"
-DOC_DECOMPRESS_CMD[xz] ?= "unxz -v"
-
-PACKAGE_PREPROCESS_FUNCS += "package_do_compress_doc compress_doc_updatealternatives"
-python package_do_compress_doc() {
-    compress_mode = d.getVar('DOC_COMPRESS')
-    compress_list = (d.getVar('DOC_COMPRESS_LIST') or '').split()
-    if compress_mode not in compress_list:
-        bb.fatal('Compression policy %s not supported (not listed in %s)\n' % (compress_mode, compress_list))
-
-    dvar = d.getVar('PKGD')
-    compress_cmds = {}
-    decompress_cmds = {}
-    for mode in compress_list:
-        compress_cmds[mode] = d.getVarFlag('DOC_COMPRESS_CMD', mode)
-        decompress_cmds[mode] = d.getVarFlag('DOC_DECOMPRESS_CMD', mode)
-
-    mandir = os.path.abspath(dvar + os.sep + d.getVar("mandir"))
-    if os.path.exists(mandir):
-        # Decompress doc files whose format is not compress_mode
-        decompress_doc(mandir, compress_mode, decompress_cmds)
-        compress_doc(mandir, compress_mode, compress_cmds)
-
-    infodir = os.path.abspath(dvar + os.sep + d.getVar("infodir"))
-    if os.path.exists(infodir):
-        # Decompress doc files whose format is not compress_mode
-        decompress_doc(infodir, compress_mode, decompress_cmds)
-        compress_doc(infodir, compress_mode, compress_cmds)
-}
-
-def _get_compress_format(file, compress_format_list):
-    for compress_format in compress_format_list:
-        compress_suffix = '.' + compress_format
-        if file.endswith(compress_suffix):
-            return compress_format
-
-    return ''
-
-# Collect hardlinks to dict, each element in dict lists hardlinks
-# which point to the same doc file.
-# {hardlink10: [hardlink11, hardlink12],,,}
-# The hardlink10, hardlink11 and hardlink12 are the same file.
-def _collect_hardlink(hardlink_dict, file):
-    for hardlink in hardlink_dict:
-        # Add to the existing hardlink
-        if os.path.samefile(hardlink, file):
-            hardlink_dict[hardlink].append(file)
-            return hardlink_dict
-
-    hardlink_dict[file] = []
-    return hardlink_dict
-
-def _process_hardlink(hardlink_dict, compress_mode, shell_cmds, decompress=False):
-    import subprocess
-    for target in hardlink_dict:
-        if decompress:
-            compress_format = _get_compress_format(target, shell_cmds.keys())
-            cmd = "%s -f %s" % (shell_cmds[compress_format], target)
-            bb.note('decompress hardlink %s' % target)
-        else:
-            cmd = "%s -f %s" % (shell_cmds[compress_mode], target)
-            bb.note('compress hardlink %s' % target)
-        (retval, output) = subprocess.getstatusoutput(cmd)
-        if retval:
-            bb.warn("de/compress file failed %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
-            return
-
-        for hardlink_dup in hardlink_dict[target]:
-            if decompress:
-                # Remove compress suffix
-                compress_suffix = '.' + compress_format
-                new_hardlink = hardlink_dup[:-len(compress_suffix)]
-                new_target = target[:-len(compress_suffix)]
-            else:
-                # Append compress suffix
-                compress_suffix = '.' + compress_mode
-                new_hardlink = hardlink_dup + compress_suffix
-                new_target = target + compress_suffix
-
-            bb.note('hardlink %s-->%s' % (new_hardlink, new_target))
-            if not os.path.exists(new_hardlink):
-                os.link(new_target, new_hardlink)
-            if os.path.exists(hardlink_dup):
-                os.unlink(hardlink_dup)
-
-def _process_symlink(file, compress_format, decompress=False):
-    compress_suffix = '.' + compress_format
-    if decompress:
-        # Remove compress suffix
-        new_linkname = file[:-len(compress_suffix)]
-        new_source = os.readlink(file)[:-len(compress_suffix)]
-    else:
-        # Append compress suffix
-        new_linkname = file + compress_suffix
-        new_source = os.readlink(file) + compress_suffix
-
-    bb.note('symlink %s-->%s' % (new_linkname, new_source))
-    if not os.path.exists(new_linkname):
-        os.symlink(new_source, new_linkname)
-
-    os.unlink(file)
-
-def _is_info(file):
-    flags = '.info .info-'.split()
-    for flag in flags:
-        if flag in os.path.basename(file):
-            return True
-
-    return False
-
-def _is_man(file):
-    import re
-
-    # It refers to the MANSECT variable in man(1.6g)'s man.config
-    # ".1:.1p:.8:.2:.3:.3p:.4:.5:.6:.7:.9:.0p:.tcl:.n:.l:.p:.o"
-    # Does not start with '.', and contains one of the above colon-separated elements
-    p = re.compile(r'[^\.]+\.([1-9lnop]|0p|tcl)')
-    if p.search(file):
-        return True
-
-    return False
-
-def _is_compress_doc(file, compress_format_list):
-    compress_format = _get_compress_format(file, compress_format_list)
-    compress_suffix = '.' + compress_format
-    if file.endswith(compress_suffix):
-        # Remove the compress suffix
-        uncompress_file = file[:-len(compress_suffix)]
-        if _is_info(uncompress_file) or _is_man(uncompress_file):
-            return True, compress_format
-
-    return False, ''
-
-def compress_doc(topdir, compress_mode, compress_cmds):
-    import subprocess
-    hardlink_dict = {}
-    for root, dirs, files in os.walk(topdir):
-        for f in files:
-            file = os.path.join(root, f)
-            if os.path.isdir(file):
-                continue
-
-            if _is_info(file) or _is_man(file):
-                # Symlink
-                if os.path.islink(file):
-                    _process_symlink(file, compress_mode)
-                # Hardlink
-                elif os.lstat(file).st_nlink > 1:
-                    _collect_hardlink(hardlink_dict, file)
-                # Normal file
-                elif os.path.isfile(file):
-                    cmd = "%s %s" % (compress_cmds[compress_mode], file)
-                    (retval, output) = subprocess.getstatusoutput(cmd)
-                    if retval:
-                        bb.warn("compress failed %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
-                        continue
-                    bb.note('compress file %s' % file)
-
-    _process_hardlink(hardlink_dict, compress_mode, compress_cmds)
-
-# Decompress doc files whose format is not compress_mode
-def decompress_doc(topdir, compress_mode, decompress_cmds):
-    import subprocess
-    hardlink_dict = {}
-    decompress = True
-    for root, dirs, files in os.walk(topdir):
-        for f in files:
-            file = os.path.join(root, f)
-            if os.path.isdir(file):
-                continue
-
-            res, compress_format = _is_compress_doc(file, decompress_cmds.keys())
-            # Decompress files whose format is not compress_mode
-            if res and compress_mode!=compress_format:
-                # Symlink
-                if os.path.islink(file):
-                    _process_symlink(file, compress_format, decompress)
-                # Hardlink
-                elif os.lstat(file).st_nlink > 1:
-                    _collect_hardlink(hardlink_dict, file)
-                # Normal file
-                elif os.path.isfile(file):
-                    cmd = "%s %s" % (decompress_cmds[compress_format], file)
-                    (retval, output) = subprocess.getstatusoutput(cmd)
-                    if retval:
-                        bb.warn("decompress failed %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
-                        continue
-                    bb.note('decompress file %s' % file)
-
-    _process_hardlink(hardlink_dict, compress_mode, decompress_cmds, decompress)
-
-python compress_doc_updatealternatives () {
-    if not bb.data.inherits_class('update-alternatives', d):
-        return
-
-    mandir = d.getVar("mandir")
-    infodir = d.getVar("infodir")
-    compress_mode = d.getVar('DOC_COMPRESS')
-    for pkg in (d.getVar('PACKAGES') or "").split():
-        old_names = (d.getVar('ALTERNATIVE:%s' % pkg) or "").split()
-        new_names = []
-        for old_name in old_names:
-            old_link     = d.getVarFlag('ALTERNATIVE_LINK_NAME', old_name)
-            old_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, old_name) or \
-                d.getVarFlag('ALTERNATIVE_TARGET', old_name) or \
-                d.getVar('ALTERNATIVE_TARGET_%s' % pkg) or \
-                d.getVar('ALTERNATIVE_TARGET') or \
-                old_link
-            # Sometimes old_target is specified as relative to the link name.
-            old_target   = os.path.join(os.path.dirname(old_link), old_target)
-
-            # The update-alternatives entries used for compressed docs
-            if mandir in old_target or infodir in old_target:
-                new_name = old_name + '.' + compress_mode
-                new_link = old_link + '.' + compress_mode
-                new_target = old_target + '.' + compress_mode
-                d.delVarFlag('ALTERNATIVE_LINK_NAME', old_name)
-                d.setVarFlag('ALTERNATIVE_LINK_NAME', new_name, new_link)
-                if d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, old_name):
-                    d.delVarFlag('ALTERNATIVE_TARGET_%s' % pkg, old_name)
-                    d.setVarFlag('ALTERNATIVE_TARGET_%s' % pkg, new_name, new_target)
-                elif d.getVarFlag('ALTERNATIVE_TARGET', old_name):
-                    d.delVarFlag('ALTERNATIVE_TARGET', old_name)
-                    d.setVarFlag('ALTERNATIVE_TARGET', new_name, new_target)
-                elif d.getVar('ALTERNATIVE_TARGET_%s' % pkg):
-                    d.setVar('ALTERNATIVE_TARGET_%s' % pkg, new_target)
-                elif d.getVar('ALTERNATIVE_TARGET'):
-                    d.setVar('ALTERNATIVE_TARGET', new_target)
-
-                new_names.append(new_name)
-
-        if new_names:
-            d.setVar('ALTERNATIVE:%s' % pkg, ' '.join(new_names))
-}
-
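Following the recipe given in the class's header comment above, changing the compression policy (for example to xz, which is already in DOC_COMPRESS_LIST) only needs one line in a conffile:

# conf/local.conf
DOC_COMPRESS = "xz"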
diff --git a/poky/meta/classes/copyleft_compliance.bbclass b/poky/meta/classes/copyleft_compliance.bbclass
index eabf12c..9ff9956 100644
--- a/poky/meta/classes/copyleft_compliance.bbclass
+++ b/poky/meta/classes/copyleft_compliance.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Deploy sources for recipes for compliance with copyleft-style licenses
 # Defaults to using symlinks, as it's a quick operation, and one can easily
 # follow the links when making use of the files (e.g. tar with the -h arg).
diff --git a/poky/meta/classes/copyleft_filter.bbclass b/poky/meta/classes/copyleft_filter.bbclass
index c36bce4..83cd900 100644
--- a/poky/meta/classes/copyleft_filter.bbclass
+++ b/poky/meta/classes/copyleft_filter.bbclass
@@ -1,10 +1,14 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Filter the license, the copyleft_should_include returns True for the
 # COPYLEFT_LICENSE_INCLUDE recipe, and False for the
 # COPYLEFT_LICENSE_EXCLUDE.
 #
 # By default, includes all GPL and LGPL, and excludes CLOSED and Proprietary.
-#
-# vi:sts=4:sw=4:et
 
 COPYLEFT_LICENSE_INCLUDE ?= 'GPL* LGPL* AGPL*'
 COPYLEFT_LICENSE_INCLUDE[type] = 'list'
diff --git a/poky/meta/classes/core-image.bbclass b/poky/meta/classes/core-image.bbclass
deleted file mode 100644
index 84fd3ee..0000000
--- a/poky/meta/classes/core-image.bbclass
+++ /dev/null
@@ -1,79 +0,0 @@
-# Common code for generating core reference images
-#
-# Copyright (C) 2007-2011 Linux Foundation
-
-# IMAGE_FEATURES control content of the core reference images
-# 
-# By default we install packagegroup-core-boot and packagegroup-base-extended packages;
-# this gives us a working (console-only) rootfs.
-#
-# Available IMAGE_FEATURES:
-#
-# - weston              - Weston Wayland compositor
-# - x11                 - X server
-# - x11-base            - X server with minimal environment
-# - x11-sato            - OpenedHand Sato environment
-# - tools-debug         - debugging tools
-# - eclipse-debug       - Eclipse remote debugging support
-# - tools-profile       - profiling tools
-# - tools-testapps      - tools usable to make some device tests
-# - tools-sdk           - SDK (C/C++ compiler, autotools, etc.)
-# - nfs-server          - NFS server
-# - nfs-client          - NFS client
-# - ssh-server-dropbear - SSH server (dropbear)
-# - ssh-server-openssh  - SSH server (openssh)
-# - hwcodecs            - Install hardware acceleration codecs
-# - package-management  - installs package management tools and preserves the package manager database
-# - debug-tweaks        - makes an image suitable for development, e.g. allowing passwordless root logins
-#   - empty-root-password
-#   - allow-empty-password
-#   - allow-root-login
-#   - post-install-logging
-# - dev-pkgs            - development packages (headers, etc.) for all installed packages in the rootfs
-# - dbg-pkgs            - debug symbol packages for all installed packages in the rootfs
-# - lic-pkgs            - license packages for all installed packages in the rootfs, requires
-#                         LICENSE_CREATE_PACKAGE="1" to be set when building packages too
-# - doc-pkgs            - documentation packages for all installed packages in the rootfs
-# - bash-completion-pkgs - bash-completion packages for recipes using bash-completion bbclass
-# - ptest-pkgs          - ptest packages for all ptest-enabled recipes
-# - read-only-rootfs    - tweaks an image to support read-only rootfs
-# - stateless-rootfs    - systemctl-native not run, image populated by systemd at runtime
-# - splash              - bootup splash screen
-#
-FEATURE_PACKAGES_weston = "packagegroup-core-weston"
-FEATURE_PACKAGES_x11 = "packagegroup-core-x11"
-FEATURE_PACKAGES_x11-base = "packagegroup-core-x11-base"
-FEATURE_PACKAGES_x11-sato = "packagegroup-core-x11-sato"
-FEATURE_PACKAGES_tools-debug = "packagegroup-core-tools-debug"
-FEATURE_PACKAGES_eclipse-debug = "packagegroup-core-eclipse-debug"
-FEATURE_PACKAGES_tools-profile = "packagegroup-core-tools-profile"
-FEATURE_PACKAGES_tools-testapps = "packagegroup-core-tools-testapps"
-FEATURE_PACKAGES_tools-sdk = "packagegroup-core-sdk packagegroup-core-standalone-sdk-target"
-FEATURE_PACKAGES_nfs-server = "packagegroup-core-nfs-server"
-FEATURE_PACKAGES_nfs-client = "packagegroup-core-nfs-client"
-FEATURE_PACKAGES_ssh-server-dropbear = "packagegroup-core-ssh-dropbear"
-FEATURE_PACKAGES_ssh-server-openssh = "packagegroup-core-ssh-openssh"
-FEATURE_PACKAGES_hwcodecs = "${MACHINE_HWCODECS}"
-
-
-# IMAGE_FEATURES_REPLACES_foo = 'bar1 bar2'
-# Including image feature foo would replace the image features bar1 and bar2
-IMAGE_FEATURES_REPLACES_ssh-server-openssh = "ssh-server-dropbear"
-
-# IMAGE_FEATURES_CONFLICTS_foo = 'bar1 bar2'
-# An error exception would be raised if both image features foo and bar1(or bar2) are included
-
-MACHINE_HWCODECS ??= ""
-
-CORE_IMAGE_BASE_INSTALL = '\
-    packagegroup-core-boot \
-    packagegroup-base-extended \
-    \
-    ${CORE_IMAGE_EXTRA_INSTALL} \
-    '
-
-CORE_IMAGE_EXTRA_INSTALL ?= ""
-
-IMAGE_INSTALL ?= "${CORE_IMAGE_BASE_INSTALL}"
-
-inherit image
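An image recipe or local.conf selects from the IMAGE_FEATURES list documented above; a small illustrative example:

IMAGE_FEATURES += "ssh-server-openssh package-management"
CORE_IMAGE_EXTRA_INSTALL += "strace"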
diff --git a/poky/meta/classes/cpan-base.bbclass b/poky/meta/classes/cpan-base.bbclass
deleted file mode 100644
index 93d11e1..0000000
--- a/poky/meta/classes/cpan-base.bbclass
+++ /dev/null
@@ -1,27 +0,0 @@
-#
-# cpan-base provides various Perl-related information needed for building
-# CPAN modules
-#
-FILES:${PN} += "${libdir}/perl5 ${datadir}/perl5"
-
-DEPENDS  += "${@["perl", "perl-native"][(bb.data.inherits_class('native', d))]}"
-RDEPENDS:${PN} += "${@["perl", ""][(bb.data.inherits_class('native', d))]}"
-
-inherit perl-version
-
-def is_target(d):
-    if not bb.data.inherits_class('native', d):
-        return "yes"
-    return "no"
-
-PERLLIBDIRS = "${libdir}/perl5"
-PERLLIBDIRS:class-native = "${libdir}/perl5"
-
-def cpan_upstream_check_pattern(d):
-    for x in (d.getVar('SRC_URI') or '').split(' '):
-        if x.startswith("https://cpan.metacpan.org"):
-            _pattern = x.split('/')[-1].replace(d.getVar('PV'), r'(?P<pver>\d+.\d+)')
-            return _pattern
-    return ''
-
-UPSTREAM_CHECK_REGEX ?= "${@cpan_upstream_check_pattern(d)}"
diff --git a/poky/meta/classes/cpan.bbclass b/poky/meta/classes/cpan.bbclass
deleted file mode 100644
index 18f1b9d..0000000
--- a/poky/meta/classes/cpan.bbclass
+++ /dev/null
@@ -1,65 +0,0 @@
-#
-# This is for perl modules that use the old Makefile.PL build system
-#
-inherit cpan-base perlnative
-
-EXTRA_CPANFLAGS ?= ""
-EXTRA_PERLFLAGS ?= ""
-
-# Env var which tells perl if it should use host (no) or target (yes) settings
-export PERLCONFIGTARGET = "${@is_target(d)}"
-
-# Env var which tells perl where the perl include files are
-export PERL_INC = "${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/${@get_perl_version(d)}/${@get_perl_arch(d)}/CORE"
-export PERL_LIB = "${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/${@get_perl_version(d)}"
-export PERL_ARCHLIB = "${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/${@get_perl_version(d)}/${@get_perl_arch(d)}"
-export PERLHOSTLIB = "${STAGING_LIBDIR_NATIVE}/perl5/${@get_perl_version(d)}/"
-export PERLHOSTARCHLIB = "${STAGING_LIBDIR_NATIVE}/perl5/${@get_perl_version(d)}/${@get_perl_hostarch(d)}/"
-
-cpan_do_configure () {
-	yes '' | perl ${EXTRA_PERLFLAGS} Makefile.PL INSTALLDIRS=vendor NO_PERLLOCAL=1 NO_PACKLIST=1 PERL=$(which perl) ${EXTRA_CPANFLAGS}
-
-	# Makefile.PLs can exit with success without generating a
-	# Makefile, e.g. in cases of missing configure time
-	# dependencies. This is considered a best practice by
-	# cpantesters.org. See:
-	#  * http://wiki.cpantesters.org/wiki/CPANAuthorNotes
-	#  * http://www.nntp.perl.org/group/perl.qa/2008/08/msg11236.html
-	[ -e Makefile ] || bbfatal "No Makefile was generated by Makefile.PL"
-
-	if [ "${BUILD_SYS}" != "${HOST_SYS}" ]; then
-		. ${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/config.sh
-		# Use find since there can be a Makefile generated for each Makefile.PL
-		for f in `find -name Makefile.PL`; do
-			f2=`echo $f | sed -e 's/.PL//'`
-			test -f $f2 || continue
-			sed -i -e "s:\(PERL_ARCHLIB = \).*:\1${PERL_ARCHLIB}:" \
-				-e 's/perl.real/perl/' \
-				-e "s|^\(CCFLAGS =.*\)|\1 ${CFLAGS}|" \
-				$f2
-		done
-	fi
-}
-
-do_configure:append:class-target() {
-       find . -name Makefile | xargs sed -E -i \
-           -e 's:LD_RUN_PATH ?= ?"?[^"]*"?::g'
-}
-
-do_configure:append:class-nativesdk() {
-       find . -name Makefile | xargs sed -E -i \
-           -e 's:LD_RUN_PATH ?= ?"?[^"]*"?::g'
-}
-
-cpan_do_compile () {
-	oe_runmake PASTHRU_INC="${CFLAGS}" LD="${CCLD}"
-}
-
-cpan_do_install () {
-	oe_runmake DESTDIR="${D}" install_vendor
-	for PERLSCRIPT in `grep -rIEl '#! *${bindir}/perl-native.*/perl' ${D}`; do
-		sed -i -e 's|${bindir}/perl-native.*/perl|/usr/bin/env nativeperl|' $PERLSCRIPT
-	done
-}
-
-EXPORT_FUNCTIONS do_configure do_compile do_install
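A Makefile.PL-based Perl module recipe normally just inherits the class and, if needed, passes extra flags; a sketch with illustrative values:

inherit cpan

# Extra arguments handed to Makefile.PL by cpan_do_configure()
EXTRA_CPANFLAGS = "EXPATLIBPATH=${STAGING_LIBDIR} EXPATINCPATH=${STAGING_INCDIR}"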
diff --git a/poky/meta/classes/cpan_build.bbclass b/poky/meta/classes/cpan_build.bbclass
deleted file mode 100644
index f3fb466..0000000
--- a/poky/meta/classes/cpan_build.bbclass
+++ /dev/null
@@ -1,41 +0,0 @@
-#
-# This is for perl modules that use the new Build.PL build system
-#
-inherit cpan-base perlnative
-
-EXTRA_CPAN_BUILD_FLAGS ?= ""
-
-# Env var which tells perl if it should use host (no) or target (yes) settings
-export PERLCONFIGTARGET = "${@is_target(d)}"
-export PERL_ARCHLIB = "${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/${@get_perl_version(d)}/${@get_perl_arch(d)}"
-export PERLHOSTLIB = "${STAGING_LIBDIR_NATIVE}/perl5/${@get_perl_version(d)}/"
-export PERLHOSTARCHLIB = "${STAGING_LIBDIR_NATIVE}/perl5/${@get_perl_version(d)}/${@get_perl_hostarch(d)}/"
-export LD = "${CCLD}"
-
-cpan_build_do_configure () {
-	if [ "${@is_target(d)}" = "yes" ]; then
-		# build for target
-		. ${STAGING_LIBDIR}/perl5/config.sh
-	fi
-
-	perl Build.PL --installdirs vendor --destdir ${D} \
-			${EXTRA_CPAN_BUILD_FLAGS}
-
-	# Build.PLs can exit with success without generating a
-	# Build, e.g. in cases of missing configure time
-	# dependencies. This is considered a best practice by
-	# cpantesters.org. See:
-	#  * http://wiki.cpantesters.org/wiki/CPANAuthorNotes
-	#  * http://www.nntp.perl.org/group/perl.qa/2008/08/msg11236.html
-	[ -e Build ] || bbfatal "No Build was generated by Build.PL"
-}
-
-cpan_build_do_compile () {
-        perl Build --perl "${bindir}/perl" verbose=1
-}
-
-cpan_build_do_install () {
-	perl Build install --destdir ${D}
-}
-
-EXPORT_FUNCTIONS do_configure do_compile do_install
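Its Build.PL counterpart is used the same way; a sketch (the flag value is illustrative):

inherit cpan_build

# Extra arguments handed to Build.PL by cpan_build_do_configure()
EXTRA_CPAN_BUILD_FLAGS = "--config cc=${CC}"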
diff --git a/poky/meta/classes/create-spdx.bbclass b/poky/meta/classes/create-spdx.bbclass
index 10deba6..47dd12c 100644
--- a/poky/meta/classes/create-spdx.bbclass
+++ b/poky/meta/classes/create-spdx.bbclass
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
@@ -216,7 +218,7 @@
             filepath = Path(subdir) / file
             filename = str(filepath.relative_to(topdir))
 
-            if filepath.is_file() and not filepath.is_symlink():
+            if not filepath.is_symlink() and filepath.is_file():
                 spdx_file = oe.spdx.SPDXFile()
                 spdx_file.SPDXID = get_spdxid(file_counter)
                 for t in get_types(filepath):
diff --git a/poky/meta/classes/cross-canadian.bbclass b/poky/meta/classes/cross-canadian.bbclass
deleted file mode 100644
index a0e9d23..0000000
--- a/poky/meta/classes/cross-canadian.bbclass
+++ /dev/null
@@ -1,196 +0,0 @@
-#
-# NOTE - When using this class the user is responsible for ensuring that
-# TRANSLATED_TARGET_ARCH is added into PN. This ensures that if the TARGET_ARCH
-# is changed, another nativesdk xxx-canadian-cross can be installed
-#
-
-
-# SDK packages are built either explicitly by the user,
-# or indirectly via dependency.  No need to be in 'world'.
-EXCLUDE_FROM_WORLD = "1"
-NATIVESDKLIBC ?= "libc-glibc"
-LIBCOVERRIDE = ":${NATIVESDKLIBC}"
-CLASSOVERRIDE = "class-cross-canadian"
-STAGING_BINDIR_TOOLCHAIN = "${STAGING_DIR_NATIVE}${bindir_native}/${SDK_ARCH}${SDK_VENDOR}-${SDK_OS}:${STAGING_DIR_NATIVE}${bindir_native}/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
-
-#
-# Update BASE_PACKAGE_ARCH and PACKAGE_ARCHS
-#
-PACKAGE_ARCH = "${SDK_ARCH}-${SDKPKGSUFFIX}"
-BASECANADIANEXTRAOS ?= "linux-musl"
-CANADIANEXTRAOS = "${BASECANADIANEXTRAOS}"
-CANADIANEXTRAVENDOR = ""
-MODIFYTOS ??= "1"
-python () {
-    archs = d.getVar('PACKAGE_ARCHS').split()
-    sdkarchs = []
-    for arch in archs:
-        sdkarchs.append(arch + '-${SDKPKGSUFFIX}')
-    d.setVar('PACKAGE_ARCHS', " ".join(sdkarchs))
-
-    # Allow the following code segment to be disabled, e.g. meta-environment
-    if d.getVar("MODIFYTOS") != "1":
-        return
-
-    if d.getVar("TCLIBC") in [ 'baremetal', 'newlib' ]:
-        return
-
-    tos = d.getVar("TARGET_OS")
-    tos_known = ["mingw32"]
-    extralibcs = [""]
-    if "musl" in d.getVar("BASECANADIANEXTRAOS"):
-        extralibcs.append("musl")
-    if "android" in tos:
-        extralibcs.append("android")
-    for variant in ["", "spe", "x32", "eabi", "n32", "_ilp32"]:
-        for libc in extralibcs:
-            entry = "linux"
-            if variant and libc:
-                entry = entry + "-" + libc + variant
-            elif variant:
-                entry = entry + "-gnu" + variant
-            elif libc:
-                entry = entry + "-" + libc
-            tos_known.append(entry)
-    if tos not in tos_known:
-        bb.fatal("Building cross-canadian for an unknown TARGET_SYS (%s), please update cross-canadian.bbclass" % d.getVar("TARGET_SYS"))
-
-    for n in ["PROVIDES", "DEPENDS"]:
-        d.setVar(n, d.getVar(n))
-    d.setVar("STAGING_BINDIR_TOOLCHAIN", d.getVar("STAGING_BINDIR_TOOLCHAIN"))
-    for prefix in ["AR", "AS", "DLLTOOL", "CC", "CXX", "GCC", "LD", "LIPO", "NM", "OBJDUMP", "RANLIB", "STRIP", "WINDRES"]:
-        n = prefix + "_FOR_TARGET"
-        d.setVar(n, d.getVar(n))
-    # This is a bit ugly. We need to zero LIBC/ABI extension which will change TARGET_OS
-    # however we need the old value in some variables. We expand those here first.
-    tarch = d.getVar("TARGET_ARCH")
-    if tarch == "x86_64":
-        d.setVar("LIBCEXTENSION", "")
-        d.setVar("ABIEXTENSION", "")
-        d.appendVar("CANADIANEXTRAOS", " linux-gnux32")
-        for extraos in d.getVar("BASECANADIANEXTRAOS").split():
-            d.appendVar("CANADIANEXTRAOS", " " + extraos + "x32")
-    elif tarch == "powerpc":
-        # PowerPC can build "linux" and "linux-gnuspe"
-        d.setVar("LIBCEXTENSION", "")
-        d.setVar("ABIEXTENSION", "")
-        d.appendVar("CANADIANEXTRAOS", " linux-gnuspe")
-        for extraos in d.getVar("BASECANADIANEXTRAOS").split():
-            d.appendVar("CANADIANEXTRAOS", " " + extraos + "spe")
-    elif tarch == "mips64":
-        d.appendVar("CANADIANEXTRAOS", " linux-gnun32")
-        for extraos in d.getVar("BASECANADIANEXTRAOS").split():
-            d.appendVar("CANADIANEXTRAOS", " " + extraos + "n32")
-    if tarch == "arm" or tarch == "armeb":
-        d.appendVar("CANADIANEXTRAOS", " linux-gnueabi linux-musleabi")
-        d.setVar("TARGET_OS", "linux-gnueabi")
-    else:
-        d.setVar("TARGET_OS", "linux")
-
-    # Also need to handle multilib target vendors
-    vendors = d.getVar("CANADIANEXTRAVENDOR")
-    if not vendors:
-        vendors = all_multilib_tune_values(d, 'TARGET_VENDOR')
-    origvendor = d.getVar("TARGET_VENDOR_MULTILIB_ORIGINAL")
-    if origvendor:
-        d.setVar("TARGET_VENDOR", origvendor)
-        if origvendor not in vendors.split():
-            vendors = origvendor + " " + vendors
-    d.setVar("CANADIANEXTRAVENDOR", vendors)
-}
-MULTIMACH_TARGET_SYS = "${PACKAGE_ARCH}${HOST_VENDOR}-${HOST_OS}"
-
-INHIBIT_DEFAULT_DEPS = "1"
-
-STAGING_DIR_HOST = "${RECIPE_SYSROOT}"
-
-TOOLCHAIN_OPTIONS = " --sysroot=${RECIPE_SYSROOT}"
-
-PATH:append = ":${TMPDIR}/sysroots/${HOST_ARCH}/${bindir_cross}"
-PKGHIST_DIR = "${TMPDIR}/pkghistory/${HOST_ARCH}-${SDKPKGSUFFIX}${HOST_VENDOR}-${HOST_OS}/"
-
-HOST_ARCH = "${SDK_ARCH}"
-HOST_VENDOR = "${SDK_VENDOR}"
-HOST_OS = "${SDK_OS}"
-HOST_PREFIX = "${SDK_PREFIX}"
-HOST_CC_ARCH = "${SDK_CC_ARCH}"
-HOST_LD_ARCH = "${SDK_LD_ARCH}"
-HOST_AS_ARCH = "${SDK_AS_ARCH}"
-
-#assign DPKG_ARCH
-DPKG_ARCH = "${@debian_arch_map(d.getVar('SDK_ARCH'), '')}"
-
-CPPFLAGS = "${BUILDSDK_CPPFLAGS}"
-CFLAGS = "${BUILDSDK_CFLAGS}"
-CXXFLAGS = "${BUILDSDK_CFLAGS}"
-LDFLAGS = "${BUILDSDK_LDFLAGS} \
-           -Wl,-rpath-link,${STAGING_LIBDIR}/.. \
-           -Wl,-rpath,${libdir}/.. "
-
-#
-# We need chrpath >= 0.14 to ensure we can deal with 32 and 64 bit
-# binaries
-#
-DEPENDS:append = " chrpath-replacement-native"
-EXTRANATIVEPATH += "chrpath-native"
-
-# Path mangling needed by the cross packaging
-# Note that we use := here to ensure that libdir and includedir are
-# target paths.
-target_base_prefix := "${base_prefix}"
-target_prefix := "${prefix}"
-target_exec_prefix := "${exec_prefix}"
-target_base_libdir = "${target_base_prefix}/${baselib}"
-target_libdir = "${target_exec_prefix}/${baselib}"
-target_includedir := "${includedir}"
-
-# Change to place files in SDKPATH
-base_prefix = "${SDKPATHNATIVE}"
-prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
-exec_prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
-bindir = "${exec_prefix}/bin/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
-sbindir = "${bindir}"
-base_bindir = "${bindir}"
-base_sbindir = "${bindir}"
-libdir = "${exec_prefix}/lib/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
-libexecdir = "${exec_prefix}/libexec/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
-
-FILES:${PN} = "${prefix}"
-
-export PKG_CONFIG_DIR = "${STAGING_DIR_HOST}${exec_prefix}/lib/pkgconfig"
-export PKG_CONFIG_SYSROOT_DIR = "${STAGING_DIR_HOST}"
-
-do_populate_sysroot[stamp-extra-info] = ""
-do_packagedata[stamp-extra-info] = ""
-
-USE_NLS = "${SDKUSE_NLS}"
-
-# We have to use TARGET_ARCH but we care about the absolute value
-# and not any particular tune that is enabled.
-TARGET_ARCH[vardepsexclude] = "TUNE_ARCH"
-
-PKGDATA_DIR = "${PKGDATA_DIR_SDK}"
-# If MLPREFIX is set by multilib code, shlibs
-# points to the wrong place so force it
-SHLIBSDIRS = "${PKGDATA_DIR}/nativesdk-shlibs2"
-SHLIBSWORKDIR = "${PKGDATA_DIR}/nativesdk-shlibs2"
-
-cross_canadian_bindirlinks () {
-	for i in linux ${CANADIANEXTRAOS}
-	do
-		for v in ${CANADIANEXTRAVENDOR}
-		do
-			d=${D}${bindir}/../${TARGET_ARCH}$v-$i
-			if [ -d $d ];
-			then
-			    continue
-			fi
-			install -d $d
-			for j in `ls ${D}${bindir}`
-			do
-				p=${TARGET_ARCH}$v-$i-`echo $j | sed -e s,${TARGET_PREFIX},,`
-				ln -s ../${TARGET_SYS}/$j $d/$p
-			done
-		done
-	done
-}
diff --git a/poky/meta/classes/cross.bbclass b/poky/meta/classes/cross.bbclass
deleted file mode 100644
index 9d95107..0000000
--- a/poky/meta/classes/cross.bbclass
+++ /dev/null
@@ -1,97 +0,0 @@
-inherit relocatable
-
-# Cross packages are built indirectly via dependency,
-# no need for them to be a direct target of 'world'
-EXCLUDE_FROM_WORLD = "1"
-
-CLASSOVERRIDE = "class-cross"
-PACKAGES = ""
-PACKAGES_DYNAMIC = ""
-PACKAGES_DYNAMIC:class-native = ""
-
-HOST_ARCH = "${BUILD_ARCH}"
-HOST_VENDOR = "${BUILD_VENDOR}"
-HOST_OS = "${BUILD_OS}"
-HOST_PREFIX = "${BUILD_PREFIX}"
-HOST_CC_ARCH = "${BUILD_CC_ARCH}"
-HOST_LD_ARCH = "${BUILD_LD_ARCH}"
-HOST_AS_ARCH = "${BUILD_AS_ARCH}"
-
-# No strip sysroot when DEBUG_BUILD is enabled
-INHIBIT_SYSROOT_STRIP ?= "${@oe.utils.vartrue('DEBUG_BUILD', '1', '', d)}"
-
-export lt_cv_sys_lib_dlsearch_path_spec = "${libdir} ${base_libdir} /lib /lib64 /usr/lib /usr/lib64"
-
-STAGING_DIR_HOST = "${RECIPE_SYSROOT_NATIVE}"
-
-PACKAGE_ARCH = "${BUILD_ARCH}"
-
-MULTIMACH_TARGET_SYS = "${BUILD_ARCH}${BUILD_VENDOR}-${BUILD_OS}"
-
-export PKG_CONFIG_DIR = "${exec_prefix}/lib/pkgconfig"
-export PKG_CONFIG_SYSROOT_DIR = ""
-
-TARGET_CPPFLAGS = ""
-TARGET_CFLAGS = ""
-TARGET_CXXFLAGS = ""
-TARGET_LDFLAGS = ""
-
-CPPFLAGS = "${BUILD_CPPFLAGS}"
-CFLAGS = "${BUILD_CFLAGS}"
-CXXFLAGS = "${BUILD_CFLAGS}"
-LDFLAGS = "${BUILD_LDFLAGS}"
-
-TOOLCHAIN_OPTIONS = ""
-
-# This class encodes staging paths into its scripts data so can only be
-# reused if we manipulate the paths.
-SSTATE_SCAN_CMD ?= "${SSTATE_SCAN_CMD_NATIVE}"
-
-# Path mangling needed by the cross packaging
-# Note that we use := here to ensure that libdir and includedir are
-# target paths.
-target_base_prefix := "${root_prefix}"
-target_prefix := "${prefix}"
-target_exec_prefix := "${exec_prefix}"
-target_base_libdir = "${target_base_prefix}/${baselib}"
-target_libdir = "${target_exec_prefix}/${baselib}"
-target_includedir := "${includedir}"
-
-# Overrides for paths
-CROSS_TARGET_SYS_DIR = "${TARGET_SYS}"
-prefix = "${STAGING_DIR_NATIVE}${prefix_native}"
-base_prefix = "${STAGING_DIR_NATIVE}"
-exec_prefix = "${STAGING_DIR_NATIVE}${prefix_native}"
-bindir = "${exec_prefix}/bin/${CROSS_TARGET_SYS_DIR}"
-sbindir = "${bindir}"
-base_bindir = "${bindir}"
-base_sbindir = "${bindir}"
-libdir = "${exec_prefix}/lib/${CROSS_TARGET_SYS_DIR}"
-libexecdir = "${exec_prefix}/libexec/${CROSS_TARGET_SYS_DIR}"
-
-do_populate_sysroot[sstate-inputdirs] = "${SYSROOT_DESTDIR}/${STAGING_DIR_NATIVE}/"
-do_packagedata[stamp-extra-info] = ""
-
-USE_NLS = "no"
-
-export CC = "${BUILD_CC}"
-export CXX = "${BUILD_CXX}"
-export FC = "${BUILD_FC}"
-export CPP = "${BUILD_CPP}"
-export LD = "${BUILD_LD}"
-export CCLD = "${BUILD_CCLD}"
-export AR = "${BUILD_AR}"
-export AS = "${BUILD_AS}"
-export RANLIB = "${BUILD_RANLIB}"
-export STRIP = "${BUILD_STRIP}"
-export NM = "${BUILD_NM}"
-
-inherit nopackages
-
-python do_addto_recipe_sysroot () {
-    bb.build.exec_func("extend_recipe_sysroot", d)
-}
-addtask addto_recipe_sysroot after do_populate_sysroot
-do_addto_recipe_sysroot[deptask] = "do_populate_sysroot"
-
-PATH:prepend = "${COREBASE}/scripts/cross-intercept:"
diff --git a/poky/meta/classes/crosssdk.bbclass b/poky/meta/classes/crosssdk.bbclass
deleted file mode 100644
index 04aecb6..0000000
--- a/poky/meta/classes/crosssdk.bbclass
+++ /dev/null
@@ -1,51 +0,0 @@
-inherit cross
-
-CLASSOVERRIDE = "class-crosssdk"
-NATIVESDKLIBC ?= "libc-glibc"
-LIBCOVERRIDE = ":${NATIVESDKLIBC}"
-MACHINEOVERRIDES = ""
-PACKAGE_ARCH = "${SDK_ARCH}"
-
-python () {
-    # set TUNE_PKGARCH to SDK_ARCH
-    d.setVar('TUNE_PKGARCH', d.getVar('SDK_ARCH'))
-    # Set features here to prevent appends and distro features backfill
-    # from modifying nativesdk distro features
-    features = set(d.getVar("DISTRO_FEATURES_NATIVESDK").split())
-    filtered = set(bb.utils.filter("DISTRO_FEATURES", d.getVar("DISTRO_FEATURES_FILTER_NATIVESDK"), d).split())
-    d.setVar("DISTRO_FEATURES", " ".join(sorted(features | filtered)))
-}
-
-STAGING_BINDIR_TOOLCHAIN = "${STAGING_DIR_NATIVE}${bindir_native}/${TARGET_ARCH}${TARGET_VENDOR}-${TARGET_OS}"
-
-# This class encodes staging paths into its scripts data so can only be
-# reused if we manipulate the paths.
-SSTATE_SCAN_CMD ?= "${SSTATE_SCAN_CMD_NATIVE}"
-
-TARGET_ARCH = "${SDK_ARCH}"
-TARGET_VENDOR = "${SDK_VENDOR}"
-TARGET_OS = "${SDK_OS}"
-TARGET_PREFIX = "${SDK_PREFIX}"
-TARGET_CC_ARCH = "${SDK_CC_ARCH}"
-TARGET_LD_ARCH = "${SDK_LD_ARCH}"
-TARGET_AS_ARCH = "${SDK_AS_ARCH}"
-TARGET_CPPFLAGS = ""
-TARGET_CFLAGS = ""
-TARGET_CXXFLAGS = ""
-TARGET_LDFLAGS = ""
-TARGET_FPU = ""
-
-
-target_libdir = "${SDKPATHNATIVE}${libdir_nativesdk}"
-target_includedir = "${SDKPATHNATIVE}${includedir_nativesdk}"
-target_base_libdir = "${SDKPATHNATIVE}${base_libdir_nativesdk}"
-target_prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
-target_exec_prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
-baselib = "lib"
-
-do_packagedata[stamp-extra-info] = ""
-
-# Need to force this to ensure consitency across architectures
-EXTRA_OECONF_GCC_FLOAT = ""
-
-USE_NLS = "no"
diff --git a/poky/meta/classes/cve-check.bbclass b/poky/meta/classes/cve-check.bbclass
index da7f933..4b4ea78 100644
--- a/poky/meta/classes/cve-check.bbclass
+++ b/poky/meta/classes/cve-check.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # This class is used to check recipes against public CVEs.
 #
 # In order to use this class just inherit the class in the
@@ -139,17 +145,18 @@
     """
     from oe.cve_check import get_patched_cves
 
-    if os.path.exists(d.getVar("CVE_CHECK_DB_FILE")):
-        try:
-            patched_cves = get_patched_cves(d)
-        except FileNotFoundError:
-            bb.fatal("Failure in searching patches")
-        ignored, patched, unpatched, status = check_cves(d, patched_cves)
-        if patched or unpatched or (d.getVar("CVE_CHECK_COVERAGE") == "1" and status):
-            cve_data = get_cve_info(d, patched + unpatched + ignored)
-            cve_write_data(d, patched, unpatched, ignored, cve_data, status)
-    else:
-        bb.note("No CVE database found, skipping CVE check")
+    with bb.utils.fileslocked([d.getVar("CVE_CHECK_DB_FILE_LOCK")], shared=True):
+        if os.path.exists(d.getVar("CVE_CHECK_DB_FILE")):
+            try:
+                patched_cves = get_patched_cves(d)
+            except FileNotFoundError:
+                bb.fatal("Failure in searching patches")
+            ignored, patched, unpatched, status = check_cves(d, patched_cves)
+            if patched or unpatched or (d.getVar("CVE_CHECK_COVERAGE") == "1" and status):
+                cve_data = get_cve_info(d, patched + unpatched + ignored)
+                cve_write_data(d, patched, unpatched, ignored, cve_data, status)
+        else:
+            bb.note("No CVE database found, skipping CVE check")
 
 }
 
@@ -290,7 +297,8 @@
             vendor = "%"
 
         # Find all relevant CVE IDs.
-        for cverow in conn.execute("SELECT DISTINCT ID FROM PRODUCTS WHERE PRODUCT IS ? AND VENDOR LIKE ?", (product, vendor)):
+        cve_cursor = conn.execute("SELECT DISTINCT ID FROM PRODUCTS WHERE PRODUCT IS ? AND VENDOR LIKE ?", (product, vendor))
+        for cverow in cve_cursor:
             cve = cverow[0]
 
             if cve in cve_ignore:
@@ -309,7 +317,8 @@
             vulnerable = False
             ignored = False
 
-            for row in conn.execute("SELECT * FROM PRODUCTS WHERE ID IS ? AND PRODUCT IS ? AND VENDOR LIKE ?", (cve, product, vendor)):
+            product_cursor = conn.execute("SELECT * FROM PRODUCTS WHERE ID IS ? AND PRODUCT IS ? AND VENDOR LIKE ?", (cve, product, vendor))
+            for row in product_cursor:
                 (_, _, _, version_start, operator_start, version_end, operator_end) = row
                 #bb.debug(2, "Evaluating row " + str(row))
                 if cve in cve_ignore:
@@ -353,10 +362,12 @@
                         bb.note("%s-%s is vulnerable to %s" % (pn, real_pv, cve))
                         cves_unpatched.append(cve)
                     break
+            product_cursor.close()
 
             if not vulnerable:
                 bb.note("%s-%s is not vulnerable to %s" % (pn, real_pv, cve))
                 patched_cves.add(cve)
+        cve_cursor.close()
 
         if not cves_in_product:
             bb.note("No CVE records found for product %s, pn %s" % (product, pn))
@@ -381,14 +392,15 @@
     conn = sqlite3.connect(db_file, uri=True)
 
     for cve in cves:
-        for row in conn.execute("SELECT * FROM NVD WHERE ID IS ?", (cve,)):
+        cursor = conn.execute("SELECT * FROM NVD WHERE ID IS ?", (cve,))
+        for row in cursor:
             cve_data[row[0]] = {}
             cve_data[row[0]]["summary"] = row[1]
             cve_data[row[0]]["scorev2"] = row[2]
             cve_data[row[0]]["scorev3"] = row[3]
             cve_data[row[0]]["modified"] = row[4]
             cve_data[row[0]]["vector"] = row[5]
-
+        cursor.close()
     conn.close()
     return cve_data
 
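The cve-check changes above do two things: the whole database lookup now runs while holding a shared lock on CVE_CHECK_DB_FILE_LOCK, so readers in parallel recipe tasks can proceed concurrently while a writer updating the database holds the lock exclusively, and the sqlite cursors are closed explicitly rather than being left to the garbage collector. A minimal standalone sketch of that shared-lock-plus-explicit-close pattern, using plain fcntl and sqlite3 instead of the bb.utils helpers (the PRODUCTS query mirrors the one in the class; db_path and lock_path are hypothetical paths):

    import fcntl
    import sqlite3
    from contextlib import contextmanager

    @contextmanager
    def shared_lock(lock_path):
        # LOCK_SH admits any number of readers but blocks while a writer
        # holds LOCK_EX on the same file (and vice versa).
        with open(lock_path, "a") as lockfile:
            fcntl.flock(lockfile, fcntl.LOCK_SH)
            try:
                yield
            finally:
                fcntl.flock(lockfile, fcntl.LOCK_UN)

    def read_cve_ids(db_path, lock_path, product, vendor="%"):
        with shared_lock(lock_path):
            conn = sqlite3.connect(db_path)
            cursor = conn.execute(
                "SELECT DISTINCT ID FROM PRODUCTS WHERE PRODUCT IS ? AND VENDOR LIKE ?",
                (product, vendor))
            try:
                return [row[0] for row in cursor]
            finally:
                cursor.close()  # close promptly, as the patch now does
                conn.close()
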
diff --git a/poky/meta/classes/debian.bbclass b/poky/meta/classes/debian.bbclass
deleted file mode 100644
index 8367be9..0000000
--- a/poky/meta/classes/debian.bbclass
+++ /dev/null
@@ -1,150 +0,0 @@
-# Debian package renaming only occurs when a package is built
-# We therefore have to make sure we build all runtime packages
-# before building the current package to make the packages runtime
-# depends are correct
-#
-# Custom library package names can be defined setting
-# DEBIANNAME: + pkgname to the desired name.
-#
-# Better expressed as ensure all RDEPENDS package before we package
-# This means we can't have circular RDEPENDS/RRECOMMENDS
-
-AUTO_LIBNAME_PKGS = "${PACKAGES}"
-
-inherit package
-
-DEBIANRDEP = "do_packagedata"
-do_package_write_ipk[deptask] = "${DEBIANRDEP}"
-do_package_write_deb[deptask] = "${DEBIANRDEP}"
-do_package_write_tar[deptask] = "${DEBIANRDEP}"
-do_package_write_rpm[deptask] = "${DEBIANRDEP}"
-do_package_write_ipk[rdeptask] = "${DEBIANRDEP}"
-do_package_write_deb[rdeptask] = "${DEBIANRDEP}"
-do_package_write_tar[rdeptask] = "${DEBIANRDEP}"
-do_package_write_rpm[rdeptask] = "${DEBIANRDEP}"
-
-python () {
-    if not d.getVar("PACKAGES"):
-        d.setVar("DEBIANRDEP", "")
-}
-
-python debian_package_name_hook () {
-    import glob, copy, stat, errno, re, pathlib, subprocess
-
-    pkgdest = d.getVar("PKGDEST")
-    packages = d.getVar('PACKAGES')
-    so_re = re.compile(r"lib.*\.so")
-
-    def socrunch(s):
-        s = s.lower().replace('_', '-')
-        m = re.match(r"^(.*)(.)\.so\.(.*)$", s)
-        if m is None:
-            return None
-        if m.group(2) in '0123456789':
-            bin = '%s%s-%s' % (m.group(1), m.group(2), m.group(3))
-        else:
-            bin = m.group(1) + m.group(2) + m.group(3)
-        dev = m.group(1) + m.group(2)
-        return (bin, dev)
-
-    def isexec(path):
-        try:
-            s = os.stat(path)
-        except (os.error, AttributeError):
-            return 0
-        return (s[stat.ST_MODE] & stat.S_IEXEC)
-
-    def add_rprovides(pkg, d):
-        newpkg = d.getVar('PKG:' + pkg)
-        if newpkg and newpkg != pkg:
-            provs = (d.getVar('RPROVIDES:' + pkg) or "").split()
-            if pkg not in provs:
-                d.appendVar('RPROVIDES:' + pkg, " " + pkg + " (=" + d.getVar("PKGV") + ")")
-
-    def auto_libname(packages, orig_pkg):
-        p = lambda var: pathlib.PurePath(d.getVar(var))
-        libdirs = (p("base_libdir"), p("libdir"))
-        bindirs = (p("base_bindir"), p("base_sbindir"), p("bindir"), p("sbindir"))
-
-        sonames = []
-        has_bins = 0
-        has_libs = 0
-        for f in pkgfiles[orig_pkg]:
-            # This is .../packages-split/orig_pkg/
-            pkgpath = pathlib.PurePath(pkgdest, orig_pkg)
-            # Strip pkgpath off the full path to a file in the package, re-root
-            # so it is absolute, and then get the parent directory of the file.
-            path = pathlib.PurePath("/") / (pathlib.PurePath(f).relative_to(pkgpath).parent)
-            if path in bindirs:
-                has_bins = 1
-            if path in libdirs:
-                has_libs = 1
-                if so_re.match(os.path.basename(f)):
-                    try:
-                        cmd = [d.expand("${TARGET_PREFIX}objdump"), "-p", f]
-                        output = subprocess.check_output(cmd).decode("utf-8")
-                        for m in re.finditer(r"\s+SONAME\s+([^\s]+)", output):
-                            if m.group(1) not in sonames:
-                                sonames.append(m.group(1))
-                    except subprocess.CalledProcessError:
-                        pass
-        bb.debug(1, 'LIBNAMES: pkg %s libs %d bins %d sonames %s' % (orig_pkg, has_libs, has_bins, sonames))
-        soname = None
-        if len(sonames) == 1:
-            soname = sonames[0]
-        elif len(sonames) > 1:
-            lead = d.getVar('LEAD_SONAME')
-            if lead:
-                r = re.compile(lead)
-                filtered = []
-                for s in sonames:
-                    if r.match(s):
-                        filtered.append(s)
-                if len(filtered) == 1:
-                    soname = filtered[0]
-                elif len(filtered) > 1:
-                    bb.note("Multiple matches (%s) for LEAD_SONAME '%s'" % (", ".join(filtered), lead))
-                else:
-                    bb.note("Multiple libraries (%s) found, but LEAD_SONAME '%s' doesn't match any of them" % (", ".join(sonames), lead))
-            else:
-                bb.note("Multiple libraries (%s) found and LEAD_SONAME not defined" % ", ".join(sonames))
-
-        if has_libs and not has_bins and soname:
-            soname_result = socrunch(soname)
-            if soname_result:
-                (pkgname, devname) = soname_result
-                for pkg in packages.split():
-                    if (d.getVar('PKG:' + pkg, False) or d.getVar('DEBIAN_NOAUTONAME:' + pkg, False)):
-                        add_rprovides(pkg, d)
-                        continue
-                    debian_pn = d.getVar('DEBIANNAME:' + pkg, False)
-                    if debian_pn:
-                        newpkg = debian_pn
-                    elif pkg == orig_pkg:
-                        newpkg = pkgname
-                    else:
-                        newpkg = pkg.replace(orig_pkg, devname, 1)
-                    mlpre=d.getVar('MLPREFIX')
-                    if mlpre:
-                        if not newpkg.find(mlpre) == 0:
-                            newpkg = mlpre + newpkg
-                    if newpkg != pkg:
-                        bb.note("debian: renaming %s to %s" % (pkg, newpkg))
-                        d.setVar('PKG:' + pkg, newpkg)
-                        add_rprovides(pkg, d)
-        else:
-            add_rprovides(orig_pkg, d)
-
-    # reversed sort is needed when some package is substring of another
-    # ie in ncurses we get without reverse sort: 
-    # DEBUG: LIBNAMES: pkgname libtic5 devname libtic pkg ncurses-libtic orig_pkg ncurses-libtic debian_pn None newpkg libtic5
-    # and later
-    # DEBUG: LIBNAMES: pkgname libtic5 devname libtic pkg ncurses-libticw orig_pkg ncurses-libtic debian_pn None newpkg libticw
-    # so we need to handle ncurses-libticw->libticw5 before ncurses-libtic->libtic5
-    for pkg in sorted((d.getVar('AUTO_LIBNAME_PKGS') or "").split(), reverse=True):
-        auto_libname(packages, pkg)
-}
-
-EXPORT_FUNCTIONS package_name_hook
-
-DEBIAN_NAMES = "1"
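For a concrete feel of the renaming rule implemented by socrunch() above, here is a standalone copy of just that helper with a couple of worked inputs; the real hook additionally inspects package contents and extracts SONAMEs with objdump before deciding whether to rename at all:

    import re

    def socrunch(s):
        s = s.lower().replace('_', '-')
        m = re.match(r"^(.*)(.)\.so\.(.*)$", s)
        if m is None:
            return None
        if m.group(2) in '0123456789':
            bin_pkg = '%s%s-%s' % (m.group(1), m.group(2), m.group(3))
        else:
            bin_pkg = m.group(1) + m.group(2) + m.group(3)
        dev_pkg = m.group(1) + m.group(2)
        return (bin_pkg, dev_pkg)

    # Name ends in a letter: the version is appended directly.
    assert socrunch("libtic.so.5") == ("libtic5", "libtic")
    # Name ends in a digit: a dash separates it from the version.
    assert socrunch("libglib-2.0.so.0") == ("libglib-2.0-0", "libglib-2.0")
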
diff --git a/poky/meta/classes/deploy.bbclass b/poky/meta/classes/deploy.bbclass
deleted file mode 100644
index 7fbffe9..0000000
--- a/poky/meta/classes/deploy.bbclass
+++ /dev/null
@@ -1,12 +0,0 @@
-DEPLOYDIR = "${WORKDIR}/deploy-${PN}"
-SSTATETASKS += "do_deploy"
-do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"
-do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
-
-python do_deploy_setscene () {
-    sstate_setscene(d)
-}
-addtask do_deploy_setscene
-do_deploy[dirs] = "${B}"
-do_deploy[cleandirs] = "${DEPLOYDIR}"
-do_deploy[stamp-extra-info] = "${MACHINE_ARCH}"
diff --git a/poky/meta/classes/devicetree.bbclass b/poky/meta/classes/devicetree.bbclass
deleted file mode 100644
index 2a62ae7..0000000
--- a/poky/meta/classes/devicetree.bbclass
+++ /dev/null
@@ -1,148 +0,0 @@
-# This bbclass implements device tree compliation for user provided device tree
-# sources. The compilation of the device tree sources is the same as the kernel
-# device tree compilation process, this includes being able to include sources
-# from the kernel such as soc dtsi files or header files such as gpio.h. In
-# addition to device trees this bbclass also handles compilation of device tree
-# overlays.
-#
-# The output of this class behaves similar to how kernel-devicetree.bbclass
-# operates in that the output files are installed into /boot/devicetree.
-# However this class on purpose separates the deployed device trees into the
-# 'devicetree' subdirectory. This prevents clashes with the kernel-devicetree
-# output. Additionally the device trees are populated into the sysroot for
-# access via the sysroot from within other recipes.
-
-SECTION ?= "bsp"
-
-# The default inclusion of kernel device tree includes and headers means that
-# device trees built with them are at least GPL-2.0-only (and in some cases dual
-# licensed). Default to GPL-2.0-only if the recipe does not specify a license.
-LICENSE ?= "GPL-2.0-only"
-LIC_FILES_CHKSUM ?= "file://${COMMON_LICENSE_DIR}/GPL-2.0-only;md5=801f80980d171dd6425610833a22dbe6"
-
-INHIBIT_DEFAULT_DEPS = "1"
-DEPENDS += "dtc-native"
-
-inherit deploy kernel-arch
-
-COMPATIBLE_MACHINE ?= "^$"
-
-PROVIDES = "virtual/dtb"
-
-PACKAGE_ARCH = "${MACHINE_ARCH}"
-
-SYSROOT_DIRS += "/boot/devicetree"
-FILES:${PN} = "/boot/devicetree/*.dtb /boot/devicetree/*.dtbo"
-
-S = "${WORKDIR}"
-B = "${WORKDIR}/build"
-
-# Default kernel includes, these represent what are normally used for in-kernel
-# sources.
-KERNEL_INCLUDE ??= " \
-        ${STAGING_KERNEL_DIR}/arch/${ARCH}/boot/dts \
-        ${STAGING_KERNEL_DIR}/arch/${ARCH}/boot/dts/* \
-        ${STAGING_KERNEL_DIR}/scripts/dtc/include-prefixes \
-        "
-
-DT_INCLUDE[doc] = "Search paths to be made available to both the device tree compiler and preprocessor for inclusion."
-DT_INCLUDE ?= "${DT_FILES_PATH} ${KERNEL_INCLUDE}"
-DT_FILES_PATH[doc] = "Defaults to source directory, can be used to select dts files that are not in source (e.g. generated)."
-DT_FILES_PATH ?= "${S}"
-
-DT_PADDING_SIZE[doc] = "Size of padding on the device tree blob, used as extra space typically for additional properties during boot."
-DT_PADDING_SIZE ??= "0x3000"
-DT_RESERVED_MAP[doc] = "Number of reserved map entires."
-DT_RESERVED_MAP ??= "8"
-DT_BOOT_CPU[doc] = "The boot cpu, defaults to 0"
-DT_BOOT_CPU ??= "0"
-
-DTC_FLAGS ?= "-R ${DT_RESERVED_MAP} -b ${DT_BOOT_CPU}"
-DTC_PPFLAGS ?= "-nostdinc -undef -D__DTS__ -x assembler-with-cpp"
-DTC_BFLAGS ?= "-p ${DT_PADDING_SIZE} -@"
-DTC_OFLAGS ?= "-p 0 -@ -H epapr"
-
-python () {
-    if d.getVar("KERNEL_INCLUDE"):
-        # auto add dependency on kernel tree, but only if kernel include paths
-        # are specified.
-        d.appendVarFlag("do_compile", "depends", " virtual/kernel:do_configure")
-}
-
-def expand_includes(varname, d):
-    import glob
-    includes = set()
-    # expand all includes with glob
-    for i in (d.getVar(varname) or "").split():
-        for g in glob.glob(i):
-            if os.path.isdir(g): # only add directories to include path
-                includes.add(g)
-    return includes
-
-def devicetree_source_is_overlay(path):
-    # determine if a dts file is an overlay by checking if it uses "/plugin/;"
-    with open(path, "r") as f:
-        for i in f:
-            if i.startswith("/plugin/;"):
-                return True
-    return False
-
-def devicetree_compile(dtspath, includes, d):
-    import subprocess
-    dts = os.path.basename(dtspath)
-    dtname = os.path.splitext(dts)[0]
-    bb.note("Processing {0} [{1}]".format(dtname, dts))
-
-    # preprocess
-    ppargs = d.getVar("BUILD_CPP").split()
-    ppargs += (d.getVar("DTC_PPFLAGS") or "").split()
-    for i in includes:
-        ppargs.append("-I{0}".format(i))
-    ppargs += ["-o", "{0}.pp".format(dts), dtspath]
-    bb.note("Running {0}".format(" ".join(ppargs)))
-    subprocess.run(ppargs, check = True)
-
-    # determine if the file is an overlay or not (using the preprocessed file)
-    isoverlay = devicetree_source_is_overlay("{0}.pp".format(dts))
-
-    # compile
-    dtcargs = ["dtc"] + (d.getVar("DTC_FLAGS") or "").split()
-    if isoverlay:
-        dtcargs += (d.getVar("DTC_OFLAGS") or "").split()
-    else:
-        dtcargs += (d.getVar("DTC_BFLAGS") or "").split()
-    for i in includes:
-        dtcargs += ["-i", i]
-    dtcargs += ["-o", "{0}.{1}".format(dtname, "dtbo" if isoverlay else "dtb")]
-    dtcargs += ["-I", "dts", "-O", "dtb", "{0}.pp".format(dts)]
-    bb.note("Running {0}".format(" ".join(dtcargs)))
-    subprocess.run(dtcargs, check = True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
-python devicetree_do_compile() {
-    includes = expand_includes("DT_INCLUDE", d)
-    listpath = d.getVar("DT_FILES_PATH")
-    for dts in os.listdir(listpath):
-        dtspath = os.path.join(listpath, dts)
-        try:
-            if not(os.path.isfile(dtspath)) or not(dts.endswith(".dts") or devicetree_source_is_overlay(dtspath)):
-                continue # skip non-.dts files and non-overlay files
-        except:
-            continue # skip if can't determine if overlay
-        devicetree_compile(dtspath, includes, d)
-}
-
-devicetree_do_install() {
-    for DTB_FILE in `ls *.dtb *.dtbo`; do
-        install -Dm 0644 ${B}/${DTB_FILE} ${D}/boot/devicetree/${DTB_FILE}
-    done
-}
-
-devicetree_do_deploy() {
-    for DTB_FILE in `ls *.dtb *.dtbo`; do
-        install -Dm 0644 ${B}/${DTB_FILE} ${DEPLOYDIR}/devicetree/${DTB_FILE}
-    done
-}
-addtask deploy before do_build after do_install
-
-EXPORT_FUNCTIONS do_compile do_install do_deploy
-
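Outside of bitbake, the compile path of this class boils down to a two-stage pipeline: run the device tree source through the C preprocessor (so kernel dtsi and header includes resolve), then feed the preprocessed output to dtc, switching to the overlay flags when the source contains "/plugin/;". A condensed, standalone sketch of that pipeline, using the default flag values from the class and assuming plain cpp and dtc binaries on PATH (the class itself uses ${BUILD_CPP} and dtc-native):

    import os
    import subprocess

    DTC_PPFLAGS = "-nostdinc -undef -D__DTS__ -x assembler-with-cpp".split()
    DTC_BFLAGS = ["-p", "0x3000", "-@"]            # base tree: padding + symbols
    DTC_OFLAGS = ["-p", "0", "-@", "-H", "epapr"]  # overlay output

    def is_overlay(path):
        with open(path) as f:
            return any(line.startswith("/plugin/;") for line in f)

    def compile_dts(dtspath, includes=()):
        name = os.path.splitext(os.path.basename(dtspath))[0]
        pp = name + ".pp"

        # Stage 1: preprocess, making the include paths visible to #include.
        cpp_cmd = ["cpp"] + DTC_PPFLAGS + ["-I" + i for i in includes]
        subprocess.run(cpp_cmd + ["-o", pp, dtspath], check=True)

        # Stage 2: compile, picking overlay (.dtbo) flags if the source is a plugin.
        overlay = is_overlay(pp)
        out = name + (".dtbo" if overlay else ".dtb")
        dtc_cmd = ["dtc"] + (DTC_OFLAGS if overlay else DTC_BFLAGS)
        for i in includes:
            dtc_cmd += ["-i", i]
        dtc_cmd += ["-o", out, "-I", "dts", "-O", "dtb", pp]
        subprocess.run(dtc_cmd, check=True)
        return out
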
diff --git a/poky/meta/classes/devshell.bbclass b/poky/meta/classes/devshell.bbclass
deleted file mode 100644
index 247d044..0000000
--- a/poky/meta/classes/devshell.bbclass
+++ /dev/null
@@ -1,160 +0,0 @@
-inherit terminal
-
-DEVSHELL = "${SHELL}"
-
-PATH:prepend:task-devshell = "${COREBASE}/scripts/git-intercept:"
-
-python do_devshell () {
-    if d.getVarFlag("do_devshell", "manualfakeroot"):
-       d.prependVar("DEVSHELL", "pseudo ")
-       fakeenv = d.getVar("FAKEROOTENV").split()
-       for f in fakeenv:
-            k = f.split("=")
-            d.setVar(k[0], k[1])
-            d.appendVar("OE_TERMINAL_EXPORTS", " " + k[0])
-       d.delVarFlag("do_devshell", "fakeroot")
-
-    oe_terminal(d.getVar('DEVSHELL'), 'OpenEmbedded Developer Shell', d)
-}
-
-addtask devshell after do_patch do_prepare_recipe_sysroot
-
-# The directory that the terminal starts in
-DEVSHELL_STARTDIR ?= "${S}"
-do_devshell[dirs] = "${DEVSHELL_STARTDIR}"
-do_devshell[nostamp] = "1"
-do_devshell[network] = "1"
-
-# devshell and fakeroot/pseudo need careful handling since only the final
-# command should run under fakeroot emulation, any X connection should
-# be done as the normal user. We therfore carefully construct the envionment
-# manually
-python () {
-    if d.getVarFlag("do_devshell", "fakeroot"):
-       # We need to signal our code that we want fakeroot however we
-       # can't manipulate the environment and variables here yet (see YOCTO #4795)
-       d.setVarFlag("do_devshell", "manualfakeroot", "1")
-       d.delVarFlag("do_devshell", "fakeroot")
-} 
-
-def pydevshell(d):
-
-    import code
-    import select
-    import signal
-    import termios
-
-    m, s = os.openpty()
-    sname = os.ttyname(s)
-
-    def noechoicanon(fd):
-        old = termios.tcgetattr(fd)
-        old[3] = old[3] &~ termios.ECHO &~ termios.ICANON
-        # &~ termios.ISIG
-        termios.tcsetattr(fd, termios.TCSADRAIN, old)
-
-    # No echo or buffering over the pty
-    noechoicanon(s)
-
-    pid = os.fork()
-    if pid:
-        os.close(m)
-        oe_terminal("oepydevshell-internal.py %s %d" % (sname, pid), 'OpenEmbedded Developer PyShell', d)
-        os._exit(0)
-    else:
-        os.close(s)
-
-        os.dup2(m, sys.stdin.fileno())
-        os.dup2(m, sys.stdout.fileno())
-        os.dup2(m, sys.stderr.fileno())
-
-        bb.utils.nonblockingfd(sys.stdout)
-        bb.utils.nonblockingfd(sys.stderr)
-        bb.utils.nonblockingfd(sys.stdin)
-
-        _context = {
-            "os": os,
-            "bb": bb,
-            "time": time,
-            "d": d,
-        }
-
-        ps1 = "pydevshell> "
-        ps2 = "... "
-        buf = []
-        more = False
-
-        i = code.InteractiveInterpreter(locals=_context)
-        print("OE PyShell (PN = %s)\n" % d.getVar("PN"))
-
-        def prompt(more):
-            if more:
-                prompt = ps2
-            else:
-                prompt = ps1
-            sys.stdout.write(prompt)
-            sys.stdout.flush()
-
-        # Restore Ctrl+C since bitbake masks this
-        def signal_handler(signal, frame):
-            raise KeyboardInterrupt
-        signal.signal(signal.SIGINT, signal_handler)
-
-        child = None
-
-        prompt(more)
-        while True:
-            try:
-                try:
-                    (r, _, _) = select.select([sys.stdin], [], [], 1)
-                    if not r:
-                        continue
-                    line = sys.stdin.readline().strip()
-                    if not line:
-                        prompt(more)
-                        continue
-                except EOFError as e:
-                    sys.stdout.write("\n")
-                    sys.stdout.flush()
-                except (OSError, IOError) as e:
-                    if e.errno == 11:
-                        continue
-                    if e.errno == 5:
-                        return
-                    raise
-                else:
-                    if not child:
-                        child = int(line)
-                        continue
-                    buf.append(line)
-                    source = "\n".join(buf)
-                    more = i.runsource(source, "<pyshell>")
-                    if not more:
-                        buf = []
-                    sys.stderr.flush()
-                    prompt(more)
-            except KeyboardInterrupt:
-                i.write("\nKeyboardInterrupt\n")
-                buf = []
-                more = False
-                prompt(more)
-            except SystemExit:
-                # Easiest way to ensure everything exits
-                os.kill(child, signal.SIGTERM)
-                break
-
-python do_pydevshell() {
-    import signal
-
-    try:
-        pydevshell(d)
-    except SystemExit:
-        # Stop the SIGTERM above causing an error exit code
-        return
-    finally:
-        return
-}
-addtask pydevshell after do_patch
-
-do_pydevshell[nostamp] = "1"
-do_pydevshell[network] = "1"
diff --git a/poky/meta/classes/devtool-source.bbclass b/poky/meta/classes/devtool-source.bbclass
index 41900e6..a02b1e9 100644
--- a/poky/meta/classes/devtool-source.bbclass
+++ b/poky/meta/classes/devtool-source.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Development tool - source extraction helper class
 #
 # NOTE: this class is intended for use by devtool and should not be
diff --git a/poky/meta/classes/devupstream.bbclass b/poky/meta/classes/devupstream.bbclass
deleted file mode 100644
index ba6dc41..0000000
--- a/poky/meta/classes/devupstream.bbclass
+++ /dev/null
@@ -1,55 +0,0 @@
-# Class for use in BBCLASSEXTEND to make it easier to have a single recipe that
-# can build both stable tarballs and snapshots from upstream source
-# repositories.
-#
-# Usage:
-# BBCLASSEXTEND = "devupstream:target"
-# SRC_URI:class-devupstream = "git://git.example.com/example;branch=master"
-# SRCREV:class-devupstream = "abcdef"
-#
-# If the first entry in SRC_URI is a git: URL then S is rewritten to
-# WORKDIR/git.
-#
-# There are a few caveats that remain to be solved:
-# - You can't build native or nativesdk recipes using for example
-#   devupstream:native, you can only build target recipes.
-# - If the fetcher requires native tools (such as subversion-native) then
-#   bitbake won't be able to add them automatically.
-
-python devupstream_virtclass_handler () {
-    # Do nothing if this is inherited, as it's for BBCLASSEXTEND
-    if "devupstream" not in (d.getVar('BBCLASSEXTEND') or ""):
-        bb.error("Don't inherit devupstream, use BBCLASSEXTEND")
-        return
-
-    variant = d.getVar("BBEXTENDVARIANT")
-    if variant not in ("target", "native"):
-        bb.error("Unsupported variant %s. Pass the variant when using devupstream, for example devupstream:target" % variant)
-        return
-
-    # Develpment releases are never preferred by default
-    d.setVar("DEFAULT_PREFERENCE", "-1")
-
-    src_uri = d.getVar("SRC_URI:class-devupstream") or d.getVar("SRC_URI")
-    uri = bb.fetch2.URI(src_uri.split()[0])
-
-    if uri.scheme == "git" and not d.getVar("S:class-devupstream"):
-        d.setVar("S", "${WORKDIR}/git")
-
-    # Modify the PV if the recipe hasn't already overridden it
-    pv = d.getVar("PV")
-    proto_marker = "+" + uri.scheme
-    if proto_marker not in pv and not d.getVar("PV:class-devupstream"):
-        d.setVar("PV", pv + proto_marker + "${SRCPV}")
-
-    if variant == "native":
-        pn = d.getVar("PN")
-        d.setVar("PN", "%s-native" % (pn))
-        fn = d.getVar("FILE")
-        bb.parse.BBHandler.inherit("native", fn, 0, d)
-
-    d.appendVar("CLASSOVERRIDE", ":class-devupstream")
-}
-
-addhandler devupstream_virtclass_handler
-devupstream_virtclass_handler[eventmask] = "bb.event.RecipePreFinalise"
diff --git a/poky/meta/classes/distro_features_check.bbclass b/poky/meta/classes/distro_features_check.bbclass
deleted file mode 100644
index 8124a8c..0000000
--- a/poky/meta/classes/distro_features_check.bbclass
+++ /dev/null
@@ -1,7 +0,0 @@
-# Temporarily provide fallback to the old name of the class
-
-python __anonymous() {
-    bb.warn("distro_features_check.bbclass is deprecated, please use features_check.bbclass instead")
-}
-
-inherit features_check
diff --git a/poky/meta/classes/distrooverrides.bbclass b/poky/meta/classes/distrooverrides.bbclass
index bf3a2b2..8d9d7cd 100644
--- a/poky/meta/classes/distrooverrides.bbclass
+++ b/poky/meta/classes/distrooverrides.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Turns certain DISTRO_FEATURES into overrides with the same
 # name plus a df- prefix. Ensures that these special
 # distro features remain set also for native and nativesdk
diff --git a/poky/meta/classes/dos2unix.bbclass b/poky/meta/classes/dos2unix.bbclass
deleted file mode 100644
index 3fc17e2..0000000
--- a/poky/meta/classes/dos2unix.bbclass
+++ /dev/null
@@ -1,14 +0,0 @@
-# Class for use to convert all CRLF line terminators to LF
-# provided that some projects are being developed/maintained
-# on Windows so they have different line terminators(CRLF) vs
-# on Linux(LF), which can cause annoying patching errors during
-# git push/checkout processes.
-
-do_convert_crlf_to_lf[depends] += "dos2unix-native:do_populate_sysroot"
-
-# Convert CRLF line terminators to LF
-do_convert_crlf_to_lf () {
-	find ${S} -type f -exec dos2unix {} \;
-}
-
-addtask convert_crlf_to_lf after do_unpack before do_patch
diff --git a/poky/meta/classes/externalsrc.bbclass b/poky/meta/classes/externalsrc.bbclass
deleted file mode 100644
index 90792a7..0000000
--- a/poky/meta/classes/externalsrc.bbclass
+++ /dev/null
@@ -1,268 +0,0 @@
-# Copyright (C) 2012 Linux Foundation
-# Author: Richard Purdie
-# Some code and influence taken from srctree.bbclass:
-# Copyright (C) 2009 Chris Larson <clarson@kergoth.com>
-# Released under the MIT license (see COPYING.MIT for the terms)
-#
-# externalsrc.bbclass enables use of an existing source tree, usually external to
-# the build system to build a piece of software rather than the usual fetch/unpack/patch
-# process.
-#
-# To use, add externalsrc to the global inherit and set EXTERNALSRC to point at the
-# directory you want to use containing the sources e.g. from local.conf for a recipe
-# called "myrecipe" you would do:
-#
-# INHERIT += "externalsrc"
-# EXTERNALSRC:pn-myrecipe = "/path/to/my/source/tree"
-#
-# In order to make this class work for both target and native versions (or with
-# multilibs/cross or other BBCLASSEXTEND variants), B is set to point to a separate
-# directory under the work directory (split source and build directories). This is
-# the default, but the build directory can be set to the source directory if
-# circumstances dictate by setting EXTERNALSRC_BUILD to the same value, e.g.:
-#
-# EXTERNALSRC_BUILD:pn-myrecipe = "/path/to/my/source/tree"
-#
-
-SRCTREECOVEREDTASKS ?= "do_patch do_unpack do_fetch"
-EXTERNALSRC_SYMLINKS ?= "oe-workdir:${WORKDIR} oe-logs:${T}"
-
-python () {
-    externalsrc = d.getVar('EXTERNALSRC')
-    externalsrcbuild = d.getVar('EXTERNALSRC_BUILD')
-
-    if externalsrc and not externalsrc.startswith("/"):
-        bb.error("EXTERNALSRC must be an absolute path")
-    if externalsrcbuild and not externalsrcbuild.startswith("/"):
-        bb.error("EXTERNALSRC_BUILD must be an absolute path")
-
-    # If this is the base recipe and EXTERNALSRC is set for it or any of its
-    # derivatives, then enable BB_DONT_CACHE to force the recipe to always be
-    # re-parsed so that the file-checksums function for do_compile is run every
-    # time.
-    bpn = d.getVar('BPN')
-    classextend = (d.getVar('BBCLASSEXTEND') or '').split()
-    if bpn == d.getVar('PN') or not classextend:
-        if (externalsrc or
-                ('native' in classextend and
-                 d.getVar('EXTERNALSRC:pn-%s-native' % bpn)) or
-                ('nativesdk' in classextend and
-                 d.getVar('EXTERNALSRC:pn-nativesdk-%s' % bpn)) or
-                ('cross' in classextend and
-                 d.getVar('EXTERNALSRC:pn-%s-cross' % bpn))):
-            d.setVar('BB_DONT_CACHE', '1')
-
-    if externalsrc:
-        import oe.recipeutils
-        import oe.path
-
-        d.setVar('S', externalsrc)
-        if externalsrcbuild:
-            d.setVar('B', externalsrcbuild)
-        else:
-            d.setVar('B', '${WORKDIR}/${BPN}-${PV}/')
-
-        local_srcuri = []
-        fetch = bb.fetch2.Fetch((d.getVar('SRC_URI') or '').split(), d)
-        for url in fetch.urls:
-            url_data = fetch.ud[url]
-            parm = url_data.parm
-            if (url_data.type == 'file' or
-                    url_data.type == 'npmsw' or url_data.type == 'crate' or
-                    'type' in parm and parm['type'] == 'kmeta'):
-                local_srcuri.append(url)
-
-        d.setVar('SRC_URI', ' '.join(local_srcuri))
-
-        # Dummy value because the default function can't be called with blank SRC_URI
-        d.setVar('SRCPV', '999')
-
-        if d.getVar('CONFIGUREOPT_DEPTRACK') == '--disable-dependency-tracking':
-            d.setVar('CONFIGUREOPT_DEPTRACK', '')
-
-        tasks = filter(lambda k: d.getVarFlag(k, "task"), d.keys())
-
-        for task in tasks:
-            if task.endswith("_setscene"):
-                # sstate is never going to work for external source trees, disable it
-                bb.build.deltask(task, d)
-            elif os.path.realpath(d.getVar('S')) == os.path.realpath(d.getVar('B')):
-                # Since configure will likely touch ${S}, ensure only we lock so one task has access at a time
-                d.appendVarFlag(task, "lockfiles", " ${S}/singletask.lock")
-
-            for funcname in [task, "base_" + task, "kernel_" + task]:
-                # We do not want our source to be wiped out, ever (kernel.bbclass does this for do_clean)
-                cleandirs = oe.recipeutils.split_var_value(d.getVarFlag(funcname, 'cleandirs', False) or '')
-                setvalue = False
-                for cleandir in cleandirs[:]:
-                    if oe.path.is_path_parent(externalsrc, d.expand(cleandir)):
-                        cleandirs.remove(cleandir)
-                        setvalue = True
-                if setvalue:
-                    d.setVarFlag(funcname, 'cleandirs', ' '.join(cleandirs))
-
-        fetch_tasks = ['do_fetch', 'do_unpack']
-        # If we deltask do_patch, there's no dependency to ensure do_unpack gets run, so add one
-        # Note that we cannot use d.appendVarFlag() here because deps is expected to be a list object, not a string
-        d.setVarFlag('do_configure', 'deps', (d.getVarFlag('do_configure', 'deps', False) or []) + ['do_unpack'])
-
-        for task in d.getVar("SRCTREECOVEREDTASKS").split():
-            if local_srcuri and task in fetch_tasks:
-                continue
-            bb.build.deltask(task, d)
-            if task == 'do_unpack':
-                # The reproducible build create_source_date_epoch_stamp function must
-                # be run after the source is available and before the
-                # do_deploy_source_date_epoch task.  In the normal case, it's attached
-                # to do_unpack as a postfuncs, but since we removed do_unpack (above)
-                # we need to move the function elsewhere.  The easiest thing to do is
-                # move it into the prefuncs of the do_deploy_source_date_epoch task.
-                # This is safe, as externalsrc runs with the source already unpacked.
-                d.prependVarFlag('do_deploy_source_date_epoch', 'prefuncs', 'create_source_date_epoch_stamp ')
-
-        d.prependVarFlag('do_compile', 'prefuncs', "externalsrc_compile_prefunc ")
-        d.prependVarFlag('do_configure', 'prefuncs', "externalsrc_configure_prefunc ")
-
-        d.setVarFlag('do_compile', 'file-checksums', '${@srctree_hash_files(d)}')
-        d.setVarFlag('do_configure', 'file-checksums', '${@srctree_configure_hash_files(d)}')
-
-        # We don't want the workdir to go away
-        d.appendVar('RM_WORK_EXCLUDE', ' ' + d.getVar('PN'))
-
-        bb.build.addtask('do_buildclean',
-                         'do_clean' if d.getVar('S') == d.getVar('B') else None,
-                         None, d)
-
-        # If B=S the same builddir is used even for different architectures.
-        # Thus, use a shared CONFIGURESTAMPFILE and STAMP directory so that
-        # change of do_configure task hash is correctly detected and stamps are
-        # invalidated if e.g. MACHINE changes.
-        if d.getVar('S') == d.getVar('B'):
-            configstamp = '${TMPDIR}/work-shared/${PN}/${EXTENDPE}${PV}-${PR}/configure.sstate'
-            d.setVar('CONFIGURESTAMPFILE', configstamp)
-            d.setVar('STAMP', '${STAMPS_DIR}/work-shared/${PN}/${EXTENDPE}${PV}-${PR}')
-            d.setVar('STAMPCLEAN', '${STAMPS_DIR}/work-shared/${PN}/*-*')
-}
-
-python externalsrc_configure_prefunc() {
-    s_dir = d.getVar('S')
-    # Create desired symlinks
-    symlinks = (d.getVar('EXTERNALSRC_SYMLINKS') or '').split()
-    newlinks = []
-    for symlink in symlinks:
-        symsplit = symlink.split(':', 1)
-        lnkfile = os.path.join(s_dir, symsplit[0])
-        target = d.expand(symsplit[1])
-        if len(symsplit) > 1:
-            if os.path.islink(lnkfile):
-                # Link already exists, leave it if it points to the right location already
-                if os.readlink(lnkfile) == target:
-                    continue
-                os.unlink(lnkfile)
-            elif os.path.exists(lnkfile):
-                # File/dir exists with same name as link, just leave it alone
-                continue
-            os.symlink(target, lnkfile)
-            newlinks.append(symsplit[0])
-    # Hide the symlinks from git
-    try:
-        git_exclude_file = os.path.join(s_dir, '.git/info/exclude')
-        if os.path.exists(git_exclude_file):
-            with open(git_exclude_file, 'r+') as efile:
-                elines = efile.readlines()
-                for link in newlinks:
-                    if link in elines or '/'+link in elines:
-                        continue
-                    efile.write('/' + link + '\n')
-    except IOError as ioe:
-        bb.note('Failed to hide EXTERNALSRC_SYMLINKS from git')
-}
-
-python externalsrc_compile_prefunc() {
-    # Make it obvious that this is happening, since forgetting about it could lead to much confusion
-    bb.plain('NOTE: %s: compiling from external source tree %s' % (d.getVar('PN'), d.getVar('EXTERNALSRC')))
-}
-
-do_buildclean[dirs] = "${S} ${B}"
-do_buildclean[nostamp] = "1"
-do_buildclean[doc] = "Call 'make clean' or equivalent in ${B}"
-externalsrc_do_buildclean() {
-	if [ -e Makefile -o -e makefile -o -e GNUmakefile ]; then
-		rm -f ${@' '.join([x.split(':')[0] for x in (d.getVar('EXTERNALSRC_SYMLINKS') or '').split()])}
-		if [ "${CLEANBROKEN}" != "1" ]; then
-			oe_runmake clean || die "make failed"
-		fi
-	else
-		bbnote "nothing to do - no makefile found"
-	fi
-}
-
-def srctree_hash_files(d, srcdir=None):
-    import shutil
-    import subprocess
-    import tempfile
-    import hashlib
-
-    s_dir = srcdir or d.getVar('EXTERNALSRC')
-    git_dir = None
-
-    try:
-        git_dir = os.path.join(s_dir,
-            subprocess.check_output(['git', '-C', s_dir, 'rev-parse', '--git-dir'], stderr=subprocess.DEVNULL).decode("utf-8").rstrip())
-        top_git_dir = os.path.join(s_dir, subprocess.check_output(['git', '-C', d.getVar("TOPDIR"), 'rev-parse', '--git-dir'],
-            stderr=subprocess.DEVNULL).decode("utf-8").rstrip())
-        if git_dir == top_git_dir:
-            git_dir = None
-    except subprocess.CalledProcessError:
-        pass
-
-    ret = " "
-    if git_dir is not None:
-        oe_hash_file = os.path.join(git_dir, 'oe-devtool-tree-sha1-%s' % d.getVar('PN'))
-        with tempfile.NamedTemporaryFile(prefix='oe-devtool-index') as tmp_index:
-            # Clone index
-            shutil.copyfile(os.path.join(git_dir, 'index'), tmp_index.name)
-            # Update our custom index
-            env = os.environ.copy()
-            env['GIT_INDEX_FILE'] = tmp_index.name
-            subprocess.check_output(['git', 'add', '-A', '.'], cwd=s_dir, env=env)
-            git_sha1 = subprocess.check_output(['git', 'write-tree'], cwd=s_dir, env=env).decode("utf-8")
-            submodule_helper = subprocess.check_output(['git', 'submodule--helper', 'list'], cwd=s_dir, env=env).decode("utf-8")
-            for line in submodule_helper.splitlines():
-                module_dir = os.path.join(s_dir, line.rsplit(maxsplit=1)[1])
-                if os.path.isdir(module_dir):
-                    proc = subprocess.Popen(['git', 'add', '-A', '.'], cwd=module_dir, env=env, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
-                    proc.communicate()
-                    proc = subprocess.Popen(['git', 'write-tree'], cwd=module_dir, env=env, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
-                    stdout, _ = proc.communicate()
-                    git_sha1 += stdout.decode("utf-8")
-            sha1 = hashlib.sha1(git_sha1.encode("utf-8")).hexdigest()
-        with open(oe_hash_file, 'w') as fobj:
-            fobj.write(sha1)
-        ret = oe_hash_file + ':True'
-    else:
-        ret = s_dir + '/*:True'
-    return ret
-
-def srctree_configure_hash_files(d):
-    """
-    Get the list of files that should trigger do_configure to re-execute,
-    based on the value of CONFIGURE_FILES
-    """
-    in_files = (d.getVar('CONFIGURE_FILES') or '').split()
-    out_items = []
-    search_files = []
-    for entry in in_files:
-        if entry.startswith('/'):
-            out_items.append('%s:%s' % (entry, os.path.exists(entry)))
-        else:
-            search_files.append(entry)
-    if search_files:
-        s_dir = d.getVar('EXTERNALSRC')
-        for root, _, files in os.walk(s_dir):
-            for f in files:
-                if f in search_files:
-                    out_items.append('%s:True' % os.path.join(root, f))
-    return ' '.join(out_items)
-
-EXPORT_FUNCTIONS do_buildclean
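The srctree_hash_files() function above relies on a git trick worth spelling out: rather than disturbing the user's real index, it copies .git/index to a temporary file, points GIT_INDEX_FILE at the copy, stages everything with 'git add -A' and hashes the result with 'git write-tree', so do_compile re-runs whenever the content of the external tree changes. A reduced standalone sketch of just that trick, without the submodule handling:

    import hashlib
    import os
    import shutil
    import subprocess
    import tempfile

    def hash_worktree(srcdir):
        """Return a digest that changes whenever tracked or untracked files change."""
        git_dir = subprocess.check_output(
            ["git", "-C", srcdir, "rev-parse", "--git-dir"]).decode().strip()
        git_dir = os.path.join(srcdir, git_dir)

        with tempfile.NamedTemporaryFile(prefix="oe-devtool-index") as tmp_index:
            # Work on a copy of the index so any staged changes stay untouched.
            shutil.copyfile(os.path.join(git_dir, "index"), tmp_index.name)
            env = dict(os.environ, GIT_INDEX_FILE=tmp_index.name)
            subprocess.check_call(["git", "add", "-A", "."], cwd=srcdir, env=env)
            tree = subprocess.check_output(["git", "write-tree"], cwd=srcdir, env=env)
        return hashlib.sha1(tree).hexdigest()
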
diff --git a/poky/meta/classes/extrausers.bbclass b/poky/meta/classes/extrausers.bbclass
index a8ef660..94576b8 100644
--- a/poky/meta/classes/extrausers.bbclass
+++ b/poky/meta/classes/extrausers.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # This bbclass is used for image level user/group configuration.
 # Inherit this class if you want to make EXTRA_USERS_PARAMS effective.
 
diff --git a/poky/meta/classes/features_check.bbclass b/poky/meta/classes/features_check.bbclass
deleted file mode 100644
index 3ef6b35..0000000
--- a/poky/meta/classes/features_check.bbclass
+++ /dev/null
@@ -1,54 +0,0 @@
-# Allow checking of required and conflicting features
-#
-# xxx = [DISTRO,MACHINE,COMBINED,IMAGE]
-#
-# ANY_OF_xxx_FEATURES:        ensure at least one item on this list is included
-#                             in xxx_FEATURES.
-# REQUIRED_xxx_FEATURES:      ensure every item on this list is included
-#                             in xxx_FEATURES.
-# CONFLICT_xxx_FEATURES:      ensure no item in this list is included in
-#                             xxx_FEATURES.
-#
-# Copyright 2019 (C) Texas Instruments Inc.
-# Copyright 2013 (C) O.S. Systems Software LTDA.
-
-python () {
-    if d.getVar('PARSE_ALL_RECIPES', False):
-        return
-
-    unused = True
-
-    for kind in ['DISTRO', 'MACHINE', 'COMBINED', 'IMAGE']:
-        if d.getVar('ANY_OF_' + kind + '_FEATURES') is None and not d.hasOverrides('ANY_OF_' + kind + '_FEATURES') and \
-           d.getVar('REQUIRED_' + kind + '_FEATURES') is None and not d.hasOverrides('REQUIRED_' + kind + '_FEATURES') and \
-           d.getVar('CONFLICT_' + kind + '_FEATURES') is None and not d.hasOverrides('CONFLICT_' + kind + '_FEATURES'):
-            continue
-
-        unused = False
-
-        # Assume at least one var is set.
-        features = set((d.getVar(kind + '_FEATURES') or '').split())
-
-        any_of_features = set((d.getVar('ANY_OF_' + kind + '_FEATURES') or '').split())
-        if any_of_features:
-            if set.isdisjoint(any_of_features, features):
-                raise bb.parse.SkipRecipe("one of '%s' needs to be in %s_FEATURES"
-                    % (' '.join(any_of_features), kind))
-
-        required_features = set((d.getVar('REQUIRED_' + kind + '_FEATURES') or '').split())
-        if required_features:
-            missing = set.difference(required_features, features)
-            if missing:
-                raise bb.parse.SkipRecipe("missing required %s feature%s '%s' (not in %s_FEATURES)"
-                    % (kind.lower(), 's' if len(missing) > 1 else '', ' '.join(missing), kind))
-
-        conflict_features = set((d.getVar('CONFLICT_' + kind + '_FEATURES') or '').split())
-        if conflict_features:
-            conflicts = set.intersection(conflict_features, features)
-            if conflicts:
-                raise bb.parse.SkipRecipe("conflicting %s feature%s '%s' (in %s_FEATURES)"
-                    % (kind.lower(), 's' if len(conflicts) > 1 else '', ' '.join(conflicts), kind))
-
-    if unused:
-        bb.warn("Recipe inherits features_check but doesn't use it")
-}
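The checks above are plain set operations on the relevant *_FEATURES variable. As a quick illustration of the three kinds of constraint outside of bitbake (the feature names here are just examples):

    def check_features(features, any_of=(), required=(), conflicts=()):
        features = set(features)
        problems = []
        if any_of and set(any_of).isdisjoint(features):
            problems.append("one of '%s' is needed" % " ".join(sorted(any_of)))
        missing = set(required) - features
        if missing:
            problems.append("missing required feature(s) '%s'" % " ".join(sorted(missing)))
        clashing = set(conflicts) & features
        if clashing:
            problems.append("conflicting feature(s) '%s'" % " ".join(sorted(clashing)))
        return problems

    # A recipe requiring x11 on a wayland-only distro would be skipped:
    print(check_features({"wayland", "systemd"}, required={"x11"}))
    # prints: ["missing required feature(s) 'x11'"]
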
diff --git a/poky/meta/classes/fontcache.bbclass b/poky/meta/classes/fontcache.bbclass
deleted file mode 100644
index 442bfc7..0000000
--- a/poky/meta/classes/fontcache.bbclass
+++ /dev/null
@@ -1,57 +0,0 @@
-#
-# This class will generate the proper postinst/postrm scriptlets for font
-# packages.
-#
-
-PACKAGE_WRITE_DEPS += "qemu-native"
-inherit qemu
-
-FONT_PACKAGES ??= "${PN}"
-FONT_EXTRA_RDEPENDS ?= "${MLPREFIX}fontconfig-utils"
-FONTCONFIG_CACHE_DIR ?= "${localstatedir}/cache/fontconfig"
-FONTCONFIG_CACHE_PARAMS ?= "-v"
-# You can change this to e.g. FC_DEBUG=16 to debug fc-cache issues,
-# something has to be set, because qemuwrapper is using this variable after -E
-# multiple variables aren't allowed because for qemu they are separated
-# by comma and in -n "$D" case they should be separated by space
-FONTCONFIG_CACHE_ENV ?= "FC_DEBUG=1"
-fontcache_common() {
-if [ -n "$D" ] ; then
-	$INTERCEPT_DIR/postinst_intercept update_font_cache ${PKG} mlprefix=${MLPREFIX} binprefix=${MLPREFIX} \
-		'bindir="${bindir}"' \
-		'libdir="${libdir}"' \
-		'libexecdir="${libexecdir}"' \
-		'base_libdir="${base_libdir}"' \
-		'fontconfigcachedir="${FONTCONFIG_CACHE_DIR}"' \
-		'fontconfigcacheparams="${FONTCONFIG_CACHE_PARAMS}"' \
-		'fontconfigcacheenv="${FONTCONFIG_CACHE_ENV}"'
-else
-	${FONTCONFIG_CACHE_ENV} fc-cache ${FONTCONFIG_CACHE_PARAMS}
-fi
-}
-
-python () {
-    font_pkgs = d.getVar('FONT_PACKAGES').split()
-    deps = d.getVar("FONT_EXTRA_RDEPENDS")
-
-    for pkg in font_pkgs:
-        if deps: d.appendVar('RDEPENDS:' + pkg, ' '+deps)
-}
-
-python add_fontcache_postinsts() {
-    for pkg in d.getVar('FONT_PACKAGES').split():
-        bb.note("adding fonts postinst and postrm scripts to %s" % pkg)
-        postinst = d.getVar('pkg_postinst:%s' % pkg) or d.getVar('pkg_postinst')
-        if not postinst:
-            postinst = '#!/bin/sh\n'
-        postinst += d.getVar('fontcache_common')
-        d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-        postrm = d.getVar('pkg_postrm:%s' % pkg) or d.getVar('pkg_postrm')
-        if not postrm:
-            postrm = '#!/bin/sh\n'
-        postrm += d.getVar('fontcache_common')
-        d.setVar('pkg_postrm:%s' % pkg, postrm)
-}
-
-PACKAGEFUNCS =+ "add_fontcache_postinsts"
diff --git a/poky/meta/classes/fs-uuid.bbclass b/poky/meta/classes/fs-uuid.bbclass
deleted file mode 100644
index 9b53dfb..0000000
--- a/poky/meta/classes/fs-uuid.bbclass
+++ /dev/null
@@ -1,24 +0,0 @@
-# Extract UUID from ${ROOTFS}, which must have been built
-# by the time that this function gets called. Only works
-# on ext file systems and depends on tune2fs.
-def get_rootfs_uuid(d):
-    import subprocess
-    rootfs = d.getVar('ROOTFS')
-    output = subprocess.check_output(['tune2fs', '-l', rootfs])
-    for line in output.split('\n'):
-        if line.startswith('Filesystem UUID:'):
-            uuid = line.split()[-1]
-            bb.note('UUID of %s: %s' % (rootfs, uuid))
-            return uuid
-    bb.fatal('Could not determine filesystem UUID of %s' % rootfs)
-
-# Replace the special <<uuid-of-rootfs>> inside a string (like the
-# root= APPEND string in a syslinux.cfg or systemd-boot entry) with the
-# actual UUID of the rootfs. Does nothing if the special string
-# is not used.
-def replace_rootfs_uuid(d, string):
-    UUID_PLACEHOLDER = '<<uuid-of-rootfs>>'
-    if UUID_PLACEHOLDER in string:
-        uuid = get_rootfs_uuid(d)
-        string = string.replace(UUID_PLACEHOLDER, uuid)
-    return string
diff --git a/poky/meta/classes/gconf.bbclass b/poky/meta/classes/gconf.bbclass
deleted file mode 100644
index 9d3668e..0000000
--- a/poky/meta/classes/gconf.bbclass
+++ /dev/null
@@ -1,71 +0,0 @@
-DEPENDS += "gconf"
-PACKAGE_WRITE_DEPS += "gconf-native"
-
-# These are for when gconftool is used natively and the prefix isn't necessarily
-# the sysroot.  TODO: replicate the postinst logic for -native packages going
-# into sysroot as they won't be running their own install-time schema
-# registration (disabled below) nor the postinst script (as they don't happen).
-export GCONF_SCHEMA_INSTALL_SOURCE = "xml:merged:${STAGING_DIR_NATIVE}${sysconfdir}/gconf/gconf.xml.defaults"
-export GCONF_BACKEND_DIR = "${STAGING_LIBDIR_NATIVE}/GConf/2"
-
-# Disable install-time schema registration as we're a packaging system so this
-# happens in the postinst script, not at install time.  Set both the configure
-# script option and the traditional envionment variable just to make sure.
-EXTRA_OECONF += "--disable-schemas-install"
-export GCONF_DISABLE_MAKEFILE_SCHEMA_INSTALL = "1"
-
-gconf_postinst() {
-if [ "x$D" != "x" ]; then
-	export GCONF_CONFIG_SOURCE="xml::$D${sysconfdir}/gconf/gconf.xml.defaults"
-else
-	export GCONF_CONFIG_SOURCE=`gconftool-2 --get-default-source`
-fi
-
-SCHEMA_LOCATION=$D/etc/gconf/schemas
-for SCHEMA in ${SCHEMA_FILES}; do
-	if [ -e $SCHEMA_LOCATION/$SCHEMA ]; then
-		HOME=$D/root gconftool-2 \
-			--makefile-install-rule $SCHEMA_LOCATION/$SCHEMA > /dev/null
-	fi
-done
-}
-
-gconf_prerm() {
-SCHEMA_LOCATION=/etc/gconf/schemas
-for SCHEMA in ${SCHEMA_FILES}; do
-	if [ -e $SCHEMA_LOCATION/$SCHEMA ]; then
-		HOME=/root GCONF_CONFIG_SOURCE=`gconftool-2 --get-default-source` \
-			gconftool-2 \
-			--makefile-uninstall-rule $SCHEMA_LOCATION/$SCHEMA > /dev/null
-	fi
-done
-}
-
-python populate_packages:append () {
-    import re
-    packages = d.getVar('PACKAGES').split()
-    pkgdest =  d.getVar('PKGDEST')
-    
-    for pkg in packages:
-        schema_dir = '%s/%s/etc/gconf/schemas' % (pkgdest, pkg)
-        schemas = []
-        schema_re = re.compile(r".*\.schemas$")
-        if os.path.exists(schema_dir):
-            for f in os.listdir(schema_dir):
-                if schema_re.match(f):
-                    schemas.append(f)
-        if schemas != []:
-            bb.note("adding gconf postinst and prerm scripts to %s" % pkg)
-            d.setVar('SCHEMA_FILES', " ".join(schemas))
-            postinst = d.getVar('pkg_postinst:%s' % pkg)
-            if not postinst:
-                postinst = '#!/bin/sh\n'
-            postinst += d.getVar('gconf_postinst')
-            d.setVar('pkg_postinst:%s' % pkg, postinst)
-            prerm = d.getVar('pkg_prerm:%s' % pkg)
-            if not prerm:
-                prerm = '#!/bin/sh\n'
-            prerm += d.getVar('gconf_prerm')
-            d.setVar('pkg_prerm:%s' % pkg, prerm)
-            d.appendVar("RDEPENDS:%s" % pkg, ' ' + d.getVar('MLPREFIX', False) + 'gconf')
-}
diff --git a/poky/meta/classes/gettext.bbclass b/poky/meta/classes/gettext.bbclass
deleted file mode 100644
index f11cb044..0000000
--- a/poky/meta/classes/gettext.bbclass
+++ /dev/null
@@ -1,22 +0,0 @@
-def gettext_dependencies(d):
-    if d.getVar('INHIBIT_DEFAULT_DEPS') and not oe.utils.inherits(d, 'cross-canadian'):
-        return ""
-    if d.getVar('USE_NLS') == 'no':
-        return "gettext-minimal-native"
-    return "gettext-native"
-
-def gettext_oeconf(d):
-    if d.getVar('USE_NLS') == 'no':
-        return '--disable-nls'
-    # Remove the NLS bits if USE_NLS is no or INHIBIT_DEFAULT_DEPS is set
-    if d.getVar('INHIBIT_DEFAULT_DEPS') and not oe.utils.inherits(d, 'cross-canadian'):
-        return '--disable-nls'
-    return "--enable-nls"
-
-BASEDEPENDS:append = " ${@gettext_dependencies(d)}"
-EXTRA_OECONF:append = " ${@gettext_oeconf(d)}"
-
-# Without this, msgfmt from gettext-native will not find ITS files
-# provided by target recipes (for example, polkit.its).
-GETTEXTDATADIRS:append:class-target = ":${STAGING_DATADIR}/gettext"
-export GETTEXTDATADIRS
diff --git a/poky/meta/classes/gi-docgen.bbclass b/poky/meta/classes/gi-docgen.bbclass
deleted file mode 100644
index 15581ca..0000000
--- a/poky/meta/classes/gi-docgen.bbclass
+++ /dev/null
@@ -1,24 +0,0 @@
-# gi-docgen is a new gnome documentation generator, which
-# seems to be a successor to gtk-doc:
-# https://gitlab.gnome.org/GNOME/gi-docgen
-
-# This variable is set to True if api-documentation is in
-# DISTRO_FEATURES, and False otherwise.
-GIDOCGEN_ENABLED ?= "${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', 'True', 'False', d)}"
-# When building native recipes, disable gi-docgen, as it is not necessary,
-# pulls in additional dependencies, and makes build times longer
-GIDOCGEN_ENABLED:class-native = "False"
-GIDOCGEN_ENABLED:class-nativesdk = "False"
-
-# meson: default option name to enable/disable gi-docgen. This matches most
-# projects' configuration. In doubts - check meson_options.txt in project's
-# source path.
-GIDOCGEN_MESON_OPTION ?= 'gtk_doc'
-GIDOCGEN_MESON_ENABLE_FLAG ?= 'true'
-GIDOCGEN_MESON_DISABLE_FLAG ?= 'false'
-
-# Auto enable/disable based on GIDOCGEN_ENABLED
-EXTRA_OEMESON:prepend = "-D${GIDOCGEN_MESON_OPTION}=${@bb.utils.contains('GIDOCGEN_ENABLED', 'True', '${GIDOCGEN_MESON_ENABLE_FLAG}', '${GIDOCGEN_MESON_DISABLE_FLAG}', d)} "
-
-DEPENDS:append = "${@' gi-docgen-native gi-docgen' if d.getVar('GIDOCGEN_ENABLED') == 'True' else ''}"
-
diff --git a/poky/meta/classes/gio-module-cache.bbclass b/poky/meta/classes/gio-module-cache.bbclass
deleted file mode 100644
index 021eeb1..0000000
--- a/poky/meta/classes/gio-module-cache.bbclass
+++ /dev/null
@@ -1,38 +0,0 @@
-PACKAGE_WRITE_DEPS += "qemu-native"
-inherit qemu
-
-GIO_MODULE_PACKAGES ??= "${PN}"
-
-gio_module_cache_common() {
-if [ "x$D" != "x" ]; then
-    $INTERCEPT_DIR/postinst_intercept update_gio_module_cache ${PKG} \
-            mlprefix=${MLPREFIX} \
-            binprefix=${MLPREFIX} \
-            libdir=${libdir} \
-            libexecdir=${libexecdir} \
-            base_libdir=${base_libdir} \
-            bindir=${bindir}
-else
-    ${libexecdir}/${MLPREFIX}gio-querymodules ${libdir}/gio/modules/
-fi
-}
-
-python populate_packages:append () {
-    packages = d.getVar('GIO_MODULE_PACKAGES').split()
-
-    for pkg in packages:
-        bb.note("adding gio-module-cache postinst and postrm scripts to %s" % pkg)
-
-        postinst = d.getVar('pkg_postinst:%s' % pkg)
-        if not postinst:
-            postinst = '#!/bin/sh\n'
-        postinst += d.getVar('gio_module_cache_common')
-        d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-        postrm = d.getVar('pkg_postrm:%s' % pkg)
-        if not postrm:
-            postrm = '#!/bin/sh\n'
-        postrm += d.getVar('gio_module_cache_common')
-        d.setVar('pkg_postrm:%s' % pkg, postrm)
-}
-
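fontcache, gconf and gio-module-cache all inject their scriptlets the same way: fetch the package's existing pkg_postinst/pkg_postrm (falling back to a bare '#!/bin/sh' header), append the common shell fragment, and write it back. A tiny standalone sketch of that append-or-create pattern, with a plain dict standing in for the bitbake datastore and a purely illustrative scriptlet body:

    def add_scriptlets(store, packages, common):
        # Append 'common' to each package's postinst and postrm, creating a
        # minimal shell header when no scriptlet exists yet.
        for pkg in packages:
            for hook in ("pkg_postinst", "pkg_postrm"):
                key = "%s:%s" % (hook, pkg)
                script = store.get(key) or store.get(hook) or "#!/bin/sh\n"
                store[key] = script + common

    store = {}
    add_scriptlets(store, ["libexample"], "gio-querymodules /usr/lib/gio/modules/\n")
    print(store["pkg_postinst:libexample"])
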
diff --git a/poky/meta/classes/glide.bbclass b/poky/meta/classes/glide.bbclass
deleted file mode 100644
index 2db4ac6..0000000
--- a/poky/meta/classes/glide.bbclass
+++ /dev/null
@@ -1,9 +0,0 @@
-# Handle Glide Vendor Package Management use
-#
-# Copyright 2018 (C) O.S. Systems Software LTDA.
-
-DEPENDS:append = " glide-native"
-
-do_compile:prepend() {
-    ( cd ${B}/src/${GO_IMPORT} && glide install )
-}
diff --git a/poky/meta/classes/gnomebase.bbclass b/poky/meta/classes/gnomebase.bbclass
deleted file mode 100644
index 9a5bd9a..0000000
--- a/poky/meta/classes/gnomebase.bbclass
+++ /dev/null
@@ -1,31 +0,0 @@
-def gnome_verdir(v):
-    return ".".join(v.split(".")[:-1])
-
-
-GNOME_COMPRESS_TYPE ?= "xz"
-SECTION ?= "x11/gnome"
-GNOMEBN ?= "${BPN}"
-SRC_URI = "${GNOME_MIRROR}/${GNOMEBN}/${@gnome_verdir("${PV}")}/${GNOMEBN}-${PV}.tar.${GNOME_COMPRESS_TYPE};name=archive"
-
-FILES:${PN} += "${datadir}/application-registry  \
-                ${datadir}/mime-info \
-                ${datadir}/mime/packages \
-                ${datadir}/mime/application \
-                ${datadir}/gnome-2.0 \
-                ${datadir}/polkit* \
-                ${datadir}/GConf \
-                ${datadir}/glib-2.0/schemas \
-                ${datadir}/appdata \
-                ${datadir}/icons \
-"
-
-FILES:${PN}-doc += "${datadir}/devhelp"
-
-GNOMEBASEBUILDCLASS ??= "autotools"
-inherit ${GNOMEBASEBUILDCLASS} pkgconfig
-
-do_install:append() {
-	rm -rf ${D}${localstatedir}/lib/scrollkeeper/*
-	rm -rf ${D}${localstatedir}/scrollkeeper/*
-	rm -f ${D}${datadir}/applications/*.cache
-}
diff --git a/poky/meta/classes/go-mod.bbclass b/poky/meta/classes/go-mod.bbclass
deleted file mode 100644
index 674d243..0000000
--- a/poky/meta/classes/go-mod.bbclass
+++ /dev/null
@@ -1,20 +0,0 @@
-# Handle Go Modules support
-#
-# When using Go Modules, the the current working directory MUST be at or below
-# the location of the 'go.mod' file when the go tool is used, and there is no
-# way to tell it to look elsewhere.  It will automatically look upwards for the
-# file, but not downwards.
-#
-# To support this use case, we provide the `GO_WORKDIR` variable, which defaults
-# to `GO_IMPORT` but allows for easy override.
-#
-# Copyright 2020 (C) O.S. Systems Software LTDA.
-
-# The '-modcacherw' option ensures we have write access to the cached objects so
-# we avoid errors during clean task as well as when removing the TMPDIR.
-GOBUILDFLAGS:append = " -modcacherw"
-
-inherit go
-
-GO_WORKDIR ?= "${GO_IMPORT}"
-do_compile[dirs] += "${B}/src/${GO_WORKDIR}"
diff --git a/poky/meta/classes/go-ptest.bbclass b/poky/meta/classes/go-ptest.bbclass
deleted file mode 100644
index b282ff7..0000000
--- a/poky/meta/classes/go-ptest.bbclass
+++ /dev/null
@@ -1,54 +0,0 @@
-inherit go ptest
-
-do_compile_ptest_base() {
-	export TMPDIR="${GOTMPDIR}"
-	rm -f ${B}/.go_compiled_tests.list
-	go_list_package_tests | while read pkg; do
-		cd ${B}/src/$pkg
-		${GO} test ${GOPTESTBUILDFLAGS} $pkg
-		find . -mindepth 1 -maxdepth 1 -type f -name '*.test' -exec echo $pkg/{} \; | \
-			sed -e's,/\./,/,'>> ${B}/.go_compiled_tests.list
-	done
-	do_compile_ptest
-}
-
-do_compile_ptest_base[dirs] =+ "${GOTMPDIR}"
-
-go_make_ptest_wrapper() {
-	cat >${D}${PTEST_PATH}/run-ptest <<EOF
-#!/bin/sh
-RC=0
-run_test() (
-    cd "\$1"
-    ((((./\$2 ${GOPTESTFLAGS}; echo \$? >&3) | sed -r -e"s,^(PASS|SKIP|FAIL)\$,\\1: \$1/\$2," >&4) 3>&1) | (read rc; exit \$rc)) 4>&1
-    exit \$?)
-EOF
-
-}
-
-do_install_ptest_base() {
-	test -f "${B}/.go_compiled_tests.list" || exit 0
-	install -d ${D}${PTEST_PATH}
-	go_stage_testdata
-	go_make_ptest_wrapper
-	havetests=""
-	while read test; do
-		testdir=`dirname $test`
-		testprog=`basename $test`
-		install -d ${D}${PTEST_PATH}/$testdir
-		install -m 0755 ${B}/src/$test ${D}${PTEST_PATH}/$test
-	echo "run_test $testdir $testprog || RC=1" >> ${D}${PTEST_PATH}/run-ptest
-		havetests="yes"
-	done < ${B}/.go_compiled_tests.list
-	if [ -n "$havetests" ]; then
-		echo "exit \$RC" >> ${D}${PTEST_PATH}/run-ptest
-		chmod +x ${D}${PTEST_PATH}/run-ptest
-	else
-		rm -rf ${D}${PTEST_PATH}
-	fi
-	do_install_ptest
-	chown -R root:root ${D}${PTEST_PATH}
-}
-
-INSANE_SKIP:${PN}-ptest += "ldflags"
-
diff --git a/poky/meta/classes/go.bbclass b/poky/meta/classes/go.bbclass
deleted file mode 100644
index cd2daed..0000000
--- a/poky/meta/classes/go.bbclass
+++ /dev/null
@@ -1,164 +0,0 @@
-inherit goarch
-inherit linuxloader
-
-GO_PARALLEL_BUILD ?= "${@oe.utils.parallel_make_argument(d, '-p %d')}"
-
-export GODEBUG = "gocachehash=1"
-
-GOROOT:class-native = "${STAGING_LIBDIR_NATIVE}/go"
-GOROOT:class-nativesdk = "${STAGING_DIR_TARGET}${libdir}/go"
-GOROOT = "${STAGING_LIBDIR}/go"
-export GOROOT
-export GOROOT_FINAL = "${libdir}/go"
-export GOCACHE = "${B}/.cache"
-
-export GOARCH = "${TARGET_GOARCH}"
-export GOOS = "${TARGET_GOOS}"
-export GOHOSTARCH="${BUILD_GOARCH}"
-export GOHOSTOS="${BUILD_GOOS}"
-
-GOARM[export] = "0"
-GOARM:arm:class-target = "${TARGET_GOARM}"
-GOARM:arm:class-target[export] = "1"
-
-GO386[export] = "0"
-GO386:x86:class-target = "${TARGET_GO386}"
-GO386:x86:class-target[export] = "1"
-
-GOMIPS[export] = "0"
-GOMIPS:mips:class-target = "${TARGET_GOMIPS}"
-GOMIPS:mips:class-target[export] = "1"
-
-DEPENDS_GOLANG:class-target = "virtual/${TUNE_PKGARCH}-go virtual/${TARGET_PREFIX}go-runtime"
-DEPENDS_GOLANG:class-native = "go-native"
-DEPENDS_GOLANG:class-nativesdk = "virtual/${TARGET_PREFIX}go-crosssdk virtual/${TARGET_PREFIX}go-runtime"
-
-DEPENDS:append = " ${DEPENDS_GOLANG}"
-
-GO_LINKSHARED ?= "${@'-linkshared' if d.getVar('GO_DYNLINK') else ''}"
-GO_RPATH_LINK = "${@'-Wl,-rpath-link=${STAGING_DIR_TARGET}${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
-GO_RPATH = "${@'-r ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
-GO_RPATH:class-native = "${@'-r ${STAGING_LIBDIR_NATIVE}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
-GO_RPATH_LINK:class-native = "${@'-Wl,-rpath-link=${STAGING_LIBDIR_NATIVE}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
-GO_EXTLDFLAGS ?= "${HOST_CC_ARCH}${TOOLCHAIN_OPTIONS} ${GO_RPATH_LINK} ${LDFLAGS}"
-GO_LINKMODE ?= ""
-GO_LINKMODE:class-nativesdk = "--linkmode=external"
-GO_LINKMODE:class-native = "--linkmode=external"
-GO_EXTRA_LDFLAGS ?= ""
-GO_LINUXLOADER ?= "-I ${@get_linuxloader(d)}"
-# Use system loader. If uninative is used, the uninative loader will be patched automatically
-GO_LINUXLOADER:class-native = ""
-GO_LDFLAGS ?= '-ldflags="${GO_RPATH} ${GO_LINKMODE} ${GO_LINUXLOADER} ${GO_EXTRA_LDFLAGS} -extldflags '${GO_EXTLDFLAGS}'"'
-export GOBUILDFLAGS ?= "-v ${GO_LDFLAGS} -trimpath"
-export GOPATH_OMIT_IN_ACTIONID ?= "1"
-export GOPTESTBUILDFLAGS ?= "${GOBUILDFLAGS} -c"
-export GOPTESTFLAGS ?= ""
-GOBUILDFLAGS:prepend:task-compile = "${GO_PARALLEL_BUILD} "
-
-export GO = "${HOST_PREFIX}go"
-GOTOOLDIR = "${STAGING_LIBDIR_NATIVE}/${TARGET_SYS}/go/pkg/tool/${BUILD_GOTUPLE}"
-GOTOOLDIR:class-native = "${STAGING_LIBDIR_NATIVE}/go/pkg/tool/${BUILD_GOTUPLE}"
-export GOTOOLDIR
-
-export CGO_ENABLED ?= "1"
-export CGO_CFLAGS ?= "${CFLAGS}"
-export CGO_CPPFLAGS ?= "${CPPFLAGS}"
-export CGO_CXXFLAGS ?= "${CXXFLAGS}"
-export CGO_LDFLAGS ?= "${LDFLAGS}"
-
-GO_INSTALL ?= "${GO_IMPORT}/..."
-GO_INSTALL_FILTEROUT ?= "${GO_IMPORT}/vendor/"
-
-B = "${WORKDIR}/build"
-export GOPATH = "${B}"
-export GOENV = "off"
-export GOTMPDIR ?= "${WORKDIR}/build-tmp"
-GOTMPDIR[vardepvalue] = ""
-
-python go_do_unpack() {
-    src_uri = (d.getVar('SRC_URI') or "").split()
-    if len(src_uri) == 0:
-        return
-
-    fetcher = bb.fetch2.Fetch(src_uri, d)
-    for url in fetcher.urls:
-        if fetcher.ud[url].type == 'git':
-            if fetcher.ud[url].parm.get('destsuffix') is None:
-                s_dirname = os.path.basename(d.getVar('S'))
-                fetcher.ud[url].parm['destsuffix'] = os.path.join(s_dirname, 'src', d.getVar('GO_IMPORT')) + '/'
-    fetcher.unpack(d.getVar('WORKDIR'))
-}
-
-go_list_packages() {
-	${GO} list -f '{{.ImportPath}}' ${GOBUILDFLAGS} ${GO_INSTALL} | \
-		egrep -v '${GO_INSTALL_FILTEROUT}'
-}
-
-go_list_package_tests() {
-	${GO} list -f '{{.ImportPath}} {{.TestGoFiles}}' ${GOBUILDFLAGS} ${GO_INSTALL} | \
-		grep -v '\[\]$' | \
-		egrep -v '${GO_INSTALL_FILTEROUT}' | \
-		awk '{ print $1 }'
-}
-
-go_do_configure() {
-	ln -snf ${S}/src ${B}/
-}
-do_configure[dirs] =+ "${GOTMPDIR}"
-
-go_do_compile() {
-	export TMPDIR="${GOTMPDIR}"
-	if [ -n "${GO_INSTALL}" ]; then
-		if [ -n "${GO_LINKSHARED}" ]; then
-			${GO} install ${GOBUILDFLAGS} `go_list_packages`
-			rm -rf ${B}/bin
-		fi
-		${GO} install ${GO_LINKSHARED} ${GOBUILDFLAGS} `go_list_packages`
-	fi
-}
-do_compile[dirs] =+ "${GOTMPDIR}"
-do_compile[cleandirs] = "${B}/bin ${B}/pkg"
-
-go_do_install() {
-	install -d ${D}${libdir}/go/src/${GO_IMPORT}
-	tar -C ${S}/src/${GO_IMPORT} -cf - --exclude-vcs --exclude '*.test' --exclude 'testdata' . | \
-		tar -C ${D}${libdir}/go/src/${GO_IMPORT} --no-same-owner -xf -
-	tar -C ${B} -cf - --exclude-vcs --exclude '*.test' --exclude 'testdata' pkg | \
-		tar -C ${D}${libdir}/go --no-same-owner -xf -
-
-	if [ -n "`ls ${B}/${GO_BUILD_BINDIR}/`" ]; then
-		install -d ${D}${bindir}
-		install -m 0755 ${B}/${GO_BUILD_BINDIR}/* ${D}${bindir}/
-	fi
-}
-
-go_stage_testdata() {
-	oldwd="$PWD"
-	cd ${S}/src
-	find ${GO_IMPORT} -depth -type d -name testdata | while read d; do
-		if echo "$d" | grep -q '/vendor/'; then
-			continue
-		fi
-		parent=`dirname $d`
-		install -d ${D}${PTEST_PATH}/$parent
-		cp --preserve=mode,timestamps -R $d ${D}${PTEST_PATH}/$parent/
-	done
-	cd "$oldwd"
-}
-
-EXPORT_FUNCTIONS do_unpack do_configure do_compile do_install
-
-FILES:${PN}-dev = "${libdir}/go/src"
-FILES:${PN}-staticdev = "${libdir}/go/pkg"
-
-INSANE_SKIP:${PN} += "ldflags"
-
-# Add -buildmode=pie to GOBUILDFLAGS to satisfy "textrel" QA checking, but mips
-# doesn't support -buildmode=pie, so skip the QA checking for mips/rv32 and its
-# variants.
-python() {
-    if 'mips' in d.getVar('TARGET_ARCH') or 'riscv32' in d.getVar('TARGET_ARCH'):
-        d.appendVar('INSANE_SKIP:%s' % d.getVar('PN'), " textrel")
-    else:
-        d.appendVar('GOBUILDFLAGS', ' -buildmode=pie')
-}
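
To build only part of a module tree, a recipe can narrow GO_INSTALL, which the class expands via `go list` during do_compile (a sketch; the paths are hypothetical):

    GO_IMPORT = "github.com/example/tool"
    GO_INSTALL = "${GO_IMPORT}/cmd/tool"
    # vendored packages are already excluded by the default GO_INSTALL_FILTEROUT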
diff --git a/poky/meta/classes/goarch.bbclass b/poky/meta/classes/goarch.bbclass
deleted file mode 100644
index 92fec16..0000000
--- a/poky/meta/classes/goarch.bbclass
+++ /dev/null
@@ -1,116 +0,0 @@
-BUILD_GOOS = "${@go_map_os(d.getVar('BUILD_OS'), d)}"
-BUILD_GOARCH = "${@go_map_arch(d.getVar('BUILD_ARCH'), d)}"
-BUILD_GOTUPLE = "${BUILD_GOOS}_${BUILD_GOARCH}"
-HOST_GOOS = "${@go_map_os(d.getVar('HOST_OS'), d)}"
-HOST_GOARCH = "${@go_map_arch(d.getVar('HOST_ARCH'), d)}"
-HOST_GOARM = "${@go_map_arm(d.getVar('HOST_ARCH'), d)}"
-HOST_GO386 = "${@go_map_386(d.getVar('HOST_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
-HOST_GOMIPS = "${@go_map_mips(d.getVar('HOST_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
-HOST_GOARM:class-native = "7"
-HOST_GO386:class-native = "sse2"
-HOST_GOMIPS:class-native = "hardfloat"
-HOST_GOTUPLE = "${HOST_GOOS}_${HOST_GOARCH}"
-TARGET_GOOS = "${@go_map_os(d.getVar('TARGET_OS'), d)}"
-TARGET_GOARCH = "${@go_map_arch(d.getVar('TARGET_ARCH'), d)}"
-TARGET_GOARM = "${@go_map_arm(d.getVar('TARGET_ARCH'), d)}"
-TARGET_GO386 = "${@go_map_386(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
-TARGET_GOMIPS = "${@go_map_mips(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
-TARGET_GOARM:class-native = "7"
-TARGET_GO386:class-native = "sse2"
-TARGET_GOMIPS:class-native = "hardfloat"
-TARGET_GOTUPLE = "${TARGET_GOOS}_${TARGET_GOARCH}"
-GO_BUILD_BINDIR = "${@['bin/${HOST_GOTUPLE}','bin'][d.getVar('BUILD_GOTUPLE') == d.getVar('HOST_GOTUPLE')]}"
-
-# Use the MACHINEOVERRIDES to map ARM CPU architecture passed to GO via GOARM.
-# This is combined with *_ARCH to set HOST_GOARM and TARGET_GOARM.
-BASE_GOARM = ''
-BASE_GOARM:armv7ve = '7'
-BASE_GOARM:armv7a = '7'
-BASE_GOARM:armv6 = '6'
-BASE_GOARM:armv5 = '5'
-
-# Go supports dynamic linking on a limited set of architectures.
-# See the supportsDynlink function in go/src/cmd/compile/internal/gc/main.go
-GO_DYNLINK = ""
-GO_DYNLINK:arm ?= "1"
-GO_DYNLINK:aarch64 ?= "1"
-GO_DYNLINK:x86 ?= "1"
-GO_DYNLINK:x86-64 ?= "1"
-GO_DYNLINK:powerpc64 ?= "1"
-GO_DYNLINK:powerpc64le ?= "1"
-GO_DYNLINK:class-native ?= ""
-GO_DYNLINK:class-nativesdk = ""
-
-# define here because everybody inherits this class
-#
-COMPATIBLE_HOST:linux-gnux32 = "null"
-COMPATIBLE_HOST:linux-muslx32 = "null"
-COMPATIBLE_HOST:powerpc = "null"
-COMPATIBLE_HOST:powerpc64 = "null"
-COMPATIBLE_HOST:mipsarchn32 = "null"
-
-ARM_INSTRUCTION_SET:armv4 = "arm"
-ARM_INSTRUCTION_SET:armv5 = "arm"
-ARM_INSTRUCTION_SET:armv6 = "arm"
-
-TUNE_CCARGS:remove = "-march=mips32r2"
-SECURITY_NOPIE_CFLAGS ??= ""
-
-# go can't be built with ccache:
-# gcc: fatal error: no input files
-CCACHE_DISABLE ?= "1"
-
-def go_map_arch(a, d):
-    import re
-    if re.match('i.86', a):
-        return '386'
-    elif a == 'x86_64':
-        return 'amd64'
-    elif re.match('arm.*', a):
-        return 'arm'
-    elif re.match('aarch64.*', a):
-        return 'arm64'
-    elif re.match('mips64el.*', a):
-        return 'mips64le'
-    elif re.match('mips64.*', a):
-        return 'mips64'
-    elif a == 'mips':
-        return 'mips'
-    elif a == 'mipsel':
-        return 'mipsle'
-    elif re.match('p(pc|owerpc)(64le)', a):
-        return 'ppc64le'
-    elif re.match('p(pc|owerpc)(64)', a):
-        return 'ppc64'
-    elif a == 'riscv64':
-        return 'riscv64'
-    else:
-        raise bb.parse.SkipRecipe("Unsupported CPU architecture: %s" % a)
-
-def go_map_arm(a, d):
-    if a.startswith("arm"):
-        return d.getVar('BASE_GOARM')
-    return ''
-
-def go_map_386(a, f, d):
-    import re
-    if re.match('i.86', a):
-        if ('core2' in f) or ('corei7' in f):
-            return 'sse2'
-        else:
-            return 'softfloat'
-    return ''
-
-def go_map_mips(a, f, d):
-    import re
-    if a == 'mips' or a == 'mipsel':
-        if 'fpu-hard' in f:
-            return 'hardfloat'
-        else:
-            return 'softfloat'
-    return ''
-
-def go_map_os(o, d):
-    if o.startswith('linux'):
-        return 'linux'
-    return o
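
As a worked example of the mappings above (hypothetical target): for TARGET_ARCH = "aarch64" and TARGET_OS = "linux", go_map_arch() and go_map_os() yield

    GOARCH = "arm64"
    GOOS = "linux"
    TARGET_GOTUPLE = "linux_arm64"

while an armv7a machine (TARGET_ARCH "arm") additionally gets GOARM = "7" via BASE_GOARM.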
diff --git a/poky/meta/classes/gobject-introspection-data.bbclass b/poky/meta/classes/gobject-introspection-data.bbclass
deleted file mode 100644
index d90cdb4..0000000
--- a/poky/meta/classes/gobject-introspection-data.bbclass
+++ /dev/null
@@ -1,12 +0,0 @@
-# This variable is set to True if gobject-introspection-data is in
-# DISTRO_FEATURES and qemu-usermode is in MACHINE_FEATURES, and False otherwise.
-#
-# It should be used in recipes to determine whether introspection data should be built,
-# so that qemu use can be avoided when necessary.
-GI_DATA_ENABLED ?= "${@bb.utils.contains('DISTRO_FEATURES', 'gobject-introspection-data', \
-                      bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', 'True', 'False', d), 'False', d)}"
-
-do_compile:prepend() {
-    # This prevents g-ir-scanner from writing cache data to $HOME
-    export GI_SCANNER_DISABLE_CACHE=1
-}
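
A recipe can key its own configuration off GI_DATA_ENABLED in the same way, for instance (a sketch; the configure switch is hypothetical):

    inherit gobject-introspection-data
    EXTRA_OECONF += "${@bb.utils.contains('GI_DATA_ENABLED', 'True', '--enable-introspection', '--disable-introspection', d)}"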
diff --git a/poky/meta/classes/gobject-introspection.bbclass b/poky/meta/classes/gobject-introspection.bbclass
deleted file mode 100644
index 7bf9feb..0000000
--- a/poky/meta/classes/gobject-introspection.bbclass
+++ /dev/null
@@ -1,55 +0,0 @@
-# Inherit this class in recipes to enable building their introspection files
-
-# python3native is inherited to prevent introspection tools being run with
-# host's python 3 (they need to be run with native python 3)
-#
-# This also sets up autoconf-based recipes to build introspection data (or not),
-# depending on distro and machine features (see gobject-introspection-data class).
-inherit python3native gobject-introspection-data
-
-# meson: default option name to enable/disable introspection. This matches most
-# projects' configuration. If in doubt, check meson_options.txt in the project's
-# source path.
-GIR_MESON_OPTION ?= 'introspection'
-GIR_MESON_ENABLE_FLAG ?= 'true'
-GIR_MESON_DISABLE_FLAG ?= 'false'
-
-# Define g-i options such that they can be disabled completely when GIR_MESON_OPTION is empty
-GIRMESONTARGET = "-D${GIR_MESON_OPTION}=${@bb.utils.contains('GI_DATA_ENABLED', 'True', '${GIR_MESON_ENABLE_FLAG}', '${GIR_MESON_DISABLE_FLAG}', d)} "
-GIRMESONBUILD = "-D${GIR_MESON_OPTION}=${GIR_MESON_DISABLE_FLAG} "
-# Auto enable/disable based on GI_DATA_ENABLED
-EXTRA_OECONF:prepend:class-target = "${@bb.utils.contains('GI_DATA_ENABLED', 'True', '--enable-introspection', '--disable-introspection', d)} "
-EXTRA_OEMESON:prepend:class-target = "${@['', '${GIRMESONTARGET}'][d.getVar('GIR_MESON_OPTION') != '']}"
-# When building native recipes, disable introspection, as it is not necessary,
-# pulls in additional dependencies, and makes build times longer
-EXTRA_OECONF:prepend:class-native = "--disable-introspection "
-EXTRA_OECONF:prepend:class-nativesdk = "--disable-introspection "
-EXTRA_OEMESON:prepend:class-native = "${@['', '${GIRMESONBUILD}'][d.getVar('GIR_MESON_OPTION') != '']}"
-EXTRA_OEMESON:prepend:class-nativesdk = "${@['', '${GIRMESONBUILD}'][d.getVar('GIR_MESON_OPTION') != '']}"
-
-# Generating introspection data depends on a combination of native and target
-# introspection tools, and qemu to run the target tools.
-DEPENDS:append:class-target = " gobject-introspection gobject-introspection-native qemu-native"
-
-# Even though introspection is disabled on -native, gobject-introspection package is still
-# needed for m4 macros.
-DEPENDS:append:class-native = " gobject-introspection-native"
-DEPENDS:append:class-nativesdk = " gobject-introspection-native"
-
-# This is used by introspection tools to find .gir includes
-export XDG_DATA_DIRS = "${STAGING_DATADIR}:${STAGING_LIBDIR}"
-
-do_configure:prepend:class-target () {
-    # introspection.m4 pre-packaged with upstream tarballs does not yet
-    # have our fixes
-    mkdir -p ${S}/m4
-    cp ${STAGING_DIR_TARGET}/${datadir}/aclocal/introspection.m4 ${S}/m4
-}
-
-# .typelib files are needed at runtime and so they go to the main package (so
-# they'll be together with libraries they support).
-FILES:${PN}:append = " ${libdir}/girepository-*/*.typelib" 
-    
-# .gir files go to dev package, as they're needed for developing (but not for
-# running) things that depends on introspection.
-FILES:${PN}-dev:append = " ${datadir}/gir-*/*.gir ${libdir}/gir-*/*.gir"
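
Projects whose meson option does not match the default can override the knobs above (a sketch; the option name and flag values are hypothetical):

    inherit gobject-introspection
    GIR_MESON_OPTION = "gir"
    GIR_MESON_ENABLE_FLAG = "enabled"
    GIR_MESON_DISABLE_FLAG = "disabled"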
diff --git a/poky/meta/classes/grub-efi-cfg.bbclass b/poky/meta/classes/grub-efi-cfg.bbclass
deleted file mode 100644
index ea21b3d..0000000
--- a/poky/meta/classes/grub-efi-cfg.bbclass
+++ /dev/null
@@ -1,123 +0,0 @@
-# grub-efi.bbclass
-# Copyright (c) 2011, Intel Corporation.
-# All rights reserved.
-#
-# Released under the MIT license (see packages/COPYING)
-
-# Provide grub-efi specific functions for building bootable images.
-
-# External variables
-# ${INITRD} - indicates a list of filesystem images to concatenate and use as an initrd (optional)
-# ${ROOTFS} - indicates a filesystem image to include as the root filesystem (optional)
-# ${GRUB_GFXSERIAL} - set this to 1 to have graphics and serial in the boot menu
-# ${LABELS} - a list of targets for the automatic config
-# ${APPEND} - an override list of append strings for each label
-# ${GRUB_OPTS} - additional options to add to the config, ';' delimited (optional)
-# ${GRUB_TIMEOUT} - timeout before executing the default label (optional)
-# ${GRUB_ROOT} - grub's root device.
-
-GRUB_SERIAL ?= "console=ttyS0,115200"
-GRUB_CFG_VM = "${S}/grub_vm.cfg"
-GRUB_CFG_LIVE = "${S}/grub_live.cfg"
-GRUB_TIMEOUT ?= "10"
-#FIXME: build this from the machine config
-GRUB_OPTS ?= "serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
-
-GRUB_ROOT ?= "${ROOT}"
-APPEND ?= ""
-
-# Uses MACHINE specific KERNEL_IMAGETYPE
-PACKAGE_ARCH = "${MACHINE_ARCH}"
-
-# Need UUID utility code.
-inherit fs-uuid
-
-python build_efi_cfg() {
-    import sys
-
-    workdir = d.getVar('WORKDIR')
-    if not workdir:
-        bb.error("WORKDIR not defined, unable to package")
-        return
-
-    gfxserial = d.getVar('GRUB_GFXSERIAL') or ""
-
-    labels = d.getVar('LABELS')
-    if not labels:
-        bb.debug(1, "LABELS not defined, nothing to do")
-        return
-
-    if labels == []:
-        bb.debug(1, "No labels, nothing to do")
-        return
-
-    cfile = d.getVar('GRUB_CFG')
-    if not cfile:
-        bb.fatal('Unable to read GRUB_CFG')
-
-    try:
-         cfgfile = open(cfile, 'w')
-    except OSError:
-        bb.fatal('Unable to open %s' % cfile)
-
-    cfgfile.write('# Automatically created by OE\n')
-
-    opts = d.getVar('GRUB_OPTS')
-    if opts:
-        for opt in opts.split(';'):
-            cfgfile.write('%s\n' % opt)
-
-    cfgfile.write('default=%s\n' % (labels.split()[0]))
-
-    timeout = d.getVar('GRUB_TIMEOUT')
-    if timeout:
-        cfgfile.write('timeout=%s\n' % timeout)
-    else:
-        cfgfile.write('timeout=50\n')
-
-    root = d.getVar('GRUB_ROOT')
-    if not root:
-        bb.fatal('GRUB_ROOT not defined')
-
-    if gfxserial == "1":
-        btypes = [ [ " graphics console", "" ],
-            [ " serial console", d.getVar('GRUB_SERIAL') or "" ] ]
-    else:
-        btypes = [ [ "", "" ] ]
-
-    for label in labels.split():
-        localdata = d.createCopy()
-
-        overrides = localdata.getVar('OVERRIDES')
-        if not overrides:
-            bb.fatal('OVERRIDES not defined')
-
-        localdata.setVar('OVERRIDES', 'grub_' + label + ':' + overrides)
-
-        for btype in btypes:
-            cfgfile.write('\nmenuentry \'%s%s\'{\n' % (label, btype[0]))
-            lb = label
-            if label == "install":
-                lb = "install-efi"
-            kernel = localdata.getVar('KERNEL_IMAGETYPE')
-            cfgfile.write('linux /%s LABEL=%s' % (kernel, lb))
-
-            cfgfile.write(' %s' % replace_rootfs_uuid(d, root))
-
-            append = localdata.getVar('APPEND')
-            initrd = localdata.getVar('INITRD')
-
-            if append:
-                append = replace_rootfs_uuid(d, append)
-                cfgfile.write(' %s' % (append))
-
-            cfgfile.write(' %s' % btype[1])
-            cfgfile.write('\n')
-
-            if initrd:
-                cfgfile.write('initrd /initrd')
-            cfgfile.write('\n}\n')
-
-    cfgfile.close()
-}
-build_efi_cfg[vardepsexclude] += "OVERRIDES"
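
The external variables listed above are normally set from the image recipe or machine configuration, for example (hypothetical values):

    LABELS = "boot install"
    GRUB_GFXSERIAL = "1"
    GRUB_TIMEOUT = "5"
    GRUB_ROOT = "root=/dev/sda2"

build_efi_cfg then writes one menuentry per label into ${GRUB_CFG}, plus a serial-console variant of each entry when GRUB_GFXSERIAL is "1".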
diff --git a/poky/meta/classes/grub-efi.bbclass b/poky/meta/classes/grub-efi.bbclass
deleted file mode 100644
index 8fc6999..0000000
--- a/poky/meta/classes/grub-efi.bbclass
+++ /dev/null
@@ -1,8 +0,0 @@
-inherit grub-efi-cfg
-require conf/image-uefi.conf
-
-efi_populate() {
-	efi_populate_common "$1" grub-efi
-
-	install -m 0644 ${GRUB_CFG} ${DEST}${EFIDIR}/grub.cfg
-}
diff --git a/poky/meta/classes/gsettings.bbclass b/poky/meta/classes/gsettings.bbclass
deleted file mode 100644
index 3fa5bd4..0000000
--- a/poky/meta/classes/gsettings.bbclass
+++ /dev/null
@@ -1,42 +0,0 @@
-# A bbclass to handle installed GSettings (glib) schemas, updating the compiled
-# form on package install and removal.
-#
-# The compiled schemas are platform-agnostic, so we can depend on
-# glib-2.0-native for the native tool and run the postinst script when the
-# rootfs builds to save a little time on first boot.
-
-# TODO use a trigger so that this runs once per package operation run
-
-GSETTINGS_PACKAGE ?= "${PN}"
-
-python __anonymous() {
-    pkg = d.getVar("GSETTINGS_PACKAGE")
-    if pkg:
-        d.appendVar("PACKAGE_WRITE_DEPS", " glib-2.0-native")
-        d.appendVar("RDEPENDS:" + pkg, " ${MLPREFIX}glib-2.0-utils")
-        d.appendVar("FILES:" + pkg, " ${datadir}/glib-2.0/schemas")
-}
-
-gsettings_postinstrm () {
-	glib-compile-schemas $D${datadir}/glib-2.0/schemas
-}
-
-python populate_packages:append () {
-    pkg = d.getVar('GSETTINGS_PACKAGE')
-    if pkg:
-        bb.note("adding gsettings postinst scripts to %s" % pkg)
-
-        postinst = d.getVar('pkg_postinst:%s' % pkg) or d.getVar('pkg_postinst')
-        if not postinst:
-            postinst = '#!/bin/sh\n'
-        postinst += d.getVar('gsettings_postinstrm')
-        d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-        bb.note("adding gsettings postrm scripts to %s" % pkg)
-
-        postrm = d.getVar('pkg_postrm:%s' % pkg) or d.getVar('pkg_postrm')
-        if not postrm:
-            postrm = '#!/bin/sh\n'
-        postrm += d.getVar('gsettings_postinstrm')
-        d.setVar('pkg_postrm:%s' % pkg, postrm)
-}
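
A typical consumer only needs to inherit the class; the package carrying the schemas can be redirected when it is not ${PN} (a sketch; the package name is hypothetical):

    inherit gsettings
    GSETTINGS_PACKAGE = "${PN}-data"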
diff --git a/poky/meta/classes/gtk-doc.bbclass b/poky/meta/classes/gtk-doc.bbclass
deleted file mode 100644
index 07b46ac..0000000
--- a/poky/meta/classes/gtk-doc.bbclass
+++ /dev/null
@@ -1,83 +0,0 @@
-# Helper class to pull in the right gtk-doc dependencies and configure
-# gtk-doc to enable or disable documentation building (which requires the
-# use of usermode qemu).
-
-# This variable is set to True if api-documentation is in
-# DISTRO_FEATURES and qemu-usermode is in MACHINE_FEATURES, and False otherwise.
-#
-# It should be used in recipes to determine whether gtk-doc based documentation should be built,
-# so that qemu use can be avoided when necessary.
-GTKDOC_ENABLED:class-native = "False"
-GTKDOC_ENABLED ?= "${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', \
-                      bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', 'True', 'False', d), 'False', d)}"
-
-# meson: default option name to enable/disable gtk-doc. This matches most
-# projects' configuration. If in doubt, check meson_options.txt in the project's
-# source path.
-GTKDOC_MESON_OPTION ?= 'docs'
-GTKDOC_MESON_ENABLE_FLAG ?= 'true'
-GTKDOC_MESON_DISABLE_FLAG ?= 'false'
-
-# Auto enable/disable based on GTKDOC_ENABLED
-EXTRA_OECONF:prepend:class-target = "${@bb.utils.contains('GTKDOC_ENABLED', 'True', '--enable-gtk-doc --enable-gtk-doc-html --disable-gtk-doc-pdf', \
-                                                                                    '--disable-gtk-doc', d)} "
-EXTRA_OEMESON:prepend:class-target = "-D${GTKDOC_MESON_OPTION}=${@bb.utils.contains('GTKDOC_ENABLED', 'True', '${GTKDOC_MESON_ENABLE_FLAG}', '${GTKDOC_MESON_DISABLE_FLAG}', d)} "
-
-# When building native recipes, disable gtkdoc, as it is not necessary,
-# pulls in additional dependencies, and makes build times longer
-EXTRA_OECONF:prepend:class-native = "--disable-gtk-doc "
-EXTRA_OECONF:prepend:class-nativesdk = "--disable-gtk-doc "
-EXTRA_OEMESON:prepend:class-native = "-D${GTKDOC_MESON_OPTION}=${GTKDOC_MESON_DISABLE_FLAG} "
-EXTRA_OEMESON:prepend:class-nativesdk = "-D${GTKDOC_MESON_OPTION}=${GTKDOC_MESON_DISABLE_FLAG} "
-
-# Even though gtkdoc is disabled on -native, gtk-doc package is still
-# needed for m4 macros.
-DEPENDS:append = " gtk-doc-native"
-
-# The documentation directory, where the infrastructure will be copied.
-# gtkdocize has a default of "." so to handle out-of-tree builds set this to $S.
-GTKDOC_DOCDIR ?= "${S}"
-
-export STAGING_DIR_HOST
-
-inherit python3native pkgconfig qemu
-DEPENDS:append = "${@' qemu-native' if d.getVar('GTKDOC_ENABLED') == 'True' else ''}"
-
-do_configure:prepend () {
-	# Need to use ||true as this is only needed if configure.ac both exists
-	# and uses GTK_DOC_CHECK.
-	gtkdocize --srcdir ${S} --docdir ${GTKDOC_DOCDIR} || true
-}
-
-do_compile:prepend:class-target () {
-    if [ ${GTKDOC_ENABLED} = True ]; then
-        # Write out a qemu wrapper that will be given to gtkdoc-scangobj so that it
-        # can run target helper binaries through that.
-        qemu_binary="${@qemu_wrapper_cmdline(d, '$STAGING_DIR_HOST', ['\\$GIR_EXTRA_LIBS_PATH','$STAGING_DIR_HOST/${libdir}','$STAGING_DIR_HOST/${base_libdir}'])}"
-        cat > ${B}/gtkdoc-qemuwrapper << EOF
-#!/bin/sh
-# Use a modules directory which doesn't exist so we don't load random things
-# which may then get deleted (or their dependencies) and potentially segfault
-export GIO_MODULE_DIR=${STAGING_LIBDIR}/gio/modules-dummy
-
-GIR_EXTRA_LIBS_PATH=\`find ${B} -name *.so -printf "%h\n"|sort|uniq| tr '\n' ':'\`\$GIR_EXTRA_LIBS_PATH
-GIR_EXTRA_LIBS_PATH=\`find ${B} -name .libs| tr '\n' ':'\`\$GIR_EXTRA_LIBS_PATH
-
-# meson sets this wrongly (only to libs in build-dir), qemu_wrapper_cmdline() and GIR_EXTRA_LIBS_PATH take care of it properly
-unset LD_LIBRARY_PATH
-
-if [ -d ".libs" ]; then
-    $qemu_binary ".libs/\$@"
-else
-    $qemu_binary "\$@"
-fi
-
-if [ \$? -ne 0 ]; then
-    echo "If the above error message is about missing .so libraries, then setting up GIR_EXTRA_LIBS_PATH in the recipe should help."
-    echo "(typically like this: GIR_EXTRA_LIBS_PATH=\"$""{B}/something/.libs\" )"
-    exit 1
-fi
-EOF
-        chmod +x ${B}/gtkdoc-qemuwrapper
-    fi
-}
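
As with the introspection class, recipes whose meson option is named differently can override the default (a sketch; the option name is hypothetical):

    inherit gtk-doc
    GTKDOC_MESON_OPTION = "gtk_doc"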
diff --git a/poky/meta/classes/gtk-icon-cache.bbclass b/poky/meta/classes/gtk-icon-cache.bbclass
deleted file mode 100644
index 6808339..0000000
--- a/poky/meta/classes/gtk-icon-cache.bbclass
+++ /dev/null
@@ -1,89 +0,0 @@
-FILES:${PN} += "${datadir}/icons/hicolor"
-
-GTKIC_VERSION ??= '3'
-
-GTKPN = "${@ 'gtk4' if d.getVar('GTKIC_VERSION') == '4' else 'gtk+3' }"
-GTKIC_CMD = "${@ 'gtk-update-icon-cache-3.0.0' if d.getVar('GTKIC_VERSION') == '4' else 'gtk4-update-icon-cache' }"
-
-#gtk+3/gtk4 require GTK3DISTROFEATURES; depending on them makes all the
-#recipes that inherit this class require GTK3DISTROFEATURES
-inherit features_check
-ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
-
-DEPENDS +=" ${@ '' if d.getVar('BPN') == 'hicolor-icon-theme' else 'hicolor-icon-theme' } \
-            ${@ '' if d.getVar('BPN') == 'gdk-pixbuf' else 'gdk-pixbuf' } \
-            ${@ '' if d.getVar('BPN') == d.getVar('GTKPN') else d.getVar('GTKPN') } \
-            ${GTKPN}-native \
-"
-
-PACKAGE_WRITE_DEPS += "${GTKPN}-native gdk-pixbuf-native"
-
-gtk_icon_cache_postinst() {
-if [ "x$D" != "x" ]; then
-	$INTERCEPT_DIR/postinst_intercept update_gtk_icon_cache ${PKG} \
-		mlprefix=${MLPREFIX} \
-		libdir_native=${libdir_native}
-else
-
-	# Update the pixbuf loaders in case they haven't been registered yet
-	${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders --update-cache
-
-	for icondir in /usr/share/icons/* ; do
-		if [ -d $icondir ] ; then
-			${GTKIC_CMD} -fqt  $icondir
-		fi
-	done
-fi
-}
-
-gtk_icon_cache_postrm() {
-if [ "x$D" != "x" ]; then
-	$INTERCEPT_DIR/postinst_intercept update_gtk_icon_cache ${PKG} \
-		mlprefix=${MLPREFIX} \
-		libdir=${libdir}
-else
-	for icondir in /usr/share/icons/* ; do
-		if [ -d $icondir ] ; then
-			${GTKIC_CMD} -qt  $icondir
-		fi
-	done
-fi
-}
-
-python populate_packages:append () {
-    packages = d.getVar('PACKAGES').split()
-    pkgdest =  d.getVar('PKGDEST')
-    
-    for pkg in packages:
-        icon_dir = '%s/%s/%s/icons' % (pkgdest, pkg, d.getVar('datadir'))
-        if not os.path.exists(icon_dir):
-            continue
-
-        bb.note("adding hicolor-icon-theme dependency to %s" % pkg)
-        rdepends = ' ' + d.getVar('MLPREFIX', False) + "hicolor-icon-theme"
-        d.appendVar('RDEPENDS:%s' % pkg, rdepends)
-
-        #gtk_icon_cache_postinst depend on gdk-pixbuf and gtk+3/gtk4
-        bb.note("adding gdk-pixbuf dependency to %s" % pkg)
-        rdepends = ' ' + d.getVar('MLPREFIX', False) + "gdk-pixbuf"
-        d.appendVar('RDEPENDS:%s' % pkg, rdepends)
-
-        bb.note("adding %s dependency to %s" % (d.getVar('GTKPN'), pkg))
-        rdepends = ' ' + d.getVar('MLPREFIX', False) + d.getVar('GTKPN')
-        d.appendVar('RDEPENDS:%s' % pkg, rdepends)
-
-        bb.note("adding gtk-icon-cache postinst and postrm scripts to %s" % pkg)
-
-        postinst = d.getVar('pkg_postinst:%s' % pkg)
-        if not postinst:
-            postinst = '#!/bin/sh\n'
-        postinst += d.getVar('gtk_icon_cache_postinst')
-        d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-        postrm = d.getVar('pkg_postrm:%s' % pkg)
-        if not postrm:
-            postrm = '#!/bin/sh\n'
-        postrm += d.getVar('gtk_icon_cache_postrm')
-        d.setVar('pkg_postrm:%s' % pkg, postrm)
-}
-
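A GTK4 application would select the newer tooling like this (a minimal sketch):

    inherit gtk-icon-cache
    GTKIC_VERSION = "4"
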
diff --git a/poky/meta/classes/gtk-immodules-cache.bbclass b/poky/meta/classes/gtk-immodules-cache.bbclass
deleted file mode 100644
index 2107517..0000000
--- a/poky/meta/classes/gtk-immodules-cache.bbclass
+++ /dev/null
@@ -1,76 +0,0 @@
-# This class will update the inputmethod module cache for virtual keyboards
-#
-# Usage: Set GTKIMMODULES_PACKAGES to the packages that need to update the inputmethod modules
-
-PACKAGE_WRITE_DEPS += "qemu-native"
-
-inherit qemu
-
-GTKIMMODULES_PACKAGES ?= "${PN}"
-
-gtk_immodule_cache_postinst() {
-if [ "x$D" != "x" ]; then
-    $INTERCEPT_DIR/postinst_intercept update_gtk_immodules_cache ${PKG} \
-            mlprefix=${MLPREFIX} \
-            binprefix=${MLPREFIX} \
-            libdir=${libdir} \
-            libexecdir=${libexecdir} \
-            base_libdir=${base_libdir} \
-            bindir=${bindir}
-else
-    if [ ! -z `which gtk-query-immodules-2.0` ]; then
-        gtk-query-immodules-2.0 > ${libdir}/gtk-2.0/2.10.0/immodules.cache
-    fi
-    if [ ! -z `which gtk-query-immodules-3.0` ]; then
-        mkdir -p ${libdir}/gtk-3.0/3.0.0
-        gtk-query-immodules-3.0 > ${libdir}/gtk-3.0/3.0.0/immodules.cache
-    fi
-fi
-}
-
-gtk_immodule_cache_postrm() {
-if [ "x$D" != "x" ]; then
-    $INTERCEPT_DIR/postinst_intercept update_gtk_immodules_cache ${PKG} \
-            mlprefix=${MLPREFIX} \
-            binprefix=${MLPREFIX} \
-            libdir=${libdir} \
-            libexecdir=${libexecdir} \
-            base_libdir=${base_libdir} \
-            bindir=${bindir}
-else
-    if [ ! -z `which gtk-query-immodules-2.0` ]; then
-        gtk-query-immodules-2.0 > ${libdir}/gtk-2.0/2.10.0/immodules.cache
-    fi
-    if [ ! -z `which gtk-query-immodules-3.0` ]; then
-        gtk-query-immodules-3.0 > ${libdir}/gtk-3.0/3.0.0/immodules.cache
-    fi
-fi
-}
-
-python populate_packages:append () {
-    gtkimmodules_pkgs = d.getVar('GTKIMMODULES_PACKAGES').split()
-
-    for pkg in gtkimmodules_pkgs:
-            bb.note("adding gtk-immodule-cache postinst and postrm scripts to %s" % pkg)
-
-            postinst = d.getVar('pkg_postinst:%s' % pkg)
-            if not postinst:
-                postinst = '#!/bin/sh\n'
-            postinst += d.getVar('gtk_immodule_cache_postinst')
-            d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-            postrm = d.getVar('pkg_postrm:%s' % pkg)
-            if not postrm:
-                postrm = '#!/bin/sh\n'
-            postrm += d.getVar('gtk_immodule_cache_postrm')
-            d.setVar('pkg_postrm:%s' % pkg, postrm)
-}
-
-python __anonymous() {
-    if not bb.data.inherits_class('native', d) and not bb.data.inherits_class('cross', d):
-        gtkimmodules_check = d.getVar('GTKIMMODULES_PACKAGES', False)
-        if not gtkimmodules_check:
-            bb_filename = d.getVar('FILE', False)
-            bb.fatal("ERROR: %s inherits gtk-immodules-cache but doesn't set GTKIMMODULES_PACKAGES" % bb_filename)
-}
-
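Usage is a single variable plus the inherit (a sketch; the package name is hypothetical):

    inherit gtk-immodules-cache
    GTKIMMODULES_PACKAGES = "${PN}-gtk3"
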
diff --git a/poky/meta/classes/icecc.bbclass b/poky/meta/classes/icecc.bbclass
index 9b912a3..a11e781 100644
--- a/poky/meta/classes/icecc.bbclass
+++ b/poky/meta/classes/icecc.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # IceCream distributed compiling support
 #
 # Stages directories with symlinks from gcc/g++ to icecc, for both
diff --git a/poky/meta/classes/image-artifact-names.bbclass b/poky/meta/classes/image-artifact-names.bbclass
deleted file mode 100644
index f5769e5..0000000
--- a/poky/meta/classes/image-artifact-names.bbclass
+++ /dev/null
@@ -1,22 +0,0 @@
-##################################################################
-# Specific image creation and rootfs population info.
-##################################################################
-
-IMAGE_BASENAME ?= "${PN}"
-IMAGE_VERSION_SUFFIX ?= "-${DATETIME}"
-IMAGE_VERSION_SUFFIX[vardepsexclude] += "DATETIME SOURCE_DATE_EPOCH"
-IMAGE_NAME ?= "${IMAGE_BASENAME}-${MACHINE}${IMAGE_VERSION_SUFFIX}"
-IMAGE_LINK_NAME ?= "${IMAGE_BASENAME}-${MACHINE}"
-
-# IMAGE_NAME is the base name for everything produced when building images.
-# The actual image that contains the rootfs has an additional suffix (.rootfs
-# by default) followed by additional suffices which describe the format (.ext4,
-# .ext4.xz, etc.).
-IMAGE_NAME_SUFFIX ??= ".rootfs"
-
-python () {
-    if bb.data.inherits_class('deploy', d) and d.getVar("IMAGE_VERSION_SUFFIX") == "-${DATETIME}":
-        import datetime
-        d.setVar("IMAGE_VERSION_SUFFIX", "-" + datetime.datetime.fromtimestamp(int(d.getVar("SOURCE_DATE_EPOCH")), datetime.timezone.utc).strftime('%Y%m%d%H%M%S'))
-        d.setVarFlag("IMAGE_VERSION_SUFFIX", "vardepvalue", "")
-}
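
As a worked example (hypothetical values): with IMAGE_BASENAME = "core-image-minimal", MACHINE = "qemux86-64" and DATETIME = "20220815120000", an ext4 image is written as

    core-image-minimal-qemux86-64-20220815120000.rootfs.ext4

with a core-image-minimal-qemux86-64.ext4 symlink derived from IMAGE_LINK_NAME.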
diff --git a/poky/meta/classes/image-buildinfo.bbclass b/poky/meta/classes/image-buildinfo.bbclass
index ef790bb..206cc9d 100644
--- a/poky/meta/classes/image-buildinfo.bbclass
+++ b/poky/meta/classes/image-buildinfo.bbclass
@@ -4,7 +4,7 @@
 # Copyright (C) 2014 Intel Corporation
 # Author: Alejandro Enedino Hernandez Samaniego <alejandro.hernandez@intel.com>
 #
-# Licensed under the MIT license, see COPYING.MIT for details
+# SPDX-License-Identifier: MIT
 #
 # Usage: add INHERIT += "image-buildinfo" to your conf file
 #
diff --git a/poky/meta/classes/image-combined-dbg.bbclass b/poky/meta/classes/image-combined-dbg.bbclass
deleted file mode 100644
index e5dc61f..0000000
--- a/poky/meta/classes/image-combined-dbg.bbclass
+++ /dev/null
@@ -1,9 +0,0 @@
-IMAGE_PREPROCESS_COMMAND:append = " combine_dbg_image; "
-
-combine_dbg_image () {
-        if [ "${IMAGE_GEN_DEBUGFS}" = "1" -a -e ${IMAGE_ROOTFS}-dbg ]; then
-                # copy target files into -dbg rootfs, so it can be used for
-                # debug purposes directly
-                tar -C ${IMAGE_ROOTFS} -cf - . | tar -C ${IMAGE_ROOTFS}-dbg -xf -
-        fi
-}
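
This is enabled from the local or distro configuration, roughly (a sketch):

    IMAGE_CLASSES += "image-combined-dbg"
    IMAGE_GEN_DEBUGFS = "1"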
diff --git a/poky/meta/classes/image-container.bbclass b/poky/meta/classes/image-container.bbclass
deleted file mode 100644
index 3d19935..0000000
--- a/poky/meta/classes/image-container.bbclass
+++ /dev/null
@@ -1,21 +0,0 @@
-ROOTFS_BOOTSTRAP_INSTALL = ""
-IMAGE_TYPES_MASKED += "container"
-IMAGE_TYPEDEP:container = "tar.bz2"
-
-python __anonymous() {
-    if "container" in d.getVar("IMAGE_FSTYPES") and \
-       d.getVar("IMAGE_CONTAINER_NO_DUMMY") != "1" and \
-       "linux-dummy" not in d.getVar("PREFERRED_PROVIDER_virtual/kernel"):
-        msg = '"container" is in IMAGE_FSTYPES, but ' \
-              'PREFERRED_PROVIDER_virtual/kernel is not "linux-dummy". ' \
-              'Unless a particular kernel is needed, using linux-dummy will ' \
-              'prevent a kernel from being built, which can reduce ' \
-              'build times. If you don\'t want to use "linux-dummy", set ' \
-              '"IMAGE_CONTAINER_NO_DUMMY" to "1".'
-
-        # Raising skip recipe was Paul's clever idea. It causes the error to
-        # only be shown for the recipes actually requested to build, rather
-        # than bb.fatal which would appear for all recipes inheriting the
-        # class.
-        raise bb.parse.SkipRecipe(msg)
-}
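
A container image configuration would therefore look roughly like this (a sketch):

    IMAGE_FSTYPES = "container"
    PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
    # or keep a real kernel and silence the check:
    # IMAGE_CONTAINER_NO_DUMMY = "1"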
diff --git a/poky/meta/classes/image-live.bbclass b/poky/meta/classes/image-live.bbclass
deleted file mode 100644
index 2c94819..0000000
--- a/poky/meta/classes/image-live.bbclass
+++ /dev/null
@@ -1,264 +0,0 @@
-# Copyright (C) 2004, Advanced Micro Devices, Inc.  All Rights Reserved
-# Released under the MIT license (see packages/COPYING)
-
-# Creates a bootable image using syslinux, your kernel and an optional
-# initrd
-
-#
-# End result is two things:
-#
-# 1. A .hddimg file which is an msdos filesystem containing syslinux, a kernel,
-# an initrd and a rootfs image. These can be written to harddisks directly and
-# also booted on USB flash disks (write them there with dd).
-#
-# 2. A CD .iso image
-
-# The boot process is that the initrd boots and processes which label was selected
-# in syslinux. Actions based on the label are then performed (e.g. installing to
-# an hdd)
-
-# External variables (also used by syslinux.bbclass)
-# ${INITRD} - indicates a list of filesystem images to concatenate and use as an initrd (optional)
-# ${HDDIMG_ID} - FAT image volume-id
-# ${ROOTFS} - indicates a filesystem image to include as the root filesystem (optional)
-
-inherit live-vm-common image-artifact-names
-
-do_bootimg[depends] += "dosfstools-native:do_populate_sysroot \
-                        mtools-native:do_populate_sysroot \
-                        cdrtools-native:do_populate_sysroot \
-                        virtual/kernel:do_deploy \
-                        ${MLPREFIX}syslinux:do_populate_sysroot \
-                        syslinux-native:do_populate_sysroot \
-                        ${@'%s:do_image_%s' % (d.getVar('PN'), d.getVar('LIVE_ROOTFS_TYPE').replace('-', '_')) if d.getVar('ROOTFS') else ''} \
-                        "
-
-
-LABELS_LIVE ?= "boot install"
-ROOT_LIVE ?= "root=/dev/ram0"
-INITRD_IMAGE_LIVE ?= "${MLPREFIX}core-image-minimal-initramfs"
-INITRD_LIVE ?= "${DEPLOY_DIR_IMAGE}/${INITRD_IMAGE_LIVE}-${MACHINE}.${INITRAMFS_FSTYPES}"
-
-LIVE_ROOTFS_TYPE ?= "ext4"
-ROOTFS ?= "${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.${LIVE_ROOTFS_TYPE}"
-
-IMAGE_TYPEDEP:live = "${LIVE_ROOTFS_TYPE}"
-IMAGE_TYPEDEP:iso = "${LIVE_ROOTFS_TYPE}"
-IMAGE_TYPEDEP:hddimg = "${LIVE_ROOTFS_TYPE}"
-IMAGE_TYPES_MASKED += "live hddimg iso"
-
-python() {
-    image_b = d.getVar('IMAGE_BASENAME')
-    initrd_i = d.getVar('INITRD_IMAGE_LIVE')
-    if image_b == initrd_i:
-        bb.error('INITRD_IMAGE_LIVE %s cannot use image live, hddimg or iso.' % initrd_i)
-        bb.fatal('Check IMAGE_FSTYPES and INITRAMFS_FSTYPES settings.')
-    elif initrd_i:
-        d.appendVarFlag('do_bootimg', 'depends', ' %s:do_image_complete' % initrd_i)
-}
-
-HDDDIR = "${S}/hddimg"
-ISODIR = "${S}/iso"
-EFIIMGDIR = "${S}/efi_img"
-COMPACT_ISODIR = "${S}/iso.z"
-
-ISOLINUXDIR ?= "/isolinux"
-ISO_BOOTIMG = "isolinux/isolinux.bin"
-ISO_BOOTCAT = "isolinux/boot.cat"
-MKISOFS_OPTIONS = "-no-emul-boot -boot-load-size 4 -boot-info-table"
-
-BOOTIMG_VOLUME_ID   ?= "boot"
-BOOTIMG_EXTRA_SPACE ?= "512"
-
-populate_live() {
-    populate_kernel $1
-	if [ -s "${ROOTFS}" ]; then
-		install -m 0644 ${ROOTFS} $1/rootfs.img
-	fi
-}
-
-build_iso() {
-	# Only create an ISO if we have an INITRD and the live or iso image type was selected
-	if [ -z "${INITRD}" ] || [ "${@bb.utils.contains_any('IMAGE_FSTYPES', 'live iso', '1', '0', d)}" != "1" ]; then
-		bbnote "ISO image will not be created."
-		return
-	fi
-	# ${INITRD} is a list of multiple filesystem images
-	for fs in ${INITRD}
-	do
-		if [ ! -s "$fs" ]; then
-			bbwarn "ISO image will not be created. $fs is invalid."
-			return
-		fi
-	done
-
-	populate_live ${ISODIR}
-
-	if [ "${PCBIOS}" = "1" ]; then
-		syslinux_iso_populate ${ISODIR}
-	fi
-	if [ "${EFI}" = "1" ]; then
-		efi_iso_populate ${ISODIR}
-		build_fat_img ${EFIIMGDIR} ${ISODIR}/efi.img
-	fi
-
-	# EFI only
-	if [ "${PCBIOS}" != "1" ] && [ "${EFI}" = "1" ] ; then
-		# Work around bug in isohybrid where it requires isolinux.bin
-		# in the boot catalog, even though it is not used
-		mkdir -p ${ISODIR}/${ISOLINUXDIR}
-		install -m 0644 ${STAGING_DATADIR}/syslinux/isolinux.bin ${ISODIR}${ISOLINUXDIR}
-	fi
-
-	# We used to have support for zisofs; this is a relic of that
-	mkisofs_compress_opts="-r"
-
-	# Check the size of ${ISODIR}/rootfs.img, use mkisofs -iso-level 3
-	# when it exceeds 3.8GB; the specification is 4G - 1 bytes, so we need
-	# to leave some space for other files.
-	mkisofs_iso_level=""
-
-        if [ -n "${ROOTFS}" ] && [ -s "${ROOTFS}" ]; then
-		rootfs_img_size=`stat -c '%s' ${ISODIR}/rootfs.img`
-		# 4080218931 = 3.8 * 1024 * 1024 * 1024
-		if [ $rootfs_img_size -gt 4080218931 ]; then
-			bbnote "${ISODIR}/rootfs.img exceeds 3.8GB, using '-iso-level 3' for mkisofs"
-			mkisofs_iso_level="-iso-level 3"
-		fi
-	fi
-
-	if [ "${PCBIOS}" = "1" ] && [ "${EFI}" != "1" ] ; then
-		# PCBIOS only media
-		mkisofs -V ${BOOTIMG_VOLUME_ID} \
-		        -o ${IMGDEPLOYDIR}/${IMAGE_NAME}.iso \
-			-b ${ISO_BOOTIMG} -c ${ISO_BOOTCAT} \
-			$mkisofs_compress_opts \
-			${MKISOFS_OPTIONS} $mkisofs_iso_level ${ISODIR}
-	else
-		# EFI only OR EFI+PCBIOS
-		mkisofs -A ${BOOTIMG_VOLUME_ID} -V ${BOOTIMG_VOLUME_ID} \
-		        -o ${IMGDEPLOYDIR}/${IMAGE_NAME}.iso \
-			-b ${ISO_BOOTIMG} -c ${ISO_BOOTCAT} \
-			$mkisofs_compress_opts ${MKISOFS_OPTIONS} $mkisofs_iso_level \
-			-eltorito-alt-boot -eltorito-platform efi \
-			-b efi.img -no-emul-boot \
-			${ISODIR}
-		isohybrid_args="-u"
-	fi
-
-	isohybrid $isohybrid_args ${IMGDEPLOYDIR}/${IMAGE_NAME}.iso
-}
-
-build_fat_img() {
-	FATSOURCEDIR=$1
-	FATIMG=$2
-
-	# Calculate the size required for the final image including the
-	# data and filesystem overhead.
-	# Sectors: 512 bytes
-	#  Blocks: 1024 bytes
-
-	# Determine the sector count just for the data
-	SECTORS=$(expr $(du --apparent-size -ks ${FATSOURCEDIR} | cut -f 1) \* 2)
-
-	# Account for the filesystem overhead. This includes directory
-	# entries in the clusters as well as the FAT itself.
-	# Assumptions:
-	#   FAT32 (12 or 16 may be selected by mkdosfs, but the extra
-	#   padding will be minimal on those smaller images and not
-	#   worth the logic here to calculate the smaller FAT sizes)
-	#   < 16 entries per directory
-	#   8.3 filenames only
-
-	# 32 bytes per dir entry
-	DIR_BYTES=$(expr $(find ${FATSOURCEDIR} | tail -n +2 | wc -l) \* 32)
-	# 32 bytes for every end-of-directory dir entry
-	DIR_BYTES=$(expr $DIR_BYTES + $(expr $(find ${FATSOURCEDIR} -type d | tail -n +2 | wc -l) \* 32))
-	# 4 bytes per FAT entry per sector of data
-	FAT_BYTES=$(expr $SECTORS \* 4)
-	# 4 bytes per FAT entry per end-of-cluster list
-	FAT_BYTES=$(expr $FAT_BYTES + $(expr $(find ${FATSOURCEDIR} -type d | tail -n +2 | wc -l) \* 4))
-
-	# Use a ceiling function to determine FS overhead in sectors
-	DIR_SECTORS=$(expr $(expr $DIR_BYTES + 511) / 512)
-	# There are two FATs on the image
-	FAT_SECTORS=$(expr $(expr $(expr $FAT_BYTES + 511) / 512) \* 2)
-	SECTORS=$(expr $SECTORS + $(expr $DIR_SECTORS + $FAT_SECTORS))
-
-	# Determine the final size in blocks accounting for some padding
-	BLOCKS=$(expr $(expr $SECTORS / 2) + ${BOOTIMG_EXTRA_SPACE})
-
-	# mkdosfs will sometimes use FAT16 when it is not appropriate,
-	# resulting in a boot failure from SYSLINUX. Use FAT32 for
-	# images larger than 512MB, otherwise let mkdosfs decide.
-	if [ $(expr $BLOCKS / 1024) -gt 512 ]; then
-		FATSIZE="-F 32"
-	fi
-
-	# mkdosfs will fail if ${FATIMG} exists. Since we are creating a
-	# new image, it is safe to delete any previous image.
-	if [ -e ${FATIMG} ]; then
-		rm ${FATIMG}
-	fi
-
-	if [ -z "${HDDIMG_ID}" ]; then
-		mkdosfs ${FATSIZE} -n ${BOOTIMG_VOLUME_ID} ${MKDOSFS_EXTRAOPTS} -C ${FATIMG} \
-			${BLOCKS}
-	else
-		mkdosfs ${FATSIZE} -n ${BOOTIMG_VOLUME_ID} ${MKDOSFS_EXTRAOPTS} -C ${FATIMG} \
-		${BLOCKS} -i ${HDDIMG_ID}
-	fi
-
-	# Copy FATSOURCEDIR recursively into the image file directly
-	mcopy -i ${FATIMG} -s ${FATSOURCEDIR}/* ::/
-}
-
-build_hddimg() {
-	# Create an HDD image
-	if [ "${@bb.utils.contains_any('IMAGE_FSTYPES', 'live hddimg', '1', '0', d)}" = "1" ] ; then
-		populate_live ${HDDDIR}
-
-		if [ "${PCBIOS}" = "1" ]; then
-			syslinux_hddimg_populate ${HDDDIR}
-		fi
-		if [ "${EFI}" = "1" ]; then
-			efi_hddimg_populate ${HDDDIR}
-		fi
-
-		# Check the size of ${HDDDIR}/rootfs.img, error out if it
-		# exceeds 4GB, it is the single file's max size of FAT fs.
-		if [ -f ${HDDDIR}/rootfs.img ]; then
-			rootfs_img_size=`stat -c '%s' ${HDDDIR}/rootfs.img`
-			max_size=`expr 4 \* 1024 \* 1024 \* 1024`
-			if [ $rootfs_img_size -ge $max_size ]; then
-				bberror "${HDDDIR}/rootfs.img rootfs size is greater than or equal to 4GB,"
-				bberror "and this doesn't work on a FAT filesystem. You can either:"
-				bberror "1) Reduce the size of rootfs.img, or,"
-				bbfatal "2) Use wic, vmdk, vhd, vhdx or vdi instead of hddimg\n"
-			fi
-		fi
-
-		build_fat_img ${HDDDIR} ${IMGDEPLOYDIR}/${IMAGE_NAME}.hddimg
-
-		if [ "${PCBIOS}" = "1" ]; then
-			syslinux_hddimg_install
-		fi
-
-		chmod 644 ${IMGDEPLOYDIR}/${IMAGE_NAME}.hddimg
-	fi
-}
-
-python do_bootimg() {
-    set_live_vm_vars(d, 'LIVE')
-    if d.getVar("PCBIOS") == "1":
-        bb.build.exec_func('build_syslinux_cfg', d)
-    if d.getVar("EFI") == "1":
-        bb.build.exec_func('build_efi_cfg', d)
-    bb.build.exec_func('build_hddimg', d)
-    bb.build.exec_func('build_iso', d)
-    bb.build.exec_func('create_symlinks', d)
-}
-do_bootimg[subimages] = "hddimg iso"
-do_bootimg[imgsuffix] = "."
-
-addtask bootimg before do_image_complete after do_rootfs
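
An image recipe opts in by adding one of the masked types to IMAGE_FSTYPES and, optionally, overriding the defaults above (a sketch):

    IMAGE_FSTYPES += "live"
    INITRD_IMAGE_LIVE = "core-image-minimal-initramfs"
    LABELS_LIVE = "boot install"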
diff --git a/poky/meta/classes/image-postinst-intercepts.bbclass b/poky/meta/classes/image-postinst-intercepts.bbclass
deleted file mode 100644
index ed30bbd..0000000
--- a/poky/meta/classes/image-postinst-intercepts.bbclass
+++ /dev/null
@@ -1,23 +0,0 @@
-# Gather existing and candidate postinst intercepts from BBPATH
-POSTINST_INTERCEPTS_DIR ?= "${COREBASE}/scripts/postinst-intercepts"
-POSTINST_INTERCEPTS_PATHS ?= "${@':'.join('%s/postinst-intercepts' % p for p in '${BBPATH}'.split(':'))}:${POSTINST_INTERCEPTS_DIR}"
-
-python find_intercepts() {
-    intercepts = {}
-    search_paths = []
-    paths = d.getVar('POSTINST_INTERCEPTS_PATHS').split(':')
-    overrides = (':' + d.getVar('FILESOVERRIDES')).split(':') + ['']
-    search_paths = [os.path.join(p, op) for p in paths for op in overrides]
-    searched = oe.path.which_wild('*', ':'.join(search_paths), candidates=True)
-    files, chksums = [], []
-    for pathname, candidates in searched:
-        if os.path.isfile(pathname):
-            files.append(pathname)
-            chksums.append('%s:True' % pathname)
-            chksums.extend('%s:False' % c for c in candidates[:-1])
-
-    d.setVar('POSTINST_INTERCEPT_CHECKSUMS', ' '.join(chksums))
-    d.setVar('POSTINST_INTERCEPTS', ' '.join(files))
-}
-find_intercepts[eventmask] += "bb.event.RecipePreFinalise"
-addhandler find_intercepts
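
Layers can contribute their own intercepts simply by shipping them on BBPATH; for example, a hypothetical meta-example/postinst-intercepts/update_example_cache script would be picked up automatically via POSTINST_INTERCEPTS_PATHS.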
diff --git a/poky/meta/classes/image.bbclass b/poky/meta/classes/image.bbclass
deleted file mode 100644
index 2139a7e..0000000
--- a/poky/meta/classes/image.bbclass
+++ /dev/null
@@ -1,679 +0,0 @@
-
-IMAGE_CLASSES ??= ""
-
-# rootfs bootstrap install
-# warning -  image-container resets this
-ROOTFS_BOOTSTRAP_INSTALL = "run-postinsts"
-
-# Handle inherits of any of the image classes we need
-IMGCLASSES = "rootfs_${IMAGE_PKGTYPE} image_types ${IMAGE_CLASSES}"
-# Only Linux SDKs support populate_sdk_ext, fall back to populate_sdk_base
-# in the non-Linux SDK_OS case, such as mingw32
-IMGCLASSES += "${@['populate_sdk_base', 'populate_sdk_ext']['linux' in d.getVar("SDK_OS")]}"
-IMGCLASSES += "${@bb.utils.contains_any('IMAGE_FSTYPES', 'live iso hddimg', 'image-live', '', d)}"
-IMGCLASSES += "${@bb.utils.contains('IMAGE_FSTYPES', 'container', 'image-container', '', d)}"
-IMGCLASSES += "image_types_wic"
-IMGCLASSES += "rootfs-postcommands"
-IMGCLASSES += "image-postinst-intercepts"
-IMGCLASSES += "overlayfs-etc"
-inherit ${IMGCLASSES}
-
-TOOLCHAIN_TARGET_TASK += "${PACKAGE_INSTALL}"
-TOOLCHAIN_TARGET_TASK_ATTEMPTONLY += "${PACKAGE_INSTALL_ATTEMPTONLY}"
-POPULATE_SDK_POST_TARGET_COMMAND += "rootfs_sysroot_relativelinks; "
-
-LICENSE ?= "MIT"
-PACKAGES = ""
-DEPENDS += "${@' '.join(["%s-qemuwrapper-cross" % m for m in d.getVar("MULTILIB_VARIANTS").split()])} qemuwrapper-cross depmodwrapper-cross cross-localedef-native"
-RDEPENDS += "${PACKAGE_INSTALL} ${LINGUAS_INSTALL} ${IMAGE_INSTALL_DEBUGFS}"
-RRECOMMENDS += "${PACKAGE_INSTALL_ATTEMPTONLY}"
-PATH:prepend = "${@":".join(all_multilib_tune_values(d, 'STAGING_BINDIR_CROSS').split())}:"
-
-INHIBIT_DEFAULT_DEPS = "1"
-
-# IMAGE_FEATURES may contain any available package group
-IMAGE_FEATURES ?= ""
-IMAGE_FEATURES[type] = "list"
-IMAGE_FEATURES[validitems] += "debug-tweaks read-only-rootfs read-only-rootfs-delayed-postinsts stateless-rootfs empty-root-password allow-empty-password allow-root-login post-install-logging overlayfs-etc"
-
-# Generate companion debugfs?
-IMAGE_GEN_DEBUGFS ?= "0"
-
-# These packages will be installed as additional into debug rootfs
-IMAGE_INSTALL_DEBUGFS ?= ""
-
-# These packages will be removed from a read-only rootfs after all other
-# packages have been installed
-ROOTFS_RO_UNNEEDED ??= "update-rc.d base-passwd shadow ${VIRTUAL-RUNTIME_update-alternatives} ${ROOTFS_BOOTSTRAP_INSTALL}"
-
-# packages to install from features
-FEATURE_INSTALL = "${@' '.join(oe.packagegroup.required_packages(oe.data.typed_value('IMAGE_FEATURES', d), d))}"
-FEATURE_INSTALL[vardepvalue] = "${FEATURE_INSTALL}"
-FEATURE_INSTALL_OPTIONAL = "${@' '.join(oe.packagegroup.optional_packages(oe.data.typed_value('IMAGE_FEATURES', d), d))}"
-FEATURE_INSTALL_OPTIONAL[vardepvalue] = "${FEATURE_INSTALL_OPTIONAL}"
-
-# Define some very basic feature package groups
-FEATURE_PACKAGES_package-management = "${ROOTFS_PKGMANAGE}"
-SPLASH ?= "${@bb.utils.contains("MACHINE_FEATURES", "screen", "psplash", "", d)}"
-FEATURE_PACKAGES_splash = "${SPLASH}"
-
-IMAGE_INSTALL_COMPLEMENTARY = '${@complementary_globs("IMAGE_FEATURES", d)}'
-
-def check_image_features(d):
-    valid_features = (d.getVarFlag('IMAGE_FEATURES', 'validitems') or "").split()
-    valid_features += d.getVarFlags('COMPLEMENTARY_GLOB').keys()
-    for var in d:
-       if var.startswith("FEATURE_PACKAGES_"):
-           valid_features.append(var[17:])
-    valid_features.sort()
-
-    features = set(oe.data.typed_value('IMAGE_FEATURES', d))
-    for feature in features:
-        if feature not in valid_features:
-            if bb.utils.contains('EXTRA_IMAGE_FEATURES', feature, True, False, d):
-                raise bb.parse.SkipRecipe("'%s' in IMAGE_FEATURES (added via EXTRA_IMAGE_FEATURES) is not a valid image feature. Valid features: %s" % (feature, ' '.join(valid_features)))
-            else:
-                raise bb.parse.SkipRecipe("'%s' in IMAGE_FEATURES is not a valid image feature. Valid features: %s" % (feature, ' '.join(valid_features)))
-
-IMAGE_INSTALL ?= ""
-IMAGE_INSTALL[type] = "list"
-export PACKAGE_INSTALL ?= "${IMAGE_INSTALL} ${ROOTFS_BOOTSTRAP_INSTALL} ${FEATURE_INSTALL}"
-PACKAGE_INSTALL_ATTEMPTONLY ?= "${FEATURE_INSTALL_OPTIONAL}"
-
-IMGDEPLOYDIR = "${WORKDIR}/deploy-${PN}-image-complete"
-
-# Images are generally built explicitly, do not need to be part of world.
-EXCLUDE_FROM_WORLD = "1"
-
-USE_DEVFS ?= "1"
-USE_DEPMOD ?= "1"
-
-PID = "${@os.getpid()}"
-
-PACKAGE_ARCH = "${MACHINE_ARCH}"
-
-LDCONFIGDEPEND ?= "ldconfig-native:do_populate_sysroot"
-LDCONFIGDEPEND:libc-musl = ""
-
-# This is needed to have depmod data in PKGDATA_DIR,
-# but if you're building a small initramfs image,
-# e.g. to include it in your kernel, you probably
-# don't want this dependency, which causes a dependency loop
-KERNELDEPMODDEPEND ?= "virtual/kernel:do_packagedata"
-
-do_rootfs[depends] += " \
-    makedevs-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot ${LDCONFIGDEPEND} \
-    virtual/update-alternatives-native:do_populate_sysroot update-rc.d-native:do_populate_sysroot \
-    ${KERNELDEPMODDEPEND} \
-"
-do_rootfs[recrdeptask] += "do_packagedata"
-
-def rootfs_command_variables(d):
-    return ['ROOTFS_POSTPROCESS_COMMAND','ROOTFS_PREPROCESS_COMMAND','ROOTFS_POSTINSTALL_COMMAND','ROOTFS_POSTUNINSTALL_COMMAND','OPKG_PREPROCESS_COMMANDS','OPKG_POSTPROCESS_COMMANDS','IMAGE_POSTPROCESS_COMMAND',
-            'IMAGE_PREPROCESS_COMMAND','RPM_PREPROCESS_COMMANDS','RPM_POSTPROCESS_COMMANDS','DEB_PREPROCESS_COMMANDS','DEB_POSTPROCESS_COMMANDS']
-
-python () {
-    variables = rootfs_command_variables(d)
-    for var in variables:
-        if d.getVar(var, False):
-            d.setVarFlag(var, 'func', '1')
-}
-
-def rootfs_variables(d):
-    from oe.rootfs import variable_depends
-    variables = ['IMAGE_DEVICE_TABLE','IMAGE_DEVICE_TABLES','BUILD_IMAGES_FROM_FEEDS','IMAGE_TYPES_MASKED','IMAGE_ROOTFS_ALIGNMENT','IMAGE_OVERHEAD_FACTOR','IMAGE_ROOTFS_SIZE','IMAGE_ROOTFS_EXTRA_SPACE',
-                 'IMAGE_ROOTFS_MAXSIZE','IMAGE_NAME','IMAGE_LINK_NAME','IMAGE_MANIFEST','DEPLOY_DIR_IMAGE','IMAGE_FSTYPES','IMAGE_INSTALL_COMPLEMENTARY','IMAGE_LINGUAS', 'IMAGE_LINGUAS_COMPLEMENTARY', 'IMAGE_LOCALES_ARCHIVE',
-                 'MULTILIBRE_ALLOW_REP','MULTILIB_TEMP_ROOTFS','MULTILIB_VARIANTS','MULTILIBS','ALL_MULTILIB_PACKAGE_ARCHS','MULTILIB_GLOBAL_VARIANTS','BAD_RECOMMENDATIONS','NO_RECOMMENDATIONS',
-                 'PACKAGE_ARCHS','PACKAGE_CLASSES','TARGET_VENDOR','TARGET_ARCH','TARGET_OS','OVERRIDES','BBEXTENDVARIANT','FEED_DEPLOYDIR_BASE_URI','INTERCEPT_DIR','USE_DEVFS',
-                 'CONVERSIONTYPES', 'IMAGE_GEN_DEBUGFS', 'ROOTFS_RO_UNNEEDED', 'IMGDEPLOYDIR', 'PACKAGE_EXCLUDE_COMPLEMENTARY', 'REPRODUCIBLE_TIMESTAMP_ROOTFS', 'IMAGE_INSTALL_DEBUGFS']
-    variables.extend(rootfs_command_variables(d))
-    variables.extend(variable_depends(d))
-    return " ".join(variables)
-
-do_rootfs[vardeps] += "${@rootfs_variables(d)}"
-
-# This is needed to have the kernel image in DEPLOY_DIR.
-# This follows many common use cases and user expectations.
-# But if you are building an image which doesn't need the kernel image at all,
-# you can unset this variable manually.
-KERNEL_DEPLOY_DEPEND ?= "virtual/kernel:do_deploy"
-do_build[depends] += "${KERNEL_DEPLOY_DEPEND}"
-
-
-python () {
-    def extraimage_getdepends(task):
-        deps = ""
-        for dep in (d.getVar('EXTRA_IMAGEDEPENDS') or "").split():
-            if ":" in dep:
-                deps += " %s " % (dep)
-            else:
-                deps += " %s:%s" % (dep, task)
-        return deps
-
-    d.appendVarFlag('do_image_complete', 'depends', extraimage_getdepends('do_populate_sysroot'))
-
-    deps = " " + imagetypes_getdepends(d)
-    d.appendVarFlag('do_rootfs', 'depends', deps)
-
-    #process IMAGE_FEATURES, we must do this before runtime_mapping_rename
-    #Check for replaces image features
-    features = set(oe.data.typed_value('IMAGE_FEATURES', d))
-    remain_features = features.copy()
-    for feature in features:
-        replaces = set((d.getVar("IMAGE_FEATURES_REPLACES_%s" % feature) or "").split())
-        remain_features -= replaces
-
-    #Check for conflict image features
-    for feature in remain_features:
-        conflicts = set((d.getVar("IMAGE_FEATURES_CONFLICTS_%s" % feature) or "").split())
-        temp = conflicts & remain_features
-        if temp:
-            bb.fatal("%s contains conflicting IMAGE_FEATURES %s %s" % (d.getVar('PN'), feature, ' '.join(list(temp))))
-
-    d.setVar('IMAGE_FEATURES', ' '.join(sorted(list(remain_features))))
-
-    check_image_features(d)
-}
-
-IMAGE_POSTPROCESS_COMMAND ?= ""
-
-# some default locales
-IMAGE_LINGUAS ?= "de-de fr-fr en-gb"
-
-LINGUAS_INSTALL ?= "${@" ".join(map(lambda s: "locale-base-%s" % s, d.getVar('IMAGE_LINGUAS').split()))}"
-
-# per default create a locale archive
-IMAGE_LOCALES_ARCHIVE ?= '1'
-
-# Prefer image, but use the fallback files for lookups if the image ones
-# aren't yet available.
-PSEUDO_PASSWD = "${IMAGE_ROOTFS}:${STAGING_DIR_NATIVE}"
-
-PSEUDO_IGNORE_PATHS .= ",${WORKDIR}/intercept_scripts,${WORKDIR}/oe-rootfs-repo,${WORKDIR}/sstate-build-image_complete"
-
-PACKAGE_EXCLUDE ??= ""
-PACKAGE_EXCLUDE[type] = "list"
-
-fakeroot python do_rootfs () {
-    from oe.rootfs import create_rootfs
-    from oe.manifest import create_manifest
-    import logging
-
-    logger = d.getVar('BB_TASK_LOGGER', False)
-    if logger:
-        logcatcher = bb.utils.LogCatcher()
-        logger.addHandler(logcatcher)
-    else:
-        logcatcher = None
-
-    # NOTE: if you add, remove or significantly refactor the stages of this
-    # process then you should recalculate the weightings here. This is quite
-    # easy to do - just change the MultiStageProgressReporter line temporarily
-    # to pass debug=True as the last parameter and you'll get a printout of
-    # the weightings as well as a map to the lines where next_stage() was
-    # called. Of course this isn't critical, but it helps to keep the progress
-    # reporting accurate.
-    stage_weights = [1, 203, 354, 186, 65, 4228, 1, 353, 49, 330, 382, 23, 1]
-    progress_reporter = bb.progress.MultiStageProgressReporter(d, stage_weights)
-    progress_reporter.next_stage()
-
-    # Handle package exclusions
-    excl_pkgs = d.getVar("PACKAGE_EXCLUDE").split()
-    inst_pkgs = d.getVar("PACKAGE_INSTALL").split()
-    inst_attempt_pkgs = d.getVar("PACKAGE_INSTALL_ATTEMPTONLY").split()
-
-    d.setVar('PACKAGE_INSTALL_ORIG', ' '.join(inst_pkgs))
-    d.setVar('PACKAGE_INSTALL_ATTEMPTONLY', ' '.join(inst_attempt_pkgs))
-
-    for pkg in excl_pkgs:
-        if pkg in inst_pkgs:
-            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_pkgs))
-            inst_pkgs.remove(pkg)
-
-        if pkg in inst_attempt_pkgs:
-            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL_ATTEMPTONLY (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_pkgs))
-            inst_attempt_pkgs.remove(pkg)
-
-    d.setVar("PACKAGE_INSTALL", ' '.join(inst_pkgs))
-    d.setVar("PACKAGE_INSTALL_ATTEMPTONLY", ' '.join(inst_attempt_pkgs))
-
-    # Ensure we handle package name remapping
-    # We have to delay the runtime_mapping_rename until just before rootfs runs;
-    # otherwise, the multilib renaming could step in and squash any fixups that
-    # may have occurred.
-    pn = d.getVar('PN')
-    runtime_mapping_rename("PACKAGE_INSTALL", pn, d)
-    runtime_mapping_rename("PACKAGE_INSTALL_ATTEMPTONLY", pn, d)
-    runtime_mapping_rename("BAD_RECOMMENDATIONS", pn, d)
-
-    # Generate the initial manifest
-    create_manifest(d)
-
-    progress_reporter.next_stage()
-
-    # generate rootfs
-    d.setVarFlag('REPRODUCIBLE_TIMESTAMP_ROOTFS', 'export', '1')
-    create_rootfs(d, progress_reporter=progress_reporter, logcatcher=logcatcher)
-
-    progress_reporter.finish()
-}
-do_rootfs[dirs] = "${TOPDIR}"
-do_rootfs[cleandirs] += "${IMAGE_ROOTFS} ${IMGDEPLOYDIR} ${S}"
-do_rootfs[file-checksums] += "${POSTINST_INTERCEPT_CHECKSUMS}"
-addtask rootfs after do_prepare_recipe_sysroot
-
-fakeroot python do_image () {
-    from oe.utils import execute_pre_post_process
-
-    d.setVarFlag('REPRODUCIBLE_TIMESTAMP_ROOTFS', 'export', '1')
-    pre_process_cmds = d.getVar("IMAGE_PREPROCESS_COMMAND")
-
-    execute_pre_post_process(d, pre_process_cmds)
-}
-do_image[dirs] = "${TOPDIR}"
-addtask do_image after do_rootfs
-
-fakeroot python do_image_complete () {
-    from oe.utils import execute_pre_post_process
-
-    post_process_cmds = d.getVar("IMAGE_POSTPROCESS_COMMAND")
-
-    execute_pre_post_process(d, post_process_cmds)
-}
-do_image_complete[dirs] = "${TOPDIR}"
-SSTATETASKS += "do_image_complete"
-SSTATE_SKIP_CREATION:task-image-complete = '1'
-do_image_complete[sstate-inputdirs] = "${IMGDEPLOYDIR}"
-do_image_complete[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
-do_image_complete[stamp-extra-info] = "${MACHINE_ARCH}"
-addtask do_image_complete after do_image before do_build
-python do_image_complete_setscene () {
-    sstate_setscene(d)
-}
-addtask do_image_complete_setscene
-
-# Add image-level QA/sanity checks to IMAGE_QA_COMMANDS
-#
-# IMAGE_QA_COMMANDS += " \
-#     image_check_everything_ok \
-# "
-# This task runs all functions in IMAGE_QA_COMMANDS after the rootfs
-# construction has completed in order to validate the resulting image.
-#
-# The functions should use ${IMAGE_ROOTFS} to find the unpacked rootfs
-# directory, which if QA passes will be the basis for the images.
-fakeroot python do_image_qa () {
-    from oe.utils import ImageQAFailed
-
-    qa_cmds = (d.getVar('IMAGE_QA_COMMANDS') or '').split()
-    qamsg = ""
-
-    for cmd in qa_cmds:
-        try:
-            bb.build.exec_func(cmd, d)
-        except oe.utils.ImageQAFailed as e:
-            qamsg = qamsg + '\tImage QA function %s failed: %s\n' % (e.name, e.description)
-        except Exception as e:
-            qamsg = qamsg + '\tImage QA function %s failed\n' % cmd
-
-    if qamsg:
-        imgname = d.getVar('IMAGE_NAME')
-        bb.fatal("QA errors found whilst validating image: %s\n%s" % (imgname, qamsg))
-}
-addtask do_image_qa after do_rootfs before do_image
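A minimal sketch of such a QA function, matching the commented IMAGE_QA_COMMANDS example above (the function name and the specific check are illustrative, and this assumes oe.utils.ImageQAFailed accepts a description and a name):

    python image_check_everything_ok () {
        import os
        from oe.utils import ImageQAFailed

        # Example check: insist that the unpacked rootfs contains an /etc directory.
        if not os.path.isdir(os.path.join(d.getVar('IMAGE_ROOTFS'), 'etc')):
            raise ImageQAFailed("rootfs is missing /etc", "image_check_everything_ok")
    }

    IMAGE_QA_COMMANDS += "image_check_everything_ok"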
-
-SSTATETASKS += "do_image_qa"
-SSTATE_SKIP_CREATION:task-image-qa = '1'
-do_image_qa[sstate-inputdirs] = ""
-do_image_qa[sstate-outputdirs] = ""
-python do_image_qa_setscene () {
-    sstate_setscene(d)
-}
-addtask do_image_qa_setscene
-
-def setup_debugfs_variables(d):
-    d.appendVar('IMAGE_ROOTFS', '-dbg')
-    if d.getVar('IMAGE_LINK_NAME'):
-        d.appendVar('IMAGE_LINK_NAME', '-dbg')
-    d.appendVar('IMAGE_NAME','-dbg')
-    d.setVar('IMAGE_BUILDING_DEBUGFS', 'true')
-    debugfs_image_fstypes = d.getVar('IMAGE_FSTYPES_DEBUGFS')
-    if debugfs_image_fstypes:
-        d.setVar('IMAGE_FSTYPES', debugfs_image_fstypes)
-
-python setup_debugfs () {
-    setup_debugfs_variables(d)
-}
-
-python () {
-    vardeps = set()
-    # We allow CONVERSIONTYPES to have duplicates. That avoids breaking
-    # derived distros when OE-core or some other layer independently adds
-    # the same type. There is still only one command for each type, but
-    # presumably the commands will do the same when the type is the same,
-    # even when added in different places.
-    #
-    # Without de-duplication, gen_conversion_cmds() below
-    # would create the same compression command multiple times.
-    ctypes = set(d.getVar('CONVERSIONTYPES').split())
-    old_overrides = d.getVar('OVERRIDES', False)
-
-    def _image_base_type(type):
-        basetype = type
-        for ctype in ctypes:
-            if type.endswith("." + ctype):
-                basetype = type[:-len("." + ctype)]
-                break
-
-        if basetype != type:
-            # New base type itself might be generated by a conversion command.
-            basetype = _image_base_type(basetype)
-
-        return basetype
-
-    basetypes = {}
-    alltypes = d.getVar('IMAGE_FSTYPES').split()
-    typedeps = {}
-
-    if d.getVar('IMAGE_GEN_DEBUGFS') == "1":
-        debugfs_fstypes = d.getVar('IMAGE_FSTYPES_DEBUGFS').split()
-        for t in debugfs_fstypes:
-            alltypes.append("debugfs_" + t)
-
-    def _add_type(t):
-        baset = _image_base_type(t)
-        input_t = t
-        if baset not in basetypes:
-            basetypes[baset]= []
-        if t not in basetypes[baset]:
-            basetypes[baset].append(t)
-        debug = ""
-        if t.startswith("debugfs_"):
-            t = t[8:]
-            debug = "debugfs_"
-        deps = (d.getVar('IMAGE_TYPEDEP:' + t) or "").split()
-        vardeps.add('IMAGE_TYPEDEP:' + t)
-        if baset not in typedeps:
-            typedeps[baset] = set()
-        deps = [debug + dep for dep in deps]
-        for dep in deps:
-            if dep not in alltypes:
-                alltypes.append(dep)
-            _add_type(dep)
-            basedep = _image_base_type(dep)
-            typedeps[baset].add(basedep)
-
-        if baset != input_t:
-            _add_type(baset)
-
-    for t in alltypes[:]:
-        _add_type(t)
-
-    d.appendVarFlag('do_image', 'vardeps', ' '.join(vardeps))
-
-    maskedtypes = (d.getVar('IMAGE_TYPES_MASKED') or "").split()
-    maskedtypes = [dbg + t for t in maskedtypes for dbg in ("", "debugfs_")]
-
-    for t in basetypes:
-        vardeps = set()
-        cmds = []
-        subimages = []
-        realt = t
-
-        if t in maskedtypes:
-            continue
-
-        localdata = bb.data.createCopy(d)
-        debug = ""
-        if t.startswith("debugfs_"):
-            setup_debugfs_variables(localdata)
-            debug = "setup_debugfs "
-            realt = t[8:]
-        localdata.setVar('OVERRIDES', '%s:%s' % (realt, old_overrides))
-        localdata.setVar('type', realt)
-        # Delete DATETIME so we don't expand any references to it now
-        # This means the task's hash can be stable rather than having hardcoded
-        # date/time values. It will get expanded at execution time.
-        # Similarly TMPDIR since otherwise we see QA stamp comparison problems
-        # Expand PV else it can trigger get_srcrev which can fail due to these variables being unset
-        localdata.setVar('PV', d.getVar('PV'))
-        localdata.delVar('DATETIME')
-        localdata.delVar('DATE')
-        localdata.delVar('TMPDIR')
-        localdata.delVar('IMAGE_VERSION_SUFFIX')
-        vardepsexclude = (d.getVarFlag('IMAGE_CMD:' + realt, 'vardepsexclude', True) or '').split()
-        for dep in vardepsexclude:
-            localdata.delVar(dep)
-
-        image_cmd = localdata.getVar("IMAGE_CMD")
-        vardeps.add('IMAGE_CMD:' + realt)
-        if image_cmd:
-            cmds.append("\t" + image_cmd)
-        else:
-            bb.fatal("No IMAGE_CMD defined for IMAGE_FSTYPES entry '%s' - possibly invalid type name or missing support class" % t)
-        cmds.append(localdata.expand("\tcd ${IMGDEPLOYDIR}"))
-
-        # Since a copy of IMAGE_CMD:xxx will be inlined within do_image_xxx,
-        # prevent a redundant copy of IMAGE_CMD:xxx being emitted as a function.
-        d.delVarFlag('IMAGE_CMD:' + realt, 'func')
-
-        rm_tmp_images = set()
-        def gen_conversion_cmds(bt):
-            for ctype in sorted(ctypes):
-                if bt.endswith("." + ctype):
-                    type = bt[0:-len(ctype) - 1]
-                    if type.startswith("debugfs_"):
-                        type = type[8:]
-                    # Create input image first.
-                    gen_conversion_cmds(type)
-                    localdata.setVar('type', type)
-                    cmd = "\t" + localdata.getVar("CONVERSION_CMD:" + ctype)
-                    if cmd not in cmds:
-                        cmds.append(cmd)
-                    vardeps.add('CONVERSION_CMD:' + ctype)
-                    subimage = type + "." + ctype
-                    if subimage not in subimages:
-                        subimages.append(subimage)
-                    if type not in alltypes:
-                        rm_tmp_images.add(localdata.expand("${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"))
-
-        for bt in basetypes[t]:
-            gen_conversion_cmds(bt)
-
-        localdata.setVar('type', realt)
-        if t not in alltypes:
-            rm_tmp_images.add(localdata.expand("${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"))
-        else:
-            subimages.append(realt)
-
-        # Clean up after applying all conversion commands. Some of them might
-        # use the same input, therefore we cannot delete sooner without applying
-        # some complex dependency analysis.
-        for image in sorted(rm_tmp_images):
-            cmds.append("\trm " + image)
-
-        after = 'do_image'
-        for dep in typedeps[t]:
-            after += ' do_image_%s' % dep.replace("-", "_").replace(".", "_")
-
-        task = "do_image_%s" % t.replace("-", "_").replace(".", "_")
-
-        d.setVar(task, '\n'.join(cmds))
-        d.setVarFlag(task, 'func', '1')
-        d.setVarFlag(task, 'fakeroot', '1')
-
-        d.appendVarFlag(task, 'prefuncs', ' ' + debug + ' set_image_size')
-        d.prependVarFlag(task, 'postfuncs', 'create_symlinks ')
-        d.appendVarFlag(task, 'subimages', ' ' + ' '.join(subimages))
-        d.appendVarFlag(task, 'vardeps', ' ' + ' '.join(vardeps))
-        d.appendVarFlag(task, 'vardepsexclude', ' DATETIME DATE ' + ' '.join(vardepsexclude))
-
-        bb.debug(2, "Adding task %s before %s, after %s" % (task, 'do_image_complete', after))
-        bb.build.addtask(task, 'do_image_complete', after, d)
-}
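As an illustrative walk-through of what the anonymous function above generates (roughly equivalent metadata, not literal output): with IMAGE_FSTYPES = "ext4.gz", the base type is "ext4", so a single task is emitted that runs the ext4 image command, changes into ${IMGDEPLOYDIR}, applies the gz conversion, and removes the intermediate .ext4 because plain "ext4" was not requested. Using the default ext4 and gz commands defined later in image_types.bbclass, the result resembles:

    fakeroot do_image_ext4 () {
    	oe_mkext234fs ext4 ${EXTRA_IMAGECMD}
    	cd ${IMGDEPLOYDIR}
    	gzip -f -9 -n -c --rsyncable ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.ext4 > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.ext4.gz
    	rm ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.ext4
    }

The task is scheduled after do_image and before do_image_complete, with set_image_size as a prefunc and create_symlinks as a postfunc.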
-
-#
-# Compute the rootfs size
-#
-def get_rootfs_size(d):
-    import subprocess, oe.utils
-
-    rootfs_alignment = int(d.getVar('IMAGE_ROOTFS_ALIGNMENT'))
-    overhead_factor = float(d.getVar('IMAGE_OVERHEAD_FACTOR'))
-    rootfs_req_size = int(d.getVar('IMAGE_ROOTFS_SIZE'))
-    rootfs_extra_space = eval(d.getVar('IMAGE_ROOTFS_EXTRA_SPACE'))
-    rootfs_maxsize = d.getVar('IMAGE_ROOTFS_MAXSIZE')
-    image_fstypes = d.getVar('IMAGE_FSTYPES') or ''
-    initramfs_fstypes = d.getVar('INITRAMFS_FSTYPES') or ''
-    initramfs_maxsize = d.getVar('INITRAMFS_MAXSIZE')
-
-    size_kb = oe.utils.directory_size(d.getVar("IMAGE_ROOTFS")) / 1024
-
-    base_size = size_kb * overhead_factor
-    bb.debug(1, '%f = %d * %f' % (base_size, size_kb, overhead_factor))
-    base_size2 = max(base_size, rootfs_req_size) + rootfs_extra_space
-    bb.debug(1, '%f = max(%f, %d)[%f] + %d' % (base_size2, base_size, rootfs_req_size, max(base_size, rootfs_req_size), rootfs_extra_space))
-
-    base_size = base_size2
-    if base_size != int(base_size):
-        base_size = int(base_size + 1)
-    else:
-        base_size = int(base_size)
-    bb.debug(1, '%f = int(%f)' % (base_size, base_size2))
-
-    base_size_saved = base_size
-    base_size += rootfs_alignment - 1
-    base_size -= base_size % rootfs_alignment
-    bb.debug(1, '%d = aligned(%d)' % (base_size, base_size_saved))
-
-    # Do not check the image size of the debugfs image. It is not supposed
-    # to be deployed, etc., so it doesn't make sense to limit the size
-    # of the debugfs image.
-    if (d.getVar('IMAGE_BUILDING_DEBUGFS') or "") == "true":
-        bb.debug(1, 'returning debugfs size %d' % (base_size))
-        return base_size
-
-    # Check the rootfs size against IMAGE_ROOTFS_MAXSIZE (if set)
-    if rootfs_maxsize:
-        rootfs_maxsize_int = int(rootfs_maxsize)
-        if base_size > rootfs_maxsize_int:
-            bb.fatal("The rootfs size %d(K) exceeds IMAGE_ROOTFS_MAXSIZE: %d(K)" % \
-                (base_size, rootfs_maxsize_int))
-
-    # Check the initramfs size against INITRAMFS_MAXSIZE (if set)
-    if image_fstypes == initramfs_fstypes != ''  and initramfs_maxsize:
-        initramfs_maxsize_int = int(initramfs_maxsize)
-        if base_size > initramfs_maxsize_int:
-            bb.error("The initramfs size %d(K) exceeds INITRAMFS_MAXSIZE: %d(K)" % \
-                (base_size, initramfs_maxsize_int))
-            bb.error("You can set INITRAMFS_MAXSIZE a larger value. Usually, it should")
-            bb.fatal("be less than 1/2 of ram size, or you may fail to boot it.\n")
-
-    bb.debug(1, 'returning %d' % (base_size))
-    return base_size
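To make the arithmetic above concrete, a worked example with purely illustrative values:

    # du of ${IMAGE_ROOTFS}           -> size_kb             = 100000
    # IMAGE_OVERHEAD_FACTOR = "1.3"   -> 100000 * 1.3        = 130000.0
    # IMAGE_ROOTFS_SIZE = "65536"     -> max(130000, 65536)  = 130000.0
    # IMAGE_ROOTFS_EXTRA_SPACE = "0"  -> 130000.0 + 0        = 130000 (rounded up to an integer)
    # IMAGE_ROOTFS_ALIGNMENT = "4096" -> aligned up          = 131072 KiB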
-
-python set_image_size () {
-        rootfs_size = get_rootfs_size(d)
-        d.setVar('ROOTFS_SIZE', str(rootfs_size))
-        d.setVarFlag('ROOTFS_SIZE', 'export', '1')
-}
-
-#
-# Create symlinks to the newly created image
-#
-python create_symlinks() {
-
-    deploy_dir = d.getVar('IMGDEPLOYDIR')
-    img_name = d.getVar('IMAGE_NAME')
-    link_name = d.getVar('IMAGE_LINK_NAME')
-    manifest_name = d.getVar('IMAGE_MANIFEST')
-    taskname = d.getVar("BB_CURRENTTASK")
-    subimages = (d.getVarFlag("do_" + taskname, 'subimages', False) or "").split()
-    imgsuffix = d.getVarFlag("do_" + taskname, 'imgsuffix') or d.expand("${IMAGE_NAME_SUFFIX}.")
-
-    if not link_name:
-        return
-    for type in subimages:
-        dst = os.path.join(deploy_dir, link_name + "." + type)
-        src = img_name + imgsuffix + type
-        if os.path.exists(os.path.join(deploy_dir, src)):
-            bb.note("Creating symlink: %s -> %s" % (dst, src))
-            if os.path.islink(dst):
-                os.remove(dst)
-            os.symlink(src, dst)
-        else:
-            bb.note("Skipping symlink, source does not exist: %s -> %s" % (dst, src))
-}
-
-MULTILIBRE_ALLOW_REP =. "${base_bindir}|${base_sbindir}|${bindir}|${sbindir}|${libexecdir}|${sysconfdir}|${nonarch_base_libdir}/udev|/lib/modules/[^/]*/modules.*|"
-MULTILIB_CHECK_FILE = "${WORKDIR}/multilib_check.py"
-MULTILIB_TEMP_ROOTFS = "${WORKDIR}/multilib"
-
-do_fetch[noexec] = "1"
-do_unpack[noexec] = "1"
-do_patch[noexec] = "1"
-do_configure[noexec] = "1"
-do_compile[noexec] = "1"
-do_install[noexec] = "1"
-deltask do_populate_lic
-deltask do_populate_sysroot
-do_package[noexec] = "1"
-deltask do_package_qa
-deltask do_packagedata
-deltask do_package_write_ipk
-deltask do_package_write_deb
-deltask do_package_write_rpm
-
-# Prepare the root links to point to the /usr counterparts.
-create_merged_usr_symlinks() {
-    root="$1"
-    install -d $root${base_bindir} $root${base_sbindir} $root${base_libdir}
-    ln -rs $root${base_bindir} $root/bin
-    ln -rs $root${base_sbindir} $root/sbin
-    ln -rs $root${base_libdir} $root/${baselib}
-
-    if [ "${nonarch_base_libdir}" != "${base_libdir}" ]; then
-       install -d $root${nonarch_base_libdir}
-       ln -rs $root${nonarch_base_libdir} $root/lib
-    fi
-
-    # create base links for multilibs
-    multi_libdirs="${@d.getVar('MULTILIB_VARIANTS')}"
-    for d in $multi_libdirs; do
-        install -d $root${exec_prefix}/$d
-        ln -rs $root${exec_prefix}/$d $root/$d
-    done
-}
-
-create_merged_usr_symlinks_rootfs() {
-    create_merged_usr_symlinks ${IMAGE_ROOTFS}
-}
-
-create_merged_usr_symlinks_sdk() {
-    create_merged_usr_symlinks ${SDK_OUTPUT}${SDKTARGETSYSROOT}
-}
-
-ROOTFS_PREPROCESS_COMMAND += "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', 'create_merged_usr_symlinks_rootfs; ', '',d)}"
-POPULATE_SDK_PRE_TARGET_COMMAND += "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', 'create_merged_usr_symlinks_sdk; ', '',d)}"
-
-reproducible_final_image_task () {
-    if [ "$REPRODUCIBLE_TIMESTAMP_ROOTFS" = "" ]; then
-        REPRODUCIBLE_TIMESTAMP_ROOTFS=`git -C "${COREBASE}" log -1 --pretty=%ct 2>/dev/null` || true
-        if [ "$REPRODUCIBLE_TIMESTAMP_ROOTFS" = "" ]; then
-            REPRODUCIBLE_TIMESTAMP_ROOTFS=`stat -c%Y ${@bb.utils.which(d.getVar("BBPATH"), "conf/bitbake.conf")}`
-        fi
-    fi
-    # Set mtime of all files to a reproducible value
-    bbnote "reproducible_final_image_task: mtime set to $REPRODUCIBLE_TIMESTAMP_ROOTFS"
-    find  ${IMAGE_ROOTFS} -print0 | xargs -0 touch -h  --date=@$REPRODUCIBLE_TIMESTAMP_ROOTFS
-}
-
-systemd_preset_all () {
-    if [ -e ${IMAGE_ROOTFS}${root_prefix}/lib/systemd/systemd ]; then
-	systemctl --root="${IMAGE_ROOTFS}" --preset-mode=enable-only preset-all
-    fi
-}
-
-IMAGE_PREPROCESS_COMMAND:append = " ${@ 'systemd_preset_all;' if bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d) and not bb.utils.contains('IMAGE_FEATURES', 'stateless-rootfs', True, False, d) else ''} reproducible_final_image_task; "
-
-CVE_PRODUCT = ""
diff --git a/poky/meta/classes/image_types.bbclass b/poky/meta/classes/image_types.bbclass
deleted file mode 100644
index 0ffea91..0000000
--- a/poky/meta/classes/image_types.bbclass
+++ /dev/null
@@ -1,349 +0,0 @@
-# The default alignment of the size of the rootfs is set to 1KiB. In case
-# you're using the SD card emulation of a QEMU system simulator you may
-# set this value to 2048 (2MiB alignment).
-IMAGE_ROOTFS_ALIGNMENT ?= "1"
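For instance, following the note above, a machine or local configuration targeting QEMU SD card emulation might override this (illustrative):

    IMAGE_ROOTFS_ALIGNMENT = "2048"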
-
-def imagetypes_getdepends(d):
-    def adddep(depstr, deps):
-        for d in (depstr or "").split():
-            # Add task dependency if not already present
-            if ":" not in d:
-                d += ":do_populate_sysroot"
-            deps.add(d)
-
-    # Take a type in the form of foo.bar.car and split it into the items
-    # needed for the image deps "foo", and the conversion deps ["bar", "car"]
-    def split_types(typestring):
-        types = typestring.split(".")
-        return types[0], types[1:]
-
-    fstypes = set((d.getVar('IMAGE_FSTYPES') or "").split())
-    fstypes |= set((d.getVar('IMAGE_FSTYPES_DEBUGFS') or "").split())
-
-    deprecated = set()
-    deps = set()
-    for typestring in fstypes:
-        basetype, resttypes = split_types(typestring)
-
-        var = "IMAGE_DEPENDS_%s" % basetype
-        if d.getVar(var) is not None:
-            deprecated.add(var)
-
-        for typedepends in (d.getVar("IMAGE_TYPEDEP:%s" % basetype) or "").split():
-            base, rest = split_types(typedepends)
-            resttypes += rest
-
-            var = "IMAGE_DEPENDS_%s" % base
-            if d.getVar(var) is not None:
-                deprecated.add(var)
-
-        for ctype in resttypes:
-            adddep(d.getVar("CONVERSION_DEPENDS_%s" % ctype), deps)
-            adddep(d.getVar("COMPRESS_DEPENDS_%s" % ctype), deps)
-
-    if deprecated:
-        bb.fatal('Deprecated variable(s) found: "%s". '
-                 'Use do_image_<type>[depends] += "<recipe>:<task>" instead' % ', '.join(deprecated))
-
-    # Sort the set so that ordering is consistent
-    return " ".join(sorted(deps))
-
-XZ_COMPRESSION_LEVEL ?= "-9"
-XZ_INTEGRITY_CHECK ?= "crc32"
-
-ZIP_COMPRESSION_LEVEL ?= "-9"
-
-ZSTD_COMPRESSION_LEVEL ?= "-3"
-
-JFFS2_SUM_EXTRA_ARGS ?= ""
-IMAGE_CMD:jffs2 = "mkfs.jffs2 --root=${IMAGE_ROOTFS} --faketime --output=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.jffs2 ${EXTRA_IMAGECMD}"
-
-IMAGE_CMD:cramfs = "mkfs.cramfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cramfs ${EXTRA_IMAGECMD}"
-
-oe_mkext234fs () {
-	fstype=$1
-	extra_imagecmd=""
-
-	if [ $# -gt 1 ]; then
-		shift
-		extra_imagecmd=$@
-	fi
-
-	# If generating an empty image the size of the sparse block should be large
-	# enough to allocate an ext4 filesystem using 4096 bytes per inode, this is
-	# about 60K, so dd needs a minimum count of 60, with bs=1024 (bytes per IO)
-	eval local COUNT=\"0\"
-	eval local MIN_COUNT=\"60\"
-	if [ $ROOTFS_SIZE -lt $MIN_COUNT ]; then
-		eval COUNT=\"$MIN_COUNT\"
-	fi
-	# Create a sparse image block
-	bbdebug 1 Executing "dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype seek=$ROOTFS_SIZE count=$COUNT bs=1024"
-	dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype seek=$ROOTFS_SIZE count=$COUNT bs=1024
-	bbdebug 1 "Actual Rootfs size:  `du -s ${IMAGE_ROOTFS}`"
-	bbdebug 1 "Actual Partition size: `stat -c '%s' ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype`"
-	bbdebug 1 Executing "mkfs.$fstype -F $extra_imagecmd ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype -d ${IMAGE_ROOTFS}"
-	mkfs.$fstype -F $extra_imagecmd ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype -d ${IMAGE_ROOTFS}
-	# Error codes 0-3 indicate successful operation of fsck (no errors or errors corrected)
-	fsck.$fstype -pvfD ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype || [ $? -le 3 ]
-}
-
-IMAGE_CMD:ext2 = "oe_mkext234fs ext2 ${EXTRA_IMAGECMD}"
-IMAGE_CMD:ext3 = "oe_mkext234fs ext3 ${EXTRA_IMAGECMD}"
-IMAGE_CMD:ext4 = "oe_mkext234fs ext4 ${EXTRA_IMAGECMD}"
-
-MIN_BTRFS_SIZE ?= "16384"
-IMAGE_CMD:btrfs () {
-	size=${ROOTFS_SIZE}
-	if [ ${size} -lt ${MIN_BTRFS_SIZE} ] ; then
-		size=${MIN_BTRFS_SIZE}
-		bbwarn "Rootfs size is too small for BTRFS. Filesystem will be extended to ${size}K"
-	fi
-	dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs seek=${size} count=0 bs=1024
-	mkfs.btrfs ${EXTRA_IMAGECMD} -r ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs
-}
-
-IMAGE_CMD:squashfs = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs ${EXTRA_IMAGECMD} -noappend"
-IMAGE_CMD:squashfs-xz = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz ${EXTRA_IMAGECMD} -noappend -comp xz"
-IMAGE_CMD:squashfs-lzo = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-lzo ${EXTRA_IMAGECMD} -noappend -comp lzo"
-IMAGE_CMD:squashfs-lz4 = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-lz4 ${EXTRA_IMAGECMD} -noappend -comp lz4"
-IMAGE_CMD:squashfs-zst = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-zst ${EXTRA_IMAGECMD} -noappend -comp zstd"
-
-IMAGE_CMD:erofs = "mkfs.erofs ${EXTRA_IMAGECMD} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.erofs ${IMAGE_ROOTFS}"
-IMAGE_CMD:erofs-lz4 = "mkfs.erofs -zlz4 ${EXTRA_IMAGECMD} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.erofs-lz4 ${IMAGE_ROOTFS}"
-IMAGE_CMD:erofs-lz4hc = "mkfs.erofs -zlz4hc ${EXTRA_IMAGECMD} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.erofs-lz4hc ${IMAGE_ROOTFS}"
-
-
-IMAGE_CMD_TAR ?= "tar"
-# ignore return code 1 "file changed as we read it" as other tasks (e.g. do_image_wic) may be hardlinking the rootfs
-IMAGE_CMD:tar = "${IMAGE_CMD_TAR} --sort=name --format=posix --numeric-owner -cf ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.tar -C ${IMAGE_ROOTFS} . || [ $? -eq 1 ]"
-
-do_image_cpio[cleandirs] += "${WORKDIR}/cpio_append"
-IMAGE_CMD:cpio () {
-	(cd ${IMAGE_ROOTFS} && find . | sort | cpio --reproducible -o -H newc >${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cpio)
-	# We only need the /init symlink if we're building the real
-	# image. The -dbg image doesn't need it! By being clever
-	# about this we also avoid 'touch' below failing, as it
-	# might be trying to touch /sbin/init on the host since both
-	# the normal and the -dbg image share the same WORKDIR
-	if [ "${IMAGE_BUILDING_DEBUGFS}" != "true" ]; then
-		if [ ! -L ${IMAGE_ROOTFS}/init ] && [ ! -e ${IMAGE_ROOTFS}/init ]; then
-			if [ -L ${IMAGE_ROOTFS}/sbin/init ] || [ -e ${IMAGE_ROOTFS}/sbin/init ]; then
-				ln -sf /sbin/init ${WORKDIR}/cpio_append/init
-			else
-				touch ${WORKDIR}/cpio_append/init
-			fi
-			(cd  ${WORKDIR}/cpio_append && echo ./init | cpio -oA -H newc -F ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cpio)
-		fi
-	fi
-}
-
-UBI_VOLNAME ?= "${MACHINE}-rootfs"
-UBI_VOLTYPE ?= "dynamic"
-UBI_IMGTYPE ?= "ubifs"
-
-write_ubi_config() {
-	if [ -z "$1" ]; then
-		local vname=""
-	else
-		local vname="_$1"
-	fi
-
-	cat <<EOF > ubinize${vname}-${IMAGE_NAME}.cfg
-[ubifs]
-mode=ubi
-image=${IMGDEPLOYDIR}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.${UBI_IMGTYPE}
-vol_id=0
-vol_type=${UBI_VOLTYPE}
-vol_name=${UBI_VOLNAME}
-vol_flags=autoresize
-EOF
-}
-
-multiubi_mkfs() {
-	local mkubifs_args="$1"
-	local ubinize_args="$2"
-
-        # Emit a clear error message for ubi and ubifs image creation.
-        if [ -z "$mkubifs_args" ] || [ -z "$ubinize_args" ]; then
-            bbfatal "MKUBIFS_ARGS and UBINIZE_ARGS have to be set, see http://www.linux-mtd.infradead.org/faq/ubifs.html for details"
-        fi
-
-	write_ubi_config "$3"
-
-	if [ -n "$vname" ]; then
-		mkfs.ubifs -r ${IMAGE_ROOTFS} -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs ${mkubifs_args}
-	fi
-	ubinize -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubi ${ubinize_args} ubinize${vname}-${IMAGE_NAME}.cfg
-
-	# Cleanup cfg file
-	mv ubinize${vname}-${IMAGE_NAME}.cfg ${IMGDEPLOYDIR}/
-
-	# Create own symlinks for 'named' volumes
-	if [ -n "$vname" ]; then
-		cd ${IMGDEPLOYDIR}
-		if [ -e ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs ]; then
-			ln -sf ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs \
-			${IMAGE_LINK_NAME}${vname}.ubifs
-		fi
-		if [ -e ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubi ]; then
-			ln -sf ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubi \
-			${IMAGE_LINK_NAME}${vname}.ubi
-		fi
-		cd -
-	fi
-}
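Illustrative machine-configuration values for the two required variables (the geometry numbers are hypothetical and must match the target NAND; see the URL in the error message above):

    MKUBIFS_ARGS = "-m 2048 -e 126976 -c 2047"
    UBINIZE_ARGS = "-m 2048 -p 128KiB -s 512"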
-
-IMAGE_CMD:multiubi () {
-	# Split MKUBIFS_ARGS_<name> and UBINIZE_ARGS_<name>
-	for name in ${MULTIUBI_BUILD}; do
-		eval local mkubifs_args=\"\$MKUBIFS_ARGS_${name}\"
-		eval local ubinize_args=\"\$UBINIZE_ARGS_${name}\"
-
-		multiubi_mkfs "${mkubifs_args}" "${ubinize_args}" "${name}"
-	done
-}
-
-IMAGE_CMD:ubi () {
-	multiubi_mkfs "${MKUBIFS_ARGS}" "${UBINIZE_ARGS}"
-}
-IMAGE_TYPEDEP:ubi = "${UBI_IMGTYPE}"
-
-IMAGE_CMD:ubifs = "mkfs.ubifs -r ${IMAGE_ROOTFS} -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.ubifs ${MKUBIFS_ARGS}"
-
-MIN_F2FS_SIZE ?= "524288"
-IMAGE_CMD:f2fs () {
-        # We need to add additional smarts here for devices smaller than 1.5G
-        # Need to scale appropriately between 40M -> 1.5G as the "overprovision
-        # ratio" goes down as the device gets bigger (70% -> 4.5%), below about
-        # 500M the standard IMAGE_OVERHEAD_FACTOR does not work, so add additional
-        # space here when under 500M
-	size=${ROOTFS_SIZE}
-	if [ ${size} -lt ${MIN_F2FS_SIZE} ] ; then
-		size=${MIN_F2FS_SIZE}
-		bbwarn "Rootfs size is too small for F2FS. Filesystem will be extended to ${size}K"
-	fi
-	dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.f2fs seek=${size} count=0 bs=1024
-	mkfs.f2fs ${EXTRA_IMAGECMD} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.f2fs
-	sload.f2fs -f ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.f2fs
-}
-
-EXTRA_IMAGECMD = ""
-
-inherit siteinfo kernel-arch image-artifact-names
-
-JFFS2_ENDIANNESS ?= "${@oe.utils.conditional('SITEINFO_ENDIANNESS', 'le', '-l', '-b', d)}"
-JFFS2_ERASEBLOCK ?= "0x40000"
-EXTRA_IMAGECMD:jffs2 ?= "--pad ${JFFS2_ENDIANNESS} --eraseblock=${JFFS2_ERASEBLOCK} --no-cleanmarkers"
-
-# Change these if you want default mkfs behavior (i.e. create minimal inode number)
-EXTRA_IMAGECMD:ext2 ?= "-i 4096"
-EXTRA_IMAGECMD:ext3 ?= "-i 4096"
-EXTRA_IMAGECMD:ext4 ?= "-i 4096"
-EXTRA_IMAGECMD:btrfs ?= "-n 4096 --shrink"
-EXTRA_IMAGECMD:f2fs ?= ""
-
-do_image_cpio[depends] += "cpio-native:do_populate_sysroot"
-do_image_jffs2[depends] += "mtd-utils-native:do_populate_sysroot"
-do_image_cramfs[depends] += "util-linux-native:do_populate_sysroot"
-do_image_ext2[depends] += "e2fsprogs-native:do_populate_sysroot"
-do_image_ext3[depends] += "e2fsprogs-native:do_populate_sysroot"
-do_image_ext4[depends] += "e2fsprogs-native:do_populate_sysroot"
-do_image_btrfs[depends] += "btrfs-tools-native:do_populate_sysroot"
-do_image_squashfs[depends] += "squashfs-tools-native:do_populate_sysroot"
-do_image_squashfs_xz[depends] += "squashfs-tools-native:do_populate_sysroot"
-do_image_squashfs_lzo[depends] += "squashfs-tools-native:do_populate_sysroot"
-do_image_squashfs_lz4[depends] += "squashfs-tools-native:do_populate_sysroot"
-do_image_squashfs_zst[depends] += "squashfs-tools-native:do_populate_sysroot"
-do_image_ubi[depends] += "mtd-utils-native:do_populate_sysroot"
-do_image_ubifs[depends] += "mtd-utils-native:do_populate_sysroot"
-do_image_multiubi[depends] += "mtd-utils-native:do_populate_sysroot"
-do_image_f2fs[depends] += "f2fs-tools-native:do_populate_sysroot"
-do_image_erofs[depends] += "erofs-utils-native:do_populate_sysroot"
-do_image_erofs_lz4[depends] += "erofs-utils-native:do_populate_sysroot"
-do_image_erofs_lz4hc[depends] += "erofs-utils-native:do_populate_sysroot"
-
-# This variable lists the values that are suitable for IMAGE_FSTYPES
-IMAGE_TYPES = " \
-    jffs2 jffs2.sum \
-    cramfs \
-    ext2 ext2.gz ext2.bz2 ext2.lzma \
-    ext3 ext3.gz \
-    ext4 ext4.gz \
-    btrfs \
-    squashfs squashfs-xz squashfs-lzo squashfs-lz4 squashfs-zst \
-    ubi ubifs multiubi \
-    tar tar.gz tar.bz2 tar.xz tar.lz4 tar.zst \
-    cpio cpio.gz cpio.xz cpio.lzma cpio.lz4 cpio.zst \
-    wic wic.gz wic.bz2 wic.lzma wic.zst \
-    container \
-    f2fs \
-    erofs erofs-lz4 erofs-lz4hc \
-"
-# These image types are x86 specific as they need syslinux
-IMAGE_TYPES:append:x86 = " hddimg iso"
-IMAGE_TYPES:append:x86-64 = " hddimg iso"
-
-# Compression is a special case of conversion. The old variable
-# names are still supported for backward-compatibility. When defining
-# new compression or conversion commands, use CONVERSIONTYPES and
-# CONVERSION_CMD/DEPENDS.
-COMPRESSIONTYPES ?= ""
-
-CONVERSIONTYPES = "gz bz2 lzma xz lz4 lzo zip zst sum md5sum sha1sum sha224sum sha256sum sha384sum sha512sum bmap u-boot vmdk vhd vhdx vdi qcow2 base64 gzsync zsync ${COMPRESSIONTYPES}"
-CONVERSION_CMD:lzma = "lzma -k -f -7 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
-CONVERSION_CMD:gz = "gzip -f -9 -n -c --rsyncable ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.gz"
-CONVERSION_CMD:bz2 = "pbzip2 -f -k ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
-CONVERSION_CMD:xz = "xz -f -k -c ${XZ_COMPRESSION_LEVEL} ${XZ_DEFAULTS} --check=${XZ_INTEGRITY_CHECK} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.xz"
-CONVERSION_CMD:lz4 = "lz4 -9 -z -l ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.lz4"
-CONVERSION_CMD:lzo = "lzop -9 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
-CONVERSION_CMD:zip = "zip ${ZIP_COMPRESSION_LEVEL} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.zip ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
-CONVERSION_CMD:zst = "zstd -f -k -T0 -c ${ZSTD_COMPRESSION_LEVEL} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.zst"
-CONVERSION_CMD:sum = "sumtool -i ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} -o ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sum ${JFFS2_SUM_EXTRA_ARGS}"
-CONVERSION_CMD:md5sum = "md5sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.md5sum"
-CONVERSION_CMD:sha1sum = "sha1sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha1sum"
-CONVERSION_CMD:sha224sum = "sha224sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha224sum"
-CONVERSION_CMD:sha256sum = "sha256sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha256sum"
-CONVERSION_CMD:sha384sum = "sha384sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha384sum"
-CONVERSION_CMD:sha512sum = "sha512sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha512sum"
-CONVERSION_CMD:bmap = "bmaptool create ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} -o ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.bmap"
-CONVERSION_CMD:u-boot = "mkimage -A ${UBOOT_ARCH} -O linux -T ramdisk -C none -n ${IMAGE_NAME} -d ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.u-boot"
-CONVERSION_CMD:vmdk = "qemu-img convert -O vmdk ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vmdk"
-CONVERSION_CMD:vhdx = "qemu-img convert -O vhdx -o subformat=dynamic ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vhdx"
-CONVERSION_CMD:vhd = "qemu-img convert -O vpc -o subformat=fixed ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vhd"
-CONVERSION_CMD:vdi = "qemu-img convert -O vdi ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vdi"
-CONVERSION_CMD:qcow2 = "qemu-img convert -O qcow2 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.qcow2"
-CONVERSION_CMD:base64 = "base64 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.base64"
-CONVERSION_CMD:zsync = "zsyncmake_curl ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
-CONVERSION_CMD:gzsync = "zsyncmake_curl -z ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
-CONVERSION_DEPENDS_lzma = "xz-native"
-CONVERSION_DEPENDS_gz = "pigz-native"
-CONVERSION_DEPENDS_bz2 = "pbzip2-native"
-CONVERSION_DEPENDS_xz = "xz-native"
-CONVERSION_DEPENDS_lz4 = "lz4-native"
-CONVERSION_DEPENDS_lzo = "lzop-native"
-CONVERSION_DEPENDS_zip = "zip-native"
-CONVERSION_DEPENDS_zst = "zstd-native"
-CONVERSION_DEPENDS_sum = "mtd-utils-native"
-CONVERSION_DEPENDS_bmap = "bmap-tools-native"
-CONVERSION_DEPENDS_u-boot = "u-boot-tools-native"
-CONVERSION_DEPENDS_vmdk = "qemu-system-native"
-CONVERSION_DEPENDS_vdi = "qemu-system-native"
-CONVERSION_DEPENDS_qcow2 = "qemu-system-native"
-CONVERSION_DEPENDS_base64 = "coreutils-native"
-CONVERSION_DEPENDS_vhdx = "qemu-system-native"
-CONVERSION_DEPENDS_vhd = "qemu-system-native"
-CONVERSION_DEPENDS_zsync = "zsync-curl-native"
-CONVERSION_DEPENDS_gzsync = "zsync-curl-native"
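Following the pattern above, a layer could wire in an additional conversion type roughly like this (the "lrz" type, its command, and the lrzip-native recipe are hypothetical examples, not provided by this class):

    CONVERSIONTYPES:append = " lrz"
    CONVERSION_CMD:lrz = "lrzip ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
    CONVERSION_DEPENDS_lrz = "lrzip-native"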
-
-RUNNABLE_IMAGE_TYPES ?= "ext2 ext3 ext4"
-RUNNABLE_MACHINE_PATTERNS ?= "qemu"
-
-DEPLOYABLE_IMAGE_TYPES ?= "hddimg iso"
-
-# The IMAGE_TYPES_MASKED variable is used to mask out, from IMAGE_FSTYPES, image
-# types that will not be built at do_rootfs time: vmdk, vhd, vhdx, vdi, qcow2, hddimg, iso, etc.
-IMAGE_TYPES_MASKED ?= ""
-
-# bmap requires python3 to be in the PATH
-EXTRANATIVEPATH += "${@'python3-native' if '.bmap' in d.getVar('IMAGE_FSTYPES') else ''}"
diff --git a/poky/meta/classes/image_types_wic.bbclass b/poky/meta/classes/image_types_wic.bbclass
deleted file mode 100644
index 8497916..0000000
--- a/poky/meta/classes/image_types_wic.bbclass
+++ /dev/null
@@ -1,184 +0,0 @@
-# The WICVARS variable is used to define the list of bitbake variables used in the wic code;
-# variables from this list are written to the <image>.env file
-WICVARS ?= "\
-	APPEND \
-	ASSUME_PROVIDED \
-	BBLAYERS \
-	DEPLOY_DIR_IMAGE \
-	FAKEROOTCMD \
-	HOSTTOOLS_DIR \
-	IMAGE_BASENAME \
-	IMAGE_BOOT_FILES \
-	IMAGE_EFI_BOOT_FILES \
-	IMAGE_LINK_NAME \
-	IMAGE_ROOTFS \
-	IMGDEPLOYDIR \
-	INITRAMFS_FSTYPES \
-	INITRAMFS_IMAGE \
-	INITRAMFS_IMAGE_BUNDLE \
-	INITRAMFS_LINK_NAME \
-	INITRD \
-	INITRD_LIVE \
-	ISODIR \
-	KERNEL_IMAGETYPE \
-	MACHINE \
-	PSEUDO_IGNORE_PATHS \
-	RECIPE_SYSROOT_NATIVE \
-	ROOTFS_SIZE \
-	STAGING_DATADIR \
-	STAGING_DIR \
-	STAGING_DIR_HOST \
-	STAGING_LIBDIR \
-	TARGET_SYS \
-"
-
-inherit ${@bb.utils.contains('INITRAMFS_IMAGE_BUNDLE', '1', 'kernel-artifact-names', '', d)}
-
-WKS_FILE ??= "${IMAGE_BASENAME}.${MACHINE}.wks"
-WKS_FILES ?= "${WKS_FILE} ${IMAGE_BASENAME}.wks"
-WKS_SEARCH_PATH ?= "${THISDIR}:${@':'.join('%s/wic' % p for p in '${BBPATH}'.split(':'))}:${@':'.join('%s/scripts/lib/wic/canned-wks' % l for l in '${BBPATH}:${COREBASE}'.split(':'))}"
-WKS_FULL_PATH = "${@wks_search(d.getVar('WKS_FILES').split(), d.getVar('WKS_SEARCH_PATH')) or ''}"
-
-def wks_search(files, search_path):
-    for f in files:
-        if os.path.isabs(f):
-            if os.path.exists(f):
-                return f
-        else:
-            searched = bb.utils.which(search_path, f)
-            if searched:
-                return searched
-
-WIC_CREATE_EXTRA_ARGS ?= ""
-
-IMAGE_CMD:wic () {
-	out="${IMGDEPLOYDIR}/${IMAGE_NAME}"
-	build_wic="${WORKDIR}/build-wic"
-	tmp_wic="${WORKDIR}/tmp-wic"
-	wks="${WKS_FULL_PATH}"
-	if [ -e "$tmp_wic" ]; then
-		# Ensure we don't have any junk leftover from a previously interrupted
-		# do_image_wic execution
-		rm -rf "$tmp_wic"
-	fi
-	if [ -z "$wks" ]; then
-		bbfatal "No kickstart files from WKS_FILES were found: ${WKS_FILES}. Please set WKS_FILE or WKS_FILES appropriately."
-	fi
-	BUILDDIR="${TOPDIR}" PSEUDO_UNLOAD=1 wic create "$wks" --vars "${STAGING_DIR}/${MACHINE}/imgdata/" -e "${IMAGE_BASENAME}" -o "$build_wic/" -w "$tmp_wic" ${WIC_CREATE_EXTRA_ARGS}
-	mv "$build_wic/$(basename "${wks%.wks}")"*.direct "$out${IMAGE_NAME_SUFFIX}.wic"
-}
-IMAGE_CMD:wic[vardepsexclude] = "WKS_FULL_PATH WKS_FILES TOPDIR"
-do_image_wic[cleandirs] = "${WORKDIR}/build-wic"
-
-PSEUDO_IGNORE_PATHS .= ",${WORKDIR}/build-wic"
-
-# Rebuild when the wks file or vars in WICVARS change
-USING_WIC = "${@bb.utils.contains_any('IMAGE_FSTYPES', 'wic ' + ' '.join('wic.%s' % c for c in '${CONVERSIONTYPES}'.split()), '1', '', d)}"
-WKS_FILE_CHECKSUM = "${@'${WKS_FULL_PATH}:%s' % os.path.exists('${WKS_FULL_PATH}') if '${USING_WIC}' else ''}"
-do_image_wic[file-checksums] += "${WKS_FILE_CHECKSUM}"
-do_image_wic[depends] += "${@' '.join('%s-native:do_populate_sysroot' % r for r in ('parted', 'gptfdisk', 'dosfstools', 'mtools'))}"
-
-# We ensure all artifacts are deployed (e.g. virtual/bootloader)
-do_image_wic[recrdeptask] += "do_deploy"
-do_image_wic[deptask] += "do_image_complete"
-
-WKS_FILE_DEPENDS_DEFAULT = '${@bb.utils.contains_any("BUILD_ARCH", [ 'x86_64', 'i686' ], "syslinux-native", "",d)}'
-WKS_FILE_DEPENDS_DEFAULT += "bmap-tools-native cdrtools-native btrfs-tools-native squashfs-tools-native e2fsprogs-native erofs-utils-native"
-# Unified kernel images need objcopy
-WKS_FILE_DEPENDS_DEFAULT += "virtual/${MLPREFIX}${TARGET_PREFIX}binutils"
-WKS_FILE_DEPENDS_BOOTLOADERS = ""
-WKS_FILE_DEPENDS_BOOTLOADERS:x86 = "syslinux grub-efi systemd-boot os-release"
-WKS_FILE_DEPENDS_BOOTLOADERS:x86-64 = "syslinux grub-efi systemd-boot os-release"
-WKS_FILE_DEPENDS_BOOTLOADERS:x86-x32 = "syslinux grub-efi"
-
-WKS_FILE_DEPENDS ??= "${WKS_FILE_DEPENDS_DEFAULT} ${WKS_FILE_DEPENDS_BOOTLOADERS}"
-
-DEPENDS += "${@ '${WKS_FILE_DEPENDS}' if d.getVar('USING_WIC') else '' }"
-
-python do_write_wks_template () {
-    """Write out expanded template contents to WKS_FULL_PATH."""
-    import re
-
-    template_body = d.getVar('_WKS_TEMPLATE')
-
-    # Remove any remnant variable references left behind by the expansion
-    # due to undefined variables
-    expand_var_regexp = re.compile(r"\${[^{}@\n\t :]+}")
-    while True:
-        new_body = re.sub(expand_var_regexp, '', template_body)
-        if new_body == template_body:
-            break
-        else:
-            template_body = new_body
-
-    wks_file = d.getVar('WKS_FULL_PATH')
-    with open(wks_file, 'w') as f:
-        f.write(template_body)
-    f.close()
-    # Copy the finalized wks file to the deploy directory for later use
-    depdir = d.getVar('IMGDEPLOYDIR')
-    basename = d.getVar('IMAGE_BASENAME')
-    bb.utils.copyfile(wks_file, "%s/%s" % (depdir, basename + '-' + os.path.basename(wks_file)))
-}
-
-do_flush_pseudodb() {
-	${FAKEROOTENV} ${FAKEROOTCMD} -S
-}
-
-python () {
-    if d.getVar('USING_WIC'):
-        wks_file_u = d.getVar('WKS_FULL_PATH', False)
-        wks_file = d.expand(wks_file_u)
-        base, ext = os.path.splitext(wks_file)
-        if ext == '.in' and os.path.exists(wks_file):
-            wks_out_file = os.path.join(d.getVar('WORKDIR'), os.path.basename(base))
-            d.setVar('WKS_FULL_PATH', wks_out_file)
-            d.setVar('WKS_TEMPLATE_PATH', wks_file_u)
-            d.setVar('WKS_FILE_CHECKSUM', '${WKS_TEMPLATE_PATH}:True')
-
-            # We need to re-parse each time the file changes, and bitbake
-            # needs to be told about that explicitly.
-            bb.parse.mark_dependency(d, wks_file)
-
-            try:
-                with open(wks_file, 'r') as f:
-                    body = f.read()
-            except (IOError, OSError) as exc:
-                pass
-            else:
-                # Previously, I used expandWithRefs to get the dependency list
-                # and add it to WICVARS, but there's no point re-parsing the
-                # file in process_wks_template as well, so just put it in
-                # a variable and let the metadata deal with the deps.
-                d.setVar('_WKS_TEMPLATE', body)
-                bb.build.addtask('do_write_wks_template', 'do_image_wic', 'do_image', d)
-        bb.build.addtask('do_image_wic', 'do_image_complete', None, d)
-}
-
-#
-# Write environment variables used by wic
-# to tmp/sysroots/<machine>/imgdata/<image>.env
-#
-python do_rootfs_wicenv () {
-    wicvars = d.getVar('WICVARS')
-    if not wicvars:
-        return
-
-    stdir = d.getVar('STAGING_DIR')
-    outdir = os.path.join(stdir, d.getVar('MACHINE'), 'imgdata')
-    bb.utils.mkdirhier(outdir)
-    basename = d.getVar('IMAGE_BASENAME')
-    with open(os.path.join(outdir, basename) + '.env', 'w') as envf:
-        for var in wicvars.split():
-            value = d.getVar(var)
-            if value:
-                envf.write('%s="%s"\n' % (var, value.strip()))
-    envf.close()
-    # Copy .env file to deploy directory for later use with stand alone wic
-    depdir = d.getVar('IMGDEPLOYDIR')
-    bb.utils.copyfile(os.path.join(outdir, basename) + '.env', os.path.join(depdir, basename) + '.env')
-}
-addtask do_flush_pseudodb after do_rootfs before do_image do_image_qa
-addtask do_rootfs_wicenv after do_image before do_image_wic
-do_rootfs_wicenv[vardeps] += "${WICVARS}"
-do_rootfs_wicenv[prefuncs] = 'set_image_size'
diff --git a/poky/meta/classes/insane.bbclass b/poky/meta/classes/insane.bbclass
deleted file mode 100644
index c8b434b..0000000
--- a/poky/meta/classes/insane.bbclass
+++ /dev/null
@@ -1,1447 +0,0 @@
-# BB Class inspired by ebuild.sh
-#
-# This class will test files after installation for certain
-# security issues and other kinds of issues.
-#
-# Checks we do:
-#  -Check the ownership and permissions
-#  -Check the RUNTIME path for the $TMPDIR
-#  -Check if .la files wrongly point to workdir
-#  -Check if .pc files wrongly point to workdir
-#  -Check if packages contain .debug directories or .so files
-#   where they should be in -dev or -dbg
-#  -Check if config.log contains traces to broken autoconf tests
-#  -Check invalid characters (non-utf8) on some package metadata
-#  -Ensure that binaries in base_[bindir|sbindir|libdir] do not link
-#   into exec_prefix
-#  -Check that scripts in base_[bindir|sbindir|libdir] do not reference
-#   files under exec_prefix
-#  -Check if the package name is upper case
-
-# Elect whether a given type of error is a warning or an error; these may
-# have been set by other files.
-WARN_QA ?= " libdir xorg-driver-abi buildpaths \
-            textrel incompatible-license files-invalid \
-            infodir build-deps src-uri-bad symlink-to-sysroot multilib \
-            invalid-packageconfig host-user-contaminated uppercase-pn patch-fuzz \
-            mime mime-xdg unlisted-pkg-lics unhandled-features-check \
-            missing-update-alternatives native-last missing-ptest \
-            license-exists license-no-generic license-syntax license-format \
-            license-incompatible license-file-missing obsolete-license \
-            "
-ERROR_QA ?= "dev-so debug-deps dev-deps debug-files arch pkgconfig la \
-            perms dep-cmp pkgvarcheck perm-config perm-line perm-link \
-            split-strip packages-list pkgv-undefined var-undefined \
-            version-going-backwards expanded-d invalid-chars \
-            license-checksum dev-elf file-rdeps configure-unsafe \
-            configure-gettext perllocalpod shebang-size \
-            already-stripped installed-vs-shipped ldflags compile-host-path \
-            install-host-path pn-overrides unknown-configure-option \
-            useless-rpaths rpaths staticdev empty-dirs \
-            "
-# Add usrmerge QA check based on distro feature
-ERROR_QA:append = "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', ' usrmerge', '', d)}"
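Since these defaults use ?= and may be adjusted elsewhere, a distro or local configuration can reclassify individual checks; an illustrative sketch using the existing patch-fuzz check:

    WARN_QA:remove = "patch-fuzz"
    ERROR_QA:append = " patch-fuzz"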
-
-FAKEROOT_QA = "host-user-contaminated"
-FAKEROOT_QA[doc] = "QA tests which need to run under fakeroot. If any \
-enabled tests are listed here, the do_package_qa task will run under fakeroot."
-
-ALL_QA = "${WARN_QA} ${ERROR_QA}"
-
-UNKNOWN_CONFIGURE_OPT_IGNORE ?= "--enable-nls --disable-nls --disable-silent-rules --disable-dependency-tracking --with-libtool-sysroot --disable-static"
-
-# This is a list of directories that are expected to be empty.
-QA_EMPTY_DIRS ?= " \
-    /dev/pts \
-    /media \
-    /proc \
-    /run \
-    /tmp \
-    ${localstatedir}/run \
-    ${localstatedir}/volatile \
-"
-# It is possible to specify why a directory is expected to be empty by defining
-# QA_EMPTY_DIRS_RECOMMENDATION:<path>, which will then be included in the error
-# message if the directory is not empty. If it is not specified for a directory,
-# then "but it is expected to be empty" will be used.
-
-def package_qa_clean_path(path, d, pkg=None):
-    """
-    Remove redundant paths from the path for display.  If pkg isn't set then
-    TMPDIR is stripped, otherwise PKGDEST/pkg is stripped.
-    """
-    if pkg:
-        path = path.replace(os.path.join(d.getVar("PKGDEST"), pkg), "/")
-    return path.replace(d.getVar("TMPDIR"), "/").replace("//", "/")
-
-QAPATHTEST[shebang-size] = "package_qa_check_shebang_size"
-def package_qa_check_shebang_size(path, name, d, elf, messages):
-    import stat
-    if os.path.islink(path) or stat.S_ISFIFO(os.stat(path).st_mode) or elf:
-        return
-
-    try:
-        with open(path, 'rb') as f:
-            stanza = f.readline(130)
-    except IOError:
-        return
-
-    if stanza.startswith(b'#!'):
-        # Shebang found
-        try:
-            stanza = stanza.decode("utf-8")
-        except UnicodeDecodeError:
-            #If it is not a text file, it is not a script
-            return
-
-        if len(stanza) > 129:
-            oe.qa.add_message(messages, "shebang-size", "%s: %s maximum shebang size exceeded, the maximum size is 128." % (name, package_qa_clean_path(path, d)))
-            return
-
-QAPATHTEST[libexec] = "package_qa_check_libexec"
-def package_qa_check_libexec(path,name, d, elf, messages):
-
-    # Skip the case where the default is explicitly /usr/libexec
-    libexec = d.getVar('libexecdir')
-    if libexec == "/usr/libexec":
-        return True
-
-    if 'libexec' in path.split(os.path.sep):
-        oe.qa.add_message(messages, "libexec", "%s: %s is using libexec please relocate to %s" % (name, package_qa_clean_path(path, d), libexec))
-        return False
-
-    return True
-
-QAPATHTEST[rpaths] = "package_qa_check_rpath"
-def package_qa_check_rpath(file,name, d, elf, messages):
-    """
-    Check for dangerous RPATHs
-    """
-    if not elf:
-        return
-
-    if os.path.islink(file):
-        return
-
-    bad_dirs = [d.getVar('BASE_WORKDIR'), d.getVar('STAGING_DIR_TARGET')]
-
-    phdrs = elf.run_objdump("-p", d)
-
-    import re
-    rpath_re = re.compile(r"\s+RPATH\s+(.*)")
-    for line in phdrs.split("\n"):
-        m = rpath_re.match(line)
-        if m:
-            rpath = m.group(1)
-            for dir in bad_dirs:
-                if dir in rpath:
-                    oe.qa.add_message(messages, "rpaths", "package %s contains bad RPATH %s in file %s" % (name, rpath, file))
-
-QAPATHTEST[useless-rpaths] = "package_qa_check_useless_rpaths"
-def package_qa_check_useless_rpaths(file, name, d, elf, messages):
-    """
-    Check for RPATHs that are useless but not dangerous
-    """
-    def rpath_eq(a, b):
-        return os.path.normpath(a) == os.path.normpath(b)
-
-    if not elf:
-        return
-
-    if os.path.islink(file):
-        return
-
-    libdir = d.getVar("libdir")
-    base_libdir = d.getVar("base_libdir")
-
-    phdrs = elf.run_objdump("-p", d)
-
-    import re
-    rpath_re = re.compile(r"\s+RPATH\s+(.*)")
-    for line in phdrs.split("\n"):
-        m = rpath_re.match(line)
-        if m:
-            rpath = m.group(1)
-            if rpath_eq(rpath, libdir) or rpath_eq(rpath, base_libdir):
-                # The dynamic linker searches both these places anyway.  There is no point in
-                # looking there again.
-                oe.qa.add_message(messages, "useless-rpaths", "%s: %s contains probably-redundant RPATH %s" % (name, package_qa_clean_path(file, d, name), rpath))
-
-QAPATHTEST[dev-so] = "package_qa_check_dev"
-def package_qa_check_dev(path, name, d, elf, messages):
-    """
-    Check for ".so" library symlinks in non-dev packages
-    """
-
-    if not name.endswith("-dev") and not name.endswith("-dbg") and not name.endswith("-ptest") and not name.startswith("nativesdk-") and path.endswith(".so") and os.path.islink(path):
-        oe.qa.add_message(messages, "dev-so", "non -dev/-dbg/nativesdk- package %s contains symlink .so '%s'" % \
-                 (name, package_qa_clean_path(path, d, name)))
-
-QAPATHTEST[dev-elf] = "package_qa_check_dev_elf"
-def package_qa_check_dev_elf(path, name, d, elf, messages):
-    """
-    Check that -dev doesn't contain real shared libraries.  The test has to
-    check that the file is not a link and is an ELF object as some recipes
-    install link-time .so files that are linker scripts.
-    """
-    if name.endswith("-dev") and path.endswith(".so") and not os.path.islink(path) and elf:
-        oe.qa.add_message(messages, "dev-elf", "-dev package %s contains non-symlink .so '%s'" % \
-                 (name, package_qa_clean_path(path, d, name)))
-
-QAPATHTEST[staticdev] = "package_qa_check_staticdev"
-def package_qa_check_staticdev(path, name, d, elf, messages):
-    """
-    Check for ".a" library in non-staticdev packages
-    There are a number of exceptions to this rule: -pic packages can contain
-    static libraries, the _nonshared.a files belong with their -dev packages, and
-    libgcc.a and libgcov.a will be skipped in their packages
-    """
-
-    if not name.endswith("-pic") and not name.endswith("-staticdev") and not name.endswith("-ptest") and path.endswith(".a") and not path.endswith("_nonshared.a") and not '/usr/lib/debug-static/' in path and not '/.debug-static/' in path:
-        oe.qa.add_message(messages, "staticdev", "non -staticdev package contains static .a library: %s path '%s'" % \
-                 (name, package_qa_clean_path(path,d, name)))
-
-QAPATHTEST[mime] = "package_qa_check_mime"
-def package_qa_check_mime(path, name, d, elf, messages):
-    """
-    Check if package installs mime types to /usr/share/mime/packages
-    while not inheriting mime.bbclass
-    """
-
-    if d.getVar("datadir") + "/mime/packages" in path and path.endswith('.xml') and not bb.data.inherits_class("mime", d):
-        oe.qa.add_message(messages, "mime", "package contains mime types but does not inherit mime: %s path '%s'" % \
-                 (name, package_qa_clean_path(path,d)))
-
-QAPATHTEST[mime-xdg] = "package_qa_check_mime_xdg"
-def package_qa_check_mime_xdg(path, name, d, elf, messages):
-    """
-    Check if package installs a desktop file containing MimeType and requires
-    mime-xdg.bbclass to create /usr/share/applications/mimeinfo.cache
-    """
-
-    if d.getVar("datadir") + "/applications" in path and path.endswith('.desktop') and not bb.data.inherits_class("mime-xdg", d):
-        mime_type_found = False
-        try:
-            with open(path, 'r') as f:
-                for line in f.read().split('\n'):
-                    if 'MimeType' in line:
-                        mime_type_found = True
-                        break
-        except:
-            # At least libreoffice installs symlinks with absolute paths that are dangling here.
-            # We could implement some magic, but for a few (one) recipes it is not worth the effort, so just warn:
-            wstr = "%s cannot open %s - is it a symlink with absolute path?\n" % (name, package_qa_clean_path(path,d))
-            wstr += "Please check if (linked) file contains key 'MimeType'.\n"
-            pkgname = name
-            if name == d.getVar('PN'):
-                pkgname = '${PN}'
-            wstr += "If yes: add \'inhert mime-xdg\' and \'MIME_XDG_PACKAGES += \"%s\"\' / if no add \'INSANE_SKIP:%s += \"mime-xdg\"\' to recipe." % (pkgname, pkgname)
-            oe.qa.add_message(messages, "mime-xdg", wstr)
-        if mime_type_found:
-            oe.qa.add_message(messages, "mime-xdg", "package contains desktop file with key 'MimeType' but does not inhert mime-xdg: %s path '%s'" % \
-                    (name, package_qa_clean_path(path,d)))
-
-def package_qa_check_libdir(d):
-    """
-    Check for wrong library installation paths. For instance, catch
-    recipes installing /lib/bar.so when ${base_libdir}="lib32" or
-    installing in /usr/lib64 when ${libdir}="/usr/lib"
-    """
-    import re
-
-    pkgdest = d.getVar('PKGDEST')
-    base_libdir = d.getVar("base_libdir") + os.sep
-    libdir = d.getVar("libdir") + os.sep
-    libexecdir = d.getVar("libexecdir") + os.sep
-    exec_prefix = d.getVar("exec_prefix") + os.sep
-
-    messages = []
-
-    # The re's are purposely fuzzy, as there are some .so.x.y.z files
-    # that don't follow the standard naming convention. It checks later
-    # that they are actual ELF files
-    lib_re = re.compile(r"^/lib.+\.so(\..+)?$")
-    exec_re = re.compile(r"^%s.*/lib.+\.so(\..+)?$" % exec_prefix)
-
-    for root, dirs, files in os.walk(pkgdest):
-        if root == pkgdest:
-            # Skip subdirectories for any packages with libdir in INSANE_SKIP
-            skippackages = []
-            for package in dirs:
-                if 'libdir' in (d.getVar('INSANE_SKIP:' + package) or "").split():
-                    bb.note("Package %s skipping libdir QA test" % (package))
-                    skippackages.append(package)
-                elif d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-file-directory' and package.endswith("-dbg"):
-                    bb.note("Package %s skipping libdir QA test for PACKAGE_DEBUG_SPLIT_STYLE equals debug-file-directory" % (package))
-                    skippackages.append(package)
-            for package in skippackages:
-                dirs.remove(package)
-        for file in files:
-            full_path = os.path.join(root, file)
-            rel_path = os.path.relpath(full_path, pkgdest)
-            if os.sep in rel_path:
-                package, rel_path = rel_path.split(os.sep, 1)
-                rel_path = os.sep + rel_path
-                if lib_re.match(rel_path):
-                    if base_libdir not in rel_path:
-                        # make sure it's an actual ELF file
-                        elf = oe.qa.ELFFile(full_path)
-                        try:
-                            elf.open()
-                            messages.append("%s: found library in wrong location: %s" % (package, rel_path))
-                        except (oe.qa.NotELFFileError):
-                            pass
-                if exec_re.match(rel_path):
-                    if libdir not in rel_path and libexecdir not in rel_path:
-                        # make sure it's an actual ELF file
-                        elf = oe.qa.ELFFile(full_path)
-                        try:
-                            elf.open()
-                            messages.append("%s: found library in wrong location: %s" % (package, rel_path))
-                        except (oe.qa.NotELFFileError):
-                            pass
-
-    if messages:
-        oe.qa.handle_error("libdir", "\n".join(messages), d)
-
-QAPATHTEST[debug-files] = "package_qa_check_dbg"
-def package_qa_check_dbg(path, name, d, elf, messages):
-    """
-    Check for ".debug" files or directories outside of the dbg package
-    """
-
-    if not "-dbg" in name and not "-ptest" in name:
-        if '.debug' in path.split(os.path.sep):
-            oe.qa.add_message(messages, "debug-files", "non debug package contains .debug directory: %s path %s" % \
-                     (name, package_qa_clean_path(path,d)))
-
-QAPATHTEST[arch] = "package_qa_check_arch"
-def package_qa_check_arch(path,name,d, elf, messages):
-    """
-    Check if architectures are compatible
-    """
-    import re, oe.elf
-
-    if not elf:
-        return
-
-    target_os   = d.getVar('HOST_OS')
-    target_arch = d.getVar('HOST_ARCH')
-    provides = d.getVar('PROVIDES')
-    bpn = d.getVar('BPN')
-
-    if target_arch == "allarch":
-        pn = d.getVar('PN')
-        oe.qa.add_message(messages, "arch", pn + ": Recipe inherits the allarch class, but has packaged architecture-specific binaries")
-        return
-
-    # FIXME: Cross packages confuse this check, so just skip them
-    for s in ['cross', 'nativesdk', 'cross-canadian']:
-        if bb.data.inherits_class(s, d):
-            return
-
-    # avoid following links to /usr/bin (e.g. on udev builds)
-    # we will check the files pointed to anyway...
-    if os.path.islink(path):
-        return
-
-    # If this throws an exception, fix the dict above
-    (machine, osabi, abiversion, littleendian, bits) \
-        = oe.elf.machine_dict(d)[target_os][target_arch]
-
-    # Check the architecture and endianness of the binary
-    is_32 = (("virtual/kernel" in provides) or bb.data.inherits_class("module", d)) and \
-            (target_os == "linux-gnux32" or target_os == "linux-muslx32" or \
-            target_os == "linux-gnu_ilp32" or re.match(r'mips64.*32', d.getVar('DEFAULTTUNE')))
-    is_bpf = (oe.qa.elf_machine_to_string(elf.machine()) == "BPF")
-    if not ((machine == elf.machine()) or is_32 or is_bpf):
-        oe.qa.add_message(messages, "arch", "Architecture did not match (%s, expected %s) in %s" % \
-                 (oe.qa.elf_machine_to_string(elf.machine()), oe.qa.elf_machine_to_string(machine), package_qa_clean_path(path, d, name)))
-    elif not ((bits == elf.abiSize()) or is_32 or is_bpf):
-        oe.qa.add_message(messages, "arch", "Bit size did not match (%d, expected %d) in %s" % \
-                 (elf.abiSize(), bits, package_qa_clean_path(path, d, name)))
-    elif not ((littleendian == elf.isLittleEndian()) or is_bpf):
-        oe.qa.add_message(messages, "arch", "Endiannes did not match (%d, expected %d) in %s" % \
-                 (elf.isLittleEndian(), littleendian, package_qa_clean_path(path,d, name)))
-
-QAPATHTEST[desktop] = "package_qa_check_desktop"
-def package_qa_check_desktop(path, name, d, elf, messages):
-    """
-    Run all desktop files through desktop-file-validate.
-    """
-    if path.endswith(".desktop"):
-        desktop_file_validate = os.path.join(d.getVar('STAGING_BINDIR_NATIVE'),'desktop-file-validate')
-        output = os.popen("%s %s" % (desktop_file_validate, path))
-        # This only produces output on errors
-        for l in output:
-            oe.qa.add_message(messages, "desktop", "Desktop file issue: " + l.strip())
-
-QAPATHTEST[textrel] = "package_qa_textrel"
-def package_qa_textrel(path, name, d, elf, messages):
-    """
-    Check if the binary contains relocations in .text
-    """
-
-    if not elf:
-        return
-
-    if os.path.islink(path):
-        return
-
-    phdrs = elf.run_objdump("-p", d)
-    sane = True
-
-    import re
-    textrel_re = re.compile(r"\s+TEXTREL\s+")
-    for line in phdrs.split("\n"):
-        if textrel_re.match(line):
-            sane = False
-            break
-
-    if not sane:
-        path = package_qa_clean_path(path, d, name)
-        oe.qa.add_message(messages, "textrel", "%s: ELF binary %s has relocations in .text" % (name, path))
-
-QAPATHTEST[ldflags] = "package_qa_hash_style"
-def package_qa_hash_style(path, name, d, elf, messages):
-    """
-    Check if the binary has the right hash style...
-    """
-
-    if not elf:
-        return
-
-    if os.path.islink(path):
-        return
-
-    gnu_hash = "--hash-style=gnu" in d.getVar('LDFLAGS')
-    if not gnu_hash:
-        gnu_hash = "--hash-style=both" in d.getVar('LDFLAGS')
-    if not gnu_hash:
-        return
-
-    sane = False
-    has_syms = False
-
-    phdrs = elf.run_objdump("-p", d)
-
-    # If this binary has symbols, we expect it to have GNU_HASH too.
-    for line in phdrs.split("\n"):
-        if "SYMTAB" in line:
-            has_syms = True
-        if "GNU_HASH" in line or "MIPS_XHASH" in line:
-            sane = True
-        if ("[mips32]" in line or "[mips64]" in line) and d.getVar('TCLIBC') == "musl":
-            sane = True
-    if has_syms and not sane:
-        path = package_qa_clean_path(path, d, name)
-        oe.qa.add_message(messages, "ldflags", "File %s in package %s doesn't have GNU_HASH (didn't pass LDFLAGS?)" % (path, name))
-
-
-QAPATHTEST[buildpaths] = "package_qa_check_buildpaths"
-def package_qa_check_buildpaths(path, name, d, elf, messages):
-    """
-    Check for build paths inside target files and error if paths are not
-    explicitly ignored.
-    """
-    import stat
-
-    # Ignore symlinks/devs/fifos
-    mode = os.lstat(path).st_mode
-    if stat.S_ISLNK(mode) or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode) or stat.S_ISCHR(mode) or stat.S_ISSOCK(mode):
-        return
-
-    tmpdir = bytes(d.getVar('TMPDIR'), encoding="utf-8")
-    with open(path, 'rb') as f:
-        file_content = f.read()
-        if tmpdir in file_content:
-            trimmed = path.replace(os.path.join (d.getVar("PKGDEST"), name), "")
-            oe.qa.add_message(messages, "buildpaths", "File %s in package %s contains reference to TMPDIR" % (trimmed, name))
-
-
-QAPATHTEST[xorg-driver-abi] = "package_qa_check_xorg_driver_abi"
-def package_qa_check_xorg_driver_abi(path, name, d, elf, messages):
-    """
-    Check that all packages containing Xorg drivers have ABI dependencies
-    """
-
-    # Skip dev, dbg or nativesdk packages
-    if name.endswith("-dev") or name.endswith("-dbg") or name.startswith("nativesdk-"):
-        return
-
-    driverdir = d.expand("${libdir}/xorg/modules/drivers/")
-    if driverdir in path and path.endswith(".so"):
-        mlprefix = d.getVar('MLPREFIX') or ''
-        for rdep in bb.utils.explode_deps(d.getVar('RDEPENDS:' + name) or ""):
-            if rdep.startswith("%sxorg-abi-" % mlprefix):
-                return
-        oe.qa.add_message(messages, "xorg-driver-abi", "Package %s contains Xorg driver (%s) but no xorg-abi- dependencies" % (name, os.path.basename(path)))
-
-QAPATHTEST[infodir] = "package_qa_check_infodir"
-def package_qa_check_infodir(path, name, d, elf, messages):
-    """
-    Check that /usr/share/info/dir isn't shipped in a particular package
-    """
-    infodir = d.expand("${infodir}/dir")
-
-    if infodir in path:
-        oe.qa.add_message(messages, "infodir", "The /usr/share/info/dir file is not meant to be shipped in a particular package.")
-
-QAPATHTEST[symlink-to-sysroot] = "package_qa_check_symlink_to_sysroot"
-def package_qa_check_symlink_to_sysroot(path, name, d, elf, messages):
-    """
-    Check that the package doesn't contain any absolute symlinks to the sysroot.
-    """
-    if os.path.islink(path):
-        target = os.readlink(path)
-        if os.path.isabs(target):
-            tmpdir = d.getVar('TMPDIR')
-            if target.startswith(tmpdir):
-                trimmed = path.replace(os.path.join (d.getVar("PKGDEST"), name), "")
-                oe.qa.add_message(messages, "symlink-to-sysroot", "Symlink %s in %s points to TMPDIR" % (trimmed, name))
-
-# Check license variables
-do_populate_lic[postfuncs] += "populate_lic_qa_checksum"
-python populate_lic_qa_checksum() {
-    """
-    Check for changes in the license files.
-    """
-
-    lic_files = d.getVar('LIC_FILES_CHKSUM') or ''
-    lic = d.getVar('LICENSE')
-    pn = d.getVar('PN')
-
-    if lic == "CLOSED":
-        return
-
-    if not lic_files and d.getVar('SRC_URI'):
-        oe.qa.handle_error("license-checksum", pn + ": Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM)", d)
-
-    srcdir = d.getVar('S')
-    corebase_licensefile = d.getVar('COREBASE') + "/LICENSE"
-    for url in lic_files.split():
-        try:
-            (type, host, path, user, pswd, parm) = bb.fetch.decodeurl(url)
-        except bb.fetch.MalformedUrl:
-            oe.qa.handle_error("license-checksum", pn + ": LIC_FILES_CHKSUM contains an invalid URL: " + url, d)
-            continue
-        srclicfile = os.path.join(srcdir, path)
-        if not os.path.isfile(srclicfile):
-            oe.qa.handle_error("license-checksum", pn + ": LIC_FILES_CHKSUM points to an invalid file: " + srclicfile, d)
-            continue
-
-        if (srclicfile == corebase_licensefile):
-            bb.warn("${COREBASE}/LICENSE is not a valid license file, please use '${COMMON_LICENSE_DIR}/MIT' for a MIT License file in LIC_FILES_CHKSUM. This will become an error in the future")
-
-        recipemd5 = parm.get('md5', '')
-        beginline, endline = 0, 0
-        if 'beginline' in parm:
-            beginline = int(parm['beginline'])
-        if 'endline' in parm:
-            endline = int(parm['endline'])
-
-        if (not beginline) and (not endline):
-            md5chksum = bb.utils.md5_file(srclicfile)
-            with open(srclicfile, 'r', errors='replace') as f:
-                license = f.read().splitlines()
-        else:
-            with open(srclicfile, 'rb') as f:
-                import hashlib
-                lineno = 0
-                license = []
-                m = hashlib.new('MD5', usedforsecurity=False)
-                for line in f:
-                    lineno += 1
-                    if (lineno >= beginline):
-                        if ((lineno <= endline) or not endline):
-                            m.update(line)
-                            license.append(line.decode('utf-8', errors='replace').rstrip())
-                        else:
-                            break
-                md5chksum = m.hexdigest()
-        if recipemd5 == md5chksum:
-            bb.note (pn + ": md5 checksum matched for ", url)
-        else:
-            if recipemd5:
-                msg = pn + ": The LIC_FILES_CHKSUM does not match for " + url
-                msg = msg + "\n" + pn + ": The new md5 checksum is " + md5chksum
-                max_lines = int(d.getVar('QA_MAX_LICENSE_LINES') or 20)
-                if not license or license[-1] != '':
-                    # Ensure that our license text ends with a line break
-                    # (will be added with join() below).
-                    license.append('')
-                remove = len(license) - max_lines
-                if remove > 0:
-                    start = max_lines // 2
-                    end = start + remove - 1
-                    del license[start:end]
-                    license.insert(start, '...')
-                msg = msg + "\n" + pn + ": Here is the selected license text:" + \
-                        "\n" + \
-                        "{:v^70}".format(" beginline=%d " % beginline if beginline else "") + \
-                        "\n" + "\n".join(license) + \
-                        "{:^^70}".format(" endline=%d " % endline if endline else "")
-                if beginline:
-                    if endline:
-                        srcfiledesc = "%s (lines %d through to %d)" % (srclicfile, beginline, endline)
-                    else:
-                        srcfiledesc = "%s (beginning on line %d)" % (srclicfile, beginline)
-                elif endline:
-                    srcfiledesc = "%s (ending on line %d)" % (srclicfile, endline)
-                else:
-                    srcfiledesc = srclicfile
-                msg = msg + "\n" + pn + ": Check if the license information has changed in %s to verify that the LICENSE value \"%s\" remains valid" % (srcfiledesc, lic)
-
-            else:
-                msg = pn + ": LIC_FILES_CHKSUM is not specified for " +  url
-                msg = msg + "\n" + pn + ": The md5 checksum is " + md5chksum
-            oe.qa.handle_error("license-checksum", msg, d)
-
-    oe.qa.exit_if_errors(d)
-}
-
-def qa_check_staged(path,d):
-    """
-    Check staged la and pc files for common problems like references to the work
-    directory.
-
-    As this is run after every stage, we should be able to find the one
-    responsible for the errors easily, even if we look at every .pc and .la file.
-    """
-
-    tmpdir = d.getVar('TMPDIR')
-    workdir = os.path.join(tmpdir, "work")
-    recipesysroot = d.getVar("RECIPE_SYSROOT")
-
-    if bb.data.inherits_class("native", d) or bb.data.inherits_class("cross", d):
-        pkgconfigcheck = workdir
-    else:
-        pkgconfigcheck = tmpdir
-
-    skip = (d.getVar('INSANE_SKIP') or "").split()
-    skip_la = False
-    if 'la' in skip:
-        bb.note("Recipe %s skipping qa checking: la" % d.getVar('PN'))
-        skip_la = True
-
-    skip_pkgconfig = False
-    if 'pkgconfig' in skip:
-        bb.note("Recipe %s skipping qa checking: pkgconfig" % d.getVar('PN'))
-        skip_pkgconfig = True
-
-    skip_shebang_size = False
-    if 'shebang-size' in skip:
-        bb.note("Recipe %s skipping qa checkking: shebang-size" % d.getVar('PN'))
-        skip_shebang_size = True
-
-    # find all .la and .pc files
-    # read the content
-    # and check for stuff that looks wrong
-    for root, dirs, files in os.walk(path):
-        for file in files:
-            path = os.path.join(root,file)
-            if file.endswith(".la") and not skip_la:
-                with open(path) as f:
-                    file_content = f.read()
-                    file_content = file_content.replace(recipesysroot, "")
-                    if workdir in file_content:
-                        error_msg = "%s failed sanity test (workdir) in path %s" % (file,root)
-                        oe.qa.handle_error("la", error_msg, d)
-            elif file.endswith(".pc") and not skip_pkgconfig:
-                with open(path) as f:
-                    file_content = f.read()
-                    file_content = file_content.replace(recipesysroot, "")
-                    if pkgconfigcheck in file_content:
-                        error_msg = "%s failed sanity test (tmpdir) in path %s" % (file,root)
-                        oe.qa.handle_error("pkgconfig", error_msg, d)
-
-            if not skip_shebang_size:
-                errors = {}
-                package_qa_check_shebang_size(path, "", d, None, errors)
-                for e in errors:
-                    oe.qa.handle_error(e, errors[e], d)
-
-
-# Run all package-wide warnfuncs and errorfuncs
-def package_qa_package(warnfuncs, errorfuncs, package, d):
-    warnings = {}
-    errors = {}
-
-    for func in warnfuncs:
-        func(package, d, warnings)
-    for func in errorfuncs:
-        func(package, d, errors)
-
-    for w in warnings:
-        oe.qa.handle_error(w, warnings[w], d)
-    for e in errors:
-        oe.qa.handle_error(e, errors[e], d)
-
-    return len(errors) == 0
-
-# Run all recipe-wide warnfuncs and errorfuncs
-def package_qa_recipe(warnfuncs, errorfuncs, pn, d):
-    warnings = {}
-    errors = {}
-
-    for func in warnfuncs:
-        func(pn, d, warnings)
-    for func in errorfuncs:
-        func(pn, d, errors)
-
-    for w in warnings:
-        oe.qa.handle_error(w, warnings[w], d)
-    for e in errors:
-        oe.qa.handle_error(e, errors[e], d)
-
-    return len(errors) == 0
-
-def prepopulate_objdump_p(elf, d):
-    output = elf.run_objdump("-p", d)
-    return (elf.name, output)
-
-# Walk over all files in a directory and call func
-def package_qa_walk(warnfuncs, errorfuncs, package, d):
-    # If this throws an exception, fix the dict above
-    target_os   = d.getVar('HOST_OS')
-    target_arch = d.getVar('HOST_ARCH')
-
-    warnings = {}
-    errors = {}
-    elves = {}
-    for path in pkgfiles[package]:
-            elf = None
-            if os.path.isfile(path):
-                elf = oe.qa.ELFFile(path)
-                try:
-                    elf.open()
-                    elf.close()
-                except oe.qa.NotELFFileError:
-                    elf = None
-            if elf:
-                elves[path] = elf
-
-    results = oe.utils.multiprocess_launch(prepopulate_objdump_p, elves.values(), d, extraargs=(d,))
-    for item in results:
-        elves[item[0]].set_objdump("-p", item[1])
-
-    for path in pkgfiles[package]:
-            if path in elves:
-                elves[path].open()
-            for func in warnfuncs:
-                func(path, package, d, elves.get(path), warnings)
-            for func in errorfuncs:
-                func(path, package, d, elves.get(path), errors)
-            if path in elves:
-                elves[path].close()
-
-    for w in warnings:
-        oe.qa.handle_error(w, warnings[w], d)
-    for e in errors:
-        oe.qa.handle_error(e, errors[e], d)
-
-def package_qa_check_rdepends(pkg, pkgdest, skip, taskdeps, packages, d):
-    # Don't do this check for kernel/module recipes, there aren't too many debug/development
-    # packages and you can get false positives e.g. on kernel-module-lirc-dev
-    if bb.data.inherits_class("kernel", d) or bb.data.inherits_class("module-base", d):
-        return
-
-    if not "-dbg" in pkg and not "packagegroup-" in pkg and not "-image" in pkg:
-        localdata = bb.data.createCopy(d)
-        localdata.setVar('OVERRIDES', localdata.getVar('OVERRIDES') + ':' + pkg)
-
-        # Now check the RDEPENDS
-        rdepends = bb.utils.explode_deps(localdata.getVar('RDEPENDS') or "")
-
-        # Now do the sanity check!!!
-        if "build-deps" not in skip:
-            for rdepend in rdepends:
-                if "-dbg" in rdepend and "debug-deps" not in skip:
-                    error_msg = "%s rdepends on %s" % (pkg,rdepend)
-                    oe.qa.handle_error("debug-deps", error_msg, d)
-                if (not "-dev" in pkg and not "-staticdev" in pkg) and rdepend.endswith("-dev") and "dev-deps" not in skip:
-                    error_msg = "%s rdepends on %s" % (pkg, rdepend)
-                    oe.qa.handle_error("dev-deps", error_msg, d)
-                if rdepend not in packages:
-                    rdep_data = oe.packagedata.read_subpkgdata(rdepend, d)
-                    if rdep_data and 'PN' in rdep_data and rdep_data['PN'] in taskdeps:
-                        continue
-                    if not rdep_data or not 'PN' in rdep_data:
-                        pkgdata_dir = d.getVar("PKGDATA_DIR")
-                        try:
-                            possibles = os.listdir("%s/runtime-rprovides/%s/" % (pkgdata_dir, rdepend))
-                        except OSError:
-                            possibles = []
-                        for p in possibles:
-                            rdep_data = oe.packagedata.read_subpkgdata(p, d)
-                            if rdep_data and 'PN' in rdep_data and rdep_data['PN'] in taskdeps:
-                                break
-                    if rdep_data and 'PN' in rdep_data and rdep_data['PN'] in taskdeps:
-                        continue
-                    if rdep_data and 'PN' in rdep_data:
-                        error_msg = "%s rdepends on %s, but it isn't a build dependency, missing %s in DEPENDS or PACKAGECONFIG?" % (pkg, rdepend, rdep_data['PN'])
-                    else:
-                        error_msg = "%s rdepends on %s, but it isn't a build dependency?" % (pkg, rdepend)
-                    oe.qa.handle_error("build-deps", error_msg, d)
-
-        if "file-rdeps" not in skip:
-            ignored_file_rdeps = set(['/bin/sh', '/usr/bin/env', 'rtld(GNU_HASH)'])
-            if bb.data.inherits_class('nativesdk', d):
-                ignored_file_rdeps |= set(['/bin/bash', '/usr/bin/perl', 'perl'])
-            # Save the FILERDEPENDS entries
-            filerdepends = {}
-            rdep_data = oe.packagedata.read_subpkgdata(pkg, d)
-            for key in rdep_data:
-                if key.startswith("FILERDEPENDS:"):
-                    for subkey in bb.utils.explode_deps(rdep_data[key]):
-                        if subkey not in ignored_file_rdeps and \
-                                not subkey.startswith('perl('):
-                            # We already know it starts with FILERDEPENDS:
-                            filerdepends[subkey] = key[13:]
-
-            if filerdepends:
-                done = rdepends[:]
-                # Add the rprovides of itself
-                if pkg not in done:
-                    done.insert(0, pkg)
-
-                # python itself is not a package, but python-core provides it, so
-                # skip checking /usr/bin/python if python is in the rdeps, in
-                # case there is an RDEPENDS:pkg = "python" in the recipe.
-                for py in [ d.getVar('MLPREFIX') + "python", "python" ]:
-                    if py in done:
-                        filerdepends.pop("/usr/bin/python",None)
-                        done.remove(py)
-                for rdep in done:
-                    # The file dependencies may contain package names, e.g.,
-                    # perl
-                    filerdepends.pop(rdep,None)
-
-                    # Look up the FILERPROVIDES, RPROVIDES and FILES_INFO of this rdep
-                    rdep_data = oe.packagedata.read_subpkgdata(rdep, d)
-                    for key in rdep_data:
-                        if key.startswith("FILERPROVIDES:") or key.startswith("RPROVIDES:"):
-                            for subkey in bb.utils.explode_deps(rdep_data[key]):
-                                filerdepends.pop(subkey,None)
-                        # Add the files list to the rprovides
-                        if key.startswith("FILES_INFO:"):
-                            # Use eval() to turn it into a dict
-                            for subkey in eval(rdep_data[key]):
-                                filerdepends.pop(subkey,None)
-                    if not filerdepends:
-                        # Break if all the file rdepends are met
-                        break
-            if filerdepends:
-                for key in filerdepends:
-                    error_msg = "%s contained in package %s requires %s, but no providers found in RDEPENDS:%s?" % \
-                            (filerdepends[key].replace(":%s" % pkg, "").replace("@underscore@", "_"), pkg, key, pkg)
-                    oe.qa.handle_error("file-rdeps", error_msg, d)
-package_qa_check_rdepends[vardepsexclude] = "OVERRIDES"
-
-def package_qa_check_deps(pkg, pkgdest, d):
-
-    localdata = bb.data.createCopy(d)
-    localdata.setVar('OVERRIDES', pkg)
-
-    def check_valid_deps(var):
-        try:
-            rvar = bb.utils.explode_dep_versions2(localdata.getVar(var) or "")
-        except ValueError as e:
-            bb.fatal("%s:%s: %s" % (var, pkg, e))
-        for dep in rvar:
-            for v in rvar[dep]:
-                if v and not v.startswith(('< ', '= ', '> ', '<= ', '>=')):
-                    error_msg = "%s:%s is invalid: %s (%s)   only comparisons <, =, >, <=, and >= are allowed" % (var, pkg, dep, v)
-                    oe.qa.handle_error("dep-cmp", error_msg, d)
-
-    check_valid_deps('RDEPENDS')
-    check_valid_deps('RRECOMMENDS')
-    check_valid_deps('RSUGGESTS')
-    check_valid_deps('RPROVIDES')
-    check_valid_deps('RREPLACES')
-    check_valid_deps('RCONFLICTS')
-
-QAPKGTEST[usrmerge] = "package_qa_check_usrmerge"
-def package_qa_check_usrmerge(pkg, d, messages):
-
-    pkgdest = d.getVar('PKGDEST')
-    pkg_dir = pkgdest + os.sep + pkg + os.sep
-    merged_dirs = ['bin', 'sbin', 'lib'] + d.getVar('MULTILIB_VARIANTS').split()
-    for f in merged_dirs:
-        if os.path.exists(pkg_dir + f) and not os.path.islink(pkg_dir + f):
-            msg = "%s package is not obeying usrmerge distro feature. /%s should be relocated to /usr." % (pkg, f)
-            oe.qa.add_message(messages, "usrmerge", msg)
-            return False
-    return True
-
-QAPKGTEST[perllocalpod] = "package_qa_check_perllocalpod"
-def package_qa_check_perllocalpod(pkg, d, messages):
-    """
-    Check that the recipe didn't ship a perllocal.pod file, which shouldn't be
-    installed in a distribution package.  cpan.bbclass sets NO_PERLLOCAL=1 to
-    handle this for most recipes.
-    """
-    import glob
-    pkgd = oe.path.join(d.getVar('PKGDEST'), pkg)
-    podpath = oe.path.join(pkgd, d.getVar("libdir"), "perl*", "*", "*", "perllocal.pod")
-
-    matches = glob.glob(podpath)
-    if matches:
-        matches = [package_qa_clean_path(path, d, pkg) for path in matches]
-        msg = "%s contains perllocal.pod (%s), should not be installed" % (pkg, " ".join(matches))
-        oe.qa.add_message(messages, "perllocalpod", msg)
-
-QAPKGTEST[expanded-d] = "package_qa_check_expanded_d"
-def package_qa_check_expanded_d(package, d, messages):
-    """
-    Check for the expanded D (${D}) value in pkg_* and FILES
-    variables, warn the user to use it correctly.
-    """
-    sane = True
-    expanded_d = d.getVar('D')
-
-    for var in 'FILES','pkg_preinst', 'pkg_postinst', 'pkg_prerm', 'pkg_postrm':
-        bbvar = d.getVar(var + ":" + package) or ""
-        if expanded_d in bbvar:
-            if var == 'FILES':
-                oe.qa.add_message(messages, "expanded-d", "FILES in %s recipe should not contain the ${D} variable as it references the local build directory not the target filesystem, best solution is to remove the ${D} reference" % package)
-                sane = False
-            else:
-                oe.qa.add_message(messages, "expanded-d", "%s in %s recipe contains ${D}, it should be replaced by $D instead" % (var, package))
-                sane = False
-    return sane
-
-QAPKGTEST[unlisted-pkg-lics] = "package_qa_check_unlisted_pkg_lics"
-def package_qa_check_unlisted_pkg_lics(package, d, messages):
-    """
-    Check that all licenses for a package are among the licenses for the recipe.
-    """
-    pkg_lics = d.getVar('LICENSE:' + package)
-    if not pkg_lics:
-        return True
-
-    recipe_lics_set = oe.license.list_licenses(d.getVar('LICENSE'))
-    package_lics = oe.license.list_licenses(pkg_lics)
-    unlisted = package_lics - recipe_lics_set
-    if unlisted:
-        oe.qa.add_message(messages, "unlisted-pkg-lics",
-                               "LICENSE:%s includes licenses (%s) that are not "
-                               "listed in LICENSE" % (package, ' '.join(unlisted)))
-        return False
-    obsolete = set(oe.license.obsolete_license_list()) & package_lics - recipe_lics_set
-    if obsolete:
-        oe.qa.add_message(messages, "obsolete-license",
-                               "LICENSE:%s includes obsolete licenses %s" % (package, ' '.join(obsolete)))
-        return False
-    return True
-
-QAPKGTEST[empty-dirs] = "package_qa_check_empty_dirs"
-def package_qa_check_empty_dirs(pkg, d, messages):
-    """
-    Check for the existence of files in directories that are expected to be
-    empty.
-    """
-
-    pkgd = oe.path.join(d.getVar('PKGDEST'), pkg)
-    for dir in (d.getVar('QA_EMPTY_DIRS') or "").split():
-        empty_dir = oe.path.join(pkgd, dir)
-        if os.path.exists(empty_dir) and os.listdir(empty_dir):
-            recommendation = (d.getVar('QA_EMPTY_DIRS_RECOMMENDATION:' + dir) or
-                              "but it is expected to be empty")
-            msg = "%s installs files in %s, %s" % (pkg, dir, recommendation)
-            oe.qa.add_message(messages, "empty-dirs", msg)
-
-def package_qa_check_encoding(keys, encode, d):
-    def check_encoding(key, enc):
-        sane = True
-        value = d.getVar(key)
-        if value:
-            try:
-                s = value.encode(enc)
-            except UnicodeEncodeError as e:
-                error_msg = "%s has non %s characters" % (key,enc)
-                sane = False
-                oe.qa.handle_error("invalid-chars", error_msg, d)
-        return sane
-
-    for key in keys:
-        sane = check_encoding(key, encode)
-        if not sane:
-            break
-
-HOST_USER_UID := "${@os.getuid()}"
-HOST_USER_GID := "${@os.getgid()}"
-
-QAPATHTEST[host-user-contaminated] = "package_qa_check_host_user"
-def package_qa_check_host_user(path, name, d, elf, messages):
-    """Check for paths outside of /home which are owned by the user running bitbake."""
-
-    if not os.path.lexists(path):
-        return
-
-    dest = d.getVar('PKGDEST')
-    pn = d.getVar('PN')
-    home = os.path.join(dest, name, 'home')
-    if path == home or path.startswith(home + os.sep):
-        return
-
-    try:
-        stat = os.lstat(path)
-    except OSError as exc:
-        import errno
-        if exc.errno != errno.ENOENT:
-            raise
-    else:
-        check_uid = int(d.getVar('HOST_USER_UID'))
-        if stat.st_uid == check_uid:
-            oe.qa.add_message(messages, "host-user-contaminated", "%s: %s is owned by uid %d, which is the same as the user running bitbake. This may be due to host contamination" % (pn, package_qa_clean_path(path, d, name), check_uid))
-            return False
-
-        check_gid = int(d.getVar('HOST_USER_GID'))
-        if stat.st_gid == check_gid:
-            oe.qa.add_message(messages, "host-user-contaminated", "%s: %s is owned by gid %d, which is the same as the user running bitbake. This may be due to host contamination" % (pn, package_qa_clean_path(path, d, name), check_gid))
-            return False
-    return True
-
-QARECIPETEST[unhandled-features-check] = "package_qa_check_unhandled_features_check"
-def package_qa_check_unhandled_features_check(pn, d, messages):
-    if not bb.data.inherits_class('features_check', d):
-        var_set = False
-        for kind in ['DISTRO', 'MACHINE', 'COMBINED']:
-            for var in ['ANY_OF_' + kind + '_FEATURES', 'REQUIRED_' + kind + '_FEATURES', 'CONFLICT_' + kind + '_FEATURES']:
-                if d.getVar(var) is not None or d.hasOverrides(var):
-                    var_set = True
-        if var_set:
-            oe.qa.handle_error("unhandled-features-check", "%s: recipe doesn't inherit features_check" % pn, d)
-
-QARECIPETEST[missing-update-alternatives] = "package_qa_check_missing_update_alternatives"
-def package_qa_check_missing_update_alternatives(pn, d, messages):
-    # Look at all packages and find out if any of them sets the ALTERNATIVE variable
-    # without inheriting the update-alternatives class
-    for pkg in (d.getVar('PACKAGES') or '').split():
-        if d.getVar('ALTERNATIVE:%s' % pkg) and not bb.data.inherits_class('update-alternatives', d):
-            oe.qa.handle_error("missing-update-alternatives", "%s: recipe defines ALTERNATIVE:%s but doesn't inherit update-alternatives. This might fail during do_rootfs later!" % (pn, pkg), d)
-
-# The PACKAGE FUNC to scan each package
-python do_package_qa () {
-    import subprocess
-    import oe.packagedata
-
-    bb.note("DO PACKAGE QA")
-
-    main_lic = d.getVar('LICENSE')
-
-    # Check for obsolete license references in main LICENSE (packages are checked below for any changes)
-    main_licenses = oe.license.list_licenses(d.getVar('LICENSE'))
-    obsolete = set(oe.license.obsolete_license_list()) & main_licenses
-    if obsolete:
-        oe.qa.handle_error("obsolete-license", "Recipe LICENSE includes obsolete licenses %s" % ' '.join(obsolete), d)
-
-    bb.build.exec_func("read_subpackage_metadata", d)
-
-    # Check for non-UTF-8 characters in the recipe's metadata
-    package_qa_check_encoding(['DESCRIPTION', 'SUMMARY', 'LICENSE', 'SECTION'], 'utf-8', d)
-
-    logdir = d.getVar('T')
-    pn = d.getVar('PN')
-
-    # Scan the packages...
-    pkgdest = d.getVar('PKGDEST')
-    packages = set((d.getVar('PACKAGES') or '').split())
-
-    global pkgfiles
-    pkgfiles = {}
-    for pkg in packages:
-        pkgfiles[pkg] = []
-        pkgdir = os.path.join(pkgdest, pkg)
-        for walkroot, dirs, files in os.walk(pkgdir):
-            # Don't walk into top-level CONTROL or DEBIAN directories as these
-            # are temporary directories created by do_package.
-            if walkroot == pkgdir:
-                for control in ("CONTROL", "DEBIAN"):
-                    if control in dirs:
-                        dirs.remove(control)
-            for file in files:
-                pkgfiles[pkg].append(os.path.join(walkroot, file))
-
-    # nothing to do if there are no packages
-    if not packages:
-        return
-
-    import re
-    # Package names must match the [a-z0-9.+-]+ regular expression
-    pkgname_pattern = re.compile(r"^[a-z0-9.+-]+$")
-
-    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
-    taskdeps = set()
-    for dep in taskdepdata:
-        taskdeps.add(taskdepdata[dep][0])
-
-    def parse_test_matrix(matrix_name):
-        testmatrix = d.getVarFlags(matrix_name) or {}
-        g = globals()
-        warnchecks = []
-        for w in (d.getVar("WARN_QA") or "").split():
-            if w in skip:
-               continue
-            if w in testmatrix and testmatrix[w] in g:
-                warnchecks.append(g[testmatrix[w]])
-
-        errorchecks = []
-        for e in (d.getVar("ERROR_QA") or "").split():
-            if e in skip:
-               continue
-            if e in testmatrix and testmatrix[e] in g:
-                errorchecks.append(g[testmatrix[e]])
-        return warnchecks, errorchecks
-
-    for package in packages:
-        skip = set((d.getVar('INSANE_SKIP') or "").split() +
-                   (d.getVar('INSANE_SKIP:' + package) or "").split())
-        if skip:
-            bb.note("Package %s skipping QA tests: %s" % (package, str(skip)))
-
-        bb.note("Checking Package: %s" % package)
-        # Check package name
-        if not pkgname_pattern.match(package):
-            oe.qa.handle_error("pkgname",
-                    "%s doesn't match the [a-z0-9.+-]+ regex" % package, d)
-
-        warn_checks, error_checks = parse_test_matrix("QAPATHTEST")
-        package_qa_walk(warn_checks, error_checks, package, d)
-
-        warn_checks, error_checks = parse_test_matrix("QAPKGTEST")
-        package_qa_package(warn_checks, error_checks, package, d)
-
-        package_qa_check_rdepends(package, pkgdest, skip, taskdeps, packages, d)
-        package_qa_check_deps(package, pkgdest, d)
-
-    warn_checks, error_checks = parse_test_matrix("QARECIPETEST")
-    package_qa_recipe(warn_checks, error_checks, pn, d)
-
-    if 'libdir' in d.getVar("ALL_QA").split():
-        package_qa_check_libdir(d)
-
-    oe.qa.exit_if_errors(d)
-}
-
-# binutils is used for most checks, so it needs to be set as a dependency
-# POPULATESYSROOTDEPS is defined in staging class.
-do_package_qa[depends] += "${POPULATESYSROOTDEPS}"
-do_package_qa[vardeps] = "${@bb.utils.contains('ERROR_QA', 'empty-dirs', 'QA_EMPTY_DIRS', '', d)}"
-do_package_qa[vardepsexclude] = "BB_TASKDEPDATA"
-do_package_qa[rdeptask] = "do_packagedata"
-addtask do_package_qa after do_packagedata do_package before do_build
-
-# Add the package specific INSANE_SKIPs to the sstate dependencies
-python() {
-    pkgs = (d.getVar('PACKAGES') or '').split()
-    for pkg in pkgs:
-        d.appendVarFlag("do_package_qa", "vardeps", " INSANE_SKIP:{}".format(pkg))
-}
-
-SSTATETASKS += "do_package_qa"
-do_package_qa[sstate-inputdirs] = ""
-do_package_qa[sstate-outputdirs] = ""
-python do_package_qa_setscene () {
-    sstate_setscene(d)
-}
-addtask do_package_qa_setscene
-
-python do_qa_sysroot() {
-    bb.note("QA checking do_populate_sysroot")
-    sysroot_destdir = d.expand('${SYSROOT_DESTDIR}')
-    for sysroot_dir in d.expand('${SYSROOT_DIRS}').split():
-        qa_check_staged(sysroot_destdir + sysroot_dir, d)
-    oe.qa.exit_with_message_if_errors("do_populate_sysroot for this recipe installed files with QA issues", d)
-}
-do_populate_sysroot[postfuncs] += "do_qa_sysroot"
-
-python do_qa_patch() {
-    import subprocess
-
-    ###########################################################################
-    # Check the do_patch log for fuzz warnings
-    #
-    # Further information on why we check for patch fuzz warnings:
-    # http://lists.openembedded.org/pipermail/openembedded-core/2018-March/148675.html
-    # https://bugzilla.yoctoproject.org/show_bug.cgi?id=10450
-    ###########################################################################
-
-    logdir = d.getVar('T')
-    patchlog = os.path.join(logdir,"log.do_patch")
-
-    if os.path.exists(patchlog):
-        fuzzheader = '--- Patch fuzz start ---'
-        fuzzfooter = '--- Patch fuzz end ---'
-        statement = "grep -e '%s' %s > /dev/null" % (fuzzheader, patchlog)
-        if subprocess.call(statement, shell=True) == 0:
-            msg = "Fuzz detected:\n\n"
-            fuzzmsg = ""
-            inFuzzInfo = False
-            f = open(patchlog, "r")
-            for line in f:
-                if fuzzheader in line:
-                    inFuzzInfo = True
-                    fuzzmsg = ""
-                elif fuzzfooter in line:
-                    fuzzmsg = fuzzmsg.replace('\n\n', '\n')
-                    msg += fuzzmsg
-                    msg += "\n"
-                    inFuzzInfo = False
-                elif inFuzzInfo and not 'Now at patch' in line:
-                    fuzzmsg += line
-            f.close()
-            msg += "The context lines in the patches can be updated with devtool:\n"
-            msg += "\n"
-            msg += "    devtool modify %s\n" % d.getVar('PN')
-            msg += "    devtool finish --force-patch-refresh %s <layer_path>\n\n" % d.getVar('PN')
-            msg += "Don't forget to review changes done by devtool!\n"
-            if bb.utils.filter('ERROR_QA', 'patch-fuzz', d):
-                bb.error(msg)
-            elif bb.utils.filter('WARN_QA', 'patch-fuzz', d):
-                bb.warn(msg)
-            msg = "Patch log indicates that patches do not apply cleanly."
-            oe.qa.handle_error("patch-fuzz", msg, d)
-
-    # Check if the patch contains a correctly formatted and spelled Upstream-Status
-    import re
-    from oe import patch
-
-    for url in patch.src_patches(d):
-       (_, _, fullpath, _, _, _) = bb.fetch.decodeurl(url)
-
-       # skip patches not in oe-core
-       if '/meta/' not in fullpath:
-           continue
-
-       kinda_status_re = re.compile(r"^.*upstream.*status.*$", re.IGNORECASE | re.MULTILINE)
-       strict_status_re = re.compile(r"^Upstream-Status: (Pending|Submitted|Denied|Accepted|Inappropriate|Backport|Inactive-Upstream)( .+)?$", re.MULTILINE)
-       guidelines = "https://www.openembedded.org/wiki/Commit_Patch_Message_Guidelines#Patch_Header_Recommendations:_Upstream-Status"
-
-       with open(fullpath, encoding='utf-8', errors='ignore') as f:
-           file_content = f.read()
-           match_kinda = kinda_status_re.search(file_content)
-           match_strict = strict_status_re.search(file_content)
-
-           if not match_strict:
-               if match_kinda:
-                   bb.error("Malformed Upstream-Status in patch\n%s\nPlease correct according to %s :\n%s" % (fullpath, guidelines, match_kinda.group(0)))
-               else:
-                   bb.error("Missing Upstream-Status in patch\n%s\nPlease add according to %s ." % (fullpath, guidelines))
-}
-
-python do_qa_configure() {
-    import subprocess
-
-    ###########################################################################
-    # Check config.log for cross compile issues
-    ###########################################################################
-
-    configs = []
-    workdir = d.getVar('WORKDIR')
-
-    skip = (d.getVar('INSANE_SKIP') or "").split()
-    skip_configure_unsafe = False
-    if 'configure-unsafe' in skip:
-        bb.note("Recipe %s skipping qa checking: configure-unsafe" % d.getVar('PN'))
-        skip_configure_unsafe = True
-
-    if bb.data.inherits_class('autotools', d) and not skip_configure_unsafe:
-        bb.note("Checking autotools environment for common misconfiguration")
-        for root, dirs, files in os.walk(workdir):
-            statement = "grep -q -F -e 'is unsafe for cross-compilation' %s" % \
-                        os.path.join(root,"config.log")
-            if "config.log" in files:
-                if subprocess.call(statement, shell=True) == 0:
-                    error_msg = """This autoconf log indicates errors, it looked at host include and/or library paths while determining system capabilities.
-Rerun configure task after fixing this."""
-                    oe.qa.handle_error("configure-unsafe", error_msg, d)
-
-            if "configure.ac" in files:
-                configs.append(os.path.join(root,"configure.ac"))
-            if "configure.in" in files:
-                configs.append(os.path.join(root, "configure.in"))
-
-    ###########################################################################
-    # Check gettext configuration and dependencies are correct
-    ###########################################################################
-
-    skip_configure_gettext = False
-    if 'configure-gettext' in skip:
-        bb.note("Recipe %s skipping qa checking: configure-gettext" % d.getVar('PN'))
-        skip_configure_gettext = True
-
-    cnf = d.getVar('EXTRA_OECONF') or ""
-    if not ("gettext" in d.getVar('P') or "gcc-runtime" in d.getVar('P') or \
-            "--disable-nls" in cnf or skip_configure_gettext):
-        ml = d.getVar("MLPREFIX") or ""
-        if bb.data.inherits_class('cross-canadian', d):
-            gt = "nativesdk-gettext"
-        else:
-            gt = "gettext-native"
-        deps = bb.utils.explode_deps(d.getVar('DEPENDS') or "")
-        if gt not in deps:
-            for config in configs:
-                gnu = "grep \"^[[:space:]]*AM_GNU_GETTEXT\" %s >/dev/null" % config
-                if subprocess.call(gnu, shell=True) == 0:
-                    error_msg = "AM_GNU_GETTEXT used but no inherit gettext"
-                    oe.qa.handle_error("configure-gettext", error_msg, d)
-
-    ###########################################################################
-    # Check for unrecognised configure options (with an ignore list)
-    ###########################################################################
-    if bb.data.inherits_class("autotools", d):
-        bb.note("Checking configure output for unrecognised options")
-        try:
-            if bb.data.inherits_class("autotools", d):
-                flag = "WARNING: unrecognized options:"
-                log = os.path.join(d.getVar('B'), 'config.log')
-            output = subprocess.check_output(['grep', '-F', flag, log]).decode("utf-8").replace(', ', ' ').replace('"', '')
-            options = set()
-            for line in output.splitlines():
-                options |= set(line.partition(flag)[2].split())
-            ignore_opts = set(d.getVar("UNKNOWN_CONFIGURE_OPT_IGNORE").split())
-            options -= ignore_opts
-            if options:
-                pn = d.getVar('PN')
-                error_msg = pn + ": configure was passed unrecognised options: " + " ".join(options)
-                oe.qa.handle_error("unknown-configure-option", error_msg, d)
-        except subprocess.CalledProcessError:
-            pass
-
-    # Check for invalid PACKAGECONFIG
-    pkgconfig = (d.getVar("PACKAGECONFIG") or "").split()
-    if pkgconfig:
-        pkgconfigflags = d.getVarFlags("PACKAGECONFIG") or {}
-        for pconfig in pkgconfig:
-            if pconfig not in pkgconfigflags:
-                pn = d.getVar('PN')
-                error_msg = "%s: invalid PACKAGECONFIG: %s" % (pn, pconfig)
-                oe.qa.handle_error("invalid-packageconfig", error_msg, d)
-
-    oe.qa.exit_if_errors(d)
-}
-
-def unpack_check_src_uri(pn, d):
-    import re
-
-    skip = (d.getVar('INSANE_SKIP') or "").split()
-    if 'src-uri-bad' in skip:
-        bb.note("Recipe %s skipping qa checking: src-uri-bad" % d.getVar('PN'))
-        return
-
-    if "${PN}" in d.getVar("SRC_URI", False):
-        oe.qa.handle_error("src-uri-bad", "%s: SRC_URI uses PN not BPN" % pn, d)
-
-    for url in d.getVar("SRC_URI").split():
-        # Search for github and gitlab URLs that pull unstable archives (comment for future greppers)
-        if re.search(r"git(hu|la)b\.com/.+/.+/archive/.+", url):
-            oe.qa.handle_error("src-uri-bad", "%s: SRC_URI uses unstable GitHub/GitLab archives, convert recipe to use git protocol" % pn, d)
-
-python do_qa_unpack() {
-    src_uri = d.getVar('SRC_URI')
-    s_dir = d.getVar('S')
-    if src_uri and not os.path.exists(s_dir):
-        bb.warn('%s: the directory %s (%s) pointed to by the S variable doesn\'t exist - please set S within the recipe to point to where the source has been unpacked to' % (d.getVar('PN'), d.getVar('S', False), s_dir))
-
-    unpack_check_src_uri(d.getVar('PN'), d)
-}
-
-# Check for patch fuzz
-do_patch[postfuncs] += "do_qa_patch "
-
-# Check for broken config.log files and for packages requiring gettext
-# which don't have it in DEPENDS.
-#addtask qa_configure after do_configure before do_compile
-do_configure[postfuncs] += "do_qa_configure "
-
-# Check whether S exists.
-do_unpack[postfuncs] += "do_qa_unpack"
-
-python () {
-    import re
-    
-    tests = d.getVar('ALL_QA').split()
-    if "desktop" in tests:
-        d.appendVar("PACKAGE_DEPENDS", " desktop-file-utils-native")
-
-    ###########################################################################
-    # Check various variables
-    ###########################################################################
-
-    # Checking ${FILESEXTRAPATHS}
-    extrapaths = (d.getVar("FILESEXTRAPATHS") or "")
-    if '__default' not in extrapaths.split(":"):
-        msg = "FILESEXTRAPATHS-variable, must always use :prepend (or :append)\n"
-        msg += "type of assignment, and don't forget the colon.\n"
-        msg += "Please assign it with the format of:\n"
-        msg += "  FILESEXTRAPATHS:append := \":${THISDIR}/Your_Files_Path\" or\n"
-        msg += "  FILESEXTRAPATHS:prepend := \"${THISDIR}/Your_Files_Path:\"\n"
-        msg += "in your bbappend file\n\n"
-        msg += "Your incorrect assignment is:\n"
-        msg += "%s\n" % extrapaths
-        bb.warn(msg)
-
-    overrides = d.getVar('OVERRIDES').split(':')
-    pn = d.getVar('PN')
-    if pn in overrides:
-        msg = 'Recipe %s has PN of "%s" which is in OVERRIDES, this can result in unexpected behaviour.' % (d.getVar("FILE"), pn)
-        oe.qa.handle_error("pn-overrides", msg, d)
-    prog = re.compile(r'[A-Z]')
-    if prog.search(pn):
-        oe.qa.handle_error("uppercase-pn", 'PN: %s is upper case, this can result in unexpected behavior.' % pn, d)
-
-    # Some people mistakenly use DEPENDS:${PN} instead of DEPENDS and wonder
-    # why it doesn't work.
-    if (d.getVar(d.expand('DEPENDS:${PN}'))):
-        oe.qa.handle_error("pkgvarcheck", "recipe uses DEPENDS:${PN}, should use DEPENDS", d)
-
-    issues = []
-    if (d.getVar('PACKAGES') or "").split():
-        for dep in (d.getVar('QADEPENDS') or "").split():
-            d.appendVarFlag('do_package_qa', 'depends', " %s:do_populate_sysroot" % dep)
-        for var in 'RDEPENDS', 'RRECOMMENDS', 'RSUGGESTS', 'RCONFLICTS', 'RPROVIDES', 'RREPLACES', 'FILES', 'pkg_preinst', 'pkg_postinst', 'pkg_prerm', 'pkg_postrm', 'ALLOW_EMPTY':
-            if d.getVar(var, False):
-                issues.append(var)
-
-        fakeroot_tests = d.getVar('FAKEROOT_QA').split()
-        if set(tests) & set(fakeroot_tests):
-            d.setVarFlag('do_package_qa', 'fakeroot', '1')
-            d.appendVarFlag('do_package_qa', 'depends', ' virtual/fakeroot-native:do_populate_sysroot')
-    else:
-        d.setVarFlag('do_package_qa', 'rdeptask', '')
-    for i in issues:
-        oe.qa.handle_error("pkgvarcheck", "%s: Variable %s is set as not being package specific, please fix this." % (d.getVar("FILE"), i), d)
-
-    if 'native-last' not in (d.getVar('INSANE_SKIP') or "").split():
-        for native_class in ['native', 'nativesdk']:
-            if bb.data.inherits_class(native_class, d):
-
-                inherited_classes = d.getVar('__inherit_cache', False) or []
-                needle = os.path.join('classes', native_class)
-
-                bbclassextend = (d.getVar('BBCLASSEXTEND') or '').split()
-                # BBCLASSEXTEND items are always added at the end
-                skip_classes = bbclassextend
-                if bb.data.inherits_class('native', d) or 'native' in bbclassextend:
-                    # native also inherits nopackages and relocatable bbclasses
-                    skip_classes.extend(['nopackages', 'relocatable'])
-
-                broken_order = []
-                for class_item in reversed(inherited_classes):
-                    if needle not in class_item:
-                        for extend_item in skip_classes:
-                            if os.path.join('classes', '%s.bbclass' % extend_item) in class_item:
-                                break
-                        else:
-                            pn = d.getVar('PN')
-                            broken_order.append(os.path.basename(class_item))
-                    else:
-                        break
-                if broken_order:
-                    oe.qa.handle_error("native-last", "%s: native/nativesdk class is not inherited last, this can result in unexpected behaviour. "
-                                             "Classes inherited after native/nativesdk: %s" % (pn, " ".join(broken_order)), d)
-
-    oe.qa.exit_if_errors(d)
-}
diff --git a/poky/meta/classes/kernel-arch.bbclass b/poky/meta/classes/kernel-arch.bbclass
deleted file mode 100644
index 348a3ad..0000000
--- a/poky/meta/classes/kernel-arch.bbclass
+++ /dev/null
@@ -1,68 +0,0 @@
-#
-# Set the ARCH environment variable for kernel compilation (including
-# modules). The return value must match one of the architecture directories
-# in the kernel source "arch" directory.
-#
-
-valid_archs = "alpha cris ia64 \
-               i386 x86 \
-               m68knommu m68k ppc powerpc powerpc64 ppc64  \
-               sparc sparc64 \
-               arm aarch64 \
-               m32r mips \
-               sh sh64 um h8300   \
-               parisc s390  v850 \
-               avr32 blackfin \
-               microblaze \
-               nios2 arc riscv xtensa"
-
-def map_kernel_arch(a, d):
-    import re
-
-    valid_archs = d.getVar('valid_archs').split()
-
-    if   re.match('(i.86|athlon|x86.64)$', a):  return 'x86'
-    elif re.match('arceb$', a):                 return 'arc'
-    elif re.match('armeb$', a):                 return 'arm'
-    elif re.match('aarch64$', a):               return 'arm64'
-    elif re.match('aarch64_be$', a):            return 'arm64'
-    elif re.match('aarch64_ilp32$', a):         return 'arm64'
-    elif re.match('aarch64_be_ilp32$', a):      return 'arm64'
-    elif re.match('mips(isa|)(32|64|)(r6|)(el|)$', a):      return 'mips'
-    elif re.match('mcf', a):                    return 'm68k'
-    elif re.match('riscv(32|64|)(eb|)$', a):    return 'riscv'
-    elif re.match('p(pc|owerpc)(|64)', a):      return 'powerpc'
-    elif re.match('sh(3|4)$', a):               return 'sh'
-    elif re.match('bfin', a):                   return 'blackfin'
-    elif re.match('microblazee[bl]', a):        return 'microblaze'
-    elif a in valid_archs:                      return a
-    else:
-        if not d.getVar("TARGET_OS").startswith("linux"):
-            return a
-        bb.error("cannot map '%s' to a linux kernel architecture" % a)
-
-export ARCH = "${@map_kernel_arch(d.getVar('TARGET_ARCH'), d)}"
-
-def map_uboot_arch(a, d):
-    import re
-
-    if   re.match('p(pc|owerpc)(|64)', a): return 'ppc'
-    elif re.match('i.86$', a): return 'x86'
-    return a
-
-export UBOOT_ARCH = "${@map_uboot_arch(d.getVar('ARCH'), d)}"
-
-# Set TARGET_??_KERNEL_ARCH in the machine .conf to set architecture
-# specific options necessary for building the kernel and modules.
-TARGET_CC_KERNEL_ARCH ?= ""
-HOST_CC_KERNEL_ARCH ?= "${TARGET_CC_KERNEL_ARCH}"
-TARGET_LD_KERNEL_ARCH ?= ""
-HOST_LD_KERNEL_ARCH ?= "${TARGET_LD_KERNEL_ARCH}"
-TARGET_AR_KERNEL_ARCH ?= ""
-HOST_AR_KERNEL_ARCH ?= "${TARGET_AR_KERNEL_ARCH}"
-
-KERNEL_CC = "${CCACHE}${HOST_PREFIX}gcc ${HOST_CC_KERNEL_ARCH} -fuse-ld=bfd ${DEBUG_PREFIX_MAP} -fdebug-prefix-map=${STAGING_KERNEL_DIR}=${KERNEL_SRC_PATH} -fdebug-prefix-map=${STAGING_KERNEL_BUILDDIR}=${KERNEL_SRC_PATH}"
-KERNEL_LD = "${CCACHE}${HOST_PREFIX}ld.bfd ${HOST_LD_KERNEL_ARCH}"
-KERNEL_AR = "${CCACHE}${HOST_PREFIX}ar ${HOST_AR_KERNEL_ARCH}"
-TOOLCHAIN = "gcc"
-
diff --git a/poky/meta/classes/kernel-artifact-names.bbclass b/poky/meta/classes/kernel-artifact-names.bbclass
deleted file mode 100644
index e77107c..0000000
--- a/poky/meta/classes/kernel-artifact-names.bbclass
+++ /dev/null
@@ -1,31 +0,0 @@
-##################################################################
-# Specific kernel creation info
-# for recipes/bbclasses which need to reuse some of the kernel
-# artifacts, but aren't kernel recipes themselves
-##################################################################
-
-inherit image-artifact-names
-
-KERNEL_ARTIFACT_NAME ?= "${PKGE}-${PKGV}-${PKGR}-${MACHINE}${IMAGE_VERSION_SUFFIX}"
-KERNEL_ARTIFACT_LINK_NAME ?= "${MACHINE}"
-KERNEL_ARTIFACT_BIN_EXT ?= ".bin"
-
-KERNEL_IMAGE_NAME ?= "${KERNEL_ARTIFACT_NAME}"
-KERNEL_IMAGE_LINK_NAME ?= "${KERNEL_ARTIFACT_LINK_NAME}"
-KERNEL_IMAGE_BIN_EXT ?= "${KERNEL_ARTIFACT_BIN_EXT}"
-KERNEL_IMAGETYPE_SYMLINK ?= "1"
-
-KERNEL_DTB_NAME ?= "${KERNEL_ARTIFACT_NAME}"
-KERNEL_DTB_LINK_NAME ?= "${KERNEL_ARTIFACT_LINK_NAME}"
-KERNEL_DTB_BIN_EXT ?= "${KERNEL_ARTIFACT_BIN_EXT}"
-
-KERNEL_FIT_NAME ?= "${KERNEL_ARTIFACT_NAME}"
-KERNEL_FIT_LINK_NAME ?= "${KERNEL_ARTIFACT_LINK_NAME}"
-KERNEL_FIT_BIN_EXT ?= "${KERNEL_ARTIFACT_BIN_EXT}"
-
-MODULE_TARBALL_NAME ?= "${KERNEL_ARTIFACT_NAME}"
-MODULE_TARBALL_LINK_NAME ?= "${KERNEL_ARTIFACT_LINK_NAME}"
-MODULE_TARBALL_DEPLOY ?= "1"
-
-INITRAMFS_NAME ?= "initramfs-${KERNEL_ARTIFACT_NAME}"
-INITRAMFS_LINK_NAME ?= "initramfs-${KERNEL_ARTIFACT_LINK_NAME}"
diff --git a/poky/meta/classes/kernel-devicetree.bbclass b/poky/meta/classes/kernel-devicetree.bbclass
deleted file mode 100644
index b4338da..0000000
--- a/poky/meta/classes/kernel-devicetree.bbclass
+++ /dev/null
@@ -1,113 +0,0 @@
-# Support for device tree generation
-python () {
-    if not bb.data.inherits_class('nopackages', d):
-        d.appendVar("PACKAGES", " ${KERNEL_PACKAGE_NAME}-devicetree")
-        if d.getVar('KERNEL_DEVICETREE_BUNDLE') == '1':
-            d.appendVar("PACKAGES", " ${KERNEL_PACKAGE_NAME}-image-zimage-bundle")
-}
-
-FILES:${KERNEL_PACKAGE_NAME}-devicetree = "/${KERNEL_IMAGEDEST}/*.dtb /${KERNEL_IMAGEDEST}/*.dtbo"
-FILES:${KERNEL_PACKAGE_NAME}-image-zimage-bundle = "/${KERNEL_IMAGEDEST}/zImage-*.dtb.bin"
-
-# Generate kernel+devicetree bundle
-KERNEL_DEVICETREE_BUNDLE ?= "0"
-
-# dtc flags passed via DTC_FLAGS env variable
-KERNEL_DTC_FLAGS ?= ""
-
-normalize_dtb () {
-	dtb="$1"
-	if echo $dtb | grep -q '/dts/'; then
-		bbwarn "$dtb contains the full path to the dts file, but only the dtb name should be used."
-		dtb=`basename $dtb | sed 's,\.dts$,.dtb,g'`
-	fi
-	echo "$dtb"
-}
-
-get_real_dtb_path_in_kernel () {
-	dtb="$1"
-	dtb_path="${B}/arch/${ARCH}/boot/dts/$dtb"
-	if [ ! -e "$dtb_path" ]; then
-		dtb_path="${B}/arch/${ARCH}/boot/$dtb"
-	fi
-	echo "$dtb_path"
-}
-
-do_configure:append() {
-	if [ "${KERNEL_DEVICETREE_BUNDLE}" = "1" ]; then
-		if echo ${KERNEL_IMAGETYPE_FOR_MAKE} | grep -q 'zImage'; then
-			case "${ARCH}" in
-				"arm")
-				config="${B}/.config"
-				if ! grep -q 'CONFIG_ARM_APPENDED_DTB=y' $config; then
-					bbwarn 'CONFIG_ARM_APPENDED_DTB is NOT enabled in the kernel. Enabling it to allow the kernel to boot with the Device Tree appended!'
-					sed -i "/CONFIG_ARM_APPENDED_DTB[ =]/d" $config
-					echo "CONFIG_ARM_APPENDED_DTB=y" >> $config
-					echo "# CONFIG_ARM_ATAG_DTB_COMPAT is not set" >> $config
-				fi
-				;;
-				*)
-				bberror "KERNEL_DEVICETREE_BUNDLE is not supported for ${ARCH}. Currently it is only supported for 'ARM'."
-			esac
-		else
-			bberror 'The KERNEL_DEVICETREE_BUNDLE requires the KERNEL_IMAGETYPE to contain zImage.'
-		fi
-	fi
-}
-
-do_compile:append() {
-	if [ -n "${KERNEL_DTC_FLAGS}" ]; then
-		export DTC_FLAGS="${KERNEL_DTC_FLAGS}"
-	fi
-
-	for dtbf in ${KERNEL_DEVICETREE}; do
-		dtb=`normalize_dtb "$dtbf"`
-		oe_runmake $dtb CC="${KERNEL_CC} $cc_extra " LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS}
-	done
-}
-
-do_install:append() {
-	for dtbf in ${KERNEL_DEVICETREE}; do
-		dtb=`normalize_dtb "$dtbf"`
-		dtb_ext=${dtb##*.}
-		dtb_base_name=`basename $dtb .$dtb_ext`
-		dtb_path=`get_real_dtb_path_in_kernel "$dtb"`
-		install -m 0644 $dtb_path ${D}/${KERNEL_IMAGEDEST}/$dtb_base_name.$dtb_ext
-	done
-}
-
-do_deploy:append() {
-	for dtbf in ${KERNEL_DEVICETREE}; do
-		dtb=`normalize_dtb "$dtbf"`
-		dtb_ext=${dtb##*.}
-		dtb_base_name=`basename $dtb .$dtb_ext`
-		install -d $deployDir
-		install -m 0644 ${D}/${KERNEL_IMAGEDEST}/$dtb_base_name.$dtb_ext $deployDir/$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext
-		if [ "${KERNEL_IMAGETYPE_SYMLINK}" = "1" ] ; then
-			ln -sf $dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext $deployDir/$dtb_base_name.$dtb_ext
-		fi
-		if [ -n "${KERNEL_DTB_LINK_NAME}" ] ; then
-			ln -sf $dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext $deployDir/$dtb_base_name-${KERNEL_DTB_LINK_NAME}.$dtb_ext
-		fi
-		for type in ${KERNEL_IMAGETYPE_FOR_MAKE}; do
-			if [ "$type" = "zImage" ] && [ "${KERNEL_DEVICETREE_BUNDLE}" = "1" ]; then
-				cat ${D}/${KERNEL_IMAGEDEST}/$type \
-					$deployDir/$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext \
-					> $deployDir/$type-$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT}
-				if [ -n "${KERNEL_DTB_LINK_NAME}" ]; then
-					ln -sf $type-$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT} \
-						$deployDir/$type-$dtb_base_name-${KERNEL_DTB_LINK_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT}
-				fi
-				if [ -e "${KERNEL_OUTPUT_DIR}/${type}.initramfs" ]; then
-					cat ${KERNEL_OUTPUT_DIR}/${type}.initramfs \
-						$deployDir/$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext \
-						>  $deployDir/${type}-${INITRAMFS_NAME}-$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT}
-					if [ -n "${KERNEL_DTB_LINK_NAME}" ]; then
-						ln -sf ${type}-${INITRAMFS_NAME}-$dtb_base_name-${KERNEL_DTB_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT} \
-							$deployDir/${type}-${INITRAMFS_NAME}-$dtb_base_name-${KERNEL_DTB_LINK_NAME}.$dtb_ext${KERNEL_DTB_BIN_EXT}
-					fi
-				fi
-			fi
-		done
-	done
-}
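A hedged machine-configuration sketch of how this class is typically driven; the dtb names are made up, and per the normalize_dtb warning above only the dtb file name (not the dts path) should be listed:

    KERNEL_DEVICETREE = "vendor-board.dtb vendor-overlay.dtbo"   # example names only
    KERNEL_DTC_FLAGS = "-@"                                      # e.g. keep symbols for overlays
    KERNEL_DEVICETREE_BUNDLE = "1"                               # only honoured for arm + zImage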
diff --git a/poky/meta/classes/kernel-fitimage.bbclass b/poky/meta/classes/kernel-fitimage.bbclass
deleted file mode 100644
index 7531645..0000000
--- a/poky/meta/classes/kernel-fitimage.bbclass
+++ /dev/null
@@ -1,797 +0,0 @@
-inherit kernel-uboot kernel-artifact-names uboot-sign
-
-def get_fit_replacement_type(d):
-    kerneltypes = d.getVar('KERNEL_IMAGETYPES') or ""
-    replacementtype = ""
-    if 'fitImage' in kerneltypes.split():
-        uarch = d.getVar("UBOOT_ARCH")
-        if uarch == "arm64":
-            replacementtype = "Image"
-        elif uarch == "riscv":
-            replacementtype = "Image"
-        elif uarch == "mips":
-            replacementtype = "vmlinuz.bin"
-        elif uarch == "x86":
-            replacementtype = "bzImage"
-        elif uarch == "microblaze":
-            replacementtype = "linux.bin"
-        else:
-            replacementtype = "zImage"
-    return replacementtype
-
-KERNEL_IMAGETYPE_REPLACEMENT ?= "${@get_fit_replacement_type(d)}"
-DEPENDS:append = " ${@'u-boot-tools-native dtc-native' if 'fitImage' in (d.getVar('KERNEL_IMAGETYPES') or '').split() else ''}"
-
-python __anonymous () {
-        # Override KERNEL_IMAGETYPE_FOR_MAKE variable, which is internal
-        # to kernel.bbclass. We have to override it, since we pack zImage
-        # (at least for now) into the fitImage.
-        typeformake = d.getVar("KERNEL_IMAGETYPE_FOR_MAKE") or ""
-        if 'fitImage' in typeformake.split():
-            d.setVar('KERNEL_IMAGETYPE_FOR_MAKE', typeformake.replace('fitImage', d.getVar('KERNEL_IMAGETYPE_REPLACEMENT')))
-
-        image = d.getVar('INITRAMFS_IMAGE')
-        if image:
-            d.appendVarFlag('do_assemble_fitimage_initramfs', 'depends', ' ${INITRAMFS_IMAGE}:do_image_complete')
-
-        ubootenv = d.getVar('UBOOT_ENV')
-        if ubootenv:
-            d.appendVarFlag('do_assemble_fitimage', 'depends', ' virtual/bootloader:do_populate_sysroot')
-
-        #check if there are any dtb providers
-        providerdtb = d.getVar("PREFERRED_PROVIDER_virtual/dtb")
-        if providerdtb:
-            d.appendVarFlag('do_assemble_fitimage', 'depends', ' virtual/dtb:do_populate_sysroot')
-            d.appendVarFlag('do_assemble_fitimage_initramfs', 'depends', ' virtual/dtb:do_populate_sysroot')
-            d.setVar('EXTERNAL_KERNEL_DEVICETREE', "${RECIPE_SYSROOT}/boot/devicetree")
-
-        # Verified boot will sign the fitImage and append the public key to
-        # U-Boot dtb. We ensure the U-Boot dtb is deployed before assembling
-        # the fitImage:
-        if d.getVar('UBOOT_SIGN_ENABLE') == "1" and d.getVar('UBOOT_DTB_BINARY'):
-            uboot_pn = d.getVar('PREFERRED_PROVIDER_u-boot') or 'u-boot'
-            d.appendVarFlag('do_assemble_fitimage', 'depends', ' %s:do_populate_sysroot' % uboot_pn)
-            if d.getVar('INITRAMFS_IMAGE_BUNDLE') == "1":
-                d.appendVarFlag('do_assemble_fitimage_initramfs', 'depends', ' %s:do_populate_sysroot' % uboot_pn)
-}
-
-
-# Description string
-FIT_DESC ?= "Kernel fitImage for ${DISTRO_NAME}/${PV}/${MACHINE}"
-
-# Sign individual images as well
-FIT_SIGN_INDIVIDUAL ?= "0"
-
-FIT_CONF_PREFIX ?= "conf-"
-FIT_CONF_PREFIX[doc] = "Prefix to use for FIT configuration node name"
-
-FIT_SUPPORTED_INITRAMFS_FSTYPES ?= "cpio.lz4 cpio.lzo cpio.lzma cpio.xz cpio.zst cpio.gz ext2.gz cpio"
-
-# Keys used to individually sign image nodes.
-# The keys used to sign image nodes must be different from those used to sign
-# configuration nodes, otherwise the "required" property, from
-# UBOOT_DTB_BINARY, will be set to "conf", because "conf" prevails over "image".
-# Then image signature checking would not be mandatory and no error would be
-# raised in case of failure.
-# UBOOT_SIGN_IMG_KEYNAME = "dev2" # key name in keydir (e.g. "dev2.crt", "dev2.key")
-
-#
-# Emit the fitImage ITS header
-#
-# $1 ... .its filename
-fitimage_emit_fit_header() {
-	cat << EOF >> $1
-/dts-v1/;
-
-/ {
-        description = "${FIT_DESC}";
-        #address-cells = <1>;
-EOF
-}
-
-#
-# Emit the fitImage section bits
-#
-# $1 ... .its filename
-# $2 ... Section bit type: imagestart - image section start
-#                          confstart  - configuration section start
-#                          sectend    - section end
-#                          fitend     - fitimage end
-#
-fitimage_emit_section_maint() {
-	case $2 in
-	imagestart)
-		cat << EOF >> $1
-
-        images {
-EOF
-	;;
-	confstart)
-		cat << EOF >> $1
-
-        configurations {
-EOF
-	;;
-	sectend)
-		cat << EOF >> $1
-	};
-EOF
-	;;
-	fitend)
-		cat << EOF >> $1
-};
-EOF
-	;;
-	esac
-}
-
-#
-# Emit the fitImage ITS kernel section
-#
-# $1 ... .its filename
-# $2 ... Image counter
-# $3 ... Path to kernel image
-# $4 ... Compression type
-fitimage_emit_section_kernel() {
-
-	kernel_csum="${FIT_HASH_ALG}"
-	kernel_sign_algo="${FIT_SIGN_ALG}"
-	kernel_sign_keyname="${UBOOT_SIGN_IMG_KEYNAME}"
-
-	ENTRYPOINT="${UBOOT_ENTRYPOINT}"
-	if [ -n "${UBOOT_ENTRYSYMBOL}" ]; then
-		ENTRYPOINT=`${HOST_PREFIX}nm vmlinux | \
-			awk '$3=="${UBOOT_ENTRYSYMBOL}" {print "0x"$1;exit}'`
-	fi
-
-	cat << EOF >> $1
-                kernel-$2 {
-                        description = "Linux kernel";
-                        data = /incbin/("$3");
-                        type = "${UBOOT_MKIMAGE_KERNEL_TYPE}";
-                        arch = "${UBOOT_ARCH}";
-                        os = "linux";
-                        compression = "$4";
-                        load = <${UBOOT_LOADADDRESS}>;
-                        entry = <$ENTRYPOINT>;
-                        hash-1 {
-                                algo = "$kernel_csum";
-                        };
-                };
-EOF
-
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${FIT_SIGN_INDIVIDUAL}" = "1" -a -n "$kernel_sign_keyname" ] ; then
-		sed -i '$ d' $1
-		cat << EOF >> $1
-                        signature-1 {
-                                algo = "$kernel_csum,$kernel_sign_algo";
-                                key-name-hint = "$kernel_sign_keyname";
-                        };
-                };
-EOF
-	fi
-}
-
-#
-# Emit the fitImage ITS DTB section
-#
-# $1 ... .its filename
-# $2 ... Image counter
-# $3 ... Path to DTB image
-fitimage_emit_section_dtb() {
-
-	dtb_csum="${FIT_HASH_ALG}"
-	dtb_sign_algo="${FIT_SIGN_ALG}"
-	dtb_sign_keyname="${UBOOT_SIGN_IMG_KEYNAME}"
-
-	dtb_loadline=""
-	dtb_ext=${DTB##*.}
-	if [ "${dtb_ext}" = "dtbo" ]; then
-		if [ -n "${UBOOT_DTBO_LOADADDRESS}" ]; then
-			dtb_loadline="load = <${UBOOT_DTBO_LOADADDRESS}>;"
-		fi
-	elif [ -n "${UBOOT_DTB_LOADADDRESS}" ]; then
-		dtb_loadline="load = <${UBOOT_DTB_LOADADDRESS}>;"
-	fi
-	cat << EOF >> $1
-                fdt-$2 {
-                        description = "Flattened Device Tree blob";
-                        data = /incbin/("$3");
-                        type = "flat_dt";
-                        arch = "${UBOOT_ARCH}";
-                        compression = "none";
-                        $dtb_loadline
-                        hash-1 {
-                                algo = "$dtb_csum";
-                        };
-                };
-EOF
-
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${FIT_SIGN_INDIVIDUAL}" = "1" -a -n "$dtb_sign_keyname" ] ; then
-		sed -i '$ d' $1
-		cat << EOF >> $1
-                        signature-1 {
-                                algo = "$dtb_csum,$dtb_sign_algo";
-                                key-name-hint = "$dtb_sign_keyname";
-                        };
-                };
-EOF
-	fi
-}
-
-#
-# Emit the fitImage ITS u-boot script section
-#
-# $1 ... .its filename
-# $2 ... Image counter
-# $3 ... Path to boot script image
-fitimage_emit_section_boot_script() {
-
-	bootscr_csum="${FIT_HASH_ALG}"
-	bootscr_sign_algo="${FIT_SIGN_ALG}"
-	bootscr_sign_keyname="${UBOOT_SIGN_IMG_KEYNAME}"
-
-        cat << EOF >> $1
-                bootscr-$2 {
-                        description = "U-boot script";
-                        data = /incbin/("$3");
-                        type = "script";
-                        arch = "${UBOOT_ARCH}";
-                        compression = "none";
-                        hash-1 {
-                                algo = "$bootscr_csum";
-                        };
-                };
-EOF
-
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${FIT_SIGN_INDIVIDUAL}" = "1" -a -n "$bootscr_sign_keyname" ] ; then
-		sed -i '$ d' $1
-		cat << EOF >> $1
-                        signature-1 {
-                                algo = "$bootscr_csum,$bootscr_sign_algo";
-                                key-name-hint = "$bootscr_sign_keyname";
-                        };
-                };
-EOF
-	fi
-}
-
-#
-# Emit the fitImage ITS setup section
-#
-# $1 ... .its filename
-# $2 ... Image counter
-# $3 ... Path to setup image
-fitimage_emit_section_setup() {
-
-	setup_csum="${FIT_HASH_ALG}"
-
-	cat << EOF >> $1
-                setup-$2 {
-                        description = "Linux setup.bin";
-                        data = /incbin/("$3");
-                        type = "x86_setup";
-                        arch = "${UBOOT_ARCH}";
-                        os = "linux";
-                        compression = "none";
-                        load = <0x00090000>;
-                        entry = <0x00090000>;
-                        hash-1 {
-                                algo = "$setup_csum";
-                        };
-                };
-EOF
-}
-
-#
-# Emit the fitImage ITS ramdisk section
-#
-# $1 ... .its filename
-# $2 ... Image counter
-# $3 ... Path to ramdisk image
-fitimage_emit_section_ramdisk() {
-
-	ramdisk_csum="${FIT_HASH_ALG}"
-	ramdisk_sign_algo="${FIT_SIGN_ALG}"
-	ramdisk_sign_keyname="${UBOOT_SIGN_IMG_KEYNAME}"
-	ramdisk_loadline=""
-	ramdisk_entryline=""
-
-	if [ -n "${UBOOT_RD_LOADADDRESS}" ]; then
-		ramdisk_loadline="load = <${UBOOT_RD_LOADADDRESS}>;"
-	fi
-	if [ -n "${UBOOT_RD_ENTRYPOINT}" ]; then
-		ramdisk_entryline="entry = <${UBOOT_RD_ENTRYPOINT}>;"
-	fi
-
-	cat << EOF >> $1
-                ramdisk-$2 {
-                        description = "${INITRAMFS_IMAGE}";
-                        data = /incbin/("$3");
-                        type = "ramdisk";
-                        arch = "${UBOOT_ARCH}";
-                        os = "linux";
-                        compression = "none";
-                        $ramdisk_loadline
-                        $ramdisk_entryline
-                        hash-1 {
-                                algo = "$ramdisk_csum";
-                        };
-                };
-EOF
-
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${FIT_SIGN_INDIVIDUAL}" = "1" -a -n "$ramdisk_sign_keyname" ] ; then
-		sed -i '$ d' $1
-		cat << EOF >> $1
-                        signature-1 {
-                                algo = "$ramdisk_csum,$ramdisk_sign_algo";
-                                key-name-hint = "$ramdisk_sign_keyname";
-                        };
-                };
-EOF
-	fi
-}
-
-#
-# Emit the fitImage ITS configuration section
-#
-# $1 ... .its filename
-# $2 ... Linux kernel ID
-# $3 ... DTB image name
-# $4 ... ramdisk ID
-# $5 ... u-boot script ID
-# $6 ... config ID
-# $7 ... default flag
-fitimage_emit_section_config() {
-
-	conf_csum="${FIT_HASH_ALG}"
-	conf_sign_algo="${FIT_SIGN_ALG}"
-	conf_padding_algo="${FIT_PAD_ALG}"
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" ] ; then
-		conf_sign_keyname="${UBOOT_SIGN_KEYNAME}"
-	fi
-
-	its_file="$1"
-	kernel_id="$2"
-	dtb_image="$3"
-	ramdisk_id="$4"
-	bootscr_id="$5"
-	config_id="$6"
-	default_flag="$7"
-
-	# Test if we have any DTBs at all
-	sep=""
-	conf_desc=""
-	conf_node="${FIT_CONF_PREFIX}"
-	kernel_line=""
-	fdt_line=""
-	ramdisk_line=""
-	bootscr_line=""
-	setup_line=""
-	default_line=""
-
-	# The conf node name is selected based on the dtb ID if it is present,
-	# otherwise it is selected based on the kernel ID.
-	if [ -n "$dtb_image" ]; then
-		conf_node=$conf_node$dtb_image
-	else
-		conf_node=$conf_node$kernel_id
-	fi
-
-	if [ -n "$kernel_id" ]; then
-		conf_desc="Linux kernel"
-		sep=", "
-		kernel_line="kernel = \"kernel-$kernel_id\";"
-	fi
-
-	if [ -n "$dtb_image" ]; then
-		conf_desc="$conf_desc${sep}FDT blob"
-		sep=", "
-		fdt_line="fdt = \"fdt-$dtb_image\";"
-	fi
-
-	if [ -n "$ramdisk_id" ]; then
-		conf_desc="$conf_desc${sep}ramdisk"
-		sep=", "
-		ramdisk_line="ramdisk = \"ramdisk-$ramdisk_id\";"
-	fi
-
-	if [ -n "$bootscr_id" ]; then
-		conf_desc="$conf_desc${sep}u-boot script"
-		sep=", "
-		bootscr_line="bootscr = \"bootscr-$bootscr_id\";"
-	fi
-
-	if [ -n "$config_id" ]; then
-		conf_desc="$conf_desc${sep}setup"
-		setup_line="setup = \"setup-$config_id\";"
-	fi
-
-	if [ "$default_flag" = "1" ]; then
-		# The default node is selected based on the dtb ID if it is present,
-		# otherwise it is selected based on the kernel ID.
-		if [ -n "$dtb_image" ]; then
-			default_line="default = \"${FIT_CONF_PREFIX}$dtb_image\";"
-		else
-			default_line="default = \"${FIT_CONF_PREFIX}$kernel_id\";"
-		fi
-	fi
-
-	cat << EOF >> $its_file
-                $default_line
-                $conf_node {
-                        description = "$default_flag $conf_desc";
-                        $kernel_line
-                        $fdt_line
-                        $ramdisk_line
-                        $bootscr_line
-                        $setup_line
-                        hash-1 {
-                                algo = "$conf_csum";
-                        };
-EOF
-
-	if [ -n "$conf_sign_keyname" ] ; then
-
-		sign_line="sign-images = "
-		sep=""
-
-		if [ -n "$kernel_id" ]; then
-			sign_line="$sign_line${sep}\"kernel\""
-			sep=", "
-		fi
-
-		if [ -n "$dtb_image" ]; then
-			sign_line="$sign_line${sep}\"fdt\""
-			sep=", "
-		fi
-
-		if [ -n "$ramdisk_id" ]; then
-			sign_line="$sign_line${sep}\"ramdisk\""
-			sep=", "
-		fi
-
-		if [ -n "$bootscr_id" ]; then
-			sign_line="$sign_line${sep}\"bootscr\""
-			sep=", "
-		fi
-
-		if [ -n "$config_id" ]; then
-			sign_line="$sign_line${sep}\"setup\""
-		fi
-
-		sign_line="$sign_line;"
-
-		cat << EOF >> $its_file
-                        signature-1 {
-                                algo = "$conf_csum,$conf_sign_algo";
-                                key-name-hint = "$conf_sign_keyname";
-                                padding = "$conf_padding_algo";
-                                $sign_line
-                        };
-EOF
-	fi
-
-	cat << EOF >> $its_file
-                };
-EOF
-}
-
-#
-# Assemble fitImage
-#
-# $1 ... .its filename
-# $2 ... fitImage name
-# $3 ... include ramdisk
-fitimage_assemble() {
-	kernelcount=1
-	dtbcount=""
-	DTBS=""
-	ramdiskcount=$3
-	setupcount=""
-	bootscr_id=""
-	rm -f $1 arch/${ARCH}/boot/$2
-
-	if [ -n "${UBOOT_SIGN_IMG_KEYNAME}" -a "${UBOOT_SIGN_KEYNAME}" = "${UBOOT_SIGN_IMG_KEYNAME}" ]; then
-		bbfatal "Keys used to sign images and configuration nodes must be different."
-	fi
-
-	fitimage_emit_fit_header $1
-
-	#
-	# Step 1: Prepare a kernel image section.
-	#
-	fitimage_emit_section_maint $1 imagestart
-
-	uboot_prep_kimage
-	fitimage_emit_section_kernel $1 $kernelcount linux.bin "$linux_comp"
-
-	#
-	# Step 2: Prepare a DTB image section
-	#
-
-	if [ -n "${KERNEL_DEVICETREE}" ]; then
-		dtbcount=1
-		for DTB in ${KERNEL_DEVICETREE}; do
-			if echo $DTB | grep -q '/dts/'; then
-				bbwarn "$DTB contains the full path to the dts file, but only the dtb name should be used."
-				DTB=`basename $DTB | sed 's,\.dts$,.dtb,g'`
-			fi
-
-			# Skip ${DTB} if it's also provided in ${EXTERNAL_KERNEL_DEVICETREE}
-			if [ -n "${EXTERNAL_KERNEL_DEVICETREE}" ] && [ -s ${EXTERNAL_KERNEL_DEVICETREE}/${DTB} ]; then
-				continue
-			fi
-
-			DTB_PATH="arch/${ARCH}/boot/dts/$DTB"
-			if [ ! -e "$DTB_PATH" ]; then
-				DTB_PATH="arch/${ARCH}/boot/$DTB"
-			fi
-
-			DTB=$(echo "$DTB" | tr '/' '_')
-			DTBS="$DTBS $DTB"
-			fitimage_emit_section_dtb $1 $DTB $DTB_PATH
-		done
-	fi
-
-	if [ -n "${EXTERNAL_KERNEL_DEVICETREE}" ]; then
-		dtbcount=1
-		for DTB in $(find "${EXTERNAL_KERNEL_DEVICETREE}" \( -name '*.dtb' -o -name '*.dtbo' \) -printf '%P\n' | sort); do
-			DTB=$(echo "$DTB" | tr '/' '_')
-			DTBS="$DTBS $DTB"
-			fitimage_emit_section_dtb $1 $DTB "${EXTERNAL_KERNEL_DEVICETREE}/$DTB"
-		done
-	fi
-
-	#
-	# Step 3: Prepare a u-boot script section
-	#
-
-	if [ -n "${UBOOT_ENV}" ] && [ -d "${STAGING_DIR_HOST}/boot" ]; then
-		if [ -e "${STAGING_DIR_HOST}/boot/${UBOOT_ENV_BINARY}" ]; then
-			cp ${STAGING_DIR_HOST}/boot/${UBOOT_ENV_BINARY} ${B}
-			bootscr_id="${UBOOT_ENV_BINARY}"
-			fitimage_emit_section_boot_script $1 "$bootscr_id" ${UBOOT_ENV_BINARY}
-		else
-			bbwarn "${STAGING_DIR_HOST}/boot/${UBOOT_ENV_BINARY} not found."
-		fi
-	fi
-
-	#
-	# Step 4: Prepare a setup section. (For x86)
-	#
-	if [ -e arch/${ARCH}/boot/setup.bin ]; then
-		setupcount=1
-		fitimage_emit_section_setup $1 $setupcount arch/${ARCH}/boot/setup.bin
-	fi
-
-	#
-	# Step 5: Prepare a ramdisk section.
-	#
-	if [ "x${ramdiskcount}" = "x1" ] && [ "${INITRAMFS_IMAGE_BUNDLE}" != "1" ]; then
-		# Find and use the first initramfs image archive type we find
-		found=
-		for img in ${FIT_SUPPORTED_INITRAMFS_FSTYPES}; do
-			initramfs_path="${DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.$img"
-			if [ -e "$initramfs_path" ]; then
-				bbnote "Found initramfs image: $initramfs_path"
-				found=true
-				fitimage_emit_section_ramdisk $1 "$ramdiskcount" "$initramfs_path"
-				break
-			else
-				bbnote "Did not find initramfs image: $initramfs_path"
-			fi
-		done
-
-		if [ -z "$found" ]; then
-			bbfatal "Could not find a valid initramfs type for ${INITRAMFS_IMAGE_NAME}, the supported types are: ${FIT_SUPPORTED_INITRAMFS_FSTYPES}"
-		fi
-	fi
-
-	fitimage_emit_section_maint $1 sectend
-
-	# Force the first Kernel and DTB in the default config
-	kernelcount=1
-	if [ -n "$dtbcount" ]; then
-		dtbcount=1
-	fi
-
-	#
-	# Step 6: Prepare a configurations section
-	#
-	fitimage_emit_section_maint $1 confstart
-
-	# kernel-fitimage.bbclass currently supports exactly one kernel image to be
-	# added to the FIT image, along with zero or more device trees and
-	# zero or one ramdisk.
-	# It is also possible to include an initramfs bundle (kernel and rootfs in one binary).
-	# When the initramfs bundle is used, the ramdisk is disabled.
-	# If a device tree is to be part of the FIT image, the default
-	# configuration is selected based on the dtb count. If there is
-	# no dtb present, the default configuration is selected based on
-	# the kernel count.
-	if [ -n "$DTBS" ]; then
-		i=1
-		for DTB in ${DTBS}; do
-			dtb_ext=${DTB##*.}
-			if [ "$dtb_ext" = "dtbo" ]; then
-				fitimage_emit_section_config $1 "" "$DTB" "" "$bootscr_id" "" "`expr $i = $dtbcount`"
-			else
-				fitimage_emit_section_config $1 $kernelcount "$DTB" "$ramdiskcount" "$bootscr_id" "$setupcount" "`expr $i = $dtbcount`"
-			fi
-			i=`expr $i + 1`
-		done
-	else
-		defaultconfigcount=1
-		fitimage_emit_section_config $1 $kernelcount "" "$ramdiskcount" "$bootscr_id"  "$setupcount" $defaultconfigcount
-	fi
-
-	fitimage_emit_section_maint $1 sectend
-
-	fitimage_emit_section_maint $1 fitend
-
-	#
-	# Step 7: Assemble the image
-	#
-	${UBOOT_MKIMAGE} \
-		${@'-D "${UBOOT_MKIMAGE_DTCOPTS}"' if len('${UBOOT_MKIMAGE_DTCOPTS}') else ''} \
-		-f $1 \
-		arch/${ARCH}/boot/$2
-
-	#
-	# Step 8: Sign the image and add public key to U-Boot dtb
-	#
-	if [ "x${UBOOT_SIGN_ENABLE}" = "x1" ] ; then
-		add_key_to_u_boot=""
-		if [ -n "${UBOOT_DTB_BINARY}" ]; then
-			# The u-boot.dtb is a symlink to UBOOT_DTB_IMAGE, so we need to copy
-			# both of them, and we don't dereference the symlink.
-			cp -P ${STAGING_DATADIR}/u-boot*.dtb ${B}
-			add_key_to_u_boot="-K ${B}/${UBOOT_DTB_BINARY}"
-		fi
-		${UBOOT_MKIMAGE_SIGN} \
-			${@'-D "${UBOOT_MKIMAGE_DTCOPTS}"' if len('${UBOOT_MKIMAGE_DTCOPTS}') else ''} \
-			-F -k "${UBOOT_SIGN_KEYDIR}" \
-			$add_key_to_u_boot \
-			-r arch/${ARCH}/boot/$2 \
-			${UBOOT_MKIMAGE_SIGN_ARGS}
-	fi
-}
-
-do_assemble_fitimage() {
-	if echo ${KERNEL_IMAGETYPES} | grep -wq "fitImage"; then
-		cd ${B}
-		fitimage_assemble fit-image.its fitImage ""
-	fi
-}
-
-addtask assemble_fitimage before do_install after do_compile
-
-do_assemble_fitimage_initramfs() {
-	if echo ${KERNEL_IMAGETYPES} | grep -wq "fitImage" && \
-		test -n "${INITRAMFS_IMAGE}" ; then
-		cd ${B}
-		if [ "${INITRAMFS_IMAGE_BUNDLE}" = "1" ]; then
-			fitimage_assemble fit-image-${INITRAMFS_IMAGE}.its fitImage ""
-		else
-			fitimage_assemble fit-image-${INITRAMFS_IMAGE}.its fitImage-${INITRAMFS_IMAGE} 1
-		fi
-	fi
-}
-
-addtask assemble_fitimage_initramfs before do_deploy after do_bundle_initramfs
-
-do_kernel_generate_rsa_keys() {
-	if [ "${UBOOT_SIGN_ENABLE}" = "0" ] && [ "${FIT_GENERATE_KEYS}" = "1" ]; then
-		bbwarn "FIT_GENERATE_KEYS is set to 1 even though UBOOT_SIGN_ENABLE is set to 0. The keys will not be generated as they won't be used."
-	fi
-
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" ] && [ "${FIT_GENERATE_KEYS}" = "1" ]; then
-
-		# Generate keys to sign configuration nodes, only if they don't already exist
-		if [ ! -f "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".key ] || \
-			[ ! -f "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".crt ]; then
-
-			# make directory if it does not already exist
-			mkdir -p "${UBOOT_SIGN_KEYDIR}"
-
-			bbnote "Generating RSA private key for signing fitImage"
-			openssl genrsa ${FIT_KEY_GENRSA_ARGS} -out \
-				"${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".key \
-			"${FIT_SIGN_NUMBITS}"
-
-			bbnote "Generating certificate for signing fitImage"
-			openssl req ${FIT_KEY_REQ_ARGS} "${FIT_KEY_SIGN_PKCS}" \
-				-key "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".key \
-				-out "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_KEYNAME}".crt
-		fi
-
-		# Generate keys to sign image nodes, only if they don't already exist
-		if [ ! -f "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".key ] || \
-			[ ! -f "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".crt ]; then
-
-			# make directory if it does not already exist
-			mkdir -p "${UBOOT_SIGN_KEYDIR}"
-
-			bbnote "Generating RSA private key for signing fitImage"
-			openssl genrsa ${FIT_KEY_GENRSA_ARGS} -out \
-				"${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".key \
-			"${FIT_SIGN_NUMBITS}"
-
-			bbnote "Generating certificate for signing fitImage"
-			openssl req ${FIT_KEY_REQ_ARGS} "${FIT_KEY_SIGN_PKCS}" \
-				-key "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".key \
-				-out "${UBOOT_SIGN_KEYDIR}/${UBOOT_SIGN_IMG_KEYNAME}".crt
-		fi
-	fi
-}
-
-addtask kernel_generate_rsa_keys before do_assemble_fitimage after do_compile
-
-kernel_do_deploy[vardepsexclude] = "DATETIME"
-kernel_do_deploy:append() {
-	# Update deploy directory
-	if echo ${KERNEL_IMAGETYPES} | grep -wq "fitImage"; then
-
-		if [ "${INITRAMFS_IMAGE_BUNDLE}" != "1" ]; then
-			bbnote "Copying fit-image.its source file..."
-			install -m 0644 ${B}/fit-image.its "$deployDir/fitImage-its-${KERNEL_FIT_NAME}.its"
-			if [ -n "${KERNEL_FIT_LINK_NAME}" ] ; then
-				ln -snf fitImage-its-${KERNEL_FIT_NAME}.its "$deployDir/fitImage-its-${KERNEL_FIT_LINK_NAME}"
-			fi
-
-			bbnote "Copying linux.bin file..."
-			install -m 0644 ${B}/linux.bin $deployDir/fitImage-linux.bin-${KERNEL_FIT_NAME}${KERNEL_FIT_BIN_EXT}
-			if [ -n "${KERNEL_FIT_LINK_NAME}" ] ; then
-				ln -snf fitImage-linux.bin-${KERNEL_FIT_NAME}${KERNEL_FIT_BIN_EXT} "$deployDir/fitImage-linux.bin-${KERNEL_FIT_LINK_NAME}"
-			fi
-		fi
-
-		if [ -n "${INITRAMFS_IMAGE}" ]; then
-			bbnote "Copying fit-image-${INITRAMFS_IMAGE}.its source file..."
-			install -m 0644 ${B}/fit-image-${INITRAMFS_IMAGE}.its "$deployDir/fitImage-its-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_NAME}.its"
-			if [ -n "${KERNEL_FIT_LINK_NAME}" ] ; then
-				ln -snf fitImage-its-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_NAME}.its "$deployDir/fitImage-its-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_LINK_NAME}"
-			fi
-
-			if [ "${INITRAMFS_IMAGE_BUNDLE}" != "1" ]; then
-				bbnote "Copying fitImage-${INITRAMFS_IMAGE} file..."
-				install -m 0644 ${B}/arch/${ARCH}/boot/fitImage-${INITRAMFS_IMAGE} "$deployDir/fitImage-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_NAME}${KERNEL_FIT_BIN_EXT}"
-				if [ -n "${KERNEL_FIT_LINK_NAME}" ] ; then
-					ln -snf fitImage-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_NAME}${KERNEL_FIT_BIN_EXT} "$deployDir/fitImage-${INITRAMFS_IMAGE_NAME}-${KERNEL_FIT_LINK_NAME}"
-				fi
-			fi
-		fi
-	fi
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" -o "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
-	   [ -n "${UBOOT_DTB_BINARY}" ] ; then
-		# UBOOT_DTB_IMAGE is a real file, but we can't use
-		# ${UBOOT_DTB_IMAGE} here since it contains ${PV}, which refers
-		# to u-boot's version while we are in the kernel environment.
-		install -m 0644 ${B}/u-boot-${MACHINE}*.dtb "$deployDir/"
-	fi
-	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${UBOOT_BINARY}" -a -n "${SPL_DTB_BINARY}" ] ; then
-		# If we're also creating and/or signing the U-Boot fitImage, we now need
-		# to deploy it, its .its file, and u-boot-spl.dtb as well.
-		install -m 0644 ${B}/u-boot-spl-${MACHINE}*.dtb "$deployDir/"
-		bbnote "Copying u-boot-fitImage file..."
-		install -m 0644 ${B}/u-boot-fitImage-* "$deployDir/"
-		bbnote "Copying u-boot-its file..."
-		install -m 0644 ${B}/u-boot-its-* "$deployDir/"
-	fi
-}
-
-# The function below performs the following in case of initramfs bundles:
-# - Removes do_assemble_fitimage. FIT generation is done through
-#   do_assemble_fitimage_initramfs. do_assemble_fitimage is not needed
-#   and should not be part of the tasks to be executed.
-# - Since do_kernel_generate_rsa_keys is inserted by default
-#   between do_compile and do_assemble_fitimage, this is
-#   not suitable in case of initramfs bundles.  do_kernel_generate_rsa_keys
-#   should be between do_bundle_initramfs and do_assemble_fitimage_initramfs.
-python () {
-    if d.getVar('INITRAMFS_IMAGE_BUNDLE') == "1":
-        bb.build.deltask('do_assemble_fitimage', d)
-        bb.build.deltask('kernel_generate_rsa_keys', d)
-        bb.build.addtask('kernel_generate_rsa_keys', 'do_assemble_fitimage_initramfs', 'do_bundle_initramfs', d)
-}
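A rough sketch of the variables a BSP would set to get a signed fitImage out of this class, assuming an example key directory; the only hard requirement encoded above is that the configuration and image signing key names differ:

    KERNEL_IMAGETYPES += "fitImage"
    UBOOT_SIGN_ENABLE = "1"
    UBOOT_SIGN_KEYDIR = "/path/to/keys"    # example location
    UBOOT_SIGN_KEYNAME = "dev"             # signs configuration nodes
    FIT_SIGN_INDIVIDUAL = "1"
    UBOOT_SIGN_IMG_KEYNAME = "dev2"        # signs image nodes, must differ from UBOOT_SIGN_KEYNAME
    FIT_GENERATE_KEYS = "1"                # let do_kernel_generate_rsa_keys create missing keys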
diff --git a/poky/meta/classes/kernel-grub.bbclass b/poky/meta/classes/kernel-grub.bbclass
deleted file mode 100644
index 44b2015..0000000
--- a/poky/meta/classes/kernel-grub.bbclass
+++ /dev/null
@@ -1,105 +0,0 @@
-#
-# While installing an rpm to update the kernel on a deployed target, this class
-# updates the boot area and the boot menu so that the new kernel has priority,
-# while still allowing you to fall back to the original kernel.
-#
-# - In kernel-image's preinstall scriptlet, it backs up the original kernel to
-#   avoid a probable conflict with the new one.
-#
-# - In kernel-image's postinstall scriptlet, it modifies grub's config file to
-#   set the new kernel as the boot priority.
-#
-
-python __anonymous () {
-    import re
-
-    preinst = '''
-	# Check for naming conflicts
-	[ -f "$D/boot/grub/menu.list" ] && grubcfg="$D/boot/grub/menu.list"
-	[ -f "$D/boot/grub/grub.cfg" ] && grubcfg="$D/boot/grub/grub.cfg"
-	if [ -n "$grubcfg" ]; then
-		# Dereference symlink to avoid a conflict with the new kernel name.
-		if grep -q "/KERNEL_IMAGETYPE \+root=" $grubcfg; then
-			if [ -L "$D/boot/KERNEL_IMAGETYPE" ]; then
-				kimage=`realpath $D/boot/KERNEL_IMAGETYPE 2>/dev/null`
-				if [ -f "$D$kimage" ]; then
-					sed -i "s:KERNEL_IMAGETYPE \+root=:${kimage##*/} root=:" $grubcfg
-				fi
-			fi
-		fi
-
-		# Rename old kernel if it conflicts with new kernel name.
-		if grep -q "/KERNEL_IMAGETYPE-${KERNEL_VERSION} \+root=" $grubcfg; then
-			if [ -f "$D/boot/KERNEL_IMAGETYPE-${KERNEL_VERSION}" ]; then
-				timestamp=`date +%s`
-				kimage="$D/boot/KERNEL_IMAGETYPE-${KERNEL_VERSION}-$timestamp-back"
-				sed -i "s:KERNEL_IMAGETYPE-${KERNEL_VERSION} \+root=:${kimage##*/} root=:" $grubcfg
-				mv "$D/boot/KERNEL_IMAGETYPE-${KERNEL_VERSION}" "$kimage"
-			fi
-		fi
-	fi
-'''
-
-    postinst = '''
-	get_new_grub_cfg() {
-		grubcfg="$1"
-		old_image="$2"
-		title="Update KERNEL_IMAGETYPE-${KERNEL_VERSION}-${PV}"
-		if [ "${grubcfg##*/}" = "grub.cfg" ]; then
-			rootfs=`grep " *linux \+[^ ]\+ \+root=" $grubcfg -m 1 | \
-				 sed "s#${old_image}#${old_image%/*}/KERNEL_IMAGETYPE-${KERNEL_VERSION}#"`
-
-			echo "menuentry \"$title\" {"
-			echo "    set root=(hd0,1)"
-			echo "$rootfs"
-			echo "}"
-		elif [ "${grubcfg##*/}" = "menu.list" ]; then
-			rootfs=`grep "kernel \+[^ ]\+ \+root=" $grubcfg -m 1 | \
-				 sed "s#${old_image}#${old_image%/*}/KERNEL_IMAGETYPE-${KERNEL_VERSION}#"`
-
-			echo "default 0"
-			echo "timeout 30"
-			echo "title $title"
-			echo "root  (hd0,0)"
-			echo "$rootfs"
-		fi
-	}
-
-	get_old_grub_cfg() {
-		grubcfg="$1"
-		if [ "${grubcfg##*/}" = "grub.cfg" ]; then
-			cat "$grubcfg"
-		elif [ "${grubcfg##*/}" = "menu.list" ]; then
-			sed -e '/^default/d' -e '/^timeout/d' "$grubcfg"
-		fi
-	}
-
-	if [ -f "$D/boot/grub/grub.cfg" ]; then
-		grubcfg="$D/boot/grub/grub.cfg"
-		old_image=`grep ' *linux \+[^ ]\+ \+root=' -m 1 "$grubcfg" | awk '{print $2}'`
-	elif [ -f "$D/boot/grub/menu.list" ]; then
-		grubcfg="$D/boot/grub/menu.list"
-		old_image=`grep '^kernel \+[^ ]\+ \+root=' -m 1 "$grubcfg" | awk '{print $2}'`
-	fi
-
-	# Don't update grubcfg at first install while old bzImage doesn't exist.
-	if [ -f "$D/boot/${old_image##*/}" ]; then
-		grubcfgtmp="$grubcfg.tmp"
-		get_new_grub_cfg "$grubcfg" "$old_image"  > $grubcfgtmp
-		get_old_grub_cfg "$grubcfg" >> $grubcfgtmp
-		mv $grubcfgtmp $grubcfg
-		echo "Caution! Update kernel may affect kernel-module!"
-	fi
-'''
-
-    imagetypes = d.getVar('KERNEL_IMAGETYPES')
-    imagetypes = re.sub(r'\.gz$', '', imagetypes)
-
-    for type in imagetypes.split():
-        typelower = type.lower()
-        preinst_append = preinst.replace('KERNEL_IMAGETYPE', type)
-        postinst_prepend = postinst.replace('KERNEL_IMAGETYPE', type)
-        d.setVar('pkg_preinst:kernel-image-' + typelower + ':append', preinst_append)
-        d.setVar('pkg_postinst:kernel-image-' + typelower + ':prepend', postinst_prepend)
-}
-
diff --git a/poky/meta/classes/kernel-module-split.bbclass b/poky/meta/classes/kernel-module-split.bbclass
deleted file mode 100644
index a29c294..0000000
--- a/poky/meta/classes/kernel-module-split.bbclass
+++ /dev/null
@@ -1,191 +0,0 @@
-pkg_postinst:modules () {
-if [ -z "$D" ]; then
-	depmod -a ${KERNEL_VERSION}
-else
-	# image.bbclass will call depmodwrapper after everything is installed,
-	# no need to do it here as well
-	:
-fi
-}
-
-pkg_postrm:modules () {
-if [ -z "$D" ]; then
-	depmod -a ${KERNEL_VERSION}
-else
-	depmodwrapper -a -b $D ${KERNEL_VERSION}
-fi
-}
-
-autoload_postinst_fragment() {
-if [ x"$D" = "x" ]; then
-	modprobe %s || true
-fi
-}
-
-PACKAGE_WRITE_DEPS += "kmod-native depmodwrapper-cross"
-
-do_install:append() {
-	install -d ${D}${sysconfdir}/modules-load.d/ ${D}${sysconfdir}/modprobe.d/
-}
-
-KERNEL_SPLIT_MODULES ?= "1"
-PACKAGESPLITFUNCS:prepend = "split_kernel_module_packages "
-
-KERNEL_MODULES_META_PACKAGE ?= "${@ d.getVar("KERNEL_PACKAGE_NAME") or "kernel" }-modules"
-
-KERNEL_MODULE_PACKAGE_PREFIX ?= ""
-KERNEL_MODULE_PACKAGE_SUFFIX ?= "-${KERNEL_VERSION}"
-KERNEL_MODULE_PROVIDE_VIRTUAL ?= "1"
-
-python split_kernel_module_packages () {
-    import re
-
-    modinfoexp = re.compile("([^=]+)=(.*)")
-
-    def extract_modinfo(file):
-        import tempfile, subprocess
-        tempfile.tempdir = d.getVar("WORKDIR")
-        compressed = re.match( r'.*\.(gz|xz|zst)$', file)
-        tf = tempfile.mkstemp()
-        tmpfile = tf[1]
-        if compressed:
-            tmpkofile = tmpfile + ".ko"
-            if compressed.group(1) == 'gz':
-                cmd = "gunzip -dc %s > %s" % (file, tmpkofile)
-                subprocess.check_call(cmd, shell=True)
-            elif compressed.group(1) == 'xz':
-                cmd = "xz -dc %s > %s" % (file, tmpkofile)
-                subprocess.check_call(cmd, shell=True)
-            elif compressed.group(1) == 'zst':
-                cmd = "zstd -dc %s > %s" % (file, tmpkofile)
-                subprocess.check_call(cmd, shell=True)
-            else:
-                msg = "Cannot decompress '%s'" % file
-                raise ValueError(msg)
-            cmd = "%sobjcopy -j .modinfo -O binary %s %s" % (d.getVar("HOST_PREFIX") or "", tmpkofile, tmpfile)
-        else:
-            cmd = "%sobjcopy -j .modinfo -O binary %s %s" % (d.getVar("HOST_PREFIX") or "", file, tmpfile)
-        subprocess.check_call(cmd, shell=True)
-        # errors='replace': Some old kernel versions contain invalid utf-8 characters in mod descriptions (like 0xf6, 'ö')
-        f = open(tmpfile, errors='replace')
-        l = f.read().split("\000")
-        f.close()
-        os.close(tf[0])
-        os.unlink(tmpfile)
-        if compressed:
-            os.unlink(tmpkofile)
-        vals = {}
-        for i in l:
-            m = modinfoexp.match(i)
-            if not m:
-                continue
-            vals[m.group(1)] = m.group(2)
-        return vals
-
-    def frob_metadata(file, pkg, pattern, format, basename):
-        vals = extract_modinfo(file)
-
-        dvar = d.getVar('PKGD')
-
-        # If autoloading is requested, output /etc/modules-load.d/<name>.conf and append
-        # appropriate modprobe commands to the postinst
-        autoloadlist = (d.getVar("KERNEL_MODULE_AUTOLOAD") or "").split()
-        autoload = d.getVar('module_autoload_%s' % basename)
-        if autoload and autoload == basename:
-            bb.warn("module_autoload_%s was replaced by KERNEL_MODULE_AUTOLOAD for cases where basename == module name, please drop it" % basename)
-        if autoload and basename not in autoloadlist:
-            bb.warn("module_autoload_%s is defined but '%s' isn't included in KERNEL_MODULE_AUTOLOAD, please add it there" % (basename, basename))
-        if basename in autoloadlist:
-            name = '%s/etc/modules-load.d/%s.conf' % (dvar, basename)
-            f = open(name, 'w')
-            if autoload:
-                for m in autoload.split():
-                    f.write('%s\n' % m)
-            else:
-                f.write('%s\n' % basename)
-            f.close()
-            postinst = d.getVar('pkg_postinst:%s' % pkg)
-            if not postinst:
-                bb.fatal("pkg_postinst:%s not defined" % pkg)
-            postinst += d.getVar('autoload_postinst_fragment') % (autoload or basename)
-            d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-        # Write out any modconf fragment
-        modconflist = (d.getVar("KERNEL_MODULE_PROBECONF") or "").split()
-        modconf = d.getVar('module_conf_%s' % basename)
-        if modconf and basename in modconflist:
-            name = '%s/etc/modprobe.d/%s.conf' % (dvar, basename)
-            f = open(name, 'w')
-            f.write("%s\n" % modconf)
-            f.close()
-        elif modconf:
-            bb.error("Please ensure module %s is listed in KERNEL_MODULE_PROBECONF since module_conf_%s is set" % (basename, basename))
-
-        files = d.getVar('FILES:%s' % pkg)
-        files = "%s /etc/modules-load.d/%s.conf /etc/modprobe.d/%s.conf" % (files, basename, basename)
-        d.setVar('FILES:%s' % pkg, files)
-
-        conffiles = d.getVar('CONFFILES:%s' % pkg)
-        conffiles = "%s /etc/modules-load.d/%s.conf /etc/modprobe.d/%s.conf" % (conffiles, basename, basename)
-        d.setVar('CONFFILES:%s' % pkg, conffiles)
-
-        if "description" in vals:
-            old_desc = d.getVar('DESCRIPTION:' + pkg) or ""
-            d.setVar('DESCRIPTION:' + pkg, old_desc + "; " + vals["description"])
-
-        rdepends = bb.utils.explode_dep_versions2(d.getVar('RDEPENDS:' + pkg) or "")
-        modinfo_deps = []
-        if "depends" in vals and vals["depends"] != "":
-            for dep in vals["depends"].split(","):
-                on = legitimize_package_name(dep)
-                dependency_pkg = format % on
-                modinfo_deps.append(dependency_pkg)
-        for dep in modinfo_deps:
-            if not dep in rdepends:
-                rdepends[dep] = []
-        d.setVar('RDEPENDS:' + pkg, bb.utils.join_deps(rdepends, commasep=False))
-
-        # Avoid automatic -dev recommendations for modules ending with -dev.
-        d.setVarFlag('RRECOMMENDS:' + pkg, 'nodeprrecs', 1)
-
-        # Provide virtual package without postfix
-        providevirt = d.getVar('KERNEL_MODULE_PROVIDE_VIRTUAL')
-        if providevirt == "1":
-           postfix = format.split('%s')[1]
-           d.setVar('RPROVIDES:' + pkg, pkg.replace(postfix, ''))
-
-    kernel_package_name = d.getVar("KERNEL_PACKAGE_NAME") or "kernel"
-    kernel_version = d.getVar("KERNEL_VERSION")
-
-    metapkg = d.getVar('KERNEL_MODULES_META_PACKAGE')
-    splitmods = d.getVar('KERNEL_SPLIT_MODULES')
-    postinst = d.getVar('pkg_postinst:modules')
-    postrm = d.getVar('pkg_postrm:modules')
-
-    if splitmods != '1':
-        etcdir = d.getVar('sysconfdir')
-        d.appendVar('FILES:' + metapkg, '%s/modules-load.d/ %s/modprobe.d/ %s/modules/' % (etcdir, etcdir, d.getVar("nonarch_base_libdir")))
-        d.appendVar('pkg_postinst:%s' % metapkg, postinst)
-        d.prependVar('pkg_postrm:%s' % metapkg, postrm);
-        return
-
-    module_regex = r'^(.*)\.k?o(?:\.(gz|xz|zst))?$'
-
-    module_pattern_prefix = d.getVar('KERNEL_MODULE_PACKAGE_PREFIX')
-    module_pattern_suffix = d.getVar('KERNEL_MODULE_PACKAGE_SUFFIX')
-    module_pattern = module_pattern_prefix + kernel_package_name + '-module-%s' + module_pattern_suffix
-
-    modules = do_split_packages(d, root='${nonarch_base_libdir}/modules', file_regex=module_regex, output_pattern=module_pattern, description='%s kernel module', postinst=postinst, postrm=postrm, recursive=True, hook=frob_metadata, extra_depends='%s-%s' % (kernel_package_name, kernel_version))
-    if modules:
-        d.appendVar('RDEPENDS:' + metapkg, ' '+' '.join(modules))
-
-    # If modules-load.d and modprobe.d are empty at this point, remove them to
-    # avoid warnings. os.rmdir only raises an OSError if an empty
-    # directory cannot be removed.
-    dvar = d.getVar('PKGD')
-    for dir in ["%s/etc/modprobe.d" % (dvar), "%s/etc/modules-load.d" % (dvar), "%s/etc" % (dvar)]:
-        if len(os.listdir(dir)) == 0:
-            os.rmdir(dir)
-}
-
-do_package[vardeps] += '${@" ".join(map(lambda s: "module_conf_" + s, (d.getVar("KERNEL_MODULE_PROBECONF") or "").split()))}'
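A short, hypothetical example of the autoload/probeconf hooks that split_kernel_module_packages consumes (the module name is made up):

    KERNEL_MODULE_AUTOLOAD += "examplemod"
    KERNEL_MODULE_PROBECONF += "examplemod"
    module_conf_examplemod = "options examplemod debug=1"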
diff --git a/poky/meta/classes/kernel-uboot.bbclass b/poky/meta/classes/kernel-uboot.bbclass
deleted file mode 100644
index 1bc98e0..0000000
--- a/poky/meta/classes/kernel-uboot.bbclass
+++ /dev/null
@@ -1,43 +0,0 @@
-# fitImage kernel compression algorithm
-FIT_KERNEL_COMP_ALG ?= "gzip"
-FIT_KERNEL_COMP_ALG_EXTENSION ?= ".gz"
-
-# Kernel image type passed to mkimage (i.e. kernel kernel_noload...)
-UBOOT_MKIMAGE_KERNEL_TYPE ?= "kernel"
-
-uboot_prep_kimage() {
-	if [ -e arch/${ARCH}/boot/compressed/vmlinux ]; then
-		vmlinux_path="arch/${ARCH}/boot/compressed/vmlinux"
-		linux_suffix=""
-		linux_comp="none"
-	elif [ -e arch/${ARCH}/boot/vmlinuz.bin ]; then
-		rm -f linux.bin
-		cp -l arch/${ARCH}/boot/vmlinuz.bin linux.bin
-		vmlinux_path=""
-		linux_suffix=""
-		linux_comp="none"
-	else
-		vmlinux_path="vmlinux"
-		# Use vmlinux.initramfs for linux.bin when INITRAMFS_IMAGE_BUNDLE set
-		# As per the implementation in kernel.bbclass.
-		# See do_bundle_initramfs function
-		if [ "${INITRAMFS_IMAGE_BUNDLE}" = "1" ] && [ -e vmlinux.initramfs ]; then
-			vmlinux_path="vmlinux.initramfs"
-		fi
-		linux_suffix="${FIT_KERNEL_COMP_ALG_EXTENSION}"
-		linux_comp="${FIT_KERNEL_COMP_ALG}"
-	fi
-
-	[ -n "${vmlinux_path}" ] && ${OBJCOPY} -O binary -R .note -R .comment -S "${vmlinux_path}" linux.bin
-
-	if [ "${linux_comp}" != "none" ] ; then
-		if [ "${linux_comp}" = "gzip" ] ; then
-			gzip -9 linux.bin
-		elif [ "${linux_comp}" = "lzo" ] ; then
-			lzop -9 linux.bin
-		fi
-		mv -f "linux.bin${linux_suffix}" linux.bin
-	fi
-
-	echo "${linux_comp}"
-}
diff --git a/poky/meta/classes/kernel-uimage.bbclass b/poky/meta/classes/kernel-uimage.bbclass
deleted file mode 100644
index 2e661ea..0000000
--- a/poky/meta/classes/kernel-uimage.bbclass
+++ /dev/null
@@ -1,35 +0,0 @@
-inherit kernel-uboot
-
-python __anonymous () {
-    if "uImage" in d.getVar('KERNEL_IMAGETYPES'):
-        depends = d.getVar("DEPENDS")
-        depends = "%s u-boot-tools-native" % depends
-        d.setVar("DEPENDS", depends)
-
-        # Override KERNEL_IMAGETYPE_FOR_MAKE variable, which is internal
-        # to kernel.bbclass . We override the variable here, since we need
-        # to build uImage using the kernel build system if and only if
-        # KEEPUIMAGE == yes. Otherwise, we pack the compressed vmlinux into
-        # the uImage.
-        if d.getVar("KEEPUIMAGE") != 'yes':
-            typeformake = d.getVar("KERNEL_IMAGETYPE_FOR_MAKE") or ""
-            if "uImage" in typeformake.split():
-                d.setVar('KERNEL_IMAGETYPE_FOR_MAKE', typeformake.replace('uImage', 'vmlinux'))
-
-            # Enable building of uImage with mkimage
-            bb.build.addtask('do_uboot_mkimage', 'do_install', 'do_kernel_link_images', d)
-}
-
-do_uboot_mkimage[dirs] += "${B}"
-do_uboot_mkimage() {
-	uboot_prep_kimage
-
-	ENTRYPOINT=${UBOOT_ENTRYPOINT}
-	if [ -n "${UBOOT_ENTRYSYMBOL}" ]; then
-		ENTRYPOINT=`${HOST_PREFIX}nm ${B}/vmlinux | \
-			awk '$3=="${UBOOT_ENTRYSYMBOL}" {print "0x"$1;exit}'`
-	fi
-
-	uboot-mkimage -A ${UBOOT_ARCH} -O linux -T ${UBOOT_MKIMAGE_KERNEL_TYPE} -C "${linux_comp}" -a ${UBOOT_LOADADDRESS} -e $ENTRYPOINT -n "${DISTRO_NAME}/${PV}/${MACHINE}" -d linux.bin ${B}/arch/${ARCH}/boot/uImage
-	rm -f linux.bin
-}
diff --git a/poky/meta/classes/kernel-yocto.bbclass b/poky/meta/classes/kernel-yocto.bbclass
deleted file mode 100644
index ce1446f..0000000
--- a/poky/meta/classes/kernel-yocto.bbclass
+++ /dev/null
@@ -1,726 +0,0 @@
-# remove tasks that modify the source tree in case externalsrc is inherited
-SRCTREECOVEREDTASKS += "do_validate_branches do_kernel_configcheck do_kernel_checkout do_fetch do_unpack do_patch"
-PATCH_GIT_USER_EMAIL ?= "kernel-yocto@oe"
-PATCH_GIT_USER_NAME ?= "OpenEmbedded"
-
-# The distro or local.conf should set this, but if nobody cares...
-LINUX_KERNEL_TYPE ??= "standard"
-
-# KMETA ?= ""
-KBRANCH ?= "master"
-KMACHINE ?= "${MACHINE}"
-SRCREV_FORMAT ?= "meta_machine"
-
-# LEVELS:
-#   0: no reporting
-#   1: report options that are specified, but not in the final config
-#   2: report options that are not hardware related, but set by a BSP
-KCONF_AUDIT_LEVEL ?= "1"
-KCONF_BSP_AUDIT_LEVEL ?= "0"
-KMETA_AUDIT ?= "yes"
-KMETA_AUDIT_WERROR ?= ""
-
-# returns local (absolute) path names for all valid patches in the
-# src_uri
-def find_patches(d,subdir):
-    patches = src_patches(d)
-    patch_list=[]
-    for p in patches:
-        _, _, local, _, _, parm = bb.fetch.decodeurl(p)
-        # if patchdir has been passed, we won't be able to apply it so skip
-        # the patch for now, and special processing happens later
-        patchdir = ''
-        if "patchdir" in parm:
-            patchdir = parm["patchdir"]
-        if subdir:
-            if subdir == patchdir:
-                patch_list.append(local)
-        else:
-            # skip the patch if a patchdir was supplied, it won't be handled
-            # properly
-            if not patchdir:
-                patch_list.append(local)
-
-    return patch_list
-
-# returns all the elements from the src uri that are .scc files
-def find_sccs(d):
-    sources=src_patches(d, True)
-    sources_list=[]
-    for s in sources:
-        base, ext = os.path.splitext(os.path.basename(s))
-        if ext and ext in [".scc", ".cfg"]:
-            sources_list.append(s)
-        elif base and 'defconfig' in base:
-            sources_list.append(s)
-
-    return sources_list
-
-# check the SRC_URI for "kmeta" type'd git repositories. Return the name of
-# the repository as it will be found in WORKDIR
-def find_kernel_feature_dirs(d):
-    feature_dirs=[]
-    fetch = bb.fetch2.Fetch([], d)
-    for url in fetch.urls:
-        urldata = fetch.ud[url]
-        parm = urldata.parm
-        type=""
-        if "type" in parm:
-            type = parm["type"]
-        if "destsuffix" in parm:
-            destdir = parm["destsuffix"]
-            if type == "kmeta":
-                feature_dirs.append(destdir)
-	    
-    return feature_dirs
-
-# find the master/machine source branch. In the same way that the fetcher processes
-# git repositories in the SRC_URI, we take the first repo found, first branch.
-def get_machine_branch(d, default):
-    fetch = bb.fetch2.Fetch([], d)
-    for url in fetch.urls:
-        urldata = fetch.ud[url]
-        parm = urldata.parm
-        if "branch" in parm:
-            branches = urldata.parm.get("branch").split(',')
-            btype = urldata.parm.get("type")
-            if btype != "kmeta":
-                return branches[0]
-	    
-    return default
-
-# returns a list of all directories that are on FILESEXTRAPATHS (and
-# hence available to the build) that contain .scc or .cfg files
-def get_dirs_with_fragments(d):
-    extrapaths = []
-    extrafiles = []
-    extrapathsvalue = (d.getVar("FILESEXTRAPATHS") or "")
-    # Remove default flag which was used for checking
-    extrapathsvalue = extrapathsvalue.replace("__default:", "")
-    extrapaths = extrapathsvalue.split(":")
-    for path in extrapaths:
-        if path + ":True" not in extrafiles:
-            extrafiles.append(path + ":" + str(os.path.exists(path)))
-
-    return " ".join(extrafiles)
-
-do_kernel_metadata() {
-	set +e
-
-	if [ -n "$1" ]; then
-		mode="$1"
-	else
-		mode="patch"
-	fi
-
-	cd ${S}
-	export KMETA=${KMETA}
-
-	bbnote "do_kernel_metadata: for summary/debug, set KCONF_AUDIT_LEVEL > 0"
-
-	# if kernel tools are available in-tree, they are preferred
-	# and are placed on the path before any external tools, unless
-	# the external tools flag is set, in which case we do nothing.
-	if [ -f "${S}/scripts/util/configme" ]; then
-		if [ -z "${EXTERNAL_KERNEL_TOOLS}" ]; then
-			PATH=${S}/scripts/util:${PATH}
-		fi
-	fi
-
-	# In a similar manner to the kernel itself:
-	#
-	#   defconfig: $(obj)/conf
-	#   ifeq ($(KBUILD_DEFCONFIG),)
-	#	$< --defconfig $(Kconfig)
-	#   else
-	#	@echo "*** Default configuration is based on '$(KBUILD_DEFCONFIG)'"
-	#	$(Q)$< --defconfig=arch/$(SRCARCH)/configs/$(KBUILD_DEFCONFIG) $(Kconfig)
-	#   endif
-	#
-	# If a defconfig is specified via the KBUILD_DEFCONFIG variable, we copy it
-	# from the source tree into a common location under the normalized "defconfig"
-	# name, where the rest of the process will include and incorporate it into the build.
-	#
-	# If the fetcher has already placed a defconfig in WORKDIR (from the SRC_URI),
-	# we don't overwrite it, but instead warn the user that SRC_URI defconfigs take
-	# precedence.
-	#
-	if [ -n "${KBUILD_DEFCONFIG}" ]; then
-		if [ -f "${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG}" ]; then
-			if [ -f "${WORKDIR}/defconfig" ]; then
-				# If the two defconfig's are different, warn that we overwrote the
-				# one already placed in WORKDIR
-				cmp "${WORKDIR}/defconfig" "${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG}"
-				if [ $? -ne 0 ]; then
-					bbdebug 1 "detected SRC_URI or unpatched defconfig in WORKDIR. ${KBUILD_DEFCONFIG} copied over it"
-				fi
-				cp -f ${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG} ${WORKDIR}/defconfig
-			else
-				cp -f ${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG} ${WORKDIR}/defconfig
-			fi
-			in_tree_defconfig="${WORKDIR}/defconfig"
-		else
-			bbfatal "A KBUILD_DEFCONFIG '${KBUILD_DEFCONFIG}' was specified, but not present in the source tree (${S}/arch/${ARCH}/configs/)"
-		fi
-	fi
-
-	if [ "$mode" = "patch" ]; then
-		# Was anyone trying to patch the kernel meta data? We need to do
-		# this here, since the scc commands migrate the .cfg fragments to the
-		# kernel source tree, where they'll be used later.
-		check_git_config
-		patches="${@" ".join(find_patches(d,'kernel-meta'))}"
-		for p in $patches; do
-		    (
-			cd ${WORKDIR}/kernel-meta
-			git am -s $p
-		    )
-		done
-	fi
-
-	sccs_from_src_uri="${@" ".join(find_sccs(d))}"
-	patches="${@" ".join(find_patches(d,''))}"
-	feat_dirs="${@" ".join(find_kernel_feature_dirs(d))}"
-
-	# A quick check to make sure we don't have duplicate defconfigs. If
-	# there's a defconfig in the SRC_URI, did we also have one from the
-	# KBUILD_DEFCONFIG processing above?
-	src_uri_defconfig=$(echo $sccs_from_src_uri | awk '(match($0, "defconfig") != 0) { print $0 }' RS=' ')
-	# drop any defconfigs from the src_uri variable; we captured one just above here if it existed
-	sccs_from_src_uri=$(echo $sccs_from_src_uri | awk '(match($0, "defconfig") == 0) { print $0 }' RS=' ')
-
-	if [ -n "$in_tree_defconfig" ]; then
-		sccs_defconfig=$in_tree_defconfig
-		if [ -n "$src_uri_defconfig" ]; then
-			bbwarn "[NOTE]: defconfig was supplied both via KBUILD_DEFCONFIG and SRC_URI. Dropping SRC_URI entry $src_uri_defconfig"
-		fi
-	else
-		# if we didn't have an in-tree one, make our defconfig the one
-		# from the src_uri. Note: there may not have been one from the
-		# src_uri, so this can be an empty variable.
-		sccs_defconfig=$src_uri_defconfig
-	fi
-	sccs="$sccs_from_src_uri"
-
-	# check for feature directories/repos/branches that were part of the
-	# SRC_URI. If they were supplied, we convert them into include directives
-	# for the update part of the process
-	for f in ${feat_dirs}; do
-		if [ -d "${WORKDIR}/$f/meta" ]; then
-			includes="$includes -I${WORKDIR}/$f/kernel-meta"
-		elif [ -d "${WORKDIR}/../oe-local-files/$f" ]; then
-			includes="$includes -I${WORKDIR}/../oe-local-files/$f"
-	        elif [ -d "${WORKDIR}/$f" ]; then
-			includes="$includes -I${WORKDIR}/$f"
-		fi
-	done
-	for s in ${sccs} ${patches}; do
-		sdir=$(dirname $s)
-		includes="$includes -I${sdir}"
-                # if a SRC_URI passed patch or .scc has a subdir of "kernel-meta",
-                # then we add it to the search path
-                if [ -d "${sdir}/kernel-meta" ]; then
-			includes="$includes -I${sdir}/kernel-meta"
-                fi
-	done
-
-	# expand kernel features into their full path equivalents
-	bsp_definition=$(spp ${includes} --find -DKMACHINE=${KMACHINE} -DKTYPE=${LINUX_KERNEL_TYPE})
-	if [ -z "$bsp_definition" ]; then
-		if [ -z "$sccs_defconfig" ]; then
-			bbfatal_log "Could not locate BSP definition for ${KMACHINE}/${LINUX_KERNEL_TYPE} and no defconfig was provided"
-		fi
-	else
-		# if the bsp definition has "define KMETA_EXTERNAL_BSP t",
-		# then we need to set a flag that will instruct the next
-		# steps to use the BSP as both configuration and patches.
-		grep -q KMETA_EXTERNAL_BSP $bsp_definition
-		if [ $? -eq 0 ]; then
-		    KMETA_EXTERNAL_BSPS="t"
-		fi
-	fi
-	meta_dir=$(kgit --meta)
-
-	KERNEL_FEATURES_FINAL=""
-	if [ -n "${KERNEL_FEATURES}" ]; then
-		for feature in ${KERNEL_FEATURES}; do
-			feature_found=f
-			for d in $includes; do
-				path_to_check=$(echo $d | sed 's/^-I//')
-				if [ "$feature_found" = "f" ] && [ -e "$path_to_check/$feature" ]; then
-				    feature_found=t
-				fi
-			done
-			if [ "$feature_found" = "f" ]; then
-				if [ -n "${KERNEL_DANGLING_FEATURES_WARN_ONLY}" ]; then
-				    bbwarn "Feature '$feature' not found, but KERNEL_DANGLING_FEATURES_WARN_ONLY is set"
-				    bbwarn "This may cause runtime issues, dropping feature and allowing configuration to continue"
-				else
-				    bberror "Feature '$feature' not found, this will cause configuration failures."
-				    bberror "Check the SRC_URI for meta-data repositories or directories that may be missing"
-				    bbfatal_log "Set KERNEL_DANGLING_FEATURES_WARN_ONLY to ignore this issue"
-				fi
-			else
-				KERNEL_FEATURES_FINAL="$KERNEL_FEATURES_FINAL $feature"
-			fi
-		done
-        fi
-
-	if [ "$mode" = "config" ]; then
-		# run1: pull all the configuration fragments, no matter where they come from
-		elements="`echo -n ${bsp_definition} $sccs_defconfig ${sccs} ${patches} $KERNEL_FEATURES_FINAL`"
-		if [ -n "${elements}" ]; then
-			echo "${bsp_definition}" > ${S}/${meta_dir}/bsp_definition
-			scc --force -o ${S}/${meta_dir}:cfg,merge,meta ${includes} $sccs_defconfig $bsp_definition $sccs $patches $KERNEL_FEATURES_FINAL
-			if [ $? -ne 0 ]; then
-				bbfatal_log "Could not generate configuration queue for ${KMACHINE}."
-			fi
-		fi
-	fi
-
-	# if KMETA_EXTERNAL_BSPS has been set, or it has been detected from
-	# the bsp definition, then we inject the bsp_definition into the
-	# patch phase below.  we'll piggy back on the sccs variable.
-	if [ -n "${KMETA_EXTERNAL_BSPS}" ]; then
-		sccs="${bsp_definition} ${sccs}"
-	fi
-
-	if [ "$mode" = "patch" ]; then
-		# run2: only generate patches for elements that have been passed on the SRC_URI
-		elements="`echo -n ${sccs} ${patches} $KERNEL_FEATURES_FINAL`"
-		if [ -n "${elements}" ]; then
-			scc --force -o ${S}/${meta_dir}:patch --cmds patch ${includes} ${sccs} ${patches} $KERNEL_FEATURES_FINAL
-			if [ $? -ne 0 ]; then
-				bbfatal_log "Could not generate configuration queue for ${KMACHINE}."
-			fi
-		fi
-	fi
-
-	if [ ${KCONF_AUDIT_LEVEL} -gt 0 ]; then
-		bbnote "kernel meta data summary for ${KMACHINE} (${LINUX_KERNEL_TYPE}):"
-		bbnote "======================================================================"
-		if [ -n "${KMETA_EXTERNAL_BSPS}" ]; then
-			bbnote "Non kernel-cache (external) bsp"
-		fi
-		bbnote "BSP entry point / definition: $bsp_definition"
-		if [ -n "$in_tree_defconfig" ]; then
-			bbnote "KBUILD_DEFCONFIG: ${KBUILD_DEFCONFIG}"
-		fi
-		bbnote "Fragments from SRC_URI: $sccs_from_src_uri"
-		bbnote "KERNEL_FEATURES: $KERNEL_FEATURES_FINAL"
-		bbnote "Final scc/cfg list: $sccs_defconfig $bsp_definition $sccs $KERNEL_FEATURES_FINAL"
-	fi
-
-	set -e
-}
-
-do_patch() {
-	set +e
-	cd ${S}
-
-	check_git_config
-	meta_dir=$(kgit --meta)
-	(cd ${meta_dir}; ln -sf patch.queue series)
-	if [ -f "${meta_dir}/series" ]; then
-		kgit_extra_args=""
-		if [ "${KERNEL_DEBUG_TIMESTAMPS}" != "1" ]; then
-		    kgit_extra_args="--commit-sha author"
-		fi
-		kgit-s2q --gen -v $kgit_extra_args --patches .kernel-meta/
-		if [ $? -ne 0 ]; then
-			bberror "Could not apply patches for ${KMACHINE}."
-			bbfatal_log "Patch failures can be resolved in the linux source directory (${S})"
-		fi
-	fi
-
-	if [ -f "${meta_dir}/merge.queue" ]; then
-		# we need to merge all these branches
-		for b in $(cat ${meta_dir}/merge.queue); do
-			git show-ref --verify --quiet refs/heads/${b}
-			if [ $? -eq 0 ]; then
-				bbnote "Merging branch ${b}"
-				git merge -q --no-ff -m "Merge branch ${b}" ${b}
-			else
-				bbfatal "branch ${b} does not exist, cannot merge"
-			fi
-		done
-	fi
-
-	set -e
-}
-
-do_kernel_checkout() {
-	set +e
-
-	source_dir=`echo ${S} | sed 's%/$%%'`
-	source_workdir="${WORKDIR}/git"
-	if [ -d "${WORKDIR}/git/" ]; then
-		# case: git repository
-		# if S is WORKDIR/git, then we shouldn't be moving or deleting the tree.
-		if [ "${source_dir}" != "${source_workdir}" ]; then
-			if [ -d "${source_workdir}/.git" ]; then
-				# regular git repository with .git
-				rm -rf ${S}
-				mv ${WORKDIR}/git ${S}
-			else
-				# create source for bare cloned git repository
-				git clone ${WORKDIR}/git ${S}
-				rm -rf ${WORKDIR}/git
-			fi
-		fi
-		cd ${S}
-
-		# convert any remote branches to local tracking ones
-		for i in `git branch -a --no-color | grep remotes | grep -v HEAD`; do
-			b=`echo $i | cut -d' ' -f2 | sed 's%remotes/origin/%%'`;
-			git show-ref --quiet --verify -- "refs/heads/$b"
-			if [ $? -ne 0 ]; then
-				git branch $b $i > /dev/null
-			fi
-		done
-
-		# Create a working tree copy of the kernel by checking out a branch
-		machine_branch="${@ get_machine_branch(d, "${KBRANCH}" )}"
-
-		# checkout and clobber any unimportant files
-		git checkout -f ${machine_branch}
-	else
-		# case: we have no git repository at all.
-		# To support low bandwidth options for building the kernel, we'll just
-		# convert the tree to a git repo and let the rest of the process work unchanged.
-
-		# if ${S} hasn't been set to the proper subdirectory, a default of "linux" is
-		# used, but we can't initialize that empty directory. So check it and throw a
-		# clear error.
-
-		cd ${S}
-		if [ ! -f "Makefile" ]; then
-			bberror "S is not set to the linux source directory. Check "
-			bbfatal "the recipe and set S to the proper extracted subdirectory"
-		fi
-		rm -f .gitignore
-		git init
-		check_git_config
-		git add .
-		git commit -q -m "baseline commit: creating repo for ${PN}-${PV}"
-		git clean -d -f
-	fi
-
-	set -e
-}
-do_kernel_checkout[dirs] = "${S} ${WORKDIR}"
-
-addtask kernel_checkout before do_kernel_metadata after do_symlink_kernsrc
-addtask kernel_metadata after do_validate_branches do_unpack before do_patch
-do_kernel_metadata[depends] = "kern-tools-native:do_populate_sysroot"
-do_kernel_metadata[file-checksums] = " ${@get_dirs_with_fragments(d)}"
-do_validate_branches[depends] = "kern-tools-native:do_populate_sysroot"
-
-do_kernel_configme[depends] += "virtual/${TARGET_PREFIX}binutils:do_populate_sysroot"
-do_kernel_configme[depends] += "virtual/${TARGET_PREFIX}gcc:do_populate_sysroot"
-do_kernel_configme[depends] += "bc-native:do_populate_sysroot bison-native:do_populate_sysroot"
-do_kernel_configme[depends] += "kern-tools-native:do_populate_sysroot"
-do_kernel_configme[dirs] += "${S} ${B}"
-do_kernel_configme() {
-	do_kernel_metadata config
-
-	# translate the kconfig_mode into something that merge_config.sh
-	# understands
-	case ${KCONFIG_MODE} in
-		*allnoconfig)
-			config_flags="-n"
-			;;
-		*alldefconfig)
-			config_flags=""
-			;;
-		*)
-			if [ -f ${WORKDIR}/defconfig ]; then
-				config_flags="-n"
-			fi
-			;;
-	esac
-
-	cd ${S}
-
-	meta_dir=$(kgit --meta)
-	configs="$(scc --configs -o ${meta_dir})"
-	if [ $? -ne 0 ]; then
-		bberror "${configs}"
-		bbfatal_log "Could not find configuration queue (${meta_dir}/config.queue)"
-	fi
-
-	CFLAGS="${CFLAGS} ${TOOLCHAIN_OPTIONS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" HOSTCPP="${BUILD_CPP}" CC="${KERNEL_CC}" LD="${KERNEL_LD}" ARCH=${ARCH} merge_config.sh -O ${B} ${config_flags} ${configs} > ${meta_dir}/cfg/merge_config_build.log 2>&1
-	if [ $? -ne 0 -o ! -f ${B}/.config ]; then
-		bberror "Could not generate a .config for ${KMACHINE}-${LINUX_KERNEL_TYPE}"
-		if [ ${KCONF_AUDIT_LEVEL} -gt 1 ]; then
-			bbfatal_log "`cat ${meta_dir}/cfg/merge_config_build.log`"
-		else
-			bbfatal_log "Details can be found at: ${S}/${meta_dir}/cfg/merge_config_build.log"
-		fi
-	fi
-
-	if [ ! -z "${LINUX_VERSION_EXTENSION}" ]; then
-		echo "# Global settings from linux recipe" >> ${B}/.config
-		echo "CONFIG_LOCALVERSION="\"${LINUX_VERSION_EXTENSION}\" >> ${B}/.config
-	fi
-}
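As a usage sketch of the KCONFIG_MODE handling above (the value shown is an assumption for illustration, not a default of this class): a kernel recipe that ships its own defconfig and wants unspecified options to take their Kconfig defaults could set

    KCONFIG_MODE = "--alldefconfig"

whereas with KCONFIG_MODE unset and a ${WORKDIR}/defconfig present, the case statement above falls back to the allnoconfig-style "-n" flags for merge_config.sh.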
-
-addtask kernel_configme before do_configure after do_patch
-addtask config_analysis
-
-do_config_analysis[depends] = "virtual/kernel:do_configure"
-do_config_analysis[depends] += "kern-tools-native:do_populate_sysroot"
-
-CONFIG_AUDIT_FILE ?= "${WORKDIR}/config-audit.txt"
-CONFIG_ANALYSIS_FILE ?= "${WORKDIR}/config-analysis.txt"
-
-python do_config_analysis() {
-    import re, string, sys, subprocess
-
-    s = d.getVar('S')
-
-    env = os.environ.copy()
-    env['PATH'] = "%s:%s%s" % (d.getVar('PATH'), s, "/scripts/util/")
-    env['LD'] = d.getVar('KERNEL_LD')
-    env['CC'] = d.getVar('KERNEL_CC')
-    env['ARCH'] = d.getVar('ARCH')
-    env['srctree'] = s
-
-    # read specific symbols from the kernel recipe or from local.conf
-    # i.e.: CONFIG_ANALYSIS:pn-linux-yocto-dev = 'NF_CONNTRACK LOCALVERSION'
-    config = d.getVar( 'CONFIG_ANALYSIS' )
-    if not config:
-       config = [ "" ]
-    else:
-       config = config.split()
-
-    for c in config:
-        for action in ["analysis","audit"]:
-            if action == "analysis":
-                try:
-                    analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--blame', c], cwd=s, env=env ).decode('utf-8')
-                except subprocess.CalledProcessError as e:
-                    bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
-
-                outfile = d.getVar( 'CONFIG_ANALYSIS_FILE' )
-
-            if action == "audit":
-                try:
-                    analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--summary', '--extended', '--sanity', c], cwd=s, env=env ).decode('utf-8')
-                except subprocess.CalledProcessError as e:
-                    bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
-
-                outfile = d.getVar( 'CONFIG_AUDIT_FILE' )
-
-            if c:
-                outdir = os.path.dirname( outfile )
-                outname = os.path.basename( outfile )
-                outfile = outdir + '/'+ c + '-' + outname
-
-            if config and os.path.isfile(outfile):
-                os.remove(outfile)
-
-            with open(outfile, 'w+') as f:
-                f.write( analysis )
-
-            bb.warn( "Configuration {} executed, see: {} for details".format(action,outfile ))
-            if c:
-                bb.warn( analysis )
-}
-
-python do_kernel_configcheck() {
-    import re, string, sys, subprocess
-
-    s = d.getVar('S')
-
-    # if KMETA isn't set globally by a recipe using this routine, use kgit to
-    # locate or create the meta directory. Otherwise, kconf_check is not
-    # passed a valid meta-series for processing
-    kmeta = d.getVar("KMETA")
-    if not kmeta or not os.path.exists('{}/{}'.format(s,kmeta)):
-        kmeta = subprocess.check_output(['kgit', '--meta'], cwd=d.getVar('S')).decode('utf-8').rstrip()
-
-    env = os.environ.copy()
-    env['PATH'] = "%s:%s%s" % (d.getVar('PATH'), s, "/scripts/util/")
-    env['LD'] = d.getVar('KERNEL_LD')
-    env['CC'] = d.getVar('KERNEL_CC')
-    env['ARCH'] = d.getVar('ARCH')
-    env['srctree'] = s
-
-    try:
-        configs = subprocess.check_output(['scc', '--configs', '-o', s + '/.kernel-meta'], env=env).decode('utf-8')
-    except subprocess.CalledProcessError as e:
-        bb.fatal( "Cannot gather config fragments for audit: %s" % e.output.decode("utf-8") )
-
-    config_check_visibility = int(d.getVar("KCONF_AUDIT_LEVEL") or 0)
-    bsp_check_visibility = int(d.getVar("KCONF_BSP_AUDIT_LEVEL") or 0)
-    kmeta_audit_werror = d.getVar("KMETA_AUDIT_WERROR") or ""
-    warnings_detected = False
-
-    # if config check visibility is "1", that's the lowest level of audit. So
-    # we add the --classify option to the run, since classification will
-    # streamline the output to only report options that could be boot issues,
-    # or are otherwise required for proper operation.
-    extra_params = ""
-    if config_check_visibility == 1:
-       extra_params = "--classify"
-
-    # category #1: mismatches
-    try:
-        analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--mismatches', extra_params], cwd=s, env=env ).decode('utf-8')
-    except subprocess.CalledProcessError as e:
-        bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
-
-    if analysis:
-        outfile = "{}/{}/cfg/mismatch.txt".format( s, kmeta )
-        if os.path.isfile(outfile):
-           os.remove(outfile)
-        with open(outfile, 'w+') as f:
-            f.write( analysis )
-
-        if config_check_visibility and os.stat(outfile).st_size > 0:
-            with open (outfile, "r") as myfile:
-                results = myfile.read()
-                bb.warn( "[kernel config]: specified values did not make it into the kernel's final configuration:\n\n%s" % results)
-                warnings_detected = True
-
-    # category #2: invalid fragment elements
-    extra_params = ""
-    if bsp_check_visibility > 1:
-        extra_params = "--strict"
-    try:
-        analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--invalid', extra_params], cwd=s, env=env ).decode('utf-8')
-    except subprocess.CalledProcessError as e:
-        bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
-
-    if analysis:
-        outfile = "{}/{}/cfg/invalid.txt".format(s,kmeta)
-        if os.path.isfile(outfile):
-           os.remove(outfile)
-        with open(outfile, 'w+') as f:
-            f.write( analysis )
-
-        if bsp_check_visibility and os.stat(outfile).st_size > 0:
-            with open (outfile, "r") as myfile:
-                results = myfile.read()
-                bb.warn( "[kernel config]: This BSP contains fragments with warnings:\n\n%s" % results)
-                warnings_detected = True
-
-    # category #3: redefined options (this is pretty verbose and is debug only)
-    try:
-        analysis = subprocess.check_output(['symbol_why.py', '--dotconfig',  '{}'.format( d.getVar('B') + '/.config' ), '--sanity'], cwd=s, env=env ).decode('utf-8')
-    except subprocess.CalledProcessError as e:
-        bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
-
-    if analysis:
-        outfile = "{}/{}/cfg/redefinition.txt".format(s,kmeta)
-        if os.path.isfile(outfile):
-           os.remove(outfile)
-        with open(outfile, 'w+') as f:
-            f.write( analysis )
-
-        # if the audit level is greater than two, we report if a fragment has overridden
-        # a value from a base fragment. This is really only used for new kernel introduction
-        if bsp_check_visibility > 2 and os.stat(outfile).st_size > 0:
-            with open (outfile, "r") as myfile:
-                results = myfile.read()
-                bb.warn( "[kernel config]: This BSP has configuration options defined in more than one config, with differing values:\n\n%s" % results)
-                warnings_detected = True
-
-    if warnings_detected and kmeta_audit_werror:
-        bb.fatal( "configuration warnings detected, werror is set, promoting to fatal" )
-}
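A hedged example of driving the audit knobs used above from local.conf (the values are illustrative; "0" disables the respective reporting):

    KCONF_AUDIT_LEVEL = "1"
    KCONF_BSP_AUDIT_LEVEL = "2"
    KMETA_AUDIT_WERROR = "1"

With these settings, configuration mismatches are reported with --classify (level 1), invalid fragment elements are checked with --strict, redefinition reporting stays off (it needs a BSP audit level above 2), and any warning is promoted to a fatal error.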
-
-# Ensure that the branches (BSP and meta) are at the locations specified by
-# their SRCREV values. If they are NOT on the right commits, the branches
-# are corrected to the proper commit.
-do_validate_branches() {
-	set +e
-	cd ${S}
-
-	machine_branch="${@ get_machine_branch(d, "${KBRANCH}" )}"
-	machine_srcrev="${SRCREV_machine}"
-
-	# if SRCREV is AUTOREV, it shows up as AUTOINC; there's nothing to
-	# check and we can exit early
-	if [ "${machine_srcrev}" = "AUTOINC" ]; then
-	    linux_yocto_dev='${@oe.utils.conditional("PREFERRED_PROVIDER_virtual/kernel", "linux-yocto-dev", "1", "", d)}'
-	    if [ -n "$linux_yocto_dev" ]; then
-		git checkout -q -f ${machine_branch}
-		ver=$(grep "^VERSION =" ${S}/Makefile | sed s/.*=\ *//)
-		patchlevel=$(grep "^PATCHLEVEL =" ${S}/Makefile | sed s/.*=\ *//)
-		sublevel=$(grep "^SUBLEVEL =" ${S}/Makefile | sed s/.*=\ *//)
-		kver="$ver.$patchlevel"
-		bbnote "dev kernel: performing version -> branch -> SRCREV validation"
-		bbnote "dev kernel: recipe version ${LINUX_VERSION}, src version: $kver"
-		echo "${LINUX_VERSION}" | grep -q $kver
-		if [ $? -ne 0 ]; then
-		    version="$(echo ${LINUX_VERSION} | sed 's/\+.*$//g')"
-		    versioned_branch="v$version/$machine_branch"
-
-		    machine_branch=$versioned_branch
-		    force_srcrev="$(git rev-parse $machine_branch 2> /dev/null)"
-		    if [ $? -ne 0 ]; then
-			bbfatal "kernel version mismatch detected, and no valid branch $machine_branch detected"
-		    fi
-
-		    bbnote "dev kernel: adjusting branch to $machine_branch, srcrev to: $force_srcrev"
-		fi
-	    else
-		bbnote "SRCREV validation is not required for AUTOREV"
-	    fi
-	elif [ "${machine_srcrev}" = "" ]; then
-		if [ "${SRCREV}" != "AUTOINC" ] && [ "${SRCREV}" != "INVALID" ]; then
-		       # SRCREV_machine_<MACHINE> was not set. This means that a custom recipe
-		       # that doesn't use the SRCREV_FORMAT "machine_meta" is being built. In
-		       # this case, we need to reset to the given SRCREV before heading to patching
-		       bbnote "custom recipe is being built, forcing SRCREV to ${SRCREV}"
-		       force_srcrev="${SRCREV}"
-		fi
-	else
-		git cat-file -t ${machine_srcrev} > /dev/null
-		if [ $? -ne 0 ]; then
-			bberror "${machine_srcrev} is not a valid commit ID."
-			bbfatal_log "The kernel source tree may be out of sync"
-		fi
-		force_srcrev=${machine_srcrev}
-	fi
-
-	git checkout -q -f ${machine_branch}
-	if [ -n "${force_srcrev}" ]; then
-		# see if the branch we are about to patch has been properly reset to the defined
-		# SRCREV .. if not, we reset it.
-		branch_head=`git rev-parse HEAD`
-		if [ "${force_srcrev}" != "${branch_head}" ]; then
-			current_branch=`git rev-parse --abbrev-ref HEAD`
-			git branch "$current_branch-orig"
-			git reset --hard ${force_srcrev}
-			# We've checked out HEAD, so make sure we clean up the kgit-s2q fence-post
-			# check so the patches are applied as expected; otherwise no patching
-			# would be done in some corner cases.
-			kgit-s2q --clean
-		fi
-	fi
-
-	set -e
-}
-
-OE_TERMINAL_EXPORTS += "KBUILD_OUTPUT"
-KBUILD_OUTPUT = "${B}"
-
-python () {
-    # If diffconfig is available, ensure it runs after kernel_configme
-    if 'do_diffconfig' in d:
-        bb.build.addtask('do_diffconfig', None, 'do_kernel_configme', d)
-
-    externalsrc = d.getVar('EXTERNALSRC')
-    if externalsrc:
-        # If we deltask do_patch, do_kernel_configme is left without
-        # dependencies and runs too early
-        d.setVarFlag('do_kernel_configme', 'deps', (d.getVarFlag('do_kernel_configme', 'deps', False) or []) + ['do_unpack'])
-}
-
-# extra tasks
-addtask kernel_version_sanity_check after do_kernel_metadata do_kernel_checkout before do_compile
-addtask validate_branches before do_patch after do_kernel_checkout
-addtask kernel_configcheck after do_configure before do_compile
diff --git a/poky/meta/classes/kernel.bbclass b/poky/meta/classes/kernel.bbclass
deleted file mode 100644
index 61b3e8c..0000000
--- a/poky/meta/classes/kernel.bbclass
+++ /dev/null
@@ -1,815 +0,0 @@
-inherit linux-kernel-base kernel-module-split
-
-COMPATIBLE_HOST = ".*-linux"
-
-KERNEL_PACKAGE_NAME ??= "kernel"
-KERNEL_DEPLOYSUBDIR ??= "${@ "" if (d.getVar("KERNEL_PACKAGE_NAME") == "kernel") else d.getVar("KERNEL_PACKAGE_NAME") }"
-
-PROVIDES += "virtual/kernel"
-DEPENDS += "virtual/${TARGET_PREFIX}binutils virtual/${TARGET_PREFIX}gcc kmod-native bc-native bison-native"
-DEPENDS += "${@bb.utils.contains("INITRAMFS_FSTYPES", "cpio.lzo", "lzop-native", "", d)}"
-DEPENDS += "${@bb.utils.contains("INITRAMFS_FSTYPES", "cpio.lz4", "lz4-native", "", d)}"
-DEPENDS += "${@bb.utils.contains("INITRAMFS_FSTYPES", "cpio.zst", "zstd-native", "", d)}"
-PACKAGE_WRITE_DEPS += "depmodwrapper-cross"
-
-do_deploy[depends] += "depmodwrapper-cross:do_populate_sysroot gzip-native:do_populate_sysroot"
-do_clean[depends] += "make-mod-scripts:do_clean"
-
-CVE_PRODUCT ?= "linux_kernel"
-
-S = "${STAGING_KERNEL_DIR}"
-B = "${WORKDIR}/build"
-KBUILD_OUTPUT = "${B}"
-OE_TERMINAL_EXPORTS += "KBUILD_OUTPUT"
-
-# we include gcc above, so we don't need virtual/libc
-INHIBIT_DEFAULT_DEPS = "1"
-
-KERNEL_IMAGETYPE ?= "zImage"
-INITRAMFS_IMAGE ?= ""
-INITRAMFS_IMAGE_NAME ?= "${@['${INITRAMFS_IMAGE}-${MACHINE}', ''][d.getVar('INITRAMFS_IMAGE') == '']}"
-INITRAMFS_TASK ?= ""
-INITRAMFS_IMAGE_BUNDLE ?= ""
-INITRAMFS_DEPLOY_DIR_IMAGE ?= "${DEPLOY_DIR_IMAGE}"
-INITRAMFS_MULTICONFIG ?= ""
-
-# KERNEL_VERSION is extracted from source code. It is evaluated as
-# None for the first parsing, since the code has not been fetched.
-# After the code is fetched, it will be evaluated as the real version
-# number and cause the kernel to be rebuilt. To avoid this, make
-# KERNEL_VERSION_NAME and KERNEL_VERSION_PKG_NAME depend on
-# LINUX_VERSION which is a constant.
-KERNEL_VERSION_NAME = "${@d.getVar('KERNEL_VERSION') or ""}"
-KERNEL_VERSION_NAME[vardepvalue] = "${LINUX_VERSION}"
-KERNEL_VERSION_PKG_NAME = "${@legitimize_package_name(d.getVar('KERNEL_VERSION'))}"
-KERNEL_VERSION_PKG_NAME[vardepvalue] = "${LINUX_VERSION}"
-
-python __anonymous () {
-    pn = d.getVar("PN")
-    kpn = d.getVar("KERNEL_PACKAGE_NAME")
-
-    # XXX Remove this after bug 11905 is resolved
-    #  FILES:${KERNEL_PACKAGE_NAME}-dev doesn't expand correctly
-    if kpn == pn:
-        bb.warn("Some packages (E.g. *-dev) might be missing due to "
-                "bug 11905 (variable KERNEL_PACKAGE_NAME == PN)")
-
-    # The default kernel recipe builds in a shared location defined by
-    # bitbake/distro confs: STAGING_KERNEL_DIR and STAGING_KERNEL_BUILDDIR.
-    # Set these variables to directories under ${WORKDIR} in alternate
-    # kernel recipes (I.e. where KERNEL_PACKAGE_NAME != kernel) so that they
-    # may build in parallel with the default kernel without clobbering.
-    if kpn != "kernel":
-        workdir = d.getVar("WORKDIR")
-        sourceDir = os.path.join(workdir, 'kernel-source')
-        artifactsDir = os.path.join(workdir, 'kernel-build-artifacts')
-        d.setVar("STAGING_KERNEL_DIR", sourceDir)
-        d.setVar("STAGING_KERNEL_BUILDDIR", artifactsDir)
-
-    # Merge KERNEL_IMAGETYPE and KERNEL_ALT_IMAGETYPE into KERNEL_IMAGETYPES
-    type = d.getVar('KERNEL_IMAGETYPE') or ""
-    alttype = d.getVar('KERNEL_ALT_IMAGETYPE') or ""
-    types = d.getVar('KERNEL_IMAGETYPES') or ""
-    if type not in types.split():
-        types = (type + ' ' + types).strip()
-    if alttype not in types.split():
-        types = (alttype + ' ' + types).strip()
-    d.setVar('KERNEL_IMAGETYPES', types)
-
-    # KERNEL_IMAGETYPES may contain a mixture of image types supported directly
-    # by the kernel build system and types which are created by post-processing
-    # the output of the kernel build system (e.g. compressing vmlinux ->
-    # vmlinux.gz in kernel_do_transform_kernel()).
-    # KERNEL_IMAGETYPE_FOR_MAKE should contain only image types supported
-    # directly by the kernel build system.
-    if not d.getVar('KERNEL_IMAGETYPE_FOR_MAKE'):
-        typeformake = set()
-        for type in types.split():
-            if type == 'vmlinux.gz':
-                type = 'vmlinux'
-            typeformake.add(type)
-
-        d.setVar('KERNEL_IMAGETYPE_FOR_MAKE', ' '.join(sorted(typeformake)))
-
-    kname = d.getVar('KERNEL_PACKAGE_NAME') or "kernel"
-    imagedest = d.getVar('KERNEL_IMAGEDEST')
-
-    for type in types.split():
-        if bb.data.inherits_class('nopackages', d):
-            continue
-        typelower = type.lower()
-        d.appendVar('PACKAGES', ' %s-image-%s' % (kname, typelower))
-        d.setVar('FILES:' + kname + '-image-' + typelower, '/' + imagedest + '/' + type + '-${KERNEL_VERSION_NAME}' + ' /' + imagedest + '/' + type)
-        d.appendVar('RDEPENDS:%s-image' % kname, ' %s-image-%s (= ${EXTENDPKGV})' % (kname, typelower))
-        splitmods = d.getVar("KERNEL_SPLIT_MODULES")
-        if splitmods != '1':
-            d.appendVar('RDEPENDS:%s-image' % kname, ' %s-modules (= ${EXTENDPKGV})' % kname)
-            d.appendVar('RDEPENDS:%s-image-%s' % (kname, typelower), ' %s-modules-${KERNEL_VERSION_PKG_NAME} (= ${EXTENDPKGV})' % kname)
-            d.setVar('PKG:%s-modules' % kname, '%s-modules-${KERNEL_VERSION_PKG_NAME}' % kname)
-            d.appendVar('RPROVIDES:%s-modules' % kname, '%s-modules-${KERNEL_VERSION_PKG_NAME}' % kname)
-
-        d.setVar('PKG:%s-image-%s' % (kname,typelower), '%s-image-%s-${KERNEL_VERSION_PKG_NAME}' % (kname, typelower))
-        d.setVar('ALLOW_EMPTY:%s-image-%s' % (kname, typelower), '1')
-        d.prependVar('pkg_postinst:%s-image-%s' % (kname,typelower), """set +e
-if [ -n "$D" ]; then
-    ln -sf %s-${KERNEL_VERSION} $D/${KERNEL_IMAGEDEST}/%s > /dev/null 2>&1
-else
-    ln -sf %s-${KERNEL_VERSION} ${KERNEL_IMAGEDEST}/%s > /dev/null 2>&1
-    if [ $? -ne 0 ]; then
-        echo "Filesystem on ${KERNEL_IMAGEDEST}/ doesn't support symlinks, falling back to copied image (%s)."
-        install -m 0644 ${KERNEL_IMAGEDEST}/%s-${KERNEL_VERSION} ${KERNEL_IMAGEDEST}/%s
-    fi
-fi
-set -e
-""" % (type, type, type, type, type, type, type))
-        d.setVar('pkg_postrm:%s-image-%s' % (kname,typelower), """set +e
-if [ -f "${KERNEL_IMAGEDEST}/%s" -o -L "${KERNEL_IMAGEDEST}/%s" ]; then
-    rm -f ${KERNEL_IMAGEDEST}/%s  > /dev/null 2>&1
-fi
-set -e
-""" % (type, type, type))
-
-
-    image = d.getVar('INITRAMFS_IMAGE')
-    # If the INITRAMFS_IMAGE is set but the INITRAMFS_IMAGE_BUNDLE is set to 0,
-    # the do_bundle_initramfs does nothing, but the INITRAMFS_IMAGE is built
-    # standalone for use by wic and other tools.
-    if image:
-        if d.getVar('INITRAMFS_MULTICONFIG'):
-            d.appendVarFlag('do_bundle_initramfs', 'mcdepends', ' mc::${INITRAMFS_MULTICONFIG}:${INITRAMFS_IMAGE}:do_image_complete')
-        else:
-            d.appendVarFlag('do_bundle_initramfs', 'depends', ' ${INITRAMFS_IMAGE}:do_image_complete')
-    if image and bb.utils.to_boolean(d.getVar('INITRAMFS_IMAGE_BUNDLE')):
-        bb.build.addtask('do_transform_bundled_initramfs', 'do_deploy', 'do_bundle_initramfs', d)
-
-    # NOTE: setting INITRAMFS_TASK is for backward compatibility
-    #       The preferred method is to set INITRAMFS_IMAGE, because
-    #       this INITRAMFS_TASK has circular dependency problems
-    #       if the initramfs requires kernel modules
-    image_task = d.getVar('INITRAMFS_TASK')
-    if image_task:
-        d.appendVarFlag('do_configure', 'depends', ' ${INITRAMFS_TASK}')
-}
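For illustration (the initramfs image name is an assumption; any image recipe that produces a cpio archive works), bundling is normally requested from a distro or local configuration along these lines:

    INITRAMFS_IMAGE = "core-image-minimal-initramfs"
    INITRAMFS_IMAGE_BUNDLE = "1"

With INITRAMFS_IMAGE set but INITRAMFS_IMAGE_BUNDLE left at "0", the initramfs image is still built as a dependency, but do_bundle_initramfs does nothing, as noted above.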
-
-# Here we pull in all various kernel image types which we support.
-#
-# In case you're wondering why kernel.bbclass inherits the other image
-# types instead of the other way around, the reason for that is to
-# maintain compatibility with various currently existing meta-layers.
-# By pulling in the various kernel image types here, we retain the
-# original behavior of kernel.bbclass, so no meta-layers should get
-# broken.
-#
-# KERNEL_CLASSES by default pulls in kernel-uimage.bbclass, since this
-# used to be the default behavior when only uImage was supported. This
-# variable can be appended by users who implement support for new kernel
-# image types.
-
-KERNEL_CLASSES ?= " kernel-uimage "
-inherit ${KERNEL_CLASSES}
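For example (a sketch only, assuming the kernel-fitimage class available in the same classes directory), a machine or distro configuration that wants FIT images would typically both append the class and enable the image type:

    KERNEL_CLASSES:append = " kernel-fitimage"
    KERNEL_IMAGETYPES:append = " fitImage"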
-
-# Old style kernels may set ${S} = ${WORKDIR}/git for example
-# We need to move these over to STAGING_KERNEL_DIR. We can't just
-# create the symlink in advance as the git fetcher can't cope with
-# the symlink.
-do_unpack[cleandirs] += " ${S} ${STAGING_KERNEL_DIR} ${B} ${STAGING_KERNEL_BUILDDIR}"
-do_clean[cleandirs] += " ${S} ${STAGING_KERNEL_DIR} ${B} ${STAGING_KERNEL_BUILDDIR}"
-python do_symlink_kernsrc () {
-    s = d.getVar("S")
-    if s[-1] == '/':
-        # drop trailing slash, so that os.symlink(kernsrc, s) doesn't use s as directory name and fail
-        s=s[:-1]
-    kernsrc = d.getVar("STAGING_KERNEL_DIR")
-    if s != kernsrc:
-        bb.utils.mkdirhier(kernsrc)
-        bb.utils.remove(kernsrc, recurse=True)
-        if d.getVar("EXTERNALSRC"):
-            # With EXTERNALSRC S will not be wiped so we can symlink to it
-            os.symlink(s, kernsrc)
-        else:
-            import shutil
-            shutil.move(s, kernsrc)
-            os.symlink(kernsrc, s)
-}
-# do_patch is normally ordered before do_configure, but
-# externalsrc.bbclass deletes do_patch, breaking the dependency of
-# do_configure on do_symlink_kernsrc.
-addtask symlink_kernsrc before do_patch do_configure after do_unpack
-
-inherit kernel-arch deploy
-
-PACKAGES_DYNAMIC += "^${KERNEL_PACKAGE_NAME}-module-.*"
-PACKAGES_DYNAMIC += "^${KERNEL_PACKAGE_NAME}-image-.*"
-PACKAGES_DYNAMIC += "^${KERNEL_PACKAGE_NAME}-firmware-.*"
-
-export OS = "${TARGET_OS}"
-export CROSS_COMPILE = "${TARGET_PREFIX}"
-export KBUILD_BUILD_VERSION = "1"
-export KBUILD_BUILD_USER ?= "oe-user"
-export KBUILD_BUILD_HOST ?= "oe-host"
-
-KERNEL_RELEASE ?= "${KERNEL_VERSION}"
-
-# The directory where the built kernel lies in the kernel tree
-KERNEL_OUTPUT_DIR ?= "arch/${ARCH}/boot"
-KERNEL_IMAGEDEST ?= "boot"
-
-#
-# configuration
-#
-export CMDLINE_CONSOLE = "console=${@d.getVar("KERNEL_CONSOLE") or "ttyS0"}"
-
-KERNEL_VERSION = "${@get_kernelversion_headers('${B}')}"
-
-# kernels are generally machine specific
-PACKAGE_ARCH = "${MACHINE_ARCH}"
-
-# U-Boot support
-UBOOT_ENTRYPOINT ?= "20008000"
-UBOOT_LOADADDRESS ?= "${UBOOT_ENTRYPOINT}"
-
-# Some Linux kernel configurations need additional parameters on the command line
-KERNEL_EXTRA_ARGS ?= ""
-
-EXTRA_OEMAKE = " HOSTCC="${BUILD_CC}" HOSTCFLAGS="${BUILD_CFLAGS}" HOSTLDFLAGS="${BUILD_LDFLAGS}" HOSTCPP="${BUILD_CPP}""
-EXTRA_OEMAKE += " HOSTCXX="${BUILD_CXX}" HOSTCXXFLAGS="${BUILD_CXXFLAGS}" PAHOLE=false"
-
-KERNEL_ALT_IMAGETYPE ??= ""
-
-copy_initramfs() {
-	echo "Copying initramfs into ./usr ..."
-	# In case the directory is not created yet from the first pass compile:
-	mkdir -p ${B}/usr
-	# Find and use the first initramfs image archive type we find
-	rm -f ${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
-	for img in cpio cpio.gz cpio.lz4 cpio.lzo cpio.lzma cpio.xz cpio.zst; do
-		if [ -e "${INITRAMFS_DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.$img" ]; then
-			cp ${INITRAMFS_DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.$img ${B}/usr/.
-			case $img in
-			*gz)
-				echo "gzip decompressing image"
-				gunzip -f ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
-				break
-				;;
-			*lz4)
-				echo "lz4 decompressing image"
-				lz4 -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img ${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
-				break
-				;;
-			*lzo)
-				echo "lzo decompressing image"
-				lzop -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
-				break
-				;;
-			*lzma)
-				echo "lzma decompressing image"
-				lzma -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
-				break
-				;;
-			*xz)
-				echo "xz decompressing image"
-				xz -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
-				break
-				;;
-			*zst)
-				echo "zst decompressing image"
-				zstd -df ${B}/usr/${INITRAMFS_IMAGE_NAME}.$img
-				break
-				;;
-			esac
-			break
-		fi
-	done
-	# Verify that the above loop found an initramfs, fail otherwise
-	[ -f ${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio ] && echo "Finished copy of initramfs into ./usr" || die "Could not find any ${INITRAMFS_DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.cpio{.gz|.lz4|.lzo|.lzma|.xz|.zst} for bundling; INITRAMFS_IMAGE_NAME might be wrong."
-}
-
-do_bundle_initramfs () {
-	if [ ! -z "${INITRAMFS_IMAGE}" -a x"${INITRAMFS_IMAGE_BUNDLE}" = x1 ]; then
-		echo "Creating a kernel image with a bundled initramfs..."
-		copy_initramfs
-		# Backing up the kernel image relies on its type (regular file or symbolic link)
-		tmp_path=""
-		for imageType in ${KERNEL_IMAGETYPE_FOR_MAKE} ; do
-			if [ -h ${KERNEL_OUTPUT_DIR}/$imageType ] ; then
-				linkpath=`readlink -n ${KERNEL_OUTPUT_DIR}/$imageType`
-				realpath=`readlink -fn ${KERNEL_OUTPUT_DIR}/$imageType`
-				mv -f $realpath $realpath.bak
-				tmp_path=$tmp_path" "$imageType"#"$linkpath"#"$realpath
-			elif [ -f ${KERNEL_OUTPUT_DIR}/$imageType ]; then
-				mv -f ${KERNEL_OUTPUT_DIR}/$imageType ${KERNEL_OUTPUT_DIR}/$imageType.bak
-				tmp_path=$tmp_path" "$imageType"##"
-			fi
-		done
-		use_alternate_initrd=CONFIG_INITRAMFS_SOURCE=${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
-		kernel_do_compile
-		# Restoring kernel image
-		for tp in $tmp_path ; do
-			imageType=`echo $tp|cut -d "#" -f 1`
-			linkpath=`echo $tp|cut -d "#" -f 2`
-			realpath=`echo $tp|cut -d "#" -f 3`
-			if [ -n "$realpath" ]; then
-				mv -f $realpath $realpath.initramfs
-				mv -f $realpath.bak $realpath
-				ln -sf $linkpath.initramfs ${B}/${KERNEL_OUTPUT_DIR}/$imageType.initramfs
-			else
-				mv -f ${KERNEL_OUTPUT_DIR}/$imageType ${KERNEL_OUTPUT_DIR}/$imageType.initramfs
-				mv -f ${KERNEL_OUTPUT_DIR}/$imageType.bak ${KERNEL_OUTPUT_DIR}/$imageType
-			fi
-		done
-	fi
-}
-do_bundle_initramfs[dirs] = "${B}"
-
-kernel_do_transform_bundled_initramfs() {
-	# vmlinux.gz is not built by kernel
-	if (echo "${KERNEL_IMAGETYPES}" | grep -wq "vmlinux\.gz"); then
-		gzip -9cn < ${KERNEL_OUTPUT_DIR}/vmlinux.initramfs > ${KERNEL_OUTPUT_DIR}/vmlinux.gz.initramfs
-	fi
-}
-do_transform_bundled_initramfs[dirs] = "${B}"
-
-python do_devshell:prepend () {
-    os.environ["LDFLAGS"] = ''
-}
-
-addtask bundle_initramfs after do_install before do_deploy
-
-KERNEL_DEBUG_TIMESTAMPS ??= "0"
-
-kernel_do_compile() {
-	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
-
-	# set up native pkg-config variables (kconfig scripts call pkg-config directly, cannot generically be overridden to pkg-config-native)
-	export PKG_CONFIG_DIR="${STAGING_DIR_NATIVE}${libdir_native}/pkgconfig"
-	export PKG_CONFIG_PATH="$PKG_CONFIG_DIR:${STAGING_DATADIR_NATIVE}/pkgconfig"
-	export PKG_CONFIG_LIBDIR="$PKG_CONFIG_DIR"
-	export PKG_CONFIG_SYSROOT_DIR=""
-
-	if [ "${KERNEL_DEBUG_TIMESTAMPS}" != "1" ]; then
-		# kernel sources do not use do_unpack, so SOURCE_DATE_EPOCH may not
-		# be set....
-		if [ "${SOURCE_DATE_EPOCH}" = "" -o "${SOURCE_DATE_EPOCH}" = "0" ]; then
-			# The source directory is not necessarily a git repository, so we
-			# specify the git-dir to ensure that git does not query a
-			# repository in any parent directory.
-			SOURCE_DATE_EPOCH=`git --git-dir="${S}/.git" log -1 --pretty=%ct 2>/dev/null || echo "${REPRODUCIBLE_TIMESTAMP_ROOTFS}"`
-		fi
-
-		ts=`LC_ALL=C date -d @$SOURCE_DATE_EPOCH`
-		export KBUILD_BUILD_TIMESTAMP="$ts"
-		export KCONFIG_NOTIMESTAMP=1
-		bbnote "KBUILD_BUILD_TIMESTAMP: $ts"
-	fi
-	# The $use_alternate_initrd is only set from
-	# do_bundle_initramfs(). This variable is specifically for the
-	# case where we are making a second pass at the kernel
-	# compilation and we want to force the kernel build to use a
-	# different initramfs image.  The way to do that in the kernel
-	# is to specify:
-	# make ...args... CONFIG_INITRAMFS_SOURCE=some_other_initramfs.cpio
-	if [ "$use_alternate_initrd" = "" ] && [ "${INITRAMFS_TASK}" != "" ] ; then
-		# The old style way of copying a prebuilt image and building it
-		# is turned on via INITRAMFS_TASK != ""
-		copy_initramfs
-		use_alternate_initrd=CONFIG_INITRAMFS_SOURCE=${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
-	fi
-	for typeformake in ${KERNEL_IMAGETYPE_FOR_MAKE} ; do
-		oe_runmake ${typeformake} CC="${KERNEL_CC}" LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS} $use_alternate_initrd
-	done
-}
-
-kernel_do_transform_kernel() {
-	# vmlinux.gz is not built by kernel
-	if (echo "${KERNEL_IMAGETYPES}" | grep -wq "vmlinux\.gz"); then
-		mkdir -p "${KERNEL_OUTPUT_DIR}"
-		gzip -9cn < ${B}/vmlinux > "${KERNEL_OUTPUT_DIR}/vmlinux.gz"
-	fi
-}
-do_transform_kernel[dirs] = "${B}"
-addtask transform_kernel after do_compile before do_install
-
-do_compile_kernelmodules() {
-	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
-	if [ "${KERNEL_DEBUG_TIMESTAMPS}" != "1" ]; then
-		# kernel sources do not use do_unpack, so SOURCE_DATE_EPOCH may not
-		# be set....
-		if [ "${SOURCE_DATE_EPOCH}" = "" -o "${SOURCE_DATE_EPOCH}" = "0" ]; then
-			# The source directory is not necessarily a git repository, so we
-			# specify the git-dir to ensure that git does not query a
-			# repository in any parent directory.
-			SOURCE_DATE_EPOCH=`git --git-dir="${S}/.git" log -1 --pretty=%ct 2>/dev/null || echo "${REPRODUCIBLE_TIMESTAMP_ROOTFS}"`
-		fi
-
-		ts=`LC_ALL=C date -d @$SOURCE_DATE_EPOCH`
-		export KBUILD_BUILD_TIMESTAMP="$ts"
-		export KCONFIG_NOTIMESTAMP=1
-		bbnote "KBUILD_BUILD_TIMESTAMP: $ts"
-	fi
-	if (grep -q -i -e '^CONFIG_MODULES=y$' ${B}/.config); then
-		oe_runmake -C ${B} ${PARALLEL_MAKE} modules CC="${KERNEL_CC}" LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS}
-
-		# Module.symvers gets updated during the 
-		# building of the kernel modules. We need to
-		# update this in the shared workdir since some
-		# external kernel modules have a dependency on
-		# other kernel modules and will look at this
-		# file to do symbol lookups
-		cp ${B}/Module.symvers ${STAGING_KERNEL_BUILDDIR}/
-		# 5.10+ kernels have module.lds that we need to copy for external module builds
-		if [ -e "${B}/scripts/module.lds" ]; then
-			install -Dm 0644 ${B}/scripts/module.lds ${STAGING_KERNEL_BUILDDIR}/scripts/module.lds
-		fi
-	else
-		bbnote "no modules to compile"
-	fi
-}
-addtask compile_kernelmodules after do_compile before do_strip
-
-kernel_do_install() {
-	#
-	# First install the modules
-	#
-	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
-	if (grep -q -i -e '^CONFIG_MODULES=y$' .config); then
-		oe_runmake DEPMOD=echo MODLIB=${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION} INSTALL_FW_PATH=${D}${nonarch_base_libdir}/firmware modules_install
-		rm "${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}/build"
-		rm "${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}/source"
-		# If the kernel/ directory is empty remove it to prevent QA issues
-		rmdir --ignore-fail-on-non-empty "${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}/kernel"
-	else
-		bbnote "no modules to install"
-	fi
-
-	#
-	# Install various kernel output (zImage, map file, config, module support files)
-	#
-	install -d ${D}/${KERNEL_IMAGEDEST}
-
-	#
-	# When including an initramfs bundle inside a FIT image, the fitImage is created after the install task
-	# by do_assemble_fitimage_initramfs.
-	# This happens after the generation of the initramfs bundle (done by do_bundle_initramfs).
-	# So, at the level of the install task we should not try to install the fitImage. fitImage is still not
-	# generated yet.
-	# After the generation of the fitImage, the deploy task copies the fitImage from the build directory to
-	# the deploy folder.
-	#
-
-	for imageType in ${KERNEL_IMAGETYPES} ; do
-		if [ $imageType != "fitImage" ] || [ "${INITRAMFS_IMAGE_BUNDLE}" != "1" ] ; then
-			install -m 0644 ${KERNEL_OUTPUT_DIR}/$imageType ${D}/${KERNEL_IMAGEDEST}/$imageType-${KERNEL_VERSION}
-		fi
-	done
-
-	install -m 0644 System.map ${D}/${KERNEL_IMAGEDEST}/System.map-${KERNEL_VERSION}
-	install -m 0644 .config ${D}/${KERNEL_IMAGEDEST}/config-${KERNEL_VERSION}
-	install -m 0644 vmlinux ${D}/${KERNEL_IMAGEDEST}/vmlinux-${KERNEL_VERSION}
-	[ -e Module.symvers ] && install -m 0644 Module.symvers ${D}/${KERNEL_IMAGEDEST}/Module.symvers-${KERNEL_VERSION}
-	install -d ${D}${sysconfdir}/modules-load.d
-	install -d ${D}${sysconfdir}/modprobe.d
-}
-
-# Must be run no earlier than after do_kernel_checkout or else the Makefile won't be at ${S}/Makefile
-do_kernel_version_sanity_check() {
-	if [ "x${KERNEL_VERSION_SANITY_SKIP}" = "x1" ]; then
-		exit 0
-	fi
-
-	# The Makefile determines the kernel version shown at runtime
-	# Don't use KERNEL_VERSION because the headers it grabs the version from aren't generated until do_compile
-	VERSION=$(grep "^VERSION =" ${S}/Makefile | sed s/.*=\ *//)
-	PATCHLEVEL=$(grep "^PATCHLEVEL =" ${S}/Makefile | sed s/.*=\ *//)
-	SUBLEVEL=$(grep "^SUBLEVEL =" ${S}/Makefile | sed s/.*=\ *//)
-	EXTRAVERSION=$(grep "^EXTRAVERSION =" ${S}/Makefile | sed s/.*=\ *//)
-
-	# Build a string for regex and a plain version string
-	reg="^${VERSION}\.${PATCHLEVEL}"
-	vers="${VERSION}.${PATCHLEVEL}"
-	if [ -n "${SUBLEVEL}" ]; then
-		# Ignoring a SUBLEVEL of zero is fine
-		if [ "${SUBLEVEL}" = "0" ]; then
-			reg="${reg}(\.${SUBLEVEL})?"
-		else
-			reg="${reg}\.${SUBLEVEL}"
-			vers="${vers}.${SUBLEVEL}"
-		fi
-	fi
-	vers="${vers}${EXTRAVERSION}"
-	reg="${reg}${EXTRAVERSION}"
-
-	if [ -z `echo ${PV} | grep -E "${reg}"` ]; then
-		bbfatal "Package Version (${PV}) does not match the version of the kernel being built (${vers}). Please update the PV variable to match the kernel source or set KERNEL_VERSION_SANITY_SKIP=\"1\" in your recipe."
-	fi
-	exit 0
-}
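A minimal sketch of the escape hatch referenced in the failure message above: a recipe whose PV legitimately does not follow the VERSION.PATCHLEVEL[.SUBLEVEL]EXTRAVERSION string computed from the Makefile can disable the check in the kernel recipe with

    KERNEL_VERSION_SANITY_SKIP = "1"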
-
-addtask shared_workdir after do_compile before do_compile_kernelmodules
-addtask shared_workdir_setscene
-
-do_shared_workdir_setscene () {
-	exit 1
-}
-
-emit_depmod_pkgdata() {
-	# Stash data for depmod
-	install -d ${PKGDESTWORK}/${KERNEL_PACKAGE_NAME}-depmod/
-	echo "${KERNEL_VERSION}" > ${PKGDESTWORK}/${KERNEL_PACKAGE_NAME}-depmod/${KERNEL_PACKAGE_NAME}-abiversion
-	cp ${B}/System.map ${PKGDESTWORK}/${KERNEL_PACKAGE_NAME}-depmod/System.map-${KERNEL_VERSION}
-}
-
-PACKAGEFUNCS += "emit_depmod_pkgdata"
-
-do_shared_workdir[cleandirs] += " ${STAGING_KERNEL_BUILDDIR}"
-do_shared_workdir () {
-	cd ${B}
-
-	kerneldir=${STAGING_KERNEL_BUILDDIR}
-	install -d $kerneldir
-
-	#
-	# Store the kernel version in sysroots for module-base.bbclass
-	#
-
-	echo "${KERNEL_VERSION}" > $kerneldir/${KERNEL_PACKAGE_NAME}-abiversion
-
-	# Copy files required for module builds
-	cp System.map $kerneldir/System.map-${KERNEL_VERSION}
-	[ -e Module.symvers ] && cp Module.symvers $kerneldir/
-	cp .config $kerneldir/
-	mkdir -p $kerneldir/include/config
-	cp include/config/kernel.release $kerneldir/include/config/kernel.release
-	if [ -e certs/signing_key.x509 ]; then
-		# The signing_key.* files are stored in the certs/ dir in
-		# newer Linux kernels
-		mkdir -p $kerneldir/certs
-		cp certs/signing_key.* $kerneldir/certs/
-	elif [ -e signing_key.priv ]; then
-		cp signing_key.* $kerneldir/
-	fi
-
-	# We can also copy over all the generated files and avoid special cases
-	# like version.h, but we've opted to keep this small until file creep starts
-	# to happen
-	if [ -e include/linux/version.h ]; then
-		mkdir -p $kerneldir/include/linux
-		cp include/linux/version.h $kerneldir/include/linux/version.h
-	fi
-
-	# As of Linux kernel version 3.0.1, the clean target removes
-	# arch/powerpc/lib/crtsavres.o which is present in
-	# KBUILD_LDFLAGS_MODULE, making it required to build external modules.
-	if [ ${ARCH} = "powerpc" ]; then
-		if [ -e arch/powerpc/lib/crtsavres.o ]; then
-			mkdir -p $kerneldir/arch/powerpc/lib/
-			cp arch/powerpc/lib/crtsavres.o $kerneldir/arch/powerpc/lib/crtsavres.o
-		fi
-	fi
-
-	if [ -d include/generated ]; then
-		mkdir -p $kerneldir/include/generated/
-		cp -fR include/generated/* $kerneldir/include/generated/
-	fi
-
-	if [ -d arch/${ARCH}/include/generated ]; then
-		mkdir -p $kerneldir/arch/${ARCH}/include/generated/
-		cp -fR arch/${ARCH}/include/generated/* $kerneldir/arch/${ARCH}/include/generated/
-	fi
-
-	if (grep -q -i -e '^CONFIG_UNWINDER_ORC=y$' $kerneldir/.config); then
-		# With CONFIG_UNWINDER_ORC (the default in 4.14), objtool is required for
-		# out-of-tree modules to be able to generate object files.
-		if [ -x tools/objtool/objtool ]; then
-			mkdir -p ${kerneldir}/tools/objtool
-			cp tools/objtool/objtool ${kerneldir}/tools/objtool/
-		fi
-	fi
-}
-
-# We don't need to stage anything here; in particular not the modules/firmware, since those would clash with linux-firmware
-sysroot_stage_all () {
-	:
-}
-
-KERNEL_CONFIG_COMMAND ?= "oe_runmake_call -C ${S} CC="${KERNEL_CC}" LD="${KERNEL_LD}" O=${B} olddefconfig || oe_runmake -C ${S} O=${B} CC="${KERNEL_CC}" LD="${KERNEL_LD}" oldnoconfig"
-
-python check_oldest_kernel() {
-    oldest_kernel = d.getVar('OLDEST_KERNEL')
-    kernel_version = d.getVar('KERNEL_VERSION')
-    tclibc = d.getVar('TCLIBC')
-    if tclibc == 'glibc':
-        kernel_version = kernel_version.split('-', 1)[0]
-        if oldest_kernel and kernel_version:
-            if bb.utils.vercmp_string(kernel_version, oldest_kernel) < 0:
-                bb.warn('%s: OLDEST_KERNEL is "%s" but the version of the kernel you are building is "%s" - therefore %s as built may not be compatible with this kernel. Either set OLDEST_KERNEL to an older version, or build a newer kernel.' % (d.getVar('PN'), oldest_kernel, kernel_version, tclibc))
-}
-
-check_oldest_kernel[vardepsexclude] += "OLDEST_KERNEL KERNEL_VERSION"
-do_configure[prefuncs] += "check_oldest_kernel"
-
-kernel_do_configure() {
-	# fixes extra + in /lib/modules/2.6.37+
-	# $ scripts/setlocalversion . => +
-	# $ make kernelversion => 2.6.37
-	# $ make kernelrelease => 2.6.37+
-	touch ${B}/.scmversion ${S}/.scmversion
-
-	if [ "${S}" != "${B}" ] && [ -f "${S}/.config" ] && [ ! -f "${B}/.config" ]; then
-		mv "${S}/.config" "${B}/.config"
-	fi
-
-	# Copy defconfig to .config if .config does not exist. This allows
-	# recipes to manage the .config themselves in do_configure:prepend().
-	if [ -f "${WORKDIR}/defconfig" ] && [ ! -f "${B}/.config" ]; then
-		cp "${WORKDIR}/defconfig" "${B}/.config"
-	fi
-
-	${KERNEL_CONFIG_COMMAND}
-}
-
-do_savedefconfig() {
-	bbplain "Saving defconfig to:\n${B}/defconfig"
-	oe_runmake -C ${B} LD='${KERNEL_LD}' savedefconfig
-}
-do_savedefconfig[nostamp] = "1"
-addtask savedefconfig after do_configure
-
-inherit cml1
-
-KCONFIG_CONFIG_COMMAND:append = " PAHOLE=false LD='${KERNEL_LD}' HOSTLDFLAGS='${BUILD_LDFLAGS}'"
-
-EXPORT_FUNCTIONS do_compile do_transform_kernel do_transform_bundled_initramfs do_install do_configure
-
-# kernel-base becomes kernel-${KERNEL_VERSION}
-# kernel-image becomes kernel-image-${KERNEL_VERSION}
-PACKAGES = "${KERNEL_PACKAGE_NAME} ${KERNEL_PACKAGE_NAME}-base ${KERNEL_PACKAGE_NAME}-vmlinux ${KERNEL_PACKAGE_NAME}-image ${KERNEL_PACKAGE_NAME}-dev ${KERNEL_PACKAGE_NAME}-modules ${KERNEL_PACKAGE_NAME}-dbg"
-FILES:${PN} = ""
-FILES:${KERNEL_PACKAGE_NAME}-base = "${nonarch_base_libdir}/modules/${KERNEL_VERSION}/modules.order ${nonarch_base_libdir}/modules/${KERNEL_VERSION}/modules.builtin ${nonarch_base_libdir}/modules/${KERNEL_VERSION}/modules.builtin.modinfo"
-FILES:${KERNEL_PACKAGE_NAME}-image = ""
-FILES:${KERNEL_PACKAGE_NAME}-dev = "/${KERNEL_IMAGEDEST}/System.map* /${KERNEL_IMAGEDEST}/Module.symvers* /${KERNEL_IMAGEDEST}/config* ${KERNEL_SRC_PATH} ${nonarch_base_libdir}/modules/${KERNEL_VERSION}/build"
-FILES:${KERNEL_PACKAGE_NAME}-vmlinux = "/${KERNEL_IMAGEDEST}/vmlinux-${KERNEL_VERSION_NAME}"
-FILES:${KERNEL_PACKAGE_NAME}-modules = ""
-FILES:${KERNEL_PACKAGE_NAME}-dbg = "/usr/lib/debug /usr/src/debug"
-RDEPENDS:${KERNEL_PACKAGE_NAME} = "${KERNEL_PACKAGE_NAME}-base (= ${EXTENDPKGV})"
-# Allow machines to override this dependency if kernel image files are
-# not wanted in images as standard
-RRECOMMENDS:${KERNEL_PACKAGE_NAME}-base ?= "${KERNEL_PACKAGE_NAME}-image (= ${EXTENDPKGV})"
-PKG:${KERNEL_PACKAGE_NAME}-image = "${KERNEL_PACKAGE_NAME}-image-${@legitimize_package_name(d.getVar('KERNEL_VERSION'))}"
-RDEPENDS:${KERNEL_PACKAGE_NAME}-image += "${@oe.utils.conditional('KERNEL_IMAGETYPE', 'vmlinux', '${KERNEL_PACKAGE_NAME}-vmlinux (= ${EXTENDPKGV})', '', d)}"
-PKG:${KERNEL_PACKAGE_NAME}-base = "${KERNEL_PACKAGE_NAME}-${@legitimize_package_name(d.getVar('KERNEL_VERSION'))}"
-RPROVIDES:${KERNEL_PACKAGE_NAME}-base += "${KERNEL_PACKAGE_NAME}-${KERNEL_VERSION}"
-ALLOW_EMPTY:${KERNEL_PACKAGE_NAME} = "1"
-ALLOW_EMPTY:${KERNEL_PACKAGE_NAME}-base = "1"
-ALLOW_EMPTY:${KERNEL_PACKAGE_NAME}-image = "1"
-ALLOW_EMPTY:${KERNEL_PACKAGE_NAME}-modules = "1"
-DESCRIPTION:${KERNEL_PACKAGE_NAME}-modules = "Kernel modules meta package"
-
-pkg_postinst:${KERNEL_PACKAGE_NAME}-base () {
-	if [ ! -e "$D/lib/modules/${KERNEL_VERSION}" ]; then
-		mkdir -p $D/lib/modules/${KERNEL_VERSION}
-	fi
-	if [ -n "$D" ]; then
-		depmodwrapper -a -b $D ${KERNEL_VERSION}
-	else
-		depmod -a ${KERNEL_VERSION}
-	fi
-}
-
-PACKAGESPLITFUNCS:prepend = "split_kernel_packages "
-
-python split_kernel_packages () {
-    do_split_packages(d, root='${nonarch_base_libdir}/firmware', file_regex=r'^(.*)\.(bin|fw|cis|csp|dsp)$', output_pattern='${KERNEL_PACKAGE_NAME}-firmware-%s', description='Firmware for %s', recursive=True, extra_depends='')
-}
-
-# Many scripts want to look in arch/$arch/boot for the bootable
-# image. This poses a problem for vmlinux and vmlinuz based
-# booting. This task arranges to have vmlinux and vmlinuz appear
-# in the normalized directory location.
-do_kernel_link_images() {
-	if [ ! -d "${B}/arch/${ARCH}/boot" ]; then
-		mkdir ${B}/arch/${ARCH}/boot
-	fi
-	cd ${B}/arch/${ARCH}/boot
-	ln -sf ../../../vmlinux
-	if [ -f ../../../vmlinuz ]; then
-		ln -sf ../../../vmlinuz
-	fi
-	if [ -f ../../../vmlinuz.bin ]; then
-		ln -sf ../../../vmlinuz.bin
-	fi
-	if [ -f ../../../vmlinux.64 ]; then
-		ln -sf ../../../vmlinux.64
-	fi
-}
-addtask kernel_link_images after do_compile before do_strip
-
-python do_strip() {
-    import shutil
-
-    strip = d.getVar('STRIP')
-    extra_sections = d.getVar('KERNEL_IMAGE_STRIP_EXTRA_SECTIONS')
-    kernel_image = d.getVar('B') + "/" + d.getVar('KERNEL_OUTPUT_DIR') + "/vmlinux"
-
-    if (extra_sections and kernel_image.find(d.getVar('KERNEL_IMAGEDEST') + '/vmlinux') != -1):
-        kernel_image_stripped = kernel_image + ".stripped"
-        shutil.copy2(kernel_image, kernel_image_stripped)
-        oe.package.runstrip((kernel_image_stripped, 8, strip, extra_sections))
-        bb.debug(1, "KERNEL_IMAGE_STRIP_EXTRA_SECTIONS is set, stripping sections: " + \
-            extra_sections)
-}
-do_strip[dirs] = "${B}"
-
-addtask strip before do_sizecheck after do_kernel_link_images
-
-# Support checking the kernel size since some kernels need to reside in partitions
-# with a fixed length or there is a limit in transferring the kernel to memory.
-# If more than one image type is enabled, warn on any that don't fit but only fail
-# if none fit.
-do_sizecheck() {
-	if [ ! -z "${KERNEL_IMAGE_MAXSIZE}" ]; then
-		invalid=`echo ${KERNEL_IMAGE_MAXSIZE} | sed 's/[0-9]//g'`
-		if [ -n "$invalid" ]; then
-			die "Invalid KERNEL_IMAGE_MAXSIZE: ${KERNEL_IMAGE_MAXSIZE}, should be an integer (the unit is kilobytes)"
-		fi
-		at_least_one_fits=
-		for imageType in ${KERNEL_IMAGETYPES} ; do
-			size=`du -ks ${B}/${KERNEL_OUTPUT_DIR}/$imageType | awk '{print $1}'`
-			if [ $size -gt ${KERNEL_IMAGE_MAXSIZE} ]; then
-				bbwarn "This kernel $imageType (size=$size(K) > ${KERNEL_IMAGE_MAXSIZE}(K)) is too big for your device."
-			else
-				at_least_one_fits=y
-			fi
-		done
-		if [ -z "$at_least_one_fits" ]; then
-			die "All kernel images are too big for your device. Please reduce the size of the kernel by making more of it modular."
-		fi
-	fi
-}
-do_sizecheck[dirs] = "${B}"
-
-addtask sizecheck before do_install after do_strip
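An illustrative machine configuration entry for the size check above; the 8192 figure is an assumption, and the unit is kilobytes:

    KERNEL_IMAGE_MAXSIZE = "8192"

With several image types enabled, any type over the limit only produces a warning, and the build fails only if none of them fit.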
-
-inherit kernel-artifact-names
-
-kernel_do_deploy() {
-	deployDir="${DEPLOYDIR}"
-	if [ -n "${KERNEL_DEPLOYSUBDIR}" ]; then
-		deployDir="${DEPLOYDIR}/${KERNEL_DEPLOYSUBDIR}"
-		mkdir "$deployDir"
-	fi
-
-	for imageType in ${KERNEL_IMAGETYPES} ; do
-		baseName=$imageType-${KERNEL_IMAGE_NAME}
-
-		if [ -s ${KERNEL_OUTPUT_DIR}/$imageType.stripped ] ; then
-			install -m 0644 ${KERNEL_OUTPUT_DIR}/$imageType.stripped $deployDir/$baseName${KERNEL_IMAGE_BIN_EXT}
-		else
-			install -m 0644 ${KERNEL_OUTPUT_DIR}/$imageType $deployDir/$baseName${KERNEL_IMAGE_BIN_EXT}
-		fi
-		if [ -n "${KERNEL_IMAGE_LINK_NAME}" ] ; then
-			ln -sf $baseName${KERNEL_IMAGE_BIN_EXT} $deployDir/$imageType-${KERNEL_IMAGE_LINK_NAME}${KERNEL_IMAGE_BIN_EXT}
-		fi
-		if [ "${KERNEL_IMAGETYPE_SYMLINK}" = "1" ] ; then
-			ln -sf $baseName${KERNEL_IMAGE_BIN_EXT} $deployDir/$imageType
-		fi
-	done
-
-	if [ ${MODULE_TARBALL_DEPLOY} = "1" ] && (grep -q -i -e '^CONFIG_MODULES=y$' .config); then
-		mkdir -p ${D}${root_prefix}/lib
-		if [ -n "${SOURCE_DATE_EPOCH}" ]; then
-			TAR_ARGS="--sort=name --clamp-mtime --mtime=@${SOURCE_DATE_EPOCH}"
-		else
-			TAR_ARGS=""
-		fi
-		TAR_ARGS="$TAR_ARGS --owner=0 --group=0"
-		tar $TAR_ARGS -cv -C ${D}${root_prefix} lib | gzip -9n > $deployDir/modules-${MODULE_TARBALL_NAME}.tgz
-
-		if [ -n "${MODULE_TARBALL_LINK_NAME}" ] ; then
-			ln -sf modules-${MODULE_TARBALL_NAME}.tgz $deployDir/modules-${MODULE_TARBALL_LINK_NAME}.tgz
-		fi
-	fi
-
-	if [ ! -z "${INITRAMFS_IMAGE}" -a x"${INITRAMFS_IMAGE_BUNDLE}" = x1 ]; then
-		for imageType in ${KERNEL_IMAGETYPES} ; do
-			if [ "$imageType" = "fitImage" ] ; then
-				continue
-			fi
-			initramfsBaseName=$imageType-${INITRAMFS_NAME}
-			install -m 0644 ${KERNEL_OUTPUT_DIR}/$imageType.initramfs $deployDir/$initramfsBaseName${KERNEL_IMAGE_BIN_EXT}
-			if [ -n "${INITRAMFS_LINK_NAME}" ] ; then
-				ln -sf $initramfsBaseName${KERNEL_IMAGE_BIN_EXT} $deployDir/$imageType-${INITRAMFS_LINK_NAME}${KERNEL_IMAGE_BIN_EXT}
-			fi
-		done
-	fi
-}
-
-# We deploy to filenames that include PKGV and PKGR, so read the saved data to
-# ensure we get the right values for both
-do_deploy[prefuncs] += "read_subpackage_metadata"
-
-addtask deploy after do_populate_sysroot do_packagedata
-
-EXPORT_FUNCTIONS do_deploy
-
-# Add Device Tree support
-inherit kernel-devicetree
diff --git a/poky/meta/classes/kernelsrc.bbclass b/poky/meta/classes/kernelsrc.bbclass
deleted file mode 100644
index a951ba3..0000000
--- a/poky/meta/classes/kernelsrc.bbclass
+++ /dev/null
@@ -1,10 +0,0 @@
-S = "${STAGING_KERNEL_DIR}"
-deltask do_fetch
-deltask do_unpack
-do_patch[depends] += "virtual/kernel:do_shared_workdir"
-do_patch[noexec] = "1"
-do_package[depends] += "virtual/kernel:do_populate_sysroot"
-KERNEL_VERSION = "${@get_kernelversion_file("${STAGING_KERNEL_BUILDDIR}")}"
-
-inherit linux-kernel-base
-
diff --git a/poky/meta/classes/lib_package.bbclass b/poky/meta/classes/lib_package.bbclass
deleted file mode 100644
index 8849f59..0000000
--- a/poky/meta/classes/lib_package.bbclass
+++ /dev/null
@@ -1,7 +0,0 @@
-#
-# ${PN}-bin is defined in bitbake.conf
-#
-# We need to allow the other packages to be greedy with what they
-# want out of /usr/bin and /usr/sbin before ${PN}-bin gets greedy.
-# 
-PACKAGE_BEFORE_PN = "${PN}-bin"
diff --git a/poky/meta/classes/libc-package.bbclass b/poky/meta/classes/libc-package.bbclass
deleted file mode 100644
index 13ef8cd..0000000
--- a/poky/meta/classes/libc-package.bbclass
+++ /dev/null
@@ -1,384 +0,0 @@
-#
-# This class knows how to package up [e]glibc. It's shared since prebuilt binary toolchains
-# may need packaging and it's pointless to duplicate this code.
-#
-# Caller should set GLIBC_INTERNAL_USE_BINARY_LOCALE to one of:
-#  "compile" - Use QEMU to generate the binary locale files
-#  "precompiled" - The binary locale files are pregenerated and already present
-#  "ondevice" - The device will build the locale files upon first boot through the postinst
-
-GLIBC_INTERNAL_USE_BINARY_LOCALE ?= "ondevice"
-
-GLIBC_SPLIT_LC_PACKAGES ?= "0"
-
-python __anonymous () {
-    enabled = d.getVar("ENABLE_BINARY_LOCALE_GENERATION")
-
-    pn = d.getVar("PN")
-    if pn.endswith("-initial"):
-        enabled = False
-
-    if enabled and int(enabled):
-        import re
-
-        target_arch = d.getVar("TARGET_ARCH")
-        binary_arches = d.getVar("BINARY_LOCALE_ARCHES") or ""
-        use_cross_localedef = d.getVar("LOCALE_GENERATION_WITH_CROSS-LOCALEDEF") or ""
-
-        for regexp in binary_arches.split(" "):
-            r = re.compile(regexp)
-
-            if r.match(target_arch):
-                depends = d.getVar("DEPENDS")
-                if use_cross_localedef == "1" :
-                    depends = "%s cross-localedef-native" % depends
-                else:
-                    depends = "%s qemu-native" % depends
-                d.setVar("DEPENDS", depends)
-                d.setVar("GLIBC_INTERNAL_USE_BINARY_LOCALE", "compile")
-                break
-}
-
-# Try to fix the compile failure when charsets/locales/locale-code are disabled
-PACKAGE_NO_GCONV ?= "0"
-
-OVERRIDES:append = ":${TARGET_ARCH}-${TARGET_OS}"
-
-locale_base_postinst_ontarget() {
-localedef --inputfile=${datadir}/i18n/locales/%s --charmap=%s %s
-}
-
-locale_base_postrm() {
-#!/bin/sh
-localedef --delete-from-archive --inputfile=${datadir}/locales/%s --charmap=%s %s
-}
-
-LOCALETREESRC ?= "${PKGD}"
-
-do_prep_locale_tree() {
-	treedir=${WORKDIR}/locale-tree
-	rm -rf $treedir
-	mkdir -p $treedir/${base_bindir} $treedir/${base_libdir} $treedir/${datadir} $treedir/${localedir}
-	tar -cf - -C ${LOCALETREESRC}${datadir} -p i18n | tar -xf - -C $treedir/${datadir}
-	# decompress the charmaps to avoid parsing errors
-	for i in $treedir/${datadir}/i18n/charmaps/*gz; do 
-		gunzip $i
-	done
-	# The extract pattern "./l*.so*" is carefully selected so that it will
-	# match ld*.so and lib*.so*, but not any files in the gconv directory
-	# (if it exists). This makes sure we only unpack the files we need.
-	# This is important in case usrmerge is set in DISTRO_FEATURES, which
-	# means ${base_libdir} == ${libdir}.
-	tar -cf - -C ${LOCALETREESRC}${base_libdir} -p . | tar -xf - -C $treedir/${base_libdir} --wildcards './l*.so*'
-	if [ -f ${STAGING_LIBDIR_NATIVE}/libgcc_s.* ]; then
-		tar -cf - -C ${STAGING_LIBDIR_NATIVE} -p libgcc_s.* | tar -xf - -C $treedir/${base_libdir}
-	fi
-	install -m 0755 ${LOCALETREESRC}${bindir}/localedef $treedir/${base_bindir}
-}
-
-do_collect_bins_from_locale_tree() {
-	treedir=${WORKDIR}/locale-tree
-
-	parent=$(dirname ${localedir})
-	mkdir -p ${PKGD}/$parent
-	tar -cf - -C $treedir/$parent -p $(basename ${localedir}) | tar -xf - -C ${PKGD}$parent
-
-	# Finalize tree by changing all duplicate files into hard links
-	cross-localedef-hardlink -c -v ${WORKDIR}/locale-tree
-}
-
-inherit qemu
-
-python package_do_split_gconvs () {
-    import re
-    if (d.getVar('PACKAGE_NO_GCONV') == '1'):
-        bb.note("package requested not splitting gconvs")
-        return
-
-    if not d.getVar('PACKAGES'):
-        return
-
-    mlprefix = d.getVar("MLPREFIX") or ""
-
-    bpn = d.getVar('BPN')
-    libdir = d.getVar('libdir')
-    if not libdir:
-        bb.error("libdir not defined")
-        return
-    datadir = d.getVar('datadir')
-    if not datadir:
-        bb.error("datadir not defined")
-        return
-
-    gconv_libdir = oe.path.join(libdir, "gconv")
-    charmap_dir = oe.path.join(datadir, "i18n", "charmaps")
-    locales_dir = oe.path.join(datadir, "i18n", "locales")
-    binary_locales_dir = d.getVar('localedir')
-
-    def calc_gconv_deps(fn, pkg, file_regex, output_pattern, group):
-        deps = []
-        f = open(fn, "rb")
-        c_re = re.compile(r'^copy "(.*)"')
-        i_re = re.compile(r'^include "(\w+)".*')
-        for l in f.readlines():
-            l = l.decode("latin-1")
-            m = c_re.match(l) or i_re.match(l)
-            if m:
-                dp = legitimize_package_name('%s%s-gconv-%s' % (mlprefix, bpn, m.group(1)))
-                if not dp in deps:
-                    deps.append(dp)
-        f.close()
-        if deps != []:
-            d.setVar('RDEPENDS:%s' % pkg, " ".join(deps))
-        if bpn != 'glibc':
-            d.setVar('RPROVIDES:%s' % pkg, pkg.replace(bpn, 'glibc'))
-
-    do_split_packages(d, gconv_libdir, file_regex=r'^(.*)\.so$', output_pattern=bpn+'-gconv-%s', \
-        description='gconv module for character set %s', hook=calc_gconv_deps, \
-        extra_depends=bpn+'-gconv')
-
-    def calc_charmap_deps(fn, pkg, file_regex, output_pattern, group):
-        deps = []
-        f = open(fn, "rb")
-        c_re = re.compile(r'^copy "(.*)"')
-        i_re = re.compile(r'^include "(\w+)".*')
-        for l in f.readlines():
-            l = l.decode("latin-1")
-            m = c_re.match(l) or i_re.match(l)
-            if m:
-                dp = legitimize_package_name('%s%s-charmap-%s' % (mlprefix, bpn, m.group(1)))
-                if not dp in deps:
-                    deps.append(dp)
-        f.close()
-        if deps != []:
-            d.setVar('RDEPENDS:%s' % pkg, " ".join(deps))
-        if bpn != 'glibc':
-            d.setVar('RPROVIDES:%s' % pkg, pkg.replace(bpn, 'glibc'))
-
-    do_split_packages(d, charmap_dir, file_regex=r'^(.*)\.gz$', output_pattern=bpn+'-charmap-%s', \
-        description='character map for %s encoding', hook=calc_charmap_deps, extra_depends='')
-
-    def calc_locale_deps(fn, pkg, file_regex, output_pattern, group):
-        deps = []
-        f = open(fn, "rb")
-        c_re = re.compile(r'^copy "(.*)"')
-        i_re = re.compile(r'^include "(\w+)".*')
-        for l in f.readlines():
-            l = l.decode("latin-1")
-            m = c_re.match(l) or i_re.match(l)
-            if m:
-                dp = legitimize_package_name(mlprefix+bpn+'-localedata-%s' % m.group(1))
-                if not dp in deps:
-                    deps.append(dp)
-        f.close()
-        if deps != []:
-            d.setVar('RDEPENDS:%s' % pkg, " ".join(deps))
-        if bpn != 'glibc':
-            d.setVar('RPROVIDES:%s' % pkg, pkg.replace(bpn, 'glibc'))
-
-    do_split_packages(d, locales_dir, file_regex=r'(.*)', output_pattern=bpn+'-localedata-%s', \
-        description='locale definition for %s', hook=calc_locale_deps, extra_depends='')
-    d.setVar('PACKAGES', d.getVar('PACKAGES', False) + ' ' + d.getVar('MLPREFIX', False) + bpn + '-gconv')
-
-    use_bin = d.getVar("GLIBC_INTERNAL_USE_BINARY_LOCALE")
-
-    dot_re = re.compile(r"(.*)\.(.*)")
-
-    # Read in supported locales and associated encodings
-    supported = {}
-    with open(oe.path.join(d.getVar('WORKDIR'), "SUPPORTED")) as f:
-        for line in f.readlines():
-            try:
-                locale, charset = line.rstrip().split()
-            except ValueError:
-                continue
-            supported[locale] = charset
-
-    # The GLIBC_GENERATE_LOCALES variable specifies which locales are to be generated. Empty or "all" means all locales.
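-    # Illustrative example (not part of this class): a distro or local.conf that only
-    # needs a few locales might set something like
-    #   GLIBC_GENERATE_LOCALES = "en_US.UTF-8 en_GB.UTF-8 de_DE.UTF-8"
-    # while leaving the variable unset (or set to "all") keeps every locale listed
-    # in the SUPPORTED file.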
-    to_generate = d.getVar('GLIBC_GENERATE_LOCALES')
-    if not to_generate or to_generate == 'all':
-        to_generate = sorted(supported.keys())
-    else:
-        to_generate = to_generate.split()
-        for locale in to_generate:
-            if locale not in supported:
-                if '.' in locale:
-                    charset = locale.split('.')[1]
-                else:
-                    charset = 'UTF-8'
-                    bb.warn("Unsupported locale '%s', assuming encoding '%s'" % (locale, charset))
-                supported[locale] = charset
-
-    def output_locale_source(name, pkgname, locale, encoding):
-        d.setVar('RDEPENDS:%s' % pkgname, '%slocaledef %s-localedata-%s %s-charmap-%s' % \
-        (mlprefix, mlprefix+bpn, legitimize_package_name(locale), mlprefix+bpn, legitimize_package_name(encoding)))
-        d.setVar('pkg_postinst_ontarget:%s' % pkgname, d.getVar('locale_base_postinst_ontarget') \
-        % (locale, encoding, locale))
-        d.setVar('pkg_postrm:%s' % pkgname, d.getVar('locale_base_postrm') % \
-        (locale, encoding, locale))
-
-    def output_locale_binary_rdepends(name, pkgname, locale, encoding):
-        dep = legitimize_package_name('%s-binary-localedata-%s' % (bpn, name))
-        lcsplit = d.getVar('GLIBC_SPLIT_LC_PACKAGES')
-        if lcsplit and int(lcsplit):
-            d.appendVar('PACKAGES', ' ' + dep)
-            d.setVar('ALLOW_EMPTY:%s' % dep, '1')
-        d.setVar('RDEPENDS:%s' % pkgname, mlprefix + dep)
-
-    commands = {}
-
-    def output_locale_binary(name, pkgname, locale, encoding):
-        treedir = oe.path.join(d.getVar("WORKDIR"), "locale-tree")
-        ldlibdir = oe.path.join(treedir, d.getVar("base_libdir"))
-        path = d.getVar("PATH")
-        i18npath = oe.path.join(treedir, datadir, "i18n")
-        gconvpath = oe.path.join(treedir, "iconvdata")
-        outputpath = oe.path.join(treedir, binary_locales_dir)
-
-        use_cross_localedef = d.getVar("LOCALE_GENERATION_WITH_CROSS-LOCALEDEF") or "0"
-        if use_cross_localedef == "1":
-            target_arch = d.getVar('TARGET_ARCH')
-            locale_arch_options = { \
-                "arc":     " --uint32-align=4 --little-endian ", \
-                "arceb":   " --uint32-align=4 --big-endian ",    \
-                "arm":     " --uint32-align=4 --little-endian ", \
-                "armeb":   " --uint32-align=4 --big-endian ",    \
-                "aarch64": " --uint32-align=4 --little-endian ",    \
-                "aarch64_be": " --uint32-align=4 --big-endian ",    \
-                "sh4":     " --uint32-align=4 --big-endian ",    \
-                "powerpc": " --uint32-align=4 --big-endian ",    \
-                "powerpc64": " --uint32-align=4 --big-endian ",  \
-                "powerpc64le": " --uint32-align=4 --little-endian ",  \
-                "mips":    " --uint32-align=4 --big-endian ",    \
-                "mipsisa32r6":    " --uint32-align=4 --big-endian ",    \
-                "mips64":  " --uint32-align=4 --big-endian ",    \
-                "mipsisa64r6":  " --uint32-align=4 --big-endian ",    \
-                "mipsel":  " --uint32-align=4 --little-endian ", \
-                "mipsisa32r6el":  " --uint32-align=4 --little-endian ", \
-                "mips64el":" --uint32-align=4 --little-endian ", \
-                "mipsisa64r6el":" --uint32-align=4 --little-endian ", \
-                "riscv64": " --uint32-align=4 --little-endian ", \
-                "riscv32": " --uint32-align=4 --little-endian ", \
-                "i586":    " --uint32-align=4 --little-endian ", \
-                "i686":    " --uint32-align=4 --little-endian ", \
-                "x86_64":  " --uint32-align=4 --little-endian "  }
-
-            if target_arch in locale_arch_options:
-                localedef_opts = locale_arch_options[target_arch]
-            else:
-                bb.error("locale_arch_options not found for target_arch=" + target_arch)
-                bb.fatal("unknown arch:" + target_arch + " for locale_arch_options")
-
-            localedef_opts += " --force --no-hard-links --no-archive --prefix=%s \
-                --inputfile=%s/%s/i18n/locales/%s --charmap=%s %s/%s" \
-                % (treedir, treedir, datadir, locale, encoding, outputpath, name)
-
-            cmd = "PATH=\"%s\" I18NPATH=\"%s\" GCONV_PATH=\"%s\" cross-localedef %s" % \
-                (path, i18npath, gconvpath, localedef_opts)
-        else: # earlier, slower qemu-based way
-            qemu = qemu_target_binary(d)
-            localedef_opts = "--force --no-hard-links --no-archive --prefix=%s \
-                --inputfile=%s/i18n/locales/%s --charmap=%s %s" \
-                % (treedir, datadir, locale, encoding, name)
-
-            qemu_options = d.getVar('QEMU_OPTIONS')
-
-            cmd = "PSEUDO_RELOADED=YES PATH=\"%s\" I18NPATH=\"%s\" %s -L %s \
-                -E LD_LIBRARY_PATH=%s %s %s${base_bindir}/localedef %s" % \
-                (path, i18npath, qemu, treedir, ldlibdir, qemu_options, treedir, localedef_opts)
-
-        commands["%s/%s" % (outputpath, name)] = cmd
-
-        bb.note("generating locale %s (%s)" % (locale, encoding))
-
-    def output_locale(name, locale, encoding):
-        pkgname = d.getVar('MLPREFIX', False) + 'locale-base-' + legitimize_package_name(name)
-        d.setVar('ALLOW_EMPTY:%s' % pkgname, '1')
-        d.setVar('PACKAGES', '%s %s' % (pkgname, d.getVar('PACKAGES')))
-        rprovides = ' %svirtual-locale-%s' % (mlprefix, legitimize_package_name(name))
-        m = re.match(r"(.*)_(.*)", name)
-        if m:
-            rprovides += ' %svirtual-locale-%s' % (mlprefix, m.group(1))
-        d.setVar('RPROVIDES:%s' % pkgname, rprovides)
-
-        if use_bin == "compile":
-            output_locale_binary_rdepends(name, pkgname, locale, encoding)
-            output_locale_binary(name, pkgname, locale, encoding)
-        elif use_bin == "precompiled":
-            output_locale_binary_rdepends(name, pkgname, locale, encoding)
-        else:
-            output_locale_source(name, pkgname, locale, encoding)
-
-    if use_bin == "compile":
-        bb.note("preparing tree for binary locale generation")
-        bb.build.exec_func("do_prep_locale_tree", d)
-
-    utf8_only = int(d.getVar('LOCALE_UTF8_ONLY') or 0)
-    utf8_is_default = int(d.getVar('LOCALE_UTF8_IS_DEFAULT') or 0)
-
-    encodings = {}
-    for locale in to_generate:
-        charset = supported[locale]
-        if utf8_only and charset != 'UTF-8':
-            continue
-
-        m = dot_re.match(locale)
-        if m:
-            base = m.group(1)
-        else:
-            base = locale
-
-        # Non-precompiled locales may be renamed so that the default
-        # (non-suffixed) encoding is always UTF-8, i.e., instead of en_US and
-        # en_US.UTF-8, we have en_US and en_US.ISO-8859-1. This implicitly
-        # contradicts SUPPORTED.
-        if use_bin == "precompiled" or not utf8_is_default:
-            output_locale(locale, base, charset)
-        else:
-            if charset == 'UTF-8':
-                output_locale(base, base, charset)
-            else:
-                output_locale('%s.%s' % (base, charset), base, charset)
-
-    def metapkg_hook(file, pkg, pattern, format, basename):
-        name = basename.split('/', 1)[0]
-        metapkg = legitimize_package_name('%s-binary-localedata-%s' % (mlprefix+bpn, name))
-        d.appendVar('RDEPENDS:%s' % metapkg, ' ' + pkg)
-
-    if use_bin == "compile":
-        makefile = oe.path.join(d.getVar("WORKDIR"), "locale-tree", "Makefile")
-        with open(makefile, "w") as m:
-            m.write("all: %s\n\n" % " ".join(commands.keys()))
-            total = len(commands)
-            for i, (maketarget, makerecipe) in enumerate(commands.items()):
-                m.write(maketarget + ":\n")
-                m.write("\t@echo 'Progress %d/%d'\n" % (i, total))
-                m.write("\t" + makerecipe + "\n\n")
-        d.setVar("EXTRA_OEMAKE", "-C %s ${PARALLEL_MAKE}" % (os.path.dirname(makefile)))
-        d.setVarFlag("oe_runmake", "progress", r"outof:Progress\s(\d+)/(\d+)")
-        bb.note("Executing binary locale generation makefile")
-        bb.build.exec_func("oe_runmake", d)
-        bb.note("collecting binary locales from locale tree")
-        bb.build.exec_func("do_collect_bins_from_locale_tree", d)
-
-    if use_bin in ('compile', 'precompiled'):
-        lcsplit = d.getVar('GLIBC_SPLIT_LC_PACKAGES')
-        if lcsplit and int(lcsplit):
-            do_split_packages(d, binary_locales_dir, file_regex=r'^(.*/LC_\w+)', \
-                output_pattern=bpn+'-binary-localedata-%s', \
-                description='binary locale definition for %s', recursive=True,
-                hook=metapkg_hook, extra_depends='', allow_dirs=True, match_path=True)
-        else:
-            do_split_packages(d, binary_locales_dir, file_regex=r'(.*)', \
-                output_pattern=bpn+'-binary-localedata-%s', \
-                description='binary locale definition for %s', extra_depends='', allow_dirs=True)
-    else:
-        bb.note("generation of binary locales disabled. this may break i18n!")
-
-}
-
-# We want to do this indirection so that we can safely 'return'
-# from the called function even though we're prepending
-python populate_packages:prepend () {
-    bb.build.exec_func('package_do_split_gconvs', d)
-}
diff --git a/poky/meta/classes/license.bbclass b/poky/meta/classes/license.bbclass
deleted file mode 100644
index 4ebfc4f..0000000
--- a/poky/meta/classes/license.bbclass
+++ /dev/null
@@ -1,420 +0,0 @@
-# Populates LICENSE_DIRECTORY as set in distro config with the license files as set by
-# LIC_FILES_CHKSUM.
-# TODO:
-# - There is a real issue revolving around license naming standards.
-
-LICENSE_DIRECTORY ??= "${DEPLOY_DIR}/licenses"
-LICSSTATEDIR = "${WORKDIR}/license-destdir/"
-
-# Create extra package with license texts and add it to RRECOMMENDS:${PN}
-LICENSE_CREATE_PACKAGE[type] = "boolean"
-LICENSE_CREATE_PACKAGE ??= "0"
-LICENSE_PACKAGE_SUFFIX ??= "-lic"
-LICENSE_FILES_DIRECTORY ??= "${datadir}/licenses/"
-
-addtask populate_lic after do_patch before do_build
-do_populate_lic[dirs] = "${LICSSTATEDIR}/${PN}"
-do_populate_lic[cleandirs] = "${LICSSTATEDIR}"
-
-python do_populate_lic() {
-    """
-    Populate LICENSE_DIRECTORY with licenses.
-    """
-    lic_files_paths = find_license_files(d)
-
-    # The base directory we wrangle licenses to
-    destdir = os.path.join(d.getVar('LICSSTATEDIR'), d.getVar('PN'))
-    copy_license_files(lic_files_paths, destdir)
-    info = get_recipe_info(d)
-    with open(os.path.join(destdir, "recipeinfo"), "w") as f:
-        for key in sorted(info.keys()):
-            f.write("%s: %s\n" % (key, info[key]))
-    oe.qa.exit_if_errors(d)
-}
-
-PSEUDO_IGNORE_PATHS .= ",${@','.join(((d.getVar('COMMON_LICENSE_DIR') or '') + ' ' + (d.getVar('LICENSE_PATH') or '') + ' ' + d.getVar('COREBASE') + '/meta/COPYING').split())}"
-# It would be better to copy them in do_install:append, but find_license_files is python
-python perform_packagecopy:prepend () {
-    enabled = oe.data.typed_value('LICENSE_CREATE_PACKAGE', d)
-    if d.getVar('CLASSOVERRIDE') == 'class-target' and enabled:
-        lic_files_paths = find_license_files(d)
-
-        # LICENSE_FILES_DIRECTORY starts with '/' so os.path.join cannot be used to join D and LICENSE_FILES_DIRECTORY
-        destdir = d.getVar('D') + os.path.join(d.getVar('LICENSE_FILES_DIRECTORY'), d.getVar('PN'))
-        copy_license_files(lic_files_paths, destdir)
-        add_package_and_files(d)
-}
-perform_packagecopy[vardeps] += "LICENSE_CREATE_PACKAGE"
-
-def get_recipe_info(d):
-    info = {}
-    info["PV"] = d.getVar("PV")
-    info["PR"] = d.getVar("PR")
-    info["LICENSE"] = d.getVar("LICENSE")
-    return info
-
-def add_package_and_files(d):
-    packages = d.getVar('PACKAGES')
-    files = d.getVar('LICENSE_FILES_DIRECTORY')
-    pn = d.getVar('PN')
-    pn_lic = "%s%s" % (pn, d.getVar('LICENSE_PACKAGE_SUFFIX', False))
-    if pn_lic in packages.split():
-        bb.warn("%s package already existed in %s." % (pn_lic, pn))
-    else:
-        # first in PACKAGES to be sure that nothing else gets LICENSE_FILES_DIRECTORY
-        d.setVar('PACKAGES', "%s %s" % (pn_lic, packages))
-        d.setVar('FILES:' + pn_lic, files)
-
-def copy_license_files(lic_files_paths, destdir):
-    import shutil
-    import errno
-
-    bb.utils.mkdirhier(destdir)
-    for (basename, path, beginline, endline) in lic_files_paths:
-        try:
-            src = path
-            dst = os.path.join(destdir, basename)
-            if os.path.exists(dst):
-                os.remove(dst)
-            if os.path.islink(src):
-                src = os.path.realpath(src)
-            canlink = os.access(src, os.W_OK) and (os.stat(src).st_dev == os.stat(destdir).st_dev) and beginline is None and endline is None
-            if canlink:
-                try:
-                    os.link(src, dst)
-                except OSError as err:
-                    if err.errno == errno.EXDEV:
-                        # Copy license files if hardlink is not possible even if st_dev is the
-                        # same on source and destination (docker container with device-mapper?)
-                        canlink = False
-                    else:
-                        raise
-                # Only chown if we did hardlink and we're running under pseudo
-                if canlink and os.environ.get('PSEUDO_DISABLED') == '0':
-                    os.chown(dst,0,0)
-            if not canlink:
-                begin_idx = max(0, int(beginline) - 1) if beginline is not None else None
-                end_idx = max(0, int(endline)) if endline is not None else None
-                if begin_idx is None and end_idx is None:
-                    shutil.copyfile(src, dst)
-                else:
-                    with open(src, 'rb') as src_f:
-                        with open(dst, 'wb') as dst_f:
-                            dst_f.write(b''.join(src_f.readlines()[begin_idx:end_idx]))
-
-        except Exception as e:
-            bb.warn("Could not copy license file %s to %s: %s" % (src, dst, e))
-
-def find_license_files(d):
-    """
-    Creates list of files used in LIC_FILES_CHKSUM and generic LICENSE files.
-    """
-    import shutil
-    import oe.license
-    from collections import defaultdict, OrderedDict
-
-    # All the license files for the package
-    lic_files = d.getVar('LIC_FILES_CHKSUM') or ""
-    pn = d.getVar('PN')
-    # The license files are located in ${S}, as referenced by LIC_FILES_CHKSUM.
-    srcdir = d.getVar('S')
-    # Directory we store the generic licenses as set in the distro configuration
-    generic_directory = d.getVar('COMMON_LICENSE_DIR')
-    # List of basename, path tuples
-    lic_files_paths = []
-    # hash to keep track of non-generic license mappings
-    non_generic_lics = {}
-    # Entries from LIC_FILES_CHKSUM
-    lic_chksums = {}
-    license_source_dirs = []
-    license_source_dirs.append(generic_directory)
-    try:
-        additional_lic_dirs = d.getVar('LICENSE_PATH').split()
-        for lic_dir in additional_lic_dirs:
-            license_source_dirs.append(lic_dir)
-    except:
-        pass
-
-    class FindVisitor(oe.license.LicenseVisitor):
-        def visit_Str(self, node):
-            #
-            # Until we figure out what to do with the two modifiers we support
-            # ("or later" = "+" and "with exceptions" = "*"), just strip out the
-            # modifier and use the base license.
-            find_license(node.s.replace("+", "").replace("*", ""))
-            self.generic_visit(node)
-
-        def visit_Constant(self, node):
-            find_license(node.value.replace("+", "").replace("*", ""))
-            self.generic_visit(node)
-
-    def find_license(license_type):
-        try:
-            bb.utils.mkdirhier(gen_lic_dest)
-        except:
-            pass
-        spdx_generic = None
-        license_source = None
-        # If the generic does not exist we need to check to see if there is an SPDX mapping to it,
-        # unless NO_GENERIC_LICENSE is set.
-        for lic_dir in license_source_dirs:
-            if not os.path.isfile(os.path.join(lic_dir, license_type)):
-                if d.getVarFlag('SPDXLICENSEMAP', license_type) != None:
-                    # Great, there is an SPDXLICENSEMAP. We can copy!
-                    bb.debug(1, "We need to use a SPDXLICENSEMAP for %s" % (license_type))
-                    spdx_generic = d.getVarFlag('SPDXLICENSEMAP', license_type)
-                    license_source = lic_dir
-                    break
-            elif os.path.isfile(os.path.join(lic_dir, license_type)):
-                spdx_generic = license_type
-                license_source = lic_dir
-                break
-
-        non_generic_lic = d.getVarFlag('NO_GENERIC_LICENSE', license_type)
-        if spdx_generic and license_source:
-            # We really should copy to generic_ + spdx_generic; however, that ends up messing up the
-            # manifest audit. This should be fixed in emit_pkgdata (or we actually go and fix all the recipes).
-
-            lic_files_paths.append(("generic_" + license_type, os.path.join(license_source, spdx_generic),
-                                    None, None))
-
-            # The user may attempt to use NO_GENERIC_LICENSE for a generic license, which doesn't make
-            # sense and should not be allowed; warn the user in this case.
-            if d.getVarFlag('NO_GENERIC_LICENSE', license_type):
-                oe.qa.handle_error("license-no-generic",
-                    "%s: %s is a generic license, please don't use NO_GENERIC_LICENSE for it." % (pn, license_type), d)
-
-        elif non_generic_lic and non_generic_lic in lic_chksums:
-            # if NO_GENERIC_LICENSE is set, we copy the license files from the fetched source
-            # of the package rather than the license_source_dirs.
-            lic_files_paths.append(("generic_" + license_type,
-                                    os.path.join(srcdir, non_generic_lic), None, None))
-            non_generic_lics[non_generic_lic] = license_type
-        else:
-            # Explicitly avoid the CLOSED license because this isn't generic
-            if license_type != 'CLOSED':
-                # And here is where we warn people that their licenses are lousy
-                oe.qa.handle_error("license-exists",
-                    "%s: No generic license file exists for: %s in any provider" % (pn, license_type), d)
-            pass
-
-    if not generic_directory:
-        bb.fatal("COMMON_LICENSE_DIR is unset. Please set this in your distro config")
-
-    for url in lic_files.split():
-        try:
-            (method, host, path, user, pswd, parm) = bb.fetch.decodeurl(url)
-            if method != "file" or not path:
-                raise bb.fetch.MalformedUrl()
-        except bb.fetch.MalformedUrl:
-            bb.fatal("%s: LIC_FILES_CHKSUM contains an invalid URL:  %s" % (d.getVar('PF'), url))
-        # We want the license filename and path
-        chksum = parm.get('md5', None)
-        beginline = parm.get('beginline')
-        endline = parm.get('endline')
-        lic_chksums[path] = (chksum, beginline, endline)
-
-    v = FindVisitor()
-    try:
-        v.visit_string(d.getVar('LICENSE'))
-    except oe.license.InvalidLicense as exc:
-        bb.fatal('%s: %s' % (d.getVar('PF'), exc))
-    except SyntaxError:
-        oe.qa.handle_error("license-syntax",
-            "%s: Failed to parse it's LICENSE field." % (d.getVar('PF')), d)
-    # Add files from LIC_FILES_CHKSUM to list of license files
-    lic_chksum_paths = defaultdict(OrderedDict)
-    for path, data in sorted(lic_chksums.items()):
-        lic_chksum_paths[os.path.basename(path)][data] = (os.path.join(srcdir, path), data[1], data[2])
-    for basename, files in lic_chksum_paths.items():
-        if len(files) == 1:
-            # Don't copy a LICENSE again if it was already handled as non-generic
-            if basename in non_generic_lics:
-                continue
-            data = list(files.values())[0]
-            lic_files_paths.append(tuple([basename] + list(data)))
-        else:
-            # If there are multiple different license files with identical
-            # basenames we rename them to <file>.0, <file>.1, ...
-            for i, data in enumerate(files.values()):
-                lic_files_paths.append(tuple(["%s.%d" % (basename, i)] + list(data)))
-
-    return lic_files_paths
-
-def return_spdx(d, license):
-    """
-    This function returns the SPDX mapping of a license if it exists.
-    """
-    return d.getVarFlag('SPDXLICENSEMAP', license)
-
-def canonical_license(d, license):
-    """
-    Return the canonical (SPDX) form of the license if available (so GPLv3
-    becomes GPL-3.0-only) or the passed license if there is no canonical form.
-    """
-    return d.getVarFlag('SPDXLICENSEMAP', license) or license
-
-def expand_wildcard_licenses(d, wildcard_licenses):
-    """
-    There are some common wildcard values users may want to use. Support them
-    here.
-    """
-    licenses = set(wildcard_licenses)
-    mapping = {
-        "AGPL-3.0*" : ["AGPL-3.0-only", "AGPL-3.0-or-later"],
-        "GPL-3.0*" : ["GPL-3.0-only", "GPL-3.0-or-later"],
-        "LGPL-3.0*" : ["LGPL-3.0-only", "LGPL-3.0-or-later"],
-    }
-    for k in mapping:
-        if k in wildcard_licenses:
-            licenses.remove(k)
-            for item in mapping[k]:
-                licenses.add(item)
-
-    for l in licenses:
-        if l in oe.license.obsolete_license_list():
-            bb.fatal("Error, %s is an obsolete license, please use an SPDX reference in INCOMPATIBLE_LICENSE" % l)
-        if "*" in l:
-            bb.fatal("Error, %s is an invalid license wildcard entry" % l)
-
-    return list(licenses)
-
-def incompatible_license_contains(license, truevalue, falsevalue, d):
-    license = canonical_license(d, license)
-    bad_licenses = (d.getVar('INCOMPATIBLE_LICENSE') or "").split()
-    bad_licenses = expand_wildcard_licenses(d, bad_licenses)
-    return truevalue if license in bad_licenses else falsevalue
-
-def incompatible_pkg_license(d, dont_want_licenses, license):
-    # Handles an "or" or two license sets provided by
-    # flattened_licenses(), pick one that works if possible.
-    def choose_lic_set(a, b):
-        return a if all(oe.license.license_ok(canonical_license(d, lic),
-                            dont_want_licenses) for lic in a) else b
-
-    try:
-        licenses = oe.license.flattened_licenses(license, choose_lic_set)
-    except oe.license.LicenseError as exc:
-        bb.fatal('%s: %s' % (d.getVar('P'), exc))
-
-    incompatible_lic = []
-    for l in licenses:
-        license = canonical_license(d, l)
-        if not oe.license.license_ok(license, dont_want_licenses):
-            incompatible_lic.append(license)
-
-    return sorted(incompatible_lic)
-
-def incompatible_license(d, dont_want_licenses, package=None):
-    """
-    This function checks if a recipe has only incompatible licenses. It also
-    takes the 'or' operand into consideration. dont_want_licenses should be passed
-    as canonical (SPDX) names.
-    """
-    import oe.license
-    license = d.getVar("LICENSE:%s" % package) if package else None
-    if not license:
-        license = d.getVar('LICENSE')
-
-    return incompatible_pkg_license(d, dont_want_licenses, license)
-
-def check_license_flags(d):
-    """
-    This function checks if a recipe has any LICENSE_FLAGS that
-    aren't acceptable.
-
-    If it does, it returns all of the LICENSE_FLAGS missing from the list
-    of acceptable license flags, or all of the LICENSE_FLAGS if there
-    is no list of acceptable flags.
-
-    If everything is acceptable, it returns None.
-    """
-
-    def license_flag_matches(flag, acceptlist, pn):
-        """
-        Return True if flag matches something in acceptlist, False if not.
-
-        Before we test a flag against the acceptlist, we append _${PN}
-        to it.  We then try to match that string against the
-        acceptlist.  This covers the normal case, where we expect
-        LICENSE_FLAGS to be a simple string like 'commercial', which
-        the user typically matches exactly in the acceptlist by
-        explicitly appending the package name, e.g. 'commercial_foo'.
-        If we fail the match however, we then split the flag across
-        '_' and append each fragment and test until we either match or
-        run out of fragments.
-        """
-        flag_pn = ("%s_%s" % (flag, pn))
-        for candidate in acceptlist:
-            if flag_pn == candidate:
-                return True
-
-        flag_cur = ""
-        flagments = flag_pn.split("_")
-        flagments.pop() # we've already tested the full string
-        for flagment in flagments:
-            if flag_cur:
-                flag_cur += "_"
-            flag_cur += flagment
-            for candidate in acceptlist:
-                if flag_cur == candidate:
-                    return True
-        return False
-
-    def all_license_flags_match(license_flags, acceptlist):
-        """ Return all unmatched flags, None if all flags match """
-        pn = d.getVar('PN')
-        split_acceptlist = acceptlist.split()
-        flags = []
-        for flag in license_flags.split():
-            if not license_flag_matches(flag, split_acceptlist, pn):
-                flags.append(flag)
-        return flags if flags else None
-
-    license_flags = d.getVar('LICENSE_FLAGS')
-    if license_flags:
-        acceptlist = d.getVar('LICENSE_FLAGS_ACCEPTED')
-        if not acceptlist:
-            return license_flags.split()
-        unmatched_flags = all_license_flags_match(license_flags, acceptlist)
-        if unmatched_flags:
-            return unmatched_flags
-    return None
-
-def check_license_format(d):
-    """
-    This function checks if LICENSE is well defined:
-        it validates the operators used between licenses and
-        ensures that license names are separated by operators rather than bare spaces.
-    """
-    pn = d.getVar('PN')
-    licenses = d.getVar('LICENSE')
-    from oe.license import license_operator, license_operator_chars, license_pattern
-
-    elements = list(filter(lambda x: x.strip(), license_operator.split(licenses)))
-    for pos, element in enumerate(elements):
-        if license_pattern.match(element):
-            if pos > 0 and license_pattern.match(elements[pos - 1]):
-                oe.qa.handle_error('license-format',
-                        '%s: LICENSE value "%s" has an invalid format - license names ' \
-                        'must be separated by the following characters to indicate ' \
-                        'the license selection: %s' %
-                        (pn, licenses, license_operator_chars), d)
-        elif not license_operator.match(element):
-            oe.qa.handle_error('license-format',
-                    '%s: LICENSE value "%s" has an invalid separator "%s" that is not ' \
-                    'in the valid list of separators (%s)' %
-                    (pn, licenses, element, license_operator_chars), d)
-
-SSTATETASKS += "do_populate_lic"
-do_populate_lic[sstate-inputdirs] = "${LICSSTATEDIR}"
-do_populate_lic[sstate-outputdirs] = "${LICENSE_DIRECTORY}/"
-
-IMAGE_CLASSES:append = " license_image"
-
-python do_populate_lic_setscene () {
-    sstate_setscene(d)
-}
-addtask do_populate_lic_setscene
diff --git a/poky/meta/classes/license_image.bbclass b/poky/meta/classes/license_image.bbclass
deleted file mode 100644
index 3213ea7..0000000
--- a/poky/meta/classes/license_image.bbclass
+++ /dev/null
@@ -1,289 +0,0 @@
-ROOTFS_LICENSE_DIR = "${IMAGE_ROOTFS}/usr/share/common-licenses"
-
-# This requires LICENSE_CREATE_PACKAGE=1 to work too
-COMPLEMENTARY_GLOB[lic-pkgs] = "*-lic"
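-# Illustrative example (not part of this class): enabling license packages in an
-# image from local.conf might look like
-#   LICENSE_CREATE_PACKAGE = "1"
-#   IMAGE_FEATURES += "lic-pkgs"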
-
-python() {
-    if not oe.data.typed_value('LICENSE_CREATE_PACKAGE', d):
-        features = set(oe.data.typed_value('IMAGE_FEATURES', d))
-        if 'lic-pkgs' in features:
-            bb.error("'lic-pkgs' in IMAGE_FEATURES but LICENSE_CREATE_PACKAGE not enabled to generate -lic packages")
-}
-
-python write_package_manifest() {
-    # Get list of installed packages
-    license_image_dir = d.expand('${LICENSE_DIRECTORY}/${IMAGE_NAME}')
-    bb.utils.mkdirhier(license_image_dir)
-    from oe.rootfs import image_list_installed_packages
-    from oe.utils import format_pkg_list
-
-    pkgs = image_list_installed_packages(d)
-    output = format_pkg_list(pkgs)
-    with open(os.path.join(license_image_dir, 'package.manifest'), "w+") as package_manifest:
-        package_manifest.write(output)
-}
-
-python license_create_manifest() {
-    import oe.packagedata
-    from oe.rootfs import image_list_installed_packages
-
-    build_images_from_feeds = d.getVar('BUILD_IMAGES_FROM_FEEDS')
-    if build_images_from_feeds == "1":
-        return 0
-
-    pkg_dic = {}
-    for pkg in sorted(image_list_installed_packages(d)):
-        pkg_info = os.path.join(d.getVar('PKGDATA_DIR'),
-                                'runtime-reverse', pkg)
-        pkg_name = os.path.basename(os.readlink(pkg_info))
-
-        pkg_dic[pkg_name] = oe.packagedata.read_pkgdatafile(pkg_info)
-        if not "LICENSE" in pkg_dic[pkg_name].keys():
-            pkg_lic_name = "LICENSE:" + pkg_name
-            pkg_dic[pkg_name]["LICENSE"] = pkg_dic[pkg_name][pkg_lic_name]
-
-    rootfs_license_manifest = os.path.join(d.getVar('LICENSE_DIRECTORY'),
-                        d.getVar('IMAGE_NAME'), 'license.manifest')
-    write_license_files(d, rootfs_license_manifest, pkg_dic, rootfs=True)
-}
-
-def write_license_files(d, license_manifest, pkg_dic, rootfs=True):
-    import re
-    import stat
-
-    bad_licenses = (d.getVar("INCOMPATIBLE_LICENSE") or "").split()
-    bad_licenses = expand_wildcard_licenses(d, bad_licenses)
-
-    exceptions = (d.getVar("INCOMPATIBLE_LICENSE_EXCEPTIONS") or "").split()
-    with open(license_manifest, "w") as license_file:
-        for pkg in sorted(pkg_dic):
-            remaining_bad_licenses = oe.license.apply_pkg_license_exception(pkg, bad_licenses, exceptions)
-            incompatible_licenses = incompatible_pkg_license(d, remaining_bad_licenses, pkg_dic[pkg]["LICENSE"])
-            if incompatible_licenses:
-                bb.fatal("Package %s cannot be installed into the image because it has incompatible license(s): %s" %(pkg, ' '.join(incompatible_licenses)))
-            else:
-                incompatible_licenses = incompatible_pkg_license(d, bad_licenses, pkg_dic[pkg]["LICENSE"])
-                if incompatible_licenses:
-                    oe.qa.handle_error('license-incompatible', "Including %s with incompatible license(s) %s into the image, because it has been allowed by exception list." %(pkg, ' '.join(incompatible_licenses)), d)
-            try:
-                (pkg_dic[pkg]["LICENSE"], pkg_dic[pkg]["LICENSES"]) = \
-                    oe.license.manifest_licenses(pkg_dic[pkg]["LICENSE"],
-                    remaining_bad_licenses, canonical_license, d)
-            except oe.license.LicenseError as exc:
-                bb.fatal('%s: %s' % (d.getVar('P'), exc))
-
-            if not "IMAGE_MANIFEST" in pkg_dic[pkg]:
-                # Rootfs manifest
-                license_file.write("PACKAGE NAME: %s\n" % pkg)
-                license_file.write("PACKAGE VERSION: %s\n" % pkg_dic[pkg]["PV"])
-                license_file.write("RECIPE NAME: %s\n" % pkg_dic[pkg]["PN"])
-                license_file.write("LICENSE: %s\n\n" % pkg_dic[pkg]["LICENSE"])
-
-                # If the package doesn't contain any files, that is, its size is 0, the license
-                # isn't relevant as far as the final image is concerned, so doing a license check
-                # doesn't make much sense; skip it.
-                if pkg_dic[pkg]["PKGSIZE:%s" % pkg] == "0":
-                    continue
-            else:
-                # Image manifest
-                license_file.write("RECIPE NAME: %s\n" % pkg_dic[pkg]["PN"])
-                license_file.write("VERSION: %s\n" % pkg_dic[pkg]["PV"])
-                license_file.write("LICENSE: %s\n" % pkg_dic[pkg]["LICENSE"])
-                license_file.write("FILES: %s\n\n" % pkg_dic[pkg]["FILES"])
-
-            for lic in pkg_dic[pkg]["LICENSES"]:
-                lic_file = os.path.join(d.getVar('LICENSE_DIRECTORY'),
-                                        pkg_dic[pkg]["PN"], "generic_%s" % 
-                                        re.sub(r'\+', '', lic))
-                # Explicitly skip the CLOSED license because it isn't generic
-                if lic == "CLOSED":
-                    continue
-
-                if not os.path.exists(lic_file):
-                    oe.qa.handle_error('license-file-missing',
-                                       "The license listed %s was not in the "\
-                                       "licenses collected for recipe %s"
-                                       % (lic, pkg_dic[pkg]["PN"]), d)
-    oe.qa.exit_if_errors(d)
-
-    # Two options here:
-    # - Just copy the manifest
-    # - Copy the manifest and the license directories
-    # With both options set we see a ~0.5 MB increase in core-image-minimal
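-    # Illustrative example (not from this class): both behaviours are opt-in, e.g.
-    #   COPY_LIC_MANIFEST = "1"
-    #   COPY_LIC_DIRS = "1"
-    # in local.conf copies the manifest and the per-package license texts into
-    # ${IMAGE_ROOTFS}/usr/share/common-licenses.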
-    copy_lic_manifest = d.getVar('COPY_LIC_MANIFEST')
-    copy_lic_dirs = d.getVar('COPY_LIC_DIRS')
-    if rootfs and copy_lic_manifest == "1":
-        rootfs_license_dir = d.getVar('ROOTFS_LICENSE_DIR')
-        bb.utils.mkdirhier(rootfs_license_dir)
-        rootfs_license_manifest = os.path.join(rootfs_license_dir,
-                os.path.split(license_manifest)[1])
-        if not os.path.exists(rootfs_license_manifest):
-            oe.path.copyhardlink(license_manifest, rootfs_license_manifest)
-
-        if copy_lic_dirs == "1":
-            for pkg in sorted(pkg_dic):
-                pkg_rootfs_license_dir = os.path.join(rootfs_license_dir, pkg)
-                bb.utils.mkdirhier(pkg_rootfs_license_dir)
-                pkg_license_dir = os.path.join(d.getVar('LICENSE_DIRECTORY'),
-                                            pkg_dic[pkg]["PN"]) 
-
-                pkg_manifest_licenses = [canonical_license(d, lic) \
-                        for lic in pkg_dic[pkg]["LICENSES"]]
-
-                licenses = os.listdir(pkg_license_dir)
-                for lic in licenses:
-                    pkg_license = os.path.join(pkg_license_dir, lic)
-                    pkg_rootfs_license = os.path.join(pkg_rootfs_license_dir, lic)
-
-                    if re.match(r"^generic_.*$", lic):
-                        generic_lic = canonical_license(d,
-                                re.search(r"^generic_(.*)$", lic).group(1))
-
-                        # Do not copy the generic license into the package if it isn't
-                        # declared in the package's LICENSES.
-                        if not re.sub(r'\+$', '', generic_lic) in \
-                                [re.sub(r'\+', '', lic) for lic in \
-                                 pkg_manifest_licenses]:
-                            continue
-
-                        if oe.license.license_ok(generic_lic,
-                                bad_licenses) == False:
-                            continue
-
-                        # Make sure we use only canonical name for the license file
-                        generic_lic_file = "generic_%s" % generic_lic
-                        rootfs_license = os.path.join(rootfs_license_dir, generic_lic_file)
-                        if not os.path.exists(rootfs_license):
-                            oe.path.copyhardlink(pkg_license, rootfs_license)
-
-                        if not os.path.exists(pkg_rootfs_license):
-                            os.symlink(os.path.join('..', generic_lic_file), pkg_rootfs_license)
-                    else:
-                        if (oe.license.license_ok(canonical_license(d,
-                                lic), bad_licenses) == False or
-                                os.path.exists(pkg_rootfs_license)):
-                            continue
-
-                        oe.path.copyhardlink(pkg_license, pkg_rootfs_license)
-            # Fixup file ownership and permissions
-            for walkroot, dirs, files in os.walk(rootfs_license_dir):
-                for f in files:
-                    p = os.path.join(walkroot, f)
-                    os.lchown(p, 0, 0)
-                    if not os.path.islink(p):
-                        os.chmod(p, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)
-                for dir in dirs:
-                    p = os.path.join(walkroot, dir)
-                    os.lchown(p, 0, 0)
-                    os.chmod(p, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH)
-
-
-
-def license_deployed_manifest(d):
-    """
-    Write the license manifest for the deployed recipes.
-    The deployed recipes usually includes the bootloader
-    and extra files to boot the target.
-    """
-
-    dep_dic = {}
-    man_dic = {}
-    lic_dir = d.getVar("LICENSE_DIRECTORY")
-
-    dep_dic = get_deployed_dependencies(d)
-    for dep in dep_dic.keys():
-        man_dic[dep] = {}
-        # It is necessary to mark that this will be used for the image manifest
-        man_dic[dep]["IMAGE_MANIFEST"] = True
-        man_dic[dep]["PN"] = dep
-        man_dic[dep]["FILES"] = \
-            " ".join(get_deployed_files(dep_dic[dep]))
-        with open(os.path.join(lic_dir, dep, "recipeinfo"), "r") as f:
-            for line in f.readlines():
-                key,val = line.split(": ", 1)
-                man_dic[dep][key] = val[:-1]
-
-    lic_manifest_dir = os.path.join(d.getVar('LICENSE_DIRECTORY'),
-                                    d.getVar('IMAGE_NAME'))
-    bb.utils.mkdirhier(lic_manifest_dir)
-    image_license_manifest = os.path.join(lic_manifest_dir, 'image_license.manifest')
-    write_license_files(d, image_license_manifest, man_dic, rootfs=False)
-
-    link_name = d.getVar('IMAGE_LINK_NAME')
-    if link_name:
-        lic_manifest_symlink_dir = os.path.join(d.getVar('LICENSE_DIRECTORY'),
-                                    link_name)
-        # remove old symlink
-        if os.path.islink(lic_manifest_symlink_dir):
-            os.unlink(lic_manifest_symlink_dir)
-
-        # create the image dir symlink
-        if lic_manifest_dir != lic_manifest_symlink_dir:
-            os.symlink(lic_manifest_dir, lic_manifest_symlink_dir)
-
-def get_deployed_dependencies(d):
-    """
-    Get all the deployed dependencies of an image
-    """
-
-    deploy = {}
-    # Get all the dependencies for the current task (rootfs).
-    taskdata = d.getVar("BB_TASKDEPDATA", False)
-    pn = d.getVar("PN", True)
-    depends = list(set([dep[0] for dep
-                    in list(taskdata.values())
-                    if not dep[0].endswith("-native") and not dep[0] == pn]))
-
-    # To verify what was deployed, check the rootfs dependencies against
-    # the SSTATE_MANIFESTS for the "deploy" task.
-    # The manifest file name contains the arch. Because we are not running
-    # in the recipe context, it is necessary to check every arch used.
-    sstate_manifest_dir = d.getVar("SSTATE_MANIFESTS")
-    archs = list(set(d.getVar("SSTATE_ARCHS").split()))
-    for dep in depends:
-        for arch in archs:
-            sstate_manifest_file = os.path.join(sstate_manifest_dir,
-                    "manifest-%s-%s.deploy" % (arch, dep))
-            if os.path.exists(sstate_manifest_file):
-                deploy[dep] = sstate_manifest_file
-                break
-
-    return deploy
-get_deployed_dependencies[vardepsexclude] = "BB_TASKDEPDATA"
-
-def get_deployed_files(man_file):
-    """
-    Get the files deployed from the sstate manifest
-    """
-
-    dep_files = []
-    excluded_files = []
-    with open(man_file, "r") as manifest:
-        all_files = manifest.read()
-    for f in all_files.splitlines():
-        if ((not (os.path.islink(f) or os.path.isdir(f))) and
-                not os.path.basename(f) in excluded_files):
-            dep_files.append(os.path.basename(f))
-    return dep_files
-
-ROOTFS_POSTPROCESS_COMMAND:prepend = "write_package_manifest; license_create_manifest; "
-do_rootfs[recrdeptask] += "do_populate_lic"
-
-python do_populate_lic_deploy() {
-    license_deployed_manifest(d)
-    oe.qa.exit_if_errors(d)
-}
-
-addtask populate_lic_deploy before do_build after do_image_complete
-do_populate_lic_deploy[recrdeptask] += "do_populate_lic do_deploy"
-
-python license_qa_dead_symlink() {
-    import os
-
-    for root, dirs, files in os.walk(d.getVar('ROOTFS_LICENSE_DIR')):
-        for file in files:
-            full_path = root + "/" + file
-            if os.path.islink(full_path) and not os.path.exists(full_path):
-                bb.error("broken symlink: " + full_path)
-}
-IMAGE_QA_COMMANDS += "license_qa_dead_symlink"
diff --git a/poky/meta/classes/linux-dummy.bbclass b/poky/meta/classes/linux-dummy.bbclass
deleted file mode 100644
index 9a06a50..0000000
--- a/poky/meta/classes/linux-dummy.bbclass
+++ /dev/null
@@ -1,26 +0,0 @@
-
-python __anonymous () {
-    if d.getVar('PREFERRED_PROVIDER_virtual/kernel') == 'linux-dummy':
-        # copy parts of the code from kernel.bbclass
-        kname = d.getVar('KERNEL_PACKAGE_NAME') or "kernel"
-
-        # set an empty package of kernel-devicetree
-        d.appendVar('PACKAGES', ' %s-devicetree' % kname)
-        d.setVar('ALLOW_EMPTY:%s-devicetree' % kname, '1')
-
-        # Merge KERNEL_IMAGETYPE and KERNEL_ALT_IMAGETYPE into KERNEL_IMAGETYPES
-        type = d.getVar('KERNEL_IMAGETYPE') or ""
-        alttype = d.getVar('KERNEL_ALT_IMAGETYPE') or ""
-        types = d.getVar('KERNEL_IMAGETYPES') or ""
-        if type not in types.split():
-            types = (type + ' ' + types).strip()
-        if alttype not in types.split():
-            types = (alttype + ' ' + types).strip()
-
-        # set empty packages of kernel-image-*
-        for type in types.split():
-            typelower = type.lower()
-            d.appendVar('PACKAGES', ' %s-image-%s' % (kname, typelower))
-            d.setVar('ALLOW_EMPTY:%s-image-%s' % (kname, typelower), '1')
-}
-
diff --git a/poky/meta/classes/linux-kernel-base.bbclass b/poky/meta/classes/linux-kernel-base.bbclass
deleted file mode 100644
index ba59222..0000000
--- a/poky/meta/classes/linux-kernel-base.bbclass
+++ /dev/null
@@ -1,41 +0,0 @@
-# parse kernel ABI version out of <linux/version.h>
-def get_kernelversion_headers(p):
-    import re
-
-    fn = p + '/include/linux/utsrelease.h'
-    if not os.path.isfile(fn):
-        # after 2.6.33-rc1
-        fn = p + '/include/generated/utsrelease.h'
-    if not os.path.isfile(fn):
-        fn = p + '/include/linux/version.h'
-
-    try:
-        f = open(fn, 'r')
-    except IOError:
-        return None
-
-    l = f.readlines()
-    f.close()
-    r = re.compile("#define UTS_RELEASE \"(.*)\"")
-    for s in l:
-        m = r.match(s)
-        if m:
-            return m.group(1)
-    return None
-
-
-def get_kernelversion_file(p):
-    fn = p + '/kernel-abiversion'
-
-    try:
-        with open(fn, 'r') as f:
-            return f.readlines()[0].strip()
-    except IOError:
-        return None
-
-def linux_module_packages(s, d):
-    suffix = ""
-    return " ".join(map(lambda s: "kernel-module-%s%s" % (s.lower().replace('_', '-').replace('@', '+'), suffix), s.split()))
-
-# that's all
-
diff --git a/poky/meta/classes/linuxloader.bbclass b/poky/meta/classes/linuxloader.bbclass
deleted file mode 100644
index 4447c88..0000000
--- a/poky/meta/classes/linuxloader.bbclass
+++ /dev/null
@@ -1,76 +0,0 @@
-def get_musl_loader_arch(d):
-    import re
-    ldso_arch = "NotSupported"
-
-    targetarch = d.getVar("TARGET_ARCH")
-    if targetarch.startswith("microblaze"):
-        ldso_arch = "microblaze${@bb.utils.contains('TUNE_FEATURES', 'bigendian', '', 'el', d)}"
-    elif targetarch.startswith("mips"):
-        ldso_arch = "mips${ABIEXTENSION}${MIPSPKGSFX_BYTE}${MIPSPKGSFX_R6}${MIPSPKGSFX_ENDIAN}${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}"
-    elif targetarch == "powerpc":
-        ldso_arch = "powerpc${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}"
-    elif targetarch.startswith("powerpc64"):
-        ldso_arch = "powerpc64${@bb.utils.contains('TUNE_FEATURES', 'bigendian', '', 'le', d)}"
-    elif targetarch == "x86_64":
-        ldso_arch = "x86_64"
-    elif re.search("i.86", targetarch):
-        ldso_arch = "i386"
-    elif targetarch.startswith("arm"):
-        ldso_arch = "arm${ARMPKGSFX_ENDIAN}${ARMPKGSFX_EABI}"
-    elif targetarch.startswith("aarch64"):
-        ldso_arch = "aarch64${ARMPKGSFX_ENDIAN_64}"
-    elif targetarch.startswith("riscv64"):
-        ldso_arch = "riscv64${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}"
-    elif targetarch.startswith("riscv32"):
-        ldso_arch = "riscv32${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}"
-    return ldso_arch
-
-def get_musl_loader(d):
-    import re
-    return "/lib/ld-musl-" + get_musl_loader_arch(d) + ".so.1"
-
-def get_glibc_loader(d):
-    import re
-
-    dynamic_loader = "NotSupported"
-    targetarch = d.getVar("TARGET_ARCH")
-    if targetarch in ["powerpc", "microblaze"]:
-        dynamic_loader = "${base_libdir}/ld.so.1"
-    elif targetarch in ["mipsisa32r6el", "mipsisa32r6", "mipsisa64r6el", "mipsisa64r6"]:
-        dynamic_loader = "${base_libdir}/ld-linux-mipsn8.so.1"
-    elif targetarch.startswith("mips"):
-        dynamic_loader = "${base_libdir}/ld.so.1"
-    elif targetarch == "powerpc64le":
-        dynamic_loader = "${base_libdir}/ld64.so.2"
-    elif targetarch == "powerpc64":
-        dynamic_loader = "${base_libdir}/ld64.so.1"
-    elif targetarch == "x86_64":
-        dynamic_loader = "${base_libdir}/ld-linux-x86-64.so.2"
-    elif re.search("i.86", targetarch):
-        dynamic_loader = "${base_libdir}/ld-linux.so.2"
-    elif targetarch == "arm":
-        dynamic_loader = "${base_libdir}/ld-linux${@['-armhf', ''][d.getVar('TARGET_FPU') == 'soft']}.so.3"
-    elif targetarch.startswith("aarch64"):
-        dynamic_loader = "${base_libdir}/ld-linux-aarch64${ARMPKGSFX_ENDIAN_64}.so.1"
-    elif targetarch.startswith("riscv64"):
-        dynamic_loader = "${base_libdir}/ld-linux-riscv64-lp64${@['d', ''][d.getVar('TARGET_FPU') == 'soft']}.so.1"
-    elif targetarch.startswith("riscv32"):
-        dynamic_loader = "${base_libdir}/ld-linux-riscv32-ilp32${@['d', ''][d.getVar('TARGET_FPU') == 'soft']}.so.1"
-    return dynamic_loader
-
-def get_linuxloader(d):
-    overrides = d.getVar("OVERRIDES").split(":")
-
-    if "libc-baremetal" in overrides:
-        return "NotSupported"
-
-    if "libc-musl" in overrides:
-        dynamic_loader = get_musl_loader(d)
-    else:
-        dynamic_loader = get_glibc_loader(d)
-    return dynamic_loader
-
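-# Illustrative usage (hypothetical recipe snippet, not part of this class): the
-# loader path can be expanded inline where a recipe needs it, e.g.
-#   do_install:append() {
-#       echo "${@get_linuxloader(d)}" > ${D}${sysconfdir}/ld-so-path
-#   }
-# so the right dynamic loader is recorded for musl or glibc based targets.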
-get_linuxloader[vardepvalue] = "${@get_linuxloader(d)}"
-get_musl_loader[vardepvalue] = "${@get_musl_loader(d)}"
-get_musl_loader_arch[vardepvalue] = "${@get_musl_loader_arch(d)}"
-get_glibc_loader[vardepvalue] = "${@get_glibc_loader(d)}"
diff --git a/poky/meta/classes/live-vm-common.bbclass b/poky/meta/classes/live-vm-common.bbclass
deleted file mode 100644
index 74e7074..0000000
--- a/poky/meta/classes/live-vm-common.bbclass
+++ /dev/null
@@ -1,94 +0,0 @@
-# Some of the variables for the VM and live images conflict; this function
-# is used to resolve the problem.
-def set_live_vm_vars(d, suffix):
-    vars = ['GRUB_CFG', 'SYSLINUX_CFG', 'ROOT', 'LABELS', 'INITRD']
-    for var in vars:
-        var_with_suffix = var + '_' + suffix
-        if d.getVar(var):
-            bb.warn('Found potential conflicted var %s, please use %s rather than %s' % \
-                (var, var_with_suffix, var))
-        elif d.getVar(var_with_suffix):
-            d.setVar(var, d.getVar(var_with_suffix))
-
-
-EFI = "${@bb.utils.contains("MACHINE_FEATURES", "efi", "1", "0", d)}"
-EFI_PROVIDER ?= "grub-efi"
-EFI_CLASS = "${@bb.utils.contains("MACHINE_FEATURES", "efi", "${EFI_PROVIDER}", "", d)}"
-
-MKDOSFS_EXTRAOPTS ??= "-S 512"
-
-# Include legacy boot if MACHINE_FEATURES includes "pcbios" or if it does not
-# contain "efi". This way legacy is supported by default if neither is
-# specified, maintaining the original behavior.
-def pcbios(d):
-    pcbios = bb.utils.contains("MACHINE_FEATURES", "pcbios", "1", "0", d)
-    if pcbios == "0":
-        pcbios = bb.utils.contains("MACHINE_FEATURES", "efi", "0", "1", d)
-    return pcbios
-
-PCBIOS = "${@pcbios(d)}"
-PCBIOS_CLASS = "${@['','syslinux'][d.getVar('PCBIOS') == '1']}"
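-# Illustrative example (assumed machine configuration, not from this class): a
-# machine with
-#   MACHINE_FEATURES += "efi pcbios"
-# gets both the EFI bootloader class (e.g. grub-efi) and syslinux inherited below.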
-
-# efi_populate_common DEST BOOTLOADER
-efi_populate_common() {
-        # DEST must be the root of the image so that EFIDIR is not
-        # nested under a top level directory.
-        DEST=$1
-
-        install -d ${DEST}${EFIDIR}
-
-        install -m 0644 ${DEPLOY_DIR_IMAGE}/$2-${EFI_BOOT_IMAGE} ${DEST}${EFIDIR}/${EFI_BOOT_IMAGE}
-        EFIPATH=$(echo "${EFIDIR}" | sed 's/\//\\/g')
-        printf 'fs0:%s\%s\n' "$EFIPATH" "${EFI_BOOT_IMAGE}" >${DEST}/startup.nsh
-}
-
-efi_iso_populate() {
-        iso_dir=$1
-        efi_populate $iso_dir
        # Build an EFI directory to create efi.img
-        mkdir -p ${EFIIMGDIR}/${EFIDIR}
-        cp $iso_dir/${EFIDIR}/* ${EFIIMGDIR}${EFIDIR}
-        cp $iso_dir/${KERNEL_IMAGETYPE} ${EFIIMGDIR}
-
-        EFIPATH=$(echo "${EFIDIR}" | sed 's/\//\\/g')
-        printf 'fs0:%s\%s\n' "$EFIPATH" "${EFI_BOOT_IMAGE}" >${EFIIMGDIR}/startup.nsh
-
-        if [ -f "$iso_dir/initrd" ] ; then
-                cp $iso_dir/initrd ${EFIIMGDIR}
-        fi
-}
-
-efi_hddimg_populate() {
-	efi_populate $1
-}
-
-inherit ${EFI_CLASS}
-inherit ${PCBIOS_CLASS}
-
-populate_kernel() {
-	dest=$1
-	install -d $dest
-
-	# Install bzImage, initrd, and rootfs.img in DEST for all loaders to use.
-	bbnote "Trying to install ${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} as $dest/${KERNEL_IMAGETYPE}"
-	if [ -e ${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} ]; then
-		install -m 0644 ${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} $dest/${KERNEL_IMAGETYPE}
-	else
-		bbwarn "${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} doesn't exist"
-	fi
-
-	# initrd is made of concatenation of multiple filesystem images
-	if [ -n "${INITRD}" ]; then
-		rm -f $dest/initrd
-		for fs in ${INITRD}
-		do
-			if [ -s "$fs" ]; then
-				cat $fs >> $dest/initrd
-			else
-				bbfatal "$fs is invalid. initrd image creation failed."
-			fi
-		done
-		chmod 0644 $dest/initrd
-	fi
-}
-
diff --git a/poky/meta/classes/logging.bbclass b/poky/meta/classes/logging.bbclass
deleted file mode 100644
index a0c94e9..0000000
--- a/poky/meta/classes/logging.bbclass
+++ /dev/null
@@ -1,101 +0,0 @@
-# The following logging mechanisms are to be used in bash functions of recipes.
-# They are intended to map one to one in intention and output format with the
-# python recipe logging functions of a similar naming convention: bb.plain(),
-# bb.note(), etc.
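-#
-# Illustrative usage from a recipe shell task (hypothetical example, not part of
-# this class):
-#   do_install:append() {
-#       bbnote "Installing optional configuration files"
-#       bbwarn "Feature X is experimental on this machine"
-#   }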
-
-LOGFIFO = "${T}/fifo.${@os.getpid()}"
-
-# Print the output exactly as it is passed in. Typically used for output of
-# tasks that should be seen on the console. Use sparingly.
-# Output: logs console
-bbplain() {
-	if [ -p ${LOGFIFO} ] ; then
-		printf "%b\0" "bbplain $*" > ${LOGFIFO}
-	else
-		echo "$*"
-	fi
-}
-
-# Notify the user of a noteworthy condition. 
-# Output: logs
-bbnote() {
-	if [ -p ${LOGFIFO} ] ; then
-		printf "%b\0" "bbnote $*" > ${LOGFIFO}
-	else
-		echo "NOTE: $*"
-	fi
-}
-
-# Print a warning to the log. Warnings are non-fatal, and do not
-# indicate a build failure.
-# Output: logs console
-bbwarn() {
-	if [ -p ${LOGFIFO} ] ; then
-		printf "%b\0" "bbwarn $*" > ${LOGFIFO}
-	else
-		echo "WARNING: $*"
-	fi
-}
-
-# Print an error to the log. Errors are non-fatal in that the build can
-# continue, but they do indicate a build failure.
-# Output: logs console
-bberror() {
-	if [ -p ${LOGFIFO} ] ; then
-		printf "%b\0" "bberror $*" > ${LOGFIFO}
-	else
-		echo "ERROR: $*"
-	fi
-}
-
-# Print a fatal error to the log. Fatal errors indicate build failure
-# and halt the build, exiting with an error code.
-# Output: logs console
-bbfatal() {
-	if [ -p ${LOGFIFO} ] ; then
-		printf "%b\0" "bbfatal $*" > ${LOGFIFO}
-	else
-		echo "ERROR: $*"
-	fi
-	exit 1
-}
-
-# Like bbfatal, except prevents the suppression of the error log by
-# bitbake's UI.
-# Output: logs console
-bbfatal_log() {
-	if [ -p ${LOGFIFO} ] ; then
-		printf "%b\0" "bbfatal_log $*" > ${LOGFIFO}
-	else
-		echo "ERROR: $*"
-	fi
-	exit 1
-}
-
-# Print debug messages. These are appropriate for progress checkpoint
-# messages to the logs. Depending on the debug log level, they may also
-# go to the console.
-# Output: logs console
-# Usage: bbdebug 1 "first level debug message"
-#        bbdebug 2 "second level debug message"
-bbdebug() {
-	USAGE='Usage: bbdebug [123] "message"'
-	if [ $# -lt 2 ]; then
-		bbfatal "$USAGE"
-	fi
-	
-	# Strip off the debug level and ensure it is an integer
-	DBGLVL=$1; shift
-	NONDIGITS=$(echo "$DBGLVL" | tr -d "[:digit:]")
-	if [ "$NONDIGITS" ]; then
-		bbfatal "$USAGE"
-	fi
-
-	# All debug output is printed to the logs
-	if [ -p ${LOGFIFO} ] ; then
-		printf "%b\0" "bbdebug $DBGLVL $*" > ${LOGFIFO}
-	else
-		echo "DEBUG: $*"
-	fi
-}
-
diff --git a/poky/meta/classes/manpages.bbclass b/poky/meta/classes/manpages.bbclass
deleted file mode 100644
index 5e09c77..0000000
--- a/poky/meta/classes/manpages.bbclass
+++ /dev/null
@@ -1,45 +0,0 @@
-# Inherit this class to enable or disable building and installation of manpages
-# depending on whether 'api-documentation' is in DISTRO_FEATURES. Such building
-# tends to pull in the entire XML stack and other tools, so it's not enabled
-# by default.
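-# Illustrative recipe usage (assumption about the consuming recipe, not from this
-# class): a recipe typically does
-#   inherit manpages
-#   PACKAGECONFIG[manpages] = "--enable-man,--disable-man,libxslt-native"
-# so the option is only enabled when 'api-documentation' is in DISTRO_FEATURES.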
-PACKAGECONFIG:append:class-target = " ${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', 'manpages', '', d)}"
-
-inherit qemu
-
-# Usually manual files are packaged in ${PN}-doc, except for the man-pages recipe
-MAN_PKG ?= "${PN}-doc"
-
-# only add man-db to RDEPENDS when manual files are built and installed
-RDEPENDS:${MAN_PKG} += "${@bb.utils.contains('PACKAGECONFIG', 'manpages', 'man-db', '', d)}"
-
-pkg_postinst:${MAN_PKG}:append () {
-	# only update manual page index caches when manual files are built and installed
-	if ${@bb.utils.contains('PACKAGECONFIG', 'manpages', 'true', 'false', d)}; then
-		if test -n "$D"; then
-			if ${@bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', 'true', 'false', d)}; then
-				sed "s:\(\s\)/:\1$D/:g" $D${sysconfdir}/man_db.conf | ${@qemu_run_binary(d, '$D', '${bindir}/mandb')} -C - -u -q $D${mandir}
-				chown -R root:root $D${mandir}
-
-				mkdir -p $D${localstatedir}/cache/man
-				cd $D${mandir}
-				find . -name index.db | while read index; do
-					mkdir -p $D${localstatedir}/cache/man/$(dirname ${index})
-					mv ${index} $D${localstatedir}/cache/man/${index}
-					chown man:man $D${localstatedir}/cache/man/${index}
-				done
-				cd -
-			else
-				$INTERCEPT_DIR/postinst_intercept delay_to_first_boot ${PKG} mlprefix=${MLPREFIX}
-			fi
-		else
-			mandb -q
-		fi
-	fi
-}
-
-pkg_postrm:${MAN_PKG}:append () {
-	# only update manual page index caches when manual files are built and installed
-	if ${@bb.utils.contains('PACKAGECONFIG', 'manpages', 'true', 'false', d)}; then
-		mandb -q
-	fi
-}
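As a hedged usage sketch (the configure switches and dependencies are hypothetical), a recipe opts in by inheriting the class and mapping the 'manpages' PACKAGECONFIG onto its own build options; the class then only enables it when 'api-documentation' is in DISTRO_FEATURES:

    inherit manpages
    MAN_PKG = "${PN}-doc"
    PACKAGECONFIG[manpages] = "--enable-man,--disable-man,libxslt-native xmlto-native"
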
diff --git a/poky/meta/classes/mcextend.bbclass b/poky/meta/classes/mcextend.bbclass
index 0f8f962..a489eeb 100644
--- a/poky/meta/classes/mcextend.bbclass
+++ b/poky/meta/classes/mcextend.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 python mcextend_virtclass_handler () {
     cls = e.data.getVar("BBEXTENDCURR")
     variant = e.data.getVar("BBEXTENDVARIANT")
diff --git a/poky/meta/classes/meson-routines.bbclass b/poky/meta/classes/meson-routines.bbclass
deleted file mode 100644
index be3aeed..0000000
--- a/poky/meta/classes/meson-routines.bbclass
+++ /dev/null
@@ -1,51 +0,0 @@
-inherit siteinfo
-
-def meson_array(var, d):
-    items = d.getVar(var).split()
-    return repr(items[0] if len(items) == 1 else items)
-
-# Map our ARCH values to what Meson expects:
-# http://mesonbuild.com/Reference-tables.html#cpu-families
-def meson_cpu_family(var, d):
-    import re
-    arch = d.getVar(var)
-    if arch == 'powerpc':
-        return 'ppc'
-    elif arch == 'powerpc64' or arch == 'powerpc64le':
-        return 'ppc64'
-    elif arch == 'armeb':
-        return 'arm'
-    elif arch == 'aarch64_be':
-        return 'aarch64'
-    elif arch == 'mipsel':
-        return 'mips'
-    elif arch == 'mips64el':
-        return 'mips64'
-    elif re.match(r"i[3-6]86", arch):
-        return "x86"
-    elif arch == "microblazeel":
-        return "microblaze"
-    else:
-        return arch
-
-# Map our OS values to what Meson expects:
-# https://mesonbuild.com/Reference-tables.html#operating-system-names
-def meson_operating_system(var, d):
-    os = d.getVar(var)
-    if "mingw" in os:
-        return "windows"
-    # avoid e.g. 'linux-gnueabi'
-    elif "linux" in os:
-        return "linux"
-    else:
-        return os
-
-def meson_endian(prefix, d):
-    arch, os = d.getVar(prefix + "_ARCH"), d.getVar(prefix + "_OS")
-    sitedata = siteinfo_data_for_machine(arch, os, d)
-    if "endian-little" in sitedata:
-        return "little"
-    elif "endian-big" in sitedata:
-        return "big"
-    else:
-        bb.fatal("Cannot determine endianism for %s-%s" % (arch, os))
diff --git a/poky/meta/classes/meson.bbclass b/poky/meta/classes/meson.bbclass
deleted file mode 100644
index 546cd04..0000000
--- a/poky/meta/classes/meson.bbclass
+++ /dev/null
@@ -1,173 +0,0 @@
-inherit python3native meson-routines qemu
-
-DEPENDS:append = " meson-native ninja-native"
-
-EXEWRAPPER_ENABLED:class-native = "False"
-EXEWRAPPER_ENABLED:class-nativesdk = "False"
-EXEWRAPPER_ENABLED ?= "${@bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', 'True', 'False', d)}"
-DEPENDS:append = "${@' qemu-native' if d.getVar('EXEWRAPPER_ENABLED') == 'True' else ''}"
-
-# As Meson enforces out-of-tree builds we can just use cleandirs
-B = "${WORKDIR}/build"
-do_configure[cleandirs] = "${B}"
-
-# Where the meson.build build configuration is
-MESON_SOURCEPATH = "${S}"
-
-def noprefix(var, d):
-    return d.getVar(var).replace(d.getVar('prefix') + '/', '', 1)
-
-MESON_BUILDTYPE ?= "${@oe.utils.vartrue('DEBUG_BUILD', 'debug', 'plain', d)}"
-MESON_BUILDTYPE[vardeps] += "DEBUG_BUILD"
-MESONOPTS = " --prefix ${prefix} \
-              --buildtype ${MESON_BUILDTYPE} \
-              --bindir ${@noprefix('bindir', d)} \
-              --sbindir ${@noprefix('sbindir', d)} \
-              --datadir ${@noprefix('datadir', d)} \
-              --libdir ${@noprefix('libdir', d)} \
-              --libexecdir ${@noprefix('libexecdir', d)} \
-              --includedir ${@noprefix('includedir', d)} \
-              --mandir ${@noprefix('mandir', d)} \
-              --infodir ${@noprefix('infodir', d)} \
-              --sysconfdir ${sysconfdir} \
-              --localstatedir ${localstatedir} \
-              --sharedstatedir ${sharedstatedir} \
-              --wrap-mode nodownload \
-              --native-file ${WORKDIR}/meson.native"
-
-EXTRA_OEMESON:append = " ${PACKAGECONFIG_CONFARGS}"
-
-MESON_CROSS_FILE = ""
-MESON_CROSS_FILE:class-target = "--cross-file ${WORKDIR}/meson.cross"
-MESON_CROSS_FILE:class-nativesdk = "--cross-file ${WORKDIR}/meson.cross"
-
-# Needed to set up qemu wrapper below
-export STAGING_DIR_HOST
-
-def rust_tool(d, target_var):
-    rustc = d.getVar('RUSTC')
-    if not rustc:
-        return ""
-    cmd = [rustc, "--target", d.getVar(target_var)] + d.getVar("RUSTFLAGS").split()
-    return "rust = %s" % repr(cmd)
-
-addtask write_config before do_configure
-do_write_config[vardeps] += "CC CXX LD AR NM STRIP READELF CFLAGS CXXFLAGS LDFLAGS RUSTC RUSTFLAGS"
-do_write_config() {
-    # This needs to be Python to split the args into single-element lists
-    cat >${WORKDIR}/meson.cross <<EOF
-[binaries]
-c = ${@meson_array('CC', d)}
-cpp = ${@meson_array('CXX', d)}
-cython = 'cython3'
-ar = ${@meson_array('AR', d)}
-nm = ${@meson_array('NM', d)}
-strip = ${@meson_array('STRIP', d)}
-readelf = ${@meson_array('READELF', d)}
-objcopy = ${@meson_array('OBJCOPY', d)}
-pkgconfig = 'pkg-config'
-llvm-config = 'llvm-config${LLVMVERSION}'
-cups-config = 'cups-config'
-g-ir-scanner = '${STAGING_BINDIR}/g-ir-scanner-wrapper'
-g-ir-compiler = '${STAGING_BINDIR}/g-ir-compiler-wrapper'
-${@rust_tool(d, "HOST_SYS")}
-${@"exe_wrapper = '${WORKDIR}/meson-qemuwrapper'" if d.getVar('EXEWRAPPER_ENABLED') == 'True' else ""}
-
-[built-in options]
-c_args = ${@meson_array('CFLAGS', d)}
-c_link_args = ${@meson_array('LDFLAGS', d)}
-cpp_args = ${@meson_array('CXXFLAGS', d)}
-cpp_link_args = ${@meson_array('LDFLAGS', d)}
-
-[properties]
-needs_exe_wrapper = true
-
-[host_machine]
-system = '${@meson_operating_system('HOST_OS', d)}'
-cpu_family = '${@meson_cpu_family('HOST_ARCH', d)}'
-cpu = '${HOST_ARCH}'
-endian = '${@meson_endian('HOST', d)}'
-
-[target_machine]
-system = '${@meson_operating_system('TARGET_OS', d)}'
-cpu_family = '${@meson_cpu_family('TARGET_ARCH', d)}'
-cpu = '${TARGET_ARCH}'
-endian = '${@meson_endian('TARGET', d)}'
-EOF
-
-    cat >${WORKDIR}/meson.native <<EOF
-[binaries]
-c = ${@meson_array('BUILD_CC', d)}
-cpp = ${@meson_array('BUILD_CXX', d)}
-cython = 'cython3'
-ar = ${@meson_array('BUILD_AR', d)}
-nm = ${@meson_array('BUILD_NM', d)}
-strip = ${@meson_array('BUILD_STRIP', d)}
-readelf = ${@meson_array('BUILD_READELF', d)}
-objcopy = ${@meson_array('BUILD_OBJCOPY', d)}
-pkgconfig = 'pkg-config-native'
-${@rust_tool(d, "BUILD_SYS")}
-
-[built-in options]
-c_args = ${@meson_array('BUILD_CFLAGS', d)}
-c_link_args = ${@meson_array('BUILD_LDFLAGS', d)}
-cpp_args = ${@meson_array('BUILD_CXXFLAGS', d)}
-cpp_link_args = ${@meson_array('BUILD_LDFLAGS', d)}
-EOF
-}
-
-do_write_config:append:class-target() {
-    # Write out a qemu wrapper that will be used as exe_wrapper so that meson
-    # can run target helper binaries through that.
-    qemu_binary="${@qemu_wrapper_cmdline(d, '$STAGING_DIR_HOST', ['$STAGING_DIR_HOST/${libdir}','$STAGING_DIR_HOST/${base_libdir}'])}"
-    cat > ${WORKDIR}/meson-qemuwrapper << EOF
-#!/bin/sh
-# Use a modules directory which doesn't exist so we don't load random things
-# which may then get deleted (or their dependencies) and potentially segfault
-export GIO_MODULE_DIR=${STAGING_LIBDIR}/gio/modules-dummy
-
-# meson sets this wrongly (only to libs in build-dir), qemu_wrapper_cmdline() and GIR_EXTRA_LIBS_PATH take care of it properly
-unset LD_LIBRARY_PATH
-
-$qemu_binary "\$@"
-EOF
-    chmod +x ${WORKDIR}/meson-qemuwrapper
-}
-
-# Tell externalsrc that changes to this file require a reconfigure
-CONFIGURE_FILES = "meson.build"
-
-meson_do_configure() {
-    # Meson requires this to be 'bfd, 'lld' or 'gold' from 0.53 onwards
-    # https://github.com/mesonbuild/meson/commit/ef9aeb188ea2bc7353e59916c18901cde90fa2b3
-    unset LD
-
-    # Work around "Meson fails if /tmp is mounted with noexec #2972"
-    mkdir -p "${B}/meson-private/tmp"
-    export TMPDIR="${B}/meson-private/tmp"
-    bbnote Executing meson ${EXTRA_OEMESON}...
-    if ! meson ${MESONOPTS} "${MESON_SOURCEPATH}" "${B}" ${MESON_CROSS_FILE} ${EXTRA_OEMESON}; then
-        bbfatal_log meson failed
-    fi
-}
-
-python meson_do_qa_configure() {
-    import re
-    warn_re = re.compile(r"^WARNING: Cross property (.+) is using default value (.+)$", re.MULTILINE)
-    with open(d.expand("${B}/meson-logs/meson-log.txt")) as logfile:
-        log = logfile.read()
-    for (prop, value) in warn_re.findall(log):
-        bb.warn("Meson cross property %s used without explicit assignment, defaulting to %s" % (prop, value))
-}
-do_configure[postfuncs] += "meson_do_qa_configure"
-
-do_compile[progress] = "outof:^\[(\d+)/(\d+)\]\s+"
-meson_do_compile() {
-    ninja -v ${PARALLEL_MAKE}
-}
-
-meson_do_install() {
-    DESTDIR='${D}' ninja -v ${PARALLEL_MAKEINST} install
-}
-
-EXPORT_FUNCTIONS do_configure do_compile do_install
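For context, a hedged recipe sketch (the Meson option is hypothetical) of how this class is normally driven; EXTRA_OEMESON and PACKAGECONFIG_CONFARGS both end up on the meson command line assembled above:

    inherit meson pkgconfig
    EXTRA_OEMESON = "-Dexamples=false"
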
diff --git a/poky/meta/classes/metadata_scm.bbclass b/poky/meta/classes/metadata_scm.bbclass
index f646b31..6842119 100644
--- a/poky/meta/classes/metadata_scm.bbclass
+++ b/poky/meta/classes/metadata_scm.bbclass
@@ -1,3 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 
 METADATA_BRANCH := "${@oe.buildcfg.detect_branch(d)}"
 METADATA_BRANCH[vardepvalue] = "${METADATA_BRANCH}"
diff --git a/poky/meta/classes/migrate_localcount.bbclass b/poky/meta/classes/migrate_localcount.bbclass
index 810a541..1d00c11 100644
--- a/poky/meta/classes/migrate_localcount.bbclass
+++ b/poky/meta/classes/migrate_localcount.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 PRSERV_DUMPDIR ??= "${LOG_DIR}/db"
 LOCALCOUNT_DUMPFILE ??= "${PRSERV_DUMPDIR}/prserv-localcount-exports.inc"
 
diff --git a/poky/meta/classes/mime-xdg.bbclass b/poky/meta/classes/mime-xdg.bbclass
deleted file mode 100644
index 271f48d..0000000
--- a/poky/meta/classes/mime-xdg.bbclass
+++ /dev/null
@@ -1,74 +0,0 @@
-#
-# This class creates mime <-> application associations based on the
-# 'MimeType' entry in *.desktop files
-#
-
-DEPENDS += "desktop-file-utils"
-PACKAGE_WRITE_DEPS += "desktop-file-utils-native"
-DESKTOPDIR = "${datadir}/applications"
-
-# There are recipes out there installing their .desktop files as absolute
-# symlinks. For us these are dangling and cannot be introspected for "MimeType"
-# easily. By adding package names to MIME_XDG_PACKAGES, the packager can force
-# proper update-desktop-database handling. Note that all introspection is
-# skipped when MIME_XDG_PACKAGES is not empty.
-MIME_XDG_PACKAGES ?= ""
-
-mime_xdg_postinst() {
-if [ "x$D" != "x" ]; then
-	$INTERCEPT_DIR/postinst_intercept update_desktop_database ${PKG} \
-		mlprefix=${MLPREFIX} \
-		desktop_dir=${DESKTOPDIR}
-else
-	update-desktop-database $D${DESKTOPDIR}
-fi
-}
-
-mime_xdg_postrm() {
-if [ "x$D" != "x" ]; then
-	$INTERCEPT_DIR/postinst_intercept update_desktop_database ${PKG} \
-		mlprefix=${MLPREFIX} \
-		desktop_dir=${DESKTOPDIR}
-else
-	update-desktop-database $D${DESKTOPDIR}
-fi
-}
-
-python populate_packages:append () {
-    packages = d.getVar('PACKAGES').split()
-    pkgdest =  d.getVar('PKGDEST')
-    desktop_base = d.getVar('DESKTOPDIR')
-    forced_mime_xdg_pkgs = (d.getVar('MIME_XDG_PACKAGES') or '').split()
-
-    for pkg in packages:
-        desktops_with_mime_found = pkg in forced_mime_xdg_pkgs
-        if d.getVar('MIME_XDG_PACKAGES') == '':
-            desktop_dir = '%s/%s%s' % (pkgdest, pkg, desktop_base)
-            if os.path.exists(desktop_dir):
-                for df in os.listdir(desktop_dir):
-                    if df.endswith('.desktop'):
-                        try:
-                            with open(desktop_dir + '/'+ df, 'r') as f:
-                                for line in f.read().split('\n'):
-                                    if 'MimeType' in line:
-                                        desktops_with_mime_found = True
-                                        break;
-                        except:
-                            bb.warn('Could not open %s. Set MIME_XDG_PACKAGES in recipe or add mime-xdg to INSANE_SKIP.' % desktop_dir + '/'+ df)
-                    if desktops_with_mime_found:
-                        break
-        if desktops_with_mime_found:
-            bb.note("adding mime-xdg postinst and postrm scripts to %s" % pkg)
-            postinst = d.getVar('pkg_postinst:%s' % pkg)
-            if not postinst:
-                postinst = '#!/bin/sh\n'
-            postinst += d.getVar('mime_xdg_postinst')
-            d.setVar('pkg_postinst:%s' % pkg, postinst)
-            postrm = d.getVar('pkg_postrm:%s' % pkg)
-            if not postrm:
-                postrm = '#!/bin/sh\n'
-            postrm += d.getVar('mime_xdg_postrm')
-            d.setVar('pkg_postrm:%s' % pkg, postrm)
-            bb.note("adding desktop-file-utils dependency to %s" % pkg)
-            d.appendVar('RDEPENDS:' + pkg, " " + d.getVar('MLPREFIX')+"desktop-file-utils")
-}
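Illustrative only (package selection hypothetical): a recipe whose .desktop files are absolute symlinks can skip the introspection above and force the update-desktop-database handling:

    inherit mime-xdg
    MIME_XDG_PACKAGES = "${PN}"
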
diff --git a/poky/meta/classes/mime.bbclass b/poky/meta/classes/mime.bbclass
deleted file mode 100644
index 8d176a8..0000000
--- a/poky/meta/classes/mime.bbclass
+++ /dev/null
@@ -1,70 +0,0 @@
-#
-# This class is used by recipes installing mime types
-#
-
-DEPENDS += "${@bb.utils.contains('BPN', 'shared-mime-info', '', 'shared-mime-info', d)}"
-PACKAGE_WRITE_DEPS += "shared-mime-info-native"
-MIMEDIR = "${datadir}/mime"
-
-mime_postinst() {
-if [ "x$D" != "x" ]; then
-	$INTERCEPT_DIR/postinst_intercept update_mime_database ${PKG} \
-		mlprefix=${MLPREFIX} \
-		mimedir=${MIMEDIR}
-else
-	echo "Updating MIME database... this may take a while."
-	update-mime-database $D${MIMEDIR}
-fi
-}
-
-mime_postrm() {
-if [ "x$D" != "x" ]; then
-	$INTERCEPT_DIR/postinst_intercept update_mime_database ${PKG} \
-		mlprefix=${MLPREFIX} \
-		mimedir=${MIMEDIR}
-else
-	echo "Updating MIME database... this may take a while."
-	# $D${MIMEDIR}/packages belongs to the shared-mime-info-data package,
-	# and packages like libfm-mime depend on shared-mime-info-data.
-	# After shared-mime-info-data is uninstalled, $D${MIMEDIR}/packages
-	# is removed, but update-mime-database needs this directory to update
-	# the database, so as a workaround create one and remove it afterwards.
-	if [ ! -d $D${MIMEDIR}/packages ]; then
-		mkdir -p $D${MIMEDIR}/packages
-		update-mime-database $D${MIMEDIR}
-		rmdir --ignore-fail-on-non-empty $D${MIMEDIR}/packages
-	else
-		update-mime-database $D${MIMEDIR}
-fi
-fi
-}
-
-python populate_packages:append () {
-    packages = d.getVar('PACKAGES').split()
-    pkgdest =  d.getVar('PKGDEST')
-    mimedir = d.getVar('MIMEDIR')
-
-    for pkg in packages:
-        mime_packages_dir = '%s/%s%s/packages' % (pkgdest, pkg, mimedir)
-        mimes_types_found = False
-        if os.path.exists(mime_packages_dir):
-            for f in os.listdir(mime_packages_dir):
-                if f.endswith('.xml'):
-                    mimes_types_found = True
-                    break
-        if mimes_types_found:
-            bb.note("adding mime postinst and postrm scripts to %s" % pkg)
-            postinst = d.getVar('pkg_postinst:%s' % pkg)
-            if not postinst:
-                postinst = '#!/bin/sh\n'
-            postinst += d.getVar('mime_postinst')
-            d.setVar('pkg_postinst:%s' % pkg, postinst)
-            postrm = d.getVar('pkg_postrm:%s' % pkg)
-            if not postrm:
-                postrm = '#!/bin/sh\n'
-            postrm += d.getVar('mime_postrm')
-            d.setVar('pkg_postrm:%s' % pkg, postrm)
-            if pkg != 'shared-mime-info-data':
-                bb.note("adding shared-mime-info-data dependency to %s" % pkg)
-                d.appendVar('RDEPENDS:' + pkg, " " + d.getVar('MLPREFIX')+"shared-mime-info-data")
-}
diff --git a/poky/meta/classes/mirrors.bbclass b/poky/meta/classes/mirrors.bbclass
deleted file mode 100644
index ffdccff..0000000
--- a/poky/meta/classes/mirrors.bbclass
+++ /dev/null
@@ -1,89 +0,0 @@
-MIRRORS += "\
-${DEBIAN_MIRROR}	http://snapshot.debian.org/archive/debian/20180310T215105Z/pool \
-${DEBIAN_MIRROR}	http://snapshot.debian.org/archive/debian-archive/20120328T092752Z/debian/pool \
-${DEBIAN_MIRROR}	http://snapshot.debian.org/archive/debian-archive/20110127T084257Z/debian/pool \
-${DEBIAN_MIRROR}	http://snapshot.debian.org/archive/debian-archive/20090802T004153Z/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.de.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.au.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.cl.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.hr.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.fi.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.hk.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.hu.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.ie.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.it.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.jp.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.no.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.pl.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.ro.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.si.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.es.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.se.debian.org/debian/pool \
-${DEBIAN_MIRROR}	http://ftp.tr.debian.org/debian/pool \
-${GNU_MIRROR}	https://mirrors.kernel.org/gnu \
-${KERNELORG_MIRROR}	http://www.kernel.org/pub \
-${GNUPG_MIRROR}	ftp://ftp.gnupg.org/gcrypt \
-${GNUPG_MIRROR}	ftp://ftp.franken.de/pub/crypt/mirror/ftp.gnupg.org/gcrypt \
-${GNUPG_MIRROR}	ftp://mirrors.dotsrc.org/gcrypt \
-ftp://dante.ctan.org/tex-archive ftp://ftp.fu-berlin.de/tex/CTAN \
-ftp://dante.ctan.org/tex-archive http://sunsite.sut.ac.jp/pub/archives/ctan/ \
-ftp://dante.ctan.org/tex-archive http://ctan.unsw.edu.au/ \
-ftp://ftp.gnutls.org/gcrypt/gnutls ${GNUPG_MIRROR}/gnutls \
-http://ftp.info-zip.org/pub/infozip/src/ ftp://sunsite.icm.edu.pl/pub/unix/archiving/info-zip/src/ \
-http://www.mirrorservice.org/sites/lsof.itap.purdue.edu/pub/tools/unix/lsof/ http://www.mirrorservice.org/sites/lsof.itap.purdue.edu/pub/tools/unix/lsof/OLD/ \
-${APACHE_MIRROR}  http://www.us.apache.org/dist \
-${APACHE_MIRROR}  http://archive.apache.org/dist \
-http://downloads.sourceforge.net/watchdog/ http://fossies.org/linux/misc/ \
-${SAVANNAH_GNU_MIRROR} http://download-mirror.savannah.gnu.org/releases \
-${SAVANNAH_NONGNU_MIRROR} http://download-mirror.savannah.nongnu.org/releases \
-ftp://sourceware.org/pub http://mirrors.kernel.org/sourceware \
-ftp://sourceware.org/pub http://gd.tuwien.ac.at/gnu/sourceware \
-ftp://sourceware.org/pub http://ftp.gwdg.de/pub/linux/sources.redhat.com/sourceware \
-cvs://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
-svn://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
-git://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
-gitsm://.*/.*   http://downloads.yoctoproject.org/mirror/sources/ \
-hg://.*/.*      http://downloads.yoctoproject.org/mirror/sources/ \
-bzr://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
-p4://.*/.*      http://downloads.yoctoproject.org/mirror/sources/ \
-osc://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
-https?://.*/.*  http://downloads.yoctoproject.org/mirror/sources/ \
-ftp://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \
-npm://.*/?.*    http://downloads.yoctoproject.org/mirror/sources/ \
-cvs://.*/.*     http://sources.openembedded.org/ \
-svn://.*/.*     http://sources.openembedded.org/ \
-git://.*/.*     http://sources.openembedded.org/ \
-gitsm://.*/.*   http://sources.openembedded.org/ \
-hg://.*/.*      http://sources.openembedded.org/ \
-bzr://.*/.*     http://sources.openembedded.org/ \
-p4://.*/.*      http://sources.openembedded.org/ \
-osc://.*/.*     http://sources.openembedded.org/ \
-https?://.*/.*  http://sources.openembedded.org/ \
-ftp://.*/.*     http://sources.openembedded.org/ \
-npm://.*/?.*    http://sources.openembedded.org/ \
-${CPAN_MIRROR}  http://cpan.metacpan.org/ \
-${CPAN_MIRROR}  http://search.cpan.org/CPAN/ \
-https?://downloads.yoctoproject.org/releases/uninative/ https://mirrors.kernel.org/yocto/uninative/ \
-https?://downloads.yoctoproject.org/mirror/sources/ https://mirrors.kernel.org/yocto-sources/ \
-"
-
-# Use MIRRORS to provide git repo fallbacks using the https protocol, for cases
-# where git native protocol fetches may fail due to local firewall rules, etc.
-
-MIRRORS += "\
-git://salsa.debian.org/.*     git://salsa.debian.org/PATH;protocol=https \
-git://git.gnome.org/.*        git://gitlab.gnome.org/GNOME/PATH;protocol=https \
-git://.*/.*                   git://HOST/PATH;protocol=https \
-git://.*/.*                   git://HOST/git/PATH;protocol=https \
-"
-
-# Switch glibc and binutils recipes to use shallow clones as they're large and this
-# improves user experience whilst allowing the flexibility of git urls in the recipes
-BB_GIT_SHALLOW:pn-binutils = "1"
-BB_GIT_SHALLOW:pn-binutils-cross-${TARGET_ARCH} = "1"
-BB_GIT_SHALLOW:pn-binutils-cross-canadian-${TRANSLATED_TARGET_ARCH} = "1"
-BB_GIT_SHALLOW:pn-binutils-cross-testsuite = "1"
-BB_GIT_SHALLOW:pn-binutils-crosssdk-${SDK_SYS} = "1"
-BB_GIT_SHALLOW:pn-glibc = "1"
-PREMIRRORS += "git://sourceware.org/git/glibc.git https://downloads.yoctoproject.org/mirror/sources/ \
-              git://sourceware.org/git/binutils-gdb.git https://downloads.yoctoproject.org/mirror/sources/"
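As a hedged illustration (mirror URL hypothetical), the same regex/replacement mechanism lets a site append its own fallback, for example from local.conf:

    MIRRORS:append = " git://.*/.* http://mirror.example.com/sources/ "
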
diff --git a/poky/meta/classes/module-base.bbclass b/poky/meta/classes/module-base.bbclass
deleted file mode 100644
index 27bd69f..0000000
--- a/poky/meta/classes/module-base.bbclass
+++ /dev/null
@@ -1,21 +0,0 @@
-inherit kernel-arch
-
-# We do the dependency this way because the output is not preserved
-# in sstate, so we must force do_compile to run (once).
-do_configure[depends] += "make-mod-scripts:do_compile"
-
-export OS = "${TARGET_OS}"
-export CROSS_COMPILE = "${TARGET_PREFIX}"
-
-# This points to the build artefacts from the main kernel build
-# such as .config and System.map
-# Confusingly it is not the module build output (which is ${B}) but
-# we didn't pick the name.
-export KBUILD_OUTPUT = "${STAGING_KERNEL_BUILDDIR}"
-
-export KERNEL_VERSION = "${@oe.utils.read_file('${STAGING_KERNEL_BUILDDIR}/kernel-abiversion')}"
-KERNEL_OBJECT_SUFFIX = ".ko"
-
-# kernel modules are generally machine specific
-PACKAGE_ARCH = "${MACHINE_ARCH}"
-
diff --git a/poky/meta/classes/module.bbclass b/poky/meta/classes/module.bbclass
deleted file mode 100644
index a09ec3e..0000000
--- a/poky/meta/classes/module.bbclass
+++ /dev/null
@@ -1,74 +0,0 @@
-inherit module-base kernel-module-split pkgconfig
-
-EXTRA_OEMAKE += "KERNEL_SRC=${STAGING_KERNEL_DIR}"
-
-MODULES_INSTALL_TARGET ?= "modules_install"
-MODULES_MODULE_SYMVERS_LOCATION ?= ""
-
-python __anonymous () {
-    depends = d.getVar('DEPENDS')
-    extra_symbols = []
-    for dep in depends.split():
-        if dep.startswith("kernel-module-"):
-            extra_symbols.append("${STAGING_INCDIR}/" + dep + "/Module.symvers")
-    d.setVar('KBUILD_EXTRA_SYMBOLS', " ".join(extra_symbols))
-}
-
-python do_devshell:prepend () {
-    os.environ['CFLAGS'] = ''
-    os.environ['CPPFLAGS'] = ''
-    os.environ['CXXFLAGS'] = ''
-    os.environ['LDFLAGS'] = ''
-
-    os.environ['KERNEL_PATH'] = d.getVar('STAGING_KERNEL_DIR')
-    os.environ['KERNEL_SRC'] = d.getVar('STAGING_KERNEL_DIR')
-    os.environ['KERNEL_VERSION'] = d.getVar('KERNEL_VERSION')
-    os.environ['CC'] = d.getVar('KERNEL_CC')
-    os.environ['LD'] = d.getVar('KERNEL_LD')
-    os.environ['AR'] = d.getVar('KERNEL_AR')
-    os.environ['O'] = d.getVar('STAGING_KERNEL_BUILDDIR')
-    kbuild_extra_symbols = d.getVar('KBUILD_EXTRA_SYMBOLS')
-    if kbuild_extra_symbols:
-        os.environ['KBUILD_EXTRA_SYMBOLS'] = kbuild_extra_symbols
-    else:
-        os.environ['KBUILD_EXTRA_SYMBOLS'] = ''
-}
-
-module_do_compile() {
-	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
-	oe_runmake KERNEL_PATH=${STAGING_KERNEL_DIR}   \
-		   KERNEL_VERSION=${KERNEL_VERSION}    \
-		   CC="${KERNEL_CC}" LD="${KERNEL_LD}" \
-		   AR="${KERNEL_AR}" \
-	           O=${STAGING_KERNEL_BUILDDIR} \
-		   KBUILD_EXTRA_SYMBOLS="${KBUILD_EXTRA_SYMBOLS}" \
-		   ${MAKE_TARGETS}
-}
-
-module_do_install() {
-	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
-	oe_runmake DEPMOD=echo MODLIB="${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}" \
-	           INSTALL_FW_PATH="${D}${nonarch_base_libdir}/firmware" \
-	           CC="${KERNEL_CC}" LD="${KERNEL_LD}" \
-	           O=${STAGING_KERNEL_BUILDDIR} \
-	           ${MODULES_INSTALL_TARGET}
-
-	if [ ! -e "${B}/${MODULES_MODULE_SYMVERS_LOCATION}/Module.symvers" ] ; then
-		bbwarn "Module.symvers not found in ${B}/${MODULES_MODULE_SYMVERS_LOCATION}"
-		bbwarn "Please consider setting MODULES_MODULE_SYMVERS_LOCATION to a"
-		bbwarn "directory below B to get correct inter-module dependencies"
-	else
-		install -Dm0644 "${B}/${MODULES_MODULE_SYMVERS_LOCATION}"/Module.symvers ${D}${includedir}/${BPN}/Module.symvers
-		# Module.symvers contains absolute path to the build directory.
-		# While it doesn't actually seem to matter which path is specified,
-		# clear them out to avoid confusion
-		sed -e 's:${B}/::g' -i ${D}${includedir}/${BPN}/Module.symvers
-	fi
-}
-
-EXPORT_FUNCTIONS do_compile do_install
-
-# add all split modules to PN RDEPENDS; PN can be empty now
-KERNEL_MODULES_META_PACKAGE = "${PN}"
-FILES:${PN} = ""
-ALLOW_EMPTY:${PN} = "1"
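A minimal out-of-tree module recipe sketch built with this class (file names hypothetical; a real recipe also needs LICENSE/LIC_FILES_CHKSUM details):

    SUMMARY = "Example out-of-tree kernel module"
    LICENSE = "GPL-2.0-only"
    SRC_URI = "file://Makefile \
               file://hello.c \
               "
    S = "${WORKDIR}"
    inherit module
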
diff --git a/poky/meta/classes/multilib.bbclass b/poky/meta/classes/multilib.bbclass
index 5859ca8..10a4ef9 100644
--- a/poky/meta/classes/multilib.bbclass
+++ b/poky/meta/classes/multilib.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 python multilib_virtclass_handler () {
     cls = e.data.getVar("BBEXTENDCURR")
     variant = e.data.getVar("BBEXTENDVARIANT")
diff --git a/poky/meta/classes/multilib_global.bbclass b/poky/meta/classes/multilib_global.bbclass
index e06307d..dcd89b2 100644
--- a/poky/meta/classes/multilib_global.bbclass
+++ b/poky/meta/classes/multilib_global.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 def preferred_ml_updates(d):
     # If any of PREFERRED_PROVIDER, PREFERRED_RPROVIDER, REQUIRED_VERSION
     # or PREFERRED_VERSION are set, we need to mirror these variables in
diff --git a/poky/meta/classes/multilib_header.bbclass b/poky/meta/classes/multilib_header.bbclass
deleted file mode 100644
index efbc24f..0000000
--- a/poky/meta/classes/multilib_header.bbclass
+++ /dev/null
@@ -1,52 +0,0 @@
-inherit siteinfo
-
-# If applicable on the architecture, this routine will rename the header and
-# add a unique identifier to the name for the ABI/bitsize that is being used.
-# A wrapper will be generated for the architecture that knows how to call
-# all of the ABI variants for that given architecture.
-#
-oe_multilib_header() {
-
-	case ${HOST_OS} in
-	*-musl*)
-		return
-		;;
-	*)
-	esac
-        # For MIPS: "n32" is a special case, which needs to be
-        # distinct from both 64-bit and 32-bit.
-        case ${TARGET_ARCH} in
-        mips*)  case "${MIPSPKGSFX_ABI}" in
-                "-n32")
-                       ident=n32   
-                       ;;
-                *)     
-                       ident=${SITEINFO_BITS}
-                       ;;
-                esac
-                ;;
-        *)      ident=${SITEINFO_BITS}
-        esac
-	for each_header in "$@" ; do
-	   if [ ! -f "${D}/${includedir}/$each_header" ]; then
-	      bberror "oe_multilib_header: Unable to find header $each_header."
-	      continue
-	   fi
-	   stem=$(echo $each_header | sed 's#\.h$##')
-	   # if mips64/n32 set ident to n32
-	   mv ${D}/${includedir}/$each_header ${D}/${includedir}/${stem}-${ident}.h
-
-	   sed -e "s#ENTER_HEADER_FILENAME_HERE#${stem}#g" ${COREBASE}/scripts/multilib_header_wrapper.h > ${D}/${includedir}/$each_header
-	done
-}
-
-# Dependencies on arch variables like MIPSPKGSFX_ABI can be problematic.
-# We don't need multilib headers for native builds so brute force things.
-oe_multilib_header:class-native () {
-	return
-}
-
-# Nor do we need multilib headers for nativesdk builds.
-oe_multilib_header:class-nativesdk () {
-	return
-}
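Illustrative only (header path hypothetical): recipes that install ABI-dependent headers call the helper from do_install so 32-bit and 64-bit variants can coexist:

    do_install:append() {
        oe_multilib_header example/config.h
    }
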
diff --git a/poky/meta/classes/multilib_script.bbclass b/poky/meta/classes/multilib_script.bbclass
deleted file mode 100644
index 4159734..0000000
--- a/poky/meta/classes/multilib_script.bbclass
+++ /dev/null
@@ -1,34 +0,0 @@
-#
-# Recipe needs to set MULTILIB_SCRIPTS in the form <pkgname>:<scriptname>, e.g.
-# MULTILIB_SCRIPTS = "${PN}-dev:${bindir}/file1 ${PN}:${base_bindir}/file2"
-# to indicate which script files to process from which packages.
-#
-
-inherit update-alternatives
-
-MULTILIB_SUFFIX = "${@d.getVar('base_libdir',1).split('/')[-1]}"
-
-PACKAGE_PREPROCESS_FUNCS += "multilibscript_rename"
-
-multilibscript_rename() {
-	:
-}
-
-python () {
-    # Do nothing if multilib isn't being used
-    if not d.getVar("MULTILIB_VARIANTS"):
-        return
-    # Do nothing for native/cross
-    if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross', d):
-        return
-
-    for entry in (d.getVar("MULTILIB_SCRIPTS", False) or "").split():
-        pkg, script = entry.split(":")
-        epkg = d.expand(pkg)
-        scriptname = os.path.basename(script)
-        d.appendVar("ALTERNATIVE:" + epkg, " " + scriptname + " ")
-        d.setVarFlag("ALTERNATIVE_LINK_NAME", scriptname, script)
-        d.setVarFlag("ALTERNATIVE_TARGET", scriptname, script + "-${MULTILIB_SUFFIX}")
-        d.appendVar("multilibscript_rename",  "\n	mv ${PKGD}" + script + " ${PKGD}" + script + "-${MULTILIB_SUFFIX}")
-        d.appendVar("FILES:" + epkg, " " + script + "-${MULTILIB_SUFFIX}")
-}
diff --git a/poky/meta/classes/native.bbclass b/poky/meta/classes/native.bbclass
deleted file mode 100644
index 5a273cd..0000000
--- a/poky/meta/classes/native.bbclass
+++ /dev/null
@@ -1,230 +0,0 @@
-# We want native packages to be relocatable
-inherit relocatable
-
-# Native packages are built indirectly via dependency,
-# no need for them to be a direct target of 'world'
-EXCLUDE_FROM_WORLD = "1"
-
-PACKAGE_ARCH = "${BUILD_ARCH}"
-
-# used by cmake class
-OECMAKE_RPATH = "${libdir}"
-OECMAKE_RPATH:class-native = "${libdir}"
-
-TARGET_ARCH = "${BUILD_ARCH}"
-TARGET_OS = "${BUILD_OS}"
-TARGET_VENDOR = "${BUILD_VENDOR}"
-TARGET_PREFIX = "${BUILD_PREFIX}"
-TARGET_CC_ARCH = "${BUILD_CC_ARCH}"
-TARGET_LD_ARCH = "${BUILD_LD_ARCH}"
-TARGET_AS_ARCH = "${BUILD_AS_ARCH}"
-TARGET_CPPFLAGS = "${BUILD_CPPFLAGS}"
-TARGET_CFLAGS = "${BUILD_CFLAGS}"
-TARGET_CXXFLAGS = "${BUILD_CXXFLAGS}"
-TARGET_LDFLAGS = "${BUILD_LDFLAGS}"
-TARGET_FPU = ""
-TUNE_FEATURES = ""
-ABIEXTENSION = ""
-
-HOST_ARCH = "${BUILD_ARCH}"
-HOST_OS = "${BUILD_OS}"
-HOST_VENDOR = "${BUILD_VENDOR}"
-HOST_PREFIX = "${BUILD_PREFIX}"
-HOST_CC_ARCH = "${BUILD_CC_ARCH}"
-HOST_LD_ARCH = "${BUILD_LD_ARCH}"
-HOST_AS_ARCH = "${BUILD_AS_ARCH}"
-
-CPPFLAGS = "${BUILD_CPPFLAGS}"
-CFLAGS = "${BUILD_CFLAGS}"
-CXXFLAGS = "${BUILD_CXXFLAGS}"
-LDFLAGS = "${BUILD_LDFLAGS}"
-
-STAGING_BINDIR = "${STAGING_BINDIR_NATIVE}"
-STAGING_BINDIR_CROSS = "${STAGING_BINDIR_NATIVE}"
-
-# native pkg doesn't need the TOOLCHAIN_OPTIONS.
-TOOLCHAIN_OPTIONS = ""
-
-# Don't build ptest natively
-PTEST_ENABLED = "0"
-
-# Don't use site files for native builds
-export CONFIG_SITE = "${COREBASE}/meta/site/native"
-
-# set the compiler as well. It could have been set to something else
-export CC = "${BUILD_CC}"
-export CXX = "${BUILD_CXX}"
-export FC = "${BUILD_FC}"
-export CPP = "${BUILD_CPP}"
-export LD = "${BUILD_LD}"
-export CCLD = "${BUILD_CCLD}"
-export AR = "${BUILD_AR}"
-export AS = "${BUILD_AS}"
-export RANLIB = "${BUILD_RANLIB}"
-export STRIP = "${BUILD_STRIP}"
-export NM = "${BUILD_NM}"
-
-# Path prefixes
-base_prefix = "${STAGING_DIR_NATIVE}"
-prefix = "${STAGING_DIR_NATIVE}${prefix_native}"
-exec_prefix = "${STAGING_DIR_NATIVE}${prefix_native}"
-
-bindir = "${STAGING_BINDIR_NATIVE}"
-sbindir = "${STAGING_SBINDIR_NATIVE}"
-base_libdir = "${STAGING_LIBDIR_NATIVE}"
-libdir = "${STAGING_LIBDIR_NATIVE}"
-includedir = "${STAGING_INCDIR_NATIVE}"
-sysconfdir = "${STAGING_ETCDIR_NATIVE}"
-datadir = "${STAGING_DATADIR_NATIVE}"
-
-baselib = "lib"
-
-export lt_cv_sys_lib_dlsearch_path_spec = "${libdir} ${base_libdir} /lib /lib64 /usr/lib /usr/lib64"
-
-NATIVE_PACKAGE_PATH_SUFFIX ?= ""
-bindir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
-sbindir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
-base_libdir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
-libdir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
-libexecdir .= "${NATIVE_PACKAGE_PATH_SUFFIX}"
-
-do_populate_sysroot[sstate-inputdirs] = "${SYSROOT_DESTDIR}/${STAGING_DIR_NATIVE}/"
-do_populate_sysroot[sstate-outputdirs] = "${COMPONENTS_DIR}/${PACKAGE_ARCH}/${PN}"
-
-# Since we actually install these in situ there is no staging prefix
-STAGING_DIR_HOST = ""
-STAGING_DIR_TARGET = ""
-PKG_CONFIG_DIR = "${libdir}/pkgconfig"
-
-EXTRA_NATIVE_PKGCONFIG_PATH ?= ""
-PKG_CONFIG_PATH .= "${EXTRA_NATIVE_PKGCONFIG_PATH}"
-PKG_CONFIG_SYSROOT_DIR = ""
-PKG_CONFIG_SYSTEM_LIBRARY_PATH[unexport] = "1"
-PKG_CONFIG_SYSTEM_INCLUDE_PATH[unexport] = "1"
-
-# we don't want libc-*libc to kick in for native recipes
-LIBCOVERRIDE = ""
-CLASSOVERRIDE = "class-native"
-MACHINEOVERRIDES = ""
-MACHINE_FEATURES = ""
-
-PATH:prepend = "${COREBASE}/scripts/native-intercept:"
-
-# This class encodes staging paths into its scripts data so it can only be
-# reused if we manipulate the paths.
-SSTATE_SCAN_CMD ?= "${SSTATE_SCAN_CMD_NATIVE}"
-
-# No strip sysroot when DEBUG_BUILD is enabled
-INHIBIT_SYSROOT_STRIP ?= "${@oe.utils.vartrue('DEBUG_BUILD', '1', '', d)}"
-
-python native_virtclass_handler () {
-    pn = e.data.getVar("PN")
-    if not pn.endswith("-native"):
-        return
-    bpn = e.data.getVar("BPN")
-
-    # Set features here to prevent appends and distro features backfill
-    # from modifying native distro features
-    features = set(d.getVar("DISTRO_FEATURES_NATIVE").split())
-    filtered = set(bb.utils.filter("DISTRO_FEATURES", d.getVar("DISTRO_FEATURES_FILTER_NATIVE"), d).split())
-    d.setVar("DISTRO_FEATURES", " ".join(sorted(features | filtered)))
-
-    classextend = e.data.getVar('BBCLASSEXTEND') or ""
-    if "native" not in classextend:
-        return
-
-    def map_dependencies(varname, d, suffix = "", selfref=True):
-        if suffix:
-            varname = varname + ":" + suffix
-        deps = d.getVar(varname)
-        if not deps:
-            return
-        deps = bb.utils.explode_deps(deps)
-        newdeps = []
-        for dep in deps:
-            if dep == pn:
-                if not selfref:
-                    continue
-                newdeps.append(dep)
-            elif "-cross-" in dep:
-                newdeps.append(dep.replace("-cross", "-native"))
-            elif not dep.endswith("-native"):
-                # Replace ${PN} with ${BPN} in the dependency to make sure
-                # dependencies on, e.g., ${PN}-foo become ${BPN}-foo-native
-                # rather than ${BPN}-native-foo-native.
-                newdeps.append(dep.replace(pn, bpn) + "-native")
-            else:
-                newdeps.append(dep)
-        d.setVar(varname, " ".join(newdeps), parsing=True)
-
-    map_dependencies("DEPENDS", e.data, selfref=False)
-    for pkg in e.data.getVar("PACKAGES", False).split():
-        map_dependencies("RDEPENDS", e.data, pkg)
-        map_dependencies("RRECOMMENDS", e.data, pkg)
-        map_dependencies("RSUGGESTS", e.data, pkg)
-        map_dependencies("RPROVIDES", e.data, pkg)
-        map_dependencies("RREPLACES", e.data, pkg)
-    map_dependencies("PACKAGES", e.data)
-
-    provides = e.data.getVar("PROVIDES")
-    nprovides = []
-    for prov in provides.split():
-        if prov.find(pn) != -1:
-            nprovides.append(prov)
-        elif not prov.endswith("-native"):
-            nprovides.append(prov + "-native")
-        else:
-            nprovides.append(prov)
-    e.data.setVar("PROVIDES", ' '.join(nprovides))
-
-
-}
-
-addhandler native_virtclass_handler
-native_virtclass_handler[eventmask] = "bb.event.RecipePreFinalise"
-
-python do_addto_recipe_sysroot () {
-    bb.build.exec_func("extend_recipe_sysroot", d)
-}
-addtask addto_recipe_sysroot after do_populate_sysroot
-do_addto_recipe_sysroot[deptask] = "do_populate_sysroot"
-
-inherit nopackages
-
-do_packagedata[stamp-extra-info] = ""
-
-USE_NLS = "no"
-
-RECIPERDEPTASK = "do_populate_sysroot"
-do_populate_sysroot[rdeptask] = "${RECIPERDEPTASK}"
-
-#
-# Native task outputs are directly run on the target (host) system after being
-# built. Even if the output of this recipe doesn't change, a change in one of
-# its dependencies may cause a change in the output it generates (e.g. rpm
-# output depends on the output of its dependent zstd library).
-#
-# This can cause poor interactions with hash equivalence, since this recipe's
-# output-changing dependency is "hidden" and downstream tasks only see that this
-# recipe has the same outhash and therefore is equivalent. This can result in
-# different output in different cases.
-#
-# To resolve this, unhide the output-changing dependency by adding its unihash
-# to this task's outhash calculation. Unfortunately, we don't know specifically
-# which dependencies are output-changing, so we have to add all of them.
-#
-python native_add_do_populate_sysroot_deps () {
-    current_task = "do_" + d.getVar("BB_CURRENTTASK")
-    if current_task != "do_populate_sysroot":
-        return
-
-    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
-    pn = d.getVar("PN")
-    deps = {
-        dep[0]:dep[6] for dep in taskdepdata.values() if
-            dep[1] == current_task and dep[0] != pn
-    }
-
-    d.setVar("HASHEQUIV_EXTRA_SIGDATA", "\n".join("%s: %s" % (k, deps[k]) for k in sorted(deps.keys())))
-}
-SSTATECREATEFUNCS += "native_add_do_populate_sysroot_deps"
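For reference, a hedged sketch: most recipes gain a -native variant through class extension rather than inheriting this class directly, which is what the BBCLASSEXTEND check in the handler above relies on:

    BBCLASSEXTEND = "native"
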
diff --git a/poky/meta/classes/nativesdk.bbclass b/poky/meta/classes/nativesdk.bbclass
deleted file mode 100644
index f8e9607..0000000
--- a/poky/meta/classes/nativesdk.bbclass
+++ /dev/null
@@ -1,117 +0,0 @@
-# SDK packages are built either explicitly by the user,
-# or indirectly via dependency.  No need to be in 'world'.
-EXCLUDE_FROM_WORLD = "1"
-
-STAGING_BINDIR_TOOLCHAIN = "${STAGING_DIR_NATIVE}${bindir_native}/${SDK_ARCH}${SDK_VENDOR}-${SDK_OS}"
-
-# libc for the SDK can be different to that of the target
-NATIVESDKLIBC ?= "libc-glibc"
-LIBCOVERRIDE = ":${NATIVESDKLIBC}"
-CLASSOVERRIDE = "class-nativesdk"
-MACHINEOVERRIDES = ""
-MACHINE_FEATURES = ""
-
-MULTILIBS = ""
-
-# we need consistent staging dir whether or not multilib is enabled
-STAGING_DIR_HOST = "${WORKDIR}/recipe-sysroot"
-STAGING_DIR_TARGET = "${WORKDIR}/recipe-sysroot"
-RECIPE_SYSROOT = "${WORKDIR}/recipe-sysroot"
-
-#
-# Update PACKAGE_ARCH and PACKAGE_ARCHS
-#
-PACKAGE_ARCH = "${SDK_ARCH}-${SDKPKGSUFFIX}"
-PACKAGE_ARCHS = "${SDK_PACKAGE_ARCHS}"
-
-#
-# We need chrpath >= 0.14 to ensure we can deal with 32 and 64 bit
-# binaries
-#
-DEPENDS:append = " chrpath-replacement-native"
-EXTRANATIVEPATH += "chrpath-native"
-
-PKGDATA_DIR = "${PKGDATA_DIR_SDK}"
-
-HOST_ARCH = "${SDK_ARCH}"
-HOST_VENDOR = "${SDK_VENDOR}"
-HOST_OS = "${SDK_OS}"
-HOST_PREFIX = "${SDK_PREFIX}"
-HOST_CC_ARCH = "${SDK_CC_ARCH}"
-HOST_LD_ARCH = "${SDK_LD_ARCH}"
-HOST_AS_ARCH = "${SDK_AS_ARCH}"
-#HOST_SYS = "${HOST_ARCH}${TARGET_VENDOR}-${HOST_OS}"
-
-TARGET_ARCH = "${SDK_ARCH}"
-TARGET_VENDOR = "${SDK_VENDOR}"
-TARGET_OS = "${SDK_OS}"
-TARGET_PREFIX = "${SDK_PREFIX}"
-TARGET_CC_ARCH = "${SDK_CC_ARCH}"
-TARGET_LD_ARCH = "${SDK_LD_ARCH}"
-TARGET_AS_ARCH = "${SDK_AS_ARCH}"
-TARGET_CPPFLAGS = "${BUILDSDK_CPPFLAGS}"
-TARGET_CFLAGS = "${BUILDSDK_CFLAGS}"
-TARGET_CXXFLAGS = "${BUILDSDK_CXXFLAGS}"
-TARGET_LDFLAGS = "${BUILDSDK_LDFLAGS}"
-TARGET_FPU = ""
-EXTRA_OECONF_GCC_FLOAT = ""
-
-CPPFLAGS = "${BUILDSDK_CPPFLAGS}"
-CFLAGS = "${BUILDSDK_CFLAGS}"
-CXXFLAGS = "${BUILDSDK_CXXFLAGS}"
-LDFLAGS = "${BUILDSDK_LDFLAGS}"
-
-# Change to place files in SDKPATH
-base_prefix = "${SDKPATHNATIVE}"
-prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
-exec_prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
-baselib = "lib"
-sbindir = "${bindir}"
-
-export PKG_CONFIG_DIR = "${STAGING_DIR_HOST}${libdir}/pkgconfig"
-export PKG_CONFIG_SYSROOT_DIR = "${STAGING_DIR_HOST}"
-
-python nativesdk_virtclass_handler () {
-    pn = e.data.getVar("PN")
-    if not (pn.endswith("-nativesdk") or pn.startswith("nativesdk-")):
-        return
-
-    # Set features here to prevent appends and distro features backfill
-    # from modifying nativesdk distro features
-    features = set(d.getVar("DISTRO_FEATURES_NATIVESDK").split())
-    filtered = set(bb.utils.filter("DISTRO_FEATURES", d.getVar("DISTRO_FEATURES_FILTER_NATIVESDK"), d).split())
-    d.setVar("DISTRO_FEATURES", " ".join(sorted(features | filtered)))
-
-    e.data.setVar("MLPREFIX", "nativesdk-")
-    e.data.setVar("PN", "nativesdk-" + e.data.getVar("PN").replace("-nativesdk", "").replace("nativesdk-", ""))
-}
-
-python () {
-    pn = d.getVar("PN")
-    if not pn.startswith("nativesdk-"):
-        return
-
-    import oe.classextend
-
-    clsextend = oe.classextend.NativesdkClassExtender("nativesdk", d)
-    clsextend.rename_packages()
-    clsextend.rename_package_variables((d.getVar("PACKAGEVARS") or "").split())
-
-    clsextend.map_depends_variable("DEPENDS")
-    clsextend.map_packagevars()
-    clsextend.map_variable("PROVIDES")
-    clsextend.map_regexp_variable("PACKAGES_DYNAMIC")
-    d.setVar("LIBCEXTENSION", "")
-    d.setVar("ABIEXTENSION", "")
-}
-
-addhandler nativesdk_virtclass_handler
-nativesdk_virtclass_handler[eventmask] = "bb.event.RecipePreFinalise"
-
-do_packagedata[stamp-extra-info] = ""
-
-USE_NLS = "${SDKUSE_NLS}"
-
-OLDEST_KERNEL = "${SDK_OLDEST_KERNEL}"
-
-PATH:prepend = "${COREBASE}/scripts/nativesdk-intercept:"
diff --git a/poky/meta/classes/nopackages.bbclass b/poky/meta/classes/nopackages.bbclass
deleted file mode 100644
index 7a4f632..0000000
--- a/poky/meta/classes/nopackages.bbclass
+++ /dev/null
@@ -1,13 +0,0 @@
-deltask do_package
-deltask do_package_write_rpm
-deltask do_package_write_ipk
-deltask do_package_write_deb
-deltask do_package_write_tar
-deltask do_package_qa
-deltask do_packagedata
-deltask do_package_setscene
-deltask do_package_write_rpm_setscene
-deltask do_package_write_ipk_setscene
-deltask do_package_write_deb_setscene
-deltask do_package_qa_setscene
-deltask do_packagedata_setscene
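Illustrative only: a recipe that populates the sysroot but produces no packages simply inherits the class to drop the packaging tasks listed above:

    inherit nopackages
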
diff --git a/poky/meta/classes/npm.bbclass b/poky/meta/classes/npm.bbclass
deleted file mode 100644
index deea53c..0000000
--- a/poky/meta/classes/npm.bbclass
+++ /dev/null
@@ -1,340 +0,0 @@
-# Copyright (C) 2020 Savoir-Faire Linux
-#
-# SPDX-License-Identifier: GPL-2.0-only
-#
-# This bbclass builds and installs an npm package to the target. The package
-# sources files should be fetched in the calling recipe by using the SRC_URI
-# variable. The ${S} variable should be updated depending of your fetcher.
-#
-# Usage:
-#  SRC_URI = "..."
-#  inherit npm
-#
-# Optional variables:
-#  NPM_ARCH:
-#       Override the auto generated npm architecture.
-#
-#  NPM_INSTALL_DEV:
-#       Set to 1 to also install devDependencies.
-
-inherit python3native
-
-DEPENDS:prepend = "nodejs-native nodejs-oe-cache-native "
-RDEPENDS:${PN}:append:class-target = " nodejs"
-
-EXTRA_OENPM = ""
-
-NPM_INSTALL_DEV ?= "0"
-
-NPM_NODEDIR ?= "${RECIPE_SYSROOT_NATIVE}${prefix_native}"
-
-def npm_target_arch_map(target_arch):
-    """Maps arch names to npm arch names"""
-    import re
-    if re.match("p(pc|owerpc)(|64)", target_arch):
-        return "ppc"
-    elif re.match("i.86$", target_arch):
-        return "ia32"
-    elif re.match("x86_64$", target_arch):
-        return "x64"
-    elif re.match("arm64$", target_arch):
-        return "arm"
-    return target_arch
-
-NPM_ARCH ?= "${@npm_target_arch_map(d.getVar("TARGET_ARCH"))}"
-
-NPM_PACKAGE = "${WORKDIR}/npm-package"
-NPM_CACHE = "${WORKDIR}/npm-cache"
-NPM_BUILD = "${WORKDIR}/npm-build"
-NPM_REGISTRY = "${WORKDIR}/npm-registry"
-
-def npm_global_configs(d):
-    """Get the npm global configuration"""
-    configs = []
-    # Ensure no network access is done
-    configs.append(("offline", "true"))
-    configs.append(("proxy", "http://invalid"))
-    configs.append(("funds", False))
-    configs.append(("audit", False))
-    # Configure the cache directory
-    configs.append(("cache", d.getVar("NPM_CACHE")))
-    return configs
-
-## 'npm pack' runs 'prepare' and 'prepack' scripts. Support for
-## 'ignore-scripts' which prevents this behavior has been removed
-## from nodejs 16.  Use a simple 'tar' invocation instead.
-def npm_pack(env, srcdir, workdir):
-    """Emulate 'npm pack' on a specified directory"""
-    import subprocess
-    import os
-    import json
-
-    src = os.path.join(srcdir, 'package.json')
-    with open(src) as f:
-        j = json.load(f)
-
-    # base does not really matter and is for documentation purposes
-    # only.  But the 'version' part must exist because other parts of
-    # the bbclass rely on it.
-    base = j['name'].split('/')[-1]
-    tarball = os.path.join(workdir, "%s-%s.tgz" % (base, j['version']));
-
-    # TODO: real 'npm pack' does not include directories while 'tar'
-    # does.  But this does not seem to matter...
-    subprocess.run(['tar', 'czf', tarball,
-                    '--exclude', './node-modules',
-                    '--exclude-vcs',
-                    '--transform', 's,^\./,package/,',
-                    '--mtime', '1985-10-26T08:15:00.000Z',
-                    '.'],
-                   check = True, cwd = srcdir)
-
-    return (tarball, j)
-
-python npm_do_configure() {
-    """
-    Step one: configure the npm cache and the main npm package
-
-    All dependencies have been fetched and patched in the source directory.
-    They have to be packed (this removes unneeded files) and added to the npm
-    cache to be available for the next step.
-
-    The main package and its associated manifest file and shrinkwrap file have
-    to be configured to take into account these cached dependencies.
-    """
-    import base64
-    import copy
-    import json
-    import re
-    import shlex
-    import stat
-    import tempfile
-    from bb.fetch2.npm import NpmEnvironment
-    from bb.fetch2.npm import npm_unpack
-    from bb.fetch2.npmsw import foreach_dependencies
-    from bb.progress import OutOfProgressHandler
-    from oe.npm_registry import NpmRegistry
-
-    bb.utils.remove(d.getVar("NPM_CACHE"), recurse=True)
-    bb.utils.remove(d.getVar("NPM_PACKAGE"), recurse=True)
-
-    env = NpmEnvironment(d, configs=npm_global_configs(d))
-    registry = NpmRegistry(d.getVar('NPM_REGISTRY'), d.getVar('NPM_CACHE'))
-
-    def _npm_cache_add(tarball, pkg):
-        """Add tarball to local registry and register it in the
-           cache"""
-        registry.add_pkg(tarball, pkg)
-
-    def _npm_integrity(tarball):
-        """Return the npm integrity of a specified tarball"""
-        sha512 = bb.utils.sha512_file(tarball)
-        return "sha512-" + base64.b64encode(bytes.fromhex(sha512)).decode()
-
-    def _npmsw_dependency_dict(orig, deptree):
-        """
-        Return the sub dictionary in the 'orig' dictionary corresponding to the
-        'deptree' dependency tree. This function follows the shrinkwrap file
-        format.
-        """
-        ptr = orig
-        for dep in deptree:
-            if "dependencies" not in ptr:
-                ptr["dependencies"] = {}
-            ptr = ptr["dependencies"]
-            if dep not in ptr:
-                ptr[dep] = {}
-            ptr = ptr[dep]
-        return ptr
-
-    # Manage the manifest file and shrinkwrap files
-    orig_manifest_file = d.expand("${S}/package.json")
-    orig_shrinkwrap_file = d.expand("${S}/npm-shrinkwrap.json")
-    cached_manifest_file = d.expand("${NPM_PACKAGE}/package.json")
-    cached_shrinkwrap_file = d.expand("${NPM_PACKAGE}/npm-shrinkwrap.json")
-
-    with open(orig_manifest_file, "r") as f:
-        orig_manifest = json.load(f)
-
-    cached_manifest = copy.deepcopy(orig_manifest)
-    cached_manifest.pop("dependencies", None)
-    cached_manifest.pop("devDependencies", None)
-
-    has_shrinkwrap_file = True
-
-    try:
-        with open(orig_shrinkwrap_file, "r") as f:
-            orig_shrinkwrap = json.load(f)
-    except IOError:
-        has_shrinkwrap_file = False
-
-    if has_shrinkwrap_file:
-       cached_shrinkwrap = copy.deepcopy(orig_shrinkwrap)
-       cached_shrinkwrap.pop("dependencies", None)
-
-    # Manage the dependencies
-    progress = OutOfProgressHandler(d, r"^(\d+)/(\d+)$")
-    progress_total = 1 # also count the main package
-    progress_done = 0
-
-    def _count_dependency(name, params, deptree):
-        nonlocal progress_total
-        progress_total += 1
-
-    def _cache_dependency(name, params, deptree):
-        destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
-        destsuffix = os.path.join(*destsubdirs)
-        with tempfile.TemporaryDirectory() as tmpdir:
-            # Add the dependency to the npm cache
-            destdir = os.path.join(d.getVar("S"), destsuffix)
-            (tarball, pkg) = npm_pack(env, destdir, tmpdir)
-            _npm_cache_add(tarball, pkg)
-            # Add its signature to the cached shrinkwrap
-            dep = _npmsw_dependency_dict(cached_shrinkwrap, deptree)
-            dep["version"] = pkg['version']
-            dep["integrity"] = _npm_integrity(tarball)
-            if params.get("dev", False):
-                dep["dev"] = True
-            # Display progress
-            nonlocal progress_done
-            progress_done += 1
-            progress.write("%d/%d" % (progress_done, progress_total))
-
-    dev = bb.utils.to_boolean(d.getVar("NPM_INSTALL_DEV"), False)
-
-    if has_shrinkwrap_file:
-        foreach_dependencies(orig_shrinkwrap, _count_dependency, dev)
-        foreach_dependencies(orig_shrinkwrap, _cache_dependency, dev)
-
-    # Configure the main package
-    with tempfile.TemporaryDirectory() as tmpdir:
-        (tarball, _) = npm_pack(env, d.getVar("S"), tmpdir)
-        npm_unpack(tarball, d.getVar("NPM_PACKAGE"), d)
-
-    # Configure the cached manifest file and cached shrinkwrap file
-    def _update_manifest(depkey):
-        for name in orig_manifest.get(depkey, {}):
-            version = cached_shrinkwrap["dependencies"][name]["version"]
-            if depkey not in cached_manifest:
-                cached_manifest[depkey] = {}
-            cached_manifest[depkey][name] = version
-
-    if has_shrinkwrap_file:
-        _update_manifest("dependencies")
-
-    if dev:
-        if has_shrinkwrap_file:
-            _update_manifest("devDependencies")
-
-    os.chmod(cached_manifest_file, os.stat(cached_manifest_file).st_mode | stat.S_IWUSR)
-    with open(cached_manifest_file, "w") as f:
-        json.dump(cached_manifest, f, indent=2)
-
-    if has_shrinkwrap_file:
-        with open(cached_shrinkwrap_file, "w") as f:
-            json.dump(cached_shrinkwrap, f, indent=2)
-}
-
-python npm_do_compile() {
-    """
-    Step two: install the npm package
-
-    Use the configured main package and the cached dependencies to run the
-    installation process. The installation is done in a directory which is
-    not the destination directory yet.
-
-    A combination of 'npm pack' and 'npm install' is used to ensure that the
-    installed files are actual copies instead of symbolic links (which is the
-    default npm behavior).
-    """
-    import shlex
-    import tempfile
-    from bb.fetch2.npm import NpmEnvironment
-
-    bb.utils.remove(d.getVar("NPM_BUILD"), recurse=True)
-
-    with tempfile.TemporaryDirectory() as tmpdir:
-        args = []
-        configs = npm_global_configs(d)
-
-        if bb.utils.to_boolean(d.getVar("NPM_INSTALL_DEV"), False):
-            configs.append(("also", "development"))
-        else:
-            configs.append(("only", "production"))
-
-        # Report as many logs as possible for debugging purpose
-        configs.append(("loglevel", "silly"))
-
-        # Configure the installation to be done globally in the build directory
-        configs.append(("global", "true"))
-        configs.append(("prefix", d.getVar("NPM_BUILD")))
-
-        # Add node-gyp configuration
-        configs.append(("arch", d.getVar("NPM_ARCH")))
-        configs.append(("release", "true"))
-        configs.append(("nodedir", d.getVar("NPM_NODEDIR")))
-        configs.append(("python", d.getVar("PYTHON")))
-
-        env = NpmEnvironment(d, configs)
-
-        # Add node-pre-gyp configuration
-        args.append(("target_arch", d.getVar("NPM_ARCH")))
-        args.append(("build-from-source", "true"))
-
-        # Pack and install the main package
-        (tarball, _) = npm_pack(env, d.getVar("NPM_PACKAGE"), tmpdir)
-        cmd = "npm install %s %s" % (shlex.quote(tarball), d.getVar("EXTRA_OENPM"))
-        env.run(cmd, args=args)
-}
-
-npm_do_install() {
-    # Step three: final install
-    #
-    # The previous installation has to be filtered to remove some extra files.
-
-    rm -rf ${D}
-
-    # Copy the entire lib and bin directories
-    install -d ${D}/${nonarch_libdir}
-    cp --no-preserve=ownership --recursive ${NPM_BUILD}/lib/. ${D}/${nonarch_libdir}
-
-    if [ -d "${NPM_BUILD}/bin" ]
-    then
-        install -d ${D}/${bindir}
-        cp --no-preserve=ownership --recursive ${NPM_BUILD}/bin/. ${D}/${bindir}
-    fi
-
-    # If the package (or its dependencies) uses node-gyp to build native addons,
-    # object files, static libraries or other temporary files can be hidden in
-    # the lib directory. To reduce the package size and to avoid QA issues
-    # (staticdev with static library files) these files must be removed.
-    local GYP_REGEX=".*/build/Release/[^/]*.node"
-
-    # Remove any node-gyp directory in ${D} to remove temporary build files
-    for GYP_D_FILE in $(find ${D} -regex "${GYP_REGEX}")
-    do
-        local GYP_D_DIR=${GYP_D_FILE%/Release/*}
-
-        rm --recursive --force ${GYP_D_DIR}
-    done
-
-    # Copy only the node-gyp release files
-    for GYP_B_FILE in $(find ${NPM_BUILD} -regex "${GYP_REGEX}")
-    do
-        local GYP_D_FILE=${D}/${prefix}/${GYP_B_FILE#${NPM_BUILD}}
-
-        install -d ${GYP_D_FILE%/*}
-        install -m 755 ${GYP_B_FILE} ${GYP_D_FILE}
-    done
-
-    # Remove the shrinkwrap file which does not need to be packed
-    rm -f ${D}/${nonarch_libdir}/node_modules/*/npm-shrinkwrap.json
-    rm -f ${D}/${nonarch_libdir}/node_modules/@*/*/npm-shrinkwrap.json
-}
-
-FILES:${PN} += " \
-    ${bindir} \
-    ${nonarch_libdir} \
-"
-
-EXPORT_FUNCTIONS do_configure do_compile do_install
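A hedged recipe sketch (package name, version and shrinkwrap path hypothetical) showing how this class is normally fed by the npm and npmsw fetchers:

    SRC_URI = "npm://registry.npmjs.org;package=example-app;version=${PV} \
               npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
               "
    inherit npm
    NPM_INSTALL_DEV = "0"
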
diff --git a/poky/meta/classes/oelint.bbclass b/poky/meta/classes/oelint.bbclass
index 2589d34..458a25e 100644
--- a/poky/meta/classes/oelint.bbclass
+++ b/poky/meta/classes/oelint.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 addtask lint before do_build
 do_lint[nostamp] = "1"
 python do_lint() {
diff --git a/poky/meta/classes/overlayfs-etc.bbclass b/poky/meta/classes/overlayfs-etc.bbclass
index 91afee6..d0bc3ec 100644
--- a/poky/meta/classes/overlayfs-etc.bbclass
+++ b/poky/meta/classes/overlayfs-etc.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Class for setting up /etc in overlayfs
 #
 # In order to have /etc directory in overlayfs a special handling at early boot stage is required
diff --git a/poky/meta/classes/overlayfs.bbclass b/poky/meta/classes/overlayfs.bbclass
index f7069ed..bdc6dd9 100644
--- a/poky/meta/classes/overlayfs.bbclass
+++ b/poky/meta/classes/overlayfs.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Class for generation of overlayfs mount units
 #
 # It's often desired in Embedded System design to have a read-only rootfs.
diff --git a/poky/meta/classes/own-mirrors.bbclass b/poky/meta/classes/own-mirrors.bbclass
index ef97274..2f24ff1 100644
--- a/poky/meta/classes/own-mirrors.bbclass
+++ b/poky/meta/classes/own-mirrors.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 PREMIRRORS:prepend = " \
 cvs://.*/.*     ${SOURCE_MIRROR_URL} \
 svn://.*/.*     ${SOURCE_MIRROR_URL} \
diff --git a/poky/meta/classes/package.bbclass b/poky/meta/classes/package.bbclass
deleted file mode 100644
index 97e97d2..0000000
--- a/poky/meta/classes/package.bbclass
+++ /dev/null
@@ -1,2552 +0,0 @@
-#
-# Packaging process
-#
-# Executive summary: This class iterates over the functions listed in PACKAGEFUNCS,
-# taking D and splitting it up into the packages listed in PACKAGES, placing the
-# resulting output in PKGDEST.
-#
-# There are the following default steps but PACKAGEFUNCS can be extended (see the illustrative snippet after this list):
-#
-# a) package_convert_pr_autoinc - convert AUTOINC in PKGV to ${PRSERV_PV_AUTOINC}
-#
-# b) perform_packagecopy - Copy D into PKGD
-#
-# c) package_do_split_locales - Split out the locale files, updates FILES and PACKAGES
-#
-# d) split_and_strip_files - split the files into runtime and debug and strip them.
-#    Debug files include debug info split, and associated sources that end up in -dbg packages
-#
-# e) fixup_perms - Fix up permissions in the package before we split it.
-#
-# f) populate_packages - Split the files in PKGD into separate packages in PKGDEST/<pkgname>
-#    Also triggers the binary stripping code to put files in -dbg packages.
-#
-# g) package_do_filedeps - Collect per-file run-time dependency metadata
-#    The data is stored in FILER{PROVIDES,DEPENDS}_file_pkg variables with
-#    a list of affected files in FILER{PROVIDES,DEPENDS}FLIST_pkg
-#
-# h) package_do_shlibs - Look at the shared libraries generated and automatically add any
-#    dependencies found. Also stores the package name so anyone else using this library
-#    knows which package to depend on.
-#
-# i) package_do_pkgconfig - Keep track of which packages need and provide which .pc files
-#
-# j) read_shlibdeps - Reads the stored shlibs information into the metadata
-#
-# k) package_depchains - Adds automatic dependencies to -dbg and -dev packages
-#
-# l) emit_pkgdata - saves the packaging data into PKGDATA_DIR for use in later
-#    packaging steps
-
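For illustration, the step list above can be extended from a recipe or class; a minimal sketch with a hypothetical function name:

    PACKAGEFUNCS += "my_extra_package_step"

    python my_extra_package_step () {
        # Runs as part of do_package, after the default steps listed above.
        bb.note("custom packaging step for %s" % d.getVar('PN'))
    }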
-inherit packagedata
-inherit chrpath
-inherit package_pkgdata
-inherit insane
-
-PKGD    = "${WORKDIR}/package"
-PKGDEST = "${WORKDIR}/packages-split"
-
-LOCALE_SECTION ?= ''
-
-ALL_MULTILIB_PACKAGE_ARCHS = "${@all_multilib_tune_values(d, 'PACKAGE_ARCHS')}"
-
-# rpm is used for the per-file dependency identification
-# dwarfsrcfiles is used to determine the list of debug source files
-PACKAGE_DEPENDS += "rpm-native dwarfsrcfiles-native"
-
-
-# If your postinstall can execute at rootfs creation time rather than on
-# target but depends on a native/cross tool in order to execute, you need to
-# list that tool in PACKAGE_WRITE_DEPS. Target package dependencies belong
-# in the package dependencies as normal, this is just for native/cross support
-# tools at rootfs build time.
-PACKAGE_WRITE_DEPS ??= ""
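For illustration, a hypothetical recipe whose postinst can run at rootfs creation time but needs a native helper would list that helper here (all names invented):

    PACKAGE_WRITE_DEPS += "my-config-tool-native"

    pkg_postinst:${PN}() {
        my-config-tool --update $D${sysconfdir}/my.conf
    }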
-
-def legitimize_package_name(s):
-    """
-    Make sure package names are legitimate strings
-    """
-    import re
-
-    def fixutf(m):
-        cp = m.group(1)
-        if cp:
-            return ('\\u%s' % cp).encode('latin-1').decode('unicode_escape')
-
-    # Handle unicode codepoints encoded as <U0123>, as in glibc locale files.
-    s = re.sub(r'<U([0-9A-Fa-f]{1,4})>', fixutf, s)
-
-    # Remaining package name validity fixes
-    return s.lower().replace('_', '-').replace('@', '+').replace(',', '+').replace('/', '-')
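For illustration, the transformations above give, e.g. (input values invented):

    legitimize_package_name("Foo_Bar@2,1")    # -> "foo-bar+2+1"
    legitimize_package_name("gconv-<U00E9>")  # -> "gconv-é"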
-
-def do_split_packages(d, root, file_regex, output_pattern, description, postinst=None, recursive=False, hook=None, extra_depends=None, aux_files_pattern=None, postrm=None, allow_dirs=False, prepend=False, match_path=False, aux_files_pattern_verbatim=None, allow_links=False, summary=None):
-    """
-    Used in .bb files to split up dynamically generated subpackages of a
-    given package, usually plugins or modules.
-
-    Arguments:
-    root           -- the path in which to search
-    file_regex     -- regular expression to match searched files. Use
-                      parentheses () to mark the part of this expression
-                      that should be used to derive the module name (to be
-                      substituted where %s is used in other function
-                      arguments as noted below)
-    output_pattern -- pattern to use for the package names. Must include %s.
-    description    -- description to set for each package. Must include %s.
-    postinst       -- postinstall script to use for all packages (as a
-                      string)
-    recursive      -- True to perform a recursive search - default False
-    hook           -- a hook function to be called for every match. The
-                      function will be called with the following arguments
-                      (in the order listed):
-                        f: full path to the file/directory match
-                        pkg: the package name
-                        file_regex: as above
-                        output_pattern: as above
-                        modulename: the module name derived using file_regex
-    extra_depends  -- extra runtime dependencies (RDEPENDS) to be set for
-                      all packages. The default value of None causes a
-                      dependency on the main package (${PN}) - if you do
-                      not want this, pass '' for this parameter.
-    aux_files_pattern -- extra item(s) to be added to FILES for each
-                      package. Can be a single string item or a list of
-                      strings for multiple items.  Must include %s.
-    postrm         -- postrm script to use for all packages (as a string)
-    allow_dirs     -- True allow directories to be matched - default False
-    prepend        -- if True, prepend created packages to PACKAGES instead
-                      of the default False which appends them
-    match_path     -- match file_regex on the whole relative path to the
-                      root rather than just the file name
-    aux_files_pattern_verbatim -- extra item(s) to be added to FILES for
-                      each package, using the actual derived module name
-                      rather than converting it to something legal for a
-                      package name. Can be a single string item or a list
-                      of strings for multiple items. Must include %s.
-    allow_links    -- True to allow symlinks to be matched - default False
-    summary        -- Summary to set for each package. Must include %s;
-                      defaults to description if not set.
-
-    """
-
-    dvar = d.getVar('PKGD')
-    root = d.expand(root)
-    output_pattern = d.expand(output_pattern)
-    extra_depends = d.expand(extra_depends)
-
-    # If the root directory doesn't exist, don't error out later but silently do
-    # no splitting.
-    if not os.path.exists(dvar + root):
-        return []
-
-    ml = d.getVar("MLPREFIX")
-    if ml:
-        if not output_pattern.startswith(ml):
-            output_pattern = ml + output_pattern
-
-        newdeps = []
-        for dep in (extra_depends or "").split():
-            if dep.startswith(ml):
-                newdeps.append(dep)
-            else:
-                newdeps.append(ml + dep)
-        if newdeps:
-            extra_depends = " ".join(newdeps)
-
-
-    packages = d.getVar('PACKAGES').split()
-    split_packages = set()
-
-    if postinst:
-        postinst = '#!/bin/sh\n' + postinst + '\n'
-    if postrm:
-        postrm = '#!/bin/sh\n' + postrm + '\n'
-    if not recursive:
-        objs = os.listdir(dvar + root)
-    else:
-        objs = []
-        for walkroot, dirs, files in os.walk(dvar + root):
-            for file in files:
-                relpath = os.path.join(walkroot, file).replace(dvar + root + '/', '', 1)
-                if relpath:
-                    objs.append(relpath)
-
-    if extra_depends == None:
-        extra_depends = d.getVar("PN")
-
-    if not summary:
-        summary = description
-
-    for o in sorted(objs):
-        import re, stat
-        if match_path:
-            m = re.match(file_regex, o)
-        else:
-            m = re.match(file_regex, os.path.basename(o))
-
-        if not m:
-            continue
-        f = os.path.join(dvar + root, o)
-        mode = os.lstat(f).st_mode
-        if not (stat.S_ISREG(mode) or (allow_links and stat.S_ISLNK(mode)) or (allow_dirs and stat.S_ISDIR(mode))):
-            continue
-        on = legitimize_package_name(m.group(1))
-        pkg = output_pattern % on
-        split_packages.add(pkg)
-        if not pkg in packages:
-            if prepend:
-                packages = [pkg] + packages
-            else:
-                packages.append(pkg)
-        oldfiles = d.getVar('FILES:' + pkg)
-        newfile = os.path.join(root, o)
-        # These names will be passed through glob() so if the filename actually
-        # contains * or ? (rare, but possible) we need to handle that specially
-        newfile = newfile.replace('*', '[*]')
-        newfile = newfile.replace('?', '[?]')
-        if not oldfiles:
-            the_files = [newfile]
-            if aux_files_pattern:
-                if type(aux_files_pattern) is list:
-                    for fp in aux_files_pattern:
-                        the_files.append(fp % on)
-                else:
-                    the_files.append(aux_files_pattern % on)
-            if aux_files_pattern_verbatim:
-                if type(aux_files_pattern_verbatim) is list:
-                    for fp in aux_files_pattern_verbatim:
-                        the_files.append(fp % m.group(1))
-                else:
-                    the_files.append(aux_files_pattern_verbatim % m.group(1))
-            d.setVar('FILES:' + pkg, " ".join(the_files))
-        else:
-            d.setVar('FILES:' + pkg, oldfiles + " " + newfile)
-        if extra_depends != '':
-            d.appendVar('RDEPENDS:' + pkg, ' ' + extra_depends)
-        if not d.getVar('DESCRIPTION:' + pkg):
-            d.setVar('DESCRIPTION:' + pkg, description % on)
-        if not d.getVar('SUMMARY:' + pkg):
-            d.setVar('SUMMARY:' + pkg, summary % on)
-        if postinst:
-            d.setVar('pkg_postinst:' + pkg, postinst)
-        if postrm:
-            d.setVar('pkg_postrm:' + pkg, postrm)
-        if callable(hook):
-            hook(f, pkg, file_regex, output_pattern, m.group(1))
-
-    d.setVar('PACKAGES', ' '.join(packages))
-    return list(split_packages)
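For illustration, a typical recipe-side use of do_split_packages to create per-plugin subpackages (all names hypothetical):

    python populate_packages:prepend () {
        plugindir = d.expand('${libdir}/myapp/plugins')
        do_split_packages(d, plugindir, r'^lib(.*)\.so$',
                          output_pattern='myapp-plugin-%s',
                          description='MyApp %s plugin',
                          prepend=True)
    }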
-
-PACKAGE_DEPENDS += "file-native"
-
-python () {
-    if d.getVar('PACKAGES') != '':
-        deps = ""
-        for dep in (d.getVar('PACKAGE_DEPENDS') or "").split():
-            deps += " %s:do_populate_sysroot" % dep
-        if d.getVar('PACKAGE_MINIDEBUGINFO') == '1':
-            deps += ' xz-native:do_populate_sysroot'
-        d.appendVarFlag('do_package', 'depends', deps)
-
-        # shlibs requires any DEPENDS to have already packaged for the *.list files
-        d.appendVarFlag('do_package', 'deptask', " do_packagedata")
-}
-
-# Get a list of files from file vars by searching files under current working directory
-# The list contains symlinks, directories and normal files.
-def files_from_filevars(filevars):
-    import os,glob
-    cpath = oe.cachedpath.CachedPath()
-    files = []
-    for f in filevars:
-        if os.path.isabs(f):
-            f = '.' + f
-        if not f.startswith("./"):
-            f = './' + f
-        globbed = glob.glob(f)
-        if globbed:
-            if [ f ] != globbed:
-                files += globbed
-                continue
-        files.append(f)
-
-    symlink_paths = []
-    for ind, f in enumerate(files):
-        # Handle directory symlinks. Truncate path to the lowest level symlink
-        parent = ''
-        for dirname in f.split('/')[:-1]:
-            parent = os.path.join(parent, dirname)
-            if dirname == '.':
-                continue
-            if cpath.islink(parent):
-                bb.warn("FILES contains file '%s' which resides under a "
-                        "directory symlink. Please fix the recipe and use the "
-                        "real path for the file." % f[1:])
-                symlink_paths.append(f)
-                files[ind] = parent
-                f = parent
-                break
-
-        if not cpath.islink(f):
-            if cpath.isdir(f):
-                newfiles = [ os.path.join(f,x) for x in os.listdir(f) ]
-                if newfiles:
-                    files += newfiles
-
-    return files, symlink_paths
-
-# Called in package_<rpm,ipk,deb>.bbclass to get the correct list of configuration files
-def get_conffiles(pkg, d):
-    pkgdest = d.getVar('PKGDEST')
-    root = os.path.join(pkgdest, pkg)
-    cwd = os.getcwd()
-    os.chdir(root)
-
-    conffiles = d.getVar('CONFFILES:%s' % pkg);
-    if conffiles == None:
-        conffiles = d.getVar('CONFFILES')
-    if conffiles == None:
-        conffiles = ""
-    conffiles = conffiles.split()
-    conf_orig_list = files_from_filevars(conffiles)[0]
-
-    # Remove links and directories from conf_orig_list to get conf_list which only contains normal files
-    conf_list = []
-    for f in conf_orig_list:
-        if os.path.isdir(f):
-            continue
-        if os.path.islink(f):
-            continue
-        if not os.path.exists(f):
-            continue
-        conf_list.append(f)
-
-    # Remove the leading './'
-    for i in range(0, len(conf_list)):
-        conf_list[i] = conf_list[i][1:]
-
-    os.chdir(cwd)
-    return conf_list
-
-def checkbuildpath(file, d):
-    tmpdir = d.getVar('TMPDIR')
-    with open(file) as f:
-        file_content = f.read()
-        if tmpdir in file_content:
-            return True
-
-    return False
-
-def parse_debugsources_from_dwarfsrcfiles_output(dwarfsrcfiles_output):
-    debugfiles = {}
-
-    for line in dwarfsrcfiles_output.splitlines():
-        if line.startswith("\t"):
-            debugfiles[os.path.normpath(line.split()[0])] = ""
-
-    return debugfiles.keys()
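For illustration, assuming dwarfsrcfiles prints the binary name followed by tab-indented source paths (format assumed here), the parser above keeps the first token of each tab-indented line:

    sample = "bin/app\n\t/usr/src/debug/app/1.0/main.c\n\t/usr/src/debug/app/1.0/./util.c\n"
    parse_debugsources_from_dwarfsrcfiles_output(sample)
    # -> keys '/usr/src/debug/app/1.0/main.c' and '/usr/src/debug/app/1.0/util.c'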
-
-def source_info(file, d, fatal=True):
-    import subprocess
-
-    cmd = ["dwarfsrcfiles", file]
-    try:
-        output = subprocess.check_output(cmd, universal_newlines=True, stderr=subprocess.STDOUT)
-        retval = 0
-    except subprocess.CalledProcessError as exc:
-        output = exc.output
-        retval = exc.returncode
-
-    # 255 means a specific file wasn't fully parsed to get the debug file list, which is not a fatal failure
-    if retval != 0 and retval != 255:
-        msg = "dwarfsrcfiles failed with exit code %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else "")
-        if fatal:
-            bb.fatal(msg)
-        bb.note(msg)
-
-    debugsources = parse_debugsources_from_dwarfsrcfiles_output(output)
-
-    return list(debugsources)
-
-def splitdebuginfo(file, dvar, dv, d):
-    # Function to split a single file into two components: one is the stripped
-    # target system binary, the other contains any debugging information. The
-    # two files are linked to reference each other.
-    #
-    # return a mapping of files:debugsources
-
-    import stat
-    import subprocess
-
-    src = file[len(dvar):]
-    dest = dv["libdir"] + os.path.dirname(src) + dv["dir"] + "/" + os.path.basename(src) + dv["append"]
-    debugfile = dvar + dest
-    sources = []
-
-    if file.endswith(".ko") and file.find("/lib/modules/") != -1:
-        if oe.package.is_kernel_module_signed(file):
-            bb.debug(1, "Skip strip on signed module %s" % file)
-            return (file, sources)
-
-    # Split the file...
-    bb.utils.mkdirhier(os.path.dirname(debugfile))
-    #bb.note("Split %s -> %s" % (file, debugfile))
-    # Only store off the hard link reference if we successfully split!
-
-    dvar = d.getVar('PKGD')
-    objcopy = d.getVar("OBJCOPY")
-
-    newmode = None
-    if not os.access(file, os.W_OK) or os.access(file, os.R_OK):
-        origmode = os.stat(file)[stat.ST_MODE]
-        newmode = origmode | stat.S_IWRITE | stat.S_IREAD
-        os.chmod(file, newmode)
-
-    # We need to extract the debug src information here...
-    if dv["srcdir"]:
-        sources = source_info(file, d)
-
-    bb.utils.mkdirhier(os.path.dirname(debugfile))
-
-    subprocess.check_output([objcopy, '--only-keep-debug', file, debugfile], stderr=subprocess.STDOUT)
-
-    # Set the debuglink to have the view of the file path on the target
-    subprocess.check_output([objcopy, '--add-gnu-debuglink', debugfile, file], stderr=subprocess.STDOUT)
-
-    if newmode:
-        os.chmod(file, origmode)
-
-    return (file, sources)
-
-def splitstaticdebuginfo(file, dvar, dv, d):
-    # Unlike the function above, there is no way to split a static library
-    # two components.  So to get similar results we will copy the unmodified
-    # static library (containing the debug symbols) into a new directory.
-    # We will then strip (preserving symbols) the static library in the
-    # typical location.
-    #
-    # return a mapping of files:debugsources
-
-    import stat
-    import shutil
-
-    src = file[len(dvar):]
-    dest = dv["staticlibdir"] + os.path.dirname(src) + dv["staticdir"] + "/" + os.path.basename(src) + dv["staticappend"]
-    debugfile = dvar + dest
-    sources = []
-
-    # Copy the file...
-    bb.utils.mkdirhier(os.path.dirname(debugfile))
-    #bb.note("Copy %s -> %s" % (file, debugfile))
-
-    dvar = d.getVar('PKGD')
-
-    newmode = None
-    if not os.access(file, os.W_OK) or os.access(file, os.R_OK):
-        origmode = os.stat(file)[stat.ST_MODE]
-        newmode = origmode | stat.S_IWRITE | stat.S_IREAD
-        os.chmod(file, newmode)
-
-    # We need to extract the debug src information here...
-    if dv["srcdir"]:
-        sources = source_info(file, d)
-
-    bb.utils.mkdirhier(os.path.dirname(debugfile))
-
-    # Copy the unmodified item to the debug directory
-    shutil.copy2(file, debugfile)
-
-    if newmode:
-        os.chmod(file, origmode)
-
-    return (file, sources)
-
-def inject_minidebuginfo(file, dvar, dv, d):
-    # Extract just the symbols from debuginfo into minidebuginfo,
-    # compress it with xz and inject it back into the binary in a .gnu_debugdata section.
-    # https://sourceware.org/gdb/onlinedocs/gdb/MiniDebugInfo.html
-
-    import subprocess
-
-    readelf = d.getVar('READELF')
-    nm = d.getVar('NM')
-    objcopy = d.getVar('OBJCOPY')
-
-    minidebuginfodir = d.expand('${WORKDIR}/minidebuginfo')
-
-    src = file[len(dvar):]
-    dest = dv["libdir"] + os.path.dirname(src) + dv["dir"] + "/" + os.path.basename(src) + dv["append"]
-    debugfile = dvar + dest
-    minidebugfile = minidebuginfodir + src + '.minidebug'
-    bb.utils.mkdirhier(os.path.dirname(minidebugfile))
-
-    # If we didn't produce debuginfo for any reason, we can't produce minidebuginfo either
-    # so skip it.
-    if not os.path.exists(debugfile):
-        bb.debug(1, 'ELF file {} has no debuginfo, skipping minidebuginfo injection'.format(file))
-        return
-
-    # Find non-allocated PROGBITS, NOTE, and NOBITS sections in the debuginfo.
-    # We will exclude all of these from minidebuginfo to save space.
-    remove_section_names = []
-    for line in subprocess.check_output([readelf, '-W', '-S', debugfile], universal_newlines=True).splitlines():
-        fields = line.split()
-        if len(fields) < 8:
-            continue
-        name = fields[0]
-        type = fields[1]
-        flags = fields[7]
-        # .debug_ sections will be removed by objcopy -S so no need to explicitly remove them
-        if name.startswith('.debug_'):
-            continue
-        if 'A' not in flags and type in ['PROGBITS', 'NOTE', 'NOBITS']:
-            remove_section_names.append(name)
-
-    # List dynamic symbols in the binary. We can exclude these from minidebuginfo
-    # because they are always present in the binary.
-    dynsyms = set()
-    for line in subprocess.check_output([nm, '-D', file, '--format=posix', '--defined-only'], universal_newlines=True).splitlines():
-        dynsyms.add(line.split()[0])
-
-    # Find all function symbols from debuginfo which aren't in the dynamic symbols table.
-    # These are the ones we want to keep in minidebuginfo.
-    keep_symbols_file = minidebugfile + '.symlist'
-    found_any_symbols = False
-    with open(keep_symbols_file, 'w') as f:
-        for line in subprocess.check_output([nm, debugfile, '--format=sysv', '--defined-only'], universal_newlines=True).splitlines():
-            fields = line.split('|')
-            if len(fields) < 7:
-                continue
-            name = fields[0].strip()
-            type = fields[3].strip()
-            if type == 'FUNC' and name not in dynsyms:
-                f.write('{}\n'.format(name))
-                found_any_symbols = True
-
-    if not found_any_symbols:
-        bb.debug(1, 'ELF file {} contains no symbols, skipping minidebuginfo injection'.format(file))
-        return
-
-    bb.utils.remove(minidebugfile)
-    bb.utils.remove(minidebugfile + '.xz')
-
-    subprocess.check_call([objcopy, '-S'] +
-                          ['--remove-section={}'.format(s) for s in remove_section_names] +
-                          ['--keep-symbols={}'.format(keep_symbols_file), debugfile, minidebugfile])
-
-    subprocess.check_call(['xz', '--keep', minidebugfile])
-
-    subprocess.check_call([objcopy, '--add-section', '.gnu_debugdata={}.xz'.format(minidebugfile), file])
-
-def copydebugsources(debugsrcdir, sources, d):
-    # The debug src information written out to sourcefile is further processed
-    # and copied to the destination here.
-
-    import stat
-    import subprocess
-
-    if debugsrcdir and sources:
-        sourcefile = d.expand("${WORKDIR}/debugsources.list")
-        bb.utils.remove(sourcefile)
-
-        # filenames are null-separated - this is an artefact of the previous use
-        # of rpm's debugedit, which was writing them out that way, and the code elsewhere
-        # is still assuming that.
-        debuglistoutput = '\0'.join(sources) + '\0'
-        with open(sourcefile, 'a') as sf:
-           sf.write(debuglistoutput)
-
-        dvar = d.getVar('PKGD')
-        strip = d.getVar("STRIP")
-        objcopy = d.getVar("OBJCOPY")
-        workdir = d.getVar("WORKDIR")
-        sdir = d.getVar("S")
-        sparentdir = os.path.dirname(os.path.dirname(sdir))
-        sbasedir = os.path.basename(os.path.dirname(sdir)) + "/" + os.path.basename(sdir)
-        workparentdir = os.path.dirname(os.path.dirname(workdir))
-        workbasedir = os.path.basename(os.path.dirname(workdir)) + "/" + os.path.basename(workdir)
-
-        # If S isn't based on WORKDIR we can infer our sources are located elsewhere,
-        # e.g. using externalsrc; use S as base for our dirs
-        if workdir in sdir or 'work-shared' in sdir:
-            basedir = workbasedir
-            parentdir = workparentdir
-        else:
-            basedir = sbasedir
-            parentdir = sparentdir
-
-        # If build path exists in sourcefile, it means toolchain did not use
-        # -fdebug-prefix-map to compile
-        if checkbuildpath(sourcefile, d):
-            localsrc_prefix = parentdir + "/"
-        else:
-            localsrc_prefix = "/usr/src/debug/"
-
-        nosuchdir = []
-        basepath = dvar
-        for p in debugsrcdir.split("/"):
-            basepath = basepath + "/" + p
-            if not cpath.exists(basepath):
-                nosuchdir.append(basepath)
-        bb.utils.mkdirhier(basepath)
-        cpath.updatecache(basepath)
-
-        # Ignore files from the recipe sysroots (target and native)
-        processdebugsrc =  "LC_ALL=C ; sort -z -u '%s' | egrep -v -z '((<internal>|<built-in>)$|/.*recipe-sysroot.*/)' | "
-        # We need to ignore files that are not actually ours
-        # we do this by only paying attention to items from this package
-        processdebugsrc += "fgrep -zw '%s' | "
-        # Remove prefix in the source paths
-        processdebugsrc += "sed 's#%s##g' | "
-        processdebugsrc += "(cd '%s' ; cpio -pd0mlL --no-preserve-owner '%s%s' 2>/dev/null)"
-
-        cmd = processdebugsrc % (sourcefile, basedir, localsrc_prefix, parentdir, dvar, debugsrcdir)
-        try:
-            subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
-        except subprocess.CalledProcessError:
-            # Can "fail" if internal headers/transient sources are attempted
-            pass
-
-        # cpio seems to have a bug with -lL together and symbolic links are just copied, not dereferenced.
-        # Work around this by manually finding and copying any symbolic links that made it through.
-        cmd = "find %s%s -type l -print0 -delete | sed s#%s%s/##g | (cd '%s' ; cpio -pd0mL --no-preserve-owner '%s%s')" % \
-                (dvar, debugsrcdir, dvar, debugsrcdir, parentdir, dvar, debugsrcdir)
-        subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
-
-
-        # debugsources.list may be polluted from the host if we used externalsrc,
-        # cpio uses copy-pass and may have just created a directory structure
-        # matching the one from the host; if that's the case, move those files to
-        # debugsrcdir to avoid host contamination.
-        # Empty dir structure will be deleted in the next step.
-
-        # Same check as above for externalsrc
-        if workdir not in sdir:
-            if os.path.exists(dvar + debugsrcdir + sdir):
-                cmd = "mv %s%s%s/* %s%s" % (dvar, debugsrcdir, sdir, dvar,debugsrcdir)
-                subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
-
-        # The copy by cpio may have resulted in some empty directories!  Remove these
-        cmd = "find %s%s -empty -type d -delete" % (dvar, debugsrcdir)
-        subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
-
-        # Also remove debugsrcdir if it's empty
-        for p in nosuchdir[::-1]:
-            if os.path.exists(p) and not os.listdir(p):
-                os.rmdir(p)
-
-#
-# Package data handling routines
-#
-
-def get_package_mapping (pkg, basepkg, d, depversions=None):
-    import oe.packagedata
-
-    data = oe.packagedata.read_subpkgdata(pkg, d)
-    key = "PKG:%s" % pkg
-
-    if key in data:
-        if bb.data.inherits_class('allarch', d) and bb.data.inherits_class('packagegroup', d) and pkg != data[key]:
-            bb.error("An allarch packagegroup shouldn't depend on packages which are dynamically renamed (%s to %s)" % (pkg, data[key]))
-        # Have to avoid undoing the write_extra_pkgs(global_variants...)
-        if bb.data.inherits_class('allarch', d) and not d.getVar('MULTILIB_VARIANTS') \
-            and data[key] == basepkg:
-            return pkg
-        if depversions == []:
-            # Avoid returning a mapping if the renamed package rprovides its original name
-            rprovkey = "RPROVIDES:%s" % pkg
-            if rprovkey in data:
-                if pkg in bb.utils.explode_dep_versions2(data[rprovkey]):
-                    bb.note("%s rprovides %s, not replacing the latter" % (data[key], pkg))
-                    return pkg
-        # Do map to rewritten package name
-        return data[key]
-
-    return pkg
-
-def get_package_additional_metadata (pkg_type, d):
-    base_key = "PACKAGE_ADD_METADATA"
-    for key in ("%s_%s" % (base_key, pkg_type.upper()), base_key):
-        if d.getVar(key, False) is None:
-            continue
-        d.setVarFlag(key, "type", "list")
-        if d.getVarFlag(key, "separator") is None:
-            d.setVarFlag(key, "separator", "\\n")
-        metadata_fields = [field.strip() for field in oe.data.typed_value(key, d)]
-        return "\n".join(metadata_fields).strip()
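For illustration, a hypothetical configuration that adds an extra control field only to ipk packages (multiple fields can be separated with \n, the default separator set above):

    PACKAGE_ADD_METADATA_IPK = "Vendor: Example Corp"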
-
-def runtime_mapping_rename (varname, pkg, d):
-    #bb.note("%s before: %s" % (varname, d.getVar(varname)))
-
-    new_depends = {}
-    deps = bb.utils.explode_dep_versions2(d.getVar(varname) or "")
-    for depend, depversions in deps.items():
-        new_depend = get_package_mapping(depend, pkg, d, depversions)
-        if depend != new_depend:
-            bb.note("package name mapping done: %s -> %s" % (depend, new_depend))
-        new_depends[new_depend] = deps[depend]
-
-    d.setVar(varname, bb.utils.join_deps(new_depends, commasep=False))
-
-    #bb.note("%s after: %s" % (varname, d.getVar(varname)))
-
-#
-# Used by do_packagedata (and possibly other routines post do_package)
-#
-
-PRSERV_ACTIVE = "${@bool(d.getVar("PRSERV_HOST"))}"
-PRSERV_ACTIVE[vardepvalue] = "${PRSERV_ACTIVE}"
-package_get_auto_pr[vardepsexclude] = "BB_TASKDEPDATA"
-package_get_auto_pr[vardeps] += "PRSERV_ACTIVE"
-python package_get_auto_pr() {
-    import oe.prservice
-
-    def get_do_package_hash(pn):
-        if d.getVar("BB_RUNTASK") != "do_package":
-            taskdepdata = d.getVar("BB_TASKDEPDATA", False)
-            for dep in taskdepdata:
-                if taskdepdata[dep][1] == "do_package" and taskdepdata[dep][0] == pn:
-                    return taskdepdata[dep][6]
-        return None
-
-    # Support per-recipe PRSERV_HOST
-    pn = d.getVar('PN')
-    host = d.getVar("PRSERV_HOST_" + pn)
-    if not (host is None):
-        d.setVar("PRSERV_HOST", host)
-
-    pkgv = d.getVar("PKGV")
-
-    # PR Server not active, handle AUTOINC
-    if not d.getVar('PRSERV_HOST'):
-        d.setVar("PRSERV_PV_AUTOINC", "0")
-        return
-
-    auto_pr = None
-    pv = d.getVar("PV")
-    version = d.getVar("PRAUTOINX")
-    pkgarch = d.getVar("PACKAGE_ARCH")
-    checksum = get_do_package_hash(pn)
-
-    # If do_package isn't in the dependencies, we can't get the checksum...
-    if not checksum:
-        bb.warn('Task %s requested do_package unihash, but it was not available.' % d.getVar('BB_RUNTASK'))
-        #taskdepdata = d.getVar("BB_TASKDEPDATA", False)
-        #for dep in taskdepdata:
-        #    bb.warn('%s:%s = %s' % (taskdepdata[dep][0], taskdepdata[dep][1], taskdepdata[dep][6]))
-        return
-
-    if d.getVar('PRSERV_LOCKDOWN'):
-        auto_pr = d.getVar('PRAUTO_' + version + '_' + pkgarch) or d.getVar('PRAUTO_' + version) or None
-        if auto_pr is None:
-            bb.fatal("Can NOT get PRAUTO from lockdown exported file")
-        d.setVar('PRAUTO',str(auto_pr))
-        return
-
-    try:
-        conn = oe.prservice.prserv_make_conn(d)
-        if conn is not None:
-            if "AUTOINC" in pkgv:
-                srcpv = bb.fetch2.get_srcrev(d)
-                base_ver = "AUTOINC-%s" % version[:version.find(srcpv)]
-                value = conn.getPR(base_ver, pkgarch, srcpv)
-                d.setVar("PRSERV_PV_AUTOINC", str(value))
-
-            auto_pr = conn.getPR(version, pkgarch, checksum)
-            conn.close()
-    except Exception as e:
-        bb.fatal("Can NOT get PRAUTO, exception %s" %  str(e))
-    if auto_pr is None:
-        bb.fatal("Can NOT get PRAUTO from remote PR service")
-    d.setVar('PRAUTO',str(auto_pr))
-}
-
-#
-# Package functions suitable for inclusion in PACKAGEFUNCS
-#
-
-python package_convert_pr_autoinc() {
-    pkgv = d.getVar("PKGV")
-
-    # Adjust pkgv as necessary...
-    if 'AUTOINC' in pkgv:
-        d.setVar("PKGV", pkgv.replace("AUTOINC", "${PRSERV_PV_AUTOINC}"))
-
-    # Change PRSERV_PV_AUTOINC and EXTENDPRAUTO usage to special values
-    d.setVar('PRSERV_PV_AUTOINC', '@PRSERV_PV_AUTOINC@')
-    d.setVar('EXTENDPRAUTO', '@EXTENDPRAUTO@')
-}
-
-LOCALEBASEPN ??= "${PN}"
-
-python package_do_split_locales() {
-    if (d.getVar('PACKAGE_NO_LOCALE') == '1'):
-        bb.debug(1, "package requested not splitting locales")
-        return
-
-    packages = (d.getVar('PACKAGES') or "").split()
-
-    datadir = d.getVar('datadir')
-    if not datadir:
-        bb.note("datadir not defined")
-        return
-
-    dvar = d.getVar('PKGD')
-    pn = d.getVar('LOCALEBASEPN')
-
-    if pn + '-locale' in packages:
-        packages.remove(pn + '-locale')
-
-    localedir = os.path.join(dvar + datadir, 'locale')
-
-    if not cpath.isdir(localedir):
-        bb.debug(1, "No locale files in this package")
-        return
-
-    locales = os.listdir(localedir)
-
-    summary = d.getVar('SUMMARY') or pn
-    description = d.getVar('DESCRIPTION') or ""
-    locale_section = d.getVar('LOCALE_SECTION')
-    mlprefix = d.getVar('MLPREFIX') or ""
-    for l in sorted(locales):
-        ln = legitimize_package_name(l)
-        pkg = pn + '-locale-' + ln
-        packages.append(pkg)
-        d.setVar('FILES:' + pkg, os.path.join(datadir, 'locale', l))
-        d.setVar('RRECOMMENDS:' + pkg, '%svirtual-locale-%s' % (mlprefix, ln))
-        d.setVar('RPROVIDES:' + pkg, '%s-locale %s%s-translation' % (pn, mlprefix, ln))
-        d.setVar('SUMMARY:' + pkg, '%s - %s translations' % (summary, l))
-        d.setVar('DESCRIPTION:' + pkg, '%s  This package contains language translation files for the %s locale.' % (description, l))
-        if locale_section:
-            d.setVar('SECTION:' + pkg, locale_section)
-
-    d.setVar('PACKAGES', ' '.join(packages))
-
-    # Disabled by RP 18/06/07
-    # Wildcards aren't supported in debian
-    # They break with ipkg since glibc-locale* will mean that
-    # glibc-localedata-translit* won't install as a dependency
-    # for some other package which breaks meta-toolchain
-    # Probably breaks since virtual-locale- isn't provided anywhere
-    #rdep = (d.getVar('RDEPENDS:%s' % pn) or "").split()
-    #rdep.append('%s-locale*' % pn)
-    #d.setVar('RDEPENDS:%s' % pn, ' '.join(rdep))
-}
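For illustration, a hypothetical recipe "myapp" shipping an en_GB locale would roughly end up with:

    PACKAGES += "myapp-locale-en-gb"
    FILES:myapp-locale-en-gb = "${datadir}/locale/en_GB"
    RRECOMMENDS:myapp-locale-en-gb = "virtual-locale-en-gb"
    RPROVIDES:myapp-locale-en-gb = "myapp-locale en-gb-translation"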
-
-python perform_packagecopy () {
-    import subprocess
-    import shutil
-
-    dest = d.getVar('D')
-    dvar = d.getVar('PKGD')
-
-    # Start package population by taking a copy of the installed
-    # files to operate on
-    # Preserve sparse files and hard links
-    cmd = 'tar --exclude=./sysroot-only -cf - -C %s -p -S . | tar -xf - -C %s' % (dest, dvar)
-    subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
-
-    # replace RPATHs for the nativesdk binaries, to make them relocatable
-    if bb.data.inherits_class('nativesdk', d) or bb.data.inherits_class('cross-canadian', d):
-        rpath_replace (dvar, d)
-}
-perform_packagecopy[cleandirs] = "${PKGD}"
-perform_packagecopy[dirs] = "${PKGD}"
-
-# We generate a master list of directories to process. We start by
-# seeding this list with reasonable defaults, then load from
-# the fs-perms.txt files
-python fixup_perms () {
-    import pwd, grp
-
-    # init using a string with the same format as a line as documented in
-    # the fs-perms.txt file
-    # <path> <mode> <uid> <gid> <walk> <fmode> <fuid> <fgid>
-    # <path> link <link target>
-    #
-    # __str__ can be used to print out an entry in the input format
-    #
-    # if fs_perms_entry.path is None:
-    #    an error occurred
-    # if fs_perms_entry.link, you can retrieve:
-    #    fs_perms_entry.path = path
-    #    fs_perms_entry.link = target of link
-    # if not fs_perms_entry.link, you can retrieve:
-    #    fs_perms_entry.path = path
-    #    fs_perms_entry.mode = expected dir mode or None
-    #    fs_perms_entry.uid = expected uid or -1
-    #    fs_perms_entry.gid = expected gid or -1
-    #    fs_perms_entry.walk = 'true' or something else
-    #    fs_perms_entry.fmode = expected file mode or None
-    #    fs_perms_entry.fuid = expected file uid or -1
-    #    fs_perms_entry.fgid = expected file gid or -1
-    class fs_perms_entry():
-        def __init__(self, line):
-            lsplit = line.split()
-            if len(lsplit) == 3 and lsplit[1].lower() == "link":
-                self._setlink(lsplit[0], lsplit[2])
-            elif len(lsplit) == 8:
-                self._setdir(lsplit[0], lsplit[1], lsplit[2], lsplit[3], lsplit[4], lsplit[5], lsplit[6], lsplit[7])
-            else:
-                msg = "Fixup Perms: invalid config line %s" % line
-                oe.qa.handle_error("perm-config", msg, d)
-                self.path = None
-                self.link = None
-
-        def _setdir(self, path, mode, uid, gid, walk, fmode, fuid, fgid):
-            self.path = os.path.normpath(path)
-            self.link = None
-            self.mode = self._procmode(mode)
-            self.uid  = self._procuid(uid)
-            self.gid  = self._procgid(gid)
-            self.walk = walk.lower()
-            self.fmode = self._procmode(fmode)
-            self.fuid = self._procuid(fuid)
-            self.fgid = self._procgid(fgid)
-
-        def _setlink(self, path, link):
-            self.path = os.path.normpath(path)
-            self.link = link
-
-        def _procmode(self, mode):
-            if not mode or (mode and mode == "-"):
-                return None
-            else:
-                return int(mode,8)
-
-        # Note uid/gid -1 has special significance in os.lchown
-        def _procuid(self, uid):
-            if uid is None or uid == "-":
-                return -1
-            elif uid.isdigit():
-                return int(uid)
-            else:
-                return pwd.getpwnam(uid).pw_uid
-
-        def _procgid(self, gid):
-            if gid is None or gid == "-":
-                return -1
-            elif gid.isdigit():
-                return int(gid)
-            else:
-                return grp.getgrnam(gid).gr_gid
-
-        # Use for debugging the entries
-        def __str__(self):
-            if self.link:
-                return "%s link %s" % (self.path, self.link)
-            else:
-                mode = "-"
-                if self.mode:
-                    mode = "0%o" % self.mode
-                fmode = "-"
-                if self.fmode:
-                    fmode = "0%o" % self.fmode
-                uid = self._mapugid(self.uid)
-                gid = self._mapugid(self.gid)
-                fuid = self._mapugid(self.fuid)
-                fgid = self._mapugid(self.fgid)
-                return "%s %s %s %s %s %s %s %s" % (self.path, mode, uid, gid, self.walk, fmode, fuid, fgid)
-
-        def _mapugid(self, id):
-            if id is None or id == -1:
-                return "-"
-            else:
-                return "%d" % id
-
-    # Fix the permission, owner and group of path
-    def fix_perms(path, mode, uid, gid, dir):
-        if mode and not os.path.islink(path):
-            #bb.note("Fixup Perms: chmod 0%o %s" % (mode, dir))
-            os.chmod(path, mode)
-        # -1 is a special value that means don't change the uid/gid
-        # if they are BOTH -1, don't bother to lchown
-        if not (uid == -1 and gid == -1):
-            #bb.note("Fixup Perms: lchown %d:%d %s" % (uid, gid, dir))
-            os.lchown(path, uid, gid)
-
-    # Return a list of configuration files based on either the default
-    # files/fs-perms.txt or the contents of FILESYSTEM_PERMS_TABLES
-    # paths are resolved via BBPATH
-    def get_fs_perms_list(d):
-        str = ""
-        bbpath = d.getVar('BBPATH')
-        fs_perms_tables = d.getVar('FILESYSTEM_PERMS_TABLES') or ""
-        for conf_file in fs_perms_tables.split():
-            confpath = bb.utils.which(bbpath, conf_file)
-            if confpath:
-                str += " %s" % bb.utils.which(bbpath, conf_file)
-            else:
-                bb.warn("cannot find %s specified in FILESYSTEM_PERMS_TABLES" % conf_file)
-        return str
-
-
-
-    dvar = d.getVar('PKGD')
-
-    fs_perms_table = {}
-    fs_link_table = {}
-
-    # By default all of the standard directories specified in
-    # bitbake.conf will get 0755 root:root.
-    target_path_vars = [    'base_prefix',
-                'prefix',
-                'exec_prefix',
-                'base_bindir',
-                'base_sbindir',
-                'base_libdir',
-                'datadir',
-                'sysconfdir',
-                'servicedir',
-                'sharedstatedir',
-                'localstatedir',
-                'infodir',
-                'mandir',
-                'docdir',
-                'bindir',
-                'sbindir',
-                'libexecdir',
-                'libdir',
-                'includedir',
-                'oldincludedir' ]
-
-    for path in target_path_vars:
-        dir = d.getVar(path) or ""
-        if dir == "":
-            continue
-        fs_perms_table[dir] = fs_perms_entry(d.expand("%s 0755 root root false - - -" % (dir)))
-
-    # Now we actually load from the configuration files
-    for conf in get_fs_perms_list(d).split():
-        if not os.path.exists(conf):
-            continue
-        with open(conf) as f:
-            for line in f:
-                if line.startswith('#'):
-                    continue
-                lsplit = line.split()
-                if len(lsplit) == 0:
-                    continue
-                if len(lsplit) != 8 and not (len(lsplit) == 3 and lsplit[1].lower() == "link"):
-                    msg = "Fixup perms: %s invalid line: %s" % (conf, line)
-                    oe.qa.handle_error("perm-line", msg, d)
-                    continue
-                entry = fs_perms_entry(d.expand(line))
-                if entry and entry.path:
-                    if entry.link:
-                        fs_link_table[entry.path] = entry
-                        if entry.path in fs_perms_table:
-                            fs_perms_table.pop(entry.path)
-                    else:
-                        fs_perms_table[entry.path] = entry
-                        if entry.path in fs_link_table:
-                            fs_link_table.pop(entry.path)
-
-    # Debug -- list out in-memory table
-    #for dir in fs_perms_table:
-    #    bb.note("Fixup Perms: %s: %s" % (dir, str(fs_perms_table[dir])))
-    #for link in fs_link_table:
-    #    bb.note("Fixup Perms: %s: %s" % (link, str(fs_link_table[link])))
-
-    # We process links first, so we can go back and fixup directory ownership
-    # for any newly created directories
-    # Process in sorted order so /run gets created before /run/lock, etc.
-    for entry in sorted(fs_link_table.values(), key=lambda x: x.link):
-        link = entry.link
-        dir = entry.path
-        origin = dvar + dir
-        if not (cpath.exists(origin) and cpath.isdir(origin) and not cpath.islink(origin)):
-            continue
-
-        if link[0] == "/":
-            target = dvar + link
-            ptarget = link
-        else:
-            target = os.path.join(os.path.dirname(origin), link)
-            ptarget = os.path.join(os.path.dirname(dir), link)
-        if os.path.exists(target):
-            msg = "Fixup Perms: Unable to correct directory link, target already exists: %s -> %s" % (dir, ptarget)
-            oe.qa.handle_error("perm-link", msg, d)
-            continue
-
-        # Create path to move directory to, move it, and then setup the symlink
-        bb.utils.mkdirhier(os.path.dirname(target))
-        #bb.note("Fixup Perms: Rename %s -> %s" % (dir, ptarget))
-        bb.utils.rename(origin, target)
-        #bb.note("Fixup Perms: Link %s -> %s" % (dir, link))
-        os.symlink(link, origin)
-
-    for dir in fs_perms_table:
-        origin = dvar + dir
-        if not (cpath.exists(origin) and cpath.isdir(origin)):
-            continue
-
-        fix_perms(origin, fs_perms_table[dir].mode, fs_perms_table[dir].uid, fs_perms_table[dir].gid, dir)
-
-        if fs_perms_table[dir].walk == 'true':
-            for root, dirs, files in os.walk(origin):
-                for dr in dirs:
-                    each_dir = os.path.join(root, dr)
-                    fix_perms(each_dir, fs_perms_table[dir].mode, fs_perms_table[dir].uid, fs_perms_table[dir].gid, dir)
-                for f in files:
-                    each_file = os.path.join(root, f)
-                    fix_perms(each_file, fs_perms_table[dir].fmode, fs_perms_table[dir].fuid, fs_perms_table[dir].fgid, dir)
-}
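For illustration, entries in the fs-perms.txt format parsed above could look like this (paths invented):

    # <path> <mode> <uid> <gid> <walk> <fmode> <fuid> <fgid>
    ${datadir}/myapp        0755 root root true 0644 root root
    # <path> link <target>
    ${localstatedir}/myapp  link volatile/myapp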
-
-def package_debug_vars(d):
-    # We default to '.debug' style
-    if d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-file-directory':
-        # Single debug-file-directory style debug info
-        debug_vars = {
-            "append": ".debug",
-            "staticappend": "",
-            "dir": "",
-            "staticdir": "",
-            "libdir": "/usr/lib/debug",
-            "staticlibdir": "/usr/lib/debug-static",
-            "srcdir": "/usr/src/debug",
-        }
-    elif d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-without-src':
-        # Original OE-core, a.k.a. ".debug", style debug info, but without sources in /usr/src/debug
-        debug_vars = {
-            "append": "",
-            "staticappend": "",
-            "dir": "/.debug",
-            "staticdir": "/.debug-static",
-            "libdir": "",
-            "staticlibdir": "",
-            "srcdir": "",
-        }
-    elif d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-with-srcpkg':
-        debug_vars = {
-            "append": "",
-            "staticappend": "",
-            "dir": "/.debug",
-            "staticdir": "/.debug-static",
-            "libdir": "",
-            "staticlibdir": "",
-            "srcdir": "/usr/src/debug",
-        }
-    else:
-        # Original OE-core, a.k.a. ".debug", style debug info
-        debug_vars = {
-            "append": "",
-            "staticappend": "",
-            "dir": "/.debug",
-            "staticdir": "/.debug-static",
-            "libdir": "",
-            "staticlibdir": "",
-            "srcdir": "/usr/src/debug",
-        }
-
-    return debug_vars
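For illustration, selecting the first layout above from local.conf places debug symbols under /usr/lib/debug and sources under /usr/src/debug:

    PACKAGE_DEBUG_SPLIT_STYLE = "debug-file-directory"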
-
-python split_and_strip_files () {
-    import stat, errno
-    import subprocess
-
-    dvar = d.getVar('PKGD')
-    pn = d.getVar('PN')
-    hostos = d.getVar('HOST_OS')
-
-    oldcwd = os.getcwd()
-    os.chdir(dvar)
-
-    dv = package_debug_vars(d)
-
-    #
-    # First let's figure out all of the files we may have to process ... do this only once!
-    #
-    elffiles = {}
-    symlinks = {}
-    staticlibs = []
-    inodes = {}
-    libdir = os.path.abspath(dvar + os.sep + d.getVar("libdir"))
-    baselibdir = os.path.abspath(dvar + os.sep + d.getVar("base_libdir"))
-    skipfiles = (d.getVar("INHIBIT_PACKAGE_STRIP_FILES") or "").split()
-    if (d.getVar('INHIBIT_PACKAGE_STRIP') != '1' or \
-            d.getVar('INHIBIT_PACKAGE_DEBUG_SPLIT') != '1'):
-        checkelf = {}
-        checkelflinks = {}
-        for root, dirs, files in cpath.walk(dvar):
-            for f in files:
-                file = os.path.join(root, f)
-
-                # Skip debug files
-                if dv["append"] and file.endswith(dv["append"]):
-                    continue
-                if dv["dir"] and dv["dir"] in os.path.dirname(file[len(dvar):]):
-                    continue
-
-                if file in skipfiles:
-                    continue
-
-                if oe.package.is_static_lib(file):
-                    staticlibs.append(file)
-                    continue
-
-                try:
-                    ltarget = cpath.realpath(file, dvar, False)
-                    s = cpath.lstat(ltarget)
-                except OSError as e:
-                    (err, strerror) = e.args
-                    if err != errno.ENOENT:
-                        raise
-                    # Skip broken symlinks
-                    continue
-                if not s:
-                    continue
-                # Check it's an executable
-                if (s[stat.ST_MODE] & stat.S_IXUSR) or (s[stat.ST_MODE] & stat.S_IXGRP) \
-                        or (s[stat.ST_MODE] & stat.S_IXOTH) \
-                        or ((file.startswith(libdir) or file.startswith(baselibdir)) \
-                        and (".so" in f or ".node" in f)) \
-                        or (f.startswith('vmlinux') or ".ko" in f):
-
-                    if cpath.islink(file):
-                        checkelflinks[file] = ltarget
-                        continue
-                    # Use a reference of device ID and inode number to identify files
-                    file_reference = "%d_%d" % (s.st_dev, s.st_ino)
-                    checkelf[file] = (file, file_reference)
-
-        results = oe.utils.multiprocess_launch(oe.package.is_elf, checkelflinks.values(), d)
-        results_map = {}
-        for (ltarget, elf_file) in results:
-            results_map[ltarget] = elf_file
-        for file in checkelflinks:
-            ltarget = checkelflinks[file]
-            # If it's a symlink, and points to an ELF file, we capture the readlink target
-            if results_map[ltarget]:
-                target = os.readlink(file)
-                #bb.note("Sym: %s (%d)" % (ltarget, results_map[ltarget]))
-                symlinks[file] = target
-
-        results = oe.utils.multiprocess_launch(oe.package.is_elf, checkelf.keys(), d)
-
-        # Sort results by file path. This ensures that the files are always
-        # processed in the same order, which is important to make sure builds
-        # are reproducible when dealing with hardlinks
-        results.sort(key=lambda x: x[0])
-
-        for (file, elf_file) in results:
-            # It's a file (or hardlink), not a link
-            # ...but is it ELF, and is it already stripped?
-            if elf_file & 1:
-                if elf_file & 2:
-                    if 'already-stripped' in (d.getVar('INSANE_SKIP:' + pn) or "").split():
-                        bb.note("Skipping file %s from %s for already-stripped QA test" % (file[len(dvar):], pn))
-                    else:
-                        msg = "File '%s' from %s was already stripped, this will prevent future debugging!" % (file[len(dvar):], pn)
-                        oe.qa.handle_error("already-stripped", msg, d)
-                    continue
-
-                # At this point we have an unstripped elf file. We need to:
-                #  a) Make sure any file we strip is not hardlinked to anything else outside this tree
-                #  b) Only strip any hardlinked file once (no races)
-                #  c) Track any hardlinks between files so that we can reconstruct matching debug file hardlinks
-
-                # Use a reference of device ID and inode number to identify files
-                file_reference = checkelf[file][1]
-                if file_reference in inodes:
-                    os.unlink(file)
-                    os.link(inodes[file_reference][0], file)
-                    inodes[file_reference].append(file)
-                else:
-                    inodes[file_reference] = [file]
-                    # break hardlink
-                    bb.utils.break_hardlinks(file)
-                    elffiles[file] = elf_file
-                # Modified the file so clear the cache
-                cpath.updatecache(file)
-
-    def strip_pkgd_prefix(f):
-        nonlocal dvar
-
-        if f.startswith(dvar):
-            return f[len(dvar):]
-
-        return f
-
-    #
-    # First let's process debug splitting
-    #
-    if (d.getVar('INHIBIT_PACKAGE_DEBUG_SPLIT') != '1'):
-        results = oe.utils.multiprocess_launch(splitdebuginfo, list(elffiles), d, extraargs=(dvar, dv, d))
-
-        if dv["srcdir"] and not hostos.startswith("mingw"):
-            if (d.getVar('PACKAGE_DEBUG_STATIC_SPLIT') == '1'):
-                results = oe.utils.multiprocess_launch(splitstaticdebuginfo, staticlibs, d, extraargs=(dvar, dv, d))
-            else:
-                for file in staticlibs:
-                    results.append( (file,source_info(file, d)) )
-
-        d.setVar("PKGDEBUGSOURCES", {strip_pkgd_prefix(f): sorted(s) for f, s in results})
-
-        sources = set()
-        for r in results:
-            sources.update(r[1])
-
-        # Hardlink our debug symbols to the other hardlink copies
-        for ref in inodes:
-            if len(inodes[ref]) == 1:
-                continue
-
-            target = inodes[ref][0][len(dvar):]
-            for file in inodes[ref][1:]:
-                src = file[len(dvar):]
-                dest = dv["libdir"] + os.path.dirname(src) + dv["dir"] + "/" + os.path.basename(target) + dv["append"]
-                fpath = dvar + dest
-                ftarget = dvar + dv["libdir"] + os.path.dirname(target) + dv["dir"] + "/" + os.path.basename(target) + dv["append"]
-                bb.utils.mkdirhier(os.path.dirname(fpath))
-                # Only one hardlink of separated debug info file in each directory
-                if not os.access(fpath, os.R_OK):
-                    #bb.note("Link %s -> %s" % (fpath, ftarget))
-                    os.link(ftarget, fpath)
-
-        # Create symlinks for all cases we were able to split symbols
-        for file in symlinks:
-            src = file[len(dvar):]
-            dest = dv["libdir"] + os.path.dirname(src) + dv["dir"] + "/" + os.path.basename(src) + dv["append"]
-            fpath = dvar + dest
-            # Skip it if the target doesn't exist
-            try:
-                s = os.stat(fpath)
-            except OSError as e:
-                (err, strerror) = e.args
-                if err != errno.ENOENT:
-                    raise
-                continue
-
-            ltarget = symlinks[file]
-            lpath = os.path.dirname(ltarget)
-            lbase = os.path.basename(ltarget)
-            ftarget = ""
-            if lpath and lpath != ".":
-                ftarget += lpath + dv["dir"] + "/"
-            ftarget += lbase + dv["append"]
-            if lpath.startswith(".."):
-                ftarget = os.path.join("..", ftarget)
-            bb.utils.mkdirhier(os.path.dirname(fpath))
-            #bb.note("Symlink %s -> %s" % (fpath, ftarget))
-            os.symlink(ftarget, fpath)
-
-        # Process the dv["srcdir"] if requested...
-        # This copies and places the referenced sources for later debugging...
-        copydebugsources(dv["srcdir"], sources, d)
-    #
-    # End of debug splitting
-    #
-
-    #
-    # Now let's go back over things and strip them
-    #
-    if (d.getVar('INHIBIT_PACKAGE_STRIP') != '1'):
-        strip = d.getVar("STRIP")
-        sfiles = []
-        for file in elffiles:
-            elf_file = int(elffiles[file])
-            #bb.note("Strip %s" % file)
-            sfiles.append((file, elf_file, strip))
-        if (d.getVar('PACKAGE_STRIP_STATIC') == '1' or d.getVar('PACKAGE_DEBUG_STATIC_SPLIT') == '1'):
-            for f in staticlibs:
-                sfiles.append((f, 16, strip))
-
-        oe.utils.multiprocess_launch(oe.package.runstrip, sfiles, d)
-
-    # Build "minidebuginfo" and reinject it back into the stripped binaries
-    if d.getVar('PACKAGE_MINIDEBUGINFO') == '1':
-        oe.utils.multiprocess_launch(inject_minidebuginfo, list(elffiles), d,
-                                     extraargs=(dvar, dv, d))
-
-    #
-    # End of strip
-    #
-    os.chdir(oldcwd)
-}
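For illustration, the recipe/local.conf switches consulted by split_and_strip_files above:

    INHIBIT_PACKAGE_STRIP = "1"         # keep binaries unstripped
    INHIBIT_PACKAGE_DEBUG_SPLIT = "1"   # skip the .debug split entirely
    PACKAGE_MINIDEBUGINFO = "1"         # re-inject compressed symbols as .gnu_debugdata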
-
-python populate_packages () {
-    import glob, re
-
-    workdir = d.getVar('WORKDIR')
-    outdir = d.getVar('DEPLOY_DIR')
-    dvar = d.getVar('PKGD')
-    packages = d.getVar('PACKAGES').split()
-    pn = d.getVar('PN')
-
-    bb.utils.mkdirhier(outdir)
-    os.chdir(dvar)
-    
-    autodebug = not (d.getVar("NOAUTOPACKAGEDEBUG") or False)
-
-    split_source_package = (d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-with-srcpkg')
-
-    # If debug-with-srcpkg mode is enabled then add the source package if it
-    # doesn't exist and add the source file contents to the source package.
-    if split_source_package:
-        src_package_name = ('%s-src' % d.getVar('PN'))
-        if not src_package_name in packages:
-            packages.append(src_package_name)
-        d.setVar('FILES:%s' % src_package_name, '/usr/src/debug')
-
-    # Sanity check PACKAGES for duplicates
-    # Sanity should be moved to sanity.bbclass once we have the infrastructure
-    package_dict = {}
-
-    for i, pkg in enumerate(packages):
-        if pkg in package_dict:
-            msg = "%s is listed in PACKAGES multiple times, this leads to packaging errors." % pkg
-            oe.qa.handle_error("packages-list", msg, d)
-        # Ensure the source package gets the chance to pick up the source files
-        # before the debug package by ordering it first in PACKAGES. Whether it
-        # actually picks up any source files is controlled by
-        # PACKAGE_DEBUG_SPLIT_STYLE.
-        elif pkg.endswith("-src"):
-            package_dict[pkg] = (10, i)
-        elif autodebug and pkg.endswith("-dbg"):
-            package_dict[pkg] = (30, i)
-        else:
-            package_dict[pkg] = (50, i)
-    packages = sorted(package_dict.keys(), key=package_dict.get)
-    d.setVar('PACKAGES', ' '.join(packages))
-    pkgdest = d.getVar('PKGDEST')
-
-    seen = []
-
-    # os.mkdir masks the permissions with umask so we have to unset it first
-    oldumask = os.umask(0)
-
-    debug = []
-    for root, dirs, files in cpath.walk(dvar):
-        dir = root[len(dvar):]
-        if not dir:
-            dir = os.sep
-        for f in (files + dirs):
-            path = "." + os.path.join(dir, f)
-            if "/.debug/" in path or "/.debug-static/" in path or path.endswith("/.debug"):
-                debug.append(path)
-
-    for pkg in packages:
-        root = os.path.join(pkgdest, pkg)
-        bb.utils.mkdirhier(root)
-
-        filesvar = d.getVar('FILES:%s' % pkg) or ""
-        if "//" in filesvar:
-            msg = "FILES variable for package %s contains '//' which is invalid. Attempting to fix this but you should correct the metadata.\n" % pkg
-            oe.qa.handle_error("files-invalid", msg, d)
-            filesvar = filesvar.replace("//", "/")
-
-        origfiles = filesvar.split()
-        files, symlink_paths = files_from_filevars(origfiles)
-
-        if autodebug and pkg.endswith("-dbg"):
-            files.extend(debug)
-
-        for file in files:
-            if (not cpath.islink(file)) and (not cpath.exists(file)):
-                continue
-            if file in seen:
-                continue
-            seen.append(file)
-
-            def mkdir(src, dest, p):
-                src = os.path.join(src, p)
-                dest = os.path.join(dest, p)
-                fstat = cpath.stat(src)
-                os.mkdir(dest)
-                os.chmod(dest, fstat.st_mode)
-                os.chown(dest, fstat.st_uid, fstat.st_gid)
-                if p not in seen:
-                    seen.append(p)
-                cpath.updatecache(dest)
-
-            def mkdir_recurse(src, dest, paths):
-                if cpath.exists(dest + '/' + paths):
-                    return
-                while paths.startswith("./"):
-                    paths = paths[2:]
-                p = "."
-                for c in paths.split("/"):
-                    p = os.path.join(p, c)
-                    if not cpath.exists(os.path.join(dest, p)):
-                        mkdir(src, dest, p)
-
-            if cpath.isdir(file) and not cpath.islink(file):
-                mkdir_recurse(dvar, root, file)
-                continue
-
-            mkdir_recurse(dvar, root, os.path.dirname(file))
-            fpath = os.path.join(root,file)
-            if not cpath.islink(file):
-                os.link(file, fpath)
-                continue
-            ret = bb.utils.copyfile(file, fpath)
-            if ret is False or ret == 0:
-                bb.fatal("File population failed")
-
-        # Check if symlink paths exist
-        for file in symlink_paths:
-            if not os.path.exists(os.path.join(root,file)):
-                bb.fatal("File '%s' cannot be packaged into '%s' because its "
-                         "parent directory structure does not exist. One of "
-                         "its parent directories is a symlink whose target "
-                         "directory is not included in the package." %
-                         (file, pkg))
-
-    os.umask(oldumask)
-    os.chdir(workdir)
-
-    # Handle excluding packages with incompatible licenses
-    package_list = []
-    for pkg in packages:
-        licenses = d.getVar('_exclude_incompatible-' + pkg)
-        if licenses:
-            msg = "Excluding %s from packaging as it has incompatible license(s): %s" % (pkg, licenses)
-            oe.qa.handle_error("incompatible-license", msg, d)
-        else:
-            package_list.append(pkg)
-    d.setVar('PACKAGES', ' '.join(package_list))
-
-    unshipped = []
-    for root, dirs, files in cpath.walk(dvar):
-        dir = root[len(dvar):]
-        if not dir:
-            dir = os.sep
-        for f in (files + dirs):
-            path = os.path.join(dir, f)
-            if ('.' + path) not in seen:
-                unshipped.append(path)
-
-    if unshipped != []:
-        msg = pn + ": Files/directories were installed but not shipped in any package:"
-        if "installed-vs-shipped" in (d.getVar('INSANE_SKIP:' + pn) or "").split():
-            bb.note("Package %s skipping QA tests: installed-vs-shipped" % pn)
-        else:
-            for f in unshipped:
-                msg = msg + "\n  " + f
-            msg = msg + "\nPlease set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.\n"
-            msg = msg + "%s: %d installed and not shipped files." % (pn, len(unshipped))
-            oe.qa.handle_error("installed-vs-shipped", msg, d)
-}
-populate_packages[dirs] = "${D}"
-
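The priority sort applied to PACKAGES in populate_packages() above can be illustrated with a small standalone sketch (the package names are invented for the example):

    # -src packages sort first, then -dbg (when automatic debug packages are
    # enabled), then everything else; ties keep their original order.
    def order_packages(packages, autodebug=True):
        package_dict = {}
        for i, pkg in enumerate(packages):
            if pkg.endswith("-src"):
                package_dict[pkg] = (10, i)
            elif autodebug and pkg.endswith("-dbg"):
                package_dict[pkg] = (30, i)
            else:
                package_dict[pkg] = (50, i)
        return sorted(package_dict.keys(), key=package_dict.get)

    print(order_packages(["foo", "foo-dev", "foo-dbg", "foo-src"]))
    # ['foo-src', 'foo-dbg', 'foo', 'foo-dev']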
-python package_fixsymlinks () {
-    import errno
-    pkgdest = d.getVar('PKGDEST')
-    packages = d.getVar("PACKAGES", False).split()
-
-    dangling_links = {}
-    pkg_files = {}
-    for pkg in packages:
-        dangling_links[pkg] = []
-        pkg_files[pkg] = []
-        inst_root = os.path.join(pkgdest, pkg)
-        for path in pkgfiles[pkg]:
-                rpath = path[len(inst_root):]
-                pkg_files[pkg].append(rpath)
-                rtarget = cpath.realpath(path, inst_root, True, assume_dir = True)
-                if not cpath.lexists(rtarget):
-                    dangling_links[pkg].append(os.path.normpath(rtarget[len(inst_root):]))
-
-    newrdepends = {}
-    for pkg in dangling_links:
-        for l in dangling_links[pkg]:
-            found = False
-            bb.debug(1, "%s contains dangling link %s" % (pkg, l))
-            for p in packages:
-                if l in pkg_files[p]:
-                        found = True
-                        bb.debug(1, "target found in %s" % p)
-                        if p == pkg:
-                            break
-                        if pkg not in newrdepends:
-                            newrdepends[pkg] = []
-                        newrdepends[pkg].append(p)
-                        break
-            if found == False:
-                bb.note("%s contains dangling symlink to %s" % (pkg, l))
-
-    for pkg in newrdepends:
-        rdepends = bb.utils.explode_dep_versions2(d.getVar('RDEPENDS:' + pkg) or "")
-        for p in newrdepends[pkg]:
-            if p not in rdepends:
-                rdepends[p] = []
-        d.setVar('RDEPENDS:' + pkg, bb.utils.join_deps(rdepends, commasep=False))
-}
-
-
-python package_package_name_hook() {
-    """
-    A package_name_hook function can be used to rewrite the package names by
-    changing PKG.  For an example, see debian.bbclass.
-    """
-    pass
-}
-
-EXPORT_FUNCTIONS package_name_hook
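As a concrete illustration of the hook above, a class or recipe could override package_name_hook() to rewrite PKG values before packages are written out. This is a hypothetical sketch (the "acme-" prefix is invented for the example; debian.bbclass is the real in-tree user of this hook):

    python package_name_hook () {
        # Prefix every runtime package name with a vendor string by setting
        # PKG:<pkg>; files and metadata still live under the original
        # package name internally.
        for pkg in (d.getVar('PACKAGES') or "").split():
            current = d.getVar('PKG:%s' % pkg) or pkg
            if not current.startswith('acme-'):
                d.setVar('PKG:%s' % pkg, 'acme-' + current)
    }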
-
-
-PKGDESTWORK = "${WORKDIR}/pkgdata"
-
-PKGDATA_VARS = "PN PE PV PR PKGE PKGV PKGR LICENSE DESCRIPTION SUMMARY RDEPENDS RPROVIDES RRECOMMENDS RSUGGESTS RREPLACES RCONFLICTS SECTION PKG ALLOW_EMPTY FILES CONFFILES FILES_INFO PACKAGE_ADD_METADATA pkg_postinst pkg_postrm pkg_preinst pkg_prerm"
-
-python emit_pkgdata() {
-    from glob import glob
-    import json
-    import bb.compress.zstd
-
-    def process_postinst_on_target(pkg, mlprefix):
-        pkgval = d.getVar('PKG:%s' % pkg)
-        if pkgval is None:
-            pkgval = pkg
-
-        defer_fragment = """
-if [ -n "$D" ]; then
-    $INTERCEPT_DIR/postinst_intercept delay_to_first_boot %s mlprefix=%s
-    exit 0
-fi
-""" % (pkgval, mlprefix)
-
-        postinst = d.getVar('pkg_postinst:%s' % pkg)
-        postinst_ontarget = d.getVar('pkg_postinst_ontarget:%s' % pkg)
-
-        if postinst_ontarget:
-            bb.debug(1, 'adding deferred pkg_postinst_ontarget() to pkg_postinst() for %s' % pkg)
-            if not postinst:
-                postinst = '#!/bin/sh\n'
-            postinst += defer_fragment
-            postinst += postinst_ontarget
-            d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-    def add_set_e_to_scriptlets(pkg):
-        for scriptlet_name in ('pkg_preinst', 'pkg_postinst', 'pkg_prerm', 'pkg_postrm'):
-            scriptlet = d.getVar('%s:%s' % (scriptlet_name, pkg))
-            if scriptlet:
-                scriptlet_split = scriptlet.split('\n')
-                if scriptlet_split[0].startswith("#!"):
-                    scriptlet = scriptlet_split[0] + "\nset -e\n" + "\n".join(scriptlet_split[1:])
-                else:
-                    scriptlet = "set -e\n" + "\n".join(scriptlet_split[0:])
-            d.setVar('%s:%s' % (scriptlet_name, pkg), scriptlet)
-
-    def write_if_exists(f, pkg, var):
-        def encode(str):
-            import codecs
-            c = codecs.getencoder("unicode_escape")
-            return c(str)[0].decode("latin1")
-
-        val = d.getVar('%s:%s' % (var, pkg))
-        if val:
-            f.write('%s:%s: %s\n' % (var, pkg, encode(val)))
-            return val
-        val = d.getVar('%s' % (var))
-        if val:
-            f.write('%s: %s\n' % (var, encode(val)))
-        return val
-
-    def write_extra_pkgs(variants, pn, packages, pkgdatadir):
-        for variant in variants:
-            with open("%s/%s-%s" % (pkgdatadir, variant, pn), 'w') as fd:
-                fd.write("PACKAGES: %s\n" % ' '.join(
-                            map(lambda pkg: '%s-%s' % (variant, pkg), packages.split())))
-
-    def write_extra_runtime_pkgs(variants, packages, pkgdatadir):
-        for variant in variants:
-            for pkg in packages.split():
-                ml_pkg = "%s-%s" % (variant, pkg)
-                subdata_file = "%s/runtime/%s" % (pkgdatadir, ml_pkg)
-                with open(subdata_file, 'w') as fd:
-                    fd.write("PKG:%s: %s" % (ml_pkg, pkg))
-
-    packages = d.getVar('PACKAGES')
-    pkgdest = d.getVar('PKGDEST')
-    pkgdatadir = d.getVar('PKGDESTWORK')
-
-    data_file = pkgdatadir + d.expand("/${PN}")
-    with open(data_file, 'w') as fd:
-        fd.write("PACKAGES: %s\n" % packages)
-
-    pkgdebugsource = d.getVar("PKGDEBUGSOURCES") or []
-
-    pn = d.getVar('PN')
-    global_variants = (d.getVar('MULTILIB_GLOBAL_VARIANTS') or "").split()
-    variants = (d.getVar('MULTILIB_VARIANTS') or "").split()
-
-    if bb.data.inherits_class('kernel', d) or bb.data.inherits_class('module-base', d):
-        write_extra_pkgs(variants, pn, packages, pkgdatadir)
-
-    if bb.data.inherits_class('allarch', d) and not variants \
-        and not bb.data.inherits_class('packagegroup', d):
-        write_extra_pkgs(global_variants, pn, packages, pkgdatadir)
-
-    workdir = d.getVar('WORKDIR')
-
-    for pkg in packages.split():
-        pkgval = d.getVar('PKG:%s' % pkg)
-        if pkgval is None:
-            pkgval = pkg
-            d.setVar('PKG:%s' % pkg, pkg)
-
-        extended_data = {
-            "files_info": {}
-        }
-
-        pkgdestpkg = os.path.join(pkgdest, pkg)
-        files = {}
-        files_extra = {}
-        total_size = 0
-        seen = set()
-        for f in pkgfiles[pkg]:
-            fpath = os.sep + os.path.relpath(f, pkgdestpkg)
-
-            fstat = os.lstat(f)
-            files[fpath] = fstat.st_size
-
-            extended_data["files_info"].setdefault(fpath, {})
-            extended_data["files_info"][fpath]['size'] = fstat.st_size
-
-            if fstat.st_ino not in seen:
-                seen.add(fstat.st_ino)
-                total_size += fstat.st_size
-
-            if fpath in pkgdebugsource:
-                extended_data["files_info"][fpath]['debugsrc'] = pkgdebugsource[fpath]
-                del pkgdebugsource[fpath]
-
-        d.setVar('FILES_INFO:' + pkg , json.dumps(files, sort_keys=True))
-
-        process_postinst_on_target(pkg, d.getVar("MLPREFIX"))
-        add_set_e_to_scriptlets(pkg)
-
-        subdata_file = pkgdatadir + "/runtime/%s" % pkg
-        with open(subdata_file, 'w') as sf:
-            for var in (d.getVar('PKGDATA_VARS') or "").split():
-                val = write_if_exists(sf, pkg, var)
-
-            write_if_exists(sf, pkg, 'FILERPROVIDESFLIST')
-            for dfile in sorted((d.getVar('FILERPROVIDESFLIST:' + pkg) or "").split()):
-                write_if_exists(sf, pkg, 'FILERPROVIDES:' + dfile)
-
-            write_if_exists(sf, pkg, 'FILERDEPENDSFLIST')
-            for dfile in sorted((d.getVar('FILERDEPENDSFLIST:' + pkg) or "").split()):
-                write_if_exists(sf, pkg, 'FILERDEPENDS:' + dfile)
-
-            sf.write('%s:%s: %d\n' % ('PKGSIZE', pkg, total_size))
-
-        subdata_extended_file = pkgdatadir + "/extended/%s.json.zstd" % pkg
-        num_threads = int(d.getVar("BB_NUMBER_THREADS"))
-        with bb.compress.zstd.open(subdata_extended_file, "wt", encoding="utf-8", num_threads=num_threads) as f:
-            json.dump(extended_data, f, sort_keys=True, separators=(",", ":"))
-
-        # Symlinks needed for rprovides lookup
-        rprov = d.getVar('RPROVIDES:%s' % pkg) or d.getVar('RPROVIDES')
-        if rprov:
-            for p in bb.utils.explode_deps(rprov):
-                subdata_sym = pkgdatadir + "/runtime-rprovides/%s/%s" % (p, pkg)
-                bb.utils.mkdirhier(os.path.dirname(subdata_sym))
-                oe.path.symlink("../../runtime/%s" % pkg, subdata_sym, True)
-
-        allow_empty = d.getVar('ALLOW_EMPTY:%s' % pkg)
-        if not allow_empty:
-            allow_empty = d.getVar('ALLOW_EMPTY')
-        root = "%s/%s" % (pkgdest, pkg)
-        os.chdir(root)
-        g = glob('*')
-        if g or allow_empty == "1":
-            # Symlinks needed for reverse lookups (from the final package name)
-            subdata_sym = pkgdatadir + "/runtime-reverse/%s" % pkgval
-            oe.path.symlink("../runtime/%s" % pkg, subdata_sym, True)
-
-            packagedfile = pkgdatadir + '/runtime/%s.packaged' % pkg
-            open(packagedfile, 'w').close()
-
-    if bb.data.inherits_class('kernel', d) or bb.data.inherits_class('module-base', d):
-        write_extra_runtime_pkgs(variants, packages, pkgdatadir)
-
-    if bb.data.inherits_class('allarch', d) and not variants \
-        and not bb.data.inherits_class('packagegroup', d):
-        write_extra_runtime_pkgs(global_variants, packages, pkgdatadir)
-
-}
-emit_pkgdata[dirs] = "${PKGDESTWORK}/runtime ${PKGDESTWORK}/runtime-reverse ${PKGDESTWORK}/runtime-rprovides ${PKGDESTWORK}/extended"
-emit_pkgdata[vardepsexclude] = "BB_NUMBER_THREADS"
-
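The per-package files written by emit_pkgdata() use a simple "VAR:<pkg>: value" line format ("VAR: value" when the unsuffixed variable was used), with multi-line values escaped via unicode_escape. A minimal sketch of reading one back, assuming a hypothetical pkgdata path:

    def read_runtime_pkgdata(path):
        # Returns a dict such as {"PKGSIZE:busybox": "1234567", ...}
        # for one of the files under ${PKGDATA_DIR}/runtime/.
        values = {}
        with open(path) as f:
            for line in f:
                key, _, value = line.partition(": ")
                values[key] = value.rstrip("\n")
        return values

    data = read_runtime_pkgdata("tmp/pkgdata/core2-64/runtime/busybox")
    print(data.get("PKGSIZE:busybox"))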
-ldconfig_postinst_fragment() {
-if [ x"$D" = "x" ]; then
-	if [ -x /sbin/ldconfig ]; then /sbin/ldconfig ; fi
-fi
-}
-
-RPMDEPS = "${STAGING_LIBDIR_NATIVE}/rpm/rpmdeps --alldeps --define '__font_provides %{nil}'"
-
-# Collect per-file run-time dependency metadata
-# Output:
-#  FILERPROVIDESFLIST:pkg - list of all files with provides
-#  FILERPROVIDES:filepath:pkg - per-file provides
-#
-#  FILERDEPENDSFLIST:pkg - list of all files with dependencies
-#  FILERDEPENDS:filepath:pkg - per-file dependencies
-
-python package_do_filedeps() {
-    if d.getVar('SKIP_FILEDEPS') == '1':
-        return
-
-    pkgdest = d.getVar('PKGDEST')
-    packages = d.getVar('PACKAGES')
-    rpmdeps = d.getVar('RPMDEPS')
-
-    def chunks(files, n):
-        return [files[i:i+n] for i in range(0, len(files), n)]
-
-    pkglist = []
-    for pkg in packages.split():
-        if d.getVar('SKIP_FILEDEPS:' + pkg) == '1':
-            continue
-        if pkg.endswith('-dbg') or pkg.endswith('-doc') or pkg.find('-locale-') != -1 or pkg.find('-localedata-') != -1 or pkg.find('-gconv-') != -1 or pkg.find('-charmap-') != -1 or pkg.startswith('kernel-module-') or pkg.endswith('-src'):
-            continue
-        for files in chunks(pkgfiles[pkg], 100):
-            pkglist.append((pkg, files, rpmdeps, pkgdest))
-
-    processed = oe.utils.multiprocess_launch(oe.package.filedeprunner, pkglist, d)
-
-    provides_files = {}
-    requires_files = {}
-
-    for result in processed:
-        (pkg, provides, requires) = result
-
-        if pkg not in provides_files:
-            provides_files[pkg] = []
-        if pkg not in requires_files:
-            requires_files[pkg] = []
-
-        for file in sorted(provides):
-            provides_files[pkg].append(file)
-            key = "FILERPROVIDES:" + file + ":" + pkg
-            d.appendVar(key, " " + " ".join(provides[file]))
-
-        for file in sorted(requires):
-            requires_files[pkg].append(file)
-            key = "FILERDEPENDS:" + file + ":" + pkg
-            d.appendVar(key, " " + " ".join(requires[file]))
-
-    for pkg in requires_files:
-        d.setVar("FILERDEPENDSFLIST:" + pkg, " ".join(sorted(requires_files[pkg])))
-    for pkg in provides_files:
-        d.setVar("FILERPROVIDESFLIST:" + pkg, " ".join(sorted(provides_files[pkg])))
-}
-
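To make the FILERPROVIDES/FILERDEPENDS output documented before package_do_filedeps() concrete: for a hypothetical package libfoo1 shipping /usr/lib/libfoo.so.1, the datastore entries would be shaped roughly like this (the provides/requires strings come from rpmdeps and are invented here):

    FILERPROVIDESFLIST:libfoo1                   = "/usr/lib/libfoo.so.1"
    FILERPROVIDES:/usr/lib/libfoo.so.1:libfoo1   = " libfoo.so.1()(64bit)"
    FILERDEPENDSFLIST:libfoo1                    = "/usr/lib/libfoo.so.1"
    FILERDEPENDS:/usr/lib/libfoo.so.1:libfoo1    = " libc.so.6()(64bit)"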
-SHLIBSDIRS = "${WORKDIR_PKGDATA}/${MLPREFIX}shlibs2"
-SHLIBSWORKDIR = "${PKGDESTWORK}/${MLPREFIX}shlibs2"
-
-python package_do_shlibs() {
-    import itertools
-    import re, pipes
-    import subprocess
-
-    exclude_shlibs = d.getVar('EXCLUDE_FROM_SHLIBS', False)
-    if exclude_shlibs:
-        bb.note("not generating shlibs")
-        return
-
-    lib_re = re.compile(r"^.*\.so")
-    libdir_re = re.compile(r".*/%s$" % d.getVar('baselib'))
-
-    packages = d.getVar('PACKAGES')
-
-    shlib_pkgs = []
-    exclusion_list = d.getVar("EXCLUDE_PACKAGES_FROM_SHLIBS")
-    if exclusion_list:
-        for pkg in packages.split():
-            if pkg not in exclusion_list.split():
-                shlib_pkgs.append(pkg)
-            else:
-                bb.note("not generating shlibs for %s" % pkg)
-    else:
-        shlib_pkgs = packages.split()
-
-    hostos = d.getVar('HOST_OS')
-
-    workdir = d.getVar('WORKDIR')
-
-    ver = d.getVar('PKGV')
-    if not ver:
-        msg = "PKGV not defined"
-        oe.qa.handle_error("pkgv-undefined", msg, d)
-        return
-
-    pkgdest = d.getVar('PKGDEST')
-
-    shlibswork_dir = d.getVar('SHLIBSWORKDIR')
-
-    def linux_so(file, pkg, pkgver, d):
-        needs_ldconfig = False
-        needed = set()
-        sonames = set()
-        renames = []
-        ldir = os.path.dirname(file).replace(pkgdest + "/" + pkg, '')
-        cmd = d.getVar('OBJDUMP') + " -p " + pipes.quote(file) + " 2>/dev/null"
-        fd = os.popen(cmd)
-        lines = fd.readlines()
-        fd.close()
-        rpath = tuple()
-        for l in lines:
-            m = re.match(r"\s+RPATH\s+([^\s]*)", l)
-            if m:
-                rpaths = m.group(1).replace("$ORIGIN", ldir).split(":")
-                rpath = tuple(map(os.path.normpath, rpaths))
-        for l in lines:
-            m = re.match(r"\s+NEEDED\s+([^\s]*)", l)
-            if m:
-                dep = m.group(1)
-                if dep not in needed:
-                    needed.add((dep, file, rpath))
-            m = re.match(r"\s+SONAME\s+([^\s]*)", l)
-            if m:
-                this_soname = m.group(1)
-                prov = (this_soname, ldir, pkgver)
-                if not prov in sonames:
-                    # if library is private (only used by package) then do not build shlib for it
-                    import fnmatch
-                    if not private_libs or len([i for i in private_libs if fnmatch.fnmatch(this_soname, i)]) == 0:
-                        sonames.add(prov)
-                if libdir_re.match(os.path.dirname(file)):
-                    needs_ldconfig = True
-                if needs_ldconfig and snap_symlinks and (os.path.basename(file) != this_soname):
-                    renames.append((file, os.path.join(os.path.dirname(file), this_soname)))
-        return (needs_ldconfig, needed, sonames, renames)
-
-    def darwin_so(file, needed, sonames, renames, pkgver):
-        if not os.path.exists(file):
-            return
-        ldir = os.path.dirname(file).replace(pkgdest + "/" + pkg, '')
-
-        def get_combinations(base):
-            #
-            # Given a base library name, find all combinations of this split by "." and "-"
-            #
-            combos = []
-            options = base.split(".")
-            for i in range(1, len(options) + 1):
-                combos.append(".".join(options[0:i]))
-            options = base.split("-")
-            for i in range(1, len(options) + 1):
-                combos.append("-".join(options[0:i]))
-            return combos
-
-        if (file.endswith('.dylib') or file.endswith('.so')) and not pkg.endswith('-dev') and not pkg.endswith('-dbg') and not pkg.endswith('-src'):
-            # Drop suffix
-            name = os.path.basename(file).rsplit(".",1)[0]
-            # Find all combinations
-            combos = get_combinations(name)
-            for combo in combos:
-                if not combo in sonames:
-                    prov = (combo, ldir, pkgver)
-                    sonames.add(prov)
-        if file.endswith('.dylib') or file.endswith('.so'):
-            rpath = []
-            p = subprocess.Popen([d.expand("${HOST_PREFIX}otool"), '-l', file], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-            out, err = p.communicate()
-            # If returned successfully, process stdout for results
-            if p.returncode == 0:
-                for l in out.split("\n"):
-                    l = l.strip()
-                    if l.startswith('path '):
-                        rpath.append(l.split()[1])
-
-        p = subprocess.Popen([d.expand("${HOST_PREFIX}otool"), '-L', file], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-        out, err = p.communicate()
-        # If returned successfully, process stdout for results
-        if p.returncode == 0:
-            for l in out.split("\n"):
-                l = l.strip()
-                if not l or l.endswith(":"):
-                    continue
-                if "is not an object file" in l:
-                    continue
-                name = os.path.basename(l.split()[0]).rsplit(".", 1)[0]
-                if name and name not in needed[pkg]:
-                     needed[pkg].add((name, file, tuple()))
-
-    def mingw_dll(file, needed, sonames, renames, pkgver):
-        if not os.path.exists(file):
-            return
-
-        if file.endswith(".dll"):
-            # assume all dlls are shared objects provided by the package
-            sonames.add((os.path.basename(file), os.path.dirname(file).replace(pkgdest + "/" + pkg, ''), pkgver))
-
-        if (file.endswith(".dll") or file.endswith(".exe")):
-            # use objdump to search for "DLL Name: .*\.dll"
-            p = subprocess.Popen([d.expand("${HOST_PREFIX}objdump"), "-p", file], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-            out, err = p.communicate()
-            # process the output, grabbing all .dll names
-            if p.returncode == 0:
-                for m in re.finditer(r"DLL Name: (.*?\.dll)$", out.decode(), re.MULTILINE | re.IGNORECASE):
-                    dllname = m.group(1)
-                    if dllname:
-                        needed[pkg].add((dllname, file, tuple()))
-
-    if d.getVar('PACKAGE_SNAP_LIB_SYMLINKS') == "1":
-        snap_symlinks = True
-    else:
-        snap_symlinks = False
-
-    needed = {}
-
-    shlib_provider = oe.package.read_shlib_providers(d)
-
-    for pkg in shlib_pkgs:
-        private_libs = d.getVar('PRIVATE_LIBS:' + pkg) or d.getVar('PRIVATE_LIBS') or ""
-        private_libs = private_libs.split()
-        needs_ldconfig = False
-        bb.debug(2, "calculating shlib provides for %s" % pkg)
-
-        pkgver = d.getVar('PKGV:' + pkg)
-        if not pkgver:
-            pkgver = d.getVar('PV_' + pkg)
-        if not pkgver:
-            pkgver = ver
-
-        needed[pkg] = set()
-        sonames = set()
-        renames = []
-        linuxlist = []
-        for file in pkgfiles[pkg]:
-                soname = None
-                if cpath.islink(file):
-                    continue
-                if hostos == "darwin" or hostos == "darwin8":
-                    darwin_so(file, needed, sonames, renames, pkgver)
-                elif hostos.startswith("mingw"):
-                    mingw_dll(file, needed, sonames, renames, pkgver)
-                elif os.access(file, os.X_OK) or lib_re.match(file):
-                    linuxlist.append(file)
-
-        if linuxlist:
-            results = oe.utils.multiprocess_launch(linux_so, linuxlist, d, extraargs=(pkg, pkgver, d))
-            for r in results:
-                ldconfig = r[0]
-                needed[pkg] |= r[1]
-                sonames |= r[2]
-                renames.extend(r[3])
-                needs_ldconfig = needs_ldconfig or ldconfig
-
-        for (old, new) in renames:
-            bb.note("Renaming %s to %s" % (old, new))
-            bb.utils.rename(old, new)
-            pkgfiles[pkg].remove(old)
-
-        shlibs_file = os.path.join(shlibswork_dir, pkg + ".list")
-        if len(sonames):
-            with open(shlibs_file, 'w') as fd:
-                for s in sorted(sonames):
-                    if s[0] in shlib_provider and s[1] in shlib_provider[s[0]]:
-                        (old_pkg, old_pkgver) = shlib_provider[s[0]][s[1]]
-                        if old_pkg != pkg:
-                            bb.warn('%s-%s was registered as shlib provider for %s, changing it to %s-%s because it was built later' % (old_pkg, old_pkgver, s[0], pkg, pkgver))
-                    bb.debug(1, 'registering %s-%s as shlib provider for %s' % (pkg, pkgver, s[0]))
-                    fd.write(s[0] + ':' + s[1] + ':' + s[2] + '\n')
-                    if s[0] not in shlib_provider:
-                        shlib_provider[s[0]] = {}
-                    shlib_provider[s[0]][s[1]] = (pkg, pkgver)
-        if needs_ldconfig:
-            bb.debug(1, 'adding ldconfig call to postinst for %s' % pkg)
-            postinst = d.getVar('pkg_postinst:%s' % pkg)
-            if not postinst:
-                postinst = '#!/bin/sh\n'
-            postinst += d.getVar('ldconfig_postinst_fragment')
-            d.setVar('pkg_postinst:%s' % pkg, postinst)
-        bb.debug(1, 'LIBNAMES: pkg %s sonames %s' % (pkg, sonames))
-
-    assumed_libs = d.getVar('ASSUME_SHLIBS')
-    if assumed_libs:
-        libdir = d.getVar("libdir")
-        for e in assumed_libs.split():
-            l, dep_pkg = e.split(":")
-            lib_ver = None
-            dep_pkg = dep_pkg.rsplit("_", 1)
-            if len(dep_pkg) == 2:
-                lib_ver = dep_pkg[1]
-            dep_pkg = dep_pkg[0]
-            if l not in shlib_provider:
-                shlib_provider[l] = {}
-            shlib_provider[l][libdir] = (dep_pkg, lib_ver)
-
-    libsearchpath = [d.getVar('libdir'), d.getVar('base_libdir')]
-
-    for pkg in shlib_pkgs:
-        bb.debug(2, "calculating shlib requirements for %s" % pkg)
-
-        private_libs = d.getVar('PRIVATE_LIBS:' + pkg) or d.getVar('PRIVATE_LIBS') or ""
-        private_libs = private_libs.split()
-
-        deps = list()
-        for n in needed[pkg]:
-            # If n is in private libraries, don't try to find a provider for it.
-            # This could cause problems if, say, some abc.bb provides a private
-            # /opt/abc/lib/libfoo.so.1 while also containing /usr/bin/abc, which depends on the system library libfoo.so.1,
-            # but skipping it is still a better alternative than providing our own
-            # version and then adding a runtime dependency for the same system library.
-            import fnmatch
-            if private_libs and len([i for i in private_libs if fnmatch.fnmatch(n[0], i)]) > 0:
-                bb.debug(2, '%s: Dependency %s covered by PRIVATE_LIBS' % (pkg, n[0]))
-                continue
-            if n[0] in shlib_provider.keys():
-                shlib_provider_map = shlib_provider[n[0]]
-                matches = set()
-                for p in itertools.chain(list(n[2]), sorted(shlib_provider_map.keys()), libsearchpath):
-                    if p in shlib_provider_map:
-                        matches.add(p)
-                if len(matches) > 1:
-                    matchpkgs = ', '.join([shlib_provider_map[match][0] for match in matches])
-                    bb.error("%s: Multiple shlib providers for %s: %s (used by files: %s)" % (pkg, n[0], matchpkgs, n[1]))
-                elif len(matches) == 1:
-                    (dep_pkg, ver_needed) = shlib_provider_map[matches.pop()]
-
-                    bb.debug(2, '%s: Dependency %s requires package %s (used by files: %s)' % (pkg, n[0], dep_pkg, n[1]))
-
-                    if dep_pkg == pkg:
-                        continue
-
-                    if ver_needed:
-                        dep = "%s (>= %s)" % (dep_pkg, ver_needed)
-                    else:
-                        dep = dep_pkg
-                    if not dep in deps:
-                        deps.append(dep)
-                    continue
-            bb.note("Couldn't find shared library provider for %s, used by files: %s" % (n[0], n[1]))
-
-        deps_file = os.path.join(pkgdest, pkg + ".shlibdeps")
-        if os.path.exists(deps_file):
-            os.remove(deps_file)
-        if deps:
-            with open(deps_file, 'w') as fd:
-                for dep in sorted(deps):
-                    fd.write(dep + '\n')
-}
-
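For libraries that are satisfied outside this shlibs machinery (for example by an external or prebuilt package), ASSUME_SHLIBS seeds the provider table parsed above. Each entry is "<shlib>:<dep_pkg>", with an optional "_<version>" suffix on the package name; the values below are illustrative:

    ASSUME_SHLIBS = "libEGL.so.1:libegl-implementation libfoo.so.1:libfoo_1.2"

With the second entry, any package linking against libfoo.so.1 gains a runtime dependency of "libfoo (>= 1.2)".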
-python package_do_pkgconfig () {
-    import re
-
-    packages = d.getVar('PACKAGES')
-    workdir = d.getVar('WORKDIR')
-    pkgdest = d.getVar('PKGDEST')
-
-    shlibs_dirs = d.getVar('SHLIBSDIRS').split()
-    shlibswork_dir = d.getVar('SHLIBSWORKDIR')
-
-    pc_re = re.compile(r'(.*)\.pc$')
-    var_re = re.compile(r'(.*)=(.*)')
-    field_re = re.compile(r'(.*): (.*)')
-
-    pkgconfig_provided = {}
-    pkgconfig_needed = {}
-    for pkg in packages.split():
-        pkgconfig_provided[pkg] = []
-        pkgconfig_needed[pkg] = []
-        for file in sorted(pkgfiles[pkg]):
-                m = pc_re.match(file)
-                if m:
-                    pd = bb.data.init()
-                    name = m.group(1)
-                    pkgconfig_provided[pkg].append(os.path.basename(name))
-                    if not os.access(file, os.R_OK):
-                        continue
-                    with open(file, 'r') as f:
-                        lines = f.readlines()
-                    for l in lines:
-                        m = var_re.match(l)
-                        if m:
-                            name = m.group(1)
-                            val = m.group(2)
-                            pd.setVar(name, pd.expand(val))
-                            continue
-                        m = field_re.match(l)
-                        if m:
-                            hdr = m.group(1)
-                            exp = pd.expand(m.group(2))
-                            if hdr == 'Requires':
-                                pkgconfig_needed[pkg] += exp.replace(',', ' ').split()
-
-    for pkg in packages.split():
-        pkgs_file = os.path.join(shlibswork_dir, pkg + ".pclist")
-        if pkgconfig_provided[pkg] != []:
-            with open(pkgs_file, 'w') as f:
-                for p in sorted(pkgconfig_provided[pkg]):
-                    f.write('%s\n' % p)
-
-    # Go from least to most specific since the last one found wins
-    for dir in reversed(shlibs_dirs):
-        if not os.path.exists(dir):
-            continue
-        for file in sorted(os.listdir(dir)):
-            m = re.match(r'^(.*)\.pclist$', file)
-            if m:
-                pkg = m.group(1)
-                with open(os.path.join(dir, file)) as fd:
-                    lines = fd.readlines()
-                pkgconfig_provided[pkg] = []
-                for l in lines:
-                    pkgconfig_provided[pkg].append(l.rstrip())
-
-    for pkg in packages.split():
-        deps = []
-        for n in pkgconfig_needed[pkg]:
-            found = False
-            for k in pkgconfig_provided.keys():
-                if n in pkgconfig_provided[k]:
-                    if k != pkg and not (k in deps):
-                        deps.append(k)
-                    found = True
-            if found == False:
-                bb.note("couldn't find pkgconfig module '%s' in any package" % n)
-        deps_file = os.path.join(pkgdest, pkg + ".pcdeps")
-        if len(deps):
-            with open(deps_file, 'w') as fd:
-                for dep in deps:
-                    fd.write(dep + '\n')
-}
-
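To make the .pc handling above concrete, consider a hypothetical package shipping /usr/lib/pkgconfig/foo.pc with the contents:

    prefix=/usr
    Name: foo
    Requires: glib-2.0, zlib

package_do_pkgconfig() records "foo" in that package's .pclist (it provides the foo module) and notes glib-2.0 and zlib as needed modules. Once the .pclist files from other packages show which package provides those modules, the providing package names are written to a .pcdeps file, which read_shlibdeps() later folds into RDEPENDS.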
-def read_libdep_files(d):
-    pkglibdeps = {}
-    packages = d.getVar('PACKAGES').split()
-    for pkg in packages:
-        pkglibdeps[pkg] = {}
-        for extension in ".shlibdeps", ".pcdeps", ".clilibdeps":
-            depsfile = d.expand("${PKGDEST}/" + pkg + extension)
-            if os.access(depsfile, os.R_OK):
-                with open(depsfile) as fd:
-                    lines = fd.readlines()
-                for l in lines:
-                    l.rstrip()
-                    deps = bb.utils.explode_dep_versions2(l)
-                    for dep in deps:
-                        if not dep in pkglibdeps[pkg]:
-                            pkglibdeps[pkg][dep] = deps[dep]
-    return pkglibdeps
-
-python read_shlibdeps () {
-    pkglibdeps = read_libdep_files(d)
-
-    packages = d.getVar('PACKAGES').split()
-    for pkg in packages:
-        rdepends = bb.utils.explode_dep_versions2(d.getVar('RDEPENDS:' + pkg) or "")
-        for dep in sorted(pkglibdeps[pkg]):
-            # Add the dep if it's not already there, or if no comparison is set
-            if dep not in rdepends:
-                rdepends[dep] = []
-            for v in pkglibdeps[pkg][dep]:
-                if v not in rdepends[dep]:
-                    rdepends[dep].append(v)
-        d.setVar('RDEPENDS:' + pkg, bb.utils.join_deps(rdepends, commasep=False))
-}
-
-python package_depchains() {
-    """
-    For a given set of prefix and postfix modifiers, make those packages
-    RRECOMMENDS on the corresponding packages for their RDEPENDS.
-
-    Example:  If package A depends upon package B, and A's .bb emits an
-    A-dev package, this would make A-dev Recommends: B-dev.
-
-    If only one package with a given suffix is specified, its RRECOMMENDS are
-    based on the RDEPENDS of *all* other packages. If more than one package with
-    a given suffix is specified, each will only use the RDEPENDS of its single
-    parent package.
-    """
-
-    packages  = d.getVar('PACKAGES')
-    postfixes = (d.getVar('DEPCHAIN_POST') or '').split()
-    prefixes  = (d.getVar('DEPCHAIN_PRE') or '').split()
-
-    def pkg_adddeprrecs(pkg, base, suffix, getname, depends, d):
-
-        #bb.note('depends for %s is %s' % (base, depends))
-        rreclist = bb.utils.explode_dep_versions2(d.getVar('RRECOMMENDS:' + pkg) or "")
-
-        for depend in sorted(depends):
-            if depend.find('-native') != -1 or depend.find('-cross') != -1 or depend.startswith('virtual/'):
-                #bb.note("Skipping %s" % depend)
-                continue
-            if depend.endswith('-dev'):
-                depend = depend[:-4]
-            if depend.endswith('-dbg'):
-                depend = depend[:-4]
-            pkgname = getname(depend, suffix)
-            #bb.note("Adding %s for %s" % (pkgname, depend))
-            if pkgname not in rreclist and pkgname != pkg:
-                rreclist[pkgname] = []
-
-        #bb.note('setting: RRECOMMENDS:%s=%s' % (pkg, ' '.join(rreclist)))
-        d.setVar('RRECOMMENDS:%s' % pkg, bb.utils.join_deps(rreclist, commasep=False))
-
-    def pkg_addrrecs(pkg, base, suffix, getname, rdepends, d):
-
-        #bb.note('rdepends for %s is %s' % (base, rdepends))
-        rreclist = bb.utils.explode_dep_versions2(d.getVar('RRECOMMENDS:' + pkg) or "")
-
-        for depend in sorted(rdepends):
-            if depend.find('virtual-locale-') != -1:
-                #bb.note("Skipping %s" % depend)
-                continue
-            if depend.endswith('-dev'):
-                depend = depend[:-4]
-            if depend.endswith('-dbg'):
-                depend = depend[:-4]
-            pkgname = getname(depend, suffix)
-            #bb.note("Adding %s for %s" % (pkgname, depend))
-            if pkgname not in rreclist and pkgname != pkg:
-                rreclist[pkgname] = []
-
-        #bb.note('setting: RRECOMMENDS:%s=%s' % (pkg, ' '.join(rreclist)))
-        d.setVar('RRECOMMENDS:%s' % pkg, bb.utils.join_deps(rreclist, commasep=False))
-
-    def add_dep(list, dep):
-        if dep not in list:
-            list.append(dep)
-
-    depends = []
-    for dep in bb.utils.explode_deps(d.getVar('DEPENDS') or ""):
-        add_dep(depends, dep)
-
-    rdepends = []
-    for pkg in packages.split():
-        for dep in bb.utils.explode_deps(d.getVar('RDEPENDS:' + pkg) or ""):
-            add_dep(rdepends, dep)
-
-    #bb.note('rdepends is %s' % rdepends)
-
-    def post_getname(name, suffix):
-        return '%s%s' % (name, suffix)
-    def pre_getname(name, suffix):
-        return '%s%s' % (suffix, name)
-
-    pkgs = {}
-    for pkg in packages.split():
-        for postfix in postfixes:
-            if pkg.endswith(postfix):
-                if not postfix in pkgs:
-                    pkgs[postfix] = {}
-                pkgs[postfix][pkg] = (pkg[:-len(postfix)], post_getname)
-
-        for prefix in prefixes:
-            if pkg.startswith(prefix):
-                if not prefix in pkgs:
-                    pkgs[prefix] = {}
-                pkgs[prefix][pkg] = (pkg[:-len(prefix)], pre_getname)
-
-    if "-dbg" in pkgs:
-        pkglibdeps = read_libdep_files(d)
-        pkglibdeplist = []
-        for pkg in pkglibdeps:
-            for k in pkglibdeps[pkg]:
-                add_dep(pkglibdeplist, k)
-        dbgdefaultdeps = ((d.getVar('DEPCHAIN_DBGDEFAULTDEPS') == '1') or (bb.data.inherits_class('packagegroup', d)))
-
-    for suffix in pkgs:
-        for pkg in pkgs[suffix]:
-            if d.getVarFlag('RRECOMMENDS:' + pkg, 'nodeprrecs'):
-                continue
-            (base, func) = pkgs[suffix][pkg]
-            if suffix == "-dev":
-                pkg_adddeprrecs(pkg, base, suffix, func, depends, d)
-            elif suffix == "-dbg":
-                if not dbgdefaultdeps:
-                    pkg_addrrecs(pkg, base, suffix, func, pkglibdeplist, d)
-                    continue
-            if len(pkgs[suffix]) == 1:
-                pkg_addrrecs(pkg, base, suffix, func, rdepends, d)
-            else:
-                rdeps = []
-                for dep in bb.utils.explode_deps(d.getVar('RDEPENDS:' + base) or ""):
-                    add_dep(rdeps, dep)
-                pkg_addrrecs(pkg, base, suffix, func, rdeps, d)
-}
-
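A worked example of the docstring above, assuming DEPCHAIN_POST contains "-dev" and using invented recipe names:

    DEPENDS = "b"            # in recipe a, which also emits an a-dev package
    # package_depchains then effectively does:
    #   RRECOMMENDS:a-dev += "b-dev"

pkg_adddeprrecs() strips any -dev/-dbg suffix from each build dependency before appending the current suffix, so "b-dev" or "b-dbg" in DEPENDS would still map to "b-dev" here. For the other suffixes (e.g. -dbg) the same renaming is applied to runtime dependencies instead, via pkg_addrrecs().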
-# Since bitbake can't determine which variables are accessed during package
-# iteration, we need to list them here:
-PACKAGEVARS = "FILES RDEPENDS RRECOMMENDS SUMMARY DESCRIPTION RSUGGESTS RPROVIDES RCONFLICTS PKG ALLOW_EMPTY pkg_postinst pkg_postrm pkg_postinst_ontarget INITSCRIPT_NAME INITSCRIPT_PARAMS DEBIAN_NOAUTONAME ALTERNATIVE PKGE PKGV PKGR USERADD_PARAM GROUPADD_PARAM CONFFILES SYSTEMD_SERVICE LICENSE SECTION pkg_preinst pkg_prerm RREPLACES GROUPMEMS_PARAM SYSTEMD_AUTO_ENABLE SKIP_FILEDEPS PRIVATE_LIBS PACKAGE_ADD_METADATA"
-
-def gen_packagevar(d, pkgvars="PACKAGEVARS"):
-    ret = []
-    pkgs = (d.getVar("PACKAGES") or "").split()
-    vars = (d.getVar(pkgvars) or "").split()
-    for v in vars:
-        ret.append(v)
-    for p in pkgs:
-        for v in vars:
-            ret.append(v + ":" + p)
-
-        # Ensure that changes to INCOMPATIBLE_LICENSE re-run do_package for
-        # affected recipes.
-        ret.append('_exclude_incompatible-%s' % p)
-    return " ".join(ret)
-
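A quick illustration of what gen_packagevar() returns: with PACKAGEVARS = "FILES RDEPENDS" and PACKAGES = "foo foo-dev", the generated vardeps string would be

    FILES RDEPENDS FILES:foo RDEPENDS:foo _exclude_incompatible-foo FILES:foo-dev RDEPENDS:foo-dev _exclude_incompatible-foo-dev

so do_package's signature changes whenever any per-package variable (or the incompatible-license exclusion state) changes.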
-PACKAGE_PREPROCESS_FUNCS ?= ""
-# Functions for setting up PKGD
-PACKAGEBUILDPKGD ?= " \
-                package_prepare_pkgdata \
-                perform_packagecopy \
-                ${PACKAGE_PREPROCESS_FUNCS} \
-                split_and_strip_files \
-                fixup_perms \
-                "
-# Functions which split PKGD up into separate packages
-PACKAGESPLITFUNCS ?= " \
-                package_do_split_locales \
-                populate_packages"
-# Functions which process metadata based on split packages
-PACKAGEFUNCS += " \
-                package_fixsymlinks \
-                package_name_hook \
-                package_do_filedeps \
-                package_do_shlibs \
-                package_do_pkgconfig \
-                read_shlibdeps \
-                package_depchains \
-                emit_pkgdata"
-
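Recipes and classes can hook into this pipeline by appending their own functions to the lists above; a hypothetical sketch that adjusts ${PKGD} before the split happens (the function name and behaviour are invented for illustration):

    PACKAGE_PREPROCESS_FUNCS += "my_prune_docs"

    python my_prune_docs () {
        import os, shutil
        # Drop the documentation tree from PKGD so it is never packaged.
        docdir = d.getVar('PKGD') + d.getVar('docdir')
        if os.path.isdir(docdir):
            shutil.rmtree(docdir)
    }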
-python do_package () {
-    # Change the following version to cause sstate to invalidate the package
-    # cache.  This is useful if an item this class depends on changes in a
-    # way that the output of this class changes.  rpmdeps is a good example
-    # as any change to rpmdeps requires this to be rerun.
-    # PACKAGE_BBCLASS_VERSION = "4"
-
-    # Init cachedpath
-    global cpath
-    cpath = oe.cachedpath.CachedPath()
-
-    ###########################################################################
-    # Sanity test the setup
-    ###########################################################################
-
-    packages = (d.getVar('PACKAGES') or "").split()
-    if len(packages) < 1:
-        bb.debug(1, "No packages to build, skipping do_package")
-        return
-
-    workdir = d.getVar('WORKDIR')
-    outdir = d.getVar('DEPLOY_DIR')
-    dest = d.getVar('D')
-    dvar = d.getVar('PKGD')
-    pn = d.getVar('PN')
-
-    if not workdir or not outdir or not dest or not dvar or not pn:
-        msg = "WORKDIR, DEPLOY_DIR, D, PN and PKGD all must be defined, unable to package"
-        oe.qa.handle_error("var-undefined", msg, d)
-        return
-
-    bb.build.exec_func("package_convert_pr_autoinc", d)
-
-    ###########################################################################
-    # Optimisations
-    ###########################################################################
-
-    # Continually expanding complex expressions is inefficient, particularly
-    # when we write to the datastore and invalidate the expansion cache. This
-    # code pre-expands some frequently used variables
-
-    def expandVar(x, d):
-        d.setVar(x, d.getVar(x))
-
-    for x in 'PN', 'PV', 'BPN', 'TARGET_SYS', 'EXTENDPRAUTO':
-        expandVar(x, d)
-
-    ###########################################################################
-    # Setup PKGD (from D)
-    ###########################################################################
-
-    for f in (d.getVar('PACKAGEBUILDPKGD') or '').split():
-        bb.build.exec_func(f, d)
-
-    ###########################################################################
-    # Split up PKGD into PKGDEST
-    ###########################################################################
-
-    cpath = oe.cachedpath.CachedPath()
-
-    for f in (d.getVar('PACKAGESPLITFUNCS') or '').split():
-        bb.build.exec_func(f, d)
-
-    ###########################################################################
-    # Process PKGDEST
-    ###########################################################################
-
-    # Build global list of files in each split package
-    global pkgfiles
-    pkgfiles = {}
-    packages = d.getVar('PACKAGES').split()
-    pkgdest = d.getVar('PKGDEST')
-    for pkg in packages:
-        pkgfiles[pkg] = []
-        for walkroot, dirs, files in cpath.walk(pkgdest + "/" + pkg):
-            for file in files:
-                pkgfiles[pkg].append(walkroot + os.sep + file)
-
-    for f in (d.getVar('PACKAGEFUNCS') or '').split():
-        bb.build.exec_func(f, d)
-
-    oe.qa.exit_if_errors(d)
-}
-
-do_package[dirs] = "${SHLIBSWORKDIR} ${D}"
-do_package[vardeps] += "${PACKAGEBUILDPKGD} ${PACKAGESPLITFUNCS} ${PACKAGEFUNCS} ${@gen_packagevar(d)}"
-addtask package after do_install
-
-SSTATETASKS += "do_package"
-do_package[cleandirs] = "${PKGDEST} ${PKGDESTWORK}"
-do_package[sstate-plaindirs] = "${PKGD} ${PKGDEST} ${PKGDESTWORK}"
-do_package_setscene[dirs] = "${STAGING_DIR}"
-
-python do_package_setscene () {
-    sstate_setscene(d)
-}
-addtask do_package_setscene
-
-# Copy from PKGDESTWORK to a temporary directory, as that directory can be cleaned by both
-# do_package_setscene and do_packagedata_setscene, leading to races
-python do_packagedata () {
-    bb.build.exec_func("package_get_auto_pr", d)
-
-    src = d.expand("${PKGDESTWORK}")
-    dest = d.expand("${WORKDIR}/pkgdata-pdata-input")
-    oe.path.copyhardlinktree(src, dest)
-
-    bb.build.exec_func("packagedata_translate_pr_autoinc", d)
-}
-do_packagedata[cleandirs] += "${WORKDIR}/pkgdata-pdata-input"
-
-# Translate the EXTENDPRAUTO and AUTOINC to the final values
-packagedata_translate_pr_autoinc() {
-    find ${WORKDIR}/pkgdata-pdata-input -type f | xargs --no-run-if-empty \
-        sed -e 's,@PRSERV_PV_AUTOINC@,${PRSERV_PV_AUTOINC},g' \
-            -e 's,@EXTENDPRAUTO@,${EXTENDPRAUTO},g' -i
-}
-
-addtask packagedata before do_build after do_package
-
-SSTATETASKS += "do_packagedata"
-do_packagedata[sstate-inputdirs] = "${WORKDIR}/pkgdata-pdata-input"
-do_packagedata[sstate-outputdirs] = "${PKGDATA_DIR}"
-do_packagedata[stamp-extra-info] = "${MACHINE_ARCH}"
-
-python do_packagedata_setscene () {
-    sstate_setscene(d)
-}
-addtask do_packagedata_setscene
-
-#
-# Helper functions for the package writing classes
-#
-
-def mapping_rename_hook(d):
-    """
-    Rewrite variables to account for package renaming in things
-    like debian.bbclass or manual PKG variable name changes
-    """
-    pkg = d.getVar("PKG")
-    runtime_mapping_rename("RDEPENDS", pkg, d)
-    runtime_mapping_rename("RRECOMMENDS", pkg, d)
-    runtime_mapping_rename("RSUGGESTS", pkg, d)
diff --git a/poky/meta/classes/package_deb.bbclass b/poky/meta/classes/package_deb.bbclass
deleted file mode 100644
index a9b8ba0..0000000
--- a/poky/meta/classes/package_deb.bbclass
+++ /dev/null
@@ -1,327 +0,0 @@
-#
-# Copyright 2006-2008 OpenedHand Ltd.
-#
-
-inherit package
-
-IMAGE_PKGTYPE ?= "deb"
-
-DPKG_BUILDCMD ??= "dpkg-deb"
-
-DPKG_ARCH ?= "${@debian_arch_map(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES'))}"
-DPKG_ARCH[vardepvalue] = "${DPKG_ARCH}"
-
-PKGWRITEDIRDEB = "${WORKDIR}/deploy-debs"
-
-APTCONF_TARGET = "${WORKDIR}"
-
-APT_ARGS = "${@['', '--no-install-recommends'][d.getVar("NO_RECOMMENDATIONS") == "1"]}"
-
-def debian_arch_map(arch, tune):
-    tune_features = tune.split()
-    if arch == "allarch":
-        return "all"
-    if arch in ["i586", "i686"]:
-        return "i386"
-    if arch == "x86_64":
-        if "mx32" in tune_features:
-            return "x32"
-        return "amd64"
-    if arch.startswith("mips"):
-        endian = ["el", ""]["bigendian" in tune_features]
-        if "n64" in tune_features:
-            return "mips64" + endian
-        if "n32" in tune_features:
-            return "mipsn32" + endian
-        return "mips" + endian
-    if arch == "powerpc":
-        return arch + ["", "spe"]["spe" in tune_features]
-    if arch == "aarch64":
-        return "arm64"
-    if arch == "arm":
-        return arch + ["el", "hf"]["callconvention-hard" in tune_features]
-    return arch
-
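A few sample mappings produced by debian_arch_map() above (the tune strings mimic TUNE_FEATURES contents):

    debian_arch_map("x86_64", "")                    # -> "amd64"
    debian_arch_map("x86_64", "mx32")                # -> "x32"
    debian_arch_map("aarch64", "")                   # -> "arm64"
    debian_arch_map("arm", "callconvention-hard")    # -> "armhf"
    debian_arch_map("mips64", "bigendian n64")       # -> "mips64"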
-python do_package_deb () {
-    packages = d.getVar('PACKAGES')
-    if not packages:
-        bb.debug(1, "PACKAGES not defined, nothing to package")
-        return
-
-    tmpdir = d.getVar('TMPDIR')
-    if os.access(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"),os.R_OK):
-        os.unlink(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"))
-
-    oe.utils.multiprocess_launch(deb_write_pkg, packages.split(), d, extraargs=(d,))
-}
-do_package_deb[vardeps] += "deb_write_pkg"
-do_package_deb[vardepsexclude] = "BB_NUMBER_THREADS"
-
-def deb_write_pkg(pkg, d):
-    import re, copy
-    import textwrap
-    import subprocess
-    import collections
-    import codecs
-
-    outdir = d.getVar('PKGWRITEDIRDEB')
-    pkgdest = d.getVar('PKGDEST')
-
-    def cleanupcontrol(root):
-        for p in ['CONTROL', 'DEBIAN']:
-            p = os.path.join(root, p)
-            if os.path.exists(p):
-                bb.utils.prunedir(p)
-
-    localdata = bb.data.createCopy(d)
-    root = "%s/%s" % (pkgdest, pkg)
-
-    lf = bb.utils.lockfile(root + ".lock")
-    try:
-
-        localdata.setVar('ROOT', '')
-        localdata.setVar('ROOT_%s' % pkg, root)
-        pkgname = localdata.getVar('PKG:%s' % pkg)
-        if not pkgname:
-            pkgname = pkg
-        localdata.setVar('PKG', pkgname)
-
-        localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + pkg)
-
-        basedir = os.path.join(os.path.dirname(root))
-
-        pkgoutdir = os.path.join(outdir, localdata.getVar('PACKAGE_ARCH'))
-        bb.utils.mkdirhier(pkgoutdir)
-
-        os.chdir(root)
-        cleanupcontrol(root)
-        from glob import glob
-        g = glob('*')
-        if not g and localdata.getVar('ALLOW_EMPTY', False) != "1":
-            bb.note("Not creating empty archive for %s-%s-%s" % (pkg, localdata.getVar('PKGV'), localdata.getVar('PKGR')))
-            return
-
-        controldir = os.path.join(root, 'DEBIAN')
-        bb.utils.mkdirhier(controldir)
-        os.chmod(controldir, 0o755)
-
-        ctrlfile = codecs.open(os.path.join(controldir, 'control'), 'w', 'utf-8')
-
-        fields = []
-        pe = d.getVar('PKGE')
-        if pe and int(pe) > 0:
-            fields.append(["Version: %s:%s-%s\n", ['PKGE', 'PKGV', 'PKGR']])
-        else:
-            fields.append(["Version: %s-%s\n", ['PKGV', 'PKGR']])
-        fields.append(["Description: %s\n", ['DESCRIPTION']])
-        fields.append(["Section: %s\n", ['SECTION']])
-        fields.append(["Priority: %s\n", ['PRIORITY']])
-        fields.append(["Maintainer: %s\n", ['MAINTAINER']])
-        fields.append(["Architecture: %s\n", ['DPKG_ARCH']])
-        fields.append(["OE: %s\n", ['PN']])
-        fields.append(["PackageArch: %s\n", ['PACKAGE_ARCH']])
-        if d.getVar('HOMEPAGE'):
-            fields.append(["Homepage: %s\n", ['HOMEPAGE']])
-
-        # Package, Version, Maintainer, Description - mandatory
-        # Section, Priority, Essential, Architecture, Source, Depends, Pre-Depends, Recommends, Suggests, Conflicts, Replaces, Provides - Optional
-
-
-        def pullData(l, d):
-            l2 = []
-            for i in l:
-                data = d.getVar(i)
-                if data is None:
-                    raise KeyError(i)
-                if i == 'DPKG_ARCH' and d.getVar('PACKAGE_ARCH') == 'all':
-                    data = 'all'
-                elif i == 'PACKAGE_ARCH' or i == 'DPKG_ARCH':
-                   # Fields in a deb control file don't allow the `_'
-                   # character, so replace `_' with `-' in the architecture,
-                   # e.g. `x86_64' --> `x86-64'.
-                   data = data.replace('_', '-')
-                l2.append(data)
-            return l2
-
-        ctrlfile.write("Package: %s\n" % pkgname)
-        if d.getVar('PACKAGE_ARCH') == "all":
-            ctrlfile.write("Multi-Arch: foreign\n")
-        # check for required fields
-        for (c, fs) in fields:
-            # Special behavior for description...
-            if 'DESCRIPTION' in fs:
-                 summary = localdata.getVar('SUMMARY') or localdata.getVar('DESCRIPTION') or "."
-                 ctrlfile.write('Description: %s\n' % summary)
-                 description = localdata.getVar('DESCRIPTION') or "."
-                 description = textwrap.dedent(description).strip()
-                 if '\\n' in description:
-                     # Manually indent
-                     for t in description.split('\\n'):
-                         ctrlfile.write(' %s\n' % (t.strip() or '.'))
-                 else:
-                     # Auto indent
-                     ctrlfile.write('%s\n' % textwrap.fill(description.strip(), width=74, initial_indent=' ', subsequent_indent=' '))
-
-            else:
-                 ctrlfile.write(c % tuple(pullData(fs, localdata)))
-
-        # more fields
-
-        custom_fields_chunk = get_package_additional_metadata("deb", localdata)
-        if custom_fields_chunk:
-            ctrlfile.write(custom_fields_chunk)
-            ctrlfile.write("\n")
-
-        mapping_rename_hook(localdata)
-
-        def debian_cmp_remap(var):
-            # dpkg does not allow '(', ')', ':' or '/' in a dependency name, so
-            # replace any instances of them with '__'.
-            #
-            # In Debian, '>' and '<' do not mean what they appear to mean:
-            #   '<' = less than or equal
-            #   '>' = greater than or equal
-            # Adjust these to the '<<' and '>>' equivalents.
-            # Also, "=" specifiers only work if they include the PR, so 1.2.3 != 1.2.3-r0;
-            # to avoid issues, map "= 1.2.3" to ">= 1.2.3 << 1.2.3.0".
-            for dep in list(var.keys()):
-                if '(' in dep or '/' in dep:
-                    newdep = re.sub(r'[(:)/]', '__', dep)
-                    if newdep.startswith("__"):
-                        newdep = "A" + newdep
-                    if newdep != dep:
-                        var[newdep] = var[dep]
-                        del var[dep]
-            for dep in var:
-                for i, v in enumerate(var[dep]):
-                    if (v or "").startswith("< "):
-                        var[dep][i] = var[dep][i].replace("< ", "<< ")
-                    elif (v or "").startswith("> "):
-                        var[dep][i] = var[dep][i].replace("> ", ">> ")
-                    elif (v or "").startswith("= ") and "-r" not in v:
-                        ver = var[dep][i].replace("= ", "")
-                        var[dep][i] = var[dep][i].replace("= ", ">= ")
-                        var[dep].append("<< " + ver + ".0")
-
-        rdepends = bb.utils.explode_dep_versions2(localdata.getVar("RDEPENDS") or "")
-        debian_cmp_remap(rdepends)
-        for dep in list(rdepends.keys()):
-                if dep == pkg:
-                        del rdepends[dep]
-                        continue
-                if '*' in dep:
-                        del rdepends[dep]
-        rrecommends = bb.utils.explode_dep_versions2(localdata.getVar("RRECOMMENDS") or "")
-        debian_cmp_remap(rrecommends)
-        for dep in list(rrecommends.keys()):
-                if '*' in dep:
-                        del rrecommends[dep]
-        rsuggests = bb.utils.explode_dep_versions2(localdata.getVar("RSUGGESTS") or "")
-        debian_cmp_remap(rsuggests)
-        # Deliberately drop version information here, not wanted/supported by deb
-        rprovides = dict.fromkeys(bb.utils.explode_dep_versions2(localdata.getVar("RPROVIDES") or ""), [])
-        # Remove file paths if any from rprovides, debian does not support custom providers
-        for key in list(rprovides.keys()):
-            if key.startswith('/'):
-                del rprovides[key]
-        rprovides = collections.OrderedDict(sorted(rprovides.items(), key=lambda x: x[0]))
-        debian_cmp_remap(rprovides)
-        rreplaces = bb.utils.explode_dep_versions2(localdata.getVar("RREPLACES") or "")
-        debian_cmp_remap(rreplaces)
-        rconflicts = bb.utils.explode_dep_versions2(localdata.getVar("RCONFLICTS") or "")
-        debian_cmp_remap(rconflicts)
-        if rdepends:
-            ctrlfile.write("Depends: %s\n" % bb.utils.join_deps(rdepends))
-        if rsuggests:
-            ctrlfile.write("Suggests: %s\n" % bb.utils.join_deps(rsuggests))
-        if rrecommends:
-            ctrlfile.write("Recommends: %s\n" % bb.utils.join_deps(rrecommends))
-        if rprovides:
-            ctrlfile.write("Provides: %s\n" % bb.utils.join_deps(rprovides))
-        if rreplaces:
-            ctrlfile.write("Replaces: %s\n" % bb.utils.join_deps(rreplaces))
-        if rconflicts:
-            ctrlfile.write("Conflicts: %s\n" % bb.utils.join_deps(rconflicts))
-        ctrlfile.close()
-
-        for script in ["preinst", "postinst", "prerm", "postrm"]:
-            scriptvar = localdata.getVar('pkg_%s' % script)
-            if not scriptvar:
-                continue
-            scriptvar = scriptvar.strip()
-            scriptfile = open(os.path.join(controldir, script), 'w')
-
-            if scriptvar.startswith("#!"):
-                pos = scriptvar.find("\n") + 1
-                scriptfile.write(scriptvar[:pos])
-            else:
-                pos = 0
-                scriptfile.write("#!/bin/sh\n")
-
-            # Prevent the prerm/postrm scripts from being run during an upgrade
-            if script in ('prerm', 'postrm'):
-                scriptfile.write('[ "$1" != "upgrade" ] || exit 0\n')
-
-            scriptfile.write(scriptvar[pos:])
-            scriptfile.write('\n')
-            scriptfile.close()
-            os.chmod(os.path.join(controldir, script), 0o755)
-
-        conffiles_str = ' '.join(get_conffiles(pkg, d))
-        if conffiles_str:
-            conffiles = open(os.path.join(controldir, 'conffiles'), 'w')
-            for f in conffiles_str.split():
-                if os.path.exists(oe.path.join(root, f)):
-                    conffiles.write('%s\n' % f)
-            conffiles.close()
-
-        os.chdir(basedir)
-        subprocess.check_output("PATH=\"%s\" %s -b %s %s" % (localdata.getVar("PATH"), localdata.getVar("DPKG_BUILDCMD"),
-                                                             root, pkgoutdir),
-                                stderr=subprocess.STDOUT,
-                                shell=True)
-
-    finally:
-        cleanupcontrol(root)
-        bb.utils.unlockfile(lf)
-
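For reference, the maintainer-script handling removed above (the preinst/postinst/prerm/postrm loop) adds a shebang when one is missing and guards prerm/postrm so they only run on removal, not on upgrade. A minimal stand-alone sketch of that logic follows; the helper name is ours, not part of the class.

    # Sketch of the deb maintainer-script wrapping done in deb_write_pkg() above.
    def wrap_maintainer_script(script, name):
        script = script.strip()
        if script.startswith("#!"):
            pos = script.find("\n") + 1
            header, body = script[:pos], script[pos:]
        else:
            header, body = "#!/bin/sh\n", script
        # prerm/postrm must not run during an upgrade, only on removal
        guard = '[ "$1" != "upgrade" ] || exit 0\n' if name in ("prerm", "postrm") else ""
        return header + guard + body + "\n"

    print(wrap_maintainer_script("echo goodbye", "postrm"))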
-# Otherwise allarch packages may change depending on override configuration
-deb_write_pkg[vardepsexclude] = "OVERRIDES"
-
-# Have to list any variables referenced as X_<pkg> that aren't in pkgdata here
-DEBEXTRAVARS = "PKGV PKGR DESCRIPTION SECTION PRIORITY MAINTAINER DPKG_ARCH PN HOMEPAGE PACKAGE_ADD_METADATA_DEB"
-do_package_write_deb[vardeps] += "${@gen_packagevar(d, 'DEBEXTRAVARS')}"
-
-SSTATETASKS += "do_package_write_deb"
-do_package_write_deb[sstate-inputdirs] = "${PKGWRITEDIRDEB}"
-do_package_write_deb[sstate-outputdirs] = "${DEPLOY_DIR_DEB}"
-
-python do_package_write_deb_setscene () {
-    tmpdir = d.getVar('TMPDIR')
-
-    if os.access(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"),os.R_OK):
-        os.unlink(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"))
-
-    sstate_setscene(d)
-}
-addtask do_package_write_deb_setscene
-
-python () {
-    if d.getVar('PACKAGES') != '':
-        deps = ' dpkg-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot'
-        d.appendVarFlag('do_package_write_deb', 'depends', deps)
-        d.setVarFlag('do_package_write_deb', 'fakeroot', "1")
-}
-
-python do_package_write_deb () {
-    bb.build.exec_func("read_subpackage_metadata", d)
-    bb.build.exec_func("do_package_deb", d)
-}
-do_package_write_deb[dirs] = "${PKGWRITEDIRDEB}"
-do_package_write_deb[cleandirs] = "${PKGWRITEDIRDEB}"
-do_package_write_deb[depends] += "${@oe.utils.build_depends_string(d.getVar('PACKAGE_WRITE_DEPS'), 'do_populate_sysroot')}"
-addtask package_write_deb after do_packagedata do_package do_deploy_source_date_epoch before do_build
-do_build[rdeptask] += "do_package_write_deb"
-
-PACKAGEINDEXDEPS += "dpkg-native:do_populate_sysroot"
-PACKAGEINDEXDEPS += "apt-native:do_populate_sysroot"
diff --git a/poky/meta/classes/package_ipk.bbclass b/poky/meta/classes/package_ipk.bbclass
deleted file mode 100644
index 9fe3c52..0000000
--- a/poky/meta/classes/package_ipk.bbclass
+++ /dev/null
@@ -1,286 +0,0 @@
-inherit package
-
-IMAGE_PKGTYPE ?= "ipk"
-
-IPKGCONF_TARGET = "${WORKDIR}/opkg.conf"
-IPKGCONF_SDK =  "${WORKDIR}/opkg-sdk.conf"
-IPKGCONF_SDK_TARGET = "${WORKDIR}/opkg-sdk-target.conf"
-
-PKGWRITEDIRIPK = "${WORKDIR}/deploy-ipks"
-
-# Program to be used to build opkg packages
-OPKGBUILDCMD ??= 'opkg-build -Z xz -a "${XZ_DEFAULTS}"'
-
-OPKG_ARGS += "--force_postinstall --prefer-arch-to-version"
-OPKG_ARGS += "${@['', '--no-install-recommends'][d.getVar("NO_RECOMMENDATIONS") == "1"]}"
-OPKG_ARGS += "${@['', '--add-exclude ' + ' --add-exclude '.join((d.getVar('PACKAGE_EXCLUDE') or "").split())][(d.getVar("PACKAGE_EXCLUDE") or "").strip() != ""]}"
-
-OPKGLIBDIR ??= "${localstatedir}/lib"
-
-python do_package_ipk () {
-    workdir = d.getVar('WORKDIR')
-    outdir = d.getVar('PKGWRITEDIRIPK')
-    tmpdir = d.getVar('TMPDIR')
-    pkgdest = d.getVar('PKGDEST')
-    if not workdir or not outdir or not tmpdir:
-        bb.error("Variables incorrectly set, unable to package")
-        return
-
-    packages = d.getVar('PACKAGES')
-    if not packages or packages == '':
-        bb.debug(1, "No packages; nothing to do")
-        return
-
-    # We're about to add new packages, so the index needs to be checked;
-    # remove the appropriate stamp file.
-    if os.access(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"), os.R_OK):
-        os.unlink(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"))
-
-    oe.utils.multiprocess_launch(ipk_write_pkg, packages.split(), d, extraargs=(d,))
-}
-do_package_ipk[vardeps] += "ipk_write_pkg"
-do_package_ipk[vardepsexclude] = "BB_NUMBER_THREADS"
-
-def ipk_write_pkg(pkg, d):
-    import re, copy
-    import subprocess
-    import textwrap
-    import collections
-    import glob
-
-    def cleanupcontrol(root):
-        for p in ['CONTROL', 'DEBIAN']:
-            p = os.path.join(root, p)
-            if os.path.exists(p):
-                bb.utils.prunedir(p)
-
-    outdir = d.getVar('PKGWRITEDIRIPK')
-    pkgdest = d.getVar('PKGDEST')
-    recipesource = os.path.basename(d.getVar('FILE'))
-
-    localdata = bb.data.createCopy(d)
-    root = "%s/%s" % (pkgdest, pkg)
-
-    lf = bb.utils.lockfile(root + ".lock")
-    try:
-        localdata.setVar('ROOT', '')
-        localdata.setVar('ROOT_%s' % pkg, root)
-        pkgname = localdata.getVar('PKG:%s' % pkg)
-        if not pkgname:
-            pkgname = pkg
-        localdata.setVar('PKG', pkgname)
-
-        localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + pkg)
-
-        basedir = os.path.join(os.path.dirname(root))
-        arch = localdata.getVar('PACKAGE_ARCH')
-
-        if localdata.getVar('IPK_HIERARCHICAL_FEED', False) == "1":
-            # Spread packages across subdirectories so each isn't too crowded
-            if pkgname.startswith('lib'):
-                pkg_prefix = 'lib' + pkgname[3]
-            else:
-                pkg_prefix = pkgname[0]
-
-            # Keep -dbg, -dev, -doc, -staticdev, -locale and -locale-* packages
-            # together. These package suffixes are taken from the definitions of
-            # PACKAGES and PACKAGES_DYNAMIC in meta/conf/bitbake.conf
-            if pkgname[-4:] in ('-dbg', '-dev', '-doc'):
-                pkg_subdir = pkgname[:-4]
-            elif pkgname.endswith('-staticdev'):
-                pkg_subdir = pkgname[:-10]
-            elif pkgname.endswith('-locale'):
-                pkg_subdir = pkgname[:-7]
-            elif '-locale-' in pkgname:
-                pkg_subdir = pkgname[:pkgname.find('-locale-')]
-            else:
-                pkg_subdir = pkgname
-
-            pkgoutdir = "%s/%s/%s/%s" % (outdir, arch, pkg_prefix, pkg_subdir)
-        else:
-            pkgoutdir = "%s/%s" % (outdir, arch)
-
-        bb.utils.mkdirhier(pkgoutdir)
-        os.chdir(root)
-        cleanupcontrol(root)
-        g = glob.glob('*')
-        if not g and localdata.getVar('ALLOW_EMPTY', False) != "1":
-            bb.note("Not creating empty archive for %s-%s-%s" % (pkg, localdata.getVar('PKGV'), localdata.getVar('PKGR')))
-            return
-
-        controldir = os.path.join(root, 'CONTROL')
-        bb.utils.mkdirhier(controldir)
-        ctrlfile = open(os.path.join(controldir, 'control'), 'w')
-
-        fields = []
-        pe = d.getVar('PKGE')
-        if pe and int(pe) > 0:
-            fields.append(["Version: %s:%s-%s\n", ['PKGE', 'PKGV', 'PKGR']])
-        else:
-            fields.append(["Version: %s-%s\n", ['PKGV', 'PKGR']])
-        fields.append(["Description: %s\n", ['DESCRIPTION']])
-        fields.append(["Section: %s\n", ['SECTION']])
-        fields.append(["Priority: %s\n", ['PRIORITY']])
-        fields.append(["Maintainer: %s\n", ['MAINTAINER']])
-        fields.append(["License: %s\n", ['LICENSE']])
-        fields.append(["Architecture: %s\n", ['PACKAGE_ARCH']])
-        fields.append(["OE: %s\n", ['PN']])
-        if d.getVar('HOMEPAGE'):
-            fields.append(["Homepage: %s\n", ['HOMEPAGE']])
-
-        def pullData(l, d):
-            l2 = []
-            for i in l:
-                l2.append(d.getVar(i))
-            return l2
-
-        ctrlfile.write("Package: %s\n" % pkgname)
-        # check for required fields
-        for (c, fs) in fields:
-            for f in fs:
-                if localdata.getVar(f, False) is None:
-                    raise KeyError(f)
-            # Special behavior for description...
-            if 'DESCRIPTION' in fs:
-                summary = localdata.getVar('SUMMARY') or localdata.getVar('DESCRIPTION') or "."
-                ctrlfile.write('Description: %s\n' % summary)
-                description = localdata.getVar('DESCRIPTION') or "."
-                description = textwrap.dedent(description).strip()
-                if '\\n' in description:
-                    # Manually indent: multiline description includes a leading space
-                    for t in description.split('\\n'):
-                        ctrlfile.write(' %s\n' % (t.strip() or ' .'))
-                else:
-                    # Auto indent
-                    ctrlfile.write('%s\n' % textwrap.fill(description, width=74, initial_indent=' ', subsequent_indent=' '))
-            else:
-                ctrlfile.write(c % tuple(pullData(fs, localdata)))
-
-        custom_fields_chunk = get_package_additional_metadata("ipk", localdata)
-        if custom_fields_chunk is not None:
-            ctrlfile.write(custom_fields_chunk)
-            ctrlfile.write("\n")
-
-        mapping_rename_hook(localdata)
-
-        def debian_cmp_remap(var):
-            # In Debian '>' and '<' do not mean what they appear to mean:
-            #   '<' = less or equal
-            #   '>' = greater or equal
-            # Adjust these to the '<<' and '>>' equivalents.
-            # Also, "=" specifiers only work if they include the PR, so 1.2.3 != 1.2.3-r0;
-            # to avoid issues, map this to ">= 1.2.3 << 1.2.3.0"
-            for dep in var:
-                for i, v in enumerate(var[dep]):
-                    if (v or "").startswith("< "):
-                        var[dep][i] = var[dep][i].replace("< ", "<< ")
-                    elif (v or "").startswith("> "):
-                        var[dep][i] = var[dep][i].replace("> ", ">> ")
-                    elif (v or "").startswith("= ") and "-r" not in v:
-                        ver = var[dep][i].replace("= ", "")
-                        var[dep][i] = var[dep][i].replace("= ", ">= ")
-                        var[dep].append("<< " + ver + ".0")
-
-        rdepends = bb.utils.explode_dep_versions2(localdata.getVar("RDEPENDS") or "")
-        debian_cmp_remap(rdepends)
-        rrecommends = bb.utils.explode_dep_versions2(localdata.getVar("RRECOMMENDS") or "")
-        debian_cmp_remap(rrecommends)
-        rsuggests = bb.utils.explode_dep_versions2(localdata.getVar("RSUGGESTS") or "")
-        debian_cmp_remap(rsuggests)
-        # Deliberately drop version information here, not wanted/supported by ipk
-        rprovides = dict.fromkeys(bb.utils.explode_dep_versions2(localdata.getVar("RPROVIDES") or ""), [])
-        rprovides = collections.OrderedDict(sorted(rprovides.items(), key=lambda x: x[0]))
-        debian_cmp_remap(rprovides)
-        rreplaces = bb.utils.explode_dep_versions2(localdata.getVar("RREPLACES") or "")
-        debian_cmp_remap(rreplaces)
-        rconflicts = bb.utils.explode_dep_versions2(localdata.getVar("RCONFLICTS") or "")
-        debian_cmp_remap(rconflicts)
-
-        if rdepends:
-            ctrlfile.write("Depends: %s\n" % bb.utils.join_deps(rdepends))
-        if rsuggests:
-            ctrlfile.write("Suggests: %s\n" % bb.utils.join_deps(rsuggests))
-        if rrecommends:
-            ctrlfile.write("Recommends: %s\n" % bb.utils.join_deps(rrecommends))
-        if rprovides:
-            ctrlfile.write("Provides: %s\n" % bb.utils.join_deps(rprovides))
-        if rreplaces:
-            ctrlfile.write("Replaces: %s\n" % bb.utils.join_deps(rreplaces))
-        if rconflicts:
-            ctrlfile.write("Conflicts: %s\n" % bb.utils.join_deps(rconflicts))
-        ctrlfile.write("Source: %s\n" % recipesource)
-        ctrlfile.close()
-
-        for script in ["preinst", "postinst", "prerm", "postrm"]:
-            scriptvar = localdata.getVar('pkg_%s' % script)
-            if not scriptvar:
-                continue
-            scriptfile = open(os.path.join(controldir, script), 'w')
-            scriptfile.write(scriptvar)
-            scriptfile.close()
-            os.chmod(os.path.join(controldir, script), 0o755)
-
-        conffiles_str = ' '.join(get_conffiles(pkg, d))
-        if conffiles_str:
-            conffiles = open(os.path.join(controldir, 'conffiles'), 'w')
-            for f in conffiles_str.split():
-                if os.path.exists(oe.path.join(root, f)):
-                    conffiles.write('%s\n' % f)
-            conffiles.close()
-
-        os.chdir(basedir)
-        subprocess.check_output("PATH=\"%s\" %s %s %s" % (localdata.getVar("PATH"),
-                                                          d.getVar("OPKGBUILDCMD"), pkg, pkgoutdir),
-                                stderr=subprocess.STDOUT,
-                                shell=True)
-
-        if d.getVar('IPK_SIGN_PACKAGES') == '1':
-            ipkver = "%s-%s" % (localdata.getVar('PKGV'), localdata.getVar('PKGR'))
-            ipk_to_sign = "%s/%s_%s_%s.ipk" % (pkgoutdir, pkgname, ipkver, localdata.getVar('PACKAGE_ARCH'))
-            sign_ipk(d, ipk_to_sign)
-
-    finally:
-        cleanupcontrol(root)
-        bb.utils.unlockfile(lf)
-
-# Have to list any variables referenced as X_<pkg> that aren't in pkgdata here
-IPKEXTRAVARS = "PRIORITY MAINTAINER PACKAGE_ARCH HOMEPAGE PACKAGE_ADD_METADATA_IPK"
-ipk_write_pkg[vardeps] += "${@gen_packagevar(d, 'IPKEXTRAVARS')}"
-
-# Otherwise allarch packages may change depending on override configuration
-ipk_write_pkg[vardepsexclude] = "OVERRIDES"
-
-
-SSTATETASKS += "do_package_write_ipk"
-do_package_write_ipk[sstate-inputdirs] = "${PKGWRITEDIRIPK}"
-do_package_write_ipk[sstate-outputdirs] = "${DEPLOY_DIR_IPK}"
-
-python do_package_write_ipk_setscene () {
-    tmpdir = d.getVar('TMPDIR')
-
-    if os.access(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"), os.R_OK):
-        os.unlink(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"))
-
-    sstate_setscene(d)
-}
-addtask do_package_write_ipk_setscene
-
-python () {
-    if d.getVar('PACKAGES') != '':
-        deps = ' opkg-utils-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot xz-native:do_populate_sysroot'
-        d.appendVarFlag('do_package_write_ipk', 'depends', deps)
-        d.setVarFlag('do_package_write_ipk', 'fakeroot', "1")
-}
-
-python do_package_write_ipk () {
-    bb.build.exec_func("read_subpackage_metadata", d)
-    bb.build.exec_func("do_package_ipk", d)
-}
-do_package_write_ipk[dirs] = "${PKGWRITEDIRIPK}"
-do_package_write_ipk[cleandirs] = "${PKGWRITEDIRIPK}"
-do_package_write_ipk[depends] += "${@oe.utils.build_depends_string(d.getVar('PACKAGE_WRITE_DEPS'), 'do_populate_sysroot')}"
-addtask package_write_ipk after do_packagedata do_package do_deploy_source_date_epoch before do_build
-do_build[rdeptask] += "do_package_write_ipk"
-
-PACKAGEINDEXDEPS += "opkg-utils-native:do_populate_sysroot"
-PACKAGEINDEXDEPS += "opkg-native:do_populate_sysroot"
diff --git a/poky/meta/classes/package_pkgdata.bbclass b/poky/meta/classes/package_pkgdata.bbclass
deleted file mode 100644
index a1ea8fc..0000000
--- a/poky/meta/classes/package_pkgdata.bbclass
+++ /dev/null
@@ -1,167 +0,0 @@
-WORKDIR_PKGDATA = "${WORKDIR}/pkgdata-sysroot"
-
-def package_populate_pkgdata_dir(pkgdatadir, d):
-    import glob
-
-    postinsts = []
-    seendirs = set()
-    stagingdir = d.getVar("PKGDATA_DIR")
-    pkgarchs = ['${MACHINE_ARCH}']
-    pkgarchs = pkgarchs + list(reversed(d.getVar("PACKAGE_EXTRA_ARCHS").split()))
-    pkgarchs.append('allarch')
-
-    bb.utils.mkdirhier(pkgdatadir)
-    for pkgarch in pkgarchs:
-        for manifest in glob.glob(d.expand("${SSTATE_MANIFESTS}/manifest-%s-*.packagedata" % pkgarch)):
-            with open(manifest, "r") as f:
-                for l in f:
-                    l = l.strip()
-                    dest = l.replace(stagingdir, "")
-                    if l.endswith("/"):
-                        staging_copydir(l, pkgdatadir, dest, seendirs)
-                        continue
-                    try:
-                        staging_copyfile(l, pkgdatadir, dest, postinsts, seendirs)
-                    except FileExistsError:
-                        continue
-
-python package_prepare_pkgdata() {
-    import copy
-    import glob
-
-    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
-    mytaskname = d.getVar("BB_RUNTASK")
-    if mytaskname.endswith("_setscene"):
-        mytaskname = mytaskname.replace("_setscene", "")
-    workdir = d.getVar("WORKDIR")
-    pn = d.getVar("PN")
-    stagingdir = d.getVar("PKGDATA_DIR")
-    pkgdatadir = d.getVar("WORKDIR_PKGDATA")
-
-    # Detect bitbake -b usage
-    nodeps = d.getVar("BB_LIMITEDDEPS") or False
-    if nodeps:
-        staging_package_populate_pkgdata_dir(pkgdatadir, d)
-        return
-
-    start = None
-    configuredeps = []
-    for dep in taskdepdata:
-        data = taskdepdata[dep]
-        if data[1] == mytaskname and data[0] == pn:
-            start = dep
-            break
-    if start is None:
-        bb.fatal("Couldn't find ourself in BB_TASKDEPDATA?")
-
-    # We need to figure out which sysroot files we need to expose to this task.
-    # This needs to match what would get restored from sstate, which is controlled
-    # ultimately by calls from bitbake to setscene_depvalid().
-    # That function expects a setscene dependency tree. We build a dependency tree
-    # condensed to inter-sstate task dependencies, similar to that used by setscene
-    # tasks. We can then call into setscene_depvalid() and decide
-    # which dependencies we can "see" and should expose in the recipe specific sysroot.
-    setscenedeps = copy.deepcopy(taskdepdata)
-
-    start = set([start])
-
-    sstatetasks = d.getVar("SSTATETASKS").split()
-    # Add recipe specific tasks referenced by setscene_depvalid()
-    sstatetasks.append("do_stash_locale")
-
-    # If start is an sstate task (like do_package) we need to add in its direct dependencies
-    # else the code below won't recurse into them.
-    for dep in set(start):
-        for dep2 in setscenedeps[dep][3]:
-            start.add(dep2)
-        start.remove(dep)
-
-    # Create collapsed do_populate_sysroot -> do_populate_sysroot tree
-    for dep in taskdepdata:
-        data = setscenedeps[dep]
-        if data[1] not in sstatetasks:
-            for dep2 in setscenedeps:
-                data2 = setscenedeps[dep2]
-                if dep in data2[3]:
-                    data2[3].update(setscenedeps[dep][3])
-                    data2[3].remove(dep)
-            if dep in start:
-                start.update(setscenedeps[dep][3])
-                start.remove(dep)
-            del setscenedeps[dep]
-
-    # Remove circular references
-    for dep in setscenedeps:
-        if dep in setscenedeps[dep][3]:
-            setscenedeps[dep][3].remove(dep)
-
-    # Direct dependencies should be present and can be depended upon
-    for dep in set(start):
-        if setscenedeps[dep][1] == "do_packagedata":
-            if dep not in configuredeps:
-                configuredeps.append(dep)
-
-    msgbuf = []
-    # Call into setscene_depvalid for each sub-dependency and only copy sysroot files
-    # for ones that would be restored from sstate.
-    done = list(start)
-    next = list(start)
-    while next:
-        new = []
-        for dep in next:
-            data = setscenedeps[dep]
-            for datadep in data[3]:
-                if datadep in done:
-                    continue
-                taskdeps = {}
-                taskdeps[dep] = setscenedeps[dep][:2]
-                taskdeps[datadep] = setscenedeps[datadep][:2]
-                retval = setscene_depvalid(datadep, taskdeps, [], d, msgbuf)
-                done.append(datadep)
-                new.append(datadep)
-                if retval:
-                    msgbuf.append("Skipping setscene dependency %s" % datadep)
-                    continue
-                if datadep not in configuredeps and setscenedeps[datadep][1] == "do_packagedata":
-                    configuredeps.append(datadep)
-                    msgbuf.append("Adding dependency on %s" % setscenedeps[datadep][0])
-                else:
-                    msgbuf.append("Following dependency on %s" % setscenedeps[datadep][0])
-        next = new
-
-    # This logging is too verbose for day to day use sadly
-    #bb.debug(2, "\n".join(msgbuf))
-
-    seendirs = set()
-    postinsts = []
-    multilibs = {}
-    manifests = {}
-
-    msg_adding = []
-
-    for dep in configuredeps:
-        c = setscenedeps[dep][0]
-        msg_adding.append(c)
-
-        manifest, d2 = oe.sstatesig.find_sstate_manifest(c, setscenedeps[dep][2], "packagedata", d, multilibs)
-        destsysroot = pkgdatadir
-
-        if manifest:
-            targetdir = destsysroot
-            with open(manifest, "r") as f:
-                manifests[dep] = manifest
-                for l in f:
-                    l = l.strip()
-                    dest = targetdir + l.replace(stagingdir, "")
-                    if l.endswith("/"):
-                        staging_copydir(l, targetdir, dest, seendirs)
-                        continue
-                    staging_copyfile(l, targetdir, dest, postinsts, seendirs)
-
-    bb.note("Installed into pkgdata-sysroot: %s" % str(msg_adding))
-
-}
-package_prepare_pkgdata[cleandirs] = "${WORKDIR_PKGDATA}"
-package_prepare_pkgdata[vardepsexclude] += "MACHINE_ARCH PACKAGE_EXTRA_ARCHS SDK_ARCH BUILD_ARCH SDK_OS BB_TASKDEPDATA SSTATETASKS"
-
-
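package_populate_pkgdata_dir() above rebuilds a recipe-local pkgdata sysroot by replaying the per-architecture packagedata manifests: every manifest line is a path under PKGDATA_DIR, and its destination is obtained by rebasing that path onto WORKDIR_PKGDATA. A simplified stand-alone sketch of that mapping, with shutil standing in for the class's staging_copydir/staging_copyfile helpers and example paths only:

    import os
    import shutil

    # Simplified illustration of the manifest replay in package_populate_pkgdata_dir().
    def copy_manifest_entry(line, stagingdir, pkgdatadir):
        dest = pkgdatadir + line.replace(stagingdir, "", 1)
        if line.endswith("/"):
            os.makedirs(dest, exist_ok=True)                  # ~ staging_copydir()
        else:
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(line, dest)                          # ~ staging_copyfile(), minus postinst handling

    # Example (paths are illustrative):
    # copy_manifest_entry("/build/tmp/pkgdata/qemux86-64/runtime/busybox",
    #                     "/build/tmp/pkgdata/qemux86-64",
    #                     "/build/tmp/work/.../pkgdata-sysroot")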
diff --git a/poky/meta/classes/package_rpm.bbclass b/poky/meta/classes/package_rpm.bbclass
deleted file mode 100644
index e9ff1f7..0000000
--- a/poky/meta/classes/package_rpm.bbclass
+++ /dev/null
@@ -1,755 +0,0 @@
-inherit package
-
-IMAGE_PKGTYPE ?= "rpm"
-
-RPM="rpm"
-RPMBUILD="rpmbuild"
-
-PKGWRITEDIRRPM = "${WORKDIR}/deploy-rpms"
-
-# Maintaining the per-file dependencies has significant overhead when writing the
-# packages. When set, this value merges them for efficiency.
-MERGEPERFILEDEPS = "1"
-
-# Filter dependencies based on a provided function.
-def filter_deps(var, f):
-    import collections
-
-    depends_dict = bb.utils.explode_dep_versions2(var)
-    newdeps_dict = collections.OrderedDict()
-    for dep in depends_dict:
-        if f(dep):
-            newdeps_dict[dep] = depends_dict[dep]
-    return bb.utils.join_deps(newdeps_dict, commasep=False)
-
-# Filter out absolute paths (typically /bin/sh and /usr/bin/env) and any perl
-# dependencies for nativesdk packages.
-def filter_nativesdk_deps(srcname, var):
-    if var and srcname.startswith("nativesdk-"):
-        var = filter_deps(var, lambda dep: not dep.startswith('/') and dep != 'perl' and not dep.startswith('perl('))
-    return var
-
-# Construct per file dependencies file
-def write_rpm_perfiledata(srcname, d):
-    workdir = d.getVar('WORKDIR')
-    packages = d.getVar('PACKAGES')
-    pkgd = d.getVar('PKGD')
-
-    def dump_filerdeps(varname, outfile, d):
-        outfile.write("#!/usr/bin/env python3\n\n")
-        outfile.write("# Dependency table\n")
-        outfile.write('deps = {\n')
-        for pkg in packages.split():
-            dependsflist_key = 'FILE' + varname + 'FLIST' + ":" + pkg
-            dependsflist = (d.getVar(dependsflist_key) or "")
-            for dfile in dependsflist.split():
-                key = "FILE" + varname + ":" + dfile + ":" + pkg
-                deps = filter_nativesdk_deps(srcname, d.getVar(key) or "")
-                depends_dict = bb.utils.explode_dep_versions(deps)
-                file = dfile.replace("@underscore@", "_")
-                file = file.replace("@closebrace@", "]")
-                file = file.replace("@openbrace@", "[")
-                file = file.replace("@tab@", "\t")
-                file = file.replace("@space@", " ")
-                file = file.replace("@at@", "@")
-                outfile.write('"' + pkgd + file + '" : "')
-                for dep in depends_dict:
-                    ver = depends_dict[dep]
-                    if dep and ver:
-                        ver = ver.replace("(","")
-                        ver = ver.replace(")","")
-                        outfile.write(dep + " " + ver + " ")
-                    else:
-                        outfile.write(dep + " ")
-                outfile.write('",\n')
-        outfile.write('}\n\n')
-        outfile.write("import sys\n")
-        outfile.write("while 1:\n")
-        outfile.write("\tline = sys.stdin.readline().strip()\n")
-        outfile.write("\tif not line:\n")
-        outfile.write("\t\tsys.exit(0)\n")
-        outfile.write("\tif line in deps:\n")
-        outfile.write("\t\tprint(deps[line] + '\\n')\n")
-
-    # OE-core dependencies a.k.a. RPM requires
-    outdepends = workdir + "/" + srcname + ".requires"
-
-    dependsfile = open(outdepends, 'w')
-
-    dump_filerdeps('RDEPENDS', dependsfile, d)
-
-    dependsfile.close()
-    os.chmod(outdepends, 0o755)
-
-    # OE-core / RPM Provides
-    outprovides = workdir + "/" + srcname + ".provides"
-
-    providesfile = open(outprovides, 'w')
-
-    dump_filerdeps('RPROVIDES', providesfile, d)
-
-    providesfile.close()
-    os.chmod(outprovides, 0o755)
-
-    return (outdepends, outprovides)
-
-
-python write_specfile () {
-    import oe.packagedata
-
-    # append information for logs and patches to %prep
-    def add_prep(d,spec_files_bottom):
-        if d.getVarFlag('ARCHIVER_MODE', 'srpm') == '1' and bb.data.inherits_class('archiver', d):
-            spec_files_bottom.append('%%prep -n %s' % d.getVar('PN') )
-            spec_files_bottom.append('%s' % "echo \"include logs and patches, Please check them in SOURCES\"")
-            spec_files_bottom.append('')
-
-    # Append the name of the tarball to the 'SOURCE' keyword in the .spec file.
-    def tail_source(d):
-        if d.getVarFlag('ARCHIVER_MODE', 'srpm') == '1' and bb.data.inherits_class('archiver', d):
-            ar_outdir = d.getVar('ARCHIVER_OUTDIR')
-            if not os.path.exists(ar_outdir):
-                return
-            source_list = os.listdir(ar_outdir)
-            source_number = 0
-            for source in source_list:
-                # do_deploy_archives may have already run (from sstate) meaning a .src.rpm may already 
-                # exist in ARCHIVER_OUTDIR so skip if present.
-                if source.endswith(".src.rpm"):
-                    continue
-                # rpmbuild doesn't need root permission, but it does need to
-                # know each file's user and group name; the only user and
-                # group available when working in fakeroot is "root".
-                f = os.path.join(ar_outdir, source)
-                os.chown(f, 0, 0)
-                spec_preamble_top.append('Source%s: %s' % (source_number, source))
-                source_number += 1
-
-    # In RPM, dependencies are of the format: pkg <>= Epoch:Version-Release
-    # This format is similar to OE, however there are restrictions on the
-    # characters that can be in a field.  In the Version field, "-"
-    # characters are not allowed.  "-" is allowed in the Release field.
-    #
-    # We translate the "-" in the version to a "+", by loading the PKGV
-    # from the dependent recipe, replacing the - with a +, and then using
-    # that value to do a replace inside of this recipe's dependencies.
-    # This preserves the "-" separator between the version and release, as
-    # well as any "-" characters inside of the release field.
-    #
-    # All of this has to happen BEFORE the mapping_rename_hook as
-    # after renaming we cannot look up the dependencies in the packagedata
-    # store.
-    def translate_vers(varname, d):
-        depends = d.getVar(varname)
-        if depends:
-            depends_dict = bb.utils.explode_dep_versions2(depends)
-            newdeps_dict = {}
-            for dep in depends_dict:
-                verlist = []
-                for ver in depends_dict[dep]:
-                    if '-' in ver:
-                        subd = oe.packagedata.read_subpkgdata_dict(dep, d)
-                        if 'PKGV' in subd:
-                            pv = subd['PV']
-                            pkgv = subd['PKGV']
-                            reppv = pkgv.replace('-', '+')
-                            ver = ver.replace(pv, reppv).replace(pkgv, reppv)
-                        if 'PKGR' in subd:
-                            # Make sure PKGR rather than PR in ver
-                            pr = '-' + subd['PR']
-                            pkgr = '-' + subd['PKGR']
-                            if pkgr not in ver:
-                                ver = ver.replace(pr, pkgr)
-                        verlist.append(ver)
-                    else:
-                        verlist.append(ver)
-                newdeps_dict[dep] = verlist
-            depends = bb.utils.join_deps(newdeps_dict)
-            d.setVar(varname, depends.strip())
-
-    # We need to change the style of the dependencies from BB to RPM.
-    # This needs to happen AFTER the mapping_rename_hook
-    def print_deps(variable, tag, array, d):
-        depends = variable
-        if depends:
-            depends_dict = bb.utils.explode_dep_versions2(depends)
-            for dep in depends_dict:
-                for ver in depends_dict[dep]:
-                    ver = ver.replace('(', '')
-                    ver = ver.replace(')', '')
-                    array.append("%s: %s %s" % (tag, dep, ver))
-                if not len(depends_dict[dep]):
-                    array.append("%s: %s" % (tag, dep))
-
-    def walk_files(walkpath, target, conffiles, dirfiles):
-        # We can race against the ipk/deb backends, which create CONTROL or DEBIAN directories
-        # when packaging. We just ignore these, as they are created in
-        # packages-split/ and not package/.
-        # We have the odd situation where the CONTROL/DEBIAN directory can be removed in the middle
-        # of the walk; the isdir() test would then fail and the walk code would assume it's a file,
-        # hence we check for the names in files too.
-        for rootpath, dirs, files in os.walk(walkpath):
-            path = rootpath.replace(walkpath, "")
-            if path.endswith("DEBIAN") or path.endswith("CONTROL"):
-                continue
-            path = path.replace("%", "%%%%%%%%")
-            path = path.replace("[", "?")
-            path = path.replace("]", "?")
-
-            # Treat all symlinks to directories as normal files.
-            # os.walk() lists them as directories.
-            def move_to_files(dir):
-                if os.path.islink(os.path.join(rootpath, dir)):
-                    files.append(dir)
-                    return True
-                else:
-                    return False
-            dirs[:] = [dir for dir in dirs if not move_to_files(dir)]
-
-            # Directory handling can happen in two ways: either DIRFILES is not set at all,
-            # in which case we fall back to the older behaviour of packages owning all their
-            # directories
-            if dirfiles is None:
-                for dir in dirs:
-                    if dir == "CONTROL" or dir == "DEBIAN":
-                        continue
-                    dir = dir.replace("%", "%%%%%%%%")
-                    dir = dir.replace("[", "?")
-                    dir = dir.replace("]", "?")
-                    # All packages own the directories their files are in...
-                    target.append('%dir "' + path + '/' + dir + '"')
-            else:
-                # Packages own only empty directories or explicitly listed directories.
-                # This prevents overlapping security permissions.
-                if path and not files and not dirs:
-                    target.append('%dir "' + path + '"')
-                elif path and path in dirfiles:
-                    target.append('%dir "' + path + '"')
-
-            for file in files:
-                if file == "CONTROL" or file == "DEBIAN":
-                    continue
-                file = file.replace("%", "%%%%%%%%")
-                file = file.replace("[", "?")
-                file = file.replace("]", "?")
-                if conffiles.count(path + '/' + file):
-                    target.append('%config "' + path + '/' + file + '"')
-                else:
-                    target.append('"' + path + '/' + file + '"')
-
-    # Prevent the prerm/postrm scripts from being run during an upgrade
-    def wrap_uninstall(scriptvar):
-        scr = scriptvar.strip()
-        if scr.startswith("#!"):
-            pos = scr.find("\n") + 1
-        else:
-            pos = 0
-        scr = scr[:pos] + 'if [ "$1" = "0" ] ; then\n' + scr[pos:] + '\nfi'
-        return scr
-
-    def get_perfile(varname, pkg, d):
-        deps = []
-        dependsflist_key = 'FILE' + varname + 'FLIST' + ":" + pkg
-        dependsflist = (d.getVar(dependsflist_key) or "")
-        for dfile in dependsflist.split():
-            key = "FILE" + varname + ":" + dfile + ":" + pkg
-            depends = d.getVar(key)
-            if depends:
-                deps.append(depends)
-        return " ".join(deps)
-
-    def append_description(spec_preamble, text):
-        """
-        Add the description to the spec file.
-        """
-        import textwrap
-        dedent_text = textwrap.dedent(text).strip()
-        # Bitbake saves "\n" as "\\n"
-        if '\\n' in dedent_text:
-            for t in dedent_text.split('\\n'):
-                spec_preamble.append(t.strip())
-        else:
-            spec_preamble.append('%s' % textwrap.fill(dedent_text, width=75))
-
-    packages = d.getVar('PACKAGES')
-    if not packages or packages == '':
-        bb.debug(1, "No packages; nothing to do")
-        return
-
-    pkgdest = d.getVar('PKGDEST')
-    if not pkgdest:
-        bb.fatal("No PKGDEST")
-
-    outspecfile = d.getVar('OUTSPECFILE')
-    if not outspecfile:
-        bb.fatal("No OUTSPECFILE")
-
-    # Construct the SPEC file...
-    srcname    = d.getVar('PN')
-    localdata = bb.data.createCopy(d)
-    localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + srcname)
-    srcsummary = (localdata.getVar('SUMMARY') or localdata.getVar('DESCRIPTION') or ".")
-    srcversion = localdata.getVar('PKGV').replace('-', '+')
-    srcrelease = localdata.getVar('PKGR')
-    srcepoch   = (localdata.getVar('PKGE') or "")
-    srclicense = localdata.getVar('LICENSE')
-    srcsection = localdata.getVar('SECTION')
-    srcmaintainer  = localdata.getVar('MAINTAINER')
-    srchomepage    = localdata.getVar('HOMEPAGE')
-    srcdescription = localdata.getVar('DESCRIPTION') or "."
-    srccustomtagschunk = get_package_additional_metadata("rpm", localdata)
-
-    srcdepends     = d.getVar('DEPENDS')
-    srcrdepends    = ""
-    srcrrecommends = ""
-    srcrsuggests   = ""
-    srcrprovides   = ""
-    srcrreplaces   = ""
-    srcrconflicts  = ""
-    srcrobsoletes  = ""
-
-    srcrpreinst  = []
-    srcrpostinst = []
-    srcrprerm    = []
-    srcrpostrm   = []
-
-    spec_preamble_top = []
-    spec_preamble_bottom = []
-
-    spec_scriptlets_top = []
-    spec_scriptlets_bottom = []
-
-    spec_files_top = []
-    spec_files_bottom = []
-
-    perfiledeps = (d.getVar("MERGEPERFILEDEPS") or "0") == "0"
-    extra_pkgdata = (d.getVar("RPM_EXTRA_PKGDATA") or "0") == "1"
-
-    for pkg in packages.split():
-        localdata = bb.data.createCopy(d)
-
-        root = "%s/%s" % (pkgdest, pkg)
-
-        localdata.setVar('ROOT', '')
-        localdata.setVar('ROOT_%s' % pkg, root)
-        pkgname = localdata.getVar('PKG:%s' % pkg)
-        if not pkgname:
-            pkgname = pkg
-        localdata.setVar('PKG', pkgname)
-
-        localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + pkg)
-
-        conffiles = get_conffiles(pkg, d)
-        dirfiles = localdata.getVar('DIRFILES')
-        if dirfiles is not None:
-            dirfiles = dirfiles.split()
-
-        splitname    = pkgname
-
-        splitsummary = (localdata.getVar('SUMMARY') or localdata.getVar('DESCRIPTION') or ".")
-        splitversion = (localdata.getVar('PKGV') or "").replace('-', '+')
-        splitrelease = (localdata.getVar('PKGR') or "")
-        splitepoch   = (localdata.getVar('PKGE') or "")
-        splitlicense = (localdata.getVar('LICENSE') or "")
-        splitsection = (localdata.getVar('SECTION') or "")
-        splitdescription = (localdata.getVar('DESCRIPTION') or ".")
-        splitcustomtagschunk = get_package_additional_metadata("rpm", localdata)
-
-        translate_vers('RDEPENDS', localdata)
-        translate_vers('RRECOMMENDS', localdata)
-        translate_vers('RSUGGESTS', localdata)
-        translate_vers('RPROVIDES', localdata)
-        translate_vers('RREPLACES', localdata)
-        translate_vers('RCONFLICTS', localdata)
-
-        # Map the dependencies into their final form
-        mapping_rename_hook(localdata)
-
-        splitrdepends    = localdata.getVar('RDEPENDS') or ""
-        splitrrecommends = localdata.getVar('RRECOMMENDS') or ""
-        splitrsuggests   = localdata.getVar('RSUGGESTS') or ""
-        splitrprovides   = localdata.getVar('RPROVIDES') or ""
-        splitrreplaces   = localdata.getVar('RREPLACES') or ""
-        splitrconflicts  = localdata.getVar('RCONFLICTS') or ""
-        splitrobsoletes  = ""
-
-        splitrpreinst  = localdata.getVar('pkg_preinst')
-        splitrpostinst = localdata.getVar('pkg_postinst')
-        splitrprerm    = localdata.getVar('pkg_prerm')
-        splitrpostrm   = localdata.getVar('pkg_postrm')
-
-
-        if not perfiledeps:
-            # Add in summary of per file dependencies
-            splitrdepends = splitrdepends + " " + get_perfile('RDEPENDS', pkg, d)
-            splitrprovides = splitrprovides + " " + get_perfile('RPROVIDES', pkg, d)
-
-        splitrdepends = filter_nativesdk_deps(srcname, splitrdepends)
-
-        # Gather special src/first package data
-        if srcname == splitname:
-            archiving = d.getVarFlag('ARCHIVER_MODE', 'srpm') == '1' and \
-                        bb.data.inherits_class('archiver', d)
-            if archiving and srclicense != splitlicense:
-                bb.warn("The SRPM produced may not have the correct overall source license in the License tag. This is due to the LICENSE for the primary package and SRPM conflicting.")
-
-            srclicense     = splitlicense
-            srcrdepends    = splitrdepends
-            srcrrecommends = splitrrecommends
-            srcrsuggests   = splitrsuggests
-            srcrprovides   = splitrprovides
-            srcrreplaces   = splitrreplaces
-            srcrconflicts  = splitrconflicts
-
-            srcrpreinst    = splitrpreinst
-            srcrpostinst   = splitrpostinst
-            srcrprerm      = splitrprerm
-            srcrpostrm     = splitrpostrm
-
-            file_list = []
-            walk_files(root, file_list, conffiles, dirfiles)
-            if not file_list and localdata.getVar('ALLOW_EMPTY', False) != "1":
-                bb.note("Not creating empty RPM package for %s" % splitname)
-            else:
-                spec_files_top.append('%files')
-                if extra_pkgdata:
-                    package_rpm_extra_pkgdata(splitname, spec_files_top, localdata)
-                spec_files_top.append('%defattr(-,-,-,-)')
-                if file_list:
-                    bb.note("Creating RPM package for %s" % splitname)
-                    spec_files_top.extend(file_list)
-                else:
-                    bb.note("Creating empty RPM package for %s" % splitname)
-                spec_files_top.append('')
-            continue
-
-        # Process subpackage data
-        spec_preamble_bottom.append('%%package -n %s' % splitname)
-        spec_preamble_bottom.append('Summary: %s' % splitsummary)
-        if srcversion != splitversion:
-            spec_preamble_bottom.append('Version: %s' % splitversion)
-        if srcrelease != splitrelease:
-            spec_preamble_bottom.append('Release: %s' % splitrelease)
-        if srcepoch != splitepoch:
-            spec_preamble_bottom.append('Epoch: %s' % splitepoch)
-        spec_preamble_bottom.append('License: %s' % splitlicense)
-        spec_preamble_bottom.append('Group: %s' % splitsection)
-
-        if srccustomtagschunk != splitcustomtagschunk:
-            spec_preamble_bottom.append(splitcustomtagschunk)
-
-        # Replaces == Obsoletes && Provides
-        robsoletes = bb.utils.explode_dep_versions2(splitrobsoletes)
-        rprovides = bb.utils.explode_dep_versions2(splitrprovides)
-        rreplaces = bb.utils.explode_dep_versions2(splitrreplaces)
-        for dep in rreplaces:
-            if not dep in robsoletes:
-                robsoletes[dep] = rreplaces[dep]
-            if not dep in rprovides:
-                rprovides[dep] = rreplaces[dep]
-        splitrobsoletes = bb.utils.join_deps(robsoletes, commasep=False)
-        splitrprovides = bb.utils.join_deps(rprovides, commasep=False)
-
-        print_deps(splitrdepends, "Requires", spec_preamble_bottom, d)
-        if splitrpreinst:
-            print_deps(splitrdepends, "Requires(pre)", spec_preamble_bottom, d)
-        if splitrpostinst:
-            print_deps(splitrdepends, "Requires(post)", spec_preamble_bottom, d)
-        if splitrprerm:
-            print_deps(splitrdepends, "Requires(preun)", spec_preamble_bottom, d)
-        if splitrpostrm:
-            print_deps(splitrdepends, "Requires(postun)", spec_preamble_bottom, d)
-
-        print_deps(splitrrecommends, "Recommends", spec_preamble_bottom, d)
-        print_deps(splitrsuggests,  "Suggests", spec_preamble_bottom, d)
-        print_deps(splitrprovides,  "Provides", spec_preamble_bottom, d)
-        print_deps(splitrobsoletes, "Obsoletes", spec_preamble_bottom, d)
-        print_deps(splitrconflicts,  "Conflicts", spec_preamble_bottom, d)
-
-        spec_preamble_bottom.append('')
-
-        spec_preamble_bottom.append('%%description -n %s' % splitname)
-        append_description(spec_preamble_bottom, splitdescription)
-
-        spec_preamble_bottom.append('')
-
-        # Now process scriptlets
-        if splitrpreinst:
-            spec_scriptlets_bottom.append('%%pre -n %s' % splitname)
-            spec_scriptlets_bottom.append('# %s - preinst' % splitname)
-            spec_scriptlets_bottom.append(splitrpreinst)
-            spec_scriptlets_bottom.append('')
-        if splitrpostinst:
-            spec_scriptlets_bottom.append('%%post -n %s' % splitname)
-            spec_scriptlets_bottom.append('# %s - postinst' % splitname)
-            spec_scriptlets_bottom.append(splitrpostinst)
-            spec_scriptlets_bottom.append('')
-        if splitrprerm:
-            spec_scriptlets_bottom.append('%%preun -n %s' % splitname)
-            spec_scriptlets_bottom.append('# %s - prerm' % splitname)
-            scriptvar = wrap_uninstall(splitrprerm)
-            spec_scriptlets_bottom.append(scriptvar)
-            spec_scriptlets_bottom.append('')
-        if splitrpostrm:
-            spec_scriptlets_bottom.append('%%postun -n %s' % splitname)
-            spec_scriptlets_bottom.append('# %s - postrm' % splitname)
-            scriptvar = wrap_uninstall(splitrpostrm)
-            spec_scriptlets_bottom.append(scriptvar)
-            spec_scriptlets_bottom.append('')
-
-        # Now process files
-        file_list = []
-        walk_files(root, file_list, conffiles, dirfiles)
-        if not file_list and localdata.getVar('ALLOW_EMPTY', False) != "1":
-            bb.note("Not creating empty RPM package for %s" % splitname)
-        else:
-            spec_files_bottom.append('%%files -n %s' % splitname)
-            if extra_pkgdata:
-                package_rpm_extra_pkgdata(splitname, spec_files_bottom, localdata)
-            spec_files_bottom.append('%defattr(-,-,-,-)')
-            if file_list:
-                bb.note("Creating RPM package for %s" % splitname)
-                spec_files_bottom.extend(file_list)
-            else:
-                bb.note("Creating empty RPM package for %s" % splitname)
-            spec_files_bottom.append('')
-
-        del localdata
-    
-    add_prep(d,spec_files_bottom)
-    spec_preamble_top.append('Summary: %s' % srcsummary)
-    spec_preamble_top.append('Name: %s' % srcname)
-    spec_preamble_top.append('Version: %s' % srcversion)
-    spec_preamble_top.append('Release: %s' % srcrelease)
-    if srcepoch and srcepoch.strip() != "":
-        spec_preamble_top.append('Epoch: %s' % srcepoch)
-    spec_preamble_top.append('License: %s' % srclicense)
-    spec_preamble_top.append('Group: %s' % srcsection)
-    spec_preamble_top.append('Packager: %s' % srcmaintainer)
-    if srchomepage:
-        spec_preamble_top.append('URL: %s' % srchomepage)
-    if srccustomtagschunk:
-        spec_preamble_top.append(srccustomtagschunk)
-    tail_source(d)
-
-    # Replaces == Obsoletes && Provides
-    robsoletes = bb.utils.explode_dep_versions2(srcrobsoletes)
-    rprovides = bb.utils.explode_dep_versions2(srcrprovides)
-    rreplaces = bb.utils.explode_dep_versions2(srcrreplaces)
-    for dep in rreplaces:
-        if not dep in robsoletes:
-            robsoletes[dep] = rreplaces[dep]
-        if not dep in rprovides:
-            rprovides[dep] = rreplaces[dep]
-    srcrobsoletes = bb.utils.join_deps(robsoletes, commasep=False)
-    srcrprovides = bb.utils.join_deps(rprovides, commasep=False)
-
-    print_deps(srcdepends, "BuildRequires", spec_preamble_top, d)
-    print_deps(srcrdepends, "Requires", spec_preamble_top, d)
-    if srcrpreinst:
-        print_deps(srcrdepends, "Requires(pre)", spec_preamble_top, d)
-    if srcrpostinst:
-        print_deps(srcrdepends, "Requires(post)", spec_preamble_top, d)
-    if srcrprerm:
-        print_deps(srcrdepends, "Requires(preun)", spec_preamble_top, d)
-    if srcrpostrm:
-        print_deps(srcrdepends, "Requires(postun)", spec_preamble_top, d)
-
-    print_deps(srcrrecommends, "Recommends", spec_preamble_top, d)
-    print_deps(srcrsuggests, "Suggests", spec_preamble_top, d)
-    print_deps(srcrprovides, "Provides", spec_preamble_top, d)
-    print_deps(srcrobsoletes, "Obsoletes", spec_preamble_top, d)
-    print_deps(srcrconflicts, "Conflicts", spec_preamble_top, d)
-
-    spec_preamble_top.append('')
-
-    spec_preamble_top.append('%description')
-    append_description(spec_preamble_top, srcdescription)
-
-    spec_preamble_top.append('')
-
-    if srcrpreinst:
-        spec_scriptlets_top.append('%pre')
-        spec_scriptlets_top.append('# %s - preinst' % srcname)
-        spec_scriptlets_top.append(srcrpreinst)
-        spec_scriptlets_top.append('')
-    if srcrpostinst:
-        spec_scriptlets_top.append('%post')
-        spec_scriptlets_top.append('# %s - postinst' % srcname)
-        spec_scriptlets_top.append(srcrpostinst)
-        spec_scriptlets_top.append('')
-    if srcrprerm:
-        spec_scriptlets_top.append('%preun')
-        spec_scriptlets_top.append('# %s - prerm' % srcname)
-        scriptvar = wrap_uninstall(srcrprerm)
-        spec_scriptlets_top.append(scriptvar)
-        spec_scriptlets_top.append('')
-    if srcrpostrm:
-        spec_scriptlets_top.append('%postun')
-        spec_scriptlets_top.append('# %s - postrm' % srcname)
-        scriptvar = wrap_uninstall(srcrpostrm)
-        spec_scriptlets_top.append(scriptvar)
-        spec_scriptlets_top.append('')
-
-    # Write the SPEC file
-    specfile = open(outspecfile, 'w')
-
-    # RPMSPEC_PREAMBLE is a way to add arbitrary text to the top
-    # of the generated spec file
-    external_preamble = d.getVar("RPMSPEC_PREAMBLE")
-    if external_preamble:
-        specfile.write(external_preamble + "\n")
-
-    for line in spec_preamble_top:
-        specfile.write(line + "\n")
-
-    for line in spec_preamble_bottom:
-        specfile.write(line + "\n")
-
-    for line in spec_scriptlets_top:
-        specfile.write(line + "\n")
-
-    for line in spec_scriptlets_bottom:
-        specfile.write(line + "\n")
-
-    for line in spec_files_top:
-        specfile.write(line + "\n")
-
-    for line in spec_files_bottom:
-        specfile.write(line + "\n")
-
-    specfile.close()
-}
-# Otherwise allarch packages may change depending on override configuration
-write_specfile[vardepsexclude] = "OVERRIDES"
-
-# Have to list any variables referenced as X_<pkg> that aren't in pkgdata here
-RPMEXTRAVARS = "PACKAGE_ADD_METADATA_RPM"
-write_specfile[vardeps] += "${@gen_packagevar(d, 'RPMEXTRAVARS')}"
-
-python do_package_rpm () {
-    workdir = d.getVar('WORKDIR')
-    tmpdir = d.getVar('TMPDIR')
-    pkgd = d.getVar('PKGD')
-    pkgdest = d.getVar('PKGDEST')
-    if not workdir or not pkgd or not tmpdir:
-        bb.error("Variables incorrectly set, unable to package")
-        return
-
-    packages = d.getVar('PACKAGES')
-    if not packages or packages == '':
-        bb.debug(1, "No packages; nothing to do")
-        return
-
-    # Construct the spec file...
-    # If the spec file already exists and has not been stored into
-    # pseudo's files.db, it may cause the rpmbuild src.rpm step to fail,
-    # so remove it before doing rpmbuild src.rpm.
-    srcname    = d.getVar('PN')
-    outspecfile = workdir + "/" + srcname + ".spec"
-    if os.path.isfile(outspecfile):
-        os.remove(outspecfile)
-    d.setVar('OUTSPECFILE', outspecfile)
-    bb.build.exec_func('write_specfile', d)
-
-    perfiledeps = (d.getVar("MERGEPERFILEDEPS") or "0") == "0"
-    if perfiledeps:
-        outdepends, outprovides = write_rpm_perfiledata(srcname, d)
-
-    # Setup the rpmbuild arguments...
-    rpmbuild = d.getVar('RPMBUILD')
-    targetsys = d.getVar('TARGET_SYS')
-    targetvendor = d.getVar('HOST_VENDOR')
-
-    # Too many places in the dnf stack assume that arch-independent packages are "noarch".
-    # Let's not fight against this.
-    package_arch = (d.getVar('PACKAGE_ARCH') or "").replace("-", "_")
-    if package_arch == "all":
-        package_arch = "noarch"
-
-    sdkpkgsuffix = (d.getVar('SDKPKGSUFFIX') or "nativesdk").replace("-", "_")
-    d.setVar('PACKAGE_ARCH_EXTEND', package_arch)
-    pkgwritedir = d.expand('${PKGWRITEDIRRPM}/${PACKAGE_ARCH_EXTEND}')
-    d.setVar('RPM_PKGWRITEDIR', pkgwritedir)
-    bb.debug(1, 'PKGWRITEDIR: %s' % d.getVar('RPM_PKGWRITEDIR'))
-    pkgarch = d.expand('${PACKAGE_ARCH_EXTEND}${HOST_VENDOR}-linux')
-    bb.utils.mkdirhier(pkgwritedir)
-    os.chmod(pkgwritedir, 0o755)
-
-    cmd = rpmbuild
-    cmd = cmd + " --noclean --nodeps --short-circuit --target " + pkgarch + " --buildroot " + pkgd
-    cmd = cmd + " --define '_topdir " + workdir + "' --define '_rpmdir " + pkgwritedir + "'"
-    cmd = cmd + " --define '_builddir " + d.getVar('B') + "'"
-    cmd = cmd + " --define '_build_name_fmt %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm'"
-    cmd = cmd + " --define '_use_internal_dependency_generator 0'"
-    cmd = cmd + " --define '_binaries_in_noarch_packages_terminate_build 0'"
-    cmd = cmd + " --define '_build_id_links none'"
-    cmd = cmd + " --define '_binary_payload w19T%d.zstdio'" % int(d.getVar("ZSTD_THREADS"))
-    cmd = cmd + " --define '_source_payload w19T%d.zstdio'" % int(d.getVar("ZSTD_THREADS"))
-    cmd = cmd + " --define 'clamp_mtime_to_source_date_epoch 1'"
-    cmd = cmd + " --define 'use_source_date_epoch_as_buildtime 1'"
-    cmd = cmd + " --define '_buildhost reproducible'"
-    cmd = cmd + " --define '__font_provides %{nil}'"
-    if perfiledeps:
-        cmd = cmd + " --define '__find_requires " + outdepends + "'"
-        cmd = cmd + " --define '__find_provides " + outprovides + "'"
-    else:
-        cmd = cmd + " --define '__find_requires %{nil}'"
-        cmd = cmd + " --define '__find_provides %{nil}'"
-    cmd = cmd + " --define '_unpackaged_files_terminate_build 0'"
-    cmd = cmd + " --define 'debug_package %{nil}'"
-    cmd = cmd + " --define '_tmppath " + workdir + "'"
-    if d.getVarFlag('ARCHIVER_MODE', 'srpm') == '1' and bb.data.inherits_class('archiver', d):
-        cmd = cmd + " --define '_sourcedir " + d.getVar('ARCHIVER_OUTDIR') + "'"
-        cmdsrpm = cmd + " --define '_srcrpmdir " + d.getVar('ARCHIVER_RPMOUTDIR') + "'"
-        cmdsrpm = cmdsrpm + " -bs " + outspecfile
-        # Build the .src.rpm
-        d.setVar('SBUILDSPEC', cmdsrpm + "\n")
-        d.setVarFlag('SBUILDSPEC', 'func', '1')
-        bb.build.exec_func('SBUILDSPEC', d)
-    cmd = cmd + " -bb " + outspecfile
-
-    # rpm 4 creates various empty directories in _topdir, let's clean them up
-    cleanupcmd = "rm -rf %s/BUILDROOT %s/SOURCES %s/SPECS %s/SRPMS" % (workdir, workdir, workdir, workdir)
-
-    # Build the rpm package!
-    d.setVar('BUILDSPEC', cmd + "\n" + cleanupcmd + "\n")
-    d.setVarFlag('BUILDSPEC', 'func', '1')
-    bb.build.exec_func('BUILDSPEC', d)
-
-    if d.getVar('RPM_SIGN_PACKAGES') == '1':
-        bb.build.exec_func("sign_rpm", d)
-}
-
-python () {
-    if d.getVar('PACKAGES') != '':
-        deps = ' rpm-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot'
-        d.appendVarFlag('do_package_write_rpm', 'depends', deps)
-        d.setVarFlag('do_package_write_rpm', 'fakeroot', '1')
-}
-
-SSTATETASKS += "do_package_write_rpm"
-do_package_write_rpm[sstate-inputdirs] = "${PKGWRITEDIRRPM}"
-do_package_write_rpm[sstate-outputdirs] = "${DEPLOY_DIR_RPM}"
-# Take a shared lock, we can write multiple packages at the same time...
-# but we need to stop the rootfs/solver from running while we do...
-do_package_write_rpm[sstate-lockfile-shared] += "${DEPLOY_DIR_RPM}/rpm.lock"
-
-python do_package_write_rpm_setscene () {
-    sstate_setscene(d)
-}
-addtask do_package_write_rpm_setscene
-
-python do_package_write_rpm () {
-    bb.build.exec_func("read_subpackage_metadata", d)
-    bb.build.exec_func("do_package_rpm", d)
-}
-
-do_package_write_rpm[dirs] = "${PKGWRITEDIRRPM}"
-do_package_write_rpm[cleandirs] = "${PKGWRITEDIRRPM}"
-do_package_write_rpm[depends] += "${@oe.utils.build_depends_string(d.getVar('PACKAGE_WRITE_DEPS'), 'do_populate_sysroot')}"
-addtask package_write_rpm after do_packagedata do_package do_deploy_source_date_epoch before do_build
-do_build[rdeptask] += "do_package_write_rpm"
-
-PACKAGEINDEXDEPS += "rpm-native:do_populate_sysroot"
-PACKAGEINDEXDEPS += "createrepo-c-native:do_populate_sysroot"
diff --git a/poky/meta/classes/package_tar.bbclass b/poky/meta/classes/package_tar.bbclass
deleted file mode 100644
index d6c1b30..0000000
--- a/poky/meta/classes/package_tar.bbclass
+++ /dev/null
@@ -1,71 +0,0 @@
-inherit package
-
-IMAGE_PKGTYPE ?= "tar"
-
-python do_package_tar () {
-    import subprocess
-
-    oldcwd = os.getcwd()
-
-    workdir = d.getVar('WORKDIR')
-    if not workdir:
-        bb.error("WORKDIR not defined, unable to package")
-        return
-
-    outdir = d.getVar('DEPLOY_DIR_TAR')
-    if not outdir:
-        bb.error("DEPLOY_DIR_TAR not defined, unable to package")
-        return
-
-    dvar = d.getVar('D')
-    if not dvar:
-        bb.error("D not defined, unable to package")
-        return
-
-    packages = d.getVar('PACKAGES')
-    if not packages:
-        bb.debug(1, "PACKAGES not defined, nothing to package")
-        return
-
-    pkgdest = d.getVar('PKGDEST')
-
-    bb.utils.mkdirhier(outdir)
-    bb.utils.mkdirhier(dvar)
-
-    for pkg in packages.split():
-        localdata = bb.data.createCopy(d)
-        root = "%s/%s" % (pkgdest, pkg)
-
-        overrides = localdata.getVar('OVERRIDES', False)
-        localdata.setVar('OVERRIDES', '%s:%s' % (overrides, pkg))
-
-        bb.utils.mkdirhier(root)
-        basedir = os.path.dirname(root)
-        tarfn = localdata.expand("${DEPLOY_DIR_TAR}/${PKG}-${PKGV}-${PKGR}.tar.gz")
-        os.chdir(root)
-        dlist = os.listdir(root)
-        if not dlist:
-            bb.note("Not creating empty archive for %s-%s-%s" % (pkg, localdata.getVar('PKGV'), localdata.getVar('PKGR')))
-            continue
-        args = "tar -cz --exclude=CONTROL --exclude=DEBIAN -f".split()
-        ret = subprocess.call(args + [tarfn] + dlist)
-        if ret != 0:
-            bb.error("Creation of tar %s failed." % tarfn)
-
-    os.chdir(oldcwd)
-}
-
-python () {
-    if d.getVar('PACKAGES') != '':
-        deps = ' tar-native:do_populate_sysroot virtual/fakeroot-native:do_populate_sysroot'
-        d.appendVarFlag('do_package_write_tar', 'depends', deps)
-        d.setVarFlag('do_package_write_tar', 'fakeroot', "1")
-}
-
-
-python do_package_write_tar () {
-    bb.build.exec_func("read_subpackage_metadata", d)
-    bb.build.exec_func("do_package_tar", d)
-}
-do_package_write_tar[dirs] = "${D}"
-addtask package_write_tar before do_build after do_packagedata do_package
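do_package_tar above boils down to one tar invocation per package, archiving the package's packages-split tree while excluding the CONTROL/DEBIAN control directories used by the other backends. Roughly (the paths below are examples, not values taken from the class):

    import os
    import subprocess

    # Rough equivalent of the per-package tar invocation in do_package_tar().
    root = "/path/to/packages-split/busybox"                    # ${PKGDEST}/${PKG}
    tarfn = "/path/to/deploy/tar/busybox-1.35.0-r0.tar.gz"      # ${DEPLOY_DIR_TAR}/${PKG}-${PKGV}-${PKGR}.tar.gz
    args = "tar -cz --exclude=CONTROL --exclude=DEBIAN -f".split()
    # subprocess.call(args + [tarfn] + os.listdir(root), cwd=root)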
diff --git a/poky/meta/classes/packagedata.bbclass b/poky/meta/classes/packagedata.bbclass
deleted file mode 100644
index c2760e2..0000000
--- a/poky/meta/classes/packagedata.bbclass
+++ /dev/null
@@ -1,34 +0,0 @@
-python read_subpackage_metadata () {
-    import oe.packagedata
-
-    vars = {
-        "PN" : d.getVar('PN'), 
-        "PE" : d.getVar('PE'), 
-        "PV" : d.getVar('PV'),
-        "PR" : d.getVar('PR'),
-    }
-
-    data = oe.packagedata.read_pkgdata(vars["PN"], d)
-
-    for key in data.keys():
-        d.setVar(key, data[key])
-
-    for pkg in d.getVar('PACKAGES').split():
-        sdata = oe.packagedata.read_subpkgdata(pkg, d)
-        for key in sdata.keys():
-            if key in vars:
-                if sdata[key] != vars[key]:
-                    if key == "PN":
-                        bb.fatal("Recipe %s is trying to create package %s which was already written by recipe %s. This will cause corruption, please resolve this and only provide the package from one recipe or the other or only build one of the recipes." % (vars[key], pkg, sdata[key]))
-                    bb.fatal("Recipe %s is trying to change %s from '%s' to '%s'. This will cause do_package_write_* failures since the incorrect data will be used and they will be unable to find the right workdir." % (vars["PN"], key, vars[key], sdata[key]))
-                continue
-            #
-            # If we set unsuffixed variables here there is a chance they could clobber override versions
-            # of that variable, e.g. DESCRIPTION could clobber DESCRIPTION:<pkgname>
-            # We therefore don't clobber for the unsuffixed variable versions
-            #
-            if key.endswith(":" + pkg):
-                d.setVar(key, sdata[key])
-            else:
-                d.setVar(key, sdata[key], parsing=True)
-}
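
The key detail in read_subpackage_metadata above is that keys suffixed with the package name are applied unconditionally, while unsuffixed keys are set with parsing=True so they cannot clobber override-specific values such as DESCRIPTION:<pkgname>. A small sketch of just the suffix partitioning (plain Python; the sdata dict is an invented example, not real pkgdata):

    def split_subpkg_keys(sdata, pkg):
        # Partition pkgdata keys into those suffixed for this package
        # (safe to set directly) and unsuffixed ones (set more carefully).
        suffix = ":" + pkg
        suffixed = {k: v for k, v in sdata.items() if k.endswith(suffix)}
        unsuffixed = {k: v for k, v in sdata.items() if not k.endswith(suffix)}
        return suffixed, unsuffixed

    example = {"DESCRIPTION": "generic", "DESCRIPTION:foo-dev": "dev files", "PKG:foo-dev": "foo-dev"}
    print(split_subpkg_keys(example, "foo-dev"))
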
diff --git a/poky/meta/classes/packagegroup.bbclass b/poky/meta/classes/packagegroup.bbclass
deleted file mode 100644
index 557b1b6..0000000
--- a/poky/meta/classes/packagegroup.bbclass
+++ /dev/null
@@ -1,61 +0,0 @@
-# Class for packagegroup (package group) recipes
-
-# By default, only the packagegroup package itself is in PACKAGES.
-# -dbg and -dev flavours are handled by the anonfunc below.
-# This means that packagegroup recipes used to build multiple packagegroup
-# packages have to modify PACKAGES after inheriting packagegroup.bbclass.
-PACKAGES = "${PN}"
-
-# By default, packagegroup packages do not depend on a certain architecture.
-# Only if dependencies are modified by MACHINE_FEATURES does PACKAGE_ARCH
-# need to be set to MACHINE_ARCH before inheriting packagegroup.bbclass.
-PACKAGE_ARCH ?= "all"
-
-# Fully expanded - so it applies the overrides as well
-PACKAGE_ARCH_EXPANDED := "${PACKAGE_ARCH}"
-
-LICENSE ?= "MIT"
-
-inherit ${@oe.utils.ifelse(d.getVar('PACKAGE_ARCH_EXPANDED') == 'all', 'allarch', '')}
-
-# This automatically adds -dbg and -dev flavours of all PACKAGES
-# to the list. Their dependencies (RRECOMMENDS) are handled as usual
-# by package_depchains in a following step.
-# Also mark all packages as ALLOW_EMPTY
-python () {
-    packages = d.getVar('PACKAGES').split()
-    if d.getVar('PACKAGEGROUP_DISABLE_COMPLEMENTARY') != '1':
-        types = ['', '-dbg', '-dev']
-        if bb.utils.contains('DISTRO_FEATURES', 'ptest', True, False, d):
-            types.append('-ptest')
-        packages = [pkg + suffix for pkg in packages
-                    for suffix in types]
-        d.setVar('PACKAGES', ' '.join(packages))
-    for pkg in packages:
-        d.setVar('ALLOW_EMPTY:%s' % pkg, '1')
-}
-
-# We don't want to look at shared library dependencies for the
-# dbg packages
-DEPCHAIN_DBGDEFAULTDEPS = "1"
-
-# We only need the packaging tasks - disable the rest
-deltask do_fetch
-deltask do_unpack
-deltask do_patch
-deltask do_configure
-deltask do_compile
-deltask do_install
-deltask do_populate_sysroot
-
-INHIBIT_DEFAULT_DEPS = "1"
-
-python () {
-    if bb.data.inherits_class('nativesdk', d):
-        return
-    initman = d.getVar("VIRTUAL-RUNTIME_init_manager")
-    if initman and initman in ['sysvinit', 'systemd'] and not bb.utils.contains('DISTRO_FEATURES', initman, True, False, d):
-        bb.fatal("Please ensure that your setting of VIRTUAL-RUNTIME_init_manager (%s) matches the entries enabled in DISTRO_FEATURES" % initman)
-}
-
-CVE_PRODUCT = ""
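
The anonymous python function in packagegroup.bbclass above expands every entry in PACKAGES into -dbg/-dev (and, when the ptest DISTRO_FEATURE is enabled, -ptest) flavours and marks them all ALLOW_EMPTY. A standalone sketch of the expansion (plain Python; the feature check is reduced to a boolean argument):

    def expand_packagegroup(packages, ptest_enabled):
        # Mirror the -dbg/-dev(/-ptest) flavour expansion done for packagegroups.
        suffixes = ["", "-dbg", "-dev"] + (["-ptest"] if ptest_enabled else [])
        return [pkg + suffix for pkg in packages for suffix in suffixes]

    print(expand_packagegroup(["packagegroup-core-boot"], ptest_enabled=True))
    # -> ['packagegroup-core-boot', 'packagegroup-core-boot-dbg',
    #     'packagegroup-core-boot-dev', 'packagegroup-core-boot-ptest']
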
diff --git a/poky/meta/classes/patch.bbclass b/poky/meta/classes/patch.bbclass
deleted file mode 100644
index 8de7025..0000000
--- a/poky/meta/classes/patch.bbclass
+++ /dev/null
@@ -1,169 +0,0 @@
-# Copyright (C) 2006  OpenedHand LTD
-
-# Point to an empty file so any user's custom settings don't break things
-QUILTRCFILE ?= "${STAGING_ETCDIR_NATIVE}/quiltrc"
-
-PATCHDEPENDENCY = "${PATCHTOOL}-native:do_populate_sysroot"
-
-# There is a bug in patch 2.7.3 and earlier where index lines
-# in patches can change file modes when they shouldn't:
-# http://git.savannah.gnu.org/cgit/patch.git/patch/?id=82b800c9552a088a241457948219d25ce0a407a4
-# This leaks into debug sources in particular. Add the dependency
-# to target recipes to avoid this problem until we can rely on 2.7.4 or later.
-PATCHDEPENDENCY:append:class-target = " patch-replacement-native:do_populate_sysroot"
-
-PATCH_GIT_USER_NAME ?= "OpenEmbedded"
-PATCH_GIT_USER_EMAIL ?= "oe.patch@oe"
-
-inherit terminal
-
-python () {
-    if d.getVar('PATCHTOOL') == 'git' and d.getVar('PATCH_COMMIT_FUNCTIONS') == '1':
-        extratasks = bb.build.tasksbetween('do_unpack', 'do_patch', d)
-        try:
-            extratasks.remove('do_unpack')
-        except ValueError:
-            # For some recipes do_unpack doesn't exist, ignore it
-            pass
-
-        d.appendVarFlag('do_patch', 'prefuncs', ' patch_task_patch_prefunc')
-        for task in extratasks:
-            d.appendVarFlag(task, 'postfuncs', ' patch_task_postfunc')
-}
-
-python patch_task_patch_prefunc() {
-    # Prefunc for do_patch
-    srcsubdir = d.getVar('S')
-
-    workdir = os.path.abspath(d.getVar('WORKDIR'))
-    testsrcdir = os.path.abspath(srcsubdir)
-    if (testsrcdir + os.sep).startswith(workdir + os.sep):
-        # Double-check that either workdir or S or some directory in-between is a git repository
-        found = False
-        while testsrcdir != workdir:
-            if os.path.exists(os.path.join(testsrcdir, '.git')):
-                found = True
-                break
-            if testsrcdir == workdir:
-                break
-            testsrcdir = os.path.dirname(testsrcdir)
-        if not found:
-            bb.fatal('PATCHTOOL = "git" set for source tree that is not a git repository. Refusing to continue as that may result in commits being made in your metadata repository.')
-
-    patchdir = os.path.join(srcsubdir, 'patches')
-    if os.path.exists(patchdir):
-        if os.listdir(patchdir):
-            d.setVar('PATCH_HAS_PATCHES_DIR', '1')
-        else:
-            os.rmdir(patchdir)
-}
-
-python patch_task_postfunc() {
-    # Postfunc for task functions between do_unpack and do_patch
-    import oe.patch
-    import shutil
-    func = d.getVar('BB_RUNTASK')
-    srcsubdir = d.getVar('S')
-
-    if os.path.exists(srcsubdir):
-        if func == 'do_patch':
-            haspatches = (d.getVar('PATCH_HAS_PATCHES_DIR') == '1')
-            patchdir = os.path.join(srcsubdir, 'patches')
-            if os.path.exists(patchdir):
-                shutil.rmtree(patchdir)
-                if haspatches:
-                    stdout, _ = bb.process.run('git status --porcelain patches', cwd=srcsubdir)
-                    if stdout:
-                        bb.process.run('git checkout patches', cwd=srcsubdir)
-        stdout, _ = bb.process.run('git status --porcelain .', cwd=srcsubdir)
-        if stdout:
-            useroptions = []
-            oe.patch.GitApplyTree.gitCommandUserOptions(useroptions, d=d)
-            bb.process.run('git add .; git %s commit -a -m "Committing changes from %s\n\n%s"' % (' '.join(useroptions), func, oe.patch.GitApplyTree.ignore_commit_prefix + ' - from %s' % func), cwd=srcsubdir)
-}
-
-def src_patches(d, all=False, expand=True):
-    import oe.patch
-    return oe.patch.src_patches(d, all, expand)
-
-def should_apply(parm, d):
-    """Determine if we should apply the given patch"""
-    import oe.patch
-    return oe.patch.should_apply(parm, d)
-
-should_apply[vardepsexclude] = "DATE SRCDATE"
-
-python patch_do_patch() {
-    import oe.patch
-
-    patchsetmap = {
-        "patch": oe.patch.PatchTree,
-        "quilt": oe.patch.QuiltTree,
-        "git": oe.patch.GitApplyTree,
-    }
-
-    cls = patchsetmap[d.getVar('PATCHTOOL') or 'quilt']
-
-    resolvermap = {
-        "noop": oe.patch.NOOPResolver,
-        "user": oe.patch.UserResolver,
-    }
-
-    rcls = resolvermap[d.getVar('PATCHRESOLVE') or 'user']
-
-    classes = {}
-
-    s = d.getVar('S')
-
-    os.putenv('PATH', d.getVar('PATH'))
-
-    # We must use one TMPDIR per process so that the "patch" processes
-    # don't generate the same temp file name.
-
-    import tempfile
-    process_tmpdir = tempfile.mkdtemp()
-    os.environ['TMPDIR'] = process_tmpdir
-
-    for patch in src_patches(d):
-        _, _, local, _, _, parm = bb.fetch.decodeurl(patch)
-
-        if "patchdir" in parm:
-            patchdir = parm["patchdir"]
-            if not os.path.isabs(patchdir):
-                patchdir = os.path.join(s, patchdir)
-            if not os.path.isdir(patchdir):
-                bb.fatal("Target directory '%s' not found, patchdir '%s' is incorrect in patch file '%s'" %
-                    (patchdir, parm["patchdir"], parm['patchname']))
-        else:
-            patchdir = s
-
-        if not patchdir in classes:
-            patchset = cls(patchdir, d)
-            resolver = rcls(patchset, oe_terminal)
-            classes[patchdir] = (patchset, resolver)
-            patchset.Clean()
-        else:
-            patchset, resolver = classes[patchdir]
-
-        bb.note("Applying patch '%s' (%s)" % (parm['patchname'], oe.path.format_display(local, d)))
-        try:
-            patchset.Import({"file":local, "strippath": parm['striplevel']}, True)
-        except Exception as exc:
-            bb.utils.remove(process_tmpdir, True)
-            bb.fatal("Importing patch '%s' with striplevel '%s'\n%s" % (parm['patchname'], parm['striplevel'], repr(exc).replace("\\n", "\n")))
-        try:
-            resolver.Resolve()
-        except bb.BBHandledException as e:
-            bb.utils.remove(process_tmpdir, True)
-            bb.fatal("Applying patch '%s' on target directory '%s'\n%s" % (parm['patchname'], patchdir, repr(e).replace("\\n", "\n")))
-
-    bb.utils.remove(process_tmpdir, True)
-    del os.environ['TMPDIR']
-}
-patch_do_patch[vardepsexclude] = "PATCHRESOLVE"
-
-addtask patch after do_unpack
-do_patch[dirs] = "${WORKDIR}"
-do_patch[depends] = "${PATCHDEPENDENCY}"
-
-EXPORT_FUNCTIONS do_patch
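
Within patch_do_patch above, each patch's optional 'patchdir' parameter is resolved relative to ${S} and validated before a PatchTree/QuiltTree/GitApplyTree is instantiated for that directory. A minimal sketch of that resolution step (plain Python; the parm dict stands in for the decoded SRC_URI parameters, and a plain exception replaces bb.fatal):

    import os

    def resolve_patchdir(s, parm):
        # Resolve an optional 'patchdir' parameter relative to the source tree S,
        # raising if the resulting directory does not exist.
        patchdir = parm.get("patchdir")
        if patchdir is None:
            return s
        if not os.path.isabs(patchdir):
            patchdir = os.path.join(s, patchdir)
        if not os.path.isdir(patchdir):
            raise FileNotFoundError("patchdir '%s' not found for patch '%s'"
                                    % (patchdir, parm.get("patchname", "?")))
        return patchdir
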
diff --git a/poky/meta/classes/perl-version.bbclass b/poky/meta/classes/perl-version.bbclass
deleted file mode 100644
index 84b67b8..0000000
--- a/poky/meta/classes/perl-version.bbclass
+++ /dev/null
@@ -1,66 +0,0 @@
-PERL_OWN_DIR = ""
-
-# Determine the staged version of perl from the perl configuration file
-# Assign vardepvalue, because otherwise signature is changed before and after
-# perl is built (from None to real version in config.sh).
-get_perl_version[vardepvalue] = "${PERL_OWN_DIR}"
-def get_perl_version(d):
-    import re
-    cfg = d.expand('${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/config.sh')
-    try:
-        f = open(cfg, 'r')
-    except IOError:
-        return None
-    l = f.readlines()
-    f.close()
-    r = re.compile(r"^version='(\d*\.\d*\.\d*)'")
-    for s in l:
-        m = r.match(s)
-        if m:
-            return m.group(1)
-    return None
-
-PERLVERSION := "${@get_perl_version(d)}"
-PERLVERSION[vardepvalue] = ""
-
-
-# Determine the staged arch of perl from the perl configuration file
-# Assign vardepvalue, because otherwise signature is changed before and after
-# perl is built (from None to real version in config.sh).
-def get_perl_arch(d):
-    import re
-    cfg = d.expand('${STAGING_LIBDIR}${PERL_OWN_DIR}/perl5/config.sh')
-    try:
-        f = open(cfg, 'r')
-    except IOError:
-        return None
-    l = f.readlines()
-    f.close()
-    r = re.compile("^archname='([^']*)'")
-    for s in l:
-        m = r.match(s)
-        if m:
-            return m.group(1)
-    return None
-
-PERLARCH := "${@get_perl_arch(d)}"
-PERLARCH[vardepvalue] = ""
-
-# Determine the staged arch of perl-native from the perl configuration file
-# Assign vardepvalue, because otherwise signature is changed before and after
-# perl is built (from None to real version in config.sh).
-def get_perl_hostarch(d):
-    import re
-    cfg = d.expand('${STAGING_LIBDIR_NATIVE}/perl5/config.sh')
-    try:
-        f = open(cfg, 'r')
-    except IOError:
-        return None
-    l = f.readlines()
-    f.close()
-    r = re.compile("^archname='([^']*)'")
-    for s in l:
-        m = r.match(s)
-        if m:
-            return m.group(1)
-    return None
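
All three helpers in perl-version.bbclass above parse perl's config.sh with simple line-anchored regexes. A self-contained sketch of the same parsing against an in-memory sample (plain Python; the sample text is invented):

    import re

    def parse_config_sh(text):
        # Extract the perl version and archname the same way the class does,
        # returning None for anything that is not found.
        version = None
        arch = None
        for line in text.splitlines():
            m = re.match(r"^version='(\d*\.\d*\.\d*)'", line)
            if m:
                version = m.group(1)
            m = re.match(r"^archname='([^']*)'", line)
            if m:
                arch = m.group(1)
        return version, arch

    sample = "version='5.36.0'\narchname='x86_64-linux'\n"
    print(parse_config_sh(sample))  # -> ('5.36.0', 'x86_64-linux')
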
diff --git a/poky/meta/classes/perlnative.bbclass b/poky/meta/classes/perlnative.bbclass
deleted file mode 100644
index cc8de8b..0000000
--- a/poky/meta/classes/perlnative.bbclass
+++ /dev/null
@@ -1,3 +0,0 @@
-EXTRANATIVEPATH += "perl-native"
-DEPENDS += "perl-native"
-OECMAKE_PERLNATIVE_DIR = "${STAGING_BINDIR_NATIVE}/perl-native"
diff --git a/poky/meta/classes/pixbufcache.bbclass b/poky/meta/classes/pixbufcache.bbclass
deleted file mode 100644
index 886bf19..0000000
--- a/poky/meta/classes/pixbufcache.bbclass
+++ /dev/null
@@ -1,63 +0,0 @@
-#
-# This class will generate the proper postinst/postrm scriptlets for pixbuf
-# packages.
-#
-
-DEPENDS:append:class-target = " qemu-native"
-inherit qemu
-
-PIXBUF_PACKAGES ??= "${PN}"
-
-PACKAGE_WRITE_DEPS += "qemu-native gdk-pixbuf-native"
-
-pixbufcache_common() {
-if [ "x$D" != "x" ]; then
-	$INTERCEPT_DIR/postinst_intercept update_pixbuf_cache ${PKG} mlprefix=${MLPREFIX} binprefix=${MLPREFIX} libdir=${libdir} \
-		bindir=${bindir} base_libdir=${base_libdir}
-else
-
-	# Update the pixbuf loaders in case they haven't been registered yet
-	${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders --update-cache
-
-	if [ -x ${bindir}/gtk-update-icon-cache ] && [ -d ${datadir}/icons ]; then
-		for icondir in /usr/share/icons/*; do
-			if [ -d ${icondir} ]; then
-				gtk-update-icon-cache -t -q ${icondir}
-			fi
-		done
-	fi
-fi
-}
-
-python populate_packages:append() {
-    pixbuf_pkgs = d.getVar('PIXBUF_PACKAGES').split()
-
-    for pkg in pixbuf_pkgs:
-        bb.note("adding pixbuf postinst and postrm scripts to %s" % pkg)
-        postinst = d.getVar('pkg_postinst:%s' % pkg) or d.getVar('pkg_postinst')
-        if not postinst:
-            postinst = '#!/bin/sh\n'
-        postinst += d.getVar('pixbufcache_common')
-        d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-        postrm = d.getVar('pkg_postrm:%s' % pkg) or d.getVar('pkg_postrm')
-        if not postrm:
-            postrm = '#!/bin/sh\n'
-        postrm += d.getVar('pixbufcache_common')
-        d.setVar('pkg_postrm:%s' % pkg, postrm)
-}
-
-gdkpixbuf_complete() {
-GDK_PIXBUF_FATAL_LOADER=1 ${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders --update-cache || exit 1
-}
-
-DEPENDS:append:class-native = " gdk-pixbuf-native"
-SYSROOT_PREPROCESS_FUNCS:append:class-native = " pixbufcache_sstate_postinst"
-
-pixbufcache_sstate_postinst() {
-	mkdir -p ${SYSROOT_DESTDIR}${bindir}
-	dest=${SYSROOT_DESTDIR}${bindir}/postinst-${PN}
-        echo '#!/bin/sh' > $dest
-	echo "${gdkpixbuf_complete}" >> $dest
-	chmod 0755 $dest
-}
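
populate_packages:append above appends the pixbufcache_common scriptlet to each listed package's pkg_postinst/pkg_postrm, creating a minimal '#!/bin/sh' header when no scriptlet exists yet. The string handling boils down to the following sketch (plain Python; the scriptlet bodies are placeholders):

    def append_scriptlet(existing, scriptlet):
        # Start from a minimal shell header if no scriptlet exists yet,
        # then append the shared fragment.
        return (existing or "#!/bin/sh\n") + scriptlet

    print(append_scriptlet(None, "echo updating pixbuf cache\n"))
    print(append_scriptlet("#!/bin/sh\necho existing\n", "echo updating pixbuf cache\n"))
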
diff --git a/poky/meta/classes/pkgconfig.bbclass b/poky/meta/classes/pkgconfig.bbclass
deleted file mode 100644
index fa94527..0000000
--- a/poky/meta/classes/pkgconfig.bbclass
+++ /dev/null
@@ -1,2 +0,0 @@
-DEPENDS:prepend = "pkgconfig-native "
-
diff --git a/poky/meta/classes/populate_sdk.bbclass b/poky/meta/classes/populate_sdk.bbclass
deleted file mode 100644
index f64a911..0000000
--- a/poky/meta/classes/populate_sdk.bbclass
+++ /dev/null
@@ -1,7 +0,0 @@
-# The majority of populate_sdk is located in populate_sdk_base
-# This chunk simply facilitates compatibility with SDK only recipes.
-
-inherit populate_sdk_base
-
-addtask populate_sdk after do_install before do_build
-
diff --git a/poky/meta/classes/populate_sdk_base.bbclass b/poky/meta/classes/populate_sdk_base.bbclass
deleted file mode 100644
index f260217..0000000
--- a/poky/meta/classes/populate_sdk_base.bbclass
+++ /dev/null
@@ -1,376 +0,0 @@
-PACKAGES = ""
-
-inherit image-postinst-intercepts image-artifact-names
-
-# Wildcards specifying complementary packages to install for every package that has been explicitly
-# installed into the rootfs
-COMPLEMENTARY_GLOB[dev-pkgs] = '*-dev'
-COMPLEMENTARY_GLOB[staticdev-pkgs] = '*-staticdev'
-COMPLEMENTARY_GLOB[doc-pkgs] = '*-doc'
-COMPLEMENTARY_GLOB[dbg-pkgs] = '*-dbg'
-COMPLEMENTARY_GLOB[src-pkgs] = '*-src'
-COMPLEMENTARY_GLOB[ptest-pkgs] = '*-ptest'
-COMPLEMENTARY_GLOB[bash-completion-pkgs] = '*-bash-completion'
-
-def complementary_globs(featurevar, d):
-    all_globs = d.getVarFlags('COMPLEMENTARY_GLOB')
-    globs = []
-    features = set((d.getVar(featurevar) or '').split())
-    for name, glob in all_globs.items():
-        if name in features:
-            globs.append(glob)
-    return ' '.join(globs)
-
-SDKIMAGE_FEATURES ??= "dev-pkgs dbg-pkgs src-pkgs ${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', 'doc-pkgs', '', d)}"
-SDKIMAGE_INSTALL_COMPLEMENTARY = '${@complementary_globs("SDKIMAGE_FEATURES", d)}'
-SDKIMAGE_INSTALL_COMPLEMENTARY[vardeps] += "SDKIMAGE_FEATURES"
-
-PACKAGE_ARCHS:append:task-populate-sdk = " sdk-provides-dummy-target"
-SDK_PACKAGE_ARCHS += "sdk-provides-dummy-${SDKPKGSUFFIX}"
-
-# List of locales to install, or "all" for all of them, or unset for none.
-SDKIMAGE_LINGUAS ?= "all"
-
-inherit rootfs_${IMAGE_PKGTYPE}
-
-SDK_DIR = "${WORKDIR}/sdk"
-SDK_OUTPUT = "${SDK_DIR}/image"
-SDK_DEPLOY = "${DEPLOY_DIR}/sdk"
-
-SDKDEPLOYDIR = "${WORKDIR}/${SDKMACHINE}-deploy-${PN}-populate-sdk"
-
-B:task-populate-sdk = "${SDK_DIR}"
-
-SDKTARGETSYSROOT = "${SDKPATH}/sysroots/${REAL_MULTIMACH_TARGET_SYS}"
-
-SDK_TOOLCHAIN_LANGS ??= ""
-SDK_TOOLCHAIN_LANGS:remove:sdkmingw32 = "rust"
-
-TOOLCHAIN_HOST_TASK ?= " \
-    nativesdk-packagegroup-sdk-host \
-    packagegroup-cross-canadian-${MACHINE} \
-    ${@bb.utils.contains('SDK_TOOLCHAIN_LANGS', 'go', 'packagegroup-go-cross-canadian-${MACHINE}', '', d)} \
-    ${@bb.utils.contains('SDK_TOOLCHAIN_LANGS', 'rust', 'packagegroup-rust-cross-canadian-${MACHINE}', '', d)} \
-"
-TOOLCHAIN_HOST_TASK_ATTEMPTONLY ?= ""
-TOOLCHAIN_TARGET_TASK ?= " \
-    ${@multilib_pkg_extend(d, 'packagegroup-core-standalone-sdk-target')} \
-    ${@bb.utils.contains('SDK_TOOLCHAIN_LANGS', 'go', multilib_pkg_extend(d, 'packagegroup-go-sdk-target'), '', d)} \
-    ${@bb.utils.contains('SDK_TOOLCHAIN_LANGS', 'rust', multilib_pkg_extend(d, 'libstd-rs'), '', d)} \
-    target-sdk-provides-dummy \
-"
-TOOLCHAIN_TARGET_TASK_ATTEMPTONLY ?= ""
-TOOLCHAIN_OUTPUTNAME ?= "${SDK_NAME}-toolchain-${SDK_VERSION}"
-
-# Default archived SDK's suffix
-SDK_ARCHIVE_TYPE ?= "tar.xz"
-SDK_XZ_COMPRESSION_LEVEL ?= "-9"
-SDK_XZ_OPTIONS ?= "${XZ_DEFAULTS} ${SDK_XZ_COMPRESSION_LEVEL}"
-
-# Support different SDK archive types according to SDK_ARCHIVE_TYPE; zip and tar.xz are currently supported
-python () {
-    if d.getVar('SDK_ARCHIVE_TYPE') == 'zip':
-       d.setVar('SDK_ARCHIVE_DEPENDS', 'zip-native')
-       # SDK_ARCHIVE_CMD is used to generate the archived sdk ${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE} from the input dir ${SDK_OUTPUT}/${SDKPATH} into the output dir ${SDKDEPLOYDIR}.
-       # It is recommended to cd into the input dir first to avoid archiving build paths.
-       d.setVar('SDK_ARCHIVE_CMD', 'cd ${SDK_OUTPUT}/${SDKPATH}; zip -r -y ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE} .')
-    else:
-       d.setVar('SDK_ARCHIVE_DEPENDS', 'xz-native')
-       d.setVar('SDK_ARCHIVE_CMD', 'cd ${SDK_OUTPUT}/${SDKPATH}; tar ${SDKTAROPTS} -cf - . | xz ${SDK_XZ_OPTIONS} > ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE}')
-}
-
-SDK_RDEPENDS = "${TOOLCHAIN_TARGET_TASK} ${TOOLCHAIN_HOST_TASK}"
-SDK_DEPENDS = "virtual/fakeroot-native ${SDK_ARCHIVE_DEPENDS} cross-localedef-native nativesdk-qemuwrapper-cross ${@' '.join(["%s-qemuwrapper-cross" % m for m in d.getVar("MULTILIB_VARIANTS").split()])} qemuwrapper-cross"
-PATH:prepend = "${WORKDIR}/recipe-sysroot/${SDKPATHNATIVE}${bindir}/crossscripts:${@":".join(all_multilib_tune_values(d, 'STAGING_BINDIR_CROSS').split())}:"
-SDK_DEPENDS += "nativesdk-glibc-locale"
-
-# We want the MULTIMACH_TARGET_SYS to point to the TUNE_PKGARCH, not PACKAGE_ARCH as it
-# could be set to the MACHINE_ARCH
-REAL_MULTIMACH_TARGET_SYS = "${TUNE_PKGARCH}${TARGET_VENDOR}-${TARGET_OS}"
-
-PID = "${@os.getpid()}"
-
-EXCLUDE_FROM_WORLD = "1"
-
-SDK_PACKAGING_FUNC ?= "create_shar"
-SDK_PRE_INSTALL_COMMAND ?= ""
-SDK_POST_INSTALL_COMMAND ?= ""
-SDK_RELOCATE_AFTER_INSTALL ?= "1"
-
-SDKEXTPATH ??= "~/${@d.getVar('DISTRO')}_sdk"
-SDK_TITLE ??= "${@d.getVar('DISTRO_NAME') or d.getVar('DISTRO')} SDK"
-
-SDK_TARGET_MANIFEST = "${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.target.manifest"
-SDK_HOST_MANIFEST = "${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.host.manifest"
-SDK_EXT_TARGET_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.target.manifest"
-SDK_EXT_HOST_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.host.manifest"
-
-SDK_PRUNE_SYSROOT_DIRS ?= "/dev"
-
-python write_target_sdk_manifest () {
-    from oe.sdk import sdk_list_installed_packages
-    from oe.utils import format_pkg_list
-    sdkmanifestdir = os.path.dirname(d.getVar("SDK_TARGET_MANIFEST"))
-    pkgs = sdk_list_installed_packages(d, True)
-    if not os.path.exists(sdkmanifestdir):
-        bb.utils.mkdirhier(sdkmanifestdir)
-    with open(d.getVar('SDK_TARGET_MANIFEST'), 'w') as output:
-        output.write(format_pkg_list(pkgs, 'ver'))
-}
-
-sdk_prune_dirs () {
-    for d in ${SDK_PRUNE_SYSROOT_DIRS}; do
-        rm -rf ${SDK_OUTPUT}${SDKTARGETSYSROOT}$d
-    done
-}
-
-python write_sdk_test_data() {
-    from oe.data import export2json
-    testdata = "%s/%s.testdata.json" % (d.getVar('SDKDEPLOYDIR'), d.getVar('TOOLCHAIN_OUTPUTNAME'))
-    bb.utils.mkdirhier(os.path.dirname(testdata))
-    export2json(d, testdata)
-}
-
-python write_host_sdk_manifest () {
-    from oe.sdk import sdk_list_installed_packages
-    from oe.utils import format_pkg_list
-    sdkmanifestdir = os.path.dirname(d.getVar("SDK_HOST_MANIFEST"))
-    pkgs = sdk_list_installed_packages(d, False)
-    if not os.path.exists(sdkmanifestdir):
-        bb.utils.mkdirhier(sdkmanifestdir)
-    with open(d.getVar('SDK_HOST_MANIFEST'), 'w') as output:
-        output.write(format_pkg_list(pkgs, 'ver'))
-}
-
-POPULATE_SDK_POST_TARGET_COMMAND:append = " write_sdk_test_data ; "
-POPULATE_SDK_POST_TARGET_COMMAND:append:task-populate-sdk  = " write_target_sdk_manifest; sdk_prune_dirs; "
-POPULATE_SDK_POST_HOST_COMMAND:append:task-populate-sdk = " write_host_sdk_manifest; "
-
-SDK_PACKAGING_COMMAND = "${@'${SDK_PACKAGING_FUNC};' if '${SDK_PACKAGING_FUNC}' else ''}"
-SDK_POSTPROCESS_COMMAND = " create_sdk_files; check_sdk_sysroots; archive_sdk; ${SDK_PACKAGING_COMMAND} "
-
-def populate_sdk_common(d):
-    from oe.sdk import populate_sdk
-    from oe.manifest import create_manifest, Manifest
-
-    # Handle package exclusions
-    excl_pkgs = (d.getVar("PACKAGE_EXCLUDE") or "").split()
-    inst_pkgs = (d.getVar("PACKAGE_INSTALL") or "").split()
-    inst_attempt_pkgs = (d.getVar("PACKAGE_INSTALL_ATTEMPTONLY") or "").split()
-
-    d.setVar('PACKAGE_INSTALL_ORIG', ' '.join(inst_pkgs))
-    d.setVar('PACKAGE_INSTALL_ATTEMPTONLY', ' '.join(inst_attempt_pkgs))
-
-    for pkg in excl_pkgs:
-        if pkg in inst_pkgs:
-            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_pkgs))
-            inst_pkgs.remove(pkg)
-
-        if pkg in inst_attempt_pkgs:
-            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL_ATTEMPTONLY (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_attempt_pkgs))
-            inst_attempt_pkgs.remove(pkg)
-
-    d.setVar("PACKAGE_INSTALL", ' '.join(inst_pkgs))
-    d.setVar("PACKAGE_INSTALL_ATTEMPTONLY", ' '.join(inst_attempt_pkgs))
-
-    pn = d.getVar('PN')
-    runtime_mapping_rename("TOOLCHAIN_TARGET_TASK", pn, d)
-    runtime_mapping_rename("TOOLCHAIN_TARGET_TASK_ATTEMPTONLY", pn, d)
-
-    ld = bb.data.createCopy(d)
-    ld.setVar("PKGDATA_DIR", "${STAGING_DIR}/${SDK_ARCH}-${SDKPKGSUFFIX}${SDK_VENDOR}-${SDK_OS}/pkgdata")
-    runtime_mapping_rename("TOOLCHAIN_HOST_TASK", pn, ld)
-    runtime_mapping_rename("TOOLCHAIN_HOST_TASK_ATTEMPTONLY", pn, ld)
-    d.setVar("TOOLCHAIN_HOST_TASK", ld.getVar("TOOLCHAIN_HOST_TASK"))
-    d.setVar("TOOLCHAIN_HOST_TASK_ATTEMPTONLY", ld.getVar("TOOLCHAIN_HOST_TASK_ATTEMPTONLY"))
-    
-    # create target/host SDK manifests
-    create_manifest(d, manifest_dir=d.getVar('SDK_DIR'),
-                    manifest_type=Manifest.MANIFEST_TYPE_SDK_HOST)
-    create_manifest(d, manifest_dir=d.getVar('SDK_DIR'),
-                    manifest_type=Manifest.MANIFEST_TYPE_SDK_TARGET)
-
-    populate_sdk(d)
-
-fakeroot python do_populate_sdk() {
-    populate_sdk_common(d)
-}
-SSTATETASKS += "do_populate_sdk"
-SSTATE_SKIP_CREATION:task-populate-sdk = '1'
-do_populate_sdk[cleandirs] = "${SDKDEPLOYDIR}"
-do_populate_sdk[sstate-inputdirs] = "${SDKDEPLOYDIR}"
-do_populate_sdk[sstate-outputdirs] = "${SDK_DEPLOY}"
-do_populate_sdk[stamp-extra-info] = "${MACHINE_ARCH}${SDKMACHINE}"
-python do_populate_sdk_setscene () {
-    sstate_setscene(d)
-}
-addtask do_populate_sdk_setscene
-
-PSEUDO_IGNORE_PATHS .= ",${SDKDEPLOYDIR},${WORKDIR}/oe-sdk-repo,${WORKDIR}/sstate-build-populate_sdk"
-
-fakeroot create_sdk_files() {
-	cp ${COREBASE}/scripts/relocate_sdk.py ${SDK_OUTPUT}/${SDKPATH}/
-
-	# Replace the ##DEFAULT_INSTALL_DIR## with the correct pattern.
-	# Escape special characters like '+' and '.' in the SDKPATH
-	escaped_sdkpath=$(echo ${SDKPATH} |sed -e "s:[\+\.]:\\\\\\\\\0:g")
-	sed -i -e "s:##DEFAULT_INSTALL_DIR##:$escaped_sdkpath:" ${SDK_OUTPUT}/${SDKPATH}/relocate_sdk.py
-
-       mkdir -p ${SDK_OUTPUT}/${SDKPATHNATIVE}${sysconfdir}/
-       echo '${SDKPATHNATIVE}${libdir_nativesdk}
-${SDKPATHNATIVE}${base_libdir_nativesdk}
-include /etc/ld.so.conf' > ${SDK_OUTPUT}/${SDKPATHNATIVE}${sysconfdir}/ld.so.conf
-}
-
-python check_sdk_sysroots() {
-    # Fails build if there are broken or dangling symlinks in SDK sysroots
-
-    if d.getVar('CHECK_SDK_SYSROOTS') != '1':
-        # disabled, bail out
-        return
-
-    def norm_path(path):
-        return os.path.abspath(path)
-
-    # Get scan root
-    SCAN_ROOT = norm_path("%s/%s/sysroots/" % (d.getVar('SDK_OUTPUT'),
-                                               d.getVar('SDKPATH')))
-
-    bb.note('Checking SDK sysroots at ' + SCAN_ROOT)
-
-    def check_symlink(linkPath):
-        if not os.path.islink(linkPath):
-            return
-
-        linkDirPath = os.path.dirname(linkPath)
-
-        targetPath = os.readlink(linkPath)
-        if not os.path.isabs(targetPath):
-            targetPath = os.path.join(linkDirPath, targetPath)
-        targetPath = norm_path(targetPath)
-
-        if SCAN_ROOT != os.path.commonprefix( [SCAN_ROOT, targetPath] ):
-            bb.error("Escaping symlink {0!s} --> {1!s}".format(linkPath, targetPath))
-            return
-
-        if not os.path.exists(targetPath):
-            bb.error("Broken symlink {0!s} --> {1!s}".format(linkPath, targetPath))
-            return
-
-        if os.path.isdir(targetPath):
-            dir_walk(targetPath)
-
-    def walk_error_handler(e):
-        bb.error(str(e))
-
-    def dir_walk(rootDir):
-        for dirPath,subDirEntries,fileEntries in os.walk(rootDir, followlinks=False, onerror=walk_error_handler):
-            entries = subDirEntries + fileEntries
-            for e in entries:
-                ePath = os.path.join(dirPath, e)
-                check_symlink(ePath)
-
-    # start
-    dir_walk(SCAN_ROOT)
-}
-
-SDKTAROPTS = "--owner=root --group=root"
-
-fakeroot archive_sdk() {
-	# Package it up
-	mkdir -p ${SDKDEPLOYDIR}
-	${SDK_ARCHIVE_CMD}
-}
-
-TOOLCHAIN_SHAR_EXT_TMPL ?= "${COREBASE}/meta/files/toolchain-shar-extract.sh"
-TOOLCHAIN_SHAR_REL_TMPL ?= "${COREBASE}/meta/files/toolchain-shar-relocate.sh"
-
-fakeroot create_shar() {
-	# copy in the template shar extractor script
-	cp ${TOOLCHAIN_SHAR_EXT_TMPL} ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
-
-	rm -f ${T}/pre_install_command ${T}/post_install_command
-
-	if [ "${SDK_RELOCATE_AFTER_INSTALL}" = "1" ] ; then
-		cp ${TOOLCHAIN_SHAR_REL_TMPL} ${T}/post_install_command
-	fi
-	cat << "EOF" >> ${T}/pre_install_command
-${SDK_PRE_INSTALL_COMMAND}
-EOF
-
-	cat << "EOF" >> ${T}/post_install_command
-${SDK_POST_INSTALL_COMMAND}
-EOF
-	sed -i -e '/@SDK_PRE_INSTALL_COMMAND@/r ${T}/pre_install_command' \
-		-e '/@SDK_POST_INSTALL_COMMAND@/r ${T}/post_install_command' \
-		${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
-
-	# substitute variables
-	sed -i -e 's#@SDK_ARCH@#${SDK_ARCH}#g' \
-		-e 's#@SDKPATH@#${SDKPATH}#g' \
-		-e 's#@SDKPATHINSTALL@#${SDKPATHINSTALL}#g' \
-		-e 's#@SDKEXTPATH@#${SDKEXTPATH}#g' \
-		-e 's#@OLDEST_KERNEL@#${SDK_OLDEST_KERNEL}#g' \
-		-e 's#@REAL_MULTIMACH_TARGET_SYS@#${REAL_MULTIMACH_TARGET_SYS}#g' \
-		-e 's#@SDK_TITLE@#${@d.getVar("SDK_TITLE").replace('&', '\\&')}#g' \
-		-e 's#@SDK_VERSION@#${SDK_VERSION}#g' \
-		-e '/@SDK_PRE_INSTALL_COMMAND@/d' \
-		-e '/@SDK_POST_INSTALL_COMMAND@/d' \
-		-e 's#@SDK_GCC_VER@#${@oe.utils.host_gcc_version(d, taskcontextonly=True)}#g' \
-		-e 's#@SDK_ARCHIVE_TYPE@#${SDK_ARCHIVE_TYPE}#g' \
-		${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
-
-	# add execution permission
-	chmod +x ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
-
-	# append the SDK tarball
-	cat ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE} >> ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.sh
-
-	# delete the old tarball, we don't need it anymore
-	rm ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE}
-}
-
-populate_sdk_log_check() {
-	for target in $*
-	do
-		lf_path="`dirname ${BB_LOGFILE}`/log.do_$target.${PID}"
-
-		echo "log_check: Using $lf_path as logfile"
-
-		if [ -e "$lf_path" ]; then
-			${IMAGE_PKGTYPE}_log_check $target $lf_path
-		else
-			echo "Cannot find logfile [$lf_path]"
-		fi
-		echo "Logfile is clean"
-	done
-}
-
-def sdk_command_variables(d):
-    return ['OPKG_PREPROCESS_COMMANDS','OPKG_POSTPROCESS_COMMANDS','POPULATE_SDK_POST_HOST_COMMAND','POPULATE_SDK_PRE_TARGET_COMMAND','POPULATE_SDK_POST_TARGET_COMMAND','SDK_POSTPROCESS_COMMAND','RPM_PREPROCESS_COMMANDS','RPM_POSTPROCESS_COMMANDS']
-
-def sdk_variables(d):
-    variables = ['BUILD_IMAGES_FROM_FEEDS','SDK_OS','SDK_OUTPUT','SDKPATHNATIVE','SDKTARGETSYSROOT','SDK_DIR','SDK_VENDOR','SDKIMAGE_INSTALL_COMPLEMENTARY','SDK_PACKAGE_ARCHS','SDK_OUTPUT',
-                 'SDKTARGETSYSROOT','MULTILIB_VARIANTS','MULTILIBS','ALL_MULTILIB_PACKAGE_ARCHS','MULTILIB_GLOBAL_VARIANTS','BAD_RECOMMENDATIONS','NO_RECOMMENDATIONS','PACKAGE_ARCHS',
-                 'PACKAGE_CLASSES','TARGET_VENDOR','TARGET_VENDOR','TARGET_ARCH','TARGET_OS','BBEXTENDVARIANT','FEED_DEPLOYDIR_BASE_URI', 'PACKAGE_EXCLUDE_COMPLEMENTARY', 'IMAGE_INSTALL_DEBUGFS']
-    variables.extend(sdk_command_variables(d))
-    return " ".join(variables)
-
-do_populate_sdk[vardeps] += "${@sdk_variables(d)}"
-
-python () {
-    variables = sdk_command_variables(d)
-    for var in variables:
-        if d.getVar(var, False):
-            d.setVarFlag(var, 'func', '1')
-}
-
-do_populate_sdk[file-checksums] += "${TOOLCHAIN_SHAR_REL_TMPL}:True \
-                                    ${TOOLCHAIN_SHAR_EXT_TMPL}:True"
-
-do_populate_sdk[dirs] = "${PKGDATA_DIR} ${TOPDIR}"
-do_populate_sdk[depends] += "${@' '.join([x + ':do_populate_sysroot' for x in d.getVar('SDK_DEPENDS').split()])}  ${@d.getVarFlag('do_rootfs', 'depends', False)}"
-do_populate_sdk[rdepends] = "${@' '.join([x + ':do_package_write_${IMAGE_PKGTYPE} ' + x + ':do_packagedata' for x in d.getVar('SDK_RDEPENDS').split()])}"
-do_populate_sdk[recrdeptask] += "do_packagedata do_package_write_rpm do_package_write_ipk do_package_write_deb"
-do_populate_sdk[file-checksums] += "${POSTINST_INTERCEPT_CHECKSUMS}"
-addtask populate_sdk
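
In populate_sdk_base.bbclass above, SDKIMAGE_INSTALL_COMPLEMENTARY is built by complementary_globs(), which maps the enabled SDKIMAGE_FEATURES onto the COMPLEMENTARY_GLOB varflags. A standalone sketch with the varflag lookup replaced by a plain dict (glob values copied from the class; the feature list is an example):

    def complementary_globs(features, all_globs):
        # Return the globs whose feature name appears in the enabled feature set.
        enabled = set(features.split())
        return " ".join(glob for name, glob in all_globs.items() if name in enabled)

    globs = {
        "dev-pkgs": "*-dev",
        "staticdev-pkgs": "*-staticdev",
        "doc-pkgs": "*-doc",
        "dbg-pkgs": "*-dbg",
        "src-pkgs": "*-src",
    }
    print(complementary_globs("dev-pkgs dbg-pkgs src-pkgs", globs))
    # -> "*-dev *-dbg *-src"
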
diff --git a/poky/meta/classes/populate_sdk_ext.bbclass b/poky/meta/classes/populate_sdk_ext.bbclass
deleted file mode 100644
index d58b2ba..0000000
--- a/poky/meta/classes/populate_sdk_ext.bbclass
+++ /dev/null
@@ -1,836 +0,0 @@
-# Extensible SDK
-
-inherit populate_sdk_base
-
-# Used to override TOOLCHAIN_HOST_TASK in the eSDK case
-TOOLCHAIN_HOST_TASK_ESDK = " \
-    meta-environment-extsdk-${MACHINE} \
-    "
-
-SDK_RELOCATE_AFTER_INSTALL:task-populate-sdk-ext = "0"
-
-SDK_EXT = ""
-SDK_EXT:task-populate-sdk-ext = "-ext"
-
-# Options are full or minimal
-SDK_EXT_TYPE ?= "full"
-SDK_INCLUDE_PKGDATA ?= "0"
-SDK_INCLUDE_TOOLCHAIN ?= "${@'1' if d.getVar('SDK_EXT_TYPE') == 'full' else '0'}"
-SDK_INCLUDE_NATIVESDK ?= "0"
-SDK_INCLUDE_BUILDTOOLS ?= '1'
-
-SDK_RECRDEP_TASKS ?= ""
-SDK_CUSTOM_TEMPLATECONF ?= "0"
-
-ESDK_LOCALCONF_ALLOW ?= ""
-ESDK_LOCALCONF_REMOVE ?= "CONF_VERSION \
-                             BB_NUMBER_THREADS \
-                             BB_NUMBER_PARSE_THREADS \
-                             PARALLEL_MAKE \
-                             PRSERV_HOST \
-                             SSTATE_MIRRORS \
-                             DL_DIR \
-                             SSTATE_DIR \
-                             TMPDIR \
-                             BB_SERVER_TIMEOUT \
-                            "
-ESDK_CLASS_INHERIT_DISABLE ?= "buildhistory icecc"
-SDK_UPDATE_URL ?= ""
-
-SDK_TARGETS ?= "${PN}"
-
-def get_sdk_install_targets(d, images_only=False):
-    sdk_install_targets = ''
-    if images_only or d.getVar('SDK_EXT_TYPE') != 'minimal':
-        sdk_install_targets = d.getVar('SDK_TARGETS')
-
-        depd = d.getVar('BB_TASKDEPDATA', False)
-        tasklist = bb.build.tasksbetween('do_image_complete', 'do_build', d)
-        tasklist.remove('do_build')
-        for v in depd.values():
-            if v[1] in tasklist:
-                if v[0] not in sdk_install_targets:
-                    sdk_install_targets += ' {}'.format(v[0])
-
-    if not images_only:
-        if d.getVar('SDK_INCLUDE_PKGDATA') == '1':
-            sdk_install_targets += ' meta-world-pkgdata:do_allpackagedata'
-        if d.getVar('SDK_INCLUDE_TOOLCHAIN') == '1':
-            sdk_install_targets += ' meta-extsdk-toolchain:do_populate_sysroot'
-
-    return sdk_install_targets
-
-get_sdk_install_targets[vardepsexclude] = "BB_TASKDEPDATA"
-
-OE_INIT_ENV_SCRIPT ?= "oe-init-build-env"
-
-# The files from COREBASE that you want preserved in the copy of COREBASE
-# placed into the sdk. This allows someone's own setup scripts in COREBASE,
-# as well as untracked files, to be preserved.
-COREBASE_FILES ?= " \
-    oe-init-build-env \
-    scripts \
-    LICENSE \
-    .templateconf \
-"
-
-SDK_DIR:task-populate-sdk-ext = "${WORKDIR}/sdk-ext"
-B:task-populate-sdk-ext = "${SDK_DIR}"
-TOOLCHAINEXT_OUTPUTNAME ?= "${SDK_NAME}-toolchain-ext-${SDK_VERSION}"
-TOOLCHAIN_OUTPUTNAME:task-populate-sdk-ext = "${TOOLCHAINEXT_OUTPUTNAME}"
-
-SDK_EXT_TARGET_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.target.manifest"
-SDK_EXT_HOST_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.host.manifest"
-
-python write_target_sdk_ext_manifest () {
-    from oe.sdk import get_extra_sdkinfo
-    sstate_dir = d.expand('${SDK_OUTPUT}/${SDKPATH}/sstate-cache')
-    extra_info = get_extra_sdkinfo(sstate_dir)
-
-    target = d.getVar('TARGET_SYS')
-    target_multimach = d.getVar('MULTIMACH_TARGET_SYS')
-    real_target_multimach = d.getVar('REAL_MULTIMACH_TARGET_SYS')
-
-    pkgs = {}
-    os.makedirs(os.path.dirname(d.getVar('SDK_EXT_TARGET_MANIFEST')), exist_ok=True)
-    with open(d.getVar('SDK_EXT_TARGET_MANIFEST'), 'w') as f:
-        for fn in extra_info['filesizes']:
-            info = fn.split(':')
-            if info[2] in (target, target_multimach, real_target_multimach) \
-                    or info[5] == 'allarch':
-                if not info[1] in pkgs:
-                    f.write("%s %s %s\n" % (info[1], info[2], info[3]))
-                    pkgs[info[1]] = {}
-}
-python write_host_sdk_ext_manifest () {
-    from oe.sdk import get_extra_sdkinfo
-    sstate_dir = d.expand('${SDK_OUTPUT}/${SDKPATH}/sstate-cache')
-    extra_info = get_extra_sdkinfo(sstate_dir)
-    host = d.getVar('BUILD_SYS')
-    with open(d.getVar('SDK_EXT_HOST_MANIFEST'), 'w') as f:
-        for fn in extra_info['filesizes']:
-            info = fn.split(':')
-            if info[2] == host:
-                f.write("%s %s %s\n" % (info[1], info[2], info[3]))
-}
-
-SDK_POSTPROCESS_COMMAND:append:task-populate-sdk-ext = "write_target_sdk_ext_manifest; write_host_sdk_ext_manifest; "    
-
-SDK_TITLE:task-populate-sdk-ext = "${@d.getVar('DISTRO_NAME') or d.getVar('DISTRO')} Extensible SDK"
-
-def clean_esdk_builddir(d, sdkbasepath):
-    """Clean up traces of the fake build for create_filtered_tasklist()"""
-    import shutil
-    cleanpaths = ['cache', 'tmp']
-    for pth in cleanpaths:
-        fullpth = os.path.join(sdkbasepath, pth)
-        if os.path.isdir(fullpth):
-            shutil.rmtree(fullpth)
-        elif os.path.isfile(fullpth):
-            os.remove(fullpth)
-
-def create_filtered_tasklist(d, sdkbasepath, tasklistfile, conf_initpath):
-    """
-    Create a filtered list of tasks. Also double-checks that the build system
-    within the SDK basically works and required sstate artifacts are available.
-    """
-    import tempfile
-    import shutil
-    import oe.copy_buildsystem
-
-    # Create a temporary build directory that we can pass to the env setup script
-    shutil.copyfile(sdkbasepath + '/conf/local.conf', sdkbasepath + '/conf/local.conf.bak')
-    try:
-        with open(sdkbasepath + '/conf/local.conf', 'a') as f:
-            # Force the use of sstate from the build system
-            f.write('\nSSTATE_DIR:forcevariable = "%s"\n' % d.getVar('SSTATE_DIR'))
-            f.write('SSTATE_MIRRORS:forcevariable = "file://universal/(.*) file://universal-4.9/\\1 file://universal-4.9/(.*) file://universal-4.8/\\1"\n')
-            # Ensure TMPDIR is the default so that clean_esdk_builddir() can delete it
-            f.write('TMPDIR:forcevariable = "${TOPDIR}/tmp"\n')
-            f.write('TCLIBCAPPEND:forcevariable = ""\n')
-            # Drop uninative if the build isn't using it (or else NATIVELSBSTRING will
-            # be different and we won't be able to find our native sstate)
-            if not bb.data.inherits_class('uninative', d):
-                f.write('INHERIT:remove = "uninative"\n')
-
-        # Unfortunately the default SDKPATH (or even a custom value) may contain characters that bitbake
-        # will not allow in its COREBASE path, so we need to rename the directory temporarily
-        temp_sdkbasepath = d.getVar('SDK_OUTPUT') + '/tmp-renamed-sdk'
-        # Delete any existing temp dir
-        try:
-            shutil.rmtree(temp_sdkbasepath)
-        except FileNotFoundError:
-            pass
-        bb.utils.rename(sdkbasepath, temp_sdkbasepath)
-        cmdprefix = '. %s .; ' % conf_initpath
-        logfile = d.getVar('WORKDIR') + '/tasklist_bb_log.txt'
-        try:
-            oe.copy_buildsystem.check_sstate_task_list(d, get_sdk_install_targets(d), tasklistfile, cmdprefix=cmdprefix, cwd=temp_sdkbasepath, logfile=logfile)
-        except bb.process.ExecutionError as e:
-            msg = 'Failed to generate filtered task list for extensible SDK:\n%s' %  e.stdout.rstrip()
-            if 'attempted to execute unexpectedly and should have been setscened' in e.stdout:
-                msg += '\n----------\n\nNOTE: "attempted to execute unexpectedly and should have been setscened" errors indicate this may be caused by missing sstate artifacts that were likely produced in earlier builds, but have been subsequently deleted for some reason.\n'
-            bb.fatal(msg)
-        bb.utils.rename(temp_sdkbasepath, sdkbasepath)
-        # Clean out residue of running bitbake, which check_sstate_task_list()
-        # will effectively do
-        clean_esdk_builddir(d, sdkbasepath)
-    finally:
-        localconf = sdkbasepath + '/conf/local.conf'
-        if os.path.exists(localconf + '.bak'):
-            os.replace(localconf + '.bak', localconf)
-
-python copy_buildsystem () {
-    import re
-    import shutil
-    import glob
-    import oe.copy_buildsystem
-
-    oe_init_env_script = d.getVar('OE_INIT_ENV_SCRIPT')
-
-    conf_bbpath = ''
-    conf_initpath = ''
-    core_meta_subdir = ''
-
-    # Copy in all metadata layers + bitbake (as repositories)
-    buildsystem = oe.copy_buildsystem.BuildSystem('extensible SDK', d)
-    baseoutpath = d.getVar('SDK_OUTPUT') + '/' + d.getVar('SDKPATH')
-
-    # check if a custom templateconf path is set
-    use_custom_templateconf = d.getVar('SDK_CUSTOM_TEMPLATECONF')
-
-    # Determine if we're building a derivative extensible SDK (from devtool build-sdk)
-    derivative = (d.getVar('SDK_DERIVATIVE') or '') == '1'
-    if derivative:
-        workspace_name = 'orig-workspace'
-    else:
-        workspace_name = None
-
-    corebase, sdkbblayers = buildsystem.copy_bitbake_and_layers(baseoutpath + '/layers', workspace_name)
-    conf_bbpath = os.path.join('layers', corebase, 'bitbake')
-
-    for path in os.listdir(baseoutpath + '/layers'):
-        relpath = os.path.join('layers', path, oe_init_env_script)
-        if os.path.exists(os.path.join(baseoutpath, relpath)):
-            conf_initpath = relpath
-
-        relpath = os.path.join('layers', path, 'scripts', 'devtool')
-        if os.path.exists(os.path.join(baseoutpath, relpath)):
-            scriptrelpath = os.path.dirname(relpath)
-
-        relpath = os.path.join('layers', path, 'meta')
-        if os.path.exists(os.path.join(baseoutpath, relpath, 'lib', 'oe')):
-            core_meta_subdir = relpath
-
-    d.setVar('oe_init_build_env_path', conf_initpath)
-    d.setVar('scriptrelpath', scriptrelpath)
-
-    # Write out config file for devtool
-    import configparser
-    config = configparser.SafeConfigParser()
-    config.add_section('General')
-    config.set('General', 'bitbake_subdir', conf_bbpath)
-    config.set('General', 'init_path', conf_initpath)
-    config.set('General', 'core_meta_subdir', core_meta_subdir)
-    config.add_section('SDK')
-    config.set('SDK', 'sdk_targets', d.getVar('SDK_TARGETS'))
-    updateurl = d.getVar('SDK_UPDATE_URL')
-    if updateurl:
-        config.set('SDK', 'updateserver', updateurl)
-    bb.utils.mkdirhier(os.path.join(baseoutpath, 'conf'))
-    with open(os.path.join(baseoutpath, 'conf', 'devtool.conf'), 'w') as f:
-        config.write(f)
-
-    unlockedsigs =  os.path.join(baseoutpath, 'conf', 'unlocked-sigs.inc')
-    with open(unlockedsigs, 'w') as f:
-        pass
-
-    # Create a layer for new recipes / appends
-    bbpath = d.getVar('BBPATH')
-    env = os.environ.copy()
-    env['PYTHONDONTWRITEBYTECODE'] = '1'
-    bb.process.run(['devtool', '--bbpath', bbpath, '--basepath', baseoutpath, 'create-workspace', '--create-only', os.path.join(baseoutpath, 'workspace')], env=env)
-
-    # Create bblayers.conf
-    bb.utils.mkdirhier(baseoutpath + '/conf')
-    with open(baseoutpath + '/conf/bblayers.conf', 'w') as f:
-        f.write('# WARNING: this configuration has been automatically generated and in\n')
-        f.write('# most cases should not be edited. If you need more flexibility than\n')
-        f.write('# this configuration provides, it is strongly suggested that you set\n')
-        f.write('# up a proper instance of the full build system and use that instead.\n\n')
-
-        # LCONF_VERSION may not be set, for example when using meta-poky
-        # so don't error if it isn't found
-        lconf_version = d.getVar('LCONF_VERSION', False)
-        if lconf_version is not None:
-            f.write('LCONF_VERSION = "%s"\n\n' % lconf_version)
-
-        f.write('BBPATH = "$' + '{TOPDIR}"\n')
-        f.write('SDKBASEMETAPATH = "$' + '{TOPDIR}"\n')
-        f.write('BBLAYERS := " \\\n')
-        for layerrelpath in sdkbblayers:
-            f.write('    $' + '{SDKBASEMETAPATH}/layers/%s \\\n' % layerrelpath)
-        f.write('    $' + '{SDKBASEMETAPATH}/workspace \\\n')
-        f.write('    "\n')
-
-    # Copy uninative tarball
-    # For now this is where uninative.bbclass expects the tarball
-    if bb.data.inherits_class('uninative', d):
-        uninative_file = d.expand('${UNINATIVE_DLDIR}/' + d.getVarFlag("UNINATIVE_CHECKSUM", d.getVar("BUILD_ARCH")) + '/${UNINATIVE_TARBALL}')
-        uninative_checksum = bb.utils.sha256_file(uninative_file)
-        uninative_outdir = '%s/downloads/uninative/%s' % (baseoutpath, uninative_checksum)
-        bb.utils.mkdirhier(uninative_outdir)
-        shutil.copy(uninative_file, uninative_outdir)
-
-    env_passthrough = (d.getVar('BB_ENV_PASSTHROUGH_ADDITIONS') or '').split()
-    env_passthrough_values = {}
-
-    # Create local.conf
-    builddir = d.getVar('TOPDIR')
-    if derivative and os.path.exists(builddir + '/conf/site.conf'):
-        shutil.copyfile(builddir + '/conf/site.conf', baseoutpath + '/conf/site.conf')
-    if derivative and os.path.exists(builddir + '/conf/auto.conf'):
-        shutil.copyfile(builddir + '/conf/auto.conf', baseoutpath + '/conf/auto.conf')
-    if derivative:
-        shutil.copyfile(builddir + '/conf/local.conf', baseoutpath + '/conf/local.conf')
-    else:
-        local_conf_allowed = (d.getVar('ESDK_LOCALCONF_ALLOW') or '').split()
-        local_conf_remove = (d.getVar('ESDK_LOCALCONF_REMOVE') or '').split()
-        def handle_var(varname, origvalue, op, newlines):
-            if varname in local_conf_remove or (origvalue.strip().startswith('/') and not varname in local_conf_allowed):
-                newlines.append('# Removed original setting of %s\n' % varname)
-                return None, op, 0, True
-            else:
-                if varname in env_passthrough:
-                    env_passthrough_values[varname] = origvalue
-                return origvalue, op, 0, True
-        varlist = ['[^#=+ ]*']
-        oldlines = []
-        if os.path.exists(builddir + '/conf/site.conf'):
-            with open(builddir + '/conf/site.conf', 'r') as f:
-                oldlines += f.readlines()
-        if os.path.exists(builddir + '/conf/auto.conf'):
-            with open(builddir + '/conf/auto.conf', 'r') as f:
-                oldlines += f.readlines()
-        if os.path.exists(builddir + '/conf/local.conf'):
-            with open(builddir + '/conf/local.conf', 'r') as f:
-                oldlines += f.readlines()
-        (updated, newlines) = bb.utils.edit_metadata(oldlines, varlist, handle_var)
-
-        with open(baseoutpath + '/conf/local.conf', 'w') as f:
-            f.write('# WARNING: this configuration has been automatically generated and in\n')
-            f.write('# most cases should not be edited. If you need more flexibility than\n')
-            f.write('# this configuration provides, it is strongly suggested that you set\n')
-            f.write('# up a proper instance of the full build system and use that instead.\n\n')
-            for line in newlines:
-                if line.strip() and not line.startswith('#'):
-                    f.write(line)
-            # Write a newline just in case there's none at the end of the original
-            f.write('\n')
-
-            f.write('TMPDIR = "${TOPDIR}/tmp"\n')
-            f.write('TCLIBCAPPEND = ""\n')
-            f.write('DL_DIR = "${TOPDIR}/downloads"\n')
-
-            if bb.data.inherits_class('uninative', d):
-               f.write('INHERIT += "%s"\n' % 'uninative')
-               f.write('UNINATIVE_CHECKSUM[%s] = "%s"\n\n' % (d.getVar('BUILD_ARCH'), uninative_checksum))
-            f.write('CONF_VERSION = "%s"\n\n' % d.getVar('CONF_VERSION', False))
-
-            # Some classes are not suitable for SDK, remove them from INHERIT
-            f.write('INHERIT:remove = "%s"\n' % d.getVar('ESDK_CLASS_INHERIT_DISABLE', False))
-
-            # Bypass the default connectivity check if any
-            f.write('CONNECTIVITY_CHECK_URIS = ""\n\n')
-
-            # This warning will come out if reverse dependencies for a task
-            # don't have sstate as well as the task itself. We already know
-            # this will be the case for the extensible sdk, so turn off the
-            # warning.
-            f.write('SIGGEN_LOCKEDSIGS_SSTATE_EXISTS_CHECK = "none"\n\n')
-
-            # Warn if the sigs in the locked-signature file don't match
-            # the sig computed from the metadata.
-            f.write('SIGGEN_LOCKEDSIGS_TASKSIG_CHECK = "warn"\n\n')
-
-            # We want to be able to set this without a full reparse
-            f.write('BB_HASHCONFIG_IGNORE_VARS:append = " SIGGEN_UNLOCKED_RECIPES"\n\n')
-
-            # Set up which tasks are ignored for run on install
-            f.write('BB_SETSCENE_ENFORCE_IGNORE_TASKS = "%:* *:do_shared_workdir *:do_rm_work wic-tools:* *:do_addto_recipe_sysroot"\n\n')
-
-            # Hide the config information from bitbake output (since it's fixed within the SDK)
-            f.write('BUILDCFG_HEADER = ""\n\n')
-
-            # Write METADATA_REVISION
-            f.write('METADATA_REVISION = "%s"\n\n' % d.getVar('METADATA_REVISION'))
-
-            f.write('# Provide a flag to indicate we are in the EXT_SDK Context\n')
-            f.write('WITHIN_EXT_SDK = "1"\n\n')
-
-            # Map gcc-dependent uninative sstate cache for installer usage
-            f.write('SSTATE_MIRRORS += " file://universal/(.*) file://universal-4.9/\\1 file://universal-4.9/(.*) file://universal-4.8/\\1"\n\n')
-
-            if d.getVar("PRSERV_HOST"):
-                # Override this, we now include PR data, so it should only point to the local database
-                f.write('PRSERV_HOST = "localhost:0"\n\n')
-
-            # Allow additional config through sdk-extra.conf
-            fn = bb.cookerdata.findConfigFile('sdk-extra.conf', d)
-            if fn:
-                with open(fn, 'r') as xf:
-                    for line in xf:
-                        f.write(line)
-
-            # If you define a sdk_extraconf() function then it can contain additional config
-            # (Though this is awkward; sdk-extra.conf should probably be used instead)
-            extraconf = (d.getVar('sdk_extraconf') or '').strip()
-            if extraconf:
-                # Strip off any leading / trailing spaces
-                for line in extraconf.splitlines():
-                    f.write(line.strip() + '\n')
-
-            f.write('require conf/locked-sigs.inc\n')
-            f.write('require conf/unlocked-sigs.inc\n')
-
-    # Copy multiple configurations if they exist in the users config directory
-    if d.getVar('BBMULTICONFIG') is not None:
-        bb.utils.mkdirhier(os.path.join(baseoutpath, 'conf', 'multiconfig'))
-        for mc in d.getVar('BBMULTICONFIG').split():
-            dest_stub = "/conf/multiconfig/%s.conf" % (mc,)
-            if os.path.exists(builddir + dest_stub):
-                shutil.copyfile(builddir + dest_stub, baseoutpath + dest_stub)
-
-    cachedir = os.path.join(baseoutpath, 'cache')
-    bb.utils.mkdirhier(cachedir)
-    bb.parse.siggen.copy_unitaskhashes(cachedir)
-
-    # If PR Service is in use, we need to export this as well
-    bb.note('Do we have a pr database?')
-    if d.getVar("PRSERV_HOST"):
-        bb.note('Writing PR database...')
-        # Based on the code in classes/prexport.bbclass
-        import oe.prservice
-        #dump meta info of tables
-        localdata = d.createCopy()
-        localdata.setVar('PRSERV_DUMPOPT_COL', "1")
-        localdata.setVar('PRSERV_DUMPDIR', os.path.join(baseoutpath, 'conf'))
-        localdata.setVar('PRSERV_DUMPFILE', '${PRSERV_DUMPDIR}/prserv.inc')
-
-        bb.note('PR Database write to %s' % (localdata.getVar('PRSERV_DUMPFILE')))
-
-        retval = oe.prservice.prserv_dump_db(localdata)
-        if not retval:
-            bb.error("prexport_handler: export failed!")
-            return
-        (metainfo, datainfo) = retval
-        oe.prservice.prserv_export_tofile(localdata, metainfo, datainfo, True)
-
-    # Use templateconf.cfg file from builddir if exists
-    if os.path.exists(builddir + '/conf/templateconf.cfg') and use_custom_templateconf == '1':
-        shutil.copyfile(builddir + '/conf/templateconf.cfg', baseoutpath + '/conf/templateconf.cfg')
-    else:
-        # Write a templateconf.cfg
-        with open(baseoutpath + '/conf/templateconf.cfg', 'w') as f:
-            f.write('meta/conf\n')
-
-    # Ensure any variables set from the external environment (by way of
-    # BB_ENV_PASSTHROUGH_ADDITIONS) are set in the SDK's configuration
-    extralines = []
-    for name, value in env_passthrough_values.items():
-        actualvalue = d.getVar(name) or ''
-        if value != actualvalue:
-            extralines.append('%s = "%s"\n' % (name, actualvalue))
-    if extralines:
-        with open(baseoutpath + '/conf/local.conf', 'a') as f:
-            f.write('\n')
-            f.write('# Extra settings from environment:\n')
-            for line in extralines:
-                f.write(line)
-            f.write('\n')
-
-    # Filter the locked signatures file to just the sstate tasks we are interested in
-    excluded_targets = get_sdk_install_targets(d, images_only=True)
-    sigfile = d.getVar('WORKDIR') + '/locked-sigs.inc'
-    lockedsigs_pruned = baseoutpath + '/conf/locked-sigs.inc'
-    #nativesdk-only sigfile to merge into locked-sigs.inc
-    sdk_include_nativesdk = (d.getVar("SDK_INCLUDE_NATIVESDK") == '1')
-    nativesigfile = d.getVar('WORKDIR') + '/locked-sigs_nativesdk.inc'
-    nativesigfile_pruned = d.getVar('WORKDIR') + '/locked-sigs_nativesdk_pruned.inc'
-
-    if sdk_include_nativesdk:
-        oe.copy_buildsystem.prune_lockedsigs([],
-                                             excluded_targets.split(),
-                                             nativesigfile,
-                                             True,
-                                             nativesigfile_pruned)
-
-        oe.copy_buildsystem.merge_lockedsigs([],
-                                             sigfile,
-                                             nativesigfile_pruned,
-                                             sigfile)
-
-    oe.copy_buildsystem.prune_lockedsigs([],
-                                         excluded_targets.split(),
-                                         sigfile,
-                                         False,
-                                         lockedsigs_pruned)
-
-    sstate_out = baseoutpath + '/sstate-cache'
-    bb.utils.remove(sstate_out, True)
-
-    # uninative.bbclass sets NATIVELSBSTRING to 'universal%s' % oe.utils.host_gcc_version(d)
-    fixedlsbstring = "universal%s" % oe.utils.host_gcc_version(d)
-
-    sdk_include_toolchain = (d.getVar('SDK_INCLUDE_TOOLCHAIN') == '1')
-    sdk_ext_type = d.getVar('SDK_EXT_TYPE')
-    if (sdk_ext_type != 'minimal' or sdk_include_toolchain or derivative) and not sdk_include_nativesdk:
-        # Create the filtered task list used to generate the sstate cache shipped with the SDK
-        tasklistfn = d.getVar('WORKDIR') + '/tasklist.txt'
-        create_filtered_tasklist(d, baseoutpath, tasklistfn, conf_initpath)
-    else:
-        tasklistfn = None
-
-
-    cachedir = os.path.join(baseoutpath, 'cache')
-    bb.utils.mkdirhier(cachedir)
-    bb.parse.siggen.copy_unitaskhashes(cachedir)
-
-    # Add packagedata if enabled
-    if d.getVar('SDK_INCLUDE_PKGDATA') == '1':
-        lockedsigs_base = d.getVar('WORKDIR') + '/locked-sigs-base.inc'
-        lockedsigs_copy = d.getVar('WORKDIR') + '/locked-sigs-copy.inc'
-        shutil.move(lockedsigs_pruned, lockedsigs_base)
-        oe.copy_buildsystem.merge_lockedsigs(['do_packagedata'],
-                                             lockedsigs_base,
-                                             d.getVar('STAGING_DIR_HOST') + '/world-pkgdata/locked-sigs-pkgdata.inc',
-                                             lockedsigs_pruned,
-                                             lockedsigs_copy)
-
-    if sdk_include_toolchain:
-        lockedsigs_base = d.getVar('WORKDIR') + '/locked-sigs-base2.inc'
-        lockedsigs_toolchain = d.expand("${STAGING_DIR}/${TUNE_PKGARCH}/meta-extsdk-toolchain/locked-sigs/locked-sigs-extsdk-toolchain.inc")
-        shutil.move(lockedsigs_pruned, lockedsigs_base)
-        oe.copy_buildsystem.merge_lockedsigs([],
-                                             lockedsigs_base,
-                                             lockedsigs_toolchain,
-                                             lockedsigs_pruned)
-        oe.copy_buildsystem.create_locked_sstate_cache(lockedsigs_toolchain,
-                                                       d.getVar('SSTATE_DIR'),
-                                                       sstate_out, d,
-                                                       fixedlsbstring,
-                                                       filterfile=tasklistfn)
-
-    if sdk_ext_type == 'minimal':
-        if derivative:
-            # Assume the user is not going to set up an additional sstate
-            # mirror, thus we need to copy the additional artifacts (from
-            # workspace recipes) into the derivative SDK
-            lockedsigs_orig = d.getVar('TOPDIR') + '/conf/locked-sigs.inc'
-            if os.path.exists(lockedsigs_orig):
-                lockedsigs_extra = d.getVar('WORKDIR') + '/locked-sigs-extra.inc'
-                oe.copy_buildsystem.merge_lockedsigs(None,
-                                                     lockedsigs_orig,
-                                                     lockedsigs_pruned,
-                                                     None,
-                                                     lockedsigs_extra)
-                oe.copy_buildsystem.create_locked_sstate_cache(lockedsigs_extra,
-                                                               d.getVar('SSTATE_DIR'),
-                                                               sstate_out, d,
-                                                               fixedlsbstring,
-                                                               filterfile=tasklistfn)
-    else:
-        oe.copy_buildsystem.create_locked_sstate_cache(lockedsigs_pruned,
-                                                       d.getVar('SSTATE_DIR'),
-                                                       sstate_out, d,
-                                                       fixedlsbstring,
-                                                       filterfile=tasklistfn)
-
-    # We don't need sstate do_package files
-    for root, dirs, files in os.walk(sstate_out):
-        for name in files:
-            if name.endswith("_package.tar.zst"):
-                f = os.path.join(root, name)
-                os.remove(f)
-
-    # Write manifest file
-    # Note: at the moment we cannot include the env setup script here to keep
-    # it updated, since it gets modified during SDK installation (see
-    # sdk_ext_postinst() below), so the checksum we take here would always
-    # be different.
-    manifest_file_list = ['conf/*']
-    if d.getVar('BBMULTICONFIG') is not None:
-        manifest_file_list.append('conf/multiconfig/*')
-
-    esdk_manifest_excludes = (d.getVar('ESDK_MANIFEST_EXCLUDES') or '').split()
-    esdk_manifest_excludes_list = []
-    for exclude_item in esdk_manifest_excludes:
-        esdk_manifest_excludes_list += glob.glob(os.path.join(baseoutpath, exclude_item))
-    manifest_file = os.path.join(baseoutpath, 'conf', 'sdk-conf-manifest')
-    with open(manifest_file, 'w') as f:
-        for item in manifest_file_list:
-            for fn in glob.glob(os.path.join(baseoutpath, item)):
-                if fn == manifest_file or os.path.isdir(fn):
-                    continue
-                if fn in esdk_manifest_excludes_list:
-                    continue
-                chksum = bb.utils.sha256_file(fn)
-                f.write('%s\t%s\n' % (chksum, os.path.relpath(fn, baseoutpath)))
-}
-
-def get_current_buildtools(d):
-    """Get the file name of the current buildtools installer"""
-    import glob
-    btfiles = glob.glob(os.path.join(d.getVar('SDK_DEPLOY'), '*-buildtools-nativesdk-standalone-*.sh'))
-    btfiles.sort(key=os.path.getctime)
-    return os.path.basename(btfiles[-1])
-
-def get_sdk_required_utilities(buildtools_fn, d):
-    """Find required utilities that aren't provided by the buildtools"""
-    sanity_required_utilities = (d.getVar('SANITY_REQUIRED_UTILITIES') or '').split()
-    sanity_required_utilities.append(d.expand('${BUILD_PREFIX}gcc'))
-    sanity_required_utilities.append(d.expand('${BUILD_PREFIX}g++'))
-    if buildtools_fn:
-        buildtools_installer = os.path.join(d.getVar('SDK_DEPLOY'), buildtools_fn)
-        filelist, _ = bb.process.run('%s -l' % buildtools_installer)
-    else:
-        buildtools_installer = None
-        filelist = ""
-    localdata = bb.data.createCopy(d)
-    localdata.setVar('SDKPATH', '.')
-    sdkpathnative = localdata.getVar('SDKPATHNATIVE')
-    sdkbindirs = [localdata.getVar('bindir_nativesdk'),
-                  localdata.getVar('sbindir_nativesdk'),
-                  localdata.getVar('base_bindir_nativesdk'),
-                  localdata.getVar('base_sbindir_nativesdk')]
-    for line in filelist.splitlines():
-        splitline = line.split()
-        if len(splitline) > 5:
-            fn = splitline[5]
-            if not fn.startswith('./'):
-                fn = './%s' % fn
-            if fn.startswith(sdkpathnative):
-                relpth = '/' + os.path.relpath(fn, sdkpathnative)
-                for bindir in sdkbindirs:
-                    if relpth.startswith(bindir):
-                        relpth = os.path.relpath(relpth, bindir)
-                        if relpth in sanity_required_utilities:
-                            sanity_required_utilities.remove(relpth)
-                        break
-    return ' '.join(sanity_required_utilities)
-
-install_tools() {
-	install -d ${SDK_OUTPUT}/${SDKPATHNATIVE}${bindir_nativesdk}
-	scripts="devtool recipetool oe-find-native-sysroot runqemu* wic"
-	for script in $scripts; do
-		for scriptfn in `find ${SDK_OUTPUT}/${SDKPATH}/${scriptrelpath} -maxdepth 1 -executable -name "$script"`; do
-			targetscriptfn="${SDK_OUTPUT}/${SDKPATHNATIVE}${bindir_nativesdk}/$(basename $scriptfn)"
-			test -e ${targetscriptfn} || ln -rs ${scriptfn} ${targetscriptfn}
-		done
-	done
-	# We can't use the same method as above because files in the sysroot won't exist at this point
-	# (they get populated from sstate on installation)
-	unfsd_path="${SDK_OUTPUT}/${SDKPATHNATIVE}${bindir_nativesdk}/unfsd"
-	if [ "${SDK_INCLUDE_TOOLCHAIN}" = "1" -a ! -e $unfsd_path ] ; then
-		binrelpath=${@os.path.relpath(d.getVar('STAGING_BINDIR_NATIVE'), d.getVar('TMPDIR'))}
-		ln -rs ${SDK_OUTPUT}/${SDKPATH}/tmp/$binrelpath/unfsd $unfsd_path
-	fi
-	touch ${SDK_OUTPUT}/${SDKPATH}/.devtoolbase
-
-	# find latest buildtools-tarball and install it
-	if [ -n "${SDK_BUILDTOOLS_INSTALLER}" ]; then
-		install ${SDK_DEPLOY}/${SDK_BUILDTOOLS_INSTALLER} ${SDK_OUTPUT}/${SDKPATH}
-	fi
-
-	install -m 0644 ${COREBASE}/meta/files/ext-sdk-prepare.py ${SDK_OUTPUT}/${SDKPATH}
-}
-do_populate_sdk_ext[file-checksums] += "${COREBASE}/meta/files/ext-sdk-prepare.py:True"
-
-sdk_ext_preinst() {
-	# Since bitbake won't run as root, it doesn't make sense to try to install
-	# the extensible sdk as root.
-	if [ "`id -u`" = "0" ]; then
-		echo "ERROR: The extensible sdk cannot be installed as root."
-		exit 1
-	fi
-	if ! command -v locale > /dev/null; then
-		echo "ERROR: The installer requires the locale command, please install it first"
-		exit 1
-	fi
-        # Check the LC_ALL setting configured above
-	canonicalised_locale=`echo $LC_ALL | sed 's/UTF-8/utf8/'`
-	if ! locale -a | grep -q $canonicalised_locale ; then
-		echo "ERROR: the installer requires the $LC_ALL locale to be installed (but not selected), please install it first"
-		exit 1
-	fi
-	# The relocation script used by buildtools installer requires python
-	if ! command -v python3 > /dev/null; then
-		echo "ERROR: The installer requires python3, please install it first"
-		exit 1
-	fi
-	missing_utils=""
-	for util in ${SDK_REQUIRED_UTILITIES}; do
-		if ! command -v $util > /dev/null; then
-			missing_utils="$missing_utils $util"
-		fi
-	done
-	if [ -n "$missing_utils" ] ; then
-		echo "ERROR: the SDK requires the following missing utilities, please install them: $missing_utils"
-		exit 1
-	fi
-	SDK_EXTENSIBLE="1"
-	if [ "$publish" = "1" ] && [ "${SDK_EXT_TYPE}" = "minimal" ] ; then
-		EXTRA_TAR_OPTIONS="$EXTRA_TAR_OPTIONS --exclude=sstate-cache"
-	fi
-}
-SDK_PRE_INSTALL_COMMAND:task-populate-sdk-ext = "${sdk_ext_preinst}"
-
-# FIXME this preparation should be done as part of the SDK construction
-sdk_ext_postinst() {
-	printf "\nExtracting buildtools...\n"
-	cd $target_sdk_dir
-	env_setup_script="$target_sdk_dir/environment-setup-${REAL_MULTIMACH_TARGET_SYS}"
-        if [ -n "${SDK_BUILDTOOLS_INSTALLER}" ]; then
-		printf "buildtools\ny" | ./${SDK_BUILDTOOLS_INSTALLER} > buildtools.log || { printf 'ERROR: buildtools installation failed:\n' ; cat buildtools.log ; echo "printf 'ERROR: this SDK was not fully installed and needs reinstalling\n'" >> $env_setup_script ; exit 1 ; }
-
-		# Delete the buildtools tar file since it won't be used again
-		rm -f ./${SDK_BUILDTOOLS_INSTALLER}
-		# We don't need the log either since it succeeded
-		rm -f buildtools.log
-
-		# Make sure when the user sets up the environment, they also get
-		# the buildtools-tarball tools in their path.
-		echo "# Save and reset OECORE_NATIVE_SYSROOT as buildtools may change it" >> $env_setup_script
-		echo "SAVED=\"\$OECORE_NATIVE_SYSROOT\"" >> $env_setup_script
-		echo ". $target_sdk_dir/buildtools/environment-setup*" >> $env_setup_script
-		echo "OECORE_NATIVE_SYSROOT=\"\$SAVED\"" >> $env_setup_script
-	fi
-
-	# Allow bitbake environment setup to be run as part of this sdk.
-	echo "export OE_SKIP_SDK_CHECK=1" >> $env_setup_script
-	# Work around runqemu not knowing how to get this information within the eSDK
-	echo "export DEPLOY_DIR_IMAGE=$target_sdk_dir/tmp/${@os.path.relpath(d.getVar('DEPLOY_DIR_IMAGE'), d.getVar('TMPDIR'))}" >> $env_setup_script
-
-	# Another small hack: we need this in the path only for devtool,
-	# so put it at the end of $PATH.
-	echo "export PATH=$target_sdk_dir/sysroots/${SDK_SYS}${bindir_nativesdk}:\$PATH" >> $env_setup_script
-
-	echo "printf 'SDK environment now set up; additionally you may now run devtool to perform development tasks.\nRun devtool --help for further details.\n'" >> $env_setup_script
-
-	# Warn if trying to use external bitbake and the ext SDK together
-	echo "(which bitbake > /dev/null 2>&1 && echo 'WARNING: attempting to use the extensible SDK in an environment set up to run bitbake - this may lead to unexpected results. Please source this script in a new shell session instead.') || true" >> $env_setup_script
-
-	if [ "$prepare_buildsystem" != "no" -a -n "${SDK_BUILDTOOLS_INSTALLER}" ]; then
-		printf "Preparing build system...\n"
-		# dash, which is /bin/sh on Ubuntu, will not preserve the
-		# current working directory when first run, nor will it set $1 when
-		# sourcing a script. That is why this has to look so ugly.
-		LOGFILE="$target_sdk_dir/preparing_build_system.log"
-		sh -c ". buildtools/environment-setup* > $LOGFILE && cd $target_sdk_dir/`dirname ${oe_init_build_env_path}` && set $target_sdk_dir && . $target_sdk_dir/${oe_init_build_env_path} $target_sdk_dir >> $LOGFILE && python3 $target_sdk_dir/ext-sdk-prepare.py $LOGFILE '${SDK_INSTALL_TARGETS}'" || { echo "printf 'ERROR: this SDK was not fully installed and needs reinstalling\n'" >> $env_setup_script ; exit 1 ; }
-	fi
-	if [ -e $target_sdk_dir/ext-sdk-prepare.py ]; then
-		rm $target_sdk_dir/ext-sdk-prepare.py
-	fi
-	echo done
-}
-
-SDK_POST_INSTALL_COMMAND:task-populate-sdk-ext = "${sdk_ext_postinst}"
-
-SDK_POSTPROCESS_COMMAND:prepend:task-populate-sdk-ext = "copy_buildsystem; install_tools; "
-
-SDK_INSTALL_TARGETS = ""
-fakeroot python do_populate_sdk_ext() {
-    # FIXME hopefully we can remove this restriction at some point, but uninative
-    # currently forces this upon us
-    if d.getVar('SDK_ARCH') != d.getVar('BUILD_ARCH'):
-        bb.fatal('The extensible SDK can currently only be built for the same architecture as the machine being built on - SDK_ARCH is set to %s (likely via setting SDKMACHINE) which is different from the architecture of the build machine (%s). Unable to continue.' % (d.getVar('SDK_ARCH'), d.getVar('BUILD_ARCH')))
-
-    # FIXME hopefully we can remove this restriction at some point, but the eSDK
-    # can only be built for the primary (default) multiconfig
-    if d.getVar('BB_CURRENT_MC') != 'default':
-        bb.fatal('The extensible SDK can currently only be built for the default multiconfig.  Currently trying to build for %s.' % d.getVar('BB_CURRENT_MC'))
-
-    # eSDK dependencies don't use the traditional variables and things don't work properly if they are set
-    d.setVar("TOOLCHAIN_HOST_TASK", "${TOOLCHAIN_HOST_TASK_ESDK}")
-    d.setVar("TOOLCHAIN_TARGET_TASK", "")
-
-    d.setVar('SDK_INSTALL_TARGETS', get_sdk_install_targets(d))
-    if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1':
-        buildtools_fn = get_current_buildtools(d)
-    else:
-        buildtools_fn = None
-    d.setVar('SDK_REQUIRED_UTILITIES', get_sdk_required_utilities(buildtools_fn, d))
-    d.setVar('SDK_BUILDTOOLS_INSTALLER', buildtools_fn)
-    d.setVar('SDKDEPLOYDIR', '${SDKEXTDEPLOYDIR}')
-    # ESDKs have a libc from the buildtools so ensure we don't ship linguas twice
-    d.delVar('SDKIMAGE_LINGUAS')
-    if d.getVar("SDK_INCLUDE_NATIVESDK") == '1':
-        generate_nativesdk_lockedsigs(d)
-    populate_sdk_common(d)
-}
-
-def generate_nativesdk_lockedsigs(d):
-    import oe.copy_buildsystem
-    sigfile = d.getVar('WORKDIR') + '/locked-sigs_nativesdk.inc'
-    oe.copy_buildsystem.generate_locked_sigs(sigfile, d)
-
-def get_ext_sdk_depends(d):
-    # Note: the deps varflag is a list not a string, so we need to specify expand=False
-    deps = d.getVarFlag('do_image_complete', 'deps', False)
-    pn = d.getVar('PN')
-    deplist = ['%s:%s' % (pn, dep) for dep in deps]
-    tasklist = bb.build.tasksbetween('do_image_complete', 'do_build', d)
-    tasklist.append('do_rootfs')
-    for task in tasklist:
-        deplist.extend((d.getVarFlag(task, 'depends') or '').split())
-    return ' '.join(deplist)
-
-python do_sdk_depends() {
-    # We have to do this separately in its own task so we avoid recursing into
-    # dependencies we don't need to (e.g. buildtools-tarball) and bringing those
-    # into the SDK's sstate-cache
-    import oe.copy_buildsystem
-    sigfile = d.getVar('WORKDIR') + '/locked-sigs.inc'
-    oe.copy_buildsystem.generate_locked_sigs(sigfile, d)
-}
-addtask sdk_depends
-
-do_sdk_depends[dirs] = "${WORKDIR}"
-do_sdk_depends[depends] = "${@get_ext_sdk_depends(d)} meta-extsdk-toolchain:do_populate_sysroot"
-do_sdk_depends[recrdeptask] = "${@d.getVarFlag('do_populate_sdk', 'recrdeptask', False)}"
-do_sdk_depends[recrdeptask] += "do_populate_lic do_package_qa do_populate_sysroot do_deploy ${SDK_RECRDEP_TASKS}"
-do_sdk_depends[rdepends] = "${@' '.join([x + ':do_package_write_${IMAGE_PKGTYPE} ' + x + ':do_packagedata' for x in d.getVar('TOOLCHAIN_HOST_TASK_ESDK').split()])}"
-
-do_populate_sdk_ext[dirs] = "${@d.getVarFlag('do_populate_sdk', 'dirs', False)}"
-
-do_populate_sdk_ext[depends] = "${@d.getVarFlag('do_populate_sdk', 'depends', False)} \
-                                ${@'buildtools-tarball:do_populate_sdk' if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1' else ''} \
-                                ${@'meta-world-pkgdata:do_collect_packagedata' if d.getVar('SDK_INCLUDE_PKGDATA') == '1' else ''} \
-                                ${@'meta-extsdk-toolchain:do_locked_sigs' if d.getVar('SDK_INCLUDE_TOOLCHAIN') == '1' else ''}"
-
-# We must avoid depending on do_build here if rm_work.bbclass is active,
-# because otherwise do_rm_work may run before do_populate_sdk_ext itself.
-# We can't mark do_populate_sdk_ext and do_sdk_depends as having to
-# run before do_rm_work, because then they would also run as part
-# of normal builds.
-do_populate_sdk_ext[rdepends] += "${@' '.join([x + ':' + (d.getVar('RM_WORK_BUILD_WITHOUT') or 'do_build') for x in d.getVar('SDK_TARGETS').split()])}"
-
-# Make sure code changes can result in rebuild
-do_populate_sdk_ext[vardeps] += "copy_buildsystem \
-                                 sdk_ext_postinst"
-
-# Since any change in the metadata of any layer should cause a rebuild of the
-# sdk (since the layers are put in the sdk), set the task to nostamp so it
-# always runs.
-do_populate_sdk_ext[nostamp] = "1"
-
-SDKEXTDEPLOYDIR = "${WORKDIR}/deploy-${PN}-populate-sdk-ext"
-
-SSTATETASKS += "do_populate_sdk_ext"
-SSTATE_SKIP_CREATION:task-populate-sdk-ext = '1'
-do_populate_sdk_ext[cleandirs] = "${SDKEXTDEPLOYDIR}"
-do_populate_sdk_ext[sstate-inputdirs] = "${SDKEXTDEPLOYDIR}"
-do_populate_sdk_ext[sstate-outputdirs] = "${SDK_DEPLOY}"
-do_populate_sdk_ext[stamp-extra-info] = "${MACHINE_ARCH}"
-
-addtask populate_sdk_ext after do_sdk_depends
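
The copy_buildsystem code removed above ends by writing conf/sdk-conf-manifest, one "<sha256>\t<relative path>" line per shipped configuration file, so that later SDK updates can tell locally modified configuration files from pristine ones. Below is a minimal standalone sketch of that manifest format, using plain hashlib in place of bb.utils.sha256_file; the paths and patterns are illustrative only.

    import glob
    import hashlib
    import os

    def write_conf_manifest(baseoutpath, patterns=('conf/*',), excludes=()):
        # Mirrors the format written above: "<sha256>\t<path relative to the SDK output>".
        manifest_file = os.path.join(baseoutpath, 'conf', 'sdk-conf-manifest')
        with open(manifest_file, 'w') as f:
            for pattern in patterns:
                for fn in glob.glob(os.path.join(baseoutpath, pattern)):
                    if fn == manifest_file or os.path.isdir(fn) or fn in excludes:
                        continue
                    with open(fn, 'rb') as src:
                        chksum = hashlib.sha256(src.read()).hexdigest()
                    f.write('%s\t%s\n' % (chksum, os.path.relpath(fn, baseoutpath)))
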
diff --git a/poky/meta/classes/prexport.bbclass b/poky/meta/classes/prexport.bbclass
index 6dcf99e..e5098e3 100644
--- a/poky/meta/classes/prexport.bbclass
+++ b/poky/meta/classes/prexport.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 PRSERV_DUMPOPT_VERSION = "${PRAUTOINX}"
 PRSERV_DUMPOPT_PKGARCH  = ""
 PRSERV_DUMPOPT_CHECKSUM = ""
diff --git a/poky/meta/classes/primport.bbclass b/poky/meta/classes/primport.bbclass
index 8ed45f0..0092417 100644
--- a/poky/meta/classes/primport.bbclass
+++ b/poky/meta/classes/primport.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 python primport_handler () {
     import bb.event
     if not e.data:
diff --git a/poky/meta/classes/ptest-gnome.bbclass b/poky/meta/classes/ptest-gnome.bbclass
deleted file mode 100644
index 18bd3db..0000000
--- a/poky/meta/classes/ptest-gnome.bbclass
+++ /dev/null
@@ -1,8 +0,0 @@
-inherit ptest
-
-EXTRA_OECONF:append = " ${@bb.utils.contains('PTEST_ENABLED', '1', '--enable-installed-tests', '--disable-installed-tests', d)}"
-
-FILES:${PN}-ptest += "${libexecdir}/installed-tests/ \
-                      ${datadir}/installed-tests/"
-
-RDEPENDS:${PN}-ptest += "gnome-desktop-testing"
diff --git a/poky/meta/classes/ptest-perl.bbclass b/poky/meta/classes/ptest-perl.bbclass
deleted file mode 100644
index 5dd72c9..0000000
--- a/poky/meta/classes/ptest-perl.bbclass
+++ /dev/null
@@ -1,30 +0,0 @@
-inherit ptest
-
-FILESEXTRAPATHS:prepend := "${COREBASE}/meta/files:"
-
-SRC_URI += "file://ptest-perl/run-ptest"
-
-do_install_ptest_perl() {
-	install -d ${D}${PTEST_PATH}
-	if [ ! -f ${D}${PTEST_PATH}/run-ptest ]; then
-		install -m 0755 ${WORKDIR}/ptest-perl/run-ptest ${D}${PTEST_PATH}
-	fi
-	cp -r ${B}/t ${D}${PTEST_PATH}
-	chown -R root:root ${D}${PTEST_PATH}
-}
-
-FILES:${PN}-ptest:prepend = "${PTEST_PATH}/t/* ${PTEST_PATH}/run-ptest "
-
-RDEPENDS:${PN}-ptest:prepend = "perl "
-
-addtask install_ptest_perl after do_install_ptest_base before do_package
-
-python () {
-    if not bb.data.inherits_class('native', d) and not bb.data.inherits_class('cross', d):
-        d.setVarFlag('do_install_ptest_perl', 'fakeroot', '1')
-
-    # Remove all '*ptest_perl' tasks when ptest is not enabled
-    if not(d.getVar('PTEST_ENABLED') == "1"):
-        for i in ['do_install_ptest_perl']:
-            bb.build.deltask(i, d)
-}
diff --git a/poky/meta/classes/ptest.bbclass b/poky/meta/classes/ptest.bbclass
deleted file mode 100644
index c162f5d..0000000
--- a/poky/meta/classes/ptest.bbclass
+++ /dev/null
@@ -1,136 +0,0 @@
-SUMMARY:${PN}-ptest ?= "${SUMMARY} - Package test files"
-DESCRIPTION:${PN}-ptest ?= "${DESCRIPTION}  \
-This package contains a test directory ${PTEST_PATH} for package test purposes."
-
-PTEST_PATH ?= "${libdir}/${BPN}/ptest"
-PTEST_BUILD_HOST_FILES ?= "Makefile"
-PTEST_BUILD_HOST_PATTERN ?= ""
-PTEST_PARALLEL_MAKE ?= "${PARALLEL_MAKE}"
-PTEST_PARALLEL_MAKEINST ?= "${PARALLEL_MAKEINST}"
-EXTRA_OEMAKE:prepend:task-compile-ptest-base = "${PTEST_PARALLEL_MAKE} "
-EXTRA_OEMAKE:prepend:task-install-ptest-base = "${PTEST_PARALLEL_MAKEINST} "
-
-FILES:${PN}-ptest += "${PTEST_PATH}"
-SECTION:${PN}-ptest = "devel"
-ALLOW_EMPTY:${PN}-ptest = "1"
-PTEST_ENABLED = "${@bb.utils.contains('DISTRO_FEATURES', 'ptest', '1', '0', d)}"
-PTEST_ENABLED:class-native = ""
-PTEST_ENABLED:class-nativesdk = ""
-PTEST_ENABLED:class-cross-canadian = ""
-RDEPENDS:${PN}-ptest += "${PN}"
-RDEPENDS:${PN}-ptest:class-native = ""
-RDEPENDS:${PN}-ptest:class-nativesdk = ""
-RRECOMMENDS:${PN}-ptest += "ptest-runner"
-
-PACKAGES =+ "${@bb.utils.contains('PTEST_ENABLED', '1', '${PN}-ptest', '', d)}"
-
-require conf/distro/include/ptest-packagelists.inc
-
-do_configure_ptest() {
-    :
-}
-
-do_configure_ptest_base() {
-    do_configure_ptest
-}
-
-do_compile_ptest() {
-    :
-}
-
-do_compile_ptest_base() {
-    do_compile_ptest
-}
-
-do_install_ptest() {
-    :
-}
-
-do_install_ptest_base() {
-    if [ -f ${WORKDIR}/run-ptest ]; then
-        install -D ${WORKDIR}/run-ptest ${D}${PTEST_PATH}/run-ptest
-    fi
-    if grep -q install-ptest: Makefile; then
-        oe_runmake DESTDIR=${D}${PTEST_PATH} install-ptest
-    fi
-    do_install_ptest
-    chown -R root:root ${D}${PTEST_PATH}
-
-    # Strip build host paths from any installed Makefile
-    for filename in ${PTEST_BUILD_HOST_FILES}; do
-        for installed_ptest_file in $(find ${D}${PTEST_PATH} -type f -name $filename); do
-            bbnote "Stripping host paths from: $installed_ptest_file"
-            sed -e 's#${HOSTTOOLS_DIR}/*##g' \
-                -e 's#${WORKDIR}/*=#.=#g' \
-                -e 's#${WORKDIR}/*##g' \
-                -i $installed_ptest_file
-            if [ -n "${PTEST_BUILD_HOST_PATTERN}" ]; then
-               sed -E '/${PTEST_BUILD_HOST_PATTERN}/d' \
-                   -i $installed_ptest_file
-            fi
-        done
-    done
-}
-
-PTEST_BINDIR_PKGD_PATH = "${PKGD}${PTEST_PATH}/bin"
-
-# This function needs to run after apply_update_alternative_renames because the
-# aforementioned function will update the ALTERNATIVE_LINK_NAME flag. Append is
-# used here to make this function run as late as possible.
-PACKAGE_PREPROCESS_FUNCS:append = "${@bb.utils.contains('PTEST_BINDIR', '1', \
-                                    bb.utils.contains('PTEST_ENABLED', '1', ' ptest_update_alternatives', '', d), '', d)}"
-
-python ptest_update_alternatives() {
-    """
-    This function will generate the symlinks in the PTEST_BINDIR_PKGD_PATH
-    to match the renamed binaries by update-alternatives.
-    """
-
-    if not bb.data.inherits_class('update-alternatives', d) \
-           or not update_alternatives_enabled(d):
-        return
-
-    bb.note("Generating symlinks for ptest")
-    bin_paths = { d.getVar("bindir"), d.getVar("base_bindir"),
-                   d.getVar("sbindir"), d.getVar("base_sbindir") }
-    ptest_bindir = d.getVar("PTEST_BINDIR_PKGD_PATH")
-    os.mkdir(ptest_bindir)
-    for pkg in (d.getVar('PACKAGES') or "").split():
-        alternatives = update_alternatives_alt_targets(d, pkg)
-        for alt_name, alt_link, alt_target, _ in alternatives:
-            # Some alternatives are for man pages,
-            # check if the alternative is in PATH
-            if os.path.dirname(alt_link) in bin_paths:
-                os.symlink(alt_target, os.path.join(ptest_bindir, alt_name))
-}
-
-do_configure_ptest_base[dirs] = "${B}"
-do_compile_ptest_base[dirs] = "${B}"
-do_install_ptest_base[dirs] = "${B}"
-do_install_ptest_base[cleandirs] = "${D}${PTEST_PATH}"
-
-addtask configure_ptest_base after do_configure before do_compile
-addtask compile_ptest_base   after do_compile   before do_install
-addtask install_ptest_base   after do_install   before do_package do_populate_sysroot
-
-python () {
-    if not bb.data.inherits_class('native', d) and not bb.data.inherits_class('cross', d):
-        d.setVarFlag('do_install_ptest_base', 'fakeroot', '1')
-        d.setVarFlag('do_install_ptest_base', 'umask', '022')
-
-    # Remove all '*ptest_base' tasks when ptest is not enabled
-    if not(d.getVar('PTEST_ENABLED') == "1"):
-        for i in ['do_configure_ptest_base', 'do_compile_ptest_base', 'do_install_ptest_base']:
-            bb.build.deltask(i, d)
-}
-
-QARECIPETEST[missing-ptest] = "package_qa_check_missing_ptest"
-def package_qa_check_missing_ptest(pn, d, messages):
-    # This checks that the ptest package is actually included
-    # in standard oe-core ptest images - only for oe-core recipes
-    if not 'meta/recipes' in d.getVar('FILE') or not(d.getVar('PTEST_ENABLED') == "1"):
-        return
-
-    enabled_ptests = " ".join([d.getVar('PTESTS_FAST'), d.getVar('PTESTS_SLOW'), d.getVar('PTESTS_PROBLEMS')]).split()
-    if (pn + "-ptest").replace(d.getVar('MLPREFIX'), '') not in enabled_ptests:
-        oe.qa.handle_error("missing-ptest", "supports ptests but is not included in oe-core's ptest-packagelists.inc", d)
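
Most of the gating in the removed ptest.bbclass hinges on PTEST_ENABLED, which is derived from DISTRO_FEATURES via bb.utils.contains(). As a rough stand-in for readers unfamiliar with that helper (a plain dict replaces the BitBake datastore here), the semantics the class relies on look like this:

    def contains(variable, checkvalues, truevalue, falsevalue, data):
        # Return truevalue only if every word in checkvalues appears in the
        # space-separated value of variable; otherwise return falsevalue.
        val = data.get(variable) or ''
        if set(checkvalues.split()).issubset(set(val.split())):
            return truevalue
        return falsevalue

    # Example: with DISTRO_FEATURES = "acl ipv4 ptest", PTEST_ENABLED becomes "1".
    print(contains('DISTRO_FEATURES', 'ptest', '1', '0',
                   {'DISTRO_FEATURES': 'acl ipv4 ptest'}))
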
diff --git a/poky/meta/classes/pypi.bbclass b/poky/meta/classes/pypi.bbclass
deleted file mode 100644
index 5fa7b8a..0000000
--- a/poky/meta/classes/pypi.bbclass
+++ /dev/null
@@ -1,28 +0,0 @@
-def pypi_package(d):
-    bpn = d.getVar('BPN')
-    if bpn.startswith('python-'):
-        return bpn[7:]
-    elif bpn.startswith('python3-'):
-        return bpn[8:]
-    return bpn
-
-PYPI_PACKAGE ?= "${@pypi_package(d)}"
-PYPI_PACKAGE_EXT ?= "tar.gz"
-PYPI_ARCHIVE_NAME ?= "${PYPI_PACKAGE}-${PV}.${PYPI_PACKAGE_EXT}"
-
-def pypi_src_uri(d):
-    package = d.getVar('PYPI_PACKAGE')
-    archive_name = d.getVar('PYPI_ARCHIVE_NAME')
-    return 'https://files.pythonhosted.org/packages/source/%s/%s/%s' % (package[0], package, archive_name)
-
-PYPI_SRC_URI ?= "${@pypi_src_uri(d)}"
-
-HOMEPAGE ?= "https://pypi.python.org/pypi/${PYPI_PACKAGE}/"
-SECTION = "devel/python"
-SRC_URI:prepend = "${PYPI_SRC_URI} "
-S = "${WORKDIR}/${PYPI_PACKAGE}-${PV}"
-
-UPSTREAM_CHECK_URI ?= "https://pypi.org/project/${PYPI_PACKAGE}/"
-UPSTREAM_CHECK_REGEX ?= "/${PYPI_PACKAGE}/(?P<pver>(\d+[\.\-_]*)+)/"
-
-CVE_PRODUCT ?= "python:${PYPI_PACKAGE}"
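
The removed pypi.bbclass derives SRC_URI from the package name alone: files.pythonhosted.org keys its "source" tree on the first letter of the package. A small illustration of the resulting URL, with a made-up package name and version:

    def pypi_src_uri(package, version, ext='tar.gz'):
        # Same layout as PYPI_SRC_URI above: /packages/source/<first letter>/<package>/<archive>
        archive_name = '%s-%s.%s' % (package, version, ext)
        return ('https://files.pythonhosted.org/packages/source/%s/%s/%s'
                % (package[0], package, archive_name))

    # pypi_src_uri('requests', '2.28.1')
    # -> https://files.pythonhosted.org/packages/source/r/requests/requests-2.28.1.tar.gz
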
diff --git a/poky/meta/classes/python3-dir.bbclass b/poky/meta/classes/python3-dir.bbclass
deleted file mode 100644
index ff03e58..0000000
--- a/poky/meta/classes/python3-dir.bbclass
+++ /dev/null
@@ -1,5 +0,0 @@
-PYTHON_BASEVERSION = "3.10"
-PYTHON_ABI = ""
-PYTHON_DIR = "python${PYTHON_BASEVERSION}"
-PYTHON_PN = "python3"
-PYTHON_SITEPACKAGES_DIR = "${libdir}/${PYTHON_DIR}/site-packages"
diff --git a/poky/meta/classes/python3native.bbclass b/poky/meta/classes/python3native.bbclass
deleted file mode 100644
index 3783c0c..0000000
--- a/poky/meta/classes/python3native.bbclass
+++ /dev/null
@@ -1,24 +0,0 @@
-inherit python3-dir
-
-PYTHON="${STAGING_BINDIR_NATIVE}/python3-native/python3"
-EXTRANATIVEPATH += "python3-native"
-DEPENDS:append = " python3-native "
-
-# python-config and other scripts are using sysconfig modules
-# which we patch to access these variables
-export STAGING_INCDIR
-export STAGING_LIBDIR
-
-# Packages can use
-# find_package(PythonInterp REQUIRED)
-# find_package(PythonLibs REQUIRED)
-# which ends up using libs/includes from build host
-# Therefore pre-empt that effort
-export PYTHON_LIBRARY="${STAGING_LIBDIR}/lib${PYTHON_DIR}${PYTHON_ABI}.so"
-export PYTHON_INCLUDE_DIR="${STAGING_INCDIR}/${PYTHON_DIR}${PYTHON_ABI}"
-
-# suppress host user's site-packages dirs.
-export PYTHONNOUSERSITE = "1"
-
-# autoconf macros will use their internal default preference otherwise
-export PYTHON
diff --git a/poky/meta/classes/python3targetconfig.bbclass b/poky/meta/classes/python3targetconfig.bbclass
deleted file mode 100644
index 2476858..0000000
--- a/poky/meta/classes/python3targetconfig.bbclass
+++ /dev/null
@@ -1,29 +0,0 @@
-inherit python3native
-
-EXTRA_PYTHON_DEPENDS ?= ""
-EXTRA_PYTHON_DEPENDS:class-target = "python3"
-DEPENDS:append = " ${EXTRA_PYTHON_DEPENDS}"
-
-do_configure:prepend:class-target() {
-        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
-}
-
-do_compile:prepend:class-target() {
-        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
-}
-
-do_install:prepend:class-target() {
-        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
-}
-
-do_configure:prepend:class-nativesdk() {
-        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
-}
-
-do_compile:prepend:class-nativesdk() {
-        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
-}
-
-do_install:prepend:class-nativesdk() {
-        export _PYTHON_SYSCONFIGDATA_NAME="_sysconfigdata"
-}
diff --git a/poky/meta/classes/python_flit_core.bbclass b/poky/meta/classes/python_flit_core.bbclass
deleted file mode 100644
index 7109307..0000000
--- a/poky/meta/classes/python_flit_core.bbclass
+++ /dev/null
@@ -1,8 +0,0 @@
-inherit python_pep517 python3native python3-dir setuptools3-base
-
-DEPENDS += "python3 python3-flit-core-native"
-
-python_flit_core_do_manual_build () {
-    cd ${PEP517_SOURCE_PATH}
-    nativepython3 -m flit_core.wheel --outdir ${PEP517_WHEEL_PATH} .
-}
diff --git a/poky/meta/classes/python_hatchling.bbclass b/poky/meta/classes/python_hatchling.bbclass
deleted file mode 100644
index 984eb6b..0000000
--- a/poky/meta/classes/python_hatchling.bbclass
+++ /dev/null
@@ -1,3 +0,0 @@
-inherit python_pep517 python3native python3-dir setuptools3-base
-
-DEPENDS += "python3-hatchling-native"
diff --git a/poky/meta/classes/python_pep517.bbclass b/poky/meta/classes/python_pep517.bbclass
deleted file mode 100644
index 7cdb9c8..0000000
--- a/poky/meta/classes/python_pep517.bbclass
+++ /dev/null
@@ -1,54 +0,0 @@
-# Common infrastructure for Python packages that use PEP-517 compliant packaging.
-# https://www.python.org/dev/peps/pep-0517/
-#
-# This class will build a wheel in do_compile, and use pypa/installer to install
-# it in do_install.
-
-DEPENDS:append = " python3-picobuild-native python3-installer-native"
-
-# Where to execute the build process from
-PEP517_SOURCE_PATH ?= "${S}"
-
-# The directory where wheels will be written
-PEP517_WHEEL_PATH ?= "${WORKDIR}/dist"
-
-PEP517_PICOBUILD_OPTS ?= ""
-
-# The interpreter to use for installed scripts
-PEP517_INSTALL_PYTHON = "python3"
-PEP517_INSTALL_PYTHON:class-native = "nativepython3"
-
-# pypa/installer option to control the bytecode compilation
-INSTALL_WHEEL_COMPILE_BYTECODE ?= "--compile-bytecode=0"
-
-# PEP517 doesn't have a specific configure step, so set an empty do_configure to avoid
-# running base_do_configure.
-python_pep517_do_configure () {
-    :
-}
-
-# When we have Python 3.11 we can parse pyproject.toml to determine the build
-# API entry point directly
-python_pep517_do_compile () {
-    nativepython3 -m picobuild --source ${PEP517_SOURCE_PATH} --dest ${PEP517_WHEEL_PATH} --wheel ${PEP517_PICOBUILD_OPTS}
-}
-do_compile[cleandirs] += "${PEP517_WHEEL_PATH}"
-
-python_pep517_do_install () {
-    COUNT=$(find ${PEP517_WHEEL_PATH} -name '*.whl' | wc -l)
-    if test $COUNT -eq 0; then
-        bbfatal No wheels found in ${PEP517_WHEEL_PATH}
-    elif test $COUNT -gt 1; then
-        bbfatal More than one wheel found in ${PEP517_WHEEL_PATH}, this should not happen
-    fi
-
-    nativepython3 -m installer ${INSTALL_WHEEL_COMPILE_BYTECODE} --interpreter "${USRBINPATH}/env ${PEP517_INSTALL_PYTHON}" --destdir=${D} ${PEP517_WHEEL_PATH}/*.whl
-}
-
-# A manual do_install that just uses unzip for bootstrapping purposes. Callers should DEPEND on unzip-native.
-python_pep517_do_bootstrap_install () {
-    install -d ${D}${PYTHON_SITEPACKAGES_DIR}
-    unzip -d ${D}${PYTHON_SITEPACKAGES_DIR} ${PEP517_WHEEL_PATH}/*.whl
-}
-
-EXPORT_FUNCTIONS do_configure do_compile do_install
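
python_pep517_do_install above insists on exactly one wheel in PEP517_WHEEL_PATH before handing it to pypa/installer. A rough Python equivalent of that check, assuming an ordinary directory path rather than the BitBake environment:

    import glob
    import os
    import sys

    def find_single_wheel(wheel_path):
        # Fail if zero or more than one wheel was produced by the build step.
        wheels = glob.glob(os.path.join(wheel_path, '*.whl'))
        if not wheels:
            sys.exit('No wheels found in %s' % wheel_path)
        if len(wheels) > 1:
            sys.exit('More than one wheel found in %s, this should not happen' % wheel_path)
        return wheels[0]
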
diff --git a/poky/meta/classes/python_poetry_core.bbclass b/poky/meta/classes/python_poetry_core.bbclass
deleted file mode 100644
index 0aaf66b..0000000
--- a/poky/meta/classes/python_poetry_core.bbclass
+++ /dev/null
@@ -1,3 +0,0 @@
-inherit python_pep517 python3native setuptools3-base
-
-DEPENDS += "python3-poetry-core-native"
diff --git a/poky/meta/classes/python_pyo3.bbclass b/poky/meta/classes/python_pyo3.bbclass
deleted file mode 100644
index 10cc3a0..0000000
--- a/poky/meta/classes/python_pyo3.bbclass
+++ /dev/null
@@ -1,30 +0,0 @@
-#
-# This class helps make sure that Python extensions built with PyO3
-# and setuptools_rust properly set up the environment for cross compilation
-#
-
-inherit cargo python3-dir siteinfo
-
-export PYO3_CROSS="1"
-export PYO3_CROSS_PYTHON_VERSION="${PYTHON_BASEVERSION}"
-export PYO3_CROSS_LIB_DIR="${STAGING_LIBDIR}"
-export CARGO_BUILD_TARGET="${HOST_SYS}"
-export RUSTFLAGS
-export PYO3_PYTHON="${PYTHON}"
-export PYO3_CONFIG_FILE="${WORKDIR}/pyo3.config"
-
-python_pyo3_do_configure () {
-    cat > ${WORKDIR}/pyo3.config << EOF
-implementation=CPython
-version=${PYTHON_BASEVERSION}
-shared=true
-abi3=false
-lib_name=${PYTHON_DIR}
-lib_dir=${STAGING_LIBDIR}
-pointer_width=${SITEINFO_BITS}
-build_flags=WITH_THREAD
-suppress_build_script_link_lines=false
-EOF
-}
-
-EXPORT_FUNCTIONS do_configure
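
python_pyo3_do_configure above simply serialises a handful of cross-compilation facts into the file PyO3 reads via PYO3_CONFIG_FILE. A hedged sketch of that writer, with hard-coded example values standing in for the BitBake variables:

    def write_pyo3_config(path, version='3.10', lib_dir='/usr/lib', pointer_width=64):
        # Field names match the config emitted above; the values are examples only.
        lines = [
            'implementation=CPython',
            'version=%s' % version,
            'shared=true',
            'abi3=false',
            'lib_name=python%s' % version,
            'lib_dir=%s' % lib_dir,
            'pointer_width=%d' % pointer_width,
            'build_flags=WITH_THREAD',
            'suppress_build_script_link_lines=false',
        ]
        with open(path, 'w') as f:
            f.write('\n'.join(lines) + '\n')
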
diff --git a/poky/meta/classes/python_setuptools3_rust.bbclass b/poky/meta/classes/python_setuptools3_rust.bbclass
deleted file mode 100644
index f12e5d0..0000000
--- a/poky/meta/classes/python_setuptools3_rust.bbclass
+++ /dev/null
@@ -1,11 +0,0 @@
-inherit python_pyo3 setuptools3
-
-DEPENDS += "python3-setuptools-rust-native"
-
-python_setuptools3_rust_do_configure() {
-    python_pyo3_do_configure
-    cargo_common_do_configure
-    setuptools3_do_configure
-}
-
-EXPORT_FUNCTIONS do_configure
diff --git a/poky/meta/classes/python_setuptools_build_meta.bbclass b/poky/meta/classes/python_setuptools_build_meta.bbclass
deleted file mode 100644
index 974054f..0000000
--- a/poky/meta/classes/python_setuptools_build_meta.bbclass
+++ /dev/null
@@ -1,3 +0,0 @@
-inherit setuptools3-base python_pep517
-
-DEPENDS += "python3-setuptools-native python3-wheel-native"
diff --git a/poky/meta/classes/qemu.bbclass b/poky/meta/classes/qemu.bbclass
deleted file mode 100644
index 7493ac3..0000000
--- a/poky/meta/classes/qemu.bbclass
+++ /dev/null
@@ -1,71 +0,0 @@
-#
-# This class contains functions for recipes that need QEMU or test for its
-# existence.
-#
-
-def qemu_target_binary(data):
-    package_arch = data.getVar("PACKAGE_ARCH")
-    qemu_target_binary = (data.getVar("QEMU_TARGET_BINARY_%s" % package_arch) or "")
-    if qemu_target_binary:
-        return qemu_target_binary
-
-    target_arch = data.getVar("TARGET_ARCH")
-    if target_arch in ("i486", "i586", "i686"):
-        target_arch = "i386"
-    elif target_arch == "powerpc":
-        target_arch = "ppc"
-    elif target_arch == "powerpc64":
-        target_arch = "ppc64"
-    elif target_arch == "powerpc64le":
-        target_arch = "ppc64le"
-
-    return "qemu-" + target_arch
-
-def qemu_wrapper_cmdline(data, rootfs_path, library_paths):
-    import string
-
-    qemu_binary = qemu_target_binary(data)
-    if qemu_binary == "qemu-allarch":
-        qemu_binary = "qemuwrapper"
-
-    qemu_options = data.getVar("QEMU_OPTIONS")    
-
-    return "PSEUDO_UNLOAD=1 " + qemu_binary + " " + qemu_options + " -L " + rootfs_path\
-            + " -E LD_LIBRARY_PATH=" + ":".join(library_paths) + " "
-
-# The next function returns a string containing the command needed to run a
-# certain binary through qemu. For example, if a postinstall scriptlet must run
-# at do_rootfs time and the postinstall is architecture dependent, we can run
-# it through qemu. In the postinstall scriptlet, we could use the following:
-#
-# ${@qemu_run_binary(d, '$D', '/usr/bin/test_app')} [test_app arguments]
-#
-def qemu_run_binary(data, rootfs_path, binary):
-    libdir = rootfs_path + data.getVar("libdir", False)
-    base_libdir = rootfs_path + data.getVar("base_libdir", False)
-
-    return qemu_wrapper_cmdline(data, rootfs_path, [libdir, base_libdir]) + rootfs_path + binary
-
-# QEMU_EXTRAOPTIONS is not meant to be directly used, the extensions are
-# PACKAGE_ARCH, *NOT* overrides.
-# In some cases (e.g. ppc) simply being arch specific (apparently) isn't good
-# enough and a PACKAGE_ARCH specific -cpu option is needed (hence we have to do
-# this dance). For others (e.g. arm) a -cpu option is not necessary, since the
-# qemu-arm default CPU supports all required architecture levels.
-
-QEMU_OPTIONS = "-r ${OLDEST_KERNEL} ${@d.getVar("QEMU_EXTRAOPTIONS_%s" % d.getVar('PACKAGE_ARCH')) or ""}"
-QEMU_OPTIONS[vardeps] += "QEMU_EXTRAOPTIONS_${PACKAGE_ARCH}"
-
-QEMU_EXTRAOPTIONS_ppce500v2 = " -cpu e500v2"
-QEMU_EXTRAOPTIONS_ppce500mc = " -cpu e500mc"
-QEMU_EXTRAOPTIONS_ppce5500 = " -cpu e500mc"
-QEMU_EXTRAOPTIONS_ppc64e5500 = " -cpu e500mc"
-QEMU_EXTRAOPTIONS_ppce6500 = " -cpu e500mc"
-QEMU_EXTRAOPTIONS_ppc64e6500 = " -cpu e500mc"
-QEMU_EXTRAOPTIONS_ppc7400 = " -cpu 7400"
-QEMU_EXTRAOPTIONS_powerpc64le = " -cpu POWER9"
-# Some packages, e.g. fwupd, set PACKAGE_ARCH = MACHINE_ARCH and use meson, which
-# needs the right options passed to usermode qemu
-QEMU_EXTRAOPTIONS_qemuppc = " -cpu 7400"
-QEMU_EXTRAOPTIONS_qemuppc64 = " -cpu POWER9"
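
For reference, the wrapper command assembled by qemu_wrapper_cmdline() above boils down to string concatenation; the following standalone sketch shows the shape of the result with made-up arguments:

    def qemu_wrapper_cmdline(qemu_binary, qemu_options, rootfs_path, library_paths):
        # Same string the removed class builds: disable pseudo, point qemu at the
        # rootfs with -L and pre-load the target's library directories.
        return ('PSEUDO_UNLOAD=1 ' + qemu_binary + ' ' + qemu_options + ' -L ' + rootfs_path
                + ' -E LD_LIBRARY_PATH=' + ':'.join(library_paths) + ' ')

    # qemu_wrapper_cmdline('qemu-arm', '-r 5.15', '/rootfs', ['/rootfs/usr/lib', '/rootfs/lib'])
    # -> PSEUDO_UNLOAD=1 qemu-arm -r 5.15 -L /rootfs -E LD_LIBRARY_PATH=/rootfs/usr/lib:/rootfs/lib
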
diff --git a/poky/meta/classes/qemuboot.bbclass b/poky/meta/classes/qemuboot.bbclass
deleted file mode 100644
index ad84899..0000000
--- a/poky/meta/classes/qemuboot.bbclass
+++ /dev/null
@@ -1,165 +0,0 @@
-# Help runqemu boot the target board. "QB" means Qemu Boot. The following
-# vars can be set in conf files, such as <bsp.conf>, to make an image
-# bootable by runqemu:
-#
-# QB_SYSTEM_NAME: qemu name, e.g., "qemu-system-i386"
-#
-# QB_OPT_APPEND: options to append to qemu, e.g., "-device usb-mouse"
-#
-# QB_DEFAULT_KERNEL: default kernel to boot, e.g., "bzImage"
-#
-# QB_DEFAULT_FSTYPE: default FSTYPE to boot, e.g., "ext4"
-#
-# QB_MEM: memory, e.g., "-m 512"
-#
-# QB_MACHINE: qemu machine, e.g., "-machine virt"
-#
-# QB_CPU: qemu cpu, e.g., "-cpu qemu32"
-#
-# QB_CPU_KVM: the similar to QB_CPU, but used when kvm, e.g., '-cpu kvm64',
-#             set it when support kvm.
-#
-# QB_SMP: number of CPU cores inside the qemu guest, each mapped to a thread on the host,
-#             e.g. "-smp 8".
-#
-# QB_KERNEL_CMDLINE_APPEND: options to append to kernel's -append
-#                           option, e.g., "console=ttyS0 console=tty"
-#
-# QB_DTB: qemu dtb name
-#
-# QB_AUDIO_DRV: qemu audio driver, e.g., "alsa", set it when support audio
-#
-# QB_AUDIO_OPT: qemu audio option, e.g., "-device AC97", used
-#               when QB_AUDIO_DRV is set.
-#
-# QB_RNG: Pass-through for the host random number generator; it can speed up boot
-#         in system mode, where the system is experiencing entropy starvation
-#
-# QB_KERNEL_ROOT: kernel's root, e.g., /dev/vda
-#                 By default "/dev/vda rw" gets passed to the kernel.
-#                 To mount the rootfs read-only QB_KERNEL_ROOT can be set to e.g. "/dev/vda ro".
-#
-# QB_NETWORK_DEVICE: network device, e.g., "-device virtio-net-pci,netdev=net0,mac=@MAC@",
-#                    it needs work with QB_TAP_OPT and QB_SLIRP_OPT.
-#                    Note, runqemu will replace @MAC@ with a predefined mac, you can set
-#                    a custom one, but that may cause conflicts when multiple qemus are
-#                    running on the same host.
-#                    Note: If more than one interface of type -device virtio-net-device gets added,
-#                          QB_NETWORK_DEVICE:prepend might be used, since Qemu enumerates the eth*
-#                          devices in reverse order to -device arguments.
-#
-# QB_TAP_OPT: network option for 'tap' mode, e.g.,
-#             "-netdev tap,id=net0,ifname=@TAP@,script=no,downscript=no"
-#              Note, runqemu will replace "@TAP@" with the one which is used, such as tap0, tap1 ...
-#
-# QB_SLIRP_OPT: network option for SLIRP mode, e.g., -netdev user,id=net0"
-#
-# QB_CMDLINE_IP_SLIRP: If QB_NETWORK_DEVICE adds more than one network interface to qemu, usually the
-#                      ip= kernel command line argument needs to be changed accordingly. Details are documented
-#                      in the kernel documentation https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt
-#                      Example to configure only the first interface: "ip=eth0:dhcp"
-# QB_CMDLINE_IP_TAP: This parameter is similar to the QB_CMDLINE_IP_SLIRP parameter. Since the tap interface requires
-#                    static IP configuration @CLIENT@ and @GATEWAY@ place holders are replaced by the IP and the gateway
-#                    address of the qemu guest by runqemu.
-#                    Example: "ip=192.168.7.@CLIENT@::192.168.7.@GATEWAY@:255.255.255.0::eth0"
-#
-# QB_ROOTFS_OPT: used as rootfs, e.g.,
-#               "-drive id=disk0,file=@ROOTFS@,if=none,format=raw -device virtio-blk-device,drive=disk0"
-#              Note, runqemu will replace "@ROOTFS@" with the one which is used, such as core-image-minimal-qemuarm64.ext4.
-#
-# QB_SERIAL_OPT: serial port, e.g., "-serial mon:stdio"
-#
-# QB_TCPSERIAL_OPT: tcp serial port option, e.g.,
-#                   " -device virtio-serial-device -chardev socket,id=virtcon,port=@PORT@,host=127.0.0.1 -device virtconsole,chardev=virtcon"
-#                   Note, runqemu will replace "@PORT@" with the port number which is used.
-#
-# QB_ROOTFS_EXTRA_OPT: extra options to be appended to the rootfs device in case there is none specified by QB_ROOTFS_OPT.
-#                      Can be used to automatically determine the image from the other variables
-#                      but define things like 'bootindex' when booting from EFI or 'readonly' when using squashfs
-#                      without the need to specify a dedicated qemu configuration
-#
-# QB_GRAPHICS: QEMU video card type (e.g. "-vga std")
-#
-# Usage:
-# IMAGE_CLASSES += "qemuboot"
-# See "runqemu help" for more info
-
-QB_MEM ?= "-m 256"
-QB_SMP ?= ""
-QB_SERIAL_OPT ?= "-serial mon:stdio -serial null"
-QB_DEFAULT_KERNEL ?= "${KERNEL_IMAGETYPE}"
-QB_DEFAULT_FSTYPE ?= "ext4"
-QB_RNG ?= "-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0"
-QB_OPT_APPEND ?= ""
-QB_NETWORK_DEVICE ?= "-device virtio-net-pci,netdev=net0,mac=@MAC@"
-QB_CMDLINE_IP_SLIRP ?= "ip=dhcp"
-QB_CMDLINE_IP_TAP ?= "ip=192.168.7.@CLIENT@::192.168.7.@GATEWAY@:255.255.255.0::eth0:off:8.8.8.8"
-QB_ROOTFS_EXTRA_OPT ?= ""
-QB_GRAPHICS ?= ""
-
-# This should be kept aligned with ROOT_VM
-QB_DRIVE_TYPE ?= "/dev/sd"
-
-inherit image-artifact-names
-
-# Create qemuboot.conf
-addtask do_write_qemuboot_conf after do_rootfs before do_image
-
-def qemuboot_vars(d):
-    build_vars = ['MACHINE', 'TUNE_ARCH', 'DEPLOY_DIR_IMAGE',
-                'KERNEL_IMAGETYPE', 'IMAGE_NAME', 'IMAGE_LINK_NAME',
-                'STAGING_DIR_NATIVE', 'STAGING_BINDIR_NATIVE',
-                'STAGING_DIR_HOST', 'SERIAL_CONSOLES', 'UNINATIVE_LOADER']
-    return build_vars + [k for k in d.keys() if k.startswith('QB_')]
-
-do_write_qemuboot_conf[vardeps] += "${@' '.join(qemuboot_vars(d))}"
-do_write_qemuboot_conf[vardepsexclude] += "TOPDIR"
-python do_write_qemuboot_conf() {
-    import configparser
-
-    qemuboot = "%s/%s.qemuboot.conf" % (d.getVar('IMGDEPLOYDIR'), d.getVar('IMAGE_NAME'))
-    if d.getVar('IMAGE_LINK_NAME'):
-        qemuboot_link = "%s/%s.qemuboot.conf" % (d.getVar('IMGDEPLOYDIR'), d.getVar('IMAGE_LINK_NAME'))
-    else:
-        qemuboot_link = ""
-    finalpath = d.getVar("DEPLOY_DIR_IMAGE")
-    topdir = d.getVar('TOPDIR')
-    cf = configparser.ConfigParser()
-    cf.add_section('config_bsp')
-    for k in sorted(qemuboot_vars(d)):
-        if ":" in k:
-            continue
-        # qemu-helper-native sysroot is not removed by rm_work and
-        # contains all tools required by runqemu
-        if k == 'STAGING_BINDIR_NATIVE':
-            val = os.path.join(d.getVar('BASE_WORKDIR'), d.getVar('BUILD_SYS'),
-                               'qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/')
-        else:
-            val = d.getVar(k)
-        if val is None:
-            continue
-        # we only want to write out relative paths so that we can relocate images
-        # and still run them
-        if val.startswith(topdir):
-            val = os.path.relpath(val, finalpath)
-        cf.set('config_bsp', k, '%s' % val)
-
-    # QB_DEFAULT_KERNEL's value of KERNEL_IMAGETYPE is the name of a symlink
-    # to the kernel file, which hinders relocatability of the qb conf.
-    # Read the link and replace it with the full filename of the target.
-    kernel_link = os.path.join(d.getVar('DEPLOY_DIR_IMAGE'), d.getVar('QB_DEFAULT_KERNEL'))
-    kernel = os.path.realpath(kernel_link)
-    # we only want to write out relative paths so that we can relocate images
-    # and still run them
-    kernel = os.path.relpath(kernel, finalpath)
-    cf.set('config_bsp', 'QB_DEFAULT_KERNEL', kernel)
-
-    bb.utils.mkdirhier(os.path.dirname(qemuboot))
-    with open(qemuboot, 'w') as f:
-        cf.write(f)
-
-    if qemuboot_link and qemuboot_link != qemuboot:
-        if os.path.lexists(qemuboot_link):
-           os.remove(qemuboot_link)
-        os.symlink(os.path.basename(qemuboot), qemuboot_link)
-}
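
The key trick in do_write_qemuboot_conf above is rewriting any value under TOPDIR relative to DEPLOY_DIR_IMAGE, so the generated qemuboot.conf keeps working after the image directory is copied elsewhere. A minimal sketch of that relocation, with example paths:

    import os

    def relocate(val, topdir, finalpath):
        # Values outside the build tree are left untouched.
        if val.startswith(topdir):
            return os.path.relpath(val, finalpath)
        return val

    # relocate('/build/tmp/deploy/images/qemux86-64/bzImage',
    #          '/build', '/build/tmp/deploy/images/qemux86-64')
    # -> 'bzImage'
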
diff --git a/poky/meta/classes/recipe_sanity.bbclass b/poky/meta/classes/recipe_sanity.bbclass
index 7fa4a84..1c2e24c 100644
--- a/poky/meta/classes/recipe_sanity.bbclass
+++ b/poky/meta/classes/recipe_sanity.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 def __note(msg, d):
     bb.note("%s: recipe_sanity: %s" % (d.getVar("P"), msg))
 
diff --git a/poky/meta/classes/relative_symlinks.bbclass b/poky/meta/classes/relative_symlinks.bbclass
index 3157737..9ee20e0 100644
--- a/poky/meta/classes/relative_symlinks.bbclass
+++ b/poky/meta/classes/relative_symlinks.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 do_install[postfuncs] += "install_relative_symlinks"
 
 python install_relative_symlinks () {
diff --git a/poky/meta/classes/relocatable.bbclass b/poky/meta/classes/relocatable.bbclass
index af04be5..d0a623f 100644
--- a/poky/meta/classes/relocatable.bbclass
+++ b/poky/meta/classes/relocatable.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 inherit chrpath
 
 SYSROOT_PREPROCESS_FUNCS += "relocatable_binaries_preprocess relocatable_native_pcfiles"
diff --git a/poky/meta/classes/remove-libtool.bbclass b/poky/meta/classes/remove-libtool.bbclass
index 3fd0cd5..8e98738 100644
--- a/poky/meta/classes/remove-libtool.bbclass
+++ b/poky/meta/classes/remove-libtool.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # This class removes libtool .la files after do_install
 
 REMOVE_LIBTOOL_LA ?= "1"
diff --git a/poky/meta/classes/report-error.bbclass b/poky/meta/classes/report-error.bbclass
index 6866d47..2f692fb 100644
--- a/poky/meta/classes/report-error.bbclass
+++ b/poky/meta/classes/report-error.bbclass
@@ -4,7 +4,8 @@
 # Copyright (C) 2013 Intel Corporation
 # Author: Andreea Brandusa Proca <andreea.b.proca@intel.com>
 #
-# Licensed under the MIT license, see COPYING.MIT for details
+# SPDX-License-Identifier: MIT
+#
 
 ERR_REPORT_DIR ?= "${LOG_DIR}/error-report"
 
diff --git a/poky/meta/classes/rm_work.bbclass b/poky/meta/classes/rm_work.bbclass
index 5f12d5a..c493eff 100644
--- a/poky/meta/classes/rm_work.bbclass
+++ b/poky/meta/classes/rm_work.bbclass
@@ -1,4 +1,10 @@
 #
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
 # Removes source after build
 #
 # To use it add that line to conf/local.conf:
diff --git a/poky/meta/classes/rm_work_and_downloads.bbclass b/poky/meta/classes/rm_work_and_downloads.bbclass
index 15e6091..2695a38 100644
--- a/poky/meta/classes/rm_work_and_downloads.bbclass
+++ b/poky/meta/classes/rm_work_and_downloads.bbclass
@@ -1,8 +1,7 @@
 # Author:       Patrick Ohly <patrick.ohly@intel.com>
 # Copyright:    Copyright (C) 2015 Intel Corporation
 #
-# This file is licensed under the MIT license, see COPYING.MIT in
-# this source distribution for the terms.
+# SPDX-License-Identifier: MIT
 
 # This class is used like rm_work:
 # INHERIT += "rm_work_and_downloads"
diff --git a/poky/meta/classes/rootfs-postcommands.bbclass b/poky/meta/classes/rootfs-postcommands.bbclass
deleted file mode 100644
index a8a952f..0000000
--- a/poky/meta/classes/rootfs-postcommands.bbclass
+++ /dev/null
@@ -1,435 +0,0 @@
-
-# Zap the root password if debug-tweaks and empty-root-password features are not enabled
-ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'empty-root-password' ], "", "zap_empty_root_password; ",d)}'
-
-# Allow dropbear/openssh to accept logins from accounts with an empty password string if debug-tweaks or allow-empty-password is enabled
-ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'allow-empty-password' ], "ssh_allow_empty_password; ", "",d)}'
-
-# Allow dropbear/openssh to accept root logins if debug-tweaks or allow-root-login is enabled
-ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'allow-root-login' ], "ssh_allow_root_login; ", "",d)}'
-
-# Enable postinst logging if debug-tweaks or post-install-logging is enabled
-ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'post-install-logging' ], "postinst_enable_logging; ", "",d)}'
-
-# Create /etc/timestamp during image construction to give a reasonably sane default time setting
-ROOTFS_POSTPROCESS_COMMAND += "rootfs_update_timestamp; "
-
-# Tweak the mount options for rootfs in /etc/fstab if read-only-rootfs is enabled
-ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("IMAGE_FEATURES", "read-only-rootfs", "read_only_rootfs_hook; ", "",d)}'
-
-# We also need to do the same for the kernel boot parameters,
-# otherwise kernel or initramfs end up mounting the rootfs read/write
-# (the default) if supported by the underlying storage.
-#
-# We do this with :append because the default value might get set later with ?=
-# and we don't want to disable such a default by setting a value here.
-APPEND:append = '${@bb.utils.contains("IMAGE_FEATURES", "read-only-rootfs", " ro", "", d)}'
-
-# Generates test data file with data store variables expanded in json format
-ROOTFS_POSTPROCESS_COMMAND += "write_image_test_data; "
-
-# Write manifest
-IMAGE_MANIFEST = "${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.manifest"
-ROOTFS_POSTUNINSTALL_COMMAND =+ "write_image_manifest ; "
-# Set default postinst log file
-POSTINST_LOGFILE ?= "${localstatedir}/log/postinstall.log"
-# Set default target for systemd images
-SYSTEMD_DEFAULT_TARGET ?= '${@bb.utils.contains_any("IMAGE_FEATURES", [ "x11-base", "weston" ], "graphical.target", "multi-user.target", d)}'
-ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("DISTRO_FEATURES", "systemd", "set_systemd_default_target; systemd_create_users;", "", d)}'
-
-ROOTFS_POSTPROCESS_COMMAND += 'empty_var_volatile;'
-
-ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("DISTRO_FEATURES", "overlayfs", "overlayfs_qa_check; overlayfs_postprocess;", "", d)}'
-
-inherit image-artifact-names
-
-# Sort the user and group entries in /etc by ID in order to make the content
-# deterministic. Package installs are not deterministic, causing the ordering
-# of entries to change between builds. In case this isn't desired,
-# the command can be overridden.
-#
-# Note that useradd-staticids.bbclass has to be used to ensure that
-# the numeric IDs of dynamically created entries remain stable.
-#
-# We want this to run as late as possible, in particular after
-# systemd_sysusers_create and set_user_group. Using :append is not
-# enough for that; set_user_group is added that way and would end
-# up running after us.
-SORT_PASSWD_POSTPROCESS_COMMAND ??= " sort_passwd; "
-python () {
-    d.appendVar('ROOTFS_POSTPROCESS_COMMAND', '${SORT_PASSWD_POSTPROCESS_COMMAND}')
-    d.appendVar('ROOTFS_POSTPROCESS_COMMAND', 'rootfs_reproducible;')
-}
-
-systemd_create_users () {
-	for conffile in ${IMAGE_ROOTFS}/usr/lib/sysusers.d/*.conf; do
-		[ -e $conffile ] || continue
-		grep -v "^#" $conffile | sed -e '/^$/d' | while read type name id comment; do
-		if [ "$type" = "u" ]; then
-			useradd_params="--shell /sbin/nologin"
-			[ "$id" != "-" ] && useradd_params="$useradd_params --uid $id"
-			[ "$comment" != "-" ] && useradd_params="$useradd_params --comment $comment"
-			useradd_params="$useradd_params --system $name"
-			eval useradd --root ${IMAGE_ROOTFS} $useradd_params || true
-		elif [ "$type" = "g" ]; then
-			groupadd_params=""
-			[ "$id" != "-" ] && groupadd_params="$groupadd_params --gid $id"
-			groupadd_params="$groupadd_params --system $name"
-			eval groupadd --root ${IMAGE_ROOTFS} $groupadd_params || true
-		elif [ "$type" = "m" ]; then
-			group=$id
-			eval groupadd --root ${IMAGE_ROOTFS} --system $group || true
-			eval useradd --root ${IMAGE_ROOTFS} --shell /sbin/nologin --system $name --no-user-group || true
-			eval usermod --root ${IMAGE_ROOTFS} -a -G $group $name
-		fi
-		done
-	done
-}
-
-#
-# A hook function to support read-only-rootfs IMAGE_FEATURES
-#
-read_only_rootfs_hook () {
-	# Tweak the mount option and fs_passno for rootfs in fstab
-	if [ -f ${IMAGE_ROOTFS}/etc/fstab ]; then
-		sed -i -e '/^[#[:space:]]*\/dev\/root/{s/defaults/ro/;s/\([[:space:]]*[[:digit:]]\)\([[:space:]]*\)[[:digit:]]$/\1\20/}' ${IMAGE_ROOTFS}/etc/fstab
-	fi
-
-	# Tweak the "mount -o remount,rw /" command in busybox-inittab inittab
-	if [ -f ${IMAGE_ROOTFS}/etc/inittab ]; then
-		sed -i 's|/bin/mount -o remount,rw /|/bin/mount -o remount,ro /|' ${IMAGE_ROOTFS}/etc/inittab
-	fi
-
-	# If we're using openssh and the /etc/ssh directory has no pre-generated keys,
-	# we should configure openssh to use the configuration file /etc/ssh/sshd_config_readonly
-	# and the keys under /var/run/ssh.
-	if [ -d ${IMAGE_ROOTFS}/etc/ssh ]; then
-		if [ -e ${IMAGE_ROOTFS}/etc/ssh/ssh_host_rsa_key ]; then
-			echo "SYSCONFDIR=\${SYSCONFDIR:-/etc/ssh}" >> ${IMAGE_ROOTFS}/etc/default/ssh
-			echo "SSHD_OPTS=" >> ${IMAGE_ROOTFS}/etc/default/ssh
-		else
-			echo "SYSCONFDIR=\${SYSCONFDIR:-/var/run/ssh}" >> ${IMAGE_ROOTFS}/etc/default/ssh
-			echo "SSHD_OPTS='-f /etc/ssh/sshd_config_readonly'" >> ${IMAGE_ROOTFS}/etc/default/ssh
-		fi
-	fi
-
-	# Also tweak the key location for dropbear in the same way.
-	if [ -d ${IMAGE_ROOTFS}/etc/dropbear ]; then
-		if [ ! -e ${IMAGE_ROOTFS}/etc/dropbear/dropbear_rsa_host_key ]; then
-			echo "DROPBEAR_RSAKEY_DIR=/var/lib/dropbear" >> ${IMAGE_ROOTFS}/etc/default/dropbear
-		fi
-	fi
-
-	if ${@bb.utils.contains("DISTRO_FEATURES", "sysvinit", "true", "false", d)}; then
-		# Change the value of ROOTFS_READ_ONLY in /etc/default/rcS to yes
-		if [ -e ${IMAGE_ROOTFS}/etc/default/rcS ]; then
-			sed -i 's/ROOTFS_READ_ONLY=no/ROOTFS_READ_ONLY=yes/' ${IMAGE_ROOTFS}/etc/default/rcS
-		fi
-		# Run populate-volatile.sh at rootfs time to set up basic files
-		# and directories to support read-only rootfs.
-		if [ -x ${IMAGE_ROOTFS}/etc/init.d/populate-volatile.sh ]; then
-			${IMAGE_ROOTFS}/etc/init.d/populate-volatile.sh
-		fi
-	fi
-
-	if ${@bb.utils.contains("DISTRO_FEATURES", "systemd", "true", "false", d)}; then
-	# Create machine-id
-	# 20:12 < mezcalero> koen: you have three options: a) run systemd-machine-id-setup at install time, b) have / read-only and an empty file there (for stateless) and c) boot with / writable
-		touch ${IMAGE_ROOTFS}${sysconfdir}/machine-id
-	fi
-}
-
-#
-# This function disallows empty root passwords
-#
-zap_empty_root_password () {
-	if [ -e ${IMAGE_ROOTFS}/etc/shadow ]; then
-		sed -i 's%^root::%root:*:%' ${IMAGE_ROOTFS}/etc/shadow
-        fi
-	if [ -e ${IMAGE_ROOTFS}/etc/passwd ]; then
-		sed -i 's%^root::%root:*:%' ${IMAGE_ROOTFS}/etc/passwd
-	fi
-}
-
-#
-# allow dropbear/openssh to accept logins from accounts with an empty password string
-#
-ssh_allow_empty_password () {
-	for config in sshd_config sshd_config_readonly; do
-		if [ -e ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config ]; then
-			sed -i 's/^[#[:space:]]*PermitEmptyPasswords.*/PermitEmptyPasswords yes/' ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config
-		fi
-	done
-
-	if [ -e ${IMAGE_ROOTFS}${sbindir}/dropbear ] ; then
-		if grep -q DROPBEAR_EXTRA_ARGS ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear 2>/dev/null ; then
-			if ! grep -q "DROPBEAR_EXTRA_ARGS=.*-B" ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear ; then
-				sed -i 's/^DROPBEAR_EXTRA_ARGS="*\([^"]*\)"*/DROPBEAR_EXTRA_ARGS="\1 -B"/' ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear
-			fi
-		else
-			printf '\nDROPBEAR_EXTRA_ARGS="-B"\n' >> ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear
-		fi
-	fi
-
-	if [ -d ${IMAGE_ROOTFS}${sysconfdir}/pam.d ] ; then
-		for f in `find ${IMAGE_ROOTFS}${sysconfdir}/pam.d/* -type f -exec test -e {} \; -print`
-		do
-			sed -i 's/nullok_secure/nullok/' $f
-		done
-	fi
-}
-
-#
-# allow dropbear/openssh to accept root logins
-#
-ssh_allow_root_login () {
-	for config in sshd_config sshd_config_readonly; do
-		if [ -e ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config ]; then
-			sed -i 's/^[#[:space:]]*PermitRootLogin.*/PermitRootLogin yes/' ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config
-		fi
-	done
-
-	if [ -e ${IMAGE_ROOTFS}${sbindir}/dropbear ] ; then
-		if grep -q DROPBEAR_EXTRA_ARGS ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear 2>/dev/null ; then
-			sed -i '/^DROPBEAR_EXTRA_ARGS=/ s/-w//' ${IMAGE_ROOTFS}${sysconfdir}/default/dropbear
-		fi
-	fi
-}
-
-python sort_passwd () {
-    import rootfspostcommands
-    rootfspostcommands.sort_passwd(d.expand('${IMAGE_ROOTFS}${sysconfdir}'))
-}
-
-#
-# Enable postinst logging
-#
-postinst_enable_logging () {
-	mkdir -p ${IMAGE_ROOTFS}${sysconfdir}/default
-	echo "POSTINST_LOGGING=1" >> ${IMAGE_ROOTFS}${sysconfdir}/default/postinst
-	echo "LOGFILE=${POSTINST_LOGFILE}" >> ${IMAGE_ROOTFS}${sysconfdir}/default/postinst
-}
-
-#
-# Modify systemd default target
-#
-set_systemd_default_target () {
-	if [ -d ${IMAGE_ROOTFS}${sysconfdir}/systemd/system -a -e ${IMAGE_ROOTFS}${systemd_system_unitdir}/${SYSTEMD_DEFAULT_TARGET} ]; then
-		ln -sf ${systemd_system_unitdir}/${SYSTEMD_DEFAULT_TARGET} ${IMAGE_ROOTFS}${sysconfdir}/systemd/system/default.target
-	fi
-}
-
-# If /var/volatile is not empty, we have seen problems where programs such as the
-# journal make assumptions based on the contents of /var/volatile. The journal
-# would then write to /var/volatile before it was mounted, thus hiding the
-# items previously written.
-#
-# This change is to attempt to fix those types of issues in a way that doesn't
-# affect users that may not be using /var/volatile.
-empty_var_volatile () {
-	if [ -e ${IMAGE_ROOTFS}/etc/fstab ]; then
-		match=`awk '$1 !~ "#" && $2 ~ /\/var\/volatile/{print $2}' ${IMAGE_ROOTFS}/etc/fstab 2> /dev/null`
-		if [ -n "$match" ]; then
-			find ${IMAGE_ROOTFS}/var/volatile -mindepth 1 -delete
-		fi
-	fi
-}
-
-# Turn any symbolic /sbin/init link into a file
-remove_init_link () {
-	if [ -h ${IMAGE_ROOTFS}/sbin/init ]; then
-		LINKFILE=${IMAGE_ROOTFS}`readlink ${IMAGE_ROOTFS}/sbin/init`
-		rm ${IMAGE_ROOTFS}/sbin/init
-		cp $LINKFILE ${IMAGE_ROOTFS}/sbin/init
-	fi
-}
-
-make_zimage_symlink_relative () {
-	if [ -L ${IMAGE_ROOTFS}/boot/zImage ]; then
-		(cd ${IMAGE_ROOTFS}/boot/ && for i in `ls zImage-* | sort`; do ln -sf $i zImage; done)
-	fi
-}
-
-python write_image_manifest () {
-    from oe.rootfs import image_list_installed_packages
-    from oe.utils import format_pkg_list
-
-    deploy_dir = d.getVar('IMGDEPLOYDIR')
-    link_name = d.getVar('IMAGE_LINK_NAME')
-    manifest_name = d.getVar('IMAGE_MANIFEST')
-
-    if not manifest_name:
-        return
-
-    pkgs = image_list_installed_packages(d)
-    with open(manifest_name, 'w+') as image_manifest:
-        image_manifest.write(format_pkg_list(pkgs, "ver"))
-
-    if os.path.exists(manifest_name) and link_name:
-        manifest_link = deploy_dir + "/" + link_name + ".manifest"
-        if manifest_link != manifest_name:
-            if os.path.lexists(manifest_link):
-                os.remove(manifest_link)
-            os.symlink(os.path.basename(manifest_name), manifest_link)
-}
-
-# Can be used to create /etc/timestamp during image construction to give a reasonably
-# sane default time setting
-rootfs_update_timestamp () {
-	if [ "${REPRODUCIBLE_TIMESTAMP_ROOTFS}" != "" ]; then
-		# Convert UTC into %4Y%2m%2d%2H%2M%2S
-		sformatted=`date -u -d @${REPRODUCIBLE_TIMESTAMP_ROOTFS} +%4Y%2m%2d%2H%2M%2S`
-	else
-		sformatted=`date -u +%4Y%2m%2d%2H%2M%2S`
-	fi
-	echo $sformatted > ${IMAGE_ROOTFS}/etc/timestamp
-	bbnote "rootfs_update_timestamp: set /etc/timestamp to $sformatted"
-}
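The %4Y%2m%2d%2H%2M%2S format above yields a fixed-width YYYYMMDDHHMMSS string. A quick Python equivalent, assuming a hypothetical REPRODUCIBLE_TIMESTAMP_ROOTFS value:

    from datetime import datetime, timezone

    ts = 1658923200  # hypothetical REPRODUCIBLE_TIMESTAMP_ROOTFS (seconds since the epoch)
    print(datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y%m%d%H%M%S"))
    # 20220727120000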
-
-# Prevent X from being started
-rootfs_no_x_startup () {
-	if [ -f ${IMAGE_ROOTFS}/etc/init.d/xserver-nodm ]; then
-		chmod a-x ${IMAGE_ROOTFS}/etc/init.d/xserver-nodm
-	fi
-}
-
-rootfs_trim_schemas () {
-	for schema in ${IMAGE_ROOTFS}/etc/gconf/schemas/*.schemas
-	do
-		# Need this in case no files exist
-		if [ -e $schema ]; then
-			oe-trim-schemas $schema > $schema.new
-			mv $schema.new $schema
-		fi
-	done
-}
-
-rootfs_check_host_user_contaminated () {
-	contaminated="${S}/host-user-contaminated.txt"
-	HOST_USER_UID="$(PSEUDO_UNLOAD=1 id -u)"
-	HOST_USER_GID="$(PSEUDO_UNLOAD=1 id -g)"
-
-	find "${IMAGE_ROOTFS}" -path "${IMAGE_ROOTFS}/home" -prune -o \
-	    -user "$HOST_USER_UID" -print -o -group "$HOST_USER_GID" -print >"$contaminated"
-
-	sed -e "s,${IMAGE_ROOTFS},," $contaminated | while read line; do
-		bbwarn "Path in the rootfs is owned by the same user or group as the user running bitbake:" $line `ls -lan ${IMAGE_ROOTFS}/$line`
-	done
-
-	if [ -s "$contaminated" ]; then
-		bbwarn "/etc/passwd:" `cat ${IMAGE_ROOTFS}/etc/passwd`
-		bbwarn "/etc/group:" `cat ${IMAGE_ROOTFS}/etc/group`
-	fi
-}
-
-# Make any absolute links in a sysroot relative
-rootfs_sysroot_relativelinks () {
-	sysroot-relativelinks.py ${SDK_OUTPUT}/${SDKTARGETSYSROOT}
-}
-
-# Generate the test data json file
-python write_image_test_data() {
-    from oe.data import export2json
-
-    deploy_dir = d.getVar('IMGDEPLOYDIR')
-    link_name = d.getVar('IMAGE_LINK_NAME')
-    testdata_name = os.path.join(deploy_dir, "%s.testdata.json" % d.getVar('IMAGE_NAME'))
-
-    searchString = "%s/"%(d.getVar("TOPDIR")).replace("//","/")
-    export2json(d, testdata_name, searchString=searchString, replaceString="")
-
-    if os.path.exists(testdata_name) and link_name:
-        testdata_link = os.path.join(deploy_dir, "%s.testdata.json" % link_name)
-        if testdata_link != testdata_name:
-            if os.path.lexists(testdata_link):
-                os.remove(testdata_link)
-            os.symlink(os.path.basename(testdata_name), testdata_link)
-}
-write_image_test_data[vardepsexclude] += "TOPDIR"
-
-# Check for unsatisfied recommendations (RRECOMMENDS)
-python rootfs_log_check_recommends() {
-    log_path = d.expand("${T}/log.do_rootfs")
-    with open(log_path, 'r') as log:
-        for line in log:
-            if 'log_check' in line:
-                continue
-
-            if 'unsatisfied recommendation for' in line:
-                bb.warn('[log_check] %s: %s' % (d.getVar('PN'), line))
-}
-
-# Perform any additional adjustments needed to make the rootfs binary reproducible
-rootfs_reproducible () {
-	if [ "${REPRODUCIBLE_TIMESTAMP_ROOTFS}" != "" ]; then
-		# Convert UTC into %4Y%2m%2d%2H%2M%2S
-		sformatted=`date -u -d @${REPRODUCIBLE_TIMESTAMP_ROOTFS} +%4Y%2m%2d%2H%2M%2S`
-		echo $sformatted > ${IMAGE_ROOTFS}/etc/version
-		bbnote "rootfs_reproducible: set /etc/version to $sformatted"
-
-		if [ -d ${IMAGE_ROOTFS}${sysconfdir}/gconf ]; then
-			find ${IMAGE_ROOTFS}${sysconfdir}/gconf -name '%gconf.xml' -print0 | xargs -0r \
-			sed -i -e 's@\bmtime="[0-9][0-9]*"@mtime="'${REPRODUCIBLE_TIMESTAMP_ROOTFS}'"@g'
-		fi
-	fi
-}
-
-# Perform a dumb check for unit existence, not its validity
-python overlayfs_qa_check() {
-    from oe.overlayfs import mountUnitName
-
-    overlayMountPoints = d.getVarFlags("OVERLAYFS_MOUNT_POINT") or {}
-    imagepath = d.getVar("IMAGE_ROOTFS")
-    sysconfdir = d.getVar("sysconfdir")
-    searchpaths = [oe.path.join(imagepath, sysconfdir, "systemd", "system"),
-                   oe.path.join(imagepath, d.getVar("systemd_system_unitdir"))]
-    fstabpath = oe.path.join(imagepath, sysconfdir, "fstab")
-
-    if not any(os.path.exists(path) for path in [*searchpaths, fstabpath]):
-        return
-
-    fstabDevices = []
-    if os.path.isfile(fstabpath):
-        with open(fstabpath, 'r') as f:
-            for line in f:
-                if line[0] == '#':
-                    continue
-                path = line.split(maxsplit=2)
-                if len(path) > 2:
-                    fstabDevices.append(path[1])
-
-    allUnitExist = True
-    for mountPoint in overlayMountPoints:
-        qaSkip = (d.getVarFlag("OVERLAYFS_QA_SKIP", mountPoint) or "").split()
-        if "mount-configured" in qaSkip:
-            continue
-
-        mountPath = d.getVarFlag('OVERLAYFS_MOUNT_POINT', mountPoint)
-        if mountPath in fstabDevices:
-            continue
-
-        mountUnit = mountUnitName(mountPath)
-        if any(os.path.isfile(oe.path.join(dirpath, mountUnit))
-               for dirpath in searchpaths):
-            continue
-
-        bb.warn(f'Mount path {mountPath} not found in fstab and unit '
-                f'{mountUnit} not found in systemd unit directories.')
-        bb.warn(f'Skip this check by setting OVERLAYFS_QA_SKIP[{mountPoint}] = '
-                '"mount-configured"')
-        allUnitExist = False
-
-    if not allUnitExist:
-        bb.fatal('Not all mount paths and units are installed in the image')
-}
-
-python overlayfs_postprocess() {
-    import shutil
-
-    # install helper script
-    helperScriptName = "overlayfs-create-dirs.sh"
-    helperScriptSource = oe.path.join(d.getVar("COREBASE"), "meta/files", helperScriptName)
-    helperScriptDest = oe.path.join(d.getVar("IMAGE_ROOTFS"), "/usr/sbin/", helperScriptName)
-    shutil.copyfile(helperScriptSource, helperScriptDest)
-    os.chmod(helperScriptDest, 0o755)
-}
diff --git a/poky/meta/classes/rootfs_deb.bbclass b/poky/meta/classes/rootfs_deb.bbclass
deleted file mode 100644
index 0469ba7..0000000
--- a/poky/meta/classes/rootfs_deb.bbclass
+++ /dev/null
@@ -1,39 +0,0 @@
-#
-# Copyright 2006-2007 Openedhand Ltd.
-#
-
-ROOTFS_PKGMANAGE = "dpkg apt"
-
-do_rootfs[depends] += "dpkg-native:do_populate_sysroot apt-native:do_populate_sysroot"
-do_populate_sdk[depends] += "dpkg-native:do_populate_sysroot apt-native:do_populate_sysroot bzip2-native:do_populate_sysroot"
-do_rootfs[recrdeptask] += "do_package_write_deb do_package_qa"
-do_rootfs[vardeps] += "PACKAGE_FEED_URIS PACKAGE_FEED_BASE_PATHS PACKAGE_FEED_ARCHS"
-
-do_rootfs[lockfiles] += "${DEPLOY_DIR_DEB}/deb.lock"
-do_populate_sdk[lockfiles] += "${DEPLOY_DIR_DEB}/deb.lock"
-do_populate_sdk_ext[lockfiles] += "${DEPLOY_DIR_DEB}/deb.lock"
-
-python rootfs_deb_bad_recommendations() {
-    if d.getVar("BAD_RECOMMENDATIONS"):
-        bb.warn("Debian package install does not support BAD_RECOMMENDATIONS")
-}
-do_rootfs[prefuncs] += "rootfs_deb_bad_recommendations"
-
-DEB_POSTPROCESS_COMMANDS = ""
-
-opkglibdir = "${localstatedir}/lib/opkg"
-
-python () {
-    # Map TARGET_ARCH to Debian's ideas about architectures
-    darch = d.getVar('SDK_ARCH')
-    if darch in ["x86", "i486", "i586", "i686", "pentium"]:
-         d.setVar('DEB_SDK_ARCH', 'i386')
-    elif darch == "x86_64":
-         d.setVar('DEB_SDK_ARCH', 'amd64')
-    elif darch == "arm":
-         d.setVar('DEB_SDK_ARCH', 'armel')
-    elif darch == "aarch64":
-         d.setVar('DEB_SDK_ARCH', 'arm64')
-    else:
-         bb.fatal("Unhandled SDK_ARCH %s" % darch)
-}
diff --git a/poky/meta/classes/rootfs_ipk.bbclass b/poky/meta/classes/rootfs_ipk.bbclass
deleted file mode 100644
index 245c256..0000000
--- a/poky/meta/classes/rootfs_ipk.bbclass
+++ /dev/null
@@ -1,38 +0,0 @@
-#
-# Creates a root filesystem out of IPKs
-#
-# This rootfs can be mounted via root-nfs or it can be put into a cramfs/jffs etc.
-# See image.bbclass for a usage of this.
-#
-
-EXTRAOPKGCONFIG ?= ""
-ROOTFS_PKGMANAGE = "opkg ${EXTRAOPKGCONFIG}"
-
-do_rootfs[depends] += "opkg-native:do_populate_sysroot opkg-utils-native:do_populate_sysroot"
-do_populate_sdk[depends] += "opkg-native:do_populate_sysroot opkg-utils-native:do_populate_sysroot"
-do_rootfs[recrdeptask] += "do_package_write_ipk do_package_qa"
-do_rootfs[vardeps] += "PACKAGE_FEED_URIS PACKAGE_FEED_BASE_PATHS PACKAGE_FEED_ARCHS"
-
-do_rootfs[lockfiles] += "${WORKDIR}/ipk.lock"
-do_populate_sdk[lockfiles] += "${WORKDIR}/sdk-ipk.lock"
-do_populate_sdk_ext[lockfiles] += "${WORKDIR}/sdk-ipk.lock"
-
-OPKG_PREPROCESS_COMMANDS = ""
-
-OPKG_POSTPROCESS_COMMANDS = ""
-
-OPKGLIBDIR ??= "${localstatedir}/lib"
-
-MULTILIBRE_ALLOW_REP = "${OPKGLIBDIR}/opkg|/usr/lib/opkg"
-
-python () {
-
-    if d.getVar('BUILD_IMAGES_FROM_FEEDS'):
-        flags = d.getVarFlag('do_rootfs', 'recrdeptask')
-        flags = flags.replace("do_package_write_ipk", "")
-        flags = flags.replace("do_deploy", "")
-        flags = flags.replace("do_populate_sysroot", "")
-        d.setVarFlag('do_rootfs', 'recrdeptask', flags)
-        d.setVar('OPKG_PREPROCESS_COMMANDS', "")
-        d.setVar('OPKG_POSTPROCESS_COMMANDS', '')
-}
diff --git a/poky/meta/classes/rootfs_rpm.bbclass b/poky/meta/classes/rootfs_rpm.bbclass
deleted file mode 100644
index bec4d63..0000000
--- a/poky/meta/classes/rootfs_rpm.bbclass
+++ /dev/null
@@ -1,39 +0,0 @@
-#
-# Creates a root filesystem out of rpm packages
-#
-
-ROOTFS_PKGMANAGE = "rpm dnf"
-
-# dnf is using our custom sysconfig module, and so will fail without these
-export STAGING_INCDIR
-export STAGING_LIBDIR
-
-# Add 100Meg of extra space for dnf
-IMAGE_ROOTFS_EXTRA_SPACE:append = "${@bb.utils.contains("PACKAGE_INSTALL", "dnf", " + 102400", "", d)}"
-
-# Dnf is python based, so be sure python3-native is available to us.
-EXTRANATIVEPATH += "python3-native"
-
-# opkg is needed for update-alternatives
-RPMROOTFSDEPENDS = "rpm-native:do_populate_sysroot \
-    dnf-native:do_populate_sysroot \
-    createrepo-c-native:do_populate_sysroot \
-    opkg-native:do_populate_sysroot"
-
-do_rootfs[depends] += "${RPMROOTFSDEPENDS}"
-do_populate_sdk[depends] += "${RPMROOTFSDEPENDS}"
-
-do_rootfs[recrdeptask] += "do_package_write_rpm do_package_qa"
-do_rootfs[vardeps] += "PACKAGE_FEED_URIS PACKAGE_FEED_BASE_PATHS PACKAGE_FEED_ARCHS"
-
-python () {
-    if d.getVar('BUILD_IMAGES_FROM_FEEDS'):
-        flags = d.getVarFlag('do_rootfs', 'recrdeptask')
-        flags = flags.replace("do_package_write_rpm", "")
-        flags = flags.replace("do_deploy", "")
-        flags = flags.replace("do_populate_sysroot", "")
-        d.setVarFlag('do_rootfs', 'recrdeptask', flags)
-        d.setVar('RPM_PREPROCESS_COMMANDS', '')
-        d.setVar('RPM_POSTPROCESS_COMMANDS', '')
-
-}
diff --git a/poky/meta/classes/rootfsdebugfiles.bbclass b/poky/meta/classes/rootfsdebugfiles.bbclass
deleted file mode 100644
index 85c7ec7..0000000
--- a/poky/meta/classes/rootfsdebugfiles.bbclass
+++ /dev/null
@@ -1,41 +0,0 @@
-# This class installs additional files found on the build host
-# directly into the rootfs.
-#
-# One use case is to install a constant ssh host key in
-# an image that gets created for just one machine. This
-# solves two issues:
-# - host key generation on the device can stall when the
-#   kernel has not gathered enough entropy yet (seen in practice
-#   under qemu)
-# - ssh complains by default when the host key changes
-#
-# For dropbear, with the ssh host key stored alongside the local.conf:
-# 1. Extend local.conf:
-#    INHERIT += "rootfsdebugfiles"
-#    ROOTFS_DEBUG_FILES += "${TOPDIR}/conf/dropbear_rsa_host_key ${IMAGE_ROOTFS}/etc/dropbear/dropbear_rsa_host_key ;"
-# 2. Boot the image once, copy the dropbear_rsa_host_key from
-#    the device into your build conf directory.
-# 3. An optional parameter can be used to set the file mode
-#    of the copied target, for instance:
-#    ROOTFS_DEBUG_FILES += "${TOPDIR}/conf/dropbear_rsa_host_key ${IMAGE_ROOTFS}/etc/dropbear/dropbear_rsa_host_key 0600;"
-#    in case they might be required to have a specific mode. (Shouldn't be too open, for example)
-#
-# Do not use for production images! It bypasses several
-# core build mechanisms (updating the image when one
-# of the files changes, license tracking in the image
-# manifest, ...).
-
-ROOTFS_DEBUG_FILES ?= ""
-ROOTFS_DEBUG_FILES[doc] = "Lists additional files or directories to be installed with 'cp -a' in the format 'source1 target1;source2 target2;...'"
-
-ROOTFS_POSTPROCESS_COMMAND += "rootfs_debug_files;"
-rootfs_debug_files () {
-   #!/bin/sh -e
-   echo "${ROOTFS_DEBUG_FILES}" | sed -e 's/;/\n/g' | while read source target mode; do
-      if [ -e "$source" ]; then
-         mkdir -p $(dirname $target)
-         cp -a $source $target
-         [ -n "$mode" ] && chmod $mode $target
-      fi
-   done
-}
diff --git a/poky/meta/classes/rust-bin.bbclass b/poky/meta/classes/rust-bin.bbclass
deleted file mode 100644
index c87343b..0000000
--- a/poky/meta/classes/rust-bin.bbclass
+++ /dev/null
@@ -1,149 +0,0 @@
-inherit rust
-
-RDEPENDS:${PN}:append:class-target = " ${RUSTLIB_DEP}"
-
-RUSTC_ARCHFLAGS += "-C opt-level=3 -g -L ${STAGING_DIR_HOST}/${rustlibdir} -C linker=${RUST_TARGET_CCLD}"
-EXTRA_OEMAKE += 'RUSTC_ARCHFLAGS="${RUSTC_ARCHFLAGS}"'
-
-# Some libraries alias with the standard library but libstd is configured to
-# make it difficult or impossible to use its version. Unfortunately libstd
-# must be explicitly overridden using extern.
-OVERLAP_LIBS = "\
-    libc \
-    log \
-    getopts \
-    rand \
-"
-def get_overlap_deps(d):
-    deps = d.getVar("DEPENDS").split()
-    overlap_deps = []
-    for o in d.getVar("OVERLAP_LIBS").split():
-        l = len([o for dep in deps if (o + '-rs' in dep)])
-        if l > 0:
-            overlap_deps.append(o)
-    return " ".join(overlap_deps)
-OVERLAP_DEPS = "${@get_overlap_deps(d)}"
-
-# Prevents multiple static copies of standard library modules
-# See https://github.com/rust-lang/rust/issues/19680
-RUSTC_PREFER_DYNAMIC = "-C prefer-dynamic"
-RUSTC_FLAGS += "${RUSTC_PREFER_DYNAMIC}"
-
-CRATE_NAME ?= "${@d.getVar('BPN').replace('-rs', '').replace('-', '_')}"
-BINNAME ?= "${BPN}"
-LIBNAME ?= "lib${CRATE_NAME}-rs"
-CRATE_TYPE ?= "dylib"
-BIN_SRC ?= "${S}/src/main.rs"
-LIB_SRC ?= "${S}/src/lib.rs"
-
-rustbindest ?= "${bindir}"
-rustlibdest ?= "${rustlibdir}"
-RUST_RPATH_ABS ?= "${rustlibdir}:${rustlib}"
-
-def relative_rpaths(paths, base):
-    relpaths = set()
-    for p in paths.split(':'):
-        if p == base:
-            relpaths.add('$ORIGIN')
-            continue
-        relpaths.add(os.path.join('$ORIGIN', os.path.relpath(p, base)))
-    return '-rpath=' + ':'.join(relpaths) if len(relpaths) else ''
-
-RUST_LIB_RPATH_FLAGS ?= "${@relative_rpaths(d.getVar('RUST_RPATH_ABS', True), d.getVar('rustlibdest', True))}"
-RUST_BIN_RPATH_FLAGS ?= "${@relative_rpaths(d.getVar('RUST_RPATH_ABS', True), d.getVar('rustbindest', True))}"
-
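As a usage illustration, relative_rpaths() rewrites each absolute rpath entry relative to the install location so binaries find their libraries via $ORIGIN. A self-contained sketch with hypothetical paths:

    import os

    # Hypothetical: RUST_RPATH_ABS = "/usr/lib/rust:/usr/lib", rustbindest = "/usr/bin"
    for p in "/usr/lib/rust:/usr/lib".split(":"):
        print(os.path.join("$ORIGIN", os.path.relpath(p, "/usr/bin")))
    # $ORIGIN/../lib/rust
    # $ORIGIN/../lib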
-def libfilename(d):
-    if d.getVar('CRATE_TYPE', True) == 'dylib':
-        return d.getVar('LIBNAME', True) + '.so'
-    else:
-        return d.getVar('LIBNAME', True) + '.rlib'
-
-def link_args(d, bin):
-    linkargs = []
-    if bin:
-        rpaths = d.getVar('RUST_BIN_RPATH_FLAGS', False)
-    else:
-        rpaths = d.getVar('RUST_LIB_RPATH_FLAGS', False)
-        if d.getVar('CRATE_TYPE', True) == 'dylib':
-            linkargs.append('-soname')
-            linkargs.append(libfilename(d))
-    if len(rpaths):
-        linkargs.append(rpaths)
-    if len(linkargs):
-        return ' '.join(['-Wl,' + arg for arg in linkargs])
-    else:
-        return ''
-
-get_overlap_externs () {
-    externs=
-    for dep in ${OVERLAP_DEPS}; do
-        extern=$(ls ${STAGING_DIR_HOST}/${rustlibdir}/lib$dep-rs.{so,rlib} 2>/dev/null \
-                    | awk '{print $1}');
-        if [ -n "$extern" ]; then
-            externs="$externs --extern $dep=$extern"
-        else
-            echo "$dep in depends but no such library found in ${rustlibdir}!" >&2
-            exit 1
-        fi
-    done
-    echo "$externs"
-}
-
-do_configure () {
-}
-
-oe_runrustc () {
-	export RUST_TARGET_PATH="${RUST_TARGET_PATH}"
-	bbnote ${RUSTC} ${RUSTC_ARCHFLAGS} ${RUSTC_FLAGS} "$@"
-	"${RUSTC}" ${RUSTC_ARCHFLAGS} ${RUSTC_FLAGS} "$@"
-}
-
-oe_compile_rust_lib () {
-    rm -rf ${LIBNAME}.{rlib,so}
-    local -a link_args
-    if [ -n '${@link_args(d, False)}' ]; then
-        link_args[0]='-C'
-        link_args[1]='link-args=${@link_args(d, False)}'
-    fi
-    oe_runrustc $(get_overlap_externs) \
-        "${link_args[@]}" \
-        ${LIB_SRC} \
-        -o ${@libfilename(d)} \
-        --crate-name=${CRATE_NAME} --crate-type=${CRATE_TYPE} \
-        "$@"
-}
-oe_compile_rust_lib[vardeps] += "get_overlap_externs"
-
-oe_compile_rust_bin () {
-    rm -rf ${BINNAME}
-    local -a link_args
-    if [ -n '${@link_args(d, True)}' ]; then
-        link_args[0]='-C'
-        link_args[1]='link-args=${@link_args(d, True)}'
-    fi
-    oe_runrustc $(get_overlap_externs) \
-        "${link_args[@]}" \
-        ${BIN_SRC} -o ${BINNAME} "$@"
-}
-oe_compile_rust_bin[vardeps] += "get_overlap_externs"
-
-oe_install_rust_lib () {
-    for lib in $(ls ${LIBNAME}.{so,rlib} 2>/dev/null); do
-        echo Installing $lib
-        install -D -m 755 $lib ${D}/${rustlibdest}/$lib
-    done
-}
-
-oe_install_rust_bin () {
-    echo Installing ${BINNAME}
-    install -D -m 755 ${BINNAME} ${D}/${rustbindest}/${BINNAME}
-}
-
-do_rust_bin_fixups() {
-    for f in `find ${PKGD} -name '*.so*'`; do
-        echo "Strip rust note: $f"
-        ${OBJCOPY} -R .note.rustc $f $f
-    done
-}
-PACKAGE_PREPROCESS_FUNCS += "do_rust_bin_fixups"
-
diff --git a/poky/meta/classes/rust-common.bbclass b/poky/meta/classes/rust-common.bbclass
deleted file mode 100644
index cb811ac..0000000
--- a/poky/meta/classes/rust-common.bbclass
+++ /dev/null
@@ -1,189 +0,0 @@
-inherit python3native
-
-# Common variables used by all Rust builds
-export rustlibdir = "${libdir}/rust"
-FILES:${PN} += "${rustlibdir}/*.so"
-FILES:${PN}-dev += "${rustlibdir}/*.rlib ${rustlibdir}/*.rmeta"
-FILES:${PN}-dbg += "${rustlibdir}/.debug"
-
-RUSTLIB = "-L ${STAGING_LIBDIR}/rust"
-RUST_DEBUG_REMAP = "--remap-path-prefix=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}"
-RUSTFLAGS += "${RUSTLIB} ${RUST_DEBUG_REMAP}"
-RUSTLIB_DEP ?= "libstd-rs"
-export RUST_TARGET_PATH = "${STAGING_LIBDIR_NATIVE}/rustlib"
-RUST_PANIC_STRATEGY ?= "unwind"
-
-# Native builds are not affected by TCLIBC. Without this, rust-native
-# thinks its "target" (i.e. x86_64-linux) is a musl target.
-RUST_LIBC = "${TCLIBC}"
-RUST_LIBC:class-crosssdk = "glibc"
-RUST_LIBC:class-native = "glibc"
-
-def determine_libc(d, thing):
-    '''Determine which libc something should target'''
-
-    # BUILD is never musl, TARGET may be musl or glibc,
-    # HOST could be musl, but only if a compiler is built to be run on
-    # target in which case HOST_SYS != BUILD_SYS.
-    if thing == 'TARGET':
-        libc = d.getVar('RUST_LIBC')
-    elif thing == 'BUILD' and (d.getVar('HOST_SYS') != d.getVar('BUILD_SYS')):
-        libc = d.getVar('RUST_LIBC')
-    else:
-        libc = d.getVar('RUST_LIBC:class-native')
-
-    return libc
-
-def target_is_armv7(d):
-    '''Determine if target is armv7'''
-    # TUNE_FEATURES may include arm* even if the target is not arm
-    # in the case of *-native packages
-    if d.getVar('TARGET_ARCH') != 'arm':
-        return False
-
-    feat = d.getVar('TUNE_FEATURES')
-    feat = frozenset(feat.split())
-    mach_overrides = d.getVar('MACHINEOVERRIDES')
-    mach_overrides = frozenset(mach_overrides.split(':'))
-
-    v7=frozenset(['armv7a', 'armv7r', 'armv7m', 'armv7ve'])
-    if mach_overrides.isdisjoint(v7) and feat.isdisjoint(v7):
-        return False
-    else:
-        return True
-target_is_armv7[vardepvalue] = "${@target_is_armv7(d)}"
-
-# Responsible for taking Yocto triples and converting them to Rust triples
-def rust_base_triple(d, thing):
-    '''
-    Mangle bitbake's *_SYS into something that rust might support (see
-    rust/mk/cfg/* for a list)
-
-    Note that os is assumed to be some linux form
-    '''
-
-    # The llvm-target for armv7 is armv7-unknown-linux-gnueabihf
-    if thing == "TARGET" and target_is_armv7(d):
-        arch = "armv7"
-    else:
-        arch = oe.rust.arch_to_rust_arch(d.getVar('{}_ARCH'.format(thing)))
-
-    # All the Yocto targets are Linux and are 'unknown'
-    vendor = "-unknown"
-    os = d.getVar('{}_OS'.format(thing))
-    libc = determine_libc(d, thing)
-
-    # Prefix with a dash and convert glibc -> gnu
-    if libc == "glibc":
-        libc = "-gnu"
-    elif libc == "musl":
-        libc = "-musl"
-
-    # Don't double up musl (only appears to be the case on aarch64)
-    if os == "linux-musl":
-        if libc != "-musl":
-            bb.fatal("{}_OS was '{}' but TCLIBC was not 'musl'".format(thing, os))
-        os = "linux"
-
-    # This catches ARM targets and appends the necessary hard float bits
-    if os == "linux-gnueabi" or os == "linux-musleabi":
-        libc = bb.utils.contains('TUNE_FEATURES', 'callconvention-hard', 'hf', '', d)
-    return arch + vendor + '-' + os + libc
-
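For orientation, a minimal sketch (outside BitBake, with hypothetical input values, and ignoring the ARM hard-float special case) of the triples rust_base_triple() builds:

    def sketch_triple(arch, target_os, libc):
        suffix = {"glibc": "-gnu", "musl": "-musl"}.get(libc, "")
        if target_os == "linux-musl":   # don't double up musl
            target_os = "linux"
        return arch + "-unknown-" + target_os + suffix

    print(sketch_triple("aarch64", "linux", "glibc"))     # aarch64-unknown-linux-gnu
    print(sketch_triple("x86_64", "linux-musl", "musl"))  # x86_64-unknown-linux-musl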
-
-# In some cases uname and the toolchain differ on their idea of the arch name
-RUST_BUILD_ARCH = "${@oe.rust.arch_to_rust_arch(d.getVar('BUILD_ARCH'))}"
-
-# Naming explanation
-# Yocto
-# - BUILD_SYS - Yocto triple of the build environment
-# - HOST_SYS - What we're building for in Yocto
-# - TARGET_SYS - What we're building for in Yocto
-#
-# So when building '-native' packages BUILD_SYS == HOST_SYS == TARGET_SYS
-# When building packages for the image HOST_SYS == TARGET_SYS
-# This is a gross oversimplification as there are other modes but
-# currently this is all that's supported.
-#
-# Rust
-# - TARGET - the system where the binary will run
-# - HOST - the system where the binary is being built
-#
-# Rust additionally will use two additional cases:
-# - undecorated (e.g. CC) - equivalent to TARGET
-# - triple suffix (e.g. CC:x86_64_unknown_linux_gnu) - both
-#   see: https://github.com/alexcrichton/gcc-rs
-# Given the way that Rust's internal triples and Yocto triples are mapped together,
-# it's likely best not to use the triple suffix due to potential confusion.
-
-RUST_BUILD_SYS = "${@rust_base_triple(d, 'BUILD')}"
-RUST_BUILD_SYS[vardepvalue] = "${RUST_BUILD_SYS}"
-RUST_HOST_SYS = "${@rust_base_triple(d, 'HOST')}"
-RUST_HOST_SYS[vardepvalue] = "${RUST_HOST_SYS}"
-RUST_TARGET_SYS = "${@rust_base_triple(d, 'TARGET')}"
-RUST_TARGET_SYS[vardepvalue] = "${RUST_TARGET_SYS}"
-
-# wrappers to get around the fact that Rust needs a single
-# binary but Yocto's compiler and linker commands have
-# arguments. Technically the archiver is always one command but
-# this is necessary for builds that determine the prefix and then
-# use those commands based on the prefix.
-WRAPPER_DIR = "${WORKDIR}/wrapper"
-RUST_BUILD_CC = "${WRAPPER_DIR}/build-rust-cc"
-RUST_BUILD_CXX = "${WRAPPER_DIR}/build-rust-cxx"
-RUST_BUILD_CCLD = "${WRAPPER_DIR}/build-rust-ccld"
-RUST_BUILD_AR = "${WRAPPER_DIR}/build-rust-ar"
-RUST_TARGET_CC = "${WRAPPER_DIR}/target-rust-cc"
-RUST_TARGET_CXX = "${WRAPPER_DIR}/target-rust-cxx"
-RUST_TARGET_CCLD = "${WRAPPER_DIR}/target-rust-ccld"
-RUST_TARGET_AR = "${WRAPPER_DIR}/target-rust-ar"
-
-create_wrapper () {
-	file="$1"
-	shift
-
-	cat <<- EOF > "${file}"
-	#!/usr/bin/env python3
-	import os, sys
-	orig_binary = "$@"
-	binary = orig_binary.split()[0]
-	args = orig_binary.split() + sys.argv[1:]
-	os.execvp(binary, args)
-	EOF
-	chmod +x "${file}"
-}
-
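For example, calling create_wrapper for the target linker produces a tiny Python script whose baked-in command line is whatever was passed at creation time; the compiler name and flags below are hypothetical:

    #!/usr/bin/env python3
    import os, sys
    # "$@" was expanded when the wrapper was written, e.g.:
    orig_binary = "aarch64-poky-linux-gcc --sysroot=/path/to/recipe-sysroot -Wl,-O1"
    binary = orig_binary.split()[0]
    args = orig_binary.split() + sys.argv[1:]
    os.execvp(binary, args)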
-export WRAPPER_TARGET_CC = "${CC}"
-export WRAPPER_TARGET_CXX = "${CXX}"
-export WRAPPER_TARGET_CCLD = "${CCLD}"
-export WRAPPER_TARGET_LDFLAGS = "${LDFLAGS}"
-export WRAPPER_TARGET_AR = "${AR}"
-
-# compiler is used by gcc-rs
-# linker is used by rustc/cargo
-# archiver is used by the build of libstd-rs
-do_rust_create_wrappers () {
-	mkdir -p "${WRAPPER_DIR}"
-
-	# Yocto Build / Rust Host C compiler
-	create_wrapper "${RUST_BUILD_CC}" "${BUILD_CC}"
-	# Yocto Build / Rust Host C++ compiler
-	create_wrapper "${RUST_BUILD_CXX}" "${BUILD_CXX}"
-	# Yocto Build / Rust Host linker
-	create_wrapper "${RUST_BUILD_CCLD}" "${BUILD_CCLD}" "${BUILD_LDFLAGS}"
-	# Yocto Build / Rust Host archiver
-	create_wrapper "${RUST_BUILD_AR}" "${BUILD_AR}"
-
-	# Yocto Target / Rust Target C compiler
-	create_wrapper "${RUST_TARGET_CC}" "${WRAPPER_TARGET_CC}" "${WRAPPER_TARGET_LDFLAGS}"
-	# Yocto Target / Rust Target C++ compiler
-	create_wrapper "${RUST_TARGET_CXX}" "${WRAPPER_TARGET_CXX}"
-	# Yocto Target / Rust Target linker
-	create_wrapper "${RUST_TARGET_CCLD}" "${WRAPPER_TARGET_CCLD}" "${WRAPPER_TARGET_LDFLAGS}"
-	# Yocto Target / Rust Target archiver
-	create_wrapper "${RUST_TARGET_AR}" "${WRAPPER_TARGET_AR}"
-
-}
-
-addtask rust_create_wrappers before do_configure after do_patch do_prepare_recipe_sysroot
-do_rust_create_wrappers[dirs] += "${WRAPPER_DIR}"
diff --git a/poky/meta/classes/rust-target-config.bbclass b/poky/meta/classes/rust-target-config.bbclass
deleted file mode 100644
index 87b7dee..0000000
--- a/poky/meta/classes/rust-target-config.bbclass
+++ /dev/null
@@ -1,374 +0,0 @@
-# Right now this is focused on arm-specific tune features.
-# We get away with this for now as one can only use x86-64 as the build host
-# (not arm).
-# Note that TUNE_FEATURES is _always_ referring to the target, so we really
-# don't want to use this for the host/build.
-def llvm_features_from_tune(d):
-    f = []
-    feat = d.getVar('TUNE_FEATURES')
-    if not feat:
-        return []
-    feat = frozenset(feat.split())
-
-    mach_overrides = d.getVar('MACHINEOVERRIDES')
-    mach_overrides = frozenset(mach_overrides.split(':'))
-
-    if 'vfpv4' in feat:
-        f.append("+vfp4")
-    if 'vfpv3' in feat:
-        f.append("+vfp3")
-    if 'vfpv3d16' in feat:
-        f.append("+d16")
-
-    if 'vfpv2' in feat or 'vfp' in feat:
-        f.append("+vfp2")
-
-    if 'neon' in feat:
-        f.append("+neon")
-
-    if 'mips32' in feat:
-        f.append("+mips32")
-
-    if 'mips32r2' in feat:
-        f.append("+mips32r2")
-
-    if target_is_armv7(d):
-        f.append('+v7')
-
-    if ('armv6' in mach_overrides) or ('armv6' in feat):
-        f.append("+v6")
-    if 'armv5te' in feat:
-        f.append("+strict-align")
-        f.append("+v5te")
-    elif 'armv5' in feat:
-        f.append("+strict-align")
-        f.append("+v5")
-
-    if ('armv4' in mach_overrides) or ('armv4' in feat):
-        f.append("+strict-align")
-
-    if 'dsp' in feat:
-        f.append("+dsp")
-
-    if 'thumb' in feat:
-        if d.getVar('ARM_THUMB_OPT') == "thumb":
-            if target_is_armv7(d):
-                f.append('+thumb2')
-            f.append("+thumb-mode")
-
-    if 'cortexa5' in feat:
-        f.append("+a5")
-    if 'cortexa7' in feat:
-        f.append("+a7")
-    if 'cortexa9' in feat:
-        f.append("+a9")
-    if 'cortexa15' in feat:
-        f.append("+a15")
-    if 'cortexa17' in feat:
-        f.append("+a17")
-    if ('riscv64' in feat) or ('riscv32' in feat):
-        f.append("+a,+c,+d,+f,+m")
-    return f
-llvm_features_from_tune[vardepvalue] = "${@llvm_features_from_tune(d)}"
-
-# TARGET_CC_ARCH changes from build/cross/target so it'll do the right thing
-# this should go away when https://github.com/rust-lang/rust/pull/31709 is
-# stable (1.9.0?)
-def llvm_features_from_cc_arch(d):
-    f = []
-    feat = d.getVar('TARGET_CC_ARCH')
-    if not feat:
-        return []
-    feat = frozenset(feat.split())
-
-    if '-mmmx' in feat:
-        f.append("+mmx")
-    if '-msse' in feat:
-        f.append("+sse")
-    if '-msse2' in feat:
-        f.append("+sse2")
-    if '-msse3' in feat:
-        f.append("+sse3")
-    if '-mssse3' in feat:
-        f.append("+ssse3")
-    if '-msse4.1' in feat:
-        f.append("+sse4.1")
-    if '-msse4.2' in feat:
-        f.append("+sse4.2")
-    if '-msse4a' in feat:
-        f.append("+sse4a")
-    if '-mavx' in feat:
-        f.append("+avx")
-    if '-mavx2' in feat:
-        f.append("+avx2")
-
-    return f
-
-def llvm_features_from_target_fpu(d):
-    # TARGET_FPU can be hard or soft. +soft-float tells llvm to use the soft-float
-    # ABI. There is no option for hard.
-
-    fpu = d.getVar('TARGET_FPU', True)
-    return ["+soft-float"] if fpu == "soft" else []
-
-def llvm_features(d):
-    return ','.join(llvm_features_from_tune(d) +
-                    llvm_features_from_cc_arch(d) +
-                    llvm_features_from_target_fpu(d))
-
-llvm_features[vardepvalue] = "${@llvm_features(d)}"
-
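As a worked example (hypothetical tune, no -m* flags in TARGET_CC_ARCH, TARGET_FPU = "hard"), TUNE_FEATURES = "arm armv7ve cortexa7 neon vfpv4" would resolve to:

    # llvm_features_from_tune()       -> ["+vfp4", "+neon", "+v7", "+a7"]
    # llvm_features_from_cc_arch()    -> []
    # llvm_features_from_target_fpu() -> []
    print(",".join(["+vfp4", "+neon", "+v7", "+a7"]))  # +vfp4,+neon,+v7,+a7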
-## arm-unknown-linux-gnueabihf
-DATA_LAYOUT[arm-eabi] = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64"
-TARGET_ENDIAN[arm-eabi] = "little"
-TARGET_POINTER_WIDTH[arm-eabi] = "32"
-TARGET_C_INT_WIDTH[arm-eabi] = "32"
-MAX_ATOMIC_WIDTH[arm-eabi] = "64"
-FEATURES[arm-eabi] = "+v6,+vfp2"
-
-## armv7-unknown-linux-gnueabihf
-DATA_LAYOUT[armv7-eabi] = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64"
-TARGET_ENDIAN[armv7-eabi] = "little"
-TARGET_POINTER_WIDTH[armv7-eabi] = "32"
-TARGET_C_INT_WIDTH[armv7-eabi] = "32"
-MAX_ATOMIC_WIDTH[armv7-eabi] = "64"
-FEATURES[armv7-eabi] = "+v7,+vfp2,+thumb2"
-
-## aarch64-unknown-linux-{gnu, musl}
-DATA_LAYOUT[aarch64] = "e-m:e-i8:8:32-i16:16:32-i64:64-i128:128-n32:64-S128"
-TARGET_ENDIAN[aarch64] = "little"
-TARGET_POINTER_WIDTH[aarch64] = "64"
-TARGET_C_INT_WIDTH[aarch64] = "32"
-MAX_ATOMIC_WIDTH[aarch64] = "128"
-
-## x86_64-unknown-linux-{gnu, musl}
-DATA_LAYOUT[x86_64] = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
-TARGET_ENDIAN[x86_64] = "little"
-TARGET_POINTER_WIDTH[x86_64] = "64"
-TARGET_C_INT_WIDTH[x86_64] = "32"
-MAX_ATOMIC_WIDTH[x86_64] = "64"
-
-## x86_64-unknown-linux-gnux32
-DATA_LAYOUT[x86_64-x32] = "e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128"
-TARGET_ENDIAN[x86_64-x32] = "little"
-TARGET_POINTER_WIDTH[x86_64-x32] = "32"
-TARGET_C_INT_WIDTH[x86_64-x32] = "32"
-MAX_ATOMIC_WIDTH[x86_64-x32] = "64"
-
-## i686-unknown-linux-{gnu, musl}
-DATA_LAYOUT[i686] = "e-m:e-p:32:32-f64:32:64-f80:32-n8:16:32-S128"
-TARGET_ENDIAN[i686] = "little"
-TARGET_POINTER_WIDTH[i686] = "32"
-TARGET_C_INT_WIDTH[i686] = "32"
-MAX_ATOMIC_WIDTH[i686] = "64"
-
-## XXX: a bit of a hack so qemux86 builds, clone of i686-unknown-linux-{gnu, musl} above
-DATA_LAYOUT[i586] = "e-m:e-p:32:32-f64:32:64-f80:32-n8:16:32-S128"
-TARGET_ENDIAN[i586] = "little"
-TARGET_POINTER_WIDTH[i586] = "32"
-TARGET_C_INT_WIDTH[i586] = "32"
-MAX_ATOMIC_WIDTH[i586] = "64"
-
-## mips-unknown-linux-{gnu, musl}
-DATA_LAYOUT[mips] = "E-m:m-p:32:32-i8:8:32-i16:16:32-i64:64-n32-S64"
-TARGET_ENDIAN[mips] = "big"
-TARGET_POINTER_WIDTH[mips] = "32"
-TARGET_C_INT_WIDTH[mips] = "32"
-MAX_ATOMIC_WIDTH[mips] = "32"
-
-## mipsel-unknown-linux-{gnu, musl}
-DATA_LAYOUT[mipsel] = "e-m:m-p:32:32-i8:8:32-i16:16:32-i64:64-n32-S64"
-TARGET_ENDIAN[mipsel] = "little"
-TARGET_POINTER_WIDTH[mipsel] = "32"
-TARGET_C_INT_WIDTH[mipsel] = "32"
-MAX_ATOMIC_WIDTH[mipsel] = "32"
-
-## mips64-unknown-linux-{gnu, musl}
-DATA_LAYOUT[mips64] = "E-m:e-i8:8:32-i16:16:32-i64:64-n32:64-S128"
-TARGET_ENDIAN[mips64] = "big"
-TARGET_POINTER_WIDTH[mips64] = "64"
-TARGET_C_INT_WIDTH[mips64] = "64"
-MAX_ATOMIC_WIDTH[mips64] = "64"
-
-## mips64el-unknown-linux-{gnu, musl}
-DATA_LAYOUT[mips64el] = "e-m:e-i8:8:32-i16:16:32-i64:64-n32:64-S128"
-TARGET_ENDIAN[mips64el] = "little"
-TARGET_POINTER_WIDTH[mips64el] = "64"
-TARGET_C_INT_WIDTH[mips64el] = "64"
-MAX_ATOMIC_WIDTH[mips64el] = "64"
-
-## powerpc-unknown-linux-{gnu, musl}
-DATA_LAYOUT[powerpc] = "E-m:e-p:32:32-i64:64-n32"
-TARGET_ENDIAN[powerpc] = "big"
-TARGET_POINTER_WIDTH[powerpc] = "32"
-TARGET_C_INT_WIDTH[powerpc] = "32"
-MAX_ATOMIC_WIDTH[powerpc] = "32"
-
-## powerpc64-unknown-linux-{gnu, musl}
-DATA_LAYOUT[powerpc64] = "E-m:e-i64:64-n32:64-S128-v256:256:256-v512:512:512"
-TARGET_ENDIAN[powerpc64] = "big"
-TARGET_POINTER_WIDTH[powerpc64] = "64"
-TARGET_C_INT_WIDTH[powerpc64] = "64"
-MAX_ATOMIC_WIDTH[powerpc64] = "64"
-
-## powerpc64le-unknown-linux-{gnu, musl}
-DATA_LAYOUT[powerpc64le] = "e-m:e-i64:64-n32:64-v256:256:256-v512:512:512"
-TARGET_ENDIAN[powerpc64le] = "little"
-TARGET_POINTER_WIDTH[powerpc64le] = "64"
-TARGET_C_INT_WIDTH[powerpc64le] = "64"
-MAX_ATOMIC_WIDTH[powerpc64le] = "64"
-
-## riscv32-unknown-linux-{gnu, musl}
-DATA_LAYOUT[riscv32] = "e-m:e-p:32:32-i64:64-n32-S128"
-TARGET_ENDIAN[riscv32] = "little"
-TARGET_POINTER_WIDTH[riscv32] = "32"
-TARGET_C_INT_WIDTH[riscv32] = "32"
-MAX_ATOMIC_WIDTH[riscv32] = "32"
-
-## riscv64-unknown-linux-{gnu, musl}
-DATA_LAYOUT[riscv64] = "e-m:e-p:64:64-i64:64-i128:128-n64-S128"
-TARGET_ENDIAN[riscv64] = "little"
-TARGET_POINTER_WIDTH[riscv64] = "64"
-TARGET_C_INT_WIDTH[riscv64] = "64"
-MAX_ATOMIC_WIDTH[riscv64] = "64"
-
-# Convert a normal arch (HOST_ARCH, TARGET_ARCH, BUILD_ARCH, etc) to something
-# rust's internals won't choke on.
-def arch_to_rust_target_arch(arch):
-    if arch == "i586" or arch == "i686":
-        return "x86"
-    elif arch == "mipsel":
-        return "mips"
-    elif arch == "mips64el":
-        return "mips64"
-    elif arch == "armv7":
-        return "arm"
-    elif arch == "powerpc64le":
-        return "powerpc64"
-    else:
-        return arch
-
-# generates our target CPU value
-def llvm_cpu(d):
-    cpu = d.getVar('PACKAGE_ARCH')
-    target = d.getVar('TRANSLATED_TARGET_ARCH')
-
-    trans = {}
-    trans['corei7-64'] = "corei7"
-    trans['core2-32'] = "core2"
-    trans['x86-64'] = "x86-64"
-    trans['i686'] = "i686"
-    trans['i586'] = "i586"
-    trans['powerpc'] = "powerpc"
-    trans['mips64'] = "mips64"
-    trans['mips64el'] = "mips64"
-    trans['riscv64'] = "generic-rv64"
-    trans['riscv32'] = "generic-rv32"
-
-    if target in ["mips", "mipsel"]:
-        feat = frozenset(d.getVar('TUNE_FEATURES').split())
-        if "mips32r2" in feat:
-            trans['mipsel'] = "mips32r2"
-            trans['mips'] = "mips32r2"
-        elif "mips32" in feat:
-            trans['mipsel'] = "mips32"
-            trans['mips'] = "mips32"
-
-    try:
-        return trans[cpu]
-    except:
-        return trans.get(target, "generic")
-
-llvm_cpu[vardepvalue] = "${@llvm_cpu(d)}"
-
-def rust_gen_target(d, thing, wd, arch):
-    import json
-    sys = d.getVar('{}_SYS'.format(thing))
-    prefix = d.getVar('{}_PREFIX'.format(thing))
-
-    abi = None
-    cpu = "generic"
-    features = ""
-
-    if thing == "TARGET":
-        abi = d.getVar('ABIEXTENSION')
-        cpu = llvm_cpu(d)
-        if bb.data.inherits_class('native', d):
-            features = ','.join(llvm_features_from_cc_arch(d))
-        else:
-            features = llvm_features(d) or ""
-        # arm and armv7 have different targets in llvm
-        if arch == "arm" and target_is_armv7(d):
-            arch = 'armv7'
-
-    rust_arch = oe.rust.arch_to_rust_arch(arch)
-
-    if abi:
-        arch_abi = "{}-{}".format(rust_arch, abi)
-    else:
-        arch_abi = rust_arch
-
-    features = features or d.getVarFlag('FEATURES', arch_abi) or ""
-    features = features.strip()
-
-    llvm_target = d.getVar('RUST_TARGET_SYS')
-    if thing == "BUILD":
-        llvm_target = d.getVar('RUST_HOST_SYS')
-
-    # build tspec
-    tspec = {}
-    tspec['llvm-target'] = llvm_target
-    tspec['data-layout'] = d.getVarFlag('DATA_LAYOUT', arch_abi)
-    tspec['max-atomic-width'] = int(d.getVarFlag('MAX_ATOMIC_WIDTH', arch_abi))
-    tspec['target-pointer-width'] = d.getVarFlag('TARGET_POINTER_WIDTH', arch_abi)
-    tspec['target-c-int-width'] = d.getVarFlag('TARGET_C_INT_WIDTH', arch_abi)
-    tspec['target-endian'] = d.getVarFlag('TARGET_ENDIAN', arch_abi)
-    tspec['arch'] = arch_to_rust_target_arch(rust_arch)
-    tspec['os'] = "linux"
-    if "musl" in tspec['llvm-target']:
-        tspec['env'] = "musl"
-    else:
-        tspec['env'] = "gnu"
-    if "riscv64" in tspec['llvm-target']:
-        tspec['llvm-abiname'] = "lp64d"
-    if "riscv32" in tspec['llvm-target']:
-        tspec['llvm-abiname'] = "ilp32d"
-    tspec['vendor'] = "unknown"
-    tspec['target-family'] = "unix"
-    tspec['linker'] = "{}{}gcc".format(d.getVar('CCACHE'), prefix)
-    tspec['cpu'] = cpu
-    if features != "":
-        tspec['features'] = features
-    tspec['dynamic-linking'] = True
-    tspec['executables'] = True
-    tspec['linker-is-gnu'] = True
-    tspec['linker-flavor'] = "gcc"
-    tspec['has-rpath'] = True
-    tspec['has-elf-tls'] = True
-    tspec['position-independent-executables'] = True
-    tspec['panic-strategy'] = d.getVar("RUST_PANIC_STRATEGY")
-
-    # write out the target spec json file
-    with open(wd + sys + '.json', 'w') as f:
-        json.dump(tspec, f, indent=4)
-
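For a plain aarch64/glibc target the generated file ends up looking roughly like the dict below (values taken from the DATA_LAYOUT/TARGET_*/MAX_ATOMIC_WIDTH flags above; the linker prefix is hypothetical, and several boolean keys are omitted):

    tspec = {
        "llvm-target": "aarch64-unknown-linux-gnu",
        "data-layout": "e-m:e-i8:8:32-i16:16:32-i64:64-i128:128-n32:64-S128",
        "max-atomic-width": 128,
        "target-pointer-width": "64",
        "target-c-int-width": "32",
        "target-endian": "little",
        "arch": "aarch64",
        "os": "linux",
        "env": "gnu",
        "vendor": "unknown",
        "target-family": "unix",
        "linker": "aarch64-poky-linux-gcc",
        "cpu": "generic",
        "panic-strategy": "unwind",
    }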
-# These are accounted for in tmpdir path names so don't need to be in the task sig
-rust_gen_target[vardepsexclude] += "RUST_HOST_SYS RUST_TARGET_SYS ABIEXTENSION llvm_cpu"
-
-do_rust_gen_targets[vardeps] += "DATA_LAYOUT TARGET_ENDIAN TARGET_POINTER_WIDTH TARGET_C_INT_WIDTH MAX_ATOMIC_WIDTH FEATURES"
-
-RUST_TARGETGENS = "BUILD"
-
-python do_rust_gen_targets () {
-    wd = d.getVar('WORKDIR') + '/targets/'
-    # Order of BUILD, HOST, TARGET is important in case the files overwrite, most specific last
-    rust_gen_target(d, 'BUILD', wd, d.getVar('BUILD_ARCH'))
-    if "HOST" in d.getVar("RUST_TARGETGENS"):
-        rust_gen_target(d, 'HOST', wd, d.getVar('HOST_ARCH'))
-    if "TARGET" in d.getVar("RUST_TARGETGENS"):
-        rust_gen_target(d, 'TARGET', wd, d.getVar('TARGET_ARCH'))
-}
-
-addtask rust_gen_targets after do_patch before do_compile
-do_rust_gen_targets[dirs] += "${WORKDIR}/targets"
-
diff --git a/poky/meta/classes/rust.bbclass b/poky/meta/classes/rust.bbclass
deleted file mode 100644
index 5c8938d..0000000
--- a/poky/meta/classes/rust.bbclass
+++ /dev/null
@@ -1,45 +0,0 @@
-inherit rust-common
-
-RUSTC = "rustc"
-
-RUSTC_ARCHFLAGS += "--target=${HOST_SYS} ${RUSTFLAGS}"
-
-def rust_base_dep(d):
-    # Taken from meta/classes/base.bbclass `base_dep_prepend` and modified to
-    # use rust instead of gcc
-    deps = ""
-    if not d.getVar('INHIBIT_DEFAULT_RUST_DEPS'):
-        if (d.getVar('HOST_SYS') != d.getVar('BUILD_SYS')):
-            deps += " virtual/${TARGET_PREFIX}rust ${RUSTLIB_DEP}"
-        else:
-            deps += " rust-native"
-    return deps
-
-DEPENDS:append = " ${@rust_base_dep(d)}"
-
-# BUILD_LDFLAGS
-# 	${STAGING_LIBDIR_NATIVE}
-# 	${STAGING_BASE_LIBDIR_NATIVE}
-# BUILDSDK_LDFLAGS
-# 	${STAGING_LIBDIR}
-# 	#{STAGING_DIR_HOST}
-# TARGET_LDFLAGS ?????
-#RUSTC_BUILD_LDFLAGS = "\
-#	--sysroot ${STAGING_DIR_NATIVE} \
-#	-L${STAGING_LIBDIR_NATIVE}	\
-#	-L${STAGING_BASE_LIBDIR_NATIVE}	\
-#"
-
-# XXX: for some reason bitbake sets BUILD_* & TARGET_* but uses the bare
-# variables for HOST. Alias things to make it easier for us.
-HOST_LDFLAGS  ?= "${LDFLAGS}"
-HOST_CFLAGS   ?= "${CFLAGS}"
-HOST_CXXFLAGS ?= "${CXXFLAGS}"
-HOST_CPPFLAGS ?= "${CPPFLAGS}"
-
-rustlib_suffix="${TUNE_ARCH}${TARGET_VENDOR}-${TARGET_OS}/rustlib/${HOST_SYS}/lib"
-# Native sysroot standard library path
-rustlib_src="${prefix}/lib/${rustlib_suffix}"
-# Host sysroot standard library path
-rustlib="${libdir}/${rustlib_suffix}"
-rustlib:class-native="${libdir}/rustlib/${BUILD_SYS}/lib"
diff --git a/poky/meta/classes/sanity.bbclass b/poky/meta/classes/sanity.bbclass
deleted file mode 100644
index b1fac10..0000000
--- a/poky/meta/classes/sanity.bbclass
+++ /dev/null
@@ -1,1022 +0,0 @@
-#
-# Sanity check the users setup for common misconfigurations
-#
-
-SANITY_REQUIRED_UTILITIES ?= "patch diffstat git bzip2 tar \
-    gzip gawk chrpath wget cpio perl file which"
-
-def bblayers_conf_file(d):
-    return os.path.join(d.getVar('TOPDIR'), 'conf/bblayers.conf')
-
-def sanity_conf_read(fn):
-    with open(fn, 'r') as f:
-        lines = f.readlines()
-    return lines
-
-def sanity_conf_find_line(pattern, lines):
-    import re
-    return next(((index, line)
-        for index, line in enumerate(lines)
-        if re.search(pattern, line)), (None, None))
-
-def sanity_conf_update(fn, lines, version_var_name, new_version):
-    index, line = sanity_conf_find_line(r"^%s" % version_var_name, lines)
-    lines[index] = '%s = "%d"\n' % (version_var_name, new_version)
-    with open(fn, "w") as f:
-        f.write(''.join(lines))
-
-# Functions added to this variable MUST throw a NotImplementedError exception unless 
-# they successfully changed the config version in the config file. Exceptions
-# are used since exec_func doesn't handle return values.
-BBLAYERS_CONF_UPDATE_FUNCS += " \
-    conf/bblayers.conf:LCONF_VERSION:LAYER_CONF_VERSION:oecore_update_bblayers \
-    conf/local.conf:CONF_VERSION:LOCALCONF_VERSION:oecore_update_localconf \
-    conf/site.conf:SCONF_VERSION:SITE_CONF_VERSION:oecore_update_siteconf \
-"
-
-SANITY_DIFF_TOOL ?= "meld"
-
-SANITY_LOCALCONF_SAMPLE ?= "${COREBASE}/meta*/conf/local.conf.sample"
-python oecore_update_localconf() {
-    # Check we are using a valid local.conf
-    current_conf  = d.getVar('CONF_VERSION')
-    conf_version =  d.getVar('LOCALCONF_VERSION')
-
-    failmsg = """Your version of local.conf was generated from an older/newer version of 
-local.conf.sample and there have been updates made to this file. Please compare the two 
-files and merge any changes before continuing.
-
-Matching the version numbers will remove this message.
-
-\"${SANITY_DIFF_TOOL} conf/local.conf ${SANITY_LOCALCONF_SAMPLE}\" 
-
-is a good way to visualise the changes."""
-    failmsg = d.expand(failmsg)
-
-    raise NotImplementedError(failmsg)
-}
-
-SANITY_SITECONF_SAMPLE ?= "${COREBASE}/meta*/conf/site.conf.sample"
-python oecore_update_siteconf() {
-    # If we have a site.conf, check it's valid
-    current_sconf = d.getVar('SCONF_VERSION')
-    sconf_version = d.getVar('SITE_CONF_VERSION')
-
-    failmsg = """Your version of site.conf was generated from an older version of 
-site.conf.sample and there have been updates made to this file. Please compare the two 
-files and merge any changes before continuing.
-
-Matching the version numbers will remove this message.
-
-\"${SANITY_DIFF_TOOL} conf/site.conf ${SANITY_SITECONF_SAMPLE}\" 
-
-is a good way to visualise the changes."""
-    failmsg = d.expand(failmsg)
-
-    raise NotImplementedError(failmsg)
-}
-
-SANITY_BBLAYERCONF_SAMPLE ?= "${COREBASE}/meta*/conf/bblayers.conf.sample"
-python oecore_update_bblayers() {
-    # bblayers.conf is out of date, so see if we can resolve that
-
-    current_lconf = int(d.getVar('LCONF_VERSION'))
-    lconf_version = int(d.getVar('LAYER_CONF_VERSION'))
-
-    failmsg = """Your version of bblayers.conf has the wrong LCONF_VERSION (has ${LCONF_VERSION}, expecting ${LAYER_CONF_VERSION}).
-Please compare your file against bblayers.conf.sample and merge any changes before continuing.
-"${SANITY_DIFF_TOOL} conf/bblayers.conf ${SANITY_BBLAYERCONF_SAMPLE}" 
-
-is a good way to visualise the changes."""
-    failmsg = d.expand(failmsg)
-
-    if not current_lconf:
-        raise NotImplementedError(failmsg)
-
-    lines = []
-
-    if current_lconf < 4:
-        raise NotImplementedError(failmsg)
-
-    bblayers_fn = bblayers_conf_file(d)
-    lines = sanity_conf_read(bblayers_fn)
-
-    if current_lconf == 4 and lconf_version > 4:
-        topdir_var = '$' + '{TOPDIR}'
-        index, bbpath_line = sanity_conf_find_line('BBPATH', lines)
-        if bbpath_line:
-            start = bbpath_line.find('"')
-            if start != -1 and (len(bbpath_line) != (start + 1)):
-                if bbpath_line[start + 1] == '"':
-                    lines[index] = (bbpath_line[:start + 1] +
-                                    topdir_var + bbpath_line[start + 1:])
-                else:
-                    if not topdir_var in bbpath_line:
-                        lines[index] = (bbpath_line[:start + 1] +
-                                    topdir_var + ':' + bbpath_line[start + 1:])
-            else:
-                raise NotImplementedError(failmsg)
-        else:
-            index, bbfiles_line = sanity_conf_find_line('BBFILES', lines)
-            if bbfiles_line:
-                lines.insert(index, 'BBPATH = "' + topdir_var + '"\n')
-            else:
-                raise NotImplementedError(failmsg)
-
-        current_lconf += 1
-        sanity_conf_update(bblayers_fn, lines, 'LCONF_VERSION', current_lconf)
-        bb.note("Your conf/bblayers.conf has been automatically updated.")
-        return
-
-    elif current_lconf == 5 and lconf_version > 5:
-        # Null update, to avoid issues with people switching between poky and other distros
-        current_lconf = 6
-        sanity_conf_update(bblayers_fn, lines, 'LCONF_VERSION', current_lconf)
-        bb.note("Your conf/bblayers.conf has been automatically updated.")
-        return
-
-        status.addresult()
-
-    elif current_lconf == 6 and lconf_version > 6:
-        # Handle rename of meta-yocto -> meta-poky
-        # This marks the start of separate version numbers but code is needed in OE-Core
-        # for the migration, one last time.
-        layers = d.getVar('BBLAYERS').split()
-        layers = [ os.path.basename(path) for path in layers ]
-        if 'meta-yocto' in layers:
-            found = False
-            while True:
-                index, meta_yocto_line = sanity_conf_find_line(r'.*meta-yocto[\'"\s\n]', lines)
-                if meta_yocto_line:
-                    lines[index] = meta_yocto_line.replace('meta-yocto', 'meta-poky')
-                    found = True
-                else:
-                    break
-            if not found:
-                raise NotImplementedError(failmsg)
-            index, meta_yocto_line = sanity_conf_find_line('LCONF_VERSION.*\n', lines)
-            if meta_yocto_line:
-                lines[index] = 'POKY_BBLAYERS_CONF_VERSION = "1"\n'
-            else:
-                raise NotImplementedError(failmsg)
-            with open(bblayers_fn, "w") as f:
-                f.write(''.join(lines))
-            bb.note("Your conf/bblayers.conf has been automatically updated.")
-            return
-        current_lconf += 1
-        sanity_conf_update(bblayers_fn, lines, 'LCONF_VERSION', current_lconf)
-        bb.note("Your conf/bblayers.conf has been automatically updated.")
-        return
-
-    raise NotImplementedError(failmsg)
-}
-
-def raise_sanity_error(msg, d, network_error=False):
-    if d.getVar("SANITY_USE_EVENTS") == "1":
-        try:
-            bb.event.fire(bb.event.SanityCheckFailed(msg, network_error), d)
-        except TypeError:
-            bb.event.fire(bb.event.SanityCheckFailed(msg), d)
-        return
-
-    bb.fatal(""" OE-core's config sanity checker detected a potential misconfiguration.
-    Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
-    Following is the list of potential problems / advisories:
-    
-    %s""" % msg)
-
-# Check a single tune for validity.
-def check_toolchain_tune(data, tune, multilib):
-    tune_errors = []
-    if not tune:
-        return "No tuning found for %s multilib." % multilib
-    localdata = bb.data.createCopy(data)
-    if multilib != "default":
-        # Apply the overrides so we can look at the details.
-        overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + multilib
-        localdata.setVar("OVERRIDES", overrides)
-    bb.debug(2, "Sanity-checking tuning '%s' (%s) features:" % (tune, multilib))
-    features = (localdata.getVar("TUNE_FEATURES:tune-%s" % tune) or "").split()
-    if not features:
-        return "Tuning '%s' has no defined features, and cannot be used." % tune
-    valid_tunes = localdata.getVarFlags('TUNEVALID') or {}
-    conflicts = localdata.getVarFlags('TUNECONFLICTS') or {}
-    # [doc] is the documentation for the variable, not a real feature
-    if 'doc' in valid_tunes:
-        del valid_tunes['doc']
-    if 'doc' in conflicts:
-        del conflicts['doc']
-    for feature in features:
-        if feature in conflicts:
-            for conflict in conflicts[feature].split():
-                if conflict in features:
-                    tune_errors.append("Feature '%s' conflicts with '%s'." %
-                        (feature, conflict))
-        if feature in valid_tunes:
-            bb.debug(2, "  %s: %s" % (feature, valid_tunes[feature]))
-        else:
-            tune_errors.append("Feature '%s' is not defined." % feature)
-    if tune_errors:
-        return "Tuning '%s' has the following errors:\n" % tune + '\n'.join(tune_errors)
-
-def check_toolchain(data):
-    tune_error_set = []
-    deftune = data.getVar("DEFAULTTUNE")
-    tune_errors = check_toolchain_tune(data, deftune, 'default')
-    if tune_errors:
-        tune_error_set.append(tune_errors)
-
-    multilibs = (data.getVar("MULTILIB_VARIANTS") or "").split()
-    global_multilibs = (data.getVar("MULTILIB_GLOBAL_VARIANTS") or "").split()
-
-    if multilibs:
-        seen_libs = []
-        seen_tunes = []
-        for lib in multilibs:
-            if lib in seen_libs:
-                tune_error_set.append("The multilib '%s' appears more than once." % lib)
-            else:
-                seen_libs.append(lib)
-            if not lib in global_multilibs:
-                tune_error_set.append("Multilib %s is not present in MULTILIB_GLOBAL_VARIANTS" % lib)
-            tune = data.getVar("DEFAULTTUNE:virtclass-multilib-%s" % lib)
-            if tune in seen_tunes:
-                tune_error_set.append("The tuning '%s' appears in more than one multilib." % tune)
-            else:
-                seen_libs.append(tune)
-            if tune == deftune:
-                tune_error_set.append("Multilib '%s' (%s) is also the default tuning." % (lib, deftune))
-            else:
-                tune_errors = check_toolchain_tune(data, tune, lib)
-            if tune_errors:
-                tune_error_set.append(tune_errors)
-    if tune_error_set:
-        return "Toolchain tunings invalid:\n" + '\n'.join(tune_error_set) + "\n"
-
-    return ""
-
-def check_conf_exists(fn, data):
-    bbpath = []
-    fn = data.expand(fn)
-    vbbpath = data.getVar("BBPATH", False)
-    if vbbpath:
-        bbpath += vbbpath.split(":")
-    for p in bbpath:
-        currname = os.path.join(data.expand(p), fn)
-        if os.access(currname, os.R_OK):
-            return True
-    return False
-
-def check_create_long_filename(filepath, pathname):
-    import string, random
-    testfile = os.path.join(filepath, ''.join(random.choice(string.ascii_letters) for x in range(200)))
-    try:
-        if not os.path.exists(filepath):
-            bb.utils.mkdirhier(filepath)
-        f = open(testfile, "w")
-        f.close()
-        os.remove(testfile)
-    except IOError as e:
-        import errno
-        err, strerror = e.args
-        if err == errno.ENAMETOOLONG:
-            return "Failed to create a file with a long name in %s. Please use a filesystem that does not unreasonably limit filename length.\n" % pathname
-        else:
-            return "Failed to create a file in %s: %s.\n" % (pathname, strerror)
-    except OSError as e:
-        errno, strerror = e.args
-        return "Failed to create %s directory in which to run long name sanity check: %s.\n" % (pathname, strerror)
-    return ""
-
-def check_path_length(filepath, pathname, limit):
-    if len(filepath) > limit:
-        return "The length of %s is longer than %s, this would cause unexpected errors, please use a shorter path.\n" % (pathname, limit)
-    return ""
-
-def get_filesystem_id(path):
-    import subprocess
-    try:
-        return subprocess.check_output(["stat", "-f", "-c", "%t", path]).decode('utf-8').strip()
-    except subprocess.CalledProcessError:
-        bb.warn("Can't get filesystem id of: %s" % path)
-        return None
-
-# Check that the path isn't located on nfs.
-def check_not_nfs(path, name):
-    # The nfs' filesystem id is 6969
-    if get_filesystem_id(path) == "6969":
-        return "The %s: %s can't be located on nfs.\n" % (name, path)
-    return ""
-
-# Check that the path is on a case-sensitive file system
-def check_case_sensitive(path, name):
-    import tempfile
-    with tempfile.NamedTemporaryFile(prefix='TmP', dir=path) as tmp_file:
-        if os.path.exists(tmp_file.name.lower()):
-            return "The %s (%s) can't be on a case-insensitive file system.\n" % (name, path)
-        return ""
-
-# Check that path isn't a broken symlink
-def check_symlink(lnk, data):
-    if os.path.islink(lnk) and not os.path.exists(lnk):
-        raise_sanity_error("%s is a broken symlink." % lnk, data)
-
-def check_connectivity(d):
-    # URI's to check can be set in the CONNECTIVITY_CHECK_URIS variable
-    # using the same syntax as for SRC_URI. If the variable is not set
-    # the check is skipped
-    test_uris = (d.getVar('CONNECTIVITY_CHECK_URIS') or "").split()
-    retval = ""
-
-    bbn = d.getVar('BB_NO_NETWORK')
-    if bbn not in (None, '0', '1'):
-        return 'BB_NO_NETWORK should be "0" or "1", but it is "%s"' % bbn
-
-    # Only check connectivity if network enabled and the
-    # CONNECTIVITY_CHECK_URIS are set
-    network_enabled = not (bbn == '1')
-    check_enabled = len(test_uris)
-    if check_enabled and network_enabled:
-        # Take a copy of the data store and unset MIRRORS and PREMIRRORS
-        data = bb.data.createCopy(d)
-        data.delVar('PREMIRRORS')
-        data.delVar('MIRRORS')
-        try:
-            fetcher = bb.fetch2.Fetch(test_uris, data)
-            fetcher.checkstatus()
-        except Exception as err:
-            # Allow the message to be configured so that users can be
-            # pointed to a support mechanism.
-            msg = data.getVar('CONNECTIVITY_CHECK_MSG') or ""
-            if len(msg) == 0:
-                msg = "%s.\n" % err
-                msg += "    Please ensure your host's network is configured correctly.\n"
-                msg += "    If your ISP or network is blocking the above URL,\n"
-                msg += "    try with another domain name, for example by setting:\n"
-                msg += "    CONNECTIVITY_CHECK_URIS = \"https://www.example.com/\""
-                msg += "    You could also set BB_NO_NETWORK = \"1\" to disable network\n"
-                msg += "    access if all required sources are on local disk.\n"
-            retval = msg
-
-    return retval
-
-def check_supported_distro(sanity_data):
-    from fnmatch import fnmatch
-
-    tested_distros = sanity_data.getVar('SANITY_TESTED_DISTROS')
-    if not tested_distros:
-        return
-
-    try:
-        distro = oe.lsb.distro_identifier()
-    except Exception:
-        distro = None
-
-    if not distro:
-        bb.warn('Host distribution could not be determined; you may experience unexpected failures. It is recommended that you use a tested distribution.')
-
-    for supported in [x.strip() for x in tested_distros.split('\\n')]:
-        if fnmatch(distro, supported):
-            return
-
-    bb.warn('Host distribution "%s" has not been validated with this version of the build system; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.' % distro)
-
-# Checks we should only make if MACHINE is set correctly
-def check_sanity_validmachine(sanity_data):
-    messages = ""
-
-    # Check TUNE_ARCH is set
-    if sanity_data.getVar('TUNE_ARCH') == 'INVALID':
-        messages = messages + 'TUNE_ARCH is unset. Please ensure your MACHINE configuration includes a valid tune configuration file which will set this correctly.\n'
-
-    # Check TARGET_OS is set
-    if sanity_data.getVar('TARGET_OS') == 'INVALID':
-        messages = messages + 'Please set TARGET_OS directly, or choose a MACHINE or DISTRO that does so.\n'
-
-    # Check that we don't have duplicate entries in PACKAGE_ARCHS & that TUNE_PKGARCH is in PACKAGE_ARCHS
-    pkgarchs = sanity_data.getVar('PACKAGE_ARCHS')
-    tunepkg = sanity_data.getVar('TUNE_PKGARCH')
-    defaulttune = sanity_data.getVar('DEFAULTTUNE')
-    tunefound = False
-    seen = {}
-    dups = []
-
-    for pa in pkgarchs.split():
-        if seen.get(pa, 0) == 1:
-            dups.append(pa)
-        else:
-            seen[pa] = 1
-        if pa == tunepkg:
-            tunefound = True
-
-    if len(dups):
-        messages = messages + "Error, the PACKAGE_ARCHS variable contains duplicates. The following archs are listed more than once: %s" % " ".join(dups)
-
-    if tunefound == False:
-        messages = messages + "Error, the PACKAGE_ARCHS variable (%s) for DEFAULTTUNE (%s) does not contain TUNE_PKGARCH (%s)." % (pkgarchs, defaulttune, tunepkg)
-
-    return messages
-
-# Versions of patch before 2.7 can't handle all the features in git-style diffs. Some
-# patches may incorrectly apply, and others won't apply at all.
-def check_patch_version(sanity_data):
-    import re, subprocess
-
-    try:
-        result = subprocess.check_output(["patch", "--version"], stderr=subprocess.STDOUT).decode('utf-8')
-        version = re.search(r"[0-9.]+", result.splitlines()[0]).group()
-        if bb.utils.vercmp_string_op(version, "2.7", "<"):
-            return "Your version of patch is older than 2.7 and has bugs which will break builds. Please install a newer version of patch.\n"
-        else:
-            return None
-    except subprocess.CalledProcessError as e:
-        return "Unable to execute patch --version, exit code %d:\n%s\n" % (e.returncode, e.output)
-
-# Glibc needs make 4.0 or later, so we may as well require that here
-def check_make_version(sanity_data):
-    import subprocess
-
-    try:
-        result = subprocess.check_output(['make', '--version'], stderr=subprocess.STDOUT).decode('utf-8')
-    except subprocess.CalledProcessError as e:
-        return "Unable to execute make --version, exit code %d\n%s\n" % (e.returncode, e.output)
-    version = result.split()[2]
-    if bb.utils.vercmp_string_op(version, "4.0", "<"):
-        return "Please install a make version of 4.0 or later.\n"
-
-    if bb.utils.vercmp_string_op(version, "4.2.1", "=="):
-        distro = oe.lsb.distro_identifier()
-        if "ubuntu" in distro or "debian" in distro or "linuxmint" in distro:
-            return None
-        return "make version 4.2.1 is known to have issues on Centos/OpenSUSE and other non-Ubuntu systems. Please use a buildtools-make-tarball or a newer version of make.\n"
-    return None
-
-
-# Check if we're running on WSL (Windows Subsystem for Linux).
-# WSLv1 is known not to work, but WSLv2 should work properly as
-# long as the VHDX file is optimized often; let the user know
-# upfront.
-# More information on installing WSLv2 at:
-# https://docs.microsoft.com/en-us/windows/wsl/wsl2-install
-def check_wsl(d):
-    with open("/proc/version", "r") as f:
-        verdata = f.readlines()
-    for l in verdata:
-        if "Microsoft" in l:
-            return "OpenEmbedded doesn't work under WSLv1, please upgrade to WSLv2 if you want to run builds on Windows"
-        elif "microsoft" in l:
-            bb.warn("You are running bitbake under WSLv2, this works properly but you should optimize your VHDX file eventually to avoid running out of storage space")
-    return None
-
-# Require at least gcc version 7.5.
-#
-# This can be fixed on CentOS-7 with devtoolset-6+
-# https://www.softwarecollections.org/en/scls/rhscl/devtoolset-6/
-#
-# A less invasive fix is with scripts/install-buildtools (or with user
-# built buildtools-extended-tarball)
-#
-def check_gcc_version(sanity_data):
-    import subprocess
-    
-    build_cc, version = oe.utils.get_host_compiler_version(sanity_data)
-    if build_cc.strip() == "gcc":
-        if bb.utils.vercmp_string_op(version, "7.5", "<"):
-            return "Your version of gcc is older than 7.5 and will break builds. Please install a newer version of gcc (you could use the project's buildtools-extended-tarball or use scripts/install-buildtools).\n"
-    return None
-
-# Tar versions 1.24 and onwards handle overwriting symlinks correctly
-# but earlier versions do not; this needs to work properly for sstate
-# Version 1.28 is needed so opkg-build works correctly when reproducible builds are enabled
-def check_tar_version(sanity_data):
-    import subprocess
-    try:
-        result = subprocess.check_output(["tar", "--version"], stderr=subprocess.STDOUT).decode('utf-8')
-    except subprocess.CalledProcessError as e:
-        return "Unable to execute tar --version, exit code %d\n%s\n" % (e.returncode, e.output)
-    version = result.split()[3]
-    if bb.utils.vercmp_string_op(version, "1.28", "<"):
-        return "Your version of tar is older than 1.28 and does not have the support needed to enable reproducible builds. Please install a newer version of tar (you could use the project's buildtools-tarball from our last release or use scripts/install-buildtools).\n"
-    return None
-
-# We use git parameters and functionality only found in 1.7.8 or later
-# The kernel tools assume git >= 1.8.3.1 (verified needed > 1.7.9.5) see #6162 
-# The git fetcher also had workarounds for git < 1.7.9.2 which we've dropped
-def check_git_version(sanity_data):
-    import subprocess
-    try:
-        result = subprocess.check_output(["git", "--version"], stderr=subprocess.DEVNULL).decode('utf-8')
-    except subprocess.CalledProcessError as e:
-        return "Unable to execute git --version, exit code %d\n%s\n" % (e.returncode, e.output)
-    version = result.split()[2]
-    if bb.utils.vercmp_string_op(version, "1.8.3.1", "<"):
-        return "Your version of git is older than 1.8.3.1 and has bugs which will break builds. Please install a newer version of git.\n"
-    return None
-
-# Check the required perl modules which may not be installed by default
-def check_perl_modules(sanity_data):
-    import subprocess
-    ret = ""
-    modules = ( "Text::ParseWords", "Thread::Queue", "Data::Dumper" )
-    errresult = ''
-    for m in modules:
-        try:
-            subprocess.check_output(["perl", "-e", "use %s" % m])
-        except subprocess.CalledProcessError as e:
-            errresult += bytes.decode(e.output)
-            ret += "%s " % m
-    if ret:
-        return "Required perl module(s) not found: %s\n\n%s\n" % (ret, errresult)
-    return None
-
-def sanity_check_conffiles(d):
-    funcs = d.getVar('BBLAYERS_CONF_UPDATE_FUNCS').split()
-    for func in funcs:
-        conffile, current_version, required_version, func = func.split(":")
-        if check_conf_exists(conffile, d) and d.getVar(current_version) is not None and \
-                d.getVar(current_version) != d.getVar(required_version):
-            try:
-                bb.build.exec_func(func, d)
-            except NotImplementedError as e:
-                bb.fatal(str(e))
-            d.setVar("BB_INVALIDCONF", True)
-
-def drop_v14_cross_builds(d):
-    import glob
-    indexes = glob.glob(d.expand("${SSTATE_MANIFESTS}/index-${BUILD_ARCH}_*"))
-    for i in indexes:
-        with open(i, "r") as f:
-            lines = f.readlines()
-            for l in reversed(lines):
-                try:
-                    (stamp, manifest, workdir) = l.split()
-                except ValueError:
-                    bb.fatal("Invalid line '%s' in sstate manifest '%s'" % (l, i))
-                for m in glob.glob(manifest + ".*"):
-                    if m.endswith(".postrm"):
-                        continue
-                    sstate_clean_manifest(m, d)
-                bb.utils.remove(stamp + "*")
-                bb.utils.remove(workdir, recurse = True)
-
-def sanity_handle_abichanges(status, d):
-    #
-    # Check the 'ABI' of TMPDIR
-    #
-    import subprocess
-
-    current_abi = d.getVar('OELAYOUT_ABI')
-    abifile = d.getVar('SANITY_ABIFILE')
-    if os.path.exists(abifile):
-        with open(abifile, "r") as f:
-            abi = f.read().strip()
-        if not abi.isdigit():
-            with open(abifile, "w") as f:
-                f.write(current_abi)
-        elif int(abi) <= 11 and current_abi == "12":
-            status.addresult("The layout of TMPDIR changed for Recipe Specific Sysroots.\nConversion doesn't make sense and this change will rebuild everything so please delete TMPDIR (%s).\n" % d.getVar("TMPDIR"))
-        elif int(abi) <= 13 and current_abi == "14":
-            status.addresult("TMPDIR changed to include path filtering from the pseudo database.\nIt is recommended to use a clean TMPDIR with the new pseudo path filtering so TMPDIR (%s) would need to be removed to continue.\n" % d.getVar("TMPDIR"))
-        elif int(abi) == 14 and current_abi == "15":
-            drop_v14_cross_builds(d)
-            with open(abifile, "w") as f:
-                f.write(current_abi)
-        elif (abi != current_abi):
-            # Code to convert from one ABI to another could go here if possible.
-            status.addresult("Error, TMPDIR has changed its layout version number (%s to %s) and you need to either rebuild, revert or adjust it at your own risk.\n" % (abi, current_abi))
-    else:
-        with open(abifile, "w") as f:
-            f.write(current_abi)
-
-def check_sanity_sstate_dir_change(sstate_dir, data):
-    # Sanity checks to be done when the value of SSTATE_DIR changes
-
-    # Check that SSTATE_DIR isn't on a filesystem with limited filename length (eg. eCryptFS)
-    testmsg = ""
-    if sstate_dir != "":
-        testmsg = check_create_long_filename(sstate_dir, "SSTATE_DIR")
-        # If we don't have write permission to SSTATE_DIR, suggest the user add it to SSTATE_MIRRORS instead
-        try:
-            err = testmsg.split(': ')[1].strip()
-            if err == "Permission denied.":
-                testmsg = testmsg + "You could try using %s in SSTATE_MIRRORS rather than as an SSTATE_CACHE.\n" % (sstate_dir)
-        except IndexError:
-            pass
-    return testmsg
-
-def check_sanity_version_change(status, d):
-    # Sanity checks to be done when SANITY_VERSION or NATIVELSBSTRING changes
-    # In other words, these tests run once in a given build directory and then
-    # never again until the sanity version or host distribution id/version changes.
-
-    # Check the python install is complete. Examples that are often removed in
-    # minimal installations: glib-2.0-native requires xml.parsers.expat and icu
-    # requires distutils.sysconfig.
-    try:
-        import xml.parsers.expat
-        import distutils.sysconfig
-    except ImportError as e:
-        status.addresult('Your Python 3 is not a full install. Please install the module %s (see the Getting Started guide for further information).\n' % e.name)
-
-    status.addresult(check_gcc_version(d))
-    status.addresult(check_make_version(d))
-    status.addresult(check_patch_version(d))
-    status.addresult(check_tar_version(d))
-    status.addresult(check_git_version(d))
-    status.addresult(check_perl_modules(d))
-    status.addresult(check_wsl(d))
-
-    missing = ""
-
-    if not check_app_exists("${MAKE}", d):
-        missing = missing + "GNU make,"
-
-    if not check_app_exists('${BUILD_CC}', d):
-        missing = missing + "C Compiler (%s)," % d.getVar("BUILD_CC")
-
-    if not check_app_exists('${BUILD_CXX}', d):
-        missing = missing + "C++ Compiler (%s)," % d.getVar("BUILD_CXX")
-
-    required_utilities = d.getVar('SANITY_REQUIRED_UTILITIES')
-
-    for util in required_utilities.split():
-        if not check_app_exists(util, d):
-            missing = missing + "%s," % util
-
-    if missing:
-        missing = missing.rstrip(',')
-        status.addresult("Please install the following missing utilities: %s\n" % missing)
-
-    assume_provided = d.getVar('ASSUME_PROVIDED').split()
-    # Check user doesn't have ASSUME_PROVIDED = instead of += in local.conf
-    if "diffstat-native" not in assume_provided:
-        status.addresult('Please use ASSUME_PROVIDED +=, not ASSUME_PROVIDED = in your local.conf\n')
-
-    # Check that TMPDIR isn't on a filesystem with limited filename length (eg. eCryptFS)
-    import stat
-    tmpdir = d.getVar('TMPDIR')
-    status.addresult(check_create_long_filename(tmpdir, "TMPDIR"))
-    tmpdirmode = os.stat(tmpdir).st_mode
-    if (tmpdirmode & stat.S_ISGID):
-        status.addresult("TMPDIR is setgid, please don't build in a setgid directory")
-    if (tmpdirmode & stat.S_ISUID):
-        status.addresult("TMPDIR is setuid, please don't build in a setuid directory")
-
-    # Check that a user isn't building in a path in PSEUDO_IGNORE_PATHS
-    pseudoignorepaths = d.getVar('PSEUDO_IGNORE_PATHS', expand=True).split(",")
-    workdir = d.getVar('WORKDIR', expand=True)
-    for i in pseudoignorepaths:
-        if i and workdir.startswith(i):
-            status.addresult("You are building in a path included in PSEUDO_IGNORE_PATHS " + str(i) + " please locate the build outside this path.\n")
-
-    # Check if PSEUDO_IGNORE_PATHS and paths under pseudo control overlap
-    pseudoignorepaths = d.getVar('PSEUDO_IGNORE_PATHS', expand=True).split(",")
-    pseudo_control_dir = "${D},${PKGD},${PKGDEST},${IMAGEROOTFS},${SDK_OUTPUT}"
-    pseudocontroldir = d.expand(pseudo_control_dir).split(",")
-    for i in pseudoignorepaths:
-        for j in pseudocontroldir:
-            if i and j:
-                if j.startswith(i):
-                    status.addresult("A path included in PSEUDO_IGNORE_PATHS " + str(i) + " and the path " + str(j) + " overlap and this will break pseudo permission and ownership tracking. Please set the path " + str(j) + " to a different directory which does not overlap with pseudo controlled directories. \n")
-
-    # Some third-party software apparently relies on chmod etc. being suid root (!!)
-    import stat
-    suid_check_bins = "chown chmod mknod".split()
-    for bin_cmd in suid_check_bins:
-        bin_path = bb.utils.which(os.environ["PATH"], bin_cmd)
-        if bin_path:
-            bin_stat = os.stat(bin_path)
-            if bin_stat.st_uid == 0 and bin_stat.st_mode & stat.S_ISUID:
-                status.addresult('%s has the setuid bit set. This interferes with pseudo and may cause other issues that break the build process.\n' % bin_path)
-
-    # Check that we can fetch from various network transports
-    netcheck = check_connectivity(d)
-    status.addresult(netcheck)
-    if netcheck:
-        status.network_error = True
-
-    nolibs = d.getVar('NO32LIBS')
-    if not nolibs:
-        lib32path = '/lib'
-        if os.path.exists('/lib64') and ( os.path.islink('/lib64') or os.path.islink('/lib') ):
-           lib32path = '/lib32'
-
-        if os.path.exists('%s/libc.so.6' % lib32path) and not os.path.exists('/usr/include/gnu/stubs-32.h'):
-            status.addresult("You have a 32-bit libc, but no 32-bit headers.  You must install the 32-bit libc headers.\n")
-
-    bbpaths = d.getVar('BBPATH').split(":")
-    if ("." in bbpaths or "./" in bbpaths or "" in bbpaths):
-        status.addresult("BBPATH references the current directory, either through "    \
-                "an empty entry, a './' or a '.'.\n\t This is unsafe and means your "\
-                "layer configuration is adding empty elements to BBPATH.\n\t "\
-                "Please check your layer.conf files and other BBPATH "        \
-                "settings to remove the current working directory "           \
-                "references.\n" \
-                "Parsed BBPATH is" + str(bbpaths));
-
-    oes_bb_conf = d.getVar( 'OES_BITBAKE_CONF')
-    if not oes_bb_conf:
-        status.addresult('You are not using the OpenEmbedded version of conf/bitbake.conf. This means your environment is misconfigured, in particular check BBPATH.\n')
-
-    # The length of TMPDIR can't be longer than 410
-    status.addresult(check_path_length(tmpdir, "TMPDIR", 410))
-
-    # Check that TMPDIR isn't located on nfs
-    status.addresult(check_not_nfs(tmpdir, "TMPDIR"))
-
-    # Check for case-insensitive file systems (such as Linux in Docker on
-    # macOS with default HFS+ file system)
-    status.addresult(check_case_sensitive(tmpdir, "TMPDIR"))
-
-def sanity_check_locale(d):
-    """
-    Currently bitbake switches locale to en_US.UTF-8 so check that this locale actually exists.
-    """
-    import locale
-    try:
-        locale.setlocale(locale.LC_ALL, "en_US.UTF-8")
-    except locale.Error:
-        raise_sanity_error("Your system needs to support the en_US.UTF-8 locale.", d)
-
-def check_sanity_everybuild(status, d):
-    import os, stat
-    # Sanity tests which test the user's environment so need to run at each build (or are so cheap
-    # it makes sense to always run them).
-
-    if 0 == os.getuid():
-        raise_sanity_error("Do not use Bitbake as root.", d)
-
-    # Check the Python version; we now require at least Python 3.6
-    import sys
-    if sys.hexversion < 0x030600F0:
-        status.addresult('The system requires at least Python 3.6 to run. Please update your Python interpreter.\n')
-
-    # Check the bitbake version meets minimum requirements
-    minversion = d.getVar('BB_MIN_VERSION')
-    if bb.utils.vercmp_string_op(bb.__version__, minversion, "<"):
-        status.addresult('Bitbake version %s is required and version %s was found\n' % (minversion, bb.__version__))
-
-    sanity_check_locale(d)
-
-    paths = d.getVar('PATH').split(":")
-    if "." in paths or "./" in paths or "" in paths:
-        status.addresult("PATH contains '.', './' or '' (empty element), which will break the build, please remove this.\nParsed PATH is " + str(paths) + "\n")
-
-    #Check if bitbake is present in PATH environment variable
-    bb_check = bb.utils.which(d.getVar('PATH'), 'bitbake')
-    if not bb_check:
-        bb.warn("bitbake binary is not found in PATH, did you source the script?")
-
-    # Check whether the 'inherit' directive is used (it is meant for inheriting classes);
-    # in a conf file the uppercase INHERIT variable is supposed to be used instead
-    inherit = d.getVar('inherit')
-    if inherit:
-        status.addresult("Please don't use inherit directive in your local.conf. The directive is supposed to be used in classes and recipes only to inherit of bbclasses. Here INHERIT should be used.\n")
-
-    # Check that the DISTRO is valid, if set
-    # we need to take into account that the DISTRO configuration may rename DISTRO
-    distro = d.getVar('DISTRO')
-    if distro and distro != "nodistro":
-        if not ( check_conf_exists("conf/distro/${DISTRO}.conf", d) or check_conf_exists("conf/distro/include/${DISTRO}.inc", d) ):
-            status.addresult("DISTRO '%s' not found. Please set a valid DISTRO in your local.conf\n" % d.getVar("DISTRO"))
-
-    # Check that these variables don't use tilde-expansion as we don't do that
-    for v in ("TMPDIR", "DL_DIR", "SSTATE_DIR"):
-        if d.getVar(v).startswith("~"):
-            status.addresult("%s uses ~ but Bitbake will not expand this, use an absolute path or variables." % v)
-
-    # Check that DL_DIR is set, exists and is writable. In theory, we should never even hit the check if DL_DIR isn't 
-    # set, since so much relies on it being set.
-    dldir = d.getVar('DL_DIR')
-    if not dldir:
-        status.addresult("DL_DIR is not set. Your environment is misconfigured, check that DL_DIR is set, and if the directory exists, that it is writable. \n")
-    if os.path.exists(dldir) and not os.access(dldir, os.W_OK):
-        status.addresult("DL_DIR: %s exists but you do not appear to have write access to it. \n" % dldir)
-    check_symlink(dldir, d)
-
-    # Check that the MACHINE is valid, if it is set
-    machinevalid = True
-    if d.getVar('MACHINE'):
-        if not check_conf_exists("conf/machine/${MACHINE}.conf", d):
-            status.addresult('MACHINE=%s is invalid. Please set a valid MACHINE in your local.conf, environment or other configuration file.\n' % (d.getVar('MACHINE')))
-            machinevalid = False
-        else:
-            status.addresult(check_sanity_validmachine(d))
-    else:
-        status.addresult('Please set a MACHINE in your local.conf or environment\n')
-        machinevalid = False
-    if machinevalid:
-        status.addresult(check_toolchain(d))
-
-    # Check that the SDKMACHINE is valid, if it is set
-    if d.getVar('SDKMACHINE'):
-        if not check_conf_exists("conf/machine-sdk/${SDKMACHINE}.conf", d):
-            status.addresult('Specified SDKMACHINE value is not valid\n')
-        elif d.getVar('SDK_ARCH', False) == "${BUILD_ARCH}":
-            status.addresult('SDKMACHINE is set, but SDK_ARCH has not been changed as a result - SDKMACHINE may have been set too late (e.g. in the distro configuration)\n')
-
-    # If SDK_VENDOR looks like "-my-sdk" then the triples are badly formed so fail early
-    sdkvendor = d.getVar("SDK_VENDOR")
-    if not (sdkvendor.startswith("-") and sdkvendor.count("-") == 1):
-        status.addresult("SDK_VENDOR should be of the form '-foosdk' with a single dash; found '%s'\n" % sdkvendor)
-
-    check_supported_distro(d)
-
-    omask = os.umask(0o022)
-    if omask & 0o755:
-        status.addresult("Please use a umask which allows a+rx and u+rwx\n")
-    os.umask(omask)
-
-    if d.getVar('TARGET_ARCH') == "arm":
-        # This path is no longer user-readable in modern (very recent) Linux
-        try:
-            if os.path.exists("/proc/sys/vm/mmap_min_addr"):
-                f = open("/proc/sys/vm/mmap_min_addr", "r")
-                try:
-                    if (int(f.read().strip()) > 65536):
-                        status.addresult("/proc/sys/vm/mmap_min_addr is not <= 65536. This will cause problems with qemu so please fix the value (as root).\n\nTo fix this in later reboots, set vm.mmap_min_addr = 65536 in /etc/sysctl.conf.\n")
-                finally:
-                    f.close()
-        except:
-            pass
-
-    for checkdir in ['COREBASE', 'TMPDIR']:
-        val = d.getVar(checkdir)
-        if val.find('..') != -1:
-            status.addresult("Error, you have '..' in your %s directory path. Please ensure the variable contains an absolute path as this can break some recipe builds in obtuse ways." % checkdir)
-        if val.find('+') != -1:
-            status.addresult("Error, you have an invalid character (+) in your %s directory path. Please move the installation to a directory which doesn't include any + characters." % checkdir)
-        if val.find('@') != -1:
-            status.addresult("Error, you have an invalid character (@) in your %s directory path. Please move the installation to a directory which doesn't include any @ characters." % checkdir)
-        if val.find(' ') != -1:
-            status.addresult("Error, you have a space in your %s directory path. Please move the installation to a directory which doesn't include a space since autotools doesn't support this." % checkdir)
-        if val.find('%') != -1:
-            status.addresult("Error, you have an invalid character (%) in your %s directory path which causes problems with python string formatting. Please move the installation to a directory which doesn't include any % characters." % checkdir)
-
-    # Check the format of MIRRORS, PREMIRRORS and SSTATE_MIRRORS
-    import re
-    mirror_vars = ['MIRRORS', 'PREMIRRORS', 'SSTATE_MIRRORS']
-    protocols = ['http', 'ftp', 'file', 'https', \
-                 'git', 'gitsm', 'hg', 'osc', 'p4', 'svn', \
-                 'bzr', 'cvs', 'npm', 'sftp', 'ssh', 's3', 'az', 'ftps']
-    for mirror_var in mirror_vars:
-        mirrors = (d.getVar(mirror_var) or '').replace('\\n', ' ').split()
-
-        # Split into pairs
-        if len(mirrors) % 2 != 0:
-            bb.warn('Invalid mirror variable value for %s: %s, should contain paired members.' % (mirror_var, str(mirrors)))
-            continue
-        mirrors = list(zip(*[iter(mirrors)]*2))
-
-        for mirror_entry in mirrors:
-            pattern, mirror = mirror_entry
-
-            decoded = bb.fetch2.decodeurl(pattern)
-            try:
-                pattern_scheme = re.compile(decoded[0])
-            except re.error as exc:
-                bb.warn('Invalid scheme regex (%s) in %s; %s' % (pattern, mirror_var, mirror_entry))
-                continue
-
-            if not any(pattern_scheme.match(protocol) for protocol in protocols):
-                bb.warn('Invalid protocol (%s) in %s: %s' % (decoded[0], mirror_var, mirror_entry))
-                continue
-
-            if not any(mirror.startswith(protocol + '://') for protocol in protocols):
-                bb.warn('Invalid protocol in %s: %s' % (mirror_var, mirror_entry))
-                continue
-
-            if mirror.startswith('file://'):
-                import urllib
-                check_symlink(urllib.parse.urlparse(mirror).path, d)
-                # SSTATE_MIRROR ends with a /PATH string
-                if mirror.endswith('/PATH'):
-                    # remove /PATH$ from SSTATE_MIRROR to get a working
-                    # base directory path
-                    mirror_base = urllib.parse.urlparse(mirror[:-1*len('/PATH')]).path
-                    check_symlink(mirror_base, d)
-
-    # Check sstate mirrors aren't being used with a local hash server and no remote
-    hashserv = d.getVar("BB_HASHSERVE")
-    if d.getVar("SSTATE_MIRRORS") and hashserv and hashserv.startswith("unix://") and not d.getVar("BB_HASHSERVE_UPSTREAM"):
-        bb.warn("You are using a local hash equivalence server but have configured an sstate mirror. This will likely mean no sstate will match from the mirror. You may wish to disable the hash equivalence use (BB_HASHSERVE), or use a hash equivalence server alongside the sstate mirror.")
-
-    # Check that TMPDIR hasn't changed location since the last time we were run
-    tmpdir = d.getVar('TMPDIR')
-    checkfile = os.path.join(tmpdir, "saved_tmpdir")
-    if os.path.exists(checkfile):
-        with open(checkfile, "r") as f:
-            saved_tmpdir = f.read().strip()
-            if (saved_tmpdir != tmpdir):
-                status.addresult("Error, TMPDIR has changed location. You need to either move it back to %s or delete it and rebuild\n" % saved_tmpdir)
-    else:
-        bb.utils.mkdirhier(tmpdir)
-        # Remove setuid, setgid and sticky bits from TMPDIR
-        try:
-            os.chmod(tmpdir, os.stat(tmpdir).st_mode & ~ stat.S_ISUID)
-            os.chmod(tmpdir, os.stat(tmpdir).st_mode & ~ stat.S_ISGID)
-            os.chmod(tmpdir, os.stat(tmpdir).st_mode & ~ stat.S_ISVTX)
-        except OSError as exc:
-            bb.warn("Unable to chmod TMPDIR: %s" % exc)
-        with open(checkfile, "w") as f:
-            f.write(tmpdir)
-
-    # If /bin/sh is a symlink, check that it points to dash or bash
-    if os.path.islink('/bin/sh'):
-        real_sh = os.path.realpath('/bin/sh')
-        # Due to update-alternatives, the shell name may take various
-        # forms, such as /bin/dash, /bin/bash, /bin/bash.bash ...
-        if '/dash' not in real_sh and '/bash' not in real_sh:
-            status.addresult("Error, /bin/sh links to %s, must be dash or bash\n" % real_sh)
-
-def check_sanity(sanity_data):
-    class SanityStatus(object):
-        def __init__(self):
-            self.messages = ""
-            self.network_error = False
-
-        def addresult(self, message):
-            if message:
-                self.messages = self.messages + message
-
-    status = SanityStatus()
-
-    tmpdir = sanity_data.getVar('TMPDIR')
-    sstate_dir = sanity_data.getVar('SSTATE_DIR')
-
-    check_symlink(sstate_dir, sanity_data)
-
-    # Check saved sanity info
-    last_sanity_version = 0
-    last_tmpdir = ""
-    last_sstate_dir = ""
-    last_nativelsbstr = ""
-    sanityverfile = sanity_data.expand("${TOPDIR}/cache/sanity_info")
-    if os.path.exists(sanityverfile):
-        with open(sanityverfile, 'r') as f:
-            for line in f:
-                if line.startswith('SANITY_VERSION'):
-                    last_sanity_version = int(line.split()[1])
-                if line.startswith('TMPDIR'):
-                    last_tmpdir = line.split()[1]
-                if line.startswith('SSTATE_DIR'):
-                    last_sstate_dir = line.split()[1]
-                if line.startswith('NATIVELSBSTRING'):
-                    last_nativelsbstr = line.split()[1]
-
-    check_sanity_everybuild(status, sanity_data)
-    
-    sanity_version = int(sanity_data.getVar('SANITY_VERSION') or 1)
-    network_error = False
-    # NATIVELSBSTRING var may have been overridden with "universal", so
-    # get actual host distribution id and version
-    nativelsbstr = lsb_distro_identifier(sanity_data)
-    if last_sanity_version < sanity_version or last_nativelsbstr != nativelsbstr: 
-        check_sanity_version_change(status, sanity_data)
-        status.addresult(check_sanity_sstate_dir_change(sstate_dir, sanity_data))
-    else: 
-        if last_sstate_dir != sstate_dir:
-            status.addresult(check_sanity_sstate_dir_change(sstate_dir, sanity_data))
-
-    if os.path.exists(os.path.dirname(sanityverfile)) and not status.messages:
-        with open(sanityverfile, 'w') as f:
-            f.write("SANITY_VERSION %s\n" % sanity_version) 
-            f.write("TMPDIR %s\n" % tmpdir) 
-            f.write("SSTATE_DIR %s\n" % sstate_dir) 
-            f.write("NATIVELSBSTRING %s\n" % nativelsbstr) 
-
-    sanity_handle_abichanges(status, sanity_data)
-
-    if status.messages != "":
-        raise_sanity_error(sanity_data.expand(status.messages), sanity_data, status.network_error)
-
-# Create a copy of the datastore and finalise it to ensure appends and 
-# overrides are set - the datastore has yet to be finalised at ConfigParsed
-def copy_data(e):
-    sanity_data = bb.data.createCopy(e.data)
-    sanity_data.finalize()
-    return sanity_data
-
-addhandler config_reparse_eventhandler
-config_reparse_eventhandler[eventmask] = "bb.event.ConfigParsed"
-python config_reparse_eventhandler() {
-    sanity_check_conffiles(e.data)
-}
-
-addhandler check_sanity_eventhandler
-check_sanity_eventhandler[eventmask] = "bb.event.SanityCheck bb.event.NetworkTest"
-python check_sanity_eventhandler() {
-    if bb.event.getName(e) == "SanityCheck":
-        sanity_data = copy_data(e)
-        check_sanity(sanity_data)
-        if e.generateevents:
-            sanity_data.setVar("SANITY_USE_EVENTS", "1")
-        bb.event.fire(bb.event.SanityCheckPassed(), e.data)
-    elif bb.event.getName(e) == "NetworkTest":
-        sanity_data = copy_data(e)
-        if e.generateevents:
-            sanity_data.setVar("SANITY_USE_EVENTS", "1")
-        bb.event.fire(bb.event.NetworkTestFailed() if check_connectivity(sanity_data) else bb.event.NetworkTestPassed(), e.data)
-
-    return
-}
diff --git a/poky/meta/classes/scons.bbclass b/poky/meta/classes/scons.bbclass
deleted file mode 100644
index 80f8382..0000000
--- a/poky/meta/classes/scons.bbclass
+++ /dev/null
@@ -1,28 +0,0 @@
-inherit python3native
-
-DEPENDS += "python3-scons-native"
-
-EXTRA_OESCONS ?= ""
-
-do_configure() {
-	if [ -n "${CONFIGURESTAMPFILE}" -a "${S}" = "${B}" ]; then
-		if [ -e "${CONFIGURESTAMPFILE}" -a "`cat ${CONFIGURESTAMPFILE}`" != "${BB_TASKHASH}" -a "${CLEANBROKEN}" != "1" ]; then
-			${STAGING_BINDIR_NATIVE}/scons --directory=${S} --clean PREFIX=${prefix} prefix=${prefix} ${EXTRA_OESCONS}
-		fi
-
-		mkdir -p `dirname ${CONFIGURESTAMPFILE}`
-		echo ${BB_TASKHASH} > ${CONFIGURESTAMPFILE}
-	fi
-}
-
-scons_do_compile() {
-	${STAGING_BINDIR_NATIVE}/scons --directory=${S} ${PARALLEL_MAKE} PREFIX=${prefix} prefix=${prefix} ${EXTRA_OESCONS} || \
-	die "scons build execution failed."
-}
-
-scons_do_install() {
-	${STAGING_BINDIR_NATIVE}/scons --directory=${S} install_root=${D}${prefix} PREFIX=${prefix} prefix=${prefix} ${EXTRA_OESCONS} install || \
-	die "scons install execution failed."
-}
-
-EXPORT_FUNCTIONS do_compile do_install
diff --git a/poky/meta/classes/setuptools3-base.bbclass b/poky/meta/classes/setuptools3-base.bbclass
deleted file mode 100644
index 15abe1d..0000000
--- a/poky/meta/classes/setuptools3-base.bbclass
+++ /dev/null
@@ -1,31 +0,0 @@
-DEPENDS:append:class-target = " ${PYTHON_PN}-native ${PYTHON_PN}"
-DEPENDS:append:class-nativesdk = " ${PYTHON_PN}-native ${PYTHON_PN}"
-RDEPENDS:${PN}:append:class-target = " ${PYTHON_PN}-core"
-
-export STAGING_INCDIR
-export STAGING_LIBDIR
-
-# LDSHARED is the ld *command* used to create a shared library
-export LDSHARED  = "${CCLD} -shared"
-# LDCXXSHARED is the ld *command* used to create a shared library of C++
-# objects
-export LDCXXSHARED  = "${CXX} -shared"
-# CCSHARED are the C *flags* used to create objects to go into a shared
-# library (module)
-export CCSHARED  = "-fPIC -DPIC"
-# LINKFORSHARED are the flags passed to the $(CC) command that links
-# the python executable
-export LINKFORSHARED = "${SECURITY_CFLAGS} -Xlinker -export-dynamic"
-
-FILES:${PN} += "${libdir}/* ${libdir}/${PYTHON_DIR}/*"
-
-FILES:${PN}-staticdev += "\
-  ${PYTHON_SITEPACKAGES_DIR}/*.a \
-"
-FILES:${PN}-dev += "\
-  ${datadir}/pkgconfig \
-  ${libdir}/pkgconfig \
-  ${PYTHON_SITEPACKAGES_DIR}/*.la \
-"
-inherit python3native python3targetconfig
-
diff --git a/poky/meta/classes/setuptools3.bbclass b/poky/meta/classes/setuptools3.bbclass
deleted file mode 100644
index 66c9466..0000000
--- a/poky/meta/classes/setuptools3.bbclass
+++ /dev/null
@@ -1,32 +0,0 @@
-inherit setuptools3-base python_pep517
-
-DEPENDS += "python3-setuptools-native python3-wheel-native"
-
-SETUPTOOLS_BUILD_ARGS ?= ""
-
-SETUPTOOLS_SETUP_PATH ?= "${S}"
-
-setuptools3_do_configure() {
-    :
-}
-
-setuptools3_do_compile() {
-        cd ${SETUPTOOLS_SETUP_PATH}
-        NO_FETCH_BUILD=1 \
-        STAGING_INCDIR=${STAGING_INCDIR} \
-        STAGING_LIBDIR=${STAGING_LIBDIR} \
-        ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py \
-        bdist_wheel --verbose --dist-dir ${PEP517_WHEEL_PATH} ${SETUPTOOLS_BUILD_ARGS} || \
-        bbfatal_log "'${PYTHON_PN} setup.py bdist_wheel ${SETUPTOOLS_BUILD_ARGS}' execution failed."
-}
-setuptools3_do_compile[vardepsexclude] = "MACHINE"
-do_compile[cleandirs] += "${PEP517_WHEEL_PATH}"
-
-# This could be removed in the future but some recipes in meta-oe still use it
-setuptools3_do_install() {
-        python_pep517_do_install
-}
-
-EXPORT_FUNCTIONS do_configure do_compile do_install
-
-export LDSHARED="${CCLD} -shared"
diff --git a/poky/meta/classes/setuptools3_legacy.bbclass b/poky/meta/classes/setuptools3_legacy.bbclass
deleted file mode 100644
index 5a99daa..0000000
--- a/poky/meta/classes/setuptools3_legacy.bbclass
+++ /dev/null
@@ -1,78 +0,0 @@
-# This class is for packages which use the deprecated setuptools behaviour,
-# specifically custom install tasks which don't work correctly with bdist_wheel.
-# This behaviour is deprecated in setuptools[1] and won't work in the future, so
-# all users of this should consider their options: pure Python modules can use a
-# modern Python tool such as build[2], or packages which are doing more (such as
-# installing init scripts) should use a fully-featured build system such as Meson.
-#
-# [1] https://setuptools.pypa.io/en/latest/history.html#id142
-# [2] https://pypi.org/project/build/
-
-inherit setuptools3-base
-
-B = "${WORKDIR}/build"
-
-SETUPTOOLS_BUILD_ARGS ?= ""
-SETUPTOOLS_INSTALL_ARGS ?= "--root=${D} \
-    --prefix=${prefix} \
-    --install-lib=${PYTHON_SITEPACKAGES_DIR} \
-    --install-data=${datadir}"
-
-SETUPTOOLS_PYTHON = "python3"
-SETUPTOOLS_PYTHON:class-native = "nativepython3"
-
-SETUPTOOLS_SETUP_PATH ?= "${S}"
-
-setuptools3_legacy_do_configure() {
-    :
-}
-
-setuptools3_legacy_do_compile() {
-        cd ${SETUPTOOLS_SETUP_PATH}
-        NO_FETCH_BUILD=1 \
-        STAGING_INCDIR=${STAGING_INCDIR} \
-        STAGING_LIBDIR=${STAGING_LIBDIR} \
-        ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py \
-        build --build-base=${B} ${SETUPTOOLS_BUILD_ARGS} || \
-        bbfatal_log "'${PYTHON_PN} setup.py build ${SETUPTOOLS_BUILD_ARGS}' execution failed."
-}
-setuptools3_legacy_do_compile[vardepsexclude] = "MACHINE"
-
-setuptools3_legacy_do_install() {
-        cd ${SETUPTOOLS_SETUP_PATH}
-        install -d ${D}${PYTHON_SITEPACKAGES_DIR}
-        STAGING_INCDIR=${STAGING_INCDIR} \
-        STAGING_LIBDIR=${STAGING_LIBDIR} \
-        PYTHONPATH=${D}${PYTHON_SITEPACKAGES_DIR} \
-        ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py \
-        build --build-base=${B} install --skip-build ${SETUPTOOLS_INSTALL_ARGS} || \
-        bbfatal_log "'${PYTHON_PN} setup.py install ${SETUPTOOLS_INSTALL_ARGS}' execution failed."
-
-        # support filenames with *spaces*
-        find ${D} -name "*.py" -exec grep -q ${D} {} \; \
-                               -exec sed -i -e s:${D}::g {} \;
-
-        for i in ${D}${bindir}/* ${D}${sbindir}/*; do
-            if [ -f "$i" ]; then
-                sed -i -e s:${PYTHON}:${USRBINPATH}/env\ ${SETUPTOOLS_PYTHON}:g $i
-                sed -i -e s:${STAGING_BINDIR_NATIVE}:${bindir}:g $i
-            fi
-        done
-
-        rm -f ${D}${PYTHON_SITEPACKAGES_DIR}/easy-install.pth
-
-        #
-        # FIXME: Bandaid against wrong datadir computation
-        #
-        if [ -e ${D}${datadir}/share ]; then
-            mv -f ${D}${datadir}/share/* ${D}${datadir}/
-            rmdir ${D}${datadir}/share
-        fi
-}
-setuptools3_legacy_do_install[vardepsexclude] = "MACHINE"
-
-EXPORT_FUNCTIONS do_configure do_compile do_install
-
-export LDSHARED="${CCLD} -shared"
-DEPENDS += "python3-setuptools-native"
-
diff --git a/poky/meta/classes/sign_ipk.bbclass b/poky/meta/classes/sign_ipk.bbclass
index e5057b7..51c24b3 100644
--- a/poky/meta/classes/sign_ipk.bbclass
+++ b/poky/meta/classes/sign_ipk.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Class for generating signed IPK packages.
 #
 # Configuration variables used by this class:
diff --git a/poky/meta/classes/sign_package_feed.bbclass b/poky/meta/classes/sign_package_feed.bbclass
index f1504c2..e9d6647 100644
--- a/poky/meta/classes/sign_package_feed.bbclass
+++ b/poky/meta/classes/sign_package_feed.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Class for signing package feeds
 #
 # Related configuration variables that will be used after this class is
diff --git a/poky/meta/classes/sign_rpm.bbclass b/poky/meta/classes/sign_rpm.bbclass
index 73a55a5..ee0c480 100644
--- a/poky/meta/classes/sign_rpm.bbclass
+++ b/poky/meta/classes/sign_rpm.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Class for generating signed RPM packages.
 #
 # Configuration variables used by this class:
diff --git a/poky/meta/classes/siteconfig.bbclass b/poky/meta/classes/siteconfig.bbclass
index 0cfa5a6..953cafd 100644
--- a/poky/meta/classes/siteconfig.bbclass
+++ b/poky/meta/classes/siteconfig.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 python siteconfig_do_siteconfig () {
     shared_state = sstate_state_fromvars(d)
     if shared_state['task'] != 'populate_sysroot':
diff --git a/poky/meta/classes/siteinfo.bbclass b/poky/meta/classes/siteinfo.bbclass
deleted file mode 100644
index 3555d5a..0000000
--- a/poky/meta/classes/siteinfo.bbclass
+++ /dev/null
@@ -1,226 +0,0 @@
-# This class exists to provide information about the targets that
-# may be needed by other classes and/or recipes. If you add a new
-# target this will probably need to be updated.
-
-#
-# Returns information about 'what' for the named target 'target'
-# where 'target' == "<arch>-<os>"
-#
-# 'what' can be one of
-# * target: Returns the target name ("<arch>-<os>")
-# * endianness: Returns "be" for big endian targets, "le" for little endian
-# * bits: Returns the bit size of the target, either "32" or "64"
-# * libc: Returns the name of the C library used by the target
-#
-# It is an error for the target not to exist.
-# If 'what' doesn't exist then an empty value is returned
-#
-def siteinfo_data_for_machine(arch, os, d):
-    archinfo = {
-        "allarch": "endian-little bit-32", # bogus, but better than special-casing the checks below for allarch
-        "aarch64": "endian-little bit-64 arm-common arm-64",
-        "aarch64_be": "endian-big bit-64 arm-common arm-64",
-        "arc": "endian-little bit-32 arc-common",
-        "arceb": "endian-big bit-32 arc-common",
-        "arm": "endian-little bit-32 arm-common arm-32",
-        "armeb": "endian-big bit-32 arm-common arm-32",
-        "avr32": "endian-big bit-32 avr32-common",
-        "bfin": "endian-little bit-32 bfin-common",
-        "epiphany": "endian-little bit-32",
-        "i386": "endian-little bit-32 ix86-common",
-        "i486": "endian-little bit-32 ix86-common",
-        "i586": "endian-little bit-32 ix86-common",
-        "i686": "endian-little bit-32 ix86-common",
-        "ia64": "endian-little bit-64",
-        "lm32": "endian-big bit-32",
-        "m68k": "endian-big bit-32",
-        "microblaze": "endian-big bit-32 microblaze-common",
-        "microblazeel": "endian-little bit-32 microblaze-common",
-        "mips": "endian-big bit-32 mips-common",
-        "mips64": "endian-big bit-64 mips-common",
-        "mips64el": "endian-little bit-64 mips-common",
-        "mipsisa64r6": "endian-big bit-64 mips-common",
-        "mipsisa64r6el": "endian-little bit-64 mips-common",
-        "mipsel": "endian-little bit-32 mips-common",
-        "mipsisa32r6": "endian-big bit-32 mips-common",
-        "mipsisa32r6el": "endian-little bit-32 mips-common",
-        "powerpc": "endian-big bit-32 powerpc-common",
-        "powerpcle": "endian-little bit-32 powerpc-common",
-        "nios2": "endian-little bit-32 nios2-common",
-        "powerpc64": "endian-big bit-64 powerpc-common",
-        "powerpc64le": "endian-little bit-64 powerpc-common",
-        "ppc": "endian-big bit-32 powerpc-common",
-        "ppc64": "endian-big bit-64 powerpc-common",
-        "ppc64le" : "endian-little bit-64 powerpc-common",
-        "riscv32": "endian-little bit-32 riscv-common",
-        "riscv64": "endian-little bit-64 riscv-common",
-        "sh3": "endian-little bit-32 sh-common",
-        "sh3eb": "endian-big bit-32 sh-common",
-        "sh4": "endian-little bit-32 sh-common",
-        "sh4eb": "endian-big bit-32 sh-common",
-        "sparc": "endian-big bit-32",
-        "viac3": "endian-little bit-32 ix86-common",
-        "x86_64": "endian-little", # bitinfo specified in targetinfo
-    }
-    osinfo = {
-        "darwin": "common-darwin",
-        "darwin9": "common-darwin",
-        "linux": "common-linux common-glibc",
-        "linux-gnu": "common-linux common-glibc",
-        "linux-gnu_ilp32": "common-linux common-glibc",
-        "linux-gnux32": "common-linux common-glibc",
-        "linux-gnun32": "common-linux common-glibc",
-        "linux-gnueabi": "common-linux common-glibc",
-        "linux-gnuspe": "common-linux common-glibc",
-        "linux-musl": "common-linux common-musl",
-        "linux-muslx32": "common-linux common-musl",
-        "linux-musleabi": "common-linux common-musl",
-        "linux-muslspe": "common-linux common-musl",
-        "uclinux-uclibc": "common-uclibc",
-        "cygwin": "common-cygwin",
-        "mingw32": "common-mingw",
-    }
-    targetinfo = {
-        "aarch64-linux-gnu": "aarch64-linux",
-        "aarch64_be-linux-gnu": "aarch64_be-linux",
-        "aarch64-linux-gnu_ilp32": "bit-32 aarch64_be-linux arm-32",
-        "aarch64_be-linux-gnu_ilp32": "bit-32 aarch64_be-linux arm-32",
-        "aarch64-linux-musl": "aarch64-linux",
-        "aarch64_be-linux-musl": "aarch64_be-linux",
-        "arm-linux-gnueabi": "arm-linux",
-        "arm-linux-musleabi": "arm-linux",
-        "armeb-linux-gnueabi": "armeb-linux",
-        "armeb-linux-musleabi": "armeb-linux",
-        "microblazeel-linux" : "microblaze-linux",
-        "microblazeel-linux-musl" : "microblaze-linux",
-        "mips-linux-musl": "mips-linux",
-        "mipsel-linux-musl": "mipsel-linux",
-        "mips64-linux-musl": "mips64-linux",
-        "mips64el-linux-musl": "mips64el-linux",
-        "mips64-linux-gnun32": "mips-linux bit-32",
-        "mips64el-linux-gnun32": "mipsel-linux bit-32",
-        "mipsisa64r6-linux-gnun32": "mipsisa32r6-linux bit-32",
-        "mipsisa64r6el-linux-gnun32": "mipsisa32r6el-linux bit-32",
-        "powerpc-linux": "powerpc32-linux powerpc32-linux-glibc",
-        "powerpc-linux-musl": "powerpc-linux powerpc32-linux powerpc32-linux-musl",
-        "powerpcle-linux": "powerpc32-linux powerpc32-linux-glibc",
-        "powerpcle-linux-musl": "powerpc-linux powerpc32-linux powerpc32-linux-musl",
-        "powerpc-linux-gnuspe": "powerpc-linux powerpc32-linux powerpc32-linux-glibc",
-        "powerpc-linux-muslspe": "powerpc-linux powerpc32-linux powerpc32-linux-musl",
-        "powerpc64-linux-gnuspe": "powerpc-linux powerpc64-linux powerpc64-linux-glibc",
-        "powerpc64-linux-muslspe": "powerpc-linux powerpc64-linux powerpc64-linux-musl",
-        "powerpc64-linux": "powerpc-linux powerpc64-linux powerpc64-linux-glibc",
-        "powerpc64-linux-musl": "powerpc-linux powerpc64-linux powerpc64-linux-musl",
-        "powerpc64le-linux": "powerpc-linux powerpc64-linux powerpc64-linux-glibc",
-        "powerpc64le-linux-musl": "powerpc-linux powerpc64-linux powerpc64-linux-musl",
-        "riscv32-linux": "riscv32-linux",
-        "riscv32-linux-musl": "riscv32-linux",
-        "riscv64-linux": "riscv64-linux",
-        "riscv64-linux-musl": "riscv64-linux",
-        "x86_64-cygwin": "bit-64",
-        "x86_64-darwin": "bit-64",
-        "x86_64-darwin9": "bit-64",
-        "x86_64-linux": "bit-64",
-        "x86_64-linux-musl": "x86_64-linux bit-64",
-        "x86_64-linux-muslx32": "bit-32 ix86-common x32-linux",
-        "x86_64-elf": "bit-64",
-        "x86_64-linux-gnu": "bit-64 x86_64-linux",
-        "x86_64-linux-gnux32": "bit-32 ix86-common x32-linux",
-        "x86_64-mingw32": "bit-64",
-    }
-
-    # Add in any extra user supplied data which may come from a BSP layer, removing the
-    # need to always change this class directly
-    extra_siteinfo = (d.getVar("SITEINFO_EXTRA_DATAFUNCS") or "").split()
-    for m in extra_siteinfo:
-        call = m + "(archinfo, osinfo, targetinfo, d)"
-        locs = { "archinfo" : archinfo, "osinfo" : osinfo, "targetinfo" : targetinfo, "d" : d}
-        archinfo, osinfo, targetinfo = bb.utils.better_eval(call, locs)
-
-    target = "%s-%s" % (arch, os)
-
-    sitedata = []
-    if arch in archinfo:
-        sitedata.extend(archinfo[arch].split())
-    if os in osinfo:
-        sitedata.extend(osinfo[os].split())
-    if target in targetinfo:
-        sitedata.extend(targetinfo[target].split())
-    sitedata.append(target)
-    sitedata.append("common")
-
-    bb.debug(1, "SITE files %s" % sitedata);
-    return sitedata
-
-def siteinfo_data(d):
-    return siteinfo_data_for_machine(d.getVar("HOST_ARCH"), d.getVar("HOST_OS"), d)
-
-python () {
-    sitedata = set(siteinfo_data(d))
-    if "endian-little" in sitedata:
-        d.setVar("SITEINFO_ENDIANNESS", "le")
-    elif "endian-big" in sitedata:
-        d.setVar("SITEINFO_ENDIANNESS", "be")
-    else:
-        bb.error("Unable to determine endianness for architecture '%s'" %
-                 d.getVar("HOST_ARCH"))
-        bb.fatal("Please add your architecture to siteinfo.bbclass")
-
-    if "bit-32" in sitedata:
-        d.setVar("SITEINFO_BITS", "32")
-    elif "bit-64" in sitedata:
-        d.setVar("SITEINFO_BITS", "64")
-    else:
-        bb.error("Unable to determine bit size for architecture '%s'" %
-                 d.getVar("HOST_ARCH"))
-        bb.fatal("Please add your architecture to siteinfo.bbclass")
-}
-
-# Layers with siteconfig need to add a replacement path to this variable so the
-# sstate isn't path specific
-SITEINFO_PATHVARS = "COREBASE"
-
-def siteinfo_get_files(d, sysrootcache=False):
-    sitedata = siteinfo_data(d)
-    sitefiles = []
-    searched = []
-    for path in d.getVar("BBPATH").split(":"):
-        for element in sitedata:
-            filename = os.path.join(path, "site", element)
-            if os.path.exists(filename):
-                searched.append(filename + ":True")
-                sitefiles.append(filename)
-            else:
-                searched.append(filename + ":False")
-
-    # Have to parameterise out hardcoded paths such as COREBASE for the main site files
-    for var in d.getVar("SITEINFO_PATHVARS").split():
-        searched2 = []
-        replace = os.path.normpath(d.getVar(var))
-        for s in searched:
-            searched2.append(s.replace(replace, "${" + var + "}"))
-        searched = searched2
-
-    if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross', d) or bb.data.inherits_class('crosssdk', d):
-        # We need sstate sigs for native/cross not to vary upon arch so we can't depend on the site files.
-        # In future we may want to depend upon all site files?
-        # This would show up as breaking sstatetests.SStateTests.test_sstate_32_64_same_hash for example
-        searched = []
-
-    if not sysrootcache:
-        return sitefiles, searched
-
-    # Now check for siteconfig cache files in sysroots
-    path_siteconfig = d.getVar('SITECONFIG_SYSROOTCACHE')
-    if path_siteconfig and os.path.isdir(path_siteconfig):
-        for i in os.listdir(path_siteconfig):
-            if not i.endswith("_config"):
-                continue
-            filename = os.path.join(path_siteconfig, i)
-            sitefiles.append(filename)
-    return sitefiles, searched
-
-#
-# Make some information available via variables
-#
-SITECONFIG_SYSROOTCACHE = "${STAGING_DATADIR}/${TARGET_SYS}_config_site.d"
diff --git a/poky/meta/classes/sstate.bbclass b/poky/meta/classes/sstate.bbclass
deleted file mode 100644
index 0aa901f..0000000
--- a/poky/meta/classes/sstate.bbclass
+++ /dev/null
@@ -1,1358 +0,0 @@
-SSTATE_VERSION = "10"
-
-SSTATE_ZSTD_CLEVEL ??= "8"
-
-SSTATE_MANIFESTS ?= "${TMPDIR}/sstate-control"
-SSTATE_MANFILEPREFIX = "${SSTATE_MANIFESTS}/manifest-${SSTATE_MANMACH}-${PN}"
-
-def generate_sstatefn(spec, hash, taskname, siginfo, d):
-    if taskname is None:
-       return ""
-    extension = ".tar.zst"
-    # 8 chars reserved for siginfo
-    limit = 254 - 8
-    if siginfo:
-        limit = 254
-        extension = ".tar.zst.siginfo"
-    if not hash:
-        hash = "INVALID"
-    fn = spec + hash + "_" + taskname + extension
-    # If the filename is too long, attempt to reduce it
-    if len(fn) > limit:
-        components = spec.split(":")
-        # Fields 0,5,6 are mandatory, 1 is most useful, 2,3,4 are just for information
-        # 7 is for the separators
-        avail = (limit - len(hash + "_" + taskname + extension) - len(components[0]) - len(components[1]) - len(components[5]) - len(components[6]) - 7) // 3
-        components[2] = components[2][:avail]
-        components[3] = components[3][:avail]
-        components[4] = components[4][:avail]
-        spec = ":".join(components)
-        fn = spec + hash + "_" + taskname + extension
-        if len(fn) > limit:
-            bb.fatal("Unable to reduce sstate name to less than 255 chararacters")
-    return hash[:2] + "/" + hash[2:4] + "/" + fn
-
-SSTATE_PKGARCH    = "${PACKAGE_ARCH}"
-SSTATE_PKGSPEC    = "sstate:${PN}:${PACKAGE_ARCH}${TARGET_VENDOR}-${TARGET_OS}:${PV}:${PR}:${SSTATE_PKGARCH}:${SSTATE_VERSION}:"
-SSTATE_SWSPEC     = "sstate:${PN}::${PV}:${PR}::${SSTATE_VERSION}:"
-SSTATE_PKGNAME    = "${SSTATE_EXTRAPATH}${@generate_sstatefn(d.getVar('SSTATE_PKGSPEC'), d.getVar('BB_UNIHASH'), d.getVar('SSTATE_CURRTASK'), False, d)}"
-SSTATE_PKG        = "${SSTATE_DIR}/${SSTATE_PKGNAME}"
-SSTATE_EXTRAPATH   = ""
-SSTATE_EXTRAPATHWILDCARD = ""
-SSTATE_PATHSPEC   = "${SSTATE_DIR}/${SSTATE_EXTRAPATHWILDCARD}*/*/${SSTATE_PKGSPEC}*_${SSTATE_PATH_CURRTASK}.tar.zst*"
-
-# explicitly make PV depend on the evaluated value of the PV variable
-PV[vardepvalue] = "${PV}"
-
-# We don't want the sstate to depend on things like the distro string
-# of the system; we let the sstate paths take care of this.
-SSTATE_EXTRAPATH[vardepvalue] = ""
-SSTATE_EXTRAPATHWILDCARD[vardepvalue] = ""
-
-# For multilib rpm the allarch packagegroup files can overwrite each other (in theory they're identical)
-SSTATE_ALLOW_OVERLAP_FILES = "${DEPLOY_DIR}/licenses/"
-# Avoid docbook/sgml catalog warnings for now
-SSTATE_ALLOW_OVERLAP_FILES += "${STAGING_ETCDIR_NATIVE}/sgml ${STAGING_DATADIR_NATIVE}/sgml"
-# sdk-provides-dummy-nativesdk and nativesdk-buildtools-perl-dummy overlap for different SDKMACHINE
-SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_RPM}/sdk_provides_dummy_nativesdk/ ${DEPLOY_DIR_IPK}/sdk-provides-dummy-nativesdk/"
-SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_RPM}/buildtools_dummy_nativesdk/ ${DEPLOY_DIR_IPK}/buildtools-dummy-nativesdk/"
-# target-sdk-provides-dummy overlaps because allarch is disabled when multilib is used
-SSTATE_ALLOW_OVERLAP_FILES += "${COMPONENTS_DIR}/sdk-provides-dummy-target/ ${DEPLOY_DIR_RPM}/sdk_provides_dummy_target/ ${DEPLOY_DIR_IPK}/sdk-provides-dummy-target/"
-# Archive the sources for many architectures in one deploy folder
-SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_SRC}"
-# ovmf/grub-efi/systemd-boot/intel-microcode multilib recipes can generate identical overlapping files
-SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_IMAGE}/ovmf"
-SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_IMAGE}/grub-efi"
-SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_IMAGE}/systemd-boot"
-SSTATE_ALLOW_OVERLAP_FILES += "${DEPLOY_DIR_IMAGE}/microcode"
-
-SSTATE_SCAN_FILES ?= "*.la *-config *_config postinst-*"
-SSTATE_SCAN_CMD ??= 'find ${SSTATE_BUILDDIR} \( -name "${@"\" -o -name \"".join(d.getVar("SSTATE_SCAN_FILES").split())}" \) -type f'
-SSTATE_SCAN_CMD_NATIVE ??= 'grep -Irl -e ${RECIPE_SYSROOT} -e ${RECIPE_SYSROOT_NATIVE} -e ${HOSTTOOLS_DIR} ${SSTATE_BUILDDIR}'
-SSTATE_HASHEQUIV_FILEMAP ?= " \
-    populate_sysroot:*/postinst-useradd-*:${TMPDIR} \
-    populate_sysroot:*/postinst-useradd-*:${COREBASE} \
-    populate_sysroot:*/postinst-useradd-*:regex-\s(PATH|PSEUDO_IGNORE_PATHS|HOME|LOGNAME|OMP_NUM_THREADS|USER)=.*\s \
-    populate_sysroot:*/crossscripts/*:${TMPDIR} \
-    populate_sysroot:*/crossscripts/*:${COREBASE} \
-    "
-
-BB_HASHFILENAME = "False ${SSTATE_PKGSPEC} ${SSTATE_SWSPEC}"
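-# Three space-separated fields: whether SSTATE_EXTRAPATH applies (set to "True"
-# for native/cross recipes below), the SSTATE_PKGSPEC and the SSTATE_SWSPEC.
-# Decoded again by getpathcomponents() in sstate_checkhashes().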
-
-SSTATE_ARCHS = " \
-    ${BUILD_ARCH} \
-    ${BUILD_ARCH}_${ORIGNATIVELSBSTRING} \
-    ${BUILD_ARCH}_${SDK_ARCH}_${SDK_OS} \
-    ${SDK_ARCH}_${SDK_OS} \
-    ${SDK_ARCH}_${PACKAGE_ARCH} \
-    allarch \
-    ${PACKAGE_ARCH} \
-    ${PACKAGE_EXTRA_ARCHS} \
-    ${MACHINE_ARCH}"
-SSTATE_ARCHS[vardepsexclude] = "ORIGNATIVELSBSTRING"
-
-SSTATE_MANMACH ?= "${SSTATE_PKGARCH}"
-
-SSTATECREATEFUNCS += "sstate_hardcode_path"
-SSTATECREATEFUNCS[vardeps] = "SSTATE_SCAN_FILES"
-SSTATEPOSTCREATEFUNCS = ""
-SSTATEPREINSTFUNCS = ""
-SSTATEPOSTUNPACKFUNCS = "sstate_hardcode_path_unpack"
-SSTATEPOSTINSTFUNCS = ""
-EXTRA_STAGING_FIXMES ?= "HOSTTOOLS_DIR"
-
-# Check whether sstate exists for tasks that support sstate and are in the
-# locked signatures file.
-SIGGEN_LOCKEDSIGS_SSTATE_EXISTS_CHECK ?= 'error'
-
-# Check whether the task's computed hash matches the task's hash in the
-# locked signatures file.
-SIGGEN_LOCKEDSIGS_TASKSIG_CHECK ?= "error"
-
-# The GnuPG key ID and passphrase to use to sign sstate archives (or unset to
-# not sign)
-SSTATE_SIG_KEY ?= ""
-SSTATE_SIG_PASSPHRASE ?= ""
-# Whether to verify the GnuPG signatures when extracting sstate archives
-SSTATE_VERIFY_SIG ?= "0"
-# List of signatures to consider valid.
-SSTATE_VALID_SIGS ??= ""
-SSTATE_VALID_SIGS[vardepvalue] = ""
-
-SSTATE_HASHEQUIV_METHOD ?= "oe.sstatesig.OEOuthashBasic"
-SSTATE_HASHEQUIV_METHOD[doc] = "The fully-qualified function used to calculate \
-    the output hash for a task, which in turn is used to determine equivalency. \
-    "
-
-SSTATE_HASHEQUIV_REPORT_TASKDATA ?= "0"
-SSTATE_HASHEQUIV_REPORT_TASKDATA[doc] = "Report additional useful data to the \
-    hash equivalency server, such as PN, PV, taskname, etc. This information \
-    is very useful for developers looking at task data, but may leak sensitive \
-    data if the equivalence server is public. \
-    "
-
-python () {
-    if bb.data.inherits_class('native', d):
-        d.setVar('SSTATE_PKGARCH', d.getVar('BUILD_ARCH', False))
-    elif bb.data.inherits_class('crosssdk', d):
-        d.setVar('SSTATE_PKGARCH', d.expand("${BUILD_ARCH}_${SDK_ARCH}_${SDK_OS}"))
-    elif bb.data.inherits_class('cross', d):
-        d.setVar('SSTATE_PKGARCH', d.expand("${BUILD_ARCH}"))
-    elif bb.data.inherits_class('nativesdk', d):
-        d.setVar('SSTATE_PKGARCH', d.expand("${SDK_ARCH}_${SDK_OS}"))
-    elif bb.data.inherits_class('cross-canadian', d):
-        d.setVar('SSTATE_PKGARCH', d.expand("${SDK_ARCH}_${PACKAGE_ARCH}"))
-    elif bb.data.inherits_class('allarch', d) and d.getVar("PACKAGE_ARCH") == "all":
-        d.setVar('SSTATE_PKGARCH', "allarch")
-    else:
-        d.setVar('SSTATE_MANMACH', d.expand("${PACKAGE_ARCH}"))
-
-    if bb.data.inherits_class('native', d) or bb.data.inherits_class('crosssdk', d) or bb.data.inherits_class('cross', d):
-        d.setVar('SSTATE_EXTRAPATH', "${NATIVELSBSTRING}/")
-        d.setVar('BB_HASHFILENAME', "True ${SSTATE_PKGSPEC} ${SSTATE_SWSPEC}")
-        d.setVar('SSTATE_EXTRAPATHWILDCARD', "${NATIVELSBSTRING}/")
-
-    unique_tasks = sorted(set((d.getVar('SSTATETASKS') or "").split()))
-    d.setVar('SSTATETASKS', " ".join(unique_tasks))
-    for task in unique_tasks:
-        d.prependVarFlag(task, 'prefuncs', "sstate_task_prefunc ")
-        d.appendVarFlag(task, 'postfuncs', " sstate_task_postfunc")
-        d.setVarFlag(task, 'network', '1')
-        d.setVarFlag(task + "_setscene", 'network', '1')
-}
-
-def sstate_init(task, d):
-    ss = {}
-    ss['task'] = task
-    ss['dirs'] = []
-    ss['plaindirs'] = []
-    ss['lockfiles'] = []
-    ss['lockfiles-shared'] = []
-    return ss
-
-def sstate_state_fromvars(d, task = None):
-    if task is None:
-        task = d.getVar('BB_CURRENTTASK')
-        if not task:
-            bb.fatal("sstate code running without task context?!")
-        task = task.replace("_setscene", "")
-
-    if task.startswith("do_"):
-        task = task[3:]
-    inputs = (d.getVarFlag("do_" + task, 'sstate-inputdirs') or "").split()
-    outputs = (d.getVarFlag("do_" + task, 'sstate-outputdirs') or "").split()
-    plaindirs = (d.getVarFlag("do_" + task, 'sstate-plaindirs') or "").split()
-    lockfiles = (d.getVarFlag("do_" + task, 'sstate-lockfile') or "").split()
-    lockfilesshared = (d.getVarFlag("do_" + task, 'sstate-lockfile-shared') or "").split()
-    interceptfuncs = (d.getVarFlag("do_" + task, 'sstate-interceptfuncs') or "").split()
-    fixmedir = d.getVarFlag("do_" + task, 'sstate-fixmedir') or ""
-    if not task or len(inputs) != len(outputs):
-        bb.fatal("sstate variables not setup correctly?!")
-
-    if task == "populate_lic":
-        d.setVar("SSTATE_PKGSPEC", "${SSTATE_SWSPEC}")
-        d.setVar("SSTATE_EXTRAPATH", "")
-        d.setVar('SSTATE_EXTRAPATHWILDCARD', "")
-
-    ss = sstate_init(task, d)
-    for i in range(len(inputs)):
-        sstate_add(ss, inputs[i], outputs[i], d)
-    ss['lockfiles'] = lockfiles
-    ss['lockfiles-shared'] = lockfilesshared
-    ss['plaindirs'] = plaindirs
-    ss['interceptfuncs'] = interceptfuncs
-    ss['fixmedir'] = fixmedir
-    return ss
-
-def sstate_add(ss, source, dest, d):
-    if not source.endswith("/"):
-         source = source + "/"
-    if not dest.endswith("/"):
-         dest = dest + "/"
-    source = os.path.normpath(source)
-    dest = os.path.normpath(dest)
-    srcbase = os.path.basename(source)
-    ss['dirs'].append([srcbase, source, dest])
-    return ss
-
-def sstate_install(ss, d):
-    import oe.path
-    import oe.sstatesig
-    import subprocess
-
-    sharedfiles = []
-    shareddirs = []
-    bb.utils.mkdirhier(d.expand("${SSTATE_MANIFESTS}"))
-
-    sstateinst = d.expand("${WORKDIR}/sstate-install-%s/" % ss['task'])
-
-    manifest, d2 = oe.sstatesig.sstate_get_manifest_filename(ss['task'], d)
-
-    if os.access(manifest, os.R_OK):
-        bb.fatal("Package already staged (%s)?!" % manifest)
-
-    d.setVar("SSTATE_INST_POSTRM", manifest + ".postrm")
-
-    locks = []
-    for lock in ss['lockfiles-shared']:
-        locks.append(bb.utils.lockfile(lock, True))
-    for lock in ss['lockfiles']:
-        locks.append(bb.utils.lockfile(lock))
-
-    for state in ss['dirs']:
-        bb.debug(2, "Staging files from %s to %s" % (state[1], state[2]))
-        for walkroot, dirs, files in os.walk(state[1]):
-            for file in files:
-                srcpath = os.path.join(walkroot, file)
-                dstpath = srcpath.replace(state[1], state[2])
-                #bb.debug(2, "Staging %s to %s" % (srcpath, dstpath))
-                sharedfiles.append(dstpath)
-            for dir in dirs:
-                srcdir = os.path.join(walkroot, dir)
-                dstdir = srcdir.replace(state[1], state[2])
-                #bb.debug(2, "Staging %s to %s" % (srcdir, dstdir))
-                if os.path.islink(srcdir):
-                    sharedfiles.append(dstdir)
-                    continue
-                if not dstdir.endswith("/"):
-                    dstdir = dstdir + "/"
-                shareddirs.append(dstdir)
-
-    # Check the file list for conflicts against files which already exist
-    overlap_allowed = (d.getVar("SSTATE_ALLOW_OVERLAP_FILES") or "").split()
-    match = []
-    for f in sharedfiles:
-        if os.path.exists(f) and not os.path.islink(f):
-            f = os.path.normpath(f)
-            realmatch = True
-            for w in overlap_allowed:
-                w = os.path.normpath(w)
-                if f.startswith(w):
-                    realmatch = False
-                    break
-            if realmatch:
-                match.append(f)
-                sstate_search_cmd = "grep -rlF '%s' %s --exclude=master.list | sed -e 's:^.*/::'" % (f, d.expand("${SSTATE_MANIFESTS}"))
-                search_output = subprocess.Popen(sstate_search_cmd, shell=True, stdout=subprocess.PIPE).communicate()[0]
-                if search_output:
-                    match.append("  (matched in %s)" % search_output.decode('utf-8').rstrip())
-                else:
-                    match.append("  (not matched to any task)")
-    if match:
-        bb.error("The recipe %s is trying to install files into a shared " \
-          "area when those files already exist. Those files and their manifest " \
-          "location are:\n  %s\nPlease verify which recipe should provide the " \
-          "above files.\n\nThe build has stopped, as continuing in this scenario WILL " \
-          "break things - if not now, possibly in the future (we've seen builds fail " \
-          "several months later). If the system knew how to recover from this " \
-          "automatically it would, however there are several different scenarios " \
-          "which can result in this and we don't know which one this is. It may be " \
-          "you have switched providers of something like virtual/kernel (e.g. from " \
-          "linux-yocto to linux-yocto-dev), in that case you need to execute the " \
-          "clean task for both recipes and it will resolve this error. It may be " \
-          "you changed DISTRO_FEATURES from systemd to udev or vice versa. Cleaning " \
-          "those recipes should again resolve this error, however switching " \
-          "DISTRO_FEATURES on an existing build directory is not supported - you " \
-          "should really clean out tmp and rebuild (reusing sstate should be safe). " \
-          "It could be the overlapping files detected are harmless in which case " \
-          "adding them to SSTATE_ALLOW_OVERLAP_FILES may be the correct solution. It could " \
-          "also be your build is including two different conflicting versions of " \
-          "things (e.g. bluez 4 and bluez 5 and the correct solution for that would " \
-          "be to resolve the conflict. If in doubt, please ask on the mailing list, " \
-          "sharing the error and filelist above." % \
-          (d.getVar('PN'), "\n  ".join(match)))
-        bb.fatal("If the above message is too much, the simpler version is you're advised to wipe out tmp and rebuild (reusing sstate is fine). That will likely fix things in most (but not all) cases.")
-
-    if ss['fixmedir'] and os.path.exists(ss['fixmedir'] + "/fixmepath.cmd"):
-        sharedfiles.append(ss['fixmedir'] + "/fixmepath.cmd")
-        sharedfiles.append(ss['fixmedir'] + "/fixmepath")
-
-    # Write out the manifest
-    f = open(manifest, "w")
-    for file in sharedfiles:
-        f.write(file + "\n")
-
-    # We want to ensure that directories appear at the end of the manifest
-    # so that, when we test whether they should be deleted, any contents
-    # added by the task will have been removed first.
-    dirs = sorted(shareddirs, key=len)
-    # Must remove children first, which will have a longer path than the parent
-    for di in reversed(dirs):
-        f.write(di + "\n")
-    f.close()
-
-    # Append to the list of manifests for this PACKAGE_ARCH
-
-    i = d2.expand("${SSTATE_MANIFESTS}/index-${SSTATE_MANMACH}")
-    l = bb.utils.lockfile(i + ".lock")
-    filedata = d.getVar("STAMP") + " " + d2.getVar("SSTATE_MANFILEPREFIX") + " " + d.getVar("WORKDIR") + "\n"
-    manifests = []
-    if os.path.exists(i):
-        with open(i, "r") as f:
-            manifests = f.readlines()
-    # We append new entries; we don't remove older entries which may have the same
-    # manifest name but different versions from stamp/workdir. See below.
-    if filedata not in manifests:
-        with open(i, "a+") as f:
-            f.write(filedata)
-    bb.utils.unlockfile(l)
-
-    # Run the actual file install
-    for state in ss['dirs']:
-        if os.path.exists(state[1]):
-            oe.path.copyhardlinktree(state[1], state[2])
-
-    for postinst in (d.getVar('SSTATEPOSTINSTFUNCS') or '').split():
-        # All hooks should run in the SSTATE_INSTDIR
-        bb.build.exec_func(postinst, d, (sstateinst,))
-
-    for lock in locks:
-        bb.utils.unlockfile(lock)
-
-sstate_install[vardepsexclude] += "SSTATE_ALLOW_OVERLAP_FILES SSTATE_MANMACH SSTATE_MANFILEPREFIX"
-sstate_install[vardeps] += "${SSTATEPOSTINSTFUNCS}"
-
-def sstate_installpkg(ss, d):
-    from oe.gpg_sign import get_signer
-
-    sstateinst = d.expand("${WORKDIR}/sstate-install-%s/" % ss['task'])
-    d.setVar("SSTATE_CURRTASK", ss['task'])
-    sstatefetch = d.getVar('SSTATE_PKGNAME')
-    sstatepkg = d.getVar('SSTATE_PKG')
-
-    if not os.path.exists(sstatepkg):
-        pstaging_fetch(sstatefetch, d)
-
-    if not os.path.isfile(sstatepkg):
-        bb.note("Sstate package %s does not exist" % sstatepkg)
-        return False
-
-    sstate_clean(ss, d)
-
-    d.setVar('SSTATE_INSTDIR', sstateinst)
-
-    if bb.utils.to_boolean(d.getVar("SSTATE_VERIFY_SIG"), False):
-        if not os.path.isfile(sstatepkg + '.sig'):
-            bb.warn("No signature file for sstate package %s, skipping acceleration..." % sstatepkg)
-            return False
-        signer = get_signer(d, 'local')
-        if not signer.verify(sstatepkg + '.sig', d.getVar("SSTATE_VALID_SIGS")):
-            bb.warn("Cannot verify signature on sstate package %s, skipping acceleration..." % sstatepkg)
-            return False
-
-    # Empty the sstateinst directory to ensure it's clean
-    if os.path.exists(sstateinst):
-        oe.path.remove(sstateinst)
-    bb.utils.mkdirhier(sstateinst)
-
-    sstateinst = d.getVar("SSTATE_INSTDIR")
-    d.setVar('SSTATE_FIXMEDIR', ss['fixmedir'])
-
-    for f in (d.getVar('SSTATEPREINSTFUNCS') or '').split() + ['sstate_unpack_package']:
-        # All hooks should run in the SSTATE_INSTDIR
-        bb.build.exec_func(f, d, (sstateinst,))
-
-    return sstate_installpkgdir(ss, d)
-
-def sstate_installpkgdir(ss, d):
-    import oe.path
-    import subprocess
-
-    sstateinst = d.getVar("SSTATE_INSTDIR")
-    d.setVar('SSTATE_FIXMEDIR', ss['fixmedir'])
-
-    for f in (d.getVar('SSTATEPOSTUNPACKFUNCS') or '').split():
-        # All hooks should run in the SSTATE_INSTDIR
-        bb.build.exec_func(f, d, (sstateinst,))
-
-    def prepdir(dir):
-        # remove dir if it exists, ensure any parent directories do exist
-        if os.path.exists(dir):
-            oe.path.remove(dir)
-        bb.utils.mkdirhier(dir)
-        oe.path.remove(dir)
-
-    for state in ss['dirs']:
-        prepdir(state[1])
-        bb.utils.rename(sstateinst + state[0], state[1])
-    sstate_install(ss, d)
-
-    for plain in ss['plaindirs']:
-        workdir = d.getVar('WORKDIR')
-        sharedworkdir = os.path.join(d.getVar('TMPDIR'), "work-shared")
-        src = sstateinst + "/" + plain.replace(workdir, '')
-        if sharedworkdir in plain:
-            src = sstateinst + "/" + plain.replace(sharedworkdir, '')
-        dest = plain
-        bb.utils.mkdirhier(src)
-        prepdir(dest)
-        bb.utils.rename(src, dest)
-
-    return True
-
-python sstate_hardcode_path_unpack () {
-    # Fixup hardcoded paths
-    #
-    # Note: The logic below must match the reverse logic in
-    # sstate_hardcode_path(d)
-    import subprocess
-
-    sstateinst = d.getVar('SSTATE_INSTDIR')
-    sstatefixmedir = d.getVar('SSTATE_FIXMEDIR')
-    fixmefn = sstateinst + "fixmepath"
-    if os.path.isfile(fixmefn):
-        staging_target = d.getVar('RECIPE_SYSROOT')
-        staging_host = d.getVar('RECIPE_SYSROOT_NATIVE')
-
-        if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross-canadian', d):
-            sstate_sed_cmd = "sed -i -e 's:FIXMESTAGINGDIRHOST:%s:g'" % (staging_host)
-        elif bb.data.inherits_class('cross', d) or bb.data.inherits_class('crosssdk', d):
-            sstate_sed_cmd = "sed -i -e 's:FIXMESTAGINGDIRTARGET:%s:g; s:FIXMESTAGINGDIRHOST:%s:g'" % (staging_target, staging_host)
-        else:
-            sstate_sed_cmd = "sed -i -e 's:FIXMESTAGINGDIRTARGET:%s:g'" % (staging_target)
-
-        extra_staging_fixmes = d.getVar('EXTRA_STAGING_FIXMES') or ''
-        for fixmevar in extra_staging_fixmes.split():
-            fixme_path = d.getVar(fixmevar)
-            sstate_sed_cmd += " -e 's:FIXME_%s:%s:g'" % (fixmevar, fixme_path)
-
-        # Add sstateinst to each filename in fixmepath and use xargs to call sed efficiently
-        sstate_hardcode_cmd = "sed -e 's:^:%s:g' %s | xargs %s" % (sstateinst, fixmefn, sstate_sed_cmd)
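-        # e.g. (hypothetical paths):
-        #   sed -e 's:^:${WORKDIR}/sstate-install-populate_sysroot/:g' .../fixmepath \
-        #     | xargs sed -i -e 's:FIXMESTAGINGDIRTARGET:${RECIPE_SYSROOT}:g' ...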
-
-        # Defer do_populate_sysroot relocation command
-        if sstatefixmedir:
-            bb.utils.mkdirhier(sstatefixmedir)
-            with open(sstatefixmedir + "/fixmepath.cmd", "w") as f:
-                sstate_hardcode_cmd = sstate_hardcode_cmd.replace(fixmefn, sstatefixmedir + "/fixmepath")
-                sstate_hardcode_cmd = sstate_hardcode_cmd.replace(sstateinst, "FIXMEFINALSSTATEINST")
-                sstate_hardcode_cmd = sstate_hardcode_cmd.replace(staging_host, "FIXMEFINALSSTATEHOST")
-                sstate_hardcode_cmd = sstate_hardcode_cmd.replace(staging_target, "FIXMEFINALSSTATETARGET")
-                f.write(sstate_hardcode_cmd)
-            bb.utils.copyfile(fixmefn, sstatefixmedir + "/fixmepath")
-            return
-
-        bb.note("Replacing fixme paths in sstate package: %s" % (sstate_hardcode_cmd))
-        subprocess.check_call(sstate_hardcode_cmd, shell=True)
-
-        # Need to remove this or we'd copy it into the target directory and may
-        # conflict with another writer
-        os.remove(fixmefn)
-}
-
-def sstate_clean_cachefile(ss, d):
-    import oe.path
-
-    if d.getVarFlag('do_%s' % ss['task'], 'task'):
-        d.setVar("SSTATE_PATH_CURRTASK", ss['task'])
-        sstatepkgfile = d.getVar('SSTATE_PATHSPEC')
-        bb.note("Removing %s" % sstatepkgfile)
-        oe.path.remove(sstatepkgfile)
-
-def sstate_clean_cachefiles(d):
-    for task in (d.getVar('SSTATETASKS') or "").split():
-        ld = d.createCopy()
-        ss = sstate_state_fromvars(ld, task)
-        sstate_clean_cachefile(ss, ld)
-
-def sstate_clean_manifest(manifest, d, canrace=False, prefix=None):
-    import oe.path
-
-    mfile = open(manifest)
-    entries = mfile.readlines()
-    mfile.close()
-
-    for entry in entries:
-        entry = entry.strip()
-        if prefix and not entry.startswith("/"):
-            entry = prefix + "/" + entry
-        bb.debug(2, "Removing manifest: %s" % entry)
-        # We can race against another package populating directories as we're removing them
-        # so we ignore errors here.
-        try:
-            if entry.endswith("/"):
-                if os.path.islink(entry[:-1]):
-                    os.remove(entry[:-1])
-                elif os.path.exists(entry) and len(os.listdir(entry)) == 0 and not canrace:
-                    # Removing directories whilst builds are in progress exposes a race. Only
-                    # do it in contexts where it is safe to do so.
-                    os.rmdir(entry[:-1])
-            else:
-                os.remove(entry)
-        except OSError:
-            pass
-
-    postrm = manifest + ".postrm"
-    if os.path.exists(manifest + ".postrm"):
-        import subprocess
-        os.chmod(postrm, 0o755)
-        subprocess.check_call(postrm, shell=True)
-        oe.path.remove(postrm)
-
-    oe.path.remove(manifest)
-
-def sstate_clean(ss, d):
-    import oe.path
-    import glob
-
-    d2 = d.createCopy()
-    stamp_clean = d.getVar("STAMPCLEAN")
-    extrainf = d.getVarFlag("do_" + ss['task'], 'stamp-extra-info')
-    if extrainf:
-        d2.setVar("SSTATE_MANMACH", extrainf)
-        wildcard_stfile = "%s.do_%s*.%s" % (stamp_clean, ss['task'], extrainf)
-    else:
-        wildcard_stfile = "%s.do_%s*" % (stamp_clean, ss['task'])
-
-    manifest = d2.expand("${SSTATE_MANFILEPREFIX}.%s" % ss['task'])
-
-    if os.path.exists(manifest):
-        locks = []
-        for lock in ss['lockfiles-shared']:
-            locks.append(bb.utils.lockfile(lock))
-        for lock in ss['lockfiles']:
-            locks.append(bb.utils.lockfile(lock))
-
-        sstate_clean_manifest(manifest, d, canrace=True)
-
-        for lock in locks:
-            bb.utils.unlockfile(lock)
-
-    # Remove the current and previous stamps, but keep the sigdata.
-    #
-    # The glob() matches do_task* which may match multiple tasks, for
-    # example: do_package and do_package_write_ipk, so we need to
-    # exactly match *.do_task.* and *.do_task_setscene.*
-    rm_stamp = '.do_%s.' % ss['task']
-    rm_setscene = '.do_%s_setscene.' % ss['task']
-    # For BB_SIGNATURE_HANDLER = "noop"
-    rm_nohash = ".do_%s" % ss['task']
-    for stfile in glob.glob(wildcard_stfile):
-        # Keep the sigdata
-        if ".sigdata." in stfile or ".sigbasedata." in stfile:
-            continue
-        # Preserve taint files in the stamps directory
-        if stfile.endswith('.taint'):
-            continue
-        if rm_stamp in stfile or rm_setscene in stfile or \
-                stfile.endswith(rm_nohash):
-            oe.path.remove(stfile)
-
-sstate_clean[vardepsexclude] = "SSTATE_MANFILEPREFIX"
-
-CLEANFUNCS += "sstate_cleanall"
-
-python sstate_cleanall() {
-    bb.note("Removing shared state for package %s" % d.getVar('PN'))
-
-    manifest_dir = d.getVar('SSTATE_MANIFESTS')
-    if not os.path.exists(manifest_dir):
-        return
-
-    tasks = d.getVar('SSTATETASKS').split()
-    for name in tasks:
-        ld = d.createCopy()
-        shared_state = sstate_state_fromvars(ld, name)
-        sstate_clean(shared_state, ld)
-}
-
-python sstate_hardcode_path () {
-    import subprocess, platform
-
-    # Need to remove hardcoded paths and fix these when we install the
-    # staging packages.
-    #
-    # Note: the logic in this function needs to match the reverse logic
-    # in sstate_installpkg(ss, d)
-
-    staging_target = d.getVar('RECIPE_SYSROOT')
-    staging_host = d.getVar('RECIPE_SYSROOT_NATIVE')
-    sstate_builddir = d.getVar('SSTATE_BUILDDIR')
-
-    sstate_sed_cmd = "sed -i -e 's:%s:FIXMESTAGINGDIRHOST:g'" % staging_host
-    if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross-canadian', d):
-        sstate_grep_cmd = "grep -l -e '%s'" % (staging_host)
-    elif bb.data.inherits_class('cross', d) or bb.data.inherits_class('crosssdk', d):
-        sstate_grep_cmd = "grep -l -e '%s' -e '%s'" % (staging_target, staging_host)
-        sstate_sed_cmd += " -e 's:%s:FIXMESTAGINGDIRTARGET:g'" % staging_target
-    else:
-        sstate_grep_cmd = "grep -l -e '%s' -e '%s'" % (staging_target, staging_host)
-        sstate_sed_cmd += " -e 's:%s:FIXMESTAGINGDIRTARGET:g'" % staging_target
-
-    extra_staging_fixmes = d.getVar('EXTRA_STAGING_FIXMES') or ''
-    for fixmevar in extra_staging_fixmes.split():
-        fixme_path = d.getVar(fixmevar)
-        sstate_sed_cmd += " -e 's:%s:FIXME_%s:g'" % (fixme_path, fixmevar)
-        sstate_grep_cmd += " -e '%s'" % (fixme_path)
-
-    fixmefn =  sstate_builddir + "fixmepath"
-
-    sstate_scan_cmd = d.getVar('SSTATE_SCAN_CMD')
-    sstate_filelist_cmd = "tee %s" % (fixmefn)
-
-    # fixmepath file needs relative paths, drop sstate_builddir prefix
-    sstate_filelist_relative_cmd = "sed -i -e 's:^%s::g' %s" % (sstate_builddir, fixmefn)
-
-    xargs_no_empty_run_cmd = '--no-run-if-empty'
-    if platform.system() == 'Darwin':
-        xargs_no_empty_run_cmd = ''
-
-    # Limit the fixpaths and sed operations based on the initial grep search
-    # This has the side effect of making sure the vfs cache is hot
-    sstate_hardcode_cmd = "%s | xargs %s | %s | xargs %s %s" % (sstate_scan_cmd, sstate_grep_cmd, sstate_filelist_cmd, xargs_no_empty_run_cmd, sstate_sed_cmd)
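-    # i.e. <SSTATE_SCAN_CMD find> | xargs <grep -l for sysroot paths> | tee fixmepath | xargs [--no-run-if-empty] <sed FIXME substitutions>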
-
-    bb.note("Removing hardcoded paths from sstate package: '%s'" % (sstate_hardcode_cmd))
-    subprocess.check_output(sstate_hardcode_cmd, shell=True, cwd=sstate_builddir)
-
-    # If the fixmefn is empty, remove it.
-    if os.stat(fixmefn).st_size == 0:
-        os.remove(fixmefn)
-    else:
-        bb.note("Replacing absolute paths in fixmepath file: '%s'" % (sstate_filelist_relative_cmd))
-        subprocess.check_output(sstate_filelist_relative_cmd, shell=True)
-}
-
-def sstate_package(ss, d):
-    import oe.path
-    import time
-
-    tmpdir = d.getVar('TMPDIR')
-
-    fixtime = False
-    if ss['task'] == "package":
-        fixtime = True
-
-    def fixtimestamp(root, path):
-        f = os.path.join(root, path)
-        if os.lstat(f).st_mtime > sde:
-            os.utime(f, (sde, sde), follow_symlinks=False)
-
-    sstatebuild = d.expand("${WORKDIR}/sstate-build-%s/" % ss['task'])
-    sde = int(d.getVar("SOURCE_DATE_EPOCH") or time.time())
-    d.setVar("SSTATE_CURRTASK", ss['task'])
-    bb.utils.remove(sstatebuild, recurse=True)
-    bb.utils.mkdirhier(sstatebuild)
-    for state in ss['dirs']:
-        if not os.path.exists(state[1]):
-            continue
-        srcbase = state[0].rstrip("/").rsplit('/', 1)[0]
-        # Find absolute symlinks and raise an error for them. We could attempt to relocate
-        # them, but it's not clear what the symlink should be relative to in this context.
-        # We could add that markup to sstate tasks, but there aren't many of these, so it is
-        # better just to avoid them entirely.
-        for walkroot, dirs, files in os.walk(state[1]):
-            for file in files + dirs:
-                if fixtime:
-                    fixtimestamp(walkroot, file)
-                srcpath = os.path.join(walkroot, file)
-                if not os.path.islink(srcpath):
-                    continue
-                link = os.readlink(srcpath)
-                if not os.path.isabs(link):
-                    continue
-                if not link.startswith(tmpdir):
-                    continue
-                bb.error("sstate found an absolute path symlink %s pointing at %s. Please replace this with a relative link." % (srcpath, link))
-        bb.debug(2, "Preparing tree %s for packaging at %s" % (state[1], sstatebuild + state[0]))
-        bb.utils.rename(state[1], sstatebuild + state[0])
-
-    workdir = d.getVar('WORKDIR')
-    sharedworkdir = os.path.join(d.getVar('TMPDIR'), "work-shared")
-    for plain in ss['plaindirs']:
-        pdir = plain.replace(workdir, sstatebuild)
-        if sharedworkdir in plain:
-            pdir = plain.replace(sharedworkdir, sstatebuild)
-        bb.utils.mkdirhier(plain)
-        bb.utils.mkdirhier(pdir)
-        bb.utils.rename(plain, pdir)
-        if fixtime:
-            fixtimestamp(pdir, "")
-            for walkroot, dirs, files in os.walk(pdir):
-                for file in files + dirs:
-                    fixtimestamp(walkroot, file)
-
-    d.setVar('SSTATE_BUILDDIR', sstatebuild)
-    d.setVar('SSTATE_INSTDIR', sstatebuild)
-
-    if d.getVar('SSTATE_SKIP_CREATION') == '1':
-        return
-
-    sstate_create_package = ['sstate_report_unihash', 'sstate_create_package']
-    if d.getVar('SSTATE_SIG_KEY'):
-        sstate_create_package.append('sstate_sign_package')
-
-    for f in (d.getVar('SSTATECREATEFUNCS') or '').split() + \
-             sstate_create_package + \
-             (d.getVar('SSTATEPOSTCREATEFUNCS') or '').split():
-        # All hooks should run in SSTATE_BUILDDIR.
-        bb.build.exec_func(f, d, (sstatebuild,))
-
-    # SSTATE_PKG may have been changed by sstate_report_unihash
-    siginfo = d.getVar('SSTATE_PKG') + ".siginfo"
-    if not os.path.exists(siginfo):
-        bb.siggen.dump_this_task(siginfo, d)
-    else:
-        try:
-            os.utime(siginfo, None)
-        except PermissionError:
-            pass
-        except OSError as e:
-            # Handle read-only file systems gracefully
-            import errno
-            if e.errno != errno.EROFS:
-                raise e
-
-    return
-
-sstate_package[vardepsexclude] += "SSTATE_SIG_KEY"
-
-def pstaging_fetch(sstatefetch, d):
-    import bb.fetch2
-
-    # Only try and fetch if the user has configured a mirror
-    mirrors = d.getVar('SSTATE_MIRRORS')
-    if not mirrors:
-        return
-
-    # Copy the data object and override DL_DIR and SRC_URI
-    localdata = bb.data.createCopy(d)
-
-    dldir = localdata.expand("${SSTATE_DIR}")
-    bb.utils.mkdirhier(dldir)
-
-    localdata.delVar('MIRRORS')
-    localdata.setVar('FILESPATH', dldir)
-    localdata.setVar('DL_DIR', dldir)
-    localdata.setVar('PREMIRRORS', mirrors)
-    localdata.setVar('SRCPV', d.getVar('SRCPV'))
-
-    # if BB_NO_NETWORK is set but we also have SSTATE_MIRROR_ALLOW_NETWORK,
-    # we'll want to allow network access for the current set of fetches.
-    if bb.utils.to_boolean(localdata.getVar('BB_NO_NETWORK')) and \
-            bb.utils.to_boolean(localdata.getVar('SSTATE_MIRROR_ALLOW_NETWORK')):
-        localdata.delVar('BB_NO_NETWORK')
-
-    # Try a fetch from the sstate mirror; if it fails just return and
-    # we will build the package
-    uris = ['file://{0};downloadfilename={0}'.format(sstatefetch),
-            'file://{0}.siginfo;downloadfilename={0}.siginfo'.format(sstatefetch)]
-    if bb.utils.to_boolean(d.getVar("SSTATE_VERIFY_SIG"), False):
-        uris += ['file://{0}.sig;downloadfilename={0}.sig'.format(sstatefetch)]
-
-    for srcuri in uris:
-        localdata.setVar('SRC_URI', srcuri)
-        try:
-            fetcher = bb.fetch2.Fetch([srcuri], localdata, cache=False)
-            fetcher.checkstatus()
-            fetcher.download()
-
-        except bb.fetch2.BBFetchException:
-            pass
-
-pstaging_fetch[vardepsexclude] += "SRCPV"
-
-
-def sstate_setscene(d):
-    shared_state = sstate_state_fromvars(d)
-    accelerate = sstate_installpkg(shared_state, d)
-    if not accelerate:
-        msg = "No sstate archive obtainable, will run full task instead."
-        bb.warn(msg)
-        raise bb.BBHandledException(msg)
-
-python sstate_task_prefunc () {
-    shared_state = sstate_state_fromvars(d)
-    sstate_clean(shared_state, d)
-}
-sstate_task_prefunc[dirs] = "${WORKDIR}"
-
-python sstate_task_postfunc () {
-    shared_state = sstate_state_fromvars(d)
-
-    for intercept in shared_state['interceptfuncs']:
-        bb.build.exec_func(intercept, d, (d.getVar("WORKDIR"),))
-
-    omask = os.umask(0o002)
-    if omask != 0o002:
-       bb.note("Using umask 0o002 (not %0o) for sstate packaging" % omask)
-    sstate_package(shared_state, d)
-    os.umask(omask)
-
-    sstateinst = d.getVar("SSTATE_INSTDIR")
-    d.setVar('SSTATE_FIXMEDIR', shared_state['fixmedir'])
-
-    sstate_installpkgdir(shared_state, d)
-
-    bb.utils.remove(d.getVar("SSTATE_BUILDDIR"), recurse=True)
-}
-sstate_task_postfunc[dirs] = "${WORKDIR}"
-
-
-#
-# Shell function to generate an sstate package from a directory
-# set as SSTATE_BUILDDIR. Will be run from within SSTATE_BUILDDIR.
-#
-sstate_create_package () {
-	# Exit early if it already exists
-	if [ -e ${SSTATE_PKG} ]; then
-		touch ${SSTATE_PKG} 2>/dev/null || true
-		return
-	fi
-
-	mkdir --mode=0775 -p `dirname ${SSTATE_PKG}`
-	TFILE=`mktemp ${SSTATE_PKG}.XXXXXXXX`
-
-	OPT="-cS"
-	ZSTD="zstd -${SSTATE_ZSTD_CLEVEL} -T${ZSTD_THREADS}"
-	# Use pzstd if available
-	if [ -x "$(command -v pzstd)" ]; then
-		ZSTD="pzstd -${SSTATE_ZSTD_CLEVEL} -p ${ZSTD_THREADS}"
-	fi
-
-	# Need to handle empty directories
-	if [ "$(ls -A)" ]; then
-		set +e
-		tar -I "$ZSTD" $OPT -f $TFILE *
-		ret=$?
-		if [ $ret -ne 0 ] && [ $ret -ne 1 ]; then
-			exit 1
-		fi
-		set -e
-	else
-		tar -I "$ZSTD" $OPT --file=$TFILE --files-from=/dev/null
-	fi
-	chmod 0664 $TFILE
-	# Skip if it was already created by some other process
-	if [ -h ${SSTATE_PKG} ] && [ ! -e ${SSTATE_PKG} ]; then
-		# There is a symbolic link, but it links to nothing.
-		# Forcefully replace it with the new file.
-		ln -f $TFILE ${SSTATE_PKG} || true
-	elif [ ! -e ${SSTATE_PKG} ]; then
-		# Move into place using ln to attempt an atomic op.
-		# Abort if it already exists
-		ln $TFILE ${SSTATE_PKG} || true
-	else
-		touch ${SSTATE_PKG} 2>/dev/null || true
-	fi
-	rm $TFILE
-}
-
-python sstate_sign_package () {
-    from oe.gpg_sign import get_signer
-
-
-    signer = get_signer(d, 'local')
-    sstate_pkg = d.getVar('SSTATE_PKG')
-    if os.path.exists(sstate_pkg + '.sig'):
-        os.unlink(sstate_pkg + '.sig')
-    signer.detach_sign(sstate_pkg, d.getVar('SSTATE_SIG_KEY', False), None,
-                       d.getVar('SSTATE_SIG_PASSPHRASE'), armor=False)
-}
-
-python sstate_report_unihash() {
-    report_unihash = getattr(bb.parse.siggen, 'report_unihash', None)
-
-    if report_unihash:
-        ss = sstate_state_fromvars(d)
-        report_unihash(os.getcwd(), ss['task'], d)
-}
-
-#
-# Shell function to decompress and prepare a package for installation
-# Will be run from within SSTATE_INSTDIR.
-#
-sstate_unpack_package () {
-	ZSTD="zstd -T${ZSTD_THREADS}"
-	# Use pzstd if available
-	if [ -x "$(command -v pzstd)" ]; then
-		ZSTD="pzstd -p ${ZSTD_THREADS}"
-	fi
-
-	tar -I "$ZSTD" -xvpf ${SSTATE_PKG}
-	# update .siginfo atime on local/NFS mirror if it is a symbolic link
-	[ ! -h ${SSTATE_PKG}.siginfo ] || [ ! -e ${SSTATE_PKG}.siginfo ] || touch -a ${SSTATE_PKG}.siginfo 2>/dev/null || true
-	# update each symbolic link instead of any referenced file
-	touch --no-dereference ${SSTATE_PKG} 2>/dev/null || true
-	[ ! -e ${SSTATE_PKG}.sig ] || touch --no-dereference ${SSTATE_PKG}.sig 2>/dev/null || true
-	[ ! -e ${SSTATE_PKG}.siginfo ] || touch --no-dereference ${SSTATE_PKG}.siginfo 2>/dev/null || true
-}
-
-BB_HASHCHECK_FUNCTION = "sstate_checkhashes"
-
-def sstate_checkhashes(sq_data, d, siginfo=False, currentcount=0, summary=True, **kwargs):
-    found = set()
-    missed = set()
-
-    def gethash(task):
-        return sq_data['unihash'][task]
-
-    def getpathcomponents(task, d):
-        # Magic data from BB_HASHFILENAME
-        splithashfn = sq_data['hashfn'][task].split(" ")
-        spec = splithashfn[1]
-        if splithashfn[0] == "True":
-            extrapath = d.getVar("NATIVELSBSTRING") + "/"
-        else:
-            extrapath = ""
-        
-        tname = bb.runqueue.taskname_from_tid(task)[3:]
-
-        if tname in ["fetch", "unpack", "patch", "populate_lic", "preconfigure"] and splithashfn[2]:
-            spec = splithashfn[2]
-            extrapath = ""
-
-        return spec, extrapath, tname
-
-    def getsstatefile(tid, siginfo, d):
-        spec, extrapath, tname = getpathcomponents(tid, d)
-        return extrapath + generate_sstatefn(spec, gethash(tid), tname, siginfo, d)
-
-    for tid in sq_data['hash']:
-
-        sstatefile = d.expand("${SSTATE_DIR}/" + getsstatefile(tid, siginfo, d))
-
-        if os.path.exists(sstatefile):
-            found.add(tid)
-            bb.debug(2, "SState: Found valid sstate file %s" % sstatefile)
-        else:
-            missed.add(tid)
-            bb.debug(2, "SState: Looked for but didn't find file %s" % sstatefile)
-
-    foundLocal = len(found)
-    mirrors = d.getVar("SSTATE_MIRRORS")
-    if mirrors:
-        # Copy the data object and override DL_DIR and SRC_URI
-        localdata = bb.data.createCopy(d)
-
-        dldir = localdata.expand("${SSTATE_DIR}")
-        localdata.delVar('MIRRORS')
-        localdata.setVar('FILESPATH', dldir)
-        localdata.setVar('DL_DIR', dldir)
-        localdata.setVar('PREMIRRORS', mirrors)
-
-        bb.debug(2, "SState using premirror of: %s" % mirrors)
-
-        # if BB_NO_NETWORK is set but we also have SSTATE_MIRROR_ALLOW_NETWORK,
-        # we'll want to allow network access for the current set of fetches.
-        if bb.utils.to_boolean(localdata.getVar('BB_NO_NETWORK')) and \
-                bb.utils.to_boolean(localdata.getVar('SSTATE_MIRROR_ALLOW_NETWORK')):
-            localdata.delVar('BB_NO_NETWORK')
-
-        from bb.fetch2 import FetchConnectionCache
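-        # A pool of reusable fetcher connection caches (one slot per worker) is
-        # created below so the parallel checkstatus() calls can share persistent
-        # connections safely.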
-        def checkstatus_init():
-            while not connection_cache_pool.full():
-                connection_cache_pool.put(FetchConnectionCache())
-
-        def checkstatus_end():
-            while not connection_cache_pool.empty():
-                connection_cache = connection_cache_pool.get()
-                connection_cache.close_connections()
-
-        def checkstatus(arg):
-            (tid, sstatefile) = arg
-
-            connection_cache = connection_cache_pool.get()
-            localdata2 = bb.data.createCopy(localdata)
-            srcuri = "file://" + sstatefile
-            localdata2.setVar('SRC_URI', srcuri)
-            bb.debug(2, "SState: Attempting to fetch %s" % srcuri)
-
-            import traceback
-
-            try:
-                fetcher = bb.fetch2.Fetch(srcuri.split(), localdata2,
-                            connection_cache=connection_cache)
-                fetcher.checkstatus()
-                bb.debug(2, "SState: Successful fetch test for %s" % srcuri)
-                found.add(tid)
-                missed.remove(tid)
-            except bb.fetch2.FetchError as e:
-                bb.debug(2, "SState: Unsuccessful fetch test for %s (%s)\n%s" % (srcuri, repr(e), traceback.format_exc()))
-            except Exception as e:
-                bb.error("SState: cannot test %s: %s\n%s" % (srcuri, repr(e), traceback.format_exc()))
-
-            connection_cache_pool.put(connection_cache)
-
-            if progress:
-                bb.event.fire(bb.event.ProcessProgress(msg, len(tasklist) - thread_worker.tasks.qsize()), d)
-
-        tasklist = []
-        for tid in missed:
-            sstatefile = d.expand(getsstatefile(tid, siginfo, d))
-            tasklist.append((tid, sstatefile))
-
-        if tasklist:
-            nproc = min(int(d.getVar("BB_NUMBER_THREADS")), len(tasklist))
-
-            progress = len(tasklist) >= 100
-            if progress:
-                msg = "Checking sstate mirror object availability"
-                bb.event.fire(bb.event.ProcessStarted(msg, len(tasklist)), d)
-
-            # Have to setup the fetcher environment here rather than in each thread as it would race
-            fetcherenv = bb.fetch2.get_fetcher_environment(d)
-            with bb.utils.environment(**fetcherenv):
-                bb.event.enable_threadlock()
-                import concurrent.futures
-                from queue import Queue
-                connection_cache_pool = Queue(nproc)
-                checkstatus_init()
-                with concurrent.futures.ThreadPoolExecutor(max_workers=nproc) as executor:
-                    executor.map(checkstatus, tasklist.copy())
-                checkstatus_end()
-                bb.event.disable_threadlock()
-
-            if progress:
-                bb.event.fire(bb.event.ProcessFinished(msg), d)
-
-    inheritlist = d.getVar("INHERIT")
-    if "toaster" in inheritlist:
-        evdata = {'missed': [], 'found': []};
-        for tid in missed:
-            sstatefile = d.expand(getsstatefile(tid, False, d))
-            evdata['missed'].append((bb.runqueue.fn_from_tid(tid), bb.runqueue.taskname_from_tid(tid), gethash(tid), sstatefile ) )
-        for tid in found:
-            sstatefile = d.expand(getsstatefile(tid, False, d))
-            evdata['found'].append((bb.runqueue.fn_from_tid(tid), bb.runqueue.taskname_from_tid(tid), gethash(tid), sstatefile ) )
-        bb.event.fire(bb.event.MetadataEvent("MissedSstate", evdata), d)
-
-    if summary:
-        # Print some summary statistics about the current task completion and how much sstate
-        # reuse there was. Avoid divide by zero errors.
-        total = len(sq_data['hash'])
-        complete = 0
-        if currentcount:
-            complete = (len(found) + currentcount) / (total + currentcount) * 100
-        match = 0
-        if total:
-            match = len(found) / total * 100
-        bb.plain("Sstate summary: Wanted %d Local %d Mirrors %d Missed %d Current %d (%d%% match, %d%% complete)" %
-            (total, foundLocal, len(found)-foundLocal, len(missed), currentcount, match, complete))
-
-    if hasattr(bb.parse.siggen, "checkhashes"):
-        bb.parse.siggen.checkhashes(sq_data, missed, found, d)
-
-    return found
-setscene_depvalid[vardepsexclude] = "SSTATE_EXCLUDEDEPS_SYSROOT"
-
-BB_SETSCENE_DEPVALID = "setscene_depvalid"
-
-def setscene_depvalid(task, taskdependees, notneeded, d, log=None):
-    # taskdependees is a dict of tasks which depend on task, each being a 3 item list of [PN, TASKNAME, FILENAME]
-    # task is included in taskdependees too
-    # Return - False - We need this dependency
-    #        - True - We can skip this dependency
-    import re
-
-    def logit(msg, log):
-        if log is not None:
-            log.append(msg)
-        else:
-            bb.debug(2, msg)
-
-    logit("Considering setscene task: %s" % (str(taskdependees[task])), log)
-
-    directtasks = ["do_populate_lic", "do_deploy_source_date_epoch", "do_shared_workdir", "do_stash_locale", "do_gcc_stash_builddir", "do_create_spdx"]
-
-    def isNativeCross(x):
-        return x.endswith("-native") or "-cross-" in x or "-crosssdk" in x or x.endswith("-cross")
-
-    # We only need to trigger deploy_source_date_epoch through direct dependencies
-    if taskdependees[task][1] in directtasks:
-        return True
-
-    # We only need to trigger packagedata through direct dependencies
-    # but need to preserve packagedata on packagedata links
-    if taskdependees[task][1] == "do_packagedata":
-        for dep in taskdependees:
-            if taskdependees[dep][1] == "do_packagedata":
-                return False
-        return True
-
-    for dep in taskdependees:
-        logit("  considering dependency: %s" % (str(taskdependees[dep])), log)
-        if task == dep:
-            continue
-        if dep in notneeded:
-            continue
-        # do_package_write_* and do_package don't need do_package
-        if taskdependees[task][1] == "do_package" and taskdependees[dep][1] in ['do_package', 'do_package_write_deb', 'do_package_write_ipk', 'do_package_write_rpm', 'do_packagedata', 'do_package_qa']:
-            continue
-        # do_package_write_* need do_populate_sysroot as they're mainly postinstall dependencies
-        if taskdependees[task][1] == "do_populate_sysroot" and taskdependees[dep][1] in ['do_package_write_deb', 'do_package_write_ipk', 'do_package_write_rpm']:
-            return False
-        # do_package/packagedata/package_qa/deploy don't need do_populate_sysroot
-        if taskdependees[task][1] == "do_populate_sysroot" and taskdependees[dep][1] in ['do_package', 'do_packagedata', 'do_package_qa', 'do_deploy']:
-            continue
-        # Native/Cross packages don't exist and are noexec anyway
-        if isNativeCross(taskdependees[dep][0]) and taskdependees[dep][1] in ['do_package_write_deb', 'do_package_write_ipk', 'do_package_write_rpm', 'do_packagedata', 'do_package', 'do_package_qa']:
-            continue
-
-        # This is due to the [depends] in useradd.bbclass complicating matters
-        # The logic *is* reversed here due to the way hard setscene dependencies are injected
-        if (taskdependees[task][1] == 'do_package' or taskdependees[task][1] == 'do_populate_sysroot') and taskdependees[dep][0].endswith(('shadow-native', 'shadow-sysroot', 'base-passwd', 'pseudo-native')) and taskdependees[dep][1] == 'do_populate_sysroot':
-            continue
-
-        # Consider sysroot depending on sysroot tasks
-        if taskdependees[task][1] == 'do_populate_sysroot' and taskdependees[dep][1] == 'do_populate_sysroot':
-            # Allow excluding certain recursive dependencies. If a recipe needs it should add a
-            # specific dependency itself, rather than relying on one of its dependees to pull
-            # them in.
-            # See also http://lists.openembedded.org/pipermail/openembedded-core/2018-January/146324.html
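-            # Entries take the form '<dependent recipe regex>-><dependency recipe regex>',
-            # e.g. (illustrative) '.*->autoconf-native' to say no target sysroot needs
-            # autoconf-native pulled in recursively.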
-            not_needed = False
-            excludedeps = d.getVar('_SSTATE_EXCLUDEDEPS_SYSROOT')
-            if excludedeps is None:
-                # Cache the regular expressions for speed
-                excludedeps = []
-                for excl in (d.getVar('SSTATE_EXCLUDEDEPS_SYSROOT') or "").split():
-                    excludedeps.append((re.compile(excl.split('->', 1)[0]), re.compile(excl.split('->', 1)[1])))
-                d.setVar('_SSTATE_EXCLUDEDEPS_SYSROOT', excludedeps)
-            for excl in excludedeps:
-                if excl[0].match(taskdependees[dep][0]):
-                    if excl[1].match(taskdependees[task][0]):
-                        not_needed = True
-                        break
-            if not_needed:
-                continue
-            # For meta-extsdk-toolchain we want all sysroot dependencies
-            if taskdependees[dep][0] == 'meta-extsdk-toolchain':
-                return False
-            # Native/Cross populate_sysroot need their dependencies
-            if isNativeCross(taskdependees[task][0]) and isNativeCross(taskdependees[dep][0]):
-                return False
-            # Target populate_sysroot depended on by cross tools need to be installed
-            if isNativeCross(taskdependees[dep][0]):
-                return False
-            # Native/cross tools depended upon by target sysroot are not needed
-            # Add an exception for shadow-native as required by useradd.bbclass
-            if isNativeCross(taskdependees[task][0]) and taskdependees[task][0] != 'shadow-native':
-                continue
-            # Target populate_sysroot need their dependencies
-            return False
-
-        if taskdependees[dep][1] in directtasks:
-            continue
-
-        # Safe fallthrough default
-        logit(" Default setscene dependency fall through due to dependency: %s" % (str(taskdependees[dep])), log)
-        return False
-    return True
-
-addhandler sstate_eventhandler
-sstate_eventhandler[eventmask] = "bb.build.TaskSucceeded"
-python sstate_eventhandler() {
-    d = e.data
-    writtensstate = d.getVar('SSTATE_CURRTASK')
-    if not writtensstate:
-        taskname = d.getVar("BB_RUNTASK")[3:]
-        spec = d.getVar('SSTATE_PKGSPEC')
-        swspec = d.getVar('SSTATE_SWSPEC')
-        if taskname in ["fetch", "unpack", "patch", "populate_lic", "preconfigure"] and swspec:
-            d.setVar("SSTATE_PKGSPEC", "${SSTATE_SWSPEC}")
-            d.setVar("SSTATE_EXTRAPATH", "")
-        d.setVar("SSTATE_CURRTASK", taskname)
-        siginfo = d.getVar('SSTATE_PKG') + ".siginfo"
-        if not os.path.exists(siginfo):
-            bb.siggen.dump_this_task(siginfo, d)
-        else:
-            try:
-                os.utime(siginfo, None)
-            except PermissionError:
-                pass
-            except OSError as e:
-                # Handle read-only file systems gracefully
-                import errno
-                if e.errno != errno.EROFS:
-                    raise e
-
-}
-
-SSTATE_PRUNE_OBSOLETEWORKDIR ?= "1"
-
-#
-# Event handler which removes manifest and stamp files for recipes which are no
-# longer 'reachable' in a build where they once were. 'Reachable' refers to
-# whether a recipe is parsed, so recipes in a layer which was removed would no
-# longer be reachable. Switching between systemd and sysvinit where recipes
-# became skipped would be another example.
-#
-# Also optionally removes the workdir of those tasks/recipes
-#
-addhandler sstate_eventhandler_reachablestamps
-sstate_eventhandler_reachablestamps[eventmask] = "bb.event.ReachableStamps"
-python sstate_eventhandler_reachablestamps() {
-    import glob
-    d = e.data
-    stamps = e.stamps.values()
-    removeworkdir = (d.getVar("SSTATE_PRUNE_OBSOLETEWORKDIR", False) == "1")
-    preservestampfile = d.expand('${SSTATE_MANIFESTS}/preserve-stamps')
-    preservestamps = []
-    if os.path.exists(preservestampfile):
-        with open(preservestampfile, 'r') as f:
-            preservestamps = f.readlines()
-    seen = []
-
-    # The machine index contains all the stamps this machine has ever seen in this build directory.
-    # We should only remove things which this machine once accessed but no longer does.
-    machineindex = set()
-    bb.utils.mkdirhier(d.expand("${SSTATE_MANIFESTS}"))
-    mi = d.expand("${SSTATE_MANIFESTS}/index-machine-${MACHINE}")
-    if os.path.exists(mi):
-        with open(mi, "r") as f:
-            machineindex = set(line.strip() for line in f.readlines())
-
-    for a in sorted(list(set(d.getVar("SSTATE_ARCHS").split()))):
-        toremove = []
-        i = d.expand("${SSTATE_MANIFESTS}/index-" + a)
-        if not os.path.exists(i):
-            continue
-        manseen = set()
-        ignore = []
-        with open(i, "r") as f:
-            lines = f.readlines()
-            for l in reversed(lines):
-                try:
-                    (stamp, manifest, workdir) = l.split()
-                    # The index may have multiple entries for the same manifest as the code above only appends
-                    # new entries and there may be an entry with matching manifest but differing version in stamp/workdir.
-                    # The last entry in the list is the valid one, any earlier entries with matching manifests
-                    # should be ignored.
-                    if manifest in manseen:
-                        ignore.append(l)
-                        continue
-                    manseen.add(manifest)
-                    if stamp not in stamps and stamp not in preservestamps and stamp in machineindex:
-                        toremove.append(l)
-                        if stamp not in seen:
-                            bb.debug(2, "Stamp %s is not reachable, removing related manifests" % stamp)
-                            seen.append(stamp)
-                except ValueError:
-                    bb.fatal("Invalid line '%s' in sstate manifest '%s'" % (l, i))
-
-        if toremove:
-            msg = "Removing %d recipes from the %s sysroot" % (len(toremove), a)
-            bb.event.fire(bb.event.ProcessStarted(msg, len(toremove)), d)
-
-            removed = 0
-            for r in toremove:
-                (stamp, manifest, workdir) = r.split()
-                for m in glob.glob(manifest + ".*"):
-                    if m.endswith(".postrm"):
-                        continue
-                    sstate_clean_manifest(m, d)
-                bb.utils.remove(stamp + "*")
-                if removeworkdir:
-                    bb.utils.remove(workdir, recurse = True)
-                lines.remove(r)
-                removed = removed + 1
-                bb.event.fire(bb.event.ProcessProgress(msg, removed), d)
-
-            bb.event.fire(bb.event.ProcessFinished(msg), d)
-
-        with open(i, "w") as f:
-            for l in lines:
-                if l in ignore:
-                    continue
-                f.write(l)
-    machineindex |= set(stamps)
-    with open(mi, "w") as f:
-        for l in machineindex:
-            f.write(l + "\n")
-
-    if preservestamps:
-        os.remove(preservestampfile)
-}
-
-
-#
-# Bitbake can generate an event showing which setscene tasks are 'stale',
-# i.e. which ones will be rerun. These are ones where a stamp file is present but
-# it is stale (e.g. the taskhash doesn't match). With that list we can go through
-# the manifests for matching tasks and "uninstall" those manifests now. We do
-# this now rather than mid build since the distribution of files between sstate
-# objects may have changed, new tasks may run first and if those new tasks overlap
-# with the stale tasks, we'd see overlapping files messages and failures. Thankfully
-# removing these files is fast.
-#
-addhandler sstate_eventhandler_stalesstate
-sstate_eventhandler_stalesstate[eventmask] = "bb.event.StaleSetSceneTasks"
-python sstate_eventhandler_stalesstate() {
-    d = e.data
-    tasks = e.tasks
-
-    bb.utils.mkdirhier(d.expand("${SSTATE_MANIFESTS}"))
-
-    for a in list(set(d.getVar("SSTATE_ARCHS").split())):
-        toremove = []
-        i = d.expand("${SSTATE_MANIFESTS}/index-" + a)
-        if not os.path.exists(i):
-            continue
-        with open(i, "r") as f:
-            lines = f.readlines()
-            for l in lines:
-                try:
-                    (stamp, manifest, workdir) = l.split()
-                    for tid in tasks:
-                        for s in tasks[tid]:
-                            if s.startswith(stamp):
-                                taskname = bb.runqueue.taskname_from_tid(tid)[3:]
-                                manname = manifest + "." + taskname
-                                if os.path.exists(manname):
-                                    bb.debug(2, "Sstate for %s is stale, removing related manifest %s" % (tid, manname))
-                                    toremove.append((manname, tid, tasks[tid]))
-                                    break
-                except ValueError:
-                    bb.fatal("Invalid line '%s' in sstate manifest '%s'" % (l, i))
-
-        if toremove:
-            msg = "Removing %d stale sstate objects for arch %s" % (len(toremove), a)
-            bb.event.fire(bb.event.ProcessStarted(msg, len(toremove)), d)
-
-            removed = 0
-            for (manname, tid, stamps) in toremove:
-                sstate_clean_manifest(manname, d)
-                for stamp in stamps:
-                    bb.utils.remove(stamp)
-                removed = removed + 1
-                bb.event.fire(bb.event.ProcessProgress(msg, removed), d)
-
-            bb.event.fire(bb.event.ProcessFinished(msg), d)
-}
diff --git a/poky/meta/classes/staging.bbclass b/poky/meta/classes/staging.bbclass
deleted file mode 100644
index bf8ca58..0000000
--- a/poky/meta/classes/staging.bbclass
+++ /dev/null
@@ -1,684 +0,0 @@
-# These directories will be staged in the sysroot
-SYSROOT_DIRS = " \
-    ${includedir} \
-    ${libdir} \
-    ${base_libdir} \
-    ${nonarch_base_libdir} \
-    ${datadir} \
-    /sysroot-only \
-"
-
-# These directories are also staged in the sysroot when they contain files that
-# are usable on the build system
-SYSROOT_DIRS_NATIVE = " \
-    ${bindir} \
-    ${sbindir} \
-    ${base_bindir} \
-    ${base_sbindir} \
-    ${libexecdir} \
-    ${sysconfdir} \
-    ${localstatedir} \
-"
-SYSROOT_DIRS:append:class-native = " ${SYSROOT_DIRS_NATIVE}"
-SYSROOT_DIRS:append:class-cross = " ${SYSROOT_DIRS_NATIVE}"
-SYSROOT_DIRS:append:class-crosssdk = " ${SYSROOT_DIRS_NATIVE}"
-
-# These directories will not be staged in the sysroot
-SYSROOT_DIRS_IGNORE = " \
-    ${mandir} \
-    ${docdir} \
-    ${infodir} \
-    ${datadir}/X11/locale \
-    ${datadir}/applications \
-    ${datadir}/bash-completion \
-    ${datadir}/fonts \
-    ${datadir}/gtk-doc/html \
-    ${datadir}/installed-tests \
-    ${datadir}/locale \
-    ${datadir}/pixmaps \
-    ${datadir}/terminfo \
-    ${libdir}/${BPN}/ptest \
-"
-
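-# Stage a single directory into the sysroot by hard-linking it into place with
-# cpio in pass-through mode (-p), creating directories (-d), linking instead of
-# copying where possible (-l) and overwriting unconditionally (-u).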
-sysroot_stage_dir() {
-	src="$1"
-	dest="$2"
-	# if the src doesn't exist don't do anything
-	if [ ! -d "$src" ]; then
-		 return
-	fi
-
-	mkdir -p "$dest"
-	rdest=$(realpath --relative-to="$src" "$dest")
-	(
-		cd $src
-		find . -print0 | cpio --null -pdlu $rdest
-	)
-}
-
-sysroot_stage_dirs() {
-	from="$1"
-	to="$2"
-
-	for dir in ${SYSROOT_DIRS}; do
-		sysroot_stage_dir "$from$dir" "$to$dir"
-	done
-
-	# Remove directories we do not care about
-	for dir in ${SYSROOT_DIRS_IGNORE}; do
-		rm -rf "$to$dir"
-	done
-}
-
-sysroot_stage_all() {
-	sysroot_stage_dirs ${D} ${SYSROOT_DESTDIR}
-}
-
-python sysroot_strip () {
-    inhibit_sysroot = d.getVar('INHIBIT_SYSROOT_STRIP')
-    if inhibit_sysroot and oe.types.boolean(inhibit_sysroot):
-        return
-
-    dstdir = d.getVar('SYSROOT_DESTDIR')
-    pn = d.getVar('PN')
-    libdir = d.getVar("libdir")
-    base_libdir = d.getVar("base_libdir")
-    qa_already_stripped = 'already-stripped' in (d.getVar('INSANE_SKIP:' + pn) or "").split()
-    strip_cmd = d.getVar("STRIP")
-
-    oe.package.strip_execs(pn, dstdir, strip_cmd, libdir, base_libdir, d,
-                           qa_already_stripped=qa_already_stripped)
-}
-
-do_populate_sysroot[dirs] = "${SYSROOT_DESTDIR}"
-
-addtask populate_sysroot after do_install
-
-SYSROOT_PREPROCESS_FUNCS ?= ""
-SYSROOT_DESTDIR = "${WORKDIR}/sysroot-destdir"
-
-python do_populate_sysroot () {
-    # SYSROOT 'version' 2
-    bb.build.exec_func("sysroot_stage_all", d)
-    bb.build.exec_func("sysroot_strip", d)
-    for f in (d.getVar('SYSROOT_PREPROCESS_FUNCS') or '').split():
-        bb.build.exec_func(f, d)
-    pn = d.getVar("PN")
-    multiprov = d.getVar("BB_MULTI_PROVIDER_ALLOWED").split()
-    provdir = d.expand("${SYSROOT_DESTDIR}${base_prefix}/sysroot-providers/")
-    bb.utils.mkdirhier(provdir)
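-    # Record this recipe as the provider of each of its PROVIDES entries (one
-    # marker file per provider name), skipping names where multiple providers
-    # are allowed.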
-    for p in d.getVar("PROVIDES").split():
-        if p in multiprov:
-            continue
-        p = p.replace("/", "_")
-        with open(provdir + p, "w") as f:
-            f.write(pn)
-}
-
-do_populate_sysroot[vardeps] += "${SYSROOT_PREPROCESS_FUNCS}"
-do_populate_sysroot[vardepsexclude] += "BB_MULTI_PROVIDER_ALLOWED"
-
-POPULATESYSROOTDEPS = ""
-POPULATESYSROOTDEPS:class-target = "virtual/${MLPREFIX}${HOST_PREFIX}binutils:do_populate_sysroot"
-POPULATESYSROOTDEPS:class-nativesdk = "virtual/${HOST_PREFIX}binutils-crosssdk:do_populate_sysroot"
-do_populate_sysroot[depends] += "${POPULATESYSROOTDEPS}"
-
-SSTATETASKS += "do_populate_sysroot"
-do_populate_sysroot[cleandirs] = "${SYSROOT_DESTDIR}"
-do_populate_sysroot[sstate-inputdirs] = "${SYSROOT_DESTDIR}"
-do_populate_sysroot[sstate-outputdirs] = "${COMPONENTS_DIR}/${PACKAGE_ARCH}/${PN}"
-do_populate_sysroot[sstate-fixmedir] = "${COMPONENTS_DIR}/${PACKAGE_ARCH}/${PN}"
-
-python do_populate_sysroot_setscene () {
-    sstate_setscene(d)
-}
-addtask do_populate_sysroot_setscene
-
-def staging_copyfile(c, target, dest, postinsts, seendirs):
-    import errno
-
-    destdir = os.path.dirname(dest)
-    if destdir not in seendirs:
-        bb.utils.mkdirhier(destdir)
-        seendirs.add(destdir)
-    if "/usr/bin/postinst-" in c:
-        postinsts.append(dest)
-    if os.path.islink(c):
-        linkto = os.readlink(c)
-        if os.path.lexists(dest):
-            if not os.path.islink(dest):
-                raise OSError(errno.EEXIST, "Link %s already exists as a file" % dest, dest)
-            if os.readlink(dest) == linkto:
-                return dest
-            raise OSError(errno.EEXIST, "Link %s already exists to a different location? (%s vs %s)" % (dest, os.readlink(dest), linkto), dest)
-        os.symlink(linkto, dest)
-        #bb.warn(c)
-    else:
-        try:
-            os.link(c, dest)
-        except OSError as err:
-            if err.errno == errno.EXDEV:
-                bb.utils.copyfile(c, dest)
-            else:
-                raise
-    return dest
-
-def staging_copydir(c, target, dest, seendirs):
-    if dest not in seendirs:
-        bb.utils.mkdirhier(dest)
-        seendirs.add(dest)
-
-def staging_processfixme(fixme, target, recipesysroot, recipesysrootnative, d):
-    import subprocess
-
-    if not fixme:
-        return
-    cmd = "sed -e 's:^[^/]*/:%s/:g' %s | xargs sed -i -e 's:FIXMESTAGINGDIRTARGET:%s:g; s:FIXMESTAGINGDIRHOST:%s:g'" % (target, " ".join(fixme), recipesysroot, recipesysrootnative)
-    for fixmevar in ['PSEUDO_SYSROOT', 'HOSTTOOLS_DIR', 'PKGDATA_DIR', 'PSEUDO_LOCALSTATEDIR', 'LOGFIFO']:
-        fixme_path = d.getVar(fixmevar)
-        cmd += " -e 's:FIXME_%s:%s:g'" % (fixmevar, fixme_path)
-    bb.debug(2, cmd)
-    subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
-
-
-def staging_populate_sysroot_dir(targetsysroot, nativesysroot, native, d):
-    import glob
-    import subprocess
-    import errno
-
-    fixme = []
-    postinsts = []
-    seendirs = set()
-    stagingdir = d.getVar("STAGING_DIR")
-    if native:
-        pkgarchs = ['${BUILD_ARCH}', '${BUILD_ARCH}_*']
-        targetdir = nativesysroot
-    else:
-        pkgarchs = ['${MACHINE_ARCH}']
-        pkgarchs = pkgarchs + list(reversed(d.getVar("PACKAGE_EXTRA_ARCHS").split()))
-        pkgarchs.append('allarch')
-        targetdir = targetsysroot
-
-    bb.utils.mkdirhier(targetdir)
-    for pkgarch in pkgarchs:
-        for manifest in glob.glob(d.expand("${SSTATE_MANIFESTS}/manifest-%s-*.populate_sysroot" % pkgarch)):
-            if manifest.endswith("-initial.populate_sysroot"):
-                # skip libgcc-initial due to file overlap
-                continue
-            if not native and (manifest.endswith("-native.populate_sysroot") or "nativesdk-" in manifest):
-                continue
-            if native and not (manifest.endswith("-native.populate_sysroot") or manifest.endswith("-cross.populate_sysroot") or "-cross-" in manifest):
-                continue
-            tmanifest = targetdir + "/" + os.path.basename(manifest)
-            if os.path.exists(tmanifest):
-                continue
-            try:
-                os.link(manifest, tmanifest)
-            except OSError as err:
-                if err.errno == errno.EXDEV:
-                    bb.utils.copyfile(manifest, tmanifest)
-                else:
-                    raise
-            with open(manifest, "r") as f:
-                for l in f:
-                    l = l.strip()
-                    if l.endswith("/fixmepath"):
-                        fixme.append(l)
-                        continue
-                    if l.endswith("/fixmepath.cmd"):
-                        continue
-                    dest = l.replace(stagingdir, "")
-                    dest = targetdir + "/" + "/".join(dest.split("/")[3:])
-                    if l.endswith("/"):
-                        staging_copydir(l, targetdir, dest, seendirs)
-                        continue
-                    try:
-                        staging_copyfile(l, targetdir, dest, postinsts, seendirs)
-                    except FileExistsError:
-                        continue
-
-    staging_processfixme(fixme, targetdir, targetsysroot, nativesysroot, d)
-    for p in postinsts:
-        subprocess.check_output(p, shell=True, stderr=subprocess.STDOUT)
-
-#
-# Manifests here are complicated. The main sysroot area has the unpacked sstate
-# which is unrelocated and tracked by the main sstate manifests. Each recipe
-# specific sysroot has manifests for each dependency that is installed there.
-# The task hash is used to tell whether the data needs to be reinstalled. We
-# use a symlink to point to the currently installed hash. There is also a
-# "complete" stamp file which is used to mark if installation completed. If
-# something fails (e.g. a postinst), this won't get written and we would
-# remove and reinstall the dependency. This also means partially installed
-# dependencies should get cleaned up correctly.
-#
-
-python extend_recipe_sysroot() {
-    import copy
-    import subprocess
-    import errno
-    import collections
-    import glob
-
-    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
-    mytaskname = d.getVar("BB_RUNTASK")
-    if mytaskname.endswith("_setscene"):
-        mytaskname = mytaskname.replace("_setscene", "")
-    workdir = d.getVar("WORKDIR")
-    #bb.warn(str(taskdepdata))
-    pn = d.getVar("PN")
-    stagingdir = d.getVar("STAGING_DIR")
-    sharedmanifests = d.getVar("COMPONENTS_DIR") + "/manifests"
-    recipesysroot = d.getVar("RECIPE_SYSROOT")
-    recipesysrootnative = d.getVar("RECIPE_SYSROOT_NATIVE")
-
-    # Detect bitbake -b usage
-    nodeps = d.getVar("BB_LIMITEDDEPS") or False
-    if nodeps:
-        lock = bb.utils.lockfile(recipesysroot + "/sysroot.lock")
-        staging_populate_sysroot_dir(recipesysroot, recipesysrootnative, True, d)
-        staging_populate_sysroot_dir(recipesysroot, recipesysrootnative, False, d)
-        bb.utils.unlockfile(lock)
-        return
-
-    start = None
-    configuredeps = []
-    owntaskdeps = []
-    for dep in taskdepdata:
-        data = taskdepdata[dep]
-        if data[1] == mytaskname and data[0] == pn:
-            start = dep
-        elif data[0] == pn:
-            owntaskdeps.append(data[1])
-    if start is None:
-        bb.fatal("Couldn't find ourself in BB_TASKDEPDATA?")
-
-    # We need to figure out which sysroot files we need to expose to this task.
-    # This needs to match what would get restored from sstate, which is controlled
-    # ultimately by calls from bitbake to setscene_depvalid().
-    # That function expects a setscene dependency tree. We build a dependency tree
-    # condensed to inter-sstate task dependencies, similar to that used by setscene
-    # tasks. We can then call into setscene_depvalid() and decide
-    # which dependencies we can "see" and should expose in the recipe specific sysroot.
-    setscenedeps = copy.deepcopy(taskdepdata)
-
-    start = set([start])
-
-    sstatetasks = d.getVar("SSTATETASKS").split()
-    # Add recipe specific tasks referenced by setscene_depvalid()
-    sstatetasks.append("do_stash_locale")
-    sstatetasks.append("do_deploy")
-
-    def print_dep_tree(deptree):
-        data = ""
-        for dep in deptree:
-            deps = "    " + "\n    ".join(deptree[dep][3]) + "\n"
-            data = data + "%s:\n  %s\n  %s\n%s  %s\n  %s\n" % (deptree[dep][0], deptree[dep][1], deptree[dep][2], deps, deptree[dep][4], deptree[dep][5])
-        return data
-
-    #bb.note("Full dep tree is:\n%s" % print_dep_tree(taskdepdata))
-
-    #bb.note(" start2 is %s" % str(start))
-
-    # If start is an sstate task (like do_package) we need to add in its direct dependencies
-    # else the code below won't recurse into them.
-    for dep in set(start):
-        for dep2 in setscenedeps[dep][3]:
-            start.add(dep2)
-        start.remove(dep)
-
-    #bb.note(" start3 is %s" % str(start))
-
-    # Create collapsed do_populate_sysroot -> do_populate_sysroot tree
-    for dep in taskdepdata:
-        data = setscenedeps[dep]
-        if data[1] not in sstatetasks:
-            for dep2 in setscenedeps:
-                data2 = setscenedeps[dep2]
-                if dep in data2[3]:
-                    data2[3].update(setscenedeps[dep][3])
-                    data2[3].remove(dep)
-            if dep in start:
-                start.update(setscenedeps[dep][3])
-                start.remove(dep)
-            del setscenedeps[dep]
-
-    # Remove circular references
-    for dep in setscenedeps:
-        if dep in setscenedeps[dep][3]:
-            setscenedeps[dep][3].remove(dep)
-
-    #bb.note("Computed dep tree is:\n%s" % print_dep_tree(setscenedeps))
-    #bb.note(" start is %s" % str(start))
-
-    # Direct dependencies should be present and can be depended upon
-    for dep in sorted(set(start)):
-        if setscenedeps[dep][1] == "do_populate_sysroot":
-            if dep not in configuredeps:
-                configuredeps.append(dep)
-    bb.note("Direct dependencies are %s" % str(configuredeps))
-    #bb.note(" or %s" % str(start))
-
-    msgbuf = []
-    # Call into setscene_depvalid for each sub-dependency and only copy sysroot files
-    # for ones that would be restored from sstate.
-    done = list(start)
-    next = list(start)
-    while next:
-        new = []
-        for dep in next:
-            data = setscenedeps[dep]
-            for datadep in data[3]:
-                if datadep in done:
-                    continue
-                taskdeps = {}
-                taskdeps[dep] = setscenedeps[dep][:2]
-                taskdeps[datadep] = setscenedeps[datadep][:2]
-                retval = setscene_depvalid(datadep, taskdeps, [], d, msgbuf)
-                if retval:
-                    msgbuf.append("Skipping setscene dependency %s for installation into the sysroot" % datadep)
-                    continue
-                done.append(datadep)
-                new.append(datadep)
-                if datadep not in configuredeps and setscenedeps[datadep][1] == "do_populate_sysroot":
-                    configuredeps.append(datadep)
-                    msgbuf.append("Adding dependency on %s" % setscenedeps[datadep][0])
-                else:
-                    msgbuf.append("Following dependency on %s" % setscenedeps[datadep][0])
-        next = new
-
-    # This logging is too verbose for day to day use sadly
-    #bb.debug(2, "\n".join(msgbuf))
-
-    depdir = recipesysrootnative + "/installeddeps"
-    bb.utils.mkdirhier(depdir)
-    bb.utils.mkdirhier(sharedmanifests)
-
-    lock = bb.utils.lockfile(recipesysroot + "/sysroot.lock")
-
-    fixme = {}
-    seendirs = set()
-    postinsts = []
-    multilibs = {}
-    manifests = {}
-    # All files that we're going to be installing, to find conflicts.
-    fileset = {}
-
-    invalidate_tasks = set()
-    for f in os.listdir(depdir):
-        removed = []
-        if not f.endswith(".complete"):
-            continue
-        f = depdir + "/" + f
-        if os.path.islink(f) and not os.path.exists(f):
-            bb.note("%s no longer exists, removing from sysroot" % f)
-            lnk = os.readlink(f.replace(".complete", ""))
-            sstate_clean_manifest(depdir + "/" + lnk, d, canrace=True, prefix=workdir)
-            os.unlink(f)
-            os.unlink(f.replace(".complete", ""))
-            removed.append(os.path.basename(f.replace(".complete", "")))
-
-        # If we've removed files from the sysroot above, the task that installed them may still
-        # have a stamp file present for the task. This is probably invalid right now but may become
-        # valid again if the user were to change configuration back for example. Since we've removed
-        # the files a task might need, remove the stamp file too to force it to rerun.
-        # YOCTO #14790
-        if removed:
-            for i in glob.glob(depdir + "/index.*"):
-                if i.endswith("." + mytaskname):
-                    continue
-                with open(i, "r") as f:
-                    for l in f:
-                        if l.startswith("TaskDeps:"):
-                            continue
-                        l = l.strip()
-                        if l in removed:
-                            invalidate_tasks.add(i.rsplit(".", 1)[1])
-                            break
-    for t in invalidate_tasks:
-        bb.note("Invalidating stamps for task %s" % t)
-        bb.build.clean_stamp(t, d)
-
-    installed = []
-    for dep in configuredeps:
-        c = setscenedeps[dep][0]
-        if mytaskname in ["do_sdk_depends", "do_populate_sdk_ext"] and c.endswith("-initial"):
-            bb.note("Skipping initial setscene dependency %s for installation into the sysroot" % c)
-            continue
-        installed.append(c)
-
-    # We want to remove anything which this task previously installed but is no longer a dependency
-    taskindex = depdir + "/" + "index." + mytaskname
-    if os.path.exists(taskindex):
-        potential = []
-        with open(taskindex, "r") as f:
-            for l in f:
-                l = l.strip()
-                if l not in installed:
-                    fl = depdir + "/" + l
-                    if not os.path.exists(fl):
-                        # Was likely already uninstalled
-                        continue
-                    potential.append(l)
-        # We need to ensure no other task needs this dependency. We hold the sysroot
-        # lock so we can search the indexes to check
-        if potential:
-            for i in glob.glob(depdir + "/index.*"):
-                if i.endswith("." + mytaskname):
-                    continue
-                with open(i, "r") as f:
-                    for l in f:
-                        if l.startswith("TaskDeps:"):
-                            prevtasks = l.split()[1:]
-                            if mytaskname in prevtasks:
-                                # We're a dependency of this task so we can clear items out of the sysroot
-                                break
-                        l = l.strip()
-                        if l in potential:
-                            potential.remove(l)
-        for l in potential:
-            fl = depdir + "/" + l
-            bb.note("Task %s no longer depends on %s, removing from sysroot" % (mytaskname, l))
-            lnk = os.readlink(fl)
-            sstate_clean_manifest(depdir + "/" + lnk, d, canrace=True, prefix=workdir)
-            os.unlink(fl)
-            os.unlink(fl + ".complete")
-
-    msg_exists = []
-    msg_adding = []
-
-    # Handle all removals first since files may move between recipes
-    for dep in configuredeps:
-        c = setscenedeps[dep][0]
-        if c not in installed:
-            continue
-        taskhash = setscenedeps[dep][5]
-        taskmanifest = depdir + "/" + c + "." + taskhash
-
-        if os.path.exists(depdir + "/" + c):
-            lnk = os.readlink(depdir + "/" + c)
-            if lnk == c + "." + taskhash and os.path.exists(depdir + "/" + c + ".complete"):
-                continue
-            else:
-                bb.note("%s exists in sysroot, but is stale (%s vs. %s), removing." % (c, lnk, c + "." + taskhash))
-                sstate_clean_manifest(depdir + "/" + lnk, d, canrace=True, prefix=workdir)
-                os.unlink(depdir + "/" + c)
-                if os.path.lexists(depdir + "/" + c + ".complete"):
-                    os.unlink(depdir + "/" + c + ".complete")
-        elif os.path.lexists(depdir + "/" + c):
-            os.unlink(depdir + "/" + c)
-
-    binfiles = {}
-    # Now handle installs
-    for dep in configuredeps:
-        c = setscenedeps[dep][0]
-        if c not in installed:
-            continue
-        taskhash = setscenedeps[dep][5]
-        taskmanifest = depdir + "/" + c + "." + taskhash
-
-        if os.path.exists(depdir + "/" + c):
-            lnk = os.readlink(depdir + "/" + c)
-            if lnk == c + "." + taskhash and os.path.exists(depdir + "/" + c + ".complete"):
-                msg_exists.append(c)
-                continue
-
-        msg_adding.append(c)
-
-        os.symlink(c + "." + taskhash, depdir + "/" + c)
-
-        manifest, d2 = oe.sstatesig.find_sstate_manifest(c, setscenedeps[dep][2], "populate_sysroot", d, multilibs)
-        if d2 is not d:
-            # If we don't do this, the recipe sysroot will be placed in the wrong WORKDIR for multilibs
-            # We need a consistent WORKDIR for the image
-            d2.setVar("WORKDIR", d.getVar("WORKDIR"))
-        destsysroot = d2.getVar("RECIPE_SYSROOT")
-        # We put allarch recipes into the default sysroot
-        if manifest and "allarch" in manifest:
-            destsysroot = d.getVar("RECIPE_SYSROOT")
-
-        native = False
-        if c.endswith("-native") or "-cross-" in c or "-crosssdk" in c:
-            native = True
-
-        if manifest:
-            newmanifest = collections.OrderedDict()
-            targetdir = destsysroot
-            if native:
-                targetdir = recipesysrootnative
-            if targetdir not in fixme:
-                fixme[targetdir] = []
-            fm = fixme[targetdir]
-
-            with open(manifest, "r") as f:
-                manifests[dep] = manifest
-                for l in f:
-                    l = l.strip()
-                    if l.endswith("/fixmepath"):
-                        fm.append(l)
-                        continue
-                    if l.endswith("/fixmepath.cmd"):
-                        continue
-                    dest = l.replace(stagingdir, "")
-                    dest = "/" + "/".join(dest.split("/")[3:])
-                    newmanifest[l] = targetdir + dest
-
-                    # Check if files have already been installed by another
-                    # recipe and abort if they have, explaining what recipes are
-                    # conflicting.
-                    hashname = targetdir + dest
-                    if not hashname.endswith("/"):
-                        if hashname in fileset:
-                            bb.fatal("The file %s is installed by both %s and %s, aborting" % (dest, c, fileset[hashname]))
-                        else:
-                            fileset[hashname] = c
-
-            # Having multiple identical manifests in each sysroot eats diskspace so
-            # create a shared pool of them and hardlink if we can.
-            # We create the manifest in advance so that if something fails during installation,
-            # or the build is interrupted, subsequent execution can clean up.
-            sharedm = sharedmanifests + "/" + os.path.basename(taskmanifest)
-            if not os.path.exists(sharedm):
-                smlock = bb.utils.lockfile(sharedm + ".lock")
-                # Can race here. You'd think it just means we may not end up with all copies hardlinked to each other
-                # but python can lose file handles so we need to do this under a lock.
-                if not os.path.exists(sharedm):
-                    with open(sharedm, 'w') as m:
-                       for l in newmanifest:
-                           dest = newmanifest[l]
-                           m.write(dest.replace(workdir + "/", "") + "\n")
-                bb.utils.unlockfile(smlock)
-            try:
-                os.link(sharedm, taskmanifest)
-            except OSError as err:
-                if err.errno == errno.EXDEV:
-                    bb.utils.copyfile(sharedm, taskmanifest)
-                else:
-                    raise
-            # Finally actually install the files
-            for l in newmanifest:
-                    dest = newmanifest[l]
-                    if l.endswith("/"):
-                        staging_copydir(l, targetdir, dest, seendirs)
-                        continue
-                    if "/bin/" in l or "/sbin/" in l:
-                        # defer /*bin/* files until last in case they need libs
-                        binfiles[l] = (targetdir, dest)
-                    else:
-                        staging_copyfile(l, targetdir, dest, postinsts, seendirs)
-
-    # Handle deferred binfiles
-    for l in binfiles:
-        (targetdir, dest) = binfiles[l]
-        staging_copyfile(l, targetdir, dest, postinsts, seendirs)
-
-    bb.note("Installed into sysroot: %s" % str(msg_adding))
-    bb.note("Skipping as already exists in sysroot: %s" % str(msg_exists))
-
-    for f in fixme:
-        staging_processfixme(fixme[f], f, recipesysroot, recipesysrootnative, d)
-
-    for p in postinsts:
-        subprocess.check_output(p, shell=True, stderr=subprocess.STDOUT)
-
-    for dep in manifests:
-        c = setscenedeps[dep][0]
-        os.symlink(manifests[dep], depdir + "/" + c + ".complete")
-
-    with open(taskindex, "w") as f:
-        f.write("TaskDeps: " + " ".join(owntaskdeps) + "\n")
-        for l in sorted(installed):
-            f.write(l + "\n")
-
-    bb.utils.unlockfile(lock)
-}
-extend_recipe_sysroot[vardepsexclude] += "MACHINE_ARCH PACKAGE_EXTRA_ARCHS SDK_ARCH BUILD_ARCH SDK_OS BB_TASKDEPDATA"
-
-do_prepare_recipe_sysroot[deptask] = "do_populate_sysroot"
-python do_prepare_recipe_sysroot () {
-    bb.build.exec_func("extend_recipe_sysroot", d)
-}
-addtask do_prepare_recipe_sysroot before do_configure after do_fetch
-
-python staging_taskhandler() {
-    bbtasks = e.tasklist
-    for task in bbtasks:
-        deps = d.getVarFlag(task, "depends")
-        if task == "do_configure" or (deps and "populate_sysroot" in deps):
-            d.prependVarFlag(task, "prefuncs", "extend_recipe_sysroot ")
-}
-staging_taskhandler[eventmask] = "bb.event.RecipeTaskPreProcess"
-addhandler staging_taskhandler
-
-
-#
-# Target build output, stored in do_populate_sysroot or do_package can depend
-# not only upon direct dependencies but also indirect ones. A good example is
-# linux-libc-headers. The toolchain depends on this but most target recipes do
-# not. There are some headers which are not used by the toolchain build and do
-# not change the toolchain task output, hence the task hashes can change without
-# changing the sysroot output of that recipe yet they can influence others.
-#
-# A specific example is rtc.h which can change rtcwake.c in util-linux but is not
-# used in the glibc or gcc build. To account for this, we need to account for the
-# populate_sysroot hashes in the task output hashes.
-#
-python target_add_sysroot_deps () {
-    current_task = "do_" + d.getVar("BB_CURRENTTASK")
-    if current_task not in ["do_populate_sysroot", "do_package"]:
-        return
-
-    pn = d.getVar("PN")
-    if pn.endswith("-native"):
-        return
-
-    taskdepdata = d.getVar("BB_TASKDEPDATA", False)
-    deps = {}
-    for dep in taskdepdata.values():
-        if dep[1] == "do_populate_sysroot" and not dep[0].endswith(("-native", "-initial")) and "-cross-" not in dep[0] and dep[0] != pn:
-            deps[dep[0]] = dep[6]
-
-    d.setVar("HASHEQUIV_EXTRA_SIGDATA", "\n".join("%s: %s" % (k, deps[k]) for k in sorted(deps.keys())))
-}
-SSTATECREATEFUNCS += "target_add_sysroot_deps"
-
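
For context, recipes consumed the sysroot machinery removed above mainly through SYSROOT_DIRS and SYSROOT_PREPROCESS_FUNCS. A minimal, hypothetical recipe fragment using that interface (the staged path, file name and function name are illustrative only):

    # Stage an extra directory and post-process it before do_populate_sysroot
    # packages it (hypothetical usage of the class removed above).
    SYSROOT_DIRS += "/opt/${BPN}"

    sysroot_fixup_example() {
        # SYSROOT_PREPROCESS_FUNCS run against ${SYSROOT_DESTDIR}
        sed -i 's|@EXAMPLEPATH@|${STAGING_DIR_TARGET}|g' ${SYSROOT_DESTDIR}/opt/${BPN}/example.conf
    }
    SYSROOT_PREPROCESS_FUNCS += "sysroot_fixup_example"
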
diff --git a/poky/meta/classes/syslinux.bbclass b/poky/meta/classes/syslinux.bbclass
deleted file mode 100644
index 894f6b3..0000000
--- a/poky/meta/classes/syslinux.bbclass
+++ /dev/null
@@ -1,194 +0,0 @@
-# syslinux.bbclass
-# Copyright (C) 2004-2006, Advanced Micro Devices, Inc.  All Rights Reserved
-# Released under the MIT license (see packages/COPYING)
-
-# Provide syslinux specific functions for building bootable images.
-
-# External variables
-# ${INITRD} - indicates a list of filesystem images to concatenate and use as an initrd (optional)
-# ${ROOTFS} - indicates a filesystem image to include as the root filesystem (optional)
-# ${AUTO_SYSLINUXMENU} - set this to 1 to enable creating an automatic menu
-# ${LABELS} - a list of targets for the automatic config
-# ${APPEND} - an override list of append strings for each label
-# ${SYSLINUX_OPTS} - additional options to add to the syslinux file ';' delimited
-# ${SYSLINUX_SPLASH} - A background for the vga boot menu if using the boot menu
-# ${SYSLINUX_DEFAULT_CONSOLE} - set to "console=ttyX" to change kernel boot default console
-# ${SYSLINUX_SERIAL} - Set an alternate serial port or turn off serial with empty string
-# ${SYSLINUX_SERIAL_TTY} - Set alternate console=tty... kernel boot argument
-# ${SYSLINUX_KERNEL_ARGS} - Add additional kernel arguments
-
-do_bootimg[depends] += "${MLPREFIX}syslinux:do_populate_sysroot \
-                        syslinux-native:do_populate_sysroot"
-
-ISOLINUXDIR ?= "/isolinux"
-SYSLINUXDIR = "/"
-# The kernel has an internal default console, which you can override with
-# a console=...some_tty...
-SYSLINUX_DEFAULT_CONSOLE ?= ""
-SYSLINUX_SERIAL ?= "0 115200"
-SYSLINUX_SERIAL_TTY ?= "console=ttyS0,115200"
-SYSLINUX_PROMPT ?= "0"
-SYSLINUX_TIMEOUT ?= "50"
-AUTO_SYSLINUXMENU ?= "1"
-SYSLINUX_ALLOWOPTIONS ?= "1"
-SYSLINUX_ROOT ?= "${ROOT}"
-SYSLINUX_CFG_VM  ?= "${S}/syslinux_vm.cfg"
-SYSLINUX_CFG_LIVE ?= "${S}/syslinux_live.cfg"
-APPEND ?= ""
-
-# Need UUID utility code.
-inherit fs-uuid
-
-syslinux_populate() {
-	DEST=$1
-	BOOTDIR=$2
-	CFGNAME=$3
-
-	install -d ${DEST}${BOOTDIR}
-
-	# Install the config files
-	install -m 0644 ${SYSLINUX_CFG} ${DEST}${BOOTDIR}/${CFGNAME}
-	if [ "${AUTO_SYSLINUXMENU}" = 1 ] ; then
-		install -m 0644 ${STAGING_DATADIR}/syslinux/vesamenu.c32 ${DEST}${BOOTDIR}/vesamenu.c32
-		install -m 0444 ${STAGING_DATADIR}/syslinux/libcom32.c32 ${DEST}${BOOTDIR}/libcom32.c32
-		install -m 0444 ${STAGING_DATADIR}/syslinux/libutil.c32 ${DEST}${BOOTDIR}/libutil.c32
-		if [ "${SYSLINUX_SPLASH}" != "" ] ; then
-			install -m 0644 ${SYSLINUX_SPLASH} ${DEST}${BOOTDIR}/splash.lss
-		fi
-	fi
-}
-
-syslinux_iso_populate() {
-	iso_dir=$1
-	syslinux_populate $iso_dir ${ISOLINUXDIR} isolinux.cfg
-	install -m 0644 ${STAGING_DATADIR}/syslinux/isolinux.bin $iso_dir${ISOLINUXDIR}
-	install -m 0644 ${STAGING_DATADIR}/syslinux/ldlinux.c32 $iso_dir${ISOLINUXDIR}
-}
-
-syslinux_hddimg_populate() {
-	hdd_dir=$1
-	syslinux_populate $hdd_dir ${SYSLINUXDIR} syslinux.cfg
-	install -m 0444 ${STAGING_DATADIR}/syslinux/ldlinux.sys $hdd_dir${SYSLINUXDIR}/ldlinux.sys
-}
-
-syslinux_hddimg_install() {
-	syslinux ${IMGDEPLOYDIR}/${IMAGE_NAME}.hddimg
-}
-
-python build_syslinux_cfg () {
-    import copy
-    import sys
-
-    workdir = d.getVar('WORKDIR')
-    if not workdir:
-        bb.error("WORKDIR not defined, unable to package")
-        return
-        
-    labels = d.getVar('LABELS')
-    if not labels:
-        bb.debug(1, "LABELS not defined, nothing to do")
-        return
-    
-    if labels == []:
-        bb.debug(1, "No labels, nothing to do")
-        return
-
-    cfile = d.getVar('SYSLINUX_CFG')
-    if not cfile:
-        bb.fatal('Unable to read SYSLINUX_CFG')
-
-    try:
-        cfgfile = open(cfile, 'w')
-    except OSError:
-        bb.fatal('Unable to open %s' % cfile)
-
-    cfgfile.write('# Automatically created by OE\n')
-
-    opts = d.getVar('SYSLINUX_OPTS')
-
-    if opts:
-        for opt in opts.split(';'):
-            cfgfile.write('%s\n' % opt)
-
-    allowoptions = d.getVar('SYSLINUX_ALLOWOPTIONS')
-    if allowoptions:
-        cfgfile.write('ALLOWOPTIONS %s\n' % allowoptions)
-    else:
-        cfgfile.write('ALLOWOPTIONS 1\n')
-
-    syslinux_default_console = d.getVar('SYSLINUX_DEFAULT_CONSOLE')
-    syslinux_serial_tty = d.getVar('SYSLINUX_SERIAL_TTY')
-    syslinux_serial = d.getVar('SYSLINUX_SERIAL')
-    if syslinux_serial:
-        cfgfile.write('SERIAL %s\n' % syslinux_serial)
-
-    menu = (d.getVar('AUTO_SYSLINUXMENU') == "1")
-
-    if menu and syslinux_serial:
-        cfgfile.write('DEFAULT Graphics console %s\n' % (labels.split()[0]))
-    else:
-        cfgfile.write('DEFAULT %s\n' % (labels.split()[0]))
-
-    timeout = d.getVar('SYSLINUX_TIMEOUT')
-
-    if timeout:
-        cfgfile.write('TIMEOUT %s\n' % timeout)
-    else:
-        cfgfile.write('TIMEOUT 50\n')
-
-    prompt = d.getVar('SYSLINUX_PROMPT')
-    if prompt:
-        cfgfile.write('PROMPT %s\n' % prompt)
-    else:
-        cfgfile.write('PROMPT 1\n')
-
-    if menu:
-        cfgfile.write('ui vesamenu.c32\n')
-        cfgfile.write('menu title Select kernel options and boot kernel\n')
-        cfgfile.write('menu tabmsg Press [Tab] to edit, [Return] to select\n')
-        splash = d.getVar('SYSLINUX_SPLASH')
-        if splash:
-            cfgfile.write('menu background splash.lss\n')
-    
-    for label in labels.split():
-        localdata = bb.data.createCopy(d)
-
-        overrides = localdata.getVar('OVERRIDES')
-        if not overrides:
-            bb.fatal('OVERRIDES not defined')
-
-        localdata.setVar('OVERRIDES', label + ':' + overrides)
-    
-        btypes = [ [ "", syslinux_default_console ] ]
-        if menu and syslinux_serial:
-            btypes = [ [ "Graphics console ", syslinux_default_console  ],
-                [ "Serial console ", syslinux_serial_tty ] ]
-
-        root= d.getVar('SYSLINUX_ROOT')
-        if not root:
-            bb.fatal('SYSLINUX_ROOT not defined')
-
-        kernel = localdata.getVar('KERNEL_IMAGETYPE')
-        for btype in btypes:
-            cfgfile.write('LABEL %s%s\nKERNEL /%s\n' % (btype[0], label, kernel))
-
-            exargs = d.getVar('SYSLINUX_KERNEL_ARGS')
-            if exargs:
-                btype[1] += " " + exargs
-
-            append = localdata.getVar('APPEND')
-            initrd = localdata.getVar('INITRD')
-
-            append = root + " " + append
-            cfgfile.write('APPEND ')
-
-            if initrd:
-                cfgfile.write('initrd=/initrd ')
-
-            cfgfile.write('LABEL=%s '% (label))
-            append = replace_rootfs_uuid(d, append)
-            cfgfile.write('%s %s\n' % (append, btype[1]))
-
-    cfgfile.close()
-}
-build_syslinux_cfg[dirs] = "${S}"
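
The external variables documented at the top of the removed syslinux.bbclass were normally provided by an image recipe or machine configuration rather than by the class itself; a small, hypothetical sketch (label names and values are illustrative only):

    # Hypothetical configuration consumed by the syslinux.bbclass removed above
    LABELS = "boot install"
    APPEND = "quiet"
    AUTO_SYSLINUXMENU = "1"
    SYSLINUX_SERIAL = "0 115200"
    SYSLINUX_TIMEOUT = "50"
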
diff --git a/poky/meta/classes/systemd-boot-cfg.bbclass b/poky/meta/classes/systemd-boot-cfg.bbclass
deleted file mode 100644
index b3e0e6a..0000000
--- a/poky/meta/classes/systemd-boot-cfg.bbclass
+++ /dev/null
@@ -1,71 +0,0 @@
-SYSTEMD_BOOT_CFG ?= "${S}/loader.conf"
-SYSTEMD_BOOT_ENTRIES ?= ""
-SYSTEMD_BOOT_TIMEOUT ?= "10"
-
-# Uses MACHINE specific KERNEL_IMAGETYPE
-PACKAGE_ARCH = "${MACHINE_ARCH}"
-
-# Need UUID utility code.
-inherit fs-uuid
-
-python build_efi_cfg() {
-    s = d.getVar("S")
-    labels = d.getVar('LABELS')
-    if not labels:
-        bb.debug(1, "LABELS not defined, nothing to do")
-        return
-
-    if labels == []:
-        bb.debug(1, "No labels, nothing to do")
-        return
-
-    cfile = d.getVar('SYSTEMD_BOOT_CFG')
-    cdir = os.path.dirname(cfile)
-    if not os.path.exists(cdir):
-        os.makedirs(cdir)
-    try:
-         cfgfile = open(cfile, 'w')
-    except OSError:
-        bb.fatal('Unable to open %s' % cfile)
-
-    cfgfile.write('# Automatically created by OE\n')
-    cfgfile.write('default %s\n' % (labels.split()[0]))
-    timeout = d.getVar('SYSTEMD_BOOT_TIMEOUT')
-    if timeout:
-        cfgfile.write('timeout %s\n' % timeout)
-    else:
-        cfgfile.write('timeout 10\n')
-    cfgfile.close()
-
-    for label in labels.split():
-        localdata = d.createCopy()
-
-        entryfile = "%s/%s.conf" % (s, label)
-        if not os.path.exists(s):
-            os.makedirs(s)
-        d.appendVar("SYSTEMD_BOOT_ENTRIES", " " + entryfile)
-        try:
-            entrycfg = open(entryfile, "w")
-        except OSError:
-            bb.fatal('Unable to open %s' % entryfile)
-
-        entrycfg.write('title %s\n' % label)
-
-        kernel = localdata.getVar("KERNEL_IMAGETYPE")
-        entrycfg.write('linux /%s\n' % kernel)
-
-        append = localdata.getVar('APPEND')
-        initrd = localdata.getVar('INITRD')
-
-        if initrd:
-            entrycfg.write('initrd /initrd\n')
-        lb = label
-        if label == "install":
-            lb = "install-efi"
-        entrycfg.write('options LABEL=%s ' % lb)
-        if append:
-            append = replace_rootfs_uuid(d, append)
-            entrycfg.write('%s' % append)
-        entrycfg.write('\n')
-        entrycfg.close()
-}
diff --git a/poky/meta/classes/systemd-boot.bbclass b/poky/meta/classes/systemd-boot.bbclass
deleted file mode 100644
index 57ec0ac..0000000
--- a/poky/meta/classes/systemd-boot.bbclass
+++ /dev/null
@@ -1,35 +0,0 @@
-# Copyright (C) 2016 Intel Corporation
-#
-# Released under the MIT license (see COPYING.MIT)
-
-# systemd-boot.bbclass - systemd-boot is essentially gummiboot merged into
-#                        systemd. The original standalone gummiboot project
-#                        is no longer maintained.
-#
-# Set EFI_PROVIDER = "systemd-boot" to use systemd-boot on your live images instead of grub-efi
-# (images built by image-live.bbclass)
-
-do_bootimg[depends] += "${MLPREFIX}systemd-boot:do_deploy"
-
-require conf/image-uefi.conf
-# Need UUID utility code.
-inherit fs-uuid
-
-efi_populate() {
-        efi_populate_common "$1" systemd
-
-        # systemd-boot requires these paths for configuration files
-        # they are not customizable so no point in new vars
-        install -d ${DEST}/loader
-        install -d ${DEST}/loader/entries
-        install -m 0644 ${SYSTEMD_BOOT_CFG} ${DEST}/loader/loader.conf
-        for i in ${SYSTEMD_BOOT_ENTRIES}; do
-            install -m 0644 ${i} ${DEST}/loader/entries
-        done
-}
-
-efi_iso_populate:append() {
-        cp -r $iso_dir/loader ${EFIIMGDIR}
-}
-
-inherit systemd-boot-cfg
diff --git a/poky/meta/classes/systemd.bbclass b/poky/meta/classes/systemd.bbclass
deleted file mode 100644
index 09ec527..0000000
--- a/poky/meta/classes/systemd.bbclass
+++ /dev/null
@@ -1,233 +0,0 @@
-# The list of packages that should have systemd packaging scripts added.  For
-# each entry, optionally have a SYSTEMD_SERVICE:[package] that lists the service
-# files in this package.  If this variable isn't set, [package].service is used.
-SYSTEMD_PACKAGES ?= "${PN}"
-SYSTEMD_PACKAGES:class-native ?= ""
-SYSTEMD_PACKAGES:class-nativesdk ?= ""
-
-# Whether to enable or disable the services on installation.
-SYSTEMD_AUTO_ENABLE ??= "enable"
-
-# This class will be included in any recipe that supports systemd init scripts,
-# even if systemd is not in DISTRO_FEATURES.  As such don't make any changes
-# directly but check the DISTRO_FEATURES first.
-python __anonymous() {
-    # If the distro features have systemd but not sysvinit, inhibit update-rcd
-    # from doing any work so that pure-systemd images don't have redundant init
-    # files.
-    if bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d):
-        d.appendVar("DEPENDS", " systemd-systemctl-native")
-        d.appendVar("PACKAGE_WRITE_DEPS", " systemd-systemctl-native")
-        if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
-            d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
-}
-
-systemd_postinst() {
-if systemctl >/dev/null 2>/dev/null; then
-	OPTS=""
-
-	if [ -n "$D" ]; then
-		OPTS="--root=$D"
-	fi
-
-	if [ "${SYSTEMD_AUTO_ENABLE}" = "enable" ]; then
-		for service in ${SYSTEMD_SERVICE_ESCAPED}; do
-			systemctl ${OPTS} enable "$service"
-		done
-	fi
-
-	if [ -z "$D" ]; then
-		systemctl daemon-reload
-		systemctl preset ${SYSTEMD_SERVICE_ESCAPED}
-
-		if [ "${SYSTEMD_AUTO_ENABLE}" = "enable" ]; then
-			systemctl --no-block restart ${SYSTEMD_SERVICE_ESCAPED}
-		fi
-	fi
-fi
-}
-
-systemd_prerm() {
-if systemctl >/dev/null 2>/dev/null; then
-	if [ -z "$D" ]; then
-		systemctl stop ${SYSTEMD_SERVICE_ESCAPED}
-
-		systemctl disable ${SYSTEMD_SERVICE_ESCAPED}
-	fi
-fi
-}
-
-
-systemd_populate_packages[vardeps] += "systemd_prerm systemd_postinst"
-systemd_populate_packages[vardepsexclude] += "OVERRIDES"
-
-
-python systemd_populate_packages() {
-    import re
-    import shlex
-
-    if not bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d):
-        return
-
-    def get_package_var(d, var, pkg):
-        val = (d.getVar('%s:%s' % (var, pkg)) or "").strip()
-        if val == "":
-            val = (d.getVar(var) or "").strip()
-        return val
-
-    # Check if systemd-packages already included in PACKAGES
-    def systemd_check_package(pkg_systemd):
-        packages = d.getVar('PACKAGES')
-        if not pkg_systemd in packages.split():
-            bb.error('%s does not appear in package list, please add it' % pkg_systemd)
-
-
-    def systemd_generate_package_scripts(pkg):
-        bb.debug(1, 'adding systemd calls to postinst/postrm for %s' % pkg)
-
-        paths_escaped = ' '.join(shlex.quote(s) for s in d.getVar('SYSTEMD_SERVICE:' + pkg).split())
-        d.setVar('SYSTEMD_SERVICE_ESCAPED:' + pkg, paths_escaped)
-
-        # Add pkg to the overrides so that it finds the SYSTEMD_SERVICE:pkg
-        # variable.
-        localdata = d.createCopy()
-        localdata.prependVar("OVERRIDES", pkg + ":")
-
-        postinst = d.getVar('pkg_postinst:%s' % pkg)
-        if not postinst:
-            postinst = '#!/bin/sh\n'
-        postinst += localdata.getVar('systemd_postinst')
-        d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-        prerm = d.getVar('pkg_prerm:%s' % pkg)
-        if not prerm:
-            prerm = '#!/bin/sh\n'
-        prerm += localdata.getVar('systemd_prerm')
-        d.setVar('pkg_prerm:%s' % pkg, prerm)
-
-
-    # Add files to FILES:*-systemd if existent and not already done
-    def systemd_append_file(pkg_systemd, file_append):
-        appended = False
-        if os.path.exists(oe.path.join(d.getVar("D"), file_append)):
-            var_name = "FILES:" + pkg_systemd
-            files = d.getVar(var_name, False) or ""
-            if file_append not in files.split():
-                d.appendVar(var_name, " " + file_append)
-                appended = True
-        return appended
-
-    # Add systemd files to FILES:*-systemd, parse for Also= and follow recursive
-    def systemd_add_files_and_parse(pkg_systemd, path, service, keys):
-        # avoid infinite recursion
-        if systemd_append_file(pkg_systemd, oe.path.join(path, service)):
-            fullpath = oe.path.join(d.getVar("D"), path, service)
-            if service.find('.service') != -1:
-                # for *.service add *@.service
-                service_base = service.replace('.service', '')
-                systemd_add_files_and_parse(pkg_systemd, path, service_base + '@.service', keys)
-            if service.find('.socket') != -1:
-                # for *.socket add *.service and *@.service
-                service_base = service.replace('.socket', '')
-                systemd_add_files_and_parse(pkg_systemd, path, service_base + '.service', keys)
-                systemd_add_files_and_parse(pkg_systemd, path, service_base + '@.service', keys)
-            for key in keys.split():
-                # recurse all dependencies found in keys ('Also';'Conflicts';..) and add to files
-                cmd = "grep %s %s | sed 's,%s=,,g' | tr ',' '\\n'" % (key, shlex.quote(fullpath), key)
-                pipe = os.popen(cmd, 'r')
-                line = pipe.readline()
-                while line:
-                    line = line.replace('\n', '')
-                    systemd_add_files_and_parse(pkg_systemd, path, line, keys)
-                    line = pipe.readline()
-                pipe.close()
-
-    # Check service-files and call systemd_add_files_and_parse for each entry
-    def systemd_check_services():
-        searchpaths = [oe.path.join(d.getVar("sysconfdir"), "systemd", "system"),]
-        searchpaths.append(d.getVar("systemd_system_unitdir"))
-        systemd_packages = d.getVar('SYSTEMD_PACKAGES')
-
-        keys = 'Also'
-        # scan for all in SYSTEMD_SERVICE[]
-        for pkg_systemd in systemd_packages.split():
-            for service in get_package_var(d, 'SYSTEMD_SERVICE', pkg_systemd).split():
-                path_found = ''
-
-                # Deal with adding, for example, 'ifplugd@eth0.service' from
-                # 'ifplugd@.service'
-                base = None
-                at = service.find('@')
-                if at != -1:
-                    ext = service.rfind('.')
-                    base = service[:at] + '@' + service[ext:]
-
-                for path in searchpaths:
-                    if os.path.exists(oe.path.join(d.getVar("D"), path, service)):
-                        path_found = path
-                        break
-                    elif base is not None:
-                        if os.path.exists(oe.path.join(d.getVar("D"), path, base)):
-                            path_found = path
-                            break
-
-                if path_found != '':
-                    systemd_add_files_and_parse(pkg_systemd, path_found, service, keys)
-                else:
-                    bb.fatal("Didn't find service unit '{0}', specified in SYSTEMD_SERVICE:{1}. {2}".format(
-                        service, pkg_systemd, "Also looked for service unit '{0}'.".format(base) if base is not None else ""))
-
-    def systemd_create_presets(pkg, action):
-        presetf = oe.path.join(d.getVar("PKGD"), d.getVar("systemd_unitdir"), "system-preset/98-%s.preset" % pkg)
-        bb.utils.mkdirhier(os.path.dirname(presetf))
-        with open(presetf, 'a') as fd:
-            for service in d.getVar('SYSTEMD_SERVICE:%s' % pkg).split():
-                fd.write("%s %s\n" % (action,service))
-        d.appendVar("FILES:%s" % pkg, ' ' + oe.path.join(d.getVar("systemd_unitdir"), "system-preset/98-%s.preset" % pkg))
-
-    # Run all modifications once when creating package
-    if os.path.exists(d.getVar("D")):
-        for pkg in d.getVar('SYSTEMD_PACKAGES').split():
-            systemd_check_package(pkg)
-            if d.getVar('SYSTEMD_SERVICE:' + pkg):
-                systemd_generate_package_scripts(pkg)
-                action = get_package_var(d, 'SYSTEMD_AUTO_ENABLE', pkg)
-                if action in ("enable", "disable"):
-                    systemd_create_presets(pkg, action)
-                elif action not in ("mask", "preset"):
-                    bb.fatal("SYSTEMD_AUTO_ENABLE:%s '%s' is not 'enable', 'disable', 'mask' or 'preset'" % (pkg, action))
-        systemd_check_services()
-}
-
-PACKAGESPLITFUNCS:prepend = "systemd_populate_packages "
-
-python rm_systemd_unitdir (){
-    import shutil
-    if not bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d):
-        systemd_unitdir = oe.path.join(d.getVar("D"), d.getVar('systemd_unitdir'))
-        if os.path.exists(systemd_unitdir):
-            shutil.rmtree(systemd_unitdir)
-        systemd_libdir = os.path.dirname(systemd_unitdir)
-        if (os.path.exists(systemd_libdir) and not os.listdir(systemd_libdir)):
-            os.rmdir(systemd_libdir)
-}
-
-python rm_sysvinit_initddir (){
-    import shutil
-    sysv_initddir = oe.path.join(d.getVar("D"), (d.getVar('INIT_D_DIR') or "/etc/init.d"))
-
-    if bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d) and \
-        not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d) and \
-        os.path.exists(sysv_initddir):
-        systemd_system_unitdir = oe.path.join(d.getVar("D"), d.getVar('systemd_system_unitdir'))
-
-        # If systemd_system_unitdir contains anything, delete sysv_initddir
-        if (os.path.exists(systemd_system_unitdir) and os.listdir(systemd_system_unitdir)):
-            shutil.rmtree(sysv_initddir)
-}
-
-do_install[postfuncs] += "${RMINITDIR} "
-RMINITDIR:class-target = " rm_sysvinit_initddir rm_systemd_unitdir "
-RMINITDIR:class-nativesdk = " rm_sysvinit_initddir rm_systemd_unitdir "
-RMINITDIR = ""
-
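
For reference, recipes used the interface of the removed systemd.bbclass roughly as sketched below; the variable names match the class above, while the unit names are hypothetical:

    # Hypothetical recipe fragment using the systemd.bbclass interface removed above
    inherit systemd
    SYSTEMD_PACKAGES = "${PN}"
    SYSTEMD_SERVICE:${PN} = "mydaemon.service mydaemon.socket"
    SYSTEMD_AUTO_ENABLE:${PN} = "enable"
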
diff --git a/poky/meta/classes/terminal.bbclass b/poky/meta/classes/terminal.bbclass
index a564ee7..2dfc7db 100644
--- a/poky/meta/classes/terminal.bbclass
+++ b/poky/meta/classes/terminal.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 OE_TERMINAL ?= 'auto'
 OE_TERMINAL[type] = 'choice'
 OE_TERMINAL[choices] = 'auto none \
diff --git a/poky/meta/classes/testexport.bbclass b/poky/meta/classes/testexport.bbclass
index 1b0fb44..f7c5242 100644
--- a/poky/meta/classes/testexport.bbclass
+++ b/poky/meta/classes/testexport.bbclass
@@ -1,7 +1,6 @@
 # Copyright (C) 2016 Intel Corporation
 #
-# Released under the MIT license (see COPYING.MIT)
-#
+# SPDX-License-Identifier: MIT
 #
 # testexport.bbclass allows to execute runtime test outside OE environment.
 # Most of the tests are commands run on target image over ssh.
@@ -23,10 +22,9 @@
 TEST_TARGET_IP ?= ""
 TEST_SERVER_IP ?= ""
 
-TEST_EXPORT_SDK_PACKAGES ?= ""
+require conf/testexport.conf
+
 TEST_EXPORT_SDK_ENABLED ?= "0"
-TEST_EXPORT_SDK_NAME ?= "testexport-tools-nativesdk"
-TEST_EXPORT_SDK_DIR ?= "sdk"
 
 TEST_EXPORT_DEPENDS = ""
 TEST_EXPORT_DEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'cpio-native:do_populate_sysroot', '', d)}"
@@ -179,4 +177,4 @@
     tar.close()
     os.chdir(current_dir)
 
-inherit testimage
+IMAGE_CLASSES += "testimage"
diff --git a/poky/meta/classes/testimage.bbclass b/poky/meta/classes/testimage.bbclass
deleted file mode 100644
index 7898223..0000000
--- a/poky/meta/classes/testimage.bbclass
+++ /dev/null
@@ -1,508 +0,0 @@
-# Copyright (C) 2013 Intel Corporation
-#
-# Released under the MIT license (see COPYING.MIT)
-
-inherit metadata_scm
-inherit image-artifact-names
-
-# testimage.bbclass enables testing of qemu images using python unittests.
-# Most of the tests are commands run on target image over ssh.
-# To use it add testimage to global inherit and call your target image with -c testimage
-# You can try it out like this:
-# - first add IMAGE_CLASSES += "testimage" in local.conf
-# - build a qemu core-image-sato
-# - then bitbake core-image-sato -c testimage. That will run a standard suite of tests.
-#
-# The tests can be run automatically each time an image is built if you set
-# TESTIMAGE_AUTO = "1"
-
-TESTIMAGE_AUTO ??= "0"
-
-# You can set (or append to) TEST_SUITES in local.conf to select the tests
-# which you want to run for your target.
-# The test names are the module names in meta/lib/oeqa/runtime/cases.
-# Each name in TEST_SUITES represents a required test for the image. (no skipping allowed)
-# Appending "auto" means that it will try to run all tests that are suitable for the image (each test decides that on it's own).
-# Note that order in TEST_SUITES is relevant: tests are run in an order such that
-# tests mentioned in @skipUnlessPassed run before the tests that depend on them,
-# but without such dependencies, tests run in the order in which they are listed
-# in TEST_SUITES.
-#
-# A layer can add its own tests in lib/oeqa/runtime, provided it extends BBPATH as normal in its layer.conf.
-
-# TEST_LOG_DIR contains a command ssh log and may contain information about what command is running, output and return codes, and for qemu a boot log till login.
-# Booting is handled by this class, and it's not a test in itself.
-# TEST_QEMUBOOT_TIMEOUT can be used to set the maximum time in seconds the launch code will wait for the login prompt.
-# TEST_OVERALL_TIMEOUT can be used to set the maximum time in seconds the tests will be allowed to run (defaults to no limit).
-# TEST_QEMUPARAMS can be used to pass extra parameters to qemu, e.g. "-m 1024" for setting the amount of ram to 1 GB.
-# TEST_RUNQEMUPARAMS can be used to pass extra parameters to runqemu, e.g. "gl" to enable OpenGL acceleration.
-# QEMU_USE_KVM can be set to "" to disable the use of kvm (by default it is enabled if target_arch == build_arch or both of them are x86 archs)
-
-# TESTIMAGE_BOOT_PATTERNS can be used to override certain patterns used to communicate with the target when booting,
-# if a pattern is not specifically present on this variable a default will be used when booting the target.
-# TESTIMAGE_BOOT_PATTERNS[<flag>] overrides the pattern used for that specific flag, where flag comes from a list of accepted flags
-# e.g. normally the system boots and waits for a login prompt (login:), after that it sends the command: "root\n" to log in as the root user
-# if we wanted to log in as the hypothetical "webserver" user for example we could set the following:
-# TESTIMAGE_BOOT_PATTERNS = "send_login_user search_login_succeeded"
-# TESTIMAGE_BOOT_PATTERNS[send_login_user] = "webserver\n"
-# TESTIMAGE_BOOT_PATTERNS[search_login_succeeded] = "webserver@[a-zA-Z0-9\-]+:~#"
-# The accepted flags are the following: search_reached_prompt, send_login_user, search_login_succeeded, search_cmd_finished.
-# They are prefixed with either search/send, to differentiate if the pattern is meant to be sent or searched to/from the target terminal
-
-TEST_LOG_DIR ?= "${WORKDIR}/testimage"
-
-TEST_EXPORT_DIR ?= "${TMPDIR}/testimage/${PN}"
-TEST_INSTALL_TMP_DIR ?= "${WORKDIR}/testimage/install_tmp"
-TEST_NEEDED_PACKAGES_DIR ?= "${WORKDIR}/testimage/packages"
-TEST_EXTRACTED_DIR ?= "${TEST_NEEDED_PACKAGES_DIR}/extracted"
-TEST_PACKAGED_DIR ?= "${TEST_NEEDED_PACKAGES_DIR}/packaged"
-
-BASICTESTSUITE = "\
-    ping date df ssh scp python perl gi ptest parselogs \
-    logrotate connman systemd oe_syslog pam stap ldd xorg \
-    kernelmodule gcc buildcpio buildlzip buildgalculator \
-    dnf rpm opkg apt weston go rust"
-
-DEFAULT_TEST_SUITES = "${BASICTESTSUITE}"
-
-# musl doesn't support systemtap
-DEFAULT_TEST_SUITES:remove:libc-musl = "stap"
-
-# qemumips is quite slow and has reached the timeout limit several times on the YP build cluster, so
-# mitigate this by removing build tests for qemumips machines.
-MIPSREMOVE ??= "buildcpio buildlzip buildgalculator"
-DEFAULT_TEST_SUITES:remove:qemumips = "${MIPSREMOVE}"
-DEFAULT_TEST_SUITES:remove:qemumips64 = "${MIPSREMOVE}"
-
-TEST_SUITES ?= "${DEFAULT_TEST_SUITES}"
-
-QEMU_USE_KVM ?= "1"
-TEST_QEMUBOOT_TIMEOUT ?= "1000"
-TEST_OVERALL_TIMEOUT ?= ""
-TEST_TARGET ?= "qemu"
-TEST_QEMUPARAMS ?= ""
-TEST_RUNQEMUPARAMS ?= ""
-
-TESTIMAGE_BOOT_PATTERNS ?= ""
-
-TESTIMAGEDEPENDS = ""
-TESTIMAGEDEPENDS:append:qemuall = " qemu-native:do_populate_sysroot qemu-helper-native:do_populate_sysroot qemu-helper-native:do_addto_recipe_sysroot"
-TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'cpio-native:do_populate_sysroot', '', d)}"
-TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'dnf-native:do_populate_sysroot', '', d)}"
-TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'createrepo-c-native:do_populate_sysroot', '', d)}"
-TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'ipk', 'opkg-utils-native:do_populate_sysroot package-index:do_package_index', '', d)}"
-TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'deb', 'apt-native:do_populate_sysroot  package-index:do_package_index', '', d)}"
-
-TESTIMAGELOCK = "${TMPDIR}/testimage.lock"
-TESTIMAGELOCK:qemuall = ""
-
-TESTIMAGE_DUMP_DIR ?= "${LOG_DIR}/runtime-hostdump/"
-
-TESTIMAGE_UPDATE_VARS ?= "DL_DIR WORKDIR DEPLOY_DIR"
-
-testimage_dump_target () {
-    top -bn1
-    ps
-    free
-    df
-    # The next command will export the default gateway IP
-    export DEFAULT_GATEWAY=$(ip route | awk '/default/ { print $3}')
-    ping -c3 $DEFAULT_GATEWAY
-    dmesg
-    netstat -an
-    ip address
-    # Next command will dump logs from /var/log/
-    find /var/log/ -type f 2>/dev/null -exec echo "====================" \; -exec echo {} \; -exec echo "====================" \; -exec cat {} \; -exec echo "" \;
-}
-
-testimage_dump_host () {
-    top -bn1
-    iostat -x -z -N -d -p ALL 20 2
-    ps -ef
-    free
-    df
-    memstat
-    dmesg
-    ip -s link
-    netstat -an
-}
-
-testimage_dump_monitor () {
-    query-status
-    query-block
-    dump-guest-memory {"paging":false,"protocol":"file:%s.img"}
-}
-
-python do_testimage() {
-    testimage_main(d)
-}
-
-addtask testimage
-do_testimage[nostamp] = "1"
-do_testimage[network] = "1"
-do_testimage[depends] += "${TESTIMAGEDEPENDS}"
-do_testimage[lockfiles] += "${TESTIMAGELOCK}"
-
-def testimage_sanity(d):
-    if (d.getVar('TEST_TARGET') == 'simpleremote'
-        and (not d.getVar('TEST_TARGET_IP')
-             or not d.getVar('TEST_SERVER_IP'))):
-        bb.fatal('When TEST_TARGET is set to "simpleremote" '
-                 'TEST_TARGET_IP and TEST_SERVER_IP are needed too.')
-
-def get_testimage_configuration(d, test_type, machine):
-    import platform
-    from oeqa.utils.metadata import get_layers
-    configuration = {'TEST_TYPE': test_type,
-                    'MACHINE': machine,
-                    'DISTRO': d.getVar("DISTRO"),
-                    'IMAGE_BASENAME': d.getVar("IMAGE_BASENAME"),
-                    'IMAGE_PKGTYPE': d.getVar("IMAGE_PKGTYPE"),
-                    'STARTTIME': d.getVar("DATETIME"),
-                    'HOST_DISTRO': oe.lsb.distro_identifier().replace(' ', '-'),
-                    'LAYERS': get_layers(d.getVar("BBLAYERS"))}
-    return configuration
-get_testimage_configuration[vardepsexclude] = "DATETIME"
-
-def get_testimage_json_result_dir(d):
-    json_result_dir = os.path.join(d.getVar("LOG_DIR"), 'oeqa')
-    custom_json_result_dir = d.getVar("OEQA_JSON_RESULT_DIR")
-    if custom_json_result_dir:
-        json_result_dir = custom_json_result_dir
-    return json_result_dir
-
-def get_testimage_result_id(configuration):
-    return '%s_%s_%s_%s' % (configuration['TEST_TYPE'], configuration['IMAGE_BASENAME'], configuration['MACHINE'], configuration['STARTTIME'])
-
-def get_testimage_boot_patterns(d):
-    from collections import defaultdict
-    boot_patterns = defaultdict(str)
-    # Only accept certain values
-    accepted_patterns = ['search_reached_prompt', 'send_login_user', 'search_login_succeeded', 'search_cmd_finished']
-    # Not all patterns need to be overridden, e.g. perhaps we only want to change the user
-    boot_patterns_flags = d.getVarFlags('TESTIMAGE_BOOT_PATTERNS') or {}
-    if boot_patterns_flags:
-        patterns_set = [p for p in boot_patterns_flags.items() if p[0] in d.getVar('TESTIMAGE_BOOT_PATTERNS').split()]
-        for flag, flagval in patterns_set:
-                if flag not in accepted_patterns:
-                    bb.fatal('Testimage: The only accepted boot patterns are: search_reached_prompt,send_login_user, \
-                    search_login_succeeded,search_cmd_finished\n Make sure your TESTIMAGE_BOOT_PATTERNS=%s \
-                    contains an accepted flag.' % d.getVar('TESTIMAGE_BOOT_PATTERNS'))
-                    return
-                # We know boot prompt is searched through in binary format, others might be expressions
-                if flag == 'search_reached_prompt':
-                    boot_patterns[flag] = flagval.encode()
-                else:
-                    boot_patterns[flag] = flagval.encode().decode('unicode-escape')
-    return boot_patterns
-
-
-def testimage_main(d):
-    import os
-    import json
-    import signal
-    import logging
-    import shutil
-
-    from bb.utils import export_proxies
-    from oeqa.runtime.context import OERuntimeTestContext
-    from oeqa.runtime.context import OERuntimeTestContextExecutor
-    from oeqa.core.target.qemu import supported_fstypes
-    from oeqa.core.utils.test import getSuiteCases
-    from oeqa.utils import make_logger_bitbake_compatible
-
-    def sigterm_exception(signum, stackframe):
-        """
-        Catch SIGTERM from worker in order to stop qemu.
-        """
-        os.kill(os.getpid(), signal.SIGINT)
-
-    def handle_test_timeout(timeout):
-        bb.warn("Global test timeout reached (%s seconds), stopping the tests." %(timeout))
-        os.kill(os.getpid(), signal.SIGINT)
-
-    testimage_sanity(d)
-
-    if (d.getVar('IMAGE_PKGTYPE') == 'rpm'
-       and ('dnf' in d.getVar('TEST_SUITES') or 'auto' in d.getVar('TEST_SUITES'))):
-        create_rpm_index(d)
-
-    logger = make_logger_bitbake_compatible(logging.getLogger("BitBake"))
-    pn = d.getVar("PN")
-
-    bb.utils.mkdirhier(d.getVar("TEST_LOG_DIR"))
-
-    image_name = ("%s/%s" % (d.getVar('DEPLOY_DIR_IMAGE'),
-                             d.getVar('IMAGE_LINK_NAME')))
-
-    tdname = "%s.testdata.json" % image_name
-    try:
-        with open(tdname, "r") as f:
-            td = json.load(f)
-    except FileNotFoundError as err:
-        bb.fatal('File %s not found (%s).\nHave you built the image with INHERIT += "testimage" in the conf/local.conf?' % (tdname, err))
-
-    # Some variables need to be updated (mostly paths) with the
-    # ones of the current environment because some tests require them.
-    for var in d.getVar('TESTIMAGE_UPDATE_VARS').split():
-        td[var] = d.getVar(var)
-
-    image_manifest = "%s.manifest" % image_name
-    image_packages = OERuntimeTestContextExecutor.readPackagesManifest(image_manifest)
-
-    extract_dir = d.getVar("TEST_EXTRACTED_DIR")
-
-    # Get machine
-    machine = d.getVar("MACHINE")
-
-    # Get rootfs
-    fstypes = d.getVar('IMAGE_FSTYPES').split()
-    if d.getVar("TEST_TARGET") == "qemu":
-        fstypes = [fs for fs in fstypes if fs in supported_fstypes]
-        if not fstypes:
-            bb.fatal('Unsupported image type built. Add a compatible image to '
-                     'IMAGE_FSTYPES. Supported types: %s' %
-                     ', '.join(supported_fstypes))
-    qfstype = fstypes[0]
-    qdeffstype = d.getVar("QB_DEFAULT_FSTYPE")
-    if qdeffstype:
-        qfstype = qdeffstype
-    rootfs = '%s.%s' % (image_name, qfstype)
-
-    # Get tmpdir (not really used, just for compatibility)
-    tmpdir = d.getVar("TMPDIR")
-
-    # Get deploy_dir_image (not really used, just for compatibility)
-    dir_image = d.getVar("DEPLOY_DIR_IMAGE")
-
-    # Get bootlog
-    bootlog = os.path.join(d.getVar("TEST_LOG_DIR"),
-                           'qemu_boot_log.%s' % d.getVar('DATETIME'))
-
-    # Get display
-    display = d.getVar("BB_ORIGENV").getVar("DISPLAY")
-
-    # Get kernel
-    kernel_name = ('%s-%s.bin' % (d.getVar("KERNEL_IMAGETYPE"), machine))
-    kernel = os.path.join(d.getVar("DEPLOY_DIR_IMAGE"), kernel_name)
-
-    # Get boottime
-    boottime = int(d.getVar("TEST_QEMUBOOT_TIMEOUT"))
-
-    # Get use_kvm
-    kvm = oe.types.qemu_use_kvm(d.getVar('QEMU_USE_KVM'), d.getVar('TARGET_ARCH'))
-
-    # Get OVMF
-    ovmf = d.getVar("QEMU_USE_OVMF")
-
-    slirp = False
-    if d.getVar("QEMU_USE_SLIRP"):
-        slirp = True
-
-    # TODO: We use the current implementation of qemu runner because of
-    # time constraints; the qemu runner really needs a refactor too.
-    target_kwargs = { 'machine'     : machine,
-                      'rootfs'      : rootfs,
-                      'tmpdir'      : tmpdir,
-                      'dir_image'   : dir_image,
-                      'display'     : display,
-                      'kernel'      : kernel,
-                      'boottime'    : boottime,
-                      'bootlog'     : bootlog,
-                      'kvm'         : kvm,
-                      'slirp'       : slirp,
-                      'dump_dir'    : d.getVar("TESTIMAGE_DUMP_DIR"),
-                      'serial_ports': len(d.getVar("SERIAL_CONSOLES").split()),
-                      'ovmf'        : ovmf,
-                      'tmpfsdir'    : d.getVar("RUNQEMU_TMPFS_DIR"),
-                    }
-
-    if d.getVar("TESTIMAGE_BOOT_PATTERNS"):
-        target_kwargs['boot_patterns'] = get_testimage_boot_patterns(d)
-
-    # hardware controlled targets might need further access
-    target_kwargs['powercontrol_cmd'] = d.getVar("TEST_POWERCONTROL_CMD") or None
-    target_kwargs['powercontrol_extra_args'] = d.getVar("TEST_POWERCONTROL_EXTRA_ARGS") or ""
-    target_kwargs['serialcontrol_cmd'] = d.getVar("TEST_SERIALCONTROL_CMD") or None
-    target_kwargs['serialcontrol_extra_args'] = d.getVar("TEST_SERIALCONTROL_EXTRA_ARGS") or ""
-    target_kwargs['testimage_dump_monitor'] = d.getVar("testimage_dump_monitor") or ""
-    target_kwargs['testimage_dump_target'] = d.getVar("testimage_dump_target") or ""
-
-    def export_ssh_agent(d):
-        import os
-
-        variables = ['SSH_AGENT_PID', 'SSH_AUTH_SOCK']
-        for v in variables:
-            if v not in os.environ.keys():
-                val = d.getVar(v)
-                if val is not None:
-                    os.environ[v] = val
-
-    export_ssh_agent(d)
-
-    # runtime tests use the network to download projects for building
-    export_proxies(d)
-
-    # we need the host dumper in test context
-    host_dumper = OERuntimeTestContextExecutor.getHostDumper(
-        d.getVar("testimage_dump_host"),
-        d.getVar("TESTIMAGE_DUMP_DIR"))
-
-    # the robot dance
-    target = OERuntimeTestContextExecutor.getTarget(
-        d.getVar("TEST_TARGET"), logger, d.getVar("TEST_TARGET_IP"),
-        d.getVar("TEST_SERVER_IP"), **target_kwargs)
-
-    # test context
-    tc = OERuntimeTestContext(td, logger, target, host_dumper,
-                              image_packages, extract_dir)
-
-    # Load tests before starting the target
-    test_paths = get_runtime_paths(d)
-    test_modules = d.getVar('TEST_SUITES').split()
-    if not test_modules:
-        bb.fatal('Empty test suite, please verify TEST_SUITES variable')
-
-    tc.loadTests(test_paths, modules=test_modules)
-
-    suitecases = getSuiteCases(tc.suites)
-    if not suitecases:
-        bb.fatal('Empty test suite, please verify TEST_SUITES variable')
-    else:
-        bb.debug(2, 'test suites:\n\t%s' % '\n\t'.join([str(c) for c in suitecases]))
-
-    package_extraction(d, tc.suites)
-
-    results = None
-    complete = False
-    orig_sigterm_handler = signal.signal(signal.SIGTERM, sigterm_exception)
-    try:
-        # We need to check if runqemu ends unexpectedly
-        # or if the worker sends us a SIGTERM
-        tc.target.start(params=d.getVar("TEST_QEMUPARAMS"), runqemuparams=d.getVar("TEST_RUNQEMUPARAMS"))
-        import threading
-        try:
-            threading.Timer(int(d.getVar("TEST_OVERALL_TIMEOUT")), handle_test_timeout, (int(d.getVar("TEST_OVERALL_TIMEOUT")),)).start()
-        except ValueError:
-            pass
-        results = tc.runTests()
-        complete = True
-    except (KeyboardInterrupt, BlockingIOError) as err:
-        if isinstance(err, KeyboardInterrupt):
-            bb.error('testimage interrupted, shutting down...')
-        else:
-            bb.error('runqemu failed, shutting down...')
-        if results:
-            results.stop()
-        results = tc.results
-    finally:
-        signal.signal(signal.SIGTERM, orig_sigterm_handler)
-        tc.target.stop()
-
-    # Show results (if we have them)
-    if results:
-        configuration = get_testimage_configuration(d, 'runtime', machine)
-        results.logDetails(get_testimage_json_result_dir(d),
-                        configuration,
-                        get_testimage_result_id(configuration),
-                        dump_streams=d.getVar('TESTREPORT_FULLLOGS'))
-        results.logSummary(pn)
-
-    # Copy additional logs to tmp/log/oeqa so it's easier to find them
-    targetdir = os.path.join(get_testimage_json_result_dir(d), d.getVar("PN"))
-    os.makedirs(targetdir, exist_ok=True)
-    os.symlink(bootlog, os.path.join(targetdir, os.path.basename(bootlog)))
-    os.symlink(d.getVar("BB_LOGFILE"), os.path.join(targetdir, os.path.basename(d.getVar("BB_LOGFILE") + "." + d.getVar('DATETIME'))))
-
-    if not results or not complete:
-        bb.fatal('%s - FAILED - tests were interrupted during execution, check the logs in %s' % (pn, d.getVar("LOG_DIR")), forcelog=True)
-    if not results.wasSuccessful():
-        bb.fatal('%s - FAILED - also check the logs in %s' % (pn, d.getVar("LOG_DIR")), forcelog=True)
-
-def get_runtime_paths(d):
-    """
-    Returns a list of paths where runtime tests must reside.
-
-    Runtime tests are expected in <LAYER_DIR>/lib/oeqa/runtime/cases/
-    """
-    paths = []
-
-    for layer in d.getVar('BBLAYERS').split():
-        path = os.path.join(layer, 'lib/oeqa/runtime/cases')
-        if os.path.isdir(path):
-            paths.append(path)
-    return paths
-
-def create_index(arg):
-    import subprocess
-
-    index_cmd = arg
-    try:
-        bb.note("Executing '%s' ..." % index_cmd)
-        result = subprocess.check_output(index_cmd,
-                                        stderr=subprocess.STDOUT,
-                                        shell=True)
-        result = result.decode('utf-8')
-    except subprocess.CalledProcessError as e:
-        return("Index creation command '%s' failed with return code "
-               '%d:\n%s' % (e.cmd, e.returncode, e.output.decode("utf-8")))
-    if result:
-        bb.note(result)
-    return None
-
-def create_rpm_index(d):
-    import glob
-    # Index RPMs
-    rpm_createrepo = bb.utils.which(os.getenv('PATH'), "createrepo_c")
-    index_cmds = []
-    archs = (d.getVar('ALL_MULTILIB_PACKAGE_ARCHS') or '').replace('-', '_')
-
-    for arch in archs.split():
-        rpm_dir = os.path.join(d.getVar('DEPLOY_DIR_RPM'), arch)
-        idx_path = os.path.join(d.getVar('WORKDIR'), 'oe-testimage-repo', arch)
-
-        if not os.path.isdir(rpm_dir):
-            continue
-
-        lockfilename = os.path.join(d.getVar('DEPLOY_DIR_RPM'), 'rpm.lock')
-        lf = bb.utils.lockfile(lockfilename, False)
-        oe.path.copyhardlinktree(rpm_dir, idx_path)
-        # Full indexes overload a 256MB image so reduce the number of rpms
-        # in the feed by filtering to specific packages needed by the tests.
-        package_list = glob.glob(idx_path + "*/*.rpm")
-
-        for pkg in package_list:
-            if os.path.basename(pkg).startswith(("curl-ptest")):
-                bb.utils.remove(pkg)
-
-            if not os.path.basename(pkg).startswith(("rpm", "run-postinsts", "busybox", "bash", "update-alternatives", "libc6", "curl", "musl")):
-                bb.utils.remove(pkg)
-
-        bb.utils.unlockfile(lf)
-        cmd = '%s --update -q %s' % (rpm_createrepo, idx_path)
-
-        # Create repodata
-        result = create_index(cmd)
-        if result:
-            bb.fatal('%s' % ('\n'.join(result)))
-
-def package_extraction(d, test_suites):
-    from oeqa.utils.package_manager import find_packages_to_extract
-    from oeqa.utils.package_manager import extract_packages
-
-    bb.utils.remove(d.getVar("TEST_NEEDED_PACKAGES_DIR"), recurse=True)
-    packages = find_packages_to_extract(test_suites)
-    if packages:
-        bb.utils.mkdirhier(d.getVar("TEST_INSTALL_TMP_DIR"))
-        bb.utils.mkdirhier(d.getVar("TEST_PACKAGED_DIR"))
-        bb.utils.mkdirhier(d.getVar("TEST_EXTRACTED_DIR"))
-        extract_packages(d, packages)
-
-testimage_main[vardepsexclude] += "BB_ORIGENV DATETIME"
-
-python () {
-    if oe.types.boolean(d.getVar("TESTIMAGE_AUTO") or "False"):
-        bb.build.addtask("testimage", "do_build", "do_image_complete", d)
-}
-
-inherit testsdk
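
testimage.bbclass, removed above, is the class that boots the built image (typically under qemu) and runs the OEQA runtime suites against it. A minimal, illustrative local.conf fragment exercising it, using only variables that appear in the class itself, would be:

    INHERIT += "testimage"
    TESTIMAGE_AUTO = "1"
    TEST_SUITES = "ping ssh"

With TESTIMAGE_AUTO set, the anonymous python block above wires do_testimage in before do_build and after do_image_complete; otherwise the tests can be run on demand with "bitbake <image-name> -c testimage".
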
diff --git a/poky/meta/classes/testsdk.bbclass b/poky/meta/classes/testsdk.bbclass
deleted file mode 100644
index 8b2e74f..0000000
--- a/poky/meta/classes/testsdk.bbclass
+++ /dev/null
@@ -1,52 +0,0 @@
-# Copyright (C) 2013 - 2016 Intel Corporation
-#
-# Released under the MIT license (see COPYING.MIT)
-
-# testsdk.bbclass enables testing for SDK and Extensible SDK
-#
-# To run SDK tests, run the commands:
-# $ bitbake <image-name> -c populate_sdk
-# $ bitbake <image-name> -c testsdk
-#
-# To run eSDK tests, run the commands:
-# $ bitbake <image-name> -c populate_sdk_ext
-# $ bitbake <image-name> -c testsdkext
-#
-# where "<image-name>" is an image like core-image-sato.
-
-TESTSDK_CLASS_NAME ?= "oeqa.sdk.testsdk.TestSDK"
-TESTSDKEXT_CLASS_NAME ?= "oeqa.sdkext.testsdk.TestSDKExt"
-
-def import_and_run(name, d):
-    import importlib
-
-    class_name = d.getVar(name)
-    if class_name:
-        module, cls = class_name.rsplit('.', 1)
-        m = importlib.import_module(module)
-        c = getattr(m, cls)()
-        c.run(d)
-    else:
-        bb.warn('No tests were run because %s did not define a class' % name)
-
-import_and_run[vardepsexclude] = "DATETIME BB_ORIGENV"
-
-python do_testsdk() {
-    import_and_run('TESTSDK_CLASS_NAME', d)
-}
-addtask testsdk
-do_testsdk[nostamp] = "1"
-do_testsdk[network] = "1"
-
-python do_testsdkext() {
-    import_and_run('TESTSDKEXT_CLASS_NAME', d)
-}
-addtask testsdkext
-do_testsdkext[nostamp] = "1"
-do_testsdkext[network] = "1"
-
-python () {
-    if oe.types.boolean(d.getVar("TESTIMAGE_AUTO") or "False"):
-        bb.build.addtask("testsdk", None, "do_populate_sdk", d)
-        bb.build.addtask("testsdkext", None, "do_populate_sdk_ext", d)
-}
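
The import_and_run() helper above only requires that the configured name resolve to an importable class exposing a run(d) method, so a layer can point SDK testing at its own implementation. A rough sketch (module path and class name are hypothetical):

    # <layer>/lib/oeqa/sdk/mytestsdk.py
    class MyTestSDK:
        def run(self, d):
            pass  # drive the SDK tests from here

    # in a distro or local configuration:
    TESTSDK_CLASS_NAME = "oeqa.sdk.mytestsdk.MyTestSDK"
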
diff --git a/poky/meta/classes/texinfo.bbclass b/poky/meta/classes/texinfo.bbclass
deleted file mode 100644
index 68c9d4f..0000000
--- a/poky/meta/classes/texinfo.bbclass
+++ /dev/null
@@ -1,18 +0,0 @@
-# This class is inherited by recipes whose upstream packages invoke the
-# texinfo utilities at build-time. Native and cross recipes are made to use the
-# dummy scripts provided by texinfo-dummy-native, for improved performance.
-# Target architecture recipes use the genuine Texinfo utilities. By default,
-# they use the Texinfo utilities on the host system. If you want to use the
-# Texinfo recipe, you can remove texinfo-native from ASSUME_PROVIDED and
-# makeinfo from SANITY_REQUIRED_UTILITIES.
-
-TEXDEP = "${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', 'texinfo-replacement-native', 'texinfo-dummy-native', d)}"
-TEXDEP:class-native = "texinfo-dummy-native"
-TEXDEP:class-cross = "texinfo-dummy-native"
-TEXDEP:class-crosssdk = "texinfo-dummy-native"
-TEXDEP:class-cross-canadian = "texinfo-dummy-native"
-DEPENDS:append = " ${TEXDEP}"
-
-# libtool-cross doesn't inherit cross
-TEXDEP:pn-libtool-cross = "texinfo-dummy-native"
-
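
As the texinfo.bbclass header notes, a build that wants documentation generated with the genuine Texinfo recipe rather than the host tools can drop the host shortcuts; expressed as configuration (assuming the default lists contain these entries), that is roughly:

    ASSUME_PROVIDED:remove = "texinfo-native"
    SANITY_REQUIRED_UTILITIES:remove = "makeinfo"
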
diff --git a/poky/meta/classes/toaster.bbclass b/poky/meta/classes/toaster.bbclass
index f365c09..03c4f3a 100644
--- a/poky/meta/classes/toaster.bbclass
+++ b/poky/meta/classes/toaster.bbclass
@@ -3,7 +3,7 @@
 #
 # Copyright (C) 2013 Intel Corporation
 #
-# Released under the MIT license (see COPYING.MIT)
+# SPDX-License-Identifier: MIT
 #
 # This bbclass is designed to extract data used by OE-Core during the build process,
 # for recording in the Toaster system.
diff --git a/poky/meta/classes/toolchain-scripts-base.bbclass b/poky/meta/classes/toolchain-scripts-base.bbclass
deleted file mode 100644
index 2489b9d..0000000
--- a/poky/meta/classes/toolchain-scripts-base.bbclass
+++ /dev/null
@@ -1,11 +0,0 @@
-# This function creates a version information file
-toolchain_create_sdk_version () {
-	local versionfile=$1
-	rm -f $versionfile
-	touch $versionfile
-	echo 'Distro: ${DISTRO}' >> $versionfile
-	echo 'Distro Version: ${DISTRO_VERSION}' >> $versionfile
-	echo 'Metadata Revision: ${METADATA_REVISION}' >> $versionfile
-	echo 'Timestamp: ${DATETIME}' >> $versionfile
-}
-toolchain_create_sdk_version[vardepsexclude] = "DATETIME"
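
toolchain_create_sdk_version simply writes four identification lines into the file it is given; the resulting version file looks roughly like this (values depend on the distro and build):

    Distro: poky
    Distro Version: 4.1
    Metadata Revision: <git revision>
    Timestamp: <DATETIME>
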
diff --git a/poky/meta/classes/toolchain-scripts.bbclass b/poky/meta/classes/toolchain-scripts.bbclass
deleted file mode 100644
index 16f1e17..0000000
--- a/poky/meta/classes/toolchain-scripts.bbclass
+++ /dev/null
@@ -1,230 +0,0 @@
-inherit toolchain-scripts-base siteinfo kernel-arch
-
-# We want to be able to change the value of MULTIMACH_TARGET_SYS, because it
-# doesn't always match our expectations... but we default to the stock value
-REAL_MULTIMACH_TARGET_SYS ?= "${MULTIMACH_TARGET_SYS}"
-TARGET_CC_ARCH:append:libc-musl = " -mmusl"
-
-# default debug prefix map isn't valid in the SDK
-DEBUG_PREFIX_MAP = ""
-
-EXPORT_SDK_PS1 = "${@ 'export PS1=\\"%s\\"' % d.getVar('SDK_PS1') if d.getVar('SDK_PS1') else ''}"
-
-# This function creates an environment-setup-script for use in a deployable SDK
-toolchain_create_sdk_env_script () {
-	# Create environment setup script.  Remember that $SDKTARGETSYSROOT should
-	# only be expanded on the target at runtime.
-	base_sbindir=${10:-${base_sbindir_nativesdk}}
-	base_bindir=${9:-${base_bindir_nativesdk}}
-	sbindir=${8:-${sbindir_nativesdk}}
-	sdkpathnative=${7:-${SDKPATHNATIVE}}
-	prefix=${6:-${prefix_nativesdk}}
-	bindir=${5:-${bindir_nativesdk}}
-	libdir=${4:-${libdir}}
-	sysroot=${3:-${SDKTARGETSYSROOT}}
-	multimach_target_sys=${2:-${REAL_MULTIMACH_TARGET_SYS}}
-	script=${1:-${SDK_OUTPUT}/${SDKPATH}/environment-setup-$multimach_target_sys}
-	rm -f $script
-	touch $script
-
-	echo '# Check for LD_LIBRARY_PATH being set, which can break SDK and generally is a bad practice' >> $script
-	echo '# http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html#AEN80' >> $script
-	echo '# http://xahlee.info/UnixResource_dir/_/ldpath.html' >> $script
-	echo '# Only disable this check if you absolutely know what you are doing!' >> $script
-	echo 'if [ ! -z "$LD_LIBRARY_PATH" ]; then' >> $script
-	echo "    echo \"Your environment is misconfigured, you probably need to 'unset LD_LIBRARY_PATH'\"" >> $script
-	echo "    echo \"but please check why this was set in the first place and that it's safe to unset.\"" >> $script
-	echo '    echo "The SDK will not operate correctly in most cases when LD_LIBRARY_PATH is set."' >> $script
-	echo '    echo "For more references see:"' >> $script
-	echo '    echo "  http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html#AEN80"' >> $script
-	echo '    echo "  http://xahlee.info/UnixResource_dir/_/ldpath.html"' >> $script
-	echo '    return 1' >> $script
-	echo 'fi' >> $script
-
-	echo "${EXPORT_SDK_PS1}" >> $script
-	echo 'export SDKTARGETSYSROOT='"$sysroot" >> $script
-	EXTRAPATH=""
-	for i in ${CANADIANEXTRAOS}; do
-		EXTRAPATH="$EXTRAPATH:$sdkpathnative$bindir/${TARGET_ARCH}${TARGET_VENDOR}-$i"
-	done
-	echo "export PATH=$sdkpathnative$bindir:$sdkpathnative$sbindir:$sdkpathnative$base_bindir:$sdkpathnative$base_sbindir:$sdkpathnative$bindir/../${HOST_SYS}/bin:$sdkpathnative$bindir/${TARGET_SYS}"$EXTRAPATH':$PATH' >> $script
-	echo 'export PKG_CONFIG_SYSROOT_DIR=$SDKTARGETSYSROOT' >> $script
-	echo 'export PKG_CONFIG_PATH=$SDKTARGETSYSROOT'"$libdir"'/pkgconfig:$SDKTARGETSYSROOT'"$prefix"'/share/pkgconfig' >> $script
-	echo 'export CONFIG_SITE=${SDKPATH}/site-config-'"${multimach_target_sys}" >> $script
-	echo "export OECORE_NATIVE_SYSROOT=\"$sdkpathnative\"" >> $script
-	echo 'export OECORE_TARGET_SYSROOT="$SDKTARGETSYSROOT"' >> $script
-	echo "export OECORE_ACLOCAL_OPTS=\"-I $sdkpathnative/usr/share/aclocal\"" >> $script
-	echo 'export OECORE_BASELIB="${baselib}"' >> $script
-	echo 'export OECORE_TARGET_ARCH="${TARGET_ARCH}"' >>$script
-	echo 'export OECORE_TARGET_OS="${TARGET_OS}"' >>$script
-
-	echo 'unset command_not_found_handle' >> $script
-
-	toolchain_shared_env_script
-}
-
-# This function creates an environment-setup-script in B which enables
-# an OE-core IDE to integrate with the build tree
-# Caller must ensure CONFIG_SITE is setup
-toolchain_create_tree_env_script () {
-	script=${B}/environment-setup-${REAL_MULTIMACH_TARGET_SYS}
-	rm -f $script
-	touch $script
-	echo 'standalone_sysroot_target="${STAGING_DIR}/${MACHINE}"' >> $script
-	echo 'standalone_sysroot_native="${STAGING_DIR}/${BUILD_ARCH}"' >> $script
-	echo 'orig=`pwd`; cd ${COREBASE}; . ./oe-init-build-env ${TOPDIR}; cd $orig' >> $script
-	echo 'export PATH=$standalone_sysroot_native/${bindir_native}:$standalone_sysroot_native/${bindir_native}/${TARGET_SYS}:$PATH' >> $script
-	echo 'export PKG_CONFIG_SYSROOT_DIR=$standalone_sysroot_target' >> $script
-	echo 'export PKG_CONFIG_PATH=$standalone_sysroot_target'"$libdir"'/pkgconfig:$standalone_sysroot_target'"$prefix"'/share/pkgconfig' >> $script
-	echo 'export CONFIG_SITE="${CONFIG_SITE}"' >> $script
-	echo 'export SDKTARGETSYSROOT=$standalone_sysroot_target' >> $script
-	echo 'export OECORE_NATIVE_SYSROOT=$standalone_sysroot_native' >> $script
-	echo 'export OECORE_TARGET_SYSROOT=$standalone_sysroot_target' >> $script
-	echo 'export OECORE_ACLOCAL_OPTS="-I $standalone_sysroot_native/usr/share/aclocal"' >> $script
-	echo 'export OECORE_BASELIB="${baselib}"' >> $script
-	echo 'export OECORE_TARGET_ARCH="${TARGET_ARCH}"' >>$script
-	echo 'export OECORE_TARGET_OS="${TARGET_OS}"' >>$script
-
-	toolchain_shared_env_script
-
-	cat >> $script <<EOF
-
-if [ -d "\$OECORE_NATIVE_SYSROOT/${datadir}/post-relocate-setup.d/" ]; then
-	for s in \$OECORE_NATIVE_SYSROOT/${datadir}/post-relocate-setup.d/*; do
-		if [ ! -x \$s ]; then
-			continue
-		fi
-		\$s "\$1"
-		status=\$?
-		if [ \$status != 0 ]; then
-			echo "post-relocate command \"\$s \$1\" failed with status \$status" >&2
-			exit \$status
-		fi
-	done
-fi
-EOF
-}
-
-toolchain_shared_env_script () {
-	echo 'export CC="${TARGET_PREFIX}gcc ${TARGET_CC_ARCH} --sysroot=$SDKTARGETSYSROOT"' >> $script
-	echo 'export CXX="${TARGET_PREFIX}g++ ${TARGET_CC_ARCH} --sysroot=$SDKTARGETSYSROOT"' >> $script
-	echo 'export CPP="${TARGET_PREFIX}gcc -E ${TARGET_CC_ARCH} --sysroot=$SDKTARGETSYSROOT"' >> $script
-	echo 'export AS="${TARGET_PREFIX}as ${TARGET_AS_ARCH}"' >> $script
-	echo 'export LD="${TARGET_PREFIX}ld ${TARGET_LD_ARCH} --sysroot=$SDKTARGETSYSROOT"' >> $script
-	echo 'export GDB=${TARGET_PREFIX}gdb' >> $script
-	echo 'export STRIP=${TARGET_PREFIX}strip' >> $script
-	echo 'export RANLIB=${TARGET_PREFIX}ranlib' >> $script
-	echo 'export OBJCOPY=${TARGET_PREFIX}objcopy' >> $script
-	echo 'export OBJDUMP=${TARGET_PREFIX}objdump' >> $script
-	echo 'export READELF=${TARGET_PREFIX}readelf' >> $script
-	echo 'export AR=${TARGET_PREFIX}ar' >> $script
-	echo 'export NM=${TARGET_PREFIX}nm' >> $script
-	echo 'export M4=m4' >> $script
-	echo 'export TARGET_PREFIX=${TARGET_PREFIX}' >> $script
-	echo 'export CONFIGURE_FLAGS="--target=${TARGET_SYS} --host=${TARGET_SYS} --build=${SDK_ARCH}-linux --with-libtool-sysroot=$SDKTARGETSYSROOT"' >> $script
-	echo 'export CFLAGS="${TARGET_CFLAGS}"' >> $script
-	echo 'export CXXFLAGS="${TARGET_CXXFLAGS}"' >> $script
-	echo 'export LDFLAGS="${TARGET_LDFLAGS}"' >> $script
-	echo 'export CPPFLAGS="${TARGET_CPPFLAGS}"' >> $script
-	echo 'export KCFLAGS="--sysroot=$SDKTARGETSYSROOT"' >> $script
-	echo 'export OECORE_DISTRO_VERSION="${DISTRO_VERSION}"' >> $script
-	echo 'export OECORE_SDK_VERSION="${SDK_VERSION}"' >> $script
-	echo 'export ARCH=${ARCH}' >> $script
-	echo 'export CROSS_COMPILE=${TARGET_PREFIX}' >> $script
-	echo 'export OECORE_TUNE_CCARGS="${TUNE_CCARGS}"' >> $script
-
-    cat >> $script <<EOF
-
-# Append environment subscripts
-if [ -d "\$OECORE_TARGET_SYSROOT/environment-setup.d" ]; then
-    for envfile in \$OECORE_TARGET_SYSROOT/environment-setup.d/*.sh; do
-	    . \$envfile
-    done
-fi
-if [ -d "\$OECORE_NATIVE_SYSROOT/environment-setup.d" ]; then
-    for envfile in \$OECORE_NATIVE_SYSROOT/environment-setup.d/*.sh; do
-	    . \$envfile
-    done
-fi
-EOF
-}
-
-toolchain_create_post_relocate_script() {
-	relocate_script=$1
-	env_dir=$2
-	rm -f $relocate_script
-	touch $relocate_script
-
-	cat >> $relocate_script <<EOF
-if [ -d "${SDKPATHNATIVE}/post-relocate-setup.d/" ]; then
-	# Source top-level SDK env scripts in case they are needed for the relocate
-	# scripts.
-	for env_setup_script in ${env_dir}/environment-setup-*; do
-		. \$env_setup_script
-		status=\$?
-		if [ \$status != 0 ]; then
-			echo "\$0: Failed to source \$env_setup_script with status \$status"
-			exit \$status
-		fi
-
-		for s in ${SDKPATHNATIVE}/post-relocate-setup.d/*; do
-			if [ ! -x \$s ]; then
-				continue
-			fi
-			\$s "\$1"
-			status=\$?
-			if [ \$status != 0 ]; then
-				echo "post-relocate command \"\$s \$1\" failed with status \$status" >&2
-				exit \$status
-			fi
-		done
-	done
-	rm -rf "${SDKPATHNATIVE}/post-relocate-setup.d"
-fi
-EOF
-}
-
-# We get the cached site config at runtime
-TOOLCHAIN_CONFIGSITE_NOCACHE = "${@' '.join(siteinfo_get_files(d)[0])}"
-TOOLCHAIN_CONFIGSITE_SYSROOTCACHE = "${STAGING_DIR}/${MLPREFIX}${MACHINE}/${target_datadir}/${TARGET_SYS}_config_site.d"
-TOOLCHAIN_NEED_CONFIGSITE_CACHE ??= "virtual/${MLPREFIX}libc ncurses"
-DEPENDS += "${TOOLCHAIN_NEED_CONFIGSITE_CACHE}"
-
-# This function creates a site config file
-toolchain_create_sdk_siteconfig () {
-	local siteconfig=$1
-
-	rm -f $siteconfig
-	touch $siteconfig
-
-	for sitefile in ${TOOLCHAIN_CONFIGSITE_NOCACHE} ; do
-		cat $sitefile >> $siteconfig
-	done
-
-	#get cached site config
-	for sitefile in ${TOOLCHAIN_NEED_CONFIGSITE_CACHE}; do
-		# Resolve virtual/* names to the real recipe name using sysroot-providers info
-		case $sitefile in virtual/*)
-			sitefile=`echo $sitefile | tr / _`
-			sitefile=`cat ${STAGING_DIR_TARGET}/sysroot-providers/$sitefile`
-		esac
-
-		if [ -r ${TOOLCHAIN_CONFIGSITE_SYSROOTCACHE}/${sitefile}_config ]; then
-			cat ${TOOLCHAIN_CONFIGSITE_SYSROOTCACHE}/${sitefile}_config >> $siteconfig
-		fi
-	done
-}
-# The immediate expansion above can result in unwanted path dependencies here
-toolchain_create_sdk_siteconfig[vardepsexclude] = "TOOLCHAIN_CONFIGSITE_SYSROOTCACHE"
-
-python __anonymous () {
-    import oe.classextend
-    deps = ""
-    for dep in (d.getVar('TOOLCHAIN_NEED_CONFIGSITE_CACHE') or "").split():
-        deps += " %s:do_populate_sysroot" % dep
-        for variant in (d.getVar('MULTILIB_VARIANTS') or "").split():
-            clsextend = oe.classextend.ClassExtender(variant, d)
-            newdep = clsextend.extend_name(dep)
-            deps += " %s:do_populate_sysroot" % newdep
-    d.appendVarFlag('do_configure', 'depends', deps)
-}
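
The environment-setup script written by toolchain_create_sdk_env_script above is meant to be sourced by the SDK user before cross-compiling; because it exports CC, CXX, CFLAGS, LDFLAGS and friends with the target sysroot baked in, a typical (illustrative) session is:

    $ . /opt/<distro>/<version>/environment-setup-<multimach-target-sys>
    $ $CC hello.c -o hello
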
diff --git a/poky/meta/classes/typecheck.bbclass b/poky/meta/classes/typecheck.bbclass
index 72da932..160f7a0 100644
--- a/poky/meta/classes/typecheck.bbclass
+++ b/poky/meta/classes/typecheck.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Check types of bitbake configuration variables
 #
 # See oe.types for details.
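
typecheck.bbclass validates any variable carrying a "type" varflag against oe.types at parse time; a hedged illustration (the variable name is hypothetical) of what it checks:

    ENABLE_FOO = "yes"
    ENABLE_FOO[type] = "boolean"

A value that cannot be coerced to the declared type should then fail the parse rather than leak into the build.
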
diff --git a/poky/meta/classes/uboot-config.bbclass b/poky/meta/classes/uboot-config.bbclass
deleted file mode 100644
index e8da8c7..0000000
--- a/poky/meta/classes/uboot-config.bbclass
+++ /dev/null
@@ -1,130 +0,0 @@
-# Handle U-Boot config for a machine
-#
-# The format to specify it, in the machine, is:
-#
-# UBOOT_CONFIG ??= <default>
-# UBOOT_CONFIG[foo] = "config,images,binary"
-#
-# or
-#
-# UBOOT_MACHINE = "config"
-#
-# Copyright 2013, 2014 (C) O.S. Systems Software LTDA.
-
-def removesuffix(s, suffix):
-    if suffix and s.endswith(suffix):
-        return s[:-len(suffix)]
-    return s
-
-# Some versions of u-boot use .bin and others use .img.  By default use .bin
-# but enable individual recipes to change this value.
-UBOOT_SUFFIX ??= "bin"
-UBOOT_BINARY ?= "u-boot.${UBOOT_SUFFIX}"
-UBOOT_BINARYNAME ?= "${@os.path.splitext(d.getVar("UBOOT_BINARY"))[0]}"
-UBOOT_IMAGE ?= "${UBOOT_BINARYNAME}-${MACHINE}-${PV}-${PR}.${UBOOT_SUFFIX}"
-UBOOT_SYMLINK ?= "${UBOOT_BINARYNAME}-${MACHINE}.${UBOOT_SUFFIX}"
-UBOOT_MAKE_TARGET ?= "all"
-
-# Output the generated ELF. Some platforms can use the ELF file and load it
-# directly (JTAG booting, QEMU); additionally, the ELF can be used for debugging
-# purposes.
-UBOOT_ELF ?= ""
-UBOOT_ELF_SUFFIX ?= "elf"
-UBOOT_ELF_IMAGE ?= "u-boot-${MACHINE}-${PV}-${PR}.${UBOOT_ELF_SUFFIX}"
-UBOOT_ELF_BINARY ?= "u-boot.${UBOOT_ELF_SUFFIX}"
-UBOOT_ELF_SYMLINK ?= "u-boot-${MACHINE}.${UBOOT_ELF_SUFFIX}"
-
-# Some versions of u-boot build an SPL (Second Program Loader) image that
-# should be packaged along with the u-boot binary as well as placed in the
-# deploy directory.  For those versions they can set the following variables
-# to allow packaging the SPL.
-SPL_SUFFIX ?= ""
-SPL_BINARY ?= ""
-SPL_DELIMITER  ?= "${@'.' if d.getVar("SPL_SUFFIX") else ''}"
-SPL_BINARYFILE ?= "${@os.path.basename(d.getVar("SPL_BINARY"))}"
-SPL_BINARYNAME ?= "${@removesuffix(d.getVar("SPL_BINARYFILE"), "." + d.getVar("SPL_SUFFIX"))}"
-SPL_IMAGE ?= "${SPL_BINARYNAME}-${MACHINE}-${PV}-${PR}${SPL_DELIMITER}${SPL_SUFFIX}"
-SPL_SYMLINK ?= "${SPL_BINARYNAME}-${MACHINE}${SPL_DELIMITER}${SPL_SUFFIX}"
-
-# Additional environment variables or a script can be installed alongside
-# u-boot to be used automatically on boot.  This file, typically 'uEnv.txt'
-# or 'boot.scr', should be packaged along with u-boot as well as placed in the
-# deploy directory.  Machine configurations needing one of these files should
-# include it in the SRC_URI and set the UBOOT_ENV parameter.
-UBOOT_ENV_SUFFIX ?= "txt"
-UBOOT_ENV ?= ""
-UBOOT_ENV_SRC_SUFFIX ?= "cmd"
-UBOOT_ENV_SRC ?= "${UBOOT_ENV}.${UBOOT_ENV_SRC_SUFFIX}"
-UBOOT_ENV_BINARY ?= "${UBOOT_ENV}.${UBOOT_ENV_SUFFIX}"
-UBOOT_ENV_IMAGE ?= "${UBOOT_ENV}-${MACHINE}-${PV}-${PR}.${UBOOT_ENV_SUFFIX}"
-UBOOT_ENV_SYMLINK ?= "${UBOOT_ENV}-${MACHINE}.${UBOOT_ENV_SUFFIX}"
-
-# Default name of u-boot initial env, but enable individual recipes to change
-# this value.
-UBOOT_INITIAL_ENV ?= "${PN}-initial-env"
-
-# U-Boot EXTLINUX variables. U-Boot searches for /boot/extlinux/extlinux.conf
-# to find EXTLINUX conf file.
-UBOOT_EXTLINUX_INSTALL_DIR ?= "/boot/extlinux"
-UBOOT_EXTLINUX_CONF_NAME ?= "extlinux.conf"
-UBOOT_EXTLINUX_SYMLINK ?= "${UBOOT_EXTLINUX_CONF_NAME}-${MACHINE}-${PR}"
-
-# Options for the device tree compiler passed to mkimage '-D' feature:
-UBOOT_MKIMAGE_DTCOPTS ??= ""
-SPL_MKIMAGE_DTCOPTS ??= ""
-
-# mkimage command
-UBOOT_MKIMAGE ?= "uboot-mkimage"
-UBOOT_MKIMAGE_SIGN ?= "${UBOOT_MKIMAGE}"
-
-# Arguments passed to mkimage for signing
-UBOOT_MKIMAGE_SIGN_ARGS ?= ""
-SPL_MKIMAGE_SIGN_ARGS ?= ""
-
-# Options to deploy the u-boot device tree
-UBOOT_DTB ?= ""
-UBOOT_DTB_BINARY ??= ""
-
-python () {
-    ubootmachine = d.getVar("UBOOT_MACHINE")
-    ubootconfigflags = d.getVarFlags('UBOOT_CONFIG')
-    ubootbinary = d.getVar('UBOOT_BINARY')
-    ubootbinaries = d.getVar('UBOOT_BINARIES')
-    # The "doc" varflag is special, we don't want to see it here
-    ubootconfigflags.pop('doc', None)
-    ubootconfig = (d.getVar('UBOOT_CONFIG') or "").split()
-
-    if not ubootmachine and not ubootconfig:
-        PN = d.getVar("PN")
-        FILE = os.path.basename(d.getVar("FILE"))
-        bb.debug(1, "To build %s, see %s for instructions on \
-                 setting up your machine config" % (PN, FILE))
-        raise bb.parse.SkipRecipe("Either UBOOT_MACHINE or UBOOT_CONFIG must be set in the %s machine configuration." % d.getVar("MACHINE"))
-
-    if ubootmachine and ubootconfig:
-        raise bb.parse.SkipRecipe("You cannot use UBOOT_MACHINE and UBOOT_CONFIG at the same time.")
-
-    if ubootconfigflags and ubootbinaries:
-        raise bb.parse.SkipRecipe("You cannot use UBOOT_BINARIES as it is internal to uboot_config.bbclass.")
-
-    if len(ubootconfig) > 0:
-        for config in ubootconfig:
-            for f, v in ubootconfigflags.items():
-                if config == f: 
-                    items = v.split(',')
-                    if items[0] and len(items) > 3:
-                        raise bb.parse.SkipRecipe('Only config,images,binary can be specified!')
-                    d.appendVar('UBOOT_MACHINE', ' ' + items[0])
-                    # IMAGE_FSTYPES appending
-                    if len(items) > 1 and items[1]:
-                        bb.debug(1, "Appending '%s' to IMAGE_FSTYPES." % items[1])
-                        d.appendVar('IMAGE_FSTYPES', ' ' + items[1])
-                    if len(items) > 2 and items[2]:
-                        bb.debug(1, "Appending '%s' to UBOOT_BINARIES." % items[2])
-                        d.appendVar('UBOOT_BINARIES', ' ' + items[2])
-                    else:
-                        bb.debug(1, "Appending '%s' to UBOOT_BINARIES." % ubootbinary)
-                        d.appendVar('UBOOT_BINARIES', ' ' + ubootbinary)
-                    return
-        raise bb.parse.SkipRecipe("The selected UBOOT_CONFIG key %s has no match in %s." % (ubootconfig, ubootconfigflags.keys()))
-}
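
Following the "config,images,binary" format described in the uboot-config.bbclass header, a machine configuration building a specific U-Boot variant could look roughly like this (defconfig names and image types are illustrative):

    UBOOT_CONFIG ??= "sd"
    UBOOT_CONFIG[sd]   = "somesoc_sd_defconfig,sdcard,u-boot.img"
    UBOOT_CONFIG[emmc] = "somesoc_emmc_defconfig,,u-boot.bin"

For the selected key, the anonymous python above appends the first field to UBOOT_MACHINE, the optional second field to IMAGE_FSTYPES, and the third field (or UBOOT_BINARY) to UBOOT_BINARIES.
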
diff --git a/poky/meta/classes/uboot-extlinux-config.bbclass b/poky/meta/classes/uboot-extlinux-config.bbclass
deleted file mode 100644
index dcebe7f..0000000
--- a/poky/meta/classes/uboot-extlinux-config.bbclass
+++ /dev/null
@@ -1,158 +0,0 @@
-# uboot-extlinux-config.bbclass
-#
-# This class allows extlinux.conf generation for U-Boot use. The
-# U-Boot support for it is provided to allow use of the Generic Distribution
-# Configuration specification by OpenEmbedded-based products.
-#
-# External variables:
-#
-# UBOOT_EXTLINUX_CONSOLE           - Set to "console=ttyX" to change kernel boot
-#                                    default console.
-# UBOOT_EXTLINUX_LABELS            - A list of targets for the automatic config.
-# UBOOT_EXTLINUX_KERNEL_ARGS       - Add additional kernel arguments.
-# UBOOT_EXTLINUX_KERNEL_IMAGE      - Kernel image name.
-# UBOOT_EXTLINUX_FDTDIR            - Device tree directory.
-# UBOOT_EXTLINUX_FDT               - Device tree file.
-# UBOOT_EXTLINUX_INITRD            - Indicates a list of filesystem images to
-#                                    concatenate and use as an initrd (optional).
-# UBOOT_EXTLINUX_MENU_DESCRIPTION  - Name to use as description.
-# UBOOT_EXTLINUX_ROOT              - Root kernel cmdline.
-# UBOOT_EXTLINUX_TIMEOUT           - Timeout before DEFAULT selection is made.
-#                                    Measured in 1/10 of a second.
-# UBOOT_EXTLINUX_DEFAULT_LABEL     - Target to be selected by default after
-#                                    the timeout period
-#
-# If there's only one label, the system will boot automatically and no menu will
-# be created. If you want to use more than one label, e.g. linux and alternate,
-# use overrides to set the menu description, console and other variables.
-#
-# Ex:
-#
-# UBOOT_EXTLINUX_LABELS ??= "default fallback"
-#
-# UBOOT_EXTLINUX_DEFAULT_LABEL ??= "Linux Default"
-# UBOOT_EXTLINUX_TIMEOUT ??= "30"
-#
-# UBOOT_EXTLINUX_KERNEL_IMAGE_default ??= "../zImage"
-# UBOOT_EXTLINUX_MENU_DESCRIPTION_default ??= "Linux Default"
-#
-# UBOOT_EXTLINUX_KERNEL_IMAGE_fallback ??= "../zImage-fallback"
-# UBOOT_EXTLINUX_MENU_DESCRIPTION_fallback ??= "Linux Fallback"
-#
-# Results:
-#
-# menu title Select the boot mode
-# TIMEOUT 30
-# DEFAULT Linux Default
-# LABEL Linux Default
-#   KERNEL ../zImage
-#   FDTDIR ../
-#   APPEND root=/dev/mmcblk2p2 rootwait rw console=${console}
-# LABEL Linux Fallback
-#   KERNEL ../zImage-fallback
-#   FDTDIR ../
-#   APPEND root=/dev/mmcblk2p2 rootwait rw console=${console}
-#
-# Copyright (C) 2016, O.S. Systems Software LTDA.  All Rights Reserved
-# Released under the MIT license (see packages/COPYING)
-#
-# The kernel has an internal default console, which you can override with
-# a console=...some_tty...
-UBOOT_EXTLINUX_CONSOLE ??= "console=${console},${baudrate}"
-UBOOT_EXTLINUX_LABELS ??= "linux"
-UBOOT_EXTLINUX_FDT ??= ""
-UBOOT_EXTLINUX_FDTDIR ??= "../"
-UBOOT_EXTLINUX_KERNEL_IMAGE ??= "../${KERNEL_IMAGETYPE}"
-UBOOT_EXTLINUX_KERNEL_ARGS ??= "rootwait rw"
-UBOOT_EXTLINUX_MENU_DESCRIPTION:linux ??= "${DISTRO_NAME}"
-
-UBOOT_EXTLINUX_CONFIG = "${B}/extlinux.conf"
-
-python do_create_extlinux_config() {
-    if d.getVar("UBOOT_EXTLINUX") != "1":
-      return
-
-    if not d.getVar('WORKDIR'):
-        bb.error("WORKDIR not defined, unable to package")
-
-    labels = d.getVar('UBOOT_EXTLINUX_LABELS')
-    if not labels:
-        bb.fatal("UBOOT_EXTLINUX_LABELS not defined, nothing to do")
-
-    if not labels.strip():
-        bb.fatal("No labels, nothing to do")
-
-    cfile = d.getVar('UBOOT_EXTLINUX_CONFIG')
-    if not cfile:
-        bb.fatal('Unable to read UBOOT_EXTLINUX_CONFIG')
-
-    localdata = bb.data.createCopy(d)
-
-    try:
-        with open(cfile, 'w') as cfgfile:
-            cfgfile.write('# Generic Distro Configuration file generated by OpenEmbedded\n')
-
-            if len(labels.split()) > 1:
-                cfgfile.write('menu title Select the boot mode\n')
-
-            timeout =  localdata.getVar('UBOOT_EXTLINUX_TIMEOUT')
-            if timeout:
-                cfgfile.write('TIMEOUT %s\n' % (timeout))
-
-            if len(labels.split()) > 1:
-                default = localdata.getVar('UBOOT_EXTLINUX_DEFAULT_LABEL')
-                if default:
-                    cfgfile.write('DEFAULT %s\n' % (default))
-
-            # Need to deconflict the labels with existing overrides
-            label_overrides = labels.split()
-            default_overrides = localdata.getVar('OVERRIDES').split(':')
-            # We're keeping all the existing overrides that aren't used as a label;
-            # an override for that label will be added back in while we're processing that label
-            keep_overrides = list(filter(lambda x: x not in label_overrides, default_overrides))
-
-            for label in labels.split():
-
-                localdata.setVar('OVERRIDES', ':'.join(keep_overrides + [label]))
-
-                extlinux_console = localdata.getVar('UBOOT_EXTLINUX_CONSOLE')
-
-                menu_description = localdata.getVar('UBOOT_EXTLINUX_MENU_DESCRIPTION')
-                if not menu_description:
-                    menu_description = label
-
-                root = localdata.getVar('UBOOT_EXTLINUX_ROOT')
-                if not root:
-                    bb.fatal('UBOOT_EXTLINUX_ROOT not defined')
-
-                kernel_image = localdata.getVar('UBOOT_EXTLINUX_KERNEL_IMAGE')
-                fdtdir = localdata.getVar('UBOOT_EXTLINUX_FDTDIR')
-
-                fdt = localdata.getVar('UBOOT_EXTLINUX_FDT')
-
-                if fdt:
-                    cfgfile.write('LABEL %s\n\tKERNEL %s\n\tFDT %s\n' %
-                                 (menu_description, kernel_image, fdt))
-                elif fdtdir:
-                    cfgfile.write('LABEL %s\n\tKERNEL %s\n\tFDTDIR %s\n' %
-                                 (menu_description, kernel_image, fdtdir))
-                else:
-                    cfgfile.write('LABEL %s\n\tKERNEL %s\n' % (menu_description, kernel_image))
-
-                kernel_args = localdata.getVar('UBOOT_EXTLINUX_KERNEL_ARGS')
-
-                initrd = localdata.getVar('UBOOT_EXTLINUX_INITRD')
-                if initrd:
-                    cfgfile.write('\tINITRD %s\n'% initrd)
-
-                kernel_args = root + " " + kernel_args
-                cfgfile.write('\tAPPEND %s %s\n' % (kernel_args, extlinux_console))
-
-    except OSError:
-        bb.fatal('Unable to open %s' % (cfile))
-}
-UBOOT_EXTLINUX_VARS = "CONSOLE MENU_DESCRIPTION ROOT KERNEL_IMAGE FDTDIR FDT KERNEL_ARGS INITRD"
-do_create_extlinux_config[vardeps] += "${@' '.join(['UBOOT_EXTLINUX_%s_%s' % (v, l) for v in d.getVar('UBOOT_EXTLINUX_VARS').split() for l in d.getVar('UBOOT_EXTLINUX_LABELS').split()])}"
-do_create_extlinux_config[vardepsexclude] += "OVERRIDES"
-
-addtask create_extlinux_config before do_install do_deploy after do_compile
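
Beyond the worked example in the uboot-extlinux-config.bbclass header, the only hard requirements visible in do_create_extlinux_config are that UBOOT_EXTLINUX is enabled and that a root device is provided; a minimal machine-side sketch (device node illustrative) is:

    UBOOT_EXTLINUX = "1"
    UBOOT_EXTLINUX_ROOT = "root=/dev/mmcblk0p2"

Labels, console, kernel image and FDT directory all fall back to the defaults set just above.
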
diff --git a/poky/meta/classes/uboot-sign.bbclass b/poky/meta/classes/uboot-sign.bbclass
deleted file mode 100644
index eecdec9..0000000
--- a/poky/meta/classes/uboot-sign.bbclass
+++ /dev/null
@@ -1,499 +0,0 @@
-# This file is part of U-Boot verified boot support and is intended to be
-# inherited from u-boot recipe and from kernel-fitimage.bbclass.
-#
-# The signature procedure requires the user to generate an RSA key and
-# certificate in a directory and to define the following variable:
-#
-#   UBOOT_SIGN_KEYDIR = "/keys/directory"
-#   UBOOT_SIGN_KEYNAME = "dev" # keys name in keydir (eg. "dev.crt", "dev.key")
-#   UBOOT_MKIMAGE_DTCOPTS = "-I dts -O dtb -p 2000"
-#   UBOOT_SIGN_ENABLE = "1"
-#
-# As verified boot depends on fitImage generation, following is also required:
-#
-#   KERNEL_CLASSES ?= " kernel-fitimage "
-#   KERNEL_IMAGETYPE ?= "fitImage"
-#
-# The signature support is limited to the use of CONFIG_OF_SEPARATE in U-Boot.
-#
-# The tasks sequence is set as below, using DEPLOY_IMAGE_DIR as common place to
-# treat the device tree blob:
-#
-# * u-boot:do_install:append
-#   Install UBOOT_DTB_BINARY to datadir, so that the kernel can use it for
-#   signing, and the kernel will deploy UBOOT_DTB_BINARY after signing it.
-#
-# * virtual/kernel:do_assemble_fitimage
-#   Sign the image
-#
-# * u-boot:do_deploy[postfuncs]
-#   Deploy files like UBOOT_DTB_IMAGE, UBOOT_DTB_SYMLINK and others.
-#
-# For more details on signature process, please refer to U-Boot documentation.
-
-# We need some variables from u-boot-config
-inherit uboot-config
-
-# Enable use of a U-Boot fitImage
-UBOOT_FITIMAGE_ENABLE ?= "0"
-
-# Signature activation - these require their respective fitImages
-UBOOT_SIGN_ENABLE ?= "0"
-SPL_SIGN_ENABLE ?= "0"
-
-# Default value for deployment filenames.
-UBOOT_DTB_IMAGE ?= "u-boot-${MACHINE}-${PV}-${PR}.dtb"
-UBOOT_DTB_BINARY ?= "u-boot.dtb"
-UBOOT_DTB_SYMLINK ?= "u-boot-${MACHINE}.dtb"
-UBOOT_NODTB_IMAGE ?= "u-boot-nodtb-${MACHINE}-${PV}-${PR}.bin"
-UBOOT_NODTB_BINARY ?= "u-boot-nodtb.bin"
-UBOOT_NODTB_SYMLINK ?= "u-boot-nodtb-${MACHINE}.bin"
-UBOOT_ITS_IMAGE ?= "u-boot-its-${MACHINE}-${PV}-${PR}"
-UBOOT_ITS ?= "u-boot.its"
-UBOOT_ITS_SYMLINK ?= "u-boot-its-${MACHINE}"
-UBOOT_FITIMAGE_IMAGE ?= "u-boot-fitImage-${MACHINE}-${PV}-${PR}"
-UBOOT_FITIMAGE_BINARY ?= "u-boot-fitImage"
-UBOOT_FITIMAGE_SYMLINK ?= "u-boot-fitImage-${MACHINE}"
-SPL_DIR ?= "spl"
-SPL_DTB_IMAGE ?= "u-boot-spl-${MACHINE}-${PV}-${PR}.dtb"
-SPL_DTB_BINARY ?= "u-boot-spl.dtb"
-SPL_DTB_SYMLINK ?= "u-boot-spl-${MACHINE}.dtb"
-SPL_NODTB_IMAGE ?= "u-boot-spl-nodtb-${MACHINE}-${PV}-${PR}.bin"
-SPL_NODTB_BINARY ?= "u-boot-spl-nodtb.bin"
-SPL_NODTB_SYMLINK ?= "u-boot-spl-nodtb-${MACHINE}.bin"
-
-# U-Boot fitImage description
-UBOOT_FIT_DESC ?= "U-Boot fitImage for ${DISTRO_NAME}/${PV}/${MACHINE}"
-
-# Kernel / U-Boot fitImage Hash Algo
-FIT_HASH_ALG ?= "sha256"
-UBOOT_FIT_HASH_ALG ?= "sha256"
-
-# Kernel / U-Boot fitImage Signature Algo
-FIT_SIGN_ALG ?= "rsa2048"
-UBOOT_FIT_SIGN_ALG ?= "rsa2048"
-
-# Kernel / U-Boot fitImage Padding Algo
-FIT_PAD_ALG ?= "pkcs-1.5"
-
-# Generate keys for signing Kernel / U-Boot fitImage
-FIT_GENERATE_KEYS ?= "0"
-UBOOT_FIT_GENERATE_KEYS ?= "0"
-
-# Size of private keys in number of bits
-FIT_SIGN_NUMBITS ?= "2048"
-UBOOT_FIT_SIGN_NUMBITS ?= "2048"
-
-# args to openssl genrsa (Default is just the public exponent)
-FIT_KEY_GENRSA_ARGS ?= "-F4"
-UBOOT_FIT_KEY_GENRSA_ARGS ?= "-F4"
-
-# args to openssl req (Default is -batch for non interactive mode and
-# -new for new certificate)
-FIT_KEY_REQ_ARGS ?= "-batch -new"
-UBOOT_FIT_KEY_REQ_ARGS ?= "-batch -new"
-
-# Standard format for public key certificate
-FIT_KEY_SIGN_PKCS ?= "-x509"
-UBOOT_FIT_KEY_SIGN_PKCS ?= "-x509"
-
-# Functions on this bbclass can apply to either U-boot or Kernel,
-# depending on the scenario
-UBOOT_PN = "${@d.getVar('PREFERRED_PROVIDER_u-boot') or 'u-boot'}"
-KERNEL_PN = "${@d.getVar('PREFERRED_PROVIDER_virtual/kernel')}"
-
-# We need u-boot-tools-native if we're creating a U-Boot fitImage
-python() {
-    if d.getVar('UBOOT_FITIMAGE_ENABLE') == '1':
-        depends = d.getVar("DEPENDS")
-        depends = "%s u-boot-tools-native dtc-native" % depends
-        d.setVar("DEPENDS", depends)
-}
-
-concat_dtb_helper() {
-	if [ -e "${UBOOT_DTB_BINARY}" ]; then
-		ln -sf ${UBOOT_DTB_IMAGE} ${DEPLOYDIR}/${UBOOT_DTB_BINARY}
-		ln -sf ${UBOOT_DTB_IMAGE} ${DEPLOYDIR}/${UBOOT_DTB_SYMLINK}
-	fi
-
-	if [ -f "${UBOOT_NODTB_BINARY}" ]; then
-		install ${UBOOT_NODTB_BINARY} ${DEPLOYDIR}/${UBOOT_NODTB_IMAGE}
-		ln -sf ${UBOOT_NODTB_IMAGE} ${DEPLOYDIR}/${UBOOT_NODTB_SYMLINK}
-		ln -sf ${UBOOT_NODTB_IMAGE} ${DEPLOYDIR}/${UBOOT_NODTB_BINARY}
-	fi
-
-	# If we're not using a signed u-boot fit, concatenate SPL w/o DTB & U-Boot DTB
-	# with public key (otherwise it will be deployed by the equivalent
-	# concat_spl_dtb_helper function - cf. kernel-fitimage.bbclass for more details)
-	if [ "${SPL_SIGN_ENABLE}" != "1" ] ; then
-		deployed_uboot_dtb_binary='${DEPLOY_DIR_IMAGE}/${UBOOT_DTB_IMAGE}'
-		if [ "x${UBOOT_SUFFIX}" = "ximg" -o "x${UBOOT_SUFFIX}" = "xrom" ] && \
-			[ -e "$deployed_uboot_dtb_binary" ]; then
-			oe_runmake EXT_DTB=$deployed_uboot_dtb_binary
-			install ${UBOOT_BINARY} ${DEPLOYDIR}/${UBOOT_IMAGE}
-		elif [ -e "${DEPLOYDIR}/${UBOOT_NODTB_IMAGE}" -a -e "$deployed_uboot_dtb_binary" ]; then
-			cd ${DEPLOYDIR}
-			cat ${UBOOT_NODTB_IMAGE} $deployed_uboot_dtb_binary | tee ${B}/${CONFIG_B_PATH}/${UBOOT_BINARY} > ${UBOOT_IMAGE}
-
-			if [ -n "${UBOOT_CONFIG}" ]
-			then
-				i=0
-				j=0
-				for config in ${UBOOT_MACHINE}; do
-					i=$(expr $i + 1);
-					for type in ${UBOOT_CONFIG}; do
-						j=$(expr $j + 1);
-						if [ $j -eq $i ]
-						then
-							cp ${UBOOT_IMAGE} ${B}/${CONFIG_B_PATH}/u-boot-$type.${UBOOT_SUFFIX}
-						fi
-					done
-				done
-			fi
-		else
-			bbwarn "Failure while adding public key to u-boot binary. Verified boot won't be available."
-		fi
-	fi
-}
-
-concat_spl_dtb_helper() {
-
-	# We only deploy symlinks to the u-boot-spl.dtb, as the KERNEL_PN will
-	# be responsible for deploying the real file
-	if [ -e "${SPL_DIR}/${SPL_DTB_BINARY}" ] ; then
-		ln -sf ${SPL_DTB_IMAGE} ${DEPLOYDIR}/${SPL_DTB_SYMLINK}
-		ln -sf ${SPL_DTB_IMAGE} ${DEPLOYDIR}/${SPL_DTB_BINARY}
-	fi
-
-	# Concatenate the SPL nodtb binary and u-boot.dtb
-	deployed_spl_dtb_binary='${DEPLOY_DIR_IMAGE}/${SPL_DTB_IMAGE}'
-	if [ -e "${DEPLOYDIR}/${SPL_NODTB_IMAGE}" -a -e "$deployed_spl_dtb_binary" ] ; then
-		cd ${DEPLOYDIR}
-		cat ${SPL_NODTB_IMAGE} $deployed_spl_dtb_binary | tee ${B}/${CONFIG_B_PATH}/${SPL_BINARY} > ${SPL_IMAGE}
-	else
-		bbwarn "Failure while adding public key to spl binary. Verified U-Boot boot won't be available."
-	fi
-}
-
-
-concat_dtb() {
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${PN}" = "${UBOOT_PN}" -a -n "${UBOOT_DTB_BINARY}" ]; then
-		mkdir -p ${DEPLOYDIR}
-		if [ -n "${UBOOT_CONFIG}" ]; then
-			for config in ${UBOOT_MACHINE}; do
-				CONFIG_B_PATH="$config"
-				cd ${B}/$config
-				concat_dtb_helper
-			done
-		else
-			CONFIG_B_PATH=""
-			cd ${B}
-			concat_dtb_helper
-		fi
-	fi
-}
-
-concat_spl_dtb() {
-	if [ "${SPL_SIGN_ENABLE}" = "1" -a "${PN}" = "${UBOOT_PN}" -a -n "${SPL_DTB_BINARY}" ]; then
-		mkdir -p ${DEPLOYDIR}
-		if [ -n "${UBOOT_CONFIG}" ]; then
-			for config in ${UBOOT_MACHINE}; do
-				CONFIG_B_PATH="$config"
-				cd ${B}/$config
-				concat_spl_dtb_helper
-			done
-		else
-			CONFIG_B_PATH=""
-			cd ${B}
-			concat_spl_dtb_helper
-		fi
-	fi
-}
-
-
-# Install UBOOT_DTB_BINARY to datadir, so that the kernel can use it for
-# signing, and the kernel will deploy UBOOT_DTB_BINARY after signing it.
-install_helper() {
-	if [ -f "${UBOOT_DTB_BINARY}" ]; then
-		# UBOOT_DTB_BINARY is a symlink to UBOOT_DTB_IMAGE, so we
-		# need both of them.
-		install -Dm 0644 ${UBOOT_DTB_BINARY} ${D}${datadir}/${UBOOT_DTB_IMAGE}
-		ln -sf ${UBOOT_DTB_IMAGE} ${D}${datadir}/${UBOOT_DTB_BINARY}
-	else
-		bbwarn "${UBOOT_DTB_BINARY} not found"
-	fi
-}
-
-# Install SPL dtb and u-boot nodtb to datadir,
-install_spl_helper() {
-	if [ -f "${SPL_DIR}/${SPL_DTB_BINARY}" ]; then
-		install -Dm 0644 ${SPL_DIR}/${SPL_DTB_BINARY} ${D}${datadir}/${SPL_DTB_IMAGE}
-		ln -sf ${SPL_DTB_IMAGE} ${D}${datadir}/${SPL_DTB_BINARY}
-	else
-		bbwarn "${SPL_DTB_BINARY} not found"
-	fi
-	if [ -f "${UBOOT_NODTB_BINARY}" ] ; then
-		install -Dm 0644 ${UBOOT_NODTB_BINARY} ${D}${datadir}/${UBOOT_NODTB_IMAGE}
-		ln -sf ${UBOOT_NODTB_IMAGE} ${D}${datadir}/${UBOOT_NODTB_BINARY}
-	else
-		bbwarn "${UBOOT_NODTB_BINARY} not found"
-	fi
-
-	# We need to install a 'stub' u-boot-fitimage + its to datadir,
-	# so that the KERNEL_PN can use the correct filename when
-	# assembling and deploying them
-	touch ${D}/${datadir}/${UBOOT_FITIMAGE_IMAGE}
-	touch ${D}/${datadir}/${UBOOT_ITS_IMAGE}
-}
-
-do_install:append() {
-	if [ "${PN}" = "${UBOOT_PN}" ]; then
-		if [ -n "${UBOOT_CONFIG}" ]; then
-			for config in ${UBOOT_MACHINE}; do
-				cd ${B}/$config
-				if [ "${UBOOT_SIGN_ENABLE}" = "1" -o "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
-					[ -n "${UBOOT_DTB_BINARY}" ]; then
-					install_helper
-				fi
-				if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ]; then
-					install_spl_helper
-				fi
-			done
-		else
-			cd ${B}
-			if [ "${UBOOT_SIGN_ENABLE}" = "1" -o "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
-				[ -n "${UBOOT_DTB_BINARY}" ]; then
-				install_helper
-			fi
-			if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ]; then
-				install_spl_helper
-			fi
-		fi
-	fi
-}
-
-do_uboot_generate_rsa_keys() {
-	if [ "${SPL_SIGN_ENABLE}" = "0" ] && [ "${UBOOT_FIT_GENERATE_KEYS}" = "1" ]; then
-		bbwarn "UBOOT_FIT_GENERATE_KEYS is set to 1 eventhough SPL_SIGN_ENABLE is set to 0. The keys will not be generated as they won't be used."
-	fi
-
-	if [ "${SPL_SIGN_ENABLE}" = "1" ] && [ "${UBOOT_FIT_GENERATE_KEYS}" = "1" ]; then
-
-		# Generate keys only if they don't already exist
-		if [ ! -f "${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".key ] || \
-			[ ! -f "${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".crt ]; then
-
-			# make directory if it does not already exist
-			mkdir -p "${SPL_SIGN_KEYDIR}"
-
-			echo "Generating RSA private key for signing U-Boot fitImage"
-			openssl genrsa ${UBOOT_FIT_KEY_GENRSA_ARGS} -out \
-				"${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".key \
-				"${UBOOT_FIT_SIGN_NUMBITS}"
-
-			echo "Generating certificate for signing U-Boot fitImage"
-			openssl req ${FIT_KEY_REQ_ARGS} "${UBOOT_FIT_KEY_SIGN_PKCS}" \
-				-key "${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".key \
-				-out "${SPL_SIGN_KEYDIR}/${SPL_SIGN_KEYNAME}".crt
-		fi
-	fi
-
-}
-
-addtask uboot_generate_rsa_keys before do_uboot_assemble_fitimage after do_compile
-
-# Create an ITS file for the U-boot FIT, for use when
-# we want to sign it so that the SPL can verify it
-uboot_fitimage_assemble() {
-	uboot_its="$1"
-	uboot_nodtb_bin="$2"
-	uboot_dtb="$3"
-	uboot_bin="$4"
-	spl_dtb="$5"
-	uboot_csum="${UBOOT_FIT_HASH_ALG}"
-	uboot_sign_algo="${UBOOT_FIT_SIGN_ALG}"
-	uboot_sign_keyname="${SPL_SIGN_KEYNAME}"
-
-	rm -f $uboot_its $uboot_bin
-
-	# First we create the ITS script
-	cat << EOF >> $uboot_its
-/dts-v1/;
-
-/ {
-    description = "${UBOOT_FIT_DESC}";
-    #address-cells = <1>;
-
-    images {
-        uboot {
-            description = "U-Boot image";
-            data = /incbin/("$uboot_nodtb_bin");
-            type = "standalone";
-            os = "u-boot";
-            arch = "${UBOOT_ARCH}";
-            compression = "none";
-            load = <${UBOOT_LOADADDRESS}>;
-            entry = <${UBOOT_ENTRYPOINT}>;
-EOF
-
-	if [ "${SPL_SIGN_ENABLE}" = "1" ] ; then
-		cat << EOF >> $uboot_its
-            signature {
-                algo = "$uboot_csum,$uboot_sign_algo";
-                key-name-hint = "$uboot_sign_keyname";
-            };
-EOF
-	fi
-
-	cat << EOF >> $uboot_its
-        };
-        fdt {
-            description = "U-Boot FDT";
-            data = /incbin/("$uboot_dtb");
-            type = "flat_dt";
-            arch = "${UBOOT_ARCH}";
-            compression = "none";
-EOF
-
-	if [ "${SPL_SIGN_ENABLE}" = "1" ] ; then
-		cat << EOF >> $uboot_its
-            signature {
-                algo = "$uboot_csum,$uboot_sign_algo";
-                key-name-hint = "$uboot_sign_keyname";
-            };
-EOF
-	fi
-
-	cat << EOF >> $uboot_its
-        };
-    };
-
-    configurations {
-        default = "conf";
-        conf {
-            description = "Boot with signed U-Boot FIT";
-            loadables = "uboot";
-            fdt = "fdt";
-        };
-    };
-};
-EOF
-
-	#
-	# Assemble the U-boot FIT image
-	#
-	${UBOOT_MKIMAGE} \
-		${@'-D "${SPL_MKIMAGE_DTCOPTS}"' if len('${SPL_MKIMAGE_DTCOPTS}') else ''} \
-		-f $uboot_its \
-		$uboot_bin
-
-	if [ "${SPL_SIGN_ENABLE}" = "1" ] ; then
-		#
-		# Sign the U-boot FIT image and add public key to SPL dtb
-		#
-		${UBOOT_MKIMAGE_SIGN} \
-			${@'-D "${SPL_MKIMAGE_DTCOPTS}"' if len('${SPL_MKIMAGE_DTCOPTS}') else ''} \
-			-F -k "${SPL_SIGN_KEYDIR}" \
-			-K "$spl_dtb" \
-			-r $uboot_bin \
-			${SPL_MKIMAGE_SIGN_ARGS}
-	fi
-
-}
-
-do_uboot_assemble_fitimage() {
-	# This function runs in KERNEL_PN context. The reason for that is that we need to
-	# support the scenario where UBOOT_SIGN_ENABLE is placing the Kernel fitImage's
-	# pubkey in the u-boot.dtb file, so that we can use it when building the U-Boot
-	# fitImage itself.
-	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
-	   [ -n "${SPL_DTB_BINARY}" -a "${PN}" = "${KERNEL_PN}" ] ; then
-		if [ "${UBOOT_SIGN_ENABLE}" != "1" ]; then
-			# If we're not signing the Kernel fitImage, that means
-			# we need to copy the u-boot.dtb from staging ourselves
-			cp -P ${STAGING_DATADIR}/u-boot*.dtb ${B}
-		fi
-		# As we are in the kernel context, we need to copy u-boot-spl.dtb from staging first.
-		# Unfortunately, we need to glob on top of ${SPL_DTB_BINARY} since _IMAGE and _SYMLINK
-		# will contain U-boot's PV
-		# Similarly, we need to get the filename for the 'stub' u-boot-fitimage + its in
-		# staging so that we can use it for creating the image with the correct filename
-		# in the KERNEL_PN context.
-		# As for the u-boot.dtb (with fitimage's pubkey), it should come from the dependent
-		# do_assemble_fitimage task
-		cp -P ${STAGING_DATADIR}/u-boot-spl*.dtb ${B}
-		cp -P ${STAGING_DATADIR}/u-boot-nodtb*.bin ${B}
-		rm -rf ${B}/u-boot-fitImage-* ${B}/u-boot-its-*
-		kernel_uboot_fitimage_name=`basename ${STAGING_DATADIR}/u-boot-fitImage-*`
-		kernel_uboot_its_name=`basename ${STAGING_DATADIR}/u-boot-its-*`
-		cd ${B}
-		uboot_fitimage_assemble $kernel_uboot_its_name ${UBOOT_NODTB_BINARY} \
-					${UBOOT_DTB_BINARY} $kernel_uboot_fitimage_name \
-					${SPL_DTB_BINARY}
-	fi
-}
-
-addtask uboot_assemble_fitimage before do_deploy after do_compile
-
-do_deploy:prepend:pn-${UBOOT_PN}() {
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a -n "${UBOOT_DTB_BINARY}" ] ; then
-		concat_dtb
-	fi
-
-	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" ] ; then
-	# Deploy the u-boot-nodtb binary and symlinks...
-		if [ -f "${SPL_DIR}/${SPL_NODTB_BINARY}" ] ; then
-			echo "Copying u-boot-nodtb binary..."
-			install -m 0644 ${SPL_DIR}/${SPL_NODTB_BINARY} ${DEPLOYDIR}/${SPL_NODTB_IMAGE}
-			ln -sf ${SPL_NODTB_IMAGE} ${DEPLOYDIR}/${SPL_NODTB_SYMLINK}
-			ln -sf ${SPL_NODTB_IMAGE} ${DEPLOYDIR}/${SPL_NODTB_BINARY}
-		fi
-
-
-		# We only deploy the symlinks to the uboot-fitImage and uboot-its
-		# images, as the KERNEL_PN will take care of deploying the real file
-		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_FITIMAGE_BINARY}
-		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_FITIMAGE_SYMLINK}
-		ln -sf ${UBOOT_ITS_IMAGE} ${DEPLOYDIR}/${UBOOT_ITS}
-		ln -sf ${UBOOT_ITS_IMAGE} ${DEPLOYDIR}/${UBOOT_ITS_SYMLINK}
-	fi
-
-	if [ "${SPL_SIGN_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ] ; then
-		concat_spl_dtb
-	fi
-
-
-}
-
-do_deploy:append:pn-${UBOOT_PN}() {
-	# If we're creating a u-boot fitImage, point the u-boot.bin
-	# symlink at it since it might get used by image recipes
-	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" ] ; then
-		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_BINARY}
-		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_SYMLINK}
-	fi
-}
-
-python () {
-    if (   (d.getVar('UBOOT_SIGN_ENABLE') == '1'
-            or d.getVar('UBOOT_FITIMAGE_ENABLE') == '1')
-        and d.getVar('PN') == d.getVar('UBOOT_PN')
-        and d.getVar('UBOOT_DTB_BINARY')):
-
-        # Make "bitbake u-boot -cdeploy" deploys the signed u-boot.dtb
-        # and/or the U-Boot fitImage
-        d.appendVarFlag('do_deploy', 'depends', ' %s:do_deploy' % d.getVar('KERNEL_PN'))
-
-    if d.getVar('UBOOT_FITIMAGE_ENABLE') == '1' and d.getVar('PN') == d.getVar('KERNEL_PN'):
-        # As the U-Boot fitImage is created by the KERNEL_PN, we need
-        # to make sure that the u-boot-spl.dtb and u-boot-spl-nodtb.bin
-        # files are in the staging dir for its use
-        d.appendVarFlag('do_uboot_assemble_fitimage', 'depends', ' %s:do_populate_sysroot' % d.getVar('UBOOT_PN'))
-
-        # If the Kernel fitImage is being signed, we need to
-        # create the U-Boot fitImage after it
-        if d.getVar('UBOOT_SIGN_ENABLE') == '1':
-            d.appendVarFlag('do_uboot_assemble_fitimage', 'depends', ' %s:do_assemble_fitimage' % d.getVar('KERNEL_PN'))
-            d.appendVarFlag('do_uboot_assemble_fitimage', 'depends', ' %s:do_assemble_fitimage_initramfs' % d.getVar('KERNEL_PN'))
-
-}
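
The uboot-sign.bbclass header asks for an RSA key and certificate in UBOOT_SIGN_KEYDIR, and do_uboot_generate_rsa_keys above shows the shape of the openssl calls involved; creating a matching key pair by hand (key name "dev" as in the header example, default arguments otherwise) is roughly:

    openssl genrsa -F4 -out dev.key 2048
    openssl req -batch -new -x509 -key dev.key -out dev.crt
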
diff --git a/poky/meta/classes/uninative.bbclass b/poky/meta/classes/uninative.bbclass
deleted file mode 100644
index 6a9e862..0000000
--- a/poky/meta/classes/uninative.bbclass
+++ /dev/null
@@ -1,171 +0,0 @@
-UNINATIVE_LOADER ?= "${UNINATIVE_STAGING_DIR}-uninative/${BUILD_ARCH}-linux/lib/${@bb.utils.contains('BUILD_ARCH', 'x86_64', 'ld-linux-x86-64.so.2', '', d)}${@bb.utils.contains('BUILD_ARCH', 'i686', 'ld-linux.so.2', '', d)}${@bb.utils.contains('BUILD_ARCH', 'aarch64', 'ld-linux-aarch64.so.1', '', d)}${@bb.utils.contains('BUILD_ARCH', 'ppc64le', 'ld64.so.2', '', d)}"
-UNINATIVE_STAGING_DIR ?= "${STAGING_DIR}"
-
-UNINATIVE_URL ?= "unset"
-UNINATIVE_TARBALL ?= "${BUILD_ARCH}-nativesdk-libc-${UNINATIVE_VERSION}.tar.xz"
-# Example checksums
-#UNINATIVE_CHECKSUM[aarch64] = "dead"
-#UNINATIVE_CHECKSUM[i686] = "dead"
-#UNINATIVE_CHECKSUM[x86_64] = "dead"
-UNINATIVE_DLDIR ?= "${DL_DIR}/uninative/"
-
-# Enabling uninative will change the following variables, so they need to go on the parsing ignored variables list to prevent multiple recipe parsing
-BB_HASHCONFIG_IGNORE_VARS += "NATIVELSBSTRING SSTATEPOSTUNPACKFUNCS BUILD_LDFLAGS"
-
-addhandler uninative_event_fetchloader
-uninative_event_fetchloader[eventmask] = "bb.event.BuildStarted"
-
-addhandler uninative_event_enable
-uninative_event_enable[eventmask] = "bb.event.ConfigParsed"
-
-python uninative_event_fetchloader() {
-    """
-    This event fires on the parent and will try to fetch the tarball if the
-    loader isn't already present.
-    """
-
-    chksum = d.getVarFlag("UNINATIVE_CHECKSUM", d.getVar("BUILD_ARCH"))
-    if not chksum:
-        bb.fatal("Uninative selected but not configured correctly, please set UNINATIVE_CHECKSUM[%s]" % d.getVar("BUILD_ARCH"))
-
-    loader = d.getVar("UNINATIVE_LOADER")
-    loaderchksum = loader + ".chksum"
-    if os.path.exists(loader) and os.path.exists(loaderchksum):
-        with open(loaderchksum, "r") as f:
-            readchksum = f.read().strip()
-        if readchksum == chksum:
-            return
-
-    import subprocess
-    try:
-        # Save and restore cwd as Fetch.download() does a chdir()
-        olddir = os.getcwd()
-
-        tarball = d.getVar("UNINATIVE_TARBALL")
-        tarballdir = os.path.join(d.getVar("UNINATIVE_DLDIR"), chksum)
-        tarballpath = os.path.join(tarballdir, tarball)
-
-        if not os.path.exists(tarballpath + ".done"):
-            bb.utils.mkdirhier(tarballdir)
-            if d.getVar("UNINATIVE_URL") == "unset":
-                bb.fatal("Uninative selected but not configured, please set UNINATIVE_URL")
-
-            localdata = bb.data.createCopy(d)
-            localdata.setVar('FILESPATH', "")
-            localdata.setVar('DL_DIR', tarballdir)
-            # Our games with path manipulation of DL_DIR mean standard PREMIRRORS don't work
-            # and we can't easily put 'chksum' into the url path from a url parameter with
-            # the current fetcher url handling
-            premirrors = bb.fetch2.mirror_from_string(localdata.getVar("PREMIRRORS"))
-            for line in premirrors:
-                try:
-                    (find, replace) = line
-                except ValueError:
-                    continue
-                if find.startswith("http"):
-                    localdata.appendVar("PREMIRRORS", " ${UNINATIVE_URL}${UNINATIVE_TARBALL} %s/uninative/%s/${UNINATIVE_TARBALL}" % (replace, chksum))
-
-            srcuri = d.expand("${UNINATIVE_URL}${UNINATIVE_TARBALL};sha256sum=%s" % chksum)
-            bb.note("Fetching uninative binary shim %s (will check PREMIRRORS first)" % srcuri)
-
-            fetcher = bb.fetch2.Fetch([srcuri], localdata, cache=False)
-            fetcher.download()
-            localpath = fetcher.localpath(srcuri)
-            if localpath != tarballpath and os.path.exists(localpath) and not os.path.exists(tarballpath):
-                # Follow the symlink behavior from the bitbake fetch2.
-                # This will cover the case where an existing symlink is broken
-                # as well as if there are two processes trying to create it
-                # at the same time.
-                if os.path.islink(tarballpath):
-                    # Broken symbolic link
-                    os.unlink(tarballpath)
-
-                # Deal with two processes trying to make symlink at once
-                try:
-                    os.symlink(localpath, tarballpath)
-                except FileExistsError:
-                    pass
-
-        # ldd output is "ldd (Ubuntu GLIBC 2.23-0ubuntu10) 2.23", extract the last field from the first line
-        glibcver = subprocess.check_output(["ldd", "--version"]).decode('utf-8').split('\n')[0].split()[-1]
-        if bb.utils.vercmp_string(d.getVar("UNINATIVE_MAXGLIBCVERSION"), glibcver) < 0:
-            raise RuntimeError("Your host glibc version (%s) is newer than that in uninative (%s). Disabling uninative so that sstate is not corrupted." % (glibcver, d.getVar("UNINATIVE_MAXGLIBCVERSION")))
-
-        cmd = d.expand("\
-mkdir -p ${UNINATIVE_STAGING_DIR}-uninative; \
-cd ${UNINATIVE_STAGING_DIR}-uninative; \
-tar -xJf ${UNINATIVE_DLDIR}/%s/${UNINATIVE_TARBALL}; \
-${UNINATIVE_STAGING_DIR}-uninative/relocate_sdk.py \
-  ${UNINATIVE_STAGING_DIR}-uninative/${BUILD_ARCH}-linux \
-  ${UNINATIVE_LOADER} \
-  ${UNINATIVE_LOADER} \
-  ${UNINATIVE_STAGING_DIR}-uninative/${BUILD_ARCH}-linux/${bindir_native}/patchelf-uninative \
-  ${UNINATIVE_STAGING_DIR}-uninative/${BUILD_ARCH}-linux${base_libdir_native}/libc*.so*" % chksum)
-        subprocess.check_output(cmd, shell=True)
-
-        with open(loaderchksum, "w") as f:
-            f.write(chksum)
-
-        enable_uninative(d)
-
-    except RuntimeError as e:
-        bb.warn(str(e))
-    except bb.fetch2.BBFetchException as exc:
-        bb.warn("Disabling uninative as unable to fetch uninative tarball: %s" % str(exc))
-        bb.warn("To build your own uninative loader, please bitbake uninative-tarball and set UNINATIVE_TARBALL appropriately.")
-    except subprocess.CalledProcessError as exc:
-        bb.warn("Disabling uninative as unable to install uninative tarball: %s" % str(exc))
-        bb.warn("To build your own uninative loader, please bitbake uninative-tarball and set UNINATIVE_TARBALL appropriately.")
-    finally:
-        os.chdir(olddir)
-}
-
-python uninative_event_enable() {
-    """
-    This event handler is called in the workers and is responsible for setting
-    up uninative if a loader is found.
-    """
-    enable_uninative(d)
-}
-
-def enable_uninative(d):
-    loader = d.getVar("UNINATIVE_LOADER")
-    if os.path.exists(loader):
-        bb.debug(2, "Enabling uninative")
-        d.setVar("NATIVELSBSTRING", "universal%s" % oe.utils.host_gcc_version(d))
-        d.appendVar("SSTATEPOSTUNPACKFUNCS", " uninative_changeinterp")
-        d.appendVarFlag("SSTATEPOSTUNPACKFUNCS", "vardepvalueexclude", "| uninative_changeinterp")
-        d.appendVar("BUILD_LDFLAGS", " -Wl,--allow-shlib-undefined -Wl,--dynamic-linker=${UNINATIVE_LOADER}")
-        d.appendVarFlag("BUILD_LDFLAGS", "vardepvalueexclude", "| -Wl,--allow-shlib-undefined -Wl,--dynamic-linker=${UNINATIVE_LOADER}")
-        d.appendVarFlag("BUILD_LDFLAGS", "vardepsexclude", "UNINATIVE_LOADER")
-        d.prependVar("PATH", "${STAGING_DIR}-uninative/${BUILD_ARCH}-linux${bindir_native}:")
-
-python uninative_changeinterp () {
-    import subprocess
-    import stat
-    import oe.qa
-
-    if not (bb.data.inherits_class('native', d) or bb.data.inherits_class('crosssdk', d) or bb.data.inherits_class('cross', d)):
-        return
-
-    sstateinst = d.getVar('SSTATE_INSTDIR')
-    for walkroot, dirs, files in os.walk(sstateinst):
-        for file in files:
-            if file.endswith(".so") or ".so." in file:
-                continue
-            f = os.path.join(walkroot, file)
-            if os.path.islink(f):
-                continue
-            s = os.stat(f)
-            if not ((s[stat.ST_MODE] & stat.S_IXUSR) or (s[stat.ST_MODE] & stat.S_IXGRP) or (s[stat.ST_MODE] & stat.S_IXOTH)):
-                continue
-            elf = oe.qa.ELFFile(f)
-            try:
-                elf.open()
-            except oe.qa.NotELFFileError:
-                continue
-            if not elf.isDynamic():
-                continue
-
-            subprocess.check_output(("patchelf-uninative", "--set-interpreter", d.getVar("UNINATIVE_LOADER"), f), stderr=subprocess.STDOUT)
-}
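
The class above expects the distro to point it at a tarball and a per-architecture checksum; a minimal sketch of that configuration (the checksum is a placeholder, real values appear in the yocto-uninative.inc hunk later in this diff):

    UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/${UNINATIVE_VERSION}/"
    UNINATIVE_CHECKSUM[x86_64] ?= "<sha256 of the x86_64 uninative tarball>"
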
diff --git a/poky/meta/classes/update-alternatives.bbclass b/poky/meta/classes/update-alternatives.bbclass
deleted file mode 100644
index fc1ffd8..0000000
--- a/poky/meta/classes/update-alternatives.bbclass
+++ /dev/null
@@ -1,327 +0,0 @@
-# This class is used to help the alternatives system which is useful when
-# multiple sources provide same command. You can use update-alternatives
-# command directly in your recipe, but in most cases this class simplifies
-# that job.
-#
-# To use this class a number of variables should be defined:
-#
-# List all of the alternatives needed by a package:
-# ALTERNATIVE:<pkg> = "name1 name2 name3 ..."
-#
-#   i.e. ALTERNATIVE:busybox = "sh sed test bracket"
-#
-# The pathname of the link
-# ALTERNATIVE_LINK_NAME[name] = "target"
-#
-#   This is the name of the binary once it's been installed onto the runtime.
-#   This name is global to all split packages in this recipe, and should match
-#   other recipes with the same functionality.
-#   i.e. ALTERNATIVE_LINK_NAME[bracket] = "/usr/bin/["
-#
-# NOTE: If ALTERNATIVE_LINK_NAME is not defined, it defaults to ${bindir}/name
-#
-# The default link to create for all targets
-# ALTERNATIVE_TARGET = "target"
-#
-#   This is useful in a multicall binary case
-#   i.e. ALTERNATIVE_TARGET = "/bin/busybox"
-#
-# A non-default link to create for a target
-# ALTERNATIVE_TARGET[name] = "target"
-#
-#   This is the name of the binary as it has been installed by do_install
-#   i.e. ALTERNATIVE_TARGET[sh] = "/bin/bash"
-#
-# A package specific link for a target
-# ALTERNATIVE_TARGET_<pkg>[name] = "target"
-#
-#   This is useful when a recipe provides multiple alternatives for the
-#   same item.
-#
-# NOTE: If ALTERNATIVE_TARGET is not defined, it will inherit the value
-# from ALTERNATIVE_LINK_NAME.
-#
-# NOTE: If the ALTERNATIVE_LINK_NAME and ALTERNATIVE_TARGET are the same,
-# ALTERNATIVE_TARGET will have '.{BPN}' appended to it.  If the file
-# referenced has not been renamed, it will also be renamed.  (This avoids
-# the need to rename alternative files in the do_install step, but still
-# supports it if necessary for some reason.)
-#
-# The default priority for any alternatives
-# ALTERNATIVE_PRIORITY = "priority"
-#
-#   i.e. default is ALTERNATIVE_PRIORITY = "10"
-#
-# The non-default priority for a specific target
-# ALTERNATIVE_PRIORITY[name] = "priority"
-#
-# The package priority for a specific target
-# ALTERNATIVE_PRIORITY_<pkg>[name] = "priority"
-
-ALTERNATIVE_PRIORITY = "10"
-
-# We need special processing for vardeps because it can not work on
-# modified flag values.  So we aggregate the flags into a new variable
-# and include that variable in the set.
-UPDALTVARS  = "ALTERNATIVE ALTERNATIVE_LINK_NAME ALTERNATIVE_TARGET ALTERNATIVE_PRIORITY"
-
-PACKAGE_WRITE_DEPS += "virtual/update-alternatives-native"
-
-def gen_updatealternativesvardeps(d):
-    pkgs = (d.getVar("PACKAGES") or "").split()
-    vars = (d.getVar("UPDALTVARS") or "").split()
-
-    # First compute them for non_pkg versions
-    for v in vars:
-        for flag in sorted((d.getVarFlags(v) or {}).keys()):
-            if flag == "doc" or flag == "vardeps" or flag == "vardepsexp":
-                continue
-            d.appendVar('%s_VARDEPS' % (v), ' %s:%s' % (flag, d.getVarFlag(v, flag, False)))
-
-    for p in pkgs:
-        for v in vars:
-            for flag in sorted((d.getVarFlags("%s_%s" % (v,p)) or {}).keys()):
-                if flag == "doc" or flag == "vardeps" or flag == "vardepsexp":
-                    continue
-                d.appendVar('%s_VARDEPS_%s' % (v,p), ' %s:%s' % (flag, d.getVarFlag('%s_%s' % (v,p), flag, False)))
-
-def ua_extend_depends(d):
-    if not 'virtual/update-alternatives' in d.getVar('PROVIDES'):
-        d.appendVar('DEPENDS', ' virtual/${MLPREFIX}update-alternatives')
-
-def update_alternatives_enabled(d):
-    # Update Alternatives only works on target packages...
-    if bb.data.inherits_class('native', d) or \
-       bb.data.inherits_class('cross', d) or bb.data.inherits_class('crosssdk', d) or \
-       bb.data.inherits_class('cross-canadian', d):
-        return False
-
-    # Disable when targeting mingw32 (no target support)
-    if d.getVar("TARGET_OS") == "mingw32":
-        return False
-
-    return True
-
-python __anonymous() {
-    if not update_alternatives_enabled(d):
-        return
-
-    # compute special vardeps
-    gen_updatealternativesvardeps(d)
-
-    # extend the depends to include virtual/update-alternatives
-    ua_extend_depends(d)
-}
-
-def gen_updatealternativesvars(d):
-    ret = []
-    pkgs = (d.getVar("PACKAGES") or "").split()
-    vars = (d.getVar("UPDALTVARS") or "").split()
-
-    for v in vars:
-        ret.append(v + "_VARDEPS")
-
-    for p in pkgs:
-        for v in vars:
-            ret.append(v + ":" + p)
-            ret.append(v + "_VARDEPS_" + p)
-    return " ".join(ret)
-
-# Now the new stuff, we use a custom function to generate the right values
-populate_packages[vardeps] += "${UPDALTVARS} ${@gen_updatealternativesvars(d)}"
-
-# We need to do the rename after the image creation step, but before
-# the split and strip steps..  PACKAGE_PREPROCESS_FUNCS is the right
-# place for that.
-PACKAGE_PREPROCESS_FUNCS += "apply_update_alternative_renames"
-python apply_update_alternative_renames () {
-    if not update_alternatives_enabled(d):
-       return
-
-    import re
-
-    def update_files(alt_target, alt_target_rename, pkg, d):
-        f = d.getVar('FILES:' + pkg)
-        if f:
-            f = re.sub(r'(^|\s)%s(\s|$)' % re.escape (alt_target), r'\1%s\2' % alt_target_rename, f)
-            d.setVar('FILES:' + pkg, f)
-
-    # Check for deprecated usage...
-    pn = d.getVar('BPN')
-    if d.getVar('ALTERNATIVE_LINKS') != None:
-        bb.fatal('%s: Use of ALTERNATIVE_LINKS/ALTERNATIVE_PATH/ALTERNATIVE_NAME is no longer supported, please convert to the updated syntax, see update-alternatives.bbclass for more info.' % pn)
-
-    # Do actual update alternatives processing
-    pkgdest = d.getVar('PKGD')
-    for pkg in (d.getVar('PACKAGES') or "").split():
-        # If the src == dest, we know we need to rename the dest by appending ${BPN}
-        link_rename = []
-        for alt_name in (d.getVar('ALTERNATIVE:%s' % pkg) or "").split():
-            alt_link     = d.getVarFlag('ALTERNATIVE_LINK_NAME', alt_name)
-            if not alt_link:
-                alt_link = "%s/%s" % (d.getVar('bindir'), alt_name)
-                d.setVarFlag('ALTERNATIVE_LINK_NAME', alt_name, alt_link)
-            if alt_link.startswith(os.path.join(d.getVar('sysconfdir'), 'init.d')):
-                # Managing init scripts does not work (bug #10433), foremost
-                # because of a race with update-rc.d
-                bb.fatal("Using update-alternatives for managing SysV init scripts is not supported")
-
-            alt_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name) or d.getVarFlag('ALTERNATIVE_TARGET', alt_name)
-            alt_target   = alt_target or d.getVar('ALTERNATIVE_TARGET_%s' % pkg) or d.getVar('ALTERNATIVE_TARGET') or alt_link
-            # Sometimes alt_target is specified as relative to the link name.
-            alt_target   = os.path.join(os.path.dirname(alt_link), alt_target)
-
-            # If the link and target are the same name, we need to rename the target.
-            if alt_link == alt_target:
-                src = '%s/%s' % (pkgdest, alt_target)
-                alt_target_rename = '%s.%s' % (alt_target, pn)
-                dest = '%s/%s' % (pkgdest, alt_target_rename)
-                if os.path.lexists(dest):
-                    bb.note('%s: Already renamed: %s' % (pn, alt_target_rename))
-                elif os.path.lexists(src):
-                    if os.path.islink(src):
-                        # Delay rename of links
-                        link_rename.append((alt_target, alt_target_rename))
-                    else:
-                        bb.note('%s: Rename %s -> %s' % (pn, alt_target, alt_target_rename))
-                        bb.utils.rename(src, dest)
-                        update_files(alt_target, alt_target_rename, pkg, d)
-                else:
-                    bb.warn("%s: alternative target (%s or %s) does not exist, skipping..." % (pn, alt_target, alt_target_rename))
-                    continue
-                d.setVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name, alt_target_rename)
-
-        # Process delayed link names
-        # Do these after other renames so we can correct broken links
-        for (alt_target, alt_target_rename) in link_rename:
-            src = '%s/%s' % (pkgdest, alt_target)
-            dest = '%s/%s' % (pkgdest, alt_target_rename)
-            link_target = oe.path.realpath(src, pkgdest, True)
-
-            if os.path.lexists(link_target):
-                # Ok, the link_target exists, we can rename
-                bb.note('%s: Rename (link) %s -> %s' % (pn, alt_target, alt_target_rename))
-                bb.utils.rename(src, dest)
-            else:
-                # Try to resolve the broken link to link.${BPN}
-                link_maybe = '%s.%s' % (os.readlink(src), pn)
-                if os.path.lexists(os.path.join(os.path.dirname(src), link_maybe)):
-                    # Ok, the renamed link target exists.. create a new link, and remove the original
-                    bb.note('%s: Creating new link %s -> %s' % (pn, alt_target_rename, link_maybe))
-                    os.symlink(link_maybe, dest)
-                    os.unlink(src)
-                else:
-                    bb.warn('%s: Unable to resolve dangling symlink: %s' % (pn, alt_target))
-                    continue
-            update_files(alt_target, alt_target_rename, pkg, d)
-}
-
-def update_alternatives_alt_targets(d, pkg):
-    """
-    Returns the update-alternatives metadata for a package.
-
-    The returned format is a list of tuples where the tuple contains:
-    alt_name:     The binary name
-    alt_link:     The path for the binary (Shared by different packages)
-    alt_target:   The path for the renamed binary (Unique per package)
-    alt_priority: The priority of the alt_target
-
-    All the alt_targets will be installed into the sysroot. The alt_link is
-    a symlink pointing to the alt_target with the highest priority.
-    """
-
-    pn = d.getVar('BPN')
-    pkgdest = d.getVar('PKGD')
-    updates = list()
-    for alt_name in (d.getVar('ALTERNATIVE:%s' % pkg) or "").split():
-        alt_link     = d.getVarFlag('ALTERNATIVE_LINK_NAME', alt_name)
-        alt_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name) or \
-                       d.getVarFlag('ALTERNATIVE_TARGET', alt_name) or \
-                       d.getVar('ALTERNATIVE_TARGET_%s' % pkg) or \
-                       d.getVar('ALTERNATIVE_TARGET') or \
-                       alt_link
-        alt_priority = d.getVarFlag('ALTERNATIVE_PRIORITY_%s' % pkg,  alt_name) or \
-                       d.getVarFlag('ALTERNATIVE_PRIORITY',  alt_name) or \
-                       d.getVar('ALTERNATIVE_PRIORITY_%s' % pkg) or  \
-                       d.getVar('ALTERNATIVE_PRIORITY')
-
-        # This shouldn't trigger, as it should have been resolved earlier!
-        if alt_link == alt_target:
-            bb.note('alt_link == alt_target: %s == %s -- correcting, this should not happen!' % (alt_link, alt_target))
-            alt_target = '%s.%s' % (alt_target, pn)
-
-        if not os.path.lexists('%s/%s' % (pkgdest, alt_target)):
-            bb.warn('%s: NOT adding alternative provide %s: %s does not exist' % (pn, alt_link, alt_target))
-            continue
-
-        alt_target = os.path.normpath(alt_target)
-        updates.append( (alt_name, alt_link, alt_target, alt_priority) )
-
-    return updates
-
-PACKAGESPLITFUNCS:prepend = "populate_packages_updatealternatives "
-
-python populate_packages_updatealternatives () {
-    if not update_alternatives_enabled(d):
-        return
-
-    # Do actual update alternatives processing
-    for pkg in (d.getVar('PACKAGES') or "").split():
-        # Create post install/removal scripts
-        alt_setup_links = ""
-        alt_remove_links = ""
-        updates = update_alternatives_alt_targets(d, pkg)
-        for alt_name, alt_link, alt_target, alt_priority in updates:
-            alt_setup_links  += '\tupdate-alternatives --install %s %s %s %s\n' % (alt_link, alt_name, alt_target, alt_priority)
-            alt_remove_links += '\tupdate-alternatives --remove  %s %s\n' % (alt_name, alt_target)
-
-        if alt_setup_links:
-            # RDEPENDS setup
-            provider = d.getVar('VIRTUAL-RUNTIME_update-alternatives')
-            if provider:
-                #bb.note('adding runtime requirement for update-alternatives for %s' % pkg)
-                d.appendVar('RDEPENDS:%s' % pkg, ' ' + d.getVar('MLPREFIX', False) + provider)
-
-            bb.note('adding update-alternatives calls to postinst/prerm for %s' % pkg)
-            bb.note('%s' % alt_setup_links)
-            postinst = d.getVar('pkg_postinst:%s' % pkg)
-            if postinst:
-                postinst = alt_setup_links + postinst
-            else:
-                postinst = '#!/bin/sh\n' + alt_setup_links
-            d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-            bb.note('%s' % alt_remove_links)
-            prerm = d.getVar('pkg_prerm:%s' % pkg) or '#!/bin/sh\n'
-            prerm += alt_remove_links
-            d.setVar('pkg_prerm:%s' % pkg, prerm)
-}
-
-python package_do_filedeps:append () {
-    if update_alternatives_enabled(d):
-        apply_update_alternative_provides(d)
-}
-
-def apply_update_alternative_provides(d):
-    pn = d.getVar('BPN')
-    pkgdest = d.getVar('PKGDEST')
-
-    for pkg in d.getVar('PACKAGES').split():
-        for alt_name in (d.getVar('ALTERNATIVE:%s' % pkg) or "").split():
-            alt_link     = d.getVarFlag('ALTERNATIVE_LINK_NAME', alt_name)
-            alt_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name) or d.getVarFlag('ALTERNATIVE_TARGET', alt_name)
-            alt_target   = alt_target or d.getVar('ALTERNATIVE_TARGET_%s' % pkg) or d.getVar('ALTERNATIVE_TARGET') or alt_link
-
-            if alt_link == alt_target:
-                bb.warn('%s: alt_link == alt_target: %s == %s' % (pn, alt_link, alt_target))
-                alt_target = '%s.%s' % (alt_target, pn)
-
-            if not os.path.lexists('%s/%s/%s' % (pkgdest, pkg, alt_target)):
-                continue
-
-            # Add file provide
-            trans_target = oe.package.file_translate(alt_target)
-            d.appendVar('FILERPROVIDES:%s:%s' % (trans_target, pkg), " " + alt_link)
-            if not trans_target in (d.getVar('FILERPROVIDESFLIST:%s' % pkg) or ""):
-                d.appendVar('FILERPROVIDESFLIST:%s' % pkg, " " + trans_target)
-
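For reference, a minimal recipe fragment using the variables documented at the top of the deleted class; the target names and paths are illustrative only:

    inherit update-alternatives

    ALTERNATIVE:${PN} = "sh bracket"
    ALTERNATIVE_LINK_NAME[sh] = "${base_bindir}/sh"
    ALTERNATIVE_TARGET[sh] = "${base_bindir}/mysh"
    ALTERNATIVE_LINK_NAME[bracket] = "/usr/bin/["
    ALTERNATIVE_PRIORITY = "100"
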
diff --git a/poky/meta/classes/update-rc.d.bbclass b/poky/meta/classes/update-rc.d.bbclass
deleted file mode 100644
index 0a3a608..0000000
--- a/poky/meta/classes/update-rc.d.bbclass
+++ /dev/null
@@ -1,123 +0,0 @@
-UPDATERCPN ?= "${PN}"
-
-DEPENDS:append:class-target = "${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', ' update-rc.d initscripts', '', d)}"
-
-UPDATERCD = "update-rc.d"
-UPDATERCD:class-cross = ""
-UPDATERCD:class-native = ""
-UPDATERCD:class-nativesdk = ""
-
-INITSCRIPT_PARAMS ?= "defaults"
-
-INIT_D_DIR = "${sysconfdir}/init.d"
-
-def use_updatercd(d):
-    # If the distro supports both sysvinit and systemd, and the current recipe
-    # supports systemd, only call update-rc.d on rootfs creation or if systemd
-    # is not running. That's because systemctl enable/disable will already call
-    # update-rc.d if it detects initscripts.
-    if bb.utils.contains('DISTRO_FEATURES', 'systemd', True, False, d) and bb.data.inherits_class('systemd', d):
-        return '[ -n "$D" -o ! -d /run/systemd/system ]'
-    return 'true'
-
-PACKAGE_WRITE_DEPS += "update-rc.d-native"
-
-updatercd_postinst() {
-if ${@use_updatercd(d)} && type update-rc.d >/dev/null 2>/dev/null; then
-	if [ -n "$D" ]; then
-		OPT="-r $D"
-	else
-		OPT="-s"
-	fi
-	update-rc.d $OPT ${INITSCRIPT_NAME} ${INITSCRIPT_PARAMS}
-fi
-}
-
-updatercd_prerm() {
-if ${@use_updatercd(d)} && [ -z "$D" -a -x "${INIT_D_DIR}/${INITSCRIPT_NAME}" ]; then
-	${INIT_D_DIR}/${INITSCRIPT_NAME} stop || :
-fi
-}
-
-updatercd_postrm() {
-if ${@use_updatercd(d)} && type update-rc.d >/dev/null 2>/dev/null; then
-	if [ -n "$D" ]; then
-		OPT="-f -r $D"
-	else
-		OPT="-f"
-	fi
-	update-rc.d $OPT ${INITSCRIPT_NAME} remove
-fi
-}
-
-
-def update_rc_after_parse(d):
-    if d.getVar('INITSCRIPT_PACKAGES', False) == None:
-        if d.getVar('INITSCRIPT_NAME', False) == None:
-            bb.fatal("%s inherits update-rc.d but doesn't set INITSCRIPT_NAME" % d.getVar('FILE', False))
-        if d.getVar('INITSCRIPT_PARAMS', False) == None:
-            bb.fatal("%s inherits update-rc.d but doesn't set INITSCRIPT_PARAMS" % d.getVar('FILE', False))
-
-python __anonymous() {
-    update_rc_after_parse(d)
-}
-
-PACKAGESPLITFUNCS:prepend = "${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'populate_packages_updatercd ', '', d)}"
-PACKAGESPLITFUNCS:remove:class-nativesdk = "populate_packages_updatercd "
-
-populate_packages_updatercd[vardeps] += "updatercd_prerm updatercd_postrm updatercd_postinst"
-populate_packages_updatercd[vardepsexclude] += "OVERRIDES"
-
-python populate_packages_updatercd () {
-    def update_rcd_auto_depend(pkg):
-        import subprocess
-        import os
-        path = d.expand("${D}${INIT_D_DIR}/${INITSCRIPT_NAME}")
-        if not os.path.exists(path):
-            return
-        statement = "grep -q -w '/etc/init.d/functions' %s" % path
-        if subprocess.call(statement, shell=True) == 0:
-            mlprefix = d.getVar('MLPREFIX') or ""
-            d.appendVar('RDEPENDS:' + pkg, ' %sinitd-functions' % (mlprefix))
-
-    def update_rcd_package(pkg):
-        bb.debug(1, 'adding update-rc.d calls to postinst/prerm/postrm for %s' % pkg)
-
-        localdata = bb.data.createCopy(d)
-        overrides = localdata.getVar("OVERRIDES")
-        localdata.setVar("OVERRIDES", "%s:%s" % (pkg, overrides))
-
-        update_rcd_auto_depend(pkg)
-
-        postinst = d.getVar('pkg_postinst:%s' % pkg)
-        if not postinst:
-            postinst = '#!/bin/sh\n'
-        postinst += localdata.getVar('updatercd_postinst')
-        d.setVar('pkg_postinst:%s' % pkg, postinst)
-
-        prerm = d.getVar('pkg_prerm:%s' % pkg)
-        if not prerm:
-            prerm = '#!/bin/sh\n'
-        prerm += localdata.getVar('updatercd_prerm')
-        d.setVar('pkg_prerm:%s' % pkg, prerm)
-
-        postrm = d.getVar('pkg_postrm:%s' % pkg)
-        if not postrm:
-                postrm = '#!/bin/sh\n'
-        postrm += localdata.getVar('updatercd_postrm')
-        d.setVar('pkg_postrm:%s' % pkg, postrm)
-
-        d.appendVar('RRECOMMENDS:' + pkg, " ${MLPREFIX}${UPDATERCD}")
-
-    # Check that this class isn't being inhibited (generally, by
-    # systemd.bbclass) before doing any work.
-    if not d.getVar("INHIBIT_UPDATERCD_BBCLASS"):
-        pkgs = d.getVar('INITSCRIPT_PACKAGES')
-        if pkgs == None:
-            pkgs = d.getVar('UPDATERCPN')
-            packages = (d.getVar('PACKAGES') or "").split()
-            if not pkgs in packages and packages != []:
-                pkgs = packages[0]
-        for pkg in pkgs.split():
-            update_rcd_package(pkg)
-}
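
A minimal usage sketch for the deleted class; the init script name and parameters are illustrative:

    inherit update-rc.d

    INITSCRIPT_NAME = "myservice"
    INITSCRIPT_PARAMS = "defaults 90 10"
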
diff --git a/poky/meta/classes/upstream-version-is-even.bbclass b/poky/meta/classes/upstream-version-is-even.bbclass
deleted file mode 100644
index 256c752..0000000
--- a/poky/meta/classes/upstream-version-is-even.bbclass
+++ /dev/null
@@ -1,5 +0,0 @@
-# This class ensures that the upstream version check only
-# accepts even minor versions (e.g. 3.0.x, 3.2.x, 3.4.x)
-# This scheme is used by Gnome and a number of other projects
-# to signify stable releases vs development releases.
-UPSTREAM_CHECK_REGEX = "[^\d\.](?P<pver>\d+\.(\d*[02468])+(\.\d+)+)\.tar"
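Illustrative only: with this regex a tarball such as foo-3.4.2.tar.gz is accepted (pver captures 3.4.2, an even minor), while foo-3.5.1.tar.gz is ignored; a recipe opts in with a plain inherit:

    inherit upstream-version-is-even
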
diff --git a/poky/meta/classes/useradd-staticids.bbclass b/poky/meta/classes/useradd-staticids.bbclass
index 3acf59c..abe484e 100644
--- a/poky/meta/classes/useradd-staticids.bbclass
+++ b/poky/meta/classes/useradd-staticids.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # In order to support a deterministic set of 'dynamic' users/groups,
 # we need a function to reformat the params based on a static file
 def update_useradd_static_config(d):
diff --git a/poky/meta/classes/useradd.bbclass b/poky/meta/classes/useradd.bbclass
index 20771a0..4d3bd9a 100644
--- a/poky/meta/classes/useradd.bbclass
+++ b/poky/meta/classes/useradd.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 inherit useradd_base
 
 # base-passwd-cross provides the default passwd and group files in the
diff --git a/poky/meta/classes/useradd_base.bbclass b/poky/meta/classes/useradd_base.bbclass
index 7f5b9b7..863cb7b 100644
--- a/poky/meta/classes/useradd_base.bbclass
+++ b/poky/meta/classes/useradd_base.bbclass
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # This bbclass provides basic functionality for user/group settings.
 # This bbclass is intended to be inherited by useradd.bbclass and
 # extrausers.bbclass.
diff --git a/poky/meta/classes/utility-tasks.bbclass b/poky/meta/classes/utility-tasks.bbclass
deleted file mode 100644
index 0466325..0000000
--- a/poky/meta/classes/utility-tasks.bbclass
+++ /dev/null
@@ -1,54 +0,0 @@
-addtask listtasks
-do_listtasks[nostamp] = "1"
-python do_listtasks() {
-    taskdescs = {}
-    maxlen = 0
-    for e in d.keys():
-        if d.getVarFlag(e, 'task'):
-            maxlen = max(maxlen, len(e))
-            if e.endswith('_setscene'):
-                desc = "%s (setscene version)" % (d.getVarFlag(e[:-9], 'doc') or '')
-            else:
-                desc = d.getVarFlag(e, 'doc') or ''
-            taskdescs[e] = desc
-
-    tasks = sorted(taskdescs.keys())
-    for taskname in tasks:
-        bb.plain("%s  %s" % (taskname.ljust(maxlen), taskdescs[taskname]))
-}
-
-CLEANFUNCS ?= ""
-
-T:task-clean = "${LOG_DIR}/cleanlogs/${PN}"
-addtask clean
-do_clean[nostamp] = "1"
-python do_clean() {
-    """clear the build and temp directories"""
-    dir = d.expand("${WORKDIR}")
-    bb.note("Removing " + dir)
-    oe.path.remove(dir)
-
-    dir = "%s.*" % d.getVar('STAMP')
-    bb.note("Removing " + dir)
-    oe.path.remove(dir)
-
-    for f in (d.getVar('CLEANFUNCS') or '').split():
-        bb.build.exec_func(f, d)
-}
-
-addtask checkuri
-do_checkuri[nostamp] = "1"
-do_checkuri[network] = "1"
-python do_checkuri() {
-    src_uri = (d.getVar('SRC_URI') or "").split()
-    if len(src_uri) == 0:
-        return
-
-    try:
-        fetcher = bb.fetch2.Fetch(src_uri, d)
-        fetcher.checkstatus()
-    except bb.fetch2.BBFetchException as e:
-        bb.fatal(str(e))
-}
-
-
diff --git a/poky/meta/classes/utils.bbclass b/poky/meta/classes/utils.bbclass
deleted file mode 100644
index e6f7f95..0000000
--- a/poky/meta/classes/utils.bbclass
+++ /dev/null
@@ -1,364 +0,0 @@
-
-oe_soinstall() {
-	# Purpose: Install shared library file and
-	#          create the necessary links
-	# Example: oe_soinstall libfoo.so.1.2.3 ${D}${libdir}
-	libname=`basename $1`
-	case "$libname" in
-	    *.so)
-	        bbfatal "oe_soinstall: Shared library must have a versioned filename (e.g. libfoo.so.1.2.3)"
-	        ;;
-	esac
-	install -m 755 $1 $2/$libname
-	sonamelink=`${HOST_PREFIX}readelf -d $1 |grep 'Library soname:' |sed -e 's/.*\[\(.*\)\].*/\1/'`
-	if [ -z $sonamelink ]; then
-		bbfatal "oe_soinstall: $libname is missing ELF tag 'SONAME'."
-	fi
-	solink=`echo $libname | sed -e 's/\.so\..*/.so/'`
-	ln -sf $libname $2/$sonamelink
-	ln -sf $libname $2/$solink
-}
-
-oe_libinstall() {
-	# Purpose: Install a library, in all its forms
-	# Example
-	#
-	# oe_libinstall libltdl ${STAGING_LIBDIR}/
-	# oe_libinstall -C src/libblah libblah ${D}/${libdir}/
-	dir=""
-	libtool=""
-	silent=""
-	require_static=""
-	require_shared=""
-	while [ "$#" -gt 0 ]; do
-		case "$1" in
-		-C)
-			shift
-			dir="$1"
-			;;
-		-s)
-			silent=1
-			;;
-		-a)
-			require_static=1
-			;;
-		-so)
-			require_shared=1
-			;;
-		-*)
-			bbfatal "oe_libinstall: unknown option: $1"
-			;;
-		*)
-			break;
-			;;
-		esac
-		shift
-	done
-
-	libname="$1"
-	shift
-	destpath="$1"
-	if [ -z "$destpath" ]; then
-		bbfatal "oe_libinstall: no destination path specified"
-	fi
-
-	__runcmd () {
-		if [ -z "$silent" ]; then
-			echo >&2 "oe_libinstall: $*"
-		fi
-		$*
-	}
-
-	if [ -z "$dir" ]; then
-		dir=`pwd`
-	fi
-
-	dotlai=$libname.lai
-
-	# Sanity check that the libname.lai is unique
-	number_of_files=`(cd $dir; find . -name "$dotlai") | wc -l`
-	if [ $number_of_files -gt 1 ]; then
-		bbfatal "oe_libinstall: $dotlai is not unique in $dir"
-	fi
-
-
-	dir=$dir`(cd $dir;find . -name "$dotlai") | sed "s/^\.//;s/\/$dotlai\$//;q"`
-	olddir=`pwd`
-	__runcmd cd $dir
-
-	lafile=$libname.la
-
-	# If such a file doesn't exist, try to cut the version suffix
-	if [ ! -f "$lafile" ]; then
-		libname1=`echo "$libname" | sed 's/-[0-9.]*$//'`
-		lafile1=$libname1.la
-		if [ -f "$lafile1" ]; then
-			libname=$libname1
-			lafile=$lafile1
-		fi
-	fi
-
-	if [ -f "$lafile" ]; then
-		# libtool archive
-		eval `cat $lafile|grep "^library_names="`
-		libtool=1
-	else
-		library_names="$libname.so* $libname.dll.a $libname.*.dylib"
-	fi
-
-	__runcmd install -d $destpath/
-	dota=$libname.a
-	if [ -f "$dota" -o -n "$require_static" ]; then
-		rm -f $destpath/$dota
-		__runcmd install -m 0644 $dota $destpath/
-	fi
-	if [ -f "$dotlai" -a -n "$libtool" ]; then
-		rm -f $destpath/$libname.la
-		__runcmd install -m 0644 $dotlai $destpath/$libname.la
-	fi
-
-	for name in $library_names; do
-		files=`eval echo $name`
-		for f in $files; do
-			if [ ! -e "$f" ]; then
-				if [ -n "$libtool" ]; then
-					bbfatal "oe_libinstall: $dir/$f not found."
-				fi
-			elif [ -L "$f" ]; then
-				__runcmd cp -P "$f" $destpath/
-			elif [ ! -L "$f" ]; then
-				libfile="$f"
-				rm -f $destpath/$libfile
-				__runcmd install -m 0755 $libfile $destpath/
-			fi
-		done
-	done
-
-	if [ -z "$libfile" ]; then
-		if  [ -n "$require_shared" ]; then
-			bbfatal "oe_libinstall: unable to locate shared library"
-		fi
-	elif [ -z "$libtool" ]; then
-		# special case hack for non-libtool .so.#.#.# links
-		baselibfile=`basename "$libfile"`
-		if (echo $baselibfile | grep -qE '^lib.*\.so\.[0-9.]*$'); then
-			sonamelink=`${HOST_PREFIX}readelf -d $libfile |grep 'Library soname:' |sed -e 's/.*\[\(.*\)\].*/\1/'`
-			solink=`echo $baselibfile | sed -e 's/\.so\..*/.so/'`
-			if [ -n "$sonamelink" -a x"$baselibfile" != x"$sonamelink" ]; then
-				__runcmd ln -sf $baselibfile $destpath/$sonamelink
-			fi
-			__runcmd ln -sf $baselibfile $destpath/$solink
-		fi
-	fi
-
-	__runcmd cd "$olddir"
-}
-
-create_cmdline_wrapper () {
-	# Create a wrapper script where commandline options are needed
-	#
-	# These are useful to work around relocation issues, by passing extra options
-	# to a program
-	#
-	# Usage: create_cmdline_wrapper FILENAME <extra-options>
-
-	cmd=$1
-	shift
-
-	echo "Generating wrapper script for $cmd"
-
-	mv $cmd $cmd.real
-	cmdname=`basename $cmd`
-	dirname=`dirname $cmd`
-	cmdoptions=$@
-	if [ "${base_prefix}" != "" ]; then
-		relpath=`python3 -c "import os; print(os.path.relpath('${D}${base_prefix}', '$dirname'))"`
-		cmdoptions=`echo $@ | sed -e "s:${base_prefix}:\\$realdir/$relpath:g"`
-	fi
-	cat <<END >$cmd
-#!/bin/bash
-realpath=\`readlink -fn \$0\`
-realdir=\`dirname \$realpath\`
-exec -a \$realdir/$cmdname \$realdir/$cmdname.real $cmdoptions "\$@"
-END
-	chmod +x $cmd
-}
-
-create_cmdline_shebang_wrapper () {
-	# Create a wrapper script where commandline options are needed
-	#
-	# These are useful to work around shebang relocation issues, where shebangs are too
-	# long or have arguments in them, thus preventing them from using the /usr/bin/env
-	# shebang
-	#
-	# Usage: create_cmdline_shebang_wrapper FILENAME <extra-options>
-
-	cmd=$1
-	shift
-
-	echo "Generating wrapper script for $cmd"
-
-	# Strip #! and get remaining interpreter + arg
-	argument="$(sed -ne 's/^#! *//p;q' $cmd)"
-	# strip the shebang from the real script as we do not want it to be usable anyway
-	tail -n +2 $cmd > $cmd.real
-	chown --reference=$cmd $cmd.real
-	chmod --reference=$cmd $cmd.real
-	rm -f $cmd
-	cmdname=$(basename $cmd)
-	dirname=$(dirname $cmd)
-	cmdoptions=$@
-	if [ "${base_prefix}" != "" ]; then
-		relpath=`python3 -c "import os; print(os.path.relpath('${D}${base_prefix}', '$dirname'))"`
-		cmdoptions=`echo $@ | sed -e "s:${base_prefix}:\\$realdir/$relpath:g"`
-	fi
-	cat <<END >$cmd
-#!/usr/bin/env bash
-realpath=\`readlink -fn \$0\`
-realdir=\`dirname \$realpath\`
-exec -a \$realdir/$cmdname $argument \$realdir/$cmdname.real $cmdoptions "\$@"
-END
-	chmod +x $cmd
-}
-
-create_wrapper () {
-	# Create a wrapper script where extra environment variables are needed
-	#
-	# These are useful to work around relocation issues, by setting environment
-	# variables which point to paths in the filesystem.
-	#
-	# Usage: create_wrapper FILENAME [[VAR=VALUE]..]
-
-	cmd=$1
-	shift
-
-	echo "Generating wrapper script for $cmd"
-
-	mv $cmd $cmd.real
-	cmdname=`basename $cmd`
-	dirname=`dirname $cmd`
-	exportstring=$@
-	if [ "${base_prefix}" != "" ]; then
-		relpath=`python3 -c "import os; print(os.path.relpath('${D}${base_prefix}', '$dirname'))"`
-		exportstring=`echo $@ | sed -e "s:${base_prefix}:\\$realdir/$relpath:g"`
-	fi
-	cat <<END >$cmd
-#!/bin/bash
-realpath=\`readlink -fn \$0\`
-realdir=\`dirname \$realpath\`
-export $exportstring
-exec -a "\$0" \$realdir/$cmdname.real "\$@"
-END
-	chmod +x $cmd
-}
-
-# Copy files/directories from $1 to $2 but using hardlinks
-# (preserve symlinks)
-hardlinkdir () {
-	from=$1
-	to=$2
-	(cd $from; find . -print0 | cpio --null -pdlu $to)
-}
-
-
-def check_app_exists(app, d):
-    app = d.expand(app).split()[0].strip()
-    path = d.getVar('PATH')
-    return bool(bb.utils.which(path, app))
-
-def explode_deps(s):
-    return bb.utils.explode_deps(s)
-
-def base_set_filespath(path, d):
-    filespath = []
-    extrapaths = (d.getVar("FILESEXTRAPATHS") or "")
-    # Remove default flag which was used for checking
-    extrapaths = extrapaths.replace("__default:", "")
-    # Don't prepend empty strings to the path list
-    if extrapaths != "":
-        path = extrapaths.split(":") + path
-    # The ":" ensures we have an 'empty' override
-    overrides = (":" + (d.getVar("FILESOVERRIDES") or "")).split(":")
-    overrides.reverse()
-    for o in overrides:
-        for p in path:
-            if p != "":
-                filespath.append(os.path.join(p, o))
-    return ":".join(filespath)
-
-def extend_variants(d, var, extend, delim=':'):
-    """Return a string of all bb class extend variants for the given extend"""
-    variants = []
-    whole = d.getVar(var) or ""
-    for ext in whole.split():
-        eext = ext.split(delim)
-        if len(eext) > 1 and eext[0] == extend:
-            variants.append(eext[1])
-    return " ".join(variants)
-
-def multilib_pkg_extend(d, pkg):
-    variants = (d.getVar("MULTILIB_VARIANTS") or "").split()
-    if not variants:
-        return pkg
-    pkgs = pkg
-    for v in variants:
-        pkgs = pkgs + " " + v + "-" + pkg
-    return pkgs
-
-def get_multilib_datastore(variant, d):
-    return oe.utils.get_multilib_datastore(variant, d)
-
-def all_multilib_tune_values(d, var, unique = True, need_split = True, delim = ' '):
-    """Return a string of all ${var} in all multilib tune configuration"""
-    values = []
-    variants = (d.getVar("MULTILIB_VARIANTS") or "").split() + ['']
-    for item in variants:
-        localdata = get_multilib_datastore(item, d)
-        # We need WORKDIR to be consistent with the original datastore
-        localdata.setVar("WORKDIR", d.getVar("WORKDIR"))
-        value = localdata.getVar(var) or ""
-        if value != "":
-            if need_split:
-                for item in value.split(delim):
-                    values.append(item)
-            else:
-                values.append(value)
-    if unique:
-        #we do this to keep order as much as possible
-        ret = []
-        for value in values:
-            if not value in ret:
-                ret.append(value)
-    else:
-        ret = values
-    return " ".join(ret)
-
-def all_multilib_tune_list(vars, d):
-    """
-    Return a list of ${VAR} for each variable VAR in vars from each 
-    multilib tune configuration.
-    It is safe to call from a multilib recipe/context as it can
-    figure out the original tune and remove the multilib overrides.
-    """
-    values = {}
-    for v in vars:
-        values[v] = []
-    values['ml'] = ['']
-
-    variants = (d.getVar("MULTILIB_VARIANTS") or "").split() + ['']
-    for item in variants:
-        localdata = get_multilib_datastore(item, d)
-        values[v].append(localdata.getVar(v))
-        values['ml'].append(item)
-    return values
-all_multilib_tune_list[vardepsexclude] = "OVERRIDES"
-
-# If the user hasn't set up their name/email, set some defaults
-check_git_config() {
-	if ! git config user.email > /dev/null ; then
-		git config --local user.email "${PATCH_GIT_USER_EMAIL}"
-	fi
-	if ! git config user.name > /dev/null ; then
-		git config --local user.name "${PATCH_GIT_USER_NAME}"
-	fi
-}
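
A usage sketch for the wrapper helpers above; the tool name and environment variable are hypothetical:

    do_install:append() {
        # Wrap the installed binary so it can locate its data after relocation
        create_wrapper ${D}${bindir}/mytool MYTOOL_DATADIR=${datadir}/mytool
    }
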
diff --git a/poky/meta/classes/vala.bbclass b/poky/meta/classes/vala.bbclass
deleted file mode 100644
index bfcceff..0000000
--- a/poky/meta/classes/vala.bbclass
+++ /dev/null
@@ -1,24 +0,0 @@
-# Everyone needs vala-native and targets need vala, too,
-# because that is where target builds look for .vapi files.
-#
-VALADEPENDS = ""
-VALADEPENDS:class-target = "vala"
-DEPENDS:append = " vala-native ${VALADEPENDS}"
-
-# Our patched version of Vala looks in STAGING_DATADIR for .vapi files
-export STAGING_DATADIR
-# Upstream Vala >= 0.11 looks in XDG_DATA_DIRS for .vapi files
-export XDG_DATA_DIRS = "${STAGING_DATADIR}:${STAGING_LIBDIR}"
-
-# Package additional files
-FILES:${PN}-dev += "\
-    ${datadir}/vala/vapi/*.vapi \
-    ${datadir}/vala/vapi/*.deps \
-    ${datadir}/gir-1.0 \
-"
-
-# Remove vapigen.m4 that is bundled with tarballs
-# because it does not yet have our cross-compile fixes
-do_configure:prepend() {
-        rm -f ${S}/m4/vapigen.m4
-}
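
A recipe needing Vala support only has to inherit the deleted class; the vala-native dependency, .vapi search paths and -dev packaging above all follow from that:

    inherit vala
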
diff --git a/poky/meta/classes/waf.bbclass b/poky/meta/classes/waf.bbclass
deleted file mode 100644
index 464564a..0000000
--- a/poky/meta/classes/waf.bbclass
+++ /dev/null
@@ -1,75 +0,0 @@
-# avoids build breaks when using no-static-libs.inc
-DISABLE_STATIC = ""
-
-# What Python interpreter to use.  Defaults to Python 3 but can be
-# overridden if required.
-WAF_PYTHON ?= "python3"
-
-B = "${WORKDIR}/build"
-do_configure[cleandirs] += "${B}"
-
-EXTRA_OECONF:append = " ${PACKAGECONFIG_CONFARGS}"
-
-EXTRA_OEWAF_BUILD ??= ""
-# In most cases, you want to pass the same arguments to `waf build` and `waf
-# install`, but you can override it if necessary
-EXTRA_OEWAF_INSTALL ??= "${EXTRA_OEWAF_BUILD}"
-
-def waflock_hash(d):
-    # Calculates the hash used for the waf lock file. This should include
-    # all of the user controllable inputs passed to waf configure. Note
-    # that the full paths for ${B} and ${S} are used; this is OK and desired
-    # because a change to either of these should create a unique lock file
-    # to prevent collisions.
-    import hashlib
-    h = hashlib.sha512()
-    def update(name):
-        val = d.getVar(name)
-        if val is not None:
-            h.update(val.encode('utf-8'))
-    update('S')
-    update('B')
-    update('prefix')
-    update('EXTRA_OECONF')
-    return h.hexdigest()
-
-# Use WAFLOCK to specify a separate lock file. The build is already
-# sufficiently isolated by setting the output directory, this ensures that
-# bitbake won't step on toes of any other configured context in the source
-# directory (e.g. if the source is coming from externalsrc and was previously
-# configured elsewhere).
-export WAFLOCK = ".lock-waf_oe_${@waflock_hash(d)}_build"
-BB_BASEHASH_IGNORE_VARS += "WAFLOCK"
-
-python waf_preconfigure() {
-    import subprocess
-    subsrcdir = d.getVar('S')
-    python = d.getVar('WAF_PYTHON')
-    wafbin = os.path.join(subsrcdir, 'waf')
-    try:
-        result = subprocess.check_output([python, wafbin, '--version'], cwd=subsrcdir, stderr=subprocess.STDOUT)
-        version = result.decode('utf-8').split()[1]
-        if bb.utils.vercmp_string_op(version, "1.8.7", ">="):
-            d.setVar("WAF_EXTRA_CONF", "--bindir=${bindir} --libdir=${libdir}")
-    except subprocess.CalledProcessError as e:
-        bb.warn("Unable to execute waf --version, exit code %d. Assuming waf version without bindir/libdir support." % e.returncode)
-    except FileNotFoundError:
-        bb.fatal("waf does not exist in %s" % subsrcdir)
-}
-
-do_configure[prefuncs] += "waf_preconfigure"
-
-waf_do_configure() {
-	(cd ${S} && ${WAF_PYTHON} ./waf configure -o ${B} --prefix=${prefix} ${WAF_EXTRA_CONF} ${EXTRA_OECONF})
-}
-
-do_compile[progress] = "outof:^\[\s*(\d+)/\s*(\d+)\]\s+"
-waf_do_compile()  {
-	(cd ${S} && ${WAF_PYTHON} ./waf build ${@oe.utils.parallel_make_argument(d, '-j%d', limit=64)} ${EXTRA_OEWAF_BUILD})
-}
-
-waf_do_install() {
-	(cd ${S} && ${WAF_PYTHON} ./waf install --destdir=${D} ${EXTRA_OEWAF_INSTALL})
-}
-
-EXPORT_FUNCTIONS do_configure do_compile do_install
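
A minimal recipe fragment for the deleted waf class; the configure and build options are illustrative, not taken from this diff:

    inherit waf

    # Passed to "waf configure" via EXTRA_OECONF
    EXTRA_OECONF += "--disable-tests"
    # Passed only to "waf build" / "waf install"
    EXTRA_OEWAF_BUILD = "--verbose"
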
diff --git a/poky/meta/classes/xmlcatalog.bbclass b/poky/meta/classes/xmlcatalog.bbclass
deleted file mode 100644
index be155b7..0000000
--- a/poky/meta/classes/xmlcatalog.bbclass
+++ /dev/null
@@ -1,26 +0,0 @@
-DEPENDS = "libxml2-native"
-
-# A whitespace-separated list of XML catalogs to be registered, for example
-# "${sysconfdir}/xml/docbook-xml.xml".
-XMLCATALOGS ?= ""
-
-SYSROOT_PREPROCESS_FUNCS:append = " xmlcatalog_sstate_postinst"
-
-xmlcatalog_complete() {
-	ROOTCATALOG="${STAGING_ETCDIR_NATIVE}/xml/catalog"
-	if [ ! -f $ROOTCATALOG ]; then
-		mkdir --parents $(dirname $ROOTCATALOG)
-		xmlcatalog --noout --create $ROOTCATALOG
-	fi
-	for CATALOG in ${XMLCATALOGS}; do
-		xmlcatalog --noout --add nextCatalog unused file://$CATALOG $ROOTCATALOG
-	done
-}
-
-xmlcatalog_sstate_postinst() {
-	mkdir -p ${SYSROOT_DESTDIR}${bindir}
-	dest=${SYSROOT_DESTDIR}${bindir}/postinst-${PN}-xmlcatalog
-	echo '#!/bin/sh' > $dest
-	echo '${xmlcatalog_complete}' >> $dest
-	chmod 0755 $dest
-}
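
Usage sketch, reusing the catalog path from the class's own comment:

    inherit xmlcatalog
    XMLCATALOGS = "${sysconfdir}/xml/docbook-xml.xml"
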
diff --git a/poky/meta/classes/yocto-check-layer.bbclass b/poky/meta/classes/yocto-check-layer.bbclass
index 329d3f8..404f5fd 100644
--- a/poky/meta/classes/yocto-check-layer.bbclass
+++ b/poky/meta/classes/yocto-check-layer.bbclass
@@ -1,4 +1,10 @@
 #
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+#
 # This class is used by yocto-check-layer script for additional per-recipe tests
 # The first test ensures that the layer has no recipes skipping 'installed-vs-shipped' QA checks
 #
diff --git a/poky/meta/conf/bitbake.conf b/poky/meta/conf/bitbake.conf
index 1d36aae..52a36d7 100644
--- a/poky/meta/conf/bitbake.conf
+++ b/poky/meta/conf/bitbake.conf
@@ -645,10 +645,13 @@
 # Optimization flags.
 ##################################################################
 # Beware: applied last to first
-DEBUG_PREFIX_MAP ?= "-fmacro-prefix-map=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
-                     -fdebug-prefix-map=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
-                     -fdebug-prefix-map=${STAGING_DIR_HOST}= \
-                     -fdebug-prefix-map=${STAGING_DIR_NATIVE}= \
+DEBUG_PREFIX_MAP ?= "-fmacro-prefix-map=${S}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
+ -fdebug-prefix-map=${S}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
+ -fmacro-prefix-map=${B}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
+ -fdebug-prefix-map=${B}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
+ -fdebug-prefix-map=${STAGING_DIR_HOST}= \
+ -fmacro-prefix-map=${STAGING_DIR_HOST}= \
+ -fdebug-prefix-map=${STAGING_DIR_NATIVE}= \
 "
 DEBUG_FLAGS ?= "-g -feliminate-unused-debug-types ${DEBUG_PREFIX_MAP}"
 
@@ -925,7 +928,7 @@
 TRANSLATED_TARGET_ARCH ??= "${@d.getVar('TARGET_ARCH').replace("_", "-")}"
 
 # Set a default umask to use for tasks for determinism
-BB_DEFAULT_UMASK = "022"
+BB_DEFAULT_UMASK ??= "022"
 
 # Complete output from bitbake
 BB_CONSOLELOG ?= "${LOG_DIR}/cooker/${MACHINE}/${DATETIME}.log"
@@ -943,7 +946,7 @@
     SSTATE_HASHEQUIV_OWNER CCACHE_TOP_DIR BB_HASHSERVE GIT_CEILING_DIRECTORIES \
     OMP_NUM_THREADS BB_CURRENTTASK"
 BB_BASEHASH_IGNORE_VARS ?= "${BB_HASHEXCLUDE_COMMON} PSEUDO_IGNORE_PATHS BUILDHISTORY_DIR \
-    SSTATE_DIR SOURCE_DATE_EPOCH"
+    SSTATE_DIR SOURCE_DATE_EPOCH RUST_BUILD_SYS RUST_HOST_SYS RUST_TARGET_SYS"
 BB_HASHCONFIG_IGNORE_VARS ?= "${BB_HASHEXCLUDE_COMMON} DATE TIME SSH_AGENT_PID \
     SSH_AUTH_SOCK PSEUDO_BUILD BB_ENV_PASSTHROUGH_ADDITIONS DISABLE_SANITY_CHECKS \
     PARALLEL_MAKE BB_NUMBER_THREADS BB_ORIGENV BB_INVALIDCONF BBINCLUDED \
diff --git a/poky/meta/conf/distro/include/default-distrovars.inc b/poky/meta/conf/distro/include/default-distrovars.inc
index 230bab8..abf48f7 100644
--- a/poky/meta/conf/distro/include/default-distrovars.inc
+++ b/poky/meta/conf/distro/include/default-distrovars.inc
@@ -19,7 +19,7 @@
 # seccomp is not yet ported to microblaze
 DISTRO_FEATURES_DEFAULT:remove:microblaze = "seccomp"
 
-DISTRO_FEATURES_DEFAULT ?= "acl alsa bluetooth debuginfod ext2 ipv4 ipv6 largefile pcmcia usbgadget usbhost wifi xattr nfs zeroconf pci 3g nfc x11 vfat seccomp"
+DISTRO_FEATURES_DEFAULT ?= "acl alsa bluetooth debuginfod ext2 ipv4 ipv6 pcmcia usbgadget usbhost wifi xattr nfs zeroconf pci 3g nfc x11 vfat seccomp"
 DISTRO_FEATURES ?= "${DISTRO_FEATURES_DEFAULT}"
 IMAGE_FEATURES ?= ""
 
diff --git a/poky/meta/conf/distro/include/maintainers.inc b/poky/meta/conf/distro/include/maintainers.inc
index e20275c..3c80a3a 100644
--- a/poky/meta/conf/distro/include/maintainers.inc
+++ b/poky/meta/conf/distro/include/maintainers.inc
@@ -90,7 +90,6 @@
 RECIPE_MAINTAINER:pn-ca-certificates = "Alexander Kanavin <alex.kanavin@gmail.com>"
 RECIPE_MAINTAINER:pn-cairo = "Anuj Mittal <anuj.mittal@intel.com>"
 RECIPE_MAINTAINER:pn-cargo = "Randy MacLeod <Randy.MacLeod@windriver.com>"
-RECIPE_MAINTAINER:pn-cargo-cross-canadian-${TRANSLATED_TARGET_ARCH} = "Randy MacLeod <Randy.MacLeod@windriver.com>"
 RECIPE_MAINTAINER:pn-cantarell-fonts = "Alexander Kanavin <alex.kanavin@gmail.com>"
 RECIPE_MAINTAINER:pn-ccache = "Robert Yang <liezhi.yang@windriver.com>"
 RECIPE_MAINTAINER:pn-cdrtools-native = "Yi Zhao <yi.zhao@windriver.com>"
@@ -189,7 +188,7 @@
 RECIPE_MAINTAINER:pn-gcc-crosssdk-${SDK_SYS} = "Khem Raj <raj.khem@gmail.com>"
 RECIPE_MAINTAINER:pn-gcc-runtime = "Khem Raj <raj.khem@gmail.com>"
 RECIPE_MAINTAINER:pn-gcc-sanitizers = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER:pn-gcc-source-12.1.0 = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER:pn-gcc-source-12.2.0 = "Khem Raj <raj.khem@gmail.com>"
 RECIPE_MAINTAINER:pn-gconf = "Ross Burton <ross.burton@arm.com>"
 RECIPE_MAINTAINER:pn-gcr = "Alexander Kanavin <alex.kanavin@gmail.com>"
 RECIPE_MAINTAINER:pn-gdb = "Khem Raj <raj.khem@gmail.com>"
@@ -545,10 +544,10 @@
 RECIPE_MAINTAINER:pn-opensbi = "Alistair Francis <alistair.francis@wdc.com>"
 RECIPE_MAINTAINER:pn-openssh = "Unassigned <unassigned@yoctoproject.org>"
 RECIPE_MAINTAINER:pn-openssl = "Alexander Kanavin <alex.kanavin@gmail.com>"
-RECIPE_MAINTAINER:pn-opkg = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
-RECIPE_MAINTAINER:pn-opkg-arch-config = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
-RECIPE_MAINTAINER:pn-opkg-keyrings = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
-RECIPE_MAINTAINER:pn-opkg-utils = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
+RECIPE_MAINTAINER:pn-opkg = "Alex Stewart <alex.stewart@ni.com>"
+RECIPE_MAINTAINER:pn-opkg-arch-config = "Alex Stewart <alex.stewart@ni.com>"
+RECIPE_MAINTAINER:pn-opkg-keyrings = "Alex Stewart <alex.stewart@ni.com>"
+RECIPE_MAINTAINER:pn-opkg-utils = "Alex Stewart <alex.stewart@ni.com>"
 RECIPE_MAINTAINER:pn-orc = "Anuj Mittal <anuj.mittal@intel.com>"
 RECIPE_MAINTAINER:pn-os-release = "Ross Burton <ross.burton@arm.com>"
 RECIPE_MAINTAINER:pn-ovmf = "Ricardo Neri <ricardo.neri-calderon@linux.intel.com>"
@@ -718,12 +717,9 @@
 RECIPE_MAINTAINER:pn-ruby = "Ross Burton <ross.burton@arm.com>"
 RECIPE_MAINTAINER:pn-run-postinsts = "Ross Burton <ross.burton@arm.com>"
 RECIPE_MAINTAINER:pn-rust = "Randy MacLeod <Randy.MacLeod@windriver.com>"
-RECIPE_MAINTAINER:pn-rust-cross-${TUNE_PKGARCH}-${TCLIBC} = "Randy MacLeod <Randy.MacLeod@windriver.com>"
-RECIPE_MAINTAINER:pn-rust-crosssdk-${SDK_ARCH}-glibc = "Randy MacLeod <Randy.MacLeod@windriver.com>"
 RECIPE_MAINTAINER:pn-rust-cross-canadian-${TRANSLATED_TARGET_ARCH} = "Randy MacLeod <Randy.MacLeod@windriver.com>"
 RECIPE_MAINTAINER:pn-rust-hello-world = "Randy MacLeod <Randy.MacLeod@windriver.com>"
 RECIPE_MAINTAINER:pn-rust-llvm = "Randy MacLeod <Randy.MacLeod@windriver.com>"
-RECIPE_MAINTAINER:pn-rust-tools-cross-canadian-${TRANSLATED_TARGET_ARCH} = "Randy MacLeod <Randy.MacLeod@windriver.com>"
 RECIPE_MAINTAINER:pn-rxvt-unicode = "Unassigned <unassigned@yoctoproject.org>"
 RECIPE_MAINTAINER:pn-sato-screenshot = "Ross Burton <ross.burton@arm.com>"
 RECIPE_MAINTAINER:pn-sato-icon-theme = "Richard Purdie <richard.purdie@linuxfoundation.org>"
diff --git a/poky/meta/conf/distro/include/no-static-libs.inc b/poky/meta/conf/distro/include/no-static-libs.inc
index ee67383..7535992 100644
--- a/poky/meta/conf/distro/include/no-static-libs.inc
+++ b/poky/meta/conf/distro/include/no-static-libs.inc
@@ -18,6 +18,8 @@
 DISABLE_STATIC:pn-gcc-runtime = ""
 # libusb1-native is used to build static dfu-util-native
 DISABLE_STATIC:pn-libusb1-native = ""
+# needed by rust
+DISABLE_STATIC:pn-musl = ""
 
 EXTRA_OECONF:append = "${DISABLE_STATIC}"
 
diff --git a/poky/meta/conf/distro/include/ptest-packagelists.inc b/poky/meta/conf/distro/include/ptest-packagelists.inc
index 6c4339e..56088e4 100644
--- a/poky/meta/conf/distro/include/ptest-packagelists.inc
+++ b/poky/meta/conf/distro/include/ptest-packagelists.inc
@@ -22,6 +22,7 @@
     gettext-ptest \
     glib-networking-ptest \
     gzip-ptest \
+    json-c-ptest \
     json-glib-ptest \
     libconvert-asn1-perl-ptest \
     liberror-perl-ptest \
diff --git a/poky/meta/conf/distro/include/tcmode-default.inc b/poky/meta/conf/distro/include/tcmode-default.inc
index 4477689..9abd121 100644
--- a/poky/meta/conf/distro/include/tcmode-default.inc
+++ b/poky/meta/conf/distro/include/tcmode-default.inc
@@ -18,16 +18,16 @@
 
 GCCVERSION ?= "12.%"
 SDKGCCVERSION ?= "${GCCVERSION}"
-BINUVERSION ?= "2.38%"
+BINUVERSION ?= "2.39%"
 GDBVERSION ?= "12.%"
-GLIBCVERSION ?= "2.35"
-LINUXLIBCVERSION ?= "5.16%"
+GLIBCVERSION ?= "2.36"
+LINUXLIBCVERSION ?= "5.19%"
 QEMUVERSION ?= "7.0%"
-GOVERSION ?= "1.18%"
+GOVERSION ?= "1.19%"
 # This can not use wildcards like 8.0.% since it is also used in mesa to denote
 # llvm version being used, so always bump it with llvm recipe version bump
 LLVMVERSION ?= "14.0.6"
-RUSTVERSION ?= "1.62%"
+RUSTVERSION ?= "1.63%"
 
 PREFERRED_VERSION_gcc ?= "${GCCVERSION}"
 PREFERRED_VERSION_gcc-cross-${TARGET_ARCH} ?= "${GCCVERSION}"
diff --git a/poky/meta/conf/distro/include/yocto-uninative.inc b/poky/meta/conf/distro/include/yocto-uninative.inc
index 411fe45..7012db4 100644
--- a/poky/meta/conf/distro/include/yocto-uninative.inc
+++ b/poky/meta/conf/distro/include/yocto-uninative.inc
@@ -6,10 +6,10 @@
 # to the distro running on the build machine.
 #
 
-UNINATIVE_MAXGLIBCVERSION = "2.35"
-UNINATIVE_VERSION = "3.6"
+UNINATIVE_MAXGLIBCVERSION = "2.36"
+UNINATIVE_VERSION = "3.7"
 
 UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/${UNINATIVE_VERSION}/"
-UNINATIVE_CHECKSUM[aarch64] ?= "d64831cf2792c8e470c2e42230660e1a8e5de56a579cdd59978791f663c2f3ed"
-UNINATIVE_CHECKSUM[i686] ?= "2f0ee9b66b1bb2c85e2b592fb3c9c7f5d77399fa638d74961330cdb8de34ca3b"
-UNINATIVE_CHECKSUM[x86_64] ?= "9bfc4c970495b3716b2f9e52c4df9f968c02463a9a95000f6657fbc3fde1f098"
+UNINATIVE_CHECKSUM[aarch64] ?= "6a29bcae4b5b716d2d520e18800b33943b65f8a835eac1ff8793fc5ee65b4be6"
+UNINATIVE_CHECKSUM[i686] ?= "3f6d52e64996570c716108d49f8108baccf499a283bbefae438c7266b7a93305"
+UNINATIVE_CHECKSUM[x86_64] ?= "b110bf2e10fe420f5ca2f3ec55f048ee5f0a54c7e34856a3594e51eb2aea0570"
diff --git a/poky/meta/conf/machine/include/x86/x86-base.inc b/poky/meta/conf/machine/include/x86/x86-base.inc
index b70924f..4052eac 100644
--- a/poky/meta/conf/machine/include/x86/x86-base.inc
+++ b/poky/meta/conf/machine/include/x86/x86-base.inc
@@ -18,7 +18,7 @@
 # kernel-related variables
 #
 PREFERRED_PROVIDER_virtual/kernel ??= "linux-yocto"
-PREFERRED_VERSION_linux-yocto ??= "5.15%"
+PREFERRED_VERSION_linux-yocto ??= "5.19%"
 
 #
 # XSERVER subcomponents, used to build the XSERVER variable
diff --git a/poky/meta/conf/machine/qemuarmv5.conf b/poky/meta/conf/machine/qemuarmv5.conf
index abdae5f..fe6c011 100644
--- a/poky/meta/conf/machine/qemuarmv5.conf
+++ b/poky/meta/conf/machine/qemuarmv5.conf
@@ -17,5 +17,5 @@
 QB_OPT_APPEND = "-device qemu-xhci -device usb-tablet -device usb-kbd"
 QB_DTB = "${@oe.utils.version_less_or_equal('PREFERRED_VERSION_linux-yocto', '4.7', '', 'zImage-versatile-pb.dtb', d)}"
 
-PREFERRED_VERSION_linux-yocto ??= "5.15%"
+PREFERRED_VERSION_linux-yocto ??= "5.19%"
 KMACHINE:qemuarmv5 = "arm-versatile-926ejs"
diff --git a/poky/meta/conf/machine/qemux86-64.conf b/poky/meta/conf/machine/qemux86-64.conf
index 9013534..8640867 100644
--- a/poky/meta/conf/machine/qemux86-64.conf
+++ b/poky/meta/conf/machine/qemux86-64.conf
@@ -10,7 +10,7 @@
 
 require conf/machine/include/qemu.inc
 DEFAULTTUNE ?= "core2-64"
-require conf/machine/include/x86/tune-core2.inc
+require conf/machine/include/x86/tune-corei7.inc
 require conf/machine/include/x86/qemuboot-x86.inc
 
 UBOOT_MACHINE ?= "qemu-x86_64_defconfig"
diff --git a/poky/meta/conf/testexport.conf b/poky/meta/conf/testexport.conf
new file mode 100644
index 0000000..8880f10
--- /dev/null
+++ b/poky/meta/conf/testexport.conf
@@ -0,0 +1,3 @@
+TEST_EXPORT_SDK_PACKAGES ?= ""
+TEST_EXPORT_SDK_DIR ?= "sdk"
+TEST_EXPORT_SDK_NAME ?= "testexport-tools-nativesdk"
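
These are only weak defaults for the test export machinery; a build can override them in local.conf or an image recipe. A hedged illustration (the package name below is an assumption, used only as an example):

    TEST_EXPORT_SDK_PACKAGES = "nativesdk-python3"
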
diff --git a/poky/meta/files/layers.example.json b/poky/meta/files/layers.example.json
new file mode 100644
index 0000000..0a6a6a7
--- /dev/null
+++ b/poky/meta/files/layers.example.json
@@ -0,0 +1,48 @@
+{
+    "sources": {
+        "meta-alex": {
+            "contains_this_file": true,
+            "git-remote": {
+                "branch": "master",
+                "describe": "",
+                "remotes": {
+                    "remote-alex": {
+                        "uri": "https://github.com/kanavin/meta-alex"
+                    }
+                },
+                "rev": "05b25605fb8b2399e4706d7323828676bf0da0b5"
+            },
+            "path": "meta-alex"
+        },
+        "meta-intel": {
+            "git-remote": {
+                "branch": "master",
+                "describe": "15.0-hardknott-3.3-310-g0a96edae",
+                "remotes": {
+                    "origin": {
+                        "uri": "git://git.yoctoproject.org/meta-intel"
+                    }
+                },
+                "rev": "0a96edae609a3f48befac36af82cf1eed6786b4a"
+            },
+            "path": "meta-intel"
+        },
+        "poky": {
+            "git-remote": {
+                "branch": "akanavin/setup-layers",
+                "describe": "4.1_M1-374-g9dda719b2a",
+                "remotes": {
+                    "origin": {
+                        "uri": "git://git.yoctoproject.org/poky"
+                    },
+                    "poky-contrib": {
+                        "uri": "ssh://git@push.yoctoproject.org/poky-contrib"
+                    }
+                },
+                "rev": "9dda719b2a4727a4d43a6ab8d9e23f8ca68790ec"
+            },
+            "path": "poky"
+        }
+    },
+    "version": "1.0"
+}
diff --git a/poky/meta/files/layers.schema.json b/poky/meta/files/layers.schema.json
new file mode 100644
index 0000000..659ee8d
--- /dev/null
+++ b/poky/meta/files/layers.schema.json
@@ -0,0 +1,76 @@
+{
+    "description": "OpenEmbedder Layer Setup Manifest",
+    "type": "object",
+    "additionalProperties": false,
+    "required": [
+        "version"
+    ],
+    "properties": {
+        "version": {
+            "description": "The version of this document; currently '1.0'",
+            "enum": ["1.0"]
+        },
+        "sources": {
+            "description": "The dict of layer sources",
+            "type": "object",
+            "patternProperties": { ".*" : {
+                "type": "object",
+                "description": "The upstream source from which a set of layers may be fetched",
+                "additionalProperties": false,
+                "required": [
+                    "path"
+                ],
+                "properties": {
+                    "path": {
+                        "description": "The path where this layer source will be placed when fetching",
+                        "type": "string"
+                    },
+                    "contains_this_file": {
+                        "description": "Whether the directory with the layer source also contains this json description. Tools may want to skip the checkout of the source then.",
+                        "type": "boolean"
+                    },
+                    "git-remote": {
+                                "description": "A remote git source from which to fetch",
+                                "type": "object",
+                                "additionalProperties": false,
+                                "required": [
+                                    "rev"
+                                ],
+                                "properties": {
+                                    "branch": {
+                                        "description": "The git branch to fetch (optional)",
+                                        "type": "string"
+                                    },
+                                    "rev": {
+                                        "description": "The git revision to checkout",
+                                        "type": "string"
+                                    },
+                                    "describe": {
+                                        "description": "The output of 'git describe' (human readable description of the revision using tags in revision history).",
+                                        "type": "string"
+                                    },
+                                    "remotes": {
+                                        "description": "The dict of git remotes to add to this repository",
+                                        "type": "object",
+                                        "patternProperties": { ".*" : {
+                                            "description": "A git remote",
+                                            "type": "object",
+                                            "addtionalProperties": false,
+                                            "required": [
+                                                "uri"
+                                            ],
+                                            "properties": {
+                                                "uri": {
+                                                    "description": "The URI for the remote",
+                                                    "type": "string"
+                                                }
+                                            }
+                                        }}
+                                    }
+                                }
+                    }
+                }
+            }
+        }}
+    }
+}
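
The selftest added further down validates documents against this schema with the jsonschema command-line tool from python3-jsonschema-native; roughly, assuming a jsonschema executable is on PATH, a manual check looks like:

    jsonschema -i poky/meta/files/layers.example.json poky/meta/files/layers.schema.json
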
diff --git a/poky/meta/lib/bblayers/buildconf.py b/poky/meta/lib/bblayers/buildconf.py
new file mode 100644
index 0000000..e07fc53
--- /dev/null
+++ b/poky/meta/lib/bblayers/buildconf.py
@@ -0,0 +1,85 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import logging
+import os
+import stat
+import sys
+import shutil
+import json
+
+import bb.utils
+import bb.process
+
+from bblayers.common import LayerPlugin
+
+logger = logging.getLogger('bitbake-layers')
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
+
+import oe.buildcfg
+
+def plugin_init(plugins):
+    return BuildConfPlugin()
+
+class BuildConfPlugin(LayerPlugin):
+    notes_fixme = """FIXME: Please place here the description of this build configuration.
+It will be shown to the users when they set up their builds via TEMPLATECONF.
+"""
+
+    def _save_conf(self, templatename, templatepath, oecorepath, relpaths_to_oecore):
+        confdir = os.path.join(os.environ["BBPATH"], "conf")
+        destdir = os.path.join(templatepath, "conf", "templates", templatename)
+        os.makedirs(destdir, exist_ok=True)
+
+        with open(os.path.join(confdir, "local.conf")) as src:
+            with open(os.path.join(destdir, "local.conf.sample"), 'w') as dest:
+                dest.write(src.read())
+
+        with open(os.path.join(confdir, "bblayers.conf")) as src:
+            with open(os.path.join(destdir, "bblayers.conf.sample"), 'w') as dest:
+                bblayers_data = src.read()
+
+                for (abspath, relpath) in relpaths_to_oecore:
+                    bblayers_data = bblayers_data.replace(abspath, "##OEROOT##/" + relpath)
+                dest.write(bblayers_data)
+
+        with open(os.path.join(destdir, "conf-notes.txt"), 'w') as dest:
+            dest.write(self.notes_fixme)
+
+        logger.info("""Configuration template placed into {}
+Please review the files in there, and particularly provide a configuration description in {}
+You can try out the configuration with
+TEMPLATECONF={} . {}/oe-init-build-env build-try-{}"""
+.format(destdir, os.path.join(destdir, "conf-notes.txt"), destdir, oecorepath, templatename))
+
+    def do_save_build_conf(self, args):
+        """ Save the currently active build configuration (conf/local.conf, conf/bblayers.conf) as a template into a layer.\n This template can later be used for setting up builds via TEMPLATECONF. """
+        repos = {}
+        layers = oe.buildcfg.get_layer_revisions(self.tinfoil.config_data)
+        targetlayer = None
+        oecore = None
+
+        for l in layers:
+            if l[0] == os.path.abspath(args.layerpath):
+                targetlayer = l[0]
+            if l[1] == 'meta':
+                oecore = os.path.dirname(l[0])
+
+        if not targetlayer:
+            logger.error("Layer {} not in one of the currently enabled layers:\n{}".format(args.layerpath, "\n".join([l[0] for l in layers])))
+        elif not oecore:
+            logger.error("Openembedded-core not in one of the currently enabled layers:\n{}".format("\n".join([l[0] for l in layers])))
+        else:
+            relpaths_to_oecore = [(l[0], os.path.relpath(l[0], start=oecore)) for l in layers]
+            self._save_conf(args.templatename, targetlayer, oecore, relpaths_to_oecore)
+
+    def register_commands(self, sp):
+        parser_build_conf = self.add_command(sp, 'save-build-conf', self.do_save_build_conf, parserecipes=False)
+        parser_build_conf.add_argument('layerpath',
+            help='The path to the layer where the configuration template should be saved.')
+        parser_build_conf.add_argument('templatename',
+            help='The name of the configuration template.')
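
Putting the registered arguments together, a hypothetical invocation (layer path and template name are made up for illustration), followed by the TEMPLATECONF usage suggested by the plugin's own log message:

    bitbake-layers save-build-conf ../meta-mylayer mytemplate
    TEMPLATECONF=../meta-mylayer/conf/templates/mytemplate . oe-init-build-env build-try-mytemplate
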
diff --git a/poky/meta/lib/bblayers/create.py b/poky/meta/lib/bblayers/create.py
index 7ddb777..0aeb5d5 100644
--- a/poky/meta/lib/bblayers/create.py
+++ b/poky/meta/lib/bblayers/create.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/bblayers/makesetup.py b/poky/meta/lib/bblayers/makesetup.py
new file mode 100644
index 0000000..22f89d8
--- /dev/null
+++ b/poky/meta/lib/bblayers/makesetup.py
@@ -0,0 +1,107 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import logging
+import os
+import stat
+import sys
+import shutil
+
+import bb.utils
+import bb.process
+
+from bblayers.common import LayerPlugin
+
+logger = logging.getLogger('bitbake-layers')
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
+
+import oe.buildcfg
+
+def plugin_init(plugins):
+    return MakeSetupPlugin()
+
+class MakeSetupPlugin(LayerPlugin):
+
+    def _get_repo_path(self, layer_path):
+        repo_path, _ = bb.process.run('git rev-parse --show-toplevel', cwd=layer_path)
+        return repo_path.strip()
+
+    def _get_remotes(self, repo_path):
+        remotes = {}
+        remotes_list,_ = bb.process.run('git remote', cwd=repo_path)
+        for r in remotes_list.split():
+            uri,_ = bb.process.run('git remote get-url {r}'.format(r=r), cwd=repo_path)
+            remotes[r] = {'uri':uri.strip()}
+        return remotes
+
+    def _get_describe(self, repo_path):
+        try:
+            describe,_ = bb.process.run('git describe --tags', cwd=repo_path)
+        except bb.process.ExecutionError:
+            return ""
+        return describe.strip()
+
+    def make_repo_config(self, destdir):
+        """ This is a helper function for the writer plugins that discovers currently confugured layers.
+        The writers do not have to use it, but it can save a bit of work and avoid duplicated code, hence it is
+        available here. """
+        repos = {}
+        layers = oe.buildcfg.get_layer_revisions(self.tinfoil.config_data)
+        try:
+            destdir_repo = self._get_repo_path(destdir)
+        except bb.process.ExecutionError:
+            destdir_repo = None
+
+        for (l_path, l_name, l_branch, l_rev, l_ismodified) in layers:
+            if l_name == 'workspace':
+                continue
+            if l_ismodified:
+                logger.error("Layer {name} in {path} has uncommitted modifications or is not in a git repository.".format(name=l_name,path=l_path))
+                return
+            repo_path = self._get_repo_path(l_path)
+            if repo_path not in repos.keys():
+                repos[repo_path] = {'path':os.path.basename(repo_path),'git-remote':{'rev':l_rev, 'branch':l_branch, 'remotes':self._get_remotes(repo_path), 'describe':self._get_describe(repo_path)}}
+                if repo_path == destdir_repo:
+                    repos[repo_path]['contains_this_file'] = True
+                if not repos[repo_path]['git-remote']['remotes'] and not repos[repo_path].get('contains_this_file'):
+                    logger.error("Layer repository in {path} does not have any remotes configured. Please add at least one with 'git remote add'.".format(path=repo_path))
+                    return
+
+        top_path = os.path.commonpath([os.path.dirname(r) for r in repos.keys()])
+
+        repos_nopaths = {}
+        for r in repos.keys():
+            r_nopath = os.path.basename(r)
+            repos_nopaths[r_nopath] = repos[r]
+            r_relpath = os.path.relpath(r, top_path)
+            repos_nopaths[r_nopath]['path'] = r_relpath
+        return repos_nopaths
+
+    def do_make_setup(self, args):
+        """ Writes out a configuration file and/or a script that replicate the directory structure and revisions of the layers in a current build. """
+        for p in self.plugins:
+            if str(p) == args.writer:
+                p.do_write(self, args)
+
+    def register_commands(self, sp):
+        parser_setup_layers = self.add_command(sp, 'create-layers-setup', self.do_make_setup, parserecipes=False)
+        parser_setup_layers.add_argument('destdir',
+            help='Directory in which to write the output\n(if it is inside one of the layers, that layer becomes a bootstrap repository and thus will be excluded from fetching).')
+        parser_setup_layers.add_argument('--output-prefix', '-o',
+            help='File name prefix for the output files, if the default (setup-layers) is undesirable.')
+
+        self.plugins = []
+
+        for path in (self.tinfoil.config_data.getVar('BBPATH').split(':')):
+            pluginpath = os.path.join(path, 'lib', 'bblayers', 'setupwriters')
+            bb.utils.load_plugins(logger, self.plugins, pluginpath)
+
+        parser_setup_layers.add_argument('--writer', '-w', choices=[str(p) for p in self.plugins], help="Choose the output format (defaults to oe-setup-layers).\n\nCurrently supported options are:\noe-setup-layers - a self-contained python script and a json config for it.\n\n", default="oe-setup-layers")
+
+        for plugin in self.plugins:
+            if hasattr(plugin, 'register_arguments'):
+                plugin.register_arguments(parser_setup_layers)
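
A minimal, hypothetical use of the new subcommand with the default oe-setup-layers writer (the destination path is made up); it writes setup-layers.json and, by default, a copy of the setup-layers script into the destination directory:

    bitbake-layers create-layers-setup /path/to/destdir
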
diff --git a/poky/meta/lib/bblayers/setupwriters/oe-setup-layers.py b/poky/meta/lib/bblayers/setupwriters/oe-setup-layers.py
new file mode 100644
index 0000000..f6a484b
--- /dev/null
+++ b/poky/meta/lib/bblayers/setupwriters/oe-setup-layers.py
@@ -0,0 +1,50 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import logging
+import os
+import json
+import stat
+
+logger = logging.getLogger('bitbake-layers')
+
+def plugin_init(plugins):
+    return OeSetupLayersWriter()
+
+class OeSetupLayersWriter():
+
+    def __str__(self):
+        return "oe-setup-layers"
+
+    def _write_python(self, input, output):
+        with open(input) as f:
+            script = f.read()
+        with open(output, 'w') as f:
+            f.write(script)
+        st = os.stat(output)
+        os.chmod(output, st.st_mode | stat.S_IEXEC | stat.S_IXGRP | stat.S_IXOTH)
+
+    def _write_json(self, repos, output):
+        with open(output, 'w') as f:
+            json.dump(repos, f, sort_keys=True, indent=4)
+
+    def do_write(self, parent, args):
+        """ Writes out a python script and a json config that replicate the directory structure and revisions of the layers in a current build. """
+        repos = parent.make_repo_config(args.destdir)
+        json = {"version":"1.0","sources":repos}
+        if not repos:
+            raise Exception("Could not determine layer sources")
+        output = args.output_prefix or "setup-layers"
+        output = os.path.join(os.path.abspath(args.destdir),output)
+        self._write_json(json, output + ".json")
+        logger.info('Created {}.json'.format(output))
+        if not args.json_only:
+            self._write_python(os.path.join(os.path.dirname(__file__),'../../../../scripts/oe-setup-layers'), output)
+        logger.info('Created {}'.format(output))
+
+    def register_arguments(self, parser):
+        parser.add_argument('--json-only', action='store_true',
+            help='When using the oe-setup-layers writer, write only the layer configuration in json format. Otherwise, a copy of scripts/oe-setup-layers (from oe-core or poky) is also provided; this is a self-contained python script that fetches all the needed layers and sets them to the correct revisions using the data from the json.')
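
With the writer-specific option registered above, only the JSON configuration is produced, for example (path again hypothetical):

    bitbake-layers create-layers-setup --json-only /path/to/destdir
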
diff --git a/poky/meta/lib/buildstats.py b/poky/meta/lib/buildstats.py
index 99a8303..1ffe679 100644
--- a/poky/meta/lib/buildstats.py
+++ b/poky/meta/lib/buildstats.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Implements system state sampling. Called by buildstats.bbclass.
diff --git a/poky/meta/lib/oe/__init__.py b/poky/meta/lib/oe/__init__.py
index 4e7c09d..92f002d 100644
--- a/poky/meta/lib/oe/__init__.py
+++ b/poky/meta/lib/oe/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/cachedpath.py b/poky/meta/lib/oe/cachedpath.py
index 254257a..0138b79 100644
--- a/poky/meta/lib/oe/cachedpath.py
+++ b/poky/meta/lib/oe/cachedpath.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Based on standard python library functions but avoid
diff --git a/poky/meta/lib/oe/classextend.py b/poky/meta/lib/oe/classextend.py
index e08d788..2013b29 100644
--- a/poky/meta/lib/oe/classextend.py
+++ b/poky/meta/lib/oe/classextend.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/classutils.py b/poky/meta/lib/oe/classutils.py
index 08bb66b..ec3f6ad 100644
--- a/poky/meta/lib/oe/classutils.py
+++ b/poky/meta/lib/oe/classutils.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/copy_buildsystem.py b/poky/meta/lib/oe/copy_buildsystem.py
index 79642fd..a0d8290 100644
--- a/poky/meta/lib/oe/copy_buildsystem.py
+++ b/poky/meta/lib/oe/copy_buildsystem.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # This class should provide easy access to the different aspects of the
diff --git a/poky/meta/lib/oe/cve_check.py b/poky/meta/lib/oe/cve_check.py
index aa06497..4f1d80f 100644
--- a/poky/meta/lib/oe/cve_check.py
+++ b/poky/meta/lib/oe/cve_check.py
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 import collections
 import re
 import itertools
@@ -143,7 +149,7 @@
         else:
             vendor = "*"
 
-        cpe_id = f'cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*'
+        cpe_id = 'cpe:2.3:a:{}:{}:{}:*:*:*:*:*:*:*'.format(vendor, product, version)
         cpe_ids.append(cpe_id)
 
     return cpe_ids
diff --git a/poky/meta/lib/oe/data.py b/poky/meta/lib/oe/data.py
index 602130a..37121cf 100644
--- a/poky/meta/lib/oe/data.py
+++ b/poky/meta/lib/oe/data.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/distro_check.py b/poky/meta/lib/oe/distro_check.py
index 4b2a9be..3494520 100644
--- a/poky/meta/lib/oe/distro_check.py
+++ b/poky/meta/lib/oe/distro_check.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/elf.py b/poky/meta/lib/oe/elf.py
index 46c884a..fb07995 100644
--- a/poky/meta/lib/oe/elf.py
+++ b/poky/meta/lib/oe/elf.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/gpg_sign.py b/poky/meta/lib/oe/gpg_sign.py
index aa9bb49..613dab8 100644
--- a/poky/meta/lib/oe/gpg_sign.py
+++ b/poky/meta/lib/oe/gpg_sign.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/license.py b/poky/meta/lib/oe/license.py
index 99cfa5f..d9c8d94 100644
--- a/poky/meta/lib/oe/license.py
+++ b/poky/meta/lib/oe/license.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 """Code for parsing OpenEmbedded license strings"""
diff --git a/poky/meta/lib/oe/lsb.py b/poky/meta/lib/oe/lsb.py
index 43e4638..3ec03e5 100644
--- a/poky/meta/lib/oe/lsb.py
+++ b/poky/meta/lib/oe/lsb.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/maketype.py b/poky/meta/lib/oe/maketype.py
index d36082c..7a83bdf 100644
--- a/poky/meta/lib/oe/maketype.py
+++ b/poky/meta/lib/oe/maketype.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 """OpenEmbedded variable typing support
diff --git a/poky/meta/lib/oe/manifest.py b/poky/meta/lib/oe/manifest.py
index 1a058dc..61f18ad 100644
--- a/poky/meta/lib/oe/manifest.py
+++ b/poky/meta/lib/oe/manifest.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/npm_registry.py b/poky/meta/lib/oe/npm_registry.py
index 96c0aff..db581e2 100644
--- a/poky/meta/lib/oe/npm_registry.py
+++ b/poky/meta/lib/oe/npm_registry.py
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 import bb
 import json
 import subprocess
diff --git a/poky/meta/lib/oe/overlayfs.py b/poky/meta/lib/oe/overlayfs.py
index b5d5e88..8d7a047 100644
--- a/poky/meta/lib/oe/overlayfs.py
+++ b/poky/meta/lib/oe/overlayfs.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # This file contains common functions for overlayfs and its QA check
diff --git a/poky/meta/lib/oe/package.py b/poky/meta/lib/oe/package.py
index 7d387ee..4aa40d7 100644
--- a/poky/meta/lib/oe/package.py
+++ b/poky/meta/lib/oe/package.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/__init__.py b/poky/meta/lib/oe/package_manager/__init__.py
index d3b4570..a6bf2fe 100644
--- a/poky/meta/lib/oe/package_manager/__init__.py
+++ b/poky/meta/lib/oe/package_manager/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/deb/__init__.py b/poky/meta/lib/oe/package_manager/deb/__init__.py
index b96ea0b..c672454 100644
--- a/poky/meta/lib/oe/package_manager/deb/__init__.py
+++ b/poky/meta/lib/oe/package_manager/deb/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/deb/manifest.py b/poky/meta/lib/oe/package_manager/deb/manifest.py
index d8eab24..72983ba 100644
--- a/poky/meta/lib/oe/package_manager/deb/manifest.py
+++ b/poky/meta/lib/oe/package_manager/deb/manifest.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/deb/rootfs.py b/poky/meta/lib/oe/package_manager/deb/rootfs.py
index 8fbaca1..1e25b64 100644
--- a/poky/meta/lib/oe/package_manager/deb/rootfs.py
+++ b/poky/meta/lib/oe/package_manager/deb/rootfs.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/deb/sdk.py b/poky/meta/lib/oe/package_manager/deb/sdk.py
index f4b0b65..653e42a 100644
--- a/poky/meta/lib/oe/package_manager/deb/sdk.py
+++ b/poky/meta/lib/oe/package_manager/deb/sdk.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/ipk/__init__.py b/poky/meta/lib/oe/package_manager/ipk/__init__.py
index 7cbea0f..caca522 100644
--- a/poky/meta/lib/oe/package_manager/ipk/__init__.py
+++ b/poky/meta/lib/oe/package_manager/ipk/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/ipk/manifest.py b/poky/meta/lib/oe/package_manager/ipk/manifest.py
index ae451c5..469e14c 100644
--- a/poky/meta/lib/oe/package_manager/ipk/manifest.py
+++ b/poky/meta/lib/oe/package_manager/ipk/manifest.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/ipk/rootfs.py b/poky/meta/lib/oe/package_manager/ipk/rootfs.py
index 10a8319..1f74f7e 100644
--- a/poky/meta/lib/oe/package_manager/ipk/rootfs.py
+++ b/poky/meta/lib/oe/package_manager/ipk/rootfs.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/ipk/sdk.py b/poky/meta/lib/oe/package_manager/ipk/sdk.py
index e2ca415..6a1f167 100644
--- a/poky/meta/lib/oe/package_manager/ipk/sdk.py
+++ b/poky/meta/lib/oe/package_manager/ipk/sdk.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/rpm/__init__.py b/poky/meta/lib/oe/package_manager/rpm/__init__.py
index d97dab3..18ec5c8 100644
--- a/poky/meta/lib/oe/package_manager/rpm/__init__.py
+++ b/poky/meta/lib/oe/package_manager/rpm/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/rpm/manifest.py b/poky/meta/lib/oe/package_manager/rpm/manifest.py
index e6604b3..6ee7c32 100644
--- a/poky/meta/lib/oe/package_manager/rpm/manifest.py
+++ b/poky/meta/lib/oe/package_manager/rpm/manifest.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/rpm/rootfs.py b/poky/meta/lib/oe/package_manager/rpm/rootfs.py
index 00d07cd..d4c415f 100644
--- a/poky/meta/lib/oe/package_manager/rpm/rootfs.py
+++ b/poky/meta/lib/oe/package_manager/rpm/rootfs.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/package_manager/rpm/sdk.py b/poky/meta/lib/oe/package_manager/rpm/sdk.py
index c5f2324..0726a18 100644
--- a/poky/meta/lib/oe/package_manager/rpm/sdk.py
+++ b/poky/meta/lib/oe/package_manager/rpm/sdk.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/packagedata.py b/poky/meta/lib/oe/packagedata.py
index 212f048..b2ed8b5 100644
--- a/poky/meta/lib/oe/packagedata.py
+++ b/poky/meta/lib/oe/packagedata.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/packagegroup.py b/poky/meta/lib/oe/packagegroup.py
index 8fcaecd..7b75947 100644
--- a/poky/meta/lib/oe/packagegroup.py
+++ b/poky/meta/lib/oe/packagegroup.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/patch.py b/poky/meta/lib/oe/patch.py
index 4ec9cae..b2dc8d0 100644
--- a/poky/meta/lib/oe/patch.py
+++ b/poky/meta/lib/oe/patch.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/path.py b/poky/meta/lib/oe/path.py
index c8d8ad0..0dc8f17 100644
--- a/poky/meta/lib/oe/path.py
+++ b/poky/meta/lib/oe/path.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/prservice.py b/poky/meta/lib/oe/prservice.py
index 339f7ae..2f2a0c1 100644
--- a/poky/meta/lib/oe/prservice.py
+++ b/poky/meta/lib/oe/prservice.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/qa.py b/poky/meta/lib/oe/qa.py
index 89acd3e..b4cbc50 100644
--- a/poky/meta/lib/oe/qa.py
+++ b/poky/meta/lib/oe/qa.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/reproducible.py b/poky/meta/lib/oe/reproducible.py
index 2e815df..04a1810 100644
--- a/poky/meta/lib/oe/reproducible.py
+++ b/poky/meta/lib/oe/reproducible.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 import os
diff --git a/poky/meta/lib/oe/rootfs.py b/poky/meta/lib/oe/rootfs.py
index 9e6b411..0b9911e 100644
--- a/poky/meta/lib/oe/rootfs.py
+++ b/poky/meta/lib/oe/rootfs.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 from abc import ABCMeta, abstractmethod
diff --git a/poky/meta/lib/oe/rust.py b/poky/meta/lib/oe/rust.py
index ec70b34..1dc9cf1 100644
--- a/poky/meta/lib/oe/rust.py
+++ b/poky/meta/lib/oe/rust.py
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 # Handle mismatches between `uname -m`-style output and Rust's arch names
 def arch_to_rust_arch(arch):
     if arch == "ppc64le":
diff --git a/poky/meta/lib/oe/sbom.py b/poky/meta/lib/oe/sbom.py
index 52bf514..bbf466b 100644
--- a/poky/meta/lib/oe/sbom.py
+++ b/poky/meta/lib/oe/sbom.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/sdk.py b/poky/meta/lib/oe/sdk.py
index 2734766..81fcf15 100644
--- a/poky/meta/lib/oe/sdk.py
+++ b/poky/meta/lib/oe/sdk.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/spdx.py b/poky/meta/lib/oe/spdx.py
index 6d56ed9..c74ea68 100644
--- a/poky/meta/lib/oe/spdx.py
+++ b/poky/meta/lib/oe/spdx.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/sstatesig.py b/poky/meta/lib/oe/sstatesig.py
index de65244..fad10af 100644
--- a/poky/meta/lib/oe/sstatesig.py
+++ b/poky/meta/lib/oe/sstatesig.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 import bb.siggen
diff --git a/poky/meta/lib/oe/terminal.py b/poky/meta/lib/oe/terminal.py
index de8dceb..71ffc87 100644
--- a/poky/meta/lib/oe/terminal.py
+++ b/poky/meta/lib/oe/terminal.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 import logging
diff --git a/poky/meta/lib/oe/types.py b/poky/meta/lib/oe/types.py
index bbbabafb..b929afb 100644
--- a/poky/meta/lib/oe/types.py
+++ b/poky/meta/lib/oe/types.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oe/useradd.py b/poky/meta/lib/oe/useradd.py
index 3caa3f8..54aa86f 100644
--- a/poky/meta/lib/oe/useradd.py
+++ b/poky/meta/lib/oe/useradd.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 import argparse
diff --git a/poky/meta/lib/oe/utils.py b/poky/meta/lib/oe/utils.py
index 1ee947d..69ca898 100644
--- a/poky/meta/lib/oe/utils.py
+++ b/poky/meta/lib/oe/utils.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oeqa/controllers/__init__.py b/poky/meta/lib/oeqa/controllers/__init__.py
index cc3836c..0fc905b 100644
--- a/poky/meta/lib/oeqa/controllers/__init__.py
+++ b/poky/meta/lib/oeqa/controllers/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Enable other layers to have modules in the same named directory
diff --git a/poky/meta/lib/oeqa/controllers/testtargetloader.py b/poky/meta/lib/oeqa/controllers/testtargetloader.py
index 23101c7..209ff70 100644
--- a/poky/meta/lib/oeqa/controllers/testtargetloader.py
+++ b/poky/meta/lib/oeqa/controllers/testtargetloader.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/meta/lib/oeqa/core/utils/concurrencytest.py b/poky/meta/lib/oeqa/core/utils/concurrencytest.py
index 161a2f6..383479c 100644
--- a/poky/meta/lib/oeqa/core/utils/concurrencytest.py
+++ b/poky/meta/lib/oeqa/core/utils/concurrencytest.py
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-or-later
 #
 # Modified for use in OE by Richard Purdie, 2018
diff --git a/poky/meta/lib/oeqa/runtime/cases/_qemutiny.py b/poky/meta/lib/oeqa/runtime/cases/_qemutiny.py
index 6886e36..14ff8b9 100644
--- a/poky/meta/lib/oeqa/runtime/cases/_qemutiny.py
+++ b/poky/meta/lib/oeqa/runtime/cases/_qemutiny.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/apt.py b/poky/meta/lib/oeqa/runtime/cases/apt.py
index 574a34f..4e09374 100644
--- a/poky/meta/lib/oeqa/runtime/cases/apt.py
+++ b/poky/meta/lib/oeqa/runtime/cases/apt.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/boot.py b/poky/meta/lib/oeqa/runtime/cases/boot.py
index e1ad88a..dcee331 100644
--- a/poky/meta/lib/oeqa/runtime/cases/boot.py
+++ b/poky/meta/lib/oeqa/runtime/cases/boot.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/buildcpio.py b/poky/meta/lib/oeqa/runtime/cases/buildcpio.py
index e29bf16..bd3b46d 100644
--- a/poky/meta/lib/oeqa/runtime/cases/buildcpio.py
+++ b/poky/meta/lib/oeqa/runtime/cases/buildcpio.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/buildgalculator.py b/poky/meta/lib/oeqa/runtime/cases/buildgalculator.py
index e5cc3e2..2cfb324 100644
--- a/poky/meta/lib/oeqa/runtime/cases/buildgalculator.py
+++ b/poky/meta/lib/oeqa/runtime/cases/buildgalculator.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/buildlzip.py b/poky/meta/lib/oeqa/runtime/cases/buildlzip.py
index bc70b41..44f4f1b 100644
--- a/poky/meta/lib/oeqa/runtime/cases/buildlzip.py
+++ b/poky/meta/lib/oeqa/runtime/cases/buildlzip.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/connman.py b/poky/meta/lib/oeqa/runtime/cases/connman.py
index f0d15fa..a488752 100644
--- a/poky/meta/lib/oeqa/runtime/cases/connman.py
+++ b/poky/meta/lib/oeqa/runtime/cases/connman.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/date.py b/poky/meta/lib/oeqa/runtime/cases/date.py
index bd65374..a2523de 100644
--- a/poky/meta/lib/oeqa/runtime/cases/date.py
+++ b/poky/meta/lib/oeqa/runtime/cases/date.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/df.py b/poky/meta/lib/oeqa/runtime/cases/df.py
index bb155c9..43e0ebf 100644
--- a/poky/meta/lib/oeqa/runtime/cases/df.py
+++ b/poky/meta/lib/oeqa/runtime/cases/df.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/dnf.py b/poky/meta/lib/oeqa/runtime/cases/dnf.py
index f40c630..a8e23e5 100644
--- a/poky/meta/lib/oeqa/runtime/cases/dnf.py
+++ b/poky/meta/lib/oeqa/runtime/cases/dnf.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/ethernet_ip_connman.py b/poky/meta/lib/oeqa/runtime/cases/ethernet_ip_connman.py
index b93ee29..eac8f2d 100644
--- a/poky/meta/lib/oeqa/runtime/cases/ethernet_ip_connman.py
+++ b/poky/meta/lib/oeqa/runtime/cases/ethernet_ip_connman.py
@@ -1,3 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.core.decorator.data import skipIfQemu
diff --git a/poky/meta/lib/oeqa/runtime/cases/gcc.py b/poky/meta/lib/oeqa/runtime/cases/gcc.py
index 1b6e431..17b1483 100644
--- a/poky/meta/lib/oeqa/runtime/cases/gcc.py
+++ b/poky/meta/lib/oeqa/runtime/cases/gcc.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/gi.py b/poky/meta/lib/oeqa/runtime/cases/gi.py
index 42bd100..78c7ddd 100644
--- a/poky/meta/lib/oeqa/runtime/cases/gi.py
+++ b/poky/meta/lib/oeqa/runtime/cases/gi.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/go.py b/poky/meta/lib/oeqa/runtime/cases/go.py
index 89ba2c3..7514d10 100644
--- a/poky/meta/lib/oeqa/runtime/cases/go.py
+++ b/poky/meta/lib/oeqa/runtime/cases/go.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/gstreamer.py b/poky/meta/lib/oeqa/runtime/cases/gstreamer.py
index f735f82..2295769 100644
--- a/poky/meta/lib/oeqa/runtime/cases/gstreamer.py
+++ b/poky/meta/lib/oeqa/runtime/cases/gstreamer.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/kernelmodule.py b/poky/meta/lib/oeqa/runtime/cases/kernelmodule.py
index 47fd2f8..9c42fcc 100644
--- a/poky/meta/lib/oeqa/runtime/cases/kernelmodule.py
+++ b/poky/meta/lib/oeqa/runtime/cases/kernelmodule.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/ksample.py b/poky/meta/lib/oeqa/runtime/cases/ksample.py
index c69e3fe..b684876 100644
--- a/poky/meta/lib/oeqa/runtime/cases/ksample.py
+++ b/poky/meta/lib/oeqa/runtime/cases/ksample.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/ldd.py b/poky/meta/lib/oeqa/runtime/cases/ldd.py
index 9c2caa8..f6841c6 100644
--- a/poky/meta/lib/oeqa/runtime/cases/ldd.py
+++ b/poky/meta/lib/oeqa/runtime/cases/ldd.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/logrotate.py b/poky/meta/lib/oeqa/runtime/cases/logrotate.py
index 2bff08f..6ad980c 100644
--- a/poky/meta/lib/oeqa/runtime/cases/logrotate.py
+++ b/poky/meta/lib/oeqa/runtime/cases/logrotate.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/multilib.py b/poky/meta/lib/oeqa/runtime/cases/multilib.py
index 0d1b9ae..68556e4 100644
--- a/poky/meta/lib/oeqa/runtime/cases/multilib.py
+++ b/poky/meta/lib/oeqa/runtime/cases/multilib.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/oe_syslog.py b/poky/meta/lib/oeqa/runtime/cases/oe_syslog.py
index 150b70d..cad0c88 100644
--- a/poky/meta/lib/oeqa/runtime/cases/oe_syslog.py
+++ b/poky/meta/lib/oeqa/runtime/cases/oe_syslog.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/opkg.py b/poky/meta/lib/oeqa/runtime/cases/opkg.py
index 9cfee1c..a29c93e 100644
--- a/poky/meta/lib/oeqa/runtime/cases/opkg.py
+++ b/poky/meta/lib/oeqa/runtime/cases/opkg.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/pam.py b/poky/meta/lib/oeqa/runtime/cases/pam.py
index a482ded..b3e8b56 100644
--- a/poky/meta/lib/oeqa/runtime/cases/pam.py
+++ b/poky/meta/lib/oeqa/runtime/cases/pam.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/parselogs.py b/poky/meta/lib/oeqa/runtime/cases/parselogs.py
index 1f9365f..e16c230 100644
--- a/poky/meta/lib/oeqa/runtime/cases/parselogs.py
+++ b/poky/meta/lib/oeqa/runtime/cases/parselogs.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
@@ -64,6 +66,7 @@
     "[pulseaudio] authkey.c: Failed to load authentication key",
     "was skipped because of a failed condition check",
     "was skipped because all trigger condition checks failed",
+    "xf86OpenConsole: Switching VT failed",
     ]
 
 video_related = [
@@ -140,6 +143,7 @@
         'Failed to initialize \'/amba/timer@101e3000\': -22',
         'jitterentropy: Initialization failed with host not compliant with requirements: 2',
         'clcd-pl11x: probe of 10120000.display failed with error -2',
+        'arm-charlcd 10008000.lcd: error -ENXIO: IRQ index 0 not found'
         ] + common_errors,
     'qemuarm64' : [
         'Fatal server error:',
diff --git a/poky/meta/lib/oeqa/runtime/cases/perl.py b/poky/meta/lib/oeqa/runtime/cases/perl.py
index 2c6b3b7..f11b300 100644
--- a/poky/meta/lib/oeqa/runtime/cases/perl.py
+++ b/poky/meta/lib/oeqa/runtime/cases/perl.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/ping.py b/poky/meta/lib/oeqa/runtime/cases/ping.py
index 498f80d..967b441 100644
--- a/poky/meta/lib/oeqa/runtime/cases/ping.py
+++ b/poky/meta/lib/oeqa/runtime/cases/ping.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/ptest.py b/poky/meta/lib/oeqa/runtime/cases/ptest.py
index 00742da..3ef9022 100644
--- a/poky/meta/lib/oeqa/runtime/cases/ptest.py
+++ b/poky/meta/lib/oeqa/runtime/cases/ptest.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/python.py b/poky/meta/lib/oeqa/runtime/cases/python.py
index ec54f1e..5d6d133 100644
--- a/poky/meta/lib/oeqa/runtime/cases/python.py
+++ b/poky/meta/lib/oeqa/runtime/cases/python.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/rpm.py b/poky/meta/lib/oeqa/runtime/cases/rpm.py
index a433911..e3cd818 100644
--- a/poky/meta/lib/oeqa/runtime/cases/rpm.py
+++ b/poky/meta/lib/oeqa/runtime/cases/rpm.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/rt.py b/poky/meta/lib/oeqa/runtime/cases/rt.py
index 849ac19..15ab4db 100644
--- a/poky/meta/lib/oeqa/runtime/cases/rt.py
+++ b/poky/meta/lib/oeqa/runtime/cases/rt.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/rtc.py b/poky/meta/lib/oeqa/runtime/cases/rtc.py
index c4e6681..b2159b1 100644
--- a/poky/meta/lib/oeqa/runtime/cases/rtc.py
+++ b/poky/meta/lib/oeqa/runtime/cases/rtc.py
@@ -1,3 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.runtime.decorator.package import OEHasPackage
diff --git a/poky/meta/lib/oeqa/runtime/cases/runlevel.py b/poky/meta/lib/oeqa/runtime/cases/runlevel.py
index 3a4df8a..6734b0f 100644
--- a/poky/meta/lib/oeqa/runtime/cases/runlevel.py
+++ b/poky/meta/lib/oeqa/runtime/cases/runlevel.py
@@ -1,3 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/rust.py b/poky/meta/lib/oeqa/runtime/cases/rust.py
index b3d6cf7..55b280d 100644
--- a/poky/meta/lib/oeqa/runtime/cases/rust.py
+++ b/poky/meta/lib/oeqa/runtime/cases/rust.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/scons.py b/poky/meta/lib/oeqa/runtime/cases/scons.py
index 3c7c7f7..4a8d4d4 100644
--- a/poky/meta/lib/oeqa/runtime/cases/scons.py
+++ b/poky/meta/lib/oeqa/runtime/cases/scons.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/scp.py b/poky/meta/lib/oeqa/runtime/cases/scp.py
index f2bbc94..ee97b8e 100644
--- a/poky/meta/lib/oeqa/runtime/cases/scp.py
+++ b/poky/meta/lib/oeqa/runtime/cases/scp.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/skeletoninit.py b/poky/meta/lib/oeqa/runtime/cases/skeletoninit.py
index a12f1e9..75951be 100644
--- a/poky/meta/lib/oeqa/runtime/cases/skeletoninit.py
+++ b/poky/meta/lib/oeqa/runtime/cases/skeletoninit.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/ssh.py b/poky/meta/lib/oeqa/runtime/cases/ssh.py
index e31224b..13aac54 100644
--- a/poky/meta/lib/oeqa/runtime/cases/ssh.py
+++ b/poky/meta/lib/oeqa/runtime/cases/ssh.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/stap.py b/poky/meta/lib/oeqa/runtime/cases/stap.py
index 480eaab..3be4162 100644
--- a/poky/meta/lib/oeqa/runtime/cases/stap.py
+++ b/poky/meta/lib/oeqa/runtime/cases/stap.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/storage.py b/poky/meta/lib/oeqa/runtime/cases/storage.py
index 972ef82..b05622f 100644
--- a/poky/meta/lib/oeqa/runtime/cases/storage.py
+++ b/poky/meta/lib/oeqa/runtime/cases/storage.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/suspend.py b/poky/meta/lib/oeqa/runtime/cases/suspend.py
index 0382d48..a625cc5 100644
--- a/poky/meta/lib/oeqa/runtime/cases/suspend.py
+++ b/poky/meta/lib/oeqa/runtime/cases/suspend.py
@@ -1,3 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.core.decorator.data import skipIfQemu
diff --git a/poky/meta/lib/oeqa/runtime/cases/systemd.py b/poky/meta/lib/oeqa/runtime/cases/systemd.py
index 7c44abe..720b4b5 100644
--- a/poky/meta/lib/oeqa/runtime/cases/systemd.py
+++ b/poky/meta/lib/oeqa/runtime/cases/systemd.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/terminal.py b/poky/meta/lib/oeqa/runtime/cases/terminal.py
index 8fcca99..96ba3c3 100644
--- a/poky/meta/lib/oeqa/runtime/cases/terminal.py
+++ b/poky/meta/lib/oeqa/runtime/cases/terminal.py
@@ -1,3 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.runtime.decorator.package import OEHasPackage
diff --git a/poky/meta/lib/oeqa/runtime/cases/usb_hid.py b/poky/meta/lib/oeqa/runtime/cases/usb_hid.py
index 8743174..6f23d2f 100644
--- a/poky/meta/lib/oeqa/runtime/cases/usb_hid.py
+++ b/poky/meta/lib/oeqa/runtime/cases/usb_hid.py
@@ -1,3 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.core.decorator.data import skipIfQemu
diff --git a/poky/meta/lib/oeqa/runtime/cases/weston.py b/poky/meta/lib/oeqa/runtime/cases/weston.py
index 1fd471e..ee4d336 100644
--- a/poky/meta/lib/oeqa/runtime/cases/weston.py
+++ b/poky/meta/lib/oeqa/runtime/cases/weston.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/x32lib.py b/poky/meta/lib/oeqa/runtime/cases/x32lib.py
index f419c8f..014da4b 100644
--- a/poky/meta/lib/oeqa/runtime/cases/x32lib.py
+++ b/poky/meta/lib/oeqa/runtime/cases/x32lib.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/runtime/cases/xorg.py b/poky/meta/lib/oeqa/runtime/cases/xorg.py
index d684558..09afb1e 100644
--- a/poky/meta/lib/oeqa/runtime/cases/xorg.py
+++ b/poky/meta/lib/oeqa/runtime/cases/xorg.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/buildtools-cases/build.py b/poky/meta/lib/oeqa/sdk/buildtools-cases/build.py
index aee2e5a..c85c324 100644
--- a/poky/meta/lib/oeqa/sdk/buildtools-cases/build.py
+++ b/poky/meta/lib/oeqa/sdk/buildtools-cases/build.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/buildtools-cases/gcc.py b/poky/meta/lib/oeqa/sdk/buildtools-cases/gcc.py
index 36ba15b..a62c4d0 100644
--- a/poky/meta/lib/oeqa/sdk/buildtools-cases/gcc.py
+++ b/poky/meta/lib/oeqa/sdk/buildtools-cases/gcc.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/buildtools-cases/https.py b/poky/meta/lib/oeqa/sdk/buildtools-cases/https.py
index 35e549e..4525e3d 100644
--- a/poky/meta/lib/oeqa/sdk/buildtools-cases/https.py
+++ b/poky/meta/lib/oeqa/sdk/buildtools-cases/https.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/buildtools-cases/sanity.py b/poky/meta/lib/oeqa/sdk/buildtools-cases/sanity.py
index 64baaa8..b9dfa39 100644
--- a/poky/meta/lib/oeqa/sdk/buildtools-cases/sanity.py
+++ b/poky/meta/lib/oeqa/sdk/buildtools-cases/sanity.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/buildtools-docs-cases/build.py b/poky/meta/lib/oeqa/sdk/buildtools-docs-cases/build.py
index 5b0eca0..6e3ee94 100644
--- a/poky/meta/lib/oeqa/sdk/buildtools-docs-cases/build.py
+++ b/poky/meta/lib/oeqa/sdk/buildtools-docs-cases/build.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/cases/assimp.py b/poky/meta/lib/oeqa/sdk/cases/assimp.py
index f166758..aa6541c 100644
--- a/poky/meta/lib/oeqa/sdk/cases/assimp.py
+++ b/poky/meta/lib/oeqa/sdk/cases/assimp.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/cases/buildcpio.py b/poky/meta/lib/oeqa/sdk/cases/buildcpio.py
index e7fc211..c42c670 100644
--- a/poky/meta/lib/oeqa/sdk/cases/buildcpio.py
+++ b/poky/meta/lib/oeqa/sdk/cases/buildcpio.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/cases/buildepoxy.py b/poky/meta/lib/oeqa/sdk/cases/buildepoxy.py
index ad08b77..ee515be 100644
--- a/poky/meta/lib/oeqa/sdk/cases/buildepoxy.py
+++ b/poky/meta/lib/oeqa/sdk/cases/buildepoxy.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/cases/buildgalculator.py b/poky/meta/lib/oeqa/sdk/cases/buildgalculator.py
index 58ade92..178f074 100644
--- a/poky/meta/lib/oeqa/sdk/cases/buildgalculator.py
+++ b/poky/meta/lib/oeqa/sdk/cases/buildgalculator.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/cases/buildlzip.py b/poky/meta/lib/oeqa/sdk/cases/buildlzip.py
index 49ae756..b4b7d85 100644
--- a/poky/meta/lib/oeqa/sdk/cases/buildlzip.py
+++ b/poky/meta/lib/oeqa/sdk/cases/buildlzip.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/cases/gcc.py b/poky/meta/lib/oeqa/sdk/cases/gcc.py
index eb08ead..fc28b9c 100644
--- a/poky/meta/lib/oeqa/sdk/cases/gcc.py
+++ b/poky/meta/lib/oeqa/sdk/cases/gcc.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/cases/perl.py b/poky/meta/lib/oeqa/sdk/cases/perl.py
index 14d76d8..8eab444 100644
--- a/poky/meta/lib/oeqa/sdk/cases/perl.py
+++ b/poky/meta/lib/oeqa/sdk/cases/perl.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/cases/python.py b/poky/meta/lib/oeqa/sdk/cases/python.py
index d43354c..5ea992b 100644
--- a/poky/meta/lib/oeqa/sdk/cases/python.py
+++ b/poky/meta/lib/oeqa/sdk/cases/python.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/cases/rust.py b/poky/meta/lib/oeqa/sdk/cases/rust.py
index 1075d37..31036f0 100644
--- a/poky/meta/lib/oeqa/sdk/cases/rust.py
+++ b/poky/meta/lib/oeqa/sdk/cases/rust.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/sdk/files/rust/hello/build.rs b/poky/meta/lib/oeqa/sdk/files/rust/hello/build.rs
new file mode 100644
index 0000000..b1a533d
--- /dev/null
+++ b/poky/meta/lib/oeqa/sdk/files/rust/hello/build.rs
@@ -0,0 +1,3 @@
+/* This is the simplest possible build script; its only purpose is to invoke
+   the host compiler during the build process. */
+fn main() {}
diff --git a/poky/meta/lib/oeqa/sdk/testmetaidesupport.py b/poky/meta/lib/oeqa/sdk/testmetaidesupport.py
index 2ff76fd..00ef30e 100644
--- a/poky/meta/lib/oeqa/sdk/testmetaidesupport.py
+++ b/poky/meta/lib/oeqa/sdk/testmetaidesupport.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/_sstatetests_noauto.py b/poky/meta/lib/oeqa/selftest/cases/_sstatetests_noauto.py
index bff6e77..5f1c8df 100644
--- a/poky/meta/lib/oeqa/selftest/cases/_sstatetests_noauto.py
+++ b/poky/meta/lib/oeqa/selftest/cases/_sstatetests_noauto.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/archiver.py b/poky/meta/lib/oeqa/selftest/cases/archiver.py
index 7519524..ffdea83 100644
--- a/poky/meta/lib/oeqa/selftest/cases/archiver.py
+++ b/poky/meta/lib/oeqa/selftest/cases/archiver.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/bblayers.py b/poky/meta/lib/oeqa/selftest/cases/bblayers.py
index 7d74833..c6bd5a1 100644
--- a/poky/meta/lib/oeqa/selftest/cases/bblayers.py
+++ b/poky/meta/lib/oeqa/selftest/cases/bblayers.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
@@ -6,12 +8,16 @@
 import re
 
 import oeqa.utils.ftools as ftools
-from oeqa.utils.commands import runCmd, get_bb_var, get_bb_vars
+from oeqa.utils.commands import runCmd, get_bb_var, get_bb_vars, bitbake
 
 from oeqa.selftest.case import OESelftestTestCase
 
 class BitbakeLayers(OESelftestTestCase):
 
+    def setUpLocal(self):
+        bitbake("python3-jsonschema-native")
+        bitbake("-c addto_recipe_sysroot python3-jsonschema-native")
+
     def test_bitbakelayers_layerindexshowdepends(self):
         result = runCmd('bitbake-layers layerindex-show-depends meta-poky')
         find_in_contents = re.search("openembedded-core", result.output)
@@ -111,6 +117,11 @@
 
         self.assertEqual(bb_vars['BBFILE_PRIORITY_%s' % layername], str(priority), 'BBFILE_PRIORITY_%s != %d' % (layername, priority))
 
+        result = runCmd('bitbake-layers save-build-conf {} {}'.format(layerpath, "buildconf-1"))
+        for f in ('local.conf.sample', 'bblayers.conf.sample', 'conf-notes.txt'):
+            fullpath = os.path.join(layerpath, "conf", "templates", "buildconf-1", f)
+            self.assertTrue(os.path.exists(fullpath), "Template configuration file {} not found".format(fullpath))
+
     def get_recipe_basename(self, recipe):
         recipe_file = ""
         result = runCmd("bitbake-layers show-recipes -f %s" % recipe)
@@ -121,3 +132,35 @@
 
         self.assertTrue(os.path.isfile(recipe_file), msg = "Can't find recipe file for %s" % recipe)
         return os.path.basename(recipe_file)
+
+    def validate_layersjson(self, json):
+        python = os.path.join(get_bb_var('STAGING_BINDIR', 'python3-jsonschema-native'), 'nativepython3')
+        jsonvalidator = os.path.join(get_bb_var('STAGING_BINDIR', 'python3-jsonschema-native'), 'jsonschema')
+        jsonschema = os.path.join(get_bb_var('COREBASE'), 'meta/files/layers.schema.json')
+        result = runCmd("{} {} -i {} {}".format(python, jsonvalidator, json, jsonschema))
+
+    def test_validate_examplelayersjson(self):
+        json = os.path.join(get_bb_var('COREBASE'), "meta/files/layers.example.json")
+        self.validate_layersjson(json)
+
+    def test_bitbakelayers_setup(self):
+        result = runCmd('bitbake-layers create-layers-setup {}'.format(self.testlayer_path))
+        jsonfile = os.path.join(self.testlayer_path, "setup-layers.json")
+        self.validate_layersjson(jsonfile)
+
+        # The revision-under-test may not necessarily be available on the remote server,
+        # so replace it with a revision that has a yocto-4.0 tag.
+        import json
+        with open(jsonfile) as f:
+            data = json.load(f)
+        for s in data['sources']:
+            data['sources'][s]['git-remote']['rev'] = '00cfdde791a0176c134f31e5a09eff725e75b905'
+        with open(jsonfile, 'w') as f:
+            json.dump(data, f)
+
+        testcheckoutdir = os.path.join(self.builddir, 'test-layer-checkout')
+        result = runCmd('{}/setup-layers --destdir {}'.format(self.testlayer_path, testcheckoutdir))
+        # May not necessarily be named 'poky' or 'openembedded-core'
+        oecoredir = os.listdir(testcheckoutdir)[0]
+        testcheckoutfile = os.path.join(testcheckoutdir, oecoredir, "oe-init-build-env")
+        self.assertTrue(os.path.exists(testcheckoutfile), "File {} not found in test layer checkout".format(testcheckoutfile))
diff --git a/poky/meta/lib/oeqa/selftest/cases/bblogging.py b/poky/meta/lib/oeqa/selftest/cases/bblogging.py
index 317e68b..1534a36 100644
--- a/poky/meta/lib/oeqa/selftest/cases/bblogging.py
+++ b/poky/meta/lib/oeqa/selftest/cases/bblogging.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/bbtests.py b/poky/meta/lib/oeqa/selftest/cases/bbtests.py
index 89267c7..d97bda1 100644
--- a/poky/meta/lib/oeqa/selftest/cases/bbtests.py
+++ b/poky/meta/lib/oeqa/selftest/cases/bbtests.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/binutils.py b/poky/meta/lib/oeqa/selftest/cases/binutils.py
index 3b0b44b..bf6fdca 100644
--- a/poky/meta/lib/oeqa/selftest/cases/binutils.py
+++ b/poky/meta/lib/oeqa/selftest/cases/binutils.py
@@ -1,4 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
+#
 import os
 from oeqa.core.decorator import OETestTag
 from oeqa.core.case import OEPTestResultTestCase
diff --git a/poky/meta/lib/oeqa/selftest/cases/buildhistory.py b/poky/meta/lib/oeqa/selftest/cases/buildhistory.py
index d865da6..2d55994 100644
--- a/poky/meta/lib/oeqa/selftest/cases/buildhistory.py
+++ b/poky/meta/lib/oeqa/selftest/cases/buildhistory.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/buildoptions.py b/poky/meta/lib/oeqa/selftest/cases/buildoptions.py
index ad604d6..ee3e28d 100644
--- a/poky/meta/lib/oeqa/selftest/cases/buildoptions.py
+++ b/poky/meta/lib/oeqa/selftest/cases/buildoptions.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/containerimage.py b/poky/meta/lib/oeqa/selftest/cases/containerimage.py
index e0aea1a..23c0a14 100644
--- a/poky/meta/lib/oeqa/selftest/cases/containerimage.py
+++ b/poky/meta/lib/oeqa/selftest/cases/containerimage.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/cve_check.py b/poky/meta/lib/oeqa/selftest/cases/cve_check.py
index d0b2213..ac47af1 100644
--- a/poky/meta/lib/oeqa/selftest/cases/cve_check.py
+++ b/poky/meta/lib/oeqa/selftest/cases/cve_check.py
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 import json
 import os
 from oeqa.selftest.case import OESelftestTestCase
diff --git a/poky/meta/lib/oeqa/selftest/cases/debuginfod.py b/poky/meta/lib/oeqa/selftest/cases/debuginfod.py
new file mode 100644
index 0000000..01359ec
--- /dev/null
+++ b/poky/meta/lib/oeqa/selftest/cases/debuginfod.py
@@ -0,0 +1,44 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+import os
+import socketserver
+import subprocess
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake, get_bb_var, runqemu
+
+class Debuginfod(OESelftestTestCase):
+    def test_debuginfod(self):
+        self.write_config("""
+DISTRO_FEATURES:append = " debuginfod"
+CORE_IMAGE_EXTRA_INSTALL += "elfutils"
+        """)
+        bitbake("core-image-minimal elfutils-native:do_addto_recipe_sysroot")
+
+        native_sysroot = get_bb_var("RECIPE_SYSROOT_NATIVE", "elfutils-native")
+        cmd = [os.path.join(native_sysroot, "usr", "bin", "debuginfod"), "--verbose", get_bb_var("DEPLOY_DIR")]
+        for format in get_bb_var("PACKAGE_CLASSES").split():
+            if format == "package_deb":
+                cmd.append("--scan-deb-dir")
+            elif format == "package_ipk":
+                cmd.append("--scan-deb-dir")
+            elif format == "package_rpm":
+                cmd.append("--scan-rpm-dir")
+        # Find a free port
+        with socketserver.TCPServer(("localhost", 0), None) as s:
+            port = s.server_address[1]
+            cmd.append("--port=%d" % port)
+
+        try:
+            debuginfod = subprocess.Popen(cmd)
+
+            with runqemu("core-image-minimal", runqemuparams="nographic") as qemu:
+                cmd = "DEBUGINFOD_URLS=http://%s:%d/ debuginfod-find debuginfo /usr/bin/debuginfod" % (qemu.server_ip, port)
+                status, output = qemu.run_serial(cmd)
+                # This should be more comprehensive
+                self.assertIn("/.cache/debuginfod_client/", output)
+        finally:
+            debuginfod.kill()
diff --git a/poky/meta/lib/oeqa/selftest/cases/devtool.py b/poky/meta/lib/oeqa/selftest/cases/devtool.py
index 34fc791..142932e 100644
--- a/poky/meta/lib/oeqa/selftest/cases/devtool.py
+++ b/poky/meta/lib/oeqa/selftest/cases/devtool.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
@@ -246,6 +248,22 @@
         if remaining_removelines:
             self.fail('Expected removed lines not found: %s' % remaining_removelines)
 
+    def _test_devtool_add_git_url(self, git_url, version, pn, resulting_src_uri):
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool add --version %s %s %s' % (version, pn, git_url))
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created')
+        # Check the recipe name is correct
+        recipefile = get_bb_var('FILE', pn)
+        self.assertIn('%s_git.bb' % pn, recipefile, 'Recipe file incorrectly named')
+        self.assertIn(recipefile, result.output)
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(pn, result.output)
+        self.assertIn(recipefile, result.output)
+        checkvars = {}
+        checkvars['SRC_URI'] = resulting_src_uri
+        self._test_recipe_contents(recipefile, checkvars, [])
 
 class DevtoolBase(DevtoolTestCase):
 
@@ -380,6 +398,22 @@
         checkvars['DEPENDS'] = set(['dbus'])
         self._test_recipe_contents(recipefile, checkvars, [])
 
+    def test_devtool_add_git_style1(self):
+        version = 'v3.1.0'
+        pn = 'mbedtls'
+        # this will trigger reformat_git_uri with branch parameter in url
+        git_url = "'git://git@github.com/ARMmbed/mbedtls.git;branch=mbedtls-2.28;protocol=https'"
+        resulting_src_uri = "git://git@github.com/ARMmbed/mbedtls.git;branch=mbedtls-2.28;protocol=https"
+        self._test_devtool_add_git_url(git_url, version, pn, resulting_src_uri)
+
+    def test_devtool_add_git_style2(self):
+        version = 'v3.1.0'
+        pn = 'mbedtls'
+        # this will trigger reformat_git_uri with branch parameter in url
+        git_url = "'git://git@github.com/ARMmbed/mbedtls.git;protocol=https'"
+        resulting_src_uri = "git://git@github.com/ARMmbed/mbedtls.git;protocol=https;branch=master"
+        self._test_devtool_add_git_url(git_url, version, pn, resulting_src_uri)
+
     def test_devtool_add_library(self):
         # Fetch source
         tempdir = tempfile.mkdtemp(prefix='devtoolqa')
@@ -540,7 +574,7 @@
         result = runCmd('devtool status')
         self.assertIn(testrecipe, result.output)
         self.assertIn(srcdir, result.output)
-        # Check recipe
+        # Check recipedevtool add
         recipefile = get_bb_var('FILE', testrecipe)
         self.assertIn('%s_%s.bb' % (testrecipe, testver), recipefile, 'Recipe file incorrectly named')
         checkvars = {}
@@ -1923,7 +1957,6 @@
         self._test_recipe_contents(newrecipefile, checkvars, [])
         # Try again - change just name this time
         result = runCmd('devtool reset -n %s' % newrecipename)
-        shutil.rmtree(newsrctree)
         add_recipe()
         newrecipefile = os.path.join(self.workspacedir, 'recipes', newrecipename, '%s_%s.bb' % (newrecipename, recipever))
         result = runCmd('devtool rename %s %s' % (recipename, newrecipename))
@@ -1936,7 +1969,6 @@
         self._test_recipe_contents(newrecipefile, checkvars, [])
         # Try again - change just version this time
         result = runCmd('devtool reset -n %s' % newrecipename)
-        shutil.rmtree(newsrctree)
         add_recipe()
         newrecipefile = os.path.join(self.workspacedir, 'recipes', recipename, '%s_%s.bb' % (recipename, newrecipever))
         result = runCmd('devtool rename %s -V %s' % (recipename, newrecipever))
diff --git a/poky/meta/lib/oeqa/selftest/cases/distrodata.py b/poky/meta/lib/oeqa/selftest/cases/distrodata.py
index b80d091..b5554a6 100644
--- a/poky/meta/lib/oeqa/selftest/cases/distrodata.py
+++ b/poky/meta/lib/oeqa/selftest/cases/distrodata.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/eSDK.py b/poky/meta/lib/oeqa/selftest/cases/eSDK.py
index 3ea0f66..9f5de2c 100644
--- a/poky/meta/lib/oeqa/selftest/cases/eSDK.py
+++ b/poky/meta/lib/oeqa/selftest/cases/eSDK.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/fetch.py b/poky/meta/lib/oeqa/selftest/cases/fetch.py
index be14272..3d01cf6 100644
--- a/poky/meta/lib/oeqa/selftest/cases/fetch.py
+++ b/poky/meta/lib/oeqa/selftest/cases/fetch.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/fitimage.py b/poky/meta/lib/oeqa/selftest/cases/fitimage.py
index d732a90..14267db 100644
--- a/poky/meta/lib/oeqa/selftest/cases/fitimage.py
+++ b/poky/meta/lib/oeqa/selftest/cases/fitimage.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/gcc.py b/poky/meta/lib/oeqa/selftest/cases/gcc.py
index b9ea03a..6b9022e 100644
--- a/poky/meta/lib/oeqa/selftest/cases/gcc.py
+++ b/poky/meta/lib/oeqa/selftest/cases/gcc.py
@@ -1,4 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
+#
 import os
 from oeqa.core.decorator import OETestTag
 from oeqa.core.case import OEPTestResultTestCase
diff --git a/poky/meta/lib/oeqa/selftest/cases/gdbserver.py b/poky/meta/lib/oeqa/selftest/cases/gdbserver.py
new file mode 100644
index 0000000..3621d9c
--- /dev/null
+++ b/poky/meta/lib/oeqa/selftest/cases/gdbserver.py
@@ -0,0 +1,66 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+import os
+import time
+import tempfile
+import shutil
+import concurrent.futures
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake, get_bb_var, runqemu, runCmd
+
+class GdbServerTest(OESelftestTestCase):
+    def test_gdb_server(self):
+        target_arch = self.td["TARGET_ARCH"]
+        target_sys = self.td["TARGET_SYS"]
+        deploy_dir = get_bb_var("DEPLOY_DIR_IMAGE")
+
+        features = """
+IMAGE_GEN_DEBUGFS = "1"
+IMAGE_FSTYPES_DEBUGFS = "tar.bz2"
+CORE_IMAGE_EXTRA_INSTALL = "gdbserver"
+        """
+        self.write_config(features)
+
+        gdb_recipe = "gdb-cross-" + target_arch
+        gdb_binary = target_sys + "-gdb"
+
+        bitbake("core-image-minimal %s:do_addto_recipe_sysroot" % gdb_recipe)
+
+        native_sysroot = get_bb_var("RECIPE_SYSROOT_NATIVE", gdb_recipe)
+        r = runCmd("%s --version" % gdb_binary, native_sysroot=native_sysroot, target_sys=target_sys)
+        self.assertEqual(r.status, 0)
+        self.assertIn("GNU gdb", r.output)
+
+        with tempfile.TemporaryDirectory(prefix="debugfs-") as debugfs:
+            filename = os.path.join(deploy_dir, "core-image-minimal-%s-dbg.tar.bz2" % self.td["MACHINE"])
+            shutil.unpack_archive(filename, debugfs)
+            filename = os.path.join(deploy_dir, "core-image-minimal-%s.tar.bz2" % self.td["MACHINE"])
+            shutil.unpack_archive(filename, debugfs)
+
+            with runqemu("core-image-minimal", runqemuparams="nographic") as qemu:
+                status, output = qemu.run_serial("kmod --help")
+                self.assertIn("modprobe", output)
+
+                with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
+                    def run_gdb():
+                        for _ in range(5):
+                            time.sleep(2)
+                            cmd = "%s --batch -ex 'set sysroot %s' -ex \"target extended-remote %s:9999\" -ex \"info line kmod_help\"" % (gdb_binary, debugfs, qemu.ip)
+                            self.logger.warning("starting gdb %s" % cmd)
+                            r = runCmd(cmd, native_sysroot=native_sysroot, target_sys=target_sys)
+                            self.assertEqual(0, r.status)
+                            line_re = r"Line \d+ of \"/usr/src/debug/kmod/.*/tools/kmod.c\" starts at address 0x[0-9A-Fa-f]+ <kmod_help>"
+                            self.assertRegex(r.output, line_re)
+                            break
+                        else:
+                            self.fail("Timed out connecting to gdb")
+                    future = executor.submit(run_gdb)
+
+                    status, output = qemu.run_serial("gdbserver --once :9999 kmod --help")
+                    self.assertEqual(status, 1)
+                    # The future either returns None, or raises an exception
+                    future.result()
diff --git a/poky/meta/lib/oeqa/selftest/cases/glibc.py b/poky/meta/lib/oeqa/selftest/cases/glibc.py
index 6fc98e9..a446543 100644
--- a/poky/meta/lib/oeqa/selftest/cases/glibc.py
+++ b/poky/meta/lib/oeqa/selftest/cases/glibc.py
@@ -1,4 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
+#
 import os
 import contextlib
 from oeqa.core.decorator import OETestTag
diff --git a/poky/meta/lib/oeqa/selftest/cases/gotoolchain.py b/poky/meta/lib/oeqa/selftest/cases/gotoolchain.py
index 978898b..74c1c48 100644
--- a/poky/meta/lib/oeqa/selftest/cases/gotoolchain.py
+++ b/poky/meta/lib/oeqa/selftest/cases/gotoolchain.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/image_typedep.py b/poky/meta/lib/oeqa/selftest/cases/image_typedep.py
index 5b182a8..17c98ba 100644
--- a/poky/meta/lib/oeqa/selftest/cases/image_typedep.py
+++ b/poky/meta/lib/oeqa/selftest/cases/image_typedep.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/imagefeatures.py b/poky/meta/lib/oeqa/selftest/cases/imagefeatures.py
index 7deafef..d3fd528 100644
--- a/poky/meta/lib/oeqa/selftest/cases/imagefeatures.py
+++ b/poky/meta/lib/oeqa/selftest/cases/imagefeatures.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/incompatible_lic.py b/poky/meta/lib/oeqa/selftest/cases/incompatible_lic.py
index 6279d74..4edf60f 100644
--- a/poky/meta/lib/oeqa/selftest/cases/incompatible_lic.py
+++ b/poky/meta/lib/oeqa/selftest/cases/incompatible_lic.py
@@ -1,3 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 from oeqa.selftest.case import OESelftestTestCase
 from oeqa.utils.commands import bitbake
 
@@ -134,7 +139,7 @@
 
     def test_core_image_full_cmdline_weston(self):
         self.write_config("""
-INHERIT += "testimage"
+IMAGE_CLASSES += "testimage"
 INCOMPATIBLE_LICENSE:pn-core-image-full-cmdline = "GPL-3.0* LGPL-3.0*"
 INCOMPATIBLE_LICENSE:pn-core-image-weston = "GPL-3.0* LGPL-3.0*"
 # Settings for full-cmdline
diff --git a/poky/meta/lib/oeqa/selftest/cases/intercept.py b/poky/meta/lib/oeqa/selftest/cases/intercept.py
index f12874d..12583c3 100644
--- a/poky/meta/lib/oeqa/selftest/cases/intercept.py
+++ b/poky/meta/lib/oeqa/selftest/cases/intercept.py
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 from oeqa.selftest.case import OESelftestTestCase
 from oeqa.utils.commands import bitbake
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/kerneldevelopment.py b/poky/meta/lib/oeqa/selftest/cases/kerneldevelopment.py
index b1623a1..4325f38 100644
--- a/poky/meta/lib/oeqa/selftest/cases/kerneldevelopment.py
+++ b/poky/meta/lib/oeqa/selftest/cases/kerneldevelopment.py
@@ -1,3 +1,9 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
 import os
 from oeqa.selftest.case import OESelftestTestCase
 from oeqa.utils.commands import runCmd, get_bb_var
diff --git a/poky/meta/lib/oeqa/selftest/cases/layerappend.py b/poky/meta/lib/oeqa/selftest/cases/layerappend.py
index 8fb1e6c..379ed58 100644
--- a/poky/meta/lib/oeqa/selftest/cases/layerappend.py
+++ b/poky/meta/lib/oeqa/selftest/cases/layerappend.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/liboe.py b/poky/meta/lib/oeqa/selftest/cases/liboe.py
index afe8f88..fab6929 100644
--- a/poky/meta/lib/oeqa/selftest/cases/liboe.py
+++ b/poky/meta/lib/oeqa/selftest/cases/liboe.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/lic_checksum.py b/poky/meta/lib/oeqa/selftest/cases/lic_checksum.py
index 8f1226e..5897a39 100644
--- a/poky/meta/lib/oeqa/selftest/cases/lic_checksum.py
+++ b/poky/meta/lib/oeqa/selftest/cases/lic_checksum.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/manifest.py b/poky/meta/lib/oeqa/selftest/cases/manifest.py
index 0a04c13..07a6c80 100644
--- a/poky/meta/lib/oeqa/selftest/cases/manifest.py
+++ b/poky/meta/lib/oeqa/selftest/cases/manifest.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/meta_ide.py b/poky/meta/lib/oeqa/selftest/cases/meta_ide.py
index ce7bba4..bae9835 100644
--- a/poky/meta/lib/oeqa/selftest/cases/meta_ide.py
+++ b/poky/meta/lib/oeqa/selftest/cases/meta_ide.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/multiconfig.py b/poky/meta/lib/oeqa/selftest/cases/multiconfig.py
index 83cbd13..f509cbf 100644
--- a/poky/meta/lib/oeqa/selftest/cases/multiconfig.py
+++ b/poky/meta/lib/oeqa/selftest/cases/multiconfig.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/newlib.py b/poky/meta/lib/oeqa/selftest/cases/newlib.py
index 999e3e7..fe57aa5 100644
--- a/poky/meta/lib/oeqa/selftest/cases/newlib.py
+++ b/poky/meta/lib/oeqa/selftest/cases/newlib.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py b/poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py
index 33bd6df..c3c15d8 100644
--- a/poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py
+++ b/poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/oelib/elf.py b/poky/meta/lib/oeqa/selftest/cases/oelib/elf.py
index 5a5f9b4..7bf550b 100644
--- a/poky/meta/lib/oeqa/selftest/cases/oelib/elf.py
+++ b/poky/meta/lib/oeqa/selftest/cases/oelib/elf.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/oelib/license.py b/poky/meta/lib/oeqa/selftest/cases/oelib/license.py
index 3b35939..5eea12e 100644
--- a/poky/meta/lib/oeqa/selftest/cases/oelib/license.py
+++ b/poky/meta/lib/oeqa/selftest/cases/oelib/license.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/oelib/path.py b/poky/meta/lib/oeqa/selftest/cases/oelib/path.py
index a1cfa08..b963e44 100644
--- a/poky/meta/lib/oeqa/selftest/cases/oelib/path.py
+++ b/poky/meta/lib/oeqa/selftest/cases/oelib/path.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/oelib/types.py b/poky/meta/lib/oeqa/selftest/cases/oelib/types.py
index 7eb49e6..58318b1 100644
--- a/poky/meta/lib/oeqa/selftest/cases/oelib/types.py
+++ b/poky/meta/lib/oeqa/selftest/cases/oelib/types.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/oelib/utils.py b/poky/meta/lib/oeqa/selftest/cases/oelib/utils.py
index bbf67bf..0cb4642 100644
--- a/poky/meta/lib/oeqa/selftest/cases/oelib/utils.py
+++ b/poky/meta/lib/oeqa/selftest/cases/oelib/utils.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/oescripts.py b/poky/meta/lib/oeqa/selftest/cases/oescripts.py
index d3a789a..ea08d9a 100644
--- a/poky/meta/lib/oeqa/selftest/cases/oescripts.py
+++ b/poky/meta/lib/oeqa/selftest/cases/oescripts.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/overlayfs.py b/poky/meta/lib/oeqa/selftest/cases/overlayfs.py
index 96beb8b..bff22f2 100644
--- a/poky/meta/lib/oeqa/selftest/cases/overlayfs.py
+++ b/poky/meta/lib/oeqa/selftest/cases/overlayfs.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/package.py b/poky/meta/lib/oeqa/selftest/cases/package.py
index 51d8352..2d1b48a 100644
--- a/poky/meta/lib/oeqa/selftest/cases/package.py
+++ b/poky/meta/lib/oeqa/selftest/cases/package.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/pkgdata.py b/poky/meta/lib/oeqa/selftest/cases/pkgdata.py
index 87d069d..d786c33 100644
--- a/poky/meta/lib/oeqa/selftest/cases/pkgdata.py
+++ b/poky/meta/lib/oeqa/selftest/cases/pkgdata.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/prservice.py b/poky/meta/lib/oeqa/selftest/cases/prservice.py
index 10158ca..cb95503 100644
--- a/poky/meta/lib/oeqa/selftest/cases/prservice.py
+++ b/poky/meta/lib/oeqa/selftest/cases/prservice.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/pseudo.py b/poky/meta/lib/oeqa/selftest/cases/pseudo.py
index 33593d5..3ef8786 100644
--- a/poky/meta/lib/oeqa/selftest/cases/pseudo.py
+++ b/poky/meta/lib/oeqa/selftest/cases/pseudo.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/recipetool.py b/poky/meta/lib/oeqa/selftest/cases/recipetool.py
index 510dae6..25b06cd 100644
--- a/poky/meta/lib/oeqa/selftest/cases/recipetool.py
+++ b/poky/meta/lib/oeqa/selftest/cases/recipetool.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/recipeutils.py b/poky/meta/lib/oeqa/selftest/cases/recipeutils.py
index 74b2098..6334f1c 100644
--- a/poky/meta/lib/oeqa/selftest/cases/recipeutils.py
+++ b/poky/meta/lib/oeqa/selftest/cases/recipeutils.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/reproducible.py b/poky/meta/lib/oeqa/selftest/cases/reproducible.py
index 5042c11..f4dd779 100644
--- a/poky/meta/lib/oeqa/selftest/cases/reproducible.py
+++ b/poky/meta/lib/oeqa/selftest/cases/reproducible.py
@@ -16,6 +16,8 @@
 import datetime
 
 exclude_packages = [
+	'rust',
+	'rust-dbg'
 	]
 
 def is_excluded(package):
diff --git a/poky/meta/lib/oeqa/selftest/cases/resulttooltests.py b/poky/meta/lib/oeqa/selftest/cases/resulttooltests.py
index dac5c46..c2e76f1 100644
--- a/poky/meta/lib/oeqa/selftest/cases/resulttooltests.py
+++ b/poky/meta/lib/oeqa/selftest/cases/resulttooltests.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/rootfspostcommandstests.py b/poky/meta/lib/oeqa/selftest/cases/rootfspostcommandstests.py
new file mode 100644
index 0000000..44e2c09
--- /dev/null
+++ b/poky/meta/lib/oeqa/selftest/cases/rootfspostcommandstests.py
@@ -0,0 +1,97 @@
+# SPDX-FileCopyrightText: Huawei Inc.
+#
+# SPDX-License-Identifier: MIT
+
+import os
+import oe
+import unittest
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake, get_bb_vars
+
+class ShadowUtilsTidyFiles(OESelftestTestCase):
+    """
+    Check if shadow image rootfs files are tidy.
+
+    The tests are focused on testing the functionality provided by the
+    'tidy_shadowutils_files' rootfs postprocess command (via
+    SORT_PASSWD_POSTPROCESS_COMMAND).
+    """
+
+    def sysconf_build(self):
+        """
+        Verify if shadow tidy files tests are to be run and if yes, build a
+        test image and return its sysconf rootfs path.
+        """
+
+        test_image = "core-image-minimal"
+
+        config = 'IMAGE_CLASSES += "extrausers"\n'
+        config += 'EXTRA_USERS_PARAMS = "groupadd -g 1000 oeqatester; "\n'
+        config += 'EXTRA_USERS_PARAMS += "useradd -p \'\' -u 1000 -N -g 1000 oeqatester; "\n'
+        self.write_config(config)
+
+        vars = get_bb_vars(("IMAGE_ROOTFS", "SORT_PASSWD_POSTPROCESS_COMMAND", "sysconfdir"),
+            test_image)
+        passwd_postprocess_cmd = vars["SORT_PASSWD_POSTPROCESS_COMMAND"]
+        self.assertIsNotNone(passwd_postprocess_cmd)
+        if (passwd_postprocess_cmd.strip() != 'tidy_shadowutils_files;'):
+            raise unittest.SkipTest("Testcase skipped as 'tidy_shadowutils_files' "
+                "rootfs post process command is not the set SORT_PASSWD_POSTPROCESS_COMMAND.")
+
+        rootfs = vars["IMAGE_ROOTFS"]
+        self.assertIsNotNone(rootfs)
+        sysconfdir = vars["sysconfdir"]
+        bitbake(test_image)
+        self.assertIsNotNone(sysconfdir)
+
+        return oe.path.join(rootfs, sysconfdir)
+
+    def test_shadowutils_backup_files(self):
+        """
+        Test that the rootfs doesn't include any known shadow backup files.
+        """
+
+        backup_files = (
+            'group-',
+            'gshadow-',
+            'passwd-',
+            'shadow-',
+            'subgid-',
+            'subuid-',
+        )
+
+        rootfs_sysconfdir = self.sysconf_build()
+        found = []
+        for backup_file in backup_files:
+            backup_filepath = oe.path.join(rootfs_sysconfdir, backup_file)
+            if os.path.exists(backup_filepath):
+                found.append(backup_file)
+        if (found):
+            raise Exception('The following shadow backup files were found in '
+                'the rootfs: %s' % found)
+
+    def test_shadowutils_sorted_files(self):
+        """
+        Test that the 'passwd' and the 'group' shadow utils files are ordered
+        by ID.
+        """
+
+        files = (
+            'passwd',
+            'group',
+        )
+
+        rootfs_sysconfdir = self.sysconf_build()
+        unsorted = []
+        for file in files:
+            filepath = oe.path.join(rootfs_sysconfdir, file)
+            with open(filepath, 'rb') as f:
+                ids = []
+                lines = f.readlines()
+                for line in lines:
+                    entries = line.split(b':')
+                    ids.append(int(entries[2]))
+            if (ids != sorted(ids)):
+                unsorted.append(file)
+        if (unsorted):
+            raise Exception("The following files were not sorted by ID as expected: %s" % unsorted)
diff --git a/poky/meta/lib/oeqa/selftest/cases/rpmtests.py b/poky/meta/lib/oeqa/selftest/cases/rpmtests.py
new file mode 100644
index 0000000..902d7dc
--- /dev/null
+++ b/poky/meta/lib/oeqa/selftest/cases/rpmtests.py
@@ -0,0 +1,14 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake
+
+class BitbakeTests(OESelftestTestCase):
+
+    def test_rpm_filenames(self):
+        test_recipe = "testrpm"
+        bitbake(test_recipe)
diff --git a/poky/meta/lib/oeqa/selftest/cases/runcmd.py b/poky/meta/lib/oeqa/selftest/cases/runcmd.py
index e961238..6fd96b8 100644
--- a/poky/meta/lib/oeqa/selftest/cases/runcmd.py
+++ b/poky/meta/lib/oeqa/selftest/cases/runcmd.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/runtime_test.py b/poky/meta/lib/oeqa/selftest/cases/runtime_test.py
index 857737f..fe83b24 100644
--- a/poky/meta/lib/oeqa/selftest/cases/runtime_test.py
+++ b/poky/meta/lib/oeqa/selftest/cases/runtime_test.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
@@ -23,7 +25,7 @@
         Author: Mariano Lopez <mariano.lopez@intel.com>
         """
 
-        features = 'INHERIT += "testexport"\n'
+        features = 'IMAGE_CLASSES += "testexport"\n'
         # These aren't the actual IP addresses but testexport class needs something defined
         features += 'TEST_SERVER_IP = "192.168.7.1"\n'
         features += 'TEST_TARGET_IP = "192.168.7.1"\n'
@@ -64,7 +66,7 @@
         Author: Mariano Lopez <mariano.lopez@intel.com>
         """
 
-        features = 'INHERIT += "testexport"\n'
+        features = 'IMAGE_CLASSES += "testexport"\n'
         # These aren't the actual IP addresses but testexport class needs something defined
         features += 'TEST_SERVER_IP = "192.168.7.1"\n'
         features += 'TEST_TARGET_IP = "192.168.7.1"\n'
@@ -119,7 +121,7 @@
         if get_bb_var('DISTRO') == 'poky-tiny':
             self.skipTest('core-image-full-cmdline not buildable for poky-tiny')
 
-        features = 'INHERIT += "testimage"\n'
+        features = 'IMAGE_CLASSES += "testimage"\n'
         features += 'IMAGE_INSTALL:append = " libssl"\n'
         features += 'TEST_SUITES = "ping ssh selftest"\n'
         self.write_config(features)
@@ -137,7 +139,7 @@
         if get_bb_var('DISTRO') == 'poky-tiny':
             self.skipTest('core-image-full-cmdline not buildable for poky-tiny')
 
-        features = 'INHERIT += "testimage"\n'
+        features = 'IMAGE_CLASSES += "testimage"\n'
         features += 'TEST_SUITES = "ping ssh dnf_runtime dnf.DnfBasicTest.test_dnf_help"\n'
         # We don't yet know what the server ip and port will be - they will be patched
         # in at the start of the on-image test
@@ -172,7 +174,7 @@
         if get_bb_var('DISTRO') == 'poky-tiny':
             self.skipTest('core-image-full-cmdline not buildable for poky-tiny')
 
-        features = 'INHERIT += "testimage"\n'
+        features = 'IMAGE_CLASSES += "testimage"\n'
         features += 'TEST_SUITES = "ping ssh apt.AptRepoTest.test_apt_install_from_repo"\n'
         # We don't yet know what the server ip and port will be - they will be patched
         # in at the start of the on-image test
@@ -222,7 +224,7 @@
 
         qemu_packageconfig = get_bb_var('PACKAGECONFIG', 'qemu-system-native')
         qemu_distrofeatures = get_bb_var('DISTRO_FEATURES', 'qemu-system-native')
-        features = 'INHERIT += "testimage"\n'
+        features = 'IMAGE_CLASSES += "testimage"\n'
         if 'gtk+' not in qemu_packageconfig:
             features += 'PACKAGECONFIG:append:pn-qemu-system-native = " gtk+"\n'
         if 'sdl' not in qemu_packageconfig:
@@ -267,7 +269,7 @@
         except subprocess.CalledProcessError as e:
             self.fail("Could not determine the path to dri drivers on the host via pkg-config.\nPlease install Mesa development files (particularly, dri.pc) on the host machine.")
         qemu_distrofeatures = get_bb_var('DISTRO_FEATURES', 'qemu-system-native')
-        features = 'INHERIT += "testimage"\n'
+        features = 'IMAGE_CLASSES += "testimage"\n'
         if 'opengl' not in qemu_distrofeatures:
             features += 'DISTRO_FEATURES:append = " opengl"\n'
         features += 'TEST_SUITES = "ping ssh virgl"\n'
diff --git a/poky/meta/lib/oeqa/selftest/cases/selftest.py b/poky/meta/lib/oeqa/selftest/cases/selftest.py
index 7268e25..a80a865 100644
--- a/poky/meta/lib/oeqa/selftest/cases/selftest.py
+++ b/poky/meta/lib/oeqa/selftest/cases/selftest.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/signing.py b/poky/meta/lib/oeqa/selftest/cases/signing.py
index 6f3d4ae..322e753 100644
--- a/poky/meta/lib/oeqa/selftest/cases/signing.py
+++ b/poky/meta/lib/oeqa/selftest/cases/signing.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/sstate.py b/poky/meta/lib/oeqa/selftest/cases/sstate.py
index 1767663..e73bb94 100644
--- a/poky/meta/lib/oeqa/selftest/cases/sstate.py
+++ b/poky/meta/lib/oeqa/selftest/cases/sstate.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/sstatetests.py b/poky/meta/lib/oeqa/selftest/cases/sstatetests.py
index 63827f3..ae766f9 100644
--- a/poky/meta/lib/oeqa/selftest/cases/sstatetests.py
+++ b/poky/meta/lib/oeqa/selftest/cases/sstatetests.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
@@ -384,8 +386,7 @@
         self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
         bitbake("world meta-toolchain -S none")
 
-        def get_files(d):
-            f = {}
+        def get_files(d, result):
             for root, dirs, files in os.walk(d):
                 for name in files:
                     if "meta-environment" in root or "cross-canadian" in root:
@@ -393,23 +394,22 @@
                     if "do_build" not in name:
                         # 1.4.1+gitAUTOINC+302fca9f4c-r0.do_package_write_ipk.sigdata.f3a2a38697da743f0dbed8b56aafcf79
                         (_, task, _, shash) = name.rsplit(".", 3)
-                        f[os.path.join(os.path.basename(root), task)] = shash
-            return f
+                        result[os.path.join(os.path.basename(root), task)] = shash
 
-        nativesdkdir = os.path.basename(glob.glob(self.topdir + "/tmp-sstatesamehash/stamps/*-nativesdk*-linux")[0])
+        files1 = {}
+        files2 = {}
+        subdirs = sorted(glob.glob(self.topdir + "/tmp-sstatesamehash/stamps/*-nativesdk*-linux"))
+        if allarch:
+            subdirs.extend(sorted(glob.glob(self.topdir + "/tmp-sstatesamehash/stamps/all-*-linux")))
 
-        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/" + nativesdkdir)
-        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/" + nativesdkdir)
+        for subdir in subdirs:
+            nativesdkdir = os.path.basename(subdir)
+            get_files(self.topdir + "/tmp-sstatesamehash/stamps/" + nativesdkdir, files1)
+            get_files(self.topdir + "/tmp-sstatesamehash2/stamps/" + nativesdkdir, files2)
+
         self.maxDiff = None
         self.assertEqual(files1, files2)
 
-        if allarch:
-            allarchdir = os.path.basename(glob.glob(self.topdir + "/tmp-sstatesamehash/stamps/all-*-linux")[0])
-
-            files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/" + allarchdir)
-            files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/" + allarchdir)
-            self.assertEqual(files1, files2)
-
     def test_sstate_sametune_samesigs(self):
         """
         The sstate checksums of two identical machines (using the same tune) should be the
diff --git a/poky/meta/lib/oeqa/selftest/cases/sysroot.py b/poky/meta/lib/oeqa/selftest/cases/sysroot.py
index 294ba4a..ef854f6 100644
--- a/poky/meta/lib/oeqa/selftest/cases/sysroot.py
+++ b/poky/meta/lib/oeqa/selftest/cases/sysroot.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/tinfoil.py b/poky/meta/lib/oeqa/selftest/cases/tinfoil.py
index c81d56d..0a66615 100644
--- a/poky/meta/lib/oeqa/selftest/cases/tinfoil.py
+++ b/poky/meta/lib/oeqa/selftest/cases/tinfoil.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/selftest/cases/wic.py b/poky/meta/lib/oeqa/selftest/cases/wic.py
index 53058df..0d664d7 100644
--- a/poky/meta/lib/oeqa/selftest/cases/wic.py
+++ b/poky/meta/lib/oeqa/selftest/cases/wic.py
@@ -1420,7 +1420,7 @@
 
         # list directory content of the first partition
         result = runCmd("wic ls %s:1 -n %s" % (images[0], sysroot))
-        self.assertIn('\n%s        ' % kerneltype.upper(), result.output)
+        self.assertIn('\n%s ' % kerneltype.upper(), result.output)
         self.assertIn('\nEFI          <DIR>     ', result.output)
 
         # remove file. EFI partitions are case-insensitive so exercise that too
diff --git a/poky/meta/lib/oeqa/selftest/cases/wrapper.py b/poky/meta/lib/oeqa/selftest/cases/wrapper.py
index 6de6331..f2be442 100644
--- a/poky/meta/lib/oeqa/selftest/cases/wrapper.py
+++ b/poky/meta/lib/oeqa/selftest/cases/wrapper.py
@@ -1,3 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 from oeqa.selftest.case import OESelftestTestCase
 from oeqa.utils.commands import bitbake
 
diff --git a/poky/meta/lib/oeqa/utils/__init__.py b/poky/meta/lib/oeqa/utils/__init__.py
index 6d1ec4c..fbc7f7d 100644
--- a/poky/meta/lib/oeqa/utils/__init__.py
+++ b/poky/meta/lib/oeqa/utils/__init__.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 # Enable other layers to have modules in the same named directory
diff --git a/poky/meta/lib/oeqa/utils/commands.py b/poky/meta/lib/oeqa/utils/commands.py
index 0242614..f733fcd 100644
--- a/poky/meta/lib/oeqa/utils/commands.py
+++ b/poky/meta/lib/oeqa/utils/commands.py
@@ -168,15 +168,22 @@
 
 
 def runCmd(command, ignore_status=False, timeout=None, assert_error=True, sync=True,
-          native_sysroot=None, limit_exc_output=0, output_log=None, **options):
+          native_sysroot=None, target_sys=None, limit_exc_output=0, output_log=None, **options):
     result = Result()
 
     if native_sysroot:
-        extra_paths = "%s/sbin:%s/usr/sbin:%s/usr/bin" % \
-                      (native_sysroot, native_sysroot, native_sysroot)
-        nenv = dict(options.get('env', os.environ))
-        nenv['PATH'] = extra_paths + ':' + nenv.get('PATH', '')
-        options['env'] = nenv
+        new_env = dict(options.get('env', os.environ))
+        paths = new_env["PATH"].split(":")
+        paths = [
+            os.path.join(native_sysroot, "bin"),
+            os.path.join(native_sysroot, "sbin"),
+            os.path.join(native_sysroot, "usr", "bin"),
+            os.path.join(native_sysroot, "usr", "sbin"),
+        ] + paths
+        if target_sys:
+            paths = [os.path.join(native_sysroot, "usr", "bin", target_sys)] + paths
+        new_env["PATH"] = ":".join(paths)
+        options['env'] = new_env
 
     cmd = Command(command, timeout=timeout, output_log=output_log, **options)
     cmd.run()
diff --git a/poky/meta/lib/oeqa/utils/dump.py b/poky/meta/lib/oeqa/utils/dump.py
index 95a79a5..bcee03b 100644
--- a/poky/meta/lib/oeqa/utils/dump.py
+++ b/poky/meta/lib/oeqa/utils/dump.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/utils/ftools.py b/poky/meta/lib/oeqa/utils/ftools.py
index 3093419..a50aaa8 100644
--- a/poky/meta/lib/oeqa/utils/ftools.py
+++ b/poky/meta/lib/oeqa/utils/ftools.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/utils/httpserver.py b/poky/meta/lib/oeqa/utils/httpserver.py
index 58d3c3b..8ce1dd4 100644
--- a/poky/meta/lib/oeqa/utils/httpserver.py
+++ b/poky/meta/lib/oeqa/utils/httpserver.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/utils/logparser.py b/poky/meta/lib/oeqa/utils/logparser.py
index 879aefc..7cb79a8 100644
--- a/poky/meta/lib/oeqa/utils/logparser.py
+++ b/poky/meta/lib/oeqa/utils/logparser.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/utils/network.py b/poky/meta/lib/oeqa/utils/network.py
index 59d0172..da4ffda 100644
--- a/poky/meta/lib/oeqa/utils/network.py
+++ b/poky/meta/lib/oeqa/utils/network.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/utils/nfs.py b/poky/meta/lib/oeqa/utils/nfs.py
index a37686c..c121865 100644
--- a/poky/meta/lib/oeqa/utils/nfs.py
+++ b/poky/meta/lib/oeqa/utils/nfs.py
@@ -1,4 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
+#
 import os
 import sys
 import tempfile
diff --git a/poky/meta/lib/oeqa/utils/package_manager.py b/poky/meta/lib/oeqa/utils/package_manager.py
index 6b67f22..db799b6 100644
--- a/poky/meta/lib/oeqa/utils/package_manager.py
+++ b/poky/meta/lib/oeqa/utils/package_manager.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/meta/lib/oeqa/utils/qemurunner.py b/poky/meta/lib/oeqa/utils/qemurunner.py
index 76296d5..4c3d201 100644
--- a/poky/meta/lib/oeqa/utils/qemurunner.py
+++ b/poky/meta/lib/oeqa/utils/qemurunner.py
@@ -618,6 +618,8 @@
                 return self.qmp.cmd(command)
 
     def run_serial(self, command, raw=False, timeout=60):
+        # Returns (status, output) where status is 1 on success and 0 on error
+
         # We assume target system have echo to get command status
         if not raw:
             command = "%s; echo $?\n" % command
diff --git a/poky/meta/lib/oeqa/utils/subprocesstweak.py b/poky/meta/lib/oeqa/utils/subprocesstweak.py
index b47975a..3e43ed5 100644
--- a/poky/meta/lib/oeqa/utils/subprocesstweak.py
+++ b/poky/meta/lib/oeqa/utils/subprocesstweak.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 import subprocess
diff --git a/poky/meta/lib/rootfspostcommands.py b/poky/meta/lib/rootfspostcommands.py
index fdb9f5b..5386eea 100644
--- a/poky/meta/lib/rootfspostcommands.py
+++ b/poky/meta/lib/rootfspostcommands.py
@@ -1,16 +1,19 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
 import os
 
-def sort_file(filename, mapping):
+def sort_shadowutils_file(filename, mapping):
     """
     Sorts a passwd or group file based on the numeric ID in the third column.
     If a mapping is given, the name from the first column is mapped via that
     dictionary instead (necessary for /etc/shadow and /etc/gshadow). If not,
     a new mapping is created on the fly and returned.
     """
+
     new_mapping = {}
     with open(filename, 'rb+') as f:
         lines = f.readlines()
@@ -31,30 +34,57 @@
         # We overwrite the entire file, i.e. no truncate() necessary.
         f.seek(0)
         f.write(b''.join(lines))
+
     return new_mapping
 
-def remove_backup(filename):
+def sort_shadowutils_files(sysconfdir):
     """
-    Removes the backup file for files like /etc/passwd.
+    Sorts shadow-utils 'passwd' and 'group' files in a rootfs' /etc directory
+    by ID.
     """
+
+    for main, shadow in (('passwd', 'shadow'),
+                         ('group', 'gshadow')):
+        filename = os.path.join(sysconfdir, main)
+        if os.path.exists(filename):
+            mapping = sort_shadowutils_file(filename, None)
+            filename = os.path.join(sysconfdir, shadow)
+            if os.path.exists(filename):
+                 sort_shadowutils_file(filename, mapping)
+
+def remove_shadowutils_backup_file(filename):
+    """
+    Remove shadow-utils backup file for files like /etc/passwd.
+    """
+
     backup_filename = filename + '-'
     if os.path.exists(backup_filename):
         os.unlink(backup_filename)
 
-def sort_passwd(sysconfdir):
+def remove_shadowutils_backup_files(sysconfdir):
     """
-    Sorts passwd and group files in a rootfs /etc directory by ID.
-    Backup files are sometimes are inconsistent and then cannot be
-    sorted (YOCTO #11043), and more importantly, are not needed in
-    the initial rootfs, so they get deleted.
+    Remove shadow-utils backup files in a rootfs /etc directory. They are not
+    needed in the initial root filesystem and sorting them can be inconsistent
+    (YOCTO #11043).
     """
-    for main, shadow in (('passwd', 'shadow'),
-                         ('group', 'gshadow')):
-        filename = os.path.join(sysconfdir, main)
-        remove_backup(filename)
-        if os.path.exists(filename):
-            mapping = sort_file(filename, None)
-            filename = os.path.join(sysconfdir, shadow)
-            remove_backup(filename)
-            if os.path.exists(filename):
-                 sort_file(filename, mapping)
+
+    for filename in (
+            'group',
+            'gshadow',
+            'passwd',
+            'shadow',
+            'subgid',
+            'subuid',
+        ):
+        filepath = os.path.join(sysconfdir, filename)
+        remove_shadowutils_backup_file(filepath)
+
+def tidy_shadowutils_files(sysconfdir):
+    """
+    Tidy up shadow-utils files.
+    """
+
+    remove_shadowutils_backup_files(sysconfdir)
+    sort_shadowutils_files(sysconfdir)
+
+    return True
diff --git a/poky/meta/recipes-bsp/efivar/efivar/0001-Fix-glibc-2.36-build-mount.h-conflicts.patch b/poky/meta/recipes-bsp/efivar/efivar/0001-Fix-glibc-2.36-build-mount.h-conflicts.patch
new file mode 100644
index 0000000..28dadab
--- /dev/null
+++ b/poky/meta/recipes-bsp/efivar/efivar/0001-Fix-glibc-2.36-build-mount.h-conflicts.patch
@@ -0,0 +1,60 @@
+From 7b0e7ba674321ec1ddd6b9cbb419e5fb44f88bb3 Mon Sep 17 00:00:00 2001
+From: Robbie Harwood <rharwood@redhat.com>
+Date: Thu, 28 Jul 2022 16:11:24 -0400
+Subject: [PATCH] Fix glibc 2.36 build (mount.h conflicts)
+
+glibc has decided that sys/mount.h and linux/mount.h are no longer
+usable at the same time.  This broke the build, since linux/fs.h itself
+includes linux/mount.h.  For now, fix the build by only including
+sys/mount.h where we need it.
+
+See-also: https://sourceware.org/glibc/wiki/Release/2.36#Usage_of_.3Clinux.2Fmount.h.3E_and_.3Csys.2Fmount.h.3E
+Resolves: #227
+
+Upstream-Status: Backport [https://github.com/rhboot/efivar/commit/bc65d63ebf8fe6ac8a099ff15ca200986dba1565]
+Signed-off-by: Robbie Harwood <rharwood@redhat.com>
+---
+ src/gpt.c   | 1 +
+ src/linux.c | 1 +
+ src/util.h  | 1 -
+ 3 files changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/src/gpt.c b/src/gpt.c
+index 1eda049..21413c3 100644
+--- a/src/gpt.c
++++ b/src/gpt.c
+@@ -17,6 +17,7 @@
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
++#include <sys/mount.h>
+ #include <sys/param.h>
+ #include <sys/stat.h>
+ #include <sys/utsname.h>
+diff --git a/src/linux.c b/src/linux.c
+index 47e45ae..1780816 100644
+--- a/src/linux.c
++++ b/src/linux.c
+@@ -20,6 +20,7 @@
+ #include <stdbool.h>
+ #include <stdio.h>
+ #include <sys/ioctl.h>
++#include <sys/mount.h>
+ #include <sys/socket.h>
+ #include <sys/sysmacros.h>
+ #include <sys/types.h>
+diff --git a/src/util.h b/src/util.h
+index 3300666..1e67e44 100644
+--- a/src/util.h
++++ b/src/util.h
+@@ -23,7 +23,6 @@
+ #include <stdio.h>
+ #include <string.h>
+ #include <sys/ioctl.h>
+-#include <sys/mount.h>
+ #include <sys/stat.h>
+ #include <sys/types.h>
+ #include <tgmath.h>
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-bsp/efivar/efivar_38.bb b/poky/meta/recipes-bsp/efivar/efivar_38.bb
index 42625fa..6a69189 100644
--- a/poky/meta/recipes-bsp/efivar/efivar_38.bb
+++ b/poky/meta/recipes-bsp/efivar/efivar_38.bb
@@ -12,6 +12,7 @@
            file://0001-src-Makefile-build-util.c-separately-for-makeguids.patch \
            file://efisecdb-fix-build-with-musl-libc.patch \
            file://0001-Fix-invalid-free-in-main.patch \
+           file://0001-Fix-glibc-2.36-build-mount.h-conflicts.patch \
            "
 SRCREV = "1753149d4176ebfb2b135ac0aaf79340bf0e7a93"
 
diff --git a/poky/meta/recipes-bsp/gnu-efi/gnu-efi/lib-Makefile-fix-parallel-issue.patch b/poky/meta/recipes-bsp/gnu-efi/gnu-efi/lib-Makefile-fix-parallel-issue.patch
deleted file mode 100644
index dc00b8f..0000000
--- a/poky/meta/recipes-bsp/gnu-efi/gnu-efi/lib-Makefile-fix-parallel-issue.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-From 3ec8c2a70304eabd5760937a4ec3fbc4068a77ed Mon Sep 17 00:00:00 2001
-From: Robert Yang <liezhi.yang@windriver.com>
-Date: Thu, 23 Apr 2015 01:49:31 -0700
-Subject: [PATCH 2/3] lib/Makefile: fix parallel issue
-
-Fixed:
-Assembler messages:
-Fatal error: can't create runtime/rtlock.o: No such file or directory
-Assembler messages:
-Fatal error: can't create runtime/rtdata.o: No such file or directory
-Assembler messages:
-Fatal error: can't create runtime/vm.o: No such file or directory
-Assembler messages:
-Fatal error: can't create runtime/efirtlib.o: No such file or directory
-
-Upstream-Status: Pending
-
-Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
----
- lib/Makefile | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/lib/Makefile b/lib/Makefile
-index 048751a..ed39bbb 100644
---- a/lib/Makefile
-+++ b/lib/Makefile
-@@ -74,6 +74,8 @@ all: libsubdirs libefi.a
- libsubdirs:
- 	for sdir in $(SUBDIRS); do mkdir -p $$sdir; done
- 
-+$(OBJS): libsubdirs
-+
- libefi.a: $(OBJS)
- 	$(AR) $(ARFLAGS) $@ $(OBJS)
- 
--- 
-2.7.4
-
diff --git a/poky/meta/recipes-bsp/gnu-efi/gnu-efi/parallel-make-archives.patch b/poky/meta/recipes-bsp/gnu-efi/gnu-efi/parallel-make-archives.patch
index 8a0138b..63d9b6f 100644
--- a/poky/meta/recipes-bsp/gnu-efi/gnu-efi/parallel-make-archives.patch
+++ b/poky/meta/recipes-bsp/gnu-efi/gnu-efi/parallel-make-archives.patch
@@ -1,7 +1,7 @@
-From 48b2cdbcd761105e8ebad412fcbf23db1ac4ef7c Mon Sep 17 00:00:00 2001
+From f56ddb00a656af2e84f839738fad19909ac65047 Mon Sep 17 00:00:00 2001
 From: Saul Wold <sgw@linux.intel.com>
 Date: Sun, 9 Mar 2014 15:22:15 +0200
-Subject: [PATCH 1/3] Fix parallel make failure for archives
+Subject: [PATCH] Fix parallel make failure for archives
 
 Upstream-Status: Pending
 
@@ -20,12 +20,16 @@
 [Rebased for 3.0.8]
 Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
 
+---
+ lib/Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
 diff --git a/lib/Makefile b/lib/Makefile
-index 0e6410d..048751a 100644
+index 1fc6a47..54b0ca7 100644
 --- a/lib/Makefile
 +++ b/lib/Makefile
-@@ -75,7 +75,7 @@ libsubdirs:
- 	for sdir in $(SUBDIRS); do mkdir -p $$sdir; done
+@@ -77,7 +77,7 @@ libsubdirs:
+ $(OBJS): libsubdirs
  
  libefi.a: $(OBJS)
 -	$(AR) $(ARFLAGS) $@ $^
@@ -33,6 +37,3 @@
  
  clean:
  	rm -f libefi.a *~ $(OBJS) */*.o
--- 
-2.7.4
-
diff --git a/poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.14.bb b/poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.14.bb
deleted file mode 100644
index 36d1035..0000000
--- a/poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.14.bb
+++ /dev/null
@@ -1,70 +0,0 @@
-SUMMARY = "Libraries for producing EFI binaries"
-HOMEPAGE = "http://sourceforge.net/projects/gnu-efi/"
-DESCRIPTION = "GNU-EFI aims to Develop EFI applications for ARM-64, ARM-32, x86_64, IA-64 (IPF), IA-32 (x86), and MIPS platforms using the GNU toolchain and the EFI development environment."
-SECTION = "devel"
-LICENSE = "GPL-2.0-or-later | BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://gnuefi/crt0-efi-arm.S;beginline=4;endline=16;md5=e582764a4776e60c95bf9ab617343d36 \
-                    file://gnuefi/crt0-efi-aarch64.S;beginline=4;endline=16;md5=e582764a4776e60c95bf9ab617343d36 \
-                    file://inc/efishellintf.h;beginline=13;endline=20;md5=202766b79d708eff3cc70fce15fb80c7 \
-                    file://lib/arm/math.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
-                    file://lib/arm/initplat.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
-                    file://lib/aarch64/math.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
-                    file://lib/aarch64/initplat.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
-                   "
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/files/${BP}.tar.bz2 \
-           file://parallel-make-archives.patch \
-           file://lib-Makefile-fix-parallel-issue.patch \
-           file://gnu-efi-3.0.9-fix-clang-build.patch \
-           "
-
-SRC_URI[sha256sum] = "b73b643a0d5697d1f396d7431448e886dd805668789578e3e1a28277c9528435"
-
-COMPATIBLE_HOST = "(x86_64.*|i.86.*|aarch64.*|arm.*|riscv64.*)-linux"
-COMPATIBLE_HOST:armv4 = 'null'
-
-do_configure:linux-gnux32:prepend() {
-	cp ${STAGING_INCDIR}/gnu/stubs-x32.h ${STAGING_INCDIR}/gnu/stubs-64.h
-	cp ${STAGING_INCDIR}/bits/long-double-32.h ${STAGING_INCDIR}/bits/long-double-64.h
-}
-
-def gnu_efi_arch(d):
-    import re
-    tarch = d.getVar("TARGET_ARCH")
-    if re.match("i[3456789]86", tarch):
-        return "ia32"
-    return tarch
-
-EXTRA_OEMAKE = "'ARCH=${@gnu_efi_arch(d)}' 'CC=${CC}' 'AS=${AS}' 'LD=${LD}' 'AR=${AR}' \
-                'RANLIB=${RANLIB}' 'OBJCOPY=${OBJCOPY}' 'PREFIX=${prefix}' 'LIBDIR=${libdir}' \
-                "
-
-# gnu-efi's Makefile treats prefix as toolchain prefix, so don't
-# export it.
-prefix[unexport] = "1"
-
-do_install() {
-        oe_runmake install INSTALLROOT="${D}"
-}
-
-FILES:${PN} += "${libdir}/*.lds"
-
-# 64-bit binaries are expected for EFI when targeting X32
-INSANE_SKIP:${PN}-dev:append:linux-gnux32 = " arch"
-INSANE_SKIP:${PN}-dev:append:linux-muslx32 = " arch"
-
-BBCLASSEXTEND = "native"
-
-# It doesn't support sse, its make.defaults sets:
-# CFLAGS += -mno-mmx -mno-sse
-# So also remove -mfpmath=sse from TUNE_CCARGS
-TUNE_CCARGS:remove = "-mfpmath=sse"
-
-python () {
-    ccargs = d.getVar('TUNE_CCARGS').split()
-    if '-mx32' in ccargs:
-        # use x86_64 EFI ABI
-        ccargs.remove('-mx32')
-        ccargs.append('-m64')
-        d.setVar('TUNE_CCARGS', ' '.join(ccargs))
-}
diff --git a/poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.15.bb b/poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.15.bb
new file mode 100644
index 0000000..5ae6f39
--- /dev/null
+++ b/poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.15.bb
@@ -0,0 +1,69 @@
+SUMMARY = "Libraries for producing EFI binaries"
+HOMEPAGE = "http://sourceforge.net/projects/gnu-efi/"
+DESCRIPTION = "GNU-EFI aims to Develop EFI applications for ARM-64, ARM-32, x86_64, IA-64 (IPF), IA-32 (x86), and MIPS platforms using the GNU toolchain and the EFI development environment."
+SECTION = "devel"
+LICENSE = "GPL-2.0-or-later | BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://gnuefi/crt0-efi-arm.S;beginline=4;endline=16;md5=e582764a4776e60c95bf9ab617343d36 \
+                    file://gnuefi/crt0-efi-aarch64.S;beginline=4;endline=16;md5=e582764a4776e60c95bf9ab617343d36 \
+                    file://inc/efishellintf.h;beginline=13;endline=20;md5=202766b79d708eff3cc70fce15fb80c7 \
+                    file://lib/arm/math.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
+                    file://lib/arm/initplat.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
+                    file://lib/aarch64/math.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
+                    file://lib/aarch64/initplat.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
+                   "
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/files/${BP}.tar.bz2 \
+           file://parallel-make-archives.patch \
+           file://gnu-efi-3.0.9-fix-clang-build.patch \
+           "
+
+SRC_URI[sha256sum] = "931a257b9c5c1ba65ff519f18373c438a26825f2db7866b163e96d1b168f20ea"
+
+COMPATIBLE_HOST = "(x86_64.*|i.86.*|aarch64.*|arm.*|riscv64.*)-linux"
+COMPATIBLE_HOST:armv4 = 'null'
+
+do_configure:linux-gnux32:prepend() {
+	cp ${STAGING_INCDIR}/gnu/stubs-x32.h ${STAGING_INCDIR}/gnu/stubs-64.h
+	cp ${STAGING_INCDIR}/bits/long-double-32.h ${STAGING_INCDIR}/bits/long-double-64.h
+}
+
+def gnu_efi_arch(d):
+    import re
+    tarch = d.getVar("TARGET_ARCH")
+    if re.match("i[3456789]86", tarch):
+        return "ia32"
+    return tarch
+
+EXTRA_OEMAKE = "'ARCH=${@gnu_efi_arch(d)}' 'CC=${CC}' 'AS=${AS}' 'LD=${LD}' 'AR=${AR}' \
+                'RANLIB=${RANLIB}' 'OBJCOPY=${OBJCOPY}' 'PREFIX=${prefix}' 'LIBDIR=${libdir}' \
+                "
+
+# gnu-efi's Makefile treats prefix as toolchain prefix, so don't
+# export it.
+prefix[unexport] = "1"
+
+do_install() {
+        oe_runmake install INSTALLROOT="${D}"
+}
+
+FILES:${PN} += "${libdir}/*.lds"
+
+# 64-bit binaries are expected for EFI when targeting X32
+INSANE_SKIP:${PN}-dev:append:linux-gnux32 = " arch"
+INSANE_SKIP:${PN}-dev:append:linux-muslx32 = " arch"
+
+BBCLASSEXTEND = "native"
+
+# It doesn't support sse, its make.defaults sets:
+# CFLAGS += -mno-mmx -mno-sse
+# So also remove -mfpmath=sse from TUNE_CCARGS
+TUNE_CCARGS:remove = "-mfpmath=sse"
+
+python () {
+    ccargs = d.getVar('TUNE_CCARGS').split()
+    if '-mx32' in ccargs:
+        # use x86_64 EFI ABI
+        ccargs.remove('-mx32')
+        ccargs.append('-m64')
+        d.setVar('TUNE_CCARGS', ' '.join(ccargs))
+}
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2021-3695-video-readers-png-Drop-greyscale-support-to-fix-heap.patch b/poky/meta/recipes-bsp/grub/files/CVE-2021-3695-video-readers-png-Drop-greyscale-support-to-fix-heap.patch
new file mode 100644
index 0000000..7f7bb1a
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2021-3695-video-readers-png-Drop-greyscale-support-to-fix-heap.patch
@@ -0,0 +1,179 @@
+From e623866d9286410156e8b9d2c82d6253a1b22d08 Mon Sep 17 00:00:00 2001
+From: Daniel Axtens <dja@axtens.net>
+Date: Tue, 6 Jul 2021 18:51:35 +1000
+Subject: [PATCH] video/readers/png: Drop greyscale support to fix heap
+ out-of-bounds write
+
+A 16-bit greyscale PNG without alpha is processed in the following loop:
+
+      for (i = 0; i < (data->image_width * data->image_height);
+	   i++, d1 += 4, d2 += 2)
+	{
+	  d1[R3] = d2[1];
+	  d1[G3] = d2[1];
+	  d1[B3] = d2[1];
+	}
+
+The increment of d1 is wrong. d1 is incremented by 4 bytes per iteration,
+but there are only 3 bytes allocated for storage. This means that image
+data will overwrite somewhat-attacker-controlled parts of memory - 3 bytes
+out of every 4 following the end of the image.
+
+This has existed since greyscale support was added in 2013 in commit
+3ccf16dff98f (grub-core/video/readers/png.c: Support grayscale).
+
+Saving starfield.png as a 16-bit greyscale image without alpha in the gimp
+and attempting to load it causes grub-emu to crash - I don't think this code
+has ever worked.
+
+Delete all PNG greyscale support.
+
+Fixes: CVE-2021-3695
+
+Signed-off-by: Daniel Axtens <dja@axtens.net>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+CVE: CVE-2021-3695
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=e623866d9286410156e8b9d2c82d6253a1b22d08
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+---
+ grub-core/video/readers/png.c | 87 +++--------------------------------
+ 1 file changed, 7 insertions(+), 80 deletions(-)
+
+diff --git a/grub-core/video/readers/png.c b/grub-core/video/readers/png.c
+index 35ae553c8..a3161e25b 100644
+--- a/grub-core/video/readers/png.c
++++ b/grub-core/video/readers/png.c
+@@ -100,7 +100,7 @@ struct grub_png_data
+ 
+   unsigned image_width, image_height;
+   int bpp, is_16bit;
+-  int raw_bytes, is_gray, is_alpha, is_palette;
++  int raw_bytes, is_alpha, is_palette;
+   int row_bytes, color_bits;
+   grub_uint8_t *image_data;
+ 
+@@ -296,13 +296,13 @@ grub_png_decode_image_header (struct grub_png_data *data)
+     data->bpp = 3;
+   else
+     {
+-      data->is_gray = 1;
+-      data->bpp = 1;
++      return grub_error (GRUB_ERR_BAD_FILE_TYPE,
++			 "png: color type not supported");
+     }
+ 
+   if ((color_bits != 8) && (color_bits != 16)
+       && (color_bits != 4
+-	  || !(data->is_gray || data->is_palette)))
++	  || !data->is_palette))
+     return grub_error (GRUB_ERR_BAD_FILE_TYPE,
+                        "png: bit depth must be 8 or 16");
+ 
+@@ -331,7 +331,7 @@ grub_png_decode_image_header (struct grub_png_data *data)
+     }
+ 
+ #ifndef GRUB_CPU_WORDS_BIGENDIAN
+-  if (data->is_16bit || data->is_gray || data->is_palette)
++  if (data->is_16bit || data->is_palette)
+ #endif
+     {
+       data->image_data = grub_calloc (data->image_height, data->row_bytes);
+@@ -899,27 +899,8 @@ grub_png_convert_image (struct grub_png_data *data)
+       int shift;
+       int mask = (1 << data->color_bits) - 1;
+       unsigned j;
+-      if (data->is_gray)
+-	{
+-	  /* Generic formula is
+-	     (0xff * i) / ((1U << data->color_bits) - 1)
+-	     but for allowed bit depth of 1, 2 and for it's
+-	     equivalent to
+-	     (0xff / ((1U << data->color_bits) - 1)) * i
+-	     Precompute the multipliers to avoid division.
+-	  */
+-
+-	  const grub_uint8_t multipliers[5] = { 0xff, 0xff, 0x55, 0x24, 0x11 };
+-	  for (i = 0; i < (1U << data->color_bits); i++)
+-	    {
+-	      grub_uint8_t col = multipliers[data->color_bits] * i;
+-	      palette[i][0] = col;
+-	      palette[i][1] = col;
+-	      palette[i][2] = col;
+-	    }
+-	}
+-      else
+-	grub_memcpy (palette, data->palette, 3 << data->color_bits);
++
++      grub_memcpy (palette, data->palette, 3 << data->color_bits);
+       d1c = d1;
+       d2c = d2;
+       for (j = 0; j < data->image_height; j++, d1c += data->image_width * 3,
+@@ -957,60 +938,6 @@ grub_png_convert_image (struct grub_png_data *data)
+       return;
+     }
+ 
+-  if (data->is_gray)
+-    {
+-      switch (data->bpp)
+-	{
+-	case 4:
+-	  /* 16-bit gray with alpha.  */
+-	  for (i = 0; i < (data->image_width * data->image_height);
+-	       i++, d1 += 4, d2 += 4)
+-	    {
+-	      d1[R4] = d2[3];
+-	      d1[G4] = d2[3];
+-	      d1[B4] = d2[3];
+-	      d1[A4] = d2[1];
+-	    }
+-	  break;
+-	case 2:
+-	  if (data->is_16bit)
+-	    /* 16-bit gray without alpha.  */
+-	    {
+-	      for (i = 0; i < (data->image_width * data->image_height);
+-		   i++, d1 += 4, d2 += 2)
+-		{
+-		  d1[R3] = d2[1];
+-		  d1[G3] = d2[1];
+-		  d1[B3] = d2[1];
+-		}
+-	    }
+-	  else
+-	    /* 8-bit gray with alpha.  */
+-	    {
+-	      for (i = 0; i < (data->image_width * data->image_height);
+-		   i++, d1 += 4, d2 += 2)
+-		{
+-		  d1[R4] = d2[1];
+-		  d1[G4] = d2[1];
+-		  d1[B4] = d2[1];
+-		  d1[A4] = d2[0];
+-		}
+-	    }
+-	  break;
+-	  /* 8-bit gray without alpha.  */
+-	case 1:
+-	  for (i = 0; i < (data->image_width * data->image_height);
+-	       i++, d1 += 3, d2++)
+-	    {
+-	      d1[R3] = d2[0];
+-	      d1[G3] = d2[0];
+-	      d1[B3] = d2[0];
+-	    }
+-	  break;
+-	}
+-      return;
+-    }
+-
+     {
+   /* Only copy the upper 8 bit.  */
+ #ifndef GRUB_CPU_WORDS_BIGENDIAN
+-- 
+2.34.1
+
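A minimal standalone C sketch of the stride mismatch described in the commit message above (not GRUB code; the image size and variable names are invented for illustration): the loop stores 3 bytes per pixel but advances the destination by 4, so the arithmetic below shows how far the final write lands past a 3-bytes-per-pixel allocation.

/* Illustrative sketch of the stride bug behind CVE-2021-3695 (not GRUB
 * code): a 16-bit greyscale image is expanded into an RGB buffer that
 * holds 3 bytes per pixel, but the destination pointer is advanced by
 * 4 bytes per pixel, so part of every pixel after the first overruns
 * the allocation. */
#include <stdio.h>

int main(void)
{
    unsigned width = 64, height = 64;
    unsigned pixels = width * height;

    size_t alloc = (size_t) pixels * 3;                     /* bytes actually allocated   */
    size_t buggy_end = (size_t) (pixels - 1) * 4 + 3;       /* d1 += 4 each pixel, 3 stored */
    size_t fixed_end = (size_t) (pixels - 1) * 3 + 3;       /* correct 3-byte stride        */

    printf("allocated:           %zu bytes\n", alloc);
    printf("buggy last write:    %zu bytes (%zu past the end)\n",
           buggy_end, buggy_end > alloc ? buggy_end - alloc : (size_t) 0);
    printf("3-byte-stride write: %zu bytes (in bounds)\n", fixed_end);
    return 0;
}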
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2021-3696-video-readers-png-Avoid-heap-OOB-R-W-inserting-huff.patch b/poky/meta/recipes-bsp/grub/files/CVE-2021-3696-video-readers-png-Avoid-heap-OOB-R-W-inserting-huff.patch
new file mode 100644
index 0000000..f06514e
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2021-3696-video-readers-png-Avoid-heap-OOB-R-W-inserting-huff.patch
@@ -0,0 +1,50 @@
+From 210245129c932dc9e1c2748d9d35524fb95b5042 Mon Sep 17 00:00:00 2001
+From: Daniel Axtens <dja@axtens.net>
+Date: Tue, 6 Jul 2021 23:25:07 +1000
+Subject: [PATCH] video/readers/png: Avoid heap OOB R/W inserting huff table
+ items
+
+In fuzzing we observed crashes where a code would attempt to be inserted
+into a huffman table before the start, leading to a set of heap OOB reads
+and writes as table entries with negative indices were shifted around and
+the new code written in.
+
+Catch the case where we would underflow the array and bail.
+
+Fixes: CVE-2021-3696
+
+Signed-off-by: Daniel Axtens <dja@axtens.net>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+CVE: CVE-2021-3696
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=210245129c932dc9e1c2748d9d35524fb95b5042
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+---
+ grub-core/video/readers/png.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/grub-core/video/readers/png.c b/grub-core/video/readers/png.c
+index a3161e25b..d7ed5aa6c 100644
+--- a/grub-core/video/readers/png.c
++++ b/grub-core/video/readers/png.c
+@@ -438,6 +438,13 @@ grub_png_insert_huff_item (struct huff_table *ht, int code, int len)
+   for (i = len; i < ht->max_length; i++)
+     n += ht->maxval[i];
+ 
++  if (n > ht->num_values)
++    {
++      grub_error (GRUB_ERR_BAD_FILE_TYPE,
++		  "png: out of range inserting huffman table item");
++      return;
++    }
++
+   for (i = 0; i < n; i++)
+     ht->values[ht->num_values - i] = ht->values[ht->num_values - i - 1];
+ 
+-- 
+2.34.1
+
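A small self-contained C sketch of the bounds check the patch above adds (not GRUB code; the table layout and the insert_item() helper are simplified stand-ins): before the tail of the table is shifted to make room, the requested shift count is compared against the number of occupied entries so the loop can never index before the start of the array.

/* Illustrative sketch (not GRUB code) of the guard added for
 * CVE-2021-3696: refuse to shift table entries when the computed slot
 * would lie before the start of the occupied region. */
#include <stdio.h>

#define TABLE_CAP 16

struct table {
    int values[TABLE_CAP];
    int num_values;   /* entries currently in use */
};

/* Insert 'code' so that the last 'n' occupied slots move up by one.
 * Returns 0 on success, -1 if 'n' would underflow the array. */
static int insert_item(struct table *t, int code, int n)
{
    if (n > t->num_values || t->num_values >= TABLE_CAP)
        return -1;                        /* out of range: refuse to shift */

    for (int i = 0; i < n; i++)           /* shift the tail up by one */
        t->values[t->num_values - i] = t->values[t->num_values - i - 1];

    t->values[t->num_values - n] = code;  /* place the new item */
    t->num_values++;
    return 0;
}

int main(void)
{
    struct table t = { .values = {1, 2, 3}, .num_values = 3 };

    printf("insert in range:  %d\n", insert_item(&t, 9, 1));   /*  0: accepted */
    printf("insert underflow: %d\n", insert_item(&t, 9, 99));  /* -1: rejected */
    return 0;
}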
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2021-3697-video-readers-jpeg-Block-int-underflow-wild-pointer.patch b/poky/meta/recipes-bsp/grub/files/CVE-2021-3697-video-readers-jpeg-Block-int-underflow-wild-pointer.patch
new file mode 100644
index 0000000..e9fc52d
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2021-3697-video-readers-jpeg-Block-int-underflow-wild-pointer.patch
@@ -0,0 +1,84 @@
+From 22a3f97d39f6a10b08ad7fd1cc47c4dcd10413f6 Mon Sep 17 00:00:00 2001
+From: Daniel Axtens <dja@axtens.net>
+Date: Wed, 7 Jul 2021 15:38:19 +1000
+Subject: [PATCH] video/readers/jpeg: Block int underflow -> wild pointer write
+
+Certain 1 px wide images caused a wild pointer write in
+grub_jpeg_ycrcb_to_rgb(). This was caused because in grub_jpeg_decode_data(),
+we have the following loop:
+
+for (; data->r1 < nr1 && (!data->dri || rst);
+     data->r1++, data->bitmap_ptr += (vb * data->image_width - hb * nc1) * 3)
+
+We did not check if vb * width >= hb * nc1.
+
+On a 64-bit platform, if that turns out to be negative, it will underflow,
+be interpreted as unsigned 64-bit, then be added to the 64-bit pointer, so
+we see data->bitmap_ptr jump, e.g.:
+
+0x6180_0000_0480 to
+0x6181_0000_0498
+     ^
+     ~--- carry has occurred and this pointer is now far away from
+          any object.
+
+On a 32-bit platform, it will decrement the pointer, creating a pointer
+that won't crash but will overwrite random data.
+
+Catch the underflow and error out.
+
+Fixes: CVE-2021-3697
+
+Signed-off-by: Daniel Axtens <dja@axtens.net>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+CVE: CVE-2021-3697
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=22a3f97d39f6a10b08ad7fd1cc47c4dcd10413f6
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+---
+ grub-core/video/readers/jpeg.c | 10 +++++++++-
+ 1 file changed, 9 insertions(+), 1 deletion(-)
+
+diff --git a/grub-core/video/readers/jpeg.c b/grub-core/video/readers/jpeg.c
+index 579bbe8a4..09596fbf5 100644
+--- a/grub-core/video/readers/jpeg.c
++++ b/grub-core/video/readers/jpeg.c
+@@ -23,6 +23,7 @@
+ #include <grub/mm.h>
+ #include <grub/misc.h>
+ #include <grub/bufio.h>
++#include <grub/safemath.h>
+ 
+ GRUB_MOD_LICENSE ("GPLv3+");
+ 
+@@ -699,6 +700,7 @@ static grub_err_t
+ grub_jpeg_decode_data (struct grub_jpeg_data *data)
+ {
+   unsigned c1, vb, hb, nr1, nc1;
++  unsigned stride_a, stride_b, stride;
+   int rst = data->dri;
+   grub_err_t err = GRUB_ERR_NONE;
+ 
+@@ -711,8 +713,14 @@ grub_jpeg_decode_data (struct grub_jpeg_data *data)
+     return grub_error (GRUB_ERR_BAD_FILE_TYPE,
+ 		       "jpeg: attempted to decode data before start of stream");
+ 
++  if (grub_mul(vb, data->image_width, &stride_a) ||
++      grub_mul(hb, nc1, &stride_b) ||
++      grub_sub(stride_a, stride_b, &stride))
++    return grub_error (GRUB_ERR_BAD_FILE_TYPE,
++		       "jpeg: cannot decode image with these dimensions");
++
+   for (; data->r1 < nr1 && (!data->dri || rst);
+-       data->r1++, data->bitmap_ptr += (vb * data->image_width - hb * nc1) * 3)
++       data->r1++, data->bitmap_ptr += stride * 3)
+     for (c1 = 0;  c1 < nc1 && (!data->dri || rst);
+ 	c1++, rst--, data->bitmap_ptr += hb * 3)
+       {
+-- 
+2.34.1
+
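A hedged C sketch of the checked stride computation above. grub_mul() and grub_sub() are GRUB's safemath wrappers; this standalone version substitutes the GCC/Clang __builtin_*_overflow intrinsics, which is an assumption of the sketch rather than what GRUB itself uses, and safe_stride() is an invented name.

/* Illustrative sketch (not GRUB code) of the checked arithmetic added
 * for CVE-2021-3697: compute stride = vb * width - hb * nc1 and reject
 * the image if any step would wrap instead of letting the result feed
 * a wild pointer increment. */
#include <stdio.h>

static int safe_stride(unsigned vb, unsigned width,
                       unsigned hb, unsigned nc1, unsigned *stride)
{
    unsigned a, b;

    if (__builtin_mul_overflow(vb, width, &a) ||
        __builtin_mul_overflow(hb, nc1, &b) ||
        __builtin_sub_overflow(a, b, stride))
        return -1;   /* would wrap: refuse to decode */
    return 0;
}

int main(void)
{
    unsigned stride;

    /* Normal image: 8 * 640 - 16 * 40 = 4480. */
    if (safe_stride(8, 640, 16, 40, &stride) == 0)
        printf("ok, stride = %u\n", stride);

    /* Degenerate 1 px wide image: 8 * 1 - 16 * 1 underflows. */
    if (safe_stride(8, 1, 16, 1, &stride) != 0)
        printf("rejected: stride would underflow\n");
    return 0;
}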
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2022-28733-net-ip-Do-IP-fragment-maths-safely.patch b/poky/meta/recipes-bsp/grub/files/CVE-2022-28733-net-ip-Do-IP-fragment-maths-safely.patch
new file mode 100644
index 0000000..8bf9090
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2022-28733-net-ip-Do-IP-fragment-maths-safely.patch
@@ -0,0 +1,63 @@
+From 3e4817538de828319ba6d59ced2fbb9b5ca13287 Mon Sep 17 00:00:00 2001
+From: Daniel Axtens <dja@axtens.net>
+Date: Mon, 20 Dec 2021 19:41:21 +1100
+Subject: [PATCH] net/ip: Do IP fragment maths safely
+
+We can receive packets with invalid IP fragmentation information. This
+can lead to rsm->total_len underflowing and becoming very large.
+
+Then, in grub_netbuff_alloc(), we add to this very large number, which can
+cause it to overflow and wrap back around to a small positive number.
+The allocation then succeeds, but the resulting buffer is too small and
+subsequent operations can write past the end of the buffer.
+
+Catch the underflow here.
+
+Fixes: CVE-2022-28733
+
+Signed-off-by: Daniel Axtens <dja@axtens.net>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+CVE: CVE-2022-28733
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=3e4817538de828319ba6d59ced2fbb9b5ca13287
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+
+---
+ grub-core/net/ip.c | 10 +++++++++-
+ 1 file changed, 9 insertions(+), 1 deletion(-)
+
+diff --git a/grub-core/net/ip.c b/grub-core/net/ip.c
+index e3d62e97f..3c3d0be0e 100644
+--- a/grub-core/net/ip.c
++++ b/grub-core/net/ip.c
+@@ -25,6 +25,7 @@
+ #include <grub/net/netbuff.h>
+ #include <grub/mm.h>
+ #include <grub/priority_queue.h>
++#include <grub/safemath.h>
+ #include <grub/time.h>
+ 
+ struct iphdr {
+@@ -512,7 +513,14 @@ grub_net_recv_ip4_packets (struct grub_net_buff *nb,
+     {
+       rsm->total_len = (8 * (grub_be_to_cpu16 (iph->frags) & OFFSET_MASK)
+ 			+ (nb->tail - nb->data));
+-      rsm->total_len -= ((iph->verhdrlen & 0xf) * sizeof (grub_uint32_t));
++
++      if (grub_sub (rsm->total_len, (iph->verhdrlen & 0xf) * sizeof (grub_uint32_t),
++		    &rsm->total_len))
++	{
++	  grub_dprintf ("net", "IP reassembly size underflow\n");
++	  return GRUB_ERR_NONE;
++	}
++
+       rsm->asm_netbuff = grub_netbuff_alloc (rsm->total_len);
+       if (!rsm->asm_netbuff)
+ 	{
+-- 
+2.34.1
+
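A standalone C sketch of the underflow guard above (not GRUB code; reassembled_len() and its parameters are invented for illustration): the header length is only subtracted from the running total after a plain comparison confirms it fits, so a crafted fragment cannot wrap the unsigned length into a huge value that later feeds an undersized allocation.

/* Illustrative sketch (not GRUB code) of the reassembly-length check
 * added for CVE-2022-28733.  Returns 0 and stores the payload length,
 * or -1 on underflow. */
#include <stdio.h>

static int reassembled_len(unsigned frag_offset_units, unsigned frag_bytes,
                           unsigned header_words, unsigned *out)
{
    unsigned total = 8u * frag_offset_units + frag_bytes; /* bytes so far            */
    unsigned hdr   = header_words * 4u;                   /* IPv4 IHL is 32-bit words */

    if (hdr > total)
        return -1;            /* bogus fragment: reject instead of wrapping */

    *out = total - hdr;
    return 0;
}

int main(void)
{
    unsigned len;

    if (reassembled_len(185, 576, 5, &len) == 0)   /* a plausible fragment       */
        printf("payload: %u bytes\n", len);

    if (reassembled_len(0, 8, 15, &len) != 0)      /* header longer than the data */
        printf("rejected: length would underflow\n");
    return 0;
}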
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2022-28734-net-http-Error-out-on-headers-with-LF-without-CR.patch b/poky/meta/recipes-bsp/grub/files/CVE-2022-28734-net-http-Error-out-on-headers-with-LF-without-CR.patch
new file mode 100644
index 0000000..f31167d
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2022-28734-net-http-Error-out-on-headers-with-LF-without-CR.patch
@@ -0,0 +1,58 @@
+From b26b4c08e7119281ff30d0fb4a6169bd2afa8fe4 Mon Sep 17 00:00:00 2001
+From: Daniel Axtens <dja@axtens.net>
+Date: Tue, 8 Mar 2022 19:04:40 +1100
+Subject: [PATCH] net/http: Error out on headers with LF without CR
+
+In a similar vein to the previous patch, parse_line() would write
+a NUL byte past the end of the buffer if there was an HTTP header
+with a LF rather than a CRLF.
+
+RFC-2616 says:
+
+  Many HTTP/1.1 header field values consist of words separated by LWS
+  or special characters. These special characters MUST be in a quoted
+  string to be used within a parameter value (as defined in section 3.6).
+
+We don't support quoted sections or continuation lines, etc.
+
+If we see an LF that's not part of a CRLF, bail out.
+
+Fixes: CVE-2022-28734
+
+Signed-off-by: Daniel Axtens <dja@axtens.net>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+CVE: CVE-2022-28734
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=b26b4c08e7119281ff30d0fb4a6169bd2afa8fe4
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+---
+ grub-core/net/http.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/grub-core/net/http.c b/grub-core/net/http.c
+index 33a0a28c4..9291a13e2 100644
+--- a/grub-core/net/http.c
++++ b/grub-core/net/http.c
+@@ -68,7 +68,15 @@ parse_line (grub_file_t file, http_data_t data, char *ptr, grub_size_t len)
+   char *end = ptr + len;
+   while (end > ptr && *(end - 1) == '\r')
+     end--;
++
++  /* LF without CR. */
++  if (end == ptr + len)
++    {
++      data->errmsg = grub_strdup (_("invalid HTTP header - LF without CR"));
++      return GRUB_ERR_NONE;
++    }
+   *end = 0;
++
+   /* Trailing CRLF.  */
+   if (data->in_chunk_len == 1)
+     {
+-- 
+2.34.1
+
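A self-contained C sketch of the check above (not GRUB code; terminate_header_line() is an invented name): the line is only NUL-terminated after at least one trailing CR has been stripped, so a header ending in a bare LF is rejected rather than terminated one byte past the buffer the caller sized for a CRLF-terminated line.

/* Illustrative sketch (not GRUB code) of the LF-without-CR rejection
 * added for CVE-2022-28734.  'line' holds 'len' bytes of one header
 * line, without the final '\n'.  Returns 0 and terminates the line in
 * place, or -1 for LF without CR. */
#include <stdio.h>
#include <string.h>

static int terminate_header_line(char *line, size_t len)
{
    char *end = line + len;

    while (end > line && *(end - 1) == '\r')
        end--;

    if (end == line + len)
        return -1;          /* no CR was stripped: bare LF, refuse it */

    *end = '\0';            /* safe: we moved left of the original end */
    return 0;
}

int main(void)
{
    char good[] = "Content-Length: 42\r";   /* CRLF line, '\n' already consumed */
    char bad[]  = "Content-Length: 42";     /* bare LF line                     */

    if (terminate_header_line(good, strlen(good)) == 0)
        printf("accepted: \"%s\"\n", good);

    if (terminate_header_line(bad, strlen(bad)) != 0)
        printf("rejected: LF without CR\n");
    return 0;
}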
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2022-28734-net-http-Fix-OOB-write-for-split-http-headers.patch b/poky/meta/recipes-bsp/grub/files/CVE-2022-28734-net-http-Fix-OOB-write-for-split-http-headers.patch
new file mode 100644
index 0000000..e0ca1ee
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2022-28734-net-http-Fix-OOB-write-for-split-http-headers.patch
@@ -0,0 +1,56 @@
+From ec6bfd3237394c1c7dbf2fd73417173318d22f4b Mon Sep 17 00:00:00 2001
+From: Daniel Axtens <dja@axtens.net>
+Date: Tue, 8 Mar 2022 18:17:03 +1100
+Subject: [PATCH] net/http: Fix OOB write for split http headers
+
+GRUB has special code for handling an http header that is split
+across two packets.
+
+The code tracks the end of line by looking for a "\n" byte. The
+code for split headers has always advanced the pointer just past the
+end of the line, whereas the code that handles unsplit headers does
+not advance the pointer. This extra advance causes the length to be
+one greater, which breaks an assumption in parse_line(), leading to
+it writing a NUL byte one byte past the end of the buffer where we
+reconstruct the line from the two packets.
+
+It's conceivable that an attacker controlled set of packets could
+cause this to zero out the first byte of the "next" pointer of the
+grub_mm_region structure following the current_line buffer.
+
+Do not advance the pointer in the split header case.
+
+Fixes: CVE-2022-28734
+
+Signed-off-by: Daniel Axtens <dja@axtens.net>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+CVE: CVE-2022-28734
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=ec6bfd3237394c1c7dbf2fd73417173318d22f4b
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+---
+ grub-core/net/http.c | 4 +---
+ 1 file changed, 1 insertion(+), 3 deletions(-)
+
+diff --git a/grub-core/net/http.c b/grub-core/net/http.c
+index f8d7bf0cd..33a0a28c4 100644
+--- a/grub-core/net/http.c
++++ b/grub-core/net/http.c
+@@ -190,9 +190,7 @@ http_receive (grub_net_tcp_socket_t sock __attribute__ ((unused)),
+ 	  int have_line = 1;
+ 	  char *t;
+ 	  ptr = grub_memchr (nb->data, '\n', nb->tail - nb->data);
+-	  if (ptr)
+-	    ptr++;
+-	  else
++	  if (ptr == NULL)
+ 	    {
+ 	      have_line = 0;
+ 	      ptr = (char *) nb->tail;
+-- 
+2.34.1
+
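A short C sketch of the length asymmetry this patch removes (not GRUB code; the packet contents are made up): when the newline is located with memchr(), the split-header path must measure the line up to the '\n' rather than one byte past it, otherwise the reassembled line is one byte longer than the buffer that was sized for it.

/* Illustrative sketch (not GRUB code) of the off-by-one behind this
 * half of CVE-2022-28734: including the '\n' in the measured length
 * makes the copied line one byte too long. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char packet[] = "Content-Type: text/plain\nHTTP";
    const char *nl = memchr(packet, '\n', sizeof(packet) - 1);

    if (nl != NULL)
      {
        size_t wrong = (size_t) (nl + 1 - packet); /* old code: includes '\n' */
        size_t right = (size_t) (nl - packet);     /* fixed: line body only   */

        printf("bytes before the newline: %zu (buggy count was %zu)\n",
               right, wrong);
      }
    return 0;
}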
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2022-28735-kern-efi-sb-Reject-non-kernel-files-in-the-shim_lock.patch b/poky/meta/recipes-bsp/grub/files/CVE-2022-28735-kern-efi-sb-Reject-non-kernel-files-in-the-shim_lock.patch
new file mode 100644
index 0000000..7a59f10
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2022-28735-kern-efi-sb-Reject-non-kernel-files-in-the-shim_lock.patch
@@ -0,0 +1,111 @@
+From 6fe755c5c07bb386fda58306bfd19e4a1c974c53 Mon Sep 17 00:00:00 2001
+From: Julian Andres Klode <julian.klode@canonical.com>
+Date: Thu, 2 Dec 2021 15:03:53 +0100
+Subject: [PATCH] kern/efi/sb: Reject non-kernel files in the shim_lock
+ verifier
+
+We must not allow other verifiers to pass things like the GRUB modules.
+Instead of maintaining a blocklist, maintain an allowlist of things
+that we do not care about.
+
+This allowlist really should be made reusable, and shared by the
+lockdown verifier, but this is the minimal patch addressing
+security concerns where the TPM verifier was able to mark modules
+as verified (or the OpenPGP verifier for that matter), when it
+should not do so on shim-powered secure boot systems.
+
+Fixes: CVE-2022-28735
+
+Signed-off-by: Julian Andres Klode <julian.klode@canonical.com>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+CVE: CVE-2022-28735
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=6fe755c5c07bb386fda58306bfd19e4a1c974c53
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+---
+ grub-core/kern/efi/sb.c | 39 ++++++++++++++++++++++++++++++++++++---
+ include/grub/verify.h   |  1 +
+ 2 files changed, 37 insertions(+), 3 deletions(-)
+
+diff --git a/grub-core/kern/efi/sb.c b/grub-core/kern/efi/sb.c
+index c52ec6226..89c4bb3fd 100644
+--- a/grub-core/kern/efi/sb.c
++++ b/grub-core/kern/efi/sb.c
+@@ -119,10 +119,11 @@ shim_lock_verifier_init (grub_file_t io __attribute__ ((unused)),
+ 			 void **context __attribute__ ((unused)),
+ 			 enum grub_verify_flags *flags)
+ {
+-  *flags = GRUB_VERIFY_FLAGS_SKIP_VERIFICATION;
++  *flags = GRUB_VERIFY_FLAGS_NONE;
+ 
+   switch (type & GRUB_FILE_TYPE_MASK)
+     {
++    /* Files we check. */
+     case GRUB_FILE_TYPE_LINUX_KERNEL:
+     case GRUB_FILE_TYPE_MULTIBOOT_KERNEL:
+     case GRUB_FILE_TYPE_BSD_KERNEL:
+@@ -130,11 +131,43 @@ shim_lock_verifier_init (grub_file_t io __attribute__ ((unused)),
+     case GRUB_FILE_TYPE_PLAN9_KERNEL:
+     case GRUB_FILE_TYPE_EFI_CHAINLOADED_IMAGE:
+       *flags = GRUB_VERIFY_FLAGS_SINGLE_CHUNK;
++      return GRUB_ERR_NONE;
+ 
+-      /* Fall through. */
++    /* Files that do not affect secureboot state. */
++    case GRUB_FILE_TYPE_NONE:
++    case GRUB_FILE_TYPE_LOOPBACK:
++    case GRUB_FILE_TYPE_LINUX_INITRD:
++    case GRUB_FILE_TYPE_OPENBSD_RAMDISK:
++    case GRUB_FILE_TYPE_XNU_RAMDISK:
++    case GRUB_FILE_TYPE_SIGNATURE:
++    case GRUB_FILE_TYPE_PUBLIC_KEY:
++    case GRUB_FILE_TYPE_PUBLIC_KEY_TRUST:
++    case GRUB_FILE_TYPE_PRINT_BLOCKLIST:
++    case GRUB_FILE_TYPE_TESTLOAD:
++    case GRUB_FILE_TYPE_GET_SIZE:
++    case GRUB_FILE_TYPE_FONT:
++    case GRUB_FILE_TYPE_ZFS_ENCRYPTION_KEY:
++    case GRUB_FILE_TYPE_CAT:
++    case GRUB_FILE_TYPE_HEXCAT:
++    case GRUB_FILE_TYPE_CMP:
++    case GRUB_FILE_TYPE_HASHLIST:
++    case GRUB_FILE_TYPE_TO_HASH:
++    case GRUB_FILE_TYPE_KEYBOARD_LAYOUT:
++    case GRUB_FILE_TYPE_PIXMAP:
++    case GRUB_FILE_TYPE_GRUB_MODULE_LIST:
++    case GRUB_FILE_TYPE_CONFIG:
++    case GRUB_FILE_TYPE_THEME:
++    case GRUB_FILE_TYPE_GETTEXT_CATALOG:
++    case GRUB_FILE_TYPE_FS_SEARCH:
++    case GRUB_FILE_TYPE_LOADENV:
++    case GRUB_FILE_TYPE_SAVEENV:
++    case GRUB_FILE_TYPE_VERIFY_SIGNATURE:
++      *flags = GRUB_VERIFY_FLAGS_SKIP_VERIFICATION;
++      return GRUB_ERR_NONE;
+ 
++    /* Other files. */
+     default:
+-      return GRUB_ERR_NONE;
++      return grub_error (GRUB_ERR_ACCESS_DENIED, N_("prohibited by secure boot policy"));
+     }
+ }
+ 
+diff --git a/include/grub/verify.h b/include/grub/verify.h
+index cd129c398..672ae1692 100644
+--- a/include/grub/verify.h
++++ b/include/grub/verify.h
+@@ -24,6 +24,7 @@
+ 
+ enum grub_verify_flags
+   {
++    GRUB_VERIFY_FLAGS_NONE		= 0,
+     GRUB_VERIFY_FLAGS_SKIP_VERIFICATION	= 1,
+     GRUB_VERIFY_FLAGS_SINGLE_CHUNK	= 2,
+     /* Defer verification to another authority. */
+-- 
+2.34.1
+
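A compact C sketch of the default-deny policy introduced above (not GRUB code; the enums and shim_lock_policy() are reduced stand-ins for GRUB's file types and verifier hook): kernel-like files are verified, explicitly harmless types are skipped, and anything not on the allowlist is refused instead of silently passing through.

/* Illustrative sketch (not GRUB code) of the allowlist / default-deny
 * pattern added for CVE-2022-28735. */
#include <stdio.h>

enum file_type { FT_LINUX_KERNEL, FT_CONFIG, FT_FONT, FT_GRUB_MODULE };
enum verdict   { VERIFY, SKIP, DENY };

static enum verdict shim_lock_policy(enum file_type type)
{
    switch (type)
      {
      case FT_LINUX_KERNEL:      /* files we check */
        return VERIFY;

      case FT_CONFIG:            /* files that do not affect secure boot */
      case FT_FONT:
        return SKIP;

      default:                   /* everything else: prohibited */
        return DENY;
      }
}

int main(void)
{
    static const char *names[] = { "VERIFY", "SKIP", "DENY" };

    printf("kernel:      %s\n", names[shim_lock_policy(FT_LINUX_KERNEL)]);
    printf("config file: %s\n", names[shim_lock_policy(FT_CONFIG)]);
    printf("GRUB module: %s\n", names[shim_lock_policy(FT_GRUB_MODULE)]);
    return 0;
}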
diff --git a/poky/meta/recipes-bsp/grub/files/video-Remove-trailing-whitespaces.patch b/poky/meta/recipes-bsp/grub/files/video-Remove-trailing-whitespaces.patch
new file mode 100644
index 0000000..2db9bcb
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/video-Remove-trailing-whitespaces.patch
@@ -0,0 +1,693 @@
+From 1f48917d8ddb490dcdc70176e0f58136b7f7811a Mon Sep 17 00:00:00 2001
+From: Elyes Haouas <ehaouas@noos.fr>
+Date: Fri, 4 Mar 2022 07:42:13 +0100
+Subject: [PATCH] video: Remove trailing whitespaces
+
+Signed-off-by: Elyes Haouas <ehaouas@noos.fr>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=1f48917d8ddb490dcdc70176e0f58136b7f7811a
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+---
+ grub-core/video/bochs.c             |  2 +-
+ grub-core/video/capture.c           |  2 +-
+ grub-core/video/cirrus.c            |  4 ++--
+ grub-core/video/coreboot/cbfb.c     |  2 +-
+ grub-core/video/efi_gop.c           | 22 +++++++++----------
+ grub-core/video/fb/fbblit.c         |  8 +++----
+ grub-core/video/fb/video_fb.c       | 10 ++++-----
+ grub-core/video/i386/pc/vbe.c       | 34 ++++++++++++++---------------
+ grub-core/video/i386/pc/vga.c       |  6 ++---
+ grub-core/video/ieee1275.c          |  4 ++--
+ grub-core/video/radeon_fuloong2e.c  |  6 ++---
+ grub-core/video/radeon_yeeloong3a.c |  6 ++---
+ grub-core/video/readers/png.c       |  2 +-
+ grub-core/video/readers/tga.c       |  2 +-
+ grub-core/video/sis315_init.c       |  2 +-
+ grub-core/video/sis315pro.c         |  8 +++----
+ grub-core/video/sm712.c             | 10 ++++-----
+ grub-core/video/video.c             |  8 +++----
+ 18 files changed, 69 insertions(+), 69 deletions(-)
+
+diff --git a/grub-core/video/bochs.c b/grub-core/video/bochs.c
+index 30ea1bd82..edc651697 100644
+--- a/grub-core/video/bochs.c
++++ b/grub-core/video/bochs.c
+@@ -212,7 +212,7 @@ find_card (grub_pci_device_t dev, grub_pci_id_t pciid, void *data)
+ 
+   if (((class >> 16) & 0xffff) != 0x0300 || pciid != 0x11111234)
+     return 0;
+-  
++
+   addr = grub_pci_make_address (dev, GRUB_PCI_REG_ADDRESS_REG0);
+   framebuffer.base = grub_pci_read (addr) & GRUB_PCI_ADDR_MEM_MASK;
+   if (!framebuffer.base)
+diff --git a/grub-core/video/capture.c b/grub-core/video/capture.c
+index 4d3195e01..c653d89f9 100644
+--- a/grub-core/video/capture.c
++++ b/grub-core/video/capture.c
+@@ -92,7 +92,7 @@ grub_video_capture_start (const struct grub_video_mode_info *mode_info,
+   framebuffer.ptr = grub_calloc (framebuffer.mode_info.height, framebuffer.mode_info.pitch);
+   if (!framebuffer.ptr)
+     return grub_errno;
+-  
++
+   err = grub_video_fb_create_render_target_from_pointer (&framebuffer.render_target,
+ 							 &framebuffer.mode_info,
+ 							 framebuffer.ptr);
+diff --git a/grub-core/video/cirrus.c b/grub-core/video/cirrus.c
+index e2149e8ce..f5542ccdc 100644
+--- a/grub-core/video/cirrus.c
++++ b/grub-core/video/cirrus.c
+@@ -354,11 +354,11 @@ grub_video_cirrus_setup (unsigned int width, unsigned int height,
+     grub_uint8_t sr_ext = 0, hidden_dac = 0;
+ 
+     grub_vga_set_geometry (&config, grub_vga_cr_write);
+-    
++
+     grub_vga_gr_write (GRUB_VGA_GR_MODE_256_COLOR | GRUB_VGA_GR_MODE_READ_MODE1,
+ 		       GRUB_VGA_GR_MODE);
+     grub_vga_gr_write (GRUB_VGA_GR_GR6_GRAPHICS_MODE, GRUB_VGA_GR_GR6);
+-    
++
+     grub_vga_sr_write (GRUB_VGA_SR_MEMORY_MODE_NORMAL, GRUB_VGA_SR_MEMORY_MODE);
+ 
+     grub_vga_cr_write ((config.pitch >> CIRRUS_CR_EXTENDED_DISPLAY_PITCH_SHIFT)
+diff --git a/grub-core/video/coreboot/cbfb.c b/grub-core/video/coreboot/cbfb.c
+index 9af81fa5b..986003c51 100644
+--- a/grub-core/video/coreboot/cbfb.c
++++ b/grub-core/video/coreboot/cbfb.c
+@@ -106,7 +106,7 @@ grub_video_cbfb_setup (unsigned int width, unsigned int height,
+ 
+   grub_video_fb_set_palette (0, GRUB_VIDEO_FBSTD_NUMCOLORS,
+ 			     grub_video_fbstd_colors);
+-    
++
+   return err;
+ }
+ 
+diff --git a/grub-core/video/efi_gop.c b/grub-core/video/efi_gop.c
+index b7590dc6c..7a5054631 100644
+--- a/grub-core/video/efi_gop.c
++++ b/grub-core/video/efi_gop.c
+@@ -273,7 +273,7 @@ grub_video_gop_iterate (int (*hook) (const struct grub_video_mode_info *info, vo
+       grub_efi_status_t status;
+       struct grub_efi_gop_mode_info *info = NULL;
+       struct grub_video_mode_info mode_info;
+-	 
++
+       status = efi_call_4 (gop->query_mode, gop, mode, &size, &info);
+ 
+       if (status)
+@@ -390,7 +390,7 @@ grub_video_gop_setup (unsigned int width, unsigned int height,
+ 	  found = 1;
+ 	}
+     }
+- 
++
+   if (!found)
+     {
+       unsigned mode;
+@@ -399,7 +399,7 @@ grub_video_gop_setup (unsigned int width, unsigned int height,
+ 	{
+ 	  grub_efi_uintn_t size;
+ 	  grub_efi_status_t status;
+-	 
++
+ 	  status = efi_call_4 (gop->query_mode, gop, mode, &size, &info);
+ 	  if (status)
+ 	    {
+@@ -472,11 +472,11 @@ grub_video_gop_setup (unsigned int width, unsigned int height,
+   framebuffer.ptr = (void *) (grub_addr_t) gop->mode->fb_base;
+   framebuffer.offscreen
+     = grub_malloc (framebuffer.mode_info.height
+-		   * framebuffer.mode_info.width 
++		   * framebuffer.mode_info.width
+ 		   * sizeof (struct grub_efi_gop_blt_pixel));
+ 
+   buffer = framebuffer.offscreen;
+-      
++
+   if (!buffer)
+     {
+       grub_dprintf ("video", "GOP: couldn't allocate shadow\n");
+@@ -485,11 +485,11 @@ grub_video_gop_setup (unsigned int width, unsigned int height,
+ 				     &framebuffer.mode_info);
+       buffer = framebuffer.ptr;
+     }
+-    
++
+   grub_dprintf ("video", "GOP: initialising FB @ %p %dx%dx%d\n",
+ 		framebuffer.ptr, framebuffer.mode_info.width,
+ 		framebuffer.mode_info.height, framebuffer.mode_info.bpp);
+- 
++
+   err = grub_video_fb_create_render_target_from_pointer
+     (&framebuffer.render_target, &framebuffer.mode_info, buffer);
+ 
+@@ -498,15 +498,15 @@ grub_video_gop_setup (unsigned int width, unsigned int height,
+       grub_dprintf ("video", "GOP: Couldn't create FB target\n");
+       return err;
+     }
+- 
++
+   err = grub_video_fb_set_active_render_target (framebuffer.render_target);
+- 
++
+   if (err)
+     {
+       grub_dprintf ("video", "GOP: Couldn't set FB target\n");
+       return err;
+     }
+- 
++
+   err = grub_video_fb_set_palette (0, GRUB_VIDEO_FBSTD_NUMCOLORS,
+ 				   grub_video_fbstd_colors);
+ 
+@@ -514,7 +514,7 @@ grub_video_gop_setup (unsigned int width, unsigned int height,
+     grub_dprintf ("video", "GOP: Couldn't set palette\n");
+   else
+     grub_dprintf ("video", "GOP: Success\n");
+- 
++
+   return err;
+ }
+ 
+diff --git a/grub-core/video/fb/fbblit.c b/grub-core/video/fb/fbblit.c
+index d55924837..1010ef393 100644
+--- a/grub-core/video/fb/fbblit.c
++++ b/grub-core/video/fb/fbblit.c
+@@ -466,7 +466,7 @@ grub_video_fbblit_replace_24bit_indexa (struct grub_video_fbblit_info *dst,
+       for (i = 0; i < width; i++)
+         {
+ 	  register grub_uint32_t col;
+-	  if (*srcptr == 0xf0)	      
++	  if (*srcptr == 0xf0)
+ 	    col = palette[16];
+ 	  else
+ 	    col = palette[*srcptr & 0xf];
+@@ -478,7 +478,7 @@ grub_video_fbblit_replace_24bit_indexa (struct grub_video_fbblit_info *dst,
+ 	  *dstptr++ = col >> 0;
+ 	  *dstptr++ = col >> 8;
+ 	  *dstptr++ = col >> 16;
+-#endif	  
++#endif
+ 	  srcptr++;
+         }
+ 
+@@ -651,7 +651,7 @@ grub_video_fbblit_blend_24bit_indexa (struct grub_video_fbblit_info *dst,
+       for (i = 0; i < width; i++)
+         {
+ 	  register grub_uint32_t col;
+-	  if (*srcptr != 0xf0)	      
++	  if (*srcptr != 0xf0)
+ 	    {
+ 	      col = palette[*srcptr & 0xf];
+ #ifdef GRUB_CPU_WORDS_BIGENDIAN
+@@ -662,7 +662,7 @@ grub_video_fbblit_blend_24bit_indexa (struct grub_video_fbblit_info *dst,
+ 	      *dstptr++ = col >> 0;
+ 	      *dstptr++ = col >> 8;
+ 	      *dstptr++ = col >> 16;
+-#endif	  
++#endif
+ 	    }
+ 	  else
+ 	    dstptr += 3;
+diff --git a/grub-core/video/fb/video_fb.c b/grub-core/video/fb/video_fb.c
+index ae6b89f9a..fa4ebde26 100644
+--- a/grub-core/video/fb/video_fb.c
++++ b/grub-core/video/fb/video_fb.c
+@@ -754,7 +754,7 @@ grub_video_fb_unmap_color_int (struct grub_video_fbblit_info * source,
+           *alpha = 0;
+           return;
+         }
+-	
++
+       /* If we have an out-of-bounds color, return transparent black.  */
+       if (color > 255)
+         {
+@@ -1141,7 +1141,7 @@ grub_video_fb_scroll (grub_video_color_t color, int dx, int dy)
+       /* If everything is aligned on 32-bit use 32-bit copy.  */
+       if ((grub_addr_t) grub_video_fb_get_video_ptr (&target, src_x, src_y)
+ 	  % sizeof (grub_uint32_t) == 0
+-	  && (grub_addr_t) grub_video_fb_get_video_ptr (&target, dst_x, dst_y) 
++	  && (grub_addr_t) grub_video_fb_get_video_ptr (&target, dst_x, dst_y)
+ 	  % sizeof (grub_uint32_t) == 0
+ 	  && linelen % sizeof (grub_uint32_t) == 0
+ 	  && linedelta % sizeof (grub_uint32_t) == 0)
+@@ -1155,7 +1155,7 @@ grub_video_fb_scroll (grub_video_color_t color, int dx, int dy)
+       else if ((grub_addr_t) grub_video_fb_get_video_ptr (&target, src_x, src_y)
+ 	       % sizeof (grub_uint16_t) == 0
+ 	       && (grub_addr_t) grub_video_fb_get_video_ptr (&target,
+-							     dst_x, dst_y) 
++							     dst_x, dst_y)
+ 	       % sizeof (grub_uint16_t) == 0
+ 	       && linelen % sizeof (grub_uint16_t) == 0
+ 	       && linedelta % sizeof (grub_uint16_t) == 0)
+@@ -1170,7 +1170,7 @@ grub_video_fb_scroll (grub_video_color_t color, int dx, int dy)
+ 	{
+ 	  grub_uint8_t *src, *dst;
+ 	  DO_SCROLL
+-	}	
++	}
+     }
+ 
+   /* 4. Fill empty space with specified color.  In this implementation
+@@ -1615,7 +1615,7 @@ grub_video_fb_setup (unsigned int mode_type, unsigned int mode_mask,
+ 	  framebuffer.render_target = framebuffer.back_target;
+ 	  return GRUB_ERR_NONE;
+ 	}
+-      
++
+       mode_info->mode_type &= ~(GRUB_VIDEO_MODE_TYPE_DOUBLE_BUFFERED
+ 				| GRUB_VIDEO_MODE_TYPE_UPDATING_SWAP);
+ 
+diff --git a/grub-core/video/i386/pc/vbe.c b/grub-core/video/i386/pc/vbe.c
+index b7f911926..0e65b5206 100644
+--- a/grub-core/video/i386/pc/vbe.c
++++ b/grub-core/video/i386/pc/vbe.c
+@@ -219,7 +219,7 @@ grub_vbe_disable_mtrr (int mtrr)
+ }
+ 
+ /* Call VESA BIOS 0x4f09 to set palette data, return status.  */
+-static grub_vbe_status_t 
++static grub_vbe_status_t
+ grub_vbe_bios_set_palette_data (grub_uint32_t color_count,
+ 				grub_uint32_t start_index,
+ 				struct grub_vbe_palette_data *palette_data)
+@@ -237,7 +237,7 @@ grub_vbe_bios_set_palette_data (grub_uint32_t color_count,
+ }
+ 
+ /* Call VESA BIOS 0x4f00 to get VBE Controller Information, return status.  */
+-grub_vbe_status_t 
++grub_vbe_status_t
+ grub_vbe_bios_get_controller_info (struct grub_vbe_info_block *ci)
+ {
+   struct grub_bios_int_registers regs;
+@@ -251,7 +251,7 @@ grub_vbe_bios_get_controller_info (struct grub_vbe_info_block *ci)
+ }
+ 
+ /* Call VESA BIOS 0x4f01 to get VBE Mode Information, return status.  */
+-grub_vbe_status_t 
++grub_vbe_status_t
+ grub_vbe_bios_get_mode_info (grub_uint32_t mode,
+ 			     struct grub_vbe_mode_info_block *mode_info)
+ {
+@@ -285,7 +285,7 @@ grub_vbe_bios_set_mode (grub_uint32_t mode,
+ }
+ 
+ /* Call VESA BIOS 0x4f03 to return current VBE Mode, return status.  */
+-grub_vbe_status_t 
++grub_vbe_status_t
+ grub_vbe_bios_get_mode (grub_uint32_t *mode)
+ {
+   struct grub_bios_int_registers regs;
+@@ -298,7 +298,7 @@ grub_vbe_bios_get_mode (grub_uint32_t *mode)
+   return regs.eax & 0xffff;
+ }
+ 
+-grub_vbe_status_t 
++grub_vbe_status_t
+ grub_vbe_bios_getset_dac_palette_width (int set, int *dac_mask_size)
+ {
+   struct grub_bios_int_registers regs;
+@@ -346,7 +346,7 @@ grub_vbe_bios_get_memory_window (grub_uint32_t window,
+ }
+ 
+ /* Call VESA BIOS 0x4f06 to set scanline length (in bytes), return status.  */
+-grub_vbe_status_t 
++grub_vbe_status_t
+ grub_vbe_bios_set_scanline_length (grub_uint32_t length)
+ {
+   struct grub_bios_int_registers regs;
+@@ -354,14 +354,14 @@ grub_vbe_bios_set_scanline_length (grub_uint32_t length)
+   regs.ecx = length;
+   regs.eax = 0x4f06;
+   /* BL = 2, Set Scan Line in Bytes.  */
+-  regs.ebx = 0x0002;	
++  regs.ebx = 0x0002;
+   regs.flags = GRUB_CPU_INT_FLAGS_DEFAULT;
+   grub_bios_interrupt (0x10, &regs);
+   return regs.eax & 0xffff;
+ }
+ 
+ /* Call VESA BIOS 0x4f06 to return scanline length (in bytes), return status.  */
+-grub_vbe_status_t 
++grub_vbe_status_t
+ grub_vbe_bios_get_scanline_length (grub_uint32_t *length)
+ {
+   struct grub_bios_int_registers regs;
+@@ -377,7 +377,7 @@ grub_vbe_bios_get_scanline_length (grub_uint32_t *length)
+ }
+ 
+ /* Call VESA BIOS 0x4f07 to set display start, return status.  */
+-static grub_vbe_status_t 
++static grub_vbe_status_t
+ grub_vbe_bios_set_display_start (grub_uint32_t x, grub_uint32_t y)
+ {
+   struct grub_bios_int_registers regs;
+@@ -390,7 +390,7 @@ grub_vbe_bios_set_display_start (grub_uint32_t x, grub_uint32_t y)
+   regs.edx = y;
+   regs.eax = 0x4f07;
+   /* BL = 80h, Set Display Start during Vertical Retrace.  */
+-  regs.ebx = 0x0080;	
++  regs.ebx = 0x0080;
+   regs.flags = GRUB_CPU_INT_FLAGS_DEFAULT;
+   grub_bios_interrupt (0x10, &regs);
+ 
+@@ -401,7 +401,7 @@ grub_vbe_bios_set_display_start (grub_uint32_t x, grub_uint32_t y)
+ }
+ 
+ /* Call VESA BIOS 0x4f07 to get display start, return status.  */
+-grub_vbe_status_t 
++grub_vbe_status_t
+ grub_vbe_bios_get_display_start (grub_uint32_t *x,
+ 				 grub_uint32_t *y)
+ {
+@@ -419,7 +419,7 @@ grub_vbe_bios_get_display_start (grub_uint32_t *x,
+ }
+ 
+ /* Call VESA BIOS 0x4f0a.  */
+-grub_vbe_status_t 
++grub_vbe_status_t
+ grub_vbe_bios_get_pm_interface (grub_uint16_t *segment, grub_uint16_t *offset,
+ 				grub_uint16_t *length)
+ {
+@@ -896,7 +896,7 @@ vbe2videoinfo (grub_uint32_t mode,
+     case GRUB_VBE_MEMORY_MODEL_YUV:
+       mode_info->mode_type |= GRUB_VIDEO_MODE_TYPE_YUV;
+       break;
+-      
++
+     case GRUB_VBE_MEMORY_MODEL_DIRECT_COLOR:
+       mode_info->mode_type |= GRUB_VIDEO_MODE_TYPE_RGB;
+       break;
+@@ -923,10 +923,10 @@ vbe2videoinfo (grub_uint32_t mode,
+       break;
+     case 8:
+       mode_info->bytes_per_pixel = 1;
+-      break;  
++      break;
+     case 4:
+       mode_info->bytes_per_pixel = 0;
+-      break;  
++      break;
+     }
+ 
+   if (controller_info.version >= 0x300)
+@@ -976,7 +976,7 @@ grub_video_vbe_iterate (int (*hook) (const struct grub_video_mode_info *info, vo
+ 
+ static grub_err_t
+ grub_video_vbe_setup (unsigned int width, unsigned int height,
+-                      grub_video_mode_type_t mode_type, 
++                      grub_video_mode_type_t mode_type,
+ 		      grub_video_mode_type_t mode_mask)
+ {
+   grub_uint16_t *p;
+@@ -1193,7 +1193,7 @@ grub_video_vbe_print_adapter_specific_info (void)
+ 		controller_info.version & 0xFF,
+ 		controller_info.oem_software_rev >> 8,
+ 		controller_info.oem_software_rev & 0xFF);
+-  
++
+   /* The total_memory field is in 64 KiB units.  */
+   grub_printf_ (N_("              total memory: %d KiB\n"),
+ 		(controller_info.total_memory << 6));
+diff --git a/grub-core/video/i386/pc/vga.c b/grub-core/video/i386/pc/vga.c
+index b2f776c99..50d0b5e02 100644
+--- a/grub-core/video/i386/pc/vga.c
++++ b/grub-core/video/i386/pc/vga.c
+@@ -48,7 +48,7 @@ static struct
+   int back_page;
+ } framebuffer;
+ 
+-static unsigned char 
++static unsigned char
+ grub_vga_set_mode (unsigned char mode)
+ {
+   struct grub_bios_int_registers regs;
+@@ -182,10 +182,10 @@ grub_video_vga_setup (unsigned int width, unsigned int height,
+ 
+   is_target = 1;
+   err = grub_video_fb_set_active_render_target (framebuffer.render_target);
+- 
++
+   if (err)
+     return err;
+- 
++
+   err = grub_video_fb_set_palette (0, GRUB_VIDEO_FBSTD_NUMCOLORS,
+ 				   grub_video_fbstd_colors);
+ 
+diff --git a/grub-core/video/ieee1275.c b/grub-core/video/ieee1275.c
+index f437fb0df..ca3d3c3b2 100644
+--- a/grub-core/video/ieee1275.c
++++ b/grub-core/video/ieee1275.c
+@@ -233,7 +233,7 @@ grub_video_ieee1275_setup (unsigned int width, unsigned int height,
+       /* TODO. */
+       return grub_error (GRUB_ERR_IO, "can't set mode %dx%d", width, height);
+     }
+-  
++
+   err = grub_video_ieee1275_fill_mode_info (dev, &framebuffer.mode_info);
+   if (err)
+     {
+@@ -260,7 +260,7 @@ grub_video_ieee1275_setup (unsigned int width, unsigned int height,
+ 
+   grub_video_ieee1275_set_palette (0, framebuffer.mode_info.number_of_colors,
+ 				   grub_video_fbstd_colors);
+-    
++
+   return err;
+ }
+ 
+diff --git a/grub-core/video/radeon_fuloong2e.c b/grub-core/video/radeon_fuloong2e.c
+index b4da34b5e..40917acb7 100644
+--- a/grub-core/video/radeon_fuloong2e.c
++++ b/grub-core/video/radeon_fuloong2e.c
+@@ -75,7 +75,7 @@ find_card (grub_pci_device_t dev, grub_pci_id_t pciid, void *data)
+   if (((class >> 16) & 0xffff) != GRUB_PCI_CLASS_SUBCLASS_VGA
+       || pciid != 0x515a1002)
+     return 0;
+-  
++
+   *found = 1;
+ 
+   addr = grub_pci_make_address (dev, GRUB_PCI_REG_ADDRESS_REG0);
+@@ -139,7 +139,7 @@ grub_video_radeon_fuloong2e_setup (unsigned int width, unsigned int height,
+   framebuffer.mapped = 1;
+ 
+   /* Prevent garbage from appearing on the screen.  */
+-  grub_memset (framebuffer.ptr, 0x55, 
++  grub_memset (framebuffer.ptr, 0x55,
+ 	       framebuffer.mode_info.height * framebuffer.mode_info.pitch);
+ 
+ #ifndef TEST
+@@ -152,7 +152,7 @@ grub_video_radeon_fuloong2e_setup (unsigned int width, unsigned int height,
+     return err;
+ 
+   err = grub_video_fb_set_active_render_target (framebuffer.render_target);
+-  
++
+   if (err)
+     return err;
+ 
+diff --git a/grub-core/video/radeon_yeeloong3a.c b/grub-core/video/radeon_yeeloong3a.c
+index 52614feb6..48631c181 100644
+--- a/grub-core/video/radeon_yeeloong3a.c
++++ b/grub-core/video/radeon_yeeloong3a.c
+@@ -74,7 +74,7 @@ find_card (grub_pci_device_t dev, grub_pci_id_t pciid, void *data)
+   if (((class >> 16) & 0xffff) != GRUB_PCI_CLASS_SUBCLASS_VGA
+       || pciid != 0x96151002)
+     return 0;
+-  
++
+   *found = 1;
+ 
+   addr = grub_pci_make_address (dev, GRUB_PCI_REG_ADDRESS_REG0);
+@@ -137,7 +137,7 @@ grub_video_radeon_yeeloong3a_setup (unsigned int width, unsigned int height,
+ #endif
+ 
+   /* Prevent garbage from appearing on the screen.  */
+-  grub_memset (framebuffer.ptr, 0, 
++  grub_memset (framebuffer.ptr, 0,
+ 	       framebuffer.mode_info.height * framebuffer.mode_info.pitch);
+ 
+ #ifndef TEST
+@@ -150,7 +150,7 @@ grub_video_radeon_yeeloong3a_setup (unsigned int width, unsigned int height,
+     return err;
+ 
+   err = grub_video_fb_set_active_render_target (framebuffer.render_target);
+-  
++
+   if (err)
+     return err;
+ 
+diff --git a/grub-core/video/readers/png.c b/grub-core/video/readers/png.c
+index 0157ff742..54dfedf43 100644
+--- a/grub-core/video/readers/png.c
++++ b/grub-core/video/readers/png.c
+@@ -916,7 +916,7 @@ grub_png_convert_image (struct grub_png_data *data)
+ 	}
+       return;
+     }
+-  
++
+   if (data->is_gray)
+     {
+       switch (data->bpp)
+diff --git a/grub-core/video/readers/tga.c b/grub-core/video/readers/tga.c
+index 7cb9d1d2a..a9ec3a1b6 100644
+--- a/grub-core/video/readers/tga.c
++++ b/grub-core/video/readers/tga.c
+@@ -127,7 +127,7 @@ tga_load_palette (struct tga_data *data)
+ 
+   if (len > sizeof (data->palette))
+     len = sizeof (data->palette);
+-  
++
+   if (grub_file_read (data->file, &data->palette, len)
+       != (grub_ssize_t) len)
+     return grub_errno;
+diff --git a/grub-core/video/sis315_init.c b/grub-core/video/sis315_init.c
+index ae5c1419c..09c3c7bbe 100644
+--- a/grub-core/video/sis315_init.c
++++ b/grub-core/video/sis315_init.c
+@@ -1,4 +1,4 @@
+-static const struct { grub_uint8_t reg; grub_uint8_t val; } sr_dump [] = 
++static const struct { grub_uint8_t reg; grub_uint8_t val; } sr_dump [] =
+ {
+   { 0x28, 0x81 },
+   { 0x2a, 0x00 },
+diff --git a/grub-core/video/sis315pro.c b/grub-core/video/sis315pro.c
+index 22a0c85a6..4d2f9999a 100644
+--- a/grub-core/video/sis315pro.c
++++ b/grub-core/video/sis315pro.c
+@@ -103,7 +103,7 @@ find_card (grub_pci_device_t dev, grub_pci_id_t pciid, void *data)
+   if (((class >> 16) & 0xffff) != GRUB_PCI_CLASS_SUBCLASS_VGA
+       || pciid != GRUB_SIS315PRO_PCIID)
+     return 0;
+-  
++
+   *found = 1;
+ 
+   addr = grub_pci_make_address (dev, GRUB_PCI_REG_ADDRESS_REG0);
+@@ -218,7 +218,7 @@ grub_video_sis315pro_setup (unsigned int width, unsigned int height,
+ 
+ #ifndef TEST
+   /* Prevent garbage from appearing on the screen.  */
+-  grub_memset (framebuffer.ptr, 0, 
++  grub_memset (framebuffer.ptr, 0,
+ 	       framebuffer.mode_info.height * framebuffer.mode_info.pitch);
+   grub_arch_sync_dma_caches (framebuffer.ptr,
+ 			     framebuffer.mode_info.height
+@@ -231,7 +231,7 @@ grub_video_sis315pro_setup (unsigned int width, unsigned int height,
+ 	     | GRUB_VGA_IO_MISC_EXTERNAL_CLOCK_0
+ 	     | GRUB_VGA_IO_MISC_28MHZ
+ 	     | GRUB_VGA_IO_MISC_ENABLE_VRAM_ACCESS
+-	     | GRUB_VGA_IO_MISC_COLOR, 
++	     | GRUB_VGA_IO_MISC_COLOR,
+ 	     GRUB_VGA_IO_MISC_WRITE + GRUB_MACHINE_PCI_IO_BASE);
+ 
+   grub_vga_sr_write (0x86, 5);
+@@ -335,7 +335,7 @@ grub_video_sis315pro_setup (unsigned int width, unsigned int height,
+   {
+     if (read_sis_cmd (0x5) != 0xa1)
+       write_sis_cmd (0x86, 0x5);
+-    
++
+     write_sis_cmd (read_sis_cmd (0x20) | 0xa1, 0x20);
+     write_sis_cmd (read_sis_cmd (0x1e) | 0xda, 0x1e);
+ 
+diff --git a/grub-core/video/sm712.c b/grub-core/video/sm712.c
+index 10c46eb65..65f59f84b 100644
+--- a/grub-core/video/sm712.c
++++ b/grub-core/video/sm712.c
+@@ -167,7 +167,7 @@ enum
+     GRUB_SM712_CR_SHADOW_VGA_VBLANK_START = 0x46,
+     GRUB_SM712_CR_SHADOW_VGA_VBLANK_END = 0x47,
+     GRUB_SM712_CR_SHADOW_VGA_VRETRACE_START = 0x48,
+-    GRUB_SM712_CR_SHADOW_VGA_VRETRACE_END = 0x49,    
++    GRUB_SM712_CR_SHADOW_VGA_VRETRACE_END = 0x49,
+     GRUB_SM712_CR_SHADOW_VGA_OVERFLOW = 0x4a,
+     GRUB_SM712_CR_SHADOW_VGA_CELL_HEIGHT = 0x4b,
+     GRUB_SM712_CR_SHADOW_VGA_HDISPLAY_END = 0x4c,
+@@ -375,7 +375,7 @@ find_card (grub_pci_device_t dev, grub_pci_id_t pciid, void *data)
+   if (((class >> 16) & 0xffff) != GRUB_PCI_CLASS_SUBCLASS_VGA
+       || pciid != GRUB_SM712_PCIID)
+     return 0;
+-  
++
+   *found = 1;
+ 
+   addr = grub_pci_make_address (dev, GRUB_PCI_REG_ADDRESS_REG0);
+@@ -471,7 +471,7 @@ grub_video_sm712_setup (unsigned int width, unsigned int height,
+ 
+ #if !defined (TEST) && !defined(GENINIT)
+   /* Prevent garbage from appearing on the screen.  */
+-  grub_memset ((void *) framebuffer.cached_ptr, 0, 
++  grub_memset ((void *) framebuffer.cached_ptr, 0,
+ 	       framebuffer.mode_info.height * framebuffer.mode_info.pitch);
+ #endif
+ 
+@@ -482,7 +482,7 @@ grub_video_sm712_setup (unsigned int width, unsigned int height,
+   grub_sm712_sr_write (0x2, 0x6b);
+   grub_sm712_write_reg (0, GRUB_VGA_IO_PIXEL_MASK);
+   grub_sm712_sr_write (GRUB_VGA_SR_RESET_ASYNC, GRUB_VGA_SR_RESET);
+-  grub_sm712_write_reg (GRUB_VGA_IO_MISC_NEGATIVE_VERT_POLARITY 
++  grub_sm712_write_reg (GRUB_VGA_IO_MISC_NEGATIVE_VERT_POLARITY
+ 			| GRUB_VGA_IO_MISC_NEGATIVE_HORIZ_POLARITY
+ 			| GRUB_VGA_IO_MISC_UPPER_64K
+ 			| GRUB_VGA_IO_MISC_EXTERNAL_CLOCK_0
+@@ -694,7 +694,7 @@ grub_video_sm712_setup (unsigned int width, unsigned int height,
+   for (i = 0; i < ARRAY_SIZE (dda_lookups); i++)
+     grub_sm712_write_dda_lookup (i, dda_lookups[i].compare, dda_lookups[i].dda,
+ 				 dda_lookups[i].vcentering);
+-  
++
+   /* Undocumented  */
+   grub_sm712_cr_write (0, 0x9c);
+   grub_sm712_cr_write (0, 0x9d);
+diff --git a/grub-core/video/video.c b/grub-core/video/video.c
+index 983424107..8937da745 100644
+--- a/grub-core/video/video.c
++++ b/grub-core/video/video.c
+@@ -491,13 +491,13 @@ parse_modespec (const char *current_mode, int *width, int *height, int *depth)
+ 		       current_mode);
+ 
+   param++;
+-  
++
+   *width = grub_strtoul (value, 0, 0);
+   if (grub_errno != GRUB_ERR_NONE)
+       return grub_error (GRUB_ERR_BAD_ARGUMENT,
+ 			 N_("invalid video mode specification `%s'"),
+ 			 current_mode);
+-  
++
+   /* Find height value.  */
+   value = param;
+   param = grub_strchr(param, 'x');
+@@ -513,13 +513,13 @@ parse_modespec (const char *current_mode, int *width, int *height, int *depth)
+     {
+       /* We have optional color depth value.  */
+       param++;
+-      
++
+       *height = grub_strtoul (value, 0, 0);
+       if (grub_errno != GRUB_ERR_NONE)
+ 	return grub_error (GRUB_ERR_BAD_ARGUMENT,
+ 			   N_("invalid video mode specification `%s'"),
+ 			   current_mode);
+-      
++
+       /* Convert color depth value.  */
+       value = param;
+       *depth = grub_strtoul (value, 0, 0);
+-- 
+2.34.1
+
diff --git a/poky/meta/recipes-bsp/grub/files/video-readers-jpeg-Abort-sooner-if-a-read-operation-.patch b/poky/meta/recipes-bsp/grub/files/video-readers-jpeg-Abort-sooner-if-a-read-operation-.patch
new file mode 100644
index 0000000..0c7deae
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/video-readers-jpeg-Abort-sooner-if-a-read-operation-.patch
@@ -0,0 +1,264 @@
+From d5caac8ab79d068ad9a41030c772d03a4d4fbd7b Mon Sep 17 00:00:00 2001
+From: Daniel Axtens <dja@axtens.net>
+Date: Mon, 28 Jun 2021 14:16:14 +1000
+Subject: [PATCH] video/readers/jpeg: Abort sooner if a read operation fails
+
+Fuzzing revealed some inputs that were taking a long time, potentially
+forever, because they did not bail quickly upon encountering an I/O error.
+
+Try to catch I/O errors sooner and bail out.
+
+Signed-off-by: Daniel Axtens <dja@axtens.net>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=d5caac8ab79d068ad9a41030c772d03a4d4fbd7b
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+---
+ grub-core/video/readers/jpeg.c | 86 +++++++++++++++++++++++++++-------
+ 1 file changed, 70 insertions(+), 16 deletions(-)
+
+diff --git a/grub-core/video/readers/jpeg.c b/grub-core/video/readers/jpeg.c
+index c47ffd651..806c56c78 100644
+--- a/grub-core/video/readers/jpeg.c
++++ b/grub-core/video/readers/jpeg.c
+@@ -109,9 +109,17 @@ static grub_uint8_t
+ grub_jpeg_get_byte (struct grub_jpeg_data *data)
+ {
+   grub_uint8_t r;
++  grub_ssize_t bytes_read;
+ 
+   r = 0;
+-  grub_file_read (data->file, &r, 1);
++  bytes_read = grub_file_read (data->file, &r, 1);
++
++  if (bytes_read != 1)
++    {
++      grub_error (GRUB_ERR_BAD_FILE_TYPE,
++		  "jpeg: unexpected end of data");
++      return 0;
++    }
+ 
+   return r;
+ }
+@@ -120,9 +128,17 @@ static grub_uint16_t
+ grub_jpeg_get_word (struct grub_jpeg_data *data)
+ {
+   grub_uint16_t r;
++  grub_ssize_t bytes_read;
+ 
+   r = 0;
+-  grub_file_read (data->file, &r, sizeof (grub_uint16_t));
++  bytes_read = grub_file_read (data->file, &r, sizeof (grub_uint16_t));
++
++  if (bytes_read != sizeof (grub_uint16_t))
++    {
++      grub_error (GRUB_ERR_BAD_FILE_TYPE,
++		  "jpeg: unexpected end of data");
++      return 0;
++    }
+ 
+   return grub_be_to_cpu16 (r);
+ }
+@@ -135,6 +151,11 @@ grub_jpeg_get_bit (struct grub_jpeg_data *data)
+   if (data->bit_mask == 0)
+     {
+       data->bit_save = grub_jpeg_get_byte (data);
++      if (grub_errno != GRUB_ERR_NONE) {
++	grub_error (GRUB_ERR_BAD_FILE_TYPE,
++		    "jpeg: file read error");
++	return 0;
++      }
+       if (data->bit_save == JPEG_ESC_CHAR)
+ 	{
+ 	  if (grub_jpeg_get_byte (data) != 0)
+@@ -143,6 +164,11 @@ grub_jpeg_get_bit (struct grub_jpeg_data *data)
+ 			  "jpeg: invalid 0xFF in data stream");
+ 	      return 0;
+ 	    }
++	  if (grub_errno != GRUB_ERR_NONE)
++	    {
++	      grub_error (GRUB_ERR_BAD_FILE_TYPE, "jpeg: file read error");
++	      return 0;
++	    }
+ 	}
+       data->bit_mask = 0x80;
+     }
+@@ -161,7 +187,7 @@ grub_jpeg_get_number (struct grub_jpeg_data *data, int num)
+     return 0;
+ 
+   msb = value = grub_jpeg_get_bit (data);
+-  for (i = 1; i < num; i++)
++  for (i = 1; i < num && grub_errno == GRUB_ERR_NONE; i++)
+     value = (value << 1) + (grub_jpeg_get_bit (data) != 0);
+   if (!msb)
+     value += 1 - (1 << num);
+@@ -208,6 +234,8 @@ grub_jpeg_decode_huff_table (struct grub_jpeg_data *data)
+   while (data->file->offset + sizeof (count) + 1 <= next_marker)
+     {
+       id = grub_jpeg_get_byte (data);
++      if (grub_errno != GRUB_ERR_NONE)
++	return grub_errno;
+       ac = (id >> 4) & 1;
+       id &= 0xF;
+       if (id > 1)
+@@ -258,6 +286,8 @@ grub_jpeg_decode_quan_table (struct grub_jpeg_data *data)
+ 
+   next_marker = data->file->offset;
+   next_marker += grub_jpeg_get_word (data);
++  if (grub_errno != GRUB_ERR_NONE)
++    return grub_errno;
+ 
+   if (next_marker > data->file->size)
+     {
+@@ -269,6 +299,8 @@ grub_jpeg_decode_quan_table (struct grub_jpeg_data *data)
+ 	 <= next_marker)
+     {
+       id = grub_jpeg_get_byte (data);
++      if (grub_errno != GRUB_ERR_NONE)
++        return grub_errno;
+       if (id >= 0x10)		/* Upper 4-bit is precision.  */
+ 	return grub_error (GRUB_ERR_BAD_FILE_TYPE,
+ 			   "jpeg: only 8-bit precision is supported");
+@@ -300,6 +332,9 @@ grub_jpeg_decode_sof (struct grub_jpeg_data *data)
+   next_marker = data->file->offset;
+   next_marker += grub_jpeg_get_word (data);
+ 
++  if (grub_errno != GRUB_ERR_NONE)
++    return grub_errno;
++
+   if (grub_jpeg_get_byte (data) != 8)
+     return grub_error (GRUB_ERR_BAD_FILE_TYPE,
+ 		       "jpeg: only 8-bit precision is supported");
+@@ -325,6 +360,8 @@ grub_jpeg_decode_sof (struct grub_jpeg_data *data)
+ 	return grub_error (GRUB_ERR_BAD_FILE_TYPE, "jpeg: invalid index");
+ 
+       ss = grub_jpeg_get_byte (data);	/* Sampling factor.  */
++      if (grub_errno != GRUB_ERR_NONE)
++	return grub_errno;
+       if (!id)
+ 	{
+ 	  grub_uint8_t vs, hs;
+@@ -504,7 +541,7 @@ grub_jpeg_idct_transform (jpeg_data_unit_t du)
+     }
+ }
+ 
+-static void
++static grub_err_t
+ grub_jpeg_decode_du (struct grub_jpeg_data *data, int id, jpeg_data_unit_t du)
+ {
+   int h1, h2, qt;
+@@ -519,6 +556,9 @@ grub_jpeg_decode_du (struct grub_jpeg_data *data, int id, jpeg_data_unit_t du)
+   data->dc_value[id] +=
+     grub_jpeg_get_number (data, grub_jpeg_get_huff_code (data, h1));
+ 
++  if (grub_errno != GRUB_ERR_NONE)
++    return grub_errno;
++
+   du[0] = data->dc_value[id] * (int) data->quan_table[qt][0];
+   pos = 1;
+   while (pos < ARRAY_SIZE (data->quan_table[qt]))
+@@ -533,11 +573,13 @@ grub_jpeg_decode_du (struct grub_jpeg_data *data, int id, jpeg_data_unit_t du)
+       num >>= 4;
+       pos += num;
+ 
++      if (grub_errno != GRUB_ERR_NONE)
++        return grub_errno;
++
+       if (pos >= ARRAY_SIZE (jpeg_zigzag_order))
+ 	{
+-	  grub_error (GRUB_ERR_BAD_FILE_TYPE,
+-		      "jpeg: invalid position in zigzag order!?");
+-	  return;
++	  return grub_error (GRUB_ERR_BAD_FILE_TYPE,
++			     "jpeg: invalid position in zigzag order!?");
+ 	}
+ 
+       du[jpeg_zigzag_order[pos]] = val * (int) data->quan_table[qt][pos];
+@@ -545,6 +587,7 @@ grub_jpeg_decode_du (struct grub_jpeg_data *data, int id, jpeg_data_unit_t du)
+     }
+ 
+   grub_jpeg_idct_transform (du);
++  return GRUB_ERR_NONE;
+ }
+ 
+ static void
+@@ -603,7 +646,8 @@ grub_jpeg_decode_sos (struct grub_jpeg_data *data)
+   data_offset += grub_jpeg_get_word (data);
+ 
+   cc = grub_jpeg_get_byte (data);
+-
++  if (grub_errno != GRUB_ERR_NONE)
++    return grub_errno;
+   if (cc != 3 && cc != 1)
+     return grub_error (GRUB_ERR_BAD_FILE_TYPE,
+ 		       "jpeg: component count must be 1 or 3");
+@@ -616,7 +660,8 @@ grub_jpeg_decode_sos (struct grub_jpeg_data *data)
+       id = grub_jpeg_get_byte (data) - 1;
+       if ((id < 0) || (id >= 3))
+ 	return grub_error (GRUB_ERR_BAD_FILE_TYPE, "jpeg: invalid index");
+-
++      if (grub_errno != GRUB_ERR_NONE)
++	return grub_errno;
+       ht = grub_jpeg_get_byte (data);
+       data->comp_index[id][1] = (ht >> 4);
+       data->comp_index[id][2] = (ht & 0xF) + 2;
+@@ -624,11 +669,14 @@ grub_jpeg_decode_sos (struct grub_jpeg_data *data)
+       if ((data->comp_index[id][1] < 0) || (data->comp_index[id][1] > 3) ||
+ 	  (data->comp_index[id][2] < 0) || (data->comp_index[id][2] > 3))
+ 	return grub_error (GRUB_ERR_BAD_FILE_TYPE, "jpeg: invalid hufftable index");
++      if (grub_errno != GRUB_ERR_NONE)
++	return grub_errno;
+     }
+ 
+   grub_jpeg_get_byte (data);	/* Skip 3 unused bytes.  */
+   grub_jpeg_get_word (data);
+-
++  if (grub_errno != GRUB_ERR_NONE)
++    return grub_errno;
+   if (data->file->offset != data_offset)
+     return grub_error (GRUB_ERR_BAD_FILE_TYPE, "jpeg: extra byte in sos");
+ 
+@@ -646,6 +694,7 @@ grub_jpeg_decode_data (struct grub_jpeg_data *data)
+ {
+   unsigned c1, vb, hb, nr1, nc1;
+   int rst = data->dri;
++  grub_err_t err = GRUB_ERR_NONE;
+ 
+   vb = 8 << data->log_vs;
+   hb = 8 << data->log_hs;
+@@ -666,17 +715,22 @@ grub_jpeg_decode_data (struct grub_jpeg_data *data)
+ 
+ 	for (r2 = 0; r2 < (1U << data->log_vs); r2++)
+ 	  for (c2 = 0; c2 < (1U << data->log_hs); c2++)
+-	    grub_jpeg_decode_du (data, 0, data->ydu[r2 * 2 + c2]);
++            {
++              err = grub_jpeg_decode_du (data, 0, data->ydu[r2 * 2 + c2]);
++              if (err != GRUB_ERR_NONE)
++                return err;
++            }
+ 
+ 	if (data->color_components >= 3)
+ 	  {
+-	    grub_jpeg_decode_du (data, 1, data->cbdu);
+-	    grub_jpeg_decode_du (data, 2, data->crdu);
++	    err = grub_jpeg_decode_du (data, 1, data->cbdu);
++	    if (err != GRUB_ERR_NONE)
++	      return err;
++	    err = grub_jpeg_decode_du (data, 2, data->crdu);
++	    if (err != GRUB_ERR_NONE)
++	      return err;
+ 	  }
+ 
+-	if (grub_errno)
+-	  return grub_errno;
+-
+ 	nr2 = (data->r1 == nr1 - 1) ? (data->image_height - data->r1 * vb) : vb;
+ 	nc2 = (c1 == nc1 - 1) ? (data->image_width - c1 * hb) : hb;
+ 
+-- 
+2.34.1
+
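
Every hunk in the patch above applies the same defensive pattern: verify that the read returned exactly the requested number of bytes, latch an error, and hand back a harmless zero so callers can bail out via grub_errno. A minimal standalone C sketch of that pattern, using plain stdio rather than the GRUB file API:

    #include <stdio.h>

    /* Return the next byte, or 0 with *err set when the stream ends early.
     * This mirrors the checked grub_file_read() calls added by the patch. */
    static unsigned char get_byte(FILE *f, int *err)
    {
            unsigned char b = 0;

            if (fread(&b, 1, 1, f) != 1) {
                    *err = 1;   /* caller must test *err before trusting b */
                    return 0;
            }
            return b;
    }

Callers then test the error flag after every read, just as the decoder now checks grub_errno before using a value.
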
diff --git a/poky/meta/recipes-bsp/grub/files/video-readers-jpeg-Refuse-to-handle-multiple-start-o.patch b/poky/meta/recipes-bsp/grub/files/video-readers-jpeg-Refuse-to-handle-multiple-start-o.patch
new file mode 100644
index 0000000..91ecaad
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/video-readers-jpeg-Refuse-to-handle-multiple-start-o.patch
@@ -0,0 +1,53 @@
+From 166a4d61448f74745afe1dac2f2cfb85d04909bf Mon Sep 17 00:00:00 2001
+From: Daniel Axtens <dja@axtens.net>
+Date: Mon, 28 Jun 2021 14:25:17 +1000
+Subject: [PATCH] video/readers/jpeg: Refuse to handle multiple start of
+ streams
+
+An invalid file could contain multiple start of stream blocks, which
+would cause us to reallocate and leak our bitmap. Refuse to handle
+multiple start of streams.
+
+Additionally, fix a grub_error() call formatting.
+
+Signed-off-by: Daniel Axtens <dja@axtens.net>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport
+
+Reference to upstream patch:
+https://git.savannah.gnu.org/cgit/grub.git/commit/?id=166a4d61448f74745afe1dac2f2cfb85d04909bf
+
+Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
+---
+ grub-core/video/readers/jpeg.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/grub-core/video/readers/jpeg.c b/grub-core/video/readers/jpeg.c
+index 2284a6c06..579bbe8a4 100644
+--- a/grub-core/video/readers/jpeg.c
++++ b/grub-core/video/readers/jpeg.c
+@@ -683,6 +683,9 @@ grub_jpeg_decode_sos (struct grub_jpeg_data *data)
+   if (data->file->offset != data_offset)
+     return grub_error (GRUB_ERR_BAD_FILE_TYPE, "jpeg: extra byte in sos");
+ 
++  if (*data->bitmap)
++    return grub_error (GRUB_ERR_BAD_FILE_TYPE, "jpeg: too many start of scan blocks");
++
+   if (grub_video_bitmap_create (data->bitmap, data->image_width,
+ 				data->image_height,
+ 				GRUB_VIDEO_BLIT_FORMAT_RGB_888))
+@@ -705,8 +708,8 @@ grub_jpeg_decode_data (struct grub_jpeg_data *data)
+   nc1 = (data->image_width + hb - 1)  >> (3 + data->log_hs);
+ 
+   if (data->bitmap_ptr == NULL)
+-    return grub_error(GRUB_ERR_BAD_FILE_TYPE,
+-		      "jpeg: attempted to decode data before start of stream");
++    return grub_error (GRUB_ERR_BAD_FILE_TYPE,
++		       "jpeg: attempted to decode data before start of stream");
+ 
+   for (; data->r1 < nr1 && (!data->dri || rst);
+        data->r1++, data->bitmap_ptr += (vb * data->image_width - hb * nc1) * 3)
+-- 
+2.34.1
+
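
The three-line guard above is a create-once check: if the bitmap pointer is already non-NULL, a second start-of-scan block must be rejected, because creating the bitmap again would drop (and leak) the first allocation. A generic sketch of the idea, with hypothetical names rather than GRUB's:

    #include <stdlib.h>

    struct image {
            unsigned char *pixels;
            size_t size;
    };

    /* Refuse a second "start of scan": allocating again would leak pixels. */
    static int start_scan(struct image *img, size_t size)
    {
            if (img->pixels != NULL)
                    return -1;          /* already created once, reject the input */

            img->pixels = calloc(1, size);
            if (img->pixels == NULL)
                    return -1;
            img->size = size;
            return 0;
    }
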
diff --git a/poky/meta/recipes-bsp/grub/grub2.inc b/poky/meta/recipes-bsp/grub/grub2.inc
index 45852ab..47ea561 100644
--- a/poky/meta/recipes-bsp/grub/grub2.inc
+++ b/poky/meta/recipes-bsp/grub/grub2.inc
@@ -22,6 +22,16 @@
            file://0001-RISC-V-Restore-the-typcast-to-long.patch \
            file://CVE-2021-3981-grub-mkconfig-Restore-umask-for-the-grub.cfg.patch \
            file://0001-configure.ac-Use-_zicsr_zifencei-extentions-on-riscv.patch \
+           file://video-Remove-trailing-whitespaces.patch \
+           file://CVE-2021-3695-video-readers-png-Drop-greyscale-support-to-fix-heap.patch \
+           file://CVE-2021-3696-video-readers-png-Avoid-heap-OOB-R-W-inserting-huff.patch \
+           file://video-readers-jpeg-Abort-sooner-if-a-read-operation-.patch \
+           file://video-readers-jpeg-Refuse-to-handle-multiple-start-o.patch \
+           file://CVE-2021-3697-video-readers-jpeg-Block-int-underflow-wild-pointer.patch \
+           file://CVE-2022-28733-net-ip-Do-IP-fragment-maths-safely.patch \
+           file://CVE-2022-28734-net-http-Fix-OOB-write-for-split-http-headers.patch \
+           file://CVE-2022-28734-net-http-Error-out-on-headers-with-LF-without-CR.patch \
+           file://CVE-2022-28735-kern-efi-sb-Reject-non-kernel-files-in-the-shim_lock.patch \
 "
 
 SRC_URI[sha256sum] = "23b64b4c741569f9426ed2e3d0e6780796fca081bee4c99f62aa3f53ae803f5f"
diff --git a/poky/meta/recipes-bsp/setserial/setserial/0001-setserial.c-Add-needed-system-headers-for-ioctl-and-.patch b/poky/meta/recipes-bsp/setserial/setserial/0001-setserial.c-Add-needed-system-headers-for-ioctl-and-.patch
new file mode 100644
index 0000000..10c6ae8
--- /dev/null
+++ b/poky/meta/recipes-bsp/setserial/setserial/0001-setserial.c-Add-needed-system-headers-for-ioctl-and-.patch
@@ -0,0 +1,41 @@
+From 9bbb342f5d9ad5dc75486fd35ada8e287ba19299 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 15 Aug 2022 13:03:17 -0700
+Subject: [PATCH] setserial.c: Add needed system headers for ioctl() and
+ close() calls
+
+Add int return type for main() function
+
+Fixes
+error: type specifier missing, defaults to 'int'; ISO C99 and later do not support implicit int [-Wimplicit-int]
+error: call to undeclared function 'close'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declarat
+ion]
+
+Upstream-Status: Submitted [https://sourceforge.net/p/setserial/discussion/7060/thread/95d874c12c/]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ setserial.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/setserial.c b/setserial.c
+index bfda8fd..6a95513 100644
+--- a/setserial.c
++++ b/setserial.c
+@@ -16,6 +16,8 @@
+ #include <termios.h>
+ #include <string.h>
+ #include <errno.h>
++#include <unistd.h>
++#include <sys/ioctl.h>
+ 
+ #ifdef HAVE_ASM_IOCTLS_H
+ #include <asm/ioctls.h>
+@@ -715,7 +717,7 @@ fprintf(stderr, "\t* port\t\tset the I/O port\n");
+ 	exit(1);
+ }
+ 
+-main(int argc, char **argv)
++int main(int argc, char **argv)
+ {
+ 	int	get_flag = 0, wild_intr_flag = 0;
+ 	int	c;
diff --git a/poky/meta/recipes-bsp/setserial/setserial_2.17.bb b/poky/meta/recipes-bsp/setserial/setserial_2.17.bb
index 5b31cd1..e46aeb9 100644
--- a/poky/meta/recipes-bsp/setserial/setserial_2.17.bb
+++ b/poky/meta/recipes-bsp/setserial/setserial_2.17.bb
@@ -15,7 +15,8 @@
 SRC_URI = "${SOURCEFORGE_MIRROR}/setserial/${BPN}-${PV}.tar.gz \
            file://add_stdlib.patch \
            file://ldflags.patch \
-          "
+           file://0001-setserial.c-Add-needed-system-headers-for-ioctl-and-.patch \
+           "
 
 SRC_URI[md5sum] = "c4867d72c41564318e0107745eb7a0f2"
 SRC_URI[sha256sum] = "7e4487d320ac31558563424189435d396ddf77953bb23111a17a3d1487b5794a"
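
For context, the two diagnostics quoted in the setserial patch come from pre-C99 idioms that recent clang and GCC releases reject by default: a `main` declared without a return type, and a call to `close()` with no declaration in scope. A tiny reproducer and its fix (hypothetical file, not taken from setserial):

    /* Rejected by current compilers:
     *     main(int argc, char **argv) { close(0); }
     * Fixed by declaring the return type and including the header for close(): */
    #include <unistd.h>

    int main(void)
    {
            close(0);
            return 0;
    }
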
diff --git a/poky/meta/recipes-connectivity/bind/bind-9.18.4/0001-avoid-start-failure-with-bind-user.patch b/poky/meta/recipes-connectivity/bind/bind-9.18.6/0001-avoid-start-failure-with-bind-user.patch
similarity index 100%
rename from poky/meta/recipes-connectivity/bind/bind-9.18.4/0001-avoid-start-failure-with-bind-user.patch
rename to poky/meta/recipes-connectivity/bind/bind-9.18.6/0001-avoid-start-failure-with-bind-user.patch
diff --git a/poky/meta/recipes-connectivity/bind/bind-9.18.4/0001-named-lwresd-V-and-start-log-hide-build-options.patch b/poky/meta/recipes-connectivity/bind/bind-9.18.6/0001-named-lwresd-V-and-start-log-hide-build-options.patch
similarity index 100%
rename from poky/meta/recipes-connectivity/bind/bind-9.18.4/0001-named-lwresd-V-and-start-log-hide-build-options.patch
rename to poky/meta/recipes-connectivity/bind/bind-9.18.6/0001-named-lwresd-V-and-start-log-hide-build-options.patch
diff --git a/poky/meta/recipes-connectivity/bind/bind-9.18.4/bind-ensure-searching-for-json-headers-searches-sysr.patch b/poky/meta/recipes-connectivity/bind/bind-9.18.6/bind-ensure-searching-for-json-headers-searches-sysr.patch
similarity index 100%
rename from poky/meta/recipes-connectivity/bind/bind-9.18.4/bind-ensure-searching-for-json-headers-searches-sysr.patch
rename to poky/meta/recipes-connectivity/bind/bind-9.18.6/bind-ensure-searching-for-json-headers-searches-sysr.patch
diff --git a/poky/meta/recipes-connectivity/bind/bind-9.18.4/bind9 b/poky/meta/recipes-connectivity/bind/bind-9.18.6/bind9
similarity index 100%
rename from poky/meta/recipes-connectivity/bind/bind-9.18.4/bind9
rename to poky/meta/recipes-connectivity/bind/bind-9.18.6/bind9
diff --git a/poky/meta/recipes-connectivity/bind/bind-9.18.4/conf.patch b/poky/meta/recipes-connectivity/bind/bind-9.18.6/conf.patch
similarity index 100%
rename from poky/meta/recipes-connectivity/bind/bind-9.18.4/conf.patch
rename to poky/meta/recipes-connectivity/bind/bind-9.18.6/conf.patch
diff --git a/poky/meta/recipes-connectivity/bind/bind-9.18.4/generate-rndc-key.sh b/poky/meta/recipes-connectivity/bind/bind-9.18.6/generate-rndc-key.sh
similarity index 100%
rename from poky/meta/recipes-connectivity/bind/bind-9.18.4/generate-rndc-key.sh
rename to poky/meta/recipes-connectivity/bind/bind-9.18.6/generate-rndc-key.sh
diff --git a/poky/meta/recipes-connectivity/bind/bind-9.18.4/init.d-add-support-for-read-only-rootfs.patch b/poky/meta/recipes-connectivity/bind/bind-9.18.6/init.d-add-support-for-read-only-rootfs.patch
similarity index 100%
rename from poky/meta/recipes-connectivity/bind/bind-9.18.4/init.d-add-support-for-read-only-rootfs.patch
rename to poky/meta/recipes-connectivity/bind/bind-9.18.6/init.d-add-support-for-read-only-rootfs.patch
diff --git a/poky/meta/recipes-connectivity/bind/bind-9.18.4/make-etc-initd-bind-stop-work.patch b/poky/meta/recipes-connectivity/bind/bind-9.18.6/make-etc-initd-bind-stop-work.patch
similarity index 100%
rename from poky/meta/recipes-connectivity/bind/bind-9.18.4/make-etc-initd-bind-stop-work.patch
rename to poky/meta/recipes-connectivity/bind/bind-9.18.6/make-etc-initd-bind-stop-work.patch
diff --git a/poky/meta/recipes-connectivity/bind/bind-9.18.4/named.service b/poky/meta/recipes-connectivity/bind/bind-9.18.6/named.service
similarity index 100%
rename from poky/meta/recipes-connectivity/bind/bind-9.18.4/named.service
rename to poky/meta/recipes-connectivity/bind/bind-9.18.6/named.service
diff --git a/poky/meta/recipes-connectivity/bind/bind_9.18.4.bb b/poky/meta/recipes-connectivity/bind/bind_9.18.4.bb
deleted file mode 100644
index 8c62fc7..0000000
--- a/poky/meta/recipes-connectivity/bind/bind_9.18.4.bb
+++ /dev/null
@@ -1,114 +0,0 @@
-SUMMARY = "ISC Internet Domain Name Server"
-HOMEPAGE = "https://www.isc.org/bind/"
-DESCRIPTION = "BIND 9 provides a full-featured Domain Name Server system"
-SECTION = "console/network"
-
-LICENSE = "MPL-2.0"
-LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=9a4a897f202c0710e07f2f2836bc2b62"
-
-DEPENDS = "openssl libcap zlib libuv"
-
-SRC_URI = "https://ftp.isc.org/isc/bind9/${PV}/${BPN}-${PV}.tar.xz \
-           file://conf.patch \
-           file://named.service \
-           file://bind9 \
-           file://generate-rndc-key.sh \
-           file://make-etc-initd-bind-stop-work.patch \
-           file://init.d-add-support-for-read-only-rootfs.patch \
-           file://bind-ensure-searching-for-json-headers-searches-sysr.patch \
-           file://0001-named-lwresd-V-and-start-log-hide-build-options.patch \
-           file://0001-avoid-start-failure-with-bind-user.patch \
-           "
-
-SRC_URI[sha256sum] = "f277ae50159a00c300eb926a9c5d51953038a936bd8242d6913dfb6eac42761d"
-
-UPSTREAM_CHECK_URI = "https://ftp.isc.org/isc/bind9/"
-# follow the ESV versions divisible by 2
-UPSTREAM_CHECK_REGEX = "(?P<pver>9.(\d*[02468])+(\.\d+)+(-P\d+)*)/"
-
-# Issue only affects dhcpd with recent bind versions. We don't ship dhcpd anymore
-# so the issue doesn't affect us.
-CVE_CHECK_IGNORE += "CVE-2019-6470"
-
-inherit autotools update-rc.d systemd useradd pkgconfig multilib_header update-alternatives
-
-# PACKAGECONFIGs readline and libedit should NOT be set at same time
-PACKAGECONFIG ?= "readline"
-PACKAGECONFIG[httpstats] = "--with-libxml2=${STAGING_DIR_HOST}${prefix},--without-libxml2,libxml2"
-PACKAGECONFIG[readline] = "--with-readline=readline,,readline"
-PACKAGECONFIG[libedit] = "--with-readline=libedit,,libedit"
-PACKAGECONFIG[dns-over-http] = "--enable-doh,--disable-doh,nghttp2"
-
-EXTRA_OECONF = " --disable-devpoll --disable-auto-validation --enable-epoll \
-                 --with-gssapi=no --with-lmdb=no --with-zlib \
-                 --sysconfdir=${sysconfdir}/bind \
-                 --with-openssl=${STAGING_DIR_HOST}${prefix} \
-               "
-LDFLAGS:append = " -lz"
-
-# dhcp needs .la so keep them
-REMOVE_LIBTOOL_LA = "0"
-
-USERADD_PACKAGES = "${PN}"
-USERADD_PARAM:${PN} = "--system --home ${localstatedir}/cache/bind --no-create-home \
-                       --user-group bind"
-
-INITSCRIPT_NAME = "bind"
-INITSCRIPT_PARAMS = "defaults"
-
-SYSTEMD_SERVICE:${PN} = "named.service"
-
-do_install:append() {
-
-	install -d -o bind "${D}${localstatedir}/cache/bind"
-	install -d "${D}${sysconfdir}/bind"
-	install -d "${D}${sysconfdir}/init.d"
-	install -m 644 ${S}/conf/* "${D}${sysconfdir}/bind/"
-	install -m 755 "${S}/init.d" "${D}${sysconfdir}/init.d/bind"
-
-	# Install systemd related files
-	install -d ${D}${sbindir}
-	install -m 755 ${WORKDIR}/generate-rndc-key.sh ${D}${sbindir}
-	install -d ${D}${systemd_system_unitdir}
-	install -m 0644 ${WORKDIR}/named.service ${D}${systemd_system_unitdir}
-	sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
-	       -e 's,@SBINDIR@,${sbindir},g' \
-	       ${D}${systemd_system_unitdir}/named.service
-
-	install -d ${D}${sysconfdir}/default
-	install -m 0644 ${WORKDIR}/bind9 ${D}${sysconfdir}/default
-
-	if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
-		install -d ${D}${sysconfdir}/tmpfiles.d
-		echo "d /run/named 0755 bind bind - -" > ${D}${sysconfdir}/tmpfiles.d/bind.conf
-	fi
-}
-
-CONFFILES:${PN} = " \
-	${sysconfdir}/bind/named.conf \
-	${sysconfdir}/bind/named.conf.local \
-	${sysconfdir}/bind/named.conf.options \
-	${sysconfdir}/bind/db.0 \
-	${sysconfdir}/bind/db.127 \
-	${sysconfdir}/bind/db.empty \
-	${sysconfdir}/bind/db.local \
-	${sysconfdir}/bind/db.root \
-	"
-
-ALTERNATIVE:${PN}-utils = "nslookup"
-ALTERNATIVE_LINK_NAME[nslookup] = "${bindir}/nslookup"
-ALTERNATIVE_PRIORITY = "100"
-
-PACKAGE_BEFORE_PN += "${PN}-utils"
-FILES:${PN}-utils = "${bindir}/host ${bindir}/dig ${bindir}/mdig ${bindir}/nslookup ${bindir}/nsupdate"
-FILES:${PN}-dev += "${bindir}/isc-config.h"
-FILES:${PN} += "${sbindir}/generate-rndc-key.sh"
-
-PACKAGE_BEFORE_PN += "${PN}-libs"
-# special arrangement below due to
-# https://github.com/isc-projects/bind9/commit/0e25af628cd776f98c04fc4cc59048f5448f6c88
-FILES_SOLIBSDEV = "${libdir}/*[!0-9].so ${libdir}/libbind9.so"
-FILES:${PN}-libs = "${libdir}/named/*.so* ${libdir}/*-${PV}.so"
-FILES:${PN}-staticdev += "${libdir}/*.la"
-
-DEV_PKG_DEPENDENCY = ""
diff --git a/poky/meta/recipes-connectivity/bind/bind_9.18.6.bb b/poky/meta/recipes-connectivity/bind/bind_9.18.6.bb
new file mode 100644
index 0000000..5f54942
--- /dev/null
+++ b/poky/meta/recipes-connectivity/bind/bind_9.18.6.bb
@@ -0,0 +1,114 @@
+SUMMARY = "ISC Internet Domain Name Server"
+HOMEPAGE = "https://www.isc.org/bind/"
+DESCRIPTION = "BIND 9 provides a full-featured Domain Name Server system"
+SECTION = "console/network"
+
+LICENSE = "MPL-2.0"
+LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=9a4a897f202c0710e07f2f2836bc2b62"
+
+DEPENDS = "openssl libcap zlib libuv"
+
+SRC_URI = "https://ftp.isc.org/isc/bind9/${PV}/${BPN}-${PV}.tar.xz \
+           file://conf.patch \
+           file://named.service \
+           file://bind9 \
+           file://generate-rndc-key.sh \
+           file://make-etc-initd-bind-stop-work.patch \
+           file://init.d-add-support-for-read-only-rootfs.patch \
+           file://bind-ensure-searching-for-json-headers-searches-sysr.patch \
+           file://0001-named-lwresd-V-and-start-log-hide-build-options.patch \
+           file://0001-avoid-start-failure-with-bind-user.patch \
+           "
+
+SRC_URI[sha256sum] = "d43a0fed03c774d1685d203598218c0b7774a88fcc390a0170710d5feb7fbff1"
+
+UPSTREAM_CHECK_URI = "https://ftp.isc.org/isc/bind9/"
+# follow the ESV versions divisible by 2
+UPSTREAM_CHECK_REGEX = "(?P<pver>9.(\d*[02468])+(\.\d+)+(-P\d+)*)/"
+
+# Issue only affects dhcpd with recent bind versions. We don't ship dhcpd anymore
+# so the issue doesn't affect us.
+CVE_CHECK_IGNORE += "CVE-2019-6470"
+
+inherit autotools update-rc.d systemd useradd pkgconfig multilib_header update-alternatives
+
+# PACKAGECONFIGs readline and libedit should NOT be set at same time
+PACKAGECONFIG ?= "readline"
+PACKAGECONFIG[httpstats] = "--with-libxml2=${STAGING_DIR_HOST}${prefix},--without-libxml2,libxml2"
+PACKAGECONFIG[readline] = "--with-readline=readline,,readline"
+PACKAGECONFIG[libedit] = "--with-readline=libedit,,libedit"
+PACKAGECONFIG[dns-over-http] = "--enable-doh,--disable-doh,nghttp2"
+
+EXTRA_OECONF = " --disable-devpoll --disable-auto-validation --enable-epoll \
+                 --with-gssapi=no --with-lmdb=no --with-zlib \
+                 --sysconfdir=${sysconfdir}/bind \
+                 --with-openssl=${STAGING_DIR_HOST}${prefix} \
+               "
+LDFLAGS:append = " -lz"
+
+# dhcp needs .la so keep them
+REMOVE_LIBTOOL_LA = "0"
+
+USERADD_PACKAGES = "${PN}"
+USERADD_PARAM:${PN} = "--system --home ${localstatedir}/cache/bind --no-create-home \
+                       --user-group bind"
+
+INITSCRIPT_NAME = "bind"
+INITSCRIPT_PARAMS = "defaults"
+
+SYSTEMD_SERVICE:${PN} = "named.service"
+
+do_install:append() {
+
+	install -d -o bind "${D}${localstatedir}/cache/bind"
+	install -d "${D}${sysconfdir}/bind"
+	install -d "${D}${sysconfdir}/init.d"
+	install -m 644 ${S}/conf/* "${D}${sysconfdir}/bind/"
+	install -m 755 "${S}/init.d" "${D}${sysconfdir}/init.d/bind"
+
+	# Install systemd related files
+	install -d ${D}${sbindir}
+	install -m 755 ${WORKDIR}/generate-rndc-key.sh ${D}${sbindir}
+	install -d ${D}${systemd_system_unitdir}
+	install -m 0644 ${WORKDIR}/named.service ${D}${systemd_system_unitdir}
+	sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
+	       -e 's,@SBINDIR@,${sbindir},g' \
+	       ${D}${systemd_system_unitdir}/named.service
+
+	install -d ${D}${sysconfdir}/default
+	install -m 0644 ${WORKDIR}/bind9 ${D}${sysconfdir}/default
+
+	if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
+		install -d ${D}${sysconfdir}/tmpfiles.d
+		echo "d /run/named 0755 bind bind - -" > ${D}${sysconfdir}/tmpfiles.d/bind.conf
+	fi
+}
+
+CONFFILES:${PN} = " \
+	${sysconfdir}/bind/named.conf \
+	${sysconfdir}/bind/named.conf.local \
+	${sysconfdir}/bind/named.conf.options \
+	${sysconfdir}/bind/db.0 \
+	${sysconfdir}/bind/db.127 \
+	${sysconfdir}/bind/db.empty \
+	${sysconfdir}/bind/db.local \
+	${sysconfdir}/bind/db.root \
+	"
+
+ALTERNATIVE:${PN}-utils = "nslookup"
+ALTERNATIVE_LINK_NAME[nslookup] = "${bindir}/nslookup"
+ALTERNATIVE_PRIORITY = "100"
+
+PACKAGE_BEFORE_PN += "${PN}-utils"
+FILES:${PN}-utils = "${bindir}/host ${bindir}/dig ${bindir}/mdig ${bindir}/nslookup ${bindir}/nsupdate"
+FILES:${PN}-dev += "${bindir}/isc-config.h"
+FILES:${PN} += "${sbindir}/generate-rndc-key.sh"
+
+PACKAGE_BEFORE_PN += "${PN}-libs"
+# special arrangement below due to
+# https://github.com/isc-projects/bind9/commit/0e25af628cd776f98c04fc4cc59048f5448f6c88
+FILES_SOLIBSDEV = "${libdir}/*[!0-9].so ${libdir}/libbind9.so"
+FILES:${PN}-libs = "${libdir}/named/*.so* ${libdir}/*-${PV}.so"
+FILES:${PN}-staticdev += "${libdir}/*.la"
+
+DEV_PKG_DEPENDENCY = ""
diff --git a/poky/meta/recipes-connectivity/bluez5/bluez5.inc b/poky/meta/recipes-connectivity/bluez5/bluez5.inc
index 22dd07b..79d4645 100644
--- a/poky/meta/recipes-connectivity/bluez5/bluez5.inc
+++ b/poky/meta/recipes-connectivity/bluez5/bluez5.inc
@@ -53,7 +53,6 @@
            ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', '', 'file://0001-Allow-using-obexd-without-systemd-in-the-user-sessio.patch', d)} \
            file://0001-tests-add-a-target-for-building-tests-without-runnin.patch \
            file://0001-test-gatt-Fix-hung-issue.patch \
-           file://fix_service.patch \
            "
 S = "${WORKDIR}/bluez-${PV}"
 
diff --git a/poky/meta/recipes-connectivity/bluez5/bluez5/fix_service.patch b/poky/meta/recipes-connectivity/bluez5/bluez5/fix_service.patch
deleted file mode 100644
index 96fdf6b..0000000
--- a/poky/meta/recipes-connectivity/bluez5/bluez5/fix_service.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-The systemd bluetooth service failed to start because the /var/lib/bluetooth
-path of ReadWritePaths= is created by the bluetooth daemon itself.
-
-The commit systemd: Add more filesystem lockdown (442d211) add ReadWritePaths=/etc/bluetooth
-and ReadOnlyPaths=/var/lib/bluetooth options to the bluetooth systemd service.
-The existing ProtectSystem=full option mounts the /usr, the boot loader
-directories and /etc read-only. This means the two option are useless and could be removed.
-
-Upstream-Status: Submitted [https://github.com/bluez/bluez/issues/329]
-
-Index: bluez-5.64/src/bluetooth.service.in
-===================================================================
---- bluez-5.64.orig/src/bluetooth.service.in
-+++ bluez-5.64/src/bluetooth.service.in
-@@ -15,12 +15,12 @@ LimitNPROC=1
- 
- # Filesystem lockdown
- ProtectHome=true
--ProtectSystem=full
-+ProtectSystem=strict
- PrivateTmp=true
- ProtectKernelTunables=true
- ProtectControlGroups=true
--ReadWritePaths=@statedir@
--ReadOnlyPaths=@confdir@
-+ConfigurationDirectory=bluetooth
-+StateDirectory=bluetooth
- 
- # Execute Mappings
- MemoryDenyWriteExecute=true
diff --git a/poky/meta/recipes-connectivity/bluez5/bluez5_5.64.bb b/poky/meta/recipes-connectivity/bluez5/bluez5_5.64.bb
deleted file mode 100644
index 4319f9a..0000000
--- a/poky/meta/recipes-connectivity/bluez5/bluez5_5.64.bb
+++ /dev/null
@@ -1,70 +0,0 @@
-require bluez5.inc
-
-SRC_URI[sha256sum] = "ae437e65b6b3070c198bc5b0109fe9cdeb9eaa387380e2072f9de65fe8a1de34"
-
-# These issues have kernel fixes rather than bluez fixes so exclude here
-CVE_CHECK_IGNORE += "CVE-2020-12352 CVE-2020-24490"
-
-# noinst programs in Makefile.tools that are conditional on READLINE
-# support
-NOINST_TOOLS_READLINE ?= " \
-    ${@bb.utils.contains('PACKAGECONFIG', 'deprecated', 'attrib/gatttool', '', d)} \
-    tools/obex-client-tool \
-    tools/obex-server-tool \
-    tools/bluetooth-player \
-    tools/obexctl \
-    tools/btmgmt \
-"
-
-# noinst programs in Makefile.tools that are conditional on TESTING
-# support
-NOINST_TOOLS_TESTING ?= " \
-    emulator/btvirt \
-    emulator/b1ee \
-    emulator/hfp \
-    peripheral/btsensor \
-    tools/3dsp \
-    tools/mgmt-tester \
-    tools/gap-tester \
-    tools/l2cap-tester \
-    tools/sco-tester \
-    tools/smp-tester \
-    tools/hci-tester \
-    tools/rfcomm-tester \
-    tools/bnep-tester \
-    tools/userchan-tester \
-"
-
-# noinst programs in Makefile.tools that are conditional on TOOLS
-# support
-NOINST_TOOLS_BT ?= " \
-    tools/bdaddr \
-    tools/avinfo \
-    tools/avtest \
-    tools/scotest \
-    tools/amptest \
-    tools/hwdb \
-    tools/hcieventmask \
-    tools/hcisecfilter \
-    tools/btinfo \
-    tools/btsnoop \
-    tools/btproxy \
-    tools/btiotest \
-    tools/bneptest \
-    tools/mcaptest \
-    tools/cltest \
-    tools/oobtest \
-    tools/advtest \
-    tools/seq2bseq \
-    tools/nokfw \
-    tools/create-image \
-    tools/eddystone \
-    tools/ibeacon \
-    tools/btgatt-client \
-    tools/btgatt-server \
-    tools/test-runner \
-    tools/check-selftest \
-    tools/gatt-service \
-    profiles/iap/iapd \
-    ${@bb.utils.contains('PACKAGECONFIG', 'btpclient', 'tools/btpclient', '', d)} \
-"
diff --git a/poky/meta/recipes-connectivity/bluez5/bluez5_5.65.bb b/poky/meta/recipes-connectivity/bluez5/bluez5_5.65.bb
new file mode 100644
index 0000000..4c15aeb
--- /dev/null
+++ b/poky/meta/recipes-connectivity/bluez5/bluez5_5.65.bb
@@ -0,0 +1,70 @@
+require bluez5.inc
+
+SRC_URI[sha256sum] = "2565a4d48354b576e6ad92e25b54ed66808296581c8abb80587051f9993d96d4"
+
+# These issues have kernel fixes rather than bluez fixes so exclude here
+CVE_CHECK_IGNORE += "CVE-2020-12352 CVE-2020-24490"
+
+# noinst programs in Makefile.tools that are conditional on READLINE
+# support
+NOINST_TOOLS_READLINE ?= " \
+    ${@bb.utils.contains('PACKAGECONFIG', 'deprecated', 'attrib/gatttool', '', d)} \
+    tools/obex-client-tool \
+    tools/obex-server-tool \
+    tools/bluetooth-player \
+    tools/obexctl \
+    tools/btmgmt \
+"
+
+# noinst programs in Makefile.tools that are conditional on TESTING
+# support
+NOINST_TOOLS_TESTING ?= " \
+    emulator/btvirt \
+    emulator/b1ee \
+    emulator/hfp \
+    peripheral/btsensor \
+    tools/3dsp \
+    tools/mgmt-tester \
+    tools/gap-tester \
+    tools/l2cap-tester \
+    tools/sco-tester \
+    tools/smp-tester \
+    tools/hci-tester \
+    tools/rfcomm-tester \
+    tools/bnep-tester \
+    tools/userchan-tester \
+"
+
+# noinst programs in Makefile.tools that are conditional on TOOLS
+# support
+NOINST_TOOLS_BT ?= " \
+    tools/bdaddr \
+    tools/avinfo \
+    tools/avtest \
+    tools/scotest \
+    tools/amptest \
+    tools/hwdb \
+    tools/hcieventmask \
+    tools/hcisecfilter \
+    tools/btinfo \
+    tools/btsnoop \
+    tools/btproxy \
+    tools/btiotest \
+    tools/bneptest \
+    tools/mcaptest \
+    tools/cltest \
+    tools/oobtest \
+    tools/advtest \
+    tools/seq2bseq \
+    tools/nokfw \
+    tools/create-image \
+    tools/eddystone \
+    tools/ibeacon \
+    tools/btgatt-client \
+    tools/btgatt-server \
+    tools/test-runner \
+    tools/check-selftest \
+    tools/gatt-service \
+    profiles/iap/iapd \
+    ${@bb.utils.contains('PACKAGECONFIG', 'btpclient', 'tools/btpclient', '', d)} \
+"
diff --git a/poky/meta/recipes-connectivity/connman/connman.inc b/poky/meta/recipes-connectivity/connman/connman.inc
index 5880ecd..d7af94f 100644
--- a/poky/meta/recipes-connectivity/connman/connman.inc
+++ b/poky/meta/recipes-connectivity/connman/connman.inc
@@ -28,10 +28,15 @@
     --enable-tools \
     --disable-polkit \
 "
+# For smooth operation it would be best to start only one wireless daemon at a time.
+# If wpa-supplicant is running, connman will use it preferentially.
+# Select either wpa-supplicant or iwd
+WIRELESS_DAEMON ??= "wpa-supplicant"
 
 PACKAGECONFIG ??= "wispr iptables client\
-                   ${@bb.utils.filter('DISTRO_FEATURES', '3g systemd wifi', d)} \
+                   ${@bb.utils.filter('DISTRO_FEATURES', '3g systemd', d)} \
                    ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', 'bluez', '', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'wifi', 'wifi ${WIRELESS_DAEMON}', '', d)} \
 "
 
 # If you want ConnMan to support VPN, add following statement into
@@ -39,9 +44,11 @@
 # PACKAGECONFIG:append:pn-connman = " openvpn vpnc l2tp pptp"
 
 PACKAGECONFIG[systemd] = "--with-systemdunitdir=${systemd_system_unitdir}/ --with-tmpfilesdir=${sysconfdir}/tmpfiles.d/,--with-systemdunitdir='' --with-tmpfilesdir=''"
-PACKAGECONFIG[wifi] = "--enable-wifi, --disable-wifi, wpa-supplicant, wpa-supplicant"
+PACKAGECONFIG[wifi] = "--enable-wifi, --disable-wifi"
 PACKAGECONFIG[bluez] = "--enable-bluetooth, --disable-bluetooth, bluez5, bluez5"
 PACKAGECONFIG[3g] = "--enable-ofono, --disable-ofono, ofono, ofono"
+PACKAGECONFIG[wpa-supplicant] = ",,wpa-supplicant,wpa-supplicant"
+PACKAGECONFIG[iwd] = "--enable-iwd,--disable-iwd,,iwd"
 PACKAGECONFIG[tist] = "--enable-tist,--disable-tist,"
 PACKAGECONFIG[openvpn] = "--enable-openvpn --with-openvpn=${sbindir}/openvpn,--disable-openvpn,,openvpn"
 PACKAGECONFIG[vpnc] = "--enable-vpnc --with-vpnc=${sbindir}/vpnc,--disable-vpnc,,vpnc"
diff --git a/poky/meta/recipes-connectivity/connman/connman/CVE-2022-32292.patch b/poky/meta/recipes-connectivity/connman/connman/CVE-2022-32292.patch
new file mode 100644
index 0000000..182c5ca
--- /dev/null
+++ b/poky/meta/recipes-connectivity/connman/connman/CVE-2022-32292.patch
@@ -0,0 +1,37 @@
+From d1a5ede5d255bde8ef707f8441b997563b9312bd Mon Sep 17 00:00:00 2001
+From: Nathan Crandall <ncrandall@tesla.com>
+Date: Tue, 12 Jul 2022 08:56:34 +0200
+Subject: gweb: Fix OOB write in received_data()
+
+There is a mismatch of handling binary vs. C-string data with memchr
+and strlen, resulting in pos, count, and bytes_read to become out of
+sync and result in a heap overflow.  Instead, do not treat the buffer
+as an ASCII C-string. We calculate the count based on the return value
+of memchr, instead of strlen.
+
+Fixes: CVE-2022-32292
+
+CVE: CVE-2022-32292
+
+Upstream-Status: Backport [https://git.kernel.org/pub/scm/network/connman/connman.git/commit/?id=d1a5ede5d255bde8ef707f8441b997563b9312bd]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ gweb/gweb.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/gweb/gweb.c b/gweb/gweb.c
+index 12fcb1d8..13c6c5f2 100644
+--- a/gweb/gweb.c
++++ b/gweb/gweb.c
+@@ -918,7 +918,7 @@ static gboolean received_data(GIOChannel *channel, GIOCondition cond,
+ 		}
+ 
+ 		*pos = '\0';
+-		count = strlen((char *) ptr);
++		count = pos - ptr;
+ 		if (count > 0 && ptr[count - 1] == '\r') {
+ 			ptr[--count] = '\0';
+ 			bytes_read--;
+-- 
+cgit 
+
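
The one-line change above matters because the HTTP buffer is binary data: an embedded NUL byte makes `strlen()` report a shorter length than the distance to the newline that `memchr()` located, so `count` (and with it `bytes_read`) falls out of sync with `pos`. A self-contained illustration of the mismatch:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            /* Header-like data with an embedded NUL, terminated by CR LF. */
            unsigned char buf[] = { 'H', 'T', 'T', 'P', 0, 'j', 'u', 'n', 'k', '\r', '\n' };
            unsigned char *ptr = buf;
            unsigned char *pos = memchr(ptr, '\n', sizeof(buf));

            *pos = '\0';
            printf("strlen count: %zu\n", strlen((char *) ptr)); /* 4: stops at the NUL */
            printf("memchr count: %td\n", pos - ptr);            /* 10: what the fix uses */
            return 0;
    }
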
diff --git a/poky/meta/recipes-connectivity/connman/connman/CVE-2022-32293_p1.patch b/poky/meta/recipes-connectivity/connman/connman/CVE-2022-32293_p1.patch
new file mode 100644
index 0000000..b280203
--- /dev/null
+++ b/poky/meta/recipes-connectivity/connman/connman/CVE-2022-32293_p1.patch
@@ -0,0 +1,141 @@
+From 72343929836de80727a27d6744c869dff045757c Mon Sep 17 00:00:00 2001
+From: Daniel Wagner <wagi@monom.org>
+Date: Tue, 5 Jul 2022 08:32:12 +0200
+Subject: wispr: Add reference counter to portal context
+
+Track the connman_wispr_portal_context live time via a
+refcounter. This only adds the infrastructure to do proper reference
+counting.
+
+Fixes: CVE-2022-32293
+CVE: CVE-2022-32293
+Upstream-Status: Backport [https://git.kernel.org/pub/scm/network/connman/connman.git/commit/?id=416bfaff988882c553c672e5bfc2d4f648d29e8a]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/wispr.c | 52 ++++++++++++++++++++++++++++++++++++++++++----------
+ 1 file changed, 42 insertions(+), 10 deletions(-)
+
+diff --git a/src/wispr.c b/src/wispr.c
+index a07896ca..bde7e63b 100644
+--- a/src/wispr.c
++++ b/src/wispr.c
+@@ -56,6 +56,7 @@ struct wispr_route {
+ };
+ 
+ struct connman_wispr_portal_context {
++	int refcount;
+ 	struct connman_service *service;
+ 	enum connman_ipconfig_type type;
+ 	struct connman_wispr_portal *wispr_portal;
+@@ -97,6 +98,11 @@ static char *online_check_ipv4_url = NULL;
+ static char *online_check_ipv6_url = NULL;
+ static bool enable_online_to_ready_transition = false;
+ 
++#define wispr_portal_context_ref(wp_context) \
++	wispr_portal_context_ref_debug(wp_context, __FILE__, __LINE__, __func__)
++#define wispr_portal_context_unref(wp_context) \
++	wispr_portal_context_unref_debug(wp_context, __FILE__, __LINE__, __func__)
++
+ static void connman_wispr_message_init(struct connman_wispr_message *msg)
+ {
+ 	DBG("");
+@@ -162,9 +168,6 @@ static void free_connman_wispr_portal_context(
+ {
+ 	DBG("context %p", wp_context);
+ 
+-	if (!wp_context)
+-		return;
+-
+ 	if (wp_context->wispr_portal) {
+ 		if (wp_context->wispr_portal->ipv4_context == wp_context)
+ 			wp_context->wispr_portal->ipv4_context = NULL;
+@@ -201,9 +204,38 @@ static void free_connman_wispr_portal_context(
+ 	g_free(wp_context);
+ }
+ 
++static struct connman_wispr_portal_context *
++wispr_portal_context_ref_debug(struct connman_wispr_portal_context *wp_context,
++			const char *file, int line, const char *caller)
++{
++	DBG("%p ref %d by %s:%d:%s()", wp_context,
++		wp_context->refcount + 1, file, line, caller);
++
++	__sync_fetch_and_add(&wp_context->refcount, 1);
++
++	return wp_context;
++}
++
++static void wispr_portal_context_unref_debug(
++		struct connman_wispr_portal_context *wp_context,
++		const char *file, int line, const char *caller)
++{
++	if (!wp_context)
++		return;
++
++	DBG("%p ref %d by %s:%d:%s()", wp_context,
++		wp_context->refcount - 1, file, line, caller);
++
++	if (__sync_fetch_and_sub(&wp_context->refcount, 1) != 1)
++		return;
++
++	free_connman_wispr_portal_context(wp_context);
++}
++
+ static struct connman_wispr_portal_context *create_wispr_portal_context(void)
+ {
+-	return g_try_new0(struct connman_wispr_portal_context, 1);
++	return wispr_portal_context_ref(
++		g_new0(struct connman_wispr_portal_context, 1));
+ }
+ 
+ static void free_connman_wispr_portal(gpointer data)
+@@ -215,8 +247,8 @@ static void free_connman_wispr_portal(gpointer data)
+ 	if (!wispr_portal)
+ 		return;
+ 
+-	free_connman_wispr_portal_context(wispr_portal->ipv4_context);
+-	free_connman_wispr_portal_context(wispr_portal->ipv6_context);
++	wispr_portal_context_unref(wispr_portal->ipv4_context);
++	wispr_portal_context_unref(wispr_portal->ipv6_context);
+ 
+ 	g_free(wispr_portal);
+ }
+@@ -452,7 +484,7 @@ static void portal_manage_status(GWebResult *result,
+ 		connman_info("Client-Timezone: %s", str);
+ 
+ 	if (!enable_online_to_ready_transition)
+-		free_connman_wispr_portal_context(wp_context);
++		wispr_portal_context_unref(wp_context);
+ 
+ 	__connman_service_ipconfig_indicate_state(service,
+ 					CONNMAN_SERVICE_STATE_ONLINE, type);
+@@ -616,7 +648,7 @@ static void wispr_portal_request_wispr_login(struct connman_service *service,
+ 				return;
+ 		}
+ 
+-		free_connman_wispr_portal_context(wp_context);
++		wispr_portal_context_unref(wp_context);
+ 		return;
+ 	}
+ 
+@@ -952,7 +984,7 @@ static int wispr_portal_detect(struct connman_wispr_portal_context *wp_context)
+ 
+ 		if (wp_context->token == 0) {
+ 			err = -EINVAL;
+-			free_connman_wispr_portal_context(wp_context);
++			wispr_portal_context_unref(wp_context);
+ 		}
+ 	} else if (wp_context->timeout == 0) {
+ 		wp_context->timeout = g_idle_add(no_proxy_callback, wp_context);
+@@ -1001,7 +1033,7 @@ int __connman_wispr_start(struct connman_service *service,
+ 
+ 	/* If there is already an existing context, we wipe it */
+ 	if (wp_context)
+-		free_connman_wispr_portal_context(wp_context);
++		wispr_portal_context_unref(wp_context);
+ 
+ 	wp_context = create_wispr_portal_context();
+ 	if (!wp_context)
+-- 
+cgit 
+
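
Part 1 only adds the counting machinery: the portal context now lives until the last holder drops its reference instead of being freed at one fixed point. A stripped-down sketch of the same ref/unref pair, built on the same GCC `__sync` builtins (illustrative types, not connman's):

    #include <stdlib.h>

    struct context {
            int refcount;
            /* ... payload ... */
    };

    static struct context *context_ref(struct context *ctx)
    {
            __sync_fetch_and_add(&ctx->refcount, 1);
            return ctx;
    }

    static void context_unref(struct context *ctx)
    {
            if (ctx == NULL)
                    return;
            /* Free only when the count drops from 1 to 0. */
            if (__sync_fetch_and_sub(&ctx->refcount, 1) != 1)
                    return;
            free(ctx);
    }

    /* As in the patch, a freshly created object starts with one reference. */
    static struct context *context_new(void)
    {
            struct context *ctx = calloc(1, sizeof(*ctx));

            return ctx ? context_ref(ctx) : NULL;
    }
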
diff --git a/poky/meta/recipes-connectivity/connman/connman/CVE-2022-32293_p2.patch b/poky/meta/recipes-connectivity/connman/connman/CVE-2022-32293_p2.patch
new file mode 100644
index 0000000..56f8fc8
--- /dev/null
+++ b/poky/meta/recipes-connectivity/connman/connman/CVE-2022-32293_p2.patch
@@ -0,0 +1,174 @@
+From 416bfaff988882c553c672e5bfc2d4f648d29e8a Mon Sep 17 00:00:00 2001
+From: Daniel Wagner <wagi@monom.org>
+Date: Tue, 5 Jul 2022 09:11:09 +0200
+Subject: wispr: Update portal context references
+
+Maintain proper portal context references to avoid UAF.
+
+Fixes: CVE-2022-32293
+CVE: CVE-2022-32293
+Upstream-Status: Backport [https://git.kernel.org/pub/scm/network/connman/connman.git/commit/?id=72343929836de80727a27d6744c869dff045757c]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/wispr.c | 34 ++++++++++++++++++++++------------
+ 1 file changed, 22 insertions(+), 12 deletions(-)
+
+diff --git a/src/wispr.c b/src/wispr.c
+index bde7e63b..84bed33f 100644
+--- a/src/wispr.c
++++ b/src/wispr.c
+@@ -105,8 +105,6 @@ static bool enable_online_to_ready_transition = false;
+ 
+ static void connman_wispr_message_init(struct connman_wispr_message *msg)
+ {
+-	DBG("");
+-
+ 	msg->has_error = false;
+ 	msg->current_element = NULL;
+ 
+@@ -166,8 +164,6 @@ static void free_wispr_routes(struct connman_wispr_portal_context *wp_context)
+ static void free_connman_wispr_portal_context(
+ 		struct connman_wispr_portal_context *wp_context)
+ {
+-	DBG("context %p", wp_context);
+-
+ 	if (wp_context->wispr_portal) {
+ 		if (wp_context->wispr_portal->ipv4_context == wp_context)
+ 			wp_context->wispr_portal->ipv4_context = NULL;
+@@ -483,9 +479,6 @@ static void portal_manage_status(GWebResult *result,
+ 				&str))
+ 		connman_info("Client-Timezone: %s", str);
+ 
+-	if (!enable_online_to_ready_transition)
+-		wispr_portal_context_unref(wp_context);
+-
+ 	__connman_service_ipconfig_indicate_state(service,
+ 					CONNMAN_SERVICE_STATE_ONLINE, type);
+ 
+@@ -546,14 +539,17 @@ static void wispr_portal_request_portal(
+ {
+ 	DBG("");
+ 
++	wispr_portal_context_ref(wp_context);
+ 	wp_context->request_id = g_web_request_get(wp_context->web,
+ 					wp_context->status_url,
+ 					wispr_portal_web_result,
+ 					wispr_route_request,
+ 					wp_context);
+ 
+-	if (wp_context->request_id == 0)
++	if (wp_context->request_id == 0) {
+ 		wispr_portal_error(wp_context);
++		wispr_portal_context_unref(wp_context);
++	}
+ }
+ 
+ static bool wispr_input(const guint8 **data, gsize *length,
+@@ -618,13 +614,15 @@ static void wispr_portal_browser_reply_cb(struct connman_service *service,
+ 		return;
+ 
+ 	if (!authentication_done) {
+-		wispr_portal_error(wp_context);
+ 		free_wispr_routes(wp_context);
++		wispr_portal_error(wp_context);
++		wispr_portal_context_unref(wp_context);
+ 		return;
+ 	}
+ 
+ 	/* Restarting the test */
+ 	__connman_service_wispr_start(service, wp_context->type);
++	wispr_portal_context_unref(wp_context);
+ }
+ 
+ static void wispr_portal_request_wispr_login(struct connman_service *service,
+@@ -700,11 +698,13 @@ static bool wispr_manage_message(GWebResult *result,
+ 
+ 		wp_context->wispr_result = CONNMAN_WISPR_RESULT_LOGIN;
+ 
++		wispr_portal_context_ref(wp_context);
+ 		if (__connman_agent_request_login_input(wp_context->service,
+ 					wispr_portal_request_wispr_login,
+-					wp_context) != -EINPROGRESS)
++					wp_context) != -EINPROGRESS) {
+ 			wispr_portal_error(wp_context);
+-		else
++			wispr_portal_context_unref(wp_context);
++		} else
+ 			return true;
+ 
+ 		break;
+@@ -753,6 +753,7 @@ static bool wispr_portal_web_result(GWebResult *result, gpointer user_data)
+ 		if (length > 0) {
+ 			g_web_parser_feed_data(wp_context->wispr_parser,
+ 								chunk, length);
++			wispr_portal_context_unref(wp_context);
+ 			return true;
+ 		}
+ 
+@@ -770,6 +771,7 @@ static bool wispr_portal_web_result(GWebResult *result, gpointer user_data)
+ 
+ 	switch (status) {
+ 	case 000:
++		wispr_portal_context_ref(wp_context);
+ 		__connman_agent_request_browser(wp_context->service,
+ 				wispr_portal_browser_reply_cb,
+ 				wp_context->status_url, wp_context);
+@@ -781,11 +783,14 @@ static bool wispr_portal_web_result(GWebResult *result, gpointer user_data)
+ 		if (g_web_result_get_header(result, "X-ConnMan-Status",
+ 						&str)) {
+ 			portal_manage_status(result, wp_context);
++			wispr_portal_context_unref(wp_context);
+ 			return false;
+-		} else
++		} else {
++			wispr_portal_context_ref(wp_context);
+ 			__connman_agent_request_browser(wp_context->service,
+ 					wispr_portal_browser_reply_cb,
+ 					wp_context->redirect_url, wp_context);
++		}
+ 
+ 		break;
+ 	case 300:
+@@ -798,6 +803,7 @@ static bool wispr_portal_web_result(GWebResult *result, gpointer user_data)
+ 			!g_web_result_get_header(result, "Location",
+ 							&redirect)) {
+ 
++			wispr_portal_context_ref(wp_context);
+ 			__connman_agent_request_browser(wp_context->service,
+ 					wispr_portal_browser_reply_cb,
+ 					wp_context->status_url, wp_context);
+@@ -808,6 +814,7 @@ static bool wispr_portal_web_result(GWebResult *result, gpointer user_data)
+ 
+ 		wp_context->redirect_url = g_strdup(redirect);
+ 
++		wispr_portal_context_ref(wp_context);
+ 		wp_context->request_id = g_web_request_get(wp_context->web,
+ 				redirect, wispr_portal_web_result,
+ 				wispr_route_request, wp_context);
+@@ -820,6 +827,7 @@ static bool wispr_portal_web_result(GWebResult *result, gpointer user_data)
+ 
+ 		break;
+ 	case 505:
++		wispr_portal_context_ref(wp_context);
+ 		__connman_agent_request_browser(wp_context->service,
+ 				wispr_portal_browser_reply_cb,
+ 				wp_context->status_url, wp_context);
+@@ -832,6 +840,7 @@ static bool wispr_portal_web_result(GWebResult *result, gpointer user_data)
+ 	wp_context->request_id = 0;
+ done:
+ 	wp_context->wispr_msg.message_type = -1;
++	wispr_portal_context_unref(wp_context);
+ 	return false;
+ }
+ 
+@@ -890,6 +899,7 @@ static void proxy_callback(const char *proxy, void *user_data)
+ 					xml_wispr_parser_callback, wp_context);
+ 
+ 	wispr_portal_request_portal(wp_context);
++	wispr_portal_context_unref(wp_context);
+ }
+ 
+ static gboolean no_proxy_callback(gpointer user_data)
+-- 
+cgit 
+
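
Part 2 then applies the rule that closes the use-after-free: take a reference immediately before handing the context to an asynchronous request or agent callback, and drop it when that callback finishes (or at once if the request never starts). The shape of the pattern, reduced to a self-contained example with hypothetical names:

    #include <stdlib.h>

    struct context { int refcount; };

    static struct context *context_ref(struct context *ctx)
    {
            __sync_fetch_and_add(&ctx->refcount, 1);
            return ctx;
    }

    static void context_unref(struct context *ctx)
    {
            if (ctx && __sync_fetch_and_sub(&ctx->refcount, 1) == 1)
                    free(ctx);
    }

    /* Stand-in for the async engine: here the callback simply runs immediately. */
    static void start_request(void (*cb)(void *), void *user_data)
    {
            cb(user_data);
    }

    static void request_done(void *user_data)
    {
            struct context *ctx = user_data;

            /* ... consume the result ... */
            context_unref(ctx);     /* balances the ref taken when the request was submitted */
    }

    int main(void)
    {
            struct context *ctx = calloc(1, sizeof(*ctx));

            if (ctx == NULL)
                    return 1;
            context_ref(ctx);                               /* caller's reference */
            start_request(request_done, context_ref(ctx));  /* the request owns a second one */
            context_unref(ctx);                             /* caller is done with it */
            return 0;
    }
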
diff --git a/poky/meta/recipes-connectivity/connman/connman_1.41.bb b/poky/meta/recipes-connectivity/connman/connman_1.41.bb
index 736b78e..79542b2 100644
--- a/poky/meta/recipes-connectivity/connman/connman_1.41.bb
+++ b/poky/meta/recipes-connectivity/connman/connman_1.41.bb
@@ -5,6 +5,9 @@
            file://0001-connman.service-stop-systemd-resolved-when-we-use-co.patch \
            file://connman \
            file://no-version-scripts.patch \
+           file://CVE-2022-32293_p1.patch \
+           file://CVE-2022-32293_p2.patch \
+           file://CVE-2022-32292.patch \
            "
 
 SRC_URI:append:libc-musl = " file://0002-resolve-musl-does-not-implement-res_ninit.patch"
diff --git a/poky/meta/recipes-connectivity/iproute2/iproute2/0001-configure-Define-_GNU_SOURCE-when-checking-for-setns.patch b/poky/meta/recipes-connectivity/iproute2/iproute2/0001-configure-Define-_GNU_SOURCE-when-checking-for-setns.patch
new file mode 100644
index 0000000..04d44ef
--- /dev/null
+++ b/poky/meta/recipes-connectivity/iproute2/iproute2/0001-configure-Define-_GNU_SOURCE-when-checking-for-setns.patch
@@ -0,0 +1,28 @@
+From dc837a6b4c2cad7f31cddfe56cd652e26baadc02 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 10 Aug 2022 22:31:03 -0700
+Subject: [PATCH] configure: Define _GNU_SOURCE when checking for setns
+
+glibc defines this function only as gnu extention
+
+Upstream-Status: Submitted [https://lore.kernel.org/netdev/20220811053440.778649-1-raj.khem@gmail.com/T/#u]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/configure b/configure
+index 440facb..c02753b 100755
+--- a/configure
++++ b/configure
+@@ -191,6 +191,7 @@ check_ipt_lib_dir()
+ check_setns()
+ {
+     cat >$TMPDIR/setnstest.c <<EOF
++#define _GNU_SOURCE
+ #include <sched.h>
+ int main(int argc, char **argv)
+ {
+-- 
+2.37.1
+
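
The added `#define _GNU_SOURCE` matters because glibc only declares `setns()` in `<sched.h>` when GNU extensions are requested, so the configure probe fails at compile time without it even though the symbol exists in libc. The probe boils down to:

    /* Same idea as iproute2's check_setns() test program. */
    #define _GNU_SOURCE
    #include <sched.h>

    int main(void)
    {
            (void) setns(0, 0);     /* only the declaration matters for the probe */
            return 0;
    }
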
diff --git a/poky/meta/recipes-connectivity/iproute2/iproute2/0001-ip-ipstats.c-add-an-include-where-MIN-is-defined.patch b/poky/meta/recipes-connectivity/iproute2/iproute2/0001-ip-ipstats.c-add-an-include-where-MIN-is-defined.patch
new file mode 100644
index 0000000..edd7381
--- /dev/null
+++ b/poky/meta/recipes-connectivity/iproute2/iproute2/0001-ip-ipstats.c-add-an-include-where-MIN-is-defined.patch
@@ -0,0 +1,25 @@
+From c8a99f1035ec7b158a204f90e9a7ed3c0b1e3d52 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex@linutronix.de>
+Date: Fri, 5 Aug 2022 11:31:56 +0200
+Subject: [PATCH] ip/ipstats.c: add an include where MIN is defined
+
+Otherwise, non-glibc systems error out (e.g. on musl).
+
+Upstream-Status: Submitted [by email to stephen@networkplumber.org,netdev@vger.kernel.org]
+Signed-off-by: Alexander Kanavin <alex@linutronix.de>
+---
+ ip/ipstats.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/ip/ipstats.c b/ip/ipstats.c
+index 5cdd15a..1ac275b 100644
+--- a/ip/ipstats.c
++++ b/ip/ipstats.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0+
+ #include <assert.h>
+ #include <errno.h>
++#include <sys/param.h>
+ 
+ #include "list.h"
+ #include "utils.h"
diff --git a/poky/meta/recipes-connectivity/iproute2/iproute2_5.18.0.bb b/poky/meta/recipes-connectivity/iproute2/iproute2_5.18.0.bb
deleted file mode 100644
index 3e01c70..0000000
--- a/poky/meta/recipes-connectivity/iproute2/iproute2_5.18.0.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-require iproute2.inc
-
-SRC_URI = "${KERNELORG_MIRROR}/linux/utils/net/${BPN}/${BP}.tar.xz \
-           file://0001-libc-compat.h-add-musl-workaround.patch \
-           "
-
-SRC_URI[sha256sum] = "5ba3d464d51c8c283550d507ffac3d10f7aec587b7c66b0ccb6950643646389e"
-
-# CFLAGS are computed in Makefile and reference CCOPTS
-#
-EXTRA_OEMAKE:append = " CCOPTS='${CFLAGS}'"
diff --git a/poky/meta/recipes-connectivity/iproute2/iproute2_5.19.0.bb b/poky/meta/recipes-connectivity/iproute2/iproute2_5.19.0.bb
new file mode 100644
index 0000000..6a00779
--- /dev/null
+++ b/poky/meta/recipes-connectivity/iproute2/iproute2_5.19.0.bb
@@ -0,0 +1,13 @@
+require iproute2.inc
+
+SRC_URI = "${KERNELORG_MIRROR}/linux/utils/net/${BPN}/${BP}.tar.xz \
+           file://0001-libc-compat.h-add-musl-workaround.patch \
+           file://0001-ip-ipstats.c-add-an-include-where-MIN-is-defined.patch \
+           file://0001-configure-Define-_GNU_SOURCE-when-checking-for-setns.patch \
+           "
+
+SRC_URI[sha256sum] = "26b7a34d6a7fd2f7a42e2b39c5a90cb61bac522d1096067ffeb195e5693d7791"
+
+# CFLAGS are computed in Makefile and reference CCOPTS
+#
+EXTRA_OEMAKE:append = " CCOPTS='${CFLAGS}'"
diff --git a/poky/meta/recipes-connectivity/kea/kea_2.0.2.bb b/poky/meta/recipes-connectivity/kea/kea_2.0.2.bb
deleted file mode 100644
index 13da1f8..0000000
--- a/poky/meta/recipes-connectivity/kea/kea_2.0.2.bb
+++ /dev/null
@@ -1,77 +0,0 @@
-SUMMARY = "ISC Kea DHCP Server"
-DESCRIPTION = "Kea is the next generation of DHCP software developed by ISC. It supports both DHCPv4 and DHCPv6 protocols along with their extensions, e.g. prefix delegation and dynamic updates to DNS."
-HOMEPAGE = "http://kea.isc.org"
-SECTION = "connectivity"
-LICENSE = "MPL-2.0 & Apache-2.0"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b4ecee995eeb6780a17dd7e539e97abc"
-
-DEPENDS = "boost log4cplus openssl"
-
-SRC_URI = "http://ftp.isc.org/isc/kea/${PV}/${BP}.tar.gz \
-           file://kea-dhcp4.service \
-           file://kea-dhcp6.service \
-           file://kea-dhcp-ddns.service \
-           file://kea-dhcp4-server \
-           file://kea-dhcp6-server \
-           file://kea-dhcp-ddns-server \
-           file://fix-multilib-conflict.patch \
-           file://fix_pid_keactrl.patch \
-           file://0001-src-lib-log-logger_unittest_support.cc-do-not-write-.patch \
-           "
-SRC_URI[sha256sum] = "8d28213bdc8e2bb870a383b30ac1e53d54e1eba43d2f86e5151b08b66aa6cf32"
-
-inherit autotools systemd update-rc.d upstream-version-is-even
-
-INITSCRIPT_NAME = "kea-dhcp4-server"
-INITSCRIPT_PARAMS = "defaults 30"
-
-SYSTEMD_SERVICE:${PN} = "kea-dhcp4.service kea-dhcp6.service kea-dhcp-ddns.service"
-SYSTEMD_AUTO_ENABLE = "disable"
-
-DEBUG_OPTIMIZATION:remove:mips = " -Og"
-DEBUG_OPTIMIZATION:append:mips = " -O"
-BUILD_OPTIMIZATION:remove:mips = " -Og"
-BUILD_OPTIMIZATION:append:mips = " -O"
-
-DEBUG_OPTIMIZATION:remove:mipsel = " -Og"
-DEBUG_OPTIMIZATION:append:mipsel = " -O"
-BUILD_OPTIMIZATION:remove:mipsel = " -Og"
-BUILD_OPTIMIZATION:append:mipsel = " -O"
-
-EXTRA_OECONF = "--with-boost-libs=-lboost_system \
-                --with-log4cplus=${STAGING_DIR_TARGET}${prefix} \
-                --with-openssl=${STAGING_DIR_TARGET}${prefix}"
-
-do_configure:prepend() {
-    # replace abs_top_builddir to avoid introducing the build path
-    # don't expand the abs_top_builddir on the target as the abs_top_builddir is meanlingless on the target
-    find ${S} -type f -name *.sh.in | xargs sed -i  "s:@abs_top_builddir@:@abs_top_builddir_placeholder@:g"
-    sed -i "s:@abs_top_srcdir@:@abs_top_srcdir_placeholder@:g" ${S}/src/bin/admin/kea-admin.in
-}
-
-# patch out build host paths for reproducibility
-do_compile:prepend:class-target() {
-        sed -i -e "s,${WORKDIR},,g" ${B}/config.report
-}
-
-do_install:append() {
-    install -d ${D}${sysconfdir}/init.d
-    install -d ${D}${systemd_system_unitdir}
-
-    install -m 0644 ${WORKDIR}/kea-dhcp*service ${D}${systemd_system_unitdir}
-    install -m 0755 ${WORKDIR}/kea-*-server ${D}${sysconfdir}/init.d
-    sed -i -e 's,@SBINDIR@,${sbindir},g' -e 's,@BASE_BINDIR@,${base_bindir},g' \
-           -e 's,@LOCALSTATEDIR@,${localstatedir},g' -e 's,@SYSCONFDIR@,${sysconfdir},g' \
-           ${D}${systemd_system_unitdir}/kea-dhcp*service ${D}${sbindir}/keactrl
-}
-
-do_install:append() {
-    rm -rf "${D}${localstatedir}"
-}
-
-CONFFILES:${PN} = "${sysconfdir}/kea/keactrl.conf"
-
-FILES:${PN}-staticdev += "${libdir}/kea/hooks/*.a ${libdir}/hooks/*.a"
-FILES:${PN} += "${libdir}/hooks/*.so"
-
-PARALLEL_MAKEINST = ""
diff --git a/poky/meta/recipes-connectivity/kea/kea_2.2.0.bb b/poky/meta/recipes-connectivity/kea/kea_2.2.0.bb
new file mode 100644
index 0000000..2c2e5a7
--- /dev/null
+++ b/poky/meta/recipes-connectivity/kea/kea_2.2.0.bb
@@ -0,0 +1,77 @@
+SUMMARY = "ISC Kea DHCP Server"
+DESCRIPTION = "Kea is the next generation of DHCP software developed by ISC. It supports both DHCPv4 and DHCPv6 protocols along with their extensions, e.g. prefix delegation and dynamic updates to DNS."
+HOMEPAGE = "http://kea.isc.org"
+SECTION = "connectivity"
+LICENSE = "MPL-2.0"
+LIC_FILES_CHKSUM = "file://COPYING;md5=97ce14bdd2733f5b84ab5e29380d057d"
+
+DEPENDS = "boost log4cplus openssl"
+
+SRC_URI = "http://ftp.isc.org/isc/kea/${PV}/${BP}.tar.gz \
+           file://kea-dhcp4.service \
+           file://kea-dhcp6.service \
+           file://kea-dhcp-ddns.service \
+           file://kea-dhcp4-server \
+           file://kea-dhcp6-server \
+           file://kea-dhcp-ddns-server \
+           file://fix-multilib-conflict.patch \
+           file://fix_pid_keactrl.patch \
+           file://0001-src-lib-log-logger_unittest_support.cc-do-not-write-.patch \
+           "
+SRC_URI[sha256sum] = "da7d90ca62a772602dac6e77e507319038422895ad68eeb142f1487d67d531d2"
+
+inherit autotools systemd update-rc.d upstream-version-is-even
+
+INITSCRIPT_NAME = "kea-dhcp4-server"
+INITSCRIPT_PARAMS = "defaults 30"
+
+SYSTEMD_SERVICE:${PN} = "kea-dhcp4.service kea-dhcp6.service kea-dhcp-ddns.service"
+SYSTEMD_AUTO_ENABLE = "disable"
+
+DEBUG_OPTIMIZATION:remove:mips = " -Og"
+DEBUG_OPTIMIZATION:append:mips = " -O"
+BUILD_OPTIMIZATION:remove:mips = " -Og"
+BUILD_OPTIMIZATION:append:mips = " -O"
+
+DEBUG_OPTIMIZATION:remove:mipsel = " -Og"
+DEBUG_OPTIMIZATION:append:mipsel = " -O"
+BUILD_OPTIMIZATION:remove:mipsel = " -Og"
+BUILD_OPTIMIZATION:append:mipsel = " -O"
+
+EXTRA_OECONF = "--with-boost-libs=-lboost_system \
+                --with-log4cplus=${STAGING_DIR_TARGET}${prefix} \
+                --with-openssl=${STAGING_DIR_TARGET}${prefix}"
+
+do_configure:prepend() {
+    # replace abs_top_builddir to avoid introducing the build path
+    # don't expand the abs_top_builddir on the target as the abs_top_builddir is meanlingless on the target
+    find ${S} -type f -name *.sh.in | xargs sed -i  "s:@abs_top_builddir@:@abs_top_builddir_placeholder@:g"
+    sed -i "s:@abs_top_srcdir@:@abs_top_srcdir_placeholder@:g" ${S}/src/bin/admin/kea-admin.in
+}
+
+# patch out build host paths for reproducibility
+do_compile:prepend:class-target() {
+        sed -i -e "s,${WORKDIR},,g" ${B}/config.report
+}
+
+do_install:append() {
+    install -d ${D}${sysconfdir}/init.d
+    install -d ${D}${systemd_system_unitdir}
+
+    install -m 0644 ${WORKDIR}/kea-dhcp*service ${D}${systemd_system_unitdir}
+    install -m 0755 ${WORKDIR}/kea-*-server ${D}${sysconfdir}/init.d
+    sed -i -e 's,@SBINDIR@,${sbindir},g' -e 's,@BASE_BINDIR@,${base_bindir},g' \
+           -e 's,@LOCALSTATEDIR@,${localstatedir},g' -e 's,@SYSCONFDIR@,${sysconfdir},g' \
+           ${D}${systemd_system_unitdir}/kea-dhcp*service ${D}${sbindir}/keactrl
+}
+
+do_install:append() {
+    rm -rf "${D}${localstatedir}"
+}
+
+CONFFILES:${PN} = "${sysconfdir}/kea/keactrl.conf"
+
+FILES:${PN}-staticdev += "${libdir}/kea/hooks/*.a ${libdir}/hooks/*.a"
+FILES:${PN} += "${libdir}/hooks/*.so"
+
+PARALLEL_MAKEINST = ""
diff --git a/poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb b/poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb
index e6f216e..2cc92b7 100644
--- a/poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb
+++ b/poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb
@@ -5,8 +5,8 @@
 LICENSE = "PD"
 LIC_FILES_CHKSUM = "file://COPYING;md5=87964579b2a8ece4bc6744d2dc9a8b04"
 
-SRCREV = "3d5c8d0f7e0264768a2c000d0fd4b4d4a991e041"
-PV = "20220511"
+SRCREV = "fe19892a8168bf19d81e3bc4ee319bf7f9f058f5"
+PV = "20220725"
 PE = "1"
 
 SRC_URI = "git://gitlab.gnome.org/GNOME/mobile-broadband-provider-info.git;protocol=https;branch=main"
diff --git a/poky/meta/recipes-connectivity/nfs-utils/nfs-utils/0005-mountd-Check-for-return-of-stat-function.patch b/poky/meta/recipes-connectivity/nfs-utils/nfs-utils/0005-mountd-Check-for-return-of-stat-function.patch
new file mode 100644
index 0000000..13a21e5
--- /dev/null
+++ b/poky/meta/recipes-connectivity/nfs-utils/nfs-utils/0005-mountd-Check-for-return-of-stat-function.patch
@@ -0,0 +1,34 @@
+From 887ecc7837962e9be77a4fea7d9122648f73a84a Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 15 Aug 2022 14:47:53 -0700
+Subject: [PATCH] mountd: Check for return of stat function
+
+simplify the check; stat() returns 0 on success, -1 on failure
+
+Fixes clang reported errors e.g.
+
+| v4clients.c:29:6: error: logical not is only applied to the left hand side of this comparison [-Werror,-Wlogical-not-parentheses]
+|         if (!stat("/proc/fs/nfsd/clients", &sb) == 0 ||
+|             ^                                   ~~
+
+Upstream-Status: Submitted [https://patchwork.kernel.org/project/linux-nfs/patch/20220816024403.2694169-1-raj.khem@gmail.com/]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Cc: Konstantin Khorenko <khorenko@virtuozzo.com>
+Cc: Steve Dickson <steved@redhat.com>
+---
+ support/export/v4clients.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/support/export/v4clients.c b/support/export/v4clients.c
+index 5f15b61..3230251 100644
+--- a/support/export/v4clients.c
++++ b/support/export/v4clients.c
+@@ -26,7 +26,7 @@ void v4clients_init(void)
+ {
+ 	struct stat sb;
+ 
+-	if (!stat("/proc/fs/nfsd/clients", &sb) == 0 ||
++	if (stat("/proc/fs/nfsd/clients", &sb) != 0 ||
+ 	    !S_ISDIR(sb.st_mode))
+ 		return;
+ 	if (clients_fd >= 0)
diff --git a/poky/meta/recipes-connectivity/nfs-utils/nfs-utils/0006-Fix-function-prototypes.patch b/poky/meta/recipes-connectivity/nfs-utils/nfs-utils/0006-Fix-function-prototypes.patch
new file mode 100644
index 0000000..793bc46
--- /dev/null
+++ b/poky/meta/recipes-connectivity/nfs-utils/nfs-utils/0006-Fix-function-prototypes.patch
@@ -0,0 +1,93 @@
+From cf0ffbb5c8fa167376926d12a63613f15aa7602f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 15 Aug 2022 14:50:15 -0700
+Subject: [PATCH] Fix function prototypes
+
+Clang is now erroring out on functions without parameter types
+
+Fixes errors like
+error: a function declaration without a prototype is deprecated in all versions of C [-Werror,-Wstrict-prototypes]
+
+Upstream-Status: Submitted [https://patchwork.kernel.org/project/linux-nfs/patch/20220816024403.2694169-2-raj.khem@gmail.com/]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ support/export/auth.c     | 2 +-
+ support/export/v4root.c   | 2 +-
+ support/export/xtab.c     | 2 +-
+ utils/exportfs/exportfs.c | 4 ++--
+ utils/mount/network.c     | 2 +-
+ 5 files changed, 6 insertions(+), 6 deletions(-)
+
+diff --git a/support/export/auth.c b/support/export/auth.c
+index 03ce4b8..2d7960f 100644
+--- a/support/export/auth.c
++++ b/support/export/auth.c
+@@ -82,7 +82,7 @@ check_useipaddr(void)
+ }
+ 
+ unsigned int
+-auth_reload()
++auth_reload(void)
+ {
+ 	struct stat		stb;
+ 	static ino_t		last_inode;
+diff --git a/support/export/v4root.c b/support/export/v4root.c
+index c12a7d8..fbb0ad5 100644
+--- a/support/export/v4root.c
++++ b/support/export/v4root.c
+@@ -198,7 +198,7 @@ static int v4root_add_parents(nfs_export *exp)
+  * looking for components of the v4 mount.
+  */
+ void
+-v4root_set()
++v4root_set(void)
+ {
+ 	nfs_export	*exp;
+ 	int	i;
+diff --git a/support/export/xtab.c b/support/export/xtab.c
+index c888a80..e210ca9 100644
+--- a/support/export/xtab.c
++++ b/support/export/xtab.c
+@@ -135,7 +135,7 @@ xtab_write(char *xtab, char *xtabtmp, char *lockfn, int is_export)
+ }
+ 
+ int
+-xtab_export_write()
++xtab_export_write(void)
+ {
+ 	return xtab_write(etab.statefn, etab.tmpfn, etab.lockfn, 1);
+ }
+diff --git a/utils/exportfs/exportfs.c b/utils/exportfs/exportfs.c
+index 6ba615d..0897b22 100644
+--- a/utils/exportfs/exportfs.c
++++ b/utils/exportfs/exportfs.c
+@@ -69,14 +69,14 @@ static int _lockfd = -1;
+  * need these additional lockfile() routines.
+  */
+ static void
+-grab_lockfile()
++grab_lockfile(void)
+ {
+ 	_lockfd = open(lockfile, O_CREAT|O_RDWR, 0666);
+ 	if (_lockfd != -1)
+ 		lockf(_lockfd, F_LOCK, 0);
+ }
+ static void
+-release_lockfile()
++release_lockfile(void)
+ {
+ 	if (_lockfd != -1) {
+ 		lockf(_lockfd, F_ULOCK, 0);
+diff --git a/utils/mount/network.c b/utils/mount/network.c
+index ed2f825..01ead49 100644
+--- a/utils/mount/network.c
++++ b/utils/mount/network.c
+@@ -179,7 +179,7 @@ static const unsigned long probe_mnt3_only[] = {
+ 
+ static const unsigned int *nfs_default_proto(void);
+ #ifdef MOUNT_CONFIG
+-static const unsigned int *nfs_default_proto()
++static const unsigned int *nfs_default_proto(void)
+ {
+ 	extern unsigned long config_default_proto;
+ 	/*
diff --git a/poky/meta/recipes-connectivity/nfs-utils/nfs-utils_2.6.1.bb b/poky/meta/recipes-connectivity/nfs-utils/nfs-utils_2.6.1.bb
deleted file mode 100644
index bbed5ae..0000000
--- a/poky/meta/recipes-connectivity/nfs-utils/nfs-utils_2.6.1.bb
+++ /dev/null
@@ -1,145 +0,0 @@
-SUMMARY = "userspace utilities for kernel nfs"
-DESCRIPTION = "The nfs-utils package provides a daemon for the kernel \
-NFS server and related tools."
-HOMEPAGE = "http://nfs.sourceforge.net/"
-SECTION = "console/network"
-
-LICENSE = "MIT & GPL-2.0-or-later & BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://COPYING;md5=95f3a93a5c3c7888de623b46ea085a84"
-
-# util-linux for libblkid
-DEPENDS = "libcap libevent util-linux sqlite3 libtirpc"
-RDEPENDS:${PN} = "${PN}-client"
-RRECOMMENDS:${PN} = "kernel-module-nfsd"
-
-inherit useradd
-
-USERADD_PACKAGES = "${PN}-client"
-USERADD_PARAM:${PN}-client = "--system  --home-dir /var/lib/nfs \
-			      --shell /bin/false --user-group rpcuser"
-
-SRC_URI = "${KERNELORG_MIRROR}/linux/utils/nfs-utils/${PV}/nfs-utils-${PV}.tar.xz \
-           file://nfsserver \
-           file://nfscommon \
-           file://nfs-utils.conf \
-           file://nfs-server.service \
-           file://nfs-mountd.service \
-           file://nfs-statd.service \
-           file://proc-fs-nfsd.mount \
-           file://nfs-utils-debianize-start-statd.patch \
-           file://bugfix-adjust-statd-service-name.patch \
-           file://0001-Makefile.am-fix-undefined-function-for-libnsm.a.patch \
-           file://clang-warnings.patch \
-           "
-SRC_URI[sha256sum] = "60dfcd94a9f3d72a12bc7058d811787ec87a6d593d70da2123faf9aad3d7a1df"
-
-# Only kernel-module-nfsd is required here (but can be built-in)  - the nfsd module will
-# pull in the remainder of the dependencies.
-
-INITSCRIPT_PACKAGES = "${PN} ${PN}-client"
-INITSCRIPT_NAME = "nfsserver"
-INITSCRIPT_PARAMS = "defaults"
-INITSCRIPT_NAME:${PN}-client = "nfscommon"
-INITSCRIPT_PARAMS:${PN}-client = "defaults 19 21"
-
-inherit autotools-brokensep update-rc.d systemd pkgconfig
-
-SYSTEMD_PACKAGES = "${PN} ${PN}-client"
-SYSTEMD_SERVICE:${PN} = "nfs-server.service nfs-mountd.service"
-SYSTEMD_SERVICE:${PN}-client = "nfs-statd.service"
-
-# --enable-uuid is need for cross-compiling
-EXTRA_OECONF = "--with-statduser=rpcuser \
-                --enable-mountconfig \
-                --enable-libmount-mount \
-                --enable-uuid \
-                --disable-gss \
-                --disable-nfsdcltrack \
-                --with-statdpath=/var/lib/nfs/statd \
-                --with-rpcgen=${HOSTTOOLS_DIR}/rpcgen \
-               "
-
-PACKAGECONFIG ??= "tcp-wrappers \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
-"
-PACKAGECONFIG:remove:libc-musl = "tcp-wrappers"
-PACKAGECONFIG[tcp-wrappers] = "--with-tcp-wrappers,--without-tcp-wrappers,tcp-wrappers"
-PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
-# libdevmapper is available in meta-oe
-PACKAGECONFIG[nfsv41] = "--enable-nfsv41,--disable-nfsv41,libdevmapper,libdevmapper"
-# keyutils is available in meta-oe
-PACKAGECONFIG[nfsv4] = "--enable-nfsv4,--disable-nfsv4,keyutils,python3-core"
-
-PACKAGES =+ "${PN}-client ${PN}-mount ${PN}-stats"
-
-CONFFILES:${PN}-client += "${localstatedir}/lib/nfs/etab \
-			   ${localstatedir}/lib/nfs/rmtab \
-			   ${localstatedir}/lib/nfs/xtab \
-			   ${localstatedir}/lib/nfs/statd/state \
-			   ${sysconfdir}/nfsmount.conf"
-
-FILES:${PN}-client = "${sbindir}/*statd \
-		      ${sbindir}/rpc.idmapd ${sbindir}/sm-notify \
-		      ${sbindir}/showmount ${sbindir}/nfsstat \
-		      ${localstatedir}/lib/nfs \
-		      ${sysconfdir}/nfs-utils.conf \
-		      ${sysconfdir}/nfsmount.conf \
-		      ${sysconfdir}/init.d/nfscommon \
-		      ${systemd_system_unitdir}/nfs-statd.service"
-RDEPENDS:${PN}-client = "${PN}-mount rpcbind"
-
-FILES:${PN}-mount = "${base_sbindir}/*mount.nfs*"
-
-FILES:${PN}-stats = "${sbindir}/mountstats ${sbindir}/nfsiostat ${sbindir}/nfsdclnts"
-RDEPENDS:${PN}-stats = "python3-core"
-
-FILES:${PN}-staticdev += "${libdir}/libnfsidmap/*.a"
-
-FILES:${PN} += "${systemd_unitdir} ${libdir}/libnfsidmap/"
-
-do_configure:prepend() {
-	sed -i -e 's,sbindir = /sbin,sbindir = ${base_sbindir},g' \
-		${S}/utils/mount/Makefile.am
-}
-
-# Make clean needed because the package comes with
-# precompiled 64-bit objects that break the build
-do_compile:prepend() {
-	make clean
-}
-
-# Works on systemd only
-HIGH_RLIMIT_NOFILE ??= "4096"
-
-do_install:append () {
-	install -d ${D}${sysconfdir}/init.d
-	install -m 0755 ${WORKDIR}/nfsserver ${D}${sysconfdir}/init.d/nfsserver
-	install -m 0755 ${WORKDIR}/nfscommon ${D}${sysconfdir}/init.d/nfscommon
-
-	install -m 0755 ${WORKDIR}/nfs-utils.conf ${D}${sysconfdir}
-	install -m 0755 ${S}/utils/mount/nfsmount.conf ${D}${sysconfdir}
-
-	install -d ${D}${systemd_system_unitdir}
-	install -m 0644 ${WORKDIR}/nfs-server.service ${D}${systemd_system_unitdir}/
-	install -m 0644 ${WORKDIR}/nfs-mountd.service ${D}${systemd_system_unitdir}/
-	install -m 0644 ${WORKDIR}/nfs-statd.service ${D}${systemd_system_unitdir}/
-	sed -i -e 's,@SBINDIR@,${sbindir},g' \
-		-e 's,@SYSCONFDIR@,${sysconfdir},g' \
-		-e 's,@HIGH_RLIMIT_NOFILE@,${HIGH_RLIMIT_NOFILE},g' \
-		${D}${systemd_system_unitdir}/*.service
-	if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
-		install -m 0644 ${WORKDIR}/proc-fs-nfsd.mount ${D}${systemd_system_unitdir}/
-		install -d ${D}${systemd_system_unitdir}/sysinit.target.wants/
-		ln -sf ../proc-fs-nfsd.mount ${D}${systemd_system_unitdir}/sysinit.target.wants/proc-fs-nfsd.mount
-	fi
-
-	# kernel code as of 3.8 hard-codes this path as a default
-	install -d ${D}/var/lib/nfs/v4recovery
-
-	# chown the directories and files
-	chown -R rpcuser:rpcuser ${D}${localstatedir}/lib/nfs/statd
-	chmod 0644 ${D}${localstatedir}/lib/nfs/statd/state
-
-	# Make python tools use python 3
-	sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${sbindir}/mountstats ${D}${sbindir}/nfsiostat
-}
diff --git a/poky/meta/recipes-connectivity/nfs-utils/nfs-utils_2.6.2.bb b/poky/meta/recipes-connectivity/nfs-utils/nfs-utils_2.6.2.bb
new file mode 100644
index 0000000..4b5c28c
--- /dev/null
+++ b/poky/meta/recipes-connectivity/nfs-utils/nfs-utils_2.6.2.bb
@@ -0,0 +1,150 @@
+SUMMARY = "userspace utilities for kernel nfs"
+DESCRIPTION = "The nfs-utils package provides a daemon for the kernel \
+NFS server and related tools."
+HOMEPAGE = "http://nfs.sourceforge.net/"
+SECTION = "console/network"
+
+LICENSE = "MIT & GPL-2.0-or-later & BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://COPYING;md5=95f3a93a5c3c7888de623b46ea085a84"
+
+# util-linux for libblkid
+DEPENDS = "libcap libevent util-linux sqlite3 libtirpc"
+RDEPENDS:${PN} = "${PN}-client"
+RRECOMMENDS:${PN} = "kernel-module-nfsd"
+
+inherit useradd
+
+USERADD_PACKAGES = "${PN}-client"
+USERADD_PARAM:${PN}-client = "--system  --home-dir /var/lib/nfs \
+			      --shell /bin/false --user-group rpcuser"
+
+SRC_URI = "${KERNELORG_MIRROR}/linux/utils/nfs-utils/${PV}/nfs-utils-${PV}.tar.xz \
+           file://nfsserver \
+           file://nfscommon \
+           file://nfs-utils.conf \
+           file://nfs-server.service \
+           file://nfs-mountd.service \
+           file://nfs-statd.service \
+           file://proc-fs-nfsd.mount \
+           file://nfs-utils-debianize-start-statd.patch \
+           file://bugfix-adjust-statd-service-name.patch \
+           file://0001-Makefile.am-fix-undefined-function-for-libnsm.a.patch \
+           file://clang-warnings.patch \
+           file://0005-mountd-Check-for-return-of-stat-function.patch \
+           file://0006-Fix-function-prototypes.patch \
+           "
+SRC_URI[sha256sum] = "5200873e81c4d610e2462fc262fe18135f2dbe78b7979f95accd159ae64d5011"
+
+# Only kernel-module-nfsd is required here (but can be built-in)  - the nfsd module will
+# pull in the remainder of the dependencies.
+
+INITSCRIPT_PACKAGES = "${PN} ${PN}-client"
+INITSCRIPT_NAME = "nfsserver"
+INITSCRIPT_PARAMS = "defaults"
+INITSCRIPT_NAME:${PN}-client = "nfscommon"
+INITSCRIPT_PARAMS:${PN}-client = "defaults 19 21"
+
+inherit autotools-brokensep update-rc.d systemd pkgconfig
+
+SYSTEMD_PACKAGES = "${PN} ${PN}-client"
+SYSTEMD_SERVICE:${PN} = "nfs-server.service nfs-mountd.service"
+SYSTEMD_SERVICE:${PN}-client = "nfs-statd.service"
+
+# --enable-uuid is needed for cross-compiling
+EXTRA_OECONF = "--with-statduser=rpcuser \
+                --enable-mountconfig \
+                --enable-libmount-mount \
+                --enable-uuid \
+                --disable-gss \
+                --disable-nfsdcltrack \
+                --with-statdpath=/var/lib/nfs/statd \
+                --with-rpcgen=${HOSTTOOLS_DIR}/rpcgen \
+               "
+
+PACKAGECONFIG ??= "tcp-wrappers \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
+"
+PACKAGECONFIG:remove:libc-musl = "tcp-wrappers"
+PACKAGECONFIG[tcp-wrappers] = "--with-tcp-wrappers,--without-tcp-wrappers,tcp-wrappers"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
+# libdevmapper is available in meta-oe
+PACKAGECONFIG[nfsv41] = "--enable-nfsv41,--disable-nfsv41,libdevmapper,libdevmapper"
+# keyutils is available in meta-oe
+PACKAGECONFIG[nfsv4] = "--enable-nfsv4,--disable-nfsv4,keyutils,python3-core"
+
+PACKAGES =+ "${PN}-client ${PN}-mount ${PN}-stats ${PN}-rpcctl"
+
+CONFFILES:${PN}-client += "${localstatedir}/lib/nfs/etab \
+			   ${localstatedir}/lib/nfs/rmtab \
+			   ${localstatedir}/lib/nfs/xtab \
+			   ${localstatedir}/lib/nfs/statd/state \
+			   ${sysconfdir}/nfsmount.conf"
+
+FILES:${PN}-client = "${sbindir}/*statd \
+		      ${sbindir}/rpc.idmapd ${sbindir}/sm-notify \
+		      ${sbindir}/showmount ${sbindir}/nfsstat \
+		      ${localstatedir}/lib/nfs \
+		      ${sysconfdir}/nfs-utils.conf \
+		      ${sysconfdir}/nfsmount.conf \
+		      ${sysconfdir}/init.d/nfscommon \
+		      ${systemd_system_unitdir}/nfs-statd.service"
+RDEPENDS:${PN}-client = "${PN}-mount rpcbind"
+
+FILES:${PN}-mount = "${base_sbindir}/*mount.nfs*"
+
+FILES:${PN}-stats = "${sbindir}/mountstats ${sbindir}/nfsiostat ${sbindir}/nfsdclnts"
+RDEPENDS:${PN}-stats = "python3-core"
+
+FILES:${PN}-rpcctl = "${sbindir}/rpcctl"
+RDEPENDS:${PN}-rpcctl = "python3-core"
+
+FILES:${PN}-staticdev += "${libdir}/libnfsidmap/*.a"
+
+FILES:${PN} += "${systemd_unitdir} ${libdir}/libnfsidmap/ ${nonarch_libdir}/modprobe.d"
+
+do_configure:prepend() {
+	sed -i -e 's,sbindir = /sbin,sbindir = ${base_sbindir},g' \
+		${S}/utils/mount/Makefile.am
+}
+
+# Make clean needed because the package comes with
+# precompiled 64-bit objects that break the build
+do_compile:prepend() {
+	make clean
+}
+
+# Works on systemd only
+HIGH_RLIMIT_NOFILE ??= "4096"
+
+do_install:append () {
+	install -d ${D}${sysconfdir}/init.d
+	install -m 0755 ${WORKDIR}/nfsserver ${D}${sysconfdir}/init.d/nfsserver
+	install -m 0755 ${WORKDIR}/nfscommon ${D}${sysconfdir}/init.d/nfscommon
+
+	install -m 0755 ${WORKDIR}/nfs-utils.conf ${D}${sysconfdir}
+	install -m 0755 ${S}/utils/mount/nfsmount.conf ${D}${sysconfdir}
+
+	install -d ${D}${systemd_system_unitdir}
+	install -m 0644 ${WORKDIR}/nfs-server.service ${D}${systemd_system_unitdir}/
+	install -m 0644 ${WORKDIR}/nfs-mountd.service ${D}${systemd_system_unitdir}/
+	install -m 0644 ${WORKDIR}/nfs-statd.service ${D}${systemd_system_unitdir}/
+	sed -i -e 's,@SBINDIR@,${sbindir},g' \
+		-e 's,@SYSCONFDIR@,${sysconfdir},g' \
+		-e 's,@HIGH_RLIMIT_NOFILE@,${HIGH_RLIMIT_NOFILE},g' \
+		${D}${systemd_system_unitdir}/*.service
+	if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+		install -m 0644 ${WORKDIR}/proc-fs-nfsd.mount ${D}${systemd_system_unitdir}/
+		install -d ${D}${systemd_system_unitdir}/sysinit.target.wants/
+		ln -sf ../proc-fs-nfsd.mount ${D}${systemd_system_unitdir}/sysinit.target.wants/proc-fs-nfsd.mount
+	fi
+
+	# kernel code as of 3.8 hard-codes this path as a default
+	install -d ${D}/var/lib/nfs/v4recovery
+
+	# chown the directories and files
+	chown -R rpcuser:rpcuser ${D}${localstatedir}/lib/nfs/statd
+	chmod 0644 ${D}${localstatedir}/lib/nfs/statd/state
+
+	# Make python tools use python 3
+	sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${sbindir}/mountstats ${D}${sbindir}/nfsiostat
+}
diff --git a/poky/meta/recipes-connectivity/openssh/openssh/ssh_config b/poky/meta/recipes-connectivity/openssh/openssh/ssh_config
index e0d0238..ca70f37 100644
--- a/poky/meta/recipes-connectivity/openssh/openssh/ssh_config
+++ b/poky/meta/recipes-connectivity/openssh/openssh/ssh_config
@@ -1,4 +1,4 @@
-#	$OpenBSD: ssh_config,v 1.33 2017/05/07 23:12:57 djm Exp $
+#	$OpenBSD: ssh_config,v 1.35 2020/07/17 03:43:42 dtucker Exp $
 
 # This is the ssh client system-wide configuration file.  See
 # ssh_config(5) for more information.  This file provides defaults for
@@ -17,6 +17,8 @@
 # list of available options, their meanings and defaults, please see the
 # ssh_config(5) man page.
 
+Include /etc/ssh/ssh_config.d/*.conf
+
 Host *
   ForwardAgent yes
   ForwardX11 yes
@@ -36,7 +38,6 @@
 #   IdentityFile ~/.ssh/id_ecdsa
 #   IdentityFile ~/.ssh/id_ed25519
 #   Port 22
-#   Protocol 2
 #   Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
 #   MACs hmac-md5,hmac-sha1,umac-64@openssh.com
 #   EscapeChar ~
@@ -46,3 +47,4 @@
 #   VisualHostKey no
 #   ProxyCommand ssh -q -W %h:%p gateway.example.com
 #   RekeyLimit 1G 1h
+#   UserKnownHostsFile ~/.ssh/known_hosts.d/%k
diff --git a/poky/meta/recipes-connectivity/openssh/openssh/sshd_config b/poky/meta/recipes-connectivity/openssh/openssh/sshd_config
index 15f061b..e9eaf93 100644
--- a/poky/meta/recipes-connectivity/openssh/openssh/sshd_config
+++ b/poky/meta/recipes-connectivity/openssh/openssh/sshd_config
@@ -1,4 +1,4 @@
-#	$OpenBSD: sshd_config,v 1.102 2018/02/16 02:32:40 djm Exp $
+#	$OpenBSD: sshd_config,v 1.104 2021/07/02 05:11:21 dtucker Exp $
 
 # This is the sshd server system-wide configuration file.  See
 # sshd_config(5) for more information.
@@ -10,6 +10,8 @@
 # possible, but leave them commented.  Uncommented options override the
 # default value.
 
+Include /etc/ssh/sshd_config.d/*.conf
+
 #Port 22
 #AddressFamily any
 #ListenAddress 0.0.0.0
@@ -57,9 +59,9 @@
 #PasswordAuthentication yes
 #PermitEmptyPasswords no
 
-# Change to yes to enable challenge-response passwords (beware issues with
-# some PAM modules and threads)
-ChallengeResponseAuthentication no
+# Change to yes to enable keyboard-interactive authentication (beware issues
+# with some PAM modules and threads)
+KbdInteractiveAuthentication no
 
 # Kerberos options
 #KerberosAuthentication no
@@ -73,13 +75,13 @@
 
 # Set this to 'yes' to enable PAM authentication, account processing,
 # and session processing. If this is enabled, PAM authentication will
-# be allowed through the ChallengeResponseAuthentication and
+# be allowed through the KbdInteractiveAuthentication and
 # PasswordAuthentication.  Depending on your PAM configuration,
-# PAM authentication via ChallengeResponseAuthentication may bypass
+# PAM authentication via KbdInteractiveAuthentication may bypass
 # the setting of "PermitRootLogin without-password".
 # If you just want the PAM account and session checks to run without
 # PAM authentication, then enable this but set PasswordAuthentication
-# and ChallengeResponseAuthentication to 'no'.
+# and KbdInteractiveAuthentication to 'no'.
 #UsePAM no
 
 #AllowAgentForwarding yes
@@ -92,7 +94,6 @@
 #PrintMotd yes
 #PrintLastLog yes
 #TCPKeepAlive yes
-#UseLogin no
 #PermitUserEnvironment no
 Compression no
 ClientAliveInterval 15
diff --git a/poky/meta/recipes-core/dropbear/dropbear.inc b/poky/meta/recipes-core/dropbear/dropbear.inc
deleted file mode 100644
index e170587..0000000
--- a/poky/meta/recipes-core/dropbear/dropbear.inc
+++ /dev/null
@@ -1,128 +0,0 @@
-SUMMARY = "A lightweight SSH and SCP implementation"
-HOMEPAGE = "http://matt.ucc.asn.au/dropbear/dropbear.html"
-DESCRIPTION = "Dropbear is a relatively small SSH server and client. It runs on a variety of POSIX-based platforms. Dropbear is open source software, distributed under a MIT-style license. Dropbear is particularly useful for "embedded"-type Linux (or other Unix) systems, such as wireless routers."
-SECTION = "console/network"
-
-# some files are from other projects and have others license terms:
-#   public domain, OpenSSH 3.5p1, OpenSSH3.6.1p2, PuTTY
-LICENSE = "MIT & BSD-3-Clause & BSD-2-Clause & PD"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=25cf44512b7bc8966a48b6b1a9b7605f"
-
-DEPENDS = "zlib virtual/crypt"
-RPROVIDES:${PN} = "ssh sshd"
-RCONFLICTS:${PN} = "openssh-sshd openssh"
-
-DEPENDS += "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'libpam', '', d)}"
-
-SRC_URI = "http://matt.ucc.asn.au/dropbear/releases/dropbear-${PV}.tar.bz2 \
-           file://0001-urandom-xauth-changes-to-options.h.patch \
-           file://init \
-           file://dropbearkey.service \
-           file://dropbear@.service \
-           file://dropbear.socket \
-           file://dropbear.default \
-           ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \
-           ${@bb.utils.contains('PACKAGECONFIG', 'disable-weak-ciphers', 'file://dropbear-disable-weak-ciphers.patch', '', d)} "
-
-PAM_SRC_URI = "file://0005-dropbear-enable-pam.patch \
-               file://0006-dropbear-configuration-file.patch \
-               file://dropbear"
-
-PAM_PLUGINS = "libpam-runtime \
-	pam-plugin-deny \
-	pam-plugin-permit \
-	pam-plugin-unix \
-	"
-RDEPENDS:${PN} += "${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_PLUGINS}', '', d)}"
-
-inherit autotools update-rc.d systemd
-
-CVE_PRODUCT = "dropbear_ssh"
-
-INITSCRIPT_NAME = "dropbear"
-INITSCRIPT_PARAMS = "defaults 10"
-
-SYSTEMD_SERVICE:${PN} = "dropbear.socket"
-
-SBINCOMMANDS = "dropbear dropbearkey dropbearconvert"
-BINCOMMANDS = "dbclient ssh scp"
-EXTRA_OEMAKE = 'MULTI=1 SCPPROGRESS=1 PROGRAMS="${SBINCOMMANDS} ${BINCOMMANDS}"'
-
-PACKAGECONFIG ?= "disable-weak-ciphers"
-PACKAGECONFIG[system-libtom] = "--disable-bundled-libtom,--enable-bundled-libtom,libtommath libtomcrypt"
-PACKAGECONFIG[disable-weak-ciphers] = ""
-
-EXTRA_OECONF += "\
- ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '--enable-pam', '--disable-pam', d)}"
-
-# This option appends to CFLAGS and LDFLAGS from OE
-# This is causing [textrel] QA warning
-EXTRA_OECONF += "--disable-harden"
-
-# musl does not implement wtmp/logwtmp APIs
-EXTRA_OECONF:append:libc-musl = " --disable-wtmp --disable-lastlog"
-
-do_install() {
-	install -d ${D}${sysconfdir} \
-		${D}${sysconfdir}/init.d \
-		${D}${sysconfdir}/default \
-		${D}${sysconfdir}/dropbear \
-		${D}${bindir} \
-		${D}${sbindir} \
-		${D}${localstatedir}
-
-	install -m 0644 ${WORKDIR}/dropbear.default ${D}${sysconfdir}/default/dropbear
-
-	install -m 0755 dropbearmulti ${D}${sbindir}/
-
-	for i in ${BINCOMMANDS}
-	do
-		# ssh and scp symlinks are created by update-alternatives
-		if [ $i = ssh ] || [ $i = scp ]; then continue; fi
-		ln -s ${sbindir}/dropbearmulti ${D}${bindir}/$i
-	done
-	for i in ${SBINCOMMANDS}
-	do
-		ln -s ./dropbearmulti ${D}${sbindir}/$i
-	done
-	sed -e 's,/etc,${sysconfdir},g' \
-		-e 's,/usr/sbin,${sbindir},g' \
-		-e 's,/var,${localstatedir},g' \
-		-e 's,/usr/bin,${bindir},g' \
-		-e 's,/usr,${prefix},g' ${WORKDIR}/init > ${D}${sysconfdir}/init.d/dropbear
-	chmod 755 ${D}${sysconfdir}/init.d/dropbear
-	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}" ]; then
-		install -d ${D}${sysconfdir}/pam.d
-		install -m 0644 ${WORKDIR}/dropbear  ${D}${sysconfdir}/pam.d/
-	fi
-
-	# deal with systemd unit files
-	install -d ${D}${systemd_system_unitdir}
-	install -m 0644 ${WORKDIR}/dropbearkey.service ${D}${systemd_system_unitdir}
-	install -m 0644 ${WORKDIR}/dropbear@.service ${D}${systemd_system_unitdir}
-	install -m 0644 ${WORKDIR}/dropbear.socket ${D}${systemd_system_unitdir}
-	sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
-		-e 's,@BINDIR@,${bindir},g' \
-		-e 's,@SBINDIR@,${sbindir},g' \
-		${D}${systemd_system_unitdir}/dropbear.socket ${D}${systemd_system_unitdir}/*.service
-}
-
-inherit update-alternatives
-
-ALTERNATIVE_PRIORITY = "20"
-ALTERNATIVE:${PN} = "${@bb.utils.filter('BINCOMMANDS', 'scp ssh', d)}"
-
-ALTERNATIVE_TARGET = "${sbindir}/dropbearmulti"
-
-pkg_postrm:${PN} () {
-  if [ -f "${sysconfdir}/dropbear/dropbear_rsa_host_key" ]; then
-        rm ${sysconfdir}/dropbear/dropbear_rsa_host_key
-  fi
-  if [ -f "${sysconfdir}/dropbear/dropbear_dss_host_key" ]; then
-        rm ${sysconfdir}/dropbear/dropbear_dss_host_key
-  fi
-}
-
-CONFFILES:${PN} = "${sysconfdir}/default/dropbear"
-
-FILES:${PN} += "${bindir}"
diff --git a/poky/meta/recipes-core/dropbear/dropbear_2022.82.bb b/poky/meta/recipes-core/dropbear/dropbear_2022.82.bb
index 154a407..2de243b 100644
--- a/poky/meta/recipes-core/dropbear/dropbear_2022.82.bb
+++ b/poky/meta/recipes-core/dropbear/dropbear_2022.82.bb
@@ -1,3 +1,130 @@
-require dropbear.inc
+SUMMARY = "A lightweight SSH and SCP implementation"
+HOMEPAGE = "http://matt.ucc.asn.au/dropbear/dropbear.html"
+DESCRIPTION = "Dropbear is a relatively small SSH server and client. It runs on a variety of POSIX-based platforms. Dropbear is open source software, distributed under a MIT-style license. Dropbear is particularly useful for "embedded"-type Linux (or other Unix) systems, such as wireless routers."
+SECTION = "console/network"
+
+# some files are from other projects and have other license terms:
+#   public domain, OpenSSH 3.5p1, OpenSSH3.6.1p2, PuTTY
+LICENSE = "MIT & BSD-3-Clause & BSD-2-Clause & PD"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=25cf44512b7bc8966a48b6b1a9b7605f"
+
+DEPENDS = "zlib virtual/crypt"
+RPROVIDES:${PN} = "ssh sshd"
+RCONFLICTS:${PN} = "openssh-sshd openssh"
+
+DEPENDS += "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'libpam', '', d)}"
+
+SRC_URI = "http://matt.ucc.asn.au/dropbear/releases/dropbear-${PV}.tar.bz2 \
+           file://0001-urandom-xauth-changes-to-options.h.patch \
+           file://init \
+           file://dropbearkey.service \
+           file://dropbear@.service \
+           file://dropbear.socket \
+           file://dropbear.default \
+           ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \
+           ${@bb.utils.contains('PACKAGECONFIG', 'disable-weak-ciphers', 'file://dropbear-disable-weak-ciphers.patch', '', d)} "
 
 SRC_URI[sha256sum] = "3a038d2bbc02bf28bbdd20c012091f741a3ec5cbe460691811d714876aad75d1"
+
+PAM_SRC_URI = "file://0005-dropbear-enable-pam.patch \
+               file://0006-dropbear-configuration-file.patch \
+               file://dropbear"
+
+PAM_PLUGINS = "libpam-runtime \
+	pam-plugin-deny \
+	pam-plugin-permit \
+	pam-plugin-unix \
+	"
+RDEPENDS:${PN} += "${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_PLUGINS}', '', d)}"
+
+inherit autotools update-rc.d systemd
+
+CVE_PRODUCT = "dropbear_ssh"
+
+INITSCRIPT_NAME = "dropbear"
+INITSCRIPT_PARAMS = "defaults 10"
+
+SYSTEMD_SERVICE:${PN} = "dropbear.socket"
+
+SBINCOMMANDS = "dropbear dropbearkey dropbearconvert"
+BINCOMMANDS = "dbclient ssh scp"
+EXTRA_OEMAKE = 'MULTI=1 SCPPROGRESS=1 PROGRAMS="${SBINCOMMANDS} ${BINCOMMANDS}"'
+
+PACKAGECONFIG ?= "disable-weak-ciphers"
+PACKAGECONFIG[system-libtom] = "--disable-bundled-libtom,--enable-bundled-libtom,libtommath libtomcrypt"
+PACKAGECONFIG[disable-weak-ciphers] = ""
+
+EXTRA_OECONF += "\
+ ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '--enable-pam', '--disable-pam', d)}"
+
+# This option appends to CFLAGS and LDFLAGS from OE
+# This is causing [textrel] QA warning
+EXTRA_OECONF += "--disable-harden"
+
+# musl does not implement wtmp/logwtmp APIs
+EXTRA_OECONF:append:libc-musl = " --disable-wtmp --disable-lastlog"
+
+do_install() {
+	install -d ${D}${sysconfdir} \
+		${D}${sysconfdir}/init.d \
+		${D}${sysconfdir}/default \
+		${D}${sysconfdir}/dropbear \
+		${D}${bindir} \
+		${D}${sbindir} \
+		${D}${localstatedir}
+
+	install -m 0644 ${WORKDIR}/dropbear.default ${D}${sysconfdir}/default/dropbear
+
+	install -m 0755 dropbearmulti ${D}${sbindir}/
+
+	for i in ${BINCOMMANDS}
+	do
+		# ssh and scp symlinks are created by update-alternatives
+		if [ $i = ssh ] || [ $i = scp ]; then continue; fi
+		ln -s ${sbindir}/dropbearmulti ${D}${bindir}/$i
+	done
+	for i in ${SBINCOMMANDS}
+	do
+		ln -s ./dropbearmulti ${D}${sbindir}/$i
+	done
+	sed -e 's,/etc,${sysconfdir},g' \
+		-e 's,/usr/sbin,${sbindir},g' \
+		-e 's,/var,${localstatedir},g' \
+		-e 's,/usr/bin,${bindir},g' \
+		-e 's,/usr,${prefix},g' ${WORKDIR}/init > ${D}${sysconfdir}/init.d/dropbear
+	chmod 755 ${D}${sysconfdir}/init.d/dropbear
+	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}" ]; then
+		install -d ${D}${sysconfdir}/pam.d
+		install -m 0644 ${WORKDIR}/dropbear  ${D}${sysconfdir}/pam.d/
+	fi
+
+	# deal with systemd unit files
+	install -d ${D}${systemd_system_unitdir}
+	install -m 0644 ${WORKDIR}/dropbearkey.service ${D}${systemd_system_unitdir}
+	install -m 0644 ${WORKDIR}/dropbear@.service ${D}${systemd_system_unitdir}
+	install -m 0644 ${WORKDIR}/dropbear.socket ${D}${systemd_system_unitdir}
+	sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
+		-e 's,@BINDIR@,${bindir},g' \
+		-e 's,@SBINDIR@,${sbindir},g' \
+		${D}${systemd_system_unitdir}/dropbear.socket ${D}${systemd_system_unitdir}/*.service
+}
+
+inherit update-alternatives
+
+ALTERNATIVE_PRIORITY = "20"
+ALTERNATIVE:${PN} = "${@bb.utils.filter('BINCOMMANDS', 'scp ssh', d)}"
+
+ALTERNATIVE_TARGET = "${sbindir}/dropbearmulti"
+
+pkg_postrm:${PN} () {
+  if [ -f "${sysconfdir}/dropbear/dropbear_rsa_host_key" ]; then
+        rm ${sysconfdir}/dropbear/dropbear_rsa_host_key
+  fi
+  if [ -f "${sysconfdir}/dropbear/dropbear_dss_host_key" ]; then
+        rm ${sysconfdir}/dropbear/dropbear_dss_host_key
+  fi
+}
+
+CONFFILES:${PN} = "${sysconfdir}/default/dropbear"
+
+FILES:${PN} += "${bindir}"
diff --git a/poky/meta/recipes-core/ell/ell/0001-build-fix-time.h-related-breakage-on-musl.patch b/poky/meta/recipes-core/ell/ell/0001-build-fix-time.h-related-breakage-on-musl.patch
deleted file mode 100644
index 02b5a9f..0000000
--- a/poky/meta/recipes-core/ell/ell/0001-build-fix-time.h-related-breakage-on-musl.patch
+++ /dev/null
@@ -1,79 +0,0 @@
-From cd9513a025174181b66ac13de45813f6e15727d3 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Milan=20P=2E=20Stani=C4=87?= <mps@arvanta.net>
-Date: Mon, 6 Jun 2022 22:05:39 +0200
-Subject: [PATCH] build: fix time.h related breakage on musl
-
-missing time.h for struct timeval usage
-forward declaration of struct timeval in time-private.h
-
-Upstream-Status: Backport
-Signed-off-by: Alexander Kanavin <alex@linutronix.de>
----
- ell/dhcp-transport.c  | 1 +
- ell/dhcp6-transport.c | 1 +
- ell/icmp6.c           | 1 +
- ell/time-private.h    | 2 +-
- ell/time.c            | 1 +
- 5 files changed, 5 insertions(+), 1 deletion(-)
-
-diff --git a/ell/dhcp-transport.c b/ell/dhcp-transport.c
-index ef030de..c4cf0ca 100644
---- a/ell/dhcp-transport.c
-+++ b/ell/dhcp-transport.c
-@@ -40,6 +40,7 @@
- #include <linux/filter.h>
- #include <net/if_arp.h>
- #include <errno.h>
-+#include <sys/time.h>
- 
- #include "io.h"
- #include "util.h"
-diff --git a/ell/dhcp6-transport.c b/ell/dhcp6-transport.c
-index 30c425f..5ff6516 100644
---- a/ell/dhcp6-transport.c
-+++ b/ell/dhcp6-transport.c
-@@ -35,6 +35,7 @@
- #include <net/if.h>
- #include <unistd.h>
- #include <errno.h>
-+#include <sys/time.h>
- 
- #include "private.h"
- #include "missing.h"
-diff --git a/ell/icmp6.c b/ell/icmp6.c
-index 368977f..7319903 100644
---- a/ell/icmp6.c
-+++ b/ell/icmp6.c
-@@ -36,6 +36,7 @@
- #include <net/if.h>
- #include <unistd.h>
- #include <errno.h>
-+#include <sys/time.h>
- 
- #include "private.h"
- #include "useful.h"
-diff --git a/ell/time-private.h b/ell/time-private.h
-index 5295d94..83c23dd 100644
---- a/ell/time-private.h
-+++ b/ell/time-private.h
-@@ -19,7 +19,7 @@
-  *  Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
-  *
-  */
--
-+struct timeval;
- uint64_t _time_pick_interval_secs(uint32_t min_secs, uint32_t max_secs);
- uint64_t _time_fuzz_msecs(uint64_t ms);
- uint64_t _time_fuzz_secs(uint32_t secs, uint32_t max_offset);
-diff --git a/ell/time.c b/ell/time.c
-index 10e10b0..41e5725 100644
---- a/ell/time.c
-+++ b/ell/time.c
-@@ -26,6 +26,7 @@
- 
- #define _GNU_SOURCE
- #include <time.h>
-+#include <sys/time.h>
- 
- #include "time.h"
- #include "time-private.h"
diff --git a/poky/meta/recipes-core/ell/ell_0.51.bb b/poky/meta/recipes-core/ell/ell_0.51.bb
deleted file mode 100644
index 806833b..0000000
--- a/poky/meta/recipes-core/ell/ell_0.51.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-SUMMARY  = "Embedded Linux Library"
-HOMEPAGE = "https://01.org/ell"
-DESCRIPTION = "The Embedded Linux Library (ELL) provides core, \
-low-level functionality for system daemons. It typically has no \
-dependencies other than the Linux kernel, C standard library, and \
-libdl (for dynamic linking). While ELL is designed to be efficient \
-and compact enough for use on embedded Linux platforms, it is not \
-limited to resource-constrained systems."
-SECTION = "libs"
-LICENSE  = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=fb504b67c50331fc78734fed90fb0e09"
-
-DEPENDS = "dbus"
-
-inherit autotools pkgconfig
-
-SRC_URI = "https://mirrors.edge.kernel.org/pub/linux/libs/${BPN}/${BPN}-${PV}.tar.xz \
-           file://0001-build-fix-time.h-related-breakage-on-musl.patch \
-           "
-SRC_URI[sha256sum] = "ba86cfa4aaf10151443edd63a7687914465d969f5dda00a2c1fcb11bd85e417f"
-
-do_configure:prepend () {
-    mkdir -p ${S}/build-aux
-}
diff --git a/poky/meta/recipes-core/ell/ell_0.52.bb b/poky/meta/recipes-core/ell/ell_0.52.bb
new file mode 100644
index 0000000..b2af0d0
--- /dev/null
+++ b/poky/meta/recipes-core/ell/ell_0.52.bb
@@ -0,0 +1,22 @@
+SUMMARY  = "Embedded Linux Library"
+HOMEPAGE = "https://01.org/ell"
+DESCRIPTION = "The Embedded Linux Library (ELL) provides core, \
+low-level functionality for system daemons. It typically has no \
+dependencies other than the Linux kernel, C standard library, and \
+libdl (for dynamic linking). While ELL is designed to be efficient \
+and compact enough for use on embedded Linux platforms, it is not \
+limited to resource-constrained systems."
+SECTION = "libs"
+LICENSE  = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=fb504b67c50331fc78734fed90fb0e09"
+
+DEPENDS = "dbus"
+
+inherit autotools pkgconfig
+
+SRC_URI = "https://mirrors.edge.kernel.org/pub/linux/libs/${BPN}/${BPN}-${PV}.tar.xz"
+SRC_URI[sha256sum] = "83099b14beda2b253a2c69460f9613c5e955b63349e3c00cf2fd506f5b3ba7d0"
+
+do_configure:prepend () {
+    mkdir -p ${S}/build-aux
+}
diff --git a/poky/meta/recipes-core/glib-networking/glib-networking_2.72.1.bb b/poky/meta/recipes-core/glib-networking/glib-networking_2.72.1.bb
deleted file mode 100644
index 41f18d1..0000000
--- a/poky/meta/recipes-core/glib-networking/glib-networking_2.72.1.bb
+++ /dev/null
@@ -1,38 +0,0 @@
-SUMMARY = "GLib networking extensions"
-DESCRIPTION = "glib-networking contains the implementations of certain GLib networking features that cannot be implemented directly in GLib itself because of their dependencies."
-HOMEPAGE = "https://gitlab.gnome.org/GNOME/glib-networking/"
-BUGTRACKER = "http://bugzilla.gnome.org"
-
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-SECTION = "libs"
-DEPENDS = "glib-2.0"
-
-SRC_URI[archive.sha256sum] = "6fc1bedc8062484dc8a0204965995ef2367c3db5c934058ff1607e5a24d95a74"
-
-PACKAGECONFIG ??= "openssl ${@bb.utils.contains('PTEST_ENABLED', '1', 'tests', '', d)}"
-
-PACKAGECONFIG[gnutls] = "-Dgnutls=enabled,-Dgnutls=disabled,gnutls"
-PACKAGECONFIG[openssl] = "-Dopenssl=enabled,-Dopenssl=disabled,openssl"
-PACKAGECONFIG[libproxy] = "-Dlibproxy=enabled,-Dlibproxy=disabled,libproxy"
-PACKAGECONFIG[tests] = "-Dinstalled_tests=true,-Dinstalled_tests=false"
-
-EXTRA_OEMESON = "-Dgnome_proxy=disabled"
-
-GNOMEBASEBUILDCLASS = "meson"
-inherit gnomebase gettext upstream-version-is-even gio-module-cache ptest-gnome
-
-SRC_URI += "file://run-ptest"
-
-FILES:${PN} += "\
-                ${libdir}/gio/modules/libgio*.so \
-                ${datadir}/dbus-1/services/ \
-                ${systemd_user_unitdir} \
-                "
-FILES:${PN}-dev += "${libdir}/gio/modules/libgio*.la"
-FILES:${PN}-staticdev += "${libdir}/gio/modules/libgio*.a"
-
-RDEPENDS:${PN}-ptest += "bash"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-core/glib-networking/glib-networking_2.72.2.bb b/poky/meta/recipes-core/glib-networking/glib-networking_2.72.2.bb
new file mode 100644
index 0000000..746d1bc
--- /dev/null
+++ b/poky/meta/recipes-core/glib-networking/glib-networking_2.72.2.bb
@@ -0,0 +1,38 @@
+SUMMARY = "GLib networking extensions"
+DESCRIPTION = "glib-networking contains the implementations of certain GLib networking features that cannot be implemented directly in GLib itself because of their dependencies."
+HOMEPAGE = "https://gitlab.gnome.org/GNOME/glib-networking/"
+BUGTRACKER = "http://bugzilla.gnome.org"
+
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+SECTION = "libs"
+DEPENDS = "glib-2.0"
+
+SRC_URI[archive.sha256sum] = "cd2a084c7bb91d78e849fb55d40e472f6d8f6862cddc9f12c39149359ba18268"
+
+PACKAGECONFIG ??= "openssl ${@bb.utils.contains('PTEST_ENABLED', '1', 'tests', '', d)}"
+
+PACKAGECONFIG[gnutls] = "-Dgnutls=enabled,-Dgnutls=disabled,gnutls"
+PACKAGECONFIG[openssl] = "-Dopenssl=enabled,-Dopenssl=disabled,openssl"
+PACKAGECONFIG[libproxy] = "-Dlibproxy=enabled,-Dlibproxy=disabled,libproxy"
+PACKAGECONFIG[tests] = "-Dinstalled_tests=true,-Dinstalled_tests=false"
+
+EXTRA_OEMESON = "-Dgnome_proxy=disabled"
+
+GNOMEBASEBUILDCLASS = "meson"
+inherit gnomebase gettext upstream-version-is-even gio-module-cache ptest-gnome
+
+SRC_URI += "file://run-ptest"
+
+FILES:${PN} += "\
+                ${libdir}/gio/modules/libgio*.so \
+                ${datadir}/dbus-1/services/ \
+                ${systemd_user_unitdir} \
+                "
+FILES:${PN}-dev += "${libdir}/gio/modules/libgio*.la"
+FILES:${PN}-staticdev += "${libdir}/gio/modules/libgio*.a"
+
+RDEPENDS:${PN}-ptest += "bash"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-core/glibc/cross-localedef-native_2.35.bb b/poky/meta/recipes-core/glibc/cross-localedef-native_2.35.bb
deleted file mode 100644
index b7b54e9..0000000
--- a/poky/meta/recipes-core/glibc/cross-localedef-native_2.35.bb
+++ /dev/null
@@ -1,54 +0,0 @@
-SUMMARY = "Cross locale generation tool for glibc"
-HOMEPAGE = "http://www.gnu.org/software/libc/libc.html"
-SECTION = "libs"
-LICENSE = "LGPL-2.1-only"
-
-LIC_FILES_CHKSUM = "file://LICENSES;md5=1541fd8f5e8f1579512bf05f533371ba \
-      file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-      file://posix/rxspencer/COPYRIGHT;md5=dc5485bb394a13b2332ec1c785f5d83a \
-      file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c"
-
-require glibc-version.inc
-
-# Tell autotools that we're working in the localedef directory
-#
-AUTOTOOLS_SCRIPT_PATH = "${S}/localedef"
-
-inherit autotools
-inherit native
-
-FILESEXTRAPATHS =. "${FILE_DIRNAME}/${PN}:${FILE_DIRNAME}/glibc:"
-
-SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \
-           git://github.com/kraj/localedef;branch=master;name=localedef;destsuffix=git/localedef;protocol=https \
-           \
-           file://0001-localedef-Add-hardlink-resolver-from-util-linux.patch \
-           file://0002-localedef-fix-ups-hardlink-to-make-it-compile.patch \
-           \
-           file://0010-eglibc-Cross-building-and-testing-instructions.patch \
-           file://0011-eglibc-Help-bootstrap-cross-toolchain.patch \
-           file://0012-eglibc-Resolve-__fpscr_values-on-SH4.patch \
-           file://0013-eglibc-Forward-port-cross-locale-generation-support.patch \
-           file://0014-localedef-add-to-archive-uses-a-hard-coded-locale-pa.patch \
-           file://0021-Replace-echo-with-printf-builtin-in-nscd-init-script.patch \
-           file://0023-timezone-Make-shell-interpreter-overridable-in-tzsel.patch \
-           "
-# Makes for a rather long rev (22 characters), but...
-#
-SRCREV_FORMAT = "glibc_localedef"
-
-S = "${WORKDIR}/git"
-
-EXTRA_OECONF = "--with-glibc=${S}"
-
-# We do not need bash to run tzselect script, the default is to use
-# bash but it can be configured by setting KSHELL Makefile variable
-EXTRA_OEMAKE += "KSHELL=/bin/sh"
-
-CFLAGS += "-fgnu89-inline -std=gnu99 -DIS_IN\(x\)='0'"
-
-do_install() {
-	install -d ${D}${bindir}
-	install -m 0755 ${B}/localedef ${D}${bindir}/cross-localedef
-	install -m 0755 ${B}/cross-localedef-hardlink ${D}${bindir}/cross-localedef-hardlink
-}
diff --git a/poky/meta/recipes-core/glibc/cross-localedef-native_2.36.bb b/poky/meta/recipes-core/glibc/cross-localedef-native_2.36.bb
new file mode 100644
index 0000000..f4ea763
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/cross-localedef-native_2.36.bb
@@ -0,0 +1,54 @@
+SUMMARY = "Cross locale generation tool for glibc"
+HOMEPAGE = "http://www.gnu.org/software/libc/libc.html"
+SECTION = "libs"
+LICENSE = "LGPL-2.1-only"
+
+LIC_FILES_CHKSUM = "file://LICENSES;md5=1541fd8f5e8f1579512bf05f533371ba \
+      file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+      file://posix/rxspencer/COPYRIGHT;md5=dc5485bb394a13b2332ec1c785f5d83a \
+      file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c"
+
+require glibc-version.inc
+
+# Tell autotools that we're working in the localedef directory
+#
+AUTOTOOLS_SCRIPT_PATH = "${S}/localedef"
+
+inherit autotools
+inherit native
+
+FILESEXTRAPATHS =. "${FILE_DIRNAME}/${PN}:${FILE_DIRNAME}/glibc:"
+
+SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \
+           git://github.com/kraj/localedef;branch=master;name=localedef;destsuffix=git/localedef;protocol=https \
+           \
+           file://0001-localedef-Add-hardlink-resolver-from-util-linux.patch \
+           file://0002-localedef-fix-ups-hardlink-to-make-it-compile.patch \
+           \
+           file://0010-eglibc-Cross-building-and-testing-instructions.patch \
+           file://0011-eglibc-Help-bootstrap-cross-toolchain.patch \
+           file://0012-eglibc-Resolve-__fpscr_values-on-SH4.patch \
+           file://0013-eglibc-Forward-port-cross-locale-generation-support.patch \
+           file://0014-localedef-add-to-archive-uses-a-hard-coded-locale-pa.patch \
+           file://0019-Replace-echo-with-printf-builtin-in-nscd-init-script.patch \
+           file://0021-timezone-Make-shell-interpreter-overridable-in-tzsel.patch \
+           "
+# Makes for a rather long rev (22 characters), but...
+#
+SRCREV_FORMAT = "glibc_localedef"
+
+S = "${WORKDIR}/git"
+
+EXTRA_OECONF = "--with-glibc=${S}"
+
+# We do not need bash to run tzselect script, the default is to use
+# bash but it can be configured by setting KSHELL Makefile variable
+EXTRA_OEMAKE += "KSHELL=/bin/sh"
+
+CFLAGS += "-fgnu89-inline -std=gnu99 -DIS_IN\(x\)='0'"
+
+do_install() {
+	install -d ${D}${bindir}
+	install -m 0755 ${B}/localedef ${D}${bindir}/cross-localedef
+	install -m 0755 ${B}/cross-localedef-hardlink ${D}${bindir}/cross-localedef-hardlink
+}
diff --git a/poky/meta/recipes-core/glibc/glibc-common.inc b/poky/meta/recipes-core/glibc/glibc-common.inc
index 90a6a53..fba172d 100644
--- a/poky/meta/recipes-core/glibc/glibc-common.inc
+++ b/poky/meta/recipes-core/glibc/glibc-common.inc
@@ -22,4 +22,4 @@
 #
 COMPATIBLE_HOST:libc-musl:class-target = "null"
 
-PV = "2.35"
+PV = "2.36"
diff --git a/poky/meta/recipes-core/glibc/glibc-locale_2.35.bb b/poky/meta/recipes-core/glibc/glibc-locale_2.36.bb
similarity index 100%
rename from poky/meta/recipes-core/glibc/glibc-locale_2.35.bb
rename to poky/meta/recipes-core/glibc/glibc-locale_2.36.bb
diff --git a/poky/meta/recipes-core/glibc/glibc-mtrace_2.35.bb b/poky/meta/recipes-core/glibc/glibc-mtrace_2.36.bb
similarity index 100%
rename from poky/meta/recipes-core/glibc/glibc-mtrace_2.35.bb
rename to poky/meta/recipes-core/glibc/glibc-mtrace_2.36.bb
diff --git a/poky/meta/recipes-core/glibc/glibc-scripts_2.35.bb b/poky/meta/recipes-core/glibc/glibc-scripts_2.36.bb
similarity index 100%
rename from poky/meta/recipes-core/glibc/glibc-scripts_2.35.bb
rename to poky/meta/recipes-core/glibc/glibc-scripts_2.36.bb
diff --git a/poky/meta/recipes-core/glibc/glibc-tests_2.35.bb b/poky/meta/recipes-core/glibc/glibc-tests_2.35.bb
deleted file mode 100644
index 96d0569..0000000
--- a/poky/meta/recipes-core/glibc/glibc-tests_2.35.bb
+++ /dev/null
@@ -1,120 +0,0 @@
-require glibc_${PV}.bb
-require glibc-tests.inc
-
-inherit ptest features_check
-REQUIRED_DISTRO_FEATURES = "ptest"
-
-SRC_URI:append = " \
-	file://reproducible-paths.patch \
-	file://run-ptest \
-"
-
-SUMMARY = "glibc tests to be run with ptest"
-
-# Erase some variables already set by glibc_${PV}
-python __anonymous() {
-       # Remove packages provided by glibc build, we only need a subset of them
-       d.setVar("PACKAGES", "${PN} ${PN}-ptest")
-
-       d.setVar("PROVIDES", "${PN} ${PN}-ptest")
-       d.setVar("RPROVIDES", "${PN} ${PN}-ptest")
-
-       bbclassextend = d.getVar("BBCLASSEXTEND").replace("nativesdk", "").strip()
-       d.setVar("BBCLASSEXTEND", bbclassextend)
-       d.setVar("RRECOMMENDS", "")
-       d.setVar("SYSTEMD_SERVICE:nscd", "")
-       d.setVar("SYSTEMD_PACKAGES", "")
-}
-
-# Remove any leftovers from original glibc recipe
-RPROVIDES:${PN} = "${PN}"
-RRECOMMENDS:${PN} = ""
-RDEPENDS:${PN} = " glibc sed"
-DEPENDS:append = " sed"
-
-export oe_srcdir="${exec_prefix}/src/debug/glibc/${PV}/"
-
-# Just build tests for target - do not run them
-do_check:append () {
-	oe_runmake -i check run-built-tests=no
-}
-addtask do_check after do_compile before do_install_ptest_base
-
-glibc_strip_build_directory () {
-	# Delete all non executable files from build directory
-	find ${B} ! -executable -type f -delete
-
-	# Remove build dynamic libraries and links to them as
-	# those are already installed in the target device
-	find ${B} -type f -name "*.so" -delete
-	find ${B} -type l -name "*.so*" -delete
-
-	# Remove headers (installed with glibc)
-	find ${B} -type f -name "*.h" -delete
-
-	find ${B} -type f -name "isomac" -delete
-	find ${B} -type f -name "annexc" -delete
-}
-
-do_install_ptest_base () {
-	glibc_strip_build_directory
-
-	ls -r ${B}/*/*-time64 > ${B}/tst_time64
-
-	# Remove '-time64' suffix - those tests are also time related
-	sed -e "s/-time64$//" ${B}/tst_time64 > ${B}/tst_time_tmp
-	tst_time=$(cat ${B}/tst_time_tmp ${B}/tst_time64)
-
-	rm ${B}/tst_time_tmp ${B}/tst_time64
-	echo "${tst_time}"
-
-	# Install build test programs to the image
-	install -d ${D}${PTEST_PATH}/tests/glibc-ptest/
-
-	for f in "${tst_time}"
-	do
-	    cp -r ${f} ${D}${PTEST_PATH}/tests/glibc-ptest/
-	done
-
-	install -d ${D}${PTEST_PATH}
-	cp ${WORKDIR}/run-ptest ${D}${PTEST_PATH}/
-
-}
-
-# The datadir directory is required to allow core (and reused)
-# glibc cleanup function to finish correctly, as this directory
-# is not created for ptests
-stash_locale_package_cleanup:prepend () {
-	mkdir -p ${PKGD}${datadir}
-}
-
-stash_locale_sysroot_cleanup:prepend () {
-	mkdir -p ${SYSROOT_DESTDIR}${datadir}
-}
-
-# Prevent the do_package() task to set 'libc6' prefix
-# for glibc tests related packages
-python populate_packages:prepend () {
-    if d.getVar('DEBIAN_NAMES'):
-        d.setVar('DEBIAN_NAMES', '')
-}
-
-FILES:${PN} = "${PTEST_PATH}/* /usr/src/debug/${PN}/*"
-
-EXCLUDE_FROM_SHLIBS = "1"
-
-# Install debug data in .debug and sources in /usr/src/debug
-# It is more handy to have _all_ the sources and symbols in one
-# place (package) as this recipe will be used for validation and
-# debugging.
-PACKAGE_DEBUG_SPLIT_STYLE = ".debug"
-
-# glibc test cases violate by default some Yocto/OE checks (staticdev,
-# textrel)
-# 'debug-files' - add everything (including debug) into one package
-#                 (no need to install/build *-src package)
-INSANE_SKIP:${PN} += "staticdev textrel debug-files rpaths"
-
-deltask do_stash_locale
-do_install[noexec] = "1"
-do_populate_sysroot[noexec] = "1"
diff --git a/poky/meta/recipes-core/glibc/glibc-tests_2.36.bb b/poky/meta/recipes-core/glibc/glibc-tests_2.36.bb
new file mode 100644
index 0000000..aca9675
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc-tests_2.36.bb
@@ -0,0 +1,119 @@
+require glibc_${PV}.bb
+require glibc-tests.inc
+
+inherit ptest features_check
+REQUIRED_DISTRO_FEATURES = "ptest"
+
+SRC_URI:append = " \
+	file://run-ptest \
+"
+
+SUMMARY = "glibc tests to be run with ptest"
+
+# Erase some variables already set by glibc_${PV}
+python __anonymous() {
+       # Remove packages provided by glibc build, we only need a subset of them
+       d.setVar("PACKAGES", "${PN} ${PN}-ptest")
+
+       d.setVar("PROVIDES", "${PN} ${PN}-ptest")
+       d.setVar("RPROVIDES", "${PN} ${PN}-ptest")
+
+       bbclassextend = d.getVar("BBCLASSEXTEND").replace("nativesdk", "").strip()
+       d.setVar("BBCLASSEXTEND", bbclassextend)
+       d.setVar("RRECOMMENDS", "")
+       d.setVar("SYSTEMD_SERVICE:nscd", "")
+       d.setVar("SYSTEMD_PACKAGES", "")
+}
+
+# Remove any leftovers from original glibc recipe
+RPROVIDES:${PN} = "${PN}"
+RRECOMMENDS:${PN} = ""
+RDEPENDS:${PN} = " glibc sed"
+DEPENDS:append = " sed"
+
+export oe_srcdir="${exec_prefix}/src/debug/glibc/${PV}/"
+
+# Just build tests for target - do not run them
+do_check:append () {
+	oe_runmake -i check run-built-tests=no
+}
+addtask do_check after do_compile before do_install_ptest_base
+
+glibc_strip_build_directory () {
+	# Delete all non executable files from build directory
+	find ${B} ! -executable -type f -delete
+
+	# Remove build dynamic libraries and links to them as
+	# those are already installed in the target device
+	find ${B} -type f -name "*.so" -delete
+	find ${B} -type l -name "*.so*" -delete
+
+	# Remove headers (installed with glibc)
+	find ${B} -type f -name "*.h" -delete
+
+	find ${B} -type f -name "isomac" -delete
+	find ${B} -type f -name "annexc" -delete
+}
+
+do_install_ptest_base () {
+	glibc_strip_build_directory
+
+	ls -r ${B}/*/*-time64 > ${B}/tst_time64
+
+	# Remove '-time64' suffix - those tests are also time related
+	sed -e "s/-time64$//" ${B}/tst_time64 > ${B}/tst_time_tmp
+	tst_time=$(cat ${B}/tst_time_tmp ${B}/tst_time64)
+
+	rm ${B}/tst_time_tmp ${B}/tst_time64
+	echo "${tst_time}"
+
+	# Install build test programs to the image
+	install -d ${D}${PTEST_PATH}/tests/glibc-ptest/
+
+	for f in "${tst_time}"
+	do
+	    cp -r ${f} ${D}${PTEST_PATH}/tests/glibc-ptest/
+	done
+
+	install -d ${D}${PTEST_PATH}
+	cp ${WORKDIR}/run-ptest ${D}${PTEST_PATH}/
+
+}
+
+# The datadir directory is required to allow core (and reused)
+# glibc cleanup function to finish correctly, as this directory
+# is not created for ptests
+stash_locale_package_cleanup:prepend () {
+	mkdir -p ${PKGD}${datadir}
+}
+
+stash_locale_sysroot_cleanup:prepend () {
+	mkdir -p ${SYSROOT_DESTDIR}${datadir}
+}
+
+# Prevent the do_package() task from setting the 'libc6' prefix
+# for glibc tests related packages
+python populate_packages:prepend () {
+    if d.getVar('DEBIAN_NAMES'):
+        d.setVar('DEBIAN_NAMES', '')
+}
+
+FILES:${PN} = "${PTEST_PATH}/* /usr/src/debug/${PN}/*"
+
+EXCLUDE_FROM_SHLIBS = "1"
+
+# Install debug data in .debug and sources in /usr/src/debug
+# It is more handy to have _all_ the sources and symbols in one
+# place (package) as this recipe will be used for validation and
+# debugging.
+PACKAGE_DEBUG_SPLIT_STYLE = ".debug"
+
+# glibc test cases violate by default some Yocto/OE checks (staticdev,
+# textrel)
+# 'debug-files' - add everything (including debug) into one package
+#                 (no need to install/build *-src package)
+INSANE_SKIP:${PN} += "staticdev textrel debug-files rpaths"
+
+deltask do_stash_locale
+do_install[noexec] = "1"
+do_populate_sysroot[noexec] = "1"
diff --git a/poky/meta/recipes-core/glibc/glibc-testsuite_2.35.bb b/poky/meta/recipes-core/glibc/glibc-testsuite_2.36.bb
similarity index 100%
rename from poky/meta/recipes-core/glibc/glibc-testsuite_2.35.bb
rename to poky/meta/recipes-core/glibc/glibc-testsuite_2.36.bb
diff --git a/poky/meta/recipes-core/glibc/glibc-version.inc b/poky/meta/recipes-core/glibc/glibc-version.inc
index 99017ce..a078eb6 100644
--- a/poky/meta/recipes-core/glibc/glibc-version.inc
+++ b/poky/meta/recipes-core/glibc/glibc-version.inc
@@ -1,6 +1,6 @@
-SRCBRANCH ?= "release/2.35/master"
-PV = "2.35"
-SRCREV_glibc ?= "b6aade18a7e5719c942aa2da6cf3157aca993fa4"
+SRCBRANCH ?= "release/2.36/master"
+PV = "2.36"
+SRCREV_glibc ?= "3bd3c612e98a53ce60ed972f5cd2b90628b3cba5"
 SRCREV_localedef ?= "794da69788cbf9bf57b59a852f9f11307663fa87"
 
 GLIBC_GIT_URI ?= "git://sourceware.org/git/glibc.git"
diff --git a/poky/meta/recipes-core/glibc/glibc/0001-localedef-Add-hardlink-resolver-from-util-linux.patch b/poky/meta/recipes-core/glibc/glibc/0001-localedef-Add-hardlink-resolver-from-util-linux.patch
index 546fe58..dfbd700 100644
--- a/poky/meta/recipes-core/glibc/glibc/0001-localedef-Add-hardlink-resolver-from-util-linux.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0001-localedef-Add-hardlink-resolver-from-util-linux.patch
@@ -1,4 +1,4 @@
-From 8778429a3345bb5c0361332cf5103f394717a396 Mon Sep 17 00:00:00 2001
+From c6dca721df6dd8c39ffe16e61623516bd8742d39 Mon Sep 17 00:00:00 2001
 From: Jason Wessel <jason.wessel@windriver.com>
 Date: Sat, 7 Dec 2019 09:59:22 -0800
 Subject: [PATCH] localedef: Add hardlink resolver from util-linux
diff --git a/poky/meta/recipes-core/glibc/glibc/0002-localedef-fix-ups-hardlink-to-make-it-compile.patch b/poky/meta/recipes-core/glibc/glibc/0002-localedef-fix-ups-hardlink-to-make-it-compile.patch
index 94a05cf..57f1a36 100644
--- a/poky/meta/recipes-core/glibc/glibc/0002-localedef-fix-ups-hardlink-to-make-it-compile.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0002-localedef-fix-ups-hardlink-to-make-it-compile.patch
@@ -1,4 +1,4 @@
-From 87a69126d97bb8d5d52e34e451b4a7076efd6bed Mon Sep 17 00:00:00 2001
+From 3e391efa9b179ae886dd0942202bd2a6698e2679 Mon Sep 17 00:00:00 2001
 From: Jason Wessel <jason.wessel@windriver.com>
 Date: Sat, 7 Dec 2019 10:01:37 -0800
 Subject: [PATCH] localedef: fix-ups hardlink to make it compile
diff --git a/poky/meta/recipes-core/glibc/glibc/0003-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch b/poky/meta/recipes-core/glibc/glibc/0003-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch
index 9a60507..4eb23bd 100644
--- a/poky/meta/recipes-core/glibc/glibc/0003-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0003-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch
@@ -1,4 +1,4 @@
-From 752b0d32fc96728ee624dbd62bf23e034d8d2aed Mon Sep 17 00:00:00 2001
+From a74ac72e6a25121c99f3875cf0245a435729e897 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 01:48:24 +0000
 Subject: [PATCH] nativesdk-glibc: Look for host system ld.so.cache as well
@@ -30,10 +30,10 @@
  1 file changed, 8 insertions(+), 8 deletions(-)
 
 diff --git a/elf/dl-load.c b/elf/dl-load.c
-index 721593135e..39c4657fa2 100644
+index 1ad0868dad..c5e235d918 100644
 --- a/elf/dl-load.c
 +++ b/elf/dl-load.c
-@@ -2208,6 +2208,14 @@ _dl_map_object (struct link_map *loader, const char *name,
+@@ -2109,6 +2109,14 @@ _dl_map_object (struct link_map *loader, const char *name,
              }
          }
  
@@ -48,7 +48,7 @@
  #ifdef USE_LDCONFIG
        if (fd == -1
  	  && (__glibc_likely ((mode & __RTLD_SECURE) == 0)
-@@ -2266,14 +2274,6 @@ _dl_map_object (struct link_map *loader, const char *name,
+@@ -2167,14 +2175,6 @@ _dl_map_object (struct link_map *loader, const char *name,
  	}
  #endif
  
diff --git a/poky/meta/recipes-core/glibc/glibc/0004-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch b/poky/meta/recipes-core/glibc/glibc/0004-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch
index da288d6..7eaf70b 100644
--- a/poky/meta/recipes-core/glibc/glibc/0004-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0004-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch
@@ -1,4 +1,4 @@
-From 2f7407697f2a905fedb98037152e7830f73bc6c6 Mon Sep 17 00:00:00 2001
+From d2f16ab250dbb93ae21e9e9286ddf696141db735 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 01:50:00 +0000
 Subject: [PATCH] nativesdk-glibc: Fix buffer overrun with a relocated SDK
@@ -21,10 +21,10 @@
  1 file changed, 12 insertions(+)
 
 diff --git a/elf/dl-load.c b/elf/dl-load.c
-index 39c4657fa2..daa3af6c51 100644
+index c5e235d918..ce3cbfa3c4 100644
 --- a/elf/dl-load.c
 +++ b/elf/dl-load.c
-@@ -1904,7 +1904,19 @@ open_path (const char *name, size_t namelen, int mode,
+@@ -1809,7 +1809,19 @@ open_path (const char *name, size_t namelen, int mode,
         given on the command line when rtld is run directly.  */
      return -1;
  
diff --git a/poky/meta/recipes-core/glibc/glibc/0005-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch b/poky/meta/recipes-core/glibc/glibc/0005-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch
index 14bcaf3..1fb7620 100644
--- a/poky/meta/recipes-core/glibc/glibc/0005-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0005-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch
@@ -1,4 +1,4 @@
-From 88a31cd08801df53249963f3b26c7dbcee6ae2f8 Mon Sep 17 00:00:00 2001
+From 2d41508ed1059df2df9994d35d870be2005f575f Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 01:51:38 +0000
 Subject: [PATCH] nativesdk-glibc: Raise the size of arrays containing dl paths
@@ -26,10 +26,10 @@
  8 files changed, 16 insertions(+), 10 deletions(-)
 
 diff --git a/elf/dl-cache.c b/elf/dl-cache.c
-index 2b8da8650d..3d9787bda4 100644
+index 8bbf110d02..c02a95d9b5 100644
 --- a/elf/dl-cache.c
 +++ b/elf/dl-cache.c
-@@ -355,6 +355,10 @@ search_cache (const char *string_table, uint32_t string_table_size,
+@@ -352,6 +352,10 @@ search_cache (const char *string_table, uint32_t string_table_size,
    return best;
  }
  
@@ -41,7 +41,7 @@
  _dl_cache_libcmp (const char *p1, const char *p2)
  {
 diff --git a/elf/dl-load.c b/elf/dl-load.c
-index daa3af6c51..e323952993 100644
+index ce3cbfa3c4..e116db24a1 100644
 --- a/elf/dl-load.c
 +++ b/elf/dl-load.c
 @@ -117,8 +117,8 @@ enum { ncapstr = 1, max_capstrlen = 0 };
@@ -56,7 +56,7 @@
    SYSTEM_DIRS_LEN
  };
 diff --git a/elf/dl-usage.c b/elf/dl-usage.c
-index 5ad3a72559..88f26d3692 100644
+index 98d8c98948..77ca98cbf9 100644
 --- a/elf/dl-usage.c
 +++ b/elf/dl-usage.c
 @@ -25,6 +25,8 @@
@@ -87,7 +87,7 @@
    print_hwcaps_subdirectories (state);
    print_legacy_hwcap_directories ();
 diff --git a/elf/interp.c b/elf/interp.c
-index 91966702ca..dc86c20e83 100644
+index d82af036d1..9d282b2769 100644
 --- a/elf/interp.c
 +++ b/elf/interp.c
 @@ -18,5 +18,5 @@
@@ -98,7 +98,7 @@
 +const char __invoke_dynamic_linker__[4096] __attribute__ ((section (".interp")))
    = RUNTIME_LINKER;
 diff --git a/elf/ldconfig.c b/elf/ldconfig.c
-index 101d56ac8e..33debef60a 100644
+index 9394ac6438..7f66b1a460 100644
 --- a/elf/ldconfig.c
 +++ b/elf/ldconfig.c
 @@ -176,6 +176,9 @@ static struct argp argp =
@@ -112,10 +112,10 @@
     a platform.  */
  static int
 diff --git a/elf/rtld.c b/elf/rtld.c
-index 4b09e84b0d..56d93ff616 100644
+index cbbaf4a331..d2d27a0127 100644
 --- a/elf/rtld.c
 +++ b/elf/rtld.c
-@@ -193,6 +193,7 @@ dso_name_valid_for_suid (const char *p)
+@@ -189,6 +189,7 @@ dso_name_valid_for_suid (const char *p)
      }
    return *p != '\0';
  }
@@ -124,7 +124,7 @@
  static void
  audit_list_init (struct audit_list *list)
 diff --git a/iconv/gconv_conf.c b/iconv/gconv_conf.c
-index 077082af66..46b6152455 100644
+index f069e28323..6288f715ba 100644
 --- a/iconv/gconv_conf.c
 +++ b/iconv/gconv_conf.c
 @@ -35,7 +35,7 @@
@@ -137,7 +137,7 @@
  /* Type to represent search path.  */
  struct path_elem
 diff --git a/sysdeps/generic/dl-cache.h b/sysdeps/generic/dl-cache.h
-index 964d50a486..94bf68ca9d 100644
+index 93d4bea930..5249176441 100644
 --- a/sysdeps/generic/dl-cache.h
 +++ b/sysdeps/generic/dl-cache.h
 @@ -34,10 +34,6 @@
diff --git a/poky/meta/recipes-core/glibc/glibc/0006-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch b/poky/meta/recipes-core/glibc/glibc/0006-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch
index 493b2da..c66bcf8 100644
--- a/poky/meta/recipes-core/glibc/glibc/0006-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0006-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch
@@ -1,4 +1,4 @@
-From a1fbd7ef1da02f334ff72c52cb11116164649067 Mon Sep 17 00:00:00 2001
+From 946d1cadf0bb54216409e8e0eb09be3e96044dbf Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Thu, 31 Dec 2015 14:35:35 -0800
 Subject: [PATCH] nativesdk-glibc: Allow 64 bit atomics for x86
@@ -17,10 +17,10 @@
  1 file changed, 1 insertion(+), 6 deletions(-)
 
 diff --git a/sysdeps/x86/atomic-machine.h b/sysdeps/x86/atomic-machine.h
-index 2692d94a92..9d39bfdbd5 100644
+index f24f1c71ed..574487ca54 100644
 --- a/sysdeps/x86/atomic-machine.h
 +++ b/sysdeps/x86/atomic-machine.h
-@@ -52,19 +52,14 @@ typedef uintmax_t uatomic_max_t;
+@@ -26,19 +26,14 @@
  #define LOCK_PREFIX "lock;"
  
  #define USE_ATOMIC_COMPILER_BUILTINS	1
diff --git a/poky/meta/recipes-core/glibc/glibc/0007-nativesdk-glibc-Make-relocatable-install-for-locales.patch b/poky/meta/recipes-core/glibc/glibc/0007-nativesdk-glibc-Make-relocatable-install-for-locales.patch
index b40d2bd..dc24c02 100644
--- a/poky/meta/recipes-core/glibc/glibc/0007-nativesdk-glibc-Make-relocatable-install-for-locales.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0007-nativesdk-glibc-Make-relocatable-install-for-locales.patch
@@ -1,4 +1,4 @@
-From bf1603b3d73f64de777be00f7e55f2cfef596102 Mon Sep 17 00:00:00 2001
+From ce4e796fa8bd2df962cf7a0e4bc69ab6181e4ebf Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Fri, 3 Aug 2018 09:55:12 -0700
 Subject: [PATCH] nativesdk-glibc: Make relocatable install for locales
@@ -19,7 +19,7 @@
  4 files changed, 8 insertions(+), 7 deletions(-)
 
 diff --git a/locale/findlocale.c b/locale/findlocale.c
-index 5986373edd..856ba9afc0 100644
+index fc433b61d8..d6f030f13c 100644
 --- a/locale/findlocale.c
 +++ b/locale/findlocale.c
 @@ -55,7 +55,7 @@ struct __locale_data *const _nl_C[] attribute_hidden =
@@ -41,7 +41,7 @@
    else
      /* We really have to load some data.  First see whether the name is
 diff --git a/locale/loadarchive.c b/locale/loadarchive.c
-index 512769eaec..436619091b 100644
+index fcc4913319..62cae8c6c0 100644
 --- a/locale/loadarchive.c
 +++ b/locale/loadarchive.c
 @@ -42,7 +42,7 @@
@@ -54,10 +54,10 @@
  /* Size of initial mapping window, optimal if large enough to
     cover the header plus the initial locale.  */
 diff --git a/locale/localeinfo.h b/locale/localeinfo.h
-index b3d4da0185..22f9dc1140 100644
+index fd43033a19..3dc26272a0 100644
 --- a/locale/localeinfo.h
 +++ b/locale/localeinfo.h
-@@ -331,7 +331,7 @@ _nl_lookup_word (locale_t l, int category, int item)
+@@ -347,7 +347,7 @@ _nl_lookup_word (locale_t l, int category, int item)
  }
  
  /* Default search path if no LOCPATH environment variable.  */
@@ -67,7 +67,7 @@
  /* Load the locale data for CATEGORY from the file specified by *NAME.
     If *NAME is "", use environment variables as specified by POSIX, and
 diff --git a/locale/programs/locale.c b/locale/programs/locale.c
-index e9275d6b83..a9109155e5 100644
+index 1b51b50d68..87c9049444 100644
 --- a/locale/programs/locale.c
 +++ b/locale/programs/locale.c
 @@ -631,6 +631,7 @@ nameentcmp (const void *a, const void *b)
diff --git a/poky/meta/recipes-core/glibc/glibc/0008-nativesdk-glibc-Fall-back-to-faccessat-on-faccess2-r.patch b/poky/meta/recipes-core/glibc/glibc/0008-nativesdk-glibc-Fall-back-to-faccessat-on-faccess2-r.patch
index a47dd53..4d08072 100644
--- a/poky/meta/recipes-core/glibc/glibc/0008-nativesdk-glibc-Fall-back-to-faccessat-on-faccess2-r.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0008-nativesdk-glibc-Fall-back-to-faccessat-on-faccess2-r.patch
@@ -1,4 +1,4 @@
-From 78b2e81940561069faf7698931a033784f794e40 Mon Sep 17 00:00:00 2001
+From 95508f06f13604ed96f28d18eb1670ea1ed02063 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Sat, 6 Mar 2021 14:48:56 -0800
 Subject: [PATCH] nativesdk-glibc: Fall back to faccessat on faccess2 returns
@@ -14,7 +14,7 @@
  1 file changed, 5 insertions(+), 1 deletion(-)
 
 diff --git a/sysdeps/unix/sysv/linux/faccessat.c b/sysdeps/unix/sysv/linux/faccessat.c
-index 13160d3249..ee3ddc9b79 100644
+index 1378bb2db8..19f2044172 100644
 --- a/sysdeps/unix/sysv/linux/faccessat.c
 +++ b/sysdeps/unix/sysv/linux/faccessat.c
 @@ -30,7 +30,11 @@ __faccessat (int fd, const char *file, int mode, int flag)
diff --git a/poky/meta/recipes-core/glibc/glibc/0009-yes-within-the-path-sets-wrong-config-variables.patch b/poky/meta/recipes-core/glibc/glibc/0009-yes-within-the-path-sets-wrong-config-variables.patch
index 77644a2..6b80ad3 100644
--- a/poky/meta/recipes-core/glibc/glibc/0009-yes-within-the-path-sets-wrong-config-variables.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0009-yes-within-the-path-sets-wrong-config-variables.patch
@@ -1,4 +1,4 @@
-From f6e96a95212bc1fef57b9594a7dddc0c20639873 Mon Sep 17 00:00:00 2001
+From 07655aaa14f9d1f3a521caadde2936067ce84b07 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:31:06 +0000
 Subject: [PATCH] 'yes' within the path sets wrong config variables
@@ -29,10 +29,10 @@
  12 files changed, 28 insertions(+), 28 deletions(-)
 
 diff --git a/sysdeps/aarch64/configure b/sysdeps/aarch64/configure
-index 4c1fac49f3..597314f476 100644
+index bf972122b1..f9397b8d6e 100644
 --- a/sysdeps/aarch64/configure
 +++ b/sysdeps/aarch64/configure
-@@ -157,12 +157,12 @@ else
+@@ -152,12 +152,12 @@ else
    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
  /* end confdefs.h.  */
  #ifdef __AARCH64EB__
@@ -48,10 +48,10 @@
  else
    libc_cv_aarch64_be=no
 diff --git a/sysdeps/aarch64/configure.ac b/sysdeps/aarch64/configure.ac
-index 3347c13fa1..4af163c0b6 100644
+index 51253d9802..ba36a0e8b4 100644
 --- a/sysdeps/aarch64/configure.ac
 +++ b/sysdeps/aarch64/configure.ac
-@@ -17,8 +17,8 @@ AC_DEFINE(SUPPORT_STATIC_PIE)
+@@ -13,8 +13,8 @@ AC_DEFINE(SUPPORT_STATIC_PIE)
  # the dynamic linker via %ifdef.
  AC_CACHE_CHECK([for big endian],
    [libc_cv_aarch64_be],
@@ -63,10 +63,10 @@
    ], libc_cv_aarch64_be=yes, libc_cv_aarch64_be=no)])
  if test $libc_cv_aarch64_be = yes; then
 diff --git a/sysdeps/arm/configure b/sysdeps/arm/configure
-index 431e843b2b..e152461138 100644
+index 5b0237e521..969fc9fe95 100644
 --- a/sysdeps/arm/configure
 +++ b/sysdeps/arm/configure
-@@ -151,12 +151,12 @@ else
+@@ -148,12 +148,12 @@ else
    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
  /* end confdefs.h.  */
  #ifdef __ARM_PCS_VFP
@@ -82,10 +82,10 @@
  else
    libc_cv_arm_pcs_vfp=no
 diff --git a/sysdeps/arm/configure.ac b/sysdeps/arm/configure.ac
-index 90cdd69c75..05a262ba00 100644
+index 5172e30bbe..f06dedd7c5 100644
 --- a/sysdeps/arm/configure.ac
 +++ b/sysdeps/arm/configure.ac
-@@ -15,8 +15,8 @@ AC_DEFINE(PI_STATIC_AND_HIDDEN)
+@@ -10,8 +10,8 @@ GLIBC_PROVIDES dnl See aclocal.m4 in the top level source directory.
  # the dynamic linker via %ifdef.
  AC_CACHE_CHECK([whether the compiler is using the ARM hard-float ABI],
    [libc_cv_arm_pcs_vfp],
@@ -97,10 +97,10 @@
    ], libc_cv_arm_pcs_vfp=yes, libc_cv_arm_pcs_vfp=no)])
  if test $libc_cv_arm_pcs_vfp = yes; then
 diff --git a/sysdeps/mips/configure b/sysdeps/mips/configure
-index 4e13248c03..f14af952d0 100644
+index 3f4d9e9759..888453c70b 100644
 --- a/sysdeps/mips/configure
 +++ b/sysdeps/mips/configure
-@@ -143,11 +143,11 @@ else
+@@ -145,11 +145,11 @@ else
  /* end confdefs.h.  */
  dnl
  #ifdef __mips_nan2008
@@ -115,11 +115,11 @@
  else
    libc_cv_mips_nan2008=no
 diff --git a/sysdeps/mips/configure.ac b/sysdeps/mips/configure.ac
-index bcbdaffd9f..ad3057f4cc 100644
+index d3cd780d78..250223d206 100644
 --- a/sysdeps/mips/configure.ac
 +++ b/sysdeps/mips/configure.ac
 @@ -6,9 +6,9 @@ dnl position independent way.
- dnl AC_DEFINE(PI_STATIC_AND_HIDDEN)
+ AC_DEFINE(HIDDEN_VAR_NEEDS_DYNAMIC_RELOC)
  
  AC_CACHE_CHECK([whether the compiler is using the 2008 NaN encoding],
 -  libc_cv_mips_nan2008, [AC_EGREP_CPP(yes, [dnl
@@ -131,7 +131,7 @@
  if test x$libc_cv_mips_nan2008 = xyes; then
    AC_DEFINE(HAVE_MIPS_NAN2008)
 diff --git a/sysdeps/nios2/configure b/sysdeps/nios2/configure
-index 14c8a3a014..dde3814ef2 100644
+index b3cd28349e..f47e5a5adc 100644
 --- a/sysdeps/nios2/configure
 +++ b/sysdeps/nios2/configure
 @@ -142,12 +142,12 @@ else
@@ -150,7 +150,7 @@
  else
    libc_cv_nios2_be=no
 diff --git a/sysdeps/nios2/configure.ac b/sysdeps/nios2/configure.ac
-index f05f43802b..dc8639902d 100644
+index f738e9a7ed..4085851cbc 100644
 --- a/sysdeps/nios2/configure.ac
 +++ b/sysdeps/nios2/configure.ac
 @@ -4,8 +4,8 @@ GLIBC_PROVIDES dnl See aclocal.m4 in the top level source directory.
diff --git a/poky/meta/recipes-core/glibc/glibc/0010-eglibc-Cross-building-and-testing-instructions.patch b/poky/meta/recipes-core/glibc/glibc/0010-eglibc-Cross-building-and-testing-instructions.patch
index 295fa31..ba8696d 100644
--- a/poky/meta/recipes-core/glibc/glibc/0010-eglibc-Cross-building-and-testing-instructions.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0010-eglibc-Cross-building-and-testing-instructions.patch
@@ -1,4 +1,4 @@
-From d6300e80c7c010fa7ca33e36e826151558cec498 Mon Sep 17 00:00:00 2001
+From 9373891f13f3550f9b3f896c34ac152efd369ca9 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:42:58 +0000
 Subject: [PATCH] eglibc: Cross building and testing instructions
diff --git a/poky/meta/recipes-core/glibc/glibc/0011-eglibc-Help-bootstrap-cross-toolchain.patch b/poky/meta/recipes-core/glibc/glibc/0011-eglibc-Help-bootstrap-cross-toolchain.patch
index 9e00da8..1f6ff1f 100644
--- a/poky/meta/recipes-core/glibc/glibc/0011-eglibc-Help-bootstrap-cross-toolchain.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0011-eglibc-Help-bootstrap-cross-toolchain.patch
@@ -1,4 +1,4 @@
-From 1c8044544d2cbdc529910a3ed6eba4b0ce7ae549 Mon Sep 17 00:00:00 2001
+From 7f2fd574646cb5ecbbc09372a2d8580ab72ec158 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:49:28 +0000
 Subject: [PATCH] eglibc: Help bootstrap cross toolchain
@@ -29,7 +29,7 @@
  create mode 100644 include/stubs-bootstrap.h
 
 diff --git a/Makefile b/Makefile
-index a49870d3d1..81673d7645 100644
+index 179dd478ff..55cfb740bf 100644
 --- a/Makefile
 +++ b/Makefile
 @@ -79,9 +79,18 @@ subdir-dirs = include
@@ -52,7 +52,7 @@
  ifeq (yes,$(build-shared))
  headers += gnu/lib-names.h
  endif
-@@ -420,6 +429,16 @@ others: $(common-objpfx)testrun.sh $(common-objpfx)debugglibc.sh
+@@ -421,6 +430,16 @@ others: $(common-objpfx)testrun.sh $(common-objpfx)debugglibc.sh
  
  subdir-stubs := $(foreach dir,$(subdirs),$(common-objpfx)$(dir)/stubs)
  
@@ -69,7 +69,7 @@
  ifndef abi-variants
  installed-stubs = $(inst_includedir)/gnu/stubs.h
  else
-@@ -446,6 +465,7 @@ $(inst_includedir)/gnu/stubs.h: $(+force)
+@@ -447,6 +466,7 @@ $(inst_includedir)/gnu/stubs.h: $(+force)
  
  install-others-nosubdir: $(installed-stubs)
  endif
diff --git a/poky/meta/recipes-core/glibc/glibc/0012-eglibc-Resolve-__fpscr_values-on-SH4.patch b/poky/meta/recipes-core/glibc/glibc/0012-eglibc-Resolve-__fpscr_values-on-SH4.patch
index 03c81bf..399e14f 100644
--- a/poky/meta/recipes-core/glibc/glibc/0012-eglibc-Resolve-__fpscr_values-on-SH4.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0012-eglibc-Resolve-__fpscr_values-on-SH4.patch
@@ -1,4 +1,4 @@
-From e5999ffd1b8690c2902a6406c07f51023a6bf7ec Mon Sep 17 00:00:00 2001
+From 9f1803a2f91d59a9478ca4d8d93e1de5c62671e5 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:55:53 +0000
 Subject: [PATCH] eglibc: Resolve __fpscr_values on SH4
@@ -33,7 +33,7 @@
      # a*
      alphasort64;
 diff --git a/sysdeps/unix/sysv/linux/sh/sysdep.S b/sysdeps/unix/sysv/linux/sh/sysdep.S
-index a18fbb2e8b..59421bfbb0 100644
+index c5e3a7a365..35120031c4 100644
 --- a/sysdeps/unix/sysv/linux/sh/sysdep.S
 +++ b/sysdeps/unix/sysv/linux/sh/sysdep.S
 @@ -30,3 +30,14 @@ ENTRY (__syscall_error)
diff --git a/poky/meta/recipes-core/glibc/glibc/0013-eglibc-Forward-port-cross-locale-generation-support.patch b/poky/meta/recipes-core/glibc/glibc/0013-eglibc-Forward-port-cross-locale-generation-support.patch
index 48bb062..7d89155 100644
--- a/poky/meta/recipes-core/glibc/glibc/0013-eglibc-Forward-port-cross-locale-generation-support.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0013-eglibc-Forward-port-cross-locale-generation-support.patch
@@ -1,4 +1,4 @@
-From 99ae3189430eaa5472b2117e5a999109a6ca9251 Mon Sep 17 00:00:00 2001
+From 2c6449014151a4bcd4b253b2acc920f0b3d6b13f Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 01:33:49 +0000
 Subject: [PATCH] eglibc: Forward port cross locale generation support
@@ -23,7 +23,7 @@
  create mode 100644 locale/catnames.c
 
 diff --git a/locale/Makefile b/locale/Makefile
-index b7c60681fa..07c606cde3 100644
+index eb55750496..b0461ac4b9 100644
 --- a/locale/Makefile
 +++ b/locale/Makefile
 @@ -26,7 +26,8 @@ headers		= langinfo.h locale.h bits/locale.h \
@@ -89,10 +89,10 @@
 +    [LC_ALL] = sizeof ("LC_ALL") - 1
 +  };
 diff --git a/locale/localeinfo.h b/locale/localeinfo.h
-index 22f9dc1140..fa31b3c5ea 100644
+index 3dc26272a0..b667d32c23 100644
 --- a/locale/localeinfo.h
 +++ b/locale/localeinfo.h
-@@ -230,7 +230,7 @@ __libc_tsd_define (extern, locale_t, LOCALE)
+@@ -246,7 +246,7 @@ __libc_tsd_define (extern, locale_t, LOCALE)
     unused.  We can manage this playing some tricks with weak references.
     But with thread-local locale settings, it becomes quite ungainly unless
     we can use __thread variables.  So only in that case do we attempt this.  */
@@ -102,7 +102,7 @@
  # define NL_CURRENT_INDIRECT	1
  #endif
 diff --git a/locale/programs/charmap-dir.c b/locale/programs/charmap-dir.c
-index 4841bfd05d..ffcba1fd79 100644
+index 396a0d76c0..91f4a765ee 100644
 --- a/locale/programs/charmap-dir.c
 +++ b/locale/programs/charmap-dir.c
 @@ -18,7 +18,9 @@
@@ -148,7 +148,7 @@
    return NULL;
  }
 diff --git a/locale/programs/ld-collate.c b/locale/programs/ld-collate.c
-index 06a5203334..84b3ff4166 100644
+index 992814491d..da4dde4663 100644
 --- a/locale/programs/ld-collate.c
 +++ b/locale/programs/ld-collate.c
 @@ -352,7 +352,7 @@ new_element (struct locale_collate_t *collate, const char *mbs, size_t mbslen,
@@ -160,7 +160,7 @@
        uint32_t zero = 0;
        /* Handle <U0000> as a single character.  */
        if (nwcs == 0)
-@@ -1783,8 +1783,7 @@ symbol `%s' has the same encoding as"), (*eptr)->name);
+@@ -1776,8 +1776,7 @@ symbol `%s' has the same encoding as"), (*eptr)->name);
  
  	      if ((*eptr)->nwcs == runp->nwcs)
  		{
@@ -170,7 +170,7 @@
  
  		  if (c == 0)
  		    {
-@@ -2011,9 +2010,9 @@ add_to_tablewc (uint32_t ch, struct element_t *runp)
+@@ -2004,9 +2003,9 @@ add_to_tablewc (uint32_t ch, struct element_t *runp)
  	     one consecutive entry.  */
  	  if (runp->wcnext != NULL
  	      && runp->nwcs == runp->wcnext->nwcs
@@ -183,7 +183,7 @@
  	      && (runp->wcs[runp->nwcs - 1]
  		  == runp->wcnext->wcs[runp->nwcs - 1] + 1))
  	    {
-@@ -2037,9 +2036,9 @@ add_to_tablewc (uint32_t ch, struct element_t *runp)
+@@ -2030,9 +2029,9 @@ add_to_tablewc (uint32_t ch, struct element_t *runp)
  		runp = runp->wcnext;
  	      while (runp->wcnext != NULL
  		     && runp->nwcs == runp->wcnext->nwcs
@@ -197,7 +197,7 @@
  			 == runp->wcnext->wcs[runp->nwcs - 1] + 1));
  
 diff --git a/locale/programs/ld-ctype.c b/locale/programs/ld-ctype.c
-index 07b64ac5a1..70b49ab733 100644
+index c6749dbd82..ac99777925 100644
 --- a/locale/programs/ld-ctype.c
 +++ b/locale/programs/ld-ctype.c
 @@ -914,7 +914,7 @@ ctype_output (struct localedef_t *locale, const struct charmap_t *charmap,
@@ -229,7 +229,7 @@
  	  handle_digits = 1;
  	  goto read_charclass;
  
-@@ -3903,8 +3903,7 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
+@@ -3876,8 +3876,7 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
  
  	  while (idx < number)
  	    {
@@ -239,7 +239,7 @@
  	      if (res == 0)
  		{
  		  replace = 1;
-@@ -3941,11 +3940,11 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
+@@ -3914,11 +3913,11 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
        for (size_t cnt = 0; cnt < number; ++cnt)
  	{
  	  struct translit_to_t *srunp;
@@ -253,7 +253,7 @@
  	      srunp = srunp->next;
  	    }
  	  /* Plus one for the extra NUL character marking the end of
-@@ -3969,18 +3968,18 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
+@@ -3942,18 +3941,18 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
  	  ctype->translit_from_idx[cnt] = from_len;
  	  ctype->translit_to_idx[cnt] = to_len;
  
@@ -279,7 +279,7 @@
  	      srunp = srunp->next;
  	    }
 diff --git a/locale/programs/ld-time.c b/locale/programs/ld-time.c
-index e6f320d2b3..c6631ad101 100644
+index b58fecfcee..a4d70e0780 100644
 --- a/locale/programs/ld-time.c
 +++ b/locale/programs/ld-time.c
 @@ -219,8 +219,10 @@ No definition for %s category found"), "LC_TIME");
@@ -348,20 +348,20 @@
  
  
 diff --git a/locale/programs/linereader.c b/locale/programs/linereader.c
-index a1f22b28ed..cbd3b40ceb 100644
+index 0460074a0c..31a7151f66 100644
 --- a/locale/programs/linereader.c
 +++ b/locale/programs/linereader.c
-@@ -594,7 +594,7 @@ get_string (struct linereader *lr, const struct charmap_t *charmap,
+@@ -776,7 +776,7 @@ get_string (struct linereader *lr, const struct charmap_t *charmap,
  {
    int return_widestr = lr->return_widestr;
-   char *buf;
+   struct lr_buffer lrb;
 -  wchar_t *buf2 = NULL;
 +  uint32_t *buf2 = NULL;
-   size_t bufact;
-   size_t bufmax = 56;
+ 
+   lr_buffer_init (&lrb);
  
 diff --git a/locale/programs/localedef.c b/locale/programs/localedef.c
-index f0da25e9e5..5d9e01cda2 100644
+index 35a092a111..94712bf114 100644
 --- a/locale/programs/localedef.c
 +++ b/locale/programs/localedef.c
 @@ -108,6 +108,7 @@ void (*argp_program_version_hook) (FILE *, struct argp_state *) = print_version;
@@ -407,7 +407,7 @@
        force_output = 1;
        break;
 diff --git a/locale/programs/locfile.c b/locale/programs/locfile.c
-index 1427b518a9..dafa84a20b 100644
+index 8fa74dce60..8d5aca6d9e 100644
 --- a/locale/programs/locfile.c
 +++ b/locale/programs/locfile.c
 @@ -543,6 +543,9 @@ compare_files (const char *filename1, const char *filename2, size_t size,
@@ -430,7 +430,7 @@
  
  /* Record that FILE's next element is the 32-bit integer VALUE.  */
 diff --git a/locale/programs/locfile.h b/locale/programs/locfile.h
-index cbc20fe88d..ae88e6d0af 100644
+index 57b2211e2f..e9498c6c7e 100644
 --- a/locale/programs/locfile.h
 +++ b/locale/programs/locfile.h
 @@ -70,6 +70,8 @@ extern void write_all_categories (struct localedef_t *definitions,
@@ -519,7 +519,7 @@
 +
  #endif /* locfile.h */
 diff --git a/locale/setlocale.c b/locale/setlocale.c
-index 19ed85ae8e..f28ca11446 100644
+index 56c14d8533..6aac00503e 100644
 --- a/locale/setlocale.c
 +++ b/locale/setlocale.c
 @@ -63,35 +63,6 @@ static char *const _nl_current_used[] =
diff --git a/poky/meta/recipes-core/glibc/glibc/0014-localedef-add-to-archive-uses-a-hard-coded-locale-pa.patch b/poky/meta/recipes-core/glibc/glibc/0014-localedef-add-to-archive-uses-a-hard-coded-locale-pa.patch
index eae1ee8..c47025a 100644
--- a/poky/meta/recipes-core/glibc/glibc/0014-localedef-add-to-archive-uses-a-hard-coded-locale-pa.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0014-localedef-add-to-archive-uses-a-hard-coded-locale-pa.patch
@@ -1,4 +1,4 @@
-From 32c2e23ad29f63f57f544daf1a59259147cd1008 Mon Sep 17 00:00:00 2001
+From 8ebf6708ba54147b44f5638b93f123fd55d4c37e Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Fri, 3 Aug 2018 09:42:06 -0700
 Subject: [PATCH] localedef --add-to-archive uses a hard-coded locale path
@@ -18,7 +18,7 @@
  1 file changed, 25 insertions(+), 10 deletions(-)
 
 diff --git a/locale/programs/locarchive.c b/locale/programs/locarchive.c
-index 477499bd40..fe7b5ff60c 100644
+index eeb2fa6ffe..15274b0191 100644
 --- a/locale/programs/locarchive.c
 +++ b/locale/programs/locarchive.c
 @@ -339,12 +339,24 @@ enlarge_archive (struct locarhandle *ah, const struct locarhead *head)
diff --git a/poky/meta/recipes-core/glibc/glibc/0015-locale-prevent-maybe-uninitialized-errors-with-Os-BZ.patch b/poky/meta/recipes-core/glibc/glibc/0015-locale-prevent-maybe-uninitialized-errors-with-Os-BZ.patch
new file mode 100644
index 0000000..933fa0e
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0015-locale-prevent-maybe-uninitialized-errors-with-Os-BZ.patch
@@ -0,0 +1,53 @@
+From bd2b87eaa2e99310f5439df95bea12a48dc978bf Mon Sep 17 00:00:00 2001
+From: Martin Jansa <martin.jansa@gmail.com>
+Date: Mon, 17 Dec 2018 21:36:18 +0000
+Subject: [PATCH] locale: prevent maybe-uninitialized errors with -Os [BZ
+ #19444]
+
+Fixes following error when building for aarch64 with -Os:
+| In file included from strcoll_l.c:43:
+| strcoll_l.c: In function '__strcoll_l':
+| ../locale/weight.h:31:26: error: 'seq2.back_us' may be used uninitialized in this function [-Werror=maybe-uninitialized]
+|    int_fast32_t i = table[*(*cpp)++];
+|                           ^~~~~~~~~
+| strcoll_l.c:304:18: note: 'seq2.back_us' was declared here
+|    coll_seq seq1, seq2;
+|                   ^~~~
+| In file included from strcoll_l.c:43:
+| ../locale/weight.h:31:26: error: 'seq1.back_us' may be used uninitialized in this function [-Werror=maybe-uninitialized]
+|    int_fast32_t i = table[*(*cpp)++];
+|                           ^~~~~~~~~
+| strcoll_l.c:304:12: note: 'seq1.back_us' was declared here
+|    coll_seq seq1, seq2;
+|             ^~~~
+
+        Partial fix for [BZ #19444]
+        * locale/weight.h: Fix build with -Os.
+
+Upstream-Status: Submitted [https://patchwork.ozlabs.org/patch/1014766]
+
+Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ locale/weight.h | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/locale/weight.h b/locale/weight.h
+index 8be2d220f8..4a4d5aa6b2 100644
+--- a/locale/weight.h
++++ b/locale/weight.h
+@@ -27,7 +27,14 @@ findidx (const int32_t *table,
+ 	 const unsigned char *extra,
+ 	 const unsigned char **cpp, size_t len)
+ {
++  /* With GCC 8 when compiling with -Os the compiler warns that
++     seq1.back_us and seq2.back_us might be used uninitialized.
++     This uninitialized use is impossible for the same reason
++     as described in comments in locale/weightwc.h.  */
++  DIAG_PUSH_NEEDS_COMMENT;
++  DIAG_IGNORE_Os_NEEDS_COMMENT (8, "-Wmaybe-uninitialized");
+   int32_t i = table[*(*cpp)++];
++  DIAG_POP_NEEDS_COMMENT;
+   const unsigned char *cp;
+   const unsigned char *usrc;
+ 
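
The DIAG_PUSH_NEEDS_COMMENT / DIAG_IGNORE_Os_NEEDS_COMMENT / DIAG_POP_NEEDS_COMMENT
macros added above are glibc-internal helpers that are assumed to expand to GCC
diagnostic pragmas. A minimal standalone C sketch of the same pattern, using the
raw pragmas instead of the glibc macros, so the scope of the suppression is easier
to see:

    /* Sketch only, not glibc code: silence one spurious warning around a
       single statement while leaving -Werror in force everywhere else.  */
    #include <stdio.h>

    static int table[256];

    static int
    lookup (const unsigned char **cpp)
    {
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
      /* In glibc this is the table[*(*cpp)++] access that GCC 8 flags under
         -Os; here the read is well-defined and only marks where the
         suppression applies.  */
      int i = table[*(*cpp)++];
    #pragma GCC diagnostic pop
      return i;
    }

    int
    main (void)
    {
      const unsigned char buf[] = "a";
      const unsigned char *cp = buf;
      printf ("%d\n", lookup (&cp));
      return 0;
    }

The push/pop pair keeps the override local, which is presumably why the macro
names end in _NEEDS_COMMENT: each use is expected to carry a comment explaining
why the warning is a false positive.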
diff --git a/poky/meta/recipes-core/glibc/glibc/0016-locale-prevent-maybe-uninitialized-errors-with-Os-BZ.patch b/poky/meta/recipes-core/glibc/glibc/0016-locale-prevent-maybe-uninitialized-errors-with-Os-BZ.patch
deleted file mode 100644
index 4e51036..0000000
--- a/poky/meta/recipes-core/glibc/glibc/0016-locale-prevent-maybe-uninitialized-errors-with-Os-BZ.patch
+++ /dev/null
@@ -1,53 +0,0 @@
-From c59bc6eb421ad3310c43951a11d2561bbf34e95e Mon Sep 17 00:00:00 2001
-From: Martin Jansa <martin.jansa@gmail.com>
-Date: Mon, 17 Dec 2018 21:36:18 +0000
-Subject: [PATCH] locale: prevent maybe-uninitialized errors with -Os [BZ
- #19444]
-
-Fixes following error when building for aarch64 with -Os:
-| In file included from strcoll_l.c:43:
-| strcoll_l.c: In function '__strcoll_l':
-| ../locale/weight.h:31:26: error: 'seq2.back_us' may be used uninitialized in this function [-Werror=maybe-uninitialized]
-|    int_fast32_t i = table[*(*cpp)++];
-|                           ^~~~~~~~~
-| strcoll_l.c:304:18: note: 'seq2.back_us' was declared here
-|    coll_seq seq1, seq2;
-|                   ^~~~
-| In file included from strcoll_l.c:43:
-| ../locale/weight.h:31:26: error: 'seq1.back_us' may be used uninitialized in this function [-Werror=maybe-uninitialized]
-|    int_fast32_t i = table[*(*cpp)++];
-|                           ^~~~~~~~~
-| strcoll_l.c:304:12: note: 'seq1.back_us' was declared here
-|    coll_seq seq1, seq2;
-|             ^~~~
-
-        Partial fix for [BZ #19444]
-        * locale/weight.h: Fix build with -Os.
-
-Upstream-Status: Submitted [https://patchwork.ozlabs.org/patch/1014766]
-
-Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- locale/weight.h | 7 +++++++
- 1 file changed, 7 insertions(+)
-
-diff --git a/locale/weight.h b/locale/weight.h
-index 076529c0ba..2ac83657f7 100644
---- a/locale/weight.h
-+++ b/locale/weight.h
-@@ -27,7 +27,14 @@ findidx (const int32_t *table,
- 	 const unsigned char *extra,
- 	 const unsigned char **cpp, size_t len)
- {
-+  /* With GCC 8 when compiling with -Os the compiler warns that
-+     seq1.back_us and seq2.back_us might be used uninitialized.
-+     This uninitialized use is impossible for the same reason
-+     as described in comments in locale/weightwc.h.  */
-+  DIAG_PUSH_NEEDS_COMMENT;
-+  DIAG_IGNORE_Os_NEEDS_COMMENT (8, "-Wmaybe-uninitialized");
-   int_fast32_t i = table[*(*cpp)++];
-+  DIAG_POP_NEEDS_COMMENT;
-   const unsigned char *cp;
-   const unsigned char *usrc;
- 
diff --git a/poky/meta/recipes-core/glibc/glibc/0016-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch b/poky/meta/recipes-core/glibc/glibc/0016-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch
new file mode 100644
index 0000000..f45951a
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0016-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch
@@ -0,0 +1,29 @@
+From 58dd1336c1c32716f4f0938bf18f2ddfbe9305ca Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 00:11:22 +0000
+Subject: [PATCH] readlib: Add OECORE_KNOWN_INTERPRETER_NAMES to known names
+
+This bolts in a hook for OE to pass its own version of interpreter
+names into glibc especially for multilib case, where it differs from any
+other distros
+
+Upstream-Status: Inappropriate [OE specific]
+
+Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ elf/readlib.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/elf/readlib.c b/elf/readlib.c
+index ed42fbd48e..777f6c80be 100644
+--- a/elf/readlib.c
++++ b/elf/readlib.c
+@@ -49,6 +49,7 @@ static struct known_names interpreters[] =
+ #ifdef SYSDEP_KNOWN_INTERPRETER_NAMES
+   SYSDEP_KNOWN_INTERPRETER_NAMES
+ #endif
++  OECORE_KNOWN_INTERPRETER_NAMES
+ };
+ 
+ static struct known_names known_libs[] =
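
A small C sketch of the hook mechanism described above: a macro supplied on the
compiler command line expands to extra entries in the interpreters[] initializer,
so a distro can append its own dynamic-loader names without patching the table
itself. The structure layout and the example loader path are illustrative, not
the real readlib.c definitions:

    /* Build with, for example:
         gcc -DOECORE_KNOWN_INTERPRETER_NAMES='{"/lib/ld-linux-mock.so.3"},' names.c
       or with no -D at all, in which case the macro expands to nothing.  */
    #include <stddef.h>
    #include <stdio.h>

    #ifndef OECORE_KNOWN_INTERPRETER_NAMES
    # define OECORE_KNOWN_INTERPRETER_NAMES
    #endif

    struct known_names { const char *soname; };

    static struct known_names interpreters[] =
    {
      { "/lib/ld-linux.so.2" },
      OECORE_KNOWN_INTERPRETER_NAMES
    };

    int
    main (void)
    {
      for (size_t i = 0; i < sizeof interpreters / sizeof interpreters[0]; ++i)
        puts (interpreters[i].soname);
      return 0;
    }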
diff --git a/poky/meta/recipes-core/glibc/glibc/0017-powerpc-Do-not-ask-compiler-for-finding-arch.patch b/poky/meta/recipes-core/glibc/glibc/0017-powerpc-Do-not-ask-compiler-for-finding-arch.patch
new file mode 100644
index 0000000..cb6f7dc
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0017-powerpc-Do-not-ask-compiler-for-finding-arch.patch
@@ -0,0 +1,48 @@
+From 93c5b86fae5e42e148e5182466eb0ac26298159c Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 7 Aug 2020 14:31:16 -0700
+Subject: [PATCH] powerpc: Do not ask compiler for finding arch
+
+This does not work well in cross compiling environments like OE
+and moreover it uses its own -mcpu/-march options via cflags
+
+Upstream-Status: Inappropriate [ OE-Specific]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ sysdeps/powerpc/preconfigure    | 5 +----
+ sysdeps/powerpc/preconfigure.ac | 5 +----
+ 2 files changed, 2 insertions(+), 8 deletions(-)
+
+diff --git a/sysdeps/powerpc/preconfigure b/sysdeps/powerpc/preconfigure
+index dfe8e20399..bbff040f0f 100644
+--- a/sysdeps/powerpc/preconfigure
++++ b/sysdeps/powerpc/preconfigure
+@@ -29,10 +29,7 @@ esac
+ # directive which shows up, and try using it.
+ case "${machine}:${submachine}" in
+ *powerpc*:)
+-  archcpu=`echo "int foo () { return 0; }" \
+-	   | $CC $CFLAGS $CPPFLAGS -S -frecord-gcc-switches -xc -o - - \
+-	   | grep -E "mcpu=|.machine" -m 1 \
+-	   | sed -e "s/.*machine //" -e "s/.*mcpu=\(.*\)\"/\1/"`
++  archcpu=''
+   # Note if you add patterns here you must ensure that an appropriate
+   # directory exists in sysdeps/powerpc.  Likewise, if we find a
+   # cpu, don't let the generic configure append extra compiler options.
+diff --git a/sysdeps/powerpc/preconfigure.ac b/sysdeps/powerpc/preconfigure.ac
+index 6c63bd8257..3e925f1d48 100644
+--- a/sysdeps/powerpc/preconfigure.ac
++++ b/sysdeps/powerpc/preconfigure.ac
+@@ -29,10 +29,7 @@ esac
+ # directive which shows up, and try using it.
+ case "${machine}:${submachine}" in
+ *powerpc*:)
+-  archcpu=`echo "int foo () { return 0; }" \
+-	   | $CC $CFLAGS $CPPFLAGS -S -frecord-gcc-switches -xc -o - - \
+-	   | grep -E "mcpu=|[.]machine" -m 1 \
+-	   | sed -e "s/.*machine //" -e "s/.*mcpu=\(.*\)\"/\1/"`
++  archcpu=''
+   # Note if you add patterns here you must ensure that an appropriate
+   # directory exists in sysdeps/powerpc.  Likewise, if we find a
+   # cpu, don't let the generic configure append extra compiler options.
diff --git a/poky/meta/recipes-core/glibc/glibc/0017-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch b/poky/meta/recipes-core/glibc/glibc/0017-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch
deleted file mode 100644
index 77a2bab..0000000
--- a/poky/meta/recipes-core/glibc/glibc/0017-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-From 9f4fcec5662bfa6f8aa6a36dda6f4c05f6e30e51 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:11:22 +0000
-Subject: [PATCH] readlib: Add OECORE_KNOWN_INTERPRETER_NAMES to known names
-
-This bolts in a hook for OE to pass its own version of interpreter
-names into glibc especially for multilib case, where it differs from any
-other distros
-
-Upstream-Status: Inappropriate [OE specific]
-
-Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- elf/readlib.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/elf/readlib.c b/elf/readlib.c
-index 64b20d7804..50318158fb 100644
---- a/elf/readlib.c
-+++ b/elf/readlib.c
-@@ -49,6 +49,7 @@ static struct known_names interpreters[] =
- #ifdef SYSDEP_KNOWN_INTERPRETER_NAMES
-   SYSDEP_KNOWN_INTERPRETER_NAMES
- #endif
-+  OECORE_KNOWN_INTERPRETER_NAMES
- };
- 
- static struct known_names known_libs[] =
diff --git a/poky/meta/recipes-core/glibc/glibc/0018-wordsize.h-Unify-the-header-between-arm-and-aarch64.patch b/poky/meta/recipes-core/glibc/glibc/0018-wordsize.h-Unify-the-header-between-arm-and-aarch64.patch
index 3b2d638..996471a 100644
--- a/poky/meta/recipes-core/glibc/glibc/0018-wordsize.h-Unify-the-header-between-arm-and-aarch64.patch
+++ b/poky/meta/recipes-core/glibc/glibc/0018-wordsize.h-Unify-the-header-between-arm-and-aarch64.patch
@@ -1,7 +1,7 @@
-From 4d6bce6b106d9d9a629aadba74d74cd8a500ccbf Mon Sep 17 00:00:00 2001
+From e2dba281429384cc22a73a58eaf79459e64be266 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Fri, 15 May 2020 17:05:45 -0700
-Subject: [PATCH 18/24] wordsize.h: Unify the header between arm and aarch64
+Subject: [PATCH] wordsize.h: Unify the header between arm and aarch64
 
 This helps OE multilibs to not sythesize this header which causes all
 kind of recursions and other issues since wordsize is fundamental header
@@ -11,10 +11,10 @@
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
 ---
- sysdeps/aarch64/bits/wordsize.h          |  8 ++++++--
- sysdeps/{aarch64 => arm}/bits/wordsize.h | 10 +++++++---
- 2 files changed, 13 insertions(+), 5 deletions(-)
- copy sysdeps/{aarch64 => arm}/bits/wordsize.h (80%)
+ sysdeps/aarch64/bits/wordsize.h          | 8 ++++++--
+ sysdeps/{aarch64 => arm}/bits/wordsize.h | 8 ++++++--
+ 2 files changed, 12 insertions(+), 4 deletions(-)
+ copy sysdeps/{aarch64 => arm}/bits/wordsize.h (85%)
 
 diff --git a/sysdeps/aarch64/bits/wordsize.h b/sysdeps/aarch64/bits/wordsize.h
 index 4635431f0e..5ef0ed21f3 100644
@@ -40,10 +40,10 @@
  
  #define __WORDSIZE_TIME64_COMPAT32	0
 diff --git a/sysdeps/aarch64/bits/wordsize.h b/sysdeps/arm/bits/wordsize.h
-similarity index 80%
+similarity index 85%
 copy from sysdeps/aarch64/bits/wordsize.h
 copy to sysdeps/arm/bits/wordsize.h
-index 4635431f0e..34fcdef1f1 100644
+index 4635431f0e..5ef0ed21f3 100644
 --- a/sysdeps/aarch64/bits/wordsize.h
 +++ b/sysdeps/arm/bits/wordsize.h
 @@ -17,12 +17,16 @@
@@ -65,6 +65,3 @@
  #endif
  
  #define __WORDSIZE_TIME64_COMPAT32	0
--- 
-2.34.1
-
diff --git a/poky/meta/recipes-core/glibc/glibc/0019-Replace-echo-with-printf-builtin-in-nscd-init-script.patch b/poky/meta/recipes-core/glibc/glibc/0019-Replace-echo-with-printf-builtin-in-nscd-init-script.patch
new file mode 100644
index 0000000..5181cfe
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0019-Replace-echo-with-printf-builtin-in-nscd-init-script.patch
@@ -0,0 +1,79 @@
+From 97a71e1dd07ba6721464150b03fd67823b6271e2 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 31 Dec 2015 14:33:02 -0800
+Subject: [PATCH] Replace echo with printf builtin in nscd init script
+
+The nscd init script calls for #! /bin/bash interpreter
+since it uses bash specific extentions namely (translated strings)
+and echo -n command, replace echo with printf and
+switch the shell interpreter to #!/bin/sh.
+
+Upstream-Status: Pending
+Signed-off-by: Ross Burton <ross.burton@arm.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ nscd/nscd.init | 20 ++++++++++----------
+ 1 file changed, 10 insertions(+), 10 deletions(-)
+
+diff --git a/nscd/nscd.init b/nscd/nscd.init
+index a882da7d8b..857b541381 100644
+--- a/nscd/nscd.init
++++ b/nscd/nscd.init
+@@ -1,4 +1,4 @@
+-#!/bin/bash
++#!/bin/sh
+ #
+ # nscd:		Starts the Name Switch Cache Daemon
+ #
+@@ -49,16 +49,16 @@ prog=nscd
+ start () {
+     [ -d /var/run/nscd ] || mkdir /var/run/nscd
+     [ -d /var/db/nscd ] || mkdir /var/db/nscd
+-    echo -n $"Starting $prog: "
++    printf "Starting $prog: "
+     daemon /usr/sbin/nscd
+     RETVAL=$?
+-    echo
++    printf "\n"
+     [ $RETVAL -eq 0 ] && touch /var/lock/subsys/nscd
+     return $RETVAL
+ }
+ 
+ stop () {
+-    echo -n $"Stopping $prog: "
++    printf "Stopping $prog: "
+     /usr/sbin/nscd -K
+     RETVAL=$?
+     if [ $RETVAL -eq 0 ]; then
+@@ -67,11 +67,11 @@ stop () {
+ 	# a non-privileged user
+ 	rm -f /var/run/nscd/nscd.pid
+ 	rm -f /var/run/nscd/socket
+-       	success $"$prog shutdown"
++	success "$prog shutdown"
+     else
+-       	failure $"$prog shutdown"
++	failure "$prog shutdown"
+     fi
+-    echo
++    printf "\n"
+     return $RETVAL
+ }
+ 
+@@ -103,13 +103,13 @@ case "$1" in
+ 	RETVAL=$?
+ 	;;
+     force-reload | reload)
+-    	echo -n $"Reloading $prog: "
++	printf "Reloading $prog: "
+ 	killproc /usr/sbin/nscd -HUP
+ 	RETVAL=$?
+-	echo
++	printf "\n"
+ 	;;
+     *)
+-	echo $"Usage: $0 {start|stop|status|restart|reload|condrestart}"
++	printf "Usage: $0 {start|stop|status|restart|reload|condrestart}\n"
+ 	RETVAL=1
+ 	;;
+ esac
diff --git a/poky/meta/recipes-core/glibc/glibc/0019-powerpc-Do-not-ask-compiler-for-finding-arch.patch b/poky/meta/recipes-core/glibc/glibc/0019-powerpc-Do-not-ask-compiler-for-finding-arch.patch
deleted file mode 100644
index 4313c68..0000000
--- a/poky/meta/recipes-core/glibc/glibc/0019-powerpc-Do-not-ask-compiler-for-finding-arch.patch
+++ /dev/null
@@ -1,48 +0,0 @@
-From eb44466ec976d800bb697b10775efa28f22ec216 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 7 Aug 2020 14:31:16 -0700
-Subject: [PATCH] powerpc: Do not ask compiler for finding arch
-
-This does not work well in cross compiling environments like OE
-and moreover it uses its own -mcpu/-march options via cflags
-
-Upstream-Status: Inappropriate [ OE-Specific]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- sysdeps/powerpc/preconfigure    | 5 +----
- sysdeps/powerpc/preconfigure.ac | 5 +----
- 2 files changed, 2 insertions(+), 8 deletions(-)
-
-diff --git a/sysdeps/powerpc/preconfigure b/sysdeps/powerpc/preconfigure
-index dfe8e20399..bbff040f0f 100644
---- a/sysdeps/powerpc/preconfigure
-+++ b/sysdeps/powerpc/preconfigure
-@@ -29,10 +29,7 @@ esac
- # directive which shows up, and try using it.
- case "${machine}:${submachine}" in
- *powerpc*:)
--  archcpu=`echo "int foo () { return 0; }" \
--	   | $CC $CFLAGS $CPPFLAGS -S -frecord-gcc-switches -xc -o - - \
--	   | grep -E "mcpu=|.machine" -m 1 \
--	   | sed -e "s/.*machine //" -e "s/.*mcpu=\(.*\)\"/\1/"`
-+  archcpu=''
-   # Note if you add patterns here you must ensure that an appropriate
-   # directory exists in sysdeps/powerpc.  Likewise, if we find a
-   # cpu, don't let the generic configure append extra compiler options.
-diff --git a/sysdeps/powerpc/preconfigure.ac b/sysdeps/powerpc/preconfigure.ac
-index 6c63bd8257..3e925f1d48 100644
---- a/sysdeps/powerpc/preconfigure.ac
-+++ b/sysdeps/powerpc/preconfigure.ac
-@@ -29,10 +29,7 @@ esac
- # directive which shows up, and try using it.
- case "${machine}:${submachine}" in
- *powerpc*:)
--  archcpu=`echo "int foo () { return 0; }" \
--	   | $CC $CFLAGS $CPPFLAGS -S -frecord-gcc-switches -xc -o - - \
--	   | grep -E "mcpu=|[.]machine" -m 1 \
--	   | sed -e "s/.*machine //" -e "s/.*mcpu=\(.*\)\"/\1/"`
-+  archcpu=''
-   # Note if you add patterns here you must ensure that an appropriate
-   # directory exists in sysdeps/powerpc.  Likewise, if we find a
-   # cpu, don't let the generic configure append extra compiler options.
diff --git a/poky/meta/recipes-core/glibc/glibc/0020-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch b/poky/meta/recipes-core/glibc/glibc/0020-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch
new file mode 100644
index 0000000..396f332
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0020-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch
@@ -0,0 +1,40 @@
+From 3b5b6079512af8af50d0a43d4c1c218f5ba1b302 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 00:27:10 +0000
+Subject: [PATCH] sysdeps/gnu/configure.ac: Set libc_cv_rootsbindir only if its
+ empty
+
+This ensures that it can be set in build environment
+
+Upstream-Status: Pending
+Signed-off-by: Matthieu Crapet <Matthieu.Crapet@ingenico.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ sysdeps/gnu/configure    | 2 +-
+ sysdeps/gnu/configure.ac | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/sysdeps/gnu/configure b/sysdeps/gnu/configure
+index c15d1087e8..d30d6e37ae 100644
+--- a/sysdeps/gnu/configure
++++ b/sysdeps/gnu/configure
+@@ -32,6 +32,6 @@ case "$prefix" in
+   else
+     libc_cv_localstatedir=$localstatedir
+    fi
+-  libc_cv_rootsbindir=/sbin
++  libc_cv_rootsbindir=${libc_cv_rootsbindir:=/sbin}
+   ;;
+ esac
+diff --git a/sysdeps/gnu/configure.ac b/sysdeps/gnu/configure.ac
+index 634fe4de2a..492112e0fd 100644
+--- a/sysdeps/gnu/configure.ac
++++ b/sysdeps/gnu/configure.ac
+@@ -21,6 +21,6 @@ case "$prefix" in
+   else
+     libc_cv_localstatedir=$localstatedir
+    fi
+-  libc_cv_rootsbindir=/sbin
++  libc_cv_rootsbindir=${libc_cv_rootsbindir:=/sbin}
+   ;;
+ esac
diff --git a/poky/meta/recipes-core/glibc/glibc/0021-Replace-echo-with-printf-builtin-in-nscd-init-script.patch b/poky/meta/recipes-core/glibc/glibc/0021-Replace-echo-with-printf-builtin-in-nscd-init-script.patch
deleted file mode 100644
index 42c498b..0000000
--- a/poky/meta/recipes-core/glibc/glibc/0021-Replace-echo-with-printf-builtin-in-nscd-init-script.patch
+++ /dev/null
@@ -1,79 +0,0 @@
-From 77fbd98f551d5b2cd338aa7f524e5ed980edb65e Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 31 Dec 2015 14:33:02 -0800
-Subject: [PATCH] Replace echo with printf builtin in nscd init script
-
-The nscd init script calls for #! /bin/bash interpreter
-since it uses bash specific extentions namely (translated strings)
-and echo -n command, replace echo with printf and
-switch the shell interpreter to #!/bin/sh.
-
-Upstream-Status: Submitted [https://patchwork.sourceware.org/project/glibc/patch/20211209203557.1318333-1-raj.khem@gmail.com/]
-Signed-off-by: Ross Burton <ross.burton@arm.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- nscd/nscd.init | 20 ++++++++++----------
- 1 file changed, 10 insertions(+), 10 deletions(-)
-
-diff --git a/nscd/nscd.init b/nscd/nscd.init
-index a882da7d8b..857b541381 100644
---- a/nscd/nscd.init
-+++ b/nscd/nscd.init
-@@ -1,4 +1,4 @@
--#!/bin/bash
-+#!/bin/sh
- #
- # nscd:		Starts the Name Switch Cache Daemon
- #
-@@ -49,16 +49,16 @@ prog=nscd
- start () {
-     [ -d /var/run/nscd ] || mkdir /var/run/nscd
-     [ -d /var/db/nscd ] || mkdir /var/db/nscd
--    echo -n $"Starting $prog: "
-+    printf "Starting $prog: "
-     daemon /usr/sbin/nscd
-     RETVAL=$?
--    echo
-+    printf "\n"
-     [ $RETVAL -eq 0 ] && touch /var/lock/subsys/nscd
-     return $RETVAL
- }
- 
- stop () {
--    echo -n $"Stopping $prog: "
-+    printf "Stopping $prog: "
-     /usr/sbin/nscd -K
-     RETVAL=$?
-     if [ $RETVAL -eq 0 ]; then
-@@ -67,11 +67,11 @@ stop () {
- 	# a non-privileged user
- 	rm -f /var/run/nscd/nscd.pid
- 	rm -f /var/run/nscd/socket
--       	success $"$prog shutdown"
-+	success "$prog shutdown"
-     else
--       	failure $"$prog shutdown"
-+	failure "$prog shutdown"
-     fi
--    echo
-+    printf "\n"
-     return $RETVAL
- }
- 
-@@ -103,13 +103,13 @@ case "$1" in
- 	RETVAL=$?
- 	;;
-     force-reload | reload)
--    	echo -n $"Reloading $prog: "
-+	printf "Reloading $prog: "
- 	killproc /usr/sbin/nscd -HUP
- 	RETVAL=$?
--	echo
-+	printf "\n"
- 	;;
-     *)
--	echo $"Usage: $0 {start|stop|status|restart|reload|condrestart}"
-+	printf "Usage: $0 {start|stop|status|restart|reload|condrestart}\n"
- 	RETVAL=1
- 	;;
- esac
diff --git a/poky/meta/recipes-core/glibc/glibc/0021-timezone-Make-shell-interpreter-overridable-in-tzsel.patch b/poky/meta/recipes-core/glibc/glibc/0021-timezone-Make-shell-interpreter-overridable-in-tzsel.patch
new file mode 100644
index 0000000..2f4e92d
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0021-timezone-Make-shell-interpreter-overridable-in-tzsel.patch
@@ -0,0 +1,47 @@
+From 8d5ff7357354394b958321204b75e3855781aefe Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 9 Dec 2021 15:14:42 -0800
+Subject: [PATCH] timezone: Make shell interpreter overridable in tzselect.ksh
+
+define new macro called KSHELL which can be used to define default shell
+use Bash by default
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makeconfig        | 9 +++++++++
+ timezone/Makefile | 1 +
+ 2 files changed, 10 insertions(+)
+
+diff --git a/Makeconfig b/Makeconfig
+index ba70321af1..4b643768d9 100644
+--- a/Makeconfig
++++ b/Makeconfig
+@@ -293,6 +293,15 @@ ifndef sysincludedir
+ sysincludedir = /usr/include
+ endif
+ 
++# The full path name of a Posix-compliant shell, preferably one that supports
++# the Korn shell's 'select' statement as an extension.
++# These days, Bash is the most popular.
++# It should be OK to set this to /bin/sh, on platforms where /bin/sh
++# lacks 'select' or doesn't completely conform to Posix, but /bin/bash
++# is typically nicer if it works.
++ifndef KSHELL
++KSHELL = /bin/bash
++endif
+ 
+ # Commands to install files.
+ ifndef INSTALL_DATA
+diff --git a/timezone/Makefile b/timezone/Makefile
+index a789c22d26..3e69409a94 100644
+--- a/timezone/Makefile
++++ b/timezone/Makefile
+@@ -134,6 +134,7 @@ $(objpfx)tzselect: tzselect.ksh $(common-objpfx)config.make
+ 	    -e '/TZVERSION=/s|see_Makefile|"$(version)"|' \
+ 	    -e '/PKGVERSION=/s|=.*|="$(PKGVERSION)"|' \
+ 	    -e '/REPORT_BUGS_TO=/s|=.*|="$(REPORT_BUGS_TO)"|' \
++	    -e 's|#!/bin/bash|#!$(KSHELL)|g' \
+ 	    < $< > $@.new
+ 	chmod 555 $@.new
+ 	mv -f $@.new $@
diff --git a/poky/meta/recipes-core/glibc/glibc/0022-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch b/poky/meta/recipes-core/glibc/glibc/0022-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch
deleted file mode 100644
index 5ac9d6d..0000000
--- a/poky/meta/recipes-core/glibc/glibc/0022-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-From 5d1384d86fc44404ca32c6fda2d46ec357337c91 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:27:10 +0000
-Subject: [PATCH] sysdeps/gnu/configure.ac: Set libc_cv_rootsbindir only if its empty
-
-This ensures that it can be set in build environment
-
-Upstream-Status: Submitted [https://patchwork.sourceware.org/project/glibc/patch/20211209203557.1318333-2-raj.khem@gmail.com/]
-Signed-off-by: Matthieu Crapet <Matthieu.Crapet@ingenico.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- sysdeps/gnu/configure    | 2 +-
- sysdeps/gnu/configure.ac | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/sysdeps/gnu/configure b/sysdeps/gnu/configure
-index c15d1087e8..d30d6e37ae 100644
---- a/sysdeps/gnu/configure
-+++ b/sysdeps/gnu/configure
-@@ -32,6 +32,6 @@ case "$prefix" in
-   else
-     libc_cv_localstatedir=$localstatedir
-    fi
--  libc_cv_rootsbindir=/sbin
-+  libc_cv_rootsbindir=${libc_cv_rootsbindir:=/sbin}
-   ;;
- esac
-diff --git a/sysdeps/gnu/configure.ac b/sysdeps/gnu/configure.ac
-index 634fe4de2a..492112e0fd 100644
---- a/sysdeps/gnu/configure.ac
-+++ b/sysdeps/gnu/configure.ac
-@@ -21,6 +21,6 @@ case "$prefix" in
-   else
-     libc_cv_localstatedir=$localstatedir
-    fi
--  libc_cv_rootsbindir=/sbin
-+  libc_cv_rootsbindir=${libc_cv_rootsbindir:=/sbin}
-   ;;
- esac
diff --git a/poky/meta/recipes-core/glibc/glibc/0022-tzselect.ksh-Use-bin-sh-default-shell-interpreter.patch b/poky/meta/recipes-core/glibc/glibc/0022-tzselect.ksh-Use-bin-sh-default-shell-interpreter.patch
new file mode 100644
index 0000000..c409327
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0022-tzselect.ksh-Use-bin-sh-default-shell-interpreter.patch
@@ -0,0 +1,27 @@
+From ba1365f19ccc8378f2fcff892721187537479884 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 15 Dec 2021 21:47:53 -0800
+Subject: [PATCH] tzselect.ksh: Use /bin/sh default shell interpreter
+
+checkbashism reports no issues with tzselect.ksh, therefore using
+/bin/sh instead of /bin/bash should be safe and portable across systems
+which don't ship bash ( embedded systems )
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Cc: Adhemerval Zanella <adhemerval.zanella@linaro.org>
+Cc: Paul Eggert <eggert@cs.ucla.edu>
+---
+ timezone/tzselect.ksh | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/timezone/tzselect.ksh b/timezone/tzselect.ksh
+index 18fce27e24..cc08efb0fb 100755
+--- a/timezone/tzselect.ksh
++++ b/timezone/tzselect.ksh
+@@ -1,4 +1,4 @@
+-#!/bin/bash
++#!/bin/sh
+ # Ask the user about the time zone, and output the resulting TZ value to stdout.
+ # Interact with the user via stderr and stdin.
+ 
diff --git a/poky/meta/recipes-core/glibc/glibc/0023-fix-create-thread-failed-in-unprivileged-process-BZ-.patch b/poky/meta/recipes-core/glibc/glibc/0023-fix-create-thread-failed-in-unprivileged-process-BZ-.patch
new file mode 100644
index 0000000..7b0965f
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0023-fix-create-thread-failed-in-unprivileged-process-BZ-.patch
@@ -0,0 +1,86 @@
+From ffbb37732807e180b14a21d1bf79ad5038252c02 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Sun, 29 Aug 2021 20:49:16 +0800
+Subject: [PATCH] fix create thread failed in unprivileged process [BZ #28287]
+
+Since commit [d8ea0d0168 Add an internal wrapper for clone, clone2 and clone3]
+applied, start a unprivileged container (docker run without --privileged),
+it creates a thread failed in container.
+
+In commit d8ea0d0168, it calls __clone3 if HAVE_CLONE3_WAPPER is defined.  If
+__clone3 returns -1 with ENOSYS, fall back to clone or clone2.
+
+As known from [1], cloneXXX fails with EPERM if CLONE_NEWCGROUP,
+CLONE_NEWIPC, CLONE_NEWNET, CLONE_NEWNS, CLONE_NEWPID, or CLONE_NEWUTS
+was specified by an unprivileged process (process without CAP_SYS_ADMIN)
+
+[1] https://man7.org/linux/man-pages/man2/clone3.2.html
+
+So if __clone3 returns -1 with EPERM, fall back to clone or clone2 could
+fix the issue. Here are the test steps:
+
+1) Prepare test code
+cat > conftest.c <<ENDOF
+ #include <pthread.h>
+ #include <stdio.h>
+
+int check_me = 0;
+void* func(void* data) {check_me = 42; printf("start thread: check_me %d\n", check_me); return &check_me;}
+int main()
+{
+  pthread_t t;
+  void *ret;
+  pthread_create (&t, 0, func, 0);
+  pthread_join (t, &ret);
+  printf("check_me %d, p %p\n", check_me, &ret);
+  return (check_me != 42 || ret != &check_me);
+}
+
+ENDOF
+
+2) Compile
+gcc -o conftest -pthread conftest.c
+
+3) Start a container with glibc 2.34 installed
+[skip details]
+docker run -it <container-image-name> bash
+
+4) Run conftest without this patch
+$ ./conftest
+check_me 0, p 0x7ffd91ccd400
+
+5) Run conftest with this patch
+$ ./conftest
+start thread: check_me 42
+check_me 42, p 0x7ffe253c6f20
+
+Upstream-Status: Inappropriate [Rejected by upstream]
+
+Upstream glibc rejected it because the latest docker has resolved the issue [1],
+and upstream glibc does not backward compatibility with old docker[2]
+
+In order to build Yocto with uninative in old docker, we need this local
+patch
+
+[1] https://github.com/moby/moby/commit/9f6b562dd12ef7b1f9e2f8e6f2ab6477790a6594
+[2] https://sourceware.org/pipermail/libc-alpha/2021-August/130590.html
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ sysdeps/unix/sysv/linux/clone-internal.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/sysdeps/unix/sysv/linux/clone-internal.c b/sysdeps/unix/sysv/linux/clone-internal.c
+index a71effcbd3..a0569113aa 100644
+--- a/sysdeps/unix/sysv/linux/clone-internal.c
++++ b/sysdeps/unix/sysv/linux/clone-internal.c
+@@ -52,7 +52,7 @@ __clone_internal (struct clone_args *cl_args,
+   /* Try clone3 first.  */
+   int saved_errno = errno;
+   ret = __clone3 (cl_args, sizeof (*cl_args), func, arg);
+-  if (ret != -1 || errno != ENOSYS)
++  if (ret != -1 || (errno != ENOSYS && errno != EPERM))
+     return ret;
+ 
+   /* NB: Restore errno since errno may be checked against non-zero
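
A minimal C sketch of the fallback logic this one-line change extends; the
try_new_syscall() and legacy_fallback() functions are hypothetical stand-ins
rather than glibc internals. The point of the patch is that EPERM (returned by
older container runtimes' seccomp policies for clone3) now joins ENOSYS as a
reason to take the legacy code path:

    #include <errno.h>
    #include <stdio.h>

    /* Stand-in for the newer syscall wrapper; pretend the runtime's seccomp
       filter rejects it.  */
    static int try_new_syscall (void) { errno = EPERM; return -1; }

    /* Stand-in for the older, universally available code path.  */
    static int legacy_fallback (void) { return 0; }

    static int
    robust_call (void)
    {
      int saved_errno = errno;
      int ret = try_new_syscall ();
      /* Mirror of the patched condition: keep the result unless the failure
         was ENOSYS or EPERM, in which case try the legacy route.  */
      if (ret != -1 || (errno != ENOSYS && errno != EPERM))
        return ret;
      errno = saved_errno;   /* Hide the probe's errno from the caller.  */
      return legacy_fallback ();
    }

    int
    main (void)
    {
      printf ("robust_call returned %d\n", robust_call ());
      return 0;
    }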
diff --git a/poky/meta/recipes-core/glibc/glibc/0023-timezone-Make-shell-interpreter-overridable-in-tzsel.patch b/poky/meta/recipes-core/glibc/glibc/0023-timezone-Make-shell-interpreter-overridable-in-tzsel.patch
deleted file mode 100644
index e5e6ceb..0000000
--- a/poky/meta/recipes-core/glibc/glibc/0023-timezone-Make-shell-interpreter-overridable-in-tzsel.patch
+++ /dev/null
@@ -1,47 +0,0 @@
-From c0f251c58655e3377fe1c67a026c21ef68d2abcf Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 9 Dec 2021 15:14:42 -0800
-Subject: [PATCH] timezone: Make shell interpreter overridable in tzselect.ksh
-
-define new macro called KSHELL which can be used to define default shell
-use Bash by default
-
-Upstream-Status: Submitted [https://patchwork.sourceware.org/project/glibc/patch/20211209234015.1554552-1-raj.khem@gmail.com/]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- Makeconfig        | 9 +++++++++
- timezone/Makefile | 1 +
- 2 files changed, 10 insertions(+)
-
-diff --git a/Makeconfig b/Makeconfig
-index 775bf12b65..7b9a8f0a94 100644
---- a/Makeconfig
-+++ b/Makeconfig
-@@ -293,6 +293,15 @@ ifndef sysincludedir
- sysincludedir = /usr/include
- endif
- 
-+# The full path name of a Posix-compliant shell, preferably one that supports
-+# the Korn shell's 'select' statement as an extension.
-+# These days, Bash is the most popular.
-+# It should be OK to set this to /bin/sh, on platforms where /bin/sh
-+# lacks 'select' or doesn't completely conform to Posix, but /bin/bash
-+# is typically nicer if it works.
-+ifndef KSHELL
-+KSHELL = /bin/bash
-+endif
- 
- # Commands to install files.
- ifndef INSTALL_DATA
-diff --git a/timezone/Makefile b/timezone/Makefile
-index c624a189b3..dc8f5277de 100644
---- a/timezone/Makefile
-+++ b/timezone/Makefile
-@@ -127,6 +127,7 @@ $(objpfx)tzselect: tzselect.ksh $(common-objpfx)config.make
- 	    -e '/TZVERSION=/s|see_Makefile|"$(version)"|' \
- 	    -e '/PKGVERSION=/s|=.*|="$(PKGVERSION)"|' \
- 	    -e '/REPORT_BUGS_TO=/s|=.*|="$(REPORT_BUGS_TO)"|' \
-+	    -e 's|#!/bin/bash|#!$(KSHELL)|g' \
- 	    < $< > $@.new
- 	chmod 555 $@.new
- 	mv -f $@.new $@
diff --git a/poky/meta/recipes-core/glibc/glibc/0024-Avoid-hardcoded-build-time-paths-in-the-output-binar.patch b/poky/meta/recipes-core/glibc/glibc/0024-Avoid-hardcoded-build-time-paths-in-the-output-binar.patch
new file mode 100644
index 0000000..7983d1f
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0024-Avoid-hardcoded-build-time-paths-in-the-output-binar.patch
@@ -0,0 +1,32 @@
+From f873e25e29684cbbf7b141d9c6ee725268505c29 Mon Sep 17 00:00:00 2001
+From: Richard Purdie <richard.purdie@linuxfoundation.org>
+Date: Sun, 24 Jul 2022 07:07:29 -0700
+Subject: [PATCH] Avoid hardcoded build time paths in the output binaries
+
+Replace the compile definitions with the output locations.
+
+Upstream-Status: Inappropriate [would need reworking somehow to be acceptable upstream]
+
+Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ support/Makefile | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/support/Makefile b/support/Makefile
+index 9b50eac117..4c24d9f61a 100644
+--- a/support/Makefile
++++ b/support/Makefile
+@@ -218,9 +218,9 @@ libsupport-inhibit-o += .o
+ endif
+ 
+ CFLAGS-support_paths.c = \
+-		-DSRCDIR_PATH=\"`cd .. ; pwd`\" \
+-		-DOBJDIR_PATH=\"`cd $(objpfx)/..; pwd`\" \
+-		-DOBJDIR_ELF_LDSO_PATH=\"`cd $(objpfx)/..; pwd`/elf/$(rtld-installed-name)\" \
++		-DSRCDIR_PATH=\"$(oe_srcdir)\" \
++		-DOBJDIR_PATH=\"$(libdir)/glibc-tests/ptest/tests/glibc-ptest\" \
++		-DOBJDIR_ELF_LDSO_PATH=\"$(slibdir)/$(rtld-installed-name)\" \
+ 		-DINSTDIR_PATH=\"$(prefix)\" \
+ 		-DLIBDIR_PATH=\"$(libdir)\" \
+ 		-DBINDIR_PATH=\"$(bindir)\" \
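Editor's note: the CFLAGS change above swaps shell-evaluated build-tree paths for fixed install locations; the macros end up as ordinary string constants in the test harness. A minimal illustration of how such a -D definition is consumed (hypothetical file, not part of glibc's support/ code; the fallback value is an assumption for the example only):

    /* Built with e.g.:  cc -DSRCDIR_PATH='"/usr/src/debug/glibc"' demo.c
     * The value baked in at compile time is emitted verbatim into the binary,
     * which is why the recipe points these macros at install-time locations
     * rather than the temporary build tree. */
    #include <stdio.h>

    #ifndef SRCDIR_PATH
    # define SRCDIR_PATH "/unknown/srcdir"   /* illustrative fallback only */
    #endif

    int main (void)
    {
      printf ("glibc test sources expected under: %s\n", SRCDIR_PATH);
      return 0;
    }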
diff --git a/poky/meta/recipes-core/glibc/glibc/0024-fix-create-thread-failed-in-unprivileged-process-BZ-.patch b/poky/meta/recipes-core/glibc/glibc/0024-fix-create-thread-failed-in-unprivileged-process-BZ-.patch
deleted file mode 100644
index b431ea1..0000000
--- a/poky/meta/recipes-core/glibc/glibc/0024-fix-create-thread-failed-in-unprivileged-process-BZ-.patch
+++ /dev/null
@@ -1,88 +0,0 @@
-From 6609858239b8f94e12c19eac0cec425511d1211f Mon Sep 17 00:00:00 2001
-From: Hongxu Jia <hongxu.jia@windriver.com>
-Date: Sun, 29 Aug 2021 20:49:16 +0800
-Subject: [PATCH] fix create thread failed in unprivileged process [BZ #28287]
-
-Since commit [d8ea0d0168 Add an internal wrapper for clone, clone2 and clone3]
-applied, start a unprivileged container (docker run without --privileged),
-it creates a thread failed in container.
-
-In commit d8ea0d0168, it calls __clone3 if HAVE_CLONE3_WAPPER is defined.  If
-__clone3 returns -1 with ENOSYS, fall back to clone or clone2.
-
-As known from [1], cloneXXX fails with EPERM if CLONE_NEWCGROUP,
-CLONE_NEWIPC, CLONE_NEWNET, CLONE_NEWNS, CLONE_NEWPID, or CLONE_NEWUTS
-was specified by an unprivileged process (process without CAP_SYS_ADMIN)
-
-[1] https://man7.org/linux/man-pages/man2/clone3.2.html
-
-So if __clone3 returns -1 with EPERM, fall back to clone or clone2 could
-fix the issue. Here are the test steps:
-
-1) Prepare test code
-cat > conftest.c <<ENDOF
- #include <pthread.h>
- #include <stdio.h>
-
-int check_me = 0;
-void* func(void* data) {check_me = 42; printf("start thread: check_me %d\n", check_me); return &check_me;}
-int main()
-{
-  pthread_t t;
-  void *ret;
-  pthread_create (&t, 0, func, 0);
-  pthread_join (t, &ret);
-  printf("check_me %d, p %p\n", check_me, &ret);
-  return (check_me != 42 || ret != &check_me);
-}
-
-ENDOF
-
-2) Compile
-gcc -o conftest -pthread conftest.c
-
-3) Start a container with glibc 2.34 installed
-[skip details]
-docker run -it <container-image-name> bash
-
-4) Run conftest without this patch
-$ ./conftest
-check_me 0, p 0x7ffd91ccd400
-
-5) Run conftest with this patch
-$ ./conftest
-start thread: check_me 42
-check_me 42, p 0x7ffe253c6f20
-
-Upstream-Status: Inappropriate [Rejected by upstream]
-
-Upstream glibc rejected it because the latest docker has resolved the issue [1],
-and upstream glibc does not backward compatibility with old docker[2]
-
-In order to build Yocto with uninative in old docker, we need this local
-patch
-
-[1] https://github.com/moby/moby/commit/9f6b562dd12ef7b1f9e2f8e6f2ab6477790a6594
-[2] https://sourceware.org/pipermail/libc-alpha/2021-August/130590.html
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- sysdeps/unix/sysv/linux/clone-internal.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/sysdeps/unix/sysv/linux/clone-internal.c b/sysdeps/unix/sysv/linux/clone-internal.c
-index a71effcbd3..a0569113aa 100644
---- a/sysdeps/unix/sysv/linux/clone-internal.c
-+++ b/sysdeps/unix/sysv/linux/clone-internal.c
-@@ -52,7 +52,7 @@ __clone_internal (struct clone_args *cl_args,
-   /* Try clone3 first.  */
-   int saved_errno = errno;
-   ret = __clone3 (cl_args, sizeof (*cl_args), func, arg);
--  if (ret != -1 || errno != ENOSYS)
-+  if (ret != -1 || (errno != ENOSYS && errno != EPERM))
-     return ret;
- 
-   /* NB: Restore errno since errno may be checked against non-zero
--- 
-2.27.0
-
diff --git a/poky/meta/recipes-core/glibc/glibc/0025-startup-Force-O2.patch b/poky/meta/recipes-core/glibc/glibc/0025-startup-Force-O2.patch
new file mode 100644
index 0000000..1f34262
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/0025-startup-Force-O2.patch
@@ -0,0 +1,28 @@
+From 5e635e5dc7d1b21a78f38109d4f43a03bec865c8 Mon Sep 17 00:00:00 2001
+From: "H.J. Lu" <hjl.tools@gmail.com>
+Date: Sun, 7 Aug 2022 12:51:48 +0200
+Subject: [PATCH] startup: Force -O2
+
+Upstream-Status: Submitted [https://sourceware.org/bugzilla/show_bug.cgi?id=29249]
+
+Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
+---
+ sysdeps/unix/sysv/linux/startup.h | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/sysdeps/unix/sysv/linux/startup.h b/sysdeps/unix/sysv/linux/startup.h
+index 39859b404a..e1fc1b682d 100644
+--- a/sysdeps/unix/sysv/linux/startup.h
++++ b/sysdeps/unix/sysv/linux/startup.h
+@@ -21,6 +21,11 @@
+ #else
+ # include <sysdep.h>
+ 
++# if !defined __OPTIMIZE__ || __OPTIMIZE__ < 2
++/* Force to fold strlen.  */
++#  pragma GCC optimize(2)
++# endif
++
+ /* Avoid a run-time invocation of strlen.  */
+ #define _startup_fatal(message)                                         \
+   do                                                                    \
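Editor's note: the patch above relies on "#pragma GCC optimize(2)" so that the early-startup string handling is compiled as if -O2 were in effect even in -O0/-Og builds, letting strlen() on literals fold at compile time. A small stand-alone sketch of the same pragma (file and function names are made up for illustration):

    /* demo.c: force -O2 for this translation unit when the build as a whole
     * uses a lower optimization level, mirroring the startup.h hunk above.
     * With optimization enabled, GCC can fold strlen() of a string literal
     * into a constant, so no library call is needed in early startup. */
    #include <string.h>
    #include <unistd.h>

    #if !defined __OPTIMIZE__ || __OPTIMIZE__ < 2
    # pragma GCC optimize(2)
    #endif

    static void
    report (void)
    {
      static const char msg[] = "early startup message\n";
      /* strlen (msg) is folded to a compile-time constant under -O2 */
      write (STDOUT_FILENO, msg, strlen (msg));
    }

    int main (void) { report (); return 0; }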
diff --git a/poky/meta/recipes-core/glibc/glibc/reproducible-paths.patch b/poky/meta/recipes-core/glibc/glibc/reproducible-paths.patch
deleted file mode 100644
index 0754dca..0000000
--- a/poky/meta/recipes-core/glibc/glibc/reproducible-paths.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-Avoid hardcoded build time paths in the output binaries by replacing the compile
-definitions with the output locations.
-
-Upstream-Status: Inappropriate [would need reworking somehow to be acceptable upstream]
-Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
-
-Index: git/support/Makefile
-===================================================================
---- git.orig/support/Makefile
-+++ git/support/Makefile
-@@ -216,9 +216,9 @@ libsupport-inhibit-o += .o
- endif
- 
- CFLAGS-support_paths.c = \
--		-DSRCDIR_PATH=\"`cd .. ; pwd`\" \
--		-DOBJDIR_PATH=\"`cd $(objpfx)/..; pwd`\" \
--		-DOBJDIR_ELF_LDSO_PATH=\"`cd $(objpfx)/..; pwd`/elf/$(rtld-installed-name)\" \
-+		-DSRCDIR_PATH=\"$(oe_srcdir)\" \
-+		-DOBJDIR_PATH=\"$(libdir)/glibc-tests/ptest/tests/glibc-ptest\" \
-+		-DOBJDIR_ELF_LDSO_PATH=\"$(slibdir)/$(rtld-installed-name)\" \
- 		-DINSTDIR_PATH=\"$(prefix)\" \
- 		-DLIBDIR_PATH=\"$(libdir)\" \
- 		-DBINDIR_PATH=\"$(bindir)\" \
diff --git a/poky/meta/recipes-core/glibc/glibc_2.35.bb b/poky/meta/recipes-core/glibc/glibc_2.35.bb
deleted file mode 100644
index 96fe39c..0000000
--- a/poky/meta/recipes-core/glibc/glibc_2.35.bb
+++ /dev/null
@@ -1,123 +0,0 @@
-require glibc.inc
-require glibc-version.inc
-
-CVE_CHECK_IGNORE += "CVE-2020-10029 CVE-2021-27645"
-
-# glibc https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-1010022
-# glibc https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-1010023
-# glibc https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-1010024
-# Upstream glibc maintainers dispute there is any issue and have no plans to address it further.
-# "this is being treated as a non-security bug and no real threat."
-CVE_CHECK_IGNORE += "CVE-2019-1010022 CVE-2019-1010023 CVE-2019-1010024"
-
-# glibc https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-1010025
-# Allows for ASLR bypass so can bypass some hardening, not an exploit in itself, may allow
-# easier access for another. "ASLR bypass itself is not a vulnerability."
-# Potential patch at https://sourceware.org/bugzilla/show_bug.cgi?id=22853
-CVE_CHECK_IGNORE += "CVE-2019-1010025"
-
-DEPENDS += "gperf-native bison-native"
-
-NATIVESDKFIXES ?= ""
-NATIVESDKFIXES:class-nativesdk = "\
-           file://0003-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch \
-           file://0004-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch \
-           file://0005-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch \
-           file://0006-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch \
-           file://0007-nativesdk-glibc-Make-relocatable-install-for-locales.patch \
-           file://0008-nativesdk-glibc-Fall-back-to-faccessat-on-faccess2-r.patch \
-"
-
-SRC_URI =  "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \
-           file://etc/ld.so.conf \
-           file://generate-supported.mk \
-           file://makedbs.sh \
-           \
-           ${NATIVESDKFIXES} \
-           file://0009-yes-within-the-path-sets-wrong-config-variables.patch \
-           file://0010-eglibc-Cross-building-and-testing-instructions.patch \
-           file://0011-eglibc-Help-bootstrap-cross-toolchain.patch \
-           file://0012-eglibc-Resolve-__fpscr_values-on-SH4.patch \
-           file://0013-eglibc-Forward-port-cross-locale-generation-support.patch \
-           file://0014-localedef-add-to-archive-uses-a-hard-coded-locale-pa.patch \
-           file://0016-locale-prevent-maybe-uninitialized-errors-with-Os-BZ.patch \
-           file://0017-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch \
-           file://0018-wordsize.h-Unify-the-header-between-arm-and-aarch64.patch \
-           file://0019-powerpc-Do-not-ask-compiler-for-finding-arch.patch \
-           file://0021-Replace-echo-with-printf-builtin-in-nscd-init-script.patch \
-           file://0022-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch \
-           file://0023-timezone-Make-shell-interpreter-overridable-in-tzsel.patch \
-           file://0024-fix-create-thread-failed-in-unprivileged-process-BZ-.patch \
-           "
-S = "${WORKDIR}/git"
-B = "${WORKDIR}/build-${TARGET_SYS}"
-
-PACKAGES_DYNAMIC = ""
-
-# the -isystem in bitbake.conf screws up glibc do_stage
-BUILD_CPPFLAGS = "-I${STAGING_INCDIR_NATIVE}"
-TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir}"
-
-GLIBC_BROKEN_LOCALES = ""
-
-GLIBCPIE ??= ""
-
-EXTRA_OECONF = "--enable-kernel=${OLDEST_KERNEL} \
-                --disable-profile \
-                --disable-debug --without-gd \
-                --enable-clocale=gnu \
-                --with-headers=${STAGING_INCDIR} \
-                --without-selinux \
-                --enable-tunables \
-                --enable-bind-now \
-                --enable-stack-protector=strong \
-                --disable-crypt \
-                --with-default-link \
-                ${@bb.utils.contains_any('SELECTED_OPTIMIZATION', '-O0 -Og', '--disable-werror', '', d)} \
-                ${GLIBCPIE} \
-                ${GLIBC_EXTRA_OECONF}"
-
-EXTRA_OECONF += "${@get_libc_fpu_setting(bb, d)}"
-
-EXTRA_OECONF:append:x86 = " ${@bb.utils.contains_any('TUNE_FEATURES', 'i586 c3', '--disable-cet', '--enable-cet', d)}"
-EXTRA_OECONF:append:x86-64 = " --enable-cet"
-
-PACKAGECONFIG ??= "nscd memory-tagging"
-PACKAGECONFIG[nscd] = "--enable-nscd,--disable-nscd"
-PACKAGECONFIG[memory-tagging] = "--enable-memory-tagging,--disable-memory-tagging"
-
-do_patch:append() {
-    bb.build.exec_func('do_fix_readlib_c', d)
-}
-
-do_fix_readlib_c () {
-	sed -i -e 's#OECORE_KNOWN_INTERPRETER_NAMES#${EGLIBC_KNOWN_INTERPRETER_NAMES}#' ${S}/elf/readlib.c
-}
-
-do_configure () {
-# override this function to avoid the autoconf/automake/aclocal/autoheader
-# calls for now
-# don't pass CPPFLAGS into configure, since it upsets the kernel-headers
-# version check and doesn't really help with anything
-        (cd ${S} && gnu-configize) || die "failure in running gnu-configize"
-        find ${S} -name "configure" | xargs touch
-        CPPFLAGS="" oe_runconf
-}
-
-LDFLAGS += "-fuse-ld=bfd"
-do_compile () {
-	base_do_compile
-	echo "Adjust ldd script"
-	if [ -n "${RTLDLIST}" ]
-	then
-		prevrtld=`cat ${B}/elf/ldd | grep "^RTLDLIST=" | sed 's#^RTLDLIST="\?\([^"]*\)"\?$#\1#'`
-		# remove duplicate entries
-		newrtld=`echo $(printf '%s\n' ${prevrtld} ${RTLDLIST} | LC_ALL=C sort -u)`
-		echo "ldd \"${prevrtld} ${RTLDLIST}\" -> \"${newrtld}\""
-		sed -i ${B}/elf/ldd -e "s#^RTLDLIST=.*\$#RTLDLIST=\"${newrtld}\"#"
-	fi
-}
-
-require glibc-package.inc
-
-BBCLASSEXTEND = "nativesdk"
diff --git a/poky/meta/recipes-core/glibc/glibc_2.36.bb b/poky/meta/recipes-core/glibc/glibc_2.36.bb
new file mode 100644
index 0000000..1cfa810
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc_2.36.bb
@@ -0,0 +1,126 @@
+require glibc.inc
+require glibc-version.inc
+
+CVE_CHECK_IGNORE += "CVE-2020-10029 CVE-2021-27645"
+
+# glibc https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-1010022
+# glibc https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-1010023
+# glibc https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-1010024
+# Upstream glibc maintainers dispute there is any issue and have no plans to address it further.
+# "this is being treated as a non-security bug and no real threat."
+CVE_CHECK_IGNORE += "CVE-2019-1010022 CVE-2019-1010023 CVE-2019-1010024"
+
+# glibc https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-1010025
+# Allows for ASLR bypass so can bypass some hardening, not an exploit in itself, may allow
+# easier access for another. "ASLR bypass itself is not a vulnerability."
+# Potential patch at https://sourceware.org/bugzilla/show_bug.cgi?id=22853
+CVE_CHECK_IGNORE += "CVE-2019-1010025"
+
+DEPENDS += "gperf-native bison-native"
+
+NATIVESDKFIXES ?= ""
+NATIVESDKFIXES:class-nativesdk = "\
+           file://0003-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch \
+           file://0004-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch \
+           file://0005-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch \
+           file://0006-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch \
+           file://0007-nativesdk-glibc-Make-relocatable-install-for-locales.patch \
+           file://0008-nativesdk-glibc-Fall-back-to-faccessat-on-faccess2-r.patch \
+"
+
+SRC_URI =  "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \
+           file://etc/ld.so.conf \
+           file://generate-supported.mk \
+           file://makedbs.sh \
+           \
+           ${NATIVESDKFIXES} \
+           file://0009-yes-within-the-path-sets-wrong-config-variables.patch \
+           file://0010-eglibc-Cross-building-and-testing-instructions.patch \
+           file://0011-eglibc-Help-bootstrap-cross-toolchain.patch \
+           file://0012-eglibc-Resolve-__fpscr_values-on-SH4.patch \
+           file://0013-eglibc-Forward-port-cross-locale-generation-support.patch \
+           file://0014-localedef-add-to-archive-uses-a-hard-coded-locale-pa.patch \
+           file://0015-locale-prevent-maybe-uninitialized-errors-with-Os-BZ.patch \
+           file://0016-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch \
+           file://0017-powerpc-Do-not-ask-compiler-for-finding-arch.patch \
+           file://0018-wordsize.h-Unify-the-header-between-arm-and-aarch64.patch \
+           file://0019-Replace-echo-with-printf-builtin-in-nscd-init-script.patch \
+           file://0020-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch \
+           file://0021-timezone-Make-shell-interpreter-overridable-in-tzsel.patch \
+           file://0022-tzselect.ksh-Use-bin-sh-default-shell-interpreter.patch \
+           file://0023-fix-create-thread-failed-in-unprivileged-process-BZ-.patch \
+           file://0024-Avoid-hardcoded-build-time-paths-in-the-output-binar.patch \
+           file://0025-startup-Force-O2.patch \
+"
+S = "${WORKDIR}/git"
+B = "${WORKDIR}/build-${TARGET_SYS}"
+
+PACKAGES_DYNAMIC = ""
+
+# the -isystem in bitbake.conf screws up glibc do_stage
+BUILD_CPPFLAGS = "-I${STAGING_INCDIR_NATIVE}"
+TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir}"
+
+GLIBC_BROKEN_LOCALES = ""
+
+GLIBCPIE ??= ""
+
+EXTRA_OECONF = "--enable-kernel=${OLDEST_KERNEL} \
+                --disable-profile \
+                --disable-debug --without-gd \
+                --enable-clocale=gnu \
+                --with-headers=${STAGING_INCDIR} \
+                --without-selinux \
+                --enable-tunables \
+                --enable-bind-now \
+                --enable-stack-protector=strong \
+                --disable-crypt \
+                --with-default-link \
+                ${@bb.utils.contains_any('SELECTED_OPTIMIZATION', '-O0 -Og', '--disable-werror', '', d)} \
+                ${GLIBCPIE} \
+                ${GLIBC_EXTRA_OECONF}"
+
+EXTRA_OECONF += "${@get_libc_fpu_setting(bb, d)}"
+
+EXTRA_OECONF:append:x86 = " ${@bb.utils.contains_any('TUNE_FEATURES', 'i586 c3', '--disable-cet', '--enable-cet', d)}"
+EXTRA_OECONF:append:x86-64 = " --enable-cet"
+
+PACKAGECONFIG ??= "nscd memory-tagging"
+PACKAGECONFIG[nscd] = "--enable-nscd,--disable-nscd"
+PACKAGECONFIG[memory-tagging] = "--enable-memory-tagging,--disable-memory-tagging"
+
+do_patch:append() {
+    bb.build.exec_func('do_fix_readlib_c', d)
+}
+
+do_fix_readlib_c () {
+	sed -i -e 's#OECORE_KNOWN_INTERPRETER_NAMES#${EGLIBC_KNOWN_INTERPRETER_NAMES}#' ${S}/elf/readlib.c
+}
+
+do_configure () {
+# override this function to avoid the autoconf/automake/aclocal/autoheader
+# calls for now
+# don't pass CPPFLAGS into configure, since it upsets the kernel-headers
+# version check and doesn't really help with anything
+        (cd ${S} && gnu-configize) || die "failure in running gnu-configize"
+        find ${S} -name "configure" | xargs touch
+        CPPFLAGS="" oe_runconf
+}
+
+LDFLAGS += "-fuse-ld=bfd"
+do_compile () {
+	base_do_compile
+	echo "Adjust ldd script"
+	if [ -n "${RTLDLIST}" ]
+	then
+		prevrtld=`cat ${B}/elf/ldd | grep "^RTLDLIST=" | sed 's#^RTLDLIST="\?\([^"]*\)"\?$#\1#'`
+		# remove duplicate entries
+		newrtld=`echo $(printf '%s\n' ${prevrtld} ${RTLDLIST} | LC_ALL=C sort -u)`
+		echo "ldd \"${prevrtld} ${RTLDLIST}\" -> \"${newrtld}\""
+		sed -i ${B}/elf/ldd -e "s#^RTLDLIST=.*\$#RTLDLIST=\"${newrtld}\"#"
+	fi
+}
+
+require glibc-package.inc
+
+BBCLASSEXTEND = "nativesdk"
diff --git a/poky/meta/recipes-core/libcgroup/libcgroup/0001-api-Use-GNU-strerror_r-when-available.patch b/poky/meta/recipes-core/libcgroup/libcgroup/0001-api-Use-GNU-strerror_r-when-available.patch
new file mode 100644
index 0000000..96321d2
--- /dev/null
+++ b/poky/meta/recipes-core/libcgroup/libcgroup/0001-api-Use-GNU-strerror_r-when-available.patch
@@ -0,0 +1,55 @@
+From d190c0c548b3219b75e4c399aa89186e77bbe270 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 23 Aug 2022 20:03:09 -0700
+Subject: [PATCH] api: Use GNU strerror_r when available
+
+The GNU strerror_r is only available in glibc; musl implements the XSI
+version, which is slightly different. Therefore, check whether the GNU
+version is available before using it, otherwise use the XSI-compliant version.
+
+Upstream-Status: Submitted [https://github.com/libcgroup/libcgroup/pull/236]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure.ac | 5 +++++
+ src/api.c    | 8 ++++++--
+ 2 files changed, 11 insertions(+), 2 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index b68c655..831866d 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -183,6 +183,11 @@ AC_FUNC_REALLOC
+ AC_FUNC_STAT
+ AC_CHECK_FUNCS([getmntent hasmntopt memset mkdir rmdir strdup])
+ 
++orig_CFLAGS="$CFLAGS"
++CFLAGS="$CFLAGS -D_GNU_SOURCE"
++AC_FUNC_STRERROR_R
++CFLAGS="$orig_CFLAGS"
++
+ AC_SEARCH_LIBS(
+ 	[fts_open],
+ 	[fts],
+diff --git a/src/api.c b/src/api.c
+index 5c6de11..06aa1d6 100644
+--- a/src/api.c
++++ b/src/api.c
+@@ -4571,9 +4571,13 @@ const char *cgroup_strerror(int code)
+ {
+ 	int idx = code % ECGROUPNOTCOMPILED;
+ 
+-	if (code == ECGOTHER)
++	if (code == ECGOTHER) {
++#ifdef STRERROR_R_CHAR_P
+ 		return strerror_r(cgroup_get_last_errno(), errtext, MAXLEN);
+-
++#else
++		return strerror_r(cgroup_get_last_errno(), errtext, sizeof (errtext)) ? "unknown error" : errtext;
++#endif
++	}
+ 	if (idx >= sizeof(cgroup_strerror_codes)/sizeof(cgroup_strerror_codes[0]))
+ 		return "Invalid error code";
+ 
+-- 
+2.37.2
+
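Editor's note: the libcgroup patch above works around the two incompatible strerror_r() flavors. The GNU one (glibc with _GNU_SOURCE) returns a char * that may or may not be the supplied buffer, while the POSIX/XSI one (used by musl) returns an int and writes the message into the buffer. STRERROR_R_CHAR_P is the macro defined by autoconf's AC_FUNC_STRERROR_R when the GNU form is detected. A condensed stand-alone version of the same pattern follows (names other than strerror_r and STRERROR_R_CHAR_P are illustrative):

    #ifdef STRERROR_R_CHAR_P
    # define _GNU_SOURCE 1     /* ask glibc for the GNU (char *) prototype */
    #endif
    #include <string.h>

    #define ERRTEXT_LEN 256
    static char errtext[ERRTEXT_LEN];

    const char *
    describe_errno (int err)
    {
    #ifdef STRERROR_R_CHAR_P
      /* GNU flavor: the return value is the message string itself. */
      return strerror_r (err, errtext, ERRTEXT_LEN);
    #else
      /* XSI flavor: returns 0 on success and fills errtext. */
      return strerror_r (err, errtext, sizeof (errtext)) ? "unknown error" : errtext;
    #endif
    }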
diff --git a/poky/meta/recipes-core/libcgroup/libcgroup_2.0.2.bb b/poky/meta/recipes-core/libcgroup/libcgroup_2.0.2.bb
deleted file mode 100644
index 7ade372..0000000
--- a/poky/meta/recipes-core/libcgroup/libcgroup_2.0.2.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-SUMMARY = "Linux control group abstraction library"
-HOMEPAGE = "http://libcg.sourceforge.net/"
-DESCRIPTION = "libcgroup is a library that abstracts the control group file system \
-in Linux. Control groups allow you to limit, account and isolate resource usage \
-(CPU, memory, disk I/O, etc.) of groups of processes."
-SECTION = "libs"
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=2d5025d4aa3495befef8f17206a5b0a1"
-
-inherit autotools pkgconfig
-
-DEPENDS = "bison-native flex-native"
-
-SRC_URI = "https://github.com/${BPN}/${BPN}/releases/download/v${PV}/${BP}.tar.gz"
-
-SRC_URI[sha256sum] = "8ef63b32e0aff619547dbb8a25e1f6bab152d7c4864795cf915571a5994d0cf8"
-UPSTREAM_CHECK_URI = "https://github.com/libcgroup/libcgroup/releases/"
-
-DEPENDS:append:libc-musl = " fts "
-EXTRA_OEMAKE:append:libc-musl = " LIBS=-lfts"
-
-PACKAGECONFIG = "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}"
-PACKAGECONFIG[pam] = "--enable-pam-module-dir=${base_libdir}/security --enable-pam=yes,--enable-pam=no,libpam"
-
-PACKAGES =+ "cgroups-pam-plugin"
-FILES:cgroups-pam-plugin = "${base_libdir}/security/pam_cgroup.so*"
-FILES:${PN}-dev += "${base_libdir}/security/*.la"
-FILES:${PN}-staticdev += "${base_libdir}/security/pam_cgroup.a"
-
-do_install:append() {
-	# Until we ship the test suite, this library isn't useful
-	rm -f ${D}${libdir}/libcgroupfortesting.*
-}
diff --git a/poky/meta/recipes-core/libcgroup/libcgroup_3.0.0.bb b/poky/meta/recipes-core/libcgroup/libcgroup_3.0.0.bb
new file mode 100644
index 0000000..f3e8412
--- /dev/null
+++ b/poky/meta/recipes-core/libcgroup/libcgroup_3.0.0.bb
@@ -0,0 +1,35 @@
+SUMMARY = "Linux control group abstraction library"
+HOMEPAGE = "http://libcg.sourceforge.net/"
+DESCRIPTION = "libcgroup is a library that abstracts the control group file system \
+in Linux. Control groups allow you to limit, account and isolate resource usage \
+(CPU, memory, disk I/O, etc.) of groups of processes."
+SECTION = "libs"
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4d794c5d710e5b3547a6cc6a6609a641"
+
+inherit autotools pkgconfig
+
+DEPENDS = "bison-native flex-native"
+
+SRC_URI = "https://github.com/${BPN}/${BPN}/releases/download/v3.0/${BP}.tar.gz \
+           file://0001-api-Use-GNU-strerror_r-when-available.patch \
+"
+
+SRC_URI[sha256sum] = "8d284d896fca1c981b55850e92acd3ad9648a69227c028dda7ae3402af878edd"
+UPSTREAM_CHECK_URI = "https://github.com/libcgroup/libcgroup/releases/"
+
+DEPENDS:append:libc-musl = " fts "
+EXTRA_OEMAKE:append:libc-musl = " LIBS=-lfts"
+
+PACKAGECONFIG = "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}"
+PACKAGECONFIG[pam] = "--enable-pam-module-dir=${base_libdir}/security --enable-pam=yes,--enable-pam=no,libpam"
+
+PACKAGES =+ "cgroups-pam-plugin"
+FILES:cgroups-pam-plugin = "${base_libdir}/security/pam_cgroup.so*"
+FILES:${PN}-dev += "${base_libdir}/security/*.la"
+FILES:${PN}-staticdev += "${base_libdir}/security/pam_cgroup.a"
+
+do_install:append() {
+	# Until we ship the test suite, this library isn't useful
+	rm -f ${D}${libdir}/libcgroupfortesting.*
+}
diff --git a/poky/meta/recipes-core/libxml/libxml2_2.9.14.bb b/poky/meta/recipes-core/libxml/libxml2_2.9.14.bb
index 3081ebf..d803db8 100644
--- a/poky/meta/recipes-core/libxml/libxml2_2.9.14.bb
+++ b/poky/meta/recipes-core/libxml/libxml2_2.9.14.bb
@@ -29,6 +29,10 @@
 
 BINCONFIG = "${bindir}/xml2-config"
 
+# Fixed since 2.9.11 via
+# https://gitlab.gnome.org/GNOME/libxml2/-/commit/c1ba6f54d32b707ca6d91cb3257ce9de82876b6f
+CVE_CHECK_IGNORE += "CVE-2016-3709"
+
 PACKAGECONFIG ??= "python \
     ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
 "
@@ -105,6 +109,8 @@
 do_install:append:class-native () {
 	# Docs are not needed in the native case
 	rm ${D}${datadir}/gtk-doc -rf
+
+	create_wrapper ${D}${bindir}/xmllint XML_CATALOG_FILES=${sysconfdir}/xml/catalog
 }
 
 BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-core/meta/cve-update-db-native.bb b/poky/meta/recipes-core/meta/cve-update-db-native.bb
index 18af89b..944243f 100644
--- a/poky/meta/recipes-core/meta/cve-update-db-native.bb
+++ b/poky/meta/recipes-core/meta/cve-update-db-native.bb
@@ -66,9 +66,7 @@
 
     # Connect to database
     conn = sqlite3.connect(db_file)
-    c = conn.cursor()
-
-    initialize_db(c)
+    initialize_db(conn)
 
     with bb.progress.ProgressHandler(d) as ph, open(os.path.join(d.getVar("TMPDIR"), 'cve_check'), 'a') as cve_f:
         total_years = date.today().year + 1 - YEAR_START
@@ -98,19 +96,21 @@
                     return
 
             # Compare with current db last modified date
-            c.execute("select DATE from META where YEAR = ?", (year,))
-            meta = c.fetchone()
+            cursor = conn.execute("select DATE from META where YEAR = ?", (year,))
+            meta = cursor.fetchone()
+            cursor.close()
+
             if not meta or meta[0] != last_modified:
                 bb.debug(2, "Updating entries")
                 # Clear products table entries corresponding to current year
-                c.execute("delete from PRODUCTS where ID like ?", ('CVE-%d%%' % year,))
+                conn.execute("delete from PRODUCTS where ID like ?", ('CVE-%d%%' % year,)).close()
 
                 # Update db with current year json file
                 try:
                     response = urllib.request.urlopen(json_url)
                     if response:
-                        update_db(c, gzip.decompress(response.read()).decode('utf-8'))
-                    c.execute("insert or replace into META values (?, ?)", [year, last_modified])
+                        update_db(conn, gzip.decompress(response.read()).decode('utf-8'))
+                    conn.execute("insert or replace into META values (?, ?)", [year, last_modified]).close()
                 except urllib.error.URLError as e:
                     cve_f.write('Warning: CVE db update error, CVE data is outdated.\n\n')
                     bb.warn("Cannot parse CVE data (%s), update failed" % e.reason)
@@ -129,21 +129,26 @@
 do_fetch[file-checksums] = ""
 do_fetch[vardeps] = ""
 
-def initialize_db(c):
-    c.execute("CREATE TABLE IF NOT EXISTS META (YEAR INTEGER UNIQUE, DATE TEXT)")
+def initialize_db(conn):
+    with conn:
+        c = conn.cursor()
 
-    c.execute("CREATE TABLE IF NOT EXISTS NVD (ID TEXT UNIQUE, SUMMARY TEXT, \
-        SCOREV2 TEXT, SCOREV3 TEXT, MODIFIED INTEGER, VECTOR TEXT)")
+        c.execute("CREATE TABLE IF NOT EXISTS META (YEAR INTEGER UNIQUE, DATE TEXT)")
 
-    c.execute("CREATE TABLE IF NOT EXISTS PRODUCTS (ID TEXT, \
-        VENDOR TEXT, PRODUCT TEXT, VERSION_START TEXT, OPERATOR_START TEXT, \
-        VERSION_END TEXT, OPERATOR_END TEXT)")
-    c.execute("CREATE INDEX IF NOT EXISTS PRODUCT_ID_IDX on PRODUCTS(ID);")
+        c.execute("CREATE TABLE IF NOT EXISTS NVD (ID TEXT UNIQUE, SUMMARY TEXT, \
+            SCOREV2 TEXT, SCOREV3 TEXT, MODIFIED INTEGER, VECTOR TEXT)")
 
-def parse_node_and_insert(c, node, cveId):
+        c.execute("CREATE TABLE IF NOT EXISTS PRODUCTS (ID TEXT, \
+            VENDOR TEXT, PRODUCT TEXT, VERSION_START TEXT, OPERATOR_START TEXT, \
+            VERSION_END TEXT, OPERATOR_END TEXT)")
+        c.execute("CREATE INDEX IF NOT EXISTS PRODUCT_ID_IDX on PRODUCTS(ID);")
+
+        c.close()
+
+def parse_node_and_insert(conn, node, cveId):
     # Parse children node if needed
     for child in node.get('children', ()):
-        parse_node_and_insert(c, child, cveId)
+        parse_node_and_insert(conn, child, cveId)
 
     def cpe_generator():
         for cpe in node.get('cpe_match', ()):
@@ -200,9 +205,9 @@
                     # Save processing by representing as -.
                     yield [cveId, vendor, product, '-', '', '', '']
 
-    c.executemany("insert into PRODUCTS values (?, ?, ?, ?, ?, ?, ?)", cpe_generator())
+    conn.executemany("insert into PRODUCTS values (?, ?, ?, ?, ?, ?, ?)", cpe_generator()).close()
 
-def update_db(c, jsondata):
+def update_db(conn, jsondata):
     import json
     root = json.loads(jsondata)
 
@@ -226,12 +231,12 @@
             accessVector = accessVector or "UNKNOWN"
             cvssv3 = 0.0
 
-        c.execute("insert or replace into NVD values (?, ?, ?, ?, ?, ?)",
-                [cveId, cveDesc, cvssv2, cvssv3, date, accessVector])
+        conn.execute("insert or replace into NVD values (?, ?, ?, ?, ?, ?)",
+                [cveId, cveDesc, cvssv2, cvssv3, date, accessVector]).close()
 
         configurations = elt['configurations']['nodes']
         for config in configurations:
-            parse_node_and_insert(c, config, cveId)
+            parse_node_and_insert(conn, config, cveId)
 
 
 do_fetch[nostamp] = "1"
diff --git a/poky/meta/recipes-core/meta/testexport-tarball.bb b/poky/meta/recipes-core/meta/testexport-tarball.bb
index bb9f8de..abdd009 100644
--- a/poky/meta/recipes-core/meta/testexport-tarball.bb
+++ b/poky/meta/recipes-core/meta/testexport-tarball.bb
@@ -4,7 +4,7 @@
 SUMMARY = "Standalone tarball for test systems with missing software"
 LICENSE = "MIT"
 
-TEST_EXPORT_SDK_PACKAGES ??= ""
+require conf/testexport.conf
 
 TOOLCHAIN_TARGET_TASK ?= ""
 
diff --git a/poky/meta/recipes-core/musl/musl/0001-Make-dynamic-linker-a-relative-symlink-to-libc.patch b/poky/meta/recipes-core/musl/musl/0001-Make-dynamic-linker-a-relative-symlink-to-libc.patch
index ba00efe..8b097f3 100644
--- a/poky/meta/recipes-core/musl/musl/0001-Make-dynamic-linker-a-relative-symlink-to-libc.patch
+++ b/poky/meta/recipes-core/musl/musl/0001-Make-dynamic-linker-a-relative-symlink-to-libc.patch
@@ -1,7 +1,7 @@
-From 0ec74744a4cba7c5fdfaa2685995119a4fca0260 Mon Sep 17 00:00:00 2001
+From f95b6fd0475a95c00e886219271cb5c93838e3c3 Mon Sep 17 00:00:00 2001
 From: Amarnath Valluri <amarnath.valluri@intel.com>
 Date: Wed, 18 Jan 2017 16:14:37 +0200
-Subject: [PATCH] Make dynamic linker a relative symlink to libc
+Subject: [PATCH 1/2] Make dynamic linker a relative symlink to libc
 
 absolute symlink into $(libdir) fails to load in a cross build
 environment, especially when executing qemu in usermode to run target
@@ -13,18 +13,19 @@
  Make use of 'ln -r' to create relative symlinks, as most fo the distros
  shipping coreutils 8.16+
 
+Upstream-Status: Pending
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
 Signed-off-by: Amarnath Valluri <amarnath.valluri@intel.com>
 ---
-Upstream-Status: Pending
----
  Makefile         | 2 +-
  tools/install.sh | 8 +++++---
  2 files changed, 6 insertions(+), 4 deletions(-)
 
+diff --git a/Makefile b/Makefile
+index e8cc4436..466d9afd 100644
 --- a/Makefile
 +++ b/Makefile
-@@ -210,7 +210,7 @@ $(DESTDIR)$(includedir)/%: $(srcdir)/inc
+@@ -210,7 +210,7 @@ $(DESTDIR)$(includedir)/%: $(srcdir)/include/%
  	$(INSTALL) -D -m 644 $< $@
  
  $(DESTDIR)$(LDSO_PATHNAME): $(DESTDIR)$(libdir)/libc.so
@@ -33,6 +34,8 @@
  
  install-libs: $(ALL_LIBS:lib/%=$(DESTDIR)$(libdir)/%) $(if $(SHARED_LIBS),$(DESTDIR)$(LDSO_PATHNAME),)
  
+diff --git a/tools/install.sh b/tools/install.sh
+index d913b60b..b6a7f797 100755
 --- a/tools/install.sh
 +++ b/tools/install.sh
 @@ -6,18 +6,20 @@
@@ -58,7 +61,7 @@
  m) mode=$OPTARG ;;
  ?) usage ;;
  esac
-@@ -48,7 +50,7 @@ trap 'rm -f "$tmp"' EXIT INT QUIT TERM H
+@@ -48,7 +50,7 @@ trap 'rm -f "$tmp"' EXIT INT QUIT TERM HUP
  umask 077
  
  if test "$symlink" ; then
@@ -67,3 +70,6 @@
  else
  cat < "$1" > "$tmp"
  chmod "$mode" "$tmp"
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-core/musl/musl/0002-ldso-Use-syslibdir-and-libdir-as-default-pathes-to-l.patch b/poky/meta/recipes-core/musl/musl/0002-ldso-Use-syslibdir-and-libdir-as-default-pathes-to-l.patch
index 0aeb5eb..59bfae5 100644
--- a/poky/meta/recipes-core/musl/musl/0002-ldso-Use-syslibdir-and-libdir-as-default-pathes-to-l.patch
+++ b/poky/meta/recipes-core/musl/musl/0002-ldso-Use-syslibdir-and-libdir-as-default-pathes-to-l.patch
@@ -1,7 +1,8 @@
-From 5a2886f81dbca3f2ed28eebe7d27d471da278db8 Mon Sep 17 00:00:00 2001
+From 3cce8716c6c3ae2e0c835caeac3780ec35090b2d Mon Sep 17 00:00:00 2001
 From: Serhey Popovych <serhe.popovych@gmail.com>
 Date: Tue, 11 Dec 2018 05:44:20 -0500
-Subject: [PATCH] ldso: Use syslibdir and libdir as default pathes to libdirs
+Subject: [PATCH 2/2] ldso: Use syslibdir and libdir as default pathes to
+ libdirs
 
 In absence of /etc/ld-musl-$(ARCH).path ldso uses default path to search
 libraries /lib:/usr/local/lib:/usr/lib.
@@ -20,6 +21,8 @@
  ldso/dynlink.c | 4 +++-
  2 files changed, 5 insertions(+), 2 deletions(-)
 
+diff --git a/Makefile b/Makefile
+index 466d9afd..d2f458fa 100644
 --- a/Makefile
 +++ b/Makefile
 @@ -47,7 +47,8 @@ CFLAGS_AUTO = -Os -pipe
@@ -32,6 +35,8 @@
  CFLAGS_ALL += $(CPPFLAGS) $(CFLAGS_AUTO) $(CFLAGS)
  
  LDFLAGS_ALL = $(LDFLAGS_AUTO) $(LDFLAGS)
+diff --git a/ldso/dynlink.c b/ldso/dynlink.c
+index cc677952..b0e8815b 100644
 --- a/ldso/dynlink.c
 +++ b/ldso/dynlink.c
 @@ -29,6 +29,8 @@
@@ -40,10 +45,10 @@
  
 +#define SYS_PATH_DFLT SYSLIBDIR ":" LIBDIR
 +
- static void error(const char *, ...);
- 
- #define MAXP2(a,b) (-(-(a)&-(b)))
-@@ -1094,7 +1096,7 @@ static struct dso *load_library(const ch
+ static void error_impl(const char *, ...);
+ static void error_noop(const char *, ...);
+ static void (*error)(const char *, ...) = error_noop;
+@@ -1097,7 +1099,7 @@ static struct dso *load_library(const char *name, struct dso *needed_by)
  					sys_path = "";
  				}
  			}
@@ -52,3 +57,6 @@
  			fd = path_open(name, sys_path, buf, sizeof buf);
  		}
  		pathname = buf;
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-core/musl/musl_git.bb b/poky/meta/recipes-core/musl/musl_git.bb
index fde5fc0..97c27eb 100644
--- a/poky/meta/recipes-core/musl/musl_git.bb
+++ b/poky/meta/recipes-core/musl/musl_git.bb
@@ -4,7 +4,7 @@
 require musl.inc
 inherit linuxloader
 
-SRCREV = "6e9d2370c7559af80b32a91f20898f41597e093b"
+SRCREV = "37e18b7bf307fa4a8c745feebfcba54a0ba74f30"
 
 BASEVER = "1.2.3"
 
diff --git a/poky/meta/recipes-core/ncurses/files/exit_prototype.patch b/poky/meta/recipes-core/ncurses/files/exit_prototype.patch
new file mode 100644
index 0000000..791421a
--- /dev/null
+++ b/poky/meta/recipes-core/ncurses/files/exit_prototype.patch
@@ -0,0 +1,22 @@
+Add the headers needed to declare mbstate_t and exit()
+
+Upstream-Status: Inappropriate [Reconfigure will solve it]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+--- a/configure
++++ b/configure
+@@ -3422,6 +3422,7 @@ rm -f "conftest.$ac_objext" "conftest.$a
+   cat >"conftest.$ac_ext" <<_ACEOF
+ #line 3423 "configure"
+ #include "confdefs.h"
++#include <stdlib.h>
+ $ac_declaration
+ int
+ main (void)
+@@ -12997,6 +12998,7 @@ cat >"conftest.$ac_ext" <<_ACEOF
+ #include <stdlib.h>
+ #include <stdarg.h>
+ #include <stdio.h>
++#include <wchar.h>
+ #ifdef HAVE_LIBUTF8_H
+ #include <libutf8.h>
+ #endif
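Editor's note: the configure fragments patched above can fail on toolchains where implicitly declared functions are treated as hard errors (the default in recent GCC and Clang releases), so the probes misreport features; adding stdlib.h/wchar.h gives exit() and mbstate_t proper declarations. A minimal reproduction of the failure mode (a hypothetical probe, not the actual configure test):

    /* Without the include, newer compilers reject the implicit declaration of
     * exit() (-Werror=implicit-function-declaration), so a configure probe
     * built from this snippet would "fail" even though exit() exists. */
    #include <stdlib.h>

    int
    main (void)
    {
      exit (0);
    }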
diff --git a/poky/meta/recipes-core/ncurses/ncurses_6.3+20220423.bb b/poky/meta/recipes-core/ncurses/ncurses_6.3+20220423.bb
index 9fba5b5..ca5fa32 100644
--- a/poky/meta/recipes-core/ncurses/ncurses_6.3+20220423.bb
+++ b/poky/meta/recipes-core/ncurses/ncurses_6.3+20220423.bb
@@ -3,6 +3,7 @@
 SRC_URI += "file://0001-tic-hang.patch \
            file://0002-configure-reproducible.patch \
            file://0003-gen-pkgconfig.in-Do-not-include-LDFLAGS-in-generated.patch \
+           file://exit_prototype.patch \
            "
 # commit id corresponds to the revision in package version
 SRCREV = "20db1fb41ec91cd8a1f528e770362092c5403378"
diff --git a/poky/meta/recipes-core/ovmf/ovmf/0003-ovmf-Update-to-latest.patch b/poky/meta/recipes-core/ovmf/ovmf/0003-ovmf-Update-to-latest.patch
deleted file mode 100644
index d710429..0000000
--- a/poky/meta/recipes-core/ovmf/ovmf/0003-ovmf-Update-to-latest.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From 67267d8cc31df16a3608cad1a17c5f1470ef8bbd Mon Sep 17 00:00:00 2001
-From: Steve Langasek <steve.langasek@ubuntu.com>
-Date: Sat, 10 Jun 2017 01:39:36 -0700
-Subject: [PATCH 3/6] ovmf: Update to latest
-
-Description: pass -fno-stack-protector to all GCC toolchains
- The upstream build rules inexplicably pass -fno-stack-protector only
- when building for i386 and amd64.  Add this essential argument to the
- generic rules for gcc 4.4 and later.
-Last-Updated: 2016-04-12
-Upstream-Status: Pending
----
- BaseTools/Conf/tools_def.template | 8 ++++----
- 1 file changed, 4 insertions(+), 4 deletions(-)
-
-diff --git a/BaseTools/Conf/tools_def.template b/BaseTools/Conf/tools_def.template
-index 498696e583..36241b6ede 100755
---- a/BaseTools/Conf/tools_def.template
-+++ b/BaseTools/Conf/tools_def.template
-@@ -1897,10 +1897,10 @@ DEFINE GCC_RISCV64_RC_FLAGS        = -I binary -O elf64-littleriscv   -B riscv
- # GCC Build Flag for included header file list generation

- DEFINE GCC_DEPS_FLAGS              = -MMD -MF $@.deps

- 

--DEFINE GCC48_ALL_CC_FLAGS            = DEF(GCC_ALL_CC_FLAGS) -ffunction-sections -fdata-sections -DSTRING_ARRAY_NAME=$(BASE_NAME)Strings

-+DEFINE GCC48_ALL_CC_FLAGS            = DEF(GCC_ALL_CC_FLAGS) -ffunction-sections -fdata-sections -fno-stack-protector -DSTRING_ARRAY_NAME=$(BASE_NAME)Strings

- DEFINE GCC48_IA32_X64_DLINK_COMMON   = -nostdlib -Wl,-n,-q,--gc-sections -z common-page-size=0x20

--DEFINE GCC48_IA32_CC_FLAGS           = DEF(GCC48_ALL_CC_FLAGS) -m32 -march=i586 -malign-double -fno-stack-protector -D EFI32 -fno-asynchronous-unwind-tables -Wno-address

--DEFINE GCC48_X64_CC_FLAGS            = DEF(GCC48_ALL_CC_FLAGS) -m64 -fno-stack-protector "-DEFIAPI=__attribute__((ms_abi))" -maccumulate-outgoing-args -mno-red-zone -Wno-address -mcmodel=small -fpie -fno-asynchronous-unwind-tables -Wno-address

-+DEFINE GCC48_IA32_CC_FLAGS           = DEF(GCC48_ALL_CC_FLAGS) -m32 -march=i586 -malign-double -D EFI32 -fno-asynchronous-unwind-tables -Wno-address

-+DEFINE GCC48_X64_CC_FLAGS            = DEF(GCC48_ALL_CC_FLAGS) -m64 "-DEFIAPI=__attribute__((ms_abi))" -maccumulate-outgoing-args -mno-red-zone -Wno-address -mcmodel=small -fpie -fno-asynchronous-unwind-tables -Wno-address

- DEFINE GCC48_IA32_X64_ASLDLINK_FLAGS = DEF(GCC48_IA32_X64_DLINK_COMMON) -Wl,--entry,ReferenceAcpiTable -u ReferenceAcpiTable

- DEFINE GCC48_IA32_X64_DLINK_FLAGS    = DEF(GCC48_IA32_X64_DLINK_COMMON) -Wl,--entry,$(IMAGE_ENTRY_POINT) -u $(IMAGE_ENTRY_POINT) -Wl,-Map,$(DEST_DIR_DEBUG)/$(BASE_NAME).map,--whole-archive

- DEFINE GCC48_IA32_DLINK2_FLAGS       = -Wl,--defsym=PECOFF_HEADER_SIZE=0x220 DEF(GCC_DLINK2_FLAGS_COMMON)

-@@ -1909,7 +1909,7 @@ DEFINE GCC48_X64_DLINK2_FLAGS        = -Wl,--defsym=PECOFF_HEADER_SIZE=0x228 DEF
- DEFINE GCC48_ASM_FLAGS               = DEF(GCC_ASM_FLAGS)

- DEFINE GCC48_ARM_ASM_FLAGS           = $(ARCHASM_FLAGS) $(PLATFORM_FLAGS) DEF(GCC_ASM_FLAGS) -mlittle-endian

- DEFINE GCC48_AARCH64_ASM_FLAGS       = $(ARCHASM_FLAGS) $(PLATFORM_FLAGS) DEF(GCC_ASM_FLAGS) -mlittle-endian

--DEFINE GCC48_ARM_CC_FLAGS            = $(ARCHCC_FLAGS) $(PLATFORM_FLAGS) DEF(GCC_ARM_CC_FLAGS) -fstack-protector -mword-relocations

-+DEFINE GCC48_ARM_CC_FLAGS            = $(ARCHCC_FLAGS) $(PLATFORM_FLAGS) DEF(GCC_ARM_CC_FLAGS) -mword-relocations

- DEFINE GCC48_ARM_CC_XIPFLAGS         = DEF(GCC_ARM_CC_XIPFLAGS)

- DEFINE GCC48_AARCH64_CC_FLAGS        = $(ARCHCC_FLAGS) $(PLATFORM_FLAGS) -mcmodel=large DEF(GCC_AARCH64_CC_FLAGS)

- DEFINE GCC48_AARCH64_CC_XIPFLAGS     = DEF(GCC_AARCH64_CC_XIPFLAGS)

--- 
-2.32.0
-
diff --git a/poky/meta/recipes-core/ovmf/ovmf_git.bb b/poky/meta/recipes-core/ovmf/ovmf_git.bb
index aac30ea..4054223 100644
--- a/poky/meta/recipes-core/ovmf/ovmf_git.bb
+++ b/poky/meta/recipes-core/ovmf/ovmf_git.bb
@@ -22,7 +22,6 @@
 SRC_URI = "gitsm://github.com/tianocore/edk2.git;branch=master;protocol=https \
            file://0001-ovmf-update-path-to-native-BaseTools.patch \
            file://0002-BaseTools-makefile-adjust-to-build-in-under-bitbake.patch \
-           file://0003-ovmf-Update-to-latest.patch \
            file://0005-debug-prefix-map.patch \
            file://0006-reproducible.patch \
            "
diff --git a/poky/meta/recipes-core/packagegroups/packagegroup-base.bb b/poky/meta/recipes-core/packagegroups/packagegroup-base.bb
index 7489ef6..d60e177 100644
--- a/poky/meta/recipes-core/packagegroups/packagegroup-base.bb
+++ b/poky/meta/recipes-core/packagegroups/packagegroup-base.bb
@@ -267,11 +267,14 @@
 # packagegroup-base-wifi contain everything needed to get WiFi working
 # WEP/WPA connection needs to be supported out-of-box
 #
+# Choose either 'wpa-supplicant' or 'iwd' as wireless-daemon
+WIRELESS_DAEMON ??= "wpa-supplicant"
 SUMMARY:packagegroup-base-wifi = "WiFi support"
 RDEPENDS:packagegroup-base-wifi = "\
     iw \
     wireless-regdb-static \
-    wpa-supplicant"
+    ${WIRELESS_DAEMON} \
+"
 
 RRECOMMENDS:packagegroup-base-wifi = "\
     ${@bb.utils.contains('COMBINED_FEATURES', 'usbhost', 'kernel-module-zd1211rw', '',d)} \
diff --git a/poky/meta/recipes-core/packagegroups/packagegroup-rust-cross-canadian.bb b/poky/meta/recipes-core/packagegroups/packagegroup-rust-cross-canadian.bb
index 0d4f5ec..bb10a2d 100644
--- a/poky/meta/recipes-core/packagegroups/packagegroup-rust-cross-canadian.bb
+++ b/poky/meta/recipes-core/packagegroups/packagegroup-rust-cross-canadian.bb
@@ -6,13 +6,16 @@
 PACKAGEGROUP_DISABLE_COMPLEMENTARY = "1"
 
 RUST="rust-cross-canadian-${TRANSLATED_TARGET_ARCH}"
-CARGO="cargo-cross-canadian-${TRANSLATED_TARGET_ARCH}"
-RUST_TOOLS="rust-tools-cross-canadian-${TRANSLATED_TARGET_ARCH}"
 
 RDEPENDS:${PN} = " \
     ${@all_multilib_tune_values(d, 'RUST')} \
-    ${@all_multilib_tune_values(d, 'CARGO')} \
-    rust-cross-canadian-src \
-    ${@all_multilib_tune_values(d, 'RUST_TOOLS')} \
+    nativesdk-binutils \
+    nativesdk-gcc \
+    nativesdk-glibc-dev \
+    nativesdk-libgcc-dev \
+    nativesdk-rust \
+    nativesdk-cargo \
+    nativesdk-rust-tools-clippy \
+    nativesdk-rust-tools-rustfmt \
 "
 
diff --git a/poky/meta/recipes-core/packagegroups/packagegroup-self-hosted.bb b/poky/meta/recipes-core/packagegroups/packagegroup-self-hosted.bb
index 772b86b..a1b0ee2 100644
--- a/poky/meta/recipes-core/packagegroups/packagegroup-self-hosted.bb
+++ b/poky/meta/recipes-core/packagegroups/packagegroup-self-hosted.bb
@@ -98,11 +98,14 @@
     glibc-utils \
     rpcsvc-proto \
     "
+
+STRACE = "strace"
+STRACE:riscv32 = ""
 RDEPENDS:packagegroup-self-hosted-debug = " \
     gdb \
     gdbserver \
     rsync \
-    strace \
+    ${STRACE} \
     tcf-agent"
 
 
diff --git a/poky/meta/recipes-core/systemd/systemd-boot_251.3.bb b/poky/meta/recipes-core/systemd/systemd-boot_251.4.bb
similarity index 100%
rename from poky/meta/recipes-core/systemd/systemd-boot_251.3.bb
rename to poky/meta/recipes-core/systemd/systemd-boot_251.4.bb
diff --git a/poky/meta/recipes-core/systemd/systemd.inc b/poky/meta/recipes-core/systemd/systemd.inc
index 03f1559..71eb93f 100644
--- a/poky/meta/recipes-core/systemd/systemd.inc
+++ b/poky/meta/recipes-core/systemd/systemd.inc
@@ -14,7 +14,7 @@
 LIC_FILES_CHKSUM = "file://LICENSE.GPL2;md5=751419260aa954499f7abaabaa882bbe \
                     file://LICENSE.LGPL2.1;md5=4fbd65380cdd255951079008b364516c"
 
-SRCREV = "516108f273888df3dcfa4f42b140252a285a2288"
+SRCREV = "2a674b4b66af1a050a0362b646d2fca90c90112e"
 SRCBRANCH = "v251-stable"
 SRC_URI = "git://github.com/systemd/systemd-stable.git;protocol=https;branch=${SRCBRANCH} \
 "
diff --git a/poky/meta/recipes-core/systemd/systemd/0001-glibc-Remove-include-linux-fs.h-to-resolve-fsconfig_.patch b/poky/meta/recipes-core/systemd/systemd/0001-glibc-Remove-include-linux-fs.h-to-resolve-fsconfig_.patch
deleted file mode 100644
index 6222dfe..0000000
--- a/poky/meta/recipes-core/systemd/systemd/0001-glibc-Remove-include-linux-fs.h-to-resolve-fsconfig_.patch
+++ /dev/null
@@ -1,97 +0,0 @@
-From b0933e76c6f0594c10cf8a9a70b34e15b68066d1 Mon Sep 17 00:00:00 2001
-From: Rudi Heitbaum <rudi@heitbaum.com>
-Date: Sat, 23 Jul 2022 10:38:49 +0000
-Subject: [PATCH] glibc: Remove #include <linux/fs.h> to resolve fsconfig_command/mount_attr conflict with glibc 2.36
-
-Upstream-Status: Backport [https://github.com/systemd/systemd/pull/23992/commits/21c03ad5e9d8d0350e30dae92a5e15da318a1539]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- meson.build             | 13 ++++++++++++-
- src/basic/fd-util.c     |  2 ++
- src/core/namespace.c    |  2 ++
- src/shared/mount-util.c |  2 ++
- 4 files changed, 18 insertions(+), 1 deletion(-)
-
-diff --git a/meson.build b/meson.build
-index 9c170acc0a..a2e4d5054e 100644
---- a/meson.build
-+++ b/meson.build
-@@ -481,7 +481,6 @@ decl_headers = '''
- #include <uchar.h>
- #include <sys/mount.h>
- #include <sys/stat.h>
--#include <linux/fs.h>
- '''
- 
- foreach decl : ['char16_t',
-@@ -493,6 +492,17 @@ foreach decl : ['char16_t',
-         # We get -1 if the size cannot be determined
-         have = cc.sizeof(decl, prefix : decl_headers, args : '-D_GNU_SOURCE') > 0
- 
-+        if decl == 'struct mount_attr'
-+                if have
-+                        want_linux_fs_h = false
-+                else
-+                        have = cc.sizeof(decl,
-+                                         prefix : decl_headers + '#include <linux/fs.h>',
-+                                         args : '-D_GNU_SOURCE') > 0
-+                        want_linux_fs_h = have
-+                endif
-+        endif
-+
-         if decl == 'struct statx'
-                 if have
-                         want_linux_stat_h = false
-@@ -508,6 +518,7 @@ foreach decl : ['char16_t',
- endforeach
- 
- conf.set10('WANT_LINUX_STAT_H', want_linux_stat_h)
-+conf.set10('WANT_LINUX_FS_H', want_linux_fs_h)
- 
- foreach ident : ['secure_getenv', '__secure_getenv']
-         conf.set10('HAVE_' + ident.to_upper(), cc.has_function(ident))
-diff --git a/src/basic/fd-util.c b/src/basic/fd-util.c
-index 6c1de92a26..00591d6c2d 100644
---- a/src/basic/fd-util.c
-+++ b/src/basic/fd-util.c
-@@ -3,7 +3,9 @@
- #include <errno.h>
- #include <fcntl.h>
- #include <linux/btrfs.h>
-+#if WANT_LINUX_FS_H
- #include <linux/fs.h>
-+#endif
- #include <linux/magic.h>
- #include <sys/ioctl.h>
- #include <sys/resource.h>
-diff --git a/src/core/namespace.c b/src/core/namespace.c
-index 3256871803..2eafe43290 100644
---- a/src/core/namespace.c
-+++ b/src/core/namespace.c
-@@ -7,7 +7,9 @@
- #include <sys/file.h>
- #include <sys/mount.h>
- #include <unistd.h>
-+#if WANT_LINUX_FS_H
- #include <linux/fs.h>
-+#endif
- 
- #include "alloc-util.h"
- #include "base-filesystem.h"
-diff --git a/src/shared/mount-util.c b/src/shared/mount-util.c
-index e76e4a0b38..0c8dec7688 100644
---- a/src/shared/mount-util.c
-+++ b/src/shared/mount-util.c
-@@ -7,7 +7,9 @@
- #include <sys/statvfs.h>
- #include <unistd.h>
- #include <linux/loop.h>
-+#if WANT_LINUX_FS_H
- #include <linux/fs.h>
-+#endif
- 
- #include "alloc-util.h"
- #include "chase-symlinks.h"
--- 
-2.25.1
-
diff --git a/poky/meta/recipes-core/systemd/systemd_251.3.bb b/poky/meta/recipes-core/systemd/systemd_251.3.bb
deleted file mode 100644
index 72b9155..0000000
--- a/poky/meta/recipes-core/systemd/systemd_251.3.bb
+++ /dev/null
@@ -1,797 +0,0 @@
-require systemd.inc
-
-PROVIDES = "udev"
-
-PE = "1"
-
-DEPENDS = "intltool-native gperf-native libcap util-linux python3-jinja2-native"
-
-SECTION = "base/shell"
-
-inherit useradd pkgconfig meson perlnative update-rc.d update-alternatives qemu systemd gettext bash-completion manpages features_check
-
-# As this recipe builds udev, respect systemd being in DISTRO_FEATURES so
-# that we don't build both udev and systemd in world builds.
-REQUIRED_DISTRO_FEATURES = "systemd"
-
-SRC_URI += " \
-           file://touchscreen.rules \
-           file://00-create-volatile.conf \
-           ${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', 'file://org.freedesktop.hostname1_no_polkit.conf', '', d)} \
-           ${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', 'file://00-hostnamed-network-user.conf', '', d)} \
-           file://init \
-           file://99-default.preset \
-           file://systemd-pager.sh \
-           file://0001-binfmt-Don-t-install-dependency-links-at-install-tim.patch \
-           file://0003-implment-systemd-sysv-install-for-OE.patch \
-           file://0001-Move-sysusers.d-sysctl.d-binfmt.d-modules-load.d-to-.patch \
-           file://0001-glibc-Remove-include-linux-fs.h-to-resolve-fsconfig_.patch \
-           "
-
-# patches needed by musl
-SRC_URI:append:libc-musl = " ${SRC_URI_MUSL}"
-SRC_URI_MUSL = "\
-               file://0003-missing_type.h-add-comparison_fn_t.patch \
-               file://0004-add-fallback-parse_printf_format-implementation.patch \
-               file://0005-src-basic-missing.h-check-for-missing-strndupa.patch \
-               file://0007-don-t-fail-if-GLOB_BRACE-and-GLOB_ALTDIRFUNC-is-not-.patch \
-               file://0008-add-missing-FTW_-macros-for-musl.patch \
-               file://0010-Use-uintmax_t-for-handling-rlim_t.patch \
-               file://0011-test-sizeof.c-Disable-tests-for-missing-typedefs-in-.patch \
-               file://0012-don-t-pass-AT_SYMLINK_NOFOLLOW-flag-to-faccessat.patch \
-               file://0013-Define-glibc-compatible-basename-for-non-glibc-syste.patch \
-               file://0014-Do-not-disable-buffering-when-writing-to-oom_score_a.patch \
-               file://0015-distinguish-XSI-compliant-strerror_r-from-GNU-specif.patch \
-               file://0018-avoid-redefinition-of-prctl_mm_map-structure.patch \
-               file://0022-do-not-disable-buffer-in-writing-files.patch \
-               file://0025-Handle-__cpu_mask-usage.patch \
-               file://0026-Handle-missing-gshadow.patch \
-               file://0028-missing_syscall.h-Define-MIPS-ABI-defines-for-musl.patch \
-               file://0001-pass-correct-parameters-to-getdents64.patch \
-               file://0002-Add-sys-stat.h-for-S_IFDIR.patch \
-               file://0001-Adjust-for-musl-headers.patch \
-               "
-
-PAM_PLUGINS = " \
-    pam-plugin-unix \
-    pam-plugin-loginuid \
-    pam-plugin-keyinit \
-"
-
-PACKAGECONFIG ??= " \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'acl audit efi ldconfig pam selinux smack usrmerge polkit seccomp', d)} \
-    ${@bb.utils.contains('DISTRO_FEATURES', 'wifi', 'rfkill', '', d)} \
-    ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'xkbcommon', '', d)} \
-    ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', '', 'link-udev-shared', d)} \
-    backlight \
-    binfmt \
-    gshadow \
-    hibernate \
-    hostnamed \
-    idn \
-    ima \
-    kmod \
-    localed \
-    logind \
-    machined \
-    myhostname \
-    networkd \
-    nss \
-    nss-mymachines \
-    nss-resolve \
-    quotacheck \
-    randomseed \
-    resolved \
-    set-time-epoch \
-    sysusers \
-    sysvinit \
-    timedated \
-    timesyncd \
-    userdb \
-    utmp \
-    vconsole \
-    wheel-group \
-    zstd \
-"
-
-PACKAGECONFIG:remove:libc-musl = " \
-    gshadow \
-    idn \
-    localed \
-    myhostname \
-    nss \
-    nss-mymachines \
-    nss-resolve \
-    sysusers \
-    userdb \
-    utmp \
-"
-
-# https://github.com/seccomp/libseccomp/issues/347
-PACKAGECONFIG:remove:mipsarch = "seccomp"
-
-CFLAGS:append:libc-musl = " -D__UAPI_DEF_ETHHDR=0 "
-
-# Some of the dependencies are weak-style recommends - if not available at runtime,
-# systemd won't fail but the library-related feature will be skipped with a warning.
-
-# Use the upstream systemd serial-getty@.service and rely on
-# systemd-getty-generator instead of using the OE-core specific
-# systemd-serialgetty.bb - not enabled by default.
-PACKAGECONFIG[serial-getty-generator] = ""
-
-PACKAGECONFIG[acl] = "-Dacl=true,-Dacl=false,acl"
-PACKAGECONFIG[audit] = "-Daudit=true,-Daudit=false,audit"
-PACKAGECONFIG[backlight] = "-Dbacklight=true,-Dbacklight=false"
-PACKAGECONFIG[binfmt] = "-Dbinfmt=true,-Dbinfmt=false"
-PACKAGECONFIG[bzip2] = "-Dbzip2=true,-Dbzip2=false,bzip2"
-PACKAGECONFIG[cgroupv2] = "-Ddefault-hierarchy=unified,-Ddefault-hierarchy=hybrid"
-PACKAGECONFIG[coredump] = "-Dcoredump=true,-Dcoredump=false"
-PACKAGECONFIG[cryptsetup] = "-Dlibcryptsetup=true,-Dlibcryptsetup=false,cryptsetup,,cryptsetup"
-PACKAGECONFIG[tpm2] = "-Dtpm2=true,-Dtpm2=false,tpm2-tss,tpm2-tss libtss2 libtss2-tcti-device"
-PACKAGECONFIG[dbus] = "-Ddbus=true,-Ddbus=false,dbus"
-PACKAGECONFIG[efi] = "-Defi=true,-Defi=false"
-PACKAGECONFIG[gnu-efi] = "-Dgnu-efi=true -Defi-libdir=${STAGING_LIBDIR} -Defi-includedir=${STAGING_INCDIR}/efi,-Dgnu-efi=false,gnu-efi"
-PACKAGECONFIG[elfutils] = "-Delfutils=true,-Delfutils=false,elfutils"
-PACKAGECONFIG[firstboot] = "-Dfirstboot=true,-Dfirstboot=false"
-PACKAGECONFIG[repart] = "-Drepart=true,-Drepart=false"
-PACKAGECONFIG[homed] = "-Dhomed=true,-Dhomed=false"
-# Sign the journal for anti-tampering
-PACKAGECONFIG[gcrypt] = "-Dgcrypt=true,-Dgcrypt=false,libgcrypt"
-PACKAGECONFIG[gnutls] = "-Dgnutls=true,-Dgnutls=false,gnutls"
-PACKAGECONFIG[gshadow] = "-Dgshadow=true,-Dgshadow=false"
-PACKAGECONFIG[hibernate] = "-Dhibernate=true,-Dhibernate=false"
-PACKAGECONFIG[hostnamed] = "-Dhostnamed=true,-Dhostnamed=false"
-PACKAGECONFIG[idn] = "-Didn=true,-Didn=false"
-PACKAGECONFIG[ima] = "-Dima=true,-Dima=false"
-# importd requires journal-upload/xz/zlib/bzip2/gcrypt
-PACKAGECONFIG[importd] = "-Dimportd=true,-Dimportd=false"
-# Update NAT firewall rules
-PACKAGECONFIG[iptc] = "-Dlibiptc=true,-Dlibiptc=false,iptables"
-PACKAGECONFIG[journal-upload] = "-Dlibcurl=true,-Dlibcurl=false,curl"
-PACKAGECONFIG[kmod] = "-Dkmod=true,-Dkmod=false,kmod"
-PACKAGECONFIG[ldconfig] = "-Dldconfig=true,-Dldconfig=false,,ldconfig"
-PACKAGECONFIG[libidn] = "-Dlibidn=true,-Dlibidn=false,libidn,,libidn"
-PACKAGECONFIG[libidn2] = "-Dlibidn2=true,-Dlibidn2=false,libidn2,,libidn2"
-# Link udev shared with systemd helper library.
-# If enabled the udev package depends on the systemd package (which has the needed shared library).
-PACKAGECONFIG[link-udev-shared] = "-Dlink-udev-shared=true,-Dlink-udev-shared=false"
-PACKAGECONFIG[localed] = "-Dlocaled=true,-Dlocaled=false"
-PACKAGECONFIG[logind] = "-Dlogind=true,-Dlogind=false"
-PACKAGECONFIG[lz4] = "-Dlz4=true,-Dlz4=false,lz4"
-PACKAGECONFIG[machined] = "-Dmachined=true,-Dmachined=false"
-PACKAGECONFIG[manpages] = "-Dman=true,-Dman=false,libxslt-native xmlto-native docbook-xml-dtd4-native docbook-xsl-stylesheets-native"
-PACKAGECONFIG[microhttpd] = "-Dmicrohttpd=true,-Dmicrohttpd=false,libmicrohttpd"
-PACKAGECONFIG[myhostname] = "-Dnss-myhostname=true,-Dnss-myhostname=false,,libnss-myhostname"
-PACKAGECONFIG[networkd] = "-Dnetworkd=true,-Dnetworkd=false"
-PACKAGECONFIG[nss] = "-Dnss-systemd=true,-Dnss-systemd=false"
-PACKAGECONFIG[nss-mymachines] = "-Dnss-mymachines=true,-Dnss-mymachines=false"
-PACKAGECONFIG[nss-resolve] = "-Dnss-resolve=true,-Dnss-resolve=false"
-PACKAGECONFIG[oomd] = "-Doomd=true,-Doomd=false"
-PACKAGECONFIG[openssl] = "-Dopenssl=true,-Dopenssl=false,openssl"
-PACKAGECONFIG[pam] = "-Dpam=true,-Dpam=false,libpam,${PAM_PLUGINS}"
-PACKAGECONFIG[pcre2] = "-Dpcre2=true,-Dpcre2=false,libpcre2"
-PACKAGECONFIG[polkit] = "-Dpolkit=true,-Dpolkit=false"
-# If polkit is disabled and networkd+hostnamed are in use, enabling this option and
-# using dbus-broker will allow networkd to be authorized to change the
-# hostname without acquiring additional privileges
-PACKAGECONFIG[polkit_hostnamed_fallback] = ",,,,dbus-broker,polkit"
-PACKAGECONFIG[portabled] = "-Dportabled=true,-Dportabled=false"
-PACKAGECONFIG[qrencode] = "-Dqrencode=true,-Dqrencode=false,qrencode,,qrencode"
-PACKAGECONFIG[quotacheck] = "-Dquotacheck=true,-Dquotacheck=false"
-PACKAGECONFIG[randomseed] = "-Drandomseed=true,-Drandomseed=false"
-PACKAGECONFIG[resolved] = "-Dresolve=true,-Dresolve=false"
-PACKAGECONFIG[rfkill] = "-Drfkill=true,-Drfkill=false"
-PACKAGECONFIG[seccomp] = "-Dseccomp=true,-Dseccomp=false,libseccomp"
-PACKAGECONFIG[selinux] = "-Dselinux=true,-Dselinux=false,libselinux,initscripts-sushell"
-PACKAGECONFIG[smack] = "-Dsmack=true,-Dsmack=false"
-PACKAGECONFIG[sysext] = "-Dsysext=true, -Dsysext=false"
-PACKAGECONFIG[sysusers] = "-Dsysusers=true,-Dsysusers=false"
-PACKAGECONFIG[sysvinit] = "-Dsysvinit-path=${sysconfdir}/init.d -Dsysvrcnd-path=${sysconfdir},-Dsysvinit-path= -Dsysvrcnd-path=,,systemd-compat-units update-rc.d"
-# When enabled, use the reproducible build timestamp as the time epoch if it is set,
-# or the build time if not. When disabled, the time epoch is unset.
-def build_epoch(d):
-    epoch = d.getVar('SOURCE_DATE_EPOCH') or "-1"
-    return '-Dtime-epoch=%d' % int(epoch)
-PACKAGECONFIG[set-time-epoch] = "${@build_epoch(d)},-Dtime-epoch=0"
-PACKAGECONFIG[timedated] = "-Dtimedated=true,-Dtimedated=false"
-PACKAGECONFIG[timesyncd] = "-Dtimesyncd=true,-Dtimesyncd=false"
-PACKAGECONFIG[usrmerge] = "-Dsplit-usr=false,-Dsplit-usr=true"
-PACKAGECONFIG[sbinmerge] = "-Dsplit-bin=false,-Dsplit-bin=true"
-PACKAGECONFIG[userdb] = "-Duserdb=true,-Duserdb=false"
-PACKAGECONFIG[utmp] = "-Dutmp=true,-Dutmp=false"
-PACKAGECONFIG[valgrind] = "-DVALGRIND=1,,valgrind"
-PACKAGECONFIG[vconsole] = "-Dvconsole=true,-Dvconsole=false,,${PN}-vconsole-setup"
-PACKAGECONFIG[wheel-group] = "-Dwheel-group=true, -Dwheel-group=false"
-PACKAGECONFIG[xdg-autostart] = "-Dxdg-autostart=true,-Dxdg-autostart=false"
-# Verify keymaps on locale change
-PACKAGECONFIG[xkbcommon] = "-Dxkbcommon=true,-Dxkbcommon=false,libxkbcommon"
-PACKAGECONFIG[xz] = "-Dxz=true,-Dxz=false,xz"
-PACKAGECONFIG[zlib] = "-Dzlib=true,-Dzlib=false,zlib"
-PACKAGECONFIG[zstd] = "-Dzstd=true,-Dzstd=false,zstd"
-
-# Helper variables to clarify locations.  This mirrors the logic in systemd's
-# build system.
-rootprefix ?= "${root_prefix}"
-rootlibdir ?= "${base_libdir}"
-rootlibexecdir = "${rootprefix}/lib"
-
-EXTRA_OEMESON += "-Dnobody-user=nobody \
-                  -Dnobody-group=nobody \
-                  -Drootlibdir=${rootlibdir} \
-                  -Drootprefix=${rootprefix} \
-                  -Ddefault-locale=C \
-                  -Dmode=release \
-                  -Dsystem-alloc-uid-min=101 \
-                  -Dsystem-uid-max=999 \
-                  -Dsystem-alloc-gid-min=101 \
-                  -Dsystem-gid-max=999 \
-                  "
-
-# Hardcode target binary paths to avoid using paths from sysroot
-EXTRA_OEMESON += "-Dkexec-path=${sbindir}/kexec \
-                  -Dkmod-path=${base_bindir}/kmod \
-                  -Dmount-path=${base_bindir}/mount \
-                  -Dquotacheck-path=${sbindir}/quotacheck \
-                  -Dquotaon-path=${sbindir}/quotaon \
-                  -Dsulogin-path=${base_sbindir}/sulogin \
-                  -Dnologin-path=${base_sbindir}/nologin \
-                  -Dumount-path=${base_bindir}/umount"
-
-# 60 seconds is the watchdog's default value.
-WATCHDOG_TIMEOUT ??= "60"
-
-do_install() {
-	meson_do_install
-	install -d ${D}/${base_sbindir}
-	if ${@bb.utils.contains('PACKAGECONFIG', 'serial-getty-generator', 'false', 'true', d)}; then
-		# Provided by a separate recipe
-		rm ${D}${systemd_system_unitdir}/serial-getty* -f
-	fi
-
-	# Provide support for initramfs
-	[ ! -e ${D}/init ] && ln -s ${rootlibexecdir}/systemd/systemd ${D}/init
-	[ ! -e ${D}/${base_sbindir}/udevd ] && ln -s ${rootlibexecdir}/systemd/systemd-udevd ${D}/${base_sbindir}/udevd
-
-	install -d ${D}${sysconfdir}/udev/rules.d/
-	install -d ${D}${sysconfdir}/tmpfiles.d
-	for rule in $(find ${WORKDIR} -maxdepth 1 -type f -name "*.rules"); do
-		install -m 0644 $rule ${D}${sysconfdir}/udev/rules.d/
-	done
-
-	install -m 0644 ${WORKDIR}/00-create-volatile.conf ${D}${sysconfdir}/tmpfiles.d/
-
-	if ${@bb.utils.contains('DISTRO_FEATURES','sysvinit','true','false',d)}; then
-		install -d ${D}${sysconfdir}/init.d
-		install -m 0755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/systemd-udevd
-		sed -i s%@UDEVD@%${rootlibexecdir}/systemd/systemd-udevd% ${D}${sysconfdir}/init.d/systemd-udevd
-		install -Dm 0755 ${S}/src/systemctl/systemd-sysv-install.SKELETON ${D}${systemd_unitdir}/systemd-sysv-install
-	fi
-
-	if "${@'true' if oe.types.boolean(d.getVar('VOLATILE_LOG_DIR')) else 'false'}"; then
-		# /var/log is typically a symbolic link to inside /var/volatile,
-		# which is expected to be empty.
-		rm -rf ${D}${localstatedir}/log
-	else
-		chown root:systemd-journal ${D}${localstatedir}/log/journal
-
-		# journal-remote creates this at start
-		rm -rf ${D}${localstatedir}/log/journal/remote
-	fi
-
-	install -d ${D}${systemd_system_unitdir}/graphical.target.wants
-	install -d ${D}${systemd_system_unitdir}/multi-user.target.wants
-	install -d ${D}${systemd_system_unitdir}/poweroff.target.wants
-	install -d ${D}${systemd_system_unitdir}/reboot.target.wants
-	install -d ${D}${systemd_system_unitdir}/rescue.target.wants
-
-	# Create symlinks for systemd-update-utmp-runlevel.service
-	if ${@bb.utils.contains('PACKAGECONFIG', 'utmp', 'true', 'false', d)}; then
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/graphical.target.wants/systemd-update-utmp-runlevel.service
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/multi-user.target.wants/systemd-update-utmp-runlevel.service
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/poweroff.target.wants/systemd-update-utmp-runlevel.service
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/reboot.target.wants/systemd-update-utmp-runlevel.service
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/rescue.target.wants/systemd-update-utmp-runlevel.service
-	fi
-
-	# This file needs to exist if networkd is disabled but timesyncd is still in use,
-	# since timesyncd checks it for existence and fails otherwise.
-	if [ -s ${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf ] &&
-	   ! ${@bb.utils.contains('PACKAGECONFIG', 'networkd', 'true', 'false', d)}; then
-		echo 'd /run/systemd/netif/links 0755 root root -' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
-	fi
-	if ! ${@bb.utils.contains('PACKAGECONFIG', 'resolved', 'true', 'false', d)}; then
-		echo 'L! ${sysconfdir}/resolv.conf - - - - ../run/systemd/resolve/resolv.conf' >>${D}${exec_prefix}/lib/tmpfiles.d/etc.conf
-		echo 'd /run/systemd/resolve 0755 root root -' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
-		echo 'f /run/systemd/resolve/resolv.conf 0644 root root' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
-		ln -s ../run/systemd/resolve/resolv.conf ${D}${sysconfdir}/resolv-conf.systemd
-	else
-		sed -i -e "s%^L! /etc/resolv.conf.*$%L! /etc/resolv.conf - - - - ../run/systemd/resolve/resolv.conf%g" ${D}${exec_prefix}/lib/tmpfiles.d/etc.conf
-		ln -s ../run/systemd/resolve/resolv.conf ${D}${sysconfdir}/resolv-conf.systemd
-	fi
-	if ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'false', 'true', d)}; then
-		rm ${D}${exec_prefix}/lib/tmpfiles.d/x11.conf
-		rm -r ${D}${sysconfdir}/X11
-	fi
-
-	# If polkit is set up, fix up permissions and ownership
-	if ${@bb.utils.contains('PACKAGECONFIG', 'polkit', 'true', 'false', d)}; then
-		if [ -d ${D}${datadir}/polkit-1/rules.d ]; then
-			chmod 700 ${D}${datadir}/polkit-1/rules.d
-			chown polkitd:root ${D}${datadir}/polkit-1/rules.d
-		fi
-	fi
-
-	# If polkit is not available and a fallback was requested, install a drop-in that allows networkd to
-	# request hostname changes via DBUS without elevating its privileges
-	if ${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', 'true', 'false', d)}; then
-		install -d ${D}${systemd_system_unitdir}/systemd-hostnamed.service.d/
-		install -m 0644 ${WORKDIR}/00-hostnamed-network-user.conf ${D}${systemd_system_unitdir}/systemd-hostnamed.service.d/
-		install -d ${D}${datadir}/dbus-1/system.d/
-		install -m 0644 ${WORKDIR}/org.freedesktop.hostname1_no_polkit.conf ${D}${datadir}/dbus-1/system.d/
-	fi
-
-	# create link for existing udev rules
-	ln -s ${base_bindir}/udevadm ${D}${base_sbindir}/udevadm
-
-	# install default policy for presets
-	# https://www.freedesktop.org/wiki/Software/systemd/Preset/#howto
-	install -Dm 0644 ${WORKDIR}/99-default.preset ${D}${systemd_unitdir}/system-preset/99-default.preset
-
-	# add a profile fragment to disable systemd pager with busybox less
-	install -Dm 0644 ${WORKDIR}/systemd-pager.sh ${D}${sysconfdir}/profile.d/systemd-pager.sh
-
-    if [ -n "${WATCHDOG_TIMEOUT}" ]; then
-        sed -i -e 's/#RebootWatchdogSec=10min/RebootWatchdogSec=${WATCHDOG_TIMEOUT}/' \
-            ${D}/${sysconfdir}/systemd/system.conf
-    fi
-}
-
-python populate_packages:prepend (){
-    systemdlibdir = d.getVar("rootlibdir")
-    do_split_packages(d, systemdlibdir, r'^lib(.*)\.so\.*', 'lib%s', 'Systemd %s library', extra_depends='', allow_links=True)
-}
-PACKAGES_DYNAMIC += "^lib(udev|systemd|nss).*"
-
-PACKAGE_BEFORE_PN = "\
-    ${PN}-gui \
-    ${PN}-vconsole-setup \
-    ${PN}-initramfs \
-    ${PN}-analyze \
-    ${PN}-kernel-install \
-    ${PN}-rpm-macros \
-    ${PN}-binfmt \
-    ${PN}-zsh-completion \
-    ${PN}-container \
-    ${PN}-journal-gatewayd \
-    ${PN}-journal-upload \
-    ${PN}-journal-remote \
-    ${PN}-extra-utils \
-    ${PN}-udev-rules \
-    libsystemd-shared \
-    udev \
-    udev-hwdb \
-"
-
-SUMMARY:${PN}-container = "Tools for containers and VMs"
-DESCRIPTION:${PN}-container = "Systemd tools to spawn and manage containers and virtual machines."
-
-SUMMARY:${PN}-journal-gatewayd = "HTTP server for journal events"
-DESCRIPTION:${PN}-journal-gatewayd = "systemd-journal-gatewayd serves journal events over the network. Clients must connect using HTTP. The server listens on port 19531 by default."
-
-SUMMARY:${PN}-journal-upload = "Send journal messages over the network"
-DESCRIPTION:${PN}-journal-upload = "systemd-journal-upload uploads journal entries to a specified URL."
-
-SUMMARY:${PN}-journal-remote = "Receive journal messages over the network"
-DESCRIPTION:${PN}-journal-remote = "systemd-journal-remote is a command to receive serialized journal events and store them to journal files."
-
-SUMMARY:libsystemd-shared = "Systemd shared library"
-
-SYSTEMD_PACKAGES = "${@bb.utils.contains('PACKAGECONFIG', 'binfmt', '${PN}-binfmt', '', d)} \
-                    ${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '${PN}-journal-gatewayd', '', d)} \
-                    ${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '${PN}-journal-remote', '', d)} \
-                    ${@bb.utils.contains('PACKAGECONFIG', 'journal-upload', '${PN}-journal-upload', '', d)} \
-"
-SYSTEMD_SERVICE:${PN}-binfmt = "systemd-binfmt.service"
-
-USERADD_PACKAGES = "${PN} ${PN}-extra-utils \
-                    ${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '${PN}-journal-gatewayd', '', d)} \
-                    ${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '${PN}-journal-remote', '', d)} \
-                    ${@bb.utils.contains('PACKAGECONFIG', 'journal-upload', '${PN}-journal-upload', '', d)} \
-"
-GROUPADD_PARAM:${PN} = "-r systemd-journal;"
-GROUPADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', '-r systemd-hostname;', '', d)}"
-USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'coredump', '--system -d / -M --shell /sbin/nologin systemd-coredump;', '', d)}"
-USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'networkd', '--system -d / -M --shell /sbin/nologin systemd-network;', '', d)}"
-USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'polkit', '--system --no-create-home --user-group --home-dir ${sysconfdir}/polkit-1 polkitd;', '', d)}"
-USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'resolved', '--system -d / -M --shell /sbin/nologin systemd-resolve;', '', d)}"
-USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'timesyncd', '--system -d / -M --shell /sbin/nologin systemd-timesync;', '', d)}"
-USERADD_PARAM:${PN}-extra-utils = "--system -d / -M --shell /sbin/nologin systemd-bus-proxy"
-USERADD_PARAM:${PN}-journal-gatewayd = "--system -d / -M --shell /sbin/nologin systemd-journal-gateway"
-USERADD_PARAM:${PN}-journal-remote = "--system -d / -M --shell /sbin/nologin systemd-journal-remote"
-USERADD_PARAM:${PN}-journal-upload = "--system -d / -M --shell /sbin/nologin systemd-journal-upload"
-
-FILES:${PN}-analyze = "${bindir}/systemd-analyze"
-
-FILES:${PN}-initramfs = "/init"
-RDEPENDS:${PN}-initramfs = "${PN}"
-
-FILES:${PN}-gui = "${bindir}/systemadm"
-
-FILES:${PN}-vconsole-setup = "${rootlibexecdir}/systemd/systemd-vconsole-setup \
-                              ${systemd_system_unitdir}/systemd-vconsole-setup.service \
-                              ${systemd_system_unitdir}/sysinit.target.wants/systemd-vconsole-setup.service"
-
-RDEPENDS:${PN}-kernel-install += "bash"
-FILES:${PN}-kernel-install = "${bindir}/kernel-install \
-                              ${sysconfdir}/kernel/ \
-                              ${exec_prefix}/lib/kernel \
-                             "
-FILES:${PN}-rpm-macros = "${exec_prefix}/lib/rpm \
-                         "
-
-FILES:${PN}-zsh-completion = "${datadir}/zsh/site-functions"
-
-FILES:${PN}-binfmt = "${sysconfdir}/binfmt.d/ \
-                      ${exec_prefix}/lib/binfmt.d \
-                      ${rootlibexecdir}/systemd/systemd-binfmt \
-                      ${systemd_system_unitdir}/proc-sys-fs-binfmt_misc.* \
-                      ${systemd_system_unitdir}/systemd-binfmt.service"
-RRECOMMENDS:${PN}-binfmt = "kernel-module-binfmt-misc"
-
-RRECOMMENDS:${PN}-vconsole-setup = "kbd kbd-consolefonts kbd-keymaps"
-
-
-FILES:${PN}-journal-gatewayd = "${rootlibexecdir}/systemd/systemd-journal-gatewayd \
-                                ${systemd_system_unitdir}/systemd-journal-gatewayd.service \
-                                ${systemd_system_unitdir}/systemd-journal-gatewayd.socket \
-                                ${systemd_system_unitdir}/sockets.target.wants/systemd-journal-gatewayd.socket \
-                                ${datadir}/systemd/gatewayd/browse.html \
-                               "
-SYSTEMD_SERVICE:${PN}-journal-gatewayd = "systemd-journal-gatewayd.socket"
-
-FILES:${PN}-journal-upload = "${rootlibexecdir}/systemd/systemd-journal-upload \
-                              ${systemd_system_unitdir}/systemd-journal-upload.service \
-                              ${sysconfdir}/systemd/journal-upload.conf \
-                             "
-SYSTEMD_SERVICE:${PN}-journal-upload = "systemd-journal-upload.service"
-
-FILES:${PN}-journal-remote = "${rootlibexecdir}/systemd/systemd-journal-remote \
-                              ${sysconfdir}/systemd/journal-remote.conf \
-                              ${systemd_system_unitdir}/systemd-journal-remote.service \
-                              ${systemd_system_unitdir}/systemd-journal-remote.socket \
-                             "
-SYSTEMD_SERVICE:${PN}-journal-remote = "systemd-journal-remote.socket"
-
-
-FILES:${PN}-container = "${sysconfdir}/dbus-1/system.d/org.freedesktop.import1.conf \
-                         ${sysconfdir}/dbus-1/system.d/org.freedesktop.machine1.conf \
-                         ${sysconfdir}/systemd/system/multi-user.target.wants/machines.target \
-                         ${base_bindir}/machinectl \
-                         ${bindir}/systemd-nspawn \
-                         ${nonarch_libdir}/systemd/import-pubring.gpg \
-                         ${systemd_system_unitdir}/busnames.target.wants/org.freedesktop.import1.busname \
-                         ${systemd_system_unitdir}/busnames.target.wants/org.freedesktop.machine1.busname \
-                         ${systemd_system_unitdir}/local-fs.target.wants/var-lib-machines.mount \
-                         ${systemd_system_unitdir}/machines.target.wants/var-lib-machines.mount \
-                         ${systemd_system_unitdir}/remote-fs.target.wants/var-lib-machines.mount \
-                         ${systemd_system_unitdir}/machine.slice \
-                         ${systemd_system_unitdir}/machines.target \
-                         ${systemd_system_unitdir}/org.freedesktop.import1.busname \
-                         ${systemd_system_unitdir}/org.freedesktop.machine1.busname \
-                         ${systemd_system_unitdir}/systemd-importd.service \
-                         ${systemd_system_unitdir}/systemd-machined.service \
-                         ${systemd_system_unitdir}/dbus-org.freedesktop.machine1.service \
-                         ${systemd_system_unitdir}/var-lib-machines.mount \
-                         ${rootlibexecdir}/systemd/systemd-import \
-                         ${rootlibexecdir}/systemd/systemd-importd \
-                         ${rootlibexecdir}/systemd/systemd-machined \
-                         ${rootlibexecdir}/systemd/systemd-pull \
-                         ${exec_prefix}/lib/tmpfiles.d/systemd-nspawn.conf \
-                         ${exec_prefix}/lib/tmpfiles.d/README \
-                         ${systemd_system_unitdir}/systemd-nspawn@.service \
-                         ${libdir}/libnss_mymachines.so.2 \
-                         ${datadir}/dbus-1/system-services/org.freedesktop.import1.service \
-                         ${datadir}/dbus-1/system-services/org.freedesktop.machine1.service \
-                         ${datadir}/dbus-1/system.d/org.freedesktop.import1.conf \
-                         ${datadir}/dbus-1/system.d/org.freedesktop.machine1.conf \
-                         ${datadir}/polkit-1/actions/org.freedesktop.import1.policy \
-                         ${datadir}/polkit-1/actions/org.freedesktop.machine1.policy \
-                        "
-
-# "machinectl import-tar" uses "tar --numeric-owner", not supported by busybox.
-RRECOMMENDS:${PN}-container += "\
-                         ${PN}-journal-gatewayd \
-                         ${PN}-journal-remote \
-                         ${PN}-journal-upload \
-                         kernel-module-dm-mod \
-                         kernel-module-loop \
-                         kernel-module-tun \
-                         tar \
-                        "
-
-FILES:${PN}-extra-utils = "\
-                        ${base_bindir}/systemd-escape \
-                        ${base_bindir}/systemd-inhibit \
-                        ${bindir}/systemd-detect-virt \
-                        ${bindir}/systemd-dissect \
-                        ${bindir}/systemd-path \
-                        ${bindir}/systemd-run \
-                        ${bindir}/systemd-cat \
-                        ${bindir}/systemd-delta \
-                        ${bindir}/systemd-cgls \
-                        ${bindir}/systemd-cgtop \
-                        ${bindir}/systemd-stdio-bridge \
-                        ${base_bindir}/systemd-ask-password \
-                        ${base_bindir}/systemd-tty-ask-password-agent \
-                        ${systemd_system_unitdir}/systemd-ask-password-console.path \
-                        ${systemd_system_unitdir}/systemd-ask-password-console.service \
-                        ${systemd_system_unitdir}/systemd-ask-password-wall.path \
-                        ${systemd_system_unitdir}/systemd-ask-password-wall.service \
-                        ${systemd_system_unitdir}/sysinit.target.wants/systemd-ask-password-console.path \
-                        ${systemd_system_unitdir}/sysinit.target.wants/systemd-ask-password-wall.path \
-                        ${systemd_system_unitdir}/multi-user.target.wants/systemd-ask-password-wall.path \
-                        ${rootlibexecdir}/systemd/systemd-resolve-host \
-                        ${rootlibexecdir}/systemd/systemd-ac-power \
-                        ${rootlibexecdir}/systemd/systemd-activate \
-                        ${rootlibexecdir}/systemd/systemd-bus-proxyd \
-                        ${systemd_system_unitdir}/systemd-bus-proxyd.service \
-                        ${systemd_system_unitdir}/systemd-bus-proxyd.socket \
-                        ${rootlibexecdir}/systemd/systemd-socket-proxyd \
-                        ${rootlibexecdir}/systemd/systemd-reply-password \
-                        ${rootlibexecdir}/systemd/systemd-sleep \
-                        ${rootlibexecdir}/systemd/system-sleep \
-                        ${systemd_system_unitdir}/systemd-hibernate.service \
-                        ${systemd_system_unitdir}/systemd-hybrid-sleep.service \
-                        ${systemd_system_unitdir}/systemd-suspend.service \
-                        ${systemd_system_unitdir}/sleep.target \
-                        ${rootlibexecdir}/systemd/systemd-initctl \
-                        ${systemd_system_unitdir}/systemd-initctl.service \
-                        ${systemd_system_unitdir}/systemd-initctl.socket \
-                        ${systemd_system_unitdir}/sockets.target.wants/systemd-initctl.socket \
-                        ${rootlibexecdir}/systemd/system-generators/systemd-gpt-auto-generator \
-                        ${rootlibexecdir}/systemd/systemd-cgroups-agent \
-"
-
-FILES:${PN}-udev-rules = "\
-                        ${rootlibexecdir}/udev/rules.d/70-uaccess.rules \
-                        ${rootlibexecdir}/udev/rules.d/71-seat.rules \
-                        ${rootlibexecdir}/udev/rules.d/73-seat-late.rules \
-                        ${rootlibexecdir}/udev/rules.d/99-systemd.rules \
-"
-
-CONFFILES:${PN} = "${sysconfdir}/systemd/coredump.conf \
-	${sysconfdir}/systemd/journald.conf \
-	${sysconfdir}/systemd/logind.conf \
-	${sysconfdir}/systemd/networkd.conf \
-	${sysconfdir}/systemd/pstore.conf \
-	${sysconfdir}/systemd/resolved.conf \
-	${sysconfdir}/systemd/sleep.conf \
-	${sysconfdir}/systemd/system.conf \
-	${sysconfdir}/systemd/timesyncd.conf \
-	${sysconfdir}/systemd/user.conf \
-"
-
-FILES:${PN} = " ${base_bindir}/* \
-                ${base_sbindir}/shutdown \
-                ${base_sbindir}/halt \
-                ${base_sbindir}/poweroff \
-                ${base_sbindir}/runlevel \
-                ${base_sbindir}/telinit \
-                ${base_sbindir}/resolvconf \
-                ${base_sbindir}/reboot \
-                ${base_sbindir}/init \
-                ${datadir}/dbus-1/services \
-                ${datadir}/dbus-1/system-services \
-                ${datadir}/polkit-1 \
-                ${datadir}/${BPN} \
-                ${datadir}/factory \
-                ${sysconfdir}/dbus-1/ \
-                ${sysconfdir}/modules-load.d/ \
-                ${sysconfdir}/pam.d/ \
-                ${sysconfdir}/profile.d/ \
-                ${sysconfdir}/sysctl.d/ \
-                ${sysconfdir}/systemd/ \
-                ${sysconfdir}/tmpfiles.d/ \
-                ${sysconfdir}/xdg/ \
-                ${sysconfdir}/init.d/README \
-                ${sysconfdir}/resolv-conf.systemd \
-                ${sysconfdir}/X11/xinit/xinitrc.d/* \
-                ${rootlibexecdir}/systemd/* \
-                ${libdir}/pam.d \
-                ${nonarch_libdir}/pam.d \
-                ${systemd_unitdir}/* \
-                ${base_libdir}/security/*.so \
-                /cgroup \
-                ${bindir}/systemd* \
-                ${bindir}/busctl \
-                ${bindir}/coredumpctl \
-                ${bindir}/localectl \
-                ${bindir}/hostnamectl \
-                ${bindir}/resolvectl \
-                ${bindir}/timedatectl \
-                ${bindir}/bootctl \
-                ${bindir}/oomctl \
-                ${exec_prefix}/lib/tmpfiles.d/*.conf \
-                ${exec_prefix}/lib/systemd \
-                ${exec_prefix}/lib/modules-load.d \
-                ${exec_prefix}/lib/sysctl.d \
-                ${exec_prefix}/lib/sysusers.d \
-                ${exec_prefix}/lib/environment.d \
-                ${localstatedir} \
-                ${rootlibexecdir}/modprobe.d/systemd.conf \
-                ${rootlibexecdir}/modprobe.d/README \
-                ${datadir}/dbus-1/system.d/org.freedesktop.timedate1.conf \
-                ${datadir}/dbus-1/system.d/org.freedesktop.locale1.conf \
-                ${datadir}/dbus-1/system.d/org.freedesktop.network1.conf \
-                ${datadir}/dbus-1/system.d/org.freedesktop.resolve1.conf \
-                ${datadir}/dbus-1/system.d/org.freedesktop.systemd1.conf \
-                ${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', '${datadir}/dbus-1/system.d/org.freedesktop.hostname1_no_polkit.conf', '', d)} \
-                ${datadir}/dbus-1/system.d/org.freedesktop.hostname1.conf \
-                ${datadir}/dbus-1/system.d/org.freedesktop.login1.conf \
-                ${datadir}/dbus-1/system.d/org.freedesktop.timesync1.conf \
-                ${datadir}/dbus-1/system.d/org.freedesktop.portable1.conf \
-                ${datadir}/dbus-1/system.d/org.freedesktop.oom1.conf \
-                ${datadir}/dbus-1/system.d/org.freedesktop.home1.conf \
-               "
-
-FILES:${PN}-dev += "${base_libdir}/security/*.la ${datadir}/dbus-1/interfaces/ ${sysconfdir}/rpm/macros.systemd"
-
-RDEPENDS:${PN} += "kmod dbus util-linux-mount util-linux-umount udev (= ${EXTENDPKGV}) systemd-udev-rules util-linux-agetty util-linux-fsck"
-RDEPENDS:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'serial-getty-generator', '', 'systemd-serialgetty', d)}"
-RDEPENDS:${PN} += "volatile-binds"
-
-RRECOMMENDS:${PN} += "systemd-extra-utils \
-                      udev-hwdb \
-                      e2fsprogs-e2fsck \
-                      kernel-module-autofs4 kernel-module-unix kernel-module-ipv6 kernel-module-sch-fq-codel \
-                      os-release \
-                      systemd-conf \
-"
-
-INSANE_SKIP:${PN} += "dev-so libdir"
-INSANE_SKIP:${PN}-dbg += "libdir"
-INSANE_SKIP:${PN}-doc += " libdir"
-INSANE_SKIP:libsystemd-shared += "libdir"
-
-FILES:libsystemd-shared = "${rootlibexecdir}/systemd/libsystemd-shared*.so"
-
-RPROVIDES:udev = "hotplug"
-
-RDEPENDS:udev-hwdb += "udev"
-
-FILES:udev += "${base_sbindir}/udevd \
-               ${rootlibexecdir}/systemd/network/99-default.link \
-               ${rootlibexecdir}/systemd/systemd-udevd \
-               ${rootlibexecdir}/udev/accelerometer \
-               ${rootlibexecdir}/udev/ata_id \
-               ${rootlibexecdir}/udev/cdrom_id \
-               ${rootlibexecdir}/udev/collect \
-               ${rootlibexecdir}/udev/dmi_memory_id \
-               ${rootlibexecdir}/udev/fido_id \
-               ${rootlibexecdir}/udev/findkeyboards \
-               ${rootlibexecdir}/udev/keyboard-force-release.sh \
-               ${rootlibexecdir}/udev/keymap \
-               ${rootlibexecdir}/udev/mtd_probe \
-               ${rootlibexecdir}/udev/scsi_id \
-               ${rootlibexecdir}/udev/v4l_id \
-               ${rootlibexecdir}/udev/keymaps \
-               ${rootlibexecdir}/udev/rules.d/50-udev-default.rules \
-               ${rootlibexecdir}/udev/rules.d/60-autosuspend.rules \
-               ${rootlibexecdir}/udev/rules.d/60-autosuspend-chromiumos.rules \
-               ${rootlibexecdir}/udev/rules.d/60-block.rules \
-               ${rootlibexecdir}/udev/rules.d/60-cdrom_id.rules \
-               ${rootlibexecdir}/udev/rules.d/60-drm.rules \
-               ${rootlibexecdir}/udev/rules.d/60-evdev.rules \
-               ${rootlibexecdir}/udev/rules.d/60-fido-id.rules \
-               ${rootlibexecdir}/udev/rules.d/60-input-id.rules \
-               ${rootlibexecdir}/udev/rules.d/60-persistent-alsa.rules \
-               ${rootlibexecdir}/udev/rules.d/60-persistent-input.rules \
-               ${rootlibexecdir}/udev/rules.d/60-persistent-storage.rules \
-               ${rootlibexecdir}/udev/rules.d/60-persistent-storage-tape.rules \
-               ${rootlibexecdir}/udev/rules.d/60-persistent-v4l.rules \
-               ${rootlibexecdir}/udev/rules.d/60-sensor.rules \
-               ${rootlibexecdir}/udev/rules.d/60-serial.rules \
-               ${rootlibexecdir}/udev/rules.d/61-autosuspend-manual.rules \
-               ${rootlibexecdir}/udev/rules.d/64-btrfs.rules \
-               ${rootlibexecdir}/udev/rules.d/70-camera.rules \
-               ${rootlibexecdir}/udev/rules.d/70-joystick.rules \
-               ${rootlibexecdir}/udev/rules.d/70-memory.rules \
-               ${rootlibexecdir}/udev/rules.d/70-mouse.rules \
-               ${rootlibexecdir}/udev/rules.d/70-power-switch.rules \
-               ${rootlibexecdir}/udev/rules.d/70-touchpad.rules \
-               ${rootlibexecdir}/udev/rules.d/75-net-description.rules \
-               ${rootlibexecdir}/udev/rules.d/75-probe_mtd.rules \
-               ${rootlibexecdir}/udev/rules.d/78-sound-card.rules \
-               ${rootlibexecdir}/udev/rules.d/80-drivers.rules \
-               ${rootlibexecdir}/udev/rules.d/80-net-setup-link.rules \
-               ${rootlibexecdir}/udev/rules.d/81-net-dhcp.rules \
-               ${rootlibexecdir}/udev/rules.d/90-vconsole.rules \
-               ${rootlibexecdir}/udev/rules.d/README \
-               ${sysconfdir}/udev \
-               ${sysconfdir}/init.d/systemd-udevd \
-               ${systemd_system_unitdir}/*udev* \
-               ${systemd_system_unitdir}/*.wants/*udev* \
-               ${base_bindir}/systemd-hwdb \
-               ${base_bindir}/udevadm \
-               ${base_sbindir}/udevadm \
-               ${datadir}/bash-completion/completions/udevadm \
-               ${systemd_system_unitdir}/systemd-hwdb-update.service \
-              "
-
-FILES:udev-hwdb = "${rootlibexecdir}/udev/hwdb.d \
-                   "
-
-RCONFLICTS:${PN} = "tiny-init ${@bb.utils.contains('PACKAGECONFIG', 'resolved', 'resolvconf', '', d)}"
-
-INITSCRIPT_PACKAGES = "udev"
-INITSCRIPT_NAME:udev = "systemd-udevd"
-INITSCRIPT_PARAMS:udev = "start 03 S ."
-
-python __anonymous() {
-    if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
-        d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
-
-    if bb.utils.contains('PACKAGECONFIG', 'repart', True, False, d) and not bb.utils.contains('PACKAGECONFIG', 'openssl', True, False, d):
-        bb.error("PACKAGECONFIG[repart] requires PACKAGECONFIG[openssl]")
-
-    if bb.utils.contains('PACKAGECONFIG', 'homed', True, False, d) and not bb.utils.contains('PACKAGECONFIG', 'userdb openssl cryptsetup', True, False, d):
-        bb.error("PACKAGECONFIG[homed] requires PACKAGECONFIG[userdb], PACKAGECONFIG[openssl] and PACKAGECONFIG[cryptsetup]")
-}
-
-python do_warn_musl() {
-    if d.getVar('TCLIBC') == "musl":
-        bb.warn("Using systemd with musl is not recommended since it is not supported upstream and some patches are known to be problematic.")
-}
-addtask warn_musl before do_configure
-
-ALTERNATIVE:${PN} = "halt reboot shutdown poweroff runlevel ${@bb.utils.contains('PACKAGECONFIG', 'resolved', 'resolv-conf', '', d)}"
-
-ALTERNATIVE_TARGET[resolv-conf] = "${sysconfdir}/resolv-conf.systemd"
-ALTERNATIVE_LINK_NAME[resolv-conf] = "${sysconfdir}/resolv.conf"
-ALTERNATIVE_PRIORITY[resolv-conf] ?= "50"
-
-ALTERNATIVE_TARGET[halt] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[halt] = "${base_sbindir}/halt"
-ALTERNATIVE_PRIORITY[halt] ?= "300"
-
-ALTERNATIVE_TARGET[reboot] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[reboot] = "${base_sbindir}/reboot"
-ALTERNATIVE_PRIORITY[reboot] ?= "300"
-
-ALTERNATIVE_TARGET[shutdown] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[shutdown] = "${base_sbindir}/shutdown"
-ALTERNATIVE_PRIORITY[shutdown] ?= "300"
-
-ALTERNATIVE_TARGET[poweroff] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[poweroff] = "${base_sbindir}/poweroff"
-ALTERNATIVE_PRIORITY[poweroff] ?= "300"
-
-ALTERNATIVE_TARGET[runlevel] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[runlevel] = "${base_sbindir}/runlevel"
-ALTERNATIVE_PRIORITY[runlevel] ?= "300"
-
-pkg_postinst:${PN}:libc-glibc () {
-	sed -e '/^hosts:/s/\s*\<myhostname\>//' \
-		-e 's/\(^hosts:.*\)\(\<files\>\)\(.*\)\(\<dns\>\)\(.*\)/\1\2 myhostname \3\4\5/' \
-		-i $D${sysconfdir}/nsswitch.conf
-}
-
-pkg_prerm:${PN}:libc-glibc () {
-	sed -e '/^hosts:/s/\s*\<myhostname\>//' \
-		-e '/^hosts:/s/\s*myhostname//' \
-		-i $D${sysconfdir}/nsswitch.conf
-}
-
-PACKAGE_WRITE_DEPS += "qemu-native"
-pkg_postinst:udev-hwdb () {
-	if test -n "$D"; then
-		$INTERCEPT_DIR/postinst_intercept update_udev_hwdb ${PKG} mlprefix=${MLPREFIX} binprefix=${MLPREFIX} rootlibexecdir="${rootlibexecdir}" PREFERRED_PROVIDER_udev="${PREFERRED_PROVIDER_udev}" base_bindir="${base_bindir}"
-	else
-		udevadm hwdb --update
-	fi
-}
-
-pkg_prerm:udev-hwdb () {
-	rm -f $D${sysconfdir}/udev/hwdb.bin
-}
diff --git a/poky/meta/recipes-core/systemd/systemd_251.4.bb b/poky/meta/recipes-core/systemd/systemd_251.4.bb
new file mode 100644
index 0000000..8497e24
--- /dev/null
+++ b/poky/meta/recipes-core/systemd/systemd_251.4.bb
@@ -0,0 +1,796 @@
+require systemd.inc
+
+PROVIDES = "udev"
+
+PE = "1"
+
+DEPENDS = "intltool-native gperf-native libcap util-linux python3-jinja2-native"
+
+SECTION = "base/shell"
+
+inherit useradd pkgconfig meson perlnative update-rc.d update-alternatives qemu systemd gettext bash-completion manpages features_check
+
+# As this recipe builds udev, respect systemd being in DISTRO_FEATURES so
+# that we don't build both udev and systemd in world builds.
+REQUIRED_DISTRO_FEATURES = "systemd"
+
+SRC_URI += " \
+           file://touchscreen.rules \
+           file://00-create-volatile.conf \
+           ${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', 'file://org.freedesktop.hostname1_no_polkit.conf', '', d)} \
+           ${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', 'file://00-hostnamed-network-user.conf', '', d)} \
+           file://init \
+           file://99-default.preset \
+           file://systemd-pager.sh \
+           file://0001-binfmt-Don-t-install-dependency-links-at-install-tim.patch \
+           file://0003-implment-systemd-sysv-install-for-OE.patch \
+           file://0001-Move-sysusers.d-sysctl.d-binfmt.d-modules-load.d-to-.patch \
+           "
+
+# patches needed by musl
+SRC_URI:append:libc-musl = " ${SRC_URI_MUSL}"
+SRC_URI_MUSL = "\
+               file://0003-missing_type.h-add-comparison_fn_t.patch \
+               file://0004-add-fallback-parse_printf_format-implementation.patch \
+               file://0005-src-basic-missing.h-check-for-missing-strndupa.patch \
+               file://0007-don-t-fail-if-GLOB_BRACE-and-GLOB_ALTDIRFUNC-is-not-.patch \
+               file://0008-add-missing-FTW_-macros-for-musl.patch \
+               file://0010-Use-uintmax_t-for-handling-rlim_t.patch \
+               file://0011-test-sizeof.c-Disable-tests-for-missing-typedefs-in-.patch \
+               file://0012-don-t-pass-AT_SYMLINK_NOFOLLOW-flag-to-faccessat.patch \
+               file://0013-Define-glibc-compatible-basename-for-non-glibc-syste.patch \
+               file://0014-Do-not-disable-buffering-when-writing-to-oom_score_a.patch \
+               file://0015-distinguish-XSI-compliant-strerror_r-from-GNU-specif.patch \
+               file://0018-avoid-redefinition-of-prctl_mm_map-structure.patch \
+               file://0022-do-not-disable-buffer-in-writing-files.patch \
+               file://0025-Handle-__cpu_mask-usage.patch \
+               file://0026-Handle-missing-gshadow.patch \
+               file://0028-missing_syscall.h-Define-MIPS-ABI-defines-for-musl.patch \
+               file://0001-pass-correct-parameters-to-getdents64.patch \
+               file://0002-Add-sys-stat.h-for-S_IFDIR.patch \
+               file://0001-Adjust-for-musl-headers.patch \
+               "
+
+PAM_PLUGINS = " \
+    pam-plugin-unix \
+    pam-plugin-loginuid \
+    pam-plugin-keyinit \
+"
+
+PACKAGECONFIG ??= " \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'acl audit efi ldconfig pam selinux smack usrmerge polkit seccomp', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'wifi', 'rfkill', '', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'xkbcommon', '', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', '', 'link-udev-shared', d)} \
+    backlight \
+    binfmt \
+    gshadow \
+    hibernate \
+    hostnamed \
+    idn \
+    ima \
+    kmod \
+    localed \
+    logind \
+    machined \
+    myhostname \
+    networkd \
+    nss \
+    nss-mymachines \
+    nss-resolve \
+    quotacheck \
+    randomseed \
+    resolved \
+    set-time-epoch \
+    sysusers \
+    sysvinit \
+    timedated \
+    timesyncd \
+    userdb \
+    utmp \
+    vconsole \
+    wheel-group \
+    zstd \
+"
+
+PACKAGECONFIG:remove:libc-musl = " \
+    gshadow \
+    idn \
+    localed \
+    myhostname \
+    nss \
+    nss-mymachines \
+    nss-resolve \
+    sysusers \
+    userdb \
+    utmp \
+"
+
+# https://github.com/seccomp/libseccomp/issues/347
+PACKAGECONFIG:remove:mipsarch = "seccomp"
+
+CFLAGS:append:libc-musl = " -D__UAPI_DEF_ETHHDR=0 "
+
+# Some of the dependencies are weak-style recommends - if not available at runtime,
+# systemd won't fail but the library-related feature will be skipped with a warning.
+
+# Use the upstream systemd serial-getty@.service and rely on
+# systemd-getty-generator instead of using the OE-core specific
+# systemd-serialgetty.bb - not enabled by default.
+PACKAGECONFIG[serial-getty-generator] = ""
+
+PACKAGECONFIG[acl] = "-Dacl=true,-Dacl=false,acl"
+PACKAGECONFIG[audit] = "-Daudit=true,-Daudit=false,audit"
+PACKAGECONFIG[backlight] = "-Dbacklight=true,-Dbacklight=false"
+PACKAGECONFIG[binfmt] = "-Dbinfmt=true,-Dbinfmt=false"
+PACKAGECONFIG[bzip2] = "-Dbzip2=true,-Dbzip2=false,bzip2"
+PACKAGECONFIG[cgroupv2] = "-Ddefault-hierarchy=unified,-Ddefault-hierarchy=hybrid"
+PACKAGECONFIG[coredump] = "-Dcoredump=true,-Dcoredump=false"
+PACKAGECONFIG[cryptsetup] = "-Dlibcryptsetup=true,-Dlibcryptsetup=false,cryptsetup,,cryptsetup"
+PACKAGECONFIG[tpm2] = "-Dtpm2=true,-Dtpm2=false,tpm2-tss,tpm2-tss libtss2 libtss2-tcti-device"
+PACKAGECONFIG[dbus] = "-Ddbus=true,-Ddbus=false,dbus"
+PACKAGECONFIG[efi] = "-Defi=true,-Defi=false"
+PACKAGECONFIG[gnu-efi] = "-Dgnu-efi=true -Defi-libdir=${STAGING_LIBDIR} -Defi-includedir=${STAGING_INCDIR}/efi,-Dgnu-efi=false,gnu-efi"
+PACKAGECONFIG[elfutils] = "-Delfutils=true,-Delfutils=false,elfutils"
+PACKAGECONFIG[firstboot] = "-Dfirstboot=true,-Dfirstboot=false"
+PACKAGECONFIG[repart] = "-Drepart=true,-Drepart=false"
+PACKAGECONFIG[homed] = "-Dhomed=true,-Dhomed=false"
+# Sign the journal for anti-tampering
+PACKAGECONFIG[gcrypt] = "-Dgcrypt=true,-Dgcrypt=false,libgcrypt"
+PACKAGECONFIG[gnutls] = "-Dgnutls=true,-Dgnutls=false,gnutls"
+PACKAGECONFIG[gshadow] = "-Dgshadow=true,-Dgshadow=false"
+PACKAGECONFIG[hibernate] = "-Dhibernate=true,-Dhibernate=false"
+PACKAGECONFIG[hostnamed] = "-Dhostnamed=true,-Dhostnamed=false"
+PACKAGECONFIG[idn] = "-Didn=true,-Didn=false"
+PACKAGECONFIG[ima] = "-Dima=true,-Dima=false"
+# importd requires journal-upload/xz/zlib/bzip2/gcrypt
+PACKAGECONFIG[importd] = "-Dimportd=true,-Dimportd=false"
+# Update NAT firewall rules
+PACKAGECONFIG[iptc] = "-Dlibiptc=true,-Dlibiptc=false,iptables"
+PACKAGECONFIG[journal-upload] = "-Dlibcurl=true,-Dlibcurl=false,curl"
+PACKAGECONFIG[kmod] = "-Dkmod=true,-Dkmod=false,kmod"
+PACKAGECONFIG[ldconfig] = "-Dldconfig=true,-Dldconfig=false,,ldconfig"
+PACKAGECONFIG[libidn] = "-Dlibidn=true,-Dlibidn=false,libidn,,libidn"
+PACKAGECONFIG[libidn2] = "-Dlibidn2=true,-Dlibidn2=false,libidn2,,libidn2"
+# Link udev shared with systemd helper library.
+# If enabled, the udev package depends on the systemd package (which provides the needed shared library).
+PACKAGECONFIG[link-udev-shared] = "-Dlink-udev-shared=true,-Dlink-udev-shared=false"
+PACKAGECONFIG[localed] = "-Dlocaled=true,-Dlocaled=false"
+PACKAGECONFIG[logind] = "-Dlogind=true,-Dlogind=false"
+PACKAGECONFIG[lz4] = "-Dlz4=true,-Dlz4=false,lz4"
+PACKAGECONFIG[machined] = "-Dmachined=true,-Dmachined=false"
+PACKAGECONFIG[manpages] = "-Dman=true,-Dman=false,libxslt-native xmlto-native docbook-xml-dtd4-native docbook-xsl-stylesheets-native"
+PACKAGECONFIG[microhttpd] = "-Dmicrohttpd=true,-Dmicrohttpd=false,libmicrohttpd"
+PACKAGECONFIG[myhostname] = "-Dnss-myhostname=true,-Dnss-myhostname=false,,libnss-myhostname"
+PACKAGECONFIG[networkd] = "-Dnetworkd=true,-Dnetworkd=false"
+PACKAGECONFIG[nss] = "-Dnss-systemd=true,-Dnss-systemd=false"
+PACKAGECONFIG[nss-mymachines] = "-Dnss-mymachines=true,-Dnss-mymachines=false"
+PACKAGECONFIG[nss-resolve] = "-Dnss-resolve=true,-Dnss-resolve=false"
+PACKAGECONFIG[oomd] = "-Doomd=true,-Doomd=false"
+PACKAGECONFIG[openssl] = "-Dopenssl=true,-Dopenssl=false,openssl"
+PACKAGECONFIG[pam] = "-Dpam=true,-Dpam=false,libpam,${PAM_PLUGINS}"
+PACKAGECONFIG[pcre2] = "-Dpcre2=true,-Dpcre2=false,libpcre2"
+PACKAGECONFIG[polkit] = "-Dpolkit=true,-Dpolkit=false"
+# If polkit is disabled and networkd+hostnamed are in use, enabling this option and
+# using dbus-broker will allow networkd to be authorized to change the
+# hostname without acquiring additional privileges
+PACKAGECONFIG[polkit_hostnamed_fallback] = ",,,,dbus-broker,polkit"
+PACKAGECONFIG[portabled] = "-Dportabled=true,-Dportabled=false"
+PACKAGECONFIG[qrencode] = "-Dqrencode=true,-Dqrencode=false,qrencode,,qrencode"
+PACKAGECONFIG[quotacheck] = "-Dquotacheck=true,-Dquotacheck=false"
+PACKAGECONFIG[randomseed] = "-Drandomseed=true,-Drandomseed=false"
+PACKAGECONFIG[resolved] = "-Dresolve=true,-Dresolve=false"
+PACKAGECONFIG[rfkill] = "-Drfkill=true,-Drfkill=false"
+PACKAGECONFIG[seccomp] = "-Dseccomp=true,-Dseccomp=false,libseccomp"
+PACKAGECONFIG[selinux] = "-Dselinux=true,-Dselinux=false,libselinux,initscripts-sushell"
+PACKAGECONFIG[smack] = "-Dsmack=true,-Dsmack=false"
+PACKAGECONFIG[sysext] = "-Dsysext=true, -Dsysext=false"
+PACKAGECONFIG[sysusers] = "-Dsysusers=true,-Dsysusers=false"
+PACKAGECONFIG[sysvinit] = "-Dsysvinit-path=${sysconfdir}/init.d -Dsysvrcnd-path=${sysconfdir},-Dsysvinit-path= -Dsysvrcnd-path=,,systemd-compat-units update-rc.d"
+# When enabled, use the reproducible build timestamp as the time epoch if it is set,
+# or the build time if not. When disabled, the time epoch is unset.
+def build_epoch(d):
+    epoch = d.getVar('SOURCE_DATE_EPOCH') or "-1"
+    return '-Dtime-epoch=%d' % int(epoch)
+PACKAGECONFIG[set-time-epoch] = "${@build_epoch(d)},-Dtime-epoch=0"
+PACKAGECONFIG[timedated] = "-Dtimedated=true,-Dtimedated=false"
+PACKAGECONFIG[timesyncd] = "-Dtimesyncd=true,-Dtimesyncd=false"
+PACKAGECONFIG[usrmerge] = "-Dsplit-usr=false,-Dsplit-usr=true"
+PACKAGECONFIG[sbinmerge] = "-Dsplit-bin=false,-Dsplit-bin=true"
+PACKAGECONFIG[userdb] = "-Duserdb=true,-Duserdb=false"
+PACKAGECONFIG[utmp] = "-Dutmp=true,-Dutmp=false"
+PACKAGECONFIG[valgrind] = "-DVALGRIND=1,,valgrind"
+PACKAGECONFIG[vconsole] = "-Dvconsole=true,-Dvconsole=false,,${PN}-vconsole-setup"
+PACKAGECONFIG[wheel-group] = "-Dwheel-group=true, -Dwheel-group=false"
+PACKAGECONFIG[xdg-autostart] = "-Dxdg-autostart=true,-Dxdg-autostart=false"
+# Verify keymaps on locale change
+PACKAGECONFIG[xkbcommon] = "-Dxkbcommon=true,-Dxkbcommon=false,libxkbcommon"
+PACKAGECONFIG[xz] = "-Dxz=true,-Dxz=false,xz"
+PACKAGECONFIG[zlib] = "-Dzlib=true,-Dzlib=false,zlib"
+PACKAGECONFIG[zstd] = "-Dzstd=true,-Dzstd=false,zstd"
+
+# Helper variables to clarify locations.  This mirrors the logic in systemd's
+# build system.
+rootprefix ?= "${root_prefix}"
+rootlibdir ?= "${base_libdir}"
+rootlibexecdir = "${rootprefix}/lib"
+
+EXTRA_OEMESON += "-Dnobody-user=nobody \
+                  -Dnobody-group=nobody \
+                  -Drootlibdir=${rootlibdir} \
+                  -Drootprefix=${rootprefix} \
+                  -Ddefault-locale=C \
+                  -Dmode=release \
+                  -Dsystem-alloc-uid-min=101 \
+                  -Dsystem-uid-max=999 \
+                  -Dsystem-alloc-gid-min=101 \
+                  -Dsystem-gid-max=999 \
+                  "
+
+# Hardcode target binary paths to avoid using paths from sysroot
+EXTRA_OEMESON += "-Dkexec-path=${sbindir}/kexec \
+                  -Dkmod-path=${base_bindir}/kmod \
+                  -Dmount-path=${base_bindir}/mount \
+                  -Dquotacheck-path=${sbindir}/quotacheck \
+                  -Dquotaon-path=${sbindir}/quotaon \
+                  -Dsulogin-path=${base_sbindir}/sulogin \
+                  -Dnologin-path=${base_sbindir}/nologin \
+                  -Dumount-path=${base_bindir}/umount"
+
+# 60 seconds is the watchdog's default value.
+WATCHDOG_TIMEOUT ??= "60"
+
+do_install() {
+	meson_do_install
+	install -d ${D}/${base_sbindir}
+	if ${@bb.utils.contains('PACKAGECONFIG', 'serial-getty-generator', 'false', 'true', d)}; then
+		# Provided by a separate recipe
+		rm ${D}${systemd_system_unitdir}/serial-getty* -f
+	fi
+
+	# Provide support for initramfs
+	[ ! -e ${D}/init ] && ln -s ${rootlibexecdir}/systemd/systemd ${D}/init
+	[ ! -e ${D}/${base_sbindir}/udevd ] && ln -s ${rootlibexecdir}/systemd/systemd-udevd ${D}/${base_sbindir}/udevd
+
+	install -d ${D}${sysconfdir}/udev/rules.d/
+	install -d ${D}${sysconfdir}/tmpfiles.d
+	for rule in $(find ${WORKDIR} -maxdepth 1 -type f -name "*.rules"); do
+		install -m 0644 $rule ${D}${sysconfdir}/udev/rules.d/
+	done
+
+	install -m 0644 ${WORKDIR}/00-create-volatile.conf ${D}${sysconfdir}/tmpfiles.d/
+
+	if ${@bb.utils.contains('DISTRO_FEATURES','sysvinit','true','false',d)}; then
+		install -d ${D}${sysconfdir}/init.d
+		install -m 0755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/systemd-udevd
+		sed -i s%@UDEVD@%${rootlibexecdir}/systemd/systemd-udevd% ${D}${sysconfdir}/init.d/systemd-udevd
+		install -Dm 0755 ${S}/src/systemctl/systemd-sysv-install.SKELETON ${D}${systemd_unitdir}/systemd-sysv-install
+	fi
+
+	if "${@'true' if oe.types.boolean(d.getVar('VOLATILE_LOG_DIR')) else 'false'}"; then
+		# /var/log is typically a symbolic link to inside /var/volatile,
+		# which is expected to be empty.
+		rm -rf ${D}${localstatedir}/log
+	else
+		chown root:systemd-journal ${D}${localstatedir}/log/journal
+
+		# journal-remote creates this at start
+		rm -rf ${D}${localstatedir}/log/journal/remote
+	fi
+
+	install -d ${D}${systemd_system_unitdir}/graphical.target.wants
+	install -d ${D}${systemd_system_unitdir}/multi-user.target.wants
+	install -d ${D}${systemd_system_unitdir}/poweroff.target.wants
+	install -d ${D}${systemd_system_unitdir}/reboot.target.wants
+	install -d ${D}${systemd_system_unitdir}/rescue.target.wants
+
+	# Create symlinks for systemd-update-utmp-runlevel.service
+	if ${@bb.utils.contains('PACKAGECONFIG', 'utmp', 'true', 'false', d)}; then
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/graphical.target.wants/systemd-update-utmp-runlevel.service
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/multi-user.target.wants/systemd-update-utmp-runlevel.service
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/poweroff.target.wants/systemd-update-utmp-runlevel.service
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/reboot.target.wants/systemd-update-utmp-runlevel.service
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_system_unitdir}/rescue.target.wants/systemd-update-utmp-runlevel.service
+	fi
+
+	# This file needs to exist if networkd is disabled but timesyncd is still in use,
+	# since timesyncd checks it for existence and fails otherwise.
+	if [ -s ${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf ] &&
+	   ! ${@bb.utils.contains('PACKAGECONFIG', 'networkd', 'true', 'false', d)}; then
+		echo 'd /run/systemd/netif/links 0755 root root -' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
+	fi
+	if ! ${@bb.utils.contains('PACKAGECONFIG', 'resolved', 'true', 'false', d)}; then
+		echo 'L! ${sysconfdir}/resolv.conf - - - - ../run/systemd/resolve/resolv.conf' >>${D}${exec_prefix}/lib/tmpfiles.d/etc.conf
+		echo 'd /run/systemd/resolve 0755 root root -' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
+		echo 'f /run/systemd/resolve/resolv.conf 0644 root root' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
+		ln -s ../run/systemd/resolve/resolv.conf ${D}${sysconfdir}/resolv-conf.systemd
+	else
+		sed -i -e "s%^L! /etc/resolv.conf.*$%L! /etc/resolv.conf - - - - ../run/systemd/resolve/resolv.conf%g" ${D}${exec_prefix}/lib/tmpfiles.d/etc.conf
+		ln -s ../run/systemd/resolve/resolv.conf ${D}${sysconfdir}/resolv-conf.systemd
+	fi
+	if ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'false', 'true', d)}; then
+		rm ${D}${exec_prefix}/lib/tmpfiles.d/x11.conf
+		rm -r ${D}${sysconfdir}/X11
+	fi
+
+	# If polkit is set up, fix up permissions and ownership
+	if ${@bb.utils.contains('PACKAGECONFIG', 'polkit', 'true', 'false', d)}; then
+		if [ -d ${D}${datadir}/polkit-1/rules.d ]; then
+			chmod 700 ${D}${datadir}/polkit-1/rules.d
+			chown polkitd:root ${D}${datadir}/polkit-1/rules.d
+		fi
+	fi
+
+	# If polkit is not available and a fallback was requested, install a drop-in that allows networkd to
+	# request hostname changes via DBUS without elevating its privileges
+	if ${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', 'true', 'false', d)}; then
+		install -d ${D}${systemd_system_unitdir}/systemd-hostnamed.service.d/
+		install -m 0644 ${WORKDIR}/00-hostnamed-network-user.conf ${D}${systemd_system_unitdir}/systemd-hostnamed.service.d/
+		install -d ${D}${datadir}/dbus-1/system.d/
+		install -m 0644 ${WORKDIR}/org.freedesktop.hostname1_no_polkit.conf ${D}${datadir}/dbus-1/system.d/
+	fi
+
+	# create link for existing udev rules
+	ln -s ${base_bindir}/udevadm ${D}${base_sbindir}/udevadm
+
+	# install default policy for presets
+	# https://www.freedesktop.org/wiki/Software/systemd/Preset/#howto
+	install -Dm 0644 ${WORKDIR}/99-default.preset ${D}${systemd_unitdir}/system-preset/99-default.preset
+
+	# add a profile fragment to disable systemd pager with busybox less
+	install -Dm 0644 ${WORKDIR}/systemd-pager.sh ${D}${sysconfdir}/profile.d/systemd-pager.sh
+
+    if [ -n "${WATCHDOG_TIMEOUT}" ]; then
+        sed -i -e 's/#RebootWatchdogSec=10min/RebootWatchdogSec=${WATCHDOG_TIMEOUT}/' \
+            ${D}/${sysconfdir}/systemd/system.conf
+    fi
+}
+
+python populate_packages:prepend (){
+    systemdlibdir = d.getVar("rootlibdir")
+    do_split_packages(d, systemdlibdir, r'^lib(.*)\.so\.*', 'lib%s', 'Systemd %s library', extra_depends='', allow_links=True)
+}
+PACKAGES_DYNAMIC += "^lib(udev|systemd|nss).*"
+
+PACKAGE_BEFORE_PN = "\
+    ${PN}-gui \
+    ${PN}-vconsole-setup \
+    ${PN}-initramfs \
+    ${PN}-analyze \
+    ${PN}-kernel-install \
+    ${PN}-rpm-macros \
+    ${PN}-binfmt \
+    ${PN}-zsh-completion \
+    ${PN}-container \
+    ${PN}-journal-gatewayd \
+    ${PN}-journal-upload \
+    ${PN}-journal-remote \
+    ${PN}-extra-utils \
+    ${PN}-udev-rules \
+    libsystemd-shared \
+    udev \
+    udev-hwdb \
+"
+
+SUMMARY:${PN}-container = "Tools for containers and VMs"
+DESCRIPTION:${PN}-container = "Systemd tools to spawn and manage containers and virtual machines."
+
+SUMMARY:${PN}-journal-gatewayd = "HTTP server for journal events"
+DESCRIPTION:${PN}-journal-gatewayd = "systemd-journal-gatewayd serves journal events over the network. Clients must connect using HTTP. The server listens on port 19531 by default."
+
+SUMMARY:${PN}-journal-upload = "Send journal messages over the network"
+DESCRIPTION:${PN}-journal-upload = "systemd-journal-upload uploads journal entries to a specified URL."
+
+SUMMARY:${PN}-journal-remote = "Receive journal messages over the network"
+DESCRIPTION:${PN}-journal-remote = "systemd-journal-remote is a command to receive serialized journal events and store them to journal files."
+
+SUMMARY:libsystemd-shared = "Systemd shared library"
+
+SYSTEMD_PACKAGES = "${@bb.utils.contains('PACKAGECONFIG', 'binfmt', '${PN}-binfmt', '', d)} \
+                    ${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '${PN}-journal-gatewayd', '', d)} \
+                    ${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '${PN}-journal-remote', '', d)} \
+                    ${@bb.utils.contains('PACKAGECONFIG', 'journal-upload', '${PN}-journal-upload', '', d)} \
+"
+SYSTEMD_SERVICE:${PN}-binfmt = "systemd-binfmt.service"
+
+USERADD_PACKAGES = "${PN} ${PN}-extra-utils \
+                    ${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '${PN}-journal-gatewayd', '', d)} \
+                    ${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '${PN}-journal-remote', '', d)} \
+                    ${@bb.utils.contains('PACKAGECONFIG', 'journal-upload', '${PN}-journal-upload', '', d)} \
+"
+GROUPADD_PARAM:${PN} = "-r systemd-journal;"
+GROUPADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', '-r systemd-hostname;', '', d)}"
+USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'coredump', '--system -d / -M --shell /sbin/nologin systemd-coredump;', '', d)}"
+USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'networkd', '--system -d / -M --shell /sbin/nologin systemd-network;', '', d)}"
+USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'polkit', '--system --no-create-home --user-group --home-dir ${sysconfdir}/polkit-1 polkitd;', '', d)}"
+USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'resolved', '--system -d / -M --shell /sbin/nologin systemd-resolve;', '', d)}"
+USERADD_PARAM:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'timesyncd', '--system -d / -M --shell /sbin/nologin systemd-timesync;', '', d)}"
+USERADD_PARAM:${PN}-extra-utils = "--system -d / -M --shell /sbin/nologin systemd-bus-proxy"
+USERADD_PARAM:${PN}-journal-gatewayd = "--system -d / -M --shell /sbin/nologin systemd-journal-gateway"
+USERADD_PARAM:${PN}-journal-remote = "--system -d / -M --shell /sbin/nologin systemd-journal-remote"
+USERADD_PARAM:${PN}-journal-upload = "--system -d / -M --shell /sbin/nologin systemd-journal-upload"
+
+FILES:${PN}-analyze = "${bindir}/systemd-analyze"
+
+FILES:${PN}-initramfs = "/init"
+RDEPENDS:${PN}-initramfs = "${PN}"
+
+FILES:${PN}-gui = "${bindir}/systemadm"
+
+FILES:${PN}-vconsole-setup = "${rootlibexecdir}/systemd/systemd-vconsole-setup \
+                              ${systemd_system_unitdir}/systemd-vconsole-setup.service \
+                              ${systemd_system_unitdir}/sysinit.target.wants/systemd-vconsole-setup.service"
+
+RDEPENDS:${PN}-kernel-install += "bash"
+FILES:${PN}-kernel-install = "${bindir}/kernel-install \
+                              ${sysconfdir}/kernel/ \
+                              ${exec_prefix}/lib/kernel \
+                             "
+FILES:${PN}-rpm-macros = "${exec_prefix}/lib/rpm \
+                         "
+
+FILES:${PN}-zsh-completion = "${datadir}/zsh/site-functions"
+
+FILES:${PN}-binfmt = "${sysconfdir}/binfmt.d/ \
+                      ${exec_prefix}/lib/binfmt.d \
+                      ${rootlibexecdir}/systemd/systemd-binfmt \
+                      ${systemd_system_unitdir}/proc-sys-fs-binfmt_misc.* \
+                      ${systemd_system_unitdir}/systemd-binfmt.service"
+RRECOMMENDS:${PN}-binfmt = "kernel-module-binfmt-misc"
+
+RRECOMMENDS:${PN}-vconsole-setup = "kbd kbd-consolefonts kbd-keymaps"
+
+
+FILES:${PN}-journal-gatewayd = "${rootlibexecdir}/systemd/systemd-journal-gatewayd \
+                                ${systemd_system_unitdir}/systemd-journal-gatewayd.service \
+                                ${systemd_system_unitdir}/systemd-journal-gatewayd.socket \
+                                ${systemd_system_unitdir}/sockets.target.wants/systemd-journal-gatewayd.socket \
+                                ${datadir}/systemd/gatewayd/browse.html \
+                               "
+SYSTEMD_SERVICE:${PN}-journal-gatewayd = "systemd-journal-gatewayd.socket"
+
+FILES:${PN}-journal-upload = "${rootlibexecdir}/systemd/systemd-journal-upload \
+                              ${systemd_system_unitdir}/systemd-journal-upload.service \
+                              ${sysconfdir}/systemd/journal-upload.conf \
+                             "
+SYSTEMD_SERVICE:${PN}-journal-upload = "systemd-journal-upload.service"
+
+FILES:${PN}-journal-remote = "${rootlibexecdir}/systemd/systemd-journal-remote \
+                              ${sysconfdir}/systemd/journal-remote.conf \
+                              ${systemd_system_unitdir}/systemd-journal-remote.service \
+                              ${systemd_system_unitdir}/systemd-journal-remote.socket \
+                             "
+SYSTEMD_SERVICE:${PN}-journal-remote = "systemd-journal-remote.socket"
+
+
+FILES:${PN}-container = "${sysconfdir}/dbus-1/system.d/org.freedesktop.import1.conf \
+                         ${sysconfdir}/dbus-1/system.d/org.freedesktop.machine1.conf \
+                         ${sysconfdir}/systemd/system/multi-user.target.wants/machines.target \
+                         ${base_bindir}/machinectl \
+                         ${bindir}/systemd-nspawn \
+                         ${nonarch_libdir}/systemd/import-pubring.gpg \
+                         ${systemd_system_unitdir}/busnames.target.wants/org.freedesktop.import1.busname \
+                         ${systemd_system_unitdir}/busnames.target.wants/org.freedesktop.machine1.busname \
+                         ${systemd_system_unitdir}/local-fs.target.wants/var-lib-machines.mount \
+                         ${systemd_system_unitdir}/machines.target.wants/var-lib-machines.mount \
+                         ${systemd_system_unitdir}/remote-fs.target.wants/var-lib-machines.mount \
+                         ${systemd_system_unitdir}/machine.slice \
+                         ${systemd_system_unitdir}/machines.target \
+                         ${systemd_system_unitdir}/org.freedesktop.import1.busname \
+                         ${systemd_system_unitdir}/org.freedesktop.machine1.busname \
+                         ${systemd_system_unitdir}/systemd-importd.service \
+                         ${systemd_system_unitdir}/systemd-machined.service \
+                         ${systemd_system_unitdir}/dbus-org.freedesktop.machine1.service \
+                         ${systemd_system_unitdir}/var-lib-machines.mount \
+                         ${rootlibexecdir}/systemd/systemd-import \
+                         ${rootlibexecdir}/systemd/systemd-importd \
+                         ${rootlibexecdir}/systemd/systemd-machined \
+                         ${rootlibexecdir}/systemd/systemd-pull \
+                         ${exec_prefix}/lib/tmpfiles.d/systemd-nspawn.conf \
+                         ${exec_prefix}/lib/tmpfiles.d/README \
+                         ${systemd_system_unitdir}/systemd-nspawn@.service \
+                         ${libdir}/libnss_mymachines.so.2 \
+                         ${datadir}/dbus-1/system-services/org.freedesktop.import1.service \
+                         ${datadir}/dbus-1/system-services/org.freedesktop.machine1.service \
+                         ${datadir}/dbus-1/system.d/org.freedesktop.import1.conf \
+                         ${datadir}/dbus-1/system.d/org.freedesktop.machine1.conf \
+                         ${datadir}/polkit-1/actions/org.freedesktop.import1.policy \
+                         ${datadir}/polkit-1/actions/org.freedesktop.machine1.policy \
+                        "
+
+# "machinectl import-tar" uses "tar --numeric-owner", not supported by busybox.
+RRECOMMENDS:${PN}-container += "\
+                         ${PN}-journal-gatewayd \
+                         ${PN}-journal-remote \
+                         ${PN}-journal-upload \
+                         kernel-module-dm-mod \
+                         kernel-module-loop \
+                         kernel-module-tun \
+                         tar \
+                        "
+
+FILES:${PN}-extra-utils = "\
+                        ${base_bindir}/systemd-escape \
+                        ${base_bindir}/systemd-inhibit \
+                        ${bindir}/systemd-detect-virt \
+                        ${bindir}/systemd-dissect \
+                        ${bindir}/systemd-path \
+                        ${bindir}/systemd-run \
+                        ${bindir}/systemd-cat \
+                        ${bindir}/systemd-delta \
+                        ${bindir}/systemd-cgls \
+                        ${bindir}/systemd-cgtop \
+                        ${bindir}/systemd-stdio-bridge \
+                        ${base_bindir}/systemd-ask-password \
+                        ${base_bindir}/systemd-tty-ask-password-agent \
+                        ${systemd_system_unitdir}/systemd-ask-password-console.path \
+                        ${systemd_system_unitdir}/systemd-ask-password-console.service \
+                        ${systemd_system_unitdir}/systemd-ask-password-wall.path \
+                        ${systemd_system_unitdir}/systemd-ask-password-wall.service \
+                        ${systemd_system_unitdir}/sysinit.target.wants/systemd-ask-password-console.path \
+                        ${systemd_system_unitdir}/sysinit.target.wants/systemd-ask-password-wall.path \
+                        ${systemd_system_unitdir}/multi-user.target.wants/systemd-ask-password-wall.path \
+                        ${rootlibexecdir}/systemd/systemd-resolve-host \
+                        ${rootlibexecdir}/systemd/systemd-ac-power \
+                        ${rootlibexecdir}/systemd/systemd-activate \
+                        ${rootlibexecdir}/systemd/systemd-bus-proxyd \
+                        ${systemd_system_unitdir}/systemd-bus-proxyd.service \
+                        ${systemd_system_unitdir}/systemd-bus-proxyd.socket \
+                        ${rootlibexecdir}/systemd/systemd-socket-proxyd \
+                        ${rootlibexecdir}/systemd/systemd-reply-password \
+                        ${rootlibexecdir}/systemd/systemd-sleep \
+                        ${rootlibexecdir}/systemd/system-sleep \
+                        ${systemd_system_unitdir}/systemd-hibernate.service \
+                        ${systemd_system_unitdir}/systemd-hybrid-sleep.service \
+                        ${systemd_system_unitdir}/systemd-suspend.service \
+                        ${systemd_system_unitdir}/sleep.target \
+                        ${rootlibexecdir}/systemd/systemd-initctl \
+                        ${systemd_system_unitdir}/systemd-initctl.service \
+                        ${systemd_system_unitdir}/systemd-initctl.socket \
+                        ${systemd_system_unitdir}/sockets.target.wants/systemd-initctl.socket \
+                        ${rootlibexecdir}/systemd/system-generators/systemd-gpt-auto-generator \
+                        ${rootlibexecdir}/systemd/systemd-cgroups-agent \
+"
+
+FILES:${PN}-udev-rules = "\
+                        ${rootlibexecdir}/udev/rules.d/70-uaccess.rules \
+                        ${rootlibexecdir}/udev/rules.d/71-seat.rules \
+                        ${rootlibexecdir}/udev/rules.d/73-seat-late.rules \
+                        ${rootlibexecdir}/udev/rules.d/99-systemd.rules \
+"
+
+CONFFILES:${PN} = "${sysconfdir}/systemd/coredump.conf \
+	${sysconfdir}/systemd/journald.conf \
+	${sysconfdir}/systemd/logind.conf \
+	${sysconfdir}/systemd/networkd.conf \
+	${sysconfdir}/systemd/pstore.conf \
+	${sysconfdir}/systemd/resolved.conf \
+	${sysconfdir}/systemd/sleep.conf \
+	${sysconfdir}/systemd/system.conf \
+	${sysconfdir}/systemd/timesyncd.conf \
+	${sysconfdir}/systemd/user.conf \
+"
+
+FILES:${PN} = " ${base_bindir}/* \
+                ${base_sbindir}/shutdown \
+                ${base_sbindir}/halt \
+                ${base_sbindir}/poweroff \
+                ${base_sbindir}/runlevel \
+                ${base_sbindir}/telinit \
+                ${base_sbindir}/resolvconf \
+                ${base_sbindir}/reboot \
+                ${base_sbindir}/init \
+                ${datadir}/dbus-1/services \
+                ${datadir}/dbus-1/system-services \
+                ${datadir}/polkit-1 \
+                ${datadir}/${BPN} \
+                ${datadir}/factory \
+                ${sysconfdir}/dbus-1/ \
+                ${sysconfdir}/modules-load.d/ \
+                ${sysconfdir}/pam.d/ \
+                ${sysconfdir}/profile.d/ \
+                ${sysconfdir}/sysctl.d/ \
+                ${sysconfdir}/systemd/ \
+                ${sysconfdir}/tmpfiles.d/ \
+                ${sysconfdir}/xdg/ \
+                ${sysconfdir}/init.d/README \
+                ${sysconfdir}/resolv-conf.systemd \
+                ${sysconfdir}/X11/xinit/xinitrc.d/* \
+                ${rootlibexecdir}/systemd/* \
+                ${libdir}/pam.d \
+                ${nonarch_libdir}/pam.d \
+                ${systemd_unitdir}/* \
+                ${base_libdir}/security/*.so \
+                /cgroup \
+                ${bindir}/systemd* \
+                ${bindir}/busctl \
+                ${bindir}/coredumpctl \
+                ${bindir}/localectl \
+                ${bindir}/hostnamectl \
+                ${bindir}/resolvectl \
+                ${bindir}/timedatectl \
+                ${bindir}/bootctl \
+                ${bindir}/oomctl \
+                ${exec_prefix}/lib/tmpfiles.d/*.conf \
+                ${exec_prefix}/lib/systemd \
+                ${exec_prefix}/lib/modules-load.d \
+                ${exec_prefix}/lib/sysctl.d \
+                ${exec_prefix}/lib/sysusers.d \
+                ${exec_prefix}/lib/environment.d \
+                ${localstatedir} \
+                ${rootlibexecdir}/modprobe.d/systemd.conf \
+                ${rootlibexecdir}/modprobe.d/README \
+                ${datadir}/dbus-1/system.d/org.freedesktop.timedate1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.locale1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.network1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.resolve1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.systemd1.conf \
+                ${@bb.utils.contains('PACKAGECONFIG', 'polkit_hostnamed_fallback', '${datadir}/dbus-1/system.d/org.freedesktop.hostname1_no_polkit.conf', '', d)} \
+                ${datadir}/dbus-1/system.d/org.freedesktop.hostname1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.login1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.timesync1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.portable1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.oom1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.home1.conf \
+               "
+
+FILES:${PN}-dev += "${base_libdir}/security/*.la ${datadir}/dbus-1/interfaces/ ${sysconfdir}/rpm/macros.systemd"
+
+RDEPENDS:${PN} += "kmod dbus util-linux-mount util-linux-umount udev (= ${EXTENDPKGV}) systemd-udev-rules util-linux-agetty util-linux-fsck"
+RDEPENDS:${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'serial-getty-generator', '', 'systemd-serialgetty', d)}"
+RDEPENDS:${PN} += "volatile-binds"
+
+RRECOMMENDS:${PN} += "systemd-extra-utils \
+                      udev-hwdb \
+                      e2fsprogs-e2fsck \
+                      kernel-module-autofs4 kernel-module-unix kernel-module-ipv6 kernel-module-sch-fq-codel \
+                      os-release \
+                      systemd-conf \
+"
+
+INSANE_SKIP:${PN} += "dev-so libdir"
+INSANE_SKIP:${PN}-dbg += "libdir"
+INSANE_SKIP:${PN}-doc += " libdir"
+INSANE_SKIP:libsystemd-shared += "libdir"
+
+FILES:libsystemd-shared = "${rootlibexecdir}/systemd/libsystemd-shared*.so"
+
+RPROVIDES:udev = "hotplug"
+
+RDEPENDS:udev-hwdb += "udev"
+
+FILES:udev += "${base_sbindir}/udevd \
+               ${rootlibexecdir}/systemd/network/99-default.link \
+               ${rootlibexecdir}/systemd/systemd-udevd \
+               ${rootlibexecdir}/udev/accelerometer \
+               ${rootlibexecdir}/udev/ata_id \
+               ${rootlibexecdir}/udev/cdrom_id \
+               ${rootlibexecdir}/udev/collect \
+               ${rootlibexecdir}/udev/dmi_memory_id \
+               ${rootlibexecdir}/udev/fido_id \
+               ${rootlibexecdir}/udev/findkeyboards \
+               ${rootlibexecdir}/udev/keyboard-force-release.sh \
+               ${rootlibexecdir}/udev/keymap \
+               ${rootlibexecdir}/udev/mtd_probe \
+               ${rootlibexecdir}/udev/scsi_id \
+               ${rootlibexecdir}/udev/v4l_id \
+               ${rootlibexecdir}/udev/keymaps \
+               ${rootlibexecdir}/udev/rules.d/50-udev-default.rules \
+               ${rootlibexecdir}/udev/rules.d/60-autosuspend.rules \
+               ${rootlibexecdir}/udev/rules.d/60-autosuspend-chromiumos.rules \
+               ${rootlibexecdir}/udev/rules.d/60-block.rules \
+               ${rootlibexecdir}/udev/rules.d/60-cdrom_id.rules \
+               ${rootlibexecdir}/udev/rules.d/60-drm.rules \
+               ${rootlibexecdir}/udev/rules.d/60-evdev.rules \
+               ${rootlibexecdir}/udev/rules.d/60-fido-id.rules \
+               ${rootlibexecdir}/udev/rules.d/60-input-id.rules \
+               ${rootlibexecdir}/udev/rules.d/60-persistent-alsa.rules \
+               ${rootlibexecdir}/udev/rules.d/60-persistent-input.rules \
+               ${rootlibexecdir}/udev/rules.d/60-persistent-storage.rules \
+               ${rootlibexecdir}/udev/rules.d/60-persistent-storage-tape.rules \
+               ${rootlibexecdir}/udev/rules.d/60-persistent-v4l.rules \
+               ${rootlibexecdir}/udev/rules.d/60-sensor.rules \
+               ${rootlibexecdir}/udev/rules.d/60-serial.rules \
+               ${rootlibexecdir}/udev/rules.d/61-autosuspend-manual.rules \
+               ${rootlibexecdir}/udev/rules.d/64-btrfs.rules \
+               ${rootlibexecdir}/udev/rules.d/70-camera.rules \
+               ${rootlibexecdir}/udev/rules.d/70-joystick.rules \
+               ${rootlibexecdir}/udev/rules.d/70-memory.rules \
+               ${rootlibexecdir}/udev/rules.d/70-mouse.rules \
+               ${rootlibexecdir}/udev/rules.d/70-power-switch.rules \
+               ${rootlibexecdir}/udev/rules.d/70-touchpad.rules \
+               ${rootlibexecdir}/udev/rules.d/75-net-description.rules \
+               ${rootlibexecdir}/udev/rules.d/75-probe_mtd.rules \
+               ${rootlibexecdir}/udev/rules.d/78-sound-card.rules \
+               ${rootlibexecdir}/udev/rules.d/80-drivers.rules \
+               ${rootlibexecdir}/udev/rules.d/80-net-setup-link.rules \
+               ${rootlibexecdir}/udev/rules.d/81-net-dhcp.rules \
+               ${rootlibexecdir}/udev/rules.d/90-vconsole.rules \
+               ${rootlibexecdir}/udev/rules.d/README \
+               ${sysconfdir}/udev \
+               ${sysconfdir}/init.d/systemd-udevd \
+               ${systemd_system_unitdir}/*udev* \
+               ${systemd_system_unitdir}/*.wants/*udev* \
+               ${base_bindir}/systemd-hwdb \
+               ${base_bindir}/udevadm \
+               ${base_sbindir}/udevadm \
+               ${datadir}/bash-completion/completions/udevadm \
+               ${systemd_system_unitdir}/systemd-hwdb-update.service \
+              "
+
+FILES:udev-hwdb = "${rootlibexecdir}/udev/hwdb.d \
+                   "
+
+RCONFLICTS:${PN} = "tiny-init ${@bb.utils.contains('PACKAGECONFIG', 'resolved', 'resolvconf', '', d)}"
+
+INITSCRIPT_PACKAGES = "udev"
+INITSCRIPT_NAME:udev = "systemd-udevd"
+INITSCRIPT_PARAMS:udev = "start 03 S ."
+
+python __anonymous() {
+    if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
+        d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
+
+    if bb.utils.contains('PACKAGECONFIG', 'repart', True, False, d) and not bb.utils.contains('PACKAGECONFIG', 'openssl', True, False, d):
+        bb.error("PACKAGECONFIG[repart] requires PACKAGECONFIG[openssl]")
+
+    if bb.utils.contains('PACKAGECONFIG', 'homed', True, False, d) and not bb.utils.contains('PACKAGECONFIG', 'userdb openssl cryptsetup', True, False, d):
+        bb.error("PACKAGECONFIG[homed] requires PACKAGECONFIG[userdb], PACKAGECONFIG[openssl] and PACKAGECONFIG[cryptsetup]")
+}
+
+python do_warn_musl() {
+    if d.getVar('TCLIBC') == "musl":
+        bb.warn("Using systemd with musl is not recommended since it is not supported upstream and some patches are known to be problematic.")
+}
+addtask warn_musl before do_configure
+
+ALTERNATIVE:${PN} = "halt reboot shutdown poweroff runlevel ${@bb.utils.contains('PACKAGECONFIG', 'resolved', 'resolv-conf', '', d)}"
+
+ALTERNATIVE_TARGET[resolv-conf] = "${sysconfdir}/resolv-conf.systemd"
+ALTERNATIVE_LINK_NAME[resolv-conf] = "${sysconfdir}/resolv.conf"
+ALTERNATIVE_PRIORITY[resolv-conf] ?= "50"
+
+ALTERNATIVE_TARGET[halt] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[halt] = "${base_sbindir}/halt"
+ALTERNATIVE_PRIORITY[halt] ?= "300"
+
+ALTERNATIVE_TARGET[reboot] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[reboot] = "${base_sbindir}/reboot"
+ALTERNATIVE_PRIORITY[reboot] ?= "300"
+
+ALTERNATIVE_TARGET[shutdown] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[shutdown] = "${base_sbindir}/shutdown"
+ALTERNATIVE_PRIORITY[shutdown] ?= "300"
+
+ALTERNATIVE_TARGET[poweroff] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[poweroff] = "${base_sbindir}/poweroff"
+ALTERNATIVE_PRIORITY[poweroff] ?= "300"
+
+ALTERNATIVE_TARGET[runlevel] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[runlevel] = "${base_sbindir}/runlevel"
+ALTERNATIVE_PRIORITY[runlevel] ?= "300"
+
+pkg_postinst:${PN}:libc-glibc () {
+	sed -e '/^hosts:/s/\s*\<myhostname\>//' \
+		-e 's/\(^hosts:.*\)\(\<files\>\)\(.*\)\(\<dns\>\)\(.*\)/\1\2 myhostname \3\4\5/' \
+		-i $D${sysconfdir}/nsswitch.conf
+}
+
+pkg_prerm:${PN}:libc-glibc () {
+	sed -e '/^hosts:/s/\s*\<myhostname\>//' \
+		-e '/^hosts:/s/\s*myhostname//' \
+		-i $D${sysconfdir}/nsswitch.conf
+}
+
+PACKAGE_WRITE_DEPS += "qemu-native"
+pkg_postinst:udev-hwdb () {
+	if test -n "$D"; then
+		$INTERCEPT_DIR/postinst_intercept update_udev_hwdb ${PKG} mlprefix=${MLPREFIX} binprefix=${MLPREFIX} rootlibexecdir="${rootlibexecdir}" PREFERRED_PROVIDER_udev="${PREFERRED_PROVIDER_udev}" base_bindir="${base_bindir}"
+	else
+		udevadm hwdb --update
+	fi
+}
+
+pkg_prerm:udev-hwdb () {
+	rm -f $D${sysconfdir}/udev/hwdb.bin
+}
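The systemd recipe fragment above gates users, file lists, alternatives and sanity checks on PACKAGECONFIG through bb.utils.contains(). As a rough illustration of those conditionals (a standalone sketch, not BitBake's implementation, with a plain dict standing in for the datastore), the true-value is returned only when every space-separated item in the check string is present:

# Standalone sketch of the bb.utils.contains() semantics used above;
# a plain dict stands in for the BitBake datastore 'd'.
def contains(variable, checkvalues, truevalue, falsevalue, d):
    present = set(d.get(variable, "").split())
    wanted = set(checkvalues.split())
    return truevalue if wanted.issubset(present) else falsevalue

d = {"PACKAGECONFIG": "resolved timesyncd polkit"}
# Mirrors the resolv-conf ALTERNATIVE and the homed sanity check above:
print(contains("PACKAGECONFIG", "resolved", "resolv-conf", "", d))               # resolv-conf
print(contains("PACKAGECONFIG", "userdb openssl cryptsetup", "ok", "error", d))  # error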
diff --git a/poky/meta/recipes-core/sysvinit/sysvinit-inittab/start_getty b/poky/meta/recipes-core/sysvinit/sysvinit-inittab/start_getty
index 699a1ea..f60409e 100644
--- a/poky/meta/recipes-core/sysvinit/sysvinit-inittab/start_getty
+++ b/poky/meta/recipes-core/sysvinit/sysvinit-inittab/start_getty
@@ -9,9 +9,13 @@
         if [ -x "/usr/bin/setsid" ] ; then
             setsid="/usr/bin/setsid"
         fi
+        options=""
         ;;
 esac
 
 if [ -e /sys/class/tty/$2 -a -c /dev/$2 ]; then
-	${setsid:-} ${getty} -L $1 $2 $3
+	${setsid:-} ${getty} ${options:-} -L $1 $2 $3
+else
+    # Prevent the "respawning too fast" error if the /dev entry does not exist
+    sleep 1000
 fi
diff --git a/poky/meta/recipes-core/sysvinit/sysvinit/install.patch b/poky/meta/recipes-core/sysvinit/sysvinit/install.patch
index 90563a6..bc6d493 100644
--- a/poky/meta/recipes-core/sysvinit/sysvinit/install.patch
+++ b/poky/meta/recipes-core/sysvinit/sysvinit/install.patch
@@ -3,7 +3,7 @@
 Date: Fri, 18 Jun 2010 09:40:30 +0800
 Subject: [PATCH] sysvinit: upgrade to version 2.88dsf
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/slicer69/sysvinit/pull/13]
 
 ---
  src/Makefile | 53 +++++++++++++++++++++++++++++-----------------------
diff --git a/poky/meta/recipes-core/sysvinit/sysvinit/sysvinit_remove_linux_fs.patch b/poky/meta/recipes-core/sysvinit/sysvinit/sysvinit_remove_linux_fs.patch
new file mode 100644
index 0000000..89d65c2
--- /dev/null
+++ b/poky/meta/recipes-core/sysvinit/sysvinit/sysvinit_remove_linux_fs.patch
@@ -0,0 +1,17 @@
+# From glibc 2.36, <linux/mount.h> (included from <linux/fs.h>) and 
+# <sys/mount.h> (included from glibc) are no longer compatible:
+# https://sourceware.org/glibc/wiki/Release/2.36#Usage_of_.3Clinux.2Fmount.h.3E_and_.3Csys.2Fmount.h.3E
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+--- sysvinit-3.04/src/sulogin.c.orig	2022-08-07 23:07:42.952576274 +0200
++++ sysvinit-3.04/src/sulogin.c	2022-08-07 23:08:26.511470983 +0200
+@@ -51,7 +51,6 @@
+ #ifdef __linux__
+ #  include <sys/statfs.h>
+ #  include <sys/mount.h>
+-#  include <linux/fs.h>
+ #  include <linux/magic.h>
+ #  include <linux/major.h>
+ #  ifndef TMPFS_MAGIC
diff --git a/poky/meta/recipes-core/sysvinit/sysvinit_3.04.bb b/poky/meta/recipes-core/sysvinit/sysvinit_3.04.bb
index f678f65..76b187c 100644
--- a/poky/meta/recipes-core/sysvinit/sysvinit_3.04.bb
+++ b/poky/meta/recipes-core/sysvinit/sysvinit_3.04.bb
@@ -15,6 +15,7 @@
            file://pidof-add-m-option.patch \
            file://realpath.patch \
            file://0001-include-sys-sysmacros.h-for-major-minor-defines-in-g.patch \
+           file://sysvinit_remove_linux_fs.patch \
            file://rcS-default \
            file://rc \
            file://rcS \
diff --git a/poky/meta/recipes-core/util-linux/util-linux-libuuid_2.38.bb b/poky/meta/recipes-core/util-linux/util-linux-libuuid_2.38.1.bb
similarity index 100%
rename from poky/meta/recipes-core/util-linux/util-linux-libuuid_2.38.bb
rename to poky/meta/recipes-core/util-linux/util-linux-libuuid_2.38.1.bb
diff --git a/poky/meta/recipes-core/util-linux/util-linux.inc b/poky/meta/recipes-core/util-linux/util-linux.inc
index c9bddfb..3868b1c 100644
--- a/poky/meta/recipes-core/util-linux/util-linux.inc
+++ b/poky/meta/recipes-core/util-linux/util-linux.inc
@@ -35,6 +35,8 @@
            file://run-ptest \
            file://display_testname_for_subtest.patch \
            file://avoid_parallel_tests.patch \
+           file://0001-check-for-sys-pidfd.h.patch \
+           file://0001-configure.ac-Improve-check-for-magic.patch \
            "
 
-SRC_URI[sha256sum] = "6d111cbe4d55b336db2f1fbeffbc65b89908704c01136371d32aa9bec373eb64"
+SRC_URI[sha256sum] = "60492a19b44e6cf9a3ddff68325b333b8b52b6c59ce3ebd6a0ecaa4c5117e84f"
diff --git a/poky/meta/recipes-core/util-linux/util-linux/0001-check-for-sys-pidfd.h.patch b/poky/meta/recipes-core/util-linux/util-linux/0001-check-for-sys-pidfd.h.patch
new file mode 100644
index 0000000..19f57f1
--- /dev/null
+++ b/poky/meta/recipes-core/util-linux/util-linux/0001-check-for-sys-pidfd.h.patch
@@ -0,0 +1,53 @@
+From 548bc568f3c735e53fb5b0a5ab6473a3f1457b91 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 7 Aug 2022 14:39:19 -0700
+Subject: [PATCH] check for sys/pidfd.h
+
+This header in newer glibc declares the functions pidfd_send_signal()
+and pidfd_open(); when these functions are provided by libc, the
+relevant header must be included to get their declarations. Clang 15+
+has started to error out when function signatures are missing.
+
+Fixes errors like
+misc-utils/kill.c:402:6: error: call to undeclared function 'pidfd_send_signal'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
+        if (pidfd_send_signal(pfd, ctl->numsig, &info, 0) < 0)
+
+Upstream-Status: Submitted [https://github.com/util-linux/util-linux/pull/1769]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure.ac          | 1 +
+ include/pidfd-utils.h | 4 +++-
+ 2 files changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/configure.ac b/configure.ac
+index a511e93de..fd7d9245f 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -342,6 +342,7 @@ AC_CHECK_HEADERS([ \
+ 	sys/mkdev.h \
+ 	sys/mount.h \
+ 	sys/param.h \
++	sys/pidfd.h \
+ 	sys/prctl.h \
+ 	sys/resource.h \
+ 	sys/sendfile.h \
+diff --git a/include/pidfd-utils.h b/include/pidfd-utils.h
+index eddede976..d9e33cbc5 100644
+--- a/include/pidfd-utils.h
++++ b/include/pidfd-utils.h
+@@ -4,8 +4,10 @@
+ #ifdef HAVE_SYS_SYSCALL_H
+ # include <sys/syscall.h>
+ # if defined(SYS_pidfd_send_signal) && defined(SYS_pidfd_open)
++#  ifdef HAVE_SYS_PIDFD_H
++#   include <sys/pidfd.h>
++#  endif
+ #  include <sys/types.h>
+-
+ #  ifndef HAVE_PIDFD_SEND_SIGNAL
+ static inline int pidfd_send_signal(int pidfd, int sig, siginfo_t *info,
+ 				    unsigned int flags)
+-- 
+2.37.1
+
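The patch above only wires up the glibc <sys/pidfd.h> header for the build; as an aside (not part of the change itself), the same pidfd_open()/pidfd_send_signal() syscalls can be exercised from Python, which is a quick way to confirm the kernel-side feature. This assumes Linux >= 5.3 and Python >= 3.9:

# Quick demonstration of the pidfd syscalls discussed above; assumes
# Linux >= 5.3 and Python >= 3.9 (os.pidfd_open / signal.pidfd_send_signal).
import os
import signal
import subprocess

child = subprocess.Popen(["sleep", "30"])
pidfd = os.pidfd_open(child.pid)                  # pidfd_open(2)
signal.pidfd_send_signal(pidfd, signal.SIGTERM)   # pidfd_send_signal(2)
os.close(pidfd)
child.wait()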
diff --git a/poky/meta/recipes-core/util-linux/util-linux/0001-configure.ac-Improve-check-for-magic.patch b/poky/meta/recipes-core/util-linux/util-linux/0001-configure.ac-Improve-check-for-magic.patch
new file mode 100644
index 0000000..00611fe
--- /dev/null
+++ b/poky/meta/recipes-core/util-linux/util-linux/0001-configure.ac-Improve-check-for-magic.patch
@@ -0,0 +1,40 @@
+From 263381ddd46eea2293c70bc811273b66bc52087b Mon Sep 17 00:00:00 2001
+From: Mateusz Marciniec <mateuszmar2@gmail.com>
+Date: Fri, 19 Aug 2022 14:47:49 +0200
+Subject: [PATCH] configure.ac: Improve check for magic
+
+Check whether magic.h header exists before defining HAVE_MAGIC.
+
+Even when the library is available, the header can still be missing.
+The current test doesn't cover that possibility, which makes compilation
+fail in the case of a separate sysroot.
+
+Upstream-Status: Backport
+[https://github.com/util-linux/util-linux/commit/263381ddd46eea2293c70bc811273b66bc52087b]
+
+Signed-off-by: Mateusz Marciniec <mateuszmar2@gmail.com>
+Signed-off-by: Tomasz Dziendzielski <tomasz.dziendzielski@gmail.com>
+---
+ configure.ac | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index daa8f0dca..968a0daf0 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -1570,8 +1570,10 @@ AC_ARG_WITH([libmagic],
+ )
+ AS_IF([test "x$with_libmagic" = xno], [have_magic=no], [
+   AC_CHECK_LIB([magic], [magic_open], [
+-    AC_DEFINE([HAVE_MAGIC], [1], [Define to 1 if you have the libmagic present.])
+-    MAGIC_LIBS="-lmagic"
++    AC_CHECK_HEADER(magic.h, [
++      AC_DEFINE([HAVE_MAGIC], [1], [Define to 1 if you have the libmagic present.])
++      MAGIC_LIBS="-lmagic"
++    ])
+   ])
+ ])
+ AC_SUBST([MAGIC_LIBS])
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-core/util-linux/util-linux_2.38.1.bb b/poky/meta/recipes-core/util-linux/util-linux_2.38.1.bb
new file mode 100644
index 0000000..50ecc10
--- /dev/null
+++ b/poky/meta/recipes-core/util-linux/util-linux_2.38.1.bb
@@ -0,0 +1,321 @@
+require util-linux.inc
+
+#gtk-doc is not enabled as it requires xmlto which requires util-linux
+inherit autotools gettext manpages pkgconfig systemd update-alternatives python3-dir bash-completion ptest
+DEPENDS = "libcap-ng ncurses virtual/crypt zlib util-linux-libuuid"
+
+PACKAGES =+ "${PN}-swaponoff"
+PACKAGES += "${@bb.utils.contains('PACKAGECONFIG', 'pylibmount', '${PN}-pylibmount', '', d)}"
+
+python util_linux_binpackages () {
+    def pkg_hook(f, pkg, file_regex, output_pattern, modulename):
+        pn = d.getVar('PN')
+        d.appendVar('RRECOMMENDS:%s' % pn, ' %s' % pkg)
+
+        if d.getVar('ALTERNATIVE:' + pkg):
+            return
+        if d.getVarFlag('ALTERNATIVE_LINK_NAME', modulename):
+            d.setVar('ALTERNATIVE:' + pkg, modulename)
+
+    bindirs = sorted(list(set(d.expand("${base_sbindir} ${base_bindir} ${sbindir} ${bindir}").split())))
+    for dir in bindirs:
+        do_split_packages(d, root=dir,
+                          file_regex=r'(.*)', output_pattern='${PN}-%s',
+                          description='${PN} %s',
+                          hook=pkg_hook, extra_depends='')
+
+    # There are some symlinks for some binaries which we have ignored
+    # above. Add them to the package owning the binary they are
+    # pointing to
+    extras = {}
+    dvar = d.getVar('PKGD')
+    for root in bindirs:
+        for walkroot, dirs, files in os.walk(dvar + root):
+            for f in files:
+                file = os.path.join(walkroot, f)
+                if not os.path.islink(file):
+                    continue
+
+                pkg = os.path.basename(os.readlink(file))
+                extras.setdefault(pkg, [])
+                extras[pkg].append(file.replace(dvar, '', 1))
+
+    pn = d.getVar('PN')
+    for pkg, links in extras.items():
+        of = d.getVar('FILES:' + pn + '-' + pkg)
+        links = of + " " + " ".join(sorted(links))
+        d.setVar('FILES:' + pn + '-' + pkg, links)
+}
+
+# we must execute before update-alternatives PACKAGE_PREPROCESS_FUNCS
+PACKAGE_PREPROCESS_FUNCS =+ "util_linux_binpackages "
+
+# skip libuuid as it will be packaged by the util-linux-libuuid recipe
+python util_linux_libpackages() {
+    do_split_packages(d, root=d.getVar('UTIL_LINUX_LIBDIR'), file_regex=r'^lib(?!uuid)(.*)\.so\..*$',
+                      output_pattern='${PN}-lib%s',
+                      description='${PN} lib%s',
+                      extra_depends='', prepend=True, allow_links=True)
+}
+
+PACKAGESPLITFUNCS =+ "util_linux_libpackages"
+
+PACKAGES_DYNAMIC = "^${PN}-.*"
+
+CACHED_CONFIGUREVARS += "scanf_cv_alloc_modifier=ms"
+UTIL_LINUX_LIBDIR = "${libdir}"
+UTIL_LINUX_LIBDIR:class-target = "${base_libdir}"
+EXTRA_OECONF = "\
+    --enable-libuuid --enable-libblkid \
+    \
+    --enable-fsck --enable-kill --enable-last --enable-mesg \
+    --enable-mount --enable-partx --enable-rfkill \
+    --enable-unshare --enable-write \
+    \
+    --disable-bfs --disable-login \
+    --disable-makeinstall-chown --disable-minix --disable-newgrp \
+    --disable-use-tty-group --disable-vipw --disable-raw \
+    \
+    --without-udev \
+    \
+    usrsbin_execdir='${sbindir}' \
+    --libdir='${UTIL_LINUX_LIBDIR}' \
+"
+
+EXTRA_OECONF:append:class-target = " --enable-setpriv"
+EXTRA_OECONF:append:class-native = " --without-cap-ng --disable-setpriv"
+EXTRA_OECONF:append:class-nativesdk = " --without-cap-ng --disable-setpriv"
+EXTRA_OECONF:append = " --disable-hwclock-gplv3"
+
+# enable pcre2 for native/nativesdk to match host distros
+# this helps to keep same expectations when using the SDK or
+# build host versions during development
+#
+PACKAGECONFIG ?= "pcre2"
+PACKAGECONFIG:class-target ?= "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'chfn-chsh pam', '', d)}"
+# inherit manpages requires this to be present, however util-linux does not have
+# configuration options, and installs manpages always
+PACKAGECONFIG[manpages] = ""
+PACKAGECONFIG[pam] = "--enable-su --enable-runuser,--disable-su --disable-runuser, libpam,"
+# Respect the systemd feature for uuidd
+PACKAGECONFIG[systemd] = "--with-systemd --with-systemdsystemunitdir=${systemd_system_unitdir}, --without-systemd --without-systemdsystemunitdir,systemd"
+# Build python bindings for libmount
+PACKAGECONFIG[pylibmount] = "--with-python=3 --enable-pylibmount,--without-python --disable-pylibmount,python3"
+# Readline support
+PACKAGECONFIG[readline] = "--with-readline,--without-readline,readline"
+# PCRE support in hardlink
+PACKAGECONFIG[pcre2] = ",,libpcre2"
+PACKAGECONFIG[cryptsetup] = "--with-cryptsetup,--without-cryptsetup,cryptsetup"
+PACKAGECONFIG[chfn-chsh] = "--enable-chfn-chsh,--disable-chfn-chsh,"
+
+EXTRA_OEMAKE = "ARCH=${TARGET_ARCH} CPU= CPUOPT= 'OPT=${CFLAGS}'"
+
+ALLOW_EMPTY:${PN} = "1"
+FILES:${PN} = ""
+FILES:${PN}-doc += "${datadir}/getopt/getopt-*.*"
+FILES:${PN}-dev += "${PYTHON_SITEPACKAGES_DIR}/libmount/pylibmount.la"
+FILES:${PN}-mount = "${sysconfdir}/default/mountall"
+FILES:${PN}-runuser = "${sysconfdir}/pam.d/runuser*"
+FILES:${PN}-su = "${sysconfdir}/pam.d/su-l"
+CONFFILES:${PN}-su = "${sysconfdir}/pam.d/su-l"
+FILES:${PN}-pylibmount = "${PYTHON_SITEPACKAGES_DIR}/libmount/pylibmount.so \
+                          ${PYTHON_SITEPACKAGES_DIR}/libmount/__init__.* \
+                          ${PYTHON_SITEPACKAGES_DIR}/libmount/__pycache__/*"
+
+# Util-linux' blkid replaces the e2fsprogs one
+RCONFLICTS:${PN}-blkid = "${MLPREFIX}e2fsprogs-blkid"
+RREPLACES:${PN}-blkid = "${MLPREFIX}e2fsprogs-blkid"
+
+RRECOMMENDS:${PN}:class-native = ""
+RRECOMMENDS:${PN}:class-nativesdk = ""
+RDEPENDS:${PN}:class-native = ""
+RDEPENDS:${PN}:class-nativesdk = ""
+
+RDEPENDS:${PN} += " util-linux-libuuid"
+RDEPENDS:${PN}-dev += " util-linux-libuuid-dev"
+
+RPROVIDES:${PN}-dev = "${PN}-libblkid-dev ${PN}-libmount-dev"
+
+RDEPENDS:${PN}-bash-completion += "${PN}-lsblk"
+RDEPENDS:${PN}-ptest += "bash bc btrfs-tools coreutils e2fsprogs findutils grep iproute2 kmod mdadm procps sed socat which xz"
+RRECOMMENDS:${PN}-ptest += "kernel-module-scsi-debug kernel-module-sd-mod kernel-module-loop kernel-module-algif-hash"
+RDEPENDS:${PN}-swaponoff = "${PN}-swapon ${PN}-swapoff"
+ALLOW_EMPTY:${PN}-swaponoff = "1"
+
+#SYSTEMD_PACKAGES = "${PN}-uuidd ${PN}-fstrim"
+SYSTEMD_SERVICE:${PN}-uuidd = "uuidd.socket uuidd.service"
+SYSTEMD_AUTO_ENABLE:${PN}-uuidd = "disable"
+SYSTEMD_SERVICE:${PN}-fstrim = "fstrim.timer fstrim.service"
+SYSTEMD_AUTO_ENABLE:${PN}-fstrim = "disable"
+
+do_install () {
+	# with ccache the timestamps on compiled files may
+	# end up earlier than on their inputs, this allows
+	# for the resultant compilation in the install step.
+	oe_runmake 'CC=${CC}' 'LD=${LD}' \
+		'LDFLAGS=${LDFLAGS}' 'DESTDIR=${D}' install
+
+	mkdir -p ${D}${base_bindir}
+
+        sbinprogs="agetty ctrlaltdel cfdisk vipw vigr"
+        sbinprogs_a="pivot_root hwclock mkswap losetup swapon swapoff fdisk fsck blkid blockdev fstrim sulogin switch_root nologin"
+        binprogs_a="dmesg getopt kill more umount mount login su mountpoint"
+
+        if [ "${base_sbindir}" != "${sbindir}" ]; then
+        	mkdir -p ${D}${base_sbindir}
+                for p in $sbinprogs $sbinprogs_a; do
+                        if [ -f "${D}${sbindir}/$p" ]; then
+                                mv "${D}${sbindir}/$p" "${D}${base_sbindir}/$p"
+                        fi
+                done
+        fi
+
+        if [ "${base_bindir}" != "${bindir}" ]; then
+        	mkdir -p ${D}${base_bindir}
+                for p in $binprogs_a; do
+                        if [ -f "${D}${bindir}/$p" ]; then
+                                mv "${D}${bindir}/$p" "${D}${base_bindir}/$p"
+                        fi
+                done
+        fi
+
+	install -d ${D}${sysconfdir}/default/
+	echo 'MOUNTALL="-t nonfs,nosmbfs,noncpfs"' > ${D}${sysconfdir}/default/mountall
+
+	rm -f ${D}${bindir}/chkdupexe
+}
+
+do_install:append:class-target () {
+	if [ "${@bb.utils.filter('PACKAGECONFIG', 'pam', d)}" ]; then
+		install -d ${D}${sysconfdir}/pam.d
+		install -m 0644 ${WORKDIR}/runuser.pamd ${D}${sysconfdir}/pam.d/runuser
+		install -m 0644 ${WORKDIR}/runuser-l.pamd ${D}${sysconfdir}/pam.d/runuser-l
+		# Required for "su -" aka "su --login" because
+		# otherwise it uses "other", which has "auth pam_deny.so"
+		# and thus prevents the operation.
+		ln -s su ${D}${sysconfdir}/pam.d/su-l
+	fi
+}
+# nologin causes a conflict with shadow-native
+# kill causes a conflict with coreutils-native (if ${bindir}==${base_bindir})
+do_install:append:class-native () {
+	rm -f ${D}${base_sbindir}/nologin
+	rm -f ${D}${base_bindir}/kill
+}
+
+# dm-verity support introduces a circular build dependency, so util-linux-libuuid is split out for target builds
+# Need to build libuuid for uuidgen, but then delete it and let the other recipe ship it
+do_install:append () {
+	rm -rf ${D}${includedir}/uuid ${D}${libdir}/pkgconfig/uuid.pc ${D}${libdir}/libuuid* ${D}${base_libdir}/libuuid*
+}
+
+ALTERNATIVE_PRIORITY = "80"
+
+ALTERNATIVE_LINK_NAME[blkid] = "${base_sbindir}/blkid"
+ALTERNATIVE_LINK_NAME[blockdev] = "${base_sbindir}/blockdev"
+ALTERNATIVE_LINK_NAME[cal] = "${bindir}/cal"
+ALTERNATIVE_LINK_NAME[chfn] = "${bindir}/chfn"
+ALTERNATIVE_LINK_NAME[chsh] = "${bindir}/chsh"
+ALTERNATIVE_LINK_NAME[chrt] = "${bindir}/chrt"
+ALTERNATIVE_LINK_NAME[dmesg] = "${base_bindir}/dmesg"
+ALTERNATIVE_LINK_NAME[eject] = "${bindir}/eject"
+ALTERNATIVE_LINK_NAME[fallocate] = "${bindir}/fallocate"
+ALTERNATIVE_LINK_NAME[fdisk] = "${base_sbindir}/fdisk"
+ALTERNATIVE_LINK_NAME[findfs] = "${sbindir}/findfs"
+ALTERNATIVE_LINK_NAME[flock] = "${bindir}/flock"
+ALTERNATIVE_LINK_NAME[fsck] = "${base_sbindir}/fsck"
+ALTERNATIVE_LINK_NAME[fsfreeze] = "${sbindir}/fsfreeze"
+ALTERNATIVE_LINK_NAME[fstrim] = "${base_sbindir}/fstrim"
+ALTERNATIVE_LINK_NAME[getopt] = "${base_bindir}/getopt"
+ALTERNATIVE:${PN}-agetty = "getty"
+ALTERNATIVE_LINK_NAME[getty] = "${base_sbindir}/getty"
+ALTERNATIVE_TARGET[getty] = "${base_sbindir}/agetty"
+ALTERNATIVE_LINK_NAME[hexdump] = "${bindir}/hexdump"
+ALTERNATIVE_LINK_NAME[hwclock] = "${base_sbindir}/hwclock"
+ALTERNATIVE_LINK_NAME[ionice] = "${bindir}/ionice"
+ALTERNATIVE_LINK_NAME[kill] = "${base_bindir}/kill"
+ALTERNATIVE:${PN}-last = "last lastb"
+ALTERNATIVE_LINK_NAME[last] = "${bindir}/last"
+ALTERNATIVE_LINK_NAME[lastb] = "${bindir}/lastb"
+ALTERNATIVE_LINK_NAME[logger] = "${bindir}/logger"
+ALTERNATIVE_LINK_NAME[losetup] = "${base_sbindir}/losetup"
+ALTERNATIVE_LINK_NAME[mesg] = "${bindir}/mesg"
+ALTERNATIVE_LINK_NAME[mkswap] = "${base_sbindir}/mkswap"
+ALTERNATIVE_LINK_NAME[mcookie] = "${bindir}/mcookie"
+ALTERNATIVE_LINK_NAME[more] = "${base_bindir}/more"
+ALTERNATIVE_LINK_NAME[mount] = "${base_bindir}/mount"
+ALTERNATIVE_LINK_NAME[mountpoint] = "${base_bindir}/mountpoint"
+ALTERNATIVE_LINK_NAME[nologin] = "${base_sbindir}/nologin"
+ALTERNATIVE_LINK_NAME[nsenter] = "${bindir}/nsenter"
+ALTERNATIVE_LINK_NAME[pivot_root] = "${base_sbindir}/pivot_root"
+ALTERNATIVE_LINK_NAME[prlimit] = "${bindir}/prlimit"
+ALTERNATIVE_LINK_NAME[readprofile] = "${sbindir}/readprofile"
+ALTERNATIVE_LINK_NAME[renice] = "${bindir}/renice"
+ALTERNATIVE_LINK_NAME[rev] = "${bindir}/rev"
+ALTERNATIVE_LINK_NAME[rfkill] = "${sbindir}/rfkill"
+ALTERNATIVE_LINK_NAME[rtcwake] = "${sbindir}/rtcwake"
+ALTERNATIVE_LINK_NAME[setpriv] = "${bindir}/setpriv"
+ALTERNATIVE_LINK_NAME[setsid] = "${bindir}/setsid"
+ALTERNATIVE_LINK_NAME[su] = "${base_bindir}/su"
+ALTERNATIVE_LINK_NAME[sulogin] = "${base_sbindir}/sulogin"
+ALTERNATIVE_LINK_NAME[swapoff] = "${base_sbindir}/swapoff"
+ALTERNATIVE_LINK_NAME[swapon] = "${base_sbindir}/swapon"
+ALTERNATIVE_LINK_NAME[switch_root] = "${base_sbindir}/switch_root"
+ALTERNATIVE_LINK_NAME[taskset] = "${bindir}/taskset"
+ALTERNATIVE_LINK_NAME[umount] = "${base_bindir}/umount"
+ALTERNATIVE_LINK_NAME[unshare] = "${bindir}/unshare"
+ALTERNATIVE_LINK_NAME[utmpdump] = "${bindir}/utmpdump"
+ALTERNATIVE_LINK_NAME[uuidgen] = "${bindir}/uuidgen"
+ALTERNATIVE_LINK_NAME[wall] = "${bindir}/wall"
+
+ALTERNATIVE:${PN}-doc = "\
+blkid.8 eject.1 findfs.8 fsck.8 kill.1 last.1 lastb.1 libblkid.3 logger.1 mesg.1 \
+mountpoint.1 nologin.8 rfkill.8 sulogin.8 utmpdump.1 uuid.3 wall.1\
+"
+ALTERNATIVE:${PN}-doc += "${@bb.utils.contains('PACKAGECONFIG', 'pam', 'su.1', '', d)}"
+
+ALTERNATIVE_LINK_NAME[blkid.8] = "${mandir}/man8/blkid.8"
+ALTERNATIVE_LINK_NAME[eject.1] = "${mandir}/man1/eject.1"
+ALTERNATIVE_LINK_NAME[findfs.8] = "${mandir}/man8/findfs.8"
+ALTERNATIVE_LINK_NAME[fsck.8] = "${mandir}/man8/fsck.8"
+ALTERNATIVE_LINK_NAME[kill.1] = "${mandir}/man1/kill.1"
+ALTERNATIVE_LINK_NAME[last.1] = "${mandir}/man1/last.1"
+ALTERNATIVE_LINK_NAME[lastb.1] = "${mandir}/man1/lastb.1"
+ALTERNATIVE_LINK_NAME[libblkid.3] = "${mandir}/man3/libblkid.3"
+ALTERNATIVE_LINK_NAME[logger.1] = "${mandir}/man1/logger.1"
+ALTERNATIVE_LINK_NAME[mesg.1] = "${mandir}/man1/mesg.1"
+ALTERNATIVE_LINK_NAME[mountpoint.1] = "${mandir}/man1/mountpoint.1"
+ALTERNATIVE_LINK_NAME[nologin.8] = "${mandir}/man8/nologin.8"
+ALTERNATIVE_LINK_NAME[rfkill.8] = "${mandir}/man8/rfkill.8"
+ALTERNATIVE_LINK_NAME[setpriv.1] = "${mandir}/man1/setpriv.1"
+ALTERNATIVE_LINK_NAME[su.1] = "${mandir}/man1/su.1"
+ALTERNATIVE_LINK_NAME[sulogin.8] = "${mandir}/man8/sulogin.8"
+ALTERNATIVE_LINK_NAME[utmpdump.1] = "${mandir}/man1/utmpdump.1"
+ALTERNATIVE_LINK_NAME[uuid.3] = "${mandir}/man3/uuid.3"
+ALTERNATIVE_LINK_NAME[wall.1] = "${mandir}/man1/wall.1"
+
+BBCLASSEXTEND = "native nativesdk"
+
+PTEST_BINDIR = "1"
+do_compile_ptest() {
+    oe_runmake buildtest-TESTS
+}
+
+do_install_ptest() {
+    mkdir -p ${D}${PTEST_PATH}/tests/ts
+    find . -name 'test*' -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
+    find ./.libs -name 'sample*' -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
+    find ./.libs -name 'test*' -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
+
+    cp ${S}/tests/*.sh ${D}${PTEST_PATH}/tests/
+    cp -pR ${S}/tests/expected ${D}${PTEST_PATH}/tests/expected
+    cp -pR ${S}/tests/ts ${D}${PTEST_PATH}/tests/
+    cp ${WORKDIR}/build/config.h ${D}${PTEST_PATH}
+
+    sed -i 's|@base_sbindir@|${base_sbindir}|g' ${D}${PTEST_PATH}/run-ptest
+
+    # chfn needs PAM
+    if ! ${@bb.utils.contains('PACKAGECONFIG', 'pam', 'true', 'false', d)}; then
+        rm -rf ${D}${PTEST_PATH}/tests/ts/chfn
+    fi
+}
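The new recipe above drives its optional features through PACKAGECONFIG[...] entries such as readline and pcre2. As a rough illustration of how those comma-separated tuples behave (a simplified model, not BitBake's actual code; the option names are taken from the recipe), an enabled option contributes its configure argument and build dependency, a disabled one contributes its disable argument:

# Simplified model (an assumption, not BitBake's real implementation) of
# PACKAGECONFIG[opt] = "enable-args,disable-args,depends,rdepends" expansion.
def expand_packageconfig(enabled, flags):
    conf_args, depends, rdepends = [], [], []
    for option, spec in flags.items():
        fields = (spec.split(",") + [""] * 4)[:4]
        enable_arg, disable_arg, dep, rdep = (f.strip() for f in fields)
        if option in enabled:
            if enable_arg:
                conf_args.append(enable_arg)
            if dep:
                depends.append(dep)
            if rdep:
                rdepends.append(rdep)
        elif disable_arg:
            conf_args.append(disable_arg)
    return conf_args, depends, rdepends

# Example with two of the entries shown above:
flags = {
    "readline": "--with-readline,--without-readline,readline",
    "pcre2": ",,libpcre2",
}
print(expand_packageconfig({"pcre2"}, flags))
# (['--without-readline'], ['libpcre2'], [])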
diff --git a/poky/meta/recipes-core/util-linux/util-linux_2.38.bb b/poky/meta/recipes-core/util-linux/util-linux_2.38.bb
deleted file mode 100644
index 8a7b47a..0000000
--- a/poky/meta/recipes-core/util-linux/util-linux_2.38.bb
+++ /dev/null
@@ -1,321 +0,0 @@
-require util-linux.inc
-
-#gtk-doc is not enabled as it requires xmlto which requires util-linux
-inherit autotools gettext manpages pkgconfig systemd update-alternatives python3-dir bash-completion ptest
-DEPENDS = "libcap-ng ncurses virtual/crypt zlib util-linux-libuuid"
-
-PACKAGES =+ "${PN}-swaponoff"
-PACKAGES += "${@bb.utils.contains('PACKAGECONFIG', 'pylibmount', '${PN}-pylibmount', '', d)}"
-
-python util_linux_binpackages () {
-    def pkg_hook(f, pkg, file_regex, output_pattern, modulename):
-        pn = d.getVar('PN')
-        d.appendVar('RRECOMMENDS:%s' % pn, ' %s' % pkg)
-
-        if d.getVar('ALTERNATIVE:' + pkg):
-            return
-        if d.getVarFlag('ALTERNATIVE_LINK_NAME', modulename):
-            d.setVar('ALTERNATIVE:' + pkg, modulename)
-
-    bindirs = sorted(list(set(d.expand("${base_sbindir} ${base_bindir} ${sbindir} ${bindir}").split())))
-    for dir in bindirs:
-        do_split_packages(d, root=dir,
-                          file_regex=r'(.*)', output_pattern='${PN}-%s',
-                          description='${PN} %s',
-                          hook=pkg_hook, extra_depends='')
-
-    # There are some symlinks for some binaries which we have ignored
-    # above. Add them to the package owning the binary they are
-    # pointing to
-    extras = {}
-    dvar = d.getVar('PKGD')
-    for root in bindirs:
-        for walkroot, dirs, files in os.walk(dvar + root):
-            for f in files:
-                file = os.path.join(walkroot, f)
-                if not os.path.islink(file):
-                    continue
-
-                pkg = os.path.basename(os.readlink(file))
-                extras.setdefault(pkg, [])
-                extras[pkg].append(file.replace(dvar, '', 1))
-
-    pn = d.getVar('PN')
-    for pkg, links in extras.items():
-        of = d.getVar('FILES:' + pn + '-' + pkg)
-        links = of + " " + " ".join(sorted(links))
-        d.setVar('FILES:' + pn + '-' + pkg, links)
-}
-
-# we must execute before update-alternatives PACKAGE_PREPROCESS_FUNCS
-PACKAGE_PREPROCESS_FUNCS =+ "util_linux_binpackages "
-
-# skip libuuid as it will be packaged by the util-linux-libuuid recipe
-python util_linux_libpackages() {
-    do_split_packages(d, root=d.getVar('UTIL_LINUX_LIBDIR'), file_regex=r'^lib(?!uuid)(.*)\.so\..*$',
-                      output_pattern='${PN}-lib%s',
-                      description='${PN} lib%s',
-                      extra_depends='', prepend=True, allow_links=True)
-}
-
-PACKAGESPLITFUNCS =+ "util_linux_libpackages"
-
-PACKAGES_DYNAMIC = "^${PN}-.*"
-
-CACHED_CONFIGUREVARS += "scanf_cv_alloc_modifier=ms"
-UTIL_LINUX_LIBDIR = "${libdir}"
-UTIL_LINUX_LIBDIR:class-target = "${base_libdir}"
-EXTRA_OECONF = "\
-    --enable-libuuid --enable-libblkid \
-    \
-    --enable-fsck --enable-kill --enable-last --enable-mesg \
-    --enable-mount --enable-partx --enable-raw --enable-rfkill \
-    --enable-unshare --enable-write \
-    \
-    --disable-bfs --disable-login \
-    --disable-makeinstall-chown --disable-minix --disable-newgrp \
-    --disable-use-tty-group --disable-vipw --disable-raw \
-    \
-    --without-udev \
-    \
-    usrsbin_execdir='${sbindir}' \
-    --libdir='${UTIL_LINUX_LIBDIR}' \
-"
-
-EXTRA_OECONF:append:class-target = " --enable-setpriv"
-EXTRA_OECONF:append:class-native = " --without-cap-ng --disable-setpriv"
-EXTRA_OECONF:append:class-nativesdk = " --without-cap-ng --disable-setpriv"
-EXTRA_OECONF:append = " --disable-hwclock-gplv3"
-
-# enable pcre2 for native/nativesdk to match host distros
-# this helps to keep same expectations when using the SDK or
-# build host versions during development
-#
-PACKAGECONFIG ?= "pcre2"
-PACKAGECONFIG:class-target ?= "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'chfn-chsh pam', '', d)}"
-# inherit manpages requires this to be present, however util-linux does not have
-# configuration options, and installs manpages always
-PACKAGECONFIG[manpages] = ""
-PACKAGECONFIG[pam] = "--enable-su --enable-runuser,--disable-su --disable-runuser, libpam,"
-# Respect the systemd feature for uuidd
-PACKAGECONFIG[systemd] = "--with-systemd --with-systemdsystemunitdir=${systemd_system_unitdir}, --without-systemd --without-systemdsystemunitdir,systemd"
-# Build python bindings for libmount
-PACKAGECONFIG[pylibmount] = "--with-python=3 --enable-pylibmount,--without-python --disable-pylibmount,python3"
-# Readline support
-PACKAGECONFIG[readline] = "--with-readline,--without-readline,readline"
-# PCRE support in hardlink
-PACKAGECONFIG[pcre2] = ",,libpcre2"
-PACKAGECONFIG[cryptsetup] = "--with-cryptsetup,--without-cryptsetup,cryptsetup"
-PACKAGECONFIG[chfn-chsh] = "--enable-chfn-chsh,--disable-chfn-chsh,"
-
-EXTRA_OEMAKE = "ARCH=${TARGET_ARCH} CPU= CPUOPT= 'OPT=${CFLAGS}'"
-
-ALLOW_EMPTY:${PN} = "1"
-FILES:${PN} = ""
-FILES:${PN}-doc += "${datadir}/getopt/getopt-*.*"
-FILES:${PN}-dev += "${PYTHON_SITEPACKAGES_DIR}/libmount/pylibmount.la"
-FILES:${PN}-mount = "${sysconfdir}/default/mountall"
-FILES:${PN}-runuser = "${sysconfdir}/pam.d/runuser*"
-FILES:${PN}-su = "${sysconfdir}/pam.d/su-l"
-CONFFILES:${PN}-su = "${sysconfdir}/pam.d/su-l"
-FILES:${PN}-pylibmount = "${PYTHON_SITEPACKAGES_DIR}/libmount/pylibmount.so \
-                          ${PYTHON_SITEPACKAGES_DIR}/libmount/__init__.* \
-                          ${PYTHON_SITEPACKAGES_DIR}/libmount/__pycache__/*"
-
-# Util-linux' blkid replaces the e2fsprogs one
-RCONFLICTS:${PN}-blkid = "${MLPREFIX}e2fsprogs-blkid"
-RREPLACES:${PN}-blkid = "${MLPREFIX}e2fsprogs-blkid"
-
-RRECOMMENDS:${PN}:class-native = ""
-RRECOMMENDS:${PN}:class-nativesdk = ""
-RDEPENDS:${PN}:class-native = ""
-RDEPENDS:${PN}:class-nativesdk = ""
-
-RDEPENDS:${PN} += " util-linux-libuuid"
-RDEPENDS:${PN}-dev += " util-linux-libuuid-dev"
-
-RPROVIDES:${PN}-dev = "${PN}-libblkid-dev ${PN}-libmount-dev"
-
-RDEPENDS:${PN}-bash-completion += "${PN}-lsblk"
-RDEPENDS:${PN}-ptest += "bash bc btrfs-tools coreutils e2fsprogs findutils grep iproute2 kmod mdadm procps sed socat which xz"
-RRECOMMENDS:${PN}-ptest += "kernel-module-scsi-debug kernel-module-sd-mod kernel-module-loop kernel-module-algif-hash"
-RDEPENDS:${PN}-swaponoff = "${PN}-swapon ${PN}-swapoff"
-ALLOW_EMPTY:${PN}-swaponoff = "1"
-
-#SYSTEMD_PACKAGES = "${PN}-uuidd ${PN}-fstrim"
-SYSTEMD_SERVICE:${PN}-uuidd = "uuidd.socket uuidd.service"
-SYSTEMD_AUTO_ENABLE:${PN}-uuidd = "disable"
-SYSTEMD_SERVICE:${PN}-fstrim = "fstrim.timer fstrim.service"
-SYSTEMD_AUTO_ENABLE:${PN}-fstrim = "disable"
-
-do_install () {
-	# with ccache the timestamps on compiled files may
-	# end up earlier than on their inputs, this allows
-	# for the resultant compilation in the install step.
-	oe_runmake 'CC=${CC}' 'LD=${LD}' \
-		'LDFLAGS=${LDFLAGS}' 'DESTDIR=${D}' install
-
-	mkdir -p ${D}${base_bindir}
-
-        sbinprogs="agetty ctrlaltdel cfdisk vipw vigr"
-        sbinprogs_a="pivot_root hwclock mkswap losetup swapon swapoff fdisk fsck blkid blockdev fstrim sulogin switch_root nologin"
-        binprogs_a="dmesg getopt kill more umount mount login su mountpoint"
-
-        if [ "${base_sbindir}" != "${sbindir}" ]; then
-        	mkdir -p ${D}${base_sbindir}
-                for p in $sbinprogs $sbinprogs_a; do
-                        if [ -f "${D}${sbindir}/$p" ]; then
-                                mv "${D}${sbindir}/$p" "${D}${base_sbindir}/$p"
-                        fi
-                done
-        fi
-
-        if [ "${base_bindir}" != "${bindir}" ]; then
-        	mkdir -p ${D}${base_bindir}
-                for p in $binprogs_a; do
-                        if [ -f "${D}${bindir}/$p" ]; then
-                                mv "${D}${bindir}/$p" "${D}${base_bindir}/$p"
-                        fi
-                done
-        fi
-
-	install -d ${D}${sysconfdir}/default/
-	echo 'MOUNTALL="-t nonfs,nosmbfs,noncpfs"' > ${D}${sysconfdir}/default/mountall
-
-	rm -f ${D}${bindir}/chkdupexe
-}
-
-do_install:append:class-target () {
-	if [ "${@bb.utils.filter('PACKAGECONFIG', 'pam', d)}" ]; then
-		install -d ${D}${sysconfdir}/pam.d
-		install -m 0644 ${WORKDIR}/runuser.pamd ${D}${sysconfdir}/pam.d/runuser
-		install -m 0644 ${WORKDIR}/runuser-l.pamd ${D}${sysconfdir}/pam.d/runuser-l
-		# Required for "su -" aka "su --login" because
-		# otherwise it uses "other", which has "auth pam_deny.so"
-		# and thus prevents the operation.
-		ln -s su ${D}${sysconfdir}/pam.d/su-l
-	fi
-}
-# nologin causes a conflict with shadow-native
-# kill causes a conflict with coreutils-native (if ${bindir}==${base_bindir})
-do_install:append:class-native () {
-	rm -f ${D}${base_sbindir}/nologin
-	rm -f ${D}${base_bindir}/kill
-}
-
-# dm-verity support introduces a circular build dependency, so util-linux-libuuid is split out for target builds
-# Need to build libuuid for uuidgen, but then delete it and let the other recipe ship it
-do_install:append () {
-	rm -rf ${D}${includedir}/uuid ${D}${libdir}/pkgconfig/uuid.pc ${D}${libdir}/libuuid* ${D}${base_libdir}/libuuid*
-}
-
-ALTERNATIVE_PRIORITY = "80"
-
-ALTERNATIVE_LINK_NAME[blkid] = "${base_sbindir}/blkid"
-ALTERNATIVE_LINK_NAME[blockdev] = "${base_sbindir}/blockdev"
-ALTERNATIVE_LINK_NAME[cal] = "${bindir}/cal"
-ALTERNATIVE_LINK_NAME[chfn] = "${bindir}/chfn"
-ALTERNATIVE_LINK_NAME[chsh] = "${bindir}/chsh"
-ALTERNATIVE_LINK_NAME[chrt] = "${bindir}/chrt"
-ALTERNATIVE_LINK_NAME[dmesg] = "${base_bindir}/dmesg"
-ALTERNATIVE_LINK_NAME[eject] = "${bindir}/eject"
-ALTERNATIVE_LINK_NAME[fallocate] = "${bindir}/fallocate"
-ALTERNATIVE_LINK_NAME[fdisk] = "${base_sbindir}/fdisk"
-ALTERNATIVE_LINK_NAME[findfs] = "${sbindir}/findfs"
-ALTERNATIVE_LINK_NAME[flock] = "${bindir}/flock"
-ALTERNATIVE_LINK_NAME[fsck] = "${base_sbindir}/fsck"
-ALTERNATIVE_LINK_NAME[fsfreeze] = "${sbindir}/fsfreeze"
-ALTERNATIVE_LINK_NAME[fstrim] = "${base_sbindir}/fstrim"
-ALTERNATIVE_LINK_NAME[getopt] = "${base_bindir}/getopt"
-ALTERNATIVE:${PN}-agetty = "getty"
-ALTERNATIVE_LINK_NAME[getty] = "${base_sbindir}/getty"
-ALTERNATIVE_TARGET[getty] = "${base_sbindir}/agetty"
-ALTERNATIVE_LINK_NAME[hexdump] = "${bindir}/hexdump"
-ALTERNATIVE_LINK_NAME[hwclock] = "${base_sbindir}/hwclock"
-ALTERNATIVE_LINK_NAME[ionice] = "${bindir}/ionice"
-ALTERNATIVE_LINK_NAME[kill] = "${base_bindir}/kill"
-ALTERNATIVE:${PN}-last = "last lastb"
-ALTERNATIVE_LINK_NAME[last] = "${bindir}/last"
-ALTERNATIVE_LINK_NAME[lastb] = "${bindir}/lastb"
-ALTERNATIVE_LINK_NAME[logger] = "${bindir}/logger"
-ALTERNATIVE_LINK_NAME[losetup] = "${base_sbindir}/losetup"
-ALTERNATIVE_LINK_NAME[mesg] = "${bindir}/mesg"
-ALTERNATIVE_LINK_NAME[mkswap] = "${base_sbindir}/mkswap"
-ALTERNATIVE_LINK_NAME[mcookie] = "${bindir}/mcookie"
-ALTERNATIVE_LINK_NAME[more] = "${base_bindir}/more"
-ALTERNATIVE_LINK_NAME[mount] = "${base_bindir}/mount"
-ALTERNATIVE_LINK_NAME[mountpoint] = "${base_bindir}/mountpoint"
-ALTERNATIVE_LINK_NAME[nologin] = "${base_sbindir}/nologin"
-ALTERNATIVE_LINK_NAME[nsenter] = "${bindir}/nsenter"
-ALTERNATIVE_LINK_NAME[pivot_root] = "${base_sbindir}/pivot_root"
-ALTERNATIVE_LINK_NAME[prlimit] = "${bindir}/prlimit"
-ALTERNATIVE_LINK_NAME[readprofile] = "${sbindir}/readprofile"
-ALTERNATIVE_LINK_NAME[renice] = "${bindir}/renice"
-ALTERNATIVE_LINK_NAME[rev] = "${bindir}/rev"
-ALTERNATIVE_LINK_NAME[rfkill] = "${sbindir}/rfkill"
-ALTERNATIVE_LINK_NAME[rtcwake] = "${sbindir}/rtcwake"
-ALTERNATIVE_LINK_NAME[setpriv] = "${bindir}/setpriv"
-ALTERNATIVE_LINK_NAME[setsid] = "${bindir}/setsid"
-ALTERNATIVE_LINK_NAME[su] = "${base_bindir}/su"
-ALTERNATIVE_LINK_NAME[sulogin] = "${base_sbindir}/sulogin"
-ALTERNATIVE_LINK_NAME[swapoff] = "${base_sbindir}/swapoff"
-ALTERNATIVE_LINK_NAME[swapon] = "${base_sbindir}/swapon"
-ALTERNATIVE_LINK_NAME[switch_root] = "${base_sbindir}/switch_root"
-ALTERNATIVE_LINK_NAME[taskset] = "${bindir}/taskset"
-ALTERNATIVE_LINK_NAME[umount] = "${base_bindir}/umount"
-ALTERNATIVE_LINK_NAME[unshare] = "${bindir}/unshare"
-ALTERNATIVE_LINK_NAME[utmpdump] = "${bindir}/utmpdump"
-ALTERNATIVE_LINK_NAME[uuidgen] = "${bindir}/uuidgen"
-ALTERNATIVE_LINK_NAME[wall] = "${bindir}/wall"
-
-ALTERNATIVE:${PN}-doc = "\
-blkid.8 eject.1 findfs.8 fsck.8 kill.1 last.1 lastb.1 libblkid.3 logger.1 mesg.1 \
-mountpoint.1 nologin.8 rfkill.8 sulogin.8 utmpdump.1 uuid.3 wall.1\
-"
-ALTERNATIVE:${PN}-doc += "${@bb.utils.contains('PACKAGECONFIG', 'pam', 'su.1', '', d)}"
-
-ALTERNATIVE_LINK_NAME[blkid.8] = "${mandir}/man8/blkid.8"
-ALTERNATIVE_LINK_NAME[eject.1] = "${mandir}/man1/eject.1"
-ALTERNATIVE_LINK_NAME[findfs.8] = "${mandir}/man8/findfs.8"
-ALTERNATIVE_LINK_NAME[fsck.8] = "${mandir}/man8/fsck.8"
-ALTERNATIVE_LINK_NAME[kill.1] = "${mandir}/man1/kill.1"
-ALTERNATIVE_LINK_NAME[last.1] = "${mandir}/man1/last.1"
-ALTERNATIVE_LINK_NAME[lastb.1] = "${mandir}/man1/lastb.1"
-ALTERNATIVE_LINK_NAME[libblkid.3] = "${mandir}/man3/libblkid.3"
-ALTERNATIVE_LINK_NAME[logger.1] = "${mandir}/man1/logger.1"
-ALTERNATIVE_LINK_NAME[mesg.1] = "${mandir}/man1/mesg.1"
-ALTERNATIVE_LINK_NAME[mountpoint.1] = "${mandir}/man1/mountpoint.1"
-ALTERNATIVE_LINK_NAME[nologin.8] = "${mandir}/man8/nologin.8"
-ALTERNATIVE_LINK_NAME[rfkill.8] = "${mandir}/man8/rfkill.8"
-ALTERNATIVE_LINK_NAME[setpriv.1] = "${mandir}/man1/setpriv.1"
-ALTERNATIVE_LINK_NAME[su.1] = "${mandir}/man1/su.1"
-ALTERNATIVE_LINK_NAME[sulogin.8] = "${mandir}/man8/sulogin.8"
-ALTERNATIVE_LINK_NAME[utmpdump.1] = "${mandir}/man1/utmpdump.1"
-ALTERNATIVE_LINK_NAME[uuid.3] = "${mandir}/man3/uuid.3"
-ALTERNATIVE_LINK_NAME[wall.1] = "${mandir}/man1/wall.1"
-
-BBCLASSEXTEND = "native nativesdk"
-
-PTEST_BINDIR = "1"
-do_compile_ptest() {
-    oe_runmake buildtest-TESTS
-}
-
-do_install_ptest() {
-    mkdir -p ${D}${PTEST_PATH}/tests/ts
-    find . -name 'test*' -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
-    find ./.libs -name 'sample*' -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
-    find ./.libs -name 'test*' -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
-
-    cp ${S}/tests/*.sh ${D}${PTEST_PATH}/tests/
-    cp -pR ${S}/tests/expected ${D}${PTEST_PATH}/tests/expected
-    cp -pR ${S}/tests/ts ${D}${PTEST_PATH}/tests/
-    cp ${WORKDIR}/build/config.h ${D}${PTEST_PATH}
-
-    sed -i 's|@base_sbindir@|${base_sbindir}|g' ${D}${PTEST_PATH}/run-ptest
-
-    # chfn needs PAM
-    if ! ${@bb.utils.contains('PACKAGECONFIG', 'pam', 'true', 'false', d)}; then
-        rm -rf ${D}${PTEST_PATH}/tests/ts/chfn
-    fi
-}
diff --git a/poky/meta/recipes-core/zlib/zlib/0001-Fix-a-bug-when-getting-a-gzip-header-extra-field-wit.patch b/poky/meta/recipes-core/zlib/zlib/0001-Fix-a-bug-when-getting-a-gzip-header-extra-field-wit.patch
new file mode 100644
index 0000000..96ab563
--- /dev/null
+++ b/poky/meta/recipes-core/zlib/zlib/0001-Fix-a-bug-when-getting-a-gzip-header-extra-field-wit.patch
@@ -0,0 +1,38 @@
+From eff308af425b67093bab25f80f1ae950166bece1 Mon Sep 17 00:00:00 2001
+From: Mark Adler <fork@madler.net>
+Date: Sat, 30 Jul 2022 15:51:11 -0700
+Subject: [PATCH] Fix a bug when getting a gzip header extra field with inflate().
+
+If the extra field was larger than the space the user provided with
+inflateGetHeader(), and if multiple calls of inflate() delivered
+the extra header data, then there could be a buffer overflow of the
+provided space. This commit assures that provided space is not
+exceeded.
+
+CVE: CVE-2022-37434
+Upstream-Status: Backport [https://github.com/madler/zlib/commit/eff308af425b67093bab25f80f1ae950166be]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ inflate.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/inflate.c b/inflate.c
+index 7be8c63..7a72897 100644
+--- a/inflate.c
++++ b/inflate.c
+@@ -763,9 +763,10 @@ int flush;
+                 copy = state->length;
+                 if (copy > have) copy = have;
+                 if (copy) {
++                    len = state->head->extra_len - state->length;
+                     if (state->head != Z_NULL &&
+-                        state->head->extra != Z_NULL) {
+-                        len = state->head->extra_len - state->length;
++                        state->head->extra != Z_NULL &&
++                        len < state->head->extra_max) {
+                         zmemcpy(state->head->extra + len, next,
+                                 len + copy > state->head->extra_max ?
+                                 state->head->extra_max - len : copy);
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-core/zlib/zlib/0001-Fix-extra-field-processing-bug-that-dereferences-NUL.patch b/poky/meta/recipes-core/zlib/zlib/0001-Fix-extra-field-processing-bug-that-dereferences-NUL.patch
new file mode 100644
index 0000000..a0978c5
--- /dev/null
+++ b/poky/meta/recipes-core/zlib/zlib/0001-Fix-extra-field-processing-bug-that-dereferences-NUL.patch
@@ -0,0 +1,36 @@
+From 1eb7682f845ac9e9bf9ae35bbfb3bad5dacbd91d Mon Sep 17 00:00:00 2001
+From: Mark Adler <fork@madler.net>
+Date: Mon, 8 Aug 2022 10:50:09 -0700
+Subject: [PATCH] Fix extra field processing bug that dereferences NULL
+ state->head.
+
+The recent commit to fix a gzip header extra field processing bug
+introduced the new bug fixed here.
+
+CVE: CVE-2022-37434
+Upstream-Status: Backport [https://github.com/madler/zlib/commit/1eb7682f845ac9e9bf9ae35bbfb3bad5dacbd91d]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ inflate.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/inflate.c b/inflate.c
+index 7a72897..2a3c4fe 100644
+--- a/inflate.c
++++ b/inflate.c
+@@ -763,10 +763,10 @@ int flush;
+                 copy = state->length;
+                 if (copy > have) copy = have;
+                 if (copy) {
+-                    len = state->head->extra_len - state->length;
+                     if (state->head != Z_NULL &&
+                         state->head->extra != Z_NULL &&
+-                        len < state->head->extra_max) {
++                        (len = state->head->extra_len - state->length) <
++                            state->head->extra_max) {
+                         zmemcpy(state->head->extra + len, next,
+                                 len + copy > state->head->extra_max ?
+                                 state->head->extra_max - len : copy);
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-core/zlib/zlib_1.2.12.bb b/poky/meta/recipes-core/zlib/zlib_1.2.12.bb
index 77e7a49..b999f13 100644
--- a/poky/meta/recipes-core/zlib/zlib_1.2.12.bb
+++ b/poky/meta/recipes-core/zlib/zlib_1.2.12.bb
@@ -12,6 +12,8 @@
            file://0001-configure-Pass-LDFLAGS-to-link-tests.patch \
            file://run-ptest \
            file://0001-Correct-incorrect-inputs-provided-to-the-CRC-functio.patch \
+           file://0001-Fix-a-bug-when-getting-a-gzip-header-extra-field-wit.patch \
+           file://0001-Fix-extra-field-processing-bug-that-dereferences-NUL.patch \
            "
 UPSTREAM_CHECK_URI = "http://zlib.net/"
 
diff --git a/poky/meta/recipes-devtools/apt/apt/0001-Do-not-init-tables-from-dpkg-configuration.patch b/poky/meta/recipes-devtools/apt/apt/0001-Do-not-init-tables-from-dpkg-configuration.patch
index 59b9cd1..37a3133 100644
--- a/poky/meta/recipes-devtools/apt/apt/0001-Do-not-init-tables-from-dpkg-configuration.patch
+++ b/poky/meta/recipes-devtools/apt/apt/0001-Do-not-init-tables-from-dpkg-configuration.patch
@@ -1,4 +1,4 @@
-From 11ba49594ae9d11f0070198c146b5e437fa83022 Mon Sep 17 00:00:00 2001
+From b84280fec4e1d0d33eca78e76556023f8f8fe5b7 Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Fri, 10 May 2019 16:47:38 +0200
 Subject: [PATCH] Do not init tables from dpkg configuration
@@ -13,7 +13,7 @@
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/apt-pkg/init.cc b/apt-pkg/init.cc
-index b9d9b15..1725c59 100644
+index b9d9b15d2..1725c5966 100644
 --- a/apt-pkg/init.cc
 +++ b/apt-pkg/init.cc
 @@ -281,8 +281,8 @@ bool pkgInitSystem(Configuration &Cnf,pkgSystem *&Sys)
diff --git a/poky/meta/recipes-devtools/apt/apt/0001-Remove-using-std-binary_function.patch b/poky/meta/recipes-devtools/apt/apt/0001-Remove-using-std-binary_function.patch
new file mode 100644
index 0000000..3065210
--- /dev/null
+++ b/poky/meta/recipes-devtools/apt/apt/0001-Remove-using-std-binary_function.patch
@@ -0,0 +1,87 @@
+From e91fb0618ce0a5d42f239d0fca602544858f0819 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 16 Aug 2022 08:44:18 -0700
+Subject: [PATCH] Remove using std::binary_function
+
+std::binary_function and std::unary_function are deprecated since c++11
+and removed in c++17, therefore remove it and use lambda functions to get same
+functionality implemented.
+
+Upstream-Status: Submitted [https://salsa.debian.org/apt-team/apt/-/merge_requests/253]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ftparchive/apt-ftparchive.cc | 33 ++++++++++-----------------------
+ 1 file changed, 10 insertions(+), 23 deletions(-)
+
+diff --git a/ftparchive/apt-ftparchive.cc b/ftparchive/apt-ftparchive.cc
+index 87ce9153c..56fdc2246 100644
+--- a/ftparchive/apt-ftparchive.cc
++++ b/ftparchive/apt-ftparchive.cc
+@@ -48,6 +48,11 @@
+ using namespace std;
+ unsigned Quiet = 0;
+ 
++auto ContentsCompare = [](const auto &a, const auto &b) { return a.ContentsMTime < b.ContentsMTime; };
++auto DBCompare = [](const auto &a, const auto &b) { return a.BinCacheDB < b.BinCacheDB; };
++auto SrcDBCompare = [](const auto &a, const auto &b) { return a.SrcCacheDB < b.SrcCacheDB; };
++
++
+ static struct timeval GetTimevalFromSteadyClock()			/*{{{*/
+ {
+    auto const Time = std::chrono::steady_clock::now().time_since_epoch();
+@@ -116,24 +121,6 @@ struct PackageMap
+    bool SrcDone;
+    time_t ContentsMTime;
+    
+-   struct ContentsCompare : public binary_function<PackageMap,PackageMap,bool>
+-   {
+-      inline bool operator() (const PackageMap &x,const PackageMap &y)
+-      {return x.ContentsMTime < y.ContentsMTime;};
+-   };
+-    
+-   struct DBCompare : public binary_function<PackageMap,PackageMap,bool>
+-   {
+-      inline bool operator() (const PackageMap &x,const PackageMap &y)
+-      {return x.BinCacheDB < y.BinCacheDB;};
+-   };  
+-
+-   struct SrcDBCompare : public binary_function<PackageMap,PackageMap,bool>
+-   {
+-      inline bool operator() (const PackageMap &x,const PackageMap &y)
+-      {return x.SrcCacheDB < y.SrcCacheDB;};
+-   };
+-   
+    void GetGeneral(Configuration &Setup,Configuration &Block);
+    bool GenPackages(Configuration &Setup,struct CacheDB::Stats &Stats);
+    bool GenSources(Configuration &Setup,struct CacheDB::Stats &Stats);
+@@ -869,7 +856,7 @@ static bool DoGenerateContents(Configuration &Setup,
+       else
+ 	 I->ContentsMTime = A.st_mtime;
+    }
+-   stable_sort(PkgList.begin(),PkgList.end(),PackageMap::ContentsCompare());
++   stable_sort(PkgList.begin(),PkgList.end(),ContentsCompare);
+    
+    /* Now for Contents.. The process here is to do a make-like dependency
+       check. Each contents file is verified to be newer than the package files
+@@ -941,8 +928,8 @@ static bool Generate(CommandLine &CmdL)
+    LoadBinDir(PkgList,Setup);
+ 
+    // Sort by cache DB to improve IO locality.
+-   stable_sort(PkgList.begin(),PkgList.end(),PackageMap::DBCompare());
+-   stable_sort(PkgList.begin(),PkgList.end(),PackageMap::SrcDBCompare());
++   stable_sort(PkgList.begin(),PkgList.end(),DBCompare);
++   stable_sort(PkgList.begin(),PkgList.end(),SrcDBCompare);
+ 
+    // Generate packages
+    if (_config->FindB("APT::FTPArchive::ContentsOnly", false) == false)
+@@ -993,8 +980,8 @@ static bool Clean(CommandLine &CmdL)
+    LoadBinDir(PkgList,Setup);
+ 
+    // Sort by cache DB to improve IO locality.
+-   stable_sort(PkgList.begin(),PkgList.end(),PackageMap::DBCompare());
+-   stable_sort(PkgList.begin(),PkgList.end(),PackageMap::SrcDBCompare());
++   stable_sort(PkgList.begin(),PkgList.end(),DBCompare);
++   stable_sort(PkgList.begin(),PkgList.end(),SrcDBCompare);
+ 
+    string CacheDir = Setup.FindDir("Dir::CacheDir");
+ 
diff --git a/poky/meta/recipes-devtools/apt/apt/0001-Revert-always-run-dpkg-configure-a-at-the-end-of-our.patch b/poky/meta/recipes-devtools/apt/apt/0001-Revert-always-run-dpkg-configure-a-at-the-end-of-our.patch
index 593ed7d..6f4d5b6 100644
--- a/poky/meta/recipes-devtools/apt/apt/0001-Revert-always-run-dpkg-configure-a-at-the-end-of-our.patch
+++ b/poky/meta/recipes-devtools/apt/apt/0001-Revert-always-run-dpkg-configure-a-at-the-end-of-our.patch
@@ -1,4 +1,4 @@
-From 47c2b42af60ceefd8ed52b32a3a365facf0e05b8 Mon Sep 17 00:00:00 2001
+From a2dd661484536492b47d4c88998f2bf516749bc8 Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Thu, 21 May 2020 20:13:25 +0000
 Subject: [PATCH] Revert "always run 'dpkg --configure -a' at the end of our
@@ -20,7 +20,7 @@
  1 file changed, 2 insertions(+), 7 deletions(-)
 
 diff --git a/apt-pkg/deb/dpkgpm.cc b/apt-pkg/deb/dpkgpm.cc
-index 93effa9..4375781 100644
+index 93effa959..4375781d1 100644
 --- a/apt-pkg/deb/dpkgpm.cc
 +++ b/apt-pkg/deb/dpkgpm.cc
 @@ -1199,12 +1199,6 @@ void pkgDPkgPM::BuildPackagesProgressMap()
diff --git a/poky/meta/recipes-devtools/apt/apt_2.4.5.bb b/poky/meta/recipes-devtools/apt/apt_2.4.5.bb
index 1d94dd1..564bdee 100644
--- a/poky/meta/recipes-devtools/apt/apt_2.4.5.bb
+++ b/poky/meta/recipes-devtools/apt/apt_2.4.5.bb
@@ -13,6 +13,7 @@
            file://0001-cmake-Do-not-build-po-files.patch \
            file://0001-Hide-fstatat64-and-prlimit64-defines-on-musl.patch \
            file://0001-aptwebserver.cc-Include-array.patch \
+           file://0001-Remove-using-std-binary_function.patch \
            "
 
 SRC_URI:append:class-native = " \
@@ -138,5 +139,5 @@
 
 do_install:append() {
 	# Avoid non-reproducible -src package
-	sed -i -e "s,${B},,g" ${B}/apt-pkg/tagfile-keys.cc
+	sed -i -e "s,${B}/include/,,g" ${B}/apt-pkg/tagfile-keys.cc
 }
diff --git a/poky/meta/recipes-devtools/autoconf/autoconf/0001-specify-void-prototype-for-functions-with-no-paramet.patch b/poky/meta/recipes-devtools/autoconf/autoconf/0001-specify-void-prototype-for-functions-with-no-paramet.patch
new file mode 100644
index 0000000..4d8aa29
--- /dev/null
+++ b/poky/meta/recipes-devtools/autoconf/autoconf/0001-specify-void-prototype-for-functions-with-no-paramet.patch
@@ -0,0 +1,64 @@
+From 7ccfea413216bddd988823acf4e93421ea0f7f9f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 16 Aug 2022 18:35:45 -0700
+Subject: [PATCH] specify void prototype for functions with no parameters
+
+Compilers defaulting to C99 flag such functions as warning which fails
+to compile when using -Werror
+
+Fixes
+error: a function declaration without a prototype is deprecated in all versions of C [-Werror,-Wstrict-prototypes]
+
+Upstream-Status: Submitted [https://lists.gnu.org/archive/html/autoconf-patches/2022-08/msg00003.html]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ lib/autoconf/c.m4 | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/lib/autoconf/c.m4
++++ b/lib/autoconf/c.m4
+@@ -127,7 +127,7 @@ m4_if([$2], [main], ,
+ [/* Override any GCC internal prototype to avoid an error.
+    Use char because int might match the return type of a GCC
+    builtin and then its argument prototype would still apply.  */
+-char $2 ();])], [return $2 ();])])
++char $2 (void);])], [return $2 ();])])
+ 
+ 
+ # AC_LANG_FUNC_LINK_TRY(C)(FUNCTION)
+@@ -151,7 +151,7 @@ m4_define([AC_LANG_FUNC_LINK_TRY(C)],
+ #define $1 innocuous_$1
+ 
+ /* System header to define __stub macros and hopefully few prototypes,
+-   which can conflict with char $1 (); below.  */
++   which can conflict with char $1 (void); below.  */
+ 
+ #include <limits.h>
+ #undef $1
+@@ -162,7 +162,7 @@ m4_define([AC_LANG_FUNC_LINK_TRY(C)],
+ #ifdef __cplusplus
+ extern "C"
+ #endif
+-char $1 ();
++char $1 (void);
+ /* The GNU C library defines this for functions which it implements
+     to always fail with ENOSYS.  Some functions are actually named
+     something starting with __ and the normal name is an alias.  */
+@@ -252,7 +252,7 @@ dnl other built-in extern "C" functions,
+ dnl when it actually happens.
+ [AC_LANG_PROGRAM([[$1
+ namespace conftest {
+-  extern "C" int $2 ();
++  extern "C" int $2 (void);
+ }]],
+ [[return conftest::$2 ();]])])
+ 
+@@ -2457,7 +2457,7 @@ using std::strcmp;
+ 
+ namespace {
+ 
+-void test_exception_syntax()
++void test_exception_syntax(void)
+ {
+   try {
+     throw "test";
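The compiler behaviour quoted in the commit message can be reproduced outside of autoconf: with -Wstrict-prototypes promoted to an error, a declaration with an empty parameter list is rejected, while the (void) form is accepted. A small hypothetical test case, not taken from the generated configure tests:

    /* cc -std=c11 -Wstrict-prototypes -Werror -DOLD_STYLE -c proto.c   -> fails
     * cc -std=c11 -Wstrict-prototypes -Werror            -c proto.c   -> builds
     *
     * Autoconf's AC_CHECK_FUNC-style link tests emit their own declaration of
     * the probed symbol; the old "char foo ();" form is what trips the error
     * the commit message quotes, and "char foo (void);" is the fixed form. */

    #ifdef OLD_STYLE
    char probed_function ();        /* declaration without a prototype */
    #else
    char probed_function (void);    /* explicit prototype, accepted everywhere */
    #endif

    int main (void)
    {
        return probed_function () != 0;
    }
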
diff --git a/poky/meta/recipes-devtools/autoconf/autoconf_2.71.bb b/poky/meta/recipes-devtools/autoconf/autoconf_2.71.bb
index 799191e..239b268 100644
--- a/poky/meta/recipes-devtools/autoconf/autoconf_2.71.bb
+++ b/poky/meta/recipes-devtools/autoconf/autoconf_2.71.bb
@@ -18,6 +18,7 @@
            file://preferbash.patch \
            file://autotest-automake-result-format.patch \
            file://man-host-perl.patch \
+           file://0001-specify-void-prototype-for-functions-with-no-paramet.patch \
            "
 SRC_URI:append:class-native = " file://no-man.patch"
 
diff --git a/poky/meta/recipes-devtools/binutils/binutils-2.38.inc b/poky/meta/recipes-devtools/binutils/binutils-2.38.inc
deleted file mode 100644
index 742ca86..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils-2.38.inc
+++ /dev/null
@@ -1,37 +0,0 @@
-LIC_FILES_CHKSUM="\
-    file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552\
-    file://COPYING.LIB;md5=9f604d8a4f8e74f4f5140845a21b6674\
-    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504\
-    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6\
-    file://gas/COPYING;md5=d32239bcb673463ab874e80d47fae504\
-    file://include/COPYING;md5=59530bdf33659b29e73d4adb9f9f6552\
-    file://include/COPYING3;md5=d32239bcb673463ab874e80d47fae504\
-    file://libiberty/COPYING.LIB;md5=a916467b91076e631dd8edb7424769c7\
-    file://bfd/COPYING;md5=d32239bcb673463ab874e80d47fae504\
-    "
-
-# When upgrading to 2.39, please make sure there is no trailing .0, so
-# that upstream version check can work correctly.
-PV = "2.38"
-CVE_VERSION = "2.38"
-SRCBRANCH ?= "binutils-2_38-branch"
-
-UPSTREAM_CHECK_GITTAGREGEX = "binutils-(?P<pver>\d+_(\d_?)*)"
-
-SRCREV ?= "eed56ee299b9ef8754bb4e53f2e9cf2a7c28c04d"
-BINUTILS_GIT_URI ?= "git://sourceware.org/git/binutils-gdb.git;branch=${SRCBRANCH};protocol=git"
-SRC_URI = "\
-     ${BINUTILS_GIT_URI} \
-     file://0004-Point-scripts-location-to-libdir.patch \
-     file://0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch \
-     file://0006-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch \
-     file://0007-warn-for-uses-of-system-directories-when-cross-linki.patch \
-     file://0008-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch \
-     file://0009-Use-libtool-2.4.patch \
-     file://0010-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch \
-     file://0011-sync-with-OE-libtool-changes.patch \
-     file://0012-Check-for-clang-before-checking-gcc-version.patch \
-     file://0013-Avoid-as-info-race-condition.patch \
-     file://0014-CVE-2019-1010204.patch \
-"
-S  = "${WORKDIR}/git"
diff --git a/poky/meta/recipes-devtools/binutils/binutils-2.39.inc b/poky/meta/recipes-devtools/binutils/binutils-2.39.inc
new file mode 100644
index 0000000..89612a3
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils-2.39.inc
@@ -0,0 +1,35 @@
+LIC_FILES_CHKSUM="\
+    file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552\
+    file://COPYING.LIB;md5=9f604d8a4f8e74f4f5140845a21b6674\
+    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504\
+    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6\
+    file://gas/COPYING;md5=d32239bcb673463ab874e80d47fae504\
+    file://include/COPYING;md5=59530bdf33659b29e73d4adb9f9f6552\
+    file://include/COPYING3;md5=d32239bcb673463ab874e80d47fae504\
+    file://libiberty/COPYING.LIB;md5=a916467b91076e631dd8edb7424769c7\
+    file://bfd/COPYING;md5=d32239bcb673463ab874e80d47fae504\
+    "
+
+# When upgrading to 2.39, please make sure there is no trailing .0, so
+# that upstream version check can work correctly.
+PV = "2.39"
+CVE_VERSION = "2.39"
+SRCBRANCH ?= "binutils-2_39-branch"
+
+UPSTREAM_CHECK_GITTAGREGEX = "binutils-(?P<pver>\d+_(\d_?)*)"
+
+SRCREV ?= "f89058434f13382c85b8729464192bc7763d88a4"
+BINUTILS_GIT_URI ?= "git://sourceware.org/git/binutils-gdb.git;branch=${SRCBRANCH};protocol=git"
+SRC_URI = "\
+     ${BINUTILS_GIT_URI} \
+     file://0004-Point-scripts-location-to-libdir.patch \
+     file://0005-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch \
+     file://0006-warn-for-uses-of-system-directories-when-cross-linki.patch \
+     file://0007-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch \
+     file://0008-Use-libtool-2.4.patch \
+     file://0009-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch \
+     file://0010-sync-with-OE-libtool-changes.patch \
+     file://0011-Check-for-clang-before-checking-gcc-version.patch \
+     file://0012-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch \
+"
+S  = "${WORKDIR}/git"
diff --git a/poky/meta/recipes-devtools/binutils/binutils-cross-canadian.inc b/poky/meta/recipes-devtools/binutils/binutils-cross-canadian.inc
index b3d591e..4e8f10c 100644
--- a/poky/meta/recipes-devtools/binutils/binutils-cross-canadian.inc
+++ b/poky/meta/recipes-devtools/binutils/binutils-cross-canadian.inc
@@ -27,4 +27,6 @@
 	cross_canadian_bindirlinks
 }
 
+FILES:${PN} += "${sysconfdir}/gprofng.rc"
+
 BBCLASSEXTEND = ""
diff --git a/poky/meta/recipes-devtools/binutils/binutils-cross-canadian_2.38.bb b/poky/meta/recipes-devtools/binutils/binutils-cross-canadian_2.39.bb
similarity index 100%
rename from poky/meta/recipes-devtools/binutils/binutils-cross-canadian_2.38.bb
rename to poky/meta/recipes-devtools/binutils/binutils-cross-canadian_2.39.bb
diff --git a/poky/meta/recipes-devtools/binutils/binutils-cross-testsuite_2.38.bb b/poky/meta/recipes-devtools/binutils/binutils-cross-testsuite_2.39.bb
similarity index 100%
rename from poky/meta/recipes-devtools/binutils/binutils-cross-testsuite_2.38.bb
rename to poky/meta/recipes-devtools/binutils/binutils-cross-testsuite_2.39.bb
diff --git a/poky/meta/recipes-devtools/binutils/binutils-cross.inc b/poky/meta/recipes-devtools/binutils/binutils-cross.inc
index 02ec891..835d4fa 100644
--- a/poky/meta/recipes-devtools/binutils/binutils-cross.inc
+++ b/poky/meta/recipes-devtools/binutils/binutils-cross.inc
@@ -16,6 +16,7 @@
 # and mean the linker scripts have to be relocated.
 EXTRA_OECONF += "--with-sysroot=${STAGING_DIR_TARGET} \
                 --disable-install-libbfd \
+                --disable-gprofng \
                 --enable-poison-system-directories \
                 --with-lib-path==${target_base_libdir}:=${target_libdir} \
                 "
diff --git a/poky/meta/recipes-devtools/binutils/binutils-cross_2.38.bb b/poky/meta/recipes-devtools/binutils/binutils-cross_2.39.bb
similarity index 100%
rename from poky/meta/recipes-devtools/binutils/binutils-cross_2.38.bb
rename to poky/meta/recipes-devtools/binutils/binutils-cross_2.39.bb
diff --git a/poky/meta/recipes-devtools/binutils/binutils-crosssdk_2.38.bb b/poky/meta/recipes-devtools/binutils/binutils-crosssdk_2.39.bb
similarity index 100%
rename from poky/meta/recipes-devtools/binutils/binutils-crosssdk_2.38.bb
rename to poky/meta/recipes-devtools/binutils/binutils-crosssdk_2.39.bb
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0001-binutils-crosssdk-Generate-relocatable-SDKs.patch b/poky/meta/recipes-devtools/binutils/binutils/0001-binutils-crosssdk-Generate-relocatable-SDKs.patch
index 719928b..9a7ee49 100644
--- a/poky/meta/recipes-devtools/binutils/binutils/0001-binutils-crosssdk-Generate-relocatable-SDKs.patch
+++ b/poky/meta/recipes-devtools/binutils/binutils/0001-binutils-crosssdk-Generate-relocatable-SDKs.patch
@@ -1,4 +1,4 @@
-From 07bb7fbdacaf9cd6a1a252ffbc98f4e05e305d50 Mon Sep 17 00:00:00 2001
+From a0ac147aec127c66c9e38292faa50bb56d3c2a19 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 2 Mar 2015 01:58:54 +0000
 Subject: [PATCH] binutils-crosssdk: Generate relocatable SDKs
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0002-binutils-cross-Do-not-generate-linker-script-directo.patch b/poky/meta/recipes-devtools/binutils/binutils/0002-binutils-cross-Do-not-generate-linker-script-directo.patch
index a3f7d62..cab9c0e 100644
--- a/poky/meta/recipes-devtools/binutils/binutils/0002-binutils-cross-Do-not-generate-linker-script-directo.patch
+++ b/poky/meta/recipes-devtools/binutils/binutils/0002-binutils-cross-Do-not-generate-linker-script-directo.patch
@@ -1,4 +1,4 @@
-From f820ab7ea7e94d4df548be3388163ff2efb2ea96 Mon Sep 17 00:00:00 2001
+From fd7065bfd20364679e3c3f329b19059bbc51ab02 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 6 Mar 2017 23:37:05 -0800
 Subject: [PATCH] binutils-cross: Do not generate linker script directories
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0003-binutils-nativesdk-Search-for-alternative-ld.so.conf.patch b/poky/meta/recipes-devtools/binutils/binutils/0003-binutils-nativesdk-Search-for-alternative-ld.so.conf.patch
index 59a97c1..4fe5520 100644
--- a/poky/meta/recipes-devtools/binutils/binutils/0003-binutils-nativesdk-Search-for-alternative-ld.so.conf.patch
+++ b/poky/meta/recipes-devtools/binutils/binutils/0003-binutils-nativesdk-Search-for-alternative-ld.so.conf.patch
@@ -1,4 +1,4 @@
-From b2ccd25828b40310caeb094c0413e3a30a4dc0a5 Mon Sep 17 00:00:00 2001
+From 67735b3647f98ce0f010ff8b4f9b5c5da576cb17 Mon Sep 17 00:00:00 2001
 From: Richard Purdie <richard.purdie@linuxfoundation.org>
 Date: Wed, 19 Feb 2020 09:51:16 -0800
 Subject: [PATCH] binutils-nativesdk: Search for alternative ld.so.conf in SDK
@@ -29,7 +29,7 @@
  5 files changed, 7 insertions(+), 3 deletions(-)
 
 diff --git a/ld/Makefile.am b/ld/Makefile.am
-index b55a873d927..61db131fb0d 100644
+index d31021c13e2..29782385ca4 100644
 --- a/ld/Makefile.am
 +++ b/ld/Makefile.am
 @@ -42,7 +42,8 @@ ZLIBINC = @zlibinc@
@@ -41,12 +41,12 @@
 +           -DSYSCONFDIR="\"$(sysconfdir)\""
  WARN_CFLAGS = @WARN_CFLAGS@
  NO_WERROR = @NO_WERROR@
- AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS)
+ AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS) $(JANSSON_CFLAGS)
 diff --git a/ld/Makefile.in b/ld/Makefile.in
-index 61e93eeaf1e..860eb21a785 100644
+index ee0c98f65b0..04ee68a2c67 100644
 --- a/ld/Makefile.in
 +++ b/ld/Makefile.in
-@@ -556,7 +556,8 @@ ZLIB = @zlibdir@ -lz
+@@ -562,7 +562,8 @@ ZLIB = @zlibdir@ -lz
  ZLIBINC = @zlibinc@
  ELF_CLFAGS = -DELF_LIST_OPTIONS=@elf_list_options@ \
  	   -DELF_SHLIB_LIST_OPTIONS=@elf_shlib_list_options@ \
@@ -54,13 +54,13 @@
 +	   -DELF_PLT_UNWIND_LIST_OPTIONS=@elf_plt_unwind_list_options@ \
 +           -DSYSCONFDIR="\"$(sysconfdir)\""
  
- AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS)
+ AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS) $(JANSSON_CFLAGS)
  
 diff --git a/ld/ldelf.c b/ld/ldelf.c
-index 121c25d948f..34cbc60e5e9 100644
+index bfa0d54753a..0d61a3209ec 100644
 --- a/ld/ldelf.c
 +++ b/ld/ldelf.c
-@@ -930,7 +930,7 @@ ldelf_check_ld_so_conf (const struct bfd_link_needed_list *l, int force,
+@@ -936,7 +936,7 @@ ldelf_check_ld_so_conf (const struct bfd_link_needed_list *l, int force,
  
        info.path = NULL;
        info.len = info.alloc = 0;
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0004-Point-scripts-location-to-libdir.patch b/poky/meta/recipes-devtools/binutils/binutils/0004-Point-scripts-location-to-libdir.patch
index 8f323eb..5b0f2ee 100644
--- a/poky/meta/recipes-devtools/binutils/binutils/0004-Point-scripts-location-to-libdir.patch
+++ b/poky/meta/recipes-devtools/binutils/binutils/0004-Point-scripts-location-to-libdir.patch
@@ -1,4 +1,4 @@
-From 7a7b777cdfded080aab1021fa6bcdb20345f5cfd Mon Sep 17 00:00:00 2001
+From 2158e5bd4c6ea4db89e33d46ef25428e37bfc3a6 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 2 Mar 2015 01:09:58 +0000
 Subject: [PATCH] Point scripts location to libdir
@@ -12,10 +12,10 @@
  2 files changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/ld/Makefile.am b/ld/Makefile.am
-index 61db131fb0d..5b5ee64d121 100644
+index 29782385ca4..062e6b6814b 100644
 --- a/ld/Makefile.am
 +++ b/ld/Makefile.am
-@@ -51,7 +51,7 @@ AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS)
+@@ -51,7 +51,7 @@ AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS) $(JANSSON_CFLAGS)
  # We put the scripts in the directory $(scriptdir)/ldscripts.
  # We can't put the scripts in $(datadir) because the SEARCH_DIR
  # directives need to be different for native and cross linkers.
@@ -25,10 +25,10 @@
  EMUL = @EMUL@
  EMULATION_OFILES = @EMULATION_OFILES@
 diff --git a/ld/Makefile.in b/ld/Makefile.in
-index 860eb21a785..d719747919c 100644
+index 04ee68a2c67..782d4017a60 100644
 --- a/ld/Makefile.in
 +++ b/ld/Makefile.in
-@@ -564,7 +564,7 @@ AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS)
+@@ -570,7 +570,7 @@ AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS) $(JANSSON_CFLAGS)
  # We put the scripts in the directory $(scriptdir)/ldscripts.
  # We can't put the scripts in $(datadir) because the SEARCH_DIR
  # directives need to be different for native and cross linkers.
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch b/poky/meta/recipes-devtools/binutils/binutils/0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch
deleted file mode 100644
index 9977740..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From edddb1f294d667eac94649ba0665fe464990ed18 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 2 Mar 2015 01:27:17 +0000
-Subject: [PATCH] Only generate an RPATH entry if LD_RUN_PATH is not empty
-
-for cases where -rpath isn't specified. debian (#151024)
-
-Upstream-Status: Pending
-
-Signed-off-by: Chris Chimelis <chris@debian.org>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- ld/ldelf.c | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/ld/ldelf.c b/ld/ldelf.c
-index 34cbc60e5e9..b1965a9e96f 100644
---- a/ld/ldelf.c
-+++ b/ld/ldelf.c
-@@ -1277,6 +1277,8 @@ ldelf_after_open (int use_libpath, int native, int is_linux, int is_freebsd,
- 		  && command_line.rpath == NULL)
- 		{
- 		  path = (const char *) getenv ("LD_RUN_PATH");
-+		  if ((path) && (strlen (path) == 0))
-+		      path = NULL;
- 		  if (path
- 		      && ldelf_search_needed (path, &n, force,
- 					      is_linux, elfsize))
-@@ -1636,6 +1638,8 @@ ldelf_before_allocation (char *audit, char *depaudit,
-   rpath = command_line.rpath;
-   if (rpath == NULL)
-     rpath = (const char *) getenv ("LD_RUN_PATH");
-+  if ((rpath) && (strlen (rpath) == 0))
-+    rpath = NULL;
- 
-   for (abfd = link_info.input_bfds; abfd; abfd = abfd->link.next)
-     if (bfd_get_flavour (abfd) == bfd_target_elf_flavour)
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0005-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch b/poky/meta/recipes-devtools/binutils/binutils/0005-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch
new file mode 100644
index 0000000..2495079
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils/0005-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch
@@ -0,0 +1,32 @@
+From e74d765a1a95253c9247228bd7ccbcabecdd8f7e Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 2 Mar 2015 01:39:01 +0000
+Subject: [PATCH] don't let the distro compiler point to the wrong installation
+ location
+
+Thanks to RP for helping find the source code causing the issue.
+
+2010/08/13
+Nitin A Kamble <nitin.a.kamble@intel.com>
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ libiberty/Makefile.in | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/libiberty/Makefile.in b/libiberty/Makefile.in
+index abef3c4601b..880c8826482 100644
+--- a/libiberty/Makefile.in
++++ b/libiberty/Makefile.in
+@@ -385,7 +385,8 @@ install-strip: install
+ # multilib-specific flags, it's overridden by FLAGS_TO_PASS from the
+ # default multilib, so we have to take CFLAGS into account as well,
+ # since it will be passed the multilib flags.
+-MULTIOSDIR = `$(CC) $(CFLAGS) -print-multi-os-directory`
++#MULTIOSDIR = `$(CC) $(CFLAGS) -print-multi-os-directory`
++MULTIOSDIR = ""
+ install_to_libdir: all
+ 	if test -n "${target_header_dir}"; then \
+ 		${mkinstalldirs} $(DESTDIR)$(libdir)/$(MULTIOSDIR); \
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0006-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch b/poky/meta/recipes-devtools/binutils/binutils/0006-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch
deleted file mode 100644
index 507d0b1..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0006-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From fc9e8b99969bb32a4b009eab763bade6c554ef73 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 2 Mar 2015 01:39:01 +0000
-Subject: [PATCH] don't let the distro compiler point to the wrong installation
- location
-
-Thanks to RP for helping find the source code causing the issue.
-
-2010/08/13
-Nitin A Kamble <nitin.a.kamble@intel.com>
-
-Upstream-Status: Inappropriate [embedded specific]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- libiberty/Makefile.in | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/libiberty/Makefile.in b/libiberty/Makefile.in
-index abef3c4601b..880c8826482 100644
---- a/libiberty/Makefile.in
-+++ b/libiberty/Makefile.in
-@@ -385,7 +385,8 @@ install-strip: install
- # multilib-specific flags, it's overridden by FLAGS_TO_PASS from the
- # default multilib, so we have to take CFLAGS into account as well,
- # since it will be passed the multilib flags.
--MULTIOSDIR = `$(CC) $(CFLAGS) -print-multi-os-directory`
-+#MULTIOSDIR = `$(CC) $(CFLAGS) -print-multi-os-directory`
-+MULTIOSDIR = ""
- install_to_libdir: all
- 	if test -n "${target_header_dir}"; then \
- 		${mkinstalldirs} $(DESTDIR)$(libdir)/$(MULTIOSDIR); \
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0006-warn-for-uses-of-system-directories-when-cross-linki.patch b/poky/meta/recipes-devtools/binutils/binutils/0006-warn-for-uses-of-system-directories-when-cross-linki.patch
new file mode 100644
index 0000000..00fb5aa
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils/0006-warn-for-uses-of-system-directories-when-cross-linki.patch
@@ -0,0 +1,288 @@
+From 2c43b1357db6b09d1645704afd3f45be6de0cf4d Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 15 Jan 2016 06:31:09 +0000
+Subject: [PATCH] warn for uses of system directories when cross linking
+
+2008-07-02  Joseph Myers  <joseph@codesourcery.com>
+
+    ld/
+    * ld.h (args_type): Add error_poison_system_directories.
+    * ld.texinfo (--error-poison-system-directories): Document.
+    * ldfile.c (ldfile_add_library_path): Check
+    command_line.error_poison_system_directories.
+    * ldmain.c (main): Initialize
+    command_line.error_poison_system_directories.
+    * lexsup.c (enum option_values): Add
+    OPTION_ERROR_POISON_SYSTEM_DIRECTORIES.
+    (ld_options): Add --error-poison-system-directories.
+    (parse_args): Handle new option.
+
+2007-06-13  Joseph Myers  <joseph@codesourcery.com>
+
+    ld/
+    * config.in: Regenerate.
+    * ld.h (args_type): Add poison_system_directories.
+    * ld.texinfo (--no-poison-system-directories): Document.
+    * ldfile.c (ldfile_add_library_path): Check
+    command_line.poison_system_directories.
+    * ldmain.c (main): Initialize
+    command_line.poison_system_directories.
+    * lexsup.c (enum option_values): Add
+    OPTION_NO_POISON_SYSTEM_DIRECTORIES.
+    (ld_options): Add --no-poison-system-directories.
+    (parse_args): Handle new option.
+
+2007-04-20  Joseph Myers  <joseph@codesourcery.com>
+
+    Merge from Sourcery G++ binutils 2.17:
+
+    2007-03-20  Joseph Myers  <joseph@codesourcery.com>
+    Based on patch by Mark Hatle <mark.hatle@windriver.com>.
+    ld/
+    * configure.in (--enable-poison-system-directories): New option.
+    * configure, config.in: Regenerate.
+    * ldfile.c (ldfile_add_library_path): If
+    ENABLE_POISON_SYSTEM_DIRECTORIES defined, warn for use of /lib,
+    /usr/lib, /usr/local/lib or /usr/X11R6/lib.
+
+Upstream-Status: Pending
+
+Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
+Signed-off-by: Scott Garman <scott.a.garman@intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ld/config.in    |  3 +++
+ ld/configure    | 16 ++++++++++++++++
+ ld/configure.ac | 10 ++++++++++
+ ld/ld.h         |  8 ++++++++
+ ld/ld.texi      | 12 ++++++++++++
+ ld/ldfile.c     | 17 +++++++++++++++++
+ ld/ldlex.h      |  2 ++
+ ld/ldmain.c     |  6 ++++--
+ ld/lexsup.c     | 16 ++++++++++++++++
+ 9 files changed, 88 insertions(+), 2 deletions(-)
+
+diff --git a/ld/config.in b/ld/config.in
+index d4c1fc420b5..1aece0b2c29 100644
+--- a/ld/config.in
++++ b/ld/config.in
+@@ -55,6 +55,9 @@
+    language is requested. */
+ #undef ENABLE_NLS
+ 
++/* Define to warn for use of native system library directories */
++#undef ENABLE_POISON_SYSTEM_DIRECTORIES
++
+ /* Additional extension a shared object might have. */
+ #undef EXTRA_SHLIB_EXTENSION
+ 
+diff --git a/ld/configure b/ld/configure
+index e58fb7f3a35..d0a467ac101 100755
+--- a/ld/configure
++++ b/ld/configure
+@@ -836,6 +836,7 @@ with_lib_path
+ enable_targets
+ enable_64_bit_bfd
+ with_sysroot
++enable_poison_system_directories
+ enable_gold
+ enable_got
+ enable_compressed_debug_sections
+@@ -1514,6 +1515,8 @@ Optional Features:
+   --enable-checking       enable run-time checks
+   --enable-targets        alternative target configurations
+   --enable-64-bit-bfd     64-bit support (on hosts with narrower word sizes)
++  --enable-poison-system-directories
++                          warn for use of native system library directories
+   --enable-gold[=ARG]     build gold [ARG={default,yes,no}]
+   --enable-got=<type>     GOT handling scheme (target, single, negative,
+                           multigot)
+@@ -15349,6 +15352,19 @@ fi
+ 
+ 
+ 
++# Check whether --enable-poison-system-directories was given.
++if test "${enable_poison_system_directories+set}" = set; then :
++  enableval=$enable_poison_system_directories;
++else
++  enable_poison_system_directories=no
++fi
++
++if test "x${enable_poison_system_directories}" = "xyes"; then
++
++$as_echo "#define ENABLE_POISON_SYSTEM_DIRECTORIES 1" >>confdefs.h
++
++fi
++
+ # Check whether --enable-gold was given.
+ if test "${enable_gold+set}" = set; then :
+   enableval=$enable_gold; case "${enableval}" in
+diff --git a/ld/configure.ac b/ld/configure.ac
+index 4331d6b1302..e2976bc2926 100644
+--- a/ld/configure.ac
++++ b/ld/configure.ac
+@@ -102,6 +102,16 @@ AC_SUBST(use_sysroot)
+ AC_SUBST(TARGET_SYSTEM_ROOT)
+ AC_SUBST(TARGET_SYSTEM_ROOT_DEFINE)
+ 
++AC_ARG_ENABLE([poison-system-directories],
++         AS_HELP_STRING([--enable-poison-system-directories],
++                [warn for use of native system library directories]),,
++         [enable_poison_system_directories=no])
++if test "x${enable_poison_system_directories}" = "xyes"; then
++  AC_DEFINE([ENABLE_POISON_SYSTEM_DIRECTORIES],
++       [1],
++       [Define to warn for use of native system library directories])
++fi
++
+ dnl Use --enable-gold to decide if this linker should be the default.
+ dnl "install_as_default" is set to false if gold is the default linker.
+ dnl "installed_linker" is the installed BFD linker name.
+diff --git a/ld/ld.h b/ld/ld.h
+index f3086bf30de..db5064243c7 100644
+--- a/ld/ld.h
++++ b/ld/ld.h
+@@ -162,6 +162,14 @@ typedef struct
+      in the linker script.  */
+   bool force_group_allocation;
+ 
++  /* If TRUE (the default) warn for uses of system directories when
++     cross linking.  */
++  bool poison_system_directories;
++
++  /* If TRUE (default FALSE) give an error for uses of system
++     directories when cross linking instead of a warning.  */
++  bool error_poison_system_directories;
++
+   /* Big or little endian as set on command line.  */
+   enum endian_enum endian;
+ 
+diff --git a/ld/ld.texi b/ld/ld.texi
+index eabbec8faa9..c4680e4947e 100644
+--- a/ld/ld.texi
++++ b/ld/ld.texi
+@@ -2947,6 +2947,18 @@ creation of the metadata note, if one had been enabled by an earlier
+ occurrence of the --package-metdata option.
+ If the linker has been built with libjansson, then the JSON string
+ will be validated.
++
++@kindex --no-poison-system-directories
++@item --no-poison-system-directories
++Do not warn for @option{-L} options using system directories such as
++@file{/usr/lib} when cross linking.  This option is intended for use
++in chroot environments when such directories contain the correct
++libraries for the target system rather than the host.
++
++@kindex --error-poison-system-directories
++@item --error-poison-system-directories
++Give an error instead of a warning for @option{-L} options using
++system directories when cross linking.
+ @end table
+ 
+ @c man end
+diff --git a/ld/ldfile.c b/ld/ldfile.c
+index 731ae5f7aed..dd8f03fd960 100644
+--- a/ld/ldfile.c
++++ b/ld/ldfile.c
+@@ -117,6 +117,23 @@ ldfile_add_library_path (const char *name, bool cmdline)
+     new_dirs->name = concat (ld_sysroot, name + strlen ("$SYSROOT"), (const char *) NULL);
+   else
+     new_dirs->name = xstrdup (name);
++
++#ifdef ENABLE_POISON_SYSTEM_DIRECTORIES
++  if (command_line.poison_system_directories
++  && ((!strncmp (name, "/lib", 4))
++      || (!strncmp (name, "/usr/lib", 8))
++      || (!strncmp (name, "/usr/local/lib", 14))
++      || (!strncmp (name, "/usr/X11R6/lib", 14))))
++   {
++     if (command_line.error_poison_system_directories)
++       einfo (_("%X%P: error: library search path \"%s\" is unsafe for "
++            "cross-compilation\n"), name);
++     else
++       einfo (_("%P: warning: library search path \"%s\" is unsafe for "
++            "cross-compilation\n"), name);
++   }
++#endif
++
+ }
+ 
+ /* Try to open a BFD for a lang_input_statement.  */
+diff --git a/ld/ldlex.h b/ld/ldlex.h
+index 57ade1f754b..64007ff8684 100644
+--- a/ld/ldlex.h
++++ b/ld/ldlex.h
+@@ -168,6 +168,8 @@ enum option_values
+   OPTION_NO_WARN_EXECSTACK,
+   OPTION_WARN_RWX_SEGMENTS,
+   OPTION_NO_WARN_RWX_SEGMENTS,
++  OPTION_NO_POISON_SYSTEM_DIRECTORIES,
++  OPTION_ERROR_POISON_SYSTEM_DIRECTORIES,
+ };
+ 
+ /* The initial parser states.  */
+diff --git a/ld/ldmain.c b/ld/ldmain.c
+index 1ae90a77749..f40750fd816 100644
+--- a/ld/ldmain.c
++++ b/ld/ldmain.c
+@@ -322,6 +322,8 @@ main (int argc, char **argv)
+   command_line.warn_mismatch = true;
+   command_line.warn_search_mismatch = true;
+   command_line.check_section_addresses = -1;
++  command_line.poison_system_directories = true;
++  command_line.error_poison_system_directories = false;
+ 
+   /* We initialize DEMANGLING based on the environment variable
+      COLLECT_NO_DEMANGLE.  The gcc collect2 program will demangle the
+@@ -1447,7 +1449,7 @@ undefined_symbol (struct bfd_link_info *info,
+       argv[1] = "undefined-symbol";
+       argv[2] = (char *) name;
+       argv[3] = NULL;
+-      
++
+       if (verbose)
+ 	einfo (_("%P: About to run error handling script '%s' with arguments: '%s' '%s'\n"),
+ 	       argv[0], argv[1], argv[2]);
+@@ -1468,7 +1470,7 @@ undefined_symbol (struct bfd_link_info *info,
+ 	 carry on to issue the normal error message.  */
+     }
+ #endif /* SUPPORT_ERROR_HANDLING_SCRIPT */
+-  
++
+   if (section != NULL)
+     {
+       if (error_count < MAX_ERRORS_IN_A_ROW)
+diff --git a/ld/lexsup.c b/ld/lexsup.c
+index 9225f71b3ce..92fb66f1fa2 100644
+--- a/ld/lexsup.c
++++ b/ld/lexsup.c
+@@ -608,6 +608,14 @@ static const struct ld_option ld_options[] =
+ 		   "                                <method> is: share-unconflicted (default),\n"
+ 		   "                                             share-duplicated"),
+     TWO_DASHES },
++  { {"no-poison-system-directories", no_argument, NULL,
++     OPTION_NO_POISON_SYSTEM_DIRECTORIES},
++    '\0', NULL, N_("Do not warn for -L options using system directories"),
++    TWO_DASHES },
++  { {"error-poison-system-directories", no_argument, NULL,
++    +     OPTION_ERROR_POISON_SYSTEM_DIRECTORIES},
++    '\0', NULL, N_("Give an error for -L options using system directories"),
++    TWO_DASHES },
+ };
+ 
+ #define OPTION_COUNT ARRAY_SIZE (ld_options)
+@@ -1722,6 +1730,14 @@ parse_args (unsigned argc, char **argv)
+ 	  config.print_map_discarded = true;
+ 	  break;
+ 
++	case OPTION_NO_POISON_SYSTEM_DIRECTORIES:
++	  command_line.poison_system_directories = false;
++	  break;
++
++	case OPTION_ERROR_POISON_SYSTEM_DIRECTORIES:
++	  command_line.error_poison_system_directories = true;
++	  break;
++
+ 	case OPTION_DEPENDENCY_FILE:
+ 	  config.dependency_file = optarg;
+ 	  break;
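The functional core of the patch is the prefix test added to ldfile_add_library_path(): when poisoning is enabled, any -L path under the native /lib, /usr/lib, /usr/local/lib or /usr/X11R6/lib trees produces a warning, or an error with --error-poison-system-directories. A standalone sketch of that check, with hypothetical helper and program names and without ld's einfo/command_line machinery:

    #include <stdio.h>
    #include <string.h>

    /* Mirrors the prefix test the patch wires into ldfile_add_library_path():
     * these host directories almost certainly hold the wrong libraries when
     * cross linking against a target sysroot. */
    static int is_poisoned_search_path(const char *name)
    {
        return strncmp(name, "/lib", 4) == 0
            || strncmp(name, "/usr/lib", 8) == 0
            || strncmp(name, "/usr/local/lib", 14) == 0
            || strncmp(name, "/usr/X11R6/lib", 14) == 0;
    }

    int main(int argc, char **argv)
    {
        /* Treat each argument as a would-be -L path, the way a wrapper or QA
         * script might vet linker flags before invoking a cross ld. */
        int hits = 0;
        for (int i = 1; i < argc; i++)
            if (is_poisoned_search_path(argv[i])) {
                fprintf(stderr,
                        "warning: library search path \"%s\" is unsafe for cross-compilation\n",
                        argv[i]);
                hits++;
            }
        return hits ? 1 : 0;
    }

Invoked as, say, ./check-L /usr/lib /opt/sysroot/usr/lib, only the first path is flagged, which is the behaviour the new linker options give you at link time.
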
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0007-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch b/poky/meta/recipes-devtools/binutils/binutils/0007-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch
new file mode 100644
index 0000000..4ae1580
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils/0007-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch
@@ -0,0 +1,37 @@
+From 883b6c0930410f8553b3bce0dd98131bc1694fa6 Mon Sep 17 00:00:00 2001
+From: Zhenhua Luo <zhenhua.luo@nxp.com>
+Date: Sat, 11 Jun 2016 22:08:29 -0500
+Subject: [PATCH] fix the incorrect assembling for ppc wait mnemonic
+
+The wait mnemonic for ppc targets is incorrectly assembled into 0x7c00003c due
+to duplicated address definition with waitasec instruction. The issue causes
+kernel boot calltrace for ppc targets when wait instruction is executed.
+
+Upstream-Status: Pending
+Signed-off-by: Zhenhua Luo <zhenhua.luo@nxp.com>
+---
+ opcodes/ppc-opc.c | 4 +---
+ 1 file changed, 1 insertion(+), 3 deletions(-)
+
+diff --git a/opcodes/ppc-opc.c b/opcodes/ppc-opc.c
+index 7637d3e349e..8e074e13208 100644
+--- a/opcodes/ppc-opc.c
++++ b/opcodes/ppc-opc.c
+@@ -6947,8 +6947,6 @@ const struct powerpc_opcode powerpc_opcodes[] = {
+ {"waitasec",	X(31,30),      XRTRARB_MASK, POWER8,	POWER9,		{0}},
+ {"waitrsv",	XWCPL(31,30,1,0),0xffffffff, POWER10,	EXT,		{0}},
+ {"pause_short",	XWCPL(31,30,2,0),0xffffffff, POWER10,	EXT,		{0}},
+-{"wait",	X(31,30),	XWCPL_MASK,  POWER10,	0,		{WC, PL}},
+-{"wait",	X(31,30),	XWC_MASK,    POWER9,	POWER10,	{WC}},
+ 
+ {"lwepx",	X(31,31),	X_MASK,	  E500MC|PPCA2, 0,		{RT, RA0, RB}},
+ 
+@@ -7002,7 +7000,7 @@ const struct powerpc_opcode powerpc_opcodes[] = {
+ 
+ {"waitrsv",	X(31,62)|(1<<21), 0xffffffff, E500MC|PPCA2, EXT,	{0}},
+ {"waitimpl",	X(31,62)|(2<<21), 0xffffffff, E500MC|PPCA2, EXT,	{0}},
+-{"wait",	X(31,62),	XWC_MASK,    E500MC|PPCA2, 0,		{WC}},
++{"wait",	X(31,62),	XWC_MASK,    E500MC|PPCA2|POWER9|POWER10, 0,	{WC}},
+ 
+ {"dcbstep",	XRT(31,63,0),	XRT_MASK,    E500MC|PPCA2, 0,		{RA0, RB}},
+ 
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0007-warn-for-uses-of-system-directories-when-cross-linki.patch b/poky/meta/recipes-devtools/binutils/binutils/0007-warn-for-uses-of-system-directories-when-cross-linki.patch
deleted file mode 100644
index 547bfca..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0007-warn-for-uses-of-system-directories-when-cross-linki.patch
+++ /dev/null
@@ -1,288 +0,0 @@
-From 9fb1bafb20371d82b674778d2a8b5c9444fed417 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 15 Jan 2016 06:31:09 +0000
-Subject: [PATCH] warn for uses of system directories when cross linking
-
-2008-07-02  Joseph Myers  <joseph@codesourcery.com>
-
-    ld/
-    * ld.h (args_type): Add error_poison_system_directories.
-    * ld.texinfo (--error-poison-system-directories): Document.
-    * ldfile.c (ldfile_add_library_path): Check
-    command_line.error_poison_system_directories.
-    * ldmain.c (main): Initialize
-    command_line.error_poison_system_directories.
-    * lexsup.c (enum option_values): Add
-    OPTION_ERROR_POISON_SYSTEM_DIRECTORIES.
-    (ld_options): Add --error-poison-system-directories.
-    (parse_args): Handle new option.
-
-2007-06-13  Joseph Myers  <joseph@codesourcery.com>
-
-    ld/
-    * config.in: Regenerate.
-    * ld.h (args_type): Add poison_system_directories.
-    * ld.texinfo (--no-poison-system-directories): Document.
-    * ldfile.c (ldfile_add_library_path): Check
-    command_line.poison_system_directories.
-    * ldmain.c (main): Initialize
-    command_line.poison_system_directories.
-    * lexsup.c (enum option_values): Add
-    OPTION_NO_POISON_SYSTEM_DIRECTORIES.
-    (ld_options): Add --no-poison-system-directories.
-    (parse_args): Handle new option.
-
-2007-04-20  Joseph Myers  <joseph@codesourcery.com>
-
-    Merge from Sourcery G++ binutils 2.17:
-
-    2007-03-20  Joseph Myers  <joseph@codesourcery.com>
-    Based on patch by Mark Hatle <mark.hatle@windriver.com>.
-    ld/
-    * configure.in (--enable-poison-system-directories): New option.
-    * configure, config.in: Regenerate.
-    * ldfile.c (ldfile_add_library_path): If
-    ENABLE_POISON_SYSTEM_DIRECTORIES defined, warn for use of /lib,
-    /usr/lib, /usr/local/lib or /usr/X11R6/lib.
-
-Upstream-Status: Pending
-
-Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
-Signed-off-by: Scott Garman <scott.a.garman@intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- ld/config.in    |  3 +++
- ld/configure    | 16 ++++++++++++++++
- ld/configure.ac | 10 ++++++++++
- ld/ld.h         |  8 ++++++++
- ld/ld.texi      | 12 ++++++++++++
- ld/ldfile.c     | 17 +++++++++++++++++
- ld/ldlex.h      |  2 ++
- ld/ldmain.c     |  6 ++++--
- ld/lexsup.c     | 16 ++++++++++++++++
- 9 files changed, 88 insertions(+), 2 deletions(-)
-
-diff --git a/ld/config.in b/ld/config.in
-index 26d55a00d47..ffad464783c 100644
---- a/ld/config.in
-+++ b/ld/config.in
-@@ -43,6 +43,9 @@
-    language is requested. */
- #undef ENABLE_NLS
- 
-+/* Define to warn for use of native system library directories */
-+#undef ENABLE_POISON_SYSTEM_DIRECTORIES
-+
- /* Additional extension a shared object might have. */
- #undef EXTRA_SHLIB_EXTENSION
- 
-diff --git a/ld/configure b/ld/configure
-index 26150d62898..1f9ec8ec580 100755
---- a/ld/configure
-+++ b/ld/configure
-@@ -831,6 +831,7 @@ with_lib_path
- enable_targets
- enable_64_bit_bfd
- with_sysroot
-+enable_poison_system_directories
- enable_gold
- enable_got
- enable_compressed_debug_sections
-@@ -1500,6 +1501,8 @@ Optional Features:
-   --enable-checking       enable run-time checks
-   --enable-targets        alternative target configurations
-   --enable-64-bit-bfd     64-bit support (on hosts with narrower word sizes)
-+  --enable-poison-system-directories
-+                          warn for use of native system library directories
-   --enable-gold[=ARG]     build gold [ARG={default,yes,no}]
-   --enable-got=<type>     GOT handling scheme (target, single, negative,
-                           multigot)
-@@ -15312,6 +15315,19 @@ fi
- 
- 
- 
-+# Check whether --enable-poison-system-directories was given.
-+if test "${enable_poison_system_directories+set}" = set; then :
-+  enableval=$enable_poison_system_directories;
-+else
-+  enable_poison_system_directories=no
-+fi
-+
-+if test "x${enable_poison_system_directories}" = "xyes"; then
-+
-+$as_echo "#define ENABLE_POISON_SYSTEM_DIRECTORIES 1" >>confdefs.h
-+
-+fi
-+
- # Check whether --enable-gold was given.
- if test "${enable_gold+set}" = set; then :
-   enableval=$enable_gold; case "${enableval}" in
-diff --git a/ld/configure.ac b/ld/configure.ac
-index 7f4cff079b7..57d1abff870 100644
---- a/ld/configure.ac
-+++ b/ld/configure.ac
-@@ -102,6 +102,16 @@ AC_SUBST(use_sysroot)
- AC_SUBST(TARGET_SYSTEM_ROOT)
- AC_SUBST(TARGET_SYSTEM_ROOT_DEFINE)
- 
-+AC_ARG_ENABLE([poison-system-directories],
-+         AS_HELP_STRING([--enable-poison-system-directories],
-+                [warn for use of native system library directories]),,
-+         [enable_poison_system_directories=no])
-+if test "x${enable_poison_system_directories}" = "xyes"; then
-+  AC_DEFINE([ENABLE_POISON_SYSTEM_DIRECTORIES],
-+       [1],
-+       [Define to warn for use of native system library directories])
-+fi
-+
- dnl Use --enable-gold to decide if this linker should be the default.
- dnl "install_as_default" is set to false if gold is the default linker.
- dnl "installed_linker" is the installed BFD linker name.
-diff --git a/ld/ld.h b/ld/ld.h
-index f3086bf30de..db5064243c7 100644
---- a/ld/ld.h
-+++ b/ld/ld.h
-@@ -162,6 +162,14 @@ typedef struct
-      in the linker script.  */
-   bool force_group_allocation;
- 
-+  /* If TRUE (the default) warn for uses of system directories when
-+     cross linking.  */
-+  bool poison_system_directories;
-+
-+  /* If TRUE (default FALSE) give an error for uses of system
-+     directories when cross linking instead of a warning.  */
-+  bool error_poison_system_directories;
-+
-   /* Big or little endian as set on command line.  */
-   enum endian_enum endian;
- 
-diff --git a/ld/ld.texi b/ld/ld.texi
-index fc75e9b3625..dca697d626e 100644
---- a/ld/ld.texi
-+++ b/ld/ld.texi
-@@ -2892,6 +2892,18 @@ string identifying the original linked file does not change.
- 
- Passing @code{none} for @var{style} disables the setting from any
- @code{--build-id} options earlier on the command line.
-+
-+@kindex --no-poison-system-directories
-+@item --no-poison-system-directories
-+Do not warn for @option{-L} options using system directories such as
-+@file{/usr/lib} when cross linking.  This option is intended for use
-+in chroot environments when such directories contain the correct
-+libraries for the target system rather than the host.
-+
-+@kindex --error-poison-system-directories
-+@item --error-poison-system-directories
-+Give an error instead of a warning for @option{-L} options using
-+system directories when cross linking.
- @end table
- 
- @c man end
-diff --git a/ld/ldfile.c b/ld/ldfile.c
-index 731ae5f7aed..dd8f03fd960 100644
---- a/ld/ldfile.c
-+++ b/ld/ldfile.c
-@@ -117,6 +117,23 @@ ldfile_add_library_path (const char *name, bool cmdline)
-     new_dirs->name = concat (ld_sysroot, name + strlen ("$SYSROOT"), (const char *) NULL);
-   else
-     new_dirs->name = xstrdup (name);
-+
-+#ifdef ENABLE_POISON_SYSTEM_DIRECTORIES
-+  if (command_line.poison_system_directories
-+  && ((!strncmp (name, "/lib", 4))
-+      || (!strncmp (name, "/usr/lib", 8))
-+      || (!strncmp (name, "/usr/local/lib", 14))
-+      || (!strncmp (name, "/usr/X11R6/lib", 14))))
-+   {
-+     if (command_line.error_poison_system_directories)
-+       einfo (_("%X%P: error: library search path \"%s\" is unsafe for "
-+            "cross-compilation\n"), name);
-+     else
-+       einfo (_("%P: warning: library search path \"%s\" is unsafe for "
-+            "cross-compilation\n"), name);
-+   }
-+#endif
-+
- }
- 
- /* Try to open a BFD for a lang_input_statement.  */
-diff --git a/ld/ldlex.h b/ld/ldlex.h
-index bc58fea73cc..a1595589197 100644
---- a/ld/ldlex.h
-+++ b/ld/ldlex.h
-@@ -164,6 +164,8 @@ enum option_values
-   OPTION_CTF_VARIABLES,
-   OPTION_NO_CTF_VARIABLES,
-   OPTION_CTF_SHARE_TYPES,
-+  OPTION_NO_POISON_SYSTEM_DIRECTORIES,
-+  OPTION_ERROR_POISON_SYSTEM_DIRECTORIES,
- };
- 
- /* The initial parser states.  */
-diff --git a/ld/ldmain.c b/ld/ldmain.c
-index 1ae90a77749..f40750fd816 100644
---- a/ld/ldmain.c
-+++ b/ld/ldmain.c
-@@ -322,6 +322,8 @@ main (int argc, char **argv)
-   command_line.warn_mismatch = true;
-   command_line.warn_search_mismatch = true;
-   command_line.check_section_addresses = -1;
-+  command_line.poison_system_directories = true;
-+  command_line.error_poison_system_directories = false;
- 
-   /* We initialize DEMANGLING based on the environment variable
-      COLLECT_NO_DEMANGLE.  The gcc collect2 program will demangle the
-@@ -1447,7 +1449,7 @@ undefined_symbol (struct bfd_link_info *info,
-       argv[1] = "undefined-symbol";
-       argv[2] = (char *) name;
-       argv[3] = NULL;
--      
-+
-       if (verbose)
- 	einfo (_("%P: About to run error handling script '%s' with arguments: '%s' '%s'\n"),
- 	       argv[0], argv[1], argv[2]);
-@@ -1468,7 +1470,7 @@ undefined_symbol (struct bfd_link_info *info,
- 	 carry on to issue the normal error message.  */
-     }
- #endif /* SUPPORT_ERROR_HANDLING_SCRIPT */
--  
-+
-   if (section != NULL)
-     {
-       if (error_count < MAX_ERRORS_IN_A_ROW)
-diff --git a/ld/lexsup.c b/ld/lexsup.c
-index 5acc47ed5a0..d03c6136ccf 100644
---- a/ld/lexsup.c
-+++ b/ld/lexsup.c
-@@ -600,6 +600,14 @@ static const struct ld_option ld_options[] =
- 		   "                                <method> is: share-unconflicted (default),\n"
- 		   "                                             share-duplicated"),
-     TWO_DASHES },
-+  { {"no-poison-system-directories", no_argument, NULL,
-+     OPTION_NO_POISON_SYSTEM_DIRECTORIES},
-+    '\0', NULL, N_("Do not warn for -L options using system directories"),
-+    TWO_DASHES },
-+  { {"error-poison-system-directories", no_argument, NULL,
-+    +     OPTION_ERROR_POISON_SYSTEM_DIRECTORIES},
-+    '\0', NULL, N_("Give an error for -L options using system directories"),
-+    TWO_DASHES },
- };
- 
- #define OPTION_COUNT ARRAY_SIZE (ld_options)
-@@ -1702,6 +1710,14 @@ parse_args (unsigned argc, char **argv)
- 	  config.print_map_discarded = true;
- 	  break;
- 
-+	case OPTION_NO_POISON_SYSTEM_DIRECTORIES:
-+	  command_line.poison_system_directories = false;
-+	  break;
-+
-+	case OPTION_ERROR_POISON_SYSTEM_DIRECTORIES:
-+	  command_line.error_poison_system_directories = true;
-+	  break;
-+
- 	case OPTION_DEPENDENCY_FILE:
- 	  config.dependency_file = optarg;
- 	  break;
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0008-Use-libtool-2.4.patch b/poky/meta/recipes-devtools/binutils/binutils/0008-Use-libtool-2.4.patch
new file mode 100644
index 0000000..21e2c4f
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils/0008-Use-libtool-2.4.patch
@@ -0,0 +1,31741 @@
+From 0f45262ef0d656c576adbb0b0f42b8f417895008 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 14 Feb 2016 17:04:07 +0000
+Subject: [PATCH] Use libtool 2.4
+
+get libtool sysroot support
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ bfd/Makefile.in                     |    3 +
+ bfd/configure                       | 1333 +++++++++---
+ bfd/configure.ac                    |    2 +-
+ binutils/Makefile.in                |    3 +
+ binutils/configure                  | 1331 +++++++++---
+ gas/Makefile.in                     |    3 +
+ gas/configure                       | 1331 +++++++++---
+ gdbsupport/Makefile.in              |    1 +
+ gprof/Makefile.in                   |    3 +
+ gprof/configure                     | 1331 +++++++++---
+ gprofng/Makefile.in                 |    2 +
+ gprofng/configure                   | 1701 ++++++++++++----
+ gprofng/doc/Makefile.in             |    2 +
+ gprofng/gp-display-html/Makefile.in |    2 +
+ gprofng/libcollector/Makefile.in    |    2 +
+ gprofng/libcollector/configure      | 1703 ++++++++++++----
+ gprofng/src/Makefile.in             |    2 +
+ ld/Makefile.in                      |    3 +
+ ld/configure                        | 1704 ++++++++++++----
+ libbacktrace/Makefile.in            |    3 +
+ libbacktrace/configure              | 1331 +++++++++---
+ libctf/Makefile.in                  |    2 +
+ libctf/configure                    | 1330 +++++++++---
+ libtool.m4                          | 1093 ++++++----
+ ltmain.sh                           | 2925 ++++++++++++++++++---------
+ ltoptions.m4                        |    2 +-
+ ltversion.m4                        |   12 +-
+ lt~obsolete.m4                      |    2 +-
+ opcodes/Makefile.in                 |    3 +
+ opcodes/configure                   | 1331 +++++++++---
+ sim/Makefile.in                     |    3 +
+ zlib/Makefile.in                    |  204 +-
+ zlib/aclocal.m4                     |  218 +-
+ zlib/configure                      | 1554 +++++++++-----
+ 34 files changed, 14804 insertions(+), 5671 deletions(-)
+
+diff --git a/bfd/Makefile.in b/bfd/Makefile.in
+index a26f74d7199..6edacdfeb0e 100644
+--- a/bfd/Makefile.in
++++ b/bfd/Makefile.in
+@@ -346,6 +346,7 @@ DATADIRNAME = @DATADIRNAME@
+ DEBUGDIR = @DEBUGDIR@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -380,6 +381,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ MKINSTALLDIRS = @MKINSTALLDIRS@
+ MSGFMT = @MSGFMT@
+@@ -421,6 +423,7 @@ abs_builddir = @abs_builddir@
+ abs_srcdir = @abs_srcdir@
+ abs_top_builddir = @abs_top_builddir@
+ abs_top_srcdir = @abs_top_srcdir@
++ac_ct_AR = @ac_ct_AR@
+ ac_ct_CC = @ac_ct_CC@
+ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+ all_backends = @all_backends@
+diff --git a/bfd/configure b/bfd/configure
+index 4f591b750d8..d90db11744b 100755
+--- a/bfd/configure
++++ b/bfd/configure
+@@ -702,6 +702,9 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
++ac_ct_AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -820,6 +823,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -1504,6 +1508,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-mmap             try using mmap for BFD input files if available
+   --with-separate-debug-dir=DIR
+                           Look for global separate debug info in DIR
+@@ -5024,8 +5030,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -5065,7 +5071,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5758,8 +5764,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5808,6 +5814,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -5824,6 +5904,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -5992,7 +6077,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6146,6 +6232,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6161,6 +6262,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -6175,8 +6427,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -6192,7 +6446,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6212,11 +6466,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -6232,7 +6490,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6251,6 +6509,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6262,25 +6524,20 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
+ 
+ 
+ 
+@@ -6291,6 +6548,63 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
++
+ 
+ if test -n "$ac_tool_prefix"; then
+   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+@@ -6631,8 +6945,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6668,6 +6982,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6709,6 +7024,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6720,7 +7047,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6746,8 +7073,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6757,8 +7084,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6795,6 +7122,14 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
+ 
+ 
+ 
+@@ -6813,6 +7148,47 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
++
++
+ 
+ 
+ 
+@@ -7022,6 +7398,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7585,6 +8078,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8136,8 +8631,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8303,6 +8796,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8365,7 +8864,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8422,13 +8921,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8489,6 +8992,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8839,7 +9347,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8938,12 +9447,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -8957,8 +9466,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -8976,8 +9485,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9023,8 +9532,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9154,7 +9663,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9167,22 +9682,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9194,7 +9716,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9207,22 +9735,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9267,20 +9802,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9341,7 +9919,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9349,7 +9927,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9365,7 +9943,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9389,10 +9967,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9471,23 +10049,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9572,7 +10163,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9591,9 +10182,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10169,8 +10760,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10203,13 +10795,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -11087,7 +11737,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11090 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11131,10 +11781,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11193,7 +11843,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11196 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11237,10 +11887,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -13225,7 +13875,7 @@ SHARED_LDFLAGS=
+ if test "$enable_shared" = "yes"; then
+   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
+   if test -n "$x"; then
+-    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
++    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
+   fi
+ fi
+ 
+@@ -15869,13 +16519,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -15890,14 +16547,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -15930,12 +16590,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -15990,8 +16650,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -16001,12 +16666,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -16022,7 +16689,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -16058,6 +16724,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -16826,7 +17493,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -16929,19 +17597,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -16971,6 +17662,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -16980,6 +17677,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -17094,12 +17794,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -17186,9 +17886,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -17204,6 +17901,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -17236,210 +17936,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/bfd/configure.ac b/bfd/configure.ac
+index 6146efb5ae3..73e5e03d016 100644
+--- a/bfd/configure.ac
++++ b/bfd/configure.ac
+@@ -282,7 +282,7 @@ changequote(,)dnl
+   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
+ changequote([,])dnl
+   if test -n "$x"; then
+-    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
++    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
+   fi
+ fi
+ 
+diff --git a/binutils/Makefile.in b/binutils/Makefile.in
+index 78d32b350e3..ad4f2de7358 100644
+--- a/binutils/Makefile.in
++++ b/binutils/Makefile.in
+@@ -492,6 +492,7 @@ DEBUGINFOD_LIBS = @DEBUGINFOD_LIBS@
+ DEFS = @DEFS@
+ DEMANGLER_NAME = @DEMANGLER_NAME@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DLLTOOL_DEFS = @DLLTOOL_DEFS@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+@@ -533,6 +534,7 @@ LTLIBICONV = @LTLIBICONV@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ MKINSTALLDIRS = @MKINSTALLDIRS@
+ MSGFMT = @MSGFMT@
+@@ -579,6 +581,7 @@ abs_builddir = @abs_builddir@
+ abs_srcdir = @abs_srcdir@
+ abs_top_builddir = @abs_top_builddir@
+ abs_top_srcdir = @abs_top_srcdir@
++ac_ct_AR = @ac_ct_AR@
+ ac_ct_CC = @ac_ct_CC@
+ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+ am__include = @am__include@
+diff --git a/binutils/configure b/binutils/configure
+index 149815542f9..43952bde405 100755
+--- a/binutils/configure
++++ b/binutils/configure
+@@ -698,8 +698,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -816,6 +819,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -1514,6 +1518,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-debuginfod       Enable debuginfo lookups with debuginfod
+                           (auto/yes/no)
+   --with-system-zlib      use installed libz
+@@ -4893,8 +4899,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -4934,7 +4940,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5627,8 +5633,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5677,6 +5683,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -5693,6 +5773,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -5861,7 +5946,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6015,6 +6101,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6030,6 +6131,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -6044,8 +6296,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -6061,7 +6315,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6081,11 +6335,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -6101,7 +6359,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6120,6 +6378,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6131,29 +6393,81 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
+ 
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
+ 
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ 
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
+ 
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
+ 
+ 
+ 
+@@ -6500,8 +6814,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6537,6 +6851,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6578,6 +6893,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6589,7 +6916,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6615,8 +6942,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6626,8 +6953,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6664,6 +6991,19 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
++
++
+ 
+ 
+ 
+@@ -6680,6 +7020,42 @@ fi
+ 
+ 
+ 
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -6891,6 +7267,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7454,6 +7947,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8036,8 +8531,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8203,6 +8696,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8265,7 +8764,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8322,13 +8821,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8389,6 +8892,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8739,7 +9247,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8838,12 +9347,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -8857,8 +9366,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -8876,8 +9385,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8923,8 +9432,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9054,7 +9563,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9067,22 +9582,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9094,7 +9616,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9107,22 +9635,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9167,20 +9702,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9241,7 +9819,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9249,7 +9827,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9265,7 +9843,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9289,10 +9867,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9371,23 +9949,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9472,7 +10063,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9491,9 +10082,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10069,8 +10660,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10103,13 +10695,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10987,7 +11637,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 10990 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11031,10 +11681,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11093,7 +11743,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11096 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11137,10 +11787,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -15642,13 +16292,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -15663,14 +16320,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -15703,12 +16363,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -15763,8 +16423,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -15774,12 +16439,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -15795,7 +16462,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -15831,6 +16497,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -16596,7 +17263,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -16699,19 +17367,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -16741,6 +17432,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -16750,6 +17447,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -16864,12 +17564,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -16956,9 +17656,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -16974,6 +17671,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -17006,210 +17706,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/gas/Makefile.in b/gas/Makefile.in
+index c57d78f82c4..da370b21855 100644
+--- a/gas/Makefile.in
++++ b/gas/Makefile.in
+@@ -373,6 +373,7 @@ CYGPATH_W = @CYGPATH_W@
+ DATADIRNAME = @DATADIRNAME@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -409,6 +410,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ MKINSTALLDIRS = @MKINSTALLDIRS@
+ MSGFMT = @MSGFMT@
+@@ -447,6 +449,7 @@ abs_builddir = @abs_builddir@
+ abs_srcdir = @abs_srcdir@
+ abs_top_builddir = @abs_top_builddir@
+ abs_top_srcdir = @abs_top_srcdir@
++ac_ct_AR = @ac_ct_AR@
+ ac_ct_CC = @ac_ct_CC@
+ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+ am__include = @am__include@
+diff --git a/gas/configure b/gas/configure
+index 12c16faefd9..11b5127bf3a 100755
+--- a/gas/configure
++++ b/gas/configure
+@@ -681,8 +681,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -799,6 +802,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -1490,6 +1494,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-cpu=CPU          default cpu variant is CPU (currently only supported
+                           on ARC)
+   --with-system-zlib      use installed libz
+@@ -4608,8 +4614,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -4649,7 +4655,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5342,8 +5348,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5392,6 +5398,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -5408,6 +5488,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -5576,7 +5661,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -5730,6 +5816,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -5745,6 +5846,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -5759,8 +6011,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -5776,7 +6030,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -5796,11 +6050,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -5816,7 +6074,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -5835,6 +6093,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -5846,29 +6108,81 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
+ 
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
+ 
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ 
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
+ 
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
+ 
+ 
+ 
+@@ -6215,8 +6529,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6252,6 +6566,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6293,6 +6608,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6304,7 +6631,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6330,8 +6657,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6341,8 +6668,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6379,6 +6706,19 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
++
++
+ 
+ 
+ 
+@@ -6395,6 +6735,42 @@ fi
+ 
+ 
+ 
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -6606,6 +6982,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7169,6 +7662,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -7751,8 +8246,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -7918,6 +8411,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -7980,7 +8479,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8037,13 +8536,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8104,6 +8607,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8454,7 +8962,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8553,12 +9062,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -8572,8 +9081,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -8591,8 +9100,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8638,8 +9147,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8769,7 +9278,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -8782,22 +9297,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -8809,7 +9331,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -8822,22 +9350,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -8882,20 +9417,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -8956,7 +9534,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -8964,7 +9542,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -8980,7 +9558,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9004,10 +9582,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9086,23 +9664,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9187,7 +9778,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9206,9 +9797,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -9784,8 +10375,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -9818,13 +10410,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10702,7 +11352,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 10705 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -10746,10 +11396,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -10808,7 +11458,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 10811 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -10852,10 +11502,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -14834,13 +15484,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -14855,14 +15512,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -14895,12 +15555,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -14955,8 +15615,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -14966,12 +15631,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -14987,7 +15654,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -15023,6 +15689,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -15795,7 +16462,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -15898,19 +16566,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -15940,6 +16631,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -15949,6 +16646,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -16063,12 +16763,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -16155,9 +16855,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -16173,6 +16870,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -16205,210 +16905,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/gdbsupport/Makefile.in b/gdbsupport/Makefile.in
+index bdceff3b56a..6aadae41031 100644
+--- a/gdbsupport/Makefile.in
++++ b/gdbsupport/Makefile.in
+@@ -233,6 +233,7 @@ CATOBJEXT = @CATOBJEXT@
+ CC = @CC@
+ CCDEPMODE = @CCDEPMODE@
+ CFLAGS = @CFLAGS@
++CONFIG_STATUS_DEPENDENCIES = @CONFIG_STATUS_DEPENDENCIES@
+ CPP = @CPP@
+ CPPFLAGS = @CPPFLAGS@
+ CXX = @CXX@
+diff --git a/gprof/Makefile.in b/gprof/Makefile.in
+index 5ef5ece74a9..9d7ce8b62b2 100644
+--- a/gprof/Makefile.in
++++ b/gprof/Makefile.in
+@@ -321,6 +321,7 @@ CYGPATH_W = @CYGPATH_W@
+ DATADIRNAME = @DATADIRNAME@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -352,6 +353,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ MKINSTALLDIRS = @MKINSTALLDIRS@
+ MSGFMT = @MSGFMT@
+@@ -387,6 +389,7 @@ abs_builddir = @abs_builddir@
+ abs_srcdir = @abs_srcdir@
+ abs_top_builddir = @abs_top_builddir@
+ abs_top_srcdir = @abs_top_srcdir@
++ac_ct_AR = @ac_ct_AR@
+ ac_ct_CC = @ac_ct_CC@
+ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+ am__include = @am__include@
+diff --git a/gprof/configure b/gprof/configure
+index 5a59f1c1d0e..2506887d3b0 100755
+--- a/gprof/configure
++++ b/gprof/configure
+@@ -663,8 +663,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -781,6 +784,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -1443,6 +1447,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+ 
+ Some influential environment variables:
+   CC          C compiler command
+@@ -4510,8 +4516,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -4551,7 +4557,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5244,8 +5250,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5294,6 +5300,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -5310,6 +5390,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -5478,7 +5563,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -5632,6 +5718,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -5647,6 +5748,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -5661,8 +5913,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -5678,7 +5932,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -5698,11 +5952,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -5718,7 +5976,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -5737,6 +5995,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -5748,25 +6010,19 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
+ 
+ 
+ 
+@@ -5778,6 +6034,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+ set dummy ${ac_tool_prefix}strip; ac_word=$2
+@@ -6117,8 +6431,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6154,6 +6468,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6195,6 +6510,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6206,7 +6533,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6232,8 +6559,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6243,8 +6570,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6281,6 +6608,18 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
++
+ 
+ 
+ 
+@@ -6297,6 +6636,43 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -6508,6 +6884,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7071,6 +7564,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -7653,8 +8148,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -7820,6 +8313,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -7882,7 +8381,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -7939,13 +8438,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8006,6 +8509,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8356,7 +8864,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8455,12 +8964,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -8474,8 +8983,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -8493,8 +9002,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8540,8 +9049,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8671,7 +9180,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -8684,22 +9199,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -8711,7 +9233,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -8724,22 +9252,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -8784,20 +9319,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -8858,7 +9436,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -8866,7 +9444,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -8882,7 +9460,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -8906,10 +9484,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -8988,23 +9566,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9089,7 +9680,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9108,9 +9699,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -9686,8 +10277,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -9720,13 +10312,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10604,7 +11254,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 10607 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -10648,10 +11298,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -10710,7 +11360,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 10713 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -10754,10 +11404,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -12777,13 +13427,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -12798,14 +13455,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -12838,12 +13498,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -12898,8 +13558,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -12909,12 +13574,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -12930,7 +13597,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -12966,6 +13632,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -13731,7 +14398,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -13834,19 +14502,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -13876,6 +14567,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -13885,6 +14582,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -13999,12 +14699,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -14091,9 +14791,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -14109,6 +14806,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -14141,210 +14841,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/gprofng/Makefile.in b/gprofng/Makefile.in
+index fd5279b4df1..6e74c7b302a 100644
+--- a/gprofng/Makefile.in
++++ b/gprofng/Makefile.in
+@@ -253,6 +253,7 @@ CXXFLAGS = @CXXFLAGS@
+ CYGPATH_W = @CYGPATH_W@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -290,6 +291,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ NM = @NM@
+ NMEDIT = @NMEDIT@
+diff --git a/gprofng/configure b/gprofng/configure
+index ac14d126ac0..f8d7685a72e 100755
+--- a/gprofng/configure
++++ b/gprofng/configure
+@@ -672,6 +672,8 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -802,6 +804,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_werror_always
+ enable_gprofng_tools
+@@ -1465,6 +1468,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-jdk=PATH         specify prefix directory for installed JDK.
+   --with-system-zlib      use installed libz
+ 
+@@ -6156,8 +6161,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -6197,7 +6202,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -6890,8 +6895,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -6940,6 +6945,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -6956,6 +7035,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -7124,7 +7208,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -7278,6 +7363,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -7293,6 +7393,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -7307,8 +7558,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -7324,7 +7577,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -7344,11 +7597,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -7364,7 +7621,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -7383,6 +7640,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -7394,29 +7655,81 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
++
++
++
++
++
++
+ 
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
+ 
+ 
+ 
+@@ -7763,8 +8076,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -7800,6 +8113,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -7841,6 +8155,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -7852,7 +8178,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -7878,8 +8204,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -7889,8 +8215,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -7927,6 +8253,13 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
+ 
+ 
+ 
+@@ -7946,6 +8279,48 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
++
++
++
+ 
+ 
+ # Check whether --enable-libtool-lock was given.
+@@ -8154,6 +8529,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -8717,6 +9209,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8785,6 +9279,16 @@ done
+ 
+ 
+ 
++func_stripname_cnf ()
++{
++  case ${2} in
++  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
++  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
++  esac
++} # func_stripname_cnf
++
++
++
+ 
+ 
+ # Set options
+@@ -9270,8 +9774,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -9437,6 +9939,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -9499,7 +10007,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -9556,13 +10064,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -9623,6 +10135,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -9973,7 +10490,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -10072,12 +10590,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -10091,8 +10609,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -10110,8 +10628,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -10157,8 +10675,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -10288,7 +10806,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -10301,22 +10825,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -10328,7 +10859,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -10341,22 +10878,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -10401,20 +10945,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -10475,7 +11062,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -10483,7 +11070,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -10499,7 +11086,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -10523,10 +11110,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -10605,23 +11192,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -10706,7 +11306,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -10725,9 +11325,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -11303,8 +11903,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -11337,13 +11938,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -12221,7 +12880,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 12224 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -12265,10 +12924,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -12327,7 +12986,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 12330 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -12371,10 +13030,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -12766,6 +13425,7 @@ $RM -r conftest*
+ 
+   # Allow CC to be a program name with arguments.
+   lt_save_CC=$CC
++  lt_save_CFLAGS=$CFLAGS
+   lt_save_LD=$LD
+   lt_save_GCC=$GCC
+   GCC=$GXX
+@@ -12783,6 +13443,7 @@ $RM -r conftest*
+   fi
+   test -z "${LDCXX+set}" || LD=$LDCXX
+   CC=${CXX-"c++"}
++  CFLAGS=$CXXFLAGS
+   compiler=$CC
+   compiler_CXX=$CC
+   for cc_temp in $compiler""; do
+@@ -13065,7 +13726,13 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
+           allow_undefined_flag_CXX='-berok'
+           # Determine the default libpath from the value encoded in an empty
+           # executable.
+-          cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++          if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath__CXX+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -13078,22 +13745,29 @@ main ()
+ _ACEOF
+ if ac_fn_cxx_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath__CXX
++fi
+ 
+           hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 
+@@ -13106,7 +13780,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+           else
+ 	    # Determine the default libpath from the value encoded in an
+ 	    # empty executable.
+-	    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	    if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath__CXX+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -13119,22 +13799,29 @@ main ()
+ _ACEOF
+ if ac_fn_cxx_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath__CXX
++fi
+ 
+ 	    hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	    # Warning - without using the other run time loading flags,
+@@ -13177,29 +13864,75 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+         ;;
+ 
+       cygwin* | mingw* | pw32* | cegcc*)
+-        # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
+-        # as there is no search path for DLLs.
+-        hardcode_libdir_flag_spec_CXX='-L$libdir'
+-        export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
+-        allow_undefined_flag_CXX=unsupported
+-        always_export_symbols_CXX=no
+-        enable_shared_with_static_runtimes_CXX=yes
+-
+-        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+-          archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-          # If the export-symbols file already is a .def file (1st line
+-          # is EXPORTS), use it as is; otherwise, prepend...
+-          archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+-	    cp $export_symbols $output_objdir/$soname.def;
+-          else
+-	    echo EXPORTS > $output_objdir/$soname.def;
+-	    cat $export_symbols >> $output_objdir/$soname.def;
+-          fi~
+-          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-        else
+-          ld_shlibs_CXX=no
+-        fi
+-        ;;
++	case $GXX,$cc_basename in
++	,cl* | no,cl*)
++	  # Native MSVC
++	  # hardcode_libdir_flag_spec is actually meaningless, as there is
++	  # no search path for DLLs.
++	  hardcode_libdir_flag_spec_CXX=' '
++	  allow_undefined_flag_CXX=unsupported
++	  always_export_symbols_CXX=yes
++	  file_list_spec_CXX='@'
++	  # Tell ltmain to make .lib files, not .a files.
++	  libext=lib
++	  # Tell ltmain to make .dll files, not .so files.
++	  shrext_cmds=".dll"
++	  # FIXME: Setting linknames here is a bad hack.
++	  archive_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	  archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	    else
++	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	    fi~
++	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	    linknames='
++	  # The linker will not automatically build a static lib if we build a DLL.
++	  # _LT_TAGVAR(old_archive_from_new_cmds, CXX)='true'
++	  enable_shared_with_static_runtimes_CXX=yes
++	  # Don't use ranlib
++	  old_postinstall_cmds_CXX='chmod 644 $oldlib'
++	  postlink_cmds_CXX='lt_outputfile="@OUTPUT@"~
++	    lt_tool_outputfile="@TOOL_OUTPUT@"~
++	    case $lt_outputfile in
++	      *.exe|*.EXE) ;;
++	      *)
++		lt_outputfile="$lt_outputfile.exe"
++		lt_tool_outputfile="$lt_tool_outputfile.exe"
++		;;
++	    esac~
++	    func_to_tool_file "$lt_outputfile"~
++	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	      $RM "$lt_outputfile.manifest";
++	    fi'
++	  ;;
++	*)
++	  # g++
++	  # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
++	  # as there is no search path for DLLs.
++	  hardcode_libdir_flag_spec_CXX='-L$libdir'
++	  export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
++	  allow_undefined_flag_CXX=unsupported
++	  always_export_symbols_CXX=no
++	  enable_shared_with_static_runtimes_CXX=yes
++
++	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
++	    archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	    # If the export-symbols file already is a .def file (1st line
++	    # is EXPORTS), use it as is; otherwise, prepend...
++	    archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      cp $export_symbols $output_objdir/$soname.def;
++	    else
++	      echo EXPORTS > $output_objdir/$soname.def;
++	      cat $export_symbols >> $output_objdir/$soname.def;
++	    fi~
++	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	  else
++	    ld_shlibs_CXX=no
++	  fi
++	  ;;
++	esac
++	;;
+       darwin* | rhapsody*)
+ 
+ 
+@@ -13305,7 +14038,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+             ;;
+           *)
+             if test "$GXX" = yes; then
+-              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+             else
+               # FIXME: insert proper C++ library support
+               ld_shlibs_CXX=no
+@@ -13376,10 +14109,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          ia64*)
+-	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          *)
+-	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	        esac
+ 	      fi
+@@ -13420,9 +14153,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+           *)
+ 	    if test "$GXX" = yes; then
+ 	      if test "$with_gnu_ld" = no; then
+-	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	      else
+-	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
+ 	      fi
+ 	    fi
+ 	    link_all_deplibs_CXX=yes
+@@ -13492,20 +14225,20 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	      prelink_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
+-		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
++		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
+ 	      old_archive_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
+-		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
++		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
+ 		$RANLIB $oldlib'
+ 	      archive_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+ 	      archive_expsym_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
+ 	      ;;
+ 	    *) # Version 6 and above use weak symbols
+ 	      archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+@@ -13700,7 +14433,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	        *)
+-	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	          archive_cmds_CXX='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	      esac
+ 
+@@ -13746,7 +14479,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+       solaris*)
+         case $cc_basename in
+-          CC*)
++          CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+             archive_cmds_need_lc_CXX=yes
+ 	    no_undefined_flag_CXX=' -zdefs'
+@@ -13787,9 +14520,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ 	      no_undefined_flag_CXX=' ${wl}-z ${wl}defs'
+ 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
+-	        archive_cmds_CXX='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ 	        archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
++		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
+ 
+ 	        # Commands to make compiler produce verbose output that lists
+ 	        # what "hidden" libraries, object files and flags are used when
+@@ -13924,6 +14657,13 @@ private:
+ };
+ _LT_EOF
+ 
++
++_lt_libdeps_save_CFLAGS=$CFLAGS
++case "$CC $CFLAGS " in #(
++*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
++*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
++esac
++
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+   (eval $ac_compile) 2>&5
+   ac_status=$?
+@@ -13937,7 +14677,7 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+   pre_test_object_deps_done=no
+ 
+   for p in `eval "$output_verbose_link_cmd"`; do
+-    case $p in
++    case ${prev}${p} in
+ 
+     -L* | -R* | -l*)
+        # Some compilers place space between "-{L,R}" and the path.
+@@ -13946,13 +14686,22 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+           test $p = "-R"; then
+ 	 prev=$p
+ 	 continue
+-       else
+-	 prev=
+        fi
+ 
++       # Expand the sysroot to ease extracting the directories later.
++       if test -z "$prev"; then
++         case $p in
++         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
++         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
++         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
++         esac
++       fi
++       case $p in
++       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
++       esac
+        if test "$pre_test_object_deps_done" = no; then
+-	 case $p in
+-	 -L* | -R*)
++	 case ${prev} in
++	 -L | -R)
+ 	   # Internal compiler library paths should come after those
+ 	   # provided the user.  The postdeps already come after the
+ 	   # user supplied libs so there is no need to process them.
+@@ -13972,8 +14721,10 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ 	   postdeps_CXX="${postdeps_CXX} ${prev}${p}"
+ 	 fi
+        fi
++       prev=
+        ;;
+ 
++    *.lto.$objext) ;; # Ignore GCC LTO objects
+     *.$objext)
+        # This assumes that the test object file only shows up
+        # once in the compiler output.
+@@ -14009,6 +14760,7 @@ else
+ fi
+ 
+ $RM -f confest.$objext
++CFLAGS=$_lt_libdeps_save_CFLAGS
+ 
+ # PORTME: override above test on systems where it is broken
+ case $host_os in
+@@ -14044,7 +14796,7 @@ linux*)
+ 
+ solaris*)
+   case $cc_basename in
+-  CC*)
++  CC* | sunCC*)
+     # The more standards-conforming stlport4 library is
+     # incompatible with the Cstd library. Avoid specifying
+     # it if it's in CXXFLAGS. Ignore libCrun as
+@@ -14109,8 +14861,6 @@ fi
+ lt_prog_compiler_pic_CXX=
+ lt_prog_compiler_static_CXX=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   # C++ specific cases for pic, static, wl, etc.
+   if test "$GXX" = yes; then
+@@ -14215,6 +14965,11 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	  ;;
+ 	esac
+ 	;;
++      mingw* | cygwin* | os2* | pw32* | cegcc*)
++	# This hack is so that the source file can tell whether it is being
++	# built for inclusion in a dll (and should export symbols for example).
++	lt_prog_compiler_pic_CXX='-DDLL_EXPORT'
++	;;
+       dgux*)
+ 	case $cc_basename in
+ 	  ec++*)
+@@ -14367,7 +15122,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	;;
+       solaris*)
+ 	case $cc_basename in
+-	  CC*)
++	  CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+ 	    lt_prog_compiler_pic_CXX='-KPIC'
+ 	    lt_prog_compiler_static_CXX='-Bstatic'
+@@ -14432,10 +15187,17 @@ case $host_os in
+     lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic_CXX" >&5
+-$as_echo "$lt_prog_compiler_pic_CXX" >&6; }
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic_CXX+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic_CXX=$lt_prog_compiler_pic_CXX
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_CXX" >&5
++$as_echo "$lt_cv_prog_compiler_pic_CXX" >&6; }
++lt_prog_compiler_pic_CXX=$lt_cv_prog_compiler_pic_CXX
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -14493,6 +15255,8 @@ fi
+ 
+ 
+ 
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -14670,6 +15434,7 @@ fi
+ $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; }
+ 
+   export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
++  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
+   case $host_os in
+   aix[4-9]*)
+     # If we're using GNU nm, then we don't want the "-C" option.
+@@ -14684,15 +15449,20 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
+     ;;
+   pw32*)
+     export_symbols_cmds_CXX="$ltdll_cmds"
+-  ;;
++    ;;
+   cygwin* | mingw* | cegcc*)
+-    export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;/^.*[ ]__nm__/s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    case $cc_basename in
++    cl*) ;;
++    *)
++      export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms_CXX='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
++      ;;
++    esac
++    ;;
+   *)
+     export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    ;;
+   esac
+-  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
+ 
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5
+ $as_echo "$ld_shlibs_CXX" >&6; }
+@@ -14955,8 +15725,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -14988,13 +15759,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -15534,6 +16363,7 @@ fi
+   fi # test -n "$compiler"
+ 
+   CC=$lt_save_CC
++  CFLAGS=$lt_save_CFLAGS
+   LDCXX=$LD
+   LD=$lt_save_LD
+   GCC=$lt_save_GCC
+@@ -17663,13 +18493,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -17684,14 +18521,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -17724,12 +18564,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -17768,8 +18608,8 @@ old_archive_cmds_CXX='`$ECHO "$old_archive_cmds_CXX" | $SED "$delay_single_quote
+ compiler_CXX='`$ECHO "$compiler_CXX" | $SED "$delay_single_quote_subst"`'
+ GCC_CXX='`$ECHO "$GCC_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag_CXX='`$ECHO "$lt_prog_compiler_no_builtin_flag_CXX" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic_CXX='`$ECHO "$lt_prog_compiler_pic_CXX" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static_CXX='`$ECHO "$lt_prog_compiler_static_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o_CXX='`$ECHO "$lt_cv_prog_compiler_c_o_CXX" | $SED "$delay_single_quote_subst"`'
+ archive_cmds_need_lc_CXX='`$ECHO "$archive_cmds_need_lc_CXX" | $SED "$delay_single_quote_subst"`'
+@@ -17796,12 +18636,12 @@ hardcode_shlibpath_var_CXX='`$ECHO "$hardcode_shlibpath_var_CXX" | $SED "$delay_
+ hardcode_automatic_CXX='`$ECHO "$hardcode_automatic_CXX" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath_CXX='`$ECHO "$inherit_rpath_CXX" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs_CXX='`$ECHO "$link_all_deplibs_CXX" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path_CXX='`$ECHO "$fix_srcfile_path_CXX" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols_CXX='`$ECHO "$always_export_symbols_CXX" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds_CXX='`$ECHO "$export_symbols_cmds_CXX" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms_CXX='`$ECHO "$exclude_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
+ include_expsyms_CXX='`$ECHO "$include_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds_CXX='`$ECHO "$prelink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
++postlink_cmds_CXX='`$ECHO "$postlink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
+ file_list_spec_CXX='`$ECHO "$file_list_spec_CXX" | $SED "$delay_single_quote_subst"`'
+ hardcode_action_CXX='`$ECHO "$hardcode_action_CXX" | $SED "$delay_single_quote_subst"`'
+ compiler_lib_search_dirs_CXX='`$ECHO "$compiler_lib_search_dirs_CXX" | $SED "$delay_single_quote_subst"`'
+@@ -17839,8 +18679,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -17850,12 +18695,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -17871,7 +18718,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -17893,8 +18739,8 @@ LD_CXX \
+ reload_flag_CXX \
+ compiler_CXX \
+ lt_prog_compiler_no_builtin_flag_CXX \
+-lt_prog_compiler_wl_CXX \
+ lt_prog_compiler_pic_CXX \
++lt_prog_compiler_wl_CXX \
+ lt_prog_compiler_static_CXX \
+ lt_cv_prog_compiler_c_o_CXX \
+ export_dynamic_flag_spec_CXX \
+@@ -17906,7 +18752,6 @@ no_undefined_flag_CXX \
+ hardcode_libdir_flag_spec_CXX \
+ hardcode_libdir_flag_spec_ld_CXX \
+ hardcode_libdir_separator_CXX \
+-fix_srcfile_path_CXX \
+ exclude_expsyms_CXX \
+ include_expsyms_CXX \
+ file_list_spec_CXX \
+@@ -17940,6 +18785,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -17954,7 +18800,8 @@ archive_expsym_cmds_CXX \
+ module_cmds_CXX \
+ module_expsym_cmds_CXX \
+ export_symbols_cmds_CXX \
+-prelink_cmds_CXX; do
++prelink_cmds_CXX \
++postlink_cmds_CXX; do
+     case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
+     *[\\\\\\\`\\"\\\$]*)
+       eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\""
+@@ -18711,7 +19558,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -18814,19 +19662,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -18856,6 +19727,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -18865,6 +19742,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -18979,12 +19859,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -19071,9 +19951,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -19089,6 +19966,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -19135,210 +20015,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+@@ -19366,12 +20205,12 @@ with_gcc=$GCC_CXX
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl_CXX
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic_CXX
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl_CXX
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static_CXX
+ 
+@@ -19458,9 +20297,6 @@ inherit_rpath=$inherit_rpath_CXX
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs_CXX
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path_CXX
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols_CXX
+ 
+@@ -19476,6 +20312,9 @@ include_expsyms=$lt_include_expsyms_CXX
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds_CXX
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds_CXX
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec_CXX
+ 
+diff --git a/gprofng/doc/Makefile.in b/gprofng/doc/Makefile.in
+index 4050586f6a8..394469e3768 100644
+--- a/gprofng/doc/Makefile.in
++++ b/gprofng/doc/Makefile.in
+@@ -237,6 +237,7 @@ CXXFLAGS = @CXXFLAGS@
+ CYGPATH_W = @CYGPATH_W@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -272,6 +273,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ NM = @NM@
+ NMEDIT = @NMEDIT@
+diff --git a/gprofng/gp-display-html/Makefile.in b/gprofng/gp-display-html/Makefile.in
+index 1206a79d3f0..2f763e5f760 100644
+--- a/gprofng/gp-display-html/Makefile.in
++++ b/gprofng/gp-display-html/Makefile.in
+@@ -200,6 +200,7 @@ CXXFLAGS = @CXXFLAGS@
+ CYGPATH_W = @CYGPATH_W@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -235,6 +236,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ NM = @NM@
+ NMEDIT = @NMEDIT@
+diff --git a/gprofng/libcollector/Makefile.in b/gprofng/libcollector/Makefile.in
+index 9372c6dea78..0cf4f58c0ec 100644
+--- a/gprofng/libcollector/Makefile.in
++++ b/gprofng/libcollector/Makefile.in
+@@ -316,6 +316,7 @@ CXXFLAGS = @CXXFLAGS@
+ CYGPATH_W = @CYGPATH_W@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -342,6 +343,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ NM = @NM@
+ NMEDIT = @NMEDIT@
+diff --git a/gprofng/libcollector/configure b/gprofng/libcollector/configure
+index ec38721ced2..d9daed11e3f 100755
+--- a/gprofng/libcollector/configure
++++ b/gprofng/libcollector/configure
+@@ -641,6 +641,8 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -770,6 +772,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ '
+       ac_precious_vars='build_alias
+@@ -1425,6 +1428,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+ 
+ Some influential environment variables:
+   CC          C compiler command
+@@ -5969,8 +5974,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -6010,7 +6015,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -6703,8 +6708,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -6753,6 +6758,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -6769,6 +6848,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -6937,7 +7021,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -7091,6 +7176,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -7106,6 +7206,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -7120,8 +7371,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -7137,7 +7390,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -7157,11 +7410,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -7177,7 +7434,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -7196,6 +7453,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -7207,29 +7468,81 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
++
++
++
++
++
++
+ 
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
+ 
+ 
+ 
+@@ -7576,8 +7889,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -7613,6 +7926,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -7654,6 +7968,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -7665,7 +7991,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -7691,8 +8017,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -7702,8 +8028,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -7740,6 +8066,13 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
+ 
+ 
+ 
+@@ -7759,6 +8092,48 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
++
++
++
+ 
+ 
+ # Check whether --enable-libtool-lock was given.
+@@ -7967,6 +8342,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -8530,6 +9022,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8598,6 +9092,16 @@ done
+ 
+ 
+ 
++func_stripname_cnf ()
++{
++  case ${2} in
++  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
++  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
++  esac
++} # func_stripname_cnf
++
++
++
+ 
+ 
+ # Set options
+@@ -9113,8 +9617,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -9280,6 +9782,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -9342,7 +9850,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -9399,13 +9907,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -9466,6 +9978,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -9816,7 +10333,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -9915,12 +10433,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -9934,8 +10452,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -9953,8 +10471,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -10000,8 +10518,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -10131,7 +10649,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -10144,22 +10668,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -10171,7 +10702,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -10184,22 +10721,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -10243,21 +10787,64 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # When not using gcc, we currently assume that we are using
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+-      # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      # no search path for DLLs.
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -10318,7 +10905,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -10326,7 +10913,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -10342,7 +10929,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -10366,10 +10953,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -10448,23 +11035,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -10549,7 +11149,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -10568,9 +11168,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -11146,8 +11746,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -11180,13 +11781,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -12064,7 +12723,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 12067 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -12108,10 +12767,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -12170,7 +12829,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 12173 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -12214,10 +12873,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -12609,6 +13268,7 @@ $RM -r conftest*
+ 
+   # Allow CC to be a program name with arguments.
+   lt_save_CC=$CC
++  lt_save_CFLAGS=$CFLAGS
+   lt_save_LD=$LD
+   lt_save_GCC=$GCC
+   GCC=$GXX
+@@ -12626,6 +13286,7 @@ $RM -r conftest*
+   fi
+   test -z "${LDCXX+set}" || LD=$LDCXX
+   CC=${CXX-"c++"}
++  CFLAGS=$CXXFLAGS
+   compiler=$CC
+   compiler_CXX=$CC
+   for cc_temp in $compiler""; do
+@@ -12908,7 +13569,13 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
+           allow_undefined_flag_CXX='-berok'
+           # Determine the default libpath from the value encoded in an empty
+           # executable.
+-          cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++          if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath__CXX+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -12921,22 +13588,29 @@ main ()
+ _ACEOF
+ if ac_fn_cxx_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath__CXX
++fi
+ 
+           hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 
+@@ -12949,7 +13623,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+           else
+ 	    # Determine the default libpath from the value encoded in an
+ 	    # empty executable.
+-	    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	    if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath__CXX+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -12962,22 +13642,29 @@ main ()
+ _ACEOF
+ if ac_fn_cxx_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath__CXX
++fi
+ 
+ 	    hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	    # Warning - without using the other run time loading flags,
+@@ -13020,29 +13707,75 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+         ;;
+ 
+       cygwin* | mingw* | pw32* | cegcc*)
+-        # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
+-        # as there is no search path for DLLs.
+-        hardcode_libdir_flag_spec_CXX='-L$libdir'
+-        export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
+-        allow_undefined_flag_CXX=unsupported
+-        always_export_symbols_CXX=no
+-        enable_shared_with_static_runtimes_CXX=yes
+-
+-        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+-          archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-          # If the export-symbols file already is a .def file (1st line
+-          # is EXPORTS), use it as is; otherwise, prepend...
+-          archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+-	    cp $export_symbols $output_objdir/$soname.def;
+-          else
+-	    echo EXPORTS > $output_objdir/$soname.def;
+-	    cat $export_symbols >> $output_objdir/$soname.def;
+-          fi~
+-          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-        else
+-          ld_shlibs_CXX=no
+-        fi
+-        ;;
++	case $GXX,$cc_basename in
++	,cl* | no,cl*)
++	  # Native MSVC
++	  # hardcode_libdir_flag_spec is actually meaningless, as there is
++	  # no search path for DLLs.
++	  hardcode_libdir_flag_spec_CXX=' '
++	  allow_undefined_flag_CXX=unsupported
++	  always_export_symbols_CXX=yes
++	  file_list_spec_CXX='@'
++	  # Tell ltmain to make .lib files, not .a files.
++	  libext=lib
++	  # Tell ltmain to make .dll files, not .so files.
++	  shrext_cmds=".dll"
++	  # FIXME: Setting linknames here is a bad hack.
++	  archive_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	  archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	    else
++	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	    fi~
++	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	    linknames='
++	  # The linker will not automatically build a static lib if we build a DLL.
++	  # _LT_TAGVAR(old_archive_from_new_cmds, CXX)='true'
++	  enable_shared_with_static_runtimes_CXX=yes
++	  # Don't use ranlib
++	  old_postinstall_cmds_CXX='chmod 644 $oldlib'
++	  postlink_cmds_CXX='lt_outputfile="@OUTPUT@"~
++	    lt_tool_outputfile="@TOOL_OUTPUT@"~
++	    case $lt_outputfile in
++	      *.exe|*.EXE) ;;
++	      *)
++		lt_outputfile="$lt_outputfile.exe"
++		lt_tool_outputfile="$lt_tool_outputfile.exe"
++		;;
++	    esac~
++	    func_to_tool_file "$lt_outputfile"~
++	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	      $RM "$lt_outputfile.manifest";
++	    fi'
++	  ;;
++	*)
++	  # g++
++	  # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
++	  # as there is no search path for DLLs.
++	  hardcode_libdir_flag_spec_CXX='-L$libdir'
++	  export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
++	  allow_undefined_flag_CXX=unsupported
++	  always_export_symbols_CXX=no
++	  enable_shared_with_static_runtimes_CXX=yes
++
++	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
++	    archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	    # If the export-symbols file already is a .def file (1st line
++	    # is EXPORTS), use it as is; otherwise, prepend...
++	    archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      cp $export_symbols $output_objdir/$soname.def;
++	    else
++	      echo EXPORTS > $output_objdir/$soname.def;
++	      cat $export_symbols >> $output_objdir/$soname.def;
++	    fi~
++	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	  else
++	    ld_shlibs_CXX=no
++	  fi
++	  ;;
++	esac
++	;;
+       darwin* | rhapsody*)
+ 
+ 
+@@ -13148,7 +13881,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+             ;;
+           *)
+             if test "$GXX" = yes; then
+-              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+             else
+               # FIXME: insert proper C++ library support
+               ld_shlibs_CXX=no
+@@ -13219,10 +13952,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          ia64*)
+-	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          *)
+-	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	        esac
+ 	      fi
+@@ -13263,9 +13996,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+           *)
+ 	    if test "$GXX" = yes; then
+ 	      if test "$with_gnu_ld" = no; then
+-	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	      else
+-	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
+ 	      fi
+ 	    fi
+ 	    link_all_deplibs_CXX=yes
+@@ -13335,20 +14068,20 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	      prelink_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
+-		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
++		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
+ 	      old_archive_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
+-		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
++		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
+ 		$RANLIB $oldlib'
+ 	      archive_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+ 	      archive_expsym_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
+ 	      ;;
+ 	    *) # Version 6 and above use weak symbols
+ 	      archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+@@ -13543,7 +14276,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	        *)
+-	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	          archive_cmds_CXX='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	      esac
+ 
+@@ -13589,7 +14322,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+       solaris*)
+         case $cc_basename in
+-          CC*)
++          CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+             archive_cmds_need_lc_CXX=yes
+ 	    no_undefined_flag_CXX=' -zdefs'
+@@ -13630,9 +14363,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ 	      no_undefined_flag_CXX=' ${wl}-z ${wl}defs'
+ 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
+-	        archive_cmds_CXX='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ 	        archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
++		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
+ 
+ 	        # Commands to make compiler produce verbose output that lists
+ 	        # what "hidden" libraries, object files and flags are used when
+@@ -13767,6 +14500,13 @@ private:
+ };
+ _LT_EOF
+ 
++
++_lt_libdeps_save_CFLAGS=$CFLAGS
++case "$CC $CFLAGS " in #(
++*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
++*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
++esac
++
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+   (eval $ac_compile) 2>&5
+   ac_status=$?
+@@ -13780,7 +14520,7 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+   pre_test_object_deps_done=no
+ 
+   for p in `eval "$output_verbose_link_cmd"`; do
+-    case $p in
++    case ${prev}${p} in
+ 
+     -L* | -R* | -l*)
+        # Some compilers place space between "-{L,R}" and the path.
+@@ -13789,13 +14529,22 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+           test $p = "-R"; then
+ 	 prev=$p
+ 	 continue
+-       else
+-	 prev=
+        fi
+ 
++       # Expand the sysroot to ease extracting the directories later.
++       if test -z "$prev"; then
++         case $p in
++         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
++         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
++         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
++         esac
++       fi
++       case $p in
++       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
++       esac
+        if test "$pre_test_object_deps_done" = no; then
+-	 case $p in
+-	 -L* | -R*)
++	 case ${prev} in
++	 -L | -R)
+ 	   # Internal compiler library paths should come after those
+ 	   # provided the user.  The postdeps already come after the
+ 	   # user supplied libs so there is no need to process them.
+@@ -13815,8 +14564,10 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ 	   postdeps_CXX="${postdeps_CXX} ${prev}${p}"
+ 	 fi
+        fi
++       prev=
+        ;;
+ 
++    *.lto.$objext) ;; # Ignore GCC LTO objects
+     *.$objext)
+        # This assumes that the test object file only shows up
+        # once in the compiler output.
+@@ -13852,6 +14603,7 @@ else
+ fi
+ 
+ $RM -f confest.$objext
++CFLAGS=$_lt_libdeps_save_CFLAGS
+ 
+ # PORTME: override above test on systems where it is broken
+ case $host_os in
+@@ -13887,7 +14639,7 @@ linux*)
+ 
+ solaris*)
+   case $cc_basename in
+-  CC*)
++  CC* | sunCC*)
+     # The more standards-conforming stlport4 library is
+     # incompatible with the Cstd library. Avoid specifying
+     # it if it's in CXXFLAGS. Ignore libCrun as
+@@ -13952,8 +14704,6 @@ fi
+ lt_prog_compiler_pic_CXX=
+ lt_prog_compiler_static_CXX=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   # C++ specific cases for pic, static, wl, etc.
+   if test "$GXX" = yes; then
+@@ -14058,6 +14808,11 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	  ;;
+ 	esac
+ 	;;
++      mingw* | cygwin* | os2* | pw32* | cegcc*)
++	# This hack is so that the source file can tell whether it is being
++	# built for inclusion in a dll (and should export symbols for example).
++	lt_prog_compiler_pic_CXX='-DDLL_EXPORT'
++	;;
+       dgux*)
+ 	case $cc_basename in
+ 	  ec++*)
+@@ -14210,7 +14965,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	;;
+       solaris*)
+ 	case $cc_basename in
+-	  CC*)
++	  CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+ 	    lt_prog_compiler_pic_CXX='-KPIC'
+ 	    lt_prog_compiler_static_CXX='-Bstatic'
+@@ -14275,10 +15030,17 @@ case $host_os in
+     lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic_CXX" >&5
+-$as_echo "$lt_prog_compiler_pic_CXX" >&6; }
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic_CXX+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic_CXX=$lt_prog_compiler_pic_CXX
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_CXX" >&5
++$as_echo "$lt_cv_prog_compiler_pic_CXX" >&6; }
++lt_prog_compiler_pic_CXX=$lt_cv_prog_compiler_pic_CXX
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -14336,6 +15098,8 @@ fi
+ 
+ 
+ 
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -14513,6 +15277,7 @@ fi
+ $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; }
+ 
+   export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
++  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
+   case $host_os in
+   aix[4-9]*)
+     # If we're using GNU nm, then we don't want the "-C" option.
+@@ -14527,15 +15292,20 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
+     ;;
+   pw32*)
+     export_symbols_cmds_CXX="$ltdll_cmds"
+-  ;;
++    ;;
+   cygwin* | mingw* | cegcc*)
+-    export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;/^.*[ ]__nm__/s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    case $cc_basename in
++    cl*) ;;
++    *)
++      export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms_CXX='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
++      ;;
++    esac
++    ;;
+   *)
+     export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    ;;
+   esac
+-  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
+ 
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5
+ $as_echo "$ld_shlibs_CXX" >&6; }
+@@ -14798,8 +15568,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -14831,13 +15602,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -15377,6 +16206,7 @@ fi
+   fi # test -n "$compiler"
+ 
+   CC=$lt_save_CC
++  CFLAGS=$lt_save_CFLAGS
+   LDCXX=$LD
+   LD=$lt_save_LD
+   GCC=$lt_save_GCC
+@@ -16321,13 +17151,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -16342,14 +17179,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -16382,12 +17222,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -16426,8 +17266,8 @@ old_archive_cmds_CXX='`$ECHO "$old_archive_cmds_CXX" | $SED "$delay_single_quote
+ compiler_CXX='`$ECHO "$compiler_CXX" | $SED "$delay_single_quote_subst"`'
+ GCC_CXX='`$ECHO "$GCC_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag_CXX='`$ECHO "$lt_prog_compiler_no_builtin_flag_CXX" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic_CXX='`$ECHO "$lt_prog_compiler_pic_CXX" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static_CXX='`$ECHO "$lt_prog_compiler_static_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o_CXX='`$ECHO "$lt_cv_prog_compiler_c_o_CXX" | $SED "$delay_single_quote_subst"`'
+ archive_cmds_need_lc_CXX='`$ECHO "$archive_cmds_need_lc_CXX" | $SED "$delay_single_quote_subst"`'
+@@ -16454,12 +17294,12 @@ hardcode_shlibpath_var_CXX='`$ECHO "$hardcode_shlibpath_var_CXX" | $SED "$delay_
+ hardcode_automatic_CXX='`$ECHO "$hardcode_automatic_CXX" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath_CXX='`$ECHO "$inherit_rpath_CXX" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs_CXX='`$ECHO "$link_all_deplibs_CXX" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path_CXX='`$ECHO "$fix_srcfile_path_CXX" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols_CXX='`$ECHO "$always_export_symbols_CXX" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds_CXX='`$ECHO "$export_symbols_cmds_CXX" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms_CXX='`$ECHO "$exclude_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
+ include_expsyms_CXX='`$ECHO "$include_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds_CXX='`$ECHO "$prelink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
++postlink_cmds_CXX='`$ECHO "$postlink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
+ file_list_spec_CXX='`$ECHO "$file_list_spec_CXX" | $SED "$delay_single_quote_subst"`'
+ hardcode_action_CXX='`$ECHO "$hardcode_action_CXX" | $SED "$delay_single_quote_subst"`'
+ compiler_lib_search_dirs_CXX='`$ECHO "$compiler_lib_search_dirs_CXX" | $SED "$delay_single_quote_subst"`'
+@@ -16497,8 +17337,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -16508,12 +17353,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -16529,7 +17376,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -16551,8 +17397,8 @@ LD_CXX \
+ reload_flag_CXX \
+ compiler_CXX \
+ lt_prog_compiler_no_builtin_flag_CXX \
+-lt_prog_compiler_wl_CXX \
+ lt_prog_compiler_pic_CXX \
++lt_prog_compiler_wl_CXX \
+ lt_prog_compiler_static_CXX \
+ lt_cv_prog_compiler_c_o_CXX \
+ export_dynamic_flag_spec_CXX \
+@@ -16564,7 +17410,6 @@ no_undefined_flag_CXX \
+ hardcode_libdir_flag_spec_CXX \
+ hardcode_libdir_flag_spec_ld_CXX \
+ hardcode_libdir_separator_CXX \
+-fix_srcfile_path_CXX \
+ exclude_expsyms_CXX \
+ include_expsyms_CXX \
+ file_list_spec_CXX \
+@@ -16598,6 +17443,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -16612,7 +17458,8 @@ archive_expsym_cmds_CXX \
+ module_cmds_CXX \
+ module_expsym_cmds_CXX \
+ export_symbols_cmds_CXX \
+-prelink_cmds_CXX; do
++prelink_cmds_CXX \
++postlink_cmds_CXX; do
+     case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
+     *[\\\\\\\`\\"\\\$]*)
+       eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\""
+@@ -17366,7 +18213,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -17469,19 +18317,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -17511,6 +18382,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -17520,6 +18397,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -17634,12 +18514,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -17726,9 +18606,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -17744,6 +18621,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -17790,210 +18670,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+@@ -18021,12 +18860,12 @@ with_gcc=$GCC_CXX
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl_CXX
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic_CXX
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl_CXX
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static_CXX
+ 
+@@ -18113,9 +18952,6 @@ inherit_rpath=$inherit_rpath_CXX
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs_CXX
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path_CXX
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols_CXX
+ 
+@@ -18131,6 +18967,9 @@ include_expsyms=$lt_include_expsyms_CXX
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds_CXX
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds_CXX
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec_CXX
+ 
+diff --git a/gprofng/src/Makefile.in b/gprofng/src/Makefile.in
+index ba7fdd6e8ad..3a0fc5dbbe7 100644
+--- a/gprofng/src/Makefile.in
++++ b/gprofng/src/Makefile.in
+@@ -324,6 +324,7 @@ CXXFLAGS = @CXXFLAGS@
+ CYGPATH_W = @CYGPATH_W@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -359,6 +360,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ NM = @NM@
+ NMEDIT = @NMEDIT@
+diff --git a/ld/Makefile.in b/ld/Makefile.in
+index 782d4017a60..71bbe487aef 100644
+--- a/ld/Makefile.in
++++ b/ld/Makefile.in
+@@ -383,6 +383,7 @@ CYGPATH_W = @CYGPATH_W@
+ DATADIRNAME = @DATADIRNAME@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -433,6 +434,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ MKINSTALLDIRS = @MKINSTALLDIRS@
+ MSGFMT = @MSGFMT@
+@@ -481,6 +483,7 @@ abs_builddir = @abs_builddir@
+ abs_srcdir = @abs_srcdir@
+ abs_top_builddir = @abs_top_builddir@
+ abs_top_srcdir = @abs_top_srcdir@
++ac_ct_AR = @ac_ct_AR@
+ ac_ct_CC = @ac_ct_CC@
+ ac_ct_CXX = @ac_ct_CXX@
+ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+diff --git a/ld/configure b/ld/configure
+index d0a467ac101..45b20013a45 100755
+--- a/ld/configure
++++ b/ld/configure
+@@ -700,8 +700,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -828,6 +831,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -1552,6 +1556,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-lib-path=dir1:dir2...  set default LIB_PATH
+   --with-sysroot=DIR Search for usr/lib et al within DIR.
+   --with-system-zlib      use installed libz
+@@ -5399,8 +5405,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -5440,7 +5446,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -6133,8 +6139,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -6183,6 +6189,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -6199,6 +6279,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -6367,7 +6452,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6521,6 +6607,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6536,6 +6637,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -6550,8 +6802,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -6567,7 +6821,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6587,11 +6841,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -6607,7 +6865,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6626,6 +6884,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6637,25 +6899,19 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
+ 
+ 
+ 
+@@ -6667,6 +6923,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+ set dummy ${ac_tool_prefix}strip; ac_word=$2
+@@ -7006,8 +7320,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -7043,6 +7357,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -7084,6 +7399,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -7095,7 +7422,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -7121,8 +7448,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -7132,8 +7459,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -7170,6 +7497,17 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
+ 
+ 
+ 
+@@ -7186,6 +7524,44 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -7397,6 +7773,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7960,6 +8453,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8028,6 +8523,16 @@ done
+ 
+ 
+ 
++func_stripname_cnf ()
++{
++  case ${2} in
++  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
++  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
++  esac
++} # func_stripname_cnf
++
++
++
+ 
+ 
+ # Set options
+@@ -8543,8 +9048,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8710,6 +9213,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8772,7 +9281,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8829,13 +9338,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8896,6 +9409,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -9246,7 +9764,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -9345,12 +9864,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -9364,8 +9883,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -9383,8 +9902,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9430,8 +9949,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9561,7 +10080,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9574,22 +10099,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9601,7 +10133,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9614,22 +10152,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9673,21 +10218,64 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # When not using gcc, we currently assume that we are using
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+-      # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      # no search path for DLLs.
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9748,7 +10336,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9756,7 +10344,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9772,7 +10360,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9796,10 +10384,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9878,23 +10466,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9979,7 +10580,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9998,9 +10599,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10576,8 +11177,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10610,13 +11212,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -11494,7 +12154,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11494 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11538,10 +12198,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11600,7 +12260,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11600 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11644,10 +12304,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -12039,6 +12699,7 @@ $RM -r conftest*
+ 
+   # Allow CC to be a program name with arguments.
+   lt_save_CC=$CC
++  lt_save_CFLAGS=$CFLAGS
+   lt_save_LD=$LD
+   lt_save_GCC=$GCC
+   GCC=$GXX
+@@ -12056,6 +12717,7 @@ $RM -r conftest*
+   fi
+   test -z "${LDCXX+set}" || LD=$LDCXX
+   CC=${CXX-"c++"}
++  CFLAGS=$CXXFLAGS
+   compiler=$CC
+   compiler_CXX=$CC
+   for cc_temp in $compiler""; do
+@@ -12338,7 +13000,13 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
+           allow_undefined_flag_CXX='-berok'
+           # Determine the default libpath from the value encoded in an empty
+           # executable.
+-          cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++          if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath__CXX+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -12351,22 +13019,29 @@ main ()
+ _ACEOF
+ if ac_fn_cxx_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath__CXX
++fi
+ 
+           hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 
+@@ -12379,7 +13054,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+           else
+ 	    # Determine the default libpath from the value encoded in an
+ 	    # empty executable.
+-	    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	    if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath__CXX+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -12392,22 +13073,29 @@ main ()
+ _ACEOF
+ if ac_fn_cxx_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath__CXX
++fi
+ 
+ 	    hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	    # Warning - without using the other run time loading flags,
+@@ -12450,29 +13138,75 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+         ;;
+ 
+       cygwin* | mingw* | pw32* | cegcc*)
+-        # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
+-        # as there is no search path for DLLs.
+-        hardcode_libdir_flag_spec_CXX='-L$libdir'
+-        export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
+-        allow_undefined_flag_CXX=unsupported
+-        always_export_symbols_CXX=no
+-        enable_shared_with_static_runtimes_CXX=yes
+-
+-        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+-          archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-          # If the export-symbols file already is a .def file (1st line
+-          # is EXPORTS), use it as is; otherwise, prepend...
+-          archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+-	    cp $export_symbols $output_objdir/$soname.def;
+-          else
+-	    echo EXPORTS > $output_objdir/$soname.def;
+-	    cat $export_symbols >> $output_objdir/$soname.def;
+-          fi~
+-          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-        else
+-          ld_shlibs_CXX=no
+-        fi
+-        ;;
++	case $GXX,$cc_basename in
++	,cl* | no,cl*)
++	  # Native MSVC
++	  # hardcode_libdir_flag_spec is actually meaningless, as there is
++	  # no search path for DLLs.
++	  hardcode_libdir_flag_spec_CXX=' '
++	  allow_undefined_flag_CXX=unsupported
++	  always_export_symbols_CXX=yes
++	  file_list_spec_CXX='@'
++	  # Tell ltmain to make .lib files, not .a files.
++	  libext=lib
++	  # Tell ltmain to make .dll files, not .so files.
++	  shrext_cmds=".dll"
++	  # FIXME: Setting linknames here is a bad hack.
++	  archive_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	  archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	    else
++	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	    fi~
++	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	    linknames='
++	  # The linker will not automatically build a static lib if we build a DLL.
++	  # _LT_TAGVAR(old_archive_from_new_cmds, CXX)='true'
++	  enable_shared_with_static_runtimes_CXX=yes
++	  # Don't use ranlib
++	  old_postinstall_cmds_CXX='chmod 644 $oldlib'
++	  postlink_cmds_CXX='lt_outputfile="@OUTPUT@"~
++	    lt_tool_outputfile="@TOOL_OUTPUT@"~
++	    case $lt_outputfile in
++	      *.exe|*.EXE) ;;
++	      *)
++		lt_outputfile="$lt_outputfile.exe"
++		lt_tool_outputfile="$lt_tool_outputfile.exe"
++		;;
++	    esac~
++	    func_to_tool_file "$lt_outputfile"~
++	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	      $RM "$lt_outputfile.manifest";
++	    fi'
++	  ;;
++	*)
++	  # g++
++	  # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
++	  # as there is no search path for DLLs.
++	  hardcode_libdir_flag_spec_CXX='-L$libdir'
++	  export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
++	  allow_undefined_flag_CXX=unsupported
++	  always_export_symbols_CXX=no
++	  enable_shared_with_static_runtimes_CXX=yes
++
++	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
++	    archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	    # If the export-symbols file already is a .def file (1st line
++	    # is EXPORTS), use it as is; otherwise, prepend...
++	    archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      cp $export_symbols $output_objdir/$soname.def;
++	    else
++	      echo EXPORTS > $output_objdir/$soname.def;
++	      cat $export_symbols >> $output_objdir/$soname.def;
++	    fi~
++	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	  else
++	    ld_shlibs_CXX=no
++	  fi
++	  ;;
++	esac
++	;;
+       darwin* | rhapsody*)
+ 
+ 
+@@ -12578,7 +13312,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+             ;;
+           *)
+             if test "$GXX" = yes; then
+-              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+             else
+               # FIXME: insert proper C++ library support
+               ld_shlibs_CXX=no
+@@ -12649,10 +13383,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          ia64*)
+-	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          *)
+-	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	        esac
+ 	      fi
+@@ -12693,9 +13427,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+           *)
+ 	    if test "$GXX" = yes; then
+ 	      if test "$with_gnu_ld" = no; then
+-	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	      else
+-	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
+ 	      fi
+ 	    fi
+ 	    link_all_deplibs_CXX=yes
+@@ -12765,20 +13499,20 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	      prelink_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
+-		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
++		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
+ 	      old_archive_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
+-		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
++		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
+ 		$RANLIB $oldlib'
+ 	      archive_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+ 	      archive_expsym_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
+ 	      ;;
+ 	    *) # Version 6 and above use weak symbols
+ 	      archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+@@ -12973,7 +13707,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	        *)
+-	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	          archive_cmds_CXX='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	      esac
+ 
+@@ -13019,7 +13753,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+       solaris*)
+         case $cc_basename in
+-          CC*)
++          CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+             archive_cmds_need_lc_CXX=yes
+ 	    no_undefined_flag_CXX=' -zdefs'
+@@ -13060,9 +13794,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ 	      no_undefined_flag_CXX=' ${wl}-z ${wl}defs'
+ 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
+-	        archive_cmds_CXX='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ 	        archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
++		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
+ 
+ 	        # Commands to make compiler produce verbose output that lists
+ 	        # what "hidden" libraries, object files and flags are used when
+@@ -13197,6 +13931,13 @@ private:
+ };
+ _LT_EOF
+ 
++
++_lt_libdeps_save_CFLAGS=$CFLAGS
++case "$CC $CFLAGS " in #(
++*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
++*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
++esac
++
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+   (eval $ac_compile) 2>&5
+   ac_status=$?
+@@ -13210,7 +13951,7 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+   pre_test_object_deps_done=no
+ 
+   for p in `eval "$output_verbose_link_cmd"`; do
+-    case $p in
++    case ${prev}${p} in
+ 
+     -L* | -R* | -l*)
+        # Some compilers place space between "-{L,R}" and the path.
+@@ -13219,13 +13960,22 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+           test $p = "-R"; then
+ 	 prev=$p
+ 	 continue
+-       else
+-	 prev=
+        fi
+ 
++       # Expand the sysroot to ease extracting the directories later.
++       if test -z "$prev"; then
++         case $p in
++         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
++         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
++         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
++         esac
++       fi
++       case $p in
++       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
++       esac
+        if test "$pre_test_object_deps_done" = no; then
+-	 case $p in
+-	 -L* | -R*)
++	 case ${prev} in
++	 -L | -R)
+ 	   # Internal compiler library paths should come after those
+ 	   # provided the user.  The postdeps already come after the
+ 	   # user supplied libs so there is no need to process them.
+@@ -13245,8 +13995,10 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ 	   postdeps_CXX="${postdeps_CXX} ${prev}${p}"
+ 	 fi
+        fi
++       prev=
+        ;;
+ 
++    *.lto.$objext) ;; # Ignore GCC LTO objects
+     *.$objext)
+        # This assumes that the test object file only shows up
+        # once in the compiler output.
+@@ -13282,6 +14034,7 @@ else
+ fi
+ 
+ $RM -f confest.$objext
++CFLAGS=$_lt_libdeps_save_CFLAGS
+ 
+ # PORTME: override above test on systems where it is broken
+ case $host_os in
+@@ -13317,7 +14070,7 @@ linux*)
+ 
+ solaris*)
+   case $cc_basename in
+-  CC*)
++  CC* | sunCC*)
+     # The more standards-conforming stlport4 library is
+     # incompatible with the Cstd library. Avoid specifying
+     # it if it's in CXXFLAGS. Ignore libCrun as
+@@ -13382,8 +14135,6 @@ fi
+ lt_prog_compiler_pic_CXX=
+ lt_prog_compiler_static_CXX=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   # C++ specific cases for pic, static, wl, etc.
+   if test "$GXX" = yes; then
+@@ -13488,6 +14239,11 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	  ;;
+ 	esac
+ 	;;
++      mingw* | cygwin* | os2* | pw32* | cegcc*)
++	# This hack is so that the source file can tell whether it is being
++	# built for inclusion in a dll (and should export symbols for example).
++	lt_prog_compiler_pic_CXX='-DDLL_EXPORT'
++	;;
+       dgux*)
+ 	case $cc_basename in
+ 	  ec++*)
+@@ -13640,7 +14396,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	;;
+       solaris*)
+ 	case $cc_basename in
+-	  CC*)
++	  CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+ 	    lt_prog_compiler_pic_CXX='-KPIC'
+ 	    lt_prog_compiler_static_CXX='-Bstatic'
+@@ -13705,10 +14461,17 @@ case $host_os in
+     lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic_CXX" >&5
+-$as_echo "$lt_prog_compiler_pic_CXX" >&6; }
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic_CXX+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic_CXX=$lt_prog_compiler_pic_CXX
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_CXX" >&5
++$as_echo "$lt_cv_prog_compiler_pic_CXX" >&6; }
++lt_prog_compiler_pic_CXX=$lt_cv_prog_compiler_pic_CXX
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -13766,6 +14529,8 @@ fi
+ 
+ 
+ 
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -13943,6 +14708,7 @@ fi
+ $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; }
+ 
+   export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
++  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
+   case $host_os in
+   aix[4-9]*)
+     # If we're using GNU nm, then we don't want the "-C" option.
+@@ -13957,15 +14723,20 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
+     ;;
+   pw32*)
+     export_symbols_cmds_CXX="$ltdll_cmds"
+-  ;;
++    ;;
+   cygwin* | mingw* | cegcc*)
+-    export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;/^.*[ ]__nm__/s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    case $cc_basename in
++    cl*) ;;
++    *)
++      export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms_CXX='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
++      ;;
++    esac
++    ;;
+   *)
+     export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    ;;
+   esac
+-  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
+ 
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5
+ $as_echo "$ld_shlibs_CXX" >&6; }
+@@ -14228,8 +14999,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -14261,13 +15033,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -14807,6 +15637,7 @@ fi
+   fi # test -n "$compiler"
+ 
+   CC=$lt_save_CC
++  CFLAGS=$lt_save_CFLAGS
+   LDCXX=$LD
+   LD=$lt_save_LD
+   GCC=$lt_save_GCC
+@@ -18172,13 +19003,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -18193,14 +19031,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -18233,12 +19074,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -18277,8 +19118,8 @@ old_archive_cmds_CXX='`$ECHO "$old_archive_cmds_CXX" | $SED "$delay_single_quote
+ compiler_CXX='`$ECHO "$compiler_CXX" | $SED "$delay_single_quote_subst"`'
+ GCC_CXX='`$ECHO "$GCC_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag_CXX='`$ECHO "$lt_prog_compiler_no_builtin_flag_CXX" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic_CXX='`$ECHO "$lt_prog_compiler_pic_CXX" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static_CXX='`$ECHO "$lt_prog_compiler_static_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o_CXX='`$ECHO "$lt_cv_prog_compiler_c_o_CXX" | $SED "$delay_single_quote_subst"`'
+ archive_cmds_need_lc_CXX='`$ECHO "$archive_cmds_need_lc_CXX" | $SED "$delay_single_quote_subst"`'
+@@ -18305,12 +19146,12 @@ hardcode_shlibpath_var_CXX='`$ECHO "$hardcode_shlibpath_var_CXX" | $SED "$delay_
+ hardcode_automatic_CXX='`$ECHO "$hardcode_automatic_CXX" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath_CXX='`$ECHO "$inherit_rpath_CXX" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs_CXX='`$ECHO "$link_all_deplibs_CXX" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path_CXX='`$ECHO "$fix_srcfile_path_CXX" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols_CXX='`$ECHO "$always_export_symbols_CXX" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds_CXX='`$ECHO "$export_symbols_cmds_CXX" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms_CXX='`$ECHO "$exclude_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
+ include_expsyms_CXX='`$ECHO "$include_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds_CXX='`$ECHO "$prelink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
++postlink_cmds_CXX='`$ECHO "$postlink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
+ file_list_spec_CXX='`$ECHO "$file_list_spec_CXX" | $SED "$delay_single_quote_subst"`'
+ hardcode_action_CXX='`$ECHO "$hardcode_action_CXX" | $SED "$delay_single_quote_subst"`'
+ compiler_lib_search_dirs_CXX='`$ECHO "$compiler_lib_search_dirs_CXX" | $SED "$delay_single_quote_subst"`'
+@@ -18348,8 +19189,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -18359,12 +19205,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -18380,7 +19228,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -18402,8 +19249,8 @@ LD_CXX \
+ reload_flag_CXX \
+ compiler_CXX \
+ lt_prog_compiler_no_builtin_flag_CXX \
+-lt_prog_compiler_wl_CXX \
+ lt_prog_compiler_pic_CXX \
++lt_prog_compiler_wl_CXX \
+ lt_prog_compiler_static_CXX \
+ lt_cv_prog_compiler_c_o_CXX \
+ export_dynamic_flag_spec_CXX \
+@@ -18415,7 +19262,6 @@ no_undefined_flag_CXX \
+ hardcode_libdir_flag_spec_CXX \
+ hardcode_libdir_flag_spec_ld_CXX \
+ hardcode_libdir_separator_CXX \
+-fix_srcfile_path_CXX \
+ exclude_expsyms_CXX \
+ include_expsyms_CXX \
+ file_list_spec_CXX \
+@@ -18449,6 +19295,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -18463,7 +19310,8 @@ archive_expsym_cmds_CXX \
+ module_cmds_CXX \
+ module_expsym_cmds_CXX \
+ export_symbols_cmds_CXX \
+-prelink_cmds_CXX; do
++prelink_cmds_CXX \
++postlink_cmds_CXX; do
+     case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
+     *[\\\\\\\`\\"\\\$]*)
+       eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\""
+@@ -19228,7 +20076,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -19331,19 +20180,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -19373,6 +20245,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -19382,6 +20260,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -19496,12 +20377,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -19588,9 +20469,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -19606,6 +20484,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -19652,210 +20533,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+@@ -19883,12 +20723,12 @@ with_gcc=$GCC_CXX
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl_CXX
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic_CXX
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl_CXX
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static_CXX
+ 
+@@ -19975,9 +20815,6 @@ inherit_rpath=$inherit_rpath_CXX
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs_CXX
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path_CXX
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols_CXX
+ 
+@@ -19993,6 +20830,9 @@ include_expsyms=$lt_include_expsyms_CXX
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds_CXX
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds_CXX
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec_CXX
+ 
+diff --git a/libbacktrace/Makefile.in b/libbacktrace/Makefile.in
+index e6a4c8e2ef3..3547e3649b7 100644
+--- a/libbacktrace/Makefile.in
++++ b/libbacktrace/Makefile.in
+@@ -827,6 +827,7 @@ CPP = @CPP@
+ CPPFLAGS = @CPPFLAGS@
+ CYGPATH_W = @CYGPATH_W@
+ DEFS = @DEFS@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ DWZ = @DWZ@
+@@ -854,6 +855,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ NM = @NM@
+ NMEDIT = @NMEDIT@
+@@ -886,6 +888,7 @@ abs_builddir = @abs_builddir@
+ abs_srcdir = @abs_srcdir@
+ abs_top_builddir = @abs_top_builddir@
+ abs_top_srcdir = @abs_top_srcdir@
++ac_ct_AR = @ac_ct_AR@
+ ac_ct_CC = @ac_ct_CC@
+ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+ am__leading_dot = @am__leading_dot@
+diff --git a/libbacktrace/configure b/libbacktrace/configure
+index 406b67b8cbc..b648da40aab 100755
+--- a/libbacktrace/configure
++++ b/libbacktrace/configure
+@@ -680,7 +680,10 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -798,6 +801,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_largefile
+ enable_cet
+@@ -1458,6 +1462,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-system-libunwind use installed libunwind
+ 
+ Some influential environment variables:
+@@ -5446,8 +5452,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -5487,7 +5493,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -6180,8 +6186,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -6230,6 +6236,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -6246,6 +6326,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -6414,7 +6499,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6568,6 +6654,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6583,6 +6684,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -6597,8 +6849,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -6614,7 +6868,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6634,11 +6888,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -6654,7 +6912,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6673,6 +6931,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6684,25 +6946,19 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
+ 
+ 
+ 
+@@ -6714,6 +6970,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+ set dummy ${ac_tool_prefix}strip; ac_word=$2
+@@ -7053,8 +7367,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -7090,6 +7404,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -7131,6 +7446,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -7142,7 +7469,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -7168,8 +7495,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -7179,8 +7506,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -7217,6 +7544,18 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
++
+ 
+ 
+ 
+@@ -7233,6 +7572,43 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -7444,6 +7820,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -8007,6 +8500,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8589,8 +9084,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8756,6 +9249,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8818,7 +9317,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8875,13 +9374,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8942,6 +9445,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -9292,7 +9800,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -9391,12 +9900,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -9410,8 +9919,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -9429,8 +9938,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9476,8 +9985,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9607,7 +10116,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9620,22 +10135,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9647,7 +10169,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9660,22 +10188,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9720,20 +10255,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9794,7 +10372,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9802,7 +10380,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9818,7 +10396,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9842,10 +10420,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9924,23 +10502,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -10025,7 +10616,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -10044,9 +10635,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10622,8 +11213,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10656,13 +11248,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -11540,7 +12190,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11543 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11584,10 +12234,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11646,7 +12296,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11649 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11690,10 +12340,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -14979,13 +15629,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -15000,14 +15657,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -15040,12 +15700,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -15100,8 +15760,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -15111,12 +15776,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -15132,7 +15799,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -15168,6 +15834,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -15866,7 +16533,8 @@ esac ;;
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -15969,19 +16637,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -16011,6 +16702,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -16020,6 +16717,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -16134,12 +16834,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -16226,9 +16926,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -16244,6 +16941,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -16276,210 +16976,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/libctf/Makefile.in b/libctf/Makefile.in
+index 1984f50867a..51a3dd26e87 100644
+--- a/libctf/Makefile.in
++++ b/libctf/Makefile.in
+@@ -393,6 +393,7 @@ CYGPATH_W = @CYGPATH_W@
+ DATADIRNAME = @DATADIRNAME@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -426,6 +427,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ NM = @NM@
+ NMEDIT = @NMEDIT@
+diff --git a/libctf/configure b/libctf/configure
+index 8704bc215f4..c1bf438bda6 100755
+--- a/libctf/configure
++++ b/libctf/configure
+@@ -669,6 +669,8 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -800,6 +802,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_largefile
+ enable_werror_always
+@@ -1463,6 +1466,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-system-zlib      use installed libz
+ 
+ Some influential environment variables:
+@@ -5571,8 +5576,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -5612,7 +5617,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -6305,8 +6310,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -6355,6 +6360,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -6371,6 +6450,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -6539,7 +6623,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6693,6 +6778,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6708,6 +6808,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -6722,8 +6973,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -6739,7 +6992,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6759,11 +7012,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -6779,7 +7036,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6798,6 +7055,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6809,25 +7070,19 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
+ 
+ 
+ 
+@@ -6839,6 +7094,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+ set dummy ${ac_tool_prefix}strip; ac_word=$2
+@@ -7178,8 +7491,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -7215,6 +7528,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -7256,6 +7570,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -7267,7 +7593,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -7293,8 +7619,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -7304,8 +7630,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -7342,6 +7668,14 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
+ 
+ 
+ 
+@@ -7360,6 +7694,47 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
++
++
+ 
+ 
+ 
+@@ -7569,6 +7944,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -8132,6 +8624,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8684,8 +9178,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8851,6 +9343,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8913,7 +9411,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8970,13 +9468,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -9037,6 +9539,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -9387,7 +9894,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -9486,12 +9994,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -9505,8 +10013,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -9524,8 +10032,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9571,8 +10079,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9702,7 +10210,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9715,22 +10229,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9742,7 +10263,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9755,22 +10282,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9815,20 +10349,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9889,7 +10466,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9897,7 +10474,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9913,7 +10490,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9937,10 +10514,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -10019,23 +10596,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -10120,7 +10710,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -10139,9 +10729,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10717,8 +11307,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10751,13 +11342,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -11635,7 +12284,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11638 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11679,10 +12328,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11741,7 +12390,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11744 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11785,10 +12434,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -14473,13 +15122,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -14494,14 +15150,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -14534,12 +15193,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -14594,8 +15253,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -14605,12 +15269,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -14626,7 +15292,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -14662,6 +15327,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -15418,7 +16084,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -15521,19 +16188,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -15563,6 +16253,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -15572,6 +16268,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -15686,12 +16385,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -15778,9 +16477,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -15796,6 +16492,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -15828,210 +16527,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/libtool.m4 b/libtool.m4
+index ad63ebbb385..b65c22bf80d 100644
+--- a/libtool.m4
++++ b/libtool.m4
+@@ -1,7 +1,8 @@
+ # libtool.m4 - Configure libtool for the host system. -*-Autoconf-*-
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ # This file is free software; the Free Software Foundation gives
+@@ -10,7 +11,8 @@
+ 
+ m4_define([_LT_COPYING], [dnl
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -37,7 +39,7 @@ m4_define([_LT_COPYING], [dnl
+ # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ ])
+ 
+-# serial 56 LT_INIT
++# serial 57 LT_INIT
+ 
+ 
+ # LT_PREREQ(VERSION)
+@@ -166,10 +168,13 @@ _LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl
+ dnl
+ m4_require([_LT_FILEUTILS_DEFAULTS])dnl
+ m4_require([_LT_CHECK_SHELL_FEATURES])dnl
++m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl
+ m4_require([_LT_CMD_RELOAD])dnl
+ m4_require([_LT_CHECK_MAGIC_METHOD])dnl
++m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl
+ m4_require([_LT_CMD_OLD_ARCHIVE])dnl
+ m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
++m4_require([_LT_WITH_SYSROOT])dnl
+ 
+ _LT_CONFIG_LIBTOOL_INIT([
+ # See if we are running on zsh, and set the options which allow our
+@@ -632,7 +637,7 @@ m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl
+ m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION])
+ configured by $[0], generated by m4_PACKAGE_STRING.
+ 
+-Copyright (C) 2009 Free Software Foundation, Inc.
++Copyright (C) 2010 Free Software Foundation, Inc.
+ This config.lt script is free software; the Free Software Foundation
+ gives unlimited permision to copy, distribute and modify it."
+ 
+@@ -746,15 +751,12 @@ _LT_EOF
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
+ 
+-  _LT_PROG_XSI_SHELLFNS
++  _LT_PROG_REPLACE_SHELLFNS
+ 
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ ],
+@@ -980,6 +982,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD
+       echo "$AR cru libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD
+       $AR cru libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD
++      echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD
++      $RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -1069,30 +1073,41 @@ m4_defun([_LT_DARWIN_LINKER_FEATURES],
+   fi
+ ])
+ 
+-# _LT_SYS_MODULE_PATH_AIX
+-# -----------------------
++# _LT_SYS_MODULE_PATH_AIX([TAGNAME])
++# ----------------------------------
+ # Links a minimal program and checks the executable
+ # for the system default hardcoded library path. In most cases,
+ # this is /usr/lib:/lib, but when the MPI compilers are used
+ # the location of the communication and MPI libs are included too.
+ # If we don't find anything, use the default library path according
+ # to the aix ld manual.
++# Store the results from the different compilers for each TAGNAME.
++# Allow to override them for all tags through lt_cv_aix_libpath.
+ m4_defun([_LT_SYS_MODULE_PATH_AIX],
+ [m4_require([_LT_DECL_SED])dnl
+-AC_LINK_IFELSE([AC_LANG_SOURCE([AC_LANG_PROGRAM])],[
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi],[])
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])],
++  [AC_LINK_IFELSE([AC_LANG_PROGRAM],[
++  lt_aix_libpath_sed='[
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }]'
++  _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
++    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi],[])
++  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
++    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])="/usr/lib:/lib"
++  fi
++  ])
++  aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])
++fi
+ ])# _LT_SYS_MODULE_PATH_AIX
+ 
+ 
+@@ -1117,7 +1132,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ 
+ AC_MSG_CHECKING([how to print strings])
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -1161,6 +1176,39 @@ _LT_DECL([], [ECHO], [1], [An echo program that protects backslashes])
+ ])# _LT_PROG_ECHO_BACKSLASH
+ 
+ 
++# _LT_WITH_SYSROOT
++# ----------------
++AC_DEFUN([_LT_WITH_SYSROOT],
++[AC_MSG_CHECKING([for sysroot])
++AC_ARG_WITH([libtool-sysroot],
++[  --with-libtool-sysroot[=DIR] Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).],
++[], [with_libtool_sysroot=no])
++
++dnl lt_sysroot will always be passed unquoted.  We quote it here
++dnl in case the user passed a directory name.
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   AC_MSG_RESULT([${with_libtool_sysroot}])
++   AC_MSG_ERROR([The sysroot must be an absolute path.])
++   ;;
++esac
++
++ AC_MSG_RESULT([${lt_sysroot:-no}])
++_LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl
++[dependent libraries, and in which our libraries should be installed.])])
++
+ # _LT_ENABLE_LOCK
+ # ---------------
+ m4_defun([_LT_ENABLE_LOCK],
+@@ -1320,6 +1368,51 @@ need_locks="$enable_libtool_lock"
+ ])# _LT_ENABLE_LOCK
+ 
+ 
++# _LT_PROG_AR
++# -----------
++m4_defun([_LT_PROG_AR],
++[AC_CHECK_TOOLS(AR, [ar], false)
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    AC_MSG_WARN([Failed: $AR $plugin_option rc])
++  else
++    AR="$AR $plugin_option"
++  fi
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++_LT_DECL([], [AR], [1], [The archiver])
++_LT_DECL([], [AR_FLAGS], [1], [Flags to create an archive])
++
++AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file],
++  [lt_cv_ar_at_file=no
++   AC_COMPILE_IFELSE([AC_LANG_PROGRAM],
++     [echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD'
++      AC_TRY_EVAL([lt_ar_try])
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	AC_TRY_EVAL([lt_ar_try])
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++     ])
++  ])
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++_LT_DECL([], [archiver_list_spec], [1],
++  [How to feed a file listing to the archiver])
++])# _LT_PROG_AR
++
++
+ # _LT_CMD_OLD_ARCHIVE
+ # -------------------
+ m4_defun([_LT_CMD_OLD_ARCHIVE],
+@@ -1336,23 +1429,7 @@ for plugin in $plugin_names; do
+   fi
+ done
+ 
+-AC_CHECK_TOOL(AR, ar, false)
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      AC_MSG_WARN([Failed: $AR $plugin_option rc])
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
+-  fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
+-_LT_DECL([], [AR], [1], [The archiver])
+-_LT_DECL([], [AR_FLAGS], [1])
++_LT_PROG_AR
+ 
+ AC_CHECK_TOOL(STRIP, strip, :)
+ test -z "$STRIP" && STRIP=:
+@@ -1653,7 +1730,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-[#line __oline__ "configure"
++[#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -1697,10 +1774,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -2240,8 +2317,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -2274,13 +2352,71 @@ m4_if([$1], [],[
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -2970,6 +3106,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -3036,7 +3177,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -3187,6 +3329,21 @@ tpf*)
+   ;;
+ esac
+ ])
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -3194,7 +3351,11 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ _LT_DECL([], [deplibs_check_method], [1],
+     [Method to check whether dependent libraries are shared objects])
+ _LT_DECL([], [file_magic_cmd], [1],
+-    [Command to use when deplibs_check_method == "file_magic"])
++    [Command to use when deplibs_check_method = "file_magic"])
++_LT_DECL([], [file_magic_glob], [1],
++    [How to find potential files when deplibs_check_method = "file_magic"])
++_LT_DECL([], [want_nocaseglob], [1],
++    [Find potential files using nocaseglob when deplibs_check_method = "file_magic"])
+ ])# _LT_CHECK_MAGIC_METHOD
+ 
+ 
+@@ -3305,6 +3466,67 @@ dnl aclocal-1.4 backwards compatibility:
+ dnl AC_DEFUN([AM_PROG_NM], [])
+ dnl AC_DEFUN([AC_PROG_NM], [])
+ 
++# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
++# --------------------------------
++# how to determine the name of the shared library
++# associated with a specific link library.
++#  -- PORTME fill in with the dynamic library characteristics
++m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB],
++[m4_require([_LT_DECL_EGREP])
++m4_require([_LT_DECL_OBJDUMP])
++m4_require([_LT_DECL_DLLTOOL])
++AC_CACHE_CHECK([how to associate runtime and link libraries],
++lt_cv_sharedlib_from_linklib_cmd,
++[lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++])
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++_LT_DECL([], [sharedlib_from_linklib_cmd], [1],
++    [Command to associate shared and link libraries])
++])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
++
++
++# _LT_PATH_MANIFEST_TOOL
++# ----------------------
++# locate the manifest tool
++m4_defun([_LT_PATH_MANIFEST_TOOL],
++[AC_CHECK_TOOL(MANIFEST_TOOL, mt, :)
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool],
++  [lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&AS_MESSAGE_LOG_FD
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*])
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++_LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl
++])# _LT_PATH_MANIFEST_TOOL
++
+ 
+ # LT_LIB_M
+ # --------
+@@ -3431,8 +3653,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -3468,6 +3690,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[	 ]]\($symcode$symcode*\)[[	 ]][[	 ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -3501,6 +3724,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT@&t@_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT@&t@_DLSYM_CONST
++#else
++# define LT@&t@_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -3512,7 +3747,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT@&t@_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -3538,15 +3773,15 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)"
+ 	  if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD
+ 	fi
+@@ -3579,6 +3814,13 @@ else
+   AC_MSG_RESULT(ok)
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
+ _LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1],
+     [Take the output of nm and produce a listing of raw symbols and C names])
+ _LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1],
+@@ -3589,6 +3831,8 @@ _LT_DECL([global_symbol_to_c_name_address],
+ _LT_DECL([global_symbol_to_c_name_address_lib_prefix],
+     [lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1],
+     [Transform the output of nm in a C name address pair when lib prefix is needed])
++_LT_DECL([], [nm_file_list_spec], [1],
++    [Specify filename containing input files for $NM])
+ ]) # _LT_CMD_GLOBAL_SYMBOLS
+ 
+ 
+@@ -3600,7 +3844,6 @@ _LT_TAGVAR(lt_prog_compiler_wl, $1)=
+ _LT_TAGVAR(lt_prog_compiler_pic, $1)=
+ _LT_TAGVAR(lt_prog_compiler_static, $1)=
+ 
+-AC_MSG_CHECKING([for $compiler option to produce PIC])
+ m4_if([$1], [CXX], [
+   # C++ specific cases for pic, static, wl, etc.
+   if test "$GXX" = yes; then
+@@ -3706,6 +3949,12 @@ m4_if([$1], [CXX], [
+ 	  ;;
+ 	esac
+ 	;;
++      mingw* | cygwin* | os2* | pw32* | cegcc*)
++	# This hack is so that the source file can tell whether it is being
++	# built for inclusion in a dll (and should export symbols for example).
++	m4_if([$1], [GCJ], [],
++	  [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
++	;;
+       dgux*)
+ 	case $cc_basename in
+ 	  ec++*)
+@@ -3858,7 +4107,7 @@ m4_if([$1], [CXX], [
+ 	;;
+       solaris*)
+ 	case $cc_basename in
+-	  CC*)
++	  CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+ 	    _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ 	    _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+@@ -4081,6 +4330,12 @@ m4_if([$1], [CXX], [
+ 	_LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared'
+ 	_LT_TAGVAR(lt_prog_compiler_static, $1)='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
++	_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
++	_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -4143,7 +4398,7 @@ m4_if([$1], [CXX], [
+       _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+       _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';;
+       *)
+ 	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';;
+@@ -4200,9 +4455,11 @@ case $host_os in
+     _LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])"
+     ;;
+ esac
+-AC_MSG_RESULT([$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
+-_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
+-	[How to pass a linker flag through the compiler])
++
++AC_CACHE_CHECK([for $compiler option to produce PIC],
++  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)],
++  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
++_LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -4221,6 +4478,8 @@ fi
+ _LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1],
+ 	[Additional compiler flags for building library objects])
+ 
++_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
++	[How to pass a linker flag through the compiler])
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -4241,6 +4500,7 @@ _LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1],
+ m4_defun([_LT_LINKER_SHLIBS],
+ [AC_REQUIRE([LT_PATH_LD])dnl
+ AC_REQUIRE([LT_PATH_NM])dnl
++m4_require([_LT_PATH_MANIFEST_TOOL])dnl
+ m4_require([_LT_FILEUTILS_DEFAULTS])dnl
+ m4_require([_LT_DECL_EGREP])dnl
+ m4_require([_LT_DECL_SED])dnl
+@@ -4249,6 +4509,7 @@ m4_require([_LT_TAG_COMPILER])dnl
+ AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
+ m4_if([$1], [CXX], [
+   _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
++  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
+   case $host_os in
+   aix[[4-9]]*)
+     # If we're using GNU nm, then we don't want the "-C" option.
+@@ -4263,15 +4524,20 @@ m4_if([$1], [CXX], [
+     ;;
+   pw32*)
+     _LT_TAGVAR(export_symbols_cmds, $1)="$ltdll_cmds"
+-  ;;
++    ;;
+   cygwin* | mingw* | cegcc*)
+-    _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;/^.*[[ ]]__nm__/s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    case $cc_basename in
++    cl*) ;;
++    *)
++      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
++      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
++      ;;
++    esac
++    ;;
+   *)
+     _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    ;;
+   esac
+-  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
+ ], [
+   runpath_var=
+   _LT_TAGVAR(allow_undefined_flag, $1)=
+@@ -4439,7 +4705,8 @@ _LT_EOF
+       _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
+       _LT_TAGVAR(always_export_symbols, $1)=no
+       _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
+-      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
++      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
++      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -4538,12 +4805,12 @@ _LT_EOF
+ 	  _LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience --no-whole-archive'
+ 	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
+ 	  _LT_TAGVAR(hardcode_libdir_flag_spec_ld, $1)='-rpath $libdir'
+-	  _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -4557,8 +4824,8 @@ _LT_EOF
+ 	_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -4576,8 +4843,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	_LT_TAGVAR(ld_shlibs, $1)=no
+       fi
+@@ -4623,8 +4890,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	_LT_TAGVAR(ld_shlibs, $1)=no
+       fi
+@@ -4754,7 +5021,7 @@ _LT_EOF
+ 	_LT_TAGVAR(allow_undefined_flag, $1)='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        _LT_SYS_MODULE_PATH_AIX
++        _LT_SYS_MODULE_PATH_AIX([$1])
+         _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+         _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+       else
+@@ -4765,7 +5032,7 @@ _LT_EOF
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 _LT_SYS_MODULE_PATH_AIX
++	 _LT_SYS_MODULE_PATH_AIX([$1])
+ 	 _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+ 	  # -berok will link without error, but may produce a broken library.
+@@ -4809,20 +5076,63 @@ _LT_EOF
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
+-      _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      _LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
+-      # FIXME: Should let the user specify the lib program.
+-      _LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      _LT_TAGVAR(fix_srcfile_path, $1)='`cygpath -w "$srcfile"`'
+-      _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
++	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
++	_LT_TAGVAR(always_export_symbols, $1)=yes
++	_LT_TAGVAR(file_list_spec, $1)='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	_LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	_LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
++	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++	_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	_LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
++	_LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
++	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	_LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	_LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
++	# FIXME: Should let the user specify the lib program.
++	_LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -4856,7 +5166,7 @@ _LT_EOF
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      _LT_TAGVAR(archive_cmds, $1)='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
+       _LT_TAGVAR(hardcode_direct, $1)=yes
+       _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
+@@ -4864,7 +5174,7 @@ _LT_EOF
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -4880,7 +5190,7 @@ _LT_EOF
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -4904,10 +5214,10 @@ _LT_EOF
+ 	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -4954,16 +5264,31 @@ _LT_EOF
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        AC_LINK_IFELSE([AC_LANG_SOURCE([int foo(void) {}])],
+-          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-        )
+-        LDFLAGS="$save_LDFLAGS"
++	# This should be the same for all languages, so no per-tag cache variable.
++	AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol],
++	  [lt_cv_irix_exported_symbol],
++	  [save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   AC_LINK_IFELSE(
++	     [AC_LANG_SOURCE(
++	        [AC_LANG_CASE([C], [[int foo (void) { return 0; }]],
++			      [C++], [[int foo (void) { return 0; }]],
++			      [Fortran 77], [[
++      subroutine foo
++      end]],
++			      [Fortran], [[
++      subroutine foo
++      end]])])],
++	      [lt_cv_irix_exported_symbol=yes],
++	      [lt_cv_irix_exported_symbol=no])
++           LDFLAGS="$save_LDFLAGS"])
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -5048,7 +5373,7 @@ _LT_EOF
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	_LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+       else
+ 	_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
+@@ -5067,9 +5392,9 @@ _LT_EOF
+       _LT_TAGVAR(no_undefined_flag, $1)=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -5341,8 +5666,6 @@ _LT_TAGDECL([], [inherit_rpath], [0],
+     to runtime path list])
+ _LT_TAGDECL([], [link_all_deplibs], [0],
+     [Whether libtool must link a program against all its dependency libraries])
+-_LT_TAGDECL([], [fix_srcfile_path], [1],
+-    [Fix the shell variable $srcfile for the compiler])
+ _LT_TAGDECL([], [always_export_symbols], [0],
+     [Set to "yes" if exported symbols are required])
+ _LT_TAGDECL([], [export_symbols_cmds], [2],
+@@ -5353,6 +5676,8 @@ _LT_TAGDECL([], [include_expsyms], [1],
+     [Symbols that must always be exported])
+ _LT_TAGDECL([], [prelink_cmds], [2],
+     [Commands necessary for linking programs (against libraries) with templates])
++_LT_TAGDECL([], [postlink_cmds], [2],
++    [Commands necessary for finishing linking programs])
+ _LT_TAGDECL([], [file_list_spec], [1],
+     [Specify filename containing input files])
+ dnl FIXME: Not yet implemented
+@@ -5454,6 +5779,7 @@ CC="$lt_save_CC"
+ m4_defun([_LT_LANG_CXX_CONFIG],
+ [m4_require([_LT_FILEUTILS_DEFAULTS])dnl
+ m4_require([_LT_DECL_EGREP])dnl
++m4_require([_LT_PATH_MANIFEST_TOOL])dnl
+ if test -n "$CXX" && ( test "X$CXX" != "Xno" &&
+     ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) ||
+     (test "X$CXX" != "Xg++"))) ; then
+@@ -5515,6 +5841,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 
+   # Allow CC to be a program name with arguments.
+   lt_save_CC=$CC
++  lt_save_CFLAGS=$CFLAGS
+   lt_save_LD=$LD
+   lt_save_GCC=$GCC
+   GCC=$GXX
+@@ -5532,6 +5859,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+   fi
+   test -z "${LDCXX+set}" || LD=$LDCXX
+   CC=${CXX-"c++"}
++  CFLAGS=$CXXFLAGS
+   compiler=$CC
+   _LT_TAGVAR(compiler, $1)=$CC
+   _LT_CC_BASENAME([$compiler])
+@@ -5695,7 +6023,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+           _LT_TAGVAR(allow_undefined_flag, $1)='-berok'
+           # Determine the default libpath from the value encoded in an empty
+           # executable.
+-          _LT_SYS_MODULE_PATH_AIX
++          _LT_SYS_MODULE_PATH_AIX([$1])
+           _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 
+           _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -5707,7 +6035,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+           else
+ 	    # Determine the default libpath from the value encoded in an
+ 	    # empty executable.
+-	    _LT_SYS_MODULE_PATH_AIX
++	    _LT_SYS_MODULE_PATH_AIX([$1])
+ 	    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	    # Warning - without using the other run time loading flags,
+ 	    # -berok will link without error, but may produce a broken library.
+@@ -5749,29 +6077,75 @@ if test "$_lt_caught_CXX_error" != yes; then
+         ;;
+ 
+       cygwin* | mingw* | pw32* | cegcc*)
+-        # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
+-        # as there is no search path for DLLs.
+-        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+-        _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
+-        _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
+-        _LT_TAGVAR(always_export_symbols, $1)=no
+-        _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
+-
+-        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+-          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-          # If the export-symbols file already is a .def file (1st line
+-          # is EXPORTS), use it as is; otherwise, prepend...
+-          _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+-	    cp $export_symbols $output_objdir/$soname.def;
+-          else
+-	    echo EXPORTS > $output_objdir/$soname.def;
+-	    cat $export_symbols >> $output_objdir/$soname.def;
+-          fi~
+-          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-        else
+-          _LT_TAGVAR(ld_shlibs, $1)=no
+-        fi
+-        ;;
++	case $GXX,$cc_basename in
++	,cl* | no,cl*)
++	  # Native MSVC
++	  # hardcode_libdir_flag_spec is actually meaningless, as there is
++	  # no search path for DLLs.
++	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
++	  _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
++	  _LT_TAGVAR(always_export_symbols, $1)=yes
++	  _LT_TAGVAR(file_list_spec, $1)='@'
++	  # Tell ltmain to make .lib files, not .a files.
++	  libext=lib
++	  # Tell ltmain to make .dll files, not .so files.
++	  shrext_cmds=".dll"
++	  # FIXME: Setting linknames here is a bad hack.
++	  _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	  _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	    else
++	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	    fi~
++	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	    linknames='
++	  # The linker will not automatically build a static lib if we build a DLL.
++	  # _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
++	  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++	  # Don't use ranlib
++	  _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
++	  _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
++	    lt_tool_outputfile="@TOOL_OUTPUT@"~
++	    case $lt_outputfile in
++	      *.exe|*.EXE) ;;
++	      *)
++		lt_outputfile="$lt_outputfile.exe"
++		lt_tool_outputfile="$lt_tool_outputfile.exe"
++		;;
++	    esac~
++	    func_to_tool_file "$lt_outputfile"~
++	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	      $RM "$lt_outputfile.manifest";
++	    fi'
++	  ;;
++	*)
++	  # g++
++	  # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
++	  # as there is no search path for DLLs.
++	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
++	  _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
++	  _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
++	  _LT_TAGVAR(always_export_symbols, $1)=no
++	  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++
++	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
++	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	    # If the export-symbols file already is a .def file (1st line
++	    # is EXPORTS), use it as is; otherwise, prepend...
++	    _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      cp $export_symbols $output_objdir/$soname.def;
++	    else
++	      echo EXPORTS > $output_objdir/$soname.def;
++	      cat $export_symbols >> $output_objdir/$soname.def;
++	    fi~
++	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	  else
++	    _LT_TAGVAR(ld_shlibs, $1)=no
++	  fi
++	  ;;
++	esac
++	;;
+       darwin* | rhapsody*)
+         _LT_DARWIN_LINKER_FEATURES($1)
+ 	;;
+@@ -5846,7 +6220,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+             ;;
+           *)
+             if test "$GXX" = yes; then
+-              _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++              _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+             else
+               # FIXME: insert proper C++ library support
+               _LT_TAGVAR(ld_shlibs, $1)=no
+@@ -5917,10 +6291,10 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          ia64*)
+-	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          *)
+-	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	        esac
+ 	      fi
+@@ -5961,9 +6335,9 @@ if test "$_lt_caught_CXX_error" != yes; then
+           *)
+ 	    if test "$GXX" = yes; then
+ 	      if test "$with_gnu_ld" = no; then
+-	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	      else
+-	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
++	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
+ 	      fi
+ 	    fi
+ 	    _LT_TAGVAR(link_all_deplibs, $1)=yes
+@@ -6033,20 +6407,20 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 	      _LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
+-		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
++		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
+ 	      _LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
+-		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
++		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
+ 		$RANLIB $oldlib'
+ 	      _LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+ 	      _LT_TAGVAR(archive_expsym_cmds, $1)='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
+ 	      ;;
+ 	    *) # Version 6 and above use weak symbols
+ 	      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+@@ -6241,7 +6615,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	        *)
+-	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	      esac
+ 
+@@ -6287,7 +6661,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 
+       solaris*)
+         case $cc_basename in
+-          CC*)
++          CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+             _LT_TAGVAR(archive_cmds_need_lc,$1)=yes
+ 	    _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
+@@ -6328,9 +6702,9 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ 	      _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-z ${wl}defs'
+ 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
+-	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
++	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ 	        _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
++		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
+ 
+ 	        # Commands to make compiler produce verbose output that lists
+ 	        # what "hidden" libraries, object files and flags are used when
+@@ -6459,6 +6833,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+   fi # test -n "$compiler"
+ 
+   CC=$lt_save_CC
++  CFLAGS=$lt_save_CFLAGS
+   LDCXX=$LD
+   LD=$lt_save_LD
+   GCC=$lt_save_GCC
+@@ -6473,6 +6848,29 @@ AC_LANG_POP
+ ])# _LT_LANG_CXX_CONFIG
+ 
+ 
++# _LT_FUNC_STRIPNAME_CNF
++# ----------------------
++# func_stripname_cnf prefix suffix name
++# strip PREFIX and SUFFIX off of NAME.
++# PREFIX and SUFFIX must not contain globbing or regex special
++# characters, hashes, percent signs, but SUFFIX may contain a leading
++# dot (in which case that matches only a dot).
++#
++# This function is identical to the (non-XSI) version of func_stripname,
++# except this one can be used by m4 code that may be executed by configure,
++# rather than the libtool script.
++m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl
++AC_REQUIRE([_LT_DECL_SED])
++AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])
++func_stripname_cnf ()
++{
++  case ${2} in
++  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
++  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
++  esac
++} # func_stripname_cnf
++])# _LT_FUNC_STRIPNAME_CNF
++
+ # _LT_SYS_HIDDEN_LIBDEPS([TAGNAME])
+ # ---------------------------------
+ # Figure out "hidden" library dependencies from verbose
+@@ -6481,6 +6879,7 @@ AC_LANG_POP
+ # objects, libraries and library flags.
+ m4_defun([_LT_SYS_HIDDEN_LIBDEPS],
+ [m4_require([_LT_FILEUTILS_DEFAULTS])dnl
++AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl
+ # Dependencies to place before and after the object being linked:
+ _LT_TAGVAR(predep_objects, $1)=
+ _LT_TAGVAR(postdep_objects, $1)=
+@@ -6531,6 +6930,13 @@ public class foo {
+ };
+ _LT_EOF
+ ])
++
++_lt_libdeps_save_CFLAGS=$CFLAGS
++case "$CC $CFLAGS " in #(
++*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
++*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
++esac
++
+ dnl Parse the compiler output and extract the necessary
+ dnl objects, libraries and library flags.
+ if AC_TRY_EVAL(ac_compile); then
+@@ -6542,7 +6948,7 @@ if AC_TRY_EVAL(ac_compile); then
+   pre_test_object_deps_done=no
+ 
+   for p in `eval "$output_verbose_link_cmd"`; do
+-    case $p in
++    case ${prev}${p} in
+ 
+     -L* | -R* | -l*)
+        # Some compilers place space between "-{L,R}" and the path.
+@@ -6551,13 +6957,22 @@ if AC_TRY_EVAL(ac_compile); then
+           test $p = "-R"; then
+ 	 prev=$p
+ 	 continue
+-       else
+-	 prev=
+        fi
+ 
++       # Expand the sysroot to ease extracting the directories later.
++       if test -z "$prev"; then
++         case $p in
++         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
++         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
++         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
++         esac
++       fi
++       case $p in
++       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
++       esac
+        if test "$pre_test_object_deps_done" = no; then
+-	 case $p in
+-	 -L* | -R*)
++	 case ${prev} in
++	 -L | -R)
+ 	   # Internal compiler library paths should come after those
+ 	   # provided the user.  The postdeps already come after the
+ 	   # user supplied libs so there is no need to process them.
+@@ -6577,8 +6992,10 @@ if AC_TRY_EVAL(ac_compile); then
+ 	   _LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} ${prev}${p}"
+ 	 fi
+        fi
++       prev=
+        ;;
+ 
++    *.lto.$objext) ;; # Ignore GCC LTO objects
+     *.$objext)
+        # This assumes that the test object file only shows up
+        # once in the compiler output.
+@@ -6614,6 +7031,7 @@ else
+ fi
+ 
+ $RM -f confest.$objext
++CFLAGS=$_lt_libdeps_save_CFLAGS
+ 
+ # PORTME: override above test on systems where it is broken
+ m4_if([$1], [CXX],
+@@ -6650,7 +7068,7 @@ linux*)
+ 
+ solaris*)
+   case $cc_basename in
+-  CC*)
++  CC* | sunCC*)
+     # The more standards-conforming stlport4 library is
+     # incompatible with the Cstd library. Avoid specifying
+     # it if it's in CXXFLAGS. Ignore libCrun as
+@@ -6763,7 +7181,9 @@ if test "$_lt_disable_F77" != yes; then
+   # Allow CC to be a program name with arguments.
+   lt_save_CC="$CC"
+   lt_save_GCC=$GCC
++  lt_save_CFLAGS=$CFLAGS
+   CC=${F77-"f77"}
++  CFLAGS=$FFLAGS
+   compiler=$CC
+   _LT_TAGVAR(compiler, $1)=$CC
+   _LT_CC_BASENAME([$compiler])
+@@ -6817,6 +7237,7 @@ if test "$_lt_disable_F77" != yes; then
+ 
+   GCC=$lt_save_GCC
+   CC="$lt_save_CC"
++  CFLAGS="$lt_save_CFLAGS"
+ fi # test "$_lt_disable_F77" != yes
+ 
+ AC_LANG_POP
+@@ -6893,7 +7314,9 @@ if test "$_lt_disable_FC" != yes; then
+   # Allow CC to be a program name with arguments.
+   lt_save_CC="$CC"
+   lt_save_GCC=$GCC
++  lt_save_CFLAGS=$CFLAGS
+   CC=${FC-"f95"}
++  CFLAGS=$FCFLAGS
+   compiler=$CC
+   GCC=$ac_cv_fc_compiler_gnu
+ 
+@@ -6949,7 +7372,8 @@ if test "$_lt_disable_FC" != yes; then
+   fi # test -n "$compiler"
+ 
+   GCC=$lt_save_GCC
+-  CC="$lt_save_CC"
++  CC=$lt_save_CC
++  CFLAGS=$lt_save_CFLAGS
+ fi # test "$_lt_disable_FC" != yes
+ 
+ AC_LANG_POP
+@@ -6986,10 +7410,12 @@ _LT_COMPILER_BOILERPLATE
+ _LT_LINKER_BOILERPLATE
+ 
+ # Allow CC to be a program name with arguments.
+-lt_save_CC="$CC"
++lt_save_CC=$CC
++lt_save_CFLAGS=$CFLAGS
+ lt_save_GCC=$GCC
+ GCC=yes
+ CC=${GCJ-"gcj"}
++CFLAGS=$GCJFLAGS
+ compiler=$CC
+ _LT_TAGVAR(compiler, $1)=$CC
+ _LT_TAGVAR(LD, $1)="$LD"
+@@ -7020,7 +7446,8 @@ fi
+ AC_LANG_RESTORE
+ 
+ GCC=$lt_save_GCC
+-CC="$lt_save_CC"
++CC=$lt_save_CC
++CFLAGS=$lt_save_CFLAGS
+ ])# _LT_LANG_GCJ_CONFIG
+ 
+ 
+@@ -7055,9 +7482,11 @@ _LT_LINKER_BOILERPLATE
+ 
+ # Allow CC to be a program name with arguments.
+ lt_save_CC="$CC"
++lt_save_CFLAGS=$CFLAGS
+ lt_save_GCC=$GCC
+ GCC=
+ CC=${RC-"windres"}
++CFLAGS=
+ compiler=$CC
+ _LT_TAGVAR(compiler, $1)=$CC
+ _LT_CC_BASENAME([$compiler])
+@@ -7070,7 +7499,8 @@ fi
+ 
+ GCC=$lt_save_GCC
+ AC_LANG_RESTORE
+-CC="$lt_save_CC"
++CC=$lt_save_CC
++CFLAGS=$lt_save_CFLAGS
+ ])# _LT_LANG_RC_CONFIG
+ 
+ 
+@@ -7129,6 +7559,15 @@ _LT_DECL([], [OBJDUMP], [1], [An object symbol dumper])
+ AC_SUBST([OBJDUMP])
+ ])
+ 
++# _LT_DECL_DLLTOOL
++# ----------------
++# Ensure DLLTOOL variable is set.
++m4_defun([_LT_DECL_DLLTOOL],
++[AC_CHECK_TOOL(DLLTOOL, dlltool, false)
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++_LT_DECL([], [DLLTOOL], [1], [DLL creation program])
++AC_SUBST([DLLTOOL])
++])
+ 
+ # _LT_DECL_SED
+ # ------------
+@@ -7222,8 +7661,8 @@ m4_defun([_LT_CHECK_SHELL_FEATURES],
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -7262,206 +7701,162 @@ _LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl
+ ])# _LT_CHECK_SHELL_FEATURES
+ 
+ 
+-# _LT_PROG_XSI_SHELLFNS
+-# ---------------------
+-# Bourne and XSI compatible variants of some useful shell functions.
+-m4_defun([_LT_PROG_XSI_SHELLFNS],
+-[case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $[*] ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
++# _LT_PROG_FUNCTION_REPLACE (FUNCNAME, REPLACEMENT-BODY)
++# ------------------------------------------------------
++# In `$cfgfile', look for function FUNCNAME delimited by `^FUNCNAME ()$' and
++# '^} FUNCNAME ', and replace its body with REPLACEMENT-BODY.
++m4_defun([_LT_PROG_FUNCTION_REPLACE],
++[dnl {
++sed -e '/^$1 ()$/,/^} # $1 /c\
++$1 ()\
++{\
++m4_bpatsubsts([$2], [$], [\\], [^\([	 ]\)], [\\\1])
++} # Extended-shell $1 implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++])
+ 
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+ 
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
++# _LT_PROG_REPLACE_SHELLFNS
++# -------------------------
++# Replace existing portable implementations of several shell functions with
++# equivalent extended shell implementations where those features are available..
++m4_defun([_LT_PROG_REPLACE_SHELLFNS],
++[if test x"$xsi_shell" = xyes; then
++  _LT_PROG_FUNCTION_REPLACE([func_dirname], [dnl
++    case ${1} in
++      */*) func_dirname_result="${1%/*}${2}" ;;
++      *  ) func_dirname_result="${3}" ;;
++    esac])
++
++  _LT_PROG_FUNCTION_REPLACE([func_basename], [dnl
++    func_basename_result="${1##*/}"])
++
++  _LT_PROG_FUNCTION_REPLACE([func_dirname_and_basename], [dnl
++    case ${1} in
++      */*) func_dirname_result="${1%/*}${2}" ;;
++      *  ) func_dirname_result="${3}" ;;
++    esac
++    func_basename_result="${1##*/}"])
+ 
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_stripname], [dnl
++    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
++    # positional parameters, so assign one to ordinary parameter first.
++    func_stripname_result=${3}
++    func_stripname_result=${func_stripname_result#"${1}"}
++    func_stripname_result=${func_stripname_result%"${2}"}])
+ 
+-dnl func_dirname_and_basename
+-dnl A portable version of this function is already defined in general.m4sh
+-dnl so there is no need for it here.
++  _LT_PROG_FUNCTION_REPLACE([func_split_long_opt], [dnl
++    func_split_long_opt_name=${1%%=*}
++    func_split_long_opt_arg=${1#*=}])
+ 
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_split_short_opt], [dnl
++    func_split_short_opt_arg=${1#??}
++    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}])
+ 
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[[^=]]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[[^=]]*=//'
++  _LT_PROG_FUNCTION_REPLACE([func_lo2o], [dnl
++    case ${1} in
++      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
++      *)    func_lo2o_result=${1} ;;
++    esac])
+ 
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_xform], [    func_xform_result=${1%.*}.lo])
+ 
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_arith], [    func_arith_result=$(( $[*] ))])
+ 
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[[^.]]*$/.lo/'`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_len], [    func_len_result=${#1}])
++fi
+ 
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$[@]"`
+-}
++if test x"$lt_shell_append" = xyes; then
++  _LT_PROG_FUNCTION_REPLACE([func_append], [    eval "${1}+=\\${2}"])
+ 
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$[1]" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_append_quoted], [dnl
++    func_quote_for_eval "${2}"
++dnl m4 expansion turns \\\\ into \\, and then the shell eval turns that into \
++    eval "${1}+=\\\\ \\$func_quote_for_eval_result"])
+ 
+-_LT_EOF
+-esac
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
+ 
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
++if test x"$_lt_function_replace_fail" = x":"; then
++  AC_MSG_WARN([Unable to substitute extended shell functions in $ofile])
++fi
++])
+ 
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$[1]+=\$[2]"
+-}
+-_LT_EOF
++# _LT_PATH_CONVERSION_FUNCTIONS
++# -----------------------------
++# Determine which file name conversion functions should be used by
++# func_to_host_file (and, implicitly, by func_to_host_path).  These are needed
++# for certain cross-compile configurations and native mingw.
++m4_defun([_LT_PATH_CONVERSION_FUNCTIONS],
++[AC_REQUIRE([AC_CANONICAL_HOST])dnl
++AC_REQUIRE([AC_CANONICAL_BUILD])dnl
++AC_MSG_CHECKING([how to convert $build file names to $host format])
++AC_CACHE_VAL(lt_cv_to_host_file_cmd,
++[case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
+     ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$[1]=\$$[1]\$[2]"
+-}
+-
+-_LT_EOF
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
+     ;;
+-  esac
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++])
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++AC_MSG_RESULT([$lt_cv_to_host_file_cmd])
++_LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd],
++         [0], [convert $build file names to $host format])dnl
++
++AC_MSG_CHECKING([how to convert $build file names to toolchain format])
++AC_CACHE_VAL(lt_cv_to_tool_file_cmd,
++[#assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
+ ])
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++AC_MSG_RESULT([$lt_cv_to_tool_file_cmd])
++_LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd],
++         [0], [convert $build files to toolchain format])dnl
++])# _LT_PATH_CONVERSION_FUNCTIONS
+diff --git a/ltmain.sh b/ltmain.sh
+index 9503ec85d70..70e856e0659 100644
+--- a/ltmain.sh
++++ b/ltmain.sh
+@@ -1,10 +1,9 @@
+-# Generated from ltmain.m4sh.
+ 
+-# libtool (GNU libtool 1.3134 2009-11-29) 2.2.7a
++# libtool (GNU libtool) 2.4
+ # Written by Gordon Matzigkeit <gord@gnu.ai.mit.edu>, 1996
+ 
+ # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006,
+-# 2007, 2008, 2009 Free Software Foundation, Inc.
++# 2007, 2008, 2009, 2010 Free Software Foundation, Inc.
+ # This is free software; see the source for copying conditions.  There is NO
+ # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ 
+@@ -38,7 +37,6 @@
+ #   -n, --dry-run            display commands without modifying any files
+ #       --features           display basic configuration information and exit
+ #       --mode=MODE          use operation mode MODE
+-#       --no-finish          let install mode avoid finish commands
+ #       --preserve-dup-deps  don't remove duplicate dependency libraries
+ #       --quiet, --silent    don't print informational messages
+ #       --no-quiet, --no-silent
+@@ -71,17 +69,19 @@
+ #         compiler:		$LTCC
+ #         compiler flags:		$LTCFLAGS
+ #         linker:		$LD (gnu? $with_gnu_ld)
+-#         $progname:	(GNU libtool 1.3134 2009-11-29) 2.2.7a
++#         $progname:	(GNU libtool) 2.4
+ #         automake:	$automake_version
+ #         autoconf:	$autoconf_version
+ #
+ # Report bugs to <bug-libtool@gnu.org>.
++# GNU libtool home page: <http://www.gnu.org/software/libtool/>.
++# General help using GNU software: <http://www.gnu.org/gethelp/>.
+ 
+ PROGRAM=libtool
+ PACKAGE=libtool
+-VERSION=2.2.7a
+-TIMESTAMP=" 1.3134 2009-11-29"
+-package_revision=1.3134
++VERSION=2.4
++TIMESTAMP=""
++package_revision=1.3293
+ 
+ # Be Bourne compatible
+ if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
+@@ -106,9 +106,6 @@ _LTECHO_EOF'
+ }
+ 
+ # NLS nuisances: We save the old values to restore during execute mode.
+-# Only set LANG and LC_ALL to C if already set.
+-# These must not be set unconditionally because not all systems understand
+-# e.g. LANG=C (notably SCO).
+ lt_user_locale=
+ lt_safe_locale=
+ for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES
+@@ -121,15 +118,13 @@ do
+ 	  lt_safe_locale=\"$lt_var=C; \$lt_safe_locale\"
+ 	fi"
+ done
++LC_ALL=C
++LANGUAGE=C
++export LANGUAGE LC_ALL
+ 
+ $lt_unset CDPATH
+ 
+ 
+-
+-
+-
+-
+-
+ # Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh
+ # is ksh but when the shell is invoked as "sh" and the current value of
+ # the _XPG environment variable is not equal to 1 (one), the special
+@@ -140,7 +135,7 @@ progpath="$0"
+ 
+ 
+ : ${CP="cp -f"}
+-: ${ECHO=$as_echo}
++test "${ECHO+set}" = set || ECHO=${as_echo-'printf %s\n'}
+ : ${EGREP="/bin/grep -E"}
+ : ${FGREP="/bin/grep -F"}
+ : ${GREP="/bin/grep"}
+@@ -149,7 +144,7 @@ progpath="$0"
+ : ${MKDIR="mkdir"}
+ : ${MV="mv -f"}
+ : ${RM="rm -f"}
+-: ${SED="/mount/endor/wildenhu/local-x86_64/bin/sed"}
++: ${SED="/bin/sed"}
+ : ${SHELL="${CONFIG_SHELL-/bin/sh}"}
+ : ${Xsed="$SED -e 1s/^X//"}
+ 
+@@ -169,6 +164,27 @@ IFS=" 	$lt_nl"
+ dirname="s,/[^/]*$,,"
+ basename="s,^.*/,,"
+ 
++# func_dirname file append nondir_replacement
++# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
++# otherwise set result to NONDIR_REPLACEMENT.
++func_dirname ()
++{
++    func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
++    if test "X$func_dirname_result" = "X${1}"; then
++      func_dirname_result="${3}"
++    else
++      func_dirname_result="$func_dirname_result${2}"
++    fi
++} # func_dirname may be replaced by extended shell implementation
++
++
++# func_basename file
++func_basename ()
++{
++    func_basename_result=`$ECHO "${1}" | $SED "$basename"`
++} # func_basename may be replaced by extended shell implementation
++
++
+ # func_dirname_and_basename file append nondir_replacement
+ # perform func_basename and func_dirname in a single function
+ # call:
+@@ -183,17 +199,31 @@ basename="s,^.*/,,"
+ # those functions but instead duplicate the functionality here.
+ func_dirname_and_basename ()
+ {
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-  func_basename_result=`$ECHO "${1}" | $SED -e "$basename"`
+-}
++    # Extract subdirectory from the argument.
++    func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"`
++    if test "X$func_dirname_result" = "X${1}"; then
++      func_dirname_result="${3}"
++    else
++      func_dirname_result="$func_dirname_result${2}"
++    fi
++    func_basename_result=`$ECHO "${1}" | $SED -e "$basename"`
++} # func_dirname_and_basename may be replaced by extended shell implementation
++
++
++# func_stripname prefix suffix name
++# strip PREFIX and SUFFIX off of NAME.
++# PREFIX and SUFFIX must not contain globbing or regex special
++# characters, hashes, percent signs, but SUFFIX may contain a leading
++# dot (in which case that matches only a dot).
++# func_strip_suffix prefix name
++func_stripname ()
++{
++    case ${2} in
++      .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
++      *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
++    esac
++} # func_stripname may be replaced by extended shell implementation
+ 
+-# Generated shell functions inserted here.
+ 
+ # These SED scripts presuppose an absolute path with a trailing slash.
+ pathcar='s,^/\([^/]*\).*$,\1,'
+@@ -376,6 +406,15 @@ sed_quote_subst='s/\([`"$\\]\)/\\\1/g'
+ # Same as above, but do not quote variable references.
+ double_quote_subst='s/\(["`\\]\)/\\\1/g'
+ 
++# Sed substitution that turns a string into a regex matching for the
++# string literally.
++sed_make_literal_regex='s,[].[^$\\*\/],\\&,g'
++
++# Sed substitution that converts a w32 file name or path
++# which contains forward slashes, into one that contains
++# (escaped) backslashes.  A very naive implementation.
++lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
++
+ # Re-`\' parameter expansions in output of double_quote_subst that were
+ # `\'-ed in input to the same.  If an odd number of `\' preceded a '$'
+ # in input to double_quote_subst, that '$' was protected from expansion.
+@@ -404,7 +443,7 @@ opt_warning=:
+ # name if it has been set yet.
+ func_echo ()
+ {
+-    $ECHO "$progname${mode+: }$mode: $*"
++    $ECHO "$progname: ${opt_mode+$opt_mode: }$*"
+ }
+ 
+ # func_verbose arg...
+@@ -430,14 +469,14 @@ func_echo_all ()
+ # Echo program name prefixed message to standard error.
+ func_error ()
+ {
+-    $ECHO "$progname${mode+: }$mode: "${1+"$@"} 1>&2
++    $ECHO "$progname: ${opt_mode+$opt_mode: }"${1+"$@"} 1>&2
+ }
+ 
+ # func_warning arg...
+ # Echo program name prefixed warning message to standard error.
+ func_warning ()
+ {
+-    $opt_warning && $ECHO "$progname${mode+: }$mode: warning: "${1+"$@"} 1>&2
++    $opt_warning && $ECHO "$progname: ${opt_mode+$opt_mode: }warning: "${1+"$@"} 1>&2
+ 
+     # bash bug again:
+     :
+@@ -656,19 +695,35 @@ func_show_eval_locale ()
+     fi
+ }
+ 
+-
+-
++# func_tr_sh
++# Turn $1 into a string suitable for a shell variable name.
++# Result is stored in $func_tr_sh_result.  All characters
++# not in the set a-zA-Z0-9_ are replaced with '_'. Further,
++# if $1 begins with a digit, a '_' is prepended as well.
++func_tr_sh ()
++{
++  case $1 in
++  [0-9]* | *[!a-zA-Z0-9_]*)
++    func_tr_sh_result=`$ECHO "$1" | $SED 's/^\([0-9]\)/_\1/; s/[^a-zA-Z0-9_]/_/g'`
++    ;;
++  * )
++    func_tr_sh_result=$1
++    ;;
++  esac
++}
+ 
+ 
+ # func_version
+ # Echo version message to standard output and exit.
+ func_version ()
+ {
++    $opt_debug
++
+     $SED -n '/(C)/!b go
+ 	:more
+ 	/\./!{
+ 	  N
+-	  s/\n# //
++	  s/\n# / /
+ 	  b more
+ 	}
+ 	:go
+@@ -685,7 +740,9 @@ func_version ()
+ # Echo short help message to standard output and exit.
+ func_usage ()
+ {
+-    $SED -n '/^# Usage:/,/^#  *-h/ {
++    $opt_debug
++
++    $SED -n '/^# Usage:/,/^#  *.*--help/ {
+         s/^# //
+ 	s/^# *$//
+ 	s/\$progname/'$progname'/
+@@ -701,7 +758,10 @@ func_usage ()
+ # unless 'noexit' is passed as argument.
+ func_help ()
+ {
++    $opt_debug
++
+     $SED -n '/^# Usage:/,/# Report bugs to/ {
++	:print
+         s/^# //
+ 	s/^# *$//
+ 	s*\$progname*'$progname'*
+@@ -714,7 +774,11 @@ func_help ()
+ 	s/\$automake_version/'"`(automake --version) 2>/dev/null |$SED 1q`"'/
+ 	s/\$autoconf_version/'"`(autoconf --version) 2>/dev/null |$SED 1q`"'/
+ 	p
+-     }' < "$progpath"
++	d
++     }
++     /^# .* home page:/b print
++     /^# General help using/b print
++     ' < "$progpath"
+     ret=$?
+     if test -z "$1"; then
+       exit $ret
+@@ -726,12 +790,39 @@ func_help ()
+ # exit_cmd.
+ func_missing_arg ()
+ {
+-    func_error "missing argument for $1"
++    $opt_debug
++
++    func_error "missing argument for $1."
+     exit_cmd=exit
+ }
+ 
+-exit_cmd=:
+ 
++# func_split_short_opt shortopt
++# Set func_split_short_opt_name and func_split_short_opt_arg shell
++# variables after splitting SHORTOPT after the 2nd character.
++func_split_short_opt ()
++{
++    my_sed_short_opt='1s/^\(..\).*$/\1/;q'
++    my_sed_short_rest='1s/^..\(.*\)$/\1/;q'
++
++    func_split_short_opt_name=`$ECHO "$1" | $SED "$my_sed_short_opt"`
++    func_split_short_opt_arg=`$ECHO "$1" | $SED "$my_sed_short_rest"`
++} # func_split_short_opt may be replaced by extended shell implementation
++
++
++# func_split_long_opt longopt
++# Set func_split_long_opt_name and func_split_long_opt_arg shell
++# variables after splitting LONGOPT at the `=' sign.
++func_split_long_opt ()
++{
++    my_sed_long_opt='1s/^\(--[^=]*\)=.*/\1/;q'
++    my_sed_long_arg='1s/^--[^=]*=//'
++
++    func_split_long_opt_name=`$ECHO "$1" | $SED "$my_sed_long_opt"`
++    func_split_long_opt_arg=`$ECHO "$1" | $SED "$my_sed_long_arg"`
++} # func_split_long_opt may be replaced by extended shell implementation
++
++exit_cmd=:
+ 
+ 
+ 
+@@ -741,26 +832,64 @@ magic="%%%MAGIC variable%%%"
+ magic_exe="%%%MAGIC EXE variable%%%"
+ 
+ # Global variables.
+-# $mode is unset
+ nonopt=
+-execute_dlfiles=
+ preserve_args=
+ lo2o="s/\\.lo\$/.${objext}/"
+ o2lo="s/\\.${objext}\$/.lo/"
+ extracted_archives=
+ extracted_serial=0
+ 
+-opt_dry_run=false
+-opt_finish=:
+-opt_duplicate_deps=false
+-opt_silent=false
+-opt_debug=:
+-
+ # If this variable is set in any of the actions, the command in it
+ # will be execed at the end.  This prevents here-documents from being
+ # left over by shells.
+ exec_cmd=
+ 
++# func_append var value
++# Append VALUE to the end of shell variable VAR.
++func_append ()
++{
++    eval "${1}=\$${1}\${2}"
++} # func_append may be replaced by extended shell implementation
++
++# func_append_quoted var value
++# Quote VALUE and append to the end of shell variable VAR, separated
++# by a space.
++func_append_quoted ()
++{
++    func_quote_for_eval "${2}"
++    eval "${1}=\$${1}\\ \$func_quote_for_eval_result"
++} # func_append_quoted may be replaced by extended shell implementation
++
++
++# func_arith arithmetic-term...
++func_arith ()
++{
++    func_arith_result=`expr "${@}"`
++} # func_arith may be replaced by extended shell implementation
++
++
++# func_len string
++# STRING may not start with a hyphen.
++func_len ()
++{
++    func_len_result=`expr "${1}" : ".*" 2>/dev/null || echo $max_cmd_len`
++} # func_len may be replaced by extended shell implementation
++
++
++# func_lo2o object
++func_lo2o ()
++{
++    func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
++} # func_lo2o may be replaced by extended shell implementation
++
++
++# func_xform libobj-or-source
++func_xform ()
++{
++    func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
++} # func_xform may be replaced by extended shell implementation
++
++
+ # func_fatal_configuration arg...
+ # Echo program name prefixed message to standard error, followed by
+ # a configuration failure hint, and exit.
+@@ -850,130 +979,204 @@ func_enable_tag ()
+   esac
+ }
+ 
+-# Parse options once, thoroughly.  This comes as soon as possible in
+-# the script to make things like `libtool --version' happen quickly.
++# func_check_version_match
++# Ensure that we are using m4 macros, and libtool script from the same
++# release of libtool.
++func_check_version_match ()
+ {
++  if test "$package_revision" != "$macro_revision"; then
++    if test "$VERSION" != "$macro_version"; then
++      if test -z "$macro_version"; then
++        cat >&2 <<_LT_EOF
++$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
++$progname: definition of this LT_INIT comes from an older release.
++$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
++$progname: and run autoconf again.
++_LT_EOF
++      else
++        cat >&2 <<_LT_EOF
++$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
++$progname: definition of this LT_INIT comes from $PACKAGE $macro_version.
++$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
++$progname: and run autoconf again.
++_LT_EOF
++      fi
++    else
++      cat >&2 <<_LT_EOF
++$progname: Version mismatch error.  This is $PACKAGE $VERSION, revision $package_revision,
++$progname: but the definition of this LT_INIT comes from revision $macro_revision.
++$progname: You should recreate aclocal.m4 with macros from revision $package_revision
++$progname: of $PACKAGE $VERSION and run autoconf again.
++_LT_EOF
++    fi
+ 
+-  # Shorthand for --mode=foo, only valid as the first argument
+-  case $1 in
+-  clean|clea|cle|cl)
+-    shift; set dummy --mode clean ${1+"$@"}; shift
+-    ;;
+-  compile|compil|compi|comp|com|co|c)
+-    shift; set dummy --mode compile ${1+"$@"}; shift
+-    ;;
+-  execute|execut|execu|exec|exe|ex|e)
+-    shift; set dummy --mode execute ${1+"$@"}; shift
+-    ;;
+-  finish|finis|fini|fin|fi|f)
+-    shift; set dummy --mode finish ${1+"$@"}; shift
+-    ;;
+-  install|instal|insta|inst|ins|in|i)
+-    shift; set dummy --mode install ${1+"$@"}; shift
+-    ;;
+-  link|lin|li|l)
+-    shift; set dummy --mode link ${1+"$@"}; shift
+-    ;;
+-  uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u)
+-    shift; set dummy --mode uninstall ${1+"$@"}; shift
+-    ;;
+-  esac
++    exit $EXIT_MISMATCH
++  fi
++}
++
++
++# Shorthand for --mode=foo, only valid as the first argument
++case $1 in
++clean|clea|cle|cl)
++  shift; set dummy --mode clean ${1+"$@"}; shift
++  ;;
++compile|compil|compi|comp|com|co|c)
++  shift; set dummy --mode compile ${1+"$@"}; shift
++  ;;
++execute|execut|execu|exec|exe|ex|e)
++  shift; set dummy --mode execute ${1+"$@"}; shift
++  ;;
++finish|finis|fini|fin|fi|f)
++  shift; set dummy --mode finish ${1+"$@"}; shift
++  ;;
++install|instal|insta|inst|ins|in|i)
++  shift; set dummy --mode install ${1+"$@"}; shift
++  ;;
++link|lin|li|l)
++  shift; set dummy --mode link ${1+"$@"}; shift
++  ;;
++uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u)
++  shift; set dummy --mode uninstall ${1+"$@"}; shift
++  ;;
++esac
+ 
+-  # Parse non-mode specific arguments:
+-  while test "$#" -gt 0; do
++
++
++# Option defaults:
++opt_debug=:
++opt_dry_run=false
++opt_config=false
++opt_preserve_dup_deps=false
++opt_features=false
++opt_finish=false
++opt_help=false
++opt_help_all=false
++opt_silent=:
++opt_verbose=:
++opt_silent=false
++opt_verbose=false
++
++
++# Parse options once, thoroughly.  This comes as soon as possible in the
++# script to make things like `--version' happen as quickly as we can.
++{
++  # this just eases exit handling
++  while test $# -gt 0; do
+     opt="$1"
+     shift
+-
+     case $opt in
+-      --config)		func_config					;;
+-
+-      --debug)		preserve_args="$preserve_args $opt"
++      --debug|-x)	opt_debug='set -x'
+ 			func_echo "enabling shell trace mode"
+-			opt_debug='set -x'
+ 			$opt_debug
+ 			;;
+-
+-      -dlopen)		test "$#" -eq 0 && func_missing_arg "$opt" && break
+-			execute_dlfiles="$execute_dlfiles $1"
+-			shift
++      --dry-run|--dryrun|-n)
++			opt_dry_run=:
+ 			;;
+-
+-      --dry-run | -n)	opt_dry_run=:					;;
+-      --features)       func_features					;;
+-      --finish)		mode="finish"					;;
+-      --no-finish)	opt_finish=false				;;
+-
+-      --mode)		test "$#" -eq 0 && func_missing_arg "$opt" && break
+-			case $1 in
+-			  # Valid mode arguments:
+-			  clean)	;;
+-			  compile)	;;
+-			  execute)	;;
+-			  finish)	;;
+-			  install)	;;
+-			  link)		;;
+-			  relink)	;;
+-			  uninstall)	;;
+-
+-			  # Catch anything else as an error
+-			  *) func_error "invalid argument for $opt"
+-			     exit_cmd=exit
+-			     break
+-			     ;;
+-		        esac
+-
+-			mode="$1"
++      --config)
++			opt_config=:
++func_config
++			;;
++      --dlopen|-dlopen)
++			optarg="$1"
++			opt_dlopen="${opt_dlopen+$opt_dlopen
++}$optarg"
+ 			shift
+ 			;;
+-
+       --preserve-dup-deps)
+-			opt_duplicate_deps=:				;;
+-
+-      --quiet|--silent)	preserve_args="$preserve_args $opt"
+-			opt_silent=:
+-			opt_verbose=false
++			opt_preserve_dup_deps=:
+ 			;;
+-
+-      --no-quiet|--no-silent)
+-			preserve_args="$preserve_args $opt"
+-			opt_silent=false
++      --features)
++			opt_features=:
++func_features
+ 			;;
+-
+-      --verbose| -v)	preserve_args="$preserve_args $opt"
++      --finish)
++			opt_finish=:
++set dummy --mode finish ${1+"$@"}; shift
++			;;
++      --help)
++			opt_help=:
++			;;
++      --help-all)
++			opt_help_all=:
++opt_help=': help-all'
++			;;
++      --mode)
++			test $# = 0 && func_missing_arg $opt && break
++			optarg="$1"
++			opt_mode="$optarg"
++case $optarg in
++  # Valid mode arguments:
++  clean|compile|execute|finish|install|link|relink|uninstall) ;;
++
++  # Catch anything else as an error
++  *) func_error "invalid argument for $opt"
++     exit_cmd=exit
++     break
++     ;;
++esac
++			shift
++			;;
++      --no-silent|--no-quiet)
+ 			opt_silent=false
+-			opt_verbose=:
++func_append preserve_args " $opt"
+ 			;;
+-
+-      --no-verbose)	preserve_args="$preserve_args $opt"
++      --no-verbose)
+ 			opt_verbose=false
++func_append preserve_args " $opt"
+ 			;;
+-
+-      --tag)		test "$#" -eq 0 && func_missing_arg "$opt" && break
+-			preserve_args="$preserve_args $opt $1"
+-			func_enable_tag "$1"	# tagname is set here
++      --silent|--quiet)
++			opt_silent=:
++func_append preserve_args " $opt"
++        opt_verbose=false
++			;;
++      --verbose|-v)
++			opt_verbose=:
++func_append preserve_args " $opt"
++opt_silent=false
++			;;
++      --tag)
++			test $# = 0 && func_missing_arg $opt && break
++			optarg="$1"
++			opt_tag="$optarg"
++func_append preserve_args " $opt $optarg"
++func_enable_tag "$optarg"
+ 			shift
+ 			;;
+ 
++      -\?|-h)		func_usage				;;
++      --help)		func_help				;;
++      --version)	func_version				;;
++
+       # Separate optargs to long options:
+-      -dlopen=*|--mode=*|--tag=*)
+-			func_opt_split "$opt"
+-			set dummy "$func_opt_split_opt" "$func_opt_split_arg" ${1+"$@"}
++      --*=*)
++			func_split_long_opt "$opt"
++			set dummy "$func_split_long_opt_name" "$func_split_long_opt_arg" ${1+"$@"}
+ 			shift
+ 			;;
+ 
+-      -\?|-h)		func_usage					;;
+-      --help)		opt_help=:					;;
+-      --help-all)	opt_help=': help-all'				;;
+-      --version)	func_version					;;
+-
+-      -*)		func_fatal_help "unrecognized option \`$opt'"	;;
+-
+-      *)		nonopt="$opt"
+-			break
++      # Separate non-argument short options:
++      -\?*|-h*|-n*|-v*)
++			func_split_short_opt "$opt"
++			set dummy "$func_split_short_opt_name" "-$func_split_short_opt_arg" ${1+"$@"}
++			shift
+ 			;;
++
++      --)		break					;;
++      -*)		func_fatal_help "unrecognized option \`$opt'" ;;
++      *)		set dummy "$opt" ${1+"$@"};	shift; break  ;;
+     esac
+   done
+ 
++  # Validate options:
++
++  # save first non-option argument
++  if test "$#" -gt 0; then
++    nonopt="$opt"
++    shift
++  fi
++
++  # preserve --debug
++  test "$opt_debug" = : || func_append preserve_args " --debug"
+ 
+   case $host in
+     *cygwin* | *mingw* | *pw32* | *cegcc* | *solaris2* )
+@@ -981,82 +1184,44 @@ func_enable_tag ()
+       opt_duplicate_compiler_generated_deps=:
+       ;;
+     *)
+-      opt_duplicate_compiler_generated_deps=$opt_duplicate_deps
++      opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps
+       ;;
+   esac
+ 
+-  # Having warned about all mis-specified options, bail out if
+-  # anything was wrong.
+-  $exit_cmd $EXIT_FAILURE
+-}
++  $opt_help || {
++    # Sanity checks first:
++    func_check_version_match
+ 
+-# func_check_version_match
+-# Ensure that we are using m4 macros, and libtool script from the same
+-# release of libtool.
+-func_check_version_match ()
+-{
+-  if test "$package_revision" != "$macro_revision"; then
+-    if test "$VERSION" != "$macro_version"; then
+-      if test -z "$macro_version"; then
+-        cat >&2 <<_LT_EOF
+-$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
+-$progname: definition of this LT_INIT comes from an older release.
+-$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
+-$progname: and run autoconf again.
+-_LT_EOF
+-      else
+-        cat >&2 <<_LT_EOF
+-$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
+-$progname: definition of this LT_INIT comes from $PACKAGE $macro_version.
+-$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
+-$progname: and run autoconf again.
+-_LT_EOF
+-      fi
+-    else
+-      cat >&2 <<_LT_EOF
+-$progname: Version mismatch error.  This is $PACKAGE $VERSION, revision $package_revision,
+-$progname: but the definition of this LT_INIT comes from revision $macro_revision.
+-$progname: You should recreate aclocal.m4 with macros from revision $package_revision
+-$progname: of $PACKAGE $VERSION and run autoconf again.
+-_LT_EOF
++    if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then
++      func_fatal_configuration "not configured to build any kind of library"
+     fi
+ 
+-    exit $EXIT_MISMATCH
+-  fi
+-}
+-
++    # Darwin sucks
++    eval std_shrext=\"$shrext_cmds\"
+ 
+-## ----------- ##
+-##    Main.    ##
+-## ----------- ##
+-
+-$opt_help || {
+-  # Sanity checks first:
+-  func_check_version_match
+-
+-  if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then
+-    func_fatal_configuration "not configured to build any kind of library"
+-  fi
++    # Only execute mode is allowed to have -dlopen flags.
++    if test -n "$opt_dlopen" && test "$opt_mode" != execute; then
++      func_error "unrecognized option \`-dlopen'"
++      $ECHO "$help" 1>&2
++      exit $EXIT_FAILURE
++    fi
+ 
+-  test -z "$mode" && func_fatal_error "error: you must specify a MODE."
++    # Change the help message to a mode-specific one.
++    generic_help="$help"
++    help="Try \`$progname --help --mode=$opt_mode' for more information."
++  }
+ 
+ 
+-  # Darwin sucks
+-  eval "std_shrext=\"$shrext_cmds\""
++  # Bail if the options were screwed
++  $exit_cmd $EXIT_FAILURE
++}
+ 
+ 
+-  # Only execute mode is allowed to have -dlopen flags.
+-  if test -n "$execute_dlfiles" && test "$mode" != execute; then
+-    func_error "unrecognized option \`-dlopen'"
+-    $ECHO "$help" 1>&2
+-    exit $EXIT_FAILURE
+-  fi
+ 
+-  # Change the help message to a mode-specific one.
+-  generic_help="$help"
+-  help="Try \`$progname --help --mode=$mode' for more information."
+-}
+ 
++## ----------- ##
++##    Main.    ##
++## ----------- ##
+ 
+ # func_lalib_p file
+ # True iff FILE is a libtool `.la' library or `.lo' object file.
+@@ -1121,12 +1286,9 @@ func_ltwrapper_executable_p ()
+ # temporary ltwrapper_script.
+ func_ltwrapper_scriptname ()
+ {
+-    func_ltwrapper_scriptname_result=""
+-    if func_ltwrapper_executable_p "$1"; then
+-	func_dirname_and_basename "$1" "" "."
+-	func_stripname '' '.exe' "$func_basename_result"
+-	func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper"
+-    fi
++    func_dirname_and_basename "$1" "" "."
++    func_stripname '' '.exe' "$func_basename_result"
++    func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper"
+ }
+ 
+ # func_ltwrapper_p file
+@@ -1149,7 +1311,7 @@ func_execute_cmds ()
+     save_ifs=$IFS; IFS='~'
+     for cmd in $1; do
+       IFS=$save_ifs
+-      eval "cmd=\"$cmd\""
++      eval cmd=\"$cmd\"
+       func_show_eval "$cmd" "${2-:}"
+     done
+     IFS=$save_ifs
+@@ -1172,6 +1334,37 @@ func_source ()
+ }
+ 
+ 
++# func_resolve_sysroot PATH
++# Replace a leading = in PATH with a sysroot.  Store the result into
++# func_resolve_sysroot_result
++func_resolve_sysroot ()
++{
++  func_resolve_sysroot_result=$1
++  case $func_resolve_sysroot_result in
++  =*)
++    func_stripname '=' '' "$func_resolve_sysroot_result"
++    func_resolve_sysroot_result=$lt_sysroot$func_stripname_result
++    ;;
++  esac
++}
++
++# func_replace_sysroot PATH
++# If PATH begins with the sysroot, replace it with = and
++# store the result into func_replace_sysroot_result.
++func_replace_sysroot ()
++{
++  case "$lt_sysroot:$1" in
++  ?*:"$lt_sysroot"*)
++    func_stripname "$lt_sysroot" '' "$1"
++    func_replace_sysroot_result="=$func_stripname_result"
++    ;;
++  *)
++    # Including no sysroot.
++    func_replace_sysroot_result=$1
++    ;;
++  esac
++}
++
+ # func_infer_tag arg
+ # Infer tagged configuration to use if any are available and
+ # if one wasn't chosen via the "--tag" command line option.
+@@ -1184,8 +1377,7 @@ func_infer_tag ()
+     if test -n "$available_tags" && test -z "$tagname"; then
+       CC_quoted=
+       for arg in $CC; do
+-        func_quote_for_eval "$arg"
+-	CC_quoted="$CC_quoted $func_quote_for_eval_result"
++	func_append_quoted CC_quoted "$arg"
+       done
+       CC_expanded=`func_echo_all $CC`
+       CC_quoted_expanded=`func_echo_all $CC_quoted`
+@@ -1204,8 +1396,7 @@ func_infer_tag ()
+ 	    CC_quoted=
+ 	    for arg in $CC; do
+ 	      # Double-quote args containing other shell metacharacters.
+-	      func_quote_for_eval "$arg"
+-	      CC_quoted="$CC_quoted $func_quote_for_eval_result"
++	      func_append_quoted CC_quoted "$arg"
+ 	    done
+ 	    CC_expanded=`func_echo_all $CC`
+ 	    CC_quoted_expanded=`func_echo_all $CC_quoted`
+@@ -1274,6 +1465,486 @@ EOF
+     }
+ }
+ 
++
++##################################################
++# FILE NAME AND PATH CONVERSION HELPER FUNCTIONS #
++##################################################
++
++# func_convert_core_file_wine_to_w32 ARG
++# Helper function used by file name conversion functions when $build is *nix,
++# and $host is mingw, cygwin, or some other w32 environment. Relies on a
++# correctly configured wine environment available, with the winepath program
++# in $build's $PATH.
++#
++# ARG is the $build file name to be converted to w32 format.
++# Result is available in $func_convert_core_file_wine_to_w32_result, and will
++# be empty on error (or when ARG is empty)
++func_convert_core_file_wine_to_w32 ()
++{
++  $opt_debug
++  func_convert_core_file_wine_to_w32_result="$1"
++  if test -n "$1"; then
++    # Unfortunately, winepath does not exit with a non-zero error code, so we
++    # are forced to check the contents of stdout. On the other hand, if the
++    # command is not found, the shell will set an exit code of 127 and print
++    # *an error message* to stdout. So we must check for both error code of
++    # zero AND non-empty stdout, which explains the odd construction:
++    func_convert_core_file_wine_to_w32_tmp=`winepath -w "$1" 2>/dev/null`
++    if test "$?" -eq 0 && test -n "${func_convert_core_file_wine_to_w32_tmp}"; then
++      func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" |
++        $SED -e "$lt_sed_naive_backslashify"`
++    else
++      func_convert_core_file_wine_to_w32_result=
++    fi
++  fi
++}
++# end: func_convert_core_file_wine_to_w32
++
++
++# func_convert_core_path_wine_to_w32 ARG
++# Helper function used by path conversion functions when $build is *nix, and
++# $host is mingw, cygwin, or some other w32 environment. Relies on a correctly
++# configured wine environment available, with the winepath program in $build's
++# $PATH. Assumes ARG has no leading or trailing path separator characters.
++#
++# ARG is path to be converted from $build format to win32.
++# Result is available in $func_convert_core_path_wine_to_w32_result.
++# Unconvertible file (directory) names in ARG are skipped; if no directory names
++# are convertible, then the result may be empty.
++func_convert_core_path_wine_to_w32 ()
++{
++  $opt_debug
++  # unfortunately, winepath doesn't convert paths, only file names
++  func_convert_core_path_wine_to_w32_result=""
++  if test -n "$1"; then
++    oldIFS=$IFS
++    IFS=:
++    for func_convert_core_path_wine_to_w32_f in $1; do
++      IFS=$oldIFS
++      func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f"
++      if test -n "$func_convert_core_file_wine_to_w32_result" ; then
++        if test -z "$func_convert_core_path_wine_to_w32_result"; then
++          func_convert_core_path_wine_to_w32_result="$func_convert_core_file_wine_to_w32_result"
++        else
++          func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result"
++        fi
++      fi
++    done
++    IFS=$oldIFS
++  fi
++}
++# end: func_convert_core_path_wine_to_w32
++
++
++# func_cygpath ARGS...
++# Wrapper around calling the cygpath program via LT_CYGPATH. This is used when
++# when (1) $build is *nix and Cygwin is hosted via a wine environment; or (2)
++# $build is MSYS and $host is Cygwin, or (3) $build is Cygwin. In case (1) or
++# (2), returns the Cygwin file name or path in func_cygpath_result (input
++# file name or path is assumed to be in w32 format, as previously converted
++# from $build's *nix or MSYS format). In case (3), returns the w32 file name
++# or path in func_cygpath_result (input file name or path is assumed to be in
++# Cygwin format). Returns an empty string on error.
++#
++# ARGS are passed to cygpath, with the last one being the file name or path to
++# be converted.
++#
++# Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH
++# environment variable; do not put it in $PATH.
++func_cygpath ()
++{
++  $opt_debug
++  if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then
++    func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null`
++    if test "$?" -ne 0; then
++      # on failure, ensure result is empty
++      func_cygpath_result=
++    fi
++  else
++    func_cygpath_result=
++    func_error "LT_CYGPATH is empty or specifies non-existent file: \`$LT_CYGPATH'"
++  fi
++}
++#end: func_cygpath
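
A minimal usage sketch, assuming Cygwin is hosted under wine, that the build
system can actually execute cygpath (e.g. through a wine binfmt handler), and
that it lives at a hypothetical /opt/cygwin/bin/cygpath:

    LT_CYGPATH=/opt/cygwin/bin/cygpath    # absolute name; deliberately not taken from $PATH
    export LT_CYGPATH
    func_cygpath -u 'C:\build\libfoo.la'  # w32 file name in, Cygwin file name out
    test -n "$func_cygpath_result" || echo "conversion failed" >&2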
++
++
++# func_convert_core_msys_to_w32 ARG
++# Convert file name or path ARG from MSYS format to w32 format.  Return
++# result in func_convert_core_msys_to_w32_result.
++func_convert_core_msys_to_w32 ()
++{
++  $opt_debug
++  # awkward: cmd appends spaces to result
++  func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null |
++    $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
++}
++#end: func_convert_core_msys_to_w32
++
++
++# func_convert_file_check ARG1 ARG2
++# Verify that ARG1 (a file name in $build format) was converted to $host
++# format in ARG2. Otherwise, emit an error message, but continue (resetting
++# func_to_host_file_result to ARG1).
++func_convert_file_check ()
++{
++  $opt_debug
++  if test -z "$2" && test -n "$1" ; then
++    func_error "Could not determine host file name corresponding to"
++    func_error "  \`$1'"
++    func_error "Continuing, but uninstalled executables may not work."
++    # Fallback:
++    func_to_host_file_result="$1"
++  fi
++}
++# end func_convert_file_check
++
++
++# func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH
++# Verify that FROM_PATH (a path in $build format) was converted to $host
++# format in TO_PATH. Otherwise, emit an error message, but continue, resetting
++# func_to_host_path_result to a simplistic fallback value (see below).
++func_convert_path_check ()
++{
++  $opt_debug
++  if test -z "$4" && test -n "$3"; then
++    func_error "Could not determine the host path corresponding to"
++    func_error "  \`$3'"
++    func_error "Continuing, but uninstalled executables may not work."
++    # Fallback.  This is a deliberately simplistic "conversion" and
++    # should not be "improved".  See libtool.info.
++    if test "x$1" != "x$2"; then
++      lt_replace_pathsep_chars="s|$1|$2|g"
++      func_to_host_path_result=`echo "$3" |
++        $SED -e "$lt_replace_pathsep_chars"`
++    else
++      func_to_host_path_result="$3"
++    fi
++  fi
++}
++# end func_convert_path_check
++
++
++# func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG
++# Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT
++# and appending REPL if ORIG matches BACKPAT.
++func_convert_path_front_back_pathsep ()
++{
++  $opt_debug
++  case $4 in
++  $1 ) func_to_host_path_result="$3$func_to_host_path_result"
++    ;;
++  esac
++  case $4 in
++  $2 ) func_append func_to_host_path_result "$3"
++    ;;
++  esac
++}
++# end func_convert_path_front_back_pathsep
++
++
++##################################################
++# $build to $host FILE NAME CONVERSION FUNCTIONS #
++##################################################
++# invoked via `$to_host_file_cmd ARG'
++#
++# In each case, ARG is the path to be converted from $build to $host format.
++# Result will be available in $func_to_host_file_result.
++
++
++# func_to_host_file ARG
++# Converts the file name ARG from $build format to $host format. Return result
++# in func_to_host_file_result.
++func_to_host_file ()
++{
++  $opt_debug
++  $to_host_file_cmd "$1"
++}
++# end func_to_host_file
++
++
++# func_to_tool_file ARG LAZY
++# converts the file name ARG from $build format to toolchain format. Return
++# result in func_to_tool_file_result.  If the conversion in use is listed
++# in (the comma separated) LAZY, no conversion takes place.
++func_to_tool_file ()
++{
++  $opt_debug
++  case ,$2, in
++    *,"$to_tool_file_cmd",*)
++      func_to_tool_file_result=$1
++      ;;
++    *)
++      $to_tool_file_cmd "$1"
++      func_to_tool_file_result=$func_to_host_file_result
++      ;;
++  esac
++}
++# end func_to_tool_file
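
The LAZY argument lets callers skip a conversion the shell has already
performed for them. For instance, later in this patch the compile path marks
the MSYS conversion as lazy, so an MSYS build uses the source file name as-is
while other $build/$host combinations go through $to_tool_file_cmd:

    func_to_tool_file "$srcfile" func_convert_file_msys_to_w32
    srcfile=$func_to_tool_file_result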
++
++
++# func_convert_file_noop ARG
++# Copy ARG to func_to_host_file_result.
++func_convert_file_noop ()
++{
++  func_to_host_file_result="$1"
++}
++# end func_convert_file_noop
++
++
++# func_convert_file_msys_to_w32 ARG
++# Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic
++# conversion to w32 is not available inside the cwrapper.  Returns result in
++# func_to_host_file_result.
++func_convert_file_msys_to_w32 ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    func_convert_core_msys_to_w32 "$1"
++    func_to_host_file_result="$func_convert_core_msys_to_w32_result"
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_msys_to_w32
++
++
++# func_convert_file_cygwin_to_w32 ARG
++# Convert file name ARG from Cygwin to w32 format.  Returns result in
++# func_to_host_file_result.
++func_convert_file_cygwin_to_w32 ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    # because $build is cygwin, we call "the" cygpath in $PATH; no need to use
++    # LT_CYGPATH in this case.
++    func_to_host_file_result=`cygpath -m "$1"`
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_cygwin_to_w32
++
++
++# func_convert_file_nix_to_w32 ARG
++# Convert file name ARG from *nix to w32 format.  Requires a wine environment
++# and a working winepath. Returns result in func_to_host_file_result.
++func_convert_file_nix_to_w32 ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    func_convert_core_file_wine_to_w32 "$1"
++    func_to_host_file_result="$func_convert_core_file_wine_to_w32_result"
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_nix_to_w32
++
++
++# func_convert_file_msys_to_cygwin ARG
++# Convert file name ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
++# Returns result in func_to_host_file_result.
++func_convert_file_msys_to_cygwin ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    func_convert_core_msys_to_w32 "$1"
++    func_cygpath -u "$func_convert_core_msys_to_w32_result"
++    func_to_host_file_result="$func_cygpath_result"
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_msys_to_cygwin
++
++
++# func_convert_file_nix_to_cygwin ARG
++# Convert file name ARG from *nix to Cygwin format.  Requires Cygwin installed
++# in a wine environment, working winepath, and LT_CYGPATH set.  Returns result
++# in func_to_host_file_result.
++func_convert_file_nix_to_cygwin ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    # convert from *nix to w32, then use cygpath to convert from w32 to cygwin.
++    func_convert_core_file_wine_to_w32 "$1"
++    func_cygpath -u "$func_convert_core_file_wine_to_w32_result"
++    func_to_host_file_result="$func_cygpath_result"
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_nix_to_cygwin
++
++
++#############################################
++# $build to $host PATH CONVERSION FUNCTIONS #
++#############################################
++# invoked via `$to_host_path_cmd ARG'
++#
++# In each case, ARG is the path to be converted from $build to $host format.
++# The result will be available in $func_to_host_path_result.
++#
++# Path separators are also converted from $build format to $host format.  If
++# ARG begins or ends with a path separator character, it is preserved (but
++# converted to $host format) on output.
++#
++# All path conversion functions are named using the following convention:
++#   file name conversion function    : func_convert_file_X_to_Y ()
++#   path conversion function         : func_convert_path_X_to_Y ()
++# where, for any given $build/$host combination the 'X_to_Y' value is the
++# same.  If conversion functions are added for new $build/$host combinations,
++# the two new functions must follow this pattern, or func_init_to_host_path_cmd
++# will break.
++
++
++# func_init_to_host_path_cmd
++# Ensures that function "pointer" variable $to_host_path_cmd is set to the
++# appropriate value, based on the value of $to_host_file_cmd.
++to_host_path_cmd=
++func_init_to_host_path_cmd ()
++{
++  $opt_debug
++  if test -z "$to_host_path_cmd"; then
++    func_stripname 'func_convert_file_' '' "$to_host_file_cmd"
++    to_host_path_cmd="func_convert_path_${func_stripname_result}"
++  fi
++}
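
In other words, the path-conversion "pointer" is derived from the name of the
file-conversion function via the X_to_Y naming convention described above. A
small sketch of the dispatch, assuming the Cygwin-to-w32 case:

    to_host_file_cmd=func_convert_file_cygwin_to_w32
    func_init_to_host_path_cmd
    # func_stripname drops the 'func_convert_file_' prefix, so
    # to_host_path_cmd is now func_convert_path_cygwin_to_w32
    func_to_host_path "/usr/lib:/usr/local/lib"
    # $func_to_host_path_result holds the same list in w32 form, ';'-separated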
++
++
++# func_to_host_path ARG
++# Converts the path ARG from $build format to $host format. Return result
++# in func_to_host_path_result.
++func_to_host_path ()
++{
++  $opt_debug
++  func_init_to_host_path_cmd
++  $to_host_path_cmd "$1"
++}
++# end func_to_host_path
++
++
++# func_convert_path_noop ARG
++# Copy ARG to func_to_host_path_result.
++func_convert_path_noop ()
++{
++  func_to_host_path_result="$1"
++}
++# end func_convert_path_noop
++
++
++# func_convert_path_msys_to_w32 ARG
++# Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic
++# conversion to w32 is not available inside the cwrapper.  Returns result in
++# func_to_host_path_result.
++func_convert_path_msys_to_w32 ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # Remove leading and trailing path separator characters from ARG.  MSYS
++    # behavior is inconsistent here; cygpath turns them into '.;' and ';.';
++    # and winepath ignores them completely.
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
++    func_to_host_path_result="$func_convert_core_msys_to_w32_result"
++    func_convert_path_check : ";" \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
++  fi
++}
++# end func_convert_path_msys_to_w32
++
++
++# func_convert_path_cygwin_to_w32 ARG
++# Convert path ARG from Cygwin to w32 format.  Returns result in
++# func_to_host_path_result.
++func_convert_path_cygwin_to_w32 ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # See func_convert_path_msys_to_w32:
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"`
++    func_convert_path_check : ";" \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
++  fi
++}
++# end func_convert_path_cygwin_to_w32
++
++
++# func_convert_path_nix_to_w32 ARG
++# Convert path ARG from *nix to w32 format.  Requires a wine environment and
++# a working winepath.  Returns result in func_to_host_path_result.
++func_convert_path_nix_to_w32 ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # See func_convert_path_msys_to_w32:
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
++    func_to_host_path_result="$func_convert_core_path_wine_to_w32_result"
++    func_convert_path_check : ";" \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
++  fi
++}
++# end func_convert_path_nix_to_w32
++
++
++# func_convert_path_msys_to_cygwin ARG
++# Convert path ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
++# Returns result in func_to_host_path_result.
++func_convert_path_msys_to_cygwin ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # See func_convert_path_msys_to_w32:
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
++    func_cygpath -u -p "$func_convert_core_msys_to_w32_result"
++    func_to_host_path_result="$func_cygpath_result"
++    func_convert_path_check : : \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
++  fi
++}
++# end func_convert_path_msys_to_cygwin
++
++
++# func_convert_path_nix_to_cygwin ARG
++# Convert path ARG from *nix to Cygwin format.  Requires Cygwin installed in a
++# wine environment, working winepath, and LT_CYGPATH set.  Returns result in
++# func_to_host_path_result.
++func_convert_path_nix_to_cygwin ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # Remove leading and trailing path separator characters from
++    # ARG. msys behavior is inconsistent here, cygpath turns them
++    # into '.;' and ';.', and winepath ignores them completely.
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
++    func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result"
++    func_to_host_path_result="$func_cygpath_result"
++    func_convert_path_check : : \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
++  fi
++}
++# end func_convert_path_nix_to_cygwin
++
++
+ # func_mode_compile arg...
+ func_mode_compile ()
+ {
+@@ -1314,12 +1985,12 @@ func_mode_compile ()
+ 	  ;;
+ 
+ 	-pie | -fpie | -fPIE)
+-          pie_flag="$pie_flag $arg"
++          func_append pie_flag " $arg"
+ 	  continue
+ 	  ;;
+ 
+ 	-shared | -static | -prefer-pic | -prefer-non-pic)
+-	  later="$later $arg"
++	  func_append later " $arg"
+ 	  continue
+ 	  ;;
+ 
+@@ -1340,15 +2011,14 @@ func_mode_compile ()
+ 	  save_ifs="$IFS"; IFS=','
+ 	  for arg in $args; do
+ 	    IFS="$save_ifs"
+-	    func_quote_for_eval "$arg"
+-	    lastarg="$lastarg $func_quote_for_eval_result"
++	    func_append_quoted lastarg "$arg"
+ 	  done
+ 	  IFS="$save_ifs"
+ 	  func_stripname ' ' '' "$lastarg"
+ 	  lastarg=$func_stripname_result
+ 
+ 	  # Add the arguments to base_compile.
+-	  base_compile="$base_compile $lastarg"
++	  func_append base_compile " $lastarg"
+ 	  continue
+ 	  ;;
+ 
+@@ -1364,8 +2034,7 @@ func_mode_compile ()
+       esac    #  case $arg_mode
+ 
+       # Aesthetically quote the previous argument.
+-      func_quote_for_eval "$lastarg"
+-      base_compile="$base_compile $func_quote_for_eval_result"
++      func_append_quoted base_compile "$lastarg"
+     done # for arg
+ 
+     case $arg_mode in
+@@ -1496,17 +2165,16 @@ compiler."
+ 	$opt_dry_run || $RM $removelist
+ 	exit $EXIT_FAILURE
+       fi
+-      removelist="$removelist $output_obj"
++      func_append removelist " $output_obj"
+       $ECHO "$srcfile" > "$lockfile"
+     fi
+ 
+     $opt_dry_run || $RM $removelist
+-    removelist="$removelist $lockfile"
++    func_append removelist " $lockfile"
+     trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15
+ 
+-    if test -n "$fix_srcfile_path"; then
+-      eval "srcfile=\"$fix_srcfile_path\""
+-    fi
++    func_to_tool_file "$srcfile" func_convert_file_msys_to_w32
++    srcfile=$func_to_tool_file_result
+     func_quote_for_eval "$srcfile"
+     qsrcfile=$func_quote_for_eval_result
+ 
+@@ -1526,7 +2194,7 @@ compiler."
+ 
+       if test -z "$output_obj"; then
+ 	# Place PIC objects in $objdir
+-	command="$command -o $lobj"
++	func_append command " -o $lobj"
+       fi
+ 
+       func_show_eval_locale "$command"	\
+@@ -1573,11 +2241,11 @@ compiler."
+ 	command="$base_compile $qsrcfile $pic_flag"
+       fi
+       if test "$compiler_c_o" = yes; then
+-	command="$command -o $obj"
++	func_append command " -o $obj"
+       fi
+ 
+       # Suppress compiler output if we already did a PIC compilation.
+-      command="$command$suppress_output"
++      func_append command "$suppress_output"
+       func_show_eval_locale "$command" \
+         '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE'
+ 
+@@ -1622,13 +2290,13 @@ compiler."
+ }
+ 
+ $opt_help || {
+-  test "$mode" = compile && func_mode_compile ${1+"$@"}
++  test "$opt_mode" = compile && func_mode_compile ${1+"$@"}
+ }
+ 
+ func_mode_help ()
+ {
+     # We need to display help for each of the modes.
+-    case $mode in
++    case $opt_mode in
+       "")
+         # Generic help is extracted from the usage comments
+         # at the start of this file.
+@@ -1659,8 +2327,8 @@ This mode accepts the following additional options:
+ 
+   -o OUTPUT-FILE    set the output file name to OUTPUT-FILE
+   -no-suppress      do not suppress compiler output for multiple passes
+-  -prefer-pic       try to building PIC objects only
+-  -prefer-non-pic   try to building non-PIC objects only
++  -prefer-pic       try to build PIC objects only
++  -prefer-non-pic   try to build non-PIC objects only
+   -shared           do not build a \`.o' file suitable for static linking
+   -static           only build a \`.o' file suitable for static linking
+   -Wc,FLAG          pass FLAG directly to the compiler
+@@ -1804,7 +2472,7 @@ Otherwise, only FILE itself is deleted using RM."
+         ;;
+ 
+       *)
+-        func_fatal_help "invalid operation mode \`$mode'"
++        func_fatal_help "invalid operation mode \`$opt_mode'"
+         ;;
+     esac
+ 
+@@ -1819,13 +2487,13 @@ if $opt_help; then
+   else
+     {
+       func_help noexit
+-      for mode in compile link execute install finish uninstall clean; do
++      for opt_mode in compile link execute install finish uninstall clean; do
+ 	func_mode_help
+       done
+     } | sed -n '1p; 2,$s/^Usage:/  or: /p'
+     {
+       func_help noexit
+-      for mode in compile link execute install finish uninstall clean; do
++      for opt_mode in compile link execute install finish uninstall clean; do
+ 	echo
+ 	func_mode_help
+       done
+@@ -1854,13 +2522,16 @@ func_mode_execute ()
+       func_fatal_help "you must specify a COMMAND"
+ 
+     # Handle -dlopen flags immediately.
+-    for file in $execute_dlfiles; do
++    for file in $opt_dlopen; do
+       test -f "$file" \
+ 	|| func_fatal_help "\`$file' is not a file"
+ 
+       dir=
+       case $file in
+       *.la)
++	func_resolve_sysroot "$file"
++	file=$func_resolve_sysroot_result
++
+ 	# Check to see that this really is a libtool archive.
+ 	func_lalib_unsafe_p "$file" \
+ 	  || func_fatal_help "\`$lib' is not a valid libtool archive"
+@@ -1882,7 +2553,7 @@ func_mode_execute ()
+ 	dir="$func_dirname_result"
+ 
+ 	if test -f "$dir/$objdir/$dlname"; then
+-	  dir="$dir/$objdir"
++	  func_append dir "/$objdir"
+ 	else
+ 	  if test ! -f "$dir/$dlname"; then
+ 	    func_fatal_error "cannot find \`$dlname' in \`$dir' or \`$dir/$objdir'"
+@@ -1907,10 +2578,10 @@ func_mode_execute ()
+       test -n "$absdir" && dir="$absdir"
+ 
+       # Now add the directory to shlibpath_var.
+-      if eval test -z \"\$$shlibpath_var\"; then
+-	eval $shlibpath_var=\$dir
++      if eval "test -z \"\$$shlibpath_var\""; then
++	eval "$shlibpath_var=\"\$dir\""
+       else
+-	eval $shlibpath_var=\$dir:\$$shlibpath_var
++	eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\""
+       fi
+     done
+ 
+@@ -1939,8 +2610,7 @@ func_mode_execute ()
+ 	;;
+       esac
+       # Quote arguments (to preserve shell metacharacters).
+-      func_quote_for_eval "$file"
+-      args="$args $func_quote_for_eval_result"
++      func_append_quoted args "$file"
+     done
+ 
+     if test "X$opt_dry_run" = Xfalse; then
+@@ -1972,22 +2642,59 @@ func_mode_execute ()
+     fi
+ }
+ 
+-test "$mode" = execute && func_mode_execute ${1+"$@"}
++test "$opt_mode" = execute && func_mode_execute ${1+"$@"}
+ 
+ 
+ # func_mode_finish arg...
+ func_mode_finish ()
+ {
+     $opt_debug
+-    libdirs="$nonopt"
++    libs=
++    libdirs=
+     admincmds=
+ 
+-    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
+-      for dir
+-      do
+-	libdirs="$libdirs $dir"
+-      done
++    for opt in "$nonopt" ${1+"$@"}
++    do
++      if test -d "$opt"; then
++	func_append libdirs " $opt"
+ 
++      elif test -f "$opt"; then
++	if func_lalib_unsafe_p "$opt"; then
++	  func_append libs " $opt"
++	else
++	  func_warning "\`$opt' is not a valid libtool archive"
++	fi
++
++      else
++	func_fatal_error "invalid argument \`$opt'"
++      fi
++    done
++
++    if test -n "$libs"; then
++      if test -n "$lt_sysroot"; then
++        sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"`
++        sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;"
++      else
++        sysroot_cmd=
++      fi
++
++      # Remove sysroot references
++      if $opt_dry_run; then
++        for lib in $libs; do
++          echo "removing references to $lt_sysroot and \`=' prefixes from $lib"
++        done
++      else
++        tmpdir=`func_mktempdir`
++        for lib in $libs; do
++	  sed -e "${sysroot_cmd} s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \
++	    > $tmpdir/tmp-la
++	  mv -f $tmpdir/tmp-la $lib
++	done
++        ${RM}r "$tmpdir"
++      fi
++    fi
++
++    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
+       for libdir in $libdirs; do
+ 	if test -n "$finish_cmds"; then
+ 	  # Do each command in the finish commands.
+@@ -1997,7 +2704,7 @@ func_mode_finish ()
+ 	if test -n "$finish_eval"; then
+ 	  # Do the single finish_eval.
+ 	  eval cmds=\"$finish_eval\"
+-	  $opt_dry_run || eval "$cmds" || admincmds="$admincmds
++	  $opt_dry_run || eval "$cmds" || func_append admincmds "
+        $cmds"
+ 	fi
+       done
+@@ -2006,53 +2713,55 @@ func_mode_finish ()
+     # Exit here if they wanted silent mode.
+     $opt_silent && exit $EXIT_SUCCESS
+ 
+-    echo "----------------------------------------------------------------------"
+-    echo "Libraries have been installed in:"
+-    for libdir in $libdirs; do
+-      $ECHO "   $libdir"
+-    done
+-    echo
+-    echo "If you ever happen to want to link against installed libraries"
+-    echo "in a given directory, LIBDIR, you must either use libtool, and"
+-    echo "specify the full pathname of the library, or use the \`-LLIBDIR'"
+-    echo "flag during linking and do at least one of the following:"
+-    if test -n "$shlibpath_var"; then
+-      echo "   - add LIBDIR to the \`$shlibpath_var' environment variable"
+-      echo "     during execution"
+-    fi
+-    if test -n "$runpath_var"; then
+-      echo "   - add LIBDIR to the \`$runpath_var' environment variable"
+-      echo "     during linking"
+-    fi
+-    if test -n "$hardcode_libdir_flag_spec"; then
+-      libdir=LIBDIR
+-      eval "flag=\"$hardcode_libdir_flag_spec\""
++    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
++      echo "----------------------------------------------------------------------"
++      echo "Libraries have been installed in:"
++      for libdir in $libdirs; do
++	$ECHO "   $libdir"
++      done
++      echo
++      echo "If you ever happen to want to link against installed libraries"
++      echo "in a given directory, LIBDIR, you must either use libtool, and"
++      echo "specify the full pathname of the library, or use the \`-LLIBDIR'"
++      echo "flag during linking and do at least one of the following:"
++      if test -n "$shlibpath_var"; then
++	echo "   - add LIBDIR to the \`$shlibpath_var' environment variable"
++	echo "     during execution"
++      fi
++      if test -n "$runpath_var"; then
++	echo "   - add LIBDIR to the \`$runpath_var' environment variable"
++	echo "     during linking"
++      fi
++      if test -n "$hardcode_libdir_flag_spec"; then
++	libdir=LIBDIR
++	eval flag=\"$hardcode_libdir_flag_spec\"
+ 
+-      $ECHO "   - use the \`$flag' linker flag"
+-    fi
+-    if test -n "$admincmds"; then
+-      $ECHO "   - have your system administrator run these commands:$admincmds"
+-    fi
+-    if test -f /etc/ld.so.conf; then
+-      echo "   - have your system administrator add LIBDIR to \`/etc/ld.so.conf'"
+-    fi
+-    echo
++	$ECHO "   - use the \`$flag' linker flag"
++      fi
++      if test -n "$admincmds"; then
++	$ECHO "   - have your system administrator run these commands:$admincmds"
++      fi
++      if test -f /etc/ld.so.conf; then
++	echo "   - have your system administrator add LIBDIR to \`/etc/ld.so.conf'"
++      fi
++      echo
+ 
+-    echo "See any operating system documentation about shared libraries for"
+-    case $host in
+-      solaris2.[6789]|solaris2.1[0-9])
+-        echo "more information, such as the ld(1), crle(1) and ld.so(8) manual"
+-	echo "pages."
+-	;;
+-      *)
+-        echo "more information, such as the ld(1) and ld.so(8) manual pages."
+-        ;;
+-    esac
+-    echo "----------------------------------------------------------------------"
++      echo "See any operating system documentation about shared libraries for"
++      case $host in
++	solaris2.[6789]|solaris2.1[0-9])
++	  echo "more information, such as the ld(1), crle(1) and ld.so(8) manual"
++	  echo "pages."
++	  ;;
++	*)
++	  echo "more information, such as the ld(1) and ld.so(8) manual pages."
++	  ;;
++      esac
++      echo "----------------------------------------------------------------------"
++    fi
+     exit $EXIT_SUCCESS
+ }
+ 
+-test "$mode" = finish && func_mode_finish ${1+"$@"}
++test "$opt_mode" = finish && func_mode_finish ${1+"$@"}
+ 
+ 
+ # func_mode_install arg...
+@@ -2077,7 +2786,7 @@ func_mode_install ()
+     # The real first argument should be the name of the installation program.
+     # Aesthetically quote it.
+     func_quote_for_eval "$arg"
+-    install_prog="$install_prog$func_quote_for_eval_result"
++    func_append install_prog "$func_quote_for_eval_result"
+     install_shared_prog=$install_prog
+     case " $install_prog " in
+       *[\\\ /]cp\ *) install_cp=: ;;
+@@ -2097,7 +2806,7 @@ func_mode_install ()
+     do
+       arg2=
+       if test -n "$dest"; then
+-	files="$files $dest"
++	func_append files " $dest"
+ 	dest=$arg
+ 	continue
+       fi
+@@ -2135,11 +2844,11 @@ func_mode_install ()
+ 
+       # Aesthetically quote the argument.
+       func_quote_for_eval "$arg"
+-      install_prog="$install_prog $func_quote_for_eval_result"
++      func_append install_prog " $func_quote_for_eval_result"
+       if test -n "$arg2"; then
+ 	func_quote_for_eval "$arg2"
+       fi
+-      install_shared_prog="$install_shared_prog $func_quote_for_eval_result"
++      func_append install_shared_prog " $func_quote_for_eval_result"
+     done
+ 
+     test -z "$install_prog" && \
+@@ -2151,7 +2860,7 @@ func_mode_install ()
+     if test -n "$install_override_mode" && $no_mode; then
+       if $install_cp; then :; else
+ 	func_quote_for_eval "$install_override_mode"
+-	install_shared_prog="$install_shared_prog -m $func_quote_for_eval_result"
++	func_append install_shared_prog " -m $func_quote_for_eval_result"
+       fi
+     fi
+ 
+@@ -2209,10 +2918,13 @@ func_mode_install ()
+       case $file in
+       *.$libext)
+ 	# Do the static libraries later.
+-	staticlibs="$staticlibs $file"
++	func_append staticlibs " $file"
+ 	;;
+ 
+       *.la)
++	func_resolve_sysroot "$file"
++	file=$func_resolve_sysroot_result
++
+ 	# Check to see that this really is a libtool archive.
+ 	func_lalib_unsafe_p "$file" \
+ 	  || func_fatal_help "\`$file' is not a valid libtool archive"
+@@ -2226,23 +2938,30 @@ func_mode_install ()
+ 	if test "X$destdir" = "X$libdir"; then
+ 	  case "$current_libdirs " in
+ 	  *" $libdir "*) ;;
+-	  *) current_libdirs="$current_libdirs $libdir" ;;
++	  *) func_append current_libdirs " $libdir" ;;
+ 	  esac
+ 	else
+ 	  # Note the libdir as a future libdir.
+ 	  case "$future_libdirs " in
+ 	  *" $libdir "*) ;;
+-	  *) future_libdirs="$future_libdirs $libdir" ;;
++	  *) func_append future_libdirs " $libdir" ;;
+ 	  esac
+ 	fi
+ 
+ 	func_dirname "$file" "/" ""
+ 	dir="$func_dirname_result"
+-	dir="$dir$objdir"
++	func_append dir "$objdir"
+ 
+ 	if test -n "$relink_command"; then
++      # Strip any trailing slash from the destination.
++      func_stripname '' '/' "$libdir"
++      destlibdir=$func_stripname_result
++
++      func_stripname '' '/' "$destdir"
++      s_destdir=$func_stripname_result
++
+ 	  # Determine the prefix the user has applied to our future dir.
+-	  inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"`
++	  inst_prefix_dir=`$ECHO "X$s_destdir" | $Xsed -e "s%$destlibdir\$%%"`
+ 
+ 	  # Don't allow the user to place us outside of our expected
+ 	  # location b/c this prevents finding dependent libraries that
+@@ -2315,7 +3034,7 @@ func_mode_install ()
+ 	func_show_eval "$install_prog $instname $destdir/$name" 'exit $?'
+ 
+ 	# Maybe install the static library, too.
+-	test -n "$old_library" && staticlibs="$staticlibs $dir/$old_library"
++	test -n "$old_library" && func_append staticlibs " $dir/$old_library"
+ 	;;
+ 
+       *.lo)
+@@ -2503,7 +3222,7 @@ func_mode_install ()
+     test -n "$future_libdirs" && \
+       func_warning "remember to run \`$progname --finish$future_libdirs'"
+ 
+-    if test -n "$current_libdirs" && $opt_finish; then
++    if test -n "$current_libdirs"; then
+       # Maybe just do a dry run.
+       $opt_dry_run && current_libdirs=" -n$current_libdirs"
+       exec_cmd='$SHELL $progpath $preserve_args --finish$current_libdirs'
+@@ -2512,7 +3231,7 @@ func_mode_install ()
+     fi
+ }
+ 
+-test "$mode" = install && func_mode_install ${1+"$@"}
++test "$opt_mode" = install && func_mode_install ${1+"$@"}
+ 
+ 
+ # func_generate_dlsyms outputname originator pic_p
+@@ -2559,6 +3278,18 @@ extern \"C\" {
+ #pragma GCC diagnostic ignored \"-Wstrict-prototypes\"
+ #endif
+ 
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 can't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ /* External symbol declarations for the compiler. */\
+ "
+ 
+@@ -2570,21 +3301,22 @@ extern \"C\" {
+ 	  # Add our own program objects to the symbol list.
+ 	  progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP`
+ 	  for progfile in $progfiles; do
+-	    func_verbose "extracting global C symbols from \`$progfile'"
+-	    $opt_dry_run || eval "$NM $progfile | $global_symbol_pipe >> '$nlist'"
++	    func_to_tool_file "$progfile" func_convert_file_msys_to_w32
++	    func_verbose "extracting global C symbols from \`$func_to_tool_file_result'"
++	    $opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'"
+ 	  done
+ 
+ 	  if test -n "$exclude_expsyms"; then
+ 	    $opt_dry_run || {
+-	      $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T
+-	      $MV "$nlist"T "$nlist"
++	      eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T'
++	      eval '$MV "$nlist"T "$nlist"'
+ 	    }
+ 	  fi
+ 
+ 	  if test -n "$export_symbols_regex"; then
+ 	    $opt_dry_run || {
+-	      $EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T
+-	      $MV "$nlist"T "$nlist"
++	      eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T'
++	      eval '$MV "$nlist"T "$nlist"'
+ 	    }
+ 	  fi
+ 
+@@ -2593,23 +3325,23 @@ extern \"C\" {
+ 	    export_symbols="$output_objdir/$outputname.exp"
+ 	    $opt_dry_run || {
+ 	      $RM $export_symbols
+-	      ${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' < "$nlist" > "$export_symbols"
++	      eval "${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"'
+ 	      case $host in
+ 	      *cygwin* | *mingw* | *cegcc* )
+-                echo EXPORTS > "$output_objdir/$outputname.def"
+-                cat "$export_symbols" >> "$output_objdir/$outputname.def"
++                eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
++                eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"'
+ 	        ;;
+ 	      esac
+ 	    }
+ 	  else
+ 	    $opt_dry_run || {
+-	      ${SED} -e 's/\([].[*^$]\)/\\\1/g' -e 's/^/ /' -e 's/$/$/' < "$export_symbols" > "$output_objdir/$outputname.exp"
+-	      $GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T
+-	      $MV "$nlist"T "$nlist"
++	      eval "${SED} -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"'
++	      eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T'
++	      eval '$MV "$nlist"T "$nlist"'
+ 	      case $host in
+ 	        *cygwin* | *mingw* | *cegcc* )
+-	          echo EXPORTS > "$output_objdir/$outputname.def"
+-	          cat "$nlist" >> "$output_objdir/$outputname.def"
++	          eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
++	          eval 'cat "$nlist" >> "$output_objdir/$outputname.def"'
+ 	          ;;
+ 	      esac
+ 	    }
+@@ -2620,10 +3352,52 @@ extern \"C\" {
+ 	  func_verbose "extracting global C symbols from \`$dlprefile'"
+ 	  func_basename "$dlprefile"
+ 	  name="$func_basename_result"
+-	  $opt_dry_run || {
+-	    $ECHO ": $name " >> "$nlist"
+-	    eval "$NM $dlprefile 2>/dev/null | $global_symbol_pipe >> '$nlist'"
+-	  }
++          case $host in
++	    *cygwin* | *mingw* | *cegcc* )
++	      # if an import library, we need to obtain dlname
++	      if func_win32_import_lib_p "$dlprefile"; then
++	        func_tr_sh "$dlprefile"
++	        eval "curr_lafile=\$libfile_$func_tr_sh_result"
++	        dlprefile_dlbasename=""
++	        if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then
++	          # Use subshell, to avoid clobbering current variable values
++	          dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"`
++	          if test -n "$dlprefile_dlname" ; then
++	            func_basename "$dlprefile_dlname"
++	            dlprefile_dlbasename="$func_basename_result"
++	          else
++	            # no lafile. user explicitly requested -dlpreopen <import library>.
++	            $sharedlib_from_linklib_cmd "$dlprefile"
++	            dlprefile_dlbasename=$sharedlib_from_linklib_result
++	          fi
++	        fi
++	        $opt_dry_run || {
++	          if test -n "$dlprefile_dlbasename" ; then
++	            eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"'
++	          else
++	            func_warning "Could not compute DLL name from $name"
++	            eval '$ECHO ": $name " >> "$nlist"'
++	          fi
++	          func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
++	          eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe |
++	            $SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'"
++	        }
++	      else # not an import lib
++	        $opt_dry_run || {
++	          eval '$ECHO ": $name " >> "$nlist"'
++	          func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
++	          eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
++	        }
++	      fi
++	    ;;
++	    *)
++	      $opt_dry_run || {
++	        eval '$ECHO ": $name " >> "$nlist"'
++	        func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
++	        eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
++	      }
++	    ;;
++          esac
+ 	done
+ 
+ 	$opt_dry_run || {
+@@ -2661,26 +3435,9 @@ typedef struct {
+   const char *name;
+   void *address;
+ } lt_dlsymlist;
+-"
+-	  case $host in
+-	  *cygwin* | *mingw* | *cegcc* )
+-	    echo >> "$output_objdir/$my_dlsyms" "\
+-/* DATA imports from DLLs on WIN32 con't be const, because
+-   runtime relocations are performed -- see ld's documentation
+-   on pseudo-relocs.  */"
+-	    lt_dlsym_const= ;;
+-	  *osf5*)
+-	    echo >> "$output_objdir/$my_dlsyms" "\
+-/* This system does not cope well with relocations in const data */"
+-	    lt_dlsym_const= ;;
+-	  *)
+-	    lt_dlsym_const=const ;;
+-	  esac
+-
+-	  echo >> "$output_objdir/$my_dlsyms" "\
+-extern $lt_dlsym_const lt_dlsymlist
++extern LT_DLSYM_CONST lt_dlsymlist
+ lt_${my_prefix}_LTX_preloaded_symbols[];
+-$lt_dlsym_const lt_dlsymlist
++LT_DLSYM_CONST lt_dlsymlist
+ lt_${my_prefix}_LTX_preloaded_symbols[] =
+ {\
+   { \"$my_originator\", (void *) 0 },"
+@@ -2736,7 +3493,7 @@ static const void *lt_preloaded_setup() {
+ 	for arg in $LTCFLAGS; do
+ 	  case $arg in
+ 	  -pie | -fpie | -fPIE) ;;
+-	  *) symtab_cflags="$symtab_cflags $arg" ;;
++	  *) func_append symtab_cflags " $arg" ;;
+ 	  esac
+ 	done
+ 
+@@ -2796,9 +3553,11 @@ func_win32_libid ()
+     win32_libid_type="x86 archive import"
+     ;;
+   *ar\ archive*) # could be an import, or static
+-    if $OBJDUMP -f "$1" | $SED -e '10q' 2>/dev/null |
+-       $EGREP 'file format (pe-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
+-      win32_nmres=`$NM -f posix -A "$1" |
++    # Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD.
++    if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null |
++       $EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
++      func_to_tool_file "$1" func_convert_file_msys_to_w32
++      win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" |
+ 	$SED -n -e '
+ 	    1,100{
+ 		/ I /{
+@@ -2827,6 +3586,131 @@ func_win32_libid ()
+   $ECHO "$win32_libid_type"
+ }
+ 
++# func_cygming_dll_for_implib ARG
++#
++# Platform-specific function to extract the
++# name of the DLL associated with the specified
++# import library ARG.
++# Invoked by eval'ing the libtool variable
++#    $sharedlib_from_linklib_cmd
++# Result is available in the variable
++#    $sharedlib_from_linklib_result
++func_cygming_dll_for_implib ()
++{
++  $opt_debug
++  sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"`
++}
++
++# func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs
++#
++# This is the core of a fallback implementation of a
++# platform-specific function to extract the name of the
++# DLL associated with the specified import library LIBNAME.
++#
++# SECTION_NAME is either .idata$6 or .idata$7, depending
++# on the platform and compiler that created the implib.
++#
++# Echos the name of the DLL associated with the
++# specified import library.
++func_cygming_dll_for_implib_fallback_core ()
++{
++  $opt_debug
++  match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"`
++  $OBJDUMP -s --section "$1" "$2" 2>/dev/null |
++    $SED '/^Contents of section '"$match_literal"':/{
++      # Place marker at beginning of archive member dllname section
++      s/.*/====MARK====/
++      p
++      d
++    }
++    # These lines can sometimes be longer than 43 characters, but
++    # are always uninteresting
++    /:[	 ]*file format pe[i]\{,1\}-/d
++    /^In archive [^:]*:/d
++    # Ensure marker is printed
++    /^====MARK====/p
++    # Remove all lines with less than 43 characters
++    /^.\{43\}/!d
++    # From remaining lines, remove first 43 characters
++    s/^.\{43\}//' |
++    $SED -n '
++      # Join marker and all lines until next marker into a single line
++      /^====MARK====/ b para
++      H
++      $ b para
++      b
++      :para
++      x
++      s/\n//g
++      # Remove the marker
++      s/^====MARK====//
++      # Remove trailing dots and whitespace
++      s/[\. \t]*$//
++      # Print
++      /./p' |
++    # we now have a list, one entry per line, of the stringified
++    # contents of the appropriate section of all members of the
++    # archive which possess that section. Heuristic: eliminate
++    # all those which have a first or second character that is
++    # a '.' (that is, objdump's representation of an unprintable
++    # character.) This should work for all archives with less than
++    # 0x302f exports -- but will fail for DLLs whose name actually
++    # begins with a literal '.' or a single character followed by
++    # a '.'.
++    #
++    # Of those that remain, print the first one.
++    $SED -e '/^\./d;/^.\./d;q'
++}
++
++# func_cygming_gnu_implib_p ARG
++# This predicate returns with zero status (TRUE) if
++# ARG is a GNU/binutils-style import library. Returns
++# with nonzero status (FALSE) otherwise.
++func_cygming_gnu_implib_p ()
++{
++  $opt_debug
++  func_to_tool_file "$1" func_convert_file_msys_to_w32
++  func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'`
++  test -n "$func_cygming_gnu_implib_tmp"
++}
++
++# func_cygming_ms_implib_p ARG
++# This predicate returns with zero status (TRUE) if
++# ARG is an MS-style import library. Returns
++# with nonzero status (FALSE) otherwise.
++func_cygming_ms_implib_p ()
++{
++  $opt_debug
++  func_to_tool_file "$1" func_convert_file_msys_to_w32
++  func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'`
++  test -n "$func_cygming_ms_implib_tmp"
++}
++
++# func_cygming_dll_for_implib_fallback ARG
++# Platform-specific function to extract the
++# name of the DLL associated with the specified
++# import library ARG.
++#
++# This fallback implementation is for use when $DLLTOOL
++# does not support the --identify-strict option.
++# Invoked by eval'ing the libtool variable
++#    $sharedlib_from_linklib_cmd
++# Result is available in the variable
++#    $sharedlib_from_linklib_result
++func_cygming_dll_for_implib_fallback ()
++{
++  $opt_debug
++  if func_cygming_gnu_implib_p "$1" ; then
++    # binutils import library
++    sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"`
++  elif func_cygming_ms_implib_p "$1" ; then
++    # ms-generated import library
++    sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"`
++  else
++    # unknown
++    sharedlib_from_linklib_result=""
++  fi
++}
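
A rough usage sketch: when $DLLTOOL lacks --identify-strict, libtool points
$sharedlib_from_linklib_cmd at this fallback, and callers resolve the DLL
behind an import library roughly like so (libfoo.dll.a is a hypothetical name):

    sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
    $sharedlib_from_linklib_cmd "libfoo.dll.a"
    # $sharedlib_from_linklib_result is e.g. libfoo-1.dll,
    # or empty if the import library style is not recognized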
+ 
+ 
+ # func_extract_an_archive dir oldlib
+@@ -2917,7 +3801,7 @@ func_extract_archives ()
+ 	    darwin_file=
+ 	    darwin_files=
+ 	    for darwin_file in $darwin_filelist; do
+-	      darwin_files=`find unfat-$$ -name $darwin_file -print | $NL2SP`
++	      darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP`
+ 	      $LIPO -create -output "$darwin_file" $darwin_files
+ 	    done # $darwin_filelist
+ 	    $RM -rf unfat-$$
+@@ -2932,7 +3816,7 @@ func_extract_archives ()
+         func_extract_an_archive "$my_xdir" "$my_xabs"
+ 	;;
+       esac
+-      my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | $NL2SP`
++      my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP`
+     done
+ 
+     func_extract_archives_result="$my_oldobjs"
+@@ -3014,7 +3898,110 @@ func_fallback_echo ()
+ _LTECHO_EOF'
+ }
+     ECHO=\"$qECHO\"
+-  fi\
++  fi
++
++# Very basic option parsing. These options are (a) specific to
++# the libtool wrapper, (b) are identical between the wrapper
++# /script/ and the wrapper /executable/ which is used only on
++# windows platforms, and (c) all begin with the string "--lt-"
++# (application programs are unlikely to have options which match
++# this pattern).
++#
++# There are only two supported options: --lt-debug and
++# --lt-dump-script. There is, deliberately, no --lt-help.
++#
++# The first argument to this parsing function should be the
++# script's $0 value, followed by "$@".
++lt_option_debug=
++func_parse_lt_options ()
++{
++  lt_script_arg0=\$0
++  shift
++  for lt_opt
++  do
++    case \"\$lt_opt\" in
++    --lt-debug) lt_option_debug=1 ;;
++    --lt-dump-script)
++        lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\`
++        test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=.
++        lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\`
++        cat \"\$lt_dump_D/\$lt_dump_F\"
++        exit 0
++      ;;
++    --lt-*)
++        \$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2
++        exit 1
++      ;;
++    esac
++  done
++
++  # Print the debug banner immediately:
++  if test -n \"\$lt_option_debug\"; then
++    echo \"${outputname}:${output}:\${LINENO}: libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\" 1>&2
++  fi
++}
++
++# Used when --lt-debug. Prints its arguments to stdout
++# (redirection is the responsibility of the caller)
++func_lt_dump_args ()
++{
++  lt_dump_args_N=1;
++  for lt_arg
++  do
++    \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[\$lt_dump_args_N]: \$lt_arg\"
++    lt_dump_args_N=\`expr \$lt_dump_args_N + 1\`
++  done
++}
++
++# Core function for launching the target application
++func_exec_program_core ()
++{
++"
++  case $host in
++  # Backslashes separate directories on plain windows
++  *-*-mingw | *-*-os2* | *-cegcc*)
++    $ECHO "\
++      if test -n \"\$lt_option_debug\"; then
++        \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir\\\\\$program\" 1>&2
++        func_lt_dump_args \${1+\"\$@\"} 1>&2
++      fi
++      exec \"\$progdir\\\\\$program\" \${1+\"\$@\"}
++"
++    ;;
++
++  *)
++    $ECHO "\
++      if test -n \"\$lt_option_debug\"; then
++        \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir/\$program\" 1>&2
++        func_lt_dump_args \${1+\"\$@\"} 1>&2
++      fi
++      exec \"\$progdir/\$program\" \${1+\"\$@\"}
++"
++    ;;
++  esac
++  $ECHO "\
++      \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2
++      exit 1
++}
++
++# A function to encapsulate launching the target application
++# Strips options in the --lt-* namespace from \$@ and
++# launches target application with the remaining arguments.
++func_exec_program ()
++{
++  for lt_wr_arg
++  do
++    case \$lt_wr_arg in
++    --lt-*) ;;
++    *) set x \"\$@\" \"\$lt_wr_arg\"; shift;;
++    esac
++    shift
++  done
++  func_exec_program_core \${1+\"\$@\"}
++}
++
++  # Parse options
++  func_parse_lt_options \"\$0\" \${1+\"\$@\"}
+ 
+   # Find the directory that this script lives in.
+   thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\`
+@@ -3078,7 +4065,7 @@ _LTECHO_EOF'
+ 
+     # relink executable if necessary
+     if test -n \"\$relink_command\"; then
+-      if relink_command_output=\`eval \"\$relink_command\" 2>&1\`; then :
++      if relink_command_output=\`eval \$relink_command 2>&1\`; then :
+       else
+ 	$ECHO \"\$relink_command_output\" >&2
+ 	$RM \"\$progdir/\$file\"
+@@ -3102,6 +4089,18 @@ _LTECHO_EOF'
+ 
+   if test -f \"\$progdir/\$program\"; then"
+ 
++	# Fix the DLL searchpath if we need to.  Do this before prepending
++	# to shlibpath, because on Windows, both are PATH and uninstalled
++	# libraries must come first.
++	if test -n "$dllsearchpath"; then
++	  $ECHO "\
++    # Add the dll search path components to the executable PATH
++    PATH=$dllsearchpath:\$PATH
++"
++	fi
++
+ 	# Export our shlibpath_var if we have one.
+ 	if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
+ 	  $ECHO "\
+@@ -3116,35 +4115,10 @@ _LTECHO_EOF'
+ "
+ 	fi
+ 
+-	# fixup the dll searchpath if we need to.
+-	if test -n "$dllsearchpath"; then
+-	  $ECHO "\
+-    # Add the dll search path components to the executable PATH
+-    PATH=$dllsearchpath:\$PATH
+-"
+-	fi
+-
+ 	$ECHO "\
+     if test \"\$libtool_execute_magic\" != \"$magic\"; then
+       # Run the actual program with our arguments.
+-"
+-	case $host in
+-	# Backslashes separate directories on plain windows
+-	*-*-mingw | *-*-os2* | *-cegcc*)
+-	  $ECHO "\
+-      exec \"\$progdir\\\\\$program\" \${1+\"\$@\"}
+-"
+-	  ;;
+-
+-	*)
+-	  $ECHO "\
+-      exec \"\$progdir/\$program\" \${1+\"\$@\"}
+-"
+-	  ;;
+-	esac
+-	$ECHO "\
+-      \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2
+-      exit 1
++      func_exec_program \${1+\"\$@\"}
+     fi
+   else
+     # The program doesn't exist.
+@@ -3158,166 +4132,6 @@ fi\
+ }
+ 
+ 
+-# func_to_host_path arg
+-#
+-# Convert paths to host format when used with build tools.
+-# Intended for use with "native" mingw (where libtool itself
+-# is running under the msys shell), or in the following cross-
+-# build environments:
+-#    $build          $host
+-#    mingw (msys)    mingw  [e.g. native]
+-#    cygwin          mingw
+-#    *nix + wine     mingw
+-# where wine is equipped with the `winepath' executable.
+-# In the native mingw case, the (msys) shell automatically
+-# converts paths for any non-msys applications it launches,
+-# but that facility isn't available from inside the cwrapper.
+-# Similar accommodations are necessary for $host mingw and
+-# $build cygwin.  Calling this function does no harm for other
+-# $host/$build combinations not listed above.
+-#
+-# ARG is the path (on $build) that should be converted to
+-# the proper representation for $host. The result is stored
+-# in $func_to_host_path_result.
+-func_to_host_path ()
+-{
+-  func_to_host_path_result="$1"
+-  if test -n "$1"; then
+-    case $host in
+-      *mingw* )
+-        lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
+-        case $build in
+-          *mingw* ) # actually, msys
+-            # awkward: cmd appends spaces to result
+-            func_to_host_path_result=`( cmd //c echo "$1" ) 2>/dev/null |
+-              $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
+-            ;;
+-          *cygwin* )
+-            func_to_host_path_result=`cygpath -w "$1" |
+-	      $SED -e "$lt_sed_naive_backslashify"`
+-            ;;
+-          * )
+-            # Unfortunately, winepath does not exit with a non-zero
+-            # error code, so we are forced to check the contents of
+-            # stdout. On the other hand, if the command is not
+-            # found, the shell will set an exit code of 127 and print
+-            # *an error message* to stdout. So we must check for both
+-            # error code of zero AND non-empty stdout, which explains
+-            # the odd construction:
+-            func_to_host_path_tmp1=`winepath -w "$1" 2>/dev/null`
+-            if test "$?" -eq 0 && test -n "${func_to_host_path_tmp1}"; then
+-              func_to_host_path_result=`$ECHO "$func_to_host_path_tmp1" |
+-                $SED -e "$lt_sed_naive_backslashify"`
+-            else
+-              # Allow warning below.
+-              func_to_host_path_result=
+-            fi
+-            ;;
+-        esac
+-        if test -z "$func_to_host_path_result" ; then
+-          func_error "Could not determine host path corresponding to"
+-          func_error "  \`$1'"
+-          func_error "Continuing, but uninstalled executables may not work."
+-          # Fallback:
+-          func_to_host_path_result="$1"
+-        fi
+-        ;;
+-    esac
+-  fi
+-}
+-# end: func_to_host_path
+-
+-# func_to_host_pathlist arg
+-#
+-# Convert pathlists to host format when used with build tools.
+-# See func_to_host_path(), above. This function supports the
+-# following $build/$host combinations (but does no harm for
+-# combinations not listed here):
+-#    $build          $host
+-#    mingw (msys)    mingw  [e.g. native]
+-#    cygwin          mingw
+-#    *nix + wine     mingw
+-#
+-# Path separators are also converted from $build format to
+-# $host format. If ARG begins or ends with a path separator
+-# character, it is preserved (but converted to $host format)
+-# on output.
+-#
+-# ARG is a pathlist (on $build) that should be converted to
+-# the proper representation on $host. The result is stored
+-# in $func_to_host_pathlist_result.
+-func_to_host_pathlist ()
+-{
+-  func_to_host_pathlist_result="$1"
+-  if test -n "$1"; then
+-    case $host in
+-      *mingw* )
+-        lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
+-        # Remove leading and trailing path separator characters from
+-        # ARG. msys behavior is inconsistent here, cygpath turns them
+-        # into '.;' and ';.', and winepath ignores them completely.
+-	func_stripname : : "$1"
+-        func_to_host_pathlist_tmp1=$func_stripname_result
+-        case $build in
+-          *mingw* ) # Actually, msys.
+-            # Awkward: cmd appends spaces to result.
+-            func_to_host_pathlist_result=`
+-	      ( cmd //c echo "$func_to_host_pathlist_tmp1" ) 2>/dev/null |
+-	      $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
+-            ;;
+-          *cygwin* )
+-            func_to_host_pathlist_result=`cygpath -w -p "$func_to_host_pathlist_tmp1" |
+-              $SED -e "$lt_sed_naive_backslashify"`
+-            ;;
+-          * )
+-            # unfortunately, winepath doesn't convert pathlists
+-            func_to_host_pathlist_result=""
+-            func_to_host_pathlist_oldIFS=$IFS
+-            IFS=:
+-            for func_to_host_pathlist_f in $func_to_host_pathlist_tmp1 ; do
+-              IFS=$func_to_host_pathlist_oldIFS
+-              if test -n "$func_to_host_pathlist_f" ; then
+-                func_to_host_path "$func_to_host_pathlist_f"
+-                if test -n "$func_to_host_path_result" ; then
+-                  if test -z "$func_to_host_pathlist_result" ; then
+-                    func_to_host_pathlist_result="$func_to_host_path_result"
+-                  else
+-                    func_append func_to_host_pathlist_result ";$func_to_host_path_result"
+-                  fi
+-                fi
+-              fi
+-            done
+-            IFS=$func_to_host_pathlist_oldIFS
+-            ;;
+-        esac
+-        if test -z "$func_to_host_pathlist_result"; then
+-          func_error "Could not determine the host path(s) corresponding to"
+-          func_error "  \`$1'"
+-          func_error "Continuing, but uninstalled executables may not work."
+-          # Fallback. This may break if $1 contains DOS-style drive
+-          # specifications. The fix is not to complicate the expression
+-          # below, but for the user to provide a working wine installation
+-          # with winepath so that path translation in the cross-to-mingw
+-          # case works properly.
+-          lt_replace_pathsep_nix_to_dos="s|:|;|g"
+-          func_to_host_pathlist_result=`echo "$func_to_host_pathlist_tmp1" |\
+-            $SED -e "$lt_replace_pathsep_nix_to_dos"`
+-        fi
+-        # Now, add the leading and trailing path separators back
+-        case "$1" in
+-          :* ) func_to_host_pathlist_result=";$func_to_host_pathlist_result"
+-            ;;
+-        esac
+-        case "$1" in
+-          *: ) func_append func_to_host_pathlist_result ";"
+-            ;;
+-        esac
+-        ;;
+-    esac
+-  fi
+-}
+-# end: func_to_host_pathlist
+-
+ # func_emit_cwrapperexe_src
+ # emit the source code for a wrapper executable on stdout
+ # Must ONLY be called from within func_mode_link because
+@@ -3334,10 +4148,6 @@ func_emit_cwrapperexe_src ()
+ 
+    This wrapper executable should never be moved out of the build directory.
+    If it is, it will not operate correctly.
+-
+-   Currently, it simply execs the wrapper *script* "$SHELL $output",
+-   but could eventually absorb all of the scripts functionality and
+-   exec $objdir/$outputname directly.
+ */
+ EOF
+ 	    cat <<"EOF"
+@@ -3462,22 +4272,13 @@ int setenv (const char *, const char *, int);
+   if (stale) { free ((void *) stale); stale = 0; } \
+ } while (0)
+ 
+-#undef LTWRAPPER_DEBUGPRINTF
+-#if defined LT_DEBUGWRAPPER
+-# define LTWRAPPER_DEBUGPRINTF(args) ltwrapper_debugprintf args
+-static void
+-ltwrapper_debugprintf (const char *fmt, ...)
+-{
+-    va_list args;
+-    va_start (args, fmt);
+-    (void) vfprintf (stderr, fmt, args);
+-    va_end (args);
+-}
++#if defined(LT_DEBUGWRAPPER)
++static int lt_debug = 1;
+ #else
+-# define LTWRAPPER_DEBUGPRINTF(args)
++static int lt_debug = 0;
+ #endif
+ 
+-const char *program_name = NULL;
++const char *program_name = "libtool-wrapper"; /* in case xstrdup fails */
+ 
+ void *xmalloc (size_t num);
+ char *xstrdup (const char *string);
+@@ -3487,7 +4288,10 @@ char *chase_symlinks (const char *pathspec);
+ int make_executable (const char *path);
+ int check_executable (const char *path);
+ char *strendzap (char *str, const char *pat);
+-void lt_fatal (const char *message, ...);
++void lt_debugprintf (const char *file, int line, const char *fmt, ...);
++void lt_fatal (const char *file, int line, const char *message, ...);
++static const char *nonnull (const char *s);
++static const char *nonempty (const char *s);
+ void lt_setenv (const char *name, const char *value);
+ char *lt_extend_str (const char *orig_value, const char *add, int to_end);
+ void lt_update_exe_path (const char *name, const char *value);
+@@ -3497,14 +4301,14 @@ void lt_dump_script (FILE *f);
+ EOF
+ 
+ 	    cat <<EOF
+-const char * MAGIC_EXE = "$magic_exe";
++volatile const char * MAGIC_EXE = "$magic_exe";
+ const char * LIB_PATH_VARNAME = "$shlibpath_var";
+ EOF
+ 
+ 	    if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
+-              func_to_host_pathlist "$temp_rpath"
++              func_to_host_path "$temp_rpath"
+ 	      cat <<EOF
+-const char * LIB_PATH_VALUE   = "$func_to_host_pathlist_result";
++const char * LIB_PATH_VALUE   = "$func_to_host_path_result";
+ EOF
+ 	    else
+ 	      cat <<"EOF"
+@@ -3513,10 +4317,10 @@ EOF
+ 	    fi
+ 
+ 	    if test -n "$dllsearchpath"; then
+-              func_to_host_pathlist "$dllsearchpath:"
++              func_to_host_path "$dllsearchpath:"
+ 	      cat <<EOF
+ const char * EXE_PATH_VARNAME = "PATH";
+-const char * EXE_PATH_VALUE   = "$func_to_host_pathlist_result";
++const char * EXE_PATH_VALUE   = "$func_to_host_path_result";
+ EOF
+ 	    else
+ 	      cat <<"EOF"
+@@ -3539,12 +4343,10 @@ EOF
+ 	    cat <<"EOF"
+ 
+ #define LTWRAPPER_OPTION_PREFIX         "--lt-"
+-#define LTWRAPPER_OPTION_PREFIX_LENGTH  5
+ 
+-static const size_t opt_prefix_len         = LTWRAPPER_OPTION_PREFIX_LENGTH;
+ static const char *ltwrapper_option_prefix = LTWRAPPER_OPTION_PREFIX;
+-
+ static const char *dumpscript_opt       = LTWRAPPER_OPTION_PREFIX "dump-script";
++static const char *debug_opt            = LTWRAPPER_OPTION_PREFIX "debug";
+ 
+ int
+ main (int argc, char *argv[])
+@@ -3561,10 +4363,13 @@ main (int argc, char *argv[])
+   int i;
+ 
+   program_name = (char *) xstrdup (base_name (argv[0]));
+-  LTWRAPPER_DEBUGPRINTF (("(main) argv[0]      : %s\n", argv[0]));
+-  LTWRAPPER_DEBUGPRINTF (("(main) program_name : %s\n", program_name));
++  newargz = XMALLOC (char *, argc + 1);
+ 
+-  /* very simple arg parsing; don't want to rely on getopt */
++  /* very simple arg parsing; don't want to rely on getopt
++   * also, copy all non cwrapper options to newargz, except
++   * argz[0], which is handled differently
++   */
++  newargc=0;
+   for (i = 1; i < argc; i++)
+     {
+       if (strcmp (argv[i], dumpscript_opt) == 0)
+@@ -3581,21 +4386,54 @@ EOF
+ 	  lt_dump_script (stdout);
+ 	  return 0;
+ 	}
++      if (strcmp (argv[i], debug_opt) == 0)
++	{
++          lt_debug = 1;
++          continue;
++	}
++      if (strcmp (argv[i], ltwrapper_option_prefix) == 0)
++        {
++          /* however, if there is an option in the LTWRAPPER_OPTION_PREFIX
++             namespace, but it is not one of the ones we know about and
++             have already dealt with, above (inluding dump-script), then
++             report an error. Otherwise, targets might begin to believe
++             they are allowed to use options in the LTWRAPPER_OPTION_PREFIX
++             namespace. The first time any user complains about this, we'll
++             need to make LTWRAPPER_OPTION_PREFIX a configure-time option
++             or a configure.ac-settable value.
++           */
++          lt_fatal (__FILE__, __LINE__,
++		    "unrecognized %s option: '%s'",
++                    ltwrapper_option_prefix, argv[i]);
++        }
++      /* otherwise ... */
++      newargz[++newargc] = xstrdup (argv[i]);
+     }
++  newargz[++newargc] = NULL;
++
++EOF
++	    cat <<EOF
++  /* The GNU banner must be the first non-error debug message */
++  lt_debugprintf (__FILE__, __LINE__, "libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\n");
++EOF
++	    cat <<"EOF"
++  lt_debugprintf (__FILE__, __LINE__, "(main) argv[0]: %s\n", argv[0]);
++  lt_debugprintf (__FILE__, __LINE__, "(main) program_name: %s\n", program_name);
+ 
+-  newargz = XMALLOC (char *, argc + 1);
+   tmp_pathspec = find_executable (argv[0]);
+   if (tmp_pathspec == NULL)
+-    lt_fatal ("Couldn't find %s", argv[0]);
+-  LTWRAPPER_DEBUGPRINTF (("(main) found exe (before symlink chase) at : %s\n",
+-			  tmp_pathspec));
++    lt_fatal (__FILE__, __LINE__, "couldn't find %s", argv[0]);
++  lt_debugprintf (__FILE__, __LINE__,
++                  "(main) found exe (before symlink chase) at: %s\n",
++		  tmp_pathspec);
+ 
+   actual_cwrapper_path = chase_symlinks (tmp_pathspec);
+-  LTWRAPPER_DEBUGPRINTF (("(main) found exe (after symlink chase) at : %s\n",
+-			  actual_cwrapper_path));
++  lt_debugprintf (__FILE__, __LINE__,
++                  "(main) found exe (after symlink chase) at: %s\n",
++		  actual_cwrapper_path);
+   XFREE (tmp_pathspec);
+ 
+-  actual_cwrapper_name = xstrdup( base_name (actual_cwrapper_path));
++  actual_cwrapper_name = xstrdup (base_name (actual_cwrapper_path));
+   strendzap (actual_cwrapper_path, actual_cwrapper_name);
+ 
+   /* wrapper name transforms */
+@@ -3613,8 +4451,9 @@ EOF
+   target_name = tmp_pathspec;
+   tmp_pathspec = 0;
+ 
+-  LTWRAPPER_DEBUGPRINTF (("(main) libtool target name: %s\n",
+-			  target_name));
++  lt_debugprintf (__FILE__, __LINE__,
++		  "(main) libtool target name: %s\n",
++		  target_name);
+ EOF
+ 
+ 	    cat <<EOF
+@@ -3664,35 +4503,19 @@ EOF
+ 
+   lt_setenv ("BIN_SH", "xpg4"); /* for Tru64 */
+   lt_setenv ("DUALCASE", "1");  /* for MSK sh */
+-  lt_update_lib_path (LIB_PATH_VARNAME, LIB_PATH_VALUE);
++  /* Update the DLL searchpath.  EXE_PATH_VALUE ($dllsearchpath) must
++     be prepended before (that is, appear after) LIB_PATH_VALUE ($temp_rpath)
++     because on Windows, both *_VARNAMEs are PATH but uninstalled
++     libraries must come first. */
+   lt_update_exe_path (EXE_PATH_VARNAME, EXE_PATH_VALUE);
++  lt_update_lib_path (LIB_PATH_VARNAME, LIB_PATH_VALUE);
+ 
+-  newargc=0;
+-  for (i = 1; i < argc; i++)
+-    {
+-      if (strncmp (argv[i], ltwrapper_option_prefix, opt_prefix_len) == 0)
+-        {
+-          /* however, if there is an option in the LTWRAPPER_OPTION_PREFIX
+-             namespace, but it is not one of the ones we know about and
+-             have already dealt with, above (inluding dump-script), then
+-             report an error. Otherwise, targets might begin to believe
+-             they are allowed to use options in the LTWRAPPER_OPTION_PREFIX
+-             namespace. The first time any user complains about this, we'll
+-             need to make LTWRAPPER_OPTION_PREFIX a configure-time option
+-             or a configure.ac-settable value.
+-           */
+-          lt_fatal ("Unrecognized option in %s namespace: '%s'",
+-                    ltwrapper_option_prefix, argv[i]);
+-        }
+-      /* otherwise ... */
+-      newargz[++newargc] = xstrdup (argv[i]);
+-    }
+-  newargz[++newargc] = NULL;
+-
+-  LTWRAPPER_DEBUGPRINTF     (("(main) lt_argv_zero : %s\n", (lt_argv_zero ? lt_argv_zero : "<NULL>")));
++  lt_debugprintf (__FILE__, __LINE__, "(main) lt_argv_zero: %s\n",
++		  nonnull (lt_argv_zero));
+   for (i = 0; i < newargc; i++)
+     {
+-      LTWRAPPER_DEBUGPRINTF (("(main) newargz[%d]   : %s\n", i, (newargz[i] ? newargz[i] : "<NULL>")));
++      lt_debugprintf (__FILE__, __LINE__, "(main) newargz[%d]: %s\n",
++		      i, nonnull (newargz[i]));
+     }
+ 
+ EOF
+@@ -3706,7 +4529,9 @@ EOF
+   if (rval == -1)
+     {
+       /* failed to start process */
+-      LTWRAPPER_DEBUGPRINTF (("(main) failed to launch target \"%s\": errno = %d\n", lt_argv_zero, errno));
++      lt_debugprintf (__FILE__, __LINE__,
++		      "(main) failed to launch target \"%s\": %s\n",
++		      lt_argv_zero, nonnull (strerror (errno)));
+       return 127;
+     }
+   return rval;
+@@ -3728,7 +4553,7 @@ xmalloc (size_t num)
+ {
+   void *p = (void *) malloc (num);
+   if (!p)
+-    lt_fatal ("Memory exhausted");
++    lt_fatal (__FILE__, __LINE__, "memory exhausted");
+ 
+   return p;
+ }
+@@ -3762,8 +4587,8 @@ check_executable (const char *path)
+ {
+   struct stat st;
+ 
+-  LTWRAPPER_DEBUGPRINTF (("(check_executable)  : %s\n",
+-			  path ? (*path ? path : "EMPTY!") : "NULL!"));
++  lt_debugprintf (__FILE__, __LINE__, "(check_executable): %s\n",
++                  nonempty (path));
+   if ((!path) || (!*path))
+     return 0;
+ 
+@@ -3780,8 +4605,8 @@ make_executable (const char *path)
+   int rval = 0;
+   struct stat st;
+ 
+-  LTWRAPPER_DEBUGPRINTF (("(make_executable)   : %s\n",
+-			  path ? (*path ? path : "EMPTY!") : "NULL!"));
++  lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n",
++                  nonempty (path));
+   if ((!path) || (!*path))
+     return 0;
+ 
+@@ -3807,8 +4632,8 @@ find_executable (const char *wrapper)
+   int tmp_len;
+   char *concat_name;
+ 
+-  LTWRAPPER_DEBUGPRINTF (("(find_executable)   : %s\n",
+-			  wrapper ? (*wrapper ? wrapper : "EMPTY!") : "NULL!"));
++  lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n",
++                  nonempty (wrapper));
+ 
+   if ((wrapper == NULL) || (*wrapper == '\0'))
+     return NULL;
+@@ -3861,7 +4686,8 @@ find_executable (const char *wrapper)
+ 		{
+ 		  /* empty path: current directory */
+ 		  if (getcwd (tmp, LT_PATHMAX) == NULL)
+-		    lt_fatal ("getcwd failed");
++		    lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
++                              nonnull (strerror (errno)));
+ 		  tmp_len = strlen (tmp);
+ 		  concat_name =
+ 		    XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
+@@ -3886,7 +4712,8 @@ find_executable (const char *wrapper)
+     }
+   /* Relative path | not found in path: prepend cwd */
+   if (getcwd (tmp, LT_PATHMAX) == NULL)
+-    lt_fatal ("getcwd failed");
++    lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
++              nonnull (strerror (errno)));
+   tmp_len = strlen (tmp);
+   concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
+   memcpy (concat_name, tmp, tmp_len);
+@@ -3912,8 +4739,9 @@ chase_symlinks (const char *pathspec)
+   int has_symlinks = 0;
+   while (strlen (tmp_pathspec) && !has_symlinks)
+     {
+-      LTWRAPPER_DEBUGPRINTF (("checking path component for symlinks: %s\n",
+-			      tmp_pathspec));
++      lt_debugprintf (__FILE__, __LINE__,
++		      "checking path component for symlinks: %s\n",
++		      tmp_pathspec);
+       if (lstat (tmp_pathspec, &s) == 0)
+ 	{
+ 	  if (S_ISLNK (s.st_mode) != 0)
+@@ -3935,8 +4763,9 @@ chase_symlinks (const char *pathspec)
+ 	}
+       else
+ 	{
+-	  char *errstr = strerror (errno);
+-	  lt_fatal ("Error accessing file %s (%s)", tmp_pathspec, errstr);
++	  lt_fatal (__FILE__, __LINE__,
++		    "error accessing file \"%s\": %s",
++		    tmp_pathspec, nonnull (strerror (errno)));
+ 	}
+     }
+   XFREE (tmp_pathspec);
+@@ -3949,7 +4778,8 @@ chase_symlinks (const char *pathspec)
+   tmp_pathspec = realpath (pathspec, buf);
+   if (tmp_pathspec == 0)
+     {
+-      lt_fatal ("Could not follow symlinks for %s", pathspec);
++      lt_fatal (__FILE__, __LINE__,
++		"could not follow symlinks for %s", pathspec);
+     }
+   return xstrdup (tmp_pathspec);
+ #endif
+@@ -3975,11 +4805,25 @@ strendzap (char *str, const char *pat)
+   return str;
+ }
+ 
++void
++lt_debugprintf (const char *file, int line, const char *fmt, ...)
++{
++  va_list args;
++  if (lt_debug)
++    {
++      (void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line);
++      va_start (args, fmt);
++      (void) vfprintf (stderr, fmt, args);
++      va_end (args);
++    }
++}
++
+ static void
+-lt_error_core (int exit_status, const char *mode,
++lt_error_core (int exit_status, const char *file,
++	       int line, const char *mode,
+ 	       const char *message, va_list ap)
+ {
+-  fprintf (stderr, "%s: %s: ", program_name, mode);
++  fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode);
+   vfprintf (stderr, message, ap);
+   fprintf (stderr, ".\n");
+ 
+@@ -3988,20 +4832,32 @@ lt_error_core (int exit_status, const char *mode,
+ }
+ 
+ void
+-lt_fatal (const char *message, ...)
++lt_fatal (const char *file, int line, const char *message, ...)
+ {
+   va_list ap;
+   va_start (ap, message);
+-  lt_error_core (EXIT_FAILURE, "FATAL", message, ap);
++  lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap);
+   va_end (ap);
+ }
+ 
++static const char *
++nonnull (const char *s)
++{
++  return s ? s : "(null)";
++}
++
++static const char *
++nonempty (const char *s)
++{
++  return (s && !*s) ? "(empty)" : nonnull (s);
++}
++
+ void
+ lt_setenv (const char *name, const char *value)
+ {
+-  LTWRAPPER_DEBUGPRINTF (("(lt_setenv) setting '%s' to '%s'\n",
+-                          (name ? name : "<NULL>"),
+-                          (value ? value : "<NULL>")));
++  lt_debugprintf (__FILE__, __LINE__,
++		  "(lt_setenv) setting '%s' to '%s'\n",
++                  nonnull (name), nonnull (value));
+   {
+ #ifdef HAVE_SETENV
+     /* always make a copy, for consistency with !HAVE_SETENV */
+@@ -4049,9 +4905,9 @@ lt_extend_str (const char *orig_value, const char *add, int to_end)
+ void
+ lt_update_exe_path (const char *name, const char *value)
+ {
+-  LTWRAPPER_DEBUGPRINTF (("(lt_update_exe_path) modifying '%s' by prepending '%s'\n",
+-                          (name ? name : "<NULL>"),
+-                          (value ? value : "<NULL>")));
++  lt_debugprintf (__FILE__, __LINE__,
++		  "(lt_update_exe_path) modifying '%s' by prepending '%s'\n",
++                  nonnull (name), nonnull (value));
+ 
+   if (name && *name && value && *value)
+     {
+@@ -4070,9 +4926,9 @@ lt_update_exe_path (const char *name, const char *value)
+ void
+ lt_update_lib_path (const char *name, const char *value)
+ {
+-  LTWRAPPER_DEBUGPRINTF (("(lt_update_lib_path) modifying '%s' by prepending '%s'\n",
+-                          (name ? name : "<NULL>"),
+-                          (value ? value : "<NULL>")));
++  lt_debugprintf (__FILE__, __LINE__,
++		  "(lt_update_lib_path) modifying '%s' by prepending '%s'\n",
++                  nonnull (name), nonnull (value));
+ 
+   if (name && *name && value && *value)
+     {
+@@ -4222,7 +5078,7 @@ EOF
+ func_win32_import_lib_p ()
+ {
+     $opt_debug
+-    case `eval "$file_magic_cmd \"\$1\" 2>/dev/null" | $SED -e 10q` in
++    case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in
+     *import*) : ;;
+     *) false ;;
+     esac
+@@ -4401,9 +5257,9 @@ func_mode_link ()
+ 	    ;;
+ 	  *)
+ 	    if test "$prev" = dlfiles; then
+-	      dlfiles="$dlfiles $arg"
++	      func_append dlfiles " $arg"
+ 	    else
+-	      dlprefiles="$dlprefiles $arg"
++	      func_append dlprefiles " $arg"
+ 	    fi
+ 	    prev=
+ 	    continue
+@@ -4427,7 +5283,7 @@ func_mode_link ()
+ 	    *-*-darwin*)
+ 	      case "$deplibs " in
+ 		*" $qarg.ltframework "*) ;;
+-		*) deplibs="$deplibs $qarg.ltframework" # this is fixed later
++		*) func_append deplibs " $qarg.ltframework" # this is fixed later
+ 		   ;;
+ 	      esac
+ 	      ;;
+@@ -4446,7 +5302,7 @@ func_mode_link ()
+ 	    moreargs=
+ 	    for fil in `cat "$save_arg"`
+ 	    do
+-#	      moreargs="$moreargs $fil"
++#	      func_append moreargs " $fil"
+ 	      arg=$fil
+ 	      # A libtool-controlled object.
+ 
+@@ -4475,7 +5331,7 @@ func_mode_link ()
+ 
+ 		  if test "$prev" = dlfiles; then
+ 		    if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
+-		      dlfiles="$dlfiles $pic_object"
++		      func_append dlfiles " $pic_object"
+ 		      prev=
+ 		      continue
+ 		    else
+@@ -4487,7 +5343,7 @@ func_mode_link ()
+ 		  # CHECK ME:  I think I busted this.  -Ossama
+ 		  if test "$prev" = dlprefiles; then
+ 		    # Preload the old-style object.
+-		    dlprefiles="$dlprefiles $pic_object"
++		    func_append dlprefiles " $pic_object"
+ 		    prev=
+ 		  fi
+ 
+@@ -4557,12 +5413,12 @@ func_mode_link ()
+ 	  if test "$prev" = rpath; then
+ 	    case "$rpath " in
+ 	    *" $arg "*) ;;
+-	    *) rpath="$rpath $arg" ;;
++	    *) func_append rpath " $arg" ;;
+ 	    esac
+ 	  else
+ 	    case "$xrpath " in
+ 	    *" $arg "*) ;;
+-	    *) xrpath="$xrpath $arg" ;;
++	    *) func_append xrpath " $arg" ;;
+ 	    esac
+ 	  fi
+ 	  prev=
+@@ -4574,28 +5430,28 @@ func_mode_link ()
+ 	  continue
+ 	  ;;
+ 	weak)
+-	  weak_libs="$weak_libs $arg"
++	  func_append weak_libs " $arg"
+ 	  prev=
+ 	  continue
+ 	  ;;
+ 	xcclinker)
+-	  linker_flags="$linker_flags $qarg"
+-	  compiler_flags="$compiler_flags $qarg"
++	  func_append linker_flags " $qarg"
++	  func_append compiler_flags " $qarg"
+ 	  prev=
+ 	  func_append compile_command " $qarg"
+ 	  func_append finalize_command " $qarg"
+ 	  continue
+ 	  ;;
+ 	xcompiler)
+-	  compiler_flags="$compiler_flags $qarg"
++	  func_append compiler_flags " $qarg"
+ 	  prev=
+ 	  func_append compile_command " $qarg"
+ 	  func_append finalize_command " $qarg"
+ 	  continue
+ 	  ;;
+ 	xlinker)
+-	  linker_flags="$linker_flags $qarg"
+-	  compiler_flags="$compiler_flags $wl$qarg"
++	  func_append linker_flags " $qarg"
++	  func_append compiler_flags " $wl$qarg"
+ 	  prev=
+ 	  func_append compile_command " $wl$qarg"
+ 	  func_append finalize_command " $wl$qarg"
+@@ -4686,15 +5542,16 @@ func_mode_link ()
+ 	;;
+ 
+       -L*)
+-	func_stripname '-L' '' "$arg"
+-	dir=$func_stripname_result
+-	if test -z "$dir"; then
++	func_stripname "-L" '' "$arg"
++	if test -z "$func_stripname_result"; then
+ 	  if test "$#" -gt 0; then
+ 	    func_fatal_error "require no space between \`-L' and \`$1'"
+ 	  else
+ 	    func_fatal_error "need path for \`-L' option"
+ 	  fi
+ 	fi
++	func_resolve_sysroot "$func_stripname_result"
++	dir=$func_resolve_sysroot_result
+ 	# We need an absolute path.
+ 	case $dir in
+ 	[\\/]* | [A-Za-z]:[\\/]*) ;;
+@@ -4706,10 +5563,16 @@ func_mode_link ()
+ 	  ;;
+ 	esac
+ 	case "$deplibs " in
+-	*" -L$dir "*) ;;
++	*" -L$dir "* | *" $arg "*)
++	  # Will only happen for absolute or sysroot arguments
++	  ;;
+ 	*)
+-	  deplibs="$deplibs -L$dir"
+-	  lib_search_path="$lib_search_path $dir"
++	  # Preserve sysroot, but never include relative directories
++	  case $dir in
++	    [\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;;
++	    *) func_append deplibs " -L$dir" ;;
++	  esac
++	  func_append lib_search_path " $dir"
+ 	  ;;
+ 	esac
+ 	case $host in
+@@ -4718,12 +5581,12 @@ func_mode_link ()
+ 	  case :$dllsearchpath: in
+ 	  *":$dir:"*) ;;
+ 	  ::) dllsearchpath=$dir;;
+-	  *) dllsearchpath="$dllsearchpath:$dir";;
++	  *) func_append dllsearchpath ":$dir";;
+ 	  esac
+ 	  case :$dllsearchpath: in
+ 	  *":$testbindir:"*) ;;
+ 	  ::) dllsearchpath=$testbindir;;
+-	  *) dllsearchpath="$dllsearchpath:$testbindir";;
++	  *) func_append dllsearchpath ":$testbindir";;
+ 	  esac
+ 	  ;;
+ 	esac
+@@ -4747,7 +5610,7 @@ func_mode_link ()
+ 	    ;;
+ 	  *-*-rhapsody* | *-*-darwin1.[012])
+ 	    # Rhapsody C and math libraries are in the System framework
+-	    deplibs="$deplibs System.ltframework"
++	    func_append deplibs " System.ltframework"
+ 	    continue
+ 	    ;;
+ 	  *-*-sco3.2v5* | *-*-sco5v6*)
+@@ -4758,9 +5621,6 @@ func_mode_link ()
+ 	    # Compiler inserts libc in the correct place for threads to work
+ 	    test "X$arg" = "X-lc" && continue
+ 	    ;;
+-	  *-*-linux*)
+-	    test "X$arg" = "X-lc" && continue
+-	    ;;
+ 	  esac
+ 	elif test "X$arg" = "X-lc_r"; then
+ 	 case $host in
+@@ -4770,7 +5630,7 @@ func_mode_link ()
+ 	   ;;
+ 	 esac
+ 	fi
+-	deplibs="$deplibs $arg"
++	func_append deplibs " $arg"
+ 	continue
+ 	;;
+ 
+@@ -4782,8 +5642,8 @@ func_mode_link ()
+       # Tru64 UNIX uses -model [arg] to determine the layout of C++
+       # classes, name mangling, and exception handling.
+       # Darwin uses the -arch flag to determine output architecture.
+-      -model|-arch|-isysroot)
+-	compiler_flags="$compiler_flags $arg"
++      -model|-arch|-isysroot|--sysroot)
++	func_append compiler_flags " $arg"
+ 	func_append compile_command " $arg"
+ 	func_append finalize_command " $arg"
+ 	prev=xcompiler
+@@ -4791,12 +5651,12 @@ func_mode_link ()
+ 	;;
+ 
+       -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe|-threads)
+-	compiler_flags="$compiler_flags $arg"
++	func_append compiler_flags " $arg"
+ 	func_append compile_command " $arg"
+ 	func_append finalize_command " $arg"
+ 	case "$new_inherited_linker_flags " in
+ 	    *" $arg "*) ;;
+-	    * ) new_inherited_linker_flags="$new_inherited_linker_flags $arg" ;;
++	    * ) func_append new_inherited_linker_flags " $arg" ;;
+ 	esac
+ 	continue
+ 	;;
+@@ -4863,13 +5723,17 @@ func_mode_link ()
+ 	# We need an absolute path.
+ 	case $dir in
+ 	[\\/]* | [A-Za-z]:[\\/]*) ;;
++	=*)
++	  func_stripname '=' '' "$dir"
++	  dir=$lt_sysroot$func_stripname_result
++	  ;;
+ 	*)
+ 	  func_fatal_error "only absolute run-paths are allowed"
+ 	  ;;
+ 	esac
+ 	case "$xrpath " in
+ 	*" $dir "*) ;;
+-	*) xrpath="$xrpath $dir" ;;
++	*) func_append xrpath " $dir" ;;
+ 	esac
+ 	continue
+ 	;;
+@@ -4922,8 +5786,8 @@ func_mode_link ()
+ 	for flag in $args; do
+ 	  IFS="$save_ifs"
+           func_quote_for_eval "$flag"
+-	  arg="$arg $func_quote_for_eval_result"
+-	  compiler_flags="$compiler_flags $func_quote_for_eval_result"
++	  func_append arg " $func_quote_for_eval_result"
++	  func_append compiler_flags " $func_quote_for_eval_result"
+ 	done
+ 	IFS="$save_ifs"
+ 	func_stripname ' ' '' "$arg"
+@@ -4938,9 +5802,9 @@ func_mode_link ()
+ 	for flag in $args; do
+ 	  IFS="$save_ifs"
+           func_quote_for_eval "$flag"
+-	  arg="$arg $wl$func_quote_for_eval_result"
+-	  compiler_flags="$compiler_flags $wl$func_quote_for_eval_result"
+-	  linker_flags="$linker_flags $func_quote_for_eval_result"
++	  func_append arg " $wl$func_quote_for_eval_result"
++	  func_append compiler_flags " $wl$func_quote_for_eval_result"
++	  func_append linker_flags " $func_quote_for_eval_result"
+ 	done
+ 	IFS="$save_ifs"
+ 	func_stripname ' ' '' "$arg"
+@@ -4968,24 +5832,27 @@ func_mode_link ()
+ 	arg="$func_quote_for_eval_result"
+ 	;;
+ 
+-      # -64, -mips[0-9] enable 64-bit mode on the SGI compiler
+-      # -r[0-9][0-9]* specifies the processor on the SGI compiler
+-      # -xarch=*, -xtarget=* enable 64-bit mode on the Sun compiler
+-      # +DA*, +DD* enable 64-bit mode on the HP compiler
+-      # -q* pass through compiler args for the IBM compiler
+-      # -m*, -t[45]*, -txscale* pass through architecture-specific
+-      # compiler args for GCC
+-      # -F/path gives path to uninstalled frameworks, gcc on darwin
+-      # -p, -pg, --coverage, -fprofile-* pass through profiling flag for GCC
+-      # @file GCC response files
+-      # -tp=* Portland pgcc target processor selection
++      # Flags to be passed through unchanged, with rationale:
++      # -64, -mips[0-9]      enable 64-bit mode for the SGI compiler
++      # -r[0-9][0-9]*        specify processor for the SGI compiler
++      # -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler
++      # +DA*, +DD*           enable 64-bit mode for the HP compiler
++      # -q*                  compiler args for the IBM compiler
++      # -m*, -t[45]*, -txscale* architecture-specific flags for GCC
++      # -F/path              path to uninstalled frameworks, gcc on darwin
++      # -p, -pg, --coverage, -fprofile-*  profiling flags for GCC
++      # @file                GCC response files
++      # -tp=*                Portland pgcc target processor selection
++      # --sysroot=*          for sysroot support
++      # -O*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization
+       -64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \
+-      -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*)
++      -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \
++      -O*|-flto*|-fwhopr*|-fuse-linker-plugin)
+         func_quote_for_eval "$arg"
+ 	arg="$func_quote_for_eval_result"
+         func_append compile_command " $arg"
+         func_append finalize_command " $arg"
+-        compiler_flags="$compiler_flags $arg"
++        func_append compiler_flags " $arg"
+         continue
+         ;;
+ 
+@@ -4997,7 +5864,7 @@ func_mode_link ()
+ 
+       *.$objext)
+ 	# A standard object.
+-	objs="$objs $arg"
++	func_append objs " $arg"
+ 	;;
+ 
+       *.lo)
+@@ -5028,7 +5895,7 @@ func_mode_link ()
+ 
+ 	    if test "$prev" = dlfiles; then
+ 	      if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
+-		dlfiles="$dlfiles $pic_object"
++		func_append dlfiles " $pic_object"
+ 		prev=
+ 		continue
+ 	      else
+@@ -5040,7 +5907,7 @@ func_mode_link ()
+ 	    # CHECK ME:  I think I busted this.  -Ossama
+ 	    if test "$prev" = dlprefiles; then
+ 	      # Preload the old-style object.
+-	      dlprefiles="$dlprefiles $pic_object"
++	      func_append dlprefiles " $pic_object"
+ 	      prev=
+ 	    fi
+ 
+@@ -5085,24 +5952,25 @@ func_mode_link ()
+ 
+       *.$libext)
+ 	# An archive.
+-	deplibs="$deplibs $arg"
+-	old_deplibs="$old_deplibs $arg"
++	func_append deplibs " $arg"
++	func_append old_deplibs " $arg"
+ 	continue
+ 	;;
+ 
+       *.la)
+ 	# A libtool-controlled library.
+ 
++	func_resolve_sysroot "$arg"
+ 	if test "$prev" = dlfiles; then
+ 	  # This library was specified with -dlopen.
+-	  dlfiles="$dlfiles $arg"
++	  func_append dlfiles " $func_resolve_sysroot_result"
+ 	  prev=
+ 	elif test "$prev" = dlprefiles; then
+ 	  # The library was specified with -dlpreopen.
+-	  dlprefiles="$dlprefiles $arg"
++	  func_append dlprefiles " $func_resolve_sysroot_result"
+ 	  prev=
+ 	else
+-	  deplibs="$deplibs $arg"
++	  func_append deplibs " $func_resolve_sysroot_result"
+ 	fi
+ 	continue
+ 	;;
+@@ -5127,7 +5995,7 @@ func_mode_link ()
+       func_fatal_help "the \`$prevarg' option requires an argument"
+ 
+     if test "$export_dynamic" = yes && test -n "$export_dynamic_flag_spec"; then
+-      eval "arg=\"$export_dynamic_flag_spec\""
++      eval arg=\"$export_dynamic_flag_spec\"
+       func_append compile_command " $arg"
+       func_append finalize_command " $arg"
+     fi
+@@ -5144,11 +6012,13 @@ func_mode_link ()
+     else
+       shlib_search_path=
+     fi
+-    eval "sys_lib_search_path=\"$sys_lib_search_path_spec\""
+-    eval "sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\""
++    eval sys_lib_search_path=\"$sys_lib_search_path_spec\"
++    eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\"
+ 
+     func_dirname "$output" "/" ""
+     output_objdir="$func_dirname_result$objdir"
++    func_to_tool_file "$output_objdir/"
++    tool_output_objdir=$func_to_tool_file_result
+     # Create the object directory.
+     func_mkdir_p "$output_objdir"
+ 
+@@ -5169,12 +6039,12 @@ func_mode_link ()
+     # Find all interdependent deplibs by searching for libraries
+     # that are linked more than once (e.g. -la -lb -la)
+     for deplib in $deplibs; do
+-      if $opt_duplicate_deps ; then
++      if $opt_preserve_dup_deps ; then
+ 	case "$libs " in
+-	*" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
++	*" $deplib "*) func_append specialdeplibs " $deplib" ;;
+ 	esac
+       fi
+-      libs="$libs $deplib"
++      func_append libs " $deplib"
+     done
+ 
+     if test "$linkmode" = lib; then
+@@ -5187,9 +6057,9 @@ func_mode_link ()
+       if $opt_duplicate_compiler_generated_deps; then
+ 	for pre_post_dep in $predeps $postdeps; do
+ 	  case "$pre_post_deps " in
+-	  *" $pre_post_dep "*) specialdeplibs="$specialdeplibs $pre_post_deps" ;;
++	  *" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;;
+ 	  esac
+-	  pre_post_deps="$pre_post_deps $pre_post_dep"
++	  func_append pre_post_deps " $pre_post_dep"
+ 	done
+       fi
+       pre_post_deps=
+@@ -5256,8 +6126,9 @@ func_mode_link ()
+ 	for lib in $dlprefiles; do
+ 	  # Ignore non-libtool-libs
+ 	  dependency_libs=
++	  func_resolve_sysroot "$lib"
+ 	  case $lib in
+-	  *.la)	func_source "$lib" ;;
++	  *.la)	func_source "$func_resolve_sysroot_result" ;;
+ 	  esac
+ 
+ 	  # Collect preopened libtool deplibs, except any this library
+@@ -5267,7 +6138,7 @@ func_mode_link ()
+             deplib_base=$func_basename_result
+ 	    case " $weak_libs " in
+ 	    *" $deplib_base "*) ;;
+-	    *) deplibs="$deplibs $deplib" ;;
++	    *) func_append deplibs " $deplib" ;;
+ 	    esac
+ 	  done
+ 	done
+@@ -5288,11 +6159,11 @@ func_mode_link ()
+ 	    compile_deplibs="$deplib $compile_deplibs"
+ 	    finalize_deplibs="$deplib $finalize_deplibs"
+ 	  else
+-	    compiler_flags="$compiler_flags $deplib"
++	    func_append compiler_flags " $deplib"
+ 	    if test "$linkmode" = lib ; then
+ 		case "$new_inherited_linker_flags " in
+ 		    *" $deplib "*) ;;
+-		    * ) new_inherited_linker_flags="$new_inherited_linker_flags $deplib" ;;
++		    * ) func_append new_inherited_linker_flags " $deplib" ;;
+ 		esac
+ 	    fi
+ 	  fi
+@@ -5377,7 +6248,7 @@ func_mode_link ()
+ 	    if test "$linkmode" = lib ; then
+ 		case "$new_inherited_linker_flags " in
+ 		    *" $deplib "*) ;;
+-		    * ) new_inherited_linker_flags="$new_inherited_linker_flags $deplib" ;;
++		    * ) func_append new_inherited_linker_flags " $deplib" ;;
+ 		esac
+ 	    fi
+ 	  fi
+@@ -5390,7 +6261,8 @@ func_mode_link ()
+ 	    test "$pass" = conv && continue
+ 	    newdependency_libs="$deplib $newdependency_libs"
+ 	    func_stripname '-L' '' "$deplib"
+-	    newlib_search_path="$newlib_search_path $func_stripname_result"
++	    func_resolve_sysroot "$func_stripname_result"
++	    func_append newlib_search_path " $func_resolve_sysroot_result"
+ 	    ;;
+ 	  prog)
+ 	    if test "$pass" = conv; then
+@@ -5404,7 +6276,8 @@ func_mode_link ()
+ 	      finalize_deplibs="$deplib $finalize_deplibs"
+ 	    fi
+ 	    func_stripname '-L' '' "$deplib"
+-	    newlib_search_path="$newlib_search_path $func_stripname_result"
++	    func_resolve_sysroot "$func_stripname_result"
++	    func_append newlib_search_path " $func_resolve_sysroot_result"
+ 	    ;;
+ 	  *)
+ 	    func_warning "\`-L' is ignored for archives/objects"
+@@ -5415,17 +6288,21 @@ func_mode_link ()
+ 	-R*)
+ 	  if test "$pass" = link; then
+ 	    func_stripname '-R' '' "$deplib"
+-	    dir=$func_stripname_result
++	    func_resolve_sysroot "$func_stripname_result"
++	    dir=$func_resolve_sysroot_result
+ 	    # Make sure the xrpath contains only unique directories.
+ 	    case "$xrpath " in
+ 	    *" $dir "*) ;;
+-	    *) xrpath="$xrpath $dir" ;;
++	    *) func_append xrpath " $dir" ;;
+ 	    esac
+ 	  fi
+ 	  deplibs="$deplib $deplibs"
+ 	  continue
+ 	  ;;
+-	*.la) lib="$deplib" ;;
++	*.la)
++	  func_resolve_sysroot "$deplib"
++	  lib=$func_resolve_sysroot_result
++	  ;;
+ 	*.$libext)
+ 	  if test "$pass" = conv; then
+ 	    deplibs="$deplib $deplibs"
+@@ -5488,11 +6365,11 @@ func_mode_link ()
+ 	    if test "$pass" = dlpreopen || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then
+ 	      # If there is no dlopen support or we're linking statically,
+ 	      # we need to preload.
+-	      newdlprefiles="$newdlprefiles $deplib"
++	      func_append newdlprefiles " $deplib"
+ 	      compile_deplibs="$deplib $compile_deplibs"
+ 	      finalize_deplibs="$deplib $finalize_deplibs"
+ 	    else
+-	      newdlfiles="$newdlfiles $deplib"
++	      func_append newdlfiles " $deplib"
+ 	    fi
+ 	  fi
+ 	  continue
+@@ -5538,7 +6415,7 @@ func_mode_link ()
+ 	  for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do
+ 	    case " $new_inherited_linker_flags " in
+ 	      *" $tmp_inherited_linker_flag "*) ;;
+-	      *) new_inherited_linker_flags="$new_inherited_linker_flags $tmp_inherited_linker_flag";;
++	      *) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";;
+ 	    esac
+ 	  done
+ 	fi
+@@ -5546,8 +6423,8 @@ func_mode_link ()
+ 	if test "$linkmode,$pass" = "lib,link" ||
+ 	   test "$linkmode,$pass" = "prog,scan" ||
+ 	   { test "$linkmode" != prog && test "$linkmode" != lib; }; then
+-	  test -n "$dlopen" && dlfiles="$dlfiles $dlopen"
+-	  test -n "$dlpreopen" && dlprefiles="$dlprefiles $dlpreopen"
++	  test -n "$dlopen" && func_append dlfiles " $dlopen"
++	  test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen"
+ 	fi
+ 
+ 	if test "$pass" = conv; then
+@@ -5558,20 +6435,20 @@ func_mode_link ()
+ 	      func_fatal_error "cannot find name of link library for \`$lib'"
+ 	    fi
+ 	    # It is a libtool convenience library, so add in its objects.
+-	    convenience="$convenience $ladir/$objdir/$old_library"
+-	    old_convenience="$old_convenience $ladir/$objdir/$old_library"
++	    func_append convenience " $ladir/$objdir/$old_library"
++	    func_append old_convenience " $ladir/$objdir/$old_library"
+ 	  elif test "$linkmode" != prog && test "$linkmode" != lib; then
+ 	    func_fatal_error "\`$lib' is not a convenience library"
+ 	  fi
+ 	  tmp_libs=
+ 	  for deplib in $dependency_libs; do
+ 	    deplibs="$deplib $deplibs"
+-	    if $opt_duplicate_deps ; then
++	    if $opt_preserve_dup_deps ; then
+ 	      case "$tmp_libs " in
+-	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
++	      *" $deplib "*) func_append specialdeplibs " $deplib" ;;
+ 	      esac
+ 	    fi
+-	    tmp_libs="$tmp_libs $deplib"
++	    func_append tmp_libs " $deplib"
+ 	  done
+ 	  continue
+ 	fi # $pass = conv
+@@ -5579,9 +6456,15 @@ func_mode_link ()
+ 
+ 	# Get the name of the library we link against.
+ 	linklib=
+-	for l in $old_library $library_names; do
+-	  linklib="$l"
+-	done
++	if test -n "$old_library" &&
++	   { test "$prefer_static_libs" = yes ||
++	     test "$prefer_static_libs,$installed" = "built,no"; }; then
++	  linklib=$old_library
++	else
++	  for l in $old_library $library_names; do
++	    linklib="$l"
++	  done
++	fi
+ 	if test -z "$linklib"; then
+ 	  func_fatal_error "cannot find name of link library for \`$lib'"
+ 	fi
+@@ -5598,9 +6481,9 @@ func_mode_link ()
+ 	    # statically, we need to preload.  We also need to preload any
+ 	    # dependent libraries so libltdl's deplib preloader doesn't
+ 	    # bomb out in the load deplibs phase.
+-	    dlprefiles="$dlprefiles $lib $dependency_libs"
++	    func_append dlprefiles " $lib $dependency_libs"
+ 	  else
+-	    newdlfiles="$newdlfiles $lib"
++	    func_append newdlfiles " $lib"
+ 	  fi
+ 	  continue
+ 	fi # $pass = dlopen
+@@ -5622,14 +6505,14 @@ func_mode_link ()
+ 
+ 	# Find the relevant object directory and library name.
+ 	if test "X$installed" = Xyes; then
+-	  if test ! -f "$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
++	  if test ! -f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
+ 	    func_warning "library \`$lib' was moved."
+ 	    dir="$ladir"
+ 	    absdir="$abs_ladir"
+ 	    libdir="$abs_ladir"
+ 	  else
+-	    dir="$libdir"
+-	    absdir="$libdir"
++	    dir="$lt_sysroot$libdir"
++	    absdir="$lt_sysroot$libdir"
+ 	  fi
+ 	  test "X$hardcode_automatic" = Xyes && avoidtemprpath=yes
+ 	else
+@@ -5637,12 +6520,12 @@ func_mode_link ()
+ 	    dir="$ladir"
+ 	    absdir="$abs_ladir"
+ 	    # Remove this search path later
+-	    notinst_path="$notinst_path $abs_ladir"
++	    func_append notinst_path " $abs_ladir"
+ 	  else
+ 	    dir="$ladir/$objdir"
+ 	    absdir="$abs_ladir/$objdir"
+ 	    # Remove this search path later
+-	    notinst_path="$notinst_path $abs_ladir"
++	    func_append notinst_path " $abs_ladir"
+ 	  fi
+ 	fi # $installed = yes
+ 	func_stripname 'lib' '.la' "$laname"
+@@ -5653,20 +6536,46 @@ func_mode_link ()
+ 	  if test -z "$libdir" && test "$linkmode" = prog; then
+ 	    func_fatal_error "only libraries may -dlpreopen a convenience library: \`$lib'"
+ 	  fi
+-	  # Prefer using a static library (so that no silly _DYNAMIC symbols
+-	  # are required to link).
+-	  if test -n "$old_library"; then
+-	    newdlprefiles="$newdlprefiles $dir/$old_library"
+-	    # Keep a list of preopened convenience libraries to check
+-	    # that they are being used correctly in the link pass.
+-	    test -z "$libdir" && \
+-		dlpreconveniencelibs="$dlpreconveniencelibs $dir/$old_library"
+-	  # Otherwise, use the dlname, so that lt_dlopen finds it.
+-	  elif test -n "$dlname"; then
+-	    newdlprefiles="$newdlprefiles $dir/$dlname"
+-	  else
+-	    newdlprefiles="$newdlprefiles $dir/$linklib"
+-	  fi
++	  case "$host" in
++	    # special handling for platforms with PE-DLLs.
++	    *cygwin* | *mingw* | *cegcc* )
++	      # Linker will automatically link against shared library if both
++	      # static and shared are present.  Therefore, ensure we extract
++	      # symbols from the import library if a shared library is present
++	      # (otherwise, the dlopen module name will be incorrect).  We do
++	      # this by putting the import library name into $newdlprefiles.
++	      # We recover the dlopen module name by 'saving' the la file
++	      # name in a special purpose variable, and (later) extracting the
++	      # dlname from the la file.
++	      if test -n "$dlname"; then
++	        func_tr_sh "$dir/$linklib"
++	        eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname"
++	        func_append newdlprefiles " $dir/$linklib"
++	      else
++	        func_append newdlprefiles " $dir/$old_library"
++	        # Keep a list of preopened convenience libraries to check
++	        # that they are being used correctly in the link pass.
++	        test -z "$libdir" && \
++	          func_append dlpreconveniencelibs " $dir/$old_library"
++	      fi
++	    ;;
++	    * )
++	      # Prefer using a static library (so that no silly _DYNAMIC symbols
++	      # are required to link).
++	      if test -n "$old_library"; then
++	        func_append newdlprefiles " $dir/$old_library"
++	        # Keep a list of preopened convenience libraries to check
++	        # that they are being used correctly in the link pass.
++	        test -z "$libdir" && \
++	          func_append dlpreconveniencelibs " $dir/$old_library"
++	      # Otherwise, use the dlname, so that lt_dlopen finds it.
++	      elif test -n "$dlname"; then
++	        func_append newdlprefiles " $dir/$dlname"
++	      else
++	        func_append newdlprefiles " $dir/$linklib"
++	      fi
++	    ;;
++	  esac
+ 	fi # $pass = dlpreopen
+ 
+ 	if test -z "$libdir"; then
+@@ -5684,7 +6593,7 @@ func_mode_link ()
+ 
+ 
+ 	if test "$linkmode" = prog && test "$pass" != link; then
+-	  newlib_search_path="$newlib_search_path $ladir"
++	  func_append newlib_search_path " $ladir"
+ 	  deplibs="$lib $deplibs"
+ 
+ 	  linkalldeplibs=no
+@@ -5697,7 +6606,8 @@ func_mode_link ()
+ 	  for deplib in $dependency_libs; do
+ 	    case $deplib in
+ 	    -L*) func_stripname '-L' '' "$deplib"
+-	         newlib_search_path="$newlib_search_path $func_stripname_result"
++	         func_resolve_sysroot "$func_stripname_result"
++	         func_append newlib_search_path " $func_resolve_sysroot_result"
+ 		 ;;
+ 	    esac
+ 	    # Need to link against all dependency_libs?
+@@ -5708,12 +6618,12 @@ func_mode_link ()
+ 	      # or/and link against static libraries
+ 	      newdependency_libs="$deplib $newdependency_libs"
+ 	    fi
+-	    if $opt_duplicate_deps ; then
++	    if $opt_preserve_dup_deps ; then
+ 	      case "$tmp_libs " in
+-	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
++	      *" $deplib "*) func_append specialdeplibs " $deplib" ;;
+ 	      esac
+ 	    fi
+-	    tmp_libs="$tmp_libs $deplib"
++	    func_append tmp_libs " $deplib"
+ 	  done # for deplib
+ 	  continue
+ 	fi # $linkmode = prog...
+@@ -5728,7 +6638,7 @@ func_mode_link ()
+ 	      # Make sure the rpath contains only unique directories.
+ 	      case "$temp_rpath:" in
+ 	      *"$absdir:"*) ;;
+-	      *) temp_rpath="$temp_rpath$absdir:" ;;
++	      *) func_append temp_rpath "$absdir:" ;;
+ 	      esac
+ 	    fi
+ 
+@@ -5740,7 +6650,7 @@ func_mode_link ()
+ 	    *)
+ 	      case "$compile_rpath " in
+ 	      *" $absdir "*) ;;
+-	      *) compile_rpath="$compile_rpath $absdir"
++	      *) func_append compile_rpath " $absdir" ;;
+ 	      esac
+ 	      ;;
+ 	    esac
+@@ -5749,7 +6659,7 @@ func_mode_link ()
+ 	    *)
+ 	      case "$finalize_rpath " in
+ 	      *" $libdir "*) ;;
+-	      *) finalize_rpath="$finalize_rpath $libdir"
++	      *) func_append finalize_rpath " $libdir" ;;
+ 	      esac
+ 	      ;;
+ 	    esac
+@@ -5774,12 +6684,12 @@ func_mode_link ()
+ 	  case $host in
+ 	  *cygwin* | *mingw* | *cegcc*)
+ 	      # No point in relinking DLLs because paths are not encoded
+-	      notinst_deplibs="$notinst_deplibs $lib"
++	      func_append notinst_deplibs " $lib"
+ 	      need_relink=no
+ 	    ;;
+ 	  *)
+ 	    if test "$installed" = no; then
+-	      notinst_deplibs="$notinst_deplibs $lib"
++	      func_append notinst_deplibs " $lib"
+ 	      need_relink=yes
+ 	    fi
+ 	    ;;
+@@ -5814,7 +6724,7 @@ func_mode_link ()
+ 	    *)
+ 	      case "$compile_rpath " in
+ 	      *" $absdir "*) ;;
+-	      *) compile_rpath="$compile_rpath $absdir"
++	      *) func_append compile_rpath " $absdir" ;;
+ 	      esac
+ 	      ;;
+ 	    esac
+@@ -5823,7 +6733,7 @@ func_mode_link ()
+ 	    *)
+ 	      case "$finalize_rpath " in
+ 	      *" $libdir "*) ;;
+-	      *) finalize_rpath="$finalize_rpath $libdir"
++	      *) func_append finalize_rpath " $libdir" ;;
+ 	      esac
+ 	      ;;
+ 	    esac
+@@ -5835,7 +6745,7 @@ func_mode_link ()
+ 	    shift
+ 	    realname="$1"
+ 	    shift
+-	    eval "libname=\"$libname_spec\""
++	    libname=`eval "\\$ECHO \"$libname_spec\""`
+ 	    # use dlname if we got it. it's perfectly good, no?
+ 	    if test -n "$dlname"; then
+ 	      soname="$dlname"
+@@ -5848,7 +6758,7 @@ func_mode_link ()
+ 		versuffix="-$major"
+ 		;;
+ 	      esac
+-	      eval "soname=\"$soname_spec\""
++	      eval soname=\"$soname_spec\"
+ 	    else
+ 	      soname="$realname"
+ 	    fi
+@@ -5877,7 +6787,7 @@ func_mode_link ()
+ 	    linklib=$newlib
+ 	  fi # test -n "$old_archive_from_expsyms_cmds"
+ 
+-	  if test "$linkmode" = prog || test "$mode" != relink; then
++	  if test "$linkmode" = prog || test "$opt_mode" != relink; then
+ 	    add_shlibpath=
+ 	    add_dir=
+ 	    add=
+@@ -5933,7 +6843,7 @@ func_mode_link ()
+ 		if test -n "$inst_prefix_dir"; then
+ 		  case $libdir in
+ 		    [\\/]*)
+-		      add_dir="$add_dir -L$inst_prefix_dir$libdir"
++		      func_append add_dir " -L$inst_prefix_dir$libdir"
+ 		      ;;
+ 		  esac
+ 		fi
+@@ -5955,7 +6865,7 @@ func_mode_link ()
+ 	    if test -n "$add_shlibpath"; then
+ 	      case :$compile_shlibpath: in
+ 	      *":$add_shlibpath:"*) ;;
+-	      *) compile_shlibpath="$compile_shlibpath$add_shlibpath:" ;;
++	      *) func_append compile_shlibpath "$add_shlibpath:" ;;
+ 	      esac
+ 	    fi
+ 	    if test "$linkmode" = prog; then
+@@ -5969,13 +6879,13 @@ func_mode_link ()
+ 		 test "$hardcode_shlibpath_var" = yes; then
+ 		case :$finalize_shlibpath: in
+ 		*":$libdir:"*) ;;
+-		*) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
++		*) func_append finalize_shlibpath "$libdir:" ;;
+ 		esac
+ 	      fi
+ 	    fi
+ 	  fi
+ 
+-	  if test "$linkmode" = prog || test "$mode" = relink; then
++	  if test "$linkmode" = prog || test "$opt_mode" = relink; then
+ 	    add_shlibpath=
+ 	    add_dir=
+ 	    add=
+@@ -5989,7 +6899,7 @@ func_mode_link ()
+ 	    elif test "$hardcode_shlibpath_var" = yes; then
+ 	      case :$finalize_shlibpath: in
+ 	      *":$libdir:"*) ;;
+-	      *) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
++	      *) func_append finalize_shlibpath "$libdir:" ;;
+ 	      esac
+ 	      add="-l$name"
+ 	    elif test "$hardcode_automatic" = yes; then
+@@ -6001,12 +6911,12 @@ func_mode_link ()
+ 	      fi
+ 	    else
+ 	      # We cannot seem to hardcode it, guess we'll fake it.
+-	      add_dir="-L$libdir"
++	      add_dir="-L$lt_sysroot$libdir"
+ 	      # Try looking first in the location we're being installed to.
+ 	      if test -n "$inst_prefix_dir"; then
+ 		case $libdir in
+ 		  [\\/]*)
+-		    add_dir="$add_dir -L$inst_prefix_dir$libdir"
++		    func_append add_dir " -L$inst_prefix_dir$libdir"
+ 		    ;;
+ 		esac
+ 	      fi
+@@ -6083,27 +6993,33 @@ func_mode_link ()
+ 	           temp_xrpath=$func_stripname_result
+ 		   case " $xrpath " in
+ 		   *" $temp_xrpath "*) ;;
+-		   *) xrpath="$xrpath $temp_xrpath";;
++		   *) func_append xrpath " $temp_xrpath";;
+ 		   esac;;
+-	      *) temp_deplibs="$temp_deplibs $libdir";;
++	      *) func_append temp_deplibs " $libdir";;
+ 	      esac
+ 	    done
+ 	    dependency_libs="$temp_deplibs"
+ 	  fi
+ 
+-	  newlib_search_path="$newlib_search_path $absdir"
++	  func_append newlib_search_path " $absdir"
+ 	  # Link against this library
+ 	  test "$link_static" = no && newdependency_libs="$abs_ladir/$laname $newdependency_libs"
+ 	  # ... and its dependency_libs
+ 	  tmp_libs=
+ 	  for deplib in $dependency_libs; do
+ 	    newdependency_libs="$deplib $newdependency_libs"
+-	    if $opt_duplicate_deps ; then
++	    case $deplib in
++              -L*) func_stripname '-L' '' "$deplib"
++                   func_resolve_sysroot "$func_stripname_result";;
++              *) func_resolve_sysroot "$deplib" ;;
++            esac
++	    if $opt_preserve_dup_deps ; then
+ 	      case "$tmp_libs " in
+-	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
++	      *" $func_resolve_sysroot_result "*)
++                func_append specialdeplibs " $func_resolve_sysroot_result" ;;
+ 	      esac
+ 	    fi
+-	    tmp_libs="$tmp_libs $deplib"
++	    func_append tmp_libs " $func_resolve_sysroot_result"
+ 	  done
+ 
+ 	  if test "$link_all_deplibs" != no; then
+@@ -6113,8 +7029,10 @@ func_mode_link ()
+ 	      case $deplib in
+ 	      -L*) path="$deplib" ;;
+ 	      *.la)
++	        func_resolve_sysroot "$deplib"
++	        deplib=$func_resolve_sysroot_result
+ 	        func_dirname "$deplib" "" "."
+-		dir="$func_dirname_result"
++		dir=$func_dirname_result
+ 		# We need an absolute path.
+ 		case $dir in
+ 		[\\/]* | [A-Za-z]:[\\/]*) absdir="$dir" ;;
+@@ -6130,7 +7048,7 @@ func_mode_link ()
+ 		case $host in
+ 		*-*-darwin*)
+ 		  depdepl=
+-		  deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
++		  eval deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
+ 		  if test -n "$deplibrary_names" ; then
+ 		    for tmp in $deplibrary_names ; do
+ 		      depdepl=$tmp
+@@ -6141,8 +7059,8 @@ func_mode_link ()
+                       if test -z "$darwin_install_name"; then
+                           darwin_install_name=`${OTOOL64} -L $depdepl  | awk '{if (NR == 2) {print $1;exit}}'`
+                       fi
+-		      compiler_flags="$compiler_flags ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}"
+-		      linker_flags="$linker_flags -dylib_file ${darwin_install_name}:${depdepl}"
++		      func_append compiler_flags " ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}"
++		      func_append linker_flags " -dylib_file ${darwin_install_name}:${depdepl}"
+ 		      path=
+ 		    fi
+ 		  fi
+@@ -6152,7 +7070,7 @@ func_mode_link ()
+ 		  ;;
+ 		esac
+ 		else
+-		  libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
++		  eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
+ 		  test -z "$libdir" && \
+ 		    func_fatal_error "\`$deplib' is not a valid libtool archive"
+ 		  test "$absdir" != "$libdir" && \
+@@ -6192,7 +7110,7 @@ func_mode_link ()
+ 	  for dir in $newlib_search_path; do
+ 	    case "$lib_search_path " in
+ 	    *" $dir "*) ;;
+-	    *) lib_search_path="$lib_search_path $dir" ;;
++	    *) func_append lib_search_path " $dir" ;;
+ 	    esac
+ 	  done
+ 	  newlib_search_path=
+@@ -6205,7 +7123,7 @@ func_mode_link ()
+ 	fi
+ 	for var in $vars dependency_libs; do
+ 	  # Add libraries to $var in reverse order
+-	  eval tmp_libs=\$$var
++	  eval tmp_libs=\"\$$var\"
+ 	  new_libs=
+ 	  for deplib in $tmp_libs; do
+ 	    # FIXME: Pedantically, this is the right thing to do, so
+@@ -6250,13 +7168,13 @@ func_mode_link ()
+ 	    -L*)
+ 	      case " $tmp_libs " in
+ 	      *" $deplib "*) ;;
+-	      *) tmp_libs="$tmp_libs $deplib" ;;
++	      *) func_append tmp_libs " $deplib" ;;
+ 	      esac
+ 	      ;;
+-	    *) tmp_libs="$tmp_libs $deplib" ;;
++	    *) func_append tmp_libs " $deplib" ;;
+ 	    esac
+ 	  done
+-	  eval $var=\$tmp_libs
++	  eval $var=\"$tmp_libs\"
+ 	done # for var
+       fi
+       # Last step: remove runtime libs from dependency_libs
+@@ -6269,7 +7187,7 @@ func_mode_link ()
+ 	  ;;
+ 	esac
+ 	if test -n "$i" ; then
+-	  tmp_libs="$tmp_libs $i"
++	  func_append tmp_libs " $i"
+ 	fi
+       done
+       dependency_libs=$tmp_libs
+@@ -6310,7 +7228,7 @@ func_mode_link ()
+       # Now set the variables for building old libraries.
+       build_libtool_libs=no
+       oldlibs="$output"
+-      objs="$objs$old_deplibs"
++      func_append objs "$old_deplibs"
+       ;;
+ 
+     lib)
+@@ -6319,8 +7237,8 @@ func_mode_link ()
+       lib*)
+ 	func_stripname 'lib' '.la' "$outputname"
+ 	name=$func_stripname_result
+-	eval "shared_ext=\"$shrext_cmds\""
+-	eval "libname=\"$libname_spec\""
++	eval shared_ext=\"$shrext_cmds\"
++	eval libname=\"$libname_spec\"
+ 	;;
+       *)
+ 	test "$module" = no && \
+@@ -6330,8 +7248,8 @@ func_mode_link ()
+ 	  # Add the "lib" prefix for modules if required
+ 	  func_stripname '' '.la' "$outputname"
+ 	  name=$func_stripname_result
+-	  eval "shared_ext=\"$shrext_cmds\""
+-	  eval "libname=\"$libname_spec\""
++	  eval shared_ext=\"$shrext_cmds\"
++	  eval libname=\"$libname_spec\"
+ 	else
+ 	  func_stripname '' '.la' "$outputname"
+ 	  libname=$func_stripname_result
+@@ -6346,7 +7264,7 @@ func_mode_link ()
+ 	  echo
+ 	  $ECHO "*** Warning: Linking the shared library $output against the non-libtool"
+ 	  $ECHO "*** objects $objs is not portable!"
+-	  libobjs="$libobjs $objs"
++	  func_append libobjs " $objs"
+ 	fi
+       fi
+ 
+@@ -6544,7 +7462,7 @@ func_mode_link ()
+ 	  done
+ 
+ 	  # Make executables depend on our current version.
+-	  verstring="$verstring:${current}.0"
++	  func_append verstring ":${current}.0"
+ 	  ;;
+ 
+ 	qnx)
+@@ -6612,10 +7530,10 @@ func_mode_link ()
+       fi
+ 
+       func_generate_dlsyms "$libname" "$libname" "yes"
+-      libobjs="$libobjs $symfileobj"
++      func_append libobjs " $symfileobj"
+       test "X$libobjs" = "X " && libobjs=
+ 
+-      if test "$mode" != relink; then
++      if test "$opt_mode" != relink; then
+ 	# Remove our outputs, but don't remove object files since they
+ 	# may have been created when compiling PIC objects.
+ 	removelist=
+@@ -6631,7 +7549,7 @@ func_mode_link ()
+ 		   continue
+ 		 fi
+ 	       fi
+-	       removelist="$removelist $p"
++	       func_append removelist " $p"
+ 	       ;;
+ 	    *) ;;
+ 	  esac
+@@ -6642,7 +7560,7 @@ func_mode_link ()
+ 
+       # Now set the variables for building old libraries.
+       if test "$build_old_libs" = yes && test "$build_libtool_libs" != convenience ; then
+-	oldlibs="$oldlibs $output_objdir/$libname.$libext"
++	func_append oldlibs " $output_objdir/$libname.$libext"
+ 
+ 	# Transform .lo files to .o files.
+ 	oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; $lo2o" | $NL2SP`
+@@ -6659,10 +7577,11 @@ func_mode_link ()
+ 	# If the user specified any rpath flags, then add them.
+ 	temp_xrpath=
+ 	for libdir in $xrpath; do
+-	  temp_xrpath="$temp_xrpath -R$libdir"
++	  func_replace_sysroot "$libdir"
++	  func_append temp_xrpath " -R$func_replace_sysroot_result"
+ 	  case "$finalize_rpath " in
+ 	  *" $libdir "*) ;;
+-	  *) finalize_rpath="$finalize_rpath $libdir" ;;
++	  *) func_append finalize_rpath " $libdir" ;;
+ 	  esac
+ 	done
+ 	if test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes; then
+@@ -6676,7 +7595,7 @@ func_mode_link ()
+       for lib in $old_dlfiles; do
+ 	case " $dlprefiles $dlfiles " in
+ 	*" $lib "*) ;;
+-	*) dlfiles="$dlfiles $lib" ;;
++	*) func_append dlfiles " $lib" ;;
+ 	esac
+       done
+ 
+@@ -6686,7 +7605,7 @@ func_mode_link ()
+       for lib in $old_dlprefiles; do
+ 	case "$dlprefiles " in
+ 	*" $lib "*) ;;
+-	*) dlprefiles="$dlprefiles $lib" ;;
++	*) func_append dlprefiles " $lib" ;;
+ 	esac
+       done
+ 
+@@ -6698,7 +7617,7 @@ func_mode_link ()
+ 	    ;;
+ 	  *-*-rhapsody* | *-*-darwin1.[012])
+ 	    # Rhapsody C library is in the System framework
+-	    deplibs="$deplibs System.ltframework"
++	    func_append deplibs " System.ltframework"
+ 	    ;;
+ 	  *-*-netbsd*)
+ 	    # Don't link with libc until the a.out ld.so is fixed.
+@@ -6715,7 +7634,7 @@ func_mode_link ()
+ 	  *)
+ 	    # Add libc to deplibs on all other systems if necessary.
+ 	    if test "$build_libtool_need_lc" = "yes"; then
+-	      deplibs="$deplibs -lc"
++	      func_append deplibs " -lc"
+ 	    fi
+ 	    ;;
+ 	  esac
+@@ -6764,18 +7683,18 @@ EOF
+ 		if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ 		  case " $predeps $postdeps " in
+ 		  *" $i "*)
+-		    newdeplibs="$newdeplibs $i"
++		    func_append newdeplibs " $i"
+ 		    i=""
+ 		    ;;
+ 		  esac
+ 		fi
+ 		if test -n "$i" ; then
+-		  eval "libname=\"$libname_spec\""
+-		  eval "deplib_matches=\"$library_names_spec\""
++		  libname=`eval "\\$ECHO \"$libname_spec\""`
++		  deplib_matches=`eval "\\$ECHO \"$library_names_spec\""`
+ 		  set dummy $deplib_matches; shift
+ 		  deplib_match=$1
+ 		  if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
+-		    newdeplibs="$newdeplibs $i"
++		    func_append newdeplibs " $i"
+ 		  else
+ 		    droppeddeps=yes
+ 		    echo
+@@ -6789,7 +7708,7 @@ EOF
+ 		fi
+ 		;;
+ 	      *)
+-		newdeplibs="$newdeplibs $i"
++		func_append newdeplibs " $i"
+ 		;;
+ 	      esac
+ 	    done
+@@ -6807,18 +7726,18 @@ EOF
+ 		  if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ 		    case " $predeps $postdeps " in
+ 		    *" $i "*)
+-		      newdeplibs="$newdeplibs $i"
++		      func_append newdeplibs " $i"
+ 		      i=""
+ 		      ;;
+ 		    esac
+ 		  fi
+ 		  if test -n "$i" ; then
+-		    eval "libname=\"$libname_spec\""
+-		    eval "deplib_matches=\"$library_names_spec\""
++		    libname=`eval "\\$ECHO \"$libname_spec\""`
++		    deplib_matches=`eval "\\$ECHO \"$library_names_spec\""`
+ 		    set dummy $deplib_matches; shift
+ 		    deplib_match=$1
+ 		    if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
+-		      newdeplibs="$newdeplibs $i"
++		      func_append newdeplibs " $i"
+ 		    else
+ 		      droppeddeps=yes
+ 		      echo
+@@ -6840,7 +7759,7 @@ EOF
+ 		fi
+ 		;;
+ 	      *)
+-		newdeplibs="$newdeplibs $i"
++		func_append newdeplibs " $i"
+ 		;;
+ 	      esac
+ 	    done
+@@ -6857,15 +7776,27 @@ EOF
+ 	      if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ 		case " $predeps $postdeps " in
+ 		*" $a_deplib "*)
+-		  newdeplibs="$newdeplibs $a_deplib"
++		  func_append newdeplibs " $a_deplib"
+ 		  a_deplib=""
+ 		  ;;
+ 		esac
+ 	      fi
+ 	      if test -n "$a_deplib" ; then
+-		eval "libname=\"$libname_spec\""
++		libname=`eval "\\$ECHO \"$libname_spec\""`
++		if test -n "$file_magic_glob"; then
++		  libnameglob=`func_echo_all "$libname" | $SED -e $file_magic_glob`
++		else
++		  libnameglob=$libname
++		fi
++		test "$want_nocaseglob" = yes && nocaseglob=`shopt -p nocaseglob`
+ 		for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
+-		  potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
++		  if test "$want_nocaseglob" = yes; then
++		    shopt -s nocaseglob
++		    potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
++		    $nocaseglob
++		  else
++		    potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
++		  fi
+ 		  for potent_lib in $potential_libs; do
+ 		      # Follow soft links.
+ 		      if ls -lLd "$potent_lib" 2>/dev/null |
+@@ -6885,10 +7816,10 @@ EOF
+ 			*) potlib=`$ECHO "$potlib" | $SED 's,[^/]*$,,'`"$potliblink";;
+ 			esac
+ 		      done
+-		      if eval "$file_magic_cmd \"\$potlib\"" 2>/dev/null |
++		      if eval $file_magic_cmd \"\$potlib\" 2>/dev/null |
+ 			 $SED -e 10q |
+ 			 $EGREP "$file_magic_regex" > /dev/null; then
+-			newdeplibs="$newdeplibs $a_deplib"
++			func_append newdeplibs " $a_deplib"
+ 			a_deplib=""
+ 			break 2
+ 		      fi
+@@ -6913,7 +7844,7 @@ EOF
+ 	      ;;
+ 	    *)
+ 	      # Add a -L argument.
+-	      newdeplibs="$newdeplibs $a_deplib"
++	      func_append newdeplibs " $a_deplib"
+ 	      ;;
+ 	    esac
+ 	  done # Gone through all deplibs.
+@@ -6929,20 +7860,20 @@ EOF
+ 	      if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ 		case " $predeps $postdeps " in
+ 		*" $a_deplib "*)
+-		  newdeplibs="$newdeplibs $a_deplib"
++		  func_append newdeplibs " $a_deplib"
+ 		  a_deplib=""
+ 		  ;;
+ 		esac
+ 	      fi
+ 	      if test -n "$a_deplib" ; then
+-		eval "libname=\"$libname_spec\""
++		libname=`eval "\\$ECHO \"$libname_spec\""`
+ 		for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
+ 		  potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
+ 		  for potent_lib in $potential_libs; do
+ 		    potlib="$potent_lib" # see symlink-check above in file_magic test
+ 		    if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \
+ 		       $EGREP "$match_pattern_regex" > /dev/null; then
+-		      newdeplibs="$newdeplibs $a_deplib"
++		      func_append newdeplibs " $a_deplib"
+ 		      a_deplib=""
+ 		      break 2
+ 		    fi
+@@ -6967,7 +7898,7 @@ EOF
+ 	      ;;
+ 	    *)
+ 	      # Add a -L argument.
+-	      newdeplibs="$newdeplibs $a_deplib"
++	      func_append newdeplibs " $a_deplib"
+ 	      ;;
+ 	    esac
+ 	  done # Gone through all deplibs.
+@@ -7071,7 +8002,7 @@ EOF
+ 	*)
+ 	  case " $deplibs " in
+ 	  *" -L$path/$objdir "*)
+-	    new_libs="$new_libs -L$path/$objdir" ;;
++	    func_append new_libs " -L$path/$objdir" ;;
+ 	  esac
+ 	  ;;
+ 	esac
+@@ -7081,10 +8012,10 @@ EOF
+ 	-L*)
+ 	  case " $new_libs " in
+ 	  *" $deplib "*) ;;
+-	  *) new_libs="$new_libs $deplib" ;;
++	  *) func_append new_libs " $deplib" ;;
+ 	  esac
+ 	  ;;
+-	*) new_libs="$new_libs $deplib" ;;
++	*) func_append new_libs " $deplib" ;;
+ 	esac
+       done
+       deplibs="$new_libs"
+@@ -7101,10 +8032,12 @@ EOF
+ 	  hardcode_libdirs=
+ 	  dep_rpath=
+ 	  rpath="$finalize_rpath"
+-	  test "$mode" != relink && rpath="$compile_rpath$rpath"
++	  test "$opt_mode" != relink && rpath="$compile_rpath$rpath"
+ 	  for libdir in $rpath; do
+ 	    if test -n "$hardcode_libdir_flag_spec"; then
+ 	      if test -n "$hardcode_libdir_separator"; then
++		func_replace_sysroot "$libdir"
++		libdir=$func_replace_sysroot_result
+ 		if test -z "$hardcode_libdirs"; then
+ 		  hardcode_libdirs="$libdir"
+ 		else
+@@ -7113,18 +8046,18 @@ EOF
+ 		  *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
+ 		    ;;
+ 		  *)
+-		    hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
++		    func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
+ 		    ;;
+ 		  esac
+ 		fi
+ 	      else
+-		eval "flag=\"$hardcode_libdir_flag_spec\""
+-		dep_rpath="$dep_rpath $flag"
++		eval flag=\"$hardcode_libdir_flag_spec\"
++		func_append dep_rpath " $flag"
+ 	      fi
+ 	    elif test -n "$runpath_var"; then
+ 	      case "$perm_rpath " in
+ 	      *" $libdir "*) ;;
+-	      *) perm_rpath="$perm_rpath $libdir" ;;
++	      *) func_append perm_rpath " $libdir" ;;
+ 	      esac
+ 	    fi
+ 	  done
+@@ -7133,40 +8066,38 @@ EOF
+ 	     test -n "$hardcode_libdirs"; then
+ 	    libdir="$hardcode_libdirs"
+ 	    if test -n "$hardcode_libdir_flag_spec_ld"; then
+-	      eval "dep_rpath=\"$hardcode_libdir_flag_spec_ld\""
++	      eval dep_rpath=\"$hardcode_libdir_flag_spec_ld\"
+ 	    else
+-	      eval "dep_rpath=\"$hardcode_libdir_flag_spec\""
++	      eval dep_rpath=\"$hardcode_libdir_flag_spec\"
+ 	    fi
+ 	  fi
+ 	  if test -n "$runpath_var" && test -n "$perm_rpath"; then
+ 	    # We should set the runpath_var.
+ 	    rpath=
+ 	    for dir in $perm_rpath; do
+-	      rpath="$rpath$dir:"
++	      func_append rpath "$dir:"
+ 	    done
+-	    eval $runpath_var=\$rpath\$$runpath_var
+-	    export $runpath_var
++	    eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var"
+ 	  fi
+ 	  test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs"
+ 	fi
+ 
+ 	shlibpath="$finalize_shlibpath"
+-	test "$mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
++	test "$opt_mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
+ 	if test -n "$shlibpath"; then
+-	  eval $shlibpath_var=\$shlibpath\$$shlibpath_var
+-	  export $shlibpath_var
++	  eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var"
+ 	fi
+ 
+ 	# Get the real and link names of the library.
+-	eval "shared_ext=\"$shrext_cmds\""
+-	eval "library_names=\"$library_names_spec\""
++	eval shared_ext=\"$shrext_cmds\"
++	eval library_names=\"$library_names_spec\"
+ 	set dummy $library_names
+ 	shift
+ 	realname="$1"
+ 	shift
+ 
+ 	if test -n "$soname_spec"; then
+-	  eval "soname=\"$soname_spec\""
++	  eval soname=\"$soname_spec\"
+ 	else
+ 	  soname="$realname"
+ 	fi
+@@ -7178,7 +8109,7 @@ EOF
+ 	linknames=
+ 	for link
+ 	do
+-	  linknames="$linknames $link"
++	  func_append linknames " $link"
+ 	done
+ 
+ 	# Use standard objects if they are pic
+@@ -7189,7 +8120,7 @@ EOF
+ 	if test -n "$export_symbols" && test -n "$include_expsyms"; then
+ 	  $opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp"
+ 	  export_symbols="$output_objdir/$libname.uexp"
+-	  delfiles="$delfiles $export_symbols"
++	  func_append delfiles " $export_symbols"
+ 	fi
+ 
+ 	orig_export_symbols=
+@@ -7220,13 +8151,45 @@ EOF
+ 	    $opt_dry_run || $RM $export_symbols
+ 	    cmds=$export_symbols_cmds
+ 	    save_ifs="$IFS"; IFS='~'
+-	    for cmd in $cmds; do
++	    for cmd1 in $cmds; do
+ 	      IFS="$save_ifs"
+-	      eval "cmd=\"$cmd\""
+-	      func_len " $cmd"
+-	      len=$func_len_result
+-	      if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
++	      # Take the normal branch if the nm_file_list_spec branch
++	      # doesn't work or if tool conversion is not needed.
++	      case $nm_file_list_spec~$to_tool_file_cmd in
++		*~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*)
++		  try_normal_branch=yes
++		  eval cmd=\"$cmd1\"
++		  func_len " $cmd"
++		  len=$func_len_result
++		  ;;
++		*)
++		  try_normal_branch=no
++		  ;;
++	      esac
++	      if test "$try_normal_branch" = yes \
++		 && { test "$len" -lt "$max_cmd_len" \
++		      || test "$max_cmd_len" -le -1; }
++	      then
++		func_show_eval "$cmd" 'exit $?'
++		skipped_export=false
++	      elif test -n "$nm_file_list_spec"; then
++		func_basename "$output"
++		output_la=$func_basename_result
++		save_libobjs=$libobjs
++		save_output=$output
++		output=${output_objdir}/${output_la}.nm
++		func_to_tool_file "$output"
++		libobjs=$nm_file_list_spec$func_to_tool_file_result
++		func_append delfiles " $output"
++		func_verbose "creating $NM input file list: $output"
++		for obj in $save_libobjs; do
++		  func_to_tool_file "$obj"
++		  $ECHO "$func_to_tool_file_result"
++		done > "$output"
++		eval cmd=\"$cmd1\"
+ 		func_show_eval "$cmd" 'exit $?'
++		output=$save_output
++		libobjs=$save_libobjs
+ 		skipped_export=false
+ 	      else
+ 		# The command line is too long to execute in one step.
+@@ -7248,7 +8211,7 @@ EOF
+ 	if test -n "$export_symbols" && test -n "$include_expsyms"; then
+ 	  tmp_export_symbols="$export_symbols"
+ 	  test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
+-	  $opt_dry_run || $ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"
++	  $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
+ 	fi
+ 
+ 	if test "X$skipped_export" != "X:" && test -n "$orig_export_symbols"; then
+@@ -7260,7 +8223,7 @@ EOF
+ 	  # global variables. join(1) would be nice here, but unfortunately
+ 	  # isn't a blessed tool.
+ 	  $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
+-	  delfiles="$delfiles $export_symbols $output_objdir/$libname.filter"
++	  func_append delfiles " $export_symbols $output_objdir/$libname.filter"
+ 	  export_symbols=$output_objdir/$libname.def
+ 	  $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
+ 	fi
+@@ -7270,7 +8233,7 @@ EOF
+ 	  case " $convenience " in
+ 	  *" $test_deplib "*) ;;
+ 	  *)
+-	    tmp_deplibs="$tmp_deplibs $test_deplib"
++	    func_append tmp_deplibs " $test_deplib"
+ 	    ;;
+ 	  esac
+ 	done
+@@ -7286,43 +8249,43 @@ EOF
+ 	  fi
+ 	  if test -n "$whole_archive_flag_spec"; then
+ 	    save_libobjs=$libobjs
+-	    eval "libobjs=\"\$libobjs $whole_archive_flag_spec\""
++	    eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
+ 	    test "X$libobjs" = "X " && libobjs=
+ 	  else
+ 	    gentop="$output_objdir/${outputname}x"
+-	    generated="$generated $gentop"
++	    func_append generated " $gentop"
+ 
+ 	    func_extract_archives $gentop $convenience
+-	    libobjs="$libobjs $func_extract_archives_result"
++	    func_append libobjs " $func_extract_archives_result"
+ 	    test "X$libobjs" = "X " && libobjs=
+ 	  fi
+ 	fi
+ 
+ 	if test "$thread_safe" = yes && test -n "$thread_safe_flag_spec"; then
+-	  eval "flag=\"$thread_safe_flag_spec\""
+-	  linker_flags="$linker_flags $flag"
++	  eval flag=\"$thread_safe_flag_spec\"
++	  func_append linker_flags " $flag"
+ 	fi
+ 
+ 	# Make a backup of the uninstalled library when relinking
+-	if test "$mode" = relink; then
+-	  $opt_dry_run || (cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U) || exit $?
++	if test "$opt_mode" = relink; then
++	  $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $?
+ 	fi
+ 
+ 	# Do each of the archive commands.
+ 	if test "$module" = yes && test -n "$module_cmds" ; then
+ 	  if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
+-	    eval "test_cmds=\"$module_expsym_cmds\""
++	    eval test_cmds=\"$module_expsym_cmds\"
+ 	    cmds=$module_expsym_cmds
+ 	  else
+-	    eval "test_cmds=\"$module_cmds\""
++	    eval test_cmds=\"$module_cmds\"
+ 	    cmds=$module_cmds
+ 	  fi
+ 	else
+ 	  if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
+-	    eval "test_cmds=\"$archive_expsym_cmds\""
++	    eval test_cmds=\"$archive_expsym_cmds\"
+ 	    cmds=$archive_expsym_cmds
+ 	  else
+-	    eval "test_cmds=\"$archive_cmds\""
++	    eval test_cmds=\"$archive_cmds\"
+ 	    cmds=$archive_cmds
+ 	  fi
+ 	fi
+@@ -7366,10 +8329,13 @@ EOF
+ 	    echo 'INPUT (' > $output
+ 	    for obj in $save_libobjs
+ 	    do
+-	      $ECHO "$obj" >> $output
++	      func_to_tool_file "$obj"
++	      $ECHO "$func_to_tool_file_result" >> $output
+ 	    done
+ 	    echo ')' >> $output
+-	    delfiles="$delfiles $output"
++	    func_append delfiles " $output"
++	    func_to_tool_file "$output"
++	    output=$func_to_tool_file_result
+ 	  elif test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "X$file_list_spec" != X; then
+ 	    output=${output_objdir}/${output_la}.lnk
+ 	    func_verbose "creating linker input file list: $output"
+@@ -7383,15 +8349,17 @@ EOF
+ 	    fi
+ 	    for obj
+ 	    do
+-	      $ECHO "$obj" >> $output
++	      func_to_tool_file "$obj"
++	      $ECHO "$func_to_tool_file_result" >> $output
+ 	    done
+-	    delfiles="$delfiles $output"
+-	    output=$firstobj\"$file_list_spec$output\"
++	    func_append delfiles " $output"
++	    func_to_tool_file "$output"
++	    output=$firstobj\"$file_list_spec$func_to_tool_file_result\"
+ 	  else
+ 	    if test -n "$save_libobjs"; then
+ 	      func_verbose "creating reloadable object files..."
+ 	      output=$output_objdir/$output_la-${k}.$objext
+-	      eval "test_cmds=\"$reload_cmds\""
++	      eval test_cmds=\"$reload_cmds\"
+ 	      func_len " $test_cmds"
+ 	      len0=$func_len_result
+ 	      len=$len0
+@@ -7411,12 +8379,12 @@ EOF
+ 		  if test "$k" -eq 1 ; then
+ 		    # The first file doesn't have a previous command to add.
+ 		    reload_objs=$objlist
+-		    eval "concat_cmds=\"$reload_cmds\""
++		    eval concat_cmds=\"$reload_cmds\"
+ 		  else
+ 		    # All subsequent reloadable object files will link in
+ 		    # the last one created.
+ 		    reload_objs="$objlist $last_robj"
+-		    eval "concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\""
++		    eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\"
+ 		  fi
+ 		  last_robj=$output_objdir/$output_la-${k}.$objext
+ 		  func_arith $k + 1
+@@ -7433,11 +8401,11 @@ EOF
+ 	      # files will link in the last one created.
+ 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
+ 	      reload_objs="$objlist $last_robj"
+-	      eval "concat_cmds=\"\${concat_cmds}$reload_cmds\""
++	      eval concat_cmds=\"\${concat_cmds}$reload_cmds\"
+ 	      if test -n "$last_robj"; then
+-	        eval "concat_cmds=\"\${concat_cmds}~\$RM $last_robj\""
++	        eval concat_cmds=\"\${concat_cmds}~\$RM $last_robj\"
+ 	      fi
+-	      delfiles="$delfiles $output"
++	      func_append delfiles " $output"
+ 
+ 	    else
+ 	      output=
+@@ -7450,9 +8418,9 @@ EOF
+ 	      libobjs=$output
+ 	      # Append the command to create the export file.
+ 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
+-	      eval "concat_cmds=\"\$concat_cmds$export_symbols_cmds\""
++	      eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\"
+ 	      if test -n "$last_robj"; then
+-		eval "concat_cmds=\"\$concat_cmds~\$RM $last_robj\""
++		eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\"
+ 	      fi
+ 	    fi
+ 
+@@ -7471,7 +8439,7 @@ EOF
+ 		lt_exit=$?
+ 
+ 		# Restore the uninstalled library and exit
+-		if test "$mode" = relink; then
++		if test "$opt_mode" = relink; then
+ 		  ( cd "$output_objdir" && \
+ 		    $RM "${realname}T" && \
+ 		    $MV "${realname}U" "$realname" )
+@@ -7492,7 +8460,7 @@ EOF
+ 	    if test -n "$export_symbols" && test -n "$include_expsyms"; then
+ 	      tmp_export_symbols="$export_symbols"
+ 	      test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
+-	      $opt_dry_run || $ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"
++	      $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
+ 	    fi
+ 
+ 	    if test -n "$orig_export_symbols"; then
+@@ -7504,7 +8472,7 @@ EOF
+ 	      # global variables. join(1) would be nice here, but unfortunately
+ 	      # isn't a blessed tool.
+ 	      $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
+-	      delfiles="$delfiles $export_symbols $output_objdir/$libname.filter"
++	      func_append delfiles " $export_symbols $output_objdir/$libname.filter"
+ 	      export_symbols=$output_objdir/$libname.def
+ 	      $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
+ 	    fi
+@@ -7515,7 +8483,7 @@ EOF
+ 	  output=$save_output
+ 
+ 	  if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then
+-	    eval "libobjs=\"\$libobjs $whole_archive_flag_spec\""
++	    eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
+ 	    test "X$libobjs" = "X " && libobjs=
+ 	  fi
+ 	  # Expand the library linking commands again to reset the
+@@ -7539,23 +8507,23 @@ EOF
+ 
+ 	if test -n "$delfiles"; then
+ 	  # Append the command to remove temporary files to $cmds.
+-	  eval "cmds=\"\$cmds~\$RM $delfiles\""
++	  eval cmds=\"\$cmds~\$RM $delfiles\"
+ 	fi
+ 
+ 	# Add any objects from preloaded convenience libraries
+ 	if test -n "$dlprefiles"; then
+ 	  gentop="$output_objdir/${outputname}x"
+-	  generated="$generated $gentop"
++	  func_append generated " $gentop"
+ 
+ 	  func_extract_archives $gentop $dlprefiles
+-	  libobjs="$libobjs $func_extract_archives_result"
++	  func_append libobjs " $func_extract_archives_result"
+ 	  test "X$libobjs" = "X " && libobjs=
+ 	fi
+ 
+ 	save_ifs="$IFS"; IFS='~'
+ 	for cmd in $cmds; do
+ 	  IFS="$save_ifs"
+-	  eval "cmd=\"$cmd\""
++	  eval cmd=\"$cmd\"
+ 	  $opt_silent || {
+ 	    func_quote_for_expand "$cmd"
+ 	    eval "func_echo $func_quote_for_expand_result"
+@@ -7564,7 +8532,7 @@ EOF
+ 	    lt_exit=$?
+ 
+ 	    # Restore the uninstalled library and exit
+-	    if test "$mode" = relink; then
++	    if test "$opt_mode" = relink; then
+ 	      ( cd "$output_objdir" && \
+ 	        $RM "${realname}T" && \
+ 		$MV "${realname}U" "$realname" )
+@@ -7576,8 +8544,8 @@ EOF
+ 	IFS="$save_ifs"
+ 
+ 	# Restore the uninstalled library and exit
+-	if test "$mode" = relink; then
+-	  $opt_dry_run || (cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname) || exit $?
++	if test "$opt_mode" = relink; then
++	  $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $?
+ 
+ 	  if test -n "$convenience"; then
+ 	    if test -z "$whole_archive_flag_spec"; then
+@@ -7656,17 +8624,20 @@ EOF
+ 
+       if test -n "$convenience"; then
+ 	if test -n "$whole_archive_flag_spec"; then
+-	  eval "tmp_whole_archive_flags=\"$whole_archive_flag_spec\""
++	  eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\"
+ 	  reload_conv_objs=$reload_objs\ `$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'`
+ 	else
+ 	  gentop="$output_objdir/${obj}x"
+-	  generated="$generated $gentop"
++	  func_append generated " $gentop"
+ 
+ 	  func_extract_archives $gentop $convenience
+ 	  reload_conv_objs="$reload_objs $func_extract_archives_result"
+ 	fi
+       fi
+ 
++      # If we're not building shared, we need to use non_pic_objs
++      test "$build_libtool_libs" != yes && libobjs="$non_pic_objects"
++
+       # Create the old-style object.
+       reload_objs="$objs$old_deplibs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; /\.lib$/d; $lo2o" | $NL2SP`" $reload_conv_objs" ### testsuite: skip nested quoting test
+ 
+@@ -7690,7 +8661,7 @@ EOF
+ 	# Create an invalid libtool object if no PIC, so that we don't
+ 	# accidentally link it into a program.
+ 	# $show "echo timestamp > $libobj"
+-	# $opt_dry_run || echo timestamp > $libobj || exit $?
++	# $opt_dry_run || eval "echo timestamp > $libobj" || exit $?
+ 	exit $EXIT_SUCCESS
+       fi
+ 
+@@ -7740,8 +8711,8 @@ EOF
+ 	if test "$tagname" = CXX ; then
+ 	  case ${MACOSX_DEPLOYMENT_TARGET-10.0} in
+ 	    10.[0123])
+-	      compile_command="$compile_command ${wl}-bind_at_load"
+-	      finalize_command="$finalize_command ${wl}-bind_at_load"
++	      func_append compile_command " ${wl}-bind_at_load"
++	      func_append finalize_command " ${wl}-bind_at_load"
+ 	    ;;
+ 	  esac
+ 	fi
+@@ -7761,7 +8732,7 @@ EOF
+ 	*)
+ 	  case " $compile_deplibs " in
+ 	  *" -L$path/$objdir "*)
+-	    new_libs="$new_libs -L$path/$objdir" ;;
++	    func_append new_libs " -L$path/$objdir" ;;
+ 	  esac
+ 	  ;;
+ 	esac
+@@ -7771,17 +8742,17 @@ EOF
+ 	-L*)
+ 	  case " $new_libs " in
+ 	  *" $deplib "*) ;;
+-	  *) new_libs="$new_libs $deplib" ;;
++	  *) func_append new_libs " $deplib" ;;
+ 	  esac
+ 	  ;;
+-	*) new_libs="$new_libs $deplib" ;;
++	*) func_append new_libs " $deplib" ;;
+ 	esac
+       done
+       compile_deplibs="$new_libs"
+ 
+ 
+-      compile_command="$compile_command $compile_deplibs"
+-      finalize_command="$finalize_command $finalize_deplibs"
++      func_append compile_command " $compile_deplibs"
++      func_append finalize_command " $finalize_deplibs"
+ 
+       if test -n "$rpath$xrpath"; then
+ 	# If the user specified any rpath flags, then add them.
+@@ -7789,7 +8760,7 @@ EOF
+ 	  # This is the magic to use -rpath.
+ 	  case "$finalize_rpath " in
+ 	  *" $libdir "*) ;;
+-	  *) finalize_rpath="$finalize_rpath $libdir" ;;
++	  *) func_append finalize_rpath " $libdir" ;;
+ 	  esac
+ 	done
+       fi
+@@ -7808,18 +8779,18 @@ EOF
+ 	      *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
+ 		;;
+ 	      *)
+-		hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
++		func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
+ 		;;
+ 	      esac
+ 	    fi
+ 	  else
+-	    eval "flag=\"$hardcode_libdir_flag_spec\""
+-	    rpath="$rpath $flag"
++	    eval flag=\"$hardcode_libdir_flag_spec\"
++	    func_append rpath " $flag"
+ 	  fi
+ 	elif test -n "$runpath_var"; then
+ 	  case "$perm_rpath " in
+ 	  *" $libdir "*) ;;
+-	  *) perm_rpath="$perm_rpath $libdir" ;;
++	  *) func_append perm_rpath " $libdir" ;;
+ 	  esac
+ 	fi
+ 	case $host in
+@@ -7828,12 +8799,12 @@ EOF
+ 	  case :$dllsearchpath: in
+ 	  *":$libdir:"*) ;;
+ 	  ::) dllsearchpath=$libdir;;
+-	  *) dllsearchpath="$dllsearchpath:$libdir";;
++	  *) func_append dllsearchpath ":$libdir";;
+ 	  esac
+ 	  case :$dllsearchpath: in
+ 	  *":$testbindir:"*) ;;
+ 	  ::) dllsearchpath=$testbindir;;
+-	  *) dllsearchpath="$dllsearchpath:$testbindir";;
++	  *) func_append dllsearchpath ":$testbindir";;
+ 	  esac
+ 	  ;;
+ 	esac
+@@ -7842,7 +8813,7 @@ EOF
+       if test -n "$hardcode_libdir_separator" &&
+ 	 test -n "$hardcode_libdirs"; then
+ 	libdir="$hardcode_libdirs"
+-	eval "rpath=\" $hardcode_libdir_flag_spec\""
++	eval rpath=\" $hardcode_libdir_flag_spec\"
+       fi
+       compile_rpath="$rpath"
+ 
+@@ -7859,18 +8830,18 @@ EOF
+ 	      *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
+ 		;;
+ 	      *)
+-		hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
++		func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
+ 		;;
+ 	      esac
+ 	    fi
+ 	  else
+-	    eval "flag=\"$hardcode_libdir_flag_spec\""
+-	    rpath="$rpath $flag"
++	    eval flag=\"$hardcode_libdir_flag_spec\"
++	    func_append rpath " $flag"
+ 	  fi
+ 	elif test -n "$runpath_var"; then
+ 	  case "$finalize_perm_rpath " in
+ 	  *" $libdir "*) ;;
+-	  *) finalize_perm_rpath="$finalize_perm_rpath $libdir" ;;
++	  *) func_append finalize_perm_rpath " $libdir" ;;
+ 	  esac
+ 	fi
+       done
+@@ -7878,7 +8849,7 @@ EOF
+       if test -n "$hardcode_libdir_separator" &&
+ 	 test -n "$hardcode_libdirs"; then
+ 	libdir="$hardcode_libdirs"
+-	eval "rpath=\" $hardcode_libdir_flag_spec\""
++	eval rpath=\" $hardcode_libdir_flag_spec\"
+       fi
+       finalize_rpath="$rpath"
+ 
+@@ -7921,6 +8892,12 @@ EOF
+ 	exit_status=0
+ 	func_show_eval "$link_command" 'exit_status=$?'
+ 
++	if test -n "$postlink_cmds"; then
++	  func_to_tool_file "$output"
++	  postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
++	  func_execute_cmds "$postlink_cmds" 'exit $?'
++	fi
++
+ 	# Delete the generated files.
+ 	if test -f "$output_objdir/${outputname}S.${objext}"; then
+ 	  func_show_eval '$RM "$output_objdir/${outputname}S.${objext}"'
+@@ -7943,7 +8920,7 @@ EOF
+ 	  # We should set the runpath_var.
+ 	  rpath=
+ 	  for dir in $perm_rpath; do
+-	    rpath="$rpath$dir:"
++	    func_append rpath "$dir:"
+ 	  done
+ 	  compile_var="$runpath_var=\"$rpath\$$runpath_var\" "
+ 	fi
+@@ -7951,7 +8928,7 @@ EOF
+ 	  # We should set the runpath_var.
+ 	  rpath=
+ 	  for dir in $finalize_perm_rpath; do
+-	    rpath="$rpath$dir:"
++	    func_append rpath "$dir:"
+ 	  done
+ 	  finalize_var="$runpath_var=\"$rpath\$$runpath_var\" "
+ 	fi
+@@ -7966,6 +8943,13 @@ EOF
+ 	$opt_dry_run || $RM $output
+ 	# Link the executable and exit
+ 	func_show_eval "$link_command" 'exit $?'
++
++	if test -n "$postlink_cmds"; then
++	  func_to_tool_file "$output"
++	  postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
++	  func_execute_cmds "$postlink_cmds" 'exit $?'
++	fi
++
+ 	exit $EXIT_SUCCESS
+       fi
+ 
+@@ -7999,6 +8983,12 @@ EOF
+ 
+       func_show_eval "$link_command" 'exit $?'
+ 
++      if test -n "$postlink_cmds"; then
++	func_to_tool_file "$output_objdir/$outputname"
++	postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
++	func_execute_cmds "$postlink_cmds" 'exit $?'
++      fi
++
+       # Now create the wrapper script.
+       func_verbose "creating $output"
+ 
+@@ -8096,7 +9086,7 @@ EOF
+ 	else
+ 	  oldobjs="$old_deplibs $non_pic_objects"
+ 	  if test "$preload" = yes && test -f "$symfileobj"; then
+-	    oldobjs="$oldobjs $symfileobj"
++	    func_append oldobjs " $symfileobj"
+ 	  fi
+ 	fi
+ 	addlibs="$old_convenience"
+@@ -8104,10 +9094,10 @@ EOF
+ 
+       if test -n "$addlibs"; then
+ 	gentop="$output_objdir/${outputname}x"
+-	generated="$generated $gentop"
++	func_append generated " $gentop"
+ 
+ 	func_extract_archives $gentop $addlibs
+-	oldobjs="$oldobjs $func_extract_archives_result"
++	func_append oldobjs " $func_extract_archives_result"
+       fi
+ 
+       # Do each command in the archive commands.
+@@ -8118,10 +9108,10 @@ EOF
+ 	# Add any objects from preloaded convenience libraries
+ 	if test -n "$dlprefiles"; then
+ 	  gentop="$output_objdir/${outputname}x"
+-	  generated="$generated $gentop"
++	  func_append generated " $gentop"
+ 
+ 	  func_extract_archives $gentop $dlprefiles
+-	  oldobjs="$oldobjs $func_extract_archives_result"
++	  func_append oldobjs " $func_extract_archives_result"
+ 	fi
+ 
+ 	# POSIX demands no paths to be encoded in archives.  We have
+@@ -8139,7 +9129,7 @@ EOF
+ 	else
+ 	  echo "copying selected object files to avoid basename conflicts..."
+ 	  gentop="$output_objdir/${outputname}x"
+-	  generated="$generated $gentop"
++	  func_append generated " $gentop"
+ 	  func_mkdir_p "$gentop"
+ 	  save_oldobjs=$oldobjs
+ 	  oldobjs=
+@@ -8163,18 +9153,28 @@ EOF
+ 		esac
+ 	      done
+ 	      func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj"
+-	      oldobjs="$oldobjs $gentop/$newobj"
++	      func_append oldobjs " $gentop/$newobj"
+ 	      ;;
+-	    *) oldobjs="$oldobjs $obj" ;;
++	    *) func_append oldobjs " $obj" ;;
+ 	    esac
+ 	  done
+ 	fi
+-	eval "cmds=\"$old_archive_cmds\""
++	eval cmds=\"$old_archive_cmds\"
+ 
+ 	func_len " $cmds"
+ 	len=$func_len_result
+ 	if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
+ 	  cmds=$old_archive_cmds
++	elif test -n "$archiver_list_spec"; then
++	  func_verbose "using command file archive linking..."
++	  for obj in $oldobjs
++	  do
++	    func_to_tool_file "$obj"
++	    $ECHO "$func_to_tool_file_result"
++	  done > $output_objdir/$libname.libcmd
++	  func_to_tool_file "$output_objdir/$libname.libcmd"
++	  oldobjs=" $archiver_list_spec$func_to_tool_file_result"
++	  cmds=$old_archive_cmds
+ 	else
+ 	  # the command line is too long to link in one step, link in parts
+ 	  func_verbose "using piecewise archive linking..."
+@@ -8189,7 +9189,7 @@ EOF
+ 	  do
+ 	    last_oldobj=$obj
+ 	  done
+-	  eval "test_cmds=\"$old_archive_cmds\""
++	  eval test_cmds=\"$old_archive_cmds\"
+ 	  func_len " $test_cmds"
+ 	  len0=$func_len_result
+ 	  len=$len0
+@@ -8208,7 +9208,7 @@ EOF
+ 		RANLIB=$save_RANLIB
+ 	      fi
+ 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
+-	      eval "concat_cmds=\"\${concat_cmds}$old_archive_cmds\""
++	      eval concat_cmds=\"\${concat_cmds}$old_archive_cmds\"
+ 	      objlist=
+ 	      len=$len0
+ 	    fi
+@@ -8216,9 +9216,9 @@ EOF
+ 	  RANLIB=$save_RANLIB
+ 	  oldobjs=$objlist
+ 	  if test "X$oldobjs" = "X" ; then
+-	    eval "cmds=\"\$concat_cmds\""
++	    eval cmds=\"\$concat_cmds\"
+ 	  else
+-	    eval "cmds=\"\$concat_cmds~\$old_archive_cmds\""
++	    eval cmds=\"\$concat_cmds~\$old_archive_cmds\"
+ 	  fi
+ 	fi
+       fi
+@@ -8268,12 +9268,23 @@ EOF
+ 	      *.la)
+ 		func_basename "$deplib"
+ 		name="$func_basename_result"
+-		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
++		func_resolve_sysroot "$deplib"
++		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
+ 		test -z "$libdir" && \
+ 		  func_fatal_error "\`$deplib' is not a valid libtool archive"
+-		newdependency_libs="$newdependency_libs $libdir/$name"
++		func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name"
++		;;
++	      -L*)
++		func_stripname -L '' "$deplib"
++		func_replace_sysroot "$func_stripname_result"
++		func_append newdependency_libs " -L$func_replace_sysroot_result"
+ 		;;
+-	      *) newdependency_libs="$newdependency_libs $deplib" ;;
++	      -R*)
++		func_stripname -R '' "$deplib"
++		func_replace_sysroot "$func_stripname_result"
++		func_append newdependency_libs " -R$func_replace_sysroot_result"
++		;;
++	      *) func_append newdependency_libs " $deplib" ;;
+ 	      esac
+ 	    done
+ 	    dependency_libs="$newdependency_libs"
+@@ -8284,12 +9295,14 @@ EOF
+ 	      *.la)
+ 	        func_basename "$lib"
+ 		name="$func_basename_result"
+-		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
++		func_resolve_sysroot "$lib"
++		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
++
+ 		test -z "$libdir" && \
+ 		  func_fatal_error "\`$lib' is not a valid libtool archive"
+-		newdlfiles="$newdlfiles $libdir/$name"
++		func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name"
+ 		;;
+-	      *) newdlfiles="$newdlfiles $lib" ;;
++	      *) func_append newdlfiles " $lib" ;;
+ 	      esac
+ 	    done
+ 	    dlfiles="$newdlfiles"
+@@ -8303,10 +9316,11 @@ EOF
+ 		# the library:
+ 		func_basename "$lib"
+ 		name="$func_basename_result"
+-		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
++		func_resolve_sysroot "$lib"
++		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
+ 		test -z "$libdir" && \
+ 		  func_fatal_error "\`$lib' is not a valid libtool archive"
+-		newdlprefiles="$newdlprefiles $libdir/$name"
++		func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name"
+ 		;;
+ 	      esac
+ 	    done
+@@ -8318,7 +9332,7 @@ EOF
+ 		[\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
+ 		*) abs=`pwd`"/$lib" ;;
+ 	      esac
+-	      newdlfiles="$newdlfiles $abs"
++	      func_append newdlfiles " $abs"
+ 	    done
+ 	    dlfiles="$newdlfiles"
+ 	    newdlprefiles=
+@@ -8327,7 +9341,7 @@ EOF
+ 		[\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
+ 		*) abs=`pwd`"/$lib" ;;
+ 	      esac
+-	      newdlprefiles="$newdlprefiles $abs"
++	      func_append newdlprefiles " $abs"
+ 	    done
+ 	    dlprefiles="$newdlprefiles"
+ 	  fi
+@@ -8412,7 +9426,7 @@ relink_command=\"$relink_command\""
+     exit $EXIT_SUCCESS
+ }
+ 
+-{ test "$mode" = link || test "$mode" = relink; } &&
++{ test "$opt_mode" = link || test "$opt_mode" = relink; } &&
+     func_mode_link ${1+"$@"}
+ 
+ 
+@@ -8432,9 +9446,9 @@ func_mode_uninstall ()
+     for arg
+     do
+       case $arg in
+-      -f) RM="$RM $arg"; rmforce=yes ;;
+-      -*) RM="$RM $arg" ;;
+-      *) files="$files $arg" ;;
++      -f) func_append RM " $arg"; rmforce=yes ;;
++      -*) func_append RM " $arg" ;;
++      *) func_append files " $arg" ;;
+       esac
+     done
+ 
+@@ -8443,24 +9457,23 @@ func_mode_uninstall ()
+ 
+     rmdirs=
+ 
+-    origobjdir="$objdir"
+     for file in $files; do
+       func_dirname "$file" "" "."
+       dir="$func_dirname_result"
+       if test "X$dir" = X.; then
+-	objdir="$origobjdir"
++	odir="$objdir"
+       else
+-	objdir="$dir/$origobjdir"
++	odir="$dir/$objdir"
+       fi
+       func_basename "$file"
+       name="$func_basename_result"
+-      test "$mode" = uninstall && objdir="$dir"
++      test "$opt_mode" = uninstall && odir="$dir"
+ 
+-      # Remember objdir for removal later, being careful to avoid duplicates
+-      if test "$mode" = clean; then
++      # Remember odir for removal later, being careful to avoid duplicates
++      if test "$opt_mode" = clean; then
+ 	case " $rmdirs " in
+-	  *" $objdir "*) ;;
+-	  *) rmdirs="$rmdirs $objdir" ;;
++	  *" $odir "*) ;;
++	  *) func_append rmdirs " $odir" ;;
+ 	esac
+       fi
+ 
+@@ -8486,18 +9499,17 @@ func_mode_uninstall ()
+ 
+ 	  # Delete the libtool libraries and symlinks.
+ 	  for n in $library_names; do
+-	    rmfiles="$rmfiles $objdir/$n"
++	    func_append rmfiles " $odir/$n"
+ 	  done
+-	  test -n "$old_library" && rmfiles="$rmfiles $objdir/$old_library"
++	  test -n "$old_library" && func_append rmfiles " $odir/$old_library"
+ 
+-	  case "$mode" in
++	  case "$opt_mode" in
+ 	  clean)
+-	    case "  $library_names " in
+-	    # "  " in the beginning catches empty $dlname
++	    case " $library_names " in
+ 	    *" $dlname "*) ;;
+-	    *) rmfiles="$rmfiles $objdir/$dlname" ;;
++	    *) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;;
+ 	    esac
+-	    test -n "$libdir" && rmfiles="$rmfiles $objdir/$name $objdir/${name}i"
++	    test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i"
+ 	    ;;
+ 	  uninstall)
+ 	    if test -n "$library_names"; then
+@@ -8525,19 +9537,19 @@ func_mode_uninstall ()
+ 	  # Add PIC object to the list of files to remove.
+ 	  if test -n "$pic_object" &&
+ 	     test "$pic_object" != none; then
+-	    rmfiles="$rmfiles $dir/$pic_object"
++	    func_append rmfiles " $dir/$pic_object"
+ 	  fi
+ 
+ 	  # Add non-PIC object to the list of files to remove.
+ 	  if test -n "$non_pic_object" &&
+ 	     test "$non_pic_object" != none; then
+-	    rmfiles="$rmfiles $dir/$non_pic_object"
++	    func_append rmfiles " $dir/$non_pic_object"
+ 	  fi
+ 	fi
+ 	;;
+ 
+       *)
+-	if test "$mode" = clean ; then
++	if test "$opt_mode" = clean ; then
+ 	  noexename=$name
+ 	  case $file in
+ 	  *.exe)
+@@ -8547,7 +9559,7 @@ func_mode_uninstall ()
+ 	    noexename=$func_stripname_result
+ 	    # $file with .exe has already been added to rmfiles,
+ 	    # add $file without .exe
+-	    rmfiles="$rmfiles $file"
++	    func_append rmfiles " $file"
+ 	    ;;
+ 	  esac
+ 	  # Do a test to see if this is a libtool program.
+@@ -8556,7 +9568,7 @@ func_mode_uninstall ()
+ 	      func_ltwrapper_scriptname "$file"
+ 	      relink_command=
+ 	      func_source $func_ltwrapper_scriptname_result
+-	      rmfiles="$rmfiles $func_ltwrapper_scriptname_result"
++	      func_append rmfiles " $func_ltwrapper_scriptname_result"
+ 	    else
+ 	      relink_command=
+ 	      func_source $dir/$noexename
+@@ -8564,12 +9576,12 @@ func_mode_uninstall ()
+ 
+ 	    # note $name still contains .exe if it was in $file originally
+ 	    # as does the version of $file that was added into $rmfiles
+-	    rmfiles="$rmfiles $objdir/$name $objdir/${name}S.${objext}"
++	    func_append rmfiles " $odir/$name $odir/${name}S.${objext}"
+ 	    if test "$fast_install" = yes && test -n "$relink_command"; then
+-	      rmfiles="$rmfiles $objdir/lt-$name"
++	      func_append rmfiles " $odir/lt-$name"
+ 	    fi
+ 	    if test "X$noexename" != "X$name" ; then
+-	      rmfiles="$rmfiles $objdir/lt-${noexename}.c"
++	      func_append rmfiles " $odir/lt-${noexename}.c"
+ 	    fi
+ 	  fi
+ 	fi
+@@ -8577,7 +9589,6 @@ func_mode_uninstall ()
+       esac
+       func_show_eval "$RM $rmfiles" 'exit_status=1'
+     done
+-    objdir="$origobjdir"
+ 
+     # Try to remove the ${objdir}s in the directories where we deleted files
+     for dir in $rmdirs; do
+@@ -8589,16 +9600,16 @@ func_mode_uninstall ()
+     exit $exit_status
+ }
+ 
+-{ test "$mode" = uninstall || test "$mode" = clean; } &&
++{ test "$opt_mode" = uninstall || test "$opt_mode" = clean; } &&
+     func_mode_uninstall ${1+"$@"}
+ 
+-test -z "$mode" && {
++test -z "$opt_mode" && {
+   help="$generic_help"
+   func_fatal_help "you must specify a MODE"
+ }
+ 
+ test -z "$exec_cmd" && \
+-  func_fatal_help "invalid operation mode \`$mode'"
++  func_fatal_help "invalid operation mode \`$opt_mode'"
+ 
+ if test -n "$exec_cmd"; then
+   eval exec "$exec_cmd"
+diff --git a/ltoptions.m4 b/ltoptions.m4
+index 5ef12ced2a8..17cfd51c0b3 100644
+--- a/ltoptions.m4
++++ b/ltoptions.m4
+@@ -8,7 +8,7 @@
+ # unlimited permission to copy and/or distribute it, with or without
+ # modifications, as long as this notice is preserved.
+ 
+-# serial 6 ltoptions.m4
++# serial 7 ltoptions.m4
+ 
+ # This is to help aclocal find these macros, as it can't see m4_define.
+ AC_DEFUN([LTOPTIONS_VERSION], [m4_if([1])])
+diff --git a/ltversion.m4 b/ltversion.m4
+index bf87f77132d..9c7b5d41185 100644
+--- a/ltversion.m4
++++ b/ltversion.m4
+@@ -7,17 +7,17 @@
+ # unlimited permission to copy and/or distribute it, with or without
+ # modifications, as long as this notice is preserved.
+ 
+-# Generated from ltversion.in.
++# @configure_input@
+ 
+-# serial 3134 ltversion.m4
++# serial 3293 ltversion.m4
+ # This file is part of GNU Libtool
+ 
+-m4_define([LT_PACKAGE_VERSION], [2.2.7a])
+-m4_define([LT_PACKAGE_REVISION], [1.3134])
++m4_define([LT_PACKAGE_VERSION], [2.4])
++m4_define([LT_PACKAGE_REVISION], [1.3293])
+ 
+ AC_DEFUN([LTVERSION_VERSION],
+-[macro_version='2.2.7a'
+-macro_revision='1.3134'
++[macro_version='2.4'
++macro_revision='1.3293'
+ _LT_DECL(, macro_version, 0, [Which release of libtool.m4 was used?])
+ _LT_DECL(, macro_revision, 0)
+ ])
+diff --git a/lt~obsolete.m4 b/lt~obsolete.m4
+index bf92b5e0790..c573da90c5c 100644
+--- a/lt~obsolete.m4
++++ b/lt~obsolete.m4
+@@ -7,7 +7,7 @@
+ # unlimited permission to copy and/or distribute it, with or without
+ # modifications, as long as this notice is preserved.
+ 
+-# serial 4 lt~obsolete.m4
++# serial 5 lt~obsolete.m4
+ 
+ # These exist entirely to fool aclocal when bootstrapping libtool.
+ #
+diff --git a/opcodes/Makefile.in b/opcodes/Makefile.in
+index 2257b0872af..73aae3b210f 100644
+--- a/opcodes/Makefile.in
++++ b/opcodes/Makefile.in
+@@ -292,6 +292,7 @@ CYGPATH_W = @CYGPATH_W@
+ DATADIRNAME = @DATADIRNAME@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+@@ -325,6 +326,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ MKINSTALLDIRS = @MKINSTALLDIRS@
+ MSGFMT = @MSGFMT@
+@@ -363,6 +365,7 @@ abs_builddir = @abs_builddir@
+ abs_srcdir = @abs_srcdir@
+ abs_top_builddir = @abs_top_builddir@
+ abs_top_srcdir = @abs_top_srcdir@
++ac_ct_AR = @ac_ct_AR@
+ ac_ct_CC = @ac_ct_CC@
+ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+ am__include = @am__include@
+diff --git a/opcodes/configure b/opcodes/configure
+index db023b48c28..c562aada2a4 100755
+--- a/opcodes/configure
++++ b/opcodes/configure
+@@ -682,6 +682,9 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
++ac_ct_AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -800,6 +803,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_checking
+ enable_targets
+@@ -1468,6 +1472,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+ 
+ Some influential environment variables:
+   CC          C compiler command
+@@ -4977,8 +4983,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -5018,7 +5024,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5711,8 +5717,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5761,6 +5767,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -5777,6 +5857,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -5945,7 +6030,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6099,6 +6185,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6114,6 +6215,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -6128,8 +6380,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -6145,7 +6399,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6165,11 +6419,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -6185,7 +6443,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6204,6 +6462,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6215,25 +6477,20 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
+ 
+ 
+ 
+@@ -6244,6 +6501,63 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
++
+ 
+ if test -n "$ac_tool_prefix"; then
+   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+@@ -6584,8 +6898,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6621,6 +6935,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6662,6 +6977,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6673,7 +7000,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6699,8 +7026,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6710,8 +7037,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6748,6 +7075,14 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
+ 
+ 
+ 
+@@ -6766,6 +7101,47 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
++
++
+ 
+ 
+ 
+@@ -6975,6 +7351,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7538,6 +8031,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8090,8 +8585,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8257,6 +8750,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8319,7 +8818,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8376,13 +8875,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8443,6 +8946,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8793,7 +9301,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8892,12 +9401,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -8911,8 +9420,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -8930,8 +9439,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8977,8 +9486,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9108,7 +9617,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9121,22 +9636,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9148,7 +9670,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9161,22 +9689,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9221,20 +9756,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9295,7 +9873,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9303,7 +9881,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9319,7 +9897,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9343,10 +9921,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9425,23 +10003,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9526,7 +10117,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9545,9 +10136,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10123,8 +10714,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10157,13 +10749,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -11041,7 +11691,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11044 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11085,10 +11735,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11147,7 +11797,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 11150 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -11191,10 +11841,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -13396,13 +14046,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -13417,14 +14074,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -13457,12 +14117,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -13517,8 +14177,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -13528,12 +14193,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -13549,7 +14216,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -13585,6 +14251,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -14350,7 +15017,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -14453,19 +15121,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -14495,6 +15186,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -14504,6 +15201,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -14618,12 +15318,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -14710,9 +15410,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -14728,6 +15425,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -14760,210 +15460,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/sim/Makefile.in b/sim/Makefile.in
+index dbbaa84224a..f819bbe0bdd 100644
+--- a/sim/Makefile.in
++++ b/sim/Makefile.in
+@@ -744,6 +744,7 @@ C_DIALECT = @C_DIALECT@
+ DATADIRNAME = @DATADIRNAME@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DTC = @DTC@
+ DUMPBIN = @DUMPBIN@
+@@ -809,6 +810,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ NM = @NM@
+ NMEDIT = @NMEDIT@
+@@ -859,6 +861,7 @@ abs_builddir = @abs_builddir@
+ abs_srcdir = @abs_srcdir@
+ abs_top_builddir = @abs_top_builddir@
+ abs_top_srcdir = @abs_top_srcdir@
++ac_ct_AR = @ac_ct_AR@
+ ac_ct_CC = @ac_ct_CC@
+ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+ am__include = @am__include@
+diff --git a/zlib/Makefile.in b/zlib/Makefile.in
+index c7584492a65..0605835c14f 100644
+--- a/zlib/Makefile.in
++++ b/zlib/Makefile.in
+@@ -1,7 +1,7 @@
+-# Makefile.in generated by automake 1.16.5 from Makefile.am.
++# Makefile.in generated by automake 1.15.1 from Makefile.am.
+ # @configure_input@
+ 
+-# Copyright (C) 1994-2021 Free Software Foundation, Inc.
++# Copyright (C) 1994-2017 Free Software Foundation, Inc.
+ 
+ # This Makefile.in is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -138,7 +138,6 @@ am__uninstall_files_from_dir = { \
+   }
+ am__installdirs = "$(DESTDIR)$(toolexeclibdir)"
+ LIBRARIES = $(toolexeclib_LIBRARIES)
+-LTLIBRARIES = $(noinst_LTLIBRARIES)
+ ARFLAGS = cru
+ AM_V_AR = $(am__v_AR_@AM_V@)
+ am__v_AR_ = $(am__v_AR_@AM_DEFAULT_V@)
+@@ -161,6 +160,7 @@ am__objects_1 = libz_a-adler32.$(OBJEXT) libz_a-compress.$(OBJEXT) \
+ 	libz_a-zutil.$(OBJEXT)
+ @TARGET_LIBRARY_FALSE@am_libz_a_OBJECTS = $(am__objects_1)
+ libz_a_OBJECTS = $(am_libz_a_OBJECTS)
++LTLIBRARIES = $(noinst_LTLIBRARIES)
+ libzgcj_convenience_la_LIBADD =
+ am__libzgcj_convenience_la_SOURCES_DIST = adler32.c compress.c crc32.c \
+ 	crc32.h deflate.c deflate.h gzguts.h gzread.c gzclose.c \
+@@ -192,22 +192,7 @@ am__v_at_0 = @
+ am__v_at_1 = 
+ DEFAULT_INCLUDES = -I.@am__isrc@
+ depcomp = $(SHELL) $(top_srcdir)/../depcomp
+-am__maybe_remake_depfiles = depfiles
+-am__depfiles_remade = ./$(DEPDIR)/adler32.Plo ./$(DEPDIR)/compress.Plo \
+-	./$(DEPDIR)/crc32.Plo ./$(DEPDIR)/deflate.Plo \
+-	./$(DEPDIR)/gzclose.Plo ./$(DEPDIR)/gzlib.Plo \
+-	./$(DEPDIR)/gzread.Plo ./$(DEPDIR)/gzwrite.Plo \
+-	./$(DEPDIR)/infback.Plo ./$(DEPDIR)/inffast.Plo \
+-	./$(DEPDIR)/inflate.Plo ./$(DEPDIR)/inftrees.Plo \
+-	./$(DEPDIR)/libz_a-adler32.Po ./$(DEPDIR)/libz_a-compress.Po \
+-	./$(DEPDIR)/libz_a-crc32.Po ./$(DEPDIR)/libz_a-deflate.Po \
+-	./$(DEPDIR)/libz_a-gzclose.Po ./$(DEPDIR)/libz_a-gzlib.Po \
+-	./$(DEPDIR)/libz_a-gzread.Po ./$(DEPDIR)/libz_a-gzwrite.Po \
+-	./$(DEPDIR)/libz_a-infback.Po ./$(DEPDIR)/libz_a-inffast.Po \
+-	./$(DEPDIR)/libz_a-inflate.Po ./$(DEPDIR)/libz_a-inftrees.Po \
+-	./$(DEPDIR)/libz_a-trees.Po ./$(DEPDIR)/libz_a-uncompr.Po \
+-	./$(DEPDIR)/libz_a-zutil.Po ./$(DEPDIR)/trees.Plo \
+-	./$(DEPDIR)/uncompr.Plo ./$(DEPDIR)/zutil.Plo
++am__depfiles_maybe = depfiles
+ am__mv = mv -f
+ COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
+ 	$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
+@@ -252,6 +237,9 @@ am__define_uniq_tagged_files = \
+   unique=`for i in $$list; do \
+     if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
+   done | $(am__uniquify_input)`
++ETAGS = etags
++CTAGS = ctags
++CSCOPE = cscope
+ AM_RECURSIVE_TARGETS = cscope
+ am__DIST_COMMON = $(srcdir)/Makefile.in $(top_srcdir)/../compile \
+ 	$(top_srcdir)/../config.guess $(top_srcdir)/../config.sub \
+@@ -272,8 +260,6 @@ am__post_remove_distdir = $(am__remove_distdir)
+ DIST_ARCHIVES = $(distdir).tar.gz
+ GZIP_ENV = --best
+ DIST_TARGETS = dist-gzip
+-# Exists only to be overridden by the user if desired.
+-AM_DISTCHECK_DVI_TARGET = dvi
+ distuninstallcheck_listfiles = find . -type f -print
+ am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \
+   | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$'
+@@ -292,18 +278,16 @@ CFLAGS = @CFLAGS@
+ COMPPATH = @COMPPATH@
+ CPP = @CPP@
+ CPPFLAGS = @CPPFLAGS@
+-CSCOPE = @CSCOPE@
+-CTAGS = @CTAGS@
+ CYGPATH_W = @CYGPATH_W@
+ DEFS = @DEFS@
+ DEPDIR = @DEPDIR@
++DLLTOOL = @DLLTOOL@
+ DSYMUTIL = @DSYMUTIL@
+ DUMPBIN = @DUMPBIN@
+ ECHO_C = @ECHO_C@
+ ECHO_N = @ECHO_N@
+ ECHO_T = @ECHO_T@
+ EGREP = @EGREP@
+-ETAGS = @ETAGS@
+ EXEEXT = @EXEEXT@
+ FGREP = @FGREP@
+ GREP = @GREP@
+@@ -322,6 +306,7 @@ LN_S = @LN_S@
+ LTLIBOBJS = @LTLIBOBJS@
+ MAINT = @MAINT@
+ MAKEINFO = @MAKEINFO@
++MANIFEST_TOOL = @MANIFEST_TOOL@
+ MKDIR_P = @MKDIR_P@
+ NM = @NM@
+ NMEDIT = @NMEDIT@
+@@ -348,6 +333,7 @@ abs_builddir = @abs_builddir@
+ abs_srcdir = @abs_srcdir@
+ abs_top_builddir = @abs_top_builddir@
+ abs_top_srcdir = @abs_top_srcdir@
++ac_ct_AR = @ac_ct_AR@
+ ac_ct_CC = @ac_ct_CC@
+ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
+ am__include = @am__include@
+@@ -491,8 +477,8 @@ Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
+ 	    echo ' $(SHELL) ./config.status'; \
+ 	    $(SHELL) ./config.status;; \
+ 	  *) \
+-	    echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles)'; \
+-	    cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles);; \
++	    echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \
++	    cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \
+ 	esac;
+ $(top_srcdir)/../multilib.am $(am__empty):
+ 
+@@ -536,6 +522,11 @@ uninstall-toolexeclibLIBRARIES:
+ clean-toolexeclibLIBRARIES:
+ 	-test -z "$(toolexeclib_LIBRARIES)" || rm -f $(toolexeclib_LIBRARIES)
+ 
++libz.a: $(libz_a_OBJECTS) $(libz_a_DEPENDENCIES) $(EXTRA_libz_a_DEPENDENCIES) 
++	$(AM_V_at)-rm -f libz.a
++	$(AM_V_AR)$(libz_a_AR) libz.a $(libz_a_OBJECTS) $(libz_a_LIBADD)
++	$(AM_V_at)$(RANLIB) libz.a
++
+ clean-noinstLTLIBRARIES:
+ 	-test -z "$(noinst_LTLIBRARIES)" || rm -f $(noinst_LTLIBRARIES)
+ 	@list='$(noinst_LTLIBRARIES)'; \
+@@ -547,11 +538,6 @@ clean-noinstLTLIBRARIES:
+ 	  rm -f $${locs}; \
+ 	}
+ 
+-libz.a: $(libz_a_OBJECTS) $(libz_a_DEPENDENCIES) $(EXTRA_libz_a_DEPENDENCIES) 
+-	$(AM_V_at)-rm -f libz.a
+-	$(AM_V_AR)$(libz_a_AR) libz.a $(libz_a_OBJECTS) $(libz_a_LIBADD)
+-	$(AM_V_at)$(RANLIB) libz.a
+-
+ libzgcj_convenience.la: $(libzgcj_convenience_la_OBJECTS) $(libzgcj_convenience_la_DEPENDENCIES) $(EXTRA_libzgcj_convenience_la_DEPENDENCIES) 
+ 	$(AM_V_CCLD)$(LINK) $(am_libzgcj_convenience_la_rpath) $(libzgcj_convenience_la_OBJECTS) $(libzgcj_convenience_la_LIBADD) $(LIBS)
+ 
+@@ -561,42 +547,36 @@ mostlyclean-compile:
+ distclean-compile:
+ 	-rm -f *.tab.c
+ 
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/adler32.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/compress.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/crc32.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/deflate.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/gzclose.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/gzlib.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/gzread.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/gzwrite.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/infback.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/inffast.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/inflate.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/inftrees.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-adler32.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-compress.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-crc32.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-deflate.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-gzclose.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-gzlib.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-gzread.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-gzwrite.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-infback.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-inffast.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-inflate.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-inftrees.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-trees.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-uncompr.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-zutil.Po@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/trees.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/uncompr.Plo@am__quote@ # am--include-marker
+-@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/zutil.Plo@am__quote@ # am--include-marker
+-
+-$(am__depfiles_remade):
+-	@$(MKDIR_P) $(@D)
+-	@echo '# dummy' >$@-t && $(am__mv) $@-t $@
+-
+-am--depfiles: $(am__depfiles_remade)
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/adler32.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/compress.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/crc32.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/deflate.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/gzclose.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/gzlib.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/gzread.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/gzwrite.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/infback.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/inffast.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/inflate.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/inftrees.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-adler32.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-compress.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-crc32.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-deflate.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-gzclose.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-gzlib.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-gzread.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-gzwrite.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-infback.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-inffast.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-inflate.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-inftrees.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-trees.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-uncompr.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libz_a-zutil.Po@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/trees.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/uncompr.Plo@am__quote@
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/zutil.Plo@am__quote@
+ 
+ .c.o:
+ @am__fastdepCC_TRUE@	$(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
+@@ -896,10 +876,8 @@ cscopelist-am: $(am__tagged_files)
+ distclean-tags:
+ 	-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
+ 	-rm -f cscope.out cscope.in.out cscope.po.out cscope.files
+-distdir: $(BUILT_SOURCES)
+-	$(MAKE) $(AM_MAKEFLAGS) distdir-am
+ 
+-distdir-am: $(DISTFILES)
++distdir: $(DISTFILES)
+ 	$(am__remove_distdir)
+ 	test -d "$(distdir)" || mkdir "$(distdir)"
+ 	@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+@@ -954,10 +932,6 @@ dist-xz: distdir
+ 	tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz
+ 	$(am__post_remove_distdir)
+ 
+-dist-zstd: distdir
+-	tardir=$(distdir) && $(am__tar) | zstd -c $${ZSTD_CLEVEL-$${ZSTD_OPT--19}} >$(distdir).tar.zst
+-	$(am__post_remove_distdir)
+-
+ dist-tarZ: distdir
+ 	@echo WARNING: "Support for distribution archives compressed with" \
+ 		       "legacy program 'compress' is deprecated." >&2
+@@ -1000,8 +974,6 @@ distcheck: dist
+ 	  eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).shar.gz | unshar ;;\
+ 	*.zip*) \
+ 	  unzip $(distdir).zip ;;\
+-	*.tar.zst*) \
+-	  zstd -dc $(distdir).tar.zst | $(am__untar) ;;\
+ 	esac
+ 	chmod -R a-w $(distdir)
+ 	chmod u+w $(distdir)
+@@ -1017,7 +989,7 @@ distcheck: dist
+ 	    $(DISTCHECK_CONFIGURE_FLAGS) \
+ 	    --srcdir=../.. --prefix="$$dc_install_base" \
+ 	  && $(MAKE) $(AM_MAKEFLAGS) \
+-	  && $(MAKE) $(AM_MAKEFLAGS) $(AM_DISTCHECK_DVI_TARGET) \
++	  && $(MAKE) $(AM_MAKEFLAGS) dvi \
+ 	  && $(MAKE) $(AM_MAKEFLAGS) check \
+ 	  && $(MAKE) $(AM_MAKEFLAGS) install \
+ 	  && $(MAKE) $(AM_MAKEFLAGS) installcheck \
+@@ -1113,36 +1085,7 @@ clean-am: clean-generic clean-libtool clean-local \
+ 
+ distclean: distclean-am
+ 	-rm -f $(am__CONFIG_DISTCLEAN_FILES)
+-		-rm -f ./$(DEPDIR)/adler32.Plo
+-	-rm -f ./$(DEPDIR)/compress.Plo
+-	-rm -f ./$(DEPDIR)/crc32.Plo
+-	-rm -f ./$(DEPDIR)/deflate.Plo
+-	-rm -f ./$(DEPDIR)/gzclose.Plo
+-	-rm -f ./$(DEPDIR)/gzlib.Plo
+-	-rm -f ./$(DEPDIR)/gzread.Plo
+-	-rm -f ./$(DEPDIR)/gzwrite.Plo
+-	-rm -f ./$(DEPDIR)/infback.Plo
+-	-rm -f ./$(DEPDIR)/inffast.Plo
+-	-rm -f ./$(DEPDIR)/inflate.Plo
+-	-rm -f ./$(DEPDIR)/inftrees.Plo
+-	-rm -f ./$(DEPDIR)/libz_a-adler32.Po
+-	-rm -f ./$(DEPDIR)/libz_a-compress.Po
+-	-rm -f ./$(DEPDIR)/libz_a-crc32.Po
+-	-rm -f ./$(DEPDIR)/libz_a-deflate.Po
+-	-rm -f ./$(DEPDIR)/libz_a-gzclose.Po
+-	-rm -f ./$(DEPDIR)/libz_a-gzlib.Po
+-	-rm -f ./$(DEPDIR)/libz_a-gzread.Po
+-	-rm -f ./$(DEPDIR)/libz_a-gzwrite.Po
+-	-rm -f ./$(DEPDIR)/libz_a-infback.Po
+-	-rm -f ./$(DEPDIR)/libz_a-inffast.Po
+-	-rm -f ./$(DEPDIR)/libz_a-inflate.Po
+-	-rm -f ./$(DEPDIR)/libz_a-inftrees.Po
+-	-rm -f ./$(DEPDIR)/libz_a-trees.Po
+-	-rm -f ./$(DEPDIR)/libz_a-uncompr.Po
+-	-rm -f ./$(DEPDIR)/libz_a-zutil.Po
+-	-rm -f ./$(DEPDIR)/trees.Plo
+-	-rm -f ./$(DEPDIR)/uncompr.Plo
+-	-rm -f ./$(DEPDIR)/zutil.Plo
++	-rm -rf ./$(DEPDIR)
+ 	-rm -f Makefile
+ distclean-am: clean-am distclean-compile distclean-generic \
+ 	distclean-libtool distclean-local distclean-tags
+@@ -1190,36 +1133,7 @@ installcheck-am:
+ maintainer-clean: maintainer-clean-am
+ 	-rm -f $(am__CONFIG_DISTCLEAN_FILES)
+ 	-rm -rf $(top_srcdir)/autom4te.cache
+-		-rm -f ./$(DEPDIR)/adler32.Plo
+-	-rm -f ./$(DEPDIR)/compress.Plo
+-	-rm -f ./$(DEPDIR)/crc32.Plo
+-	-rm -f ./$(DEPDIR)/deflate.Plo
+-	-rm -f ./$(DEPDIR)/gzclose.Plo
+-	-rm -f ./$(DEPDIR)/gzlib.Plo
+-	-rm -f ./$(DEPDIR)/gzread.Plo
+-	-rm -f ./$(DEPDIR)/gzwrite.Plo
+-	-rm -f ./$(DEPDIR)/infback.Plo
+-	-rm -f ./$(DEPDIR)/inffast.Plo
+-	-rm -f ./$(DEPDIR)/inflate.Plo
+-	-rm -f ./$(DEPDIR)/inftrees.Plo
+-	-rm -f ./$(DEPDIR)/libz_a-adler32.Po
+-	-rm -f ./$(DEPDIR)/libz_a-compress.Po
+-	-rm -f ./$(DEPDIR)/libz_a-crc32.Po
+-	-rm -f ./$(DEPDIR)/libz_a-deflate.Po
+-	-rm -f ./$(DEPDIR)/libz_a-gzclose.Po
+-	-rm -f ./$(DEPDIR)/libz_a-gzlib.Po
+-	-rm -f ./$(DEPDIR)/libz_a-gzread.Po
+-	-rm -f ./$(DEPDIR)/libz_a-gzwrite.Po
+-	-rm -f ./$(DEPDIR)/libz_a-infback.Po
+-	-rm -f ./$(DEPDIR)/libz_a-inffast.Po
+-	-rm -f ./$(DEPDIR)/libz_a-inflate.Po
+-	-rm -f ./$(DEPDIR)/libz_a-inftrees.Po
+-	-rm -f ./$(DEPDIR)/libz_a-trees.Po
+-	-rm -f ./$(DEPDIR)/libz_a-uncompr.Po
+-	-rm -f ./$(DEPDIR)/libz_a-zutil.Po
+-	-rm -f ./$(DEPDIR)/trees.Plo
+-	-rm -f ./$(DEPDIR)/uncompr.Plo
+-	-rm -f ./$(DEPDIR)/zutil.Plo
++	-rm -rf ./$(DEPDIR)
+ 	-rm -f Makefile
+ maintainer-clean-am: distclean-am maintainer-clean-generic \
+ 	maintainer-clean-local
+@@ -1241,17 +1155,17 @@ uninstall-am: uninstall-toolexeclibLIBRARIES
+ 
+ .MAKE: install-am install-strip
+ 
+-.PHONY: CTAGS GTAGS TAGS all all-am all-local am--depfiles am--refresh \
+-	check check-am clean clean-cscope clean-generic clean-libtool \
++.PHONY: CTAGS GTAGS TAGS all all-am all-local am--refresh check \
++	check-am clean clean-cscope clean-generic clean-libtool \
+ 	clean-local clean-noinstLTLIBRARIES clean-toolexeclibLIBRARIES \
+ 	cscope cscopelist-am ctags ctags-am dist dist-all dist-bzip2 \
+ 	dist-gzip dist-lzip dist-shar dist-tarZ dist-xz dist-zip \
+-	dist-zstd distcheck distclean distclean-compile \
+-	distclean-generic distclean-libtool distclean-local \
+-	distclean-tags distcleancheck distdir distuninstallcheck dvi \
+-	dvi-am html html-am info info-am install install-am \
+-	install-data install-data-am install-dvi install-dvi-am \
+-	install-exec install-exec-am install-exec-local install-html \
++	distcheck distclean distclean-compile distclean-generic \
++	distclean-libtool distclean-local distclean-tags \
++	distcleancheck distdir distuninstallcheck dvi dvi-am html \
++	html-am info info-am install install-am install-data \
++	install-data-am install-dvi install-dvi-am install-exec \
++	install-exec-am install-exec-local install-html \
+ 	install-html-am install-info install-info-am install-man \
+ 	install-pdf install-pdf-am install-ps install-ps-am \
+ 	install-strip install-toolexeclibLIBRARIES installcheck \
+diff --git a/zlib/aclocal.m4 b/zlib/aclocal.m4
+index 3538b0f0aea..e5eed57bd68 100644
+--- a/zlib/aclocal.m4
++++ b/zlib/aclocal.m4
+@@ -1,6 +1,6 @@
+-# generated automatically by aclocal 1.16.5 -*- Autoconf -*-
++# generated automatically by aclocal 1.15.1 -*- Autoconf -*-
+ 
+-# Copyright (C) 1996-2021 Free Software Foundation, Inc.
++# Copyright (C) 1996-2017 Free Software Foundation, Inc.
+ 
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -20,7 +20,7 @@ You have another version of autoconf.  It may work, but is not guaranteed to.
+ If you have problems, you may need to regenerate the build system entirely.
+ To do so, use the procedure documented by the package, typically 'autoreconf'.])])
+ 
+-# Copyright (C) 2002-2021 Free Software Foundation, Inc.
++# Copyright (C) 2002-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -32,10 +32,10 @@ To do so, use the procedure documented by the package, typically 'autoreconf'.])
+ # generated from the m4 files accompanying Automake X.Y.
+ # (This private macro should not be called outside this file.)
+ AC_DEFUN([AM_AUTOMAKE_VERSION],
+-[am__api_version='1.16'
++[am__api_version='1.15'
+ dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to
+ dnl require some minimum version.  Point them to the right macro.
+-m4_if([$1], [1.16.5], [],
++m4_if([$1], [1.15.1], [],
+       [AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl
+ ])
+ 
+@@ -51,14 +51,14 @@ m4_define([_AM_AUTOCONF_VERSION], [])
+ # Call AM_AUTOMAKE_VERSION and AM_AUTOMAKE_VERSION so they can be traced.
+ # This function is AC_REQUIREd by AM_INIT_AUTOMAKE.
+ AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION],
+-[AM_AUTOMAKE_VERSION([1.16.5])dnl
++[AM_AUTOMAKE_VERSION([1.15.1])dnl
+ m4_ifndef([AC_AUTOCONF_VERSION],
+   [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
+ _AM_AUTOCONF_VERSION(m4_defn([AC_AUTOCONF_VERSION]))])
+ 
+ # AM_AUX_DIR_EXPAND                                         -*- Autoconf -*-
+ 
+-# Copyright (C) 2001-2021 Free Software Foundation, Inc.
++# Copyright (C) 2001-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -110,7 +110,7 @@ am_aux_dir=`cd "$ac_aux_dir" && pwd`
+ 
+ # AM_CONDITIONAL                                            -*- Autoconf -*-
+ 
+-# Copyright (C) 1997-2021 Free Software Foundation, Inc.
++# Copyright (C) 1997-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -141,7 +141,7 @@ AC_CONFIG_COMMANDS_PRE(
+ Usually this means the macro was only invoked conditionally.]])
+ fi])])
+ 
+-# Copyright (C) 1999-2021 Free Software Foundation, Inc.
++# Copyright (C) 1999-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -332,12 +332,13 @@ _AM_SUBST_NOTMAKE([am__nodep])dnl
+ 
+ # Generate code to set up dependency tracking.              -*- Autoconf -*-
+ 
+-# Copyright (C) 1999-2021 Free Software Foundation, Inc.
++# Copyright (C) 1999-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+ # with or without modifications, as long as this notice is preserved.
+ 
++
+ # _AM_OUTPUT_DEPENDENCY_COMMANDS
+ # ------------------------------
+ AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
+@@ -345,43 +346,49 @@ AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
+   # Older Autoconf quotes --file arguments for eval, but not when files
+   # are listed without --file.  Let's play safe and only enable the eval
+   # if we detect the quoting.
+-  # TODO: see whether this extra hack can be removed once we start
+-  # requiring Autoconf 2.70 or later.
+-  AS_CASE([$CONFIG_FILES],
+-          [*\'*], [eval set x "$CONFIG_FILES"],
+-          [*], [set x $CONFIG_FILES])
++  case $CONFIG_FILES in
++  *\'*) eval set x "$CONFIG_FILES" ;;
++  *)   set x $CONFIG_FILES ;;
++  esac
+   shift
+-  # Used to flag and report bootstrapping failures.
+-  am_rc=0
+-  for am_mf
++  for mf
+   do
+     # Strip MF so we end up with the name of the file.
+-    am_mf=`AS_ECHO(["$am_mf"]) | sed -e 's/:.*$//'`
+-    # Check whether this is an Automake generated Makefile which includes
+-    # dependency-tracking related rules and includes.
+-    # Grep'ing the whole file directly is not great: AIX grep has a line
++    mf=`echo "$mf" | sed -e 's/:.*$//'`
++    # Check whether this is an Automake generated Makefile or not.
++    # We used to match only the files named 'Makefile.in', but
++    # some people rename them; so instead we look at the file content.
++    # Grep'ing the first line is not enough: some people post-process
++    # each Makefile.in and add a new line on top of each file to say so.
++    # Grep'ing the whole file is not good either: AIX grep has a line
+     # limit of 2048, but all sed's we know have understand at least 4000.
+-    sed -n 's,^am--depfiles:.*,X,p' "$am_mf" | grep X >/dev/null 2>&1 \
+-      || continue
+-    am_dirpart=`AS_DIRNAME(["$am_mf"])`
+-    am_filepart=`AS_BASENAME(["$am_mf"])`
+-    AM_RUN_LOG([cd "$am_dirpart" \
+-      && sed -e '/# am--include-marker/d' "$am_filepart" \
+-        | $MAKE -f - am--depfiles]) || am_rc=$?
++    if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then
++      dirpart=`AS_DIRNAME("$mf")`
++    else
++      continue
++    fi
++    # Extract the definition of DEPDIR, am__include, and am__quote
++    # from the Makefile without running 'make'.
++    DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"`
++    test -z "$DEPDIR" && continue
++    am__include=`sed -n 's/^am__include = //p' < "$mf"`
++    test -z "$am__include" && continue
++    am__quote=`sed -n 's/^am__quote = //p' < "$mf"`
++    # Find all dependency output files, they are included files with
++    # $(DEPDIR) in their names.  We invoke sed twice because it is the
++    # simplest approach to changing $(DEPDIR) to its actual value in the
++    # expansion.
++    for file in `sed -n "
++      s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \
++	 sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g'`; do
++      # Make sure the directory exists.
++      test -f "$dirpart/$file" && continue
++      fdir=`AS_DIRNAME(["$file"])`
++      AS_MKDIR_P([$dirpart/$fdir])
++      # echo "creating $dirpart/$file"
++      echo '# dummy' > "$dirpart/$file"
++    done
+   done
+-  if test $am_rc -ne 0; then
+-    AC_MSG_FAILURE([Something went wrong bootstrapping makefile fragments
+-    for automatic dependency tracking.  If GNU make was not used, consider
+-    re-running the configure script with MAKE="gmake" (or whatever is
+-    necessary).  You can also try re-running configure with the
+-    '--disable-dependency-tracking' option to at least be able to build
+-    the package (albeit without support for automatic dependency tracking).])
+-  fi
+-  AS_UNSET([am_dirpart])
+-  AS_UNSET([am_filepart])
+-  AS_UNSET([am_mf])
+-  AS_UNSET([am_rc])
+-  rm -f conftest-deps.mk
+ }
+ ])# _AM_OUTPUT_DEPENDENCY_COMMANDS
+ 
+@@ -390,17 +397,18 @@ AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
+ # -----------------------------
+ # This macro should only be invoked once -- use via AC_REQUIRE.
+ #
+-# This code is only required when automatic dependency tracking is enabled.
+-# This creates each '.Po' and '.Plo' makefile fragment that we'll need in
+-# order to bootstrap the dependency handling code.
++# This code is only required when automatic dependency tracking
++# is enabled.  FIXME.  This creates each '.P' file that we will
++# need in order to bootstrap the dependency handling code.
+ AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS],
+ [AC_CONFIG_COMMANDS([depfiles],
+      [test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS],
+-     [AMDEP_TRUE="$AMDEP_TRUE" MAKE="${MAKE-make}"])])
++     [AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"])
++])
+ 
+ # Do all the work for Automake.                             -*- Autoconf -*-
+ 
+-# Copyright (C) 1996-2021 Free Software Foundation, Inc.
++# Copyright (C) 1996-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -428,10 +436,6 @@ m4_defn([AC_PROG_CC])
+ # release and drop the old call support.
+ AC_DEFUN([AM_INIT_AUTOMAKE],
+ [AC_PREREQ([2.65])dnl
+-m4_ifdef([_$0_ALREADY_INIT],
+-  [m4_fatal([$0 expanded multiple times
+-]m4_defn([_$0_ALREADY_INIT]))],
+-  [m4_define([_$0_ALREADY_INIT], m4_expansion_stack)])dnl
+ dnl Autoconf wants to disallow AM_ names.  We explicitly allow
+ dnl the ones we care about.
+ m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl
+@@ -468,7 +472,7 @@ m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl
+ [_AM_SET_OPTIONS([$1])dnl
+ dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT.
+ m4_if(
+-  m4_ifset([AC_PACKAGE_NAME], [ok]):m4_ifset([AC_PACKAGE_VERSION], [ok]),
++  m4_ifdef([AC_PACKAGE_NAME], [ok]):m4_ifdef([AC_PACKAGE_VERSION], [ok]),
+   [ok:ok],,
+   [m4_fatal([AC_INIT should be called with package and version arguments])])dnl
+  AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl
+@@ -491,8 +495,8 @@ AC_REQUIRE([AM_PROG_INSTALL_STRIP])dnl
+ AC_REQUIRE([AC_PROG_MKDIR_P])dnl
+ # For better backward compatibility.  To be removed once Automake 1.9.x
+ # dies out for good.  For more background, see:
+-# <https://lists.gnu.org/archive/html/automake/2012-07/msg00001.html>
+-# <https://lists.gnu.org/archive/html/automake/2012-07/msg00014.html>
++# <http://lists.gnu.org/archive/html/automake/2012-07/msg00001.html>
++# <http://lists.gnu.org/archive/html/automake/2012-07/msg00014.html>
+ AC_SUBST([mkdir_p], ['$(MKDIR_P)'])
+ # We need awk for the "check" target (and possibly the TAP driver).  The
+ # system "awk" is bad on some platforms.
+@@ -520,20 +524,6 @@ AC_PROVIDE_IFELSE([AC_PROG_OBJCXX],
+ 		  [m4_define([AC_PROG_OBJCXX],
+ 			     m4_defn([AC_PROG_OBJCXX])[_AM_DEPENDENCIES([OBJCXX])])])dnl
+ ])
+-# Variables for tags utilities; see am/tags.am
+-if test -z "$CTAGS"; then
+-  CTAGS=ctags
+-fi
+-AC_SUBST([CTAGS])
+-if test -z "$ETAGS"; then
+-  ETAGS=etags
+-fi
+-AC_SUBST([ETAGS])
+-if test -z "$CSCOPE"; then
+-  CSCOPE=cscope
+-fi
+-AC_SUBST([CSCOPE])
+-
+ AC_REQUIRE([AM_SILENT_RULES])dnl
+ dnl The testsuite driver may need to know about EXEEXT, so add the
+ dnl 'am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen.  This
+@@ -573,7 +563,7 @@ END
+ Aborting the configuration process, to ensure you take notice of the issue.
+ 
+ You can download and install GNU coreutils to get an 'rm' implementation
+-that behaves properly: <https://www.gnu.org/software/coreutils/>.
++that behaves properly: <http://www.gnu.org/software/coreutils/>.
+ 
+ If you want to complete the configuration process using your problematic
+ 'rm' anyway, export the environment variable ACCEPT_INFERIOR_RM_PROGRAM
+@@ -615,7 +605,7 @@ for _am_header in $config_headers :; do
+ done
+ echo "timestamp for $_am_arg" >`AS_DIRNAME(["$_am_arg"])`/stamp-h[]$_am_stamp_count])
+ 
+-# Copyright (C) 2001-2021 Free Software Foundation, Inc.
++# Copyright (C) 2001-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -639,7 +629,7 @@ AC_SUBST([install_sh])])
+ # Add --enable-maintainer-mode option to configure.         -*- Autoconf -*-
+ # From Jim Meyering
+ 
+-# Copyright (C) 1996-2021 Free Software Foundation, Inc.
++# Copyright (C) 1996-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -674,7 +664,7 @@ AC_MSG_CHECKING([whether to enable maintainer-specific portions of Makefiles])
+ 
+ # Check to see how 'make' treats includes.	            -*- Autoconf -*-
+ 
+-# Copyright (C) 2001-2021 Free Software Foundation, Inc.
++# Copyright (C) 2001-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -682,42 +672,49 @@ AC_MSG_CHECKING([whether to enable maintainer-specific portions of Makefiles])
+ 
+ # AM_MAKE_INCLUDE()
+ # -----------------
+-# Check whether make has an 'include' directive that can support all
+-# the idioms we need for our automatic dependency tracking code.
++# Check to see how make treats includes.
+ AC_DEFUN([AM_MAKE_INCLUDE],
+-[AC_MSG_CHECKING([whether ${MAKE-make} supports the include directive])
+-cat > confinc.mk << 'END'
++[am_make=${MAKE-make}
++cat > confinc << 'END'
+ am__doit:
+-	@echo this is the am__doit target >confinc.out
++	@echo this is the am__doit target
+ .PHONY: am__doit
+ END
++# If we don't find an include directive, just comment out the code.
++AC_MSG_CHECKING([for style of include used by $am_make])
+ am__include="#"
+ am__quote=
+-# BSD make does it like this.
+-echo '.include "confinc.mk" # ignored' > confmf.BSD
+-# Other make implementations (GNU, Solaris 10, AIX) do it like this.
+-echo 'include confinc.mk # ignored' > confmf.GNU
+-_am_result=no
+-for s in GNU BSD; do
+-  AM_RUN_LOG([${MAKE-make} -f confmf.$s && cat confinc.out])
+-  AS_CASE([$?:`cat confinc.out 2>/dev/null`],
+-      ['0:this is the am__doit target'],
+-      [AS_CASE([$s],
+-          [BSD], [am__include='.include' am__quote='"'],
+-          [am__include='include' am__quote=''])])
+-  if test "$am__include" != "#"; then
+-    _am_result="yes ($s style)"
+-    break
+-  fi
+-done
+-rm -f confinc.* confmf.*
+-AC_MSG_RESULT([${_am_result}])
+-AC_SUBST([am__include])])
+-AC_SUBST([am__quote])])
++_am_result=none
++# First try GNU make style include.
++echo "include confinc" > confmf
++# Ignore all kinds of additional output from 'make'.
++case `$am_make -s -f confmf 2> /dev/null` in #(
++*the\ am__doit\ target*)
++  am__include=include
++  am__quote=
++  _am_result=GNU
++  ;;
++esac
++# Now try BSD make style include.
++if test "$am__include" = "#"; then
++   echo '.include "confinc"' > confmf
++   case `$am_make -s -f confmf 2> /dev/null` in #(
++   *the\ am__doit\ target*)
++     am__include=.include
++     am__quote="\""
++     _am_result=BSD
++     ;;
++   esac
++fi
++AC_SUBST([am__include])
++AC_SUBST([am__quote])
++AC_MSG_RESULT([$_am_result])
++rm -f confinc confmf
++])
+ 
+ # Fake the existence of programs that GNU maintainers use.  -*- Autoconf -*-
+ 
+-# Copyright (C) 1997-2021 Free Software Foundation, Inc.
++# Copyright (C) 1997-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -738,7 +735,12 @@ AC_DEFUN([AM_MISSING_HAS_RUN],
+ [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
+ AC_REQUIRE_AUX_FILE([missing])dnl
+ if test x"${MISSING+set}" != xset; then
+-  MISSING="\${SHELL} '$am_aux_dir/missing'"
++  case $am_aux_dir in
++  *\ * | *\	*)
++    MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;;
++  *)
++    MISSING="\${SHELL} $am_aux_dir/missing" ;;
++  esac
+ fi
+ # Use eval to expand $SHELL
+ if eval "$MISSING --is-lightweight"; then
+@@ -751,7 +753,7 @@ fi
+ 
+ # Helper functions for option handling.                     -*- Autoconf -*-
+ 
+-# Copyright (C) 2001-2021 Free Software Foundation, Inc.
++# Copyright (C) 2001-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -780,7 +782,7 @@ AC_DEFUN([_AM_SET_OPTIONS],
+ AC_DEFUN([_AM_IF_OPTION],
+ [m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])])
+ 
+-# Copyright (C) 1999-2021 Free Software Foundation, Inc.
++# Copyright (C) 1999-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -827,7 +829,7 @@ AC_LANG_POP([C])])
+ # For backward compatibility.
+ AC_DEFUN_ONCE([AM_PROG_CC_C_O], [AC_REQUIRE([AC_PROG_CC])])
+ 
+-# Copyright (C) 2001-2021 Free Software Foundation, Inc.
++# Copyright (C) 2001-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -846,7 +848,7 @@ AC_DEFUN([AM_RUN_LOG],
+ 
+ # Check to make sure that the build environment is sane.    -*- Autoconf -*-
+ 
+-# Copyright (C) 1996-2021 Free Software Foundation, Inc.
++# Copyright (C) 1996-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -927,7 +929,7 @@ AC_CONFIG_COMMANDS_PRE(
+ rm -f conftest.file
+ ])
+ 
+-# Copyright (C) 2009-2021 Free Software Foundation, Inc.
++# Copyright (C) 2009-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -987,7 +989,7 @@ AC_SUBST([AM_BACKSLASH])dnl
+ _AM_SUBST_NOTMAKE([AM_BACKSLASH])dnl
+ ])
+ 
+-# Copyright (C) 2001-2021 Free Software Foundation, Inc.
++# Copyright (C) 2001-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -1015,7 +1017,7 @@ fi
+ INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s"
+ AC_SUBST([INSTALL_STRIP_PROGRAM])])
+ 
+-# Copyright (C) 2006-2021 Free Software Foundation, Inc.
++# Copyright (C) 2006-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+@@ -1034,7 +1036,7 @@ AC_DEFUN([AM_SUBST_NOTMAKE], [_AM_SUBST_NOTMAKE($@)])
+ 
+ # Check how to create a tarball.                            -*- Autoconf -*-
+ 
+-# Copyright (C) 2004-2021 Free Software Foundation, Inc.
++# Copyright (C) 2004-2017 Free Software Foundation, Inc.
+ #
+ # This file is free software; the Free Software Foundation
+ # gives unlimited permission to copy and/or distribute it,
+diff --git a/zlib/configure b/zlib/configure
+index 0a9ad9e8ccb..35b44e2819c 100755
+--- a/zlib/configure
++++ b/zlib/configure
+@@ -646,8 +646,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -666,6 +669,7 @@ am__nodep
+ AMDEPBACKSLASH
+ AMDEP_FALSE
+ AMDEP_TRUE
++am__quote
+ am__include
+ DEPDIR
+ OBJEXT
+@@ -683,9 +687,6 @@ AM_BACKSLASH
+ AM_DEFAULT_VERBOSITY
+ AM_DEFAULT_V
+ AM_V
+-CSCOPE
+-ETAGS
+-CTAGS
+ am__untar
+ am__tar
+ AMTAR
+@@ -760,8 +761,7 @@ PACKAGE_VERSION
+ PACKAGE_TARNAME
+ PACKAGE_NAME
+ PATH_SEPARATOR
+-SHELL
+-am__quote'
++SHELL'
+ ac_subst_files=''
+ ac_user_opts='
+ enable_option_checking
+@@ -777,6 +777,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_host_shared
+ '
+@@ -1431,6 +1432,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+ 
+ Some influential environment variables:
+   CC          C compiler command
+@@ -2417,7 +2420,7 @@ test -n "$target_alias" &&
+ mkinstalldirs="`cd $ac_aux_dir && ${PWDCMD-pwd}`/mkinstalldirs"
+ 
+ 
+-am__api_version='1.16'
++am__api_version='1.15'
+ 
+ # Find a good install program.  We prefer a C program (faster),
+ # so one script is as good as another.  But avoid the broken or
+@@ -2593,7 +2596,12 @@ program_transform_name=`$as_echo "$program_transform_name" | sed "$ac_script"`
+ am_aux_dir=`cd "$ac_aux_dir" && pwd`
+ 
+ if test x"${MISSING+set}" != xset; then
+-  MISSING="\${SHELL} '$am_aux_dir/missing'"
++  case $am_aux_dir in
++  *\ * | *\	*)
++    MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;;
++  *)
++    MISSING="\${SHELL} $am_aux_dir/missing" ;;
++  esac
+ fi
+ # Use eval to expand $SHELL
+ if eval "$MISSING --is-lightweight"; then
+@@ -2928,8 +2936,8 @@ MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"}
+ 
+ # For better backward compatibility.  To be removed once Automake 1.9.x
+ # dies out for good.  For more background, see:
+-# <https://lists.gnu.org/archive/html/automake/2012-07/msg00001.html>
+-# <https://lists.gnu.org/archive/html/automake/2012-07/msg00014.html>
++# <http://lists.gnu.org/archive/html/automake/2012-07/msg00001.html>
++# <http://lists.gnu.org/archive/html/automake/2012-07/msg00014.html>
+ mkdir_p='$(MKDIR_P)'
+ 
+ # We need awk for the "check" target (and possibly the TAP driver).  The
+@@ -2948,20 +2956,6 @@ am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -'
+ 
+ 
+ 
+-# Variables for tags utilities; see am/tags.am
+-if test -z "$CTAGS"; then
+-  CTAGS=ctags
+-fi
+-
+-if test -z "$ETAGS"; then
+-  ETAGS=etags
+-fi
+-
+-if test -z "$CSCOPE"; then
+-  CSCOPE=cscope
+-fi
+-
+-
+ 
+ # POSIX will say in a future version that running "rm -f" with no argument
+ # is OK; and we want to be able to make that assumption in our Makefile
+@@ -2994,7 +2988,7 @@ END
+ Aborting the configuration process, to ensure you take notice of the issue.
+ 
+ You can download and install GNU coreutils to get an 'rm' implementation
+-that behaves properly: <https://www.gnu.org/software/coreutils/>.
++that behaves properly: <http://www.gnu.org/software/coreutils/>.
+ 
+ If you want to complete the configuration process using your problematic
+ 'rm' anyway, export the environment variable ACCEPT_INFERIOR_RM_PROGRAM
+@@ -3998,45 +3992,45 @@ DEPDIR="${am__leading_dot}deps"
+ 
+ ac_config_commands="$ac_config_commands depfiles"
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} supports the include directive" >&5
+-$as_echo_n "checking whether ${MAKE-make} supports the include directive... " >&6; }
+-cat > confinc.mk << 'END'
++
++am_make=${MAKE-make}
++cat > confinc << 'END'
+ am__doit:
+-	@echo this is the am__doit target >confinc.out
++	@echo this is the am__doit target
+ .PHONY: am__doit
+ END
++# If we don't find an include directive, just comment out the code.
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for style of include used by $am_make" >&5
++$as_echo_n "checking for style of include used by $am_make... " >&6; }
+ am__include="#"
+ am__quote=
+-# BSD make does it like this.
+-echo '.include "confinc.mk" # ignored' > confmf.BSD
+-# Other make implementations (GNU, Solaris 10, AIX) do it like this.
+-echo 'include confinc.mk # ignored' > confmf.GNU
+-_am_result=no
+-for s in GNU BSD; do
+-  { echo "$as_me:$LINENO: ${MAKE-make} -f confmf.$s && cat confinc.out" >&5
+-   (${MAKE-make} -f confmf.$s && cat confinc.out) >&5 2>&5
+-   ac_status=$?
+-   echo "$as_me:$LINENO: \$? = $ac_status" >&5
+-   (exit $ac_status); }
+-  case $?:`cat confinc.out 2>/dev/null` in #(
+-  '0:this is the am__doit target') :
+-    case $s in #(
+-  BSD) :
+-    am__include='.include' am__quote='"' ;; #(
+-  *) :
+-    am__include='include' am__quote='' ;;
+-esac ;; #(
+-  *) :
+-     ;;
++_am_result=none
++# First try GNU make style include.
++echo "include confinc" > confmf
++# Ignore all kinds of additional output from 'make'.
++case `$am_make -s -f confmf 2> /dev/null` in #(
++*the\ am__doit\ target*)
++  am__include=include
++  am__quote=
++  _am_result=GNU
++  ;;
+ esac
+-  if test "$am__include" != "#"; then
+-    _am_result="yes ($s style)"
+-    break
+-  fi
+-done
+-rm -f confinc.* confmf.*
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: ${_am_result}" >&5
+-$as_echo "${_am_result}" >&6; }
++# Now try BSD make style include.
++if test "$am__include" = "#"; then
++   echo '.include "confinc"' > confmf
++   case `$am_make -s -f confmf 2> /dev/null` in #(
++   *the\ am__doit\ target*)
++     am__include=.include
++     am__quote="\""
++     _am_result=BSD
++     ;;
++   esac
++fi
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $_am_result" >&5
++$as_echo "$_am_result" >&6; }
++rm -f confinc confmf
+ 
+ # Check whether --enable-dependency-tracking was given.
+ if test "${enable_dependency_tracking+set}" = set; then :
+@@ -4198,8 +4192,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -4239,7 +4233,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5062,8 +5056,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5112,6 +5106,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if ${lt_cv_to_host_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if ${lt_cv_to_tool_file_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if ${lt_cv_ld_reload_flag+:} false; then :
+@@ -5128,6 +5196,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -5296,7 +5369,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -5450,6 +5524,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -5465,6 +5554,158 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
++
+ plugin_option=
+ plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
+ for plugin in $plugin_names; do
+@@ -5479,8 +5720,10 @@ for plugin in $plugin_names; do
+ done
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_AR+:} false; then :
+@@ -5496,7 +5739,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -5516,11 +5759,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if ${ac_cv_prog_ac_ct_AR+:} false; then :
+@@ -5536,7 +5783,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -5555,6 +5802,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -5566,25 +5817,22 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-if test -n "$plugin_option"; then
+-  if $AR --help 2>&1 | grep -q "\--plugin"; then
+-    touch conftest.c
+-    $AR $plugin_option rc conftest.a conftest.c
+-    if test "$?" != 0; then
+-      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
++  touch conftest.c
++  $AR $plugin_option rc conftest.a conftest.c
++  if test "$?" != 0; then
++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
+ $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
+-    else
+-      AR="$AR $plugin_option"
+-    fi
+-    rm -f conftest.*
++  else
++    AR="$AR $plugin_option"
+   fi
+-fi
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++  rm -f conftest.*
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
++
++
+ 
+ 
+ 
+@@ -5593,6 +5841,61 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if ${lt_cv_ar_at_file+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
+ 
+ 
+ 
+@@ -5935,8 +6238,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -5972,6 +6275,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6013,6 +6317,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6024,7 +6340,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6050,8 +6366,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6061,8 +6377,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6099,6 +6415,16 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
+ 
+ 
+ 
+@@ -6116,6 +6442,44 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -6324,11 +6688,128 @@ sparc*-*solaris*)
+       ;;
+     esac
+   fi
+-  rm -rf conftest*
+-  ;;
++  rm -rf conftest*
++  ;;
++esac
++
++need_locks="$enable_libtool_lock"
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
+ esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if ${lt_cv_path_mainfest_tool+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
+ 
+-need_locks="$enable_libtool_lock"
+ 
+ 
+   case $host_os in
+@@ -6896,6 +7377,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -7745,8 +8228,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -7912,6 +8393,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -7974,7 +8461,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8031,13 +8518,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if ${lt_cv_prog_compiler_pic+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8098,6 +8589,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8448,7 +8944,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8547,12 +9044,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -8566,8 +9063,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -8585,8 +9082,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8632,8 +9129,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8763,7 +9260,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        if test x$gcc_no_link = xyes; then
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test x$gcc_no_link = xyes; then
+   as_fn_error $? "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
+ fi
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+@@ -8779,22 +9282,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -8806,7 +9316,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 if test x$gcc_no_link = xyes; then
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if ${lt_cv_aix_libpath_+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test x$gcc_no_link = xyes; then
+   as_fn_error $? "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
+ fi
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+@@ -8822,22 +9338,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -8882,20 +9405,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -8956,7 +9522,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -8964,7 +9530,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -8980,7 +9546,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9004,10 +9570,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9086,26 +9652,39 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        if test x$gcc_no_link = xyes; then
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if ${lt_cv_irix_exported_symbol+:} false; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   if test x$gcc_no_link = xyes; then
+   as_fn_error $? "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
+ fi
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9190,7 +9769,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9209,9 +9788,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -9787,8 +10366,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -9821,13 +10401,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10723,7 +11361,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 10726 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -10767,10 +11405,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -10829,7 +11467,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-#line 10832 "configure"
++#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -10873,10 +11511,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -12308,7 +12946,7 @@ CC="$CC"
+ CXX="$CXX"
+ GFORTRAN="$GFORTRAN"
+ GDC="$GDC"
+-AMDEP_TRUE="$AMDEP_TRUE" MAKE="${MAKE-make}"
++AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"
+ 
+ 
+ # The HP-UX ksh and POSIX shell print the target directory to stdout
+@@ -12346,13 +12984,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -12367,14 +13012,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -12407,12 +13055,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -12467,8 +13115,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -12478,12 +13131,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -12499,7 +13154,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -12535,6 +13189,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -13023,35 +13678,29 @@ esac ;;
+   # Older Autoconf quotes --file arguments for eval, but not when files
+   # are listed without --file.  Let's play safe and only enable the eval
+   # if we detect the quoting.
+-  # TODO: see whether this extra hack can be removed once we start
+-  # requiring Autoconf 2.70 or later.
+-  case $CONFIG_FILES in #(
+-  *\'*) :
+-    eval set x "$CONFIG_FILES" ;; #(
+-  *) :
+-    set x $CONFIG_FILES ;; #(
+-  *) :
+-     ;;
+-esac
++  case $CONFIG_FILES in
++  *\'*) eval set x "$CONFIG_FILES" ;;
++  *)   set x $CONFIG_FILES ;;
++  esac
+   shift
+-  # Used to flag and report bootstrapping failures.
+-  am_rc=0
+-  for am_mf
++  for mf
+   do
+     # Strip MF so we end up with the name of the file.
+-    am_mf=`$as_echo "$am_mf" | sed -e 's/:.*$//'`
+-    # Check whether this is an Automake generated Makefile which includes
+-    # dependency-tracking related rules and includes.
+-    # Grep'ing the whole file directly is not great: AIX grep has a line
++    mf=`echo "$mf" | sed -e 's/:.*$//'`
++    # Check whether this is an Automake generated Makefile or not.
++    # We used to match only the files named 'Makefile.in', but
++    # some people rename them; so instead we look at the file content.
++    # Grep'ing the first line is not enough: some people post-process
++    # each Makefile.in and add a new line on top of each file to say so.
++    # Grep'ing the whole file is not good either: AIX grep has a line
+     # limit of 2048, but all sed's we know have understand at least 4000.
+-    sed -n 's,^am--depfiles:.*,X,p' "$am_mf" | grep X >/dev/null 2>&1 \
+-      || continue
+-    am_dirpart=`$as_dirname -- "$am_mf" ||
+-$as_expr X"$am_mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+-	 X"$am_mf" : 'X\(//\)[^/]' \| \
+-	 X"$am_mf" : 'X\(//\)$' \| \
+-	 X"$am_mf" : 'X\(/\)' \| . 2>/dev/null ||
+-$as_echo X"$am_mf" |
++    if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then
++      dirpart=`$as_dirname -- "$mf" ||
++$as_expr X"$mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
++	 X"$mf" : 'X\(//\)[^/]' \| \
++	 X"$mf" : 'X\(//\)$' \| \
++	 X"$mf" : 'X\(/\)' \| . 2>/dev/null ||
++$as_echo X"$mf" |
+     sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ 	    s//\1/
+ 	    q
+@@ -13069,50 +13718,53 @@ $as_echo X"$am_mf" |
+ 	    q
+ 	  }
+ 	  s/.*/./; q'`
+-    am_filepart=`$as_basename -- "$am_mf" ||
+-$as_expr X/"$am_mf" : '.*/\([^/][^/]*\)/*$' \| \
+-	 X"$am_mf" : 'X\(//\)$' \| \
+-	 X"$am_mf" : 'X\(/\)' \| . 2>/dev/null ||
+-$as_echo X/"$am_mf" |
+-    sed '/^.*\/\([^/][^/]*\)\/*$/{
++    else
++      continue
++    fi
++    # Extract the definition of DEPDIR, am__include, and am__quote
++    # from the Makefile without running 'make'.
++    DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"`
++    test -z "$DEPDIR" && continue
++    am__include=`sed -n 's/^am__include = //p' < "$mf"`
++    test -z "$am__include" && continue
++    am__quote=`sed -n 's/^am__quote = //p' < "$mf"`
++    # Find all dependency output files, they are included files with
++    # $(DEPDIR) in their names.  We invoke sed twice because it is the
++    # simplest approach to changing $(DEPDIR) to its actual value in the
++    # expansion.
++    for file in `sed -n "
++      s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \
++	 sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g'`; do
++      # Make sure the directory exists.
++      test -f "$dirpart/$file" && continue
++      fdir=`$as_dirname -- "$file" ||
++$as_expr X"$file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
++	 X"$file" : 'X\(//\)[^/]' \| \
++	 X"$file" : 'X\(//\)$' \| \
++	 X"$file" : 'X\(/\)' \| . 2>/dev/null ||
++$as_echo X"$file" |
++    sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ 	    s//\1/
+ 	    q
+ 	  }
+-	  /^X\/\(\/\/\)$/{
++	  /^X\(\/\/\)[^/].*/{
+ 	    s//\1/
+ 	    q
+ 	  }
+-	  /^X\/\(\/\).*/{
++	  /^X\(\/\/\)$/{
++	    s//\1/
++	    q
++	  }
++	  /^X\(\/\).*/{
+ 	    s//\1/
+ 	    q
+ 	  }
+ 	  s/.*/./; q'`
+-    { echo "$as_me:$LINENO: cd "$am_dirpart" \
+-      && sed -e '/# am--include-marker/d' "$am_filepart" \
+-        | $MAKE -f - am--depfiles" >&5
+-   (cd "$am_dirpart" \
+-      && sed -e '/# am--include-marker/d' "$am_filepart" \
+-        | $MAKE -f - am--depfiles) >&5 2>&5
+-   ac_status=$?
+-   echo "$as_me:$LINENO: \$? = $ac_status" >&5
+-   (exit $ac_status); } || am_rc=$?
++      as_dir=$dirpart/$fdir; as_fn_mkdir_p
++      # echo "creating $dirpart/$file"
++      echo '# dummy' > "$dirpart/$file"
++    done
+   done
+-  if test $am_rc -ne 0; then
+-    { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+-$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+-as_fn_error $? "Something went wrong bootstrapping makefile fragments
+-    for automatic dependency tracking.  If GNU make was not used, consider
+-    re-running the configure script with MAKE=\"gmake\" (or whatever is
+-    necessary).  You can also try re-running configure with the
+-    '--disable-dependency-tracking' option to at least be able to build
+-    the package (albeit without support for automatic dependency tracking).
+-See \`config.log' for more details" "$LINENO" 5; }
+-  fi
+-  { am_dirpart=; unset am_dirpart;}
+-  { am_filepart=; unset am_filepart;}
+-  { am_mf=; unset am_mf;}
+-  { am_rc=; unset am_rc;}
+-  rm -f conftest-deps.mk
+ }
+  ;;
+     "libtool":C)
+@@ -13136,7 +13788,8 @@ See \`config.log' for more details" "$LINENO" 5; }
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -13239,19 +13892,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -13281,6 +13957,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -13290,6 +13972,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -13404,12 +14089,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -13496,9 +14181,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -13514,6 +14196,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -13546,210 +14231,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0008-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch b/poky/meta/recipes-devtools/binutils/binutils/0008-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch
deleted file mode 100644
index 648bdc1..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0008-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From 00ae1ee97ad3ad0624798b28c6bab94a19b3ef39 Mon Sep 17 00:00:00 2001
-From: Zhenhua Luo <zhenhua.luo@nxp.com>
-Date: Sat, 11 Jun 2016 22:08:29 -0500
-Subject: [PATCH] fix the incorrect assembling for ppc wait mnemonic
-
-The wait mnemonic for ppc targets is incorrectly assembled into 0x7c00003c due
-to duplicated address definition with waitasec instruction. The issue causes
-kernel boot calltrace for ppc targets when wait instruction is executed.
-
-Upstream-Status: Pending
-Signed-off-by: Zhenhua Luo <zhenhua.luo@nxp.com>
----
- opcodes/ppc-opc.c | 4 +---
- 1 file changed, 1 insertion(+), 3 deletions(-)
-
-diff --git a/opcodes/ppc-opc.c b/opcodes/ppc-opc.c
-index a424dd924de..406d5b60917 100644
---- a/opcodes/ppc-opc.c
-+++ b/opcodes/ppc-opc.c
-@@ -6378,8 +6378,6 @@ const struct powerpc_opcode powerpc_opcodes[] = {
- {"waitasec",	X(31,30),      XRTRARB_MASK, POWER8,	POWER9,		{0}},
- {"waitrsv",	XWCPL(31,30,1,0),0xffffffff, POWER10,	EXT,		{0}},
- {"pause_short",	XWCPL(31,30,2,0),0xffffffff, POWER10,	EXT,		{0}},
--{"wait",	X(31,30),	XWCPL_MASK,  POWER10,	0,		{WC, PL}},
--{"wait",	X(31,30),	XWC_MASK,    POWER9,	POWER10,	{WC}},
- 
- {"lwepx",	X(31,31),	X_MASK,	  E500MC|PPCA2, 0,		{RT, RA0, RB}},
- 
-@@ -6433,7 +6431,7 @@ const struct powerpc_opcode powerpc_opcodes[] = {
- 
- {"waitrsv",	X(31,62)|(1<<21), 0xffffffff, E500MC|PPCA2, EXT,	{0}},
- {"waitimpl",	X(31,62)|(2<<21), 0xffffffff, E500MC|PPCA2, EXT,	{0}},
--{"wait",	X(31,62),	XWC_MASK,    E500MC|PPCA2, 0,		{WC}},
-+{"wait",	X(31,62),	XWC_MASK,    E500MC|PPCA2|POWER9|POWER10, 0,	{WC}},
- 
- {"dcbstep",	XRT(31,63,0),	XRT_MASK,    E500MC|PPCA2, 0,		{RA0, RB}},
- 
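
The 0x7c00003c value in the removed patch's description is simply the X(31,30) primary/extended opcode encoding; a quick illustrative check of the collision with the waitasec entry (shell arithmetic only, not part of the patch):

    # X(op,xop) in opcodes/ppc-opc.c expands, roughly, to (op << 26) | (xop << 1)
    printf '0x%08x\n' $(( (31 << 26) | (30 << 1) ))   # 0x7c00003c, the word the waitasec entry also claims
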
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0009-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch b/poky/meta/recipes-devtools/binutils/binutils/0009-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch
new file mode 100644
index 0000000..2c4ffec
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils/0009-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch
@@ -0,0 +1,49 @@
+From 98410efc334e31ccfbdc0080fb293b0e06885454 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 2 Mar 2015 01:42:38 +0000
+Subject: [PATCH] Fix rpath in libtool when sysroot is enabled
+
+Enabling sysroot support in libtool exposed a bug where the final
+library had an RPATH encoded into it which still pointed to the
+sysroot. This works around the issue until it gets sorted out
+upstream.
+
+Fix suggested by Richard Purdie <richard.purdie@linuxfoundation.org>
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Scott Garman <scott.a.garman@intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ltmain.sh | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/ltmain.sh b/ltmain.sh
+index 70e856e0659..11ee684cccf 100644
+--- a/ltmain.sh
++++ b/ltmain.sh
+@@ -8035,9 +8035,11 @@ EOF
+ 	  test "$opt_mode" != relink && rpath="$compile_rpath$rpath"
+ 	  for libdir in $rpath; do
+ 	    if test -n "$hardcode_libdir_flag_spec"; then
++		  func_replace_sysroot "$libdir"
++		  libdir=$func_replace_sysroot_result
++		  func_stripname '=' '' "$libdir"
++		  libdir=$func_stripname_result
+ 	      if test -n "$hardcode_libdir_separator"; then
+-		func_replace_sysroot "$libdir"
+-		libdir=$func_replace_sysroot_result
+ 		if test -z "$hardcode_libdirs"; then
+ 		  hardcode_libdirs="$libdir"
+ 		else
+@@ -8770,6 +8772,10 @@ EOF
+       hardcode_libdirs=
+       for libdir in $compile_rpath $finalize_rpath; do
+ 	if test -n "$hardcode_libdir_flag_spec"; then
++	  func_replace_sysroot "$libdir"
++	  libdir=$func_replace_sysroot_result
++	  func_stripname '=' '' "$libdir"
++	  libdir=$func_stripname_result
+ 	  if test -n "$hardcode_libdir_separator"; then
+ 	    if test -z "$hardcode_libdirs"; then
+ 	      hardcode_libdirs="$libdir"
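
What the added func_replace_sysroot/func_stripname calls accomplish, as a rough standalone sketch (the sysroot path below is invented; the real helpers live in ltmain.sh): each rpath entry that still points into the sysroot is rewritten to the corresponding target path before it is baked into the binary's RPATH.

    lt_sysroot=/build/tmp/sysroots/qemux86-64         # example value only
    libdir=$lt_sysroot/usr/lib

    # func_replace_sysroot: turn a sysroot-prefixed path into libtool's '=' form
    case $libdir in
      "$lt_sysroot"*) libdir="=${libdir#"$lt_sysroot"}" ;;
    esac                                              # libdir is now =/usr/lib

    # func_stripname '=' '': drop the leading '=' marker
    libdir=${libdir#=}                                 # libdir is now /usr/lib
    echo "$libdir"                                     # this is what ends up in -rpath
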
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0009-Use-libtool-2.4.patch b/poky/meta/recipes-devtools/binutils/binutils/0009-Use-libtool-2.4.patch
deleted file mode 100644
index 9f0209e..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0009-Use-libtool-2.4.patch
+++ /dev/null
@@ -1,25302 +0,0 @@
-From 9a0dea4d2f1f0f2c71f519e6195ef9cfacd9fda9 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sun, 14 Feb 2016 17:04:07 +0000
-Subject: [PATCH] Use libtool 2.4
-
-get libtool sysroot support
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- bfd/configure          | 1333 +++++++++++++-----
- bfd/configure.ac       |    2 +-
- binutils/configure     | 1331 +++++++++++++-----
- gas/configure          | 1331 +++++++++++++-----
- gprof/configure        | 1331 +++++++++++++-----
- ld/configure           | 1704 +++++++++++++++++------
- libbacktrace/configure | 1534 +++++++++++++++------
- libctf/configure       | 1330 +++++++++++++-----
- libtool.m4             | 1093 ++++++++++-----
- ltmain.sh              | 2925 +++++++++++++++++++++++++++-------------
- ltoptions.m4           |    2 +-
- ltversion.m4           |   12 +-
- lt~obsolete.m4         |    2 +-
- opcodes/configure      | 1331 +++++++++++++-----
- zlib/configure         | 1331 +++++++++++++-----
- 15 files changed, 12067 insertions(+), 4525 deletions(-)
-
-diff --git a/bfd/configure b/bfd/configure
-index b23c9eebfd7..fb25d046cd2 100755
---- a/bfd/configure
-+++ b/bfd/configure
-@@ -707,6 +707,9 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
-+ac_ct_AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -825,6 +828,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -1509,6 +1513,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
-   --with-mmap             try using mmap for BFD input files if available
-   --with-separate-debug-dir=DIR
-                           Look for global separate debug info in DIR
-@@ -5029,8 +5035,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -5070,7 +5076,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5757,8 +5763,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5807,6 +5813,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if ${lt_cv_to_host_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if ${lt_cv_to_tool_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if ${lt_cv_ld_reload_flag+:} false; then :
-@@ -5823,6 +5903,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -5991,7 +6076,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6145,6 +6231,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6160,6 +6261,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- plugin_option=
- plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
- for plugin in $plugin_names; do
-@@ -6174,8 +6426,10 @@ for plugin in $plugin_names; do
- done
- 
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_AR+:} false; then :
-@@ -6191,7 +6445,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6211,11 +6465,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_ac_ct_AR+:} false; then :
-@@ -6231,7 +6489,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6250,6 +6508,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6261,25 +6523,20 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--if test -n "$plugin_option"; then
--  if $AR --help 2>&1 | grep -q "\--plugin"; then
--    touch conftest.c
--    $AR $plugin_option rc conftest.a conftest.c
--    if test "$?" != 0; then
--      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
- $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
--    else
--      AR="$AR $plugin_option"
--    fi
--    rm -f conftest.*
-+  else
-+    AR="$AR $plugin_option"
-   fi
--fi
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+
- 
- 
- 
-@@ -6290,6 +6547,63 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if ${lt_cv_ar_at_file+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
-+
- 
- if test -n "$ac_tool_prefix"; then
-   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
-@@ -6630,8 +6944,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6667,6 +6981,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -6708,6 +7023,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6719,7 +7046,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6745,8 +7072,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6756,8 +7083,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6794,6 +7121,14 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
- 
- 
- 
-@@ -6812,6 +7147,47 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
- 
- 
- 
-@@ -7021,6 +7397,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if ${lt_cv_path_mainfest_tool+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7584,6 +8077,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -8135,8 +8630,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8302,6 +8795,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8364,7 +8863,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8421,13 +8920,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8488,6 +8991,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8838,7 +9346,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8937,12 +9446,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -8956,8 +9465,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -8975,8 +9484,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9022,8 +9531,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9153,7 +9662,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9166,22 +9681,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9193,7 +9715,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9206,22 +9734,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9266,20 +9801,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9340,7 +9918,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9348,7 +9926,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9364,7 +9942,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9388,10 +9966,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9470,23 +10048,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if ${lt_cv_irix_exported_symbol+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9571,7 +10162,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9590,9 +10181,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10168,8 +10759,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10202,13 +10794,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -11086,7 +11736,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11089 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11130,10 +11780,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11192,7 +11842,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11195 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11236,10 +11886,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -13224,7 +13874,7 @@ SHARED_LDFLAGS=
- if test "$enable_shared" = "yes"; then
-   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
-   if test -n "$x"; then
--    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
-+    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
-   fi
- fi
- 
-@@ -15879,13 +16529,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -15900,14 +16557,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -15940,12 +16600,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -16000,8 +16660,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -16011,12 +16676,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -16032,7 +16699,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -16068,6 +16734,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -16837,7 +17504,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -16940,19 +17608,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -16982,6 +17673,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -16991,6 +17688,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -17105,12 +17805,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -17197,9 +17897,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -17215,6 +17912,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -17247,210 +17947,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/bfd/configure.ac b/bfd/configure.ac
-index a9078965c40..22b5b7ea567 100644
---- a/bfd/configure.ac
-+++ b/bfd/configure.ac
-@@ -303,7 +303,7 @@ changequote(,)dnl
-   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
- changequote([,])dnl
-   if test -n "$x"; then
--    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
-+    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
-   fi
- fi
- 
-diff --git a/binutils/configure b/binutils/configure
-index 8cde216cb1f..15f3f4eb874 100755
---- a/binutils/configure
-+++ b/binutils/configure
-@@ -696,8 +696,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -814,6 +817,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -1509,6 +1513,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
-   --with-debuginfod       Enable debuginfo lookups with debuginfod
-                           (auto/yes/no)
-   --with-system-zlib      use installed libz
-@@ -4883,8 +4889,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -4924,7 +4930,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5611,8 +5617,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5661,6 +5667,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if ${lt_cv_to_host_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if ${lt_cv_to_tool_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if ${lt_cv_ld_reload_flag+:} false; then :
-@@ -5677,6 +5757,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -5845,7 +5930,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -5999,6 +6085,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6014,6 +6115,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- plugin_option=
- plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
- for plugin in $plugin_names; do
-@@ -6028,8 +6280,10 @@ for plugin in $plugin_names; do
- done
- 
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_AR+:} false; then :
-@@ -6045,7 +6299,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6065,11 +6319,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_ac_ct_AR+:} false; then :
-@@ -6085,7 +6343,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6104,6 +6362,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6115,29 +6377,81 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--if test -n "$plugin_option"; then
--  if $AR --help 2>&1 | grep -q "\--plugin"; then
--    touch conftest.c
--    $AR $plugin_option rc conftest.a conftest.c
--    if test "$?" != 0; then
--      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
- $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
--    else
--      AR="$AR $plugin_option"
--    fi
--    rm -f conftest.*
-+  else
-+    AR="$AR $plugin_option"
-   fi
--fi
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if ${lt_cv_ar_at_file+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
- 
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
- 
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
- 
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
- 
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
- 
- 
- 
-@@ -6484,8 +6798,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6521,6 +6835,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -6562,6 +6877,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6573,7 +6900,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6599,8 +6926,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6610,8 +6937,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6648,6 +6975,19 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -6664,6 +7004,42 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
- 
- 
- 
-@@ -6875,6 +7251,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if ${lt_cv_path_mainfest_tool+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7438,6 +7931,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -8020,8 +8515,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8187,6 +8680,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8249,7 +8748,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8306,13 +8805,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8373,6 +8876,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8723,7 +9231,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8822,12 +9331,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -8841,8 +9350,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -8860,8 +9369,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8907,8 +9416,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9038,7 +9547,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9051,22 +9566,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9078,7 +9600,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9091,22 +9619,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9151,20 +9686,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9225,7 +9803,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9233,7 +9811,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9249,7 +9827,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9273,10 +9851,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9355,23 +9933,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if ${lt_cv_irix_exported_symbol+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9456,7 +10047,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9475,9 +10066,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10053,8 +10644,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10087,13 +10679,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10971,7 +11621,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 10974 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11015,10 +11665,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11077,7 +11727,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11080 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11121,10 +11771,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -15505,13 +16155,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -15526,14 +16183,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -15566,12 +16226,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -15626,8 +16286,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -15637,12 +16302,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -15658,7 +16325,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -15694,6 +16360,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -16459,7 +17126,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -16562,19 +17230,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -16604,6 +17295,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -16613,6 +17310,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -16727,12 +17427,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -16819,9 +17519,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -16837,6 +17534,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -16869,210 +17569,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/gas/configure b/gas/configure
-index dc6a6682aa4..10364bd81da 100755
---- a/gas/configure
-+++ b/gas/configure
-@@ -681,8 +681,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -799,6 +802,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -1490,6 +1494,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
-   --with-cpu=CPU          default cpu variant is CPU (currently only supported
-                           on ARC)
-   --with-system-zlib      use installed libz
-@@ -4608,8 +4614,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -4649,7 +4655,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5336,8 +5342,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5386,6 +5392,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if ${lt_cv_to_host_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if ${lt_cv_to_tool_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if ${lt_cv_ld_reload_flag+:} false; then :
-@@ -5402,6 +5482,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -5570,7 +5655,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -5724,6 +5810,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -5739,6 +5840,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- plugin_option=
- plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
- for plugin in $plugin_names; do
-@@ -5753,8 +6005,10 @@ for plugin in $plugin_names; do
- done
- 
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_AR+:} false; then :
-@@ -5770,7 +6024,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -5790,11 +6044,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_ac_ct_AR+:} false; then :
-@@ -5810,7 +6068,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -5829,6 +6087,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -5840,29 +6102,81 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--if test -n "$plugin_option"; then
--  if $AR --help 2>&1 | grep -q "\--plugin"; then
--    touch conftest.c
--    $AR $plugin_option rc conftest.a conftest.c
--    if test "$?" != 0; then
--      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
- $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
--    else
--      AR="$AR $plugin_option"
--    fi
--    rm -f conftest.*
-+  else
-+    AR="$AR $plugin_option"
-   fi
--fi
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if ${lt_cv_ar_at_file+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
- 
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
- 
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
- 
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
- 
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
- 
- 
- 
-@@ -6209,8 +6523,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6246,6 +6560,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -6287,6 +6602,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6298,7 +6625,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6324,8 +6651,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6335,8 +6662,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6373,6 +6700,19 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -6389,6 +6729,42 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
- 
- 
- 
-@@ -6600,6 +6976,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if ${lt_cv_path_mainfest_tool+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7163,6 +7656,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7745,8 +8240,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -7912,6 +8405,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -7974,7 +8473,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8031,13 +8530,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8098,6 +8601,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8448,7 +8956,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8547,12 +9056,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -8566,8 +9075,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -8585,8 +9094,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8632,8 +9141,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8763,7 +9272,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -8776,22 +9291,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -8803,7 +9325,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -8816,22 +9344,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -8876,20 +9411,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -8950,7 +9528,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -8958,7 +9536,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -8974,7 +9552,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -8998,10 +9576,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9080,23 +9658,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if ${lt_cv_irix_exported_symbol+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9181,7 +9772,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9200,9 +9791,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -9778,8 +10369,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -9812,13 +10404,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10696,7 +11346,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 10699 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -10740,10 +11390,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -10802,7 +11452,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 10805 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -10846,10 +11496,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -14832,13 +15482,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -14853,14 +15510,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -14893,12 +15553,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -14953,8 +15613,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -14964,12 +15629,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -14985,7 +15652,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -15021,6 +15687,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -15793,7 +16460,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -15896,19 +16564,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -15938,6 +16629,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -15947,6 +16644,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -16061,12 +16761,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -16153,9 +16853,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -16171,6 +16868,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -16203,210 +16903,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/gprof/configure b/gprof/configure
-index a7f788f0411..e7703613024 100755
---- a/gprof/configure
-+++ b/gprof/configure
-@@ -663,8 +663,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -781,6 +784,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -1443,6 +1447,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
- 
- Some influential environment variables:
-   CC          C compiler command
-@@ -4510,8 +4516,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -4551,7 +4557,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5238,8 +5244,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5288,6 +5294,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if ${lt_cv_to_host_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if ${lt_cv_to_tool_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if ${lt_cv_ld_reload_flag+:} false; then :
-@@ -5304,6 +5384,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -5472,7 +5557,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -5626,6 +5712,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -5641,6 +5742,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- plugin_option=
- plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
- for plugin in $plugin_names; do
-@@ -5655,8 +5907,10 @@ for plugin in $plugin_names; do
- done
- 
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_AR+:} false; then :
-@@ -5672,7 +5926,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -5692,11 +5946,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_ac_ct_AR+:} false; then :
-@@ -5712,7 +5970,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -5731,6 +5989,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -5742,25 +6004,19 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--if test -n "$plugin_option"; then
--  if $AR --help 2>&1 | grep -q "\--plugin"; then
--    touch conftest.c
--    $AR $plugin_option rc conftest.a conftest.c
--    if test "$?" != 0; then
--      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
- $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
--    else
--      AR="$AR $plugin_option"
--    fi
--    rm -f conftest.*
-+  else
-+    AR="$AR $plugin_option"
-   fi
--fi
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
- 
- 
- 
-@@ -5772,6 +6028,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if ${lt_cv_ar_at_file+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
-   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
- set dummy ${ac_tool_prefix}strip; ac_word=$2
-@@ -6111,8 +6425,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6148,6 +6462,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -6189,6 +6504,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6200,7 +6527,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6226,8 +6553,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6237,8 +6564,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6275,6 +6602,18 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -6291,6 +6630,43 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
- 
- 
- 
-@@ -6502,6 +6878,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if ${lt_cv_path_mainfest_tool+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7065,6 +7558,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7647,8 +8142,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -7814,6 +8307,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -7876,7 +8375,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -7933,13 +8432,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8000,6 +8503,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8350,7 +8858,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8449,12 +8958,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -8468,8 +8977,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -8487,8 +8996,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8534,8 +9043,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8665,7 +9174,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -8678,22 +9193,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -8705,7 +9227,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -8718,22 +9246,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -8778,20 +9313,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -8852,7 +9430,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -8860,7 +9438,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -8876,7 +9454,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -8900,10 +9478,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -8982,23 +9560,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if ${lt_cv_irix_exported_symbol+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9083,7 +9674,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9102,9 +9693,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -9680,8 +10271,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -9714,13 +10306,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10598,7 +11248,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 10601 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -10642,10 +11292,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -10704,7 +11354,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 10707 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -10748,10 +11398,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -12771,13 +13421,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -12792,14 +13449,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -12832,12 +13492,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -12892,8 +13552,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -12903,12 +13568,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -12924,7 +13591,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -12960,6 +13626,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -13725,7 +14392,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -13828,19 +14496,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -13870,6 +14561,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -13879,6 +14576,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -13993,12 +14693,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -14085,9 +14785,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -14103,6 +14800,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -14135,210 +14835,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/ld/configure b/ld/configure
-index 1f9ec8ec580..4a35108ce7c 100755
---- a/ld/configure
-+++ b/ld/configure
-@@ -695,8 +695,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -823,6 +826,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -1530,6 +1534,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
-   --with-lib-path=dir1:dir2...  set default LIB_PATH
-   --with-sysroot=DIR Search for usr/lib et al within DIR.
-   --with-system-zlib      use installed libz
-@@ -5368,8 +5374,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -5409,7 +5415,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -6096,8 +6102,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -6146,6 +6152,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if ${lt_cv_to_host_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if ${lt_cv_to_tool_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if ${lt_cv_ld_reload_flag+:} false; then :
-@@ -6162,6 +6242,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -6330,7 +6415,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6484,6 +6570,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6499,6 +6600,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- plugin_option=
- plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
- for plugin in $plugin_names; do
-@@ -6513,8 +6765,10 @@ for plugin in $plugin_names; do
- done
- 
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_AR+:} false; then :
-@@ -6530,7 +6784,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6550,11 +6804,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_ac_ct_AR+:} false; then :
-@@ -6570,7 +6828,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6589,6 +6847,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6600,25 +6862,19 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--if test -n "$plugin_option"; then
--  if $AR --help 2>&1 | grep -q "\--plugin"; then
--    touch conftest.c
--    $AR $plugin_option rc conftest.a conftest.c
--    if test "$?" != 0; then
--      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
- $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
--    else
--      AR="$AR $plugin_option"
--    fi
--    rm -f conftest.*
-+  else
-+    AR="$AR $plugin_option"
-   fi
--fi
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
- 
- 
- 
-@@ -6630,6 +6886,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if ${lt_cv_ar_at_file+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
-   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
- set dummy ${ac_tool_prefix}strip; ac_word=$2
-@@ -6969,8 +7283,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -7006,6 +7320,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -7047,6 +7362,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -7058,7 +7385,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -7084,8 +7411,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -7095,8 +7422,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -7133,6 +7460,17 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -7149,6 +7487,44 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
- 
- 
- 
-@@ -7360,6 +7736,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if ${lt_cv_path_mainfest_tool+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7923,6 +8416,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7991,6 +8486,16 @@ done
- 
- 
- 
-+func_stripname_cnf ()
-+{
-+  case ${2} in
-+  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
-+  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
-+  esac
-+} # func_stripname_cnf
-+
-+
-+
- 
- 
- # Set options
-@@ -8506,8 +9011,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8673,6 +9176,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8735,7 +9244,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8792,13 +9301,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8859,6 +9372,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -9209,7 +9727,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -9308,12 +9827,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -9327,8 +9846,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -9346,8 +9865,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9393,8 +9912,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9524,7 +10043,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9537,22 +10062,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9564,7 +10096,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9577,22 +10115,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9636,21 +10181,64 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # When not using gcc, we currently assume that we are using
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
--      # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      # no search path for DLLs.
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9711,7 +10299,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9719,7 +10307,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9735,7 +10323,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9759,10 +10347,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9841,23 +10429,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if ${lt_cv_irix_exported_symbol+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9942,7 +10543,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9961,9 +10562,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10539,8 +11140,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10573,13 +11175,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -11457,7 +12117,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11457 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11501,10 +12161,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11563,7 +12223,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11563 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11607,10 +12267,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -12002,6 +12662,7 @@ $RM -r conftest*
- 
-   # Allow CC to be a program name with arguments.
-   lt_save_CC=$CC
-+  lt_save_CFLAGS=$CFLAGS
-   lt_save_LD=$LD
-   lt_save_GCC=$GCC
-   GCC=$GXX
-@@ -12019,6 +12680,7 @@ $RM -r conftest*
-   fi
-   test -z "${LDCXX+set}" || LD=$LDCXX
-   CC=${CXX-"c++"}
-+  CFLAGS=$CXXFLAGS
-   compiler=$CC
-   compiler_CXX=$CC
-   for cc_temp in $compiler""; do
-@@ -12301,7 +12963,13 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
-           allow_undefined_flag_CXX='-berok'
-           # Determine the default libpath from the value encoded in an empty
-           # executable.
--          cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+          if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath__CXX+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -12314,22 +12982,29 @@ main ()
- _ACEOF
- if ac_fn_cxx_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath__CXX"; then
-+    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath__CXX"; then
-+    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath__CXX
-+fi
- 
-           hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
- 
-@@ -12342,7 +13017,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-           else
- 	    # Determine the default libpath from the value encoded in an
- 	    # empty executable.
--	    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	    if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath__CXX+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -12355,22 +13036,29 @@ main ()
- _ACEOF
- if ac_fn_cxx_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath__CXX"; then
-+    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath__CXX"; then
-+    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath__CXX
-+fi
- 
- 	    hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	    # Warning - without using the other run time loading flags,
-@@ -12413,29 +13101,75 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-         ;;
- 
-       cygwin* | mingw* | pw32* | cegcc*)
--        # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
--        # as there is no search path for DLLs.
--        hardcode_libdir_flag_spec_CXX='-L$libdir'
--        export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
--        allow_undefined_flag_CXX=unsupported
--        always_export_symbols_CXX=no
--        enable_shared_with_static_runtimes_CXX=yes
--
--        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
--          archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
--          # If the export-symbols file already is a .def file (1st line
--          # is EXPORTS), use it as is; otherwise, prepend...
--          archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
--	    cp $export_symbols $output_objdir/$soname.def;
--          else
--	    echo EXPORTS > $output_objdir/$soname.def;
--	    cat $export_symbols >> $output_objdir/$soname.def;
--          fi~
--          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
--        else
--          ld_shlibs_CXX=no
--        fi
--        ;;
-+	case $GXX,$cc_basename in
-+	,cl* | no,cl*)
-+	  # Native MSVC
-+	  # hardcode_libdir_flag_spec is actually meaningless, as there is
-+	  # no search path for DLLs.
-+	  hardcode_libdir_flag_spec_CXX=' '
-+	  allow_undefined_flag_CXX=unsupported
-+	  always_export_symbols_CXX=yes
-+	  file_list_spec_CXX='@'
-+	  # Tell ltmain to make .lib files, not .a files.
-+	  libext=lib
-+	  # Tell ltmain to make .dll files, not .so files.
-+	  shrext_cmds=".dll"
-+	  # FIXME: Setting linknames here is a bad hack.
-+	  archive_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	  archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	    else
-+	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	    fi~
-+	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	    linknames='
-+	  # The linker will not automatically build a static lib if we build a DLL.
-+	  # _LT_TAGVAR(old_archive_from_new_cmds, CXX)='true'
-+	  enable_shared_with_static_runtimes_CXX=yes
-+	  # Don't use ranlib
-+	  old_postinstall_cmds_CXX='chmod 644 $oldlib'
-+	  postlink_cmds_CXX='lt_outputfile="@OUTPUT@"~
-+	    lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	    case $lt_outputfile in
-+	      *.exe|*.EXE) ;;
-+	      *)
-+		lt_outputfile="$lt_outputfile.exe"
-+		lt_tool_outputfile="$lt_tool_outputfile.exe"
-+		;;
-+	    esac~
-+	    func_to_tool_file "$lt_outputfile"~
-+	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	      $RM "$lt_outputfile.manifest";
-+	    fi'
-+	  ;;
-+	*)
-+	  # g++
-+	  # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
-+	  # as there is no search path for DLLs.
-+	  hardcode_libdir_flag_spec_CXX='-L$libdir'
-+	  export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
-+	  allow_undefined_flag_CXX=unsupported
-+	  always_export_symbols_CXX=no
-+	  enable_shared_with_static_runtimes_CXX=yes
-+
-+	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-+	    archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-+	    # If the export-symbols file already is a .def file (1st line
-+	    # is EXPORTS), use it as is; otherwise, prepend...
-+	    archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	      cp $export_symbols $output_objdir/$soname.def;
-+	    else
-+	      echo EXPORTS > $output_objdir/$soname.def;
-+	      cat $export_symbols >> $output_objdir/$soname.def;
-+	    fi~
-+	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-+	  else
-+	    ld_shlibs_CXX=no
-+	  fi
-+	  ;;
-+	esac
-+	;;
-       darwin* | rhapsody*)
- 
- 
-@@ -12541,7 +13275,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-             ;;
-           *)
-             if test "$GXX" = yes; then
--              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-             else
-               # FIXME: insert proper C++ library support
-               ld_shlibs_CXX=no
-@@ -12612,10 +13346,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	          ia64*)
--	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
-+	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	          *)
--	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
-+	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	        esac
- 	      fi
-@@ -12656,9 +13390,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-           *)
- 	    if test "$GXX" = yes; then
- 	      if test "$with_gnu_ld" = no; then
--	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	      else
--	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
-+	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
- 	      fi
- 	    fi
- 	    link_all_deplibs_CXX=yes
-@@ -12728,20 +13462,20 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	      prelink_cmds_CXX='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
--		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
-+		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
- 	      old_archive_cmds_CXX='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
--		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
-+		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
- 		$RANLIB $oldlib'
- 	      archive_cmds_CXX='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
--		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
-+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
- 	      archive_expsym_cmds_CXX='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
--		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
-+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
- 	      ;;
- 	    *) # Version 6 and above use weak symbols
- 	      archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
-@@ -12936,7 +13670,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 		  ;;
- 	        *)
--	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	          archive_cmds_CXX='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 		  ;;
- 	      esac
- 
-@@ -12982,7 +13716,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-       solaris*)
-         case $cc_basename in
--          CC*)
-+          CC* | sunCC*)
- 	    # Sun C++ 4.2, 5.x and Centerline C++
-             archive_cmds_need_lc_CXX=yes
- 	    no_undefined_flag_CXX=' -zdefs'
-@@ -13023,9 +13757,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
- 	      no_undefined_flag_CXX=' ${wl}-z ${wl}defs'
- 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
--	        archive_cmds_CXX='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
-+	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
- 	        archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
-+		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
- 
- 	        # Commands to make compiler produce verbose output that lists
- 	        # what "hidden" libraries, object files and flags are used when
-@@ -13160,6 +13894,13 @@ private:
- };
- _LT_EOF
- 
-+
-+_lt_libdeps_save_CFLAGS=$CFLAGS
-+case "$CC $CFLAGS " in #(
-+*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
-+*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
-+esac
-+
- if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
-   (eval $ac_compile) 2>&5
-   ac_status=$?
-@@ -13173,7 +13914,7 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
-   pre_test_object_deps_done=no
- 
-   for p in `eval "$output_verbose_link_cmd"`; do
--    case $p in
-+    case ${prev}${p} in
- 
-     -L* | -R* | -l*)
-        # Some compilers place space between "-{L,R}" and the path.
-@@ -13182,13 +13923,22 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
-           test $p = "-R"; then
- 	 prev=$p
- 	 continue
--       else
--	 prev=
-        fi
- 
-+       # Expand the sysroot to ease extracting the directories later.
-+       if test -z "$prev"; then
-+         case $p in
-+         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
-+         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
-+         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
-+         esac
-+       fi
-+       case $p in
-+       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
-+       esac
-        if test "$pre_test_object_deps_done" = no; then
--	 case $p in
--	 -L* | -R*)
-+	 case ${prev} in
-+	 -L | -R)
- 	   # Internal compiler library paths should come after those
- 	   # provided the user.  The postdeps already come after the
- 	   # user supplied libs so there is no need to process them.
-@@ -13208,8 +13958,10 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
- 	   postdeps_CXX="${postdeps_CXX} ${prev}${p}"
- 	 fi
-        fi
-+       prev=
-        ;;
- 
-+    *.lto.$objext) ;; # Ignore GCC LTO objects
-     *.$objext)
-        # This assumes that the test object file only shows up
-        # once in the compiler output.
-@@ -13245,6 +13997,7 @@ else
- fi
- 
- $RM -f confest.$objext
-+CFLAGS=$_lt_libdeps_save_CFLAGS
- 
- # PORTME: override above test on systems where it is broken
- case $host_os in
-@@ -13280,7 +14033,7 @@ linux*)
- 
- solaris*)
-   case $cc_basename in
--  CC*)
-+  CC* | sunCC*)
-     # The more standards-conforming stlport4 library is
-     # incompatible with the Cstd library. Avoid specifying
-     # it if it's in CXXFLAGS. Ignore libCrun as
-@@ -13345,8 +14098,6 @@ fi
- lt_prog_compiler_pic_CXX=
- lt_prog_compiler_static_CXX=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   # C++ specific cases for pic, static, wl, etc.
-   if test "$GXX" = yes; then
-@@ -13451,6 +14202,11 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	  ;;
- 	esac
- 	;;
-+      mingw* | cygwin* | os2* | pw32* | cegcc*)
-+	# This hack is so that the source file can tell whether it is being
-+	# built for inclusion in a dll (and should export symbols for example).
-+	lt_prog_compiler_pic_CXX='-DDLL_EXPORT'
-+	;;
-       dgux*)
- 	case $cc_basename in
- 	  ec++*)
-@@ -13603,7 +14359,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	;;
-       solaris*)
- 	case $cc_basename in
--	  CC*)
-+	  CC* | sunCC*)
- 	    # Sun C++ 4.2, 5.x and Centerline C++
- 	    lt_prog_compiler_pic_CXX='-KPIC'
- 	    lt_prog_compiler_static_CXX='-Bstatic'
-@@ -13668,10 +14424,17 @@ case $host_os in
-     lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic_CXX" >&5
--$as_echo "$lt_prog_compiler_pic_CXX" >&6; }
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic_CXX+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic_CXX=$lt_prog_compiler_pic_CXX
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_CXX" >&5
-+$as_echo "$lt_cv_prog_compiler_pic_CXX" >&6; }
-+lt_prog_compiler_pic_CXX=$lt_cv_prog_compiler_pic_CXX
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -13729,6 +14492,8 @@ fi
- 
- 
- 
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -13906,6 +14671,7 @@ fi
- $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; }
- 
-   export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
-+  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
-   case $host_os in
-   aix[4-9]*)
-     # If we're using GNU nm, then we don't want the "-C" option.
-@@ -13920,15 +14686,20 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
-     ;;
-   pw32*)
-     export_symbols_cmds_CXX="$ltdll_cmds"
--  ;;
-+    ;;
-   cygwin* | mingw* | cegcc*)
--    export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;/^.*[ ]__nm__/s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
--  ;;
-+    case $cc_basename in
-+    cl*) ;;
-+    *)
-+      export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms_CXX='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
-+      ;;
-+    esac
-+    ;;
-   *)
-     export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
--  ;;
-+    ;;
-   esac
--  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
- 
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5
- $as_echo "$ld_shlibs_CXX" >&6; }
-@@ -14191,8 +14962,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -14224,13 +14996,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -14770,6 +15600,7 @@ fi
-   fi # test -n "$compiler"
- 
-   CC=$lt_save_CC
-+  CFLAGS=$lt_save_CFLAGS
-   LDCXX=$LD
-   LD=$lt_save_LD
-   GCC=$lt_save_GCC
-@@ -17830,13 +18661,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -17851,14 +18689,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -17891,12 +18732,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -17935,8 +18776,8 @@ old_archive_cmds_CXX='`$ECHO "$old_archive_cmds_CXX" | $SED "$delay_single_quote
- compiler_CXX='`$ECHO "$compiler_CXX" | $SED "$delay_single_quote_subst"`'
- GCC_CXX='`$ECHO "$GCC_CXX" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag_CXX='`$ECHO "$lt_prog_compiler_no_builtin_flag_CXX" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic_CXX='`$ECHO "$lt_prog_compiler_pic_CXX" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static_CXX='`$ECHO "$lt_prog_compiler_static_CXX" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o_CXX='`$ECHO "$lt_cv_prog_compiler_c_o_CXX" | $SED "$delay_single_quote_subst"`'
- archive_cmds_need_lc_CXX='`$ECHO "$archive_cmds_need_lc_CXX" | $SED "$delay_single_quote_subst"`'
-@@ -17963,12 +18804,12 @@ hardcode_shlibpath_var_CXX='`$ECHO "$hardcode_shlibpath_var_CXX" | $SED "$delay_
- hardcode_automatic_CXX='`$ECHO "$hardcode_automatic_CXX" | $SED "$delay_single_quote_subst"`'
- inherit_rpath_CXX='`$ECHO "$inherit_rpath_CXX" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs_CXX='`$ECHO "$link_all_deplibs_CXX" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path_CXX='`$ECHO "$fix_srcfile_path_CXX" | $SED "$delay_single_quote_subst"`'
- always_export_symbols_CXX='`$ECHO "$always_export_symbols_CXX" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds_CXX='`$ECHO "$export_symbols_cmds_CXX" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms_CXX='`$ECHO "$exclude_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
- include_expsyms_CXX='`$ECHO "$include_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
- prelink_cmds_CXX='`$ECHO "$prelink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds_CXX='`$ECHO "$postlink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
- file_list_spec_CXX='`$ECHO "$file_list_spec_CXX" | $SED "$delay_single_quote_subst"`'
- hardcode_action_CXX='`$ECHO "$hardcode_action_CXX" | $SED "$delay_single_quote_subst"`'
- compiler_lib_search_dirs_CXX='`$ECHO "$compiler_lib_search_dirs_CXX" | $SED "$delay_single_quote_subst"`'
-@@ -18006,8 +18847,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -18017,12 +18863,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -18038,7 +18886,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -18060,8 +18907,8 @@ LD_CXX \
- reload_flag_CXX \
- compiler_CXX \
- lt_prog_compiler_no_builtin_flag_CXX \
--lt_prog_compiler_wl_CXX \
- lt_prog_compiler_pic_CXX \
-+lt_prog_compiler_wl_CXX \
- lt_prog_compiler_static_CXX \
- lt_cv_prog_compiler_c_o_CXX \
- export_dynamic_flag_spec_CXX \
-@@ -18073,7 +18920,6 @@ no_undefined_flag_CXX \
- hardcode_libdir_flag_spec_CXX \
- hardcode_libdir_flag_spec_ld_CXX \
- hardcode_libdir_separator_CXX \
--fix_srcfile_path_CXX \
- exclude_expsyms_CXX \
- include_expsyms_CXX \
- file_list_spec_CXX \
-@@ -18107,6 +18953,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -18121,7 +18968,8 @@ archive_expsym_cmds_CXX \
- module_cmds_CXX \
- module_expsym_cmds_CXX \
- export_symbols_cmds_CXX \
--prelink_cmds_CXX; do
-+prelink_cmds_CXX \
-+postlink_cmds_CXX; do
-     case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
-     *[\\\\\\\`\\"\\\$]*)
-       eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\""
-@@ -18886,7 +19734,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -18989,19 +19838,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -19031,6 +19903,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -19040,6 +19918,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -19154,12 +20035,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -19246,9 +20127,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -19264,6 +20142,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -19310,210 +20191,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-@@ -19541,12 +20381,12 @@ with_gcc=$GCC_CXX
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl_CXX
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic_CXX
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl_CXX
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static_CXX
- 
-@@ -19633,9 +20473,6 @@ inherit_rpath=$inherit_rpath_CXX
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs_CXX
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path_CXX
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols_CXX
- 
-@@ -19651,6 +20488,9 @@ include_expsyms=$lt_include_expsyms_CXX
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds_CXX
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds_CXX
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec_CXX
- 
-diff --git a/libbacktrace/configure b/libbacktrace/configure
-index a2f33c0f35d..90667680701 100755
---- a/libbacktrace/configure
-+++ b/libbacktrace/configure
-@@ -680,7 +680,10 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -798,6 +801,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_largefile
- enable_cet
-@@ -1458,6 +1462,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
-   --with-system-libunwind use installed libunwind
- 
- Some influential environment variables:
-@@ -5446,8 +5452,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -5487,7 +5493,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5818,48 +5824,49 @@ if ${lt_cv_path_NM+:} false; then :
-   $as_echo_n "(cached) " >&6
- else
-   if test -n "$NM"; then
--  # Let the user override the test.
--  lt_cv_path_NM="$NM"
--else
--  lt_nm_to_check="${ac_tool_prefix}nm"
--  if test -n "$ac_tool_prefix" && test "$build" = "$host"; then
--    lt_nm_to_check="$lt_nm_to_check nm"
--  fi
--  for lt_tmp_nm in $lt_nm_to_check; do
--    lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
--    for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do
--      IFS="$lt_save_ifs"
--      test -z "$ac_dir" && ac_dir=.
--      tmp_nm="$ac_dir/$lt_tmp_nm"
--      if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then
--	# Check to see if the nm accepts a BSD-compat flag.
--	# Adding the `sed 1q' prevents false positives on HP-UX, which says:
--	#   nm: unknown option "B" ignored
--	# Tru64's nm complains that /dev/null is an invalid object file
--	case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in
--	*/dev/null* | *'Invalid file or object type'*)
--	  lt_cv_path_NM="$tmp_nm -B"
--	  break
--	  ;;
--	*)
--	  case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in
--	  */dev/null*)
--	    lt_cv_path_NM="$tmp_nm -p"
--	    break
--	    ;;
--	  *)
--	    lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but
--	    continue # so that we can try to find one that supports BSD flags
--	    ;;
--	  esac
--	  ;;
--	esac
--      fi
--    done
--    IFS="$lt_save_ifs"
--  done
--  : ${lt_cv_path_NM=no}
--fi
-+   # Let the user override the nm to test.
-+   lt_nm_to_check="$NM"
-+ else
-+   lt_nm_to_check="${ac_tool_prefix}nm"
-+   if test -n "$ac_tool_prefix" && test "$build" = "$host"; then
-+     lt_nm_to_check="$lt_nm_to_check nm"
-+   fi
-+ fi
-+ for lt_tmp_nm in $lt_nm_to_check; do
-+   lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
-+   for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do
-+     IFS="$lt_save_ifs"
-+     test -z "$ac_dir" && ac_dir=.
-+     case "$lt_tmp_nm" in
-+     */*|*\\*) tmp_nm="$lt_tmp_nm";;
-+     *) tmp_nm="$ac_dir/$lt_tmp_nm";;
-+     esac
-+     if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then
-+       # Check to see if the nm accepts a BSD-compat flag.
-+       # Adding the `sed 1q' prevents false positives on HP-UX, which says:
-+       #   nm: unknown option "B" ignored
-+       case `"$tmp_nm" -B "$tmp_nm" 2>&1 | grep -v '^ *$' | sed '1q'` in
-+       *$tmp_nm*) lt_cv_path_NM="$tmp_nm -B"
-+	 break
-+	 ;;
-+       *)
-+	 case `"$tmp_nm" -p "$tmp_nm" 2>&1 | grep -v '^ *$' | sed '1q'` in
-+	 *$tmp_nm*)
-+	   lt_cv_path_NM="$tmp_nm -p"
-+	   break
-+	   ;;
-+	 *)
-+	   lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but
-+	   continue # so that we can try to find one that supports BSD flags
-+	   ;;
-+	 esac
-+	 ;;
-+       esac
-+     fi
-+   done
-+   IFS="$lt_save_ifs"
-+ done
-+ : ${lt_cv_path_NM=no}
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_NM" >&5
- $as_echo "$lt_cv_path_NM" >&6; }
-@@ -6173,8 +6180,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -6223,6 +6230,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if ${lt_cv_to_host_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if ${lt_cv_to_tool_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if ${lt_cv_ld_reload_flag+:} false; then :
-@@ -6239,6 +6320,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -6407,7 +6493,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6480,7 +6567,7 @@ irix5* | irix6* | nonstopux*)
-   ;;
- 
- # This must be Linux ELF.
--linux* | k*bsd*-gnu | kopensolaris*-gnu | uclinuxfdpiceabi)
-+linux* | k*bsd*-gnu | kopensolaris*-gnu)
-   lt_cv_deplibs_check_method=pass_all
-   ;;
- 
-@@ -6561,6 +6648,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6574,11 +6676,177 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
- 
- 
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
-+plugin_option=
-+plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
-+for plugin in $plugin_names; do
-+  plugin_so=`${CC} ${CFLAGS} --print-prog-name $plugin`
-+  if test x$plugin_so = x$plugin; then
-+    plugin_so=`${CC} ${CFLAGS} --print-file-name $plugin`
-+  fi
-+  if test x$plugin_so != x$plugin; then
-+    plugin_option="--plugin $plugin_so"
-+    break
-+  fi
-+done
-+
-+if test -n "$ac_tool_prefix"; then
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_AR+:} false; then :
-@@ -6594,7 +6862,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6614,11 +6882,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_ac_ct_AR+:} false; then :
-@@ -6634,7 +6906,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6653,6 +6925,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6664,12 +6940,21 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
-+$as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
-+  else
-+    AR="$AR $plugin_option"
-+  fi
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+
-+
- 
- 
- 
-@@ -6679,6 +6964,62 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if ${lt_cv_ar_at_file+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
- 
- 
- if test -n "$ac_tool_prefix"; then
-@@ -6873,6 +7214,11 @@ else
- fi
- 
- test -z "$RANLIB" && RANLIB=:
-+if test -n "$plugin_option" && test "$RANLIB" != ":"; then
-+  if $RANLIB --help 2>&1 | grep -q "\--plugin"; then
-+    RANLIB="$RANLIB $plugin_option"
-+  fi
-+fi
- 
- 
- 
-@@ -6987,7 +7333,7 @@ osf*)
-   symcode='[BCDEGQRST]'
-   ;;
- solaris*)
--  symcode='[BDRT]'
-+  symcode='[BCDRT]'
-   ;;
- sco3.2v5*)
-   symcode='[DT]'
-@@ -7015,8 +7361,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -7052,6 +7398,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -7093,6 +7440,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -7104,7 +7463,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -7130,8 +7489,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -7141,8 +7500,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -7179,6 +7538,17 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -7195,6 +7565,44 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
- 
- 
- 
-@@ -7372,39 +7780,156 @@ ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $
- ac_compiler_gnu=$ac_cv_c_compiler_gnu
- 
- fi
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_cc_needs_belf" >&5
--$as_echo "$lt_cv_cc_needs_belf" >&6; }
--  if test x"$lt_cv_cc_needs_belf" != x"yes"; then
--    # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf
--    CFLAGS="$SAVE_CFLAGS"
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_cc_needs_belf" >&5
-+$as_echo "$lt_cv_cc_needs_belf" >&6; }
-+  if test x"$lt_cv_cc_needs_belf" != x"yes"; then
-+    # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf
-+    CFLAGS="$SAVE_CFLAGS"
-+  fi
-+  ;;
-+sparc*-*solaris*)
-+  # Find out which ABI we are using.
-+  echo 'int i;' > conftest.$ac_ext
-+  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
-+  (eval $ac_compile) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }; then
-+    case `/usr/bin/file conftest.o` in
-+    *64-bit*)
-+      case $lt_cv_prog_gnu_ld in
-+      yes*) LD="${LD-ld} -m elf64_sparc" ;;
-+      *)
-+	if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then
-+	  LD="${LD-ld} -64"
-+	fi
-+	;;
-+      esac
-+      ;;
-+    esac
-+  fi
-+  rm -rf conftest*
-+  ;;
-+esac
-+
-+need_locks="$enable_libtool_lock"
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-   fi
--  ;;
--sparc*-*solaris*)
--  # Find out which ABI we are using.
--  echo 'int i;' > conftest.$ac_ext
--  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
--  (eval $ac_compile) 2>&5
--  ac_status=$?
--  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
--  test $ac_status = 0; }; then
--    case `/usr/bin/file conftest.o` in
--    *64-bit*)
--      case $lt_cv_prog_gnu_ld in
--      yes*) LD="${LD-ld} -m elf64_sparc" ;;
--      *)
--	if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then
--	  LD="${LD-ld} -64"
--	fi
--	;;
--      esac
--      ;;
--    esac
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if ${lt_cv_path_mainfest_tool+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-   fi
--  rm -rf conftest*
--  ;;
--esac
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
- 
--need_locks="$enable_libtool_lock"
- 
- 
-   case $host_os in
-@@ -7969,6 +8494,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7986,25 +8513,23 @@ _LT_EOF
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_force_load" >&5
- $as_echo "$lt_cv_ld_force_load" >&6; }
--    # Allow for Darwin 4-7 (macOS 10.0-10.3) although these are not expect to
--    # build without first building modern cctools / linker.
--    case $host_cpu-$host_os in
--    *-rhapsody* | *-darwin1.[012])
-+    case $host_os in
-+    rhapsody* | darwin1.[012])
-       _lt_dar_allow_undefined='${wl}-undefined ${wl}suppress' ;;
--    *-darwin1.*)
-+    darwin1.*)
-       _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;;
--    *-darwin*)
--      # darwin 5.x (macOS 10.1) onwards we only need to adjust when the
--      # deployment target is forced to an earlier version.
--      case ${MACOSX_DEPLOYMENT_TARGET-UNSET},$host in
--	UNSET,*-darwin[89]*|UNSET,*-darwin[12][0123456789]*)
--	  ;;
-+    darwin*) # darwin 5.x on
-+      # if running on 10.5 or later, the deployment target defaults
-+      # to the OS version, if on x86, and 10.4, the deployment
-+      # target defaults to 10.4. Don't you love it?
-+      case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in
-+	10.0,*86*-darwin8*|10.0,*-darwin[91]*)
-+	  _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;;
- 	10.[012][,.]*)
--	  _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress'
--	  ;;
--	*)
--	  ;;
--     esac
-+	  _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;;
-+	10.*)
-+	  _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;;
-+      esac
-     ;;
-   esac
-     if test "$lt_cv_apple_cc_single_mod" = "yes"; then
-@@ -8553,8 +9078,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8720,6 +9243,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8782,7 +9311,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8839,13 +9368,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8906,6 +9439,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -9256,7 +9794,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -9294,7 +9833,7 @@ _LT_EOF
-       archive_expsym_cmds='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
-       ;;
- 
--    gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu | uclinuxfdpiceabi)
-+    gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu)
-       tmp_diet=no
-       if test "$host_os" = linux-dietlibc; then
- 	case $cc_basename in
-@@ -9355,12 +9894,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -9374,8 +9913,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -9393,8 +9932,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9440,8 +9979,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9571,7 +10110,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9584,22 +10129,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9611,7 +10163,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9624,22 +10182,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9684,20 +10249,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9758,7 +10366,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9766,7 +10374,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9782,7 +10390,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9803,19 +10411,19 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
- 	case $host_cpu in
- 	hppa*64*)
--	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
- 	case $host_cpu in
- 	hppa*64*)
--	  archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
- 	  archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-@@ -9888,23 +10496,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if ${lt_cv_irix_exported_symbol+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9989,7 +10610,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -10008,9 +10629,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10586,8 +11207,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10620,13 +11242,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10708,7 +11388,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -10815,12 +11495,7 @@ linux*oldld* | linux*aout* | linux*coff*)
-   ;;
- 
- # This must be Linux ELF.
--
--# uclinux* changes (here and below) have been submitted to the libtool
--# project, but have not yet been accepted: they are GCC-local changes
--# for the time being.  (See
--# https://lists.gnu.org/archive/html/libtool-patches/2018-05/msg00000.html)
--linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu* | uclinuxfdpiceabi)
-+linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
-   version_type=linux
-   need_lib_prefix=no
-   need_version=no
-@@ -11509,7 +12184,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11512 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11553,10 +12228,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11615,7 +12290,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11618 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11659,10 +12334,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -14948,13 +15623,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -14969,14 +15651,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -15009,12 +15694,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -15069,8 +15754,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -15080,12 +15770,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -15101,7 +15793,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -15137,6 +15828,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -15835,7 +16527,8 @@ esac ;;
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -15938,19 +16631,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -15980,6 +16696,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -15989,6 +16711,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -16103,12 +16828,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -16195,9 +16920,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -16213,6 +16935,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -16245,210 +16970,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/libctf/configure b/libctf/configure
-index de10fef84a1..1b0ee0d32c6 100755
---- a/libctf/configure
-+++ b/libctf/configure
-@@ -669,6 +669,8 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -801,6 +803,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_largefile
- enable_werror_always
-@@ -1475,6 +1478,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
-   --with-system-zlib      use installed libz
- 
- Some influential environment variables:
-@@ -5583,8 +5588,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -5624,7 +5629,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -6311,8 +6316,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -6361,6 +6366,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if ${lt_cv_to_host_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if ${lt_cv_to_tool_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if ${lt_cv_ld_reload_flag+:} false; then :
-@@ -6377,6 +6456,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -6545,7 +6629,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6699,6 +6784,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6714,6 +6814,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- plugin_option=
- plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
- for plugin in $plugin_names; do
-@@ -6728,8 +6979,10 @@ for plugin in $plugin_names; do
- done
- 
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_AR+:} false; then :
-@@ -6745,7 +6998,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6765,11 +7018,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_ac_ct_AR+:} false; then :
-@@ -6785,7 +7042,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6804,6 +7061,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6815,25 +7076,19 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--if test -n "$plugin_option"; then
--  if $AR --help 2>&1 | grep -q "\--plugin"; then
--    touch conftest.c
--    $AR $plugin_option rc conftest.a conftest.c
--    if test "$?" != 0; then
--      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
- $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
--    else
--      AR="$AR $plugin_option"
--    fi
--    rm -f conftest.*
-+  else
-+    AR="$AR $plugin_option"
-   fi
--fi
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
- 
- 
- 
-@@ -6845,6 +7100,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if ${lt_cv_ar_at_file+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
-   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
- set dummy ${ac_tool_prefix}strip; ac_word=$2
-@@ -7184,8 +7497,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -7221,6 +7534,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -7262,6 +7576,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -7273,7 +7599,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -7299,8 +7625,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -7310,8 +7636,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -7348,6 +7674,14 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
- 
- 
- 
-@@ -7366,6 +7700,47 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
- 
- 
- 
-@@ -7575,6 +7950,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if ${lt_cv_path_mainfest_tool+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -8138,6 +8630,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -8690,8 +9184,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8857,6 +9349,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8919,7 +9417,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8976,13 +9474,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -9043,6 +9545,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -9393,7 +9900,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -9492,12 +10000,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -9511,8 +10019,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -9530,8 +10038,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9577,8 +10085,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9708,7 +10216,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9721,22 +10235,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9748,7 +10269,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9761,22 +10288,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9821,20 +10355,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9895,7 +10472,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9903,7 +10480,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9919,7 +10496,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9943,10 +10520,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -10025,23 +10602,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if ${lt_cv_irix_exported_symbol+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -10126,7 +10716,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -10145,9 +10735,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10723,8 +11313,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10757,13 +11348,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -11641,7 +12290,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11644 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11685,10 +12334,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11747,7 +12396,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11750 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11791,10 +12440,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -14479,13 +15128,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -14500,14 +15156,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -14540,12 +15199,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -14600,8 +15259,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -14611,12 +15275,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -14632,7 +15298,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -14668,6 +15333,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -15424,7 +16090,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -15527,19 +16194,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -15569,6 +16259,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -15578,6 +16274,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -15692,12 +16391,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -15784,9 +16483,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -15802,6 +16498,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -15834,210 +16533,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/libtool.m4 b/libtool.m4
-index a216bb14e99..e37c45ac0b1 100644
---- a/libtool.m4
-+++ b/libtool.m4
-@@ -1,7 +1,8 @@
- # libtool.m4 - Configure libtool for the host system. -*-Autoconf-*-
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- # This file is free software; the Free Software Foundation gives
-@@ -10,7 +11,8 @@
- 
- m4_define([_LT_COPYING], [dnl
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -37,7 +39,7 @@ m4_define([_LT_COPYING], [dnl
- # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
- ])
- 
--# serial 56 LT_INIT
-+# serial 57 LT_INIT
- 
- 
- # LT_PREREQ(VERSION)
-@@ -166,10 +168,13 @@ _LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl
- dnl
- m4_require([_LT_FILEUTILS_DEFAULTS])dnl
- m4_require([_LT_CHECK_SHELL_FEATURES])dnl
-+m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl
- m4_require([_LT_CMD_RELOAD])dnl
- m4_require([_LT_CHECK_MAGIC_METHOD])dnl
-+m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl
- m4_require([_LT_CMD_OLD_ARCHIVE])dnl
- m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
-+m4_require([_LT_WITH_SYSROOT])dnl
- 
- _LT_CONFIG_LIBTOOL_INIT([
- # See if we are running on zsh, and set the options which allow our
-@@ -632,7 +637,7 @@ m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl
- m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION])
- configured by $[0], generated by m4_PACKAGE_STRING.
- 
--Copyright (C) 2009 Free Software Foundation, Inc.
-+Copyright (C) 2010 Free Software Foundation, Inc.
- This config.lt script is free software; the Free Software Foundation
- gives unlimited permision to copy, distribute and modify it."
- 
-@@ -746,15 +751,12 @@ _LT_EOF
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
- 
--  _LT_PROG_XSI_SHELLFNS
-+  _LT_PROG_REPLACE_SHELLFNS
- 
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- ],
-@@ -980,6 +982,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD
-       echo "$AR cru libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD
-       $AR cru libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD
-+      echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD
-+      $RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -1069,30 +1073,41 @@ m4_defun([_LT_DARWIN_LINKER_FEATURES],
-   fi
- ])
- 
--# _LT_SYS_MODULE_PATH_AIX
--# -----------------------
-+# _LT_SYS_MODULE_PATH_AIX([TAGNAME])
-+# ----------------------------------
- # Links a minimal program and checks the executable
- # for the system default hardcoded library path. In most cases,
- # this is /usr/lib:/lib, but when the MPI compilers are used
- # the location of the communication and MPI libs are included too.
- # If we don't find anything, use the default library path according
- # to the aix ld manual.
-+# Store the results from the different compilers for each TAGNAME.
-+# Allow to override them for all tags through lt_cv_aix_libpath.
- m4_defun([_LT_SYS_MODULE_PATH_AIX],
- [m4_require([_LT_DECL_SED])dnl
--AC_LINK_IFELSE([AC_LANG_SOURCE([AC_LANG_PROGRAM])],[
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi],[])
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])],
-+  [AC_LINK_IFELSE([AC_LANG_PROGRAM],[
-+  lt_aix_libpath_sed='[
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }]'
-+  _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
-+    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi],[])
-+  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
-+    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])="/usr/lib:/lib"
-+  fi
-+  ])
-+  aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])
-+fi
- ])# _LT_SYS_MODULE_PATH_AIX
- 
- 
-@@ -1117,7 +1132,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- 
- AC_MSG_CHECKING([how to print strings])
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -1161,6 +1176,39 @@ _LT_DECL([], [ECHO], [1], [An echo program that protects backslashes])
- ])# _LT_PROG_ECHO_BACKSLASH
- 
- 
-+# _LT_WITH_SYSROOT
-+# ----------------
-+AC_DEFUN([_LT_WITH_SYSROOT],
-+[AC_MSG_CHECKING([for sysroot])
-+AC_ARG_WITH([libtool-sysroot],
-+[  --with-libtool-sysroot[=DIR] Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).],
-+[], [with_libtool_sysroot=no])
-+
-+dnl lt_sysroot will always be passed unquoted.  We quote it here
-+dnl in case the user passed a directory name.
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   AC_MSG_RESULT([${with_libtool_sysroot}])
-+   AC_MSG_ERROR([The sysroot must be an absolute path.])
-+   ;;
-+esac
-+
-+ AC_MSG_RESULT([${lt_sysroot:-no}])
-+_LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl
-+[dependent libraries, and in which our libraries should be installed.])])
-+
- # _LT_ENABLE_LOCK
- # ---------------
- m4_defun([_LT_ENABLE_LOCK],
-@@ -1320,6 +1368,51 @@ need_locks="$enable_libtool_lock"
- ])# _LT_ENABLE_LOCK
- 
- 
-+# _LT_PROG_AR
-+# -----------
-+m4_defun([_LT_PROG_AR],
-+[AC_CHECK_TOOLS(AR, [ar], false)
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    AC_MSG_WARN([Failed: $AR $plugin_option rc])
-+  else
-+    AR="$AR $plugin_option"
-+  fi
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+_LT_DECL([], [AR], [1], [The archiver])
-+_LT_DECL([], [AR_FLAGS], [1], [Flags to create an archive])
-+
-+AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file],
-+  [lt_cv_ar_at_file=no
-+   AC_COMPILE_IFELSE([AC_LANG_PROGRAM],
-+     [echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD'
-+      AC_TRY_EVAL([lt_ar_try])
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	AC_TRY_EVAL([lt_ar_try])
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+     ])
-+  ])
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+_LT_DECL([], [archiver_list_spec], [1],
-+  [How to feed a file listing to the archiver])
-+])# _LT_PROG_AR
-+
-+
- # _LT_CMD_OLD_ARCHIVE
- # -------------------
- m4_defun([_LT_CMD_OLD_ARCHIVE],
-@@ -1336,23 +1429,7 @@ for plugin in $plugin_names; do
-   fi
- done
- 
--AC_CHECK_TOOL(AR, ar, false)
--test -z "$AR" && AR=ar
--if test -n "$plugin_option"; then
--  if $AR --help 2>&1 | grep -q "\--plugin"; then
--    touch conftest.c
--    $AR $plugin_option rc conftest.a conftest.c
--    if test "$?" != 0; then
--      AC_MSG_WARN([Failed: $AR $plugin_option rc])
--    else
--      AR="$AR $plugin_option"
--    fi
--    rm -f conftest.*
--  fi
--fi
--test -z "$AR_FLAGS" && AR_FLAGS=cru
--_LT_DECL([], [AR], [1], [The archiver])
--_LT_DECL([], [AR_FLAGS], [1])
-+_LT_PROG_AR
- 
- AC_CHECK_TOOL(STRIP, strip, :)
- test -z "$STRIP" && STRIP=:
-@@ -1653,7 +1730,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--[#line __oline__ "configure"
-+[#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -1697,10 +1774,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -2240,8 +2317,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -2274,13 +2352,71 @@ m4_if([$1], [],[
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -2970,6 +3106,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -3036,7 +3177,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -3187,6 +3329,21 @@ tpf*)
-   ;;
- esac
- ])
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -3194,7 +3351,11 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- _LT_DECL([], [deplibs_check_method], [1],
-     [Method to check whether dependent libraries are shared objects])
- _LT_DECL([], [file_magic_cmd], [1],
--    [Command to use when deplibs_check_method == "file_magic"])
-+    [Command to use when deplibs_check_method = "file_magic"])
-+_LT_DECL([], [file_magic_glob], [1],
-+    [How to find potential files when deplibs_check_method = "file_magic"])
-+_LT_DECL([], [want_nocaseglob], [1],
-+    [Find potential files using nocaseglob when deplibs_check_method = "file_magic"])
- ])# _LT_CHECK_MAGIC_METHOD
- 
- 
-@@ -3299,6 +3460,67 @@ dnl aclocal-1.4 backwards compatibility:
- dnl AC_DEFUN([AM_PROG_NM], [])
- dnl AC_DEFUN([AC_PROG_NM], [])
- 
-+# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
-+# --------------------------------
-+# how to determine the name of the shared library
-+# associated with a specific link library.
-+#  -- PORTME fill in with the dynamic library characteristics
-+m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB],
-+[m4_require([_LT_DECL_EGREP])
-+m4_require([_LT_DECL_OBJDUMP])
-+m4_require([_LT_DECL_DLLTOOL])
-+AC_CACHE_CHECK([how to associate runtime and link libraries],
-+lt_cv_sharedlib_from_linklib_cmd,
-+[lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+])
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+_LT_DECL([], [sharedlib_from_linklib_cmd], [1],
-+    [Command to associate shared and link libraries])
-+])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
-+
-+
-+# _LT_PATH_MANIFEST_TOOL
-+# ----------------------
-+# locate the manifest tool
-+m4_defun([_LT_PATH_MANIFEST_TOOL],
-+[AC_CHECK_TOOL(MANIFEST_TOOL, mt, :)
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool],
-+  [lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&AS_MESSAGE_LOG_FD
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*])
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+_LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl
-+])# _LT_PATH_MANIFEST_TOOL
-+
- 
- # LT_LIB_M
- # --------
-@@ -3425,8 +3647,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -3462,6 +3684,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[	 ]]\($symcode$symcode*\)[[	 ]][[	 ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -3495,6 +3718,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT@&t@_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT@&t@_DLSYM_CONST
-+#else
-+# define LT@&t@_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -3506,7 +3741,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT@&t@_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -3532,15 +3767,15 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)"
- 	  if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD
- 	fi
-@@ -3573,6 +3808,13 @@ else
-   AC_MSG_RESULT(ok)
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
- _LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1],
-     [Take the output of nm and produce a listing of raw symbols and C names])
- _LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1],
-@@ -3583,6 +3825,8 @@ _LT_DECL([global_symbol_to_c_name_address],
- _LT_DECL([global_symbol_to_c_name_address_lib_prefix],
-     [lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1],
-     [Transform the output of nm in a C name address pair when lib prefix is needed])
-+_LT_DECL([], [nm_file_list_spec], [1],
-+    [Specify filename containing input files for $NM])
- ]) # _LT_CMD_GLOBAL_SYMBOLS
- 
- 
-@@ -3594,7 +3838,6 @@ _LT_TAGVAR(lt_prog_compiler_wl, $1)=
- _LT_TAGVAR(lt_prog_compiler_pic, $1)=
- _LT_TAGVAR(lt_prog_compiler_static, $1)=
- 
--AC_MSG_CHECKING([for $compiler option to produce PIC])
- m4_if([$1], [CXX], [
-   # C++ specific cases for pic, static, wl, etc.
-   if test "$GXX" = yes; then
-@@ -3700,6 +3943,12 @@ m4_if([$1], [CXX], [
- 	  ;;
- 	esac
- 	;;
-+      mingw* | cygwin* | os2* | pw32* | cegcc*)
-+	# This hack is so that the source file can tell whether it is being
-+	# built for inclusion in a dll (and should export symbols for example).
-+	m4_if([$1], [GCJ], [],
-+	  [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
-+	;;
-       dgux*)
- 	case $cc_basename in
- 	  ec++*)
-@@ -3852,7 +4101,7 @@ m4_if([$1], [CXX], [
- 	;;
-       solaris*)
- 	case $cc_basename in
--	  CC*)
-+	  CC* | sunCC*)
- 	    # Sun C++ 4.2, 5.x and Centerline C++
- 	    _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
- 	    _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
-@@ -4075,6 +4324,12 @@ m4_if([$1], [CXX], [
- 	_LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared'
- 	_LT_TAGVAR(lt_prog_compiler_static, $1)='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
-+	_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
-+	_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -4137,7 +4392,7 @@ m4_if([$1], [CXX], [
-       _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
-       _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';;
-       *)
- 	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';;
-@@ -4194,9 +4449,11 @@ case $host_os in
-     _LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])"
-     ;;
- esac
--AC_MSG_RESULT([$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
--_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
--	[How to pass a linker flag through the compiler])
-+
-+AC_CACHE_CHECK([for $compiler option to produce PIC],
-+  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)],
-+  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
-+_LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -4215,6 +4472,8 @@ fi
- _LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1],
- 	[Additional compiler flags for building library objects])
- 
-+_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
-+	[How to pass a linker flag through the compiler])
- #
- # Check to make sure the static flag actually works.
- #
-@@ -4235,6 +4494,7 @@ _LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1],
- m4_defun([_LT_LINKER_SHLIBS],
- [AC_REQUIRE([LT_PATH_LD])dnl
- AC_REQUIRE([LT_PATH_NM])dnl
-+m4_require([_LT_PATH_MANIFEST_TOOL])dnl
- m4_require([_LT_FILEUTILS_DEFAULTS])dnl
- m4_require([_LT_DECL_EGREP])dnl
- m4_require([_LT_DECL_SED])dnl
-@@ -4243,6 +4503,7 @@ m4_require([_LT_TAG_COMPILER])dnl
- AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
- m4_if([$1], [CXX], [
-   _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
-+  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
-   case $host_os in
-   aix[[4-9]]*)
-     # If we're using GNU nm, then we don't want the "-C" option.
-@@ -4257,15 +4518,20 @@ m4_if([$1], [CXX], [
-     ;;
-   pw32*)
-     _LT_TAGVAR(export_symbols_cmds, $1)="$ltdll_cmds"
--  ;;
-+    ;;
-   cygwin* | mingw* | cegcc*)
--    _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;/^.*[[ ]]__nm__/s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
--  ;;
-+    case $cc_basename in
-+    cl*) ;;
-+    *)
-+      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
-+      ;;
-+    esac
-+    ;;
-   *)
-     _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
--  ;;
-+    ;;
-   esac
--  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
- ], [
-   runpath_var=
-   _LT_TAGVAR(allow_undefined_flag, $1)=
-@@ -4433,7 +4699,8 @@ _LT_EOF
-       _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-       _LT_TAGVAR(always_export_symbols, $1)=no
-       _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
--      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
-+      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -4532,12 +4799,12 @@ _LT_EOF
- 	  _LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience --no-whole-archive'
- 	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
- 	  _LT_TAGVAR(hardcode_libdir_flag_spec_ld, $1)='-rpath $libdir'
--	  _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -4551,8 +4818,8 @@ _LT_EOF
- 	_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -4570,8 +4837,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	_LT_TAGVAR(ld_shlibs, $1)=no
-       fi
-@@ -4617,8 +4884,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	_LT_TAGVAR(ld_shlibs, $1)=no
-       fi
-@@ -4748,7 +5015,7 @@ _LT_EOF
- 	_LT_TAGVAR(allow_undefined_flag, $1)='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        _LT_SYS_MODULE_PATH_AIX
-+        _LT_SYS_MODULE_PATH_AIX([$1])
-         _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
-         _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-       else
-@@ -4759,7 +5026,7 @@ _LT_EOF
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 _LT_SYS_MODULE_PATH_AIX
-+	 _LT_SYS_MODULE_PATH_AIX([$1])
- 	 _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
- 	  # -berok will link without error, but may produce a broken library.
-@@ -4803,20 +5070,63 @@ _LT_EOF
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
--      _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      _LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
--      # FIXME: Should let the user specify the lib program.
--      _LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      _LT_TAGVAR(fix_srcfile_path, $1)='`cygpath -w "$srcfile"`'
--      _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
-+	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-+	_LT_TAGVAR(always_export_symbols, $1)=yes
-+	_LT_TAGVAR(file_list_spec, $1)='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	_LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
-+	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+	_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	_LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
-+	_LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
-+	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	_LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
-+	# FIXME: Should let the user specify the lib program.
-+	_LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -4850,7 +5160,7 @@ _LT_EOF
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      _LT_TAGVAR(archive_cmds, $1)='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
-       _LT_TAGVAR(hardcode_direct, $1)=yes
-       _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
-@@ -4858,7 +5168,7 @@ _LT_EOF
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -4874,7 +5184,7 @@ _LT_EOF
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -4898,10 +5208,10 @@ _LT_EOF
- 	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -4948,16 +5258,31 @@ _LT_EOF
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        AC_LINK_IFELSE([AC_LANG_SOURCE([int foo(void) {}])],
--          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--        )
--        LDFLAGS="$save_LDFLAGS"
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol],
-+	  [lt_cv_irix_exported_symbol],
-+	  [save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   AC_LINK_IFELSE(
-+	     [AC_LANG_SOURCE(
-+	        [AC_LANG_CASE([C], [[int foo (void) { return 0; }]],
-+			      [C++], [[int foo (void) { return 0; }]],
-+			      [Fortran 77], [[
-+      subroutine foo
-+      end]],
-+			      [Fortran], [[
-+      subroutine foo
-+      end]])])],
-+	      [lt_cv_irix_exported_symbol=yes],
-+	      [lt_cv_irix_exported_symbol=no])
-+           LDFLAGS="$save_LDFLAGS"])
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -5042,7 +5367,7 @@ _LT_EOF
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	_LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
-       else
- 	_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
-@@ -5061,9 +5386,9 @@ _LT_EOF
-       _LT_TAGVAR(no_undefined_flag, $1)=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -5335,8 +5660,6 @@ _LT_TAGDECL([], [inherit_rpath], [0],
-     to runtime path list])
- _LT_TAGDECL([], [link_all_deplibs], [0],
-     [Whether libtool must link a program against all its dependency libraries])
--_LT_TAGDECL([], [fix_srcfile_path], [1],
--    [Fix the shell variable $srcfile for the compiler])
- _LT_TAGDECL([], [always_export_symbols], [0],
-     [Set to "yes" if exported symbols are required])
- _LT_TAGDECL([], [export_symbols_cmds], [2],
-@@ -5347,6 +5670,8 @@ _LT_TAGDECL([], [include_expsyms], [1],
-     [Symbols that must always be exported])
- _LT_TAGDECL([], [prelink_cmds], [2],
-     [Commands necessary for linking programs (against libraries) with templates])
-+_LT_TAGDECL([], [postlink_cmds], [2],
-+    [Commands necessary for finishing linking programs])
- _LT_TAGDECL([], [file_list_spec], [1],
-     [Specify filename containing input files])
- dnl FIXME: Not yet implemented
-@@ -5448,6 +5773,7 @@ CC="$lt_save_CC"
- m4_defun([_LT_LANG_CXX_CONFIG],
- [m4_require([_LT_FILEUTILS_DEFAULTS])dnl
- m4_require([_LT_DECL_EGREP])dnl
-+m4_require([_LT_PATH_MANIFEST_TOOL])dnl
- if test -n "$CXX" && ( test "X$CXX" != "Xno" &&
-     ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) ||
-     (test "X$CXX" != "Xg++"))) ; then
-@@ -5509,6 +5835,7 @@ if test "$_lt_caught_CXX_error" != yes; then
- 
-   # Allow CC to be a program name with arguments.
-   lt_save_CC=$CC
-+  lt_save_CFLAGS=$CFLAGS
-   lt_save_LD=$LD
-   lt_save_GCC=$GCC
-   GCC=$GXX
-@@ -5526,6 +5853,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-   fi
-   test -z "${LDCXX+set}" || LD=$LDCXX
-   CC=${CXX-"c++"}
-+  CFLAGS=$CXXFLAGS
-   compiler=$CC
-   _LT_TAGVAR(compiler, $1)=$CC
-   _LT_CC_BASENAME([$compiler])
-@@ -5689,7 +6017,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-           _LT_TAGVAR(allow_undefined_flag, $1)='-berok'
-           # Determine the default libpath from the value encoded in an empty
-           # executable.
--          _LT_SYS_MODULE_PATH_AIX
-+          _LT_SYS_MODULE_PATH_AIX([$1])
-           _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
- 
-           _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -5701,7 +6029,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-           else
- 	    # Determine the default libpath from the value encoded in an
- 	    # empty executable.
--	    _LT_SYS_MODULE_PATH_AIX
-+	    _LT_SYS_MODULE_PATH_AIX([$1])
- 	    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	    # Warning - without using the other run time loading flags,
- 	    # -berok will link without error, but may produce a broken library.
-@@ -5743,29 +6071,75 @@ if test "$_lt_caught_CXX_error" != yes; then
-         ;;
- 
-       cygwin* | mingw* | pw32* | cegcc*)
--        # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
--        # as there is no search path for DLLs.
--        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
--        _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
--        _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
--        _LT_TAGVAR(always_export_symbols, $1)=no
--        _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
--
--        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
--          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
--          # If the export-symbols file already is a .def file (1st line
--          # is EXPORTS), use it as is; otherwise, prepend...
--          _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
--	    cp $export_symbols $output_objdir/$soname.def;
--          else
--	    echo EXPORTS > $output_objdir/$soname.def;
--	    cat $export_symbols >> $output_objdir/$soname.def;
--          fi~
--          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
--        else
--          _LT_TAGVAR(ld_shlibs, $1)=no
--        fi
--        ;;
-+	case $GXX,$cc_basename in
-+	,cl* | no,cl*)
-+	  # Native MSVC
-+	  # hardcode_libdir_flag_spec is actually meaningless, as there is
-+	  # no search path for DLLs.
-+	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
-+	  _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-+	  _LT_TAGVAR(always_export_symbols, $1)=yes
-+	  _LT_TAGVAR(file_list_spec, $1)='@'
-+	  # Tell ltmain to make .lib files, not .a files.
-+	  libext=lib
-+	  # Tell ltmain to make .dll files, not .so files.
-+	  shrext_cmds=".dll"
-+	  # FIXME: Setting linknames here is a bad hack.
-+	  _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	  _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	    else
-+	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	    fi~
-+	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	    linknames='
-+	  # The linker will not automatically build a static lib if we build a DLL.
-+	  # _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
-+	  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+	  # Don't use ranlib
-+	  _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
-+	  _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
-+	    lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	    case $lt_outputfile in
-+	      *.exe|*.EXE) ;;
-+	      *)
-+		lt_outputfile="$lt_outputfile.exe"
-+		lt_tool_outputfile="$lt_tool_outputfile.exe"
-+		;;
-+	    esac~
-+	    func_to_tool_file "$lt_outputfile"~
-+	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	      $RM "$lt_outputfile.manifest";
-+	    fi'
-+	  ;;
-+	*)
-+	  # g++
-+	  # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
-+	  # as there is no search path for DLLs.
-+	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
-+	  _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
-+	  _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-+	  _LT_TAGVAR(always_export_symbols, $1)=no
-+	  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+
-+	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-+	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-+	    # If the export-symbols file already is a .def file (1st line
-+	    # is EXPORTS), use it as is; otherwise, prepend...
-+	    _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	      cp $export_symbols $output_objdir/$soname.def;
-+	    else
-+	      echo EXPORTS > $output_objdir/$soname.def;
-+	      cat $export_symbols >> $output_objdir/$soname.def;
-+	    fi~
-+	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-+	  else
-+	    _LT_TAGVAR(ld_shlibs, $1)=no
-+	  fi
-+	  ;;
-+	esac
-+	;;
-       darwin* | rhapsody*)
-         _LT_DARWIN_LINKER_FEATURES($1)
- 	;;
-@@ -5840,7 +6214,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-             ;;
-           *)
-             if test "$GXX" = yes; then
--              _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+              _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-             else
-               # FIXME: insert proper C++ library support
-               _LT_TAGVAR(ld_shlibs, $1)=no
-@@ -5911,10 +6285,10 @@ if test "$_lt_caught_CXX_error" != yes; then
- 	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	          ia64*)
--	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
-+	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	          *)
--	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
-+	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	        esac
- 	      fi
-@@ -5955,9 +6329,9 @@ if test "$_lt_caught_CXX_error" != yes; then
-           *)
- 	    if test "$GXX" = yes; then
- 	      if test "$with_gnu_ld" = no; then
--	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	      else
--	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
-+	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
- 	      fi
- 	    fi
- 	    _LT_TAGVAR(link_all_deplibs, $1)=yes
-@@ -6027,20 +6401,20 @@ if test "$_lt_caught_CXX_error" != yes; then
- 	      _LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
--		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
-+		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
- 	      _LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
--		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
-+		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
- 		$RANLIB $oldlib'
- 	      _LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
--		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
-+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
- 	      _LT_TAGVAR(archive_expsym_cmds, $1)='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
--		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
-+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
- 	      ;;
- 	    *) # Version 6 and above use weak symbols
- 	      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
-@@ -6235,7 +6609,7 @@ if test "$_lt_caught_CXX_error" != yes; then
- 	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 		  ;;
- 	        *)
--	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 		  ;;
- 	      esac
- 
-@@ -6281,7 +6655,7 @@ if test "$_lt_caught_CXX_error" != yes; then
- 
-       solaris*)
-         case $cc_basename in
--          CC*)
-+          CC* | sunCC*)
- 	    # Sun C++ 4.2, 5.x and Centerline C++
-             _LT_TAGVAR(archive_cmds_need_lc,$1)=yes
- 	    _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
-@@ -6322,9 +6696,9 @@ if test "$_lt_caught_CXX_error" != yes; then
- 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
- 	      _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-z ${wl}defs'
- 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
--	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
-+	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
- 	        _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
-+		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
- 
- 	        # Commands to make compiler produce verbose output that lists
- 	        # what "hidden" libraries, object files and flags are used when
-@@ -6453,6 +6827,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-   fi # test -n "$compiler"
- 
-   CC=$lt_save_CC
-+  CFLAGS=$lt_save_CFLAGS
-   LDCXX=$LD
-   LD=$lt_save_LD
-   GCC=$lt_save_GCC
-@@ -6467,6 +6842,29 @@ AC_LANG_POP
- ])# _LT_LANG_CXX_CONFIG
- 
- 
-+# _LT_FUNC_STRIPNAME_CNF
-+# ----------------------
-+# func_stripname_cnf prefix suffix name
-+# strip PREFIX and SUFFIX off of NAME.
-+# PREFIX and SUFFIX must not contain globbing or regex special
-+# characters, hashes, percent signs, but SUFFIX may contain a leading
-+# dot (in which case that matches only a dot).
-+#
-+# This function is identical to the (non-XSI) version of func_stripname,
-+# except this one can be used by m4 code that may be executed by configure,
-+# rather than the libtool script.
-+m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl
-+AC_REQUIRE([_LT_DECL_SED])
-+AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])
-+func_stripname_cnf ()
-+{
-+  case ${2} in
-+  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
-+  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
-+  esac
-+} # func_stripname_cnf
-+])# _LT_FUNC_STRIPNAME_CNF
-+
- # _LT_SYS_HIDDEN_LIBDEPS([TAGNAME])
- # ---------------------------------
- # Figure out "hidden" library dependencies from verbose
-@@ -6475,6 +6873,7 @@ AC_LANG_POP
- # objects, libraries and library flags.
- m4_defun([_LT_SYS_HIDDEN_LIBDEPS],
- [m4_require([_LT_FILEUTILS_DEFAULTS])dnl
-+AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl
- # Dependencies to place before and after the object being linked:
- _LT_TAGVAR(predep_objects, $1)=
- _LT_TAGVAR(postdep_objects, $1)=
-@@ -6525,6 +6924,13 @@ public class foo {
- };
- _LT_EOF
- ])
-+
-+_lt_libdeps_save_CFLAGS=$CFLAGS
-+case "$CC $CFLAGS " in #(
-+*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
-+*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
-+esac
-+
- dnl Parse the compiler output and extract the necessary
- dnl objects, libraries and library flags.
- if AC_TRY_EVAL(ac_compile); then
-@@ -6536,7 +6942,7 @@ if AC_TRY_EVAL(ac_compile); then
-   pre_test_object_deps_done=no
- 
-   for p in `eval "$output_verbose_link_cmd"`; do
--    case $p in
-+    case ${prev}${p} in
- 
-     -L* | -R* | -l*)
-        # Some compilers place space between "-{L,R}" and the path.
-@@ -6545,13 +6951,22 @@ if AC_TRY_EVAL(ac_compile); then
-           test $p = "-R"; then
- 	 prev=$p
- 	 continue
--       else
--	 prev=
-        fi
- 
-+       # Expand the sysroot to ease extracting the directories later.
-+       if test -z "$prev"; then
-+         case $p in
-+         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
-+         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
-+         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
-+         esac
-+       fi
-+       case $p in
-+       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
-+       esac
-        if test "$pre_test_object_deps_done" = no; then
--	 case $p in
--	 -L* | -R*)
-+	 case ${prev} in
-+	 -L | -R)
- 	   # Internal compiler library paths should come after those
- 	   # provided the user.  The postdeps already come after the
- 	   # user supplied libs so there is no need to process them.
-@@ -6571,8 +6986,10 @@ if AC_TRY_EVAL(ac_compile); then
- 	   _LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} ${prev}${p}"
- 	 fi
-        fi
-+       prev=
-        ;;
- 
-+    *.lto.$objext) ;; # Ignore GCC LTO objects
-     *.$objext)
-        # This assumes that the test object file only shows up
-        # once in the compiler output.
-@@ -6608,6 +7025,7 @@ else
- fi
- 
- $RM -f confest.$objext
-+CFLAGS=$_lt_libdeps_save_CFLAGS
- 
- # PORTME: override above test on systems where it is broken
- m4_if([$1], [CXX],
-@@ -6644,7 +7062,7 @@ linux*)
- 
- solaris*)
-   case $cc_basename in
--  CC*)
-+  CC* | sunCC*)
-     # The more standards-conforming stlport4 library is
-     # incompatible with the Cstd library. Avoid specifying
-     # it if it's in CXXFLAGS. Ignore libCrun as
-@@ -6757,7 +7175,9 @@ if test "$_lt_disable_F77" != yes; then
-   # Allow CC to be a program name with arguments.
-   lt_save_CC="$CC"
-   lt_save_GCC=$GCC
-+  lt_save_CFLAGS=$CFLAGS
-   CC=${F77-"f77"}
-+  CFLAGS=$FFLAGS
-   compiler=$CC
-   _LT_TAGVAR(compiler, $1)=$CC
-   _LT_CC_BASENAME([$compiler])
-@@ -6811,6 +7231,7 @@ if test "$_lt_disable_F77" != yes; then
- 
-   GCC=$lt_save_GCC
-   CC="$lt_save_CC"
-+  CFLAGS="$lt_save_CFLAGS"
- fi # test "$_lt_disable_F77" != yes
- 
- AC_LANG_POP
-@@ -6887,7 +7308,9 @@ if test "$_lt_disable_FC" != yes; then
-   # Allow CC to be a program name with arguments.
-   lt_save_CC="$CC"
-   lt_save_GCC=$GCC
-+  lt_save_CFLAGS=$CFLAGS
-   CC=${FC-"f95"}
-+  CFLAGS=$FCFLAGS
-   compiler=$CC
-   GCC=$ac_cv_fc_compiler_gnu
- 
-@@ -6943,7 +7366,8 @@ if test "$_lt_disable_FC" != yes; then
-   fi # test -n "$compiler"
- 
-   GCC=$lt_save_GCC
--  CC="$lt_save_CC"
-+  CC=$lt_save_CC
-+  CFLAGS=$lt_save_CFLAGS
- fi # test "$_lt_disable_FC" != yes
- 
- AC_LANG_POP
-@@ -6980,10 +7404,12 @@ _LT_COMPILER_BOILERPLATE
- _LT_LINKER_BOILERPLATE
- 
- # Allow CC to be a program name with arguments.
--lt_save_CC="$CC"
-+lt_save_CC=$CC
-+lt_save_CFLAGS=$CFLAGS
- lt_save_GCC=$GCC
- GCC=yes
- CC=${GCJ-"gcj"}
-+CFLAGS=$GCJFLAGS
- compiler=$CC
- _LT_TAGVAR(compiler, $1)=$CC
- _LT_TAGVAR(LD, $1)="$LD"
-@@ -7014,7 +7440,8 @@ fi
- AC_LANG_RESTORE
- 
- GCC=$lt_save_GCC
--CC="$lt_save_CC"
-+CC=$lt_save_CC
-+CFLAGS=$lt_save_CFLAGS
- ])# _LT_LANG_GCJ_CONFIG
- 
- 
-@@ -7049,9 +7476,11 @@ _LT_LINKER_BOILERPLATE
- 
- # Allow CC to be a program name with arguments.
- lt_save_CC="$CC"
-+lt_save_CFLAGS=$CFLAGS
- lt_save_GCC=$GCC
- GCC=
- CC=${RC-"windres"}
-+CFLAGS=
- compiler=$CC
- _LT_TAGVAR(compiler, $1)=$CC
- _LT_CC_BASENAME([$compiler])
-@@ -7064,7 +7493,8 @@ fi
- 
- GCC=$lt_save_GCC
- AC_LANG_RESTORE
--CC="$lt_save_CC"
-+CC=$lt_save_CC
-+CFLAGS=$lt_save_CFLAGS
- ])# _LT_LANG_RC_CONFIG
- 
- 
-@@ -7123,6 +7553,15 @@ _LT_DECL([], [OBJDUMP], [1], [An object symbol dumper])
- AC_SUBST([OBJDUMP])
- ])
- 
-+# _LT_DECL_DLLTOOL
-+# ----------------
-+# Ensure DLLTOOL variable is set.
-+m4_defun([_LT_DECL_DLLTOOL],
-+[AC_CHECK_TOOL(DLLTOOL, dlltool, false)
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+_LT_DECL([], [DLLTOOL], [1], [DLL creation program])
-+AC_SUBST([DLLTOOL])
-+])
- 
- # _LT_DECL_SED
- # ------------
-@@ -7216,8 +7655,8 @@ m4_defun([_LT_CHECK_SHELL_FEATURES],
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -7256,206 +7695,162 @@ _LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl
- ])# _LT_CHECK_SHELL_FEATURES
- 
- 
--# _LT_PROG_XSI_SHELLFNS
--# ---------------------
--# Bourne and XSI compatible variants of some useful shell functions.
--m4_defun([_LT_PROG_XSI_SHELLFNS],
--[case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $[*] ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
-+# _LT_PROG_FUNCTION_REPLACE (FUNCNAME, REPLACEMENT-BODY)
-+# ------------------------------------------------------
-+# In `$cfgfile', look for function FUNCNAME delimited by `^FUNCNAME ()$' and
-+# '^} FUNCNAME ', and replace its body with REPLACEMENT-BODY.
-+m4_defun([_LT_PROG_FUNCTION_REPLACE],
-+[dnl {
-+sed -e '/^$1 ()$/,/^} # $1 /c\
-+$1 ()\
-+{\
-+m4_bpatsubsts([$2], [$], [\\], [^\([	 ]\)], [\\\1])
-+} # Extended-shell $1 implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+])
- 
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
- 
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
-+# _LT_PROG_REPLACE_SHELLFNS
-+# -------------------------
-+# Replace existing portable implementations of several shell functions with
-+# equivalent extended shell implementations where those features are available..
-+m4_defun([_LT_PROG_REPLACE_SHELLFNS],
-+[if test x"$xsi_shell" = xyes; then
-+  _LT_PROG_FUNCTION_REPLACE([func_dirname], [dnl
-+    case ${1} in
-+      */*) func_dirname_result="${1%/*}${2}" ;;
-+      *  ) func_dirname_result="${3}" ;;
-+    esac])
-+
-+  _LT_PROG_FUNCTION_REPLACE([func_basename], [dnl
-+    func_basename_result="${1##*/}"])
-+
-+  _LT_PROG_FUNCTION_REPLACE([func_dirname_and_basename], [dnl
-+    case ${1} in
-+      */*) func_dirname_result="${1%/*}${2}" ;;
-+      *  ) func_dirname_result="${3}" ;;
-+    esac
-+    func_basename_result="${1##*/}"])
- 
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_stripname], [dnl
-+    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
-+    # positional parameters, so assign one to ordinary parameter first.
-+    func_stripname_result=${3}
-+    func_stripname_result=${func_stripname_result#"${1}"}
-+    func_stripname_result=${func_stripname_result%"${2}"}])
- 
--dnl func_dirname_and_basename
--dnl A portable version of this function is already defined in general.m4sh
--dnl so there is no need for it here.
-+  _LT_PROG_FUNCTION_REPLACE([func_split_long_opt], [dnl
-+    func_split_long_opt_name=${1%%=*}
-+    func_split_long_opt_arg=${1#*=}])
- 
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_split_short_opt], [dnl
-+    func_split_short_opt_arg=${1#??}
-+    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}])
- 
--# sed scripts:
--my_sed_long_opt='1s/^\(-[[^=]]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[[^=]]*=//'
-+  _LT_PROG_FUNCTION_REPLACE([func_lo2o], [dnl
-+    case ${1} in
-+      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
-+      *)    func_lo2o_result=${1} ;;
-+    esac])
- 
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_xform], [    func_xform_result=${1%.*}.lo])
- 
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_arith], [    func_arith_result=$(( $[*] ))])
- 
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[[^.]]*$/.lo/'`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_len], [    func_len_result=${#1}])
-+fi
- 
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$[@]"`
--}
-+if test x"$lt_shell_append" = xyes; then
-+  _LT_PROG_FUNCTION_REPLACE([func_append], [    eval "${1}+=\\${2}"])
- 
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$[1]" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_append_quoted], [dnl
-+    func_quote_for_eval "${2}"
-+dnl m4 expansion turns \\\\ into \\, and then the shell eval turns that into \
-+    eval "${1}+=\\\\ \\$func_quote_for_eval_result"])
- 
--_LT_EOF
--esac
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
- 
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  AC_MSG_WARN([Unable to substitute extended shell functions in $ofile])
-+fi
-+])
- 
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$[1]+=\$[2]"
--}
--_LT_EOF
-+# _LT_PATH_CONVERSION_FUNCTIONS
-+# -----------------------------
-+# Determine which file name conversion functions should be used by
-+# func_to_host_file (and, implicitly, by func_to_host_path).  These are needed
-+# for certain cross-compile configurations and native mingw.
-+m4_defun([_LT_PATH_CONVERSION_FUNCTIONS],
-+[AC_REQUIRE([AC_CANONICAL_HOST])dnl
-+AC_REQUIRE([AC_CANONICAL_BUILD])dnl
-+AC_MSG_CHECKING([how to convert $build file names to $host format])
-+AC_CACHE_VAL(lt_cv_to_host_file_cmd,
-+[case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-     ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$[1]=\$$[1]\$[2]"
--}
--
--_LT_EOF
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-     ;;
--  esac
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+])
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+AC_MSG_RESULT([$lt_cv_to_host_file_cmd])
-+_LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd],
-+         [0], [convert $build file names to $host format])dnl
-+
-+AC_MSG_CHECKING([how to convert $build file names to toolchain format])
-+AC_CACHE_VAL(lt_cv_to_tool_file_cmd,
-+[#assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
- ])
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+AC_MSG_RESULT([$lt_cv_to_tool_file_cmd])
-+_LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd],
-+         [0], [convert $build files to toolchain format])dnl
-+])# _LT_PATH_CONVERSION_FUNCTIONS
-diff --git a/ltmain.sh b/ltmain.sh
-index 9503ec85d70..70e856e0659 100644
---- a/ltmain.sh
-+++ b/ltmain.sh
-@@ -1,10 +1,9 @@
--# Generated from ltmain.m4sh.
- 
--# libtool (GNU libtool 1.3134 2009-11-29) 2.2.7a
-+# libtool (GNU libtool) 2.4
- # Written by Gordon Matzigkeit <gord@gnu.ai.mit.edu>, 1996
- 
- # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006,
--# 2007, 2008, 2009 Free Software Foundation, Inc.
-+# 2007, 2008, 2009, 2010 Free Software Foundation, Inc.
- # This is free software; see the source for copying conditions.  There is NO
- # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- 
-@@ -38,7 +37,6 @@
- #   -n, --dry-run            display commands without modifying any files
- #       --features           display basic configuration information and exit
- #       --mode=MODE          use operation mode MODE
--#       --no-finish          let install mode avoid finish commands
- #       --preserve-dup-deps  don't remove duplicate dependency libraries
- #       --quiet, --silent    don't print informational messages
- #       --no-quiet, --no-silent
-@@ -71,17 +69,19 @@
- #         compiler:		$LTCC
- #         compiler flags:		$LTCFLAGS
- #         linker:		$LD (gnu? $with_gnu_ld)
--#         $progname:	(GNU libtool 1.3134 2009-11-29) 2.2.7a
-+#         $progname:	(GNU libtool) 2.4
- #         automake:	$automake_version
- #         autoconf:	$autoconf_version
- #
- # Report bugs to <bug-libtool@gnu.org>.
-+# GNU libtool home page: <http://www.gnu.org/software/libtool/>.
-+# General help using GNU software: <http://www.gnu.org/gethelp/>.
- 
- PROGRAM=libtool
- PACKAGE=libtool
--VERSION=2.2.7a
--TIMESTAMP=" 1.3134 2009-11-29"
--package_revision=1.3134
-+VERSION=2.4
-+TIMESTAMP=""
-+package_revision=1.3293
- 
- # Be Bourne compatible
- if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
-@@ -106,9 +106,6 @@ _LTECHO_EOF'
- }
- 
- # NLS nuisances: We save the old values to restore during execute mode.
--# Only set LANG and LC_ALL to C if already set.
--# These must not be set unconditionally because not all systems understand
--# e.g. LANG=C (notably SCO).
- lt_user_locale=
- lt_safe_locale=
- for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES
-@@ -121,15 +118,13 @@ do
- 	  lt_safe_locale=\"$lt_var=C; \$lt_safe_locale\"
- 	fi"
- done
-+LC_ALL=C
-+LANGUAGE=C
-+export LANGUAGE LC_ALL
- 
- $lt_unset CDPATH
- 
- 
--
--
--
--
--
- # Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh
- # is ksh but when the shell is invoked as "sh" and the current value of
- # the _XPG environment variable is not equal to 1 (one), the special
-@@ -140,7 +135,7 @@ progpath="$0"
- 
- 
- : ${CP="cp -f"}
--: ${ECHO=$as_echo}
-+test "${ECHO+set}" = set || ECHO=${as_echo-'printf %s\n'}
- : ${EGREP="/bin/grep -E"}
- : ${FGREP="/bin/grep -F"}
- : ${GREP="/bin/grep"}
-@@ -149,7 +144,7 @@ progpath="$0"
- : ${MKDIR="mkdir"}
- : ${MV="mv -f"}
- : ${RM="rm -f"}
--: ${SED="/mount/endor/wildenhu/local-x86_64/bin/sed"}
-+: ${SED="/bin/sed"}
- : ${SHELL="${CONFIG_SHELL-/bin/sh}"}
- : ${Xsed="$SED -e 1s/^X//"}
- 
-@@ -169,6 +164,27 @@ IFS=" 	$lt_nl"
- dirname="s,/[^/]*$,,"
- basename="s,^.*/,,"
- 
-+# func_dirname file append nondir_replacement
-+# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
-+# otherwise set result to NONDIR_REPLACEMENT.
-+func_dirname ()
-+{
-+    func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
-+    if test "X$func_dirname_result" = "X${1}"; then
-+      func_dirname_result="${3}"
-+    else
-+      func_dirname_result="$func_dirname_result${2}"
-+    fi
-+} # func_dirname may be replaced by extended shell implementation
-+
-+
-+# func_basename file
-+func_basename ()
-+{
-+    func_basename_result=`$ECHO "${1}" | $SED "$basename"`
-+} # func_basename may be replaced by extended shell implementation
-+
-+
- # func_dirname_and_basename file append nondir_replacement
- # perform func_basename and func_dirname in a single function
- # call:
-@@ -183,17 +199,31 @@ basename="s,^.*/,,"
- # those functions but instead duplicate the functionality here.
- func_dirname_and_basename ()
- {
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--  func_basename_result=`$ECHO "${1}" | $SED -e "$basename"`
--}
-+    # Extract subdirectory from the argument.
-+    func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"`
-+    if test "X$func_dirname_result" = "X${1}"; then
-+      func_dirname_result="${3}"
-+    else
-+      func_dirname_result="$func_dirname_result${2}"
-+    fi
-+    func_basename_result=`$ECHO "${1}" | $SED -e "$basename"`
-+} # func_dirname_and_basename may be replaced by extended shell implementation
-+
-+
-+# func_stripname prefix suffix name
-+# strip PREFIX and SUFFIX off of NAME.
-+# PREFIX and SUFFIX must not contain globbing or regex special
-+# characters, hashes, percent signs, but SUFFIX may contain a leading
-+# dot (in which case that matches only a dot).
-+# func_strip_suffix prefix name
-+func_stripname ()
-+{
-+    case ${2} in
-+      .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
-+      *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
-+    esac
-+} # func_stripname may be replaced by extended shell implementation
- 
--# Generated shell functions inserted here.
- 
- # These SED scripts presuppose an absolute path with a trailing slash.
- pathcar='s,^/\([^/]*\).*$,\1,'
-@@ -376,6 +406,15 @@ sed_quote_subst='s/\([`"$\\]\)/\\\1/g'
- # Same as above, but do not quote variable references.
- double_quote_subst='s/\(["`\\]\)/\\\1/g'
- 
-+# Sed substitution that turns a string into a regex matching for the
-+# string literally.
-+sed_make_literal_regex='s,[].[^$\\*\/],\\&,g'
-+
-+# Sed substitution that converts a w32 file name or path
-+# which contains forward slashes, into one that contains
-+# (escaped) backslashes.  A very naive implementation.
-+lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
-+
- # Re-`\' parameter expansions in output of double_quote_subst that were
- # `\'-ed in input to the same.  If an odd number of `\' preceded a '$'
- # in input to double_quote_subst, that '$' was protected from expansion.
-@@ -404,7 +443,7 @@ opt_warning=:
- # name if it has been set yet.
- func_echo ()
- {
--    $ECHO "$progname${mode+: }$mode: $*"
-+    $ECHO "$progname: ${opt_mode+$opt_mode: }$*"
- }
- 
- # func_verbose arg...
-@@ -430,14 +469,14 @@ func_echo_all ()
- # Echo program name prefixed message to standard error.
- func_error ()
- {
--    $ECHO "$progname${mode+: }$mode: "${1+"$@"} 1>&2
-+    $ECHO "$progname: ${opt_mode+$opt_mode: }"${1+"$@"} 1>&2
- }
- 
- # func_warning arg...
- # Echo program name prefixed warning message to standard error.
- func_warning ()
- {
--    $opt_warning && $ECHO "$progname${mode+: }$mode: warning: "${1+"$@"} 1>&2
-+    $opt_warning && $ECHO "$progname: ${opt_mode+$opt_mode: }warning: "${1+"$@"} 1>&2
- 
-     # bash bug again:
-     :
-@@ -656,19 +695,35 @@ func_show_eval_locale ()
-     fi
- }
- 
--
--
-+# func_tr_sh
-+# Turn $1 into a string suitable for a shell variable name.
-+# Result is stored in $func_tr_sh_result.  All characters
-+# not in the set a-zA-Z0-9_ are replaced with '_'. Further,
-+# if $1 begins with a digit, a '_' is prepended as well.
-+func_tr_sh ()
-+{
-+  case $1 in
-+  [0-9]* | *[!a-zA-Z0-9_]*)
-+    func_tr_sh_result=`$ECHO "$1" | $SED 's/^\([0-9]\)/_\1/; s/[^a-zA-Z0-9_]/_/g'`
-+    ;;
-+  * )
-+    func_tr_sh_result=$1
-+    ;;
-+  esac
-+}
- 
- 
- # func_version
- # Echo version message to standard output and exit.
- func_version ()
- {
-+    $opt_debug
-+
-     $SED -n '/(C)/!b go
- 	:more
- 	/\./!{
- 	  N
--	  s/\n# //
-+	  s/\n# / /
- 	  b more
- 	}
- 	:go
-@@ -685,7 +740,9 @@ func_version ()
- # Echo short help message to standard output and exit.
- func_usage ()
- {
--    $SED -n '/^# Usage:/,/^#  *-h/ {
-+    $opt_debug
-+
-+    $SED -n '/^# Usage:/,/^#  *.*--help/ {
-         s/^# //
- 	s/^# *$//
- 	s/\$progname/'$progname'/
-@@ -701,7 +758,10 @@ func_usage ()
- # unless 'noexit' is passed as argument.
- func_help ()
- {
-+    $opt_debug
-+
-     $SED -n '/^# Usage:/,/# Report bugs to/ {
-+	:print
-         s/^# //
- 	s/^# *$//
- 	s*\$progname*'$progname'*
-@@ -714,7 +774,11 @@ func_help ()
- 	s/\$automake_version/'"`(automake --version) 2>/dev/null |$SED 1q`"'/
- 	s/\$autoconf_version/'"`(autoconf --version) 2>/dev/null |$SED 1q`"'/
- 	p
--     }' < "$progpath"
-+	d
-+     }
-+     /^# .* home page:/b print
-+     /^# General help using/b print
-+     ' < "$progpath"
-     ret=$?
-     if test -z "$1"; then
-       exit $ret
-@@ -726,12 +790,39 @@ func_help ()
- # exit_cmd.
- func_missing_arg ()
- {
--    func_error "missing argument for $1"
-+    $opt_debug
-+
-+    func_error "missing argument for $1."
-     exit_cmd=exit
- }
- 
--exit_cmd=:
- 
-+# func_split_short_opt shortopt
-+# Set func_split_short_opt_name and func_split_short_opt_arg shell
-+# variables after splitting SHORTOPT after the 2nd character.
-+func_split_short_opt ()
-+{
-+    my_sed_short_opt='1s/^\(..\).*$/\1/;q'
-+    my_sed_short_rest='1s/^..\(.*\)$/\1/;q'
-+
-+    func_split_short_opt_name=`$ECHO "$1" | $SED "$my_sed_short_opt"`
-+    func_split_short_opt_arg=`$ECHO "$1" | $SED "$my_sed_short_rest"`
-+} # func_split_short_opt may be replaced by extended shell implementation
-+
-+
-+# func_split_long_opt longopt
-+# Set func_split_long_opt_name and func_split_long_opt_arg shell
-+# variables after splitting LONGOPT at the `=' sign.
-+func_split_long_opt ()
-+{
-+    my_sed_long_opt='1s/^\(--[^=]*\)=.*/\1/;q'
-+    my_sed_long_arg='1s/^--[^=]*=//'
-+
-+    func_split_long_opt_name=`$ECHO "$1" | $SED "$my_sed_long_opt"`
-+    func_split_long_opt_arg=`$ECHO "$1" | $SED "$my_sed_long_arg"`
-+} # func_split_long_opt may be replaced by extended shell implementation
-+
-+exit_cmd=:
- 
- 
- 
-@@ -741,26 +832,64 @@ magic="%%%MAGIC variable%%%"
- magic_exe="%%%MAGIC EXE variable%%%"
- 
- # Global variables.
--# $mode is unset
- nonopt=
--execute_dlfiles=
- preserve_args=
- lo2o="s/\\.lo\$/.${objext}/"
- o2lo="s/\\.${objext}\$/.lo/"
- extracted_archives=
- extracted_serial=0
- 
--opt_dry_run=false
--opt_finish=:
--opt_duplicate_deps=false
--opt_silent=false
--opt_debug=:
--
- # If this variable is set in any of the actions, the command in it
- # will be execed at the end.  This prevents here-documents from being
- # left over by shells.
- exec_cmd=
- 
-+# func_append var value
-+# Append VALUE to the end of shell variable VAR.
-+func_append ()
-+{
-+    eval "${1}=\$${1}\${2}"
-+} # func_append may be replaced by extended shell implementation
-+
-+# func_append_quoted var value
-+# Quote VALUE and append to the end of shell variable VAR, separated
-+# by a space.
-+func_append_quoted ()
-+{
-+    func_quote_for_eval "${2}"
-+    eval "${1}=\$${1}\\ \$func_quote_for_eval_result"
-+} # func_append_quoted may be replaced by extended shell implementation
-+
-+
-+# func_arith arithmetic-term...
-+func_arith ()
-+{
-+    func_arith_result=`expr "${@}"`
-+} # func_arith may be replaced by extended shell implementation
-+
-+
-+# func_len string
-+# STRING may not start with a hyphen.
-+func_len ()
-+{
-+    func_len_result=`expr "${1}" : ".*" 2>/dev/null || echo $max_cmd_len`
-+} # func_len may be replaced by extended shell implementation
-+
-+
-+# func_lo2o object
-+func_lo2o ()
-+{
-+    func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
-+} # func_lo2o may be replaced by extended shell implementation
-+
-+
-+# func_xform libobj-or-source
-+func_xform ()
-+{
-+    func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
-+} # func_xform may be replaced by extended shell implementation
-+
-+
- # func_fatal_configuration arg...
- # Echo program name prefixed message to standard error, followed by
- # a configuration failure hint, and exit.
-@@ -850,130 +979,204 @@ func_enable_tag ()
-   esac
- }
- 
--# Parse options once, thoroughly.  This comes as soon as possible in
--# the script to make things like `libtool --version' happen quickly.
-+# func_check_version_match
-+# Ensure that we are using m4 macros, and libtool script from the same
-+# release of libtool.
-+func_check_version_match ()
- {
-+  if test "$package_revision" != "$macro_revision"; then
-+    if test "$VERSION" != "$macro_version"; then
-+      if test -z "$macro_version"; then
-+        cat >&2 <<_LT_EOF
-+$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
-+$progname: definition of this LT_INIT comes from an older release.
-+$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
-+$progname: and run autoconf again.
-+_LT_EOF
-+      else
-+        cat >&2 <<_LT_EOF
-+$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
-+$progname: definition of this LT_INIT comes from $PACKAGE $macro_version.
-+$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
-+$progname: and run autoconf again.
-+_LT_EOF
-+      fi
-+    else
-+      cat >&2 <<_LT_EOF
-+$progname: Version mismatch error.  This is $PACKAGE $VERSION, revision $package_revision,
-+$progname: but the definition of this LT_INIT comes from revision $macro_revision.
-+$progname: You should recreate aclocal.m4 with macros from revision $package_revision
-+$progname: of $PACKAGE $VERSION and run autoconf again.
-+_LT_EOF
-+    fi
- 
--  # Shorthand for --mode=foo, only valid as the first argument
--  case $1 in
--  clean|clea|cle|cl)
--    shift; set dummy --mode clean ${1+"$@"}; shift
--    ;;
--  compile|compil|compi|comp|com|co|c)
--    shift; set dummy --mode compile ${1+"$@"}; shift
--    ;;
--  execute|execut|execu|exec|exe|ex|e)
--    shift; set dummy --mode execute ${1+"$@"}; shift
--    ;;
--  finish|finis|fini|fin|fi|f)
--    shift; set dummy --mode finish ${1+"$@"}; shift
--    ;;
--  install|instal|insta|inst|ins|in|i)
--    shift; set dummy --mode install ${1+"$@"}; shift
--    ;;
--  link|lin|li|l)
--    shift; set dummy --mode link ${1+"$@"}; shift
--    ;;
--  uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u)
--    shift; set dummy --mode uninstall ${1+"$@"}; shift
--    ;;
--  esac
-+    exit $EXIT_MISMATCH
-+  fi
-+}
-+
-+
-+# Shorthand for --mode=foo, only valid as the first argument
-+case $1 in
-+clean|clea|cle|cl)
-+  shift; set dummy --mode clean ${1+"$@"}; shift
-+  ;;
-+compile|compil|compi|comp|com|co|c)
-+  shift; set dummy --mode compile ${1+"$@"}; shift
-+  ;;
-+execute|execut|execu|exec|exe|ex|e)
-+  shift; set dummy --mode execute ${1+"$@"}; shift
-+  ;;
-+finish|finis|fini|fin|fi|f)
-+  shift; set dummy --mode finish ${1+"$@"}; shift
-+  ;;
-+install|instal|insta|inst|ins|in|i)
-+  shift; set dummy --mode install ${1+"$@"}; shift
-+  ;;
-+link|lin|li|l)
-+  shift; set dummy --mode link ${1+"$@"}; shift
-+  ;;
-+uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u)
-+  shift; set dummy --mode uninstall ${1+"$@"}; shift
-+  ;;
-+esac
- 
--  # Parse non-mode specific arguments:
--  while test "$#" -gt 0; do
-+
-+
-+# Option defaults:
-+opt_debug=:
-+opt_dry_run=false
-+opt_config=false
-+opt_preserve_dup_deps=false
-+opt_features=false
-+opt_finish=false
-+opt_help=false
-+opt_help_all=false
-+opt_silent=:
-+opt_verbose=:
-+opt_silent=false
-+opt_verbose=false
-+
-+
-+# Parse options once, thoroughly.  This comes as soon as possible in the
-+# script to make things like `--version' happen as quickly as we can.
-+{
-+  # this just eases exit handling
-+  while test $# -gt 0; do
-     opt="$1"
-     shift
--
-     case $opt in
--      --config)		func_config					;;
--
--      --debug)		preserve_args="$preserve_args $opt"
-+      --debug|-x)	opt_debug='set -x'
- 			func_echo "enabling shell trace mode"
--			opt_debug='set -x'
- 			$opt_debug
- 			;;
--
--      -dlopen)		test "$#" -eq 0 && func_missing_arg "$opt" && break
--			execute_dlfiles="$execute_dlfiles $1"
--			shift
-+      --dry-run|--dryrun|-n)
-+			opt_dry_run=:
- 			;;
--
--      --dry-run | -n)	opt_dry_run=:					;;
--      --features)       func_features					;;
--      --finish)		mode="finish"					;;
--      --no-finish)	opt_finish=false				;;
--
--      --mode)		test "$#" -eq 0 && func_missing_arg "$opt" && break
--			case $1 in
--			  # Valid mode arguments:
--			  clean)	;;
--			  compile)	;;
--			  execute)	;;
--			  finish)	;;
--			  install)	;;
--			  link)		;;
--			  relink)	;;
--			  uninstall)	;;
--
--			  # Catch anything else as an error
--			  *) func_error "invalid argument for $opt"
--			     exit_cmd=exit
--			     break
--			     ;;
--		        esac
--
--			mode="$1"
-+      --config)
-+			opt_config=:
-+func_config
-+			;;
-+      --dlopen|-dlopen)
-+			optarg="$1"
-+			opt_dlopen="${opt_dlopen+$opt_dlopen
-+}$optarg"
- 			shift
- 			;;
--
-       --preserve-dup-deps)
--			opt_duplicate_deps=:				;;
--
--      --quiet|--silent)	preserve_args="$preserve_args $opt"
--			opt_silent=:
--			opt_verbose=false
-+			opt_preserve_dup_deps=:
- 			;;
--
--      --no-quiet|--no-silent)
--			preserve_args="$preserve_args $opt"
--			opt_silent=false
-+      --features)
-+			opt_features=:
-+func_features
- 			;;
--
--      --verbose| -v)	preserve_args="$preserve_args $opt"
-+      --finish)
-+			opt_finish=:
-+set dummy --mode finish ${1+"$@"}; shift
-+			;;
-+      --help)
-+			opt_help=:
-+			;;
-+      --help-all)
-+			opt_help_all=:
-+opt_help=': help-all'
-+			;;
-+      --mode)
-+			test $# = 0 && func_missing_arg $opt && break
-+			optarg="$1"
-+			opt_mode="$optarg"
-+case $optarg in
-+  # Valid mode arguments:
-+  clean|compile|execute|finish|install|link|relink|uninstall) ;;
-+
-+  # Catch anything else as an error
-+  *) func_error "invalid argument for $opt"
-+     exit_cmd=exit
-+     break
-+     ;;
-+esac
-+			shift
-+			;;
-+      --no-silent|--no-quiet)
- 			opt_silent=false
--			opt_verbose=:
-+func_append preserve_args " $opt"
- 			;;
--
--      --no-verbose)	preserve_args="$preserve_args $opt"
-+      --no-verbose)
- 			opt_verbose=false
-+func_append preserve_args " $opt"
- 			;;
--
--      --tag)		test "$#" -eq 0 && func_missing_arg "$opt" && break
--			preserve_args="$preserve_args $opt $1"
--			func_enable_tag "$1"	# tagname is set here
-+      --silent|--quiet)
-+			opt_silent=:
-+func_append preserve_args " $opt"
-+        opt_verbose=false
-+			;;
-+      --verbose|-v)
-+			opt_verbose=:
-+func_append preserve_args " $opt"
-+opt_silent=false
-+			;;
-+      --tag)
-+			test $# = 0 && func_missing_arg $opt && break
-+			optarg="$1"
-+			opt_tag="$optarg"
-+func_append preserve_args " $opt $optarg"
-+func_enable_tag "$optarg"
- 			shift
- 			;;
- 
-+      -\?|-h)		func_usage				;;
-+      --help)		func_help				;;
-+      --version)	func_version				;;
-+
-       # Separate optargs to long options:
--      -dlopen=*|--mode=*|--tag=*)
--			func_opt_split "$opt"
--			set dummy "$func_opt_split_opt" "$func_opt_split_arg" ${1+"$@"}
-+      --*=*)
-+			func_split_long_opt "$opt"
-+			set dummy "$func_split_long_opt_name" "$func_split_long_opt_arg" ${1+"$@"}
- 			shift
- 			;;
- 
--      -\?|-h)		func_usage					;;
--      --help)		opt_help=:					;;
--      --help-all)	opt_help=': help-all'				;;
--      --version)	func_version					;;
--
--      -*)		func_fatal_help "unrecognized option \`$opt'"	;;
--
--      *)		nonopt="$opt"
--			break
-+      # Separate non-argument short options:
-+      -\?*|-h*|-n*|-v*)
-+			func_split_short_opt "$opt"
-+			set dummy "$func_split_short_opt_name" "-$func_split_short_opt_arg" ${1+"$@"}
-+			shift
- 			;;
-+
-+      --)		break					;;
-+      -*)		func_fatal_help "unrecognized option \`$opt'" ;;
-+      *)		set dummy "$opt" ${1+"$@"};	shift; break  ;;
-     esac
-   done
- 
-+  # Validate options:
-+
-+  # save first non-option argument
-+  if test "$#" -gt 0; then
-+    nonopt="$opt"
-+    shift
-+  fi
-+
-+  # preserve --debug
-+  test "$opt_debug" = : || func_append preserve_args " --debug"
- 
-   case $host in
-     *cygwin* | *mingw* | *pw32* | *cegcc* | *solaris2* )
-@@ -981,82 +1184,44 @@ func_enable_tag ()
-       opt_duplicate_compiler_generated_deps=:
-       ;;
-     *)
--      opt_duplicate_compiler_generated_deps=$opt_duplicate_deps
-+      opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps
-       ;;
-   esac
- 
--  # Having warned about all mis-specified options, bail out if
--  # anything was wrong.
--  $exit_cmd $EXIT_FAILURE
--}
-+  $opt_help || {
-+    # Sanity checks first:
-+    func_check_version_match
- 
--# func_check_version_match
--# Ensure that we are using m4 macros, and libtool script from the same
--# release of libtool.
--func_check_version_match ()
--{
--  if test "$package_revision" != "$macro_revision"; then
--    if test "$VERSION" != "$macro_version"; then
--      if test -z "$macro_version"; then
--        cat >&2 <<_LT_EOF
--$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
--$progname: definition of this LT_INIT comes from an older release.
--$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
--$progname: and run autoconf again.
--_LT_EOF
--      else
--        cat >&2 <<_LT_EOF
--$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
--$progname: definition of this LT_INIT comes from $PACKAGE $macro_version.
--$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
--$progname: and run autoconf again.
--_LT_EOF
--      fi
--    else
--      cat >&2 <<_LT_EOF
--$progname: Version mismatch error.  This is $PACKAGE $VERSION, revision $package_revision,
--$progname: but the definition of this LT_INIT comes from revision $macro_revision.
--$progname: You should recreate aclocal.m4 with macros from revision $package_revision
--$progname: of $PACKAGE $VERSION and run autoconf again.
--_LT_EOF
-+    if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then
-+      func_fatal_configuration "not configured to build any kind of library"
-     fi
- 
--    exit $EXIT_MISMATCH
--  fi
--}
--
-+    # Darwin sucks
-+    eval std_shrext=\"$shrext_cmds\"
- 
--## ----------- ##
--##    Main.    ##
--## ----------- ##
--
--$opt_help || {
--  # Sanity checks first:
--  func_check_version_match
--
--  if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then
--    func_fatal_configuration "not configured to build any kind of library"
--  fi
-+    # Only execute mode is allowed to have -dlopen flags.
-+    if test -n "$opt_dlopen" && test "$opt_mode" != execute; then
-+      func_error "unrecognized option \`-dlopen'"
-+      $ECHO "$help" 1>&2
-+      exit $EXIT_FAILURE
-+    fi
- 
--  test -z "$mode" && func_fatal_error "error: you must specify a MODE."
-+    # Change the help message to a mode-specific one.
-+    generic_help="$help"
-+    help="Try \`$progname --help --mode=$opt_mode' for more information."
-+  }
- 
- 
--  # Darwin sucks
--  eval "std_shrext=\"$shrext_cmds\""
-+  # Bail if the options were screwed
-+  $exit_cmd $EXIT_FAILURE
-+}
- 
- 
--  # Only execute mode is allowed to have -dlopen flags.
--  if test -n "$execute_dlfiles" && test "$mode" != execute; then
--    func_error "unrecognized option \`-dlopen'"
--    $ECHO "$help" 1>&2
--    exit $EXIT_FAILURE
--  fi
- 
--  # Change the help message to a mode-specific one.
--  generic_help="$help"
--  help="Try \`$progname --help --mode=$mode' for more information."
--}
- 
-+## ----------- ##
-+##    Main.    ##
-+## ----------- ##
- 
- # func_lalib_p file
- # True iff FILE is a libtool `.la' library or `.lo' object file.
-@@ -1121,12 +1286,9 @@ func_ltwrapper_executable_p ()
- # temporary ltwrapper_script.
- func_ltwrapper_scriptname ()
- {
--    func_ltwrapper_scriptname_result=""
--    if func_ltwrapper_executable_p "$1"; then
--	func_dirname_and_basename "$1" "" "."
--	func_stripname '' '.exe' "$func_basename_result"
--	func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper"
--    fi
-+    func_dirname_and_basename "$1" "" "."
-+    func_stripname '' '.exe' "$func_basename_result"
-+    func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper"
- }
- 
- # func_ltwrapper_p file
-@@ -1149,7 +1311,7 @@ func_execute_cmds ()
-     save_ifs=$IFS; IFS='~'
-     for cmd in $1; do
-       IFS=$save_ifs
--      eval "cmd=\"$cmd\""
-+      eval cmd=\"$cmd\"
-       func_show_eval "$cmd" "${2-:}"
-     done
-     IFS=$save_ifs
-@@ -1172,6 +1334,37 @@ func_source ()
- }
- 
- 
-+# func_resolve_sysroot PATH
-+# Replace a leading = in PATH with a sysroot.  Store the result into
-+# func_resolve_sysroot_result
-+func_resolve_sysroot ()
-+{
-+  func_resolve_sysroot_result=$1
-+  case $func_resolve_sysroot_result in
-+  =*)
-+    func_stripname '=' '' "$func_resolve_sysroot_result"
-+    func_resolve_sysroot_result=$lt_sysroot$func_stripname_result
-+    ;;
-+  esac
-+}
-+
-+# func_replace_sysroot PATH
-+# If PATH begins with the sysroot, replace it with = and
-+# store the result into func_replace_sysroot_result.
-+func_replace_sysroot ()
-+{
-+  case "$lt_sysroot:$1" in
-+  ?*:"$lt_sysroot"*)
-+    func_stripname "$lt_sysroot" '' "$1"
-+    func_replace_sysroot_result="=$func_stripname_result"
-+    ;;
-+  *)
-+    # Including no sysroot.
-+    func_replace_sysroot_result=$1
-+    ;;
-+  esac
-+}
-+
- # func_infer_tag arg
- # Infer tagged configuration to use if any are available and
- # if one wasn't chosen via the "--tag" command line option.
-@@ -1184,8 +1377,7 @@ func_infer_tag ()
-     if test -n "$available_tags" && test -z "$tagname"; then
-       CC_quoted=
-       for arg in $CC; do
--        func_quote_for_eval "$arg"
--	CC_quoted="$CC_quoted $func_quote_for_eval_result"
-+	func_append_quoted CC_quoted "$arg"
-       done
-       CC_expanded=`func_echo_all $CC`
-       CC_quoted_expanded=`func_echo_all $CC_quoted`
-@@ -1204,8 +1396,7 @@ func_infer_tag ()
- 	    CC_quoted=
- 	    for arg in $CC; do
- 	      # Double-quote args containing other shell metacharacters.
--	      func_quote_for_eval "$arg"
--	      CC_quoted="$CC_quoted $func_quote_for_eval_result"
-+	      func_append_quoted CC_quoted "$arg"
- 	    done
- 	    CC_expanded=`func_echo_all $CC`
- 	    CC_quoted_expanded=`func_echo_all $CC_quoted`
-@@ -1274,6 +1465,486 @@ EOF
-     }
- }
- 
-+
-+##################################################
-+# FILE NAME AND PATH CONVERSION HELPER FUNCTIONS #
-+##################################################
-+
-+# func_convert_core_file_wine_to_w32 ARG
-+# Helper function used by file name conversion functions when $build is *nix,
-+# and $host is mingw, cygwin, or some other w32 environment. Relies on a
-+# correctly configured wine environment available, with the winepath program
-+# in $build's $PATH.
-+#
-+# ARG is the $build file name to be converted to w32 format.
-+# Result is available in $func_convert_core_file_wine_to_w32_result, and will
-+# be empty on error (or when ARG is empty)
-+func_convert_core_file_wine_to_w32 ()
-+{
-+  $opt_debug
-+  func_convert_core_file_wine_to_w32_result="$1"
-+  if test -n "$1"; then
-+    # Unfortunately, winepath does not exit with a non-zero error code, so we
-+    # are forced to check the contents of stdout. On the other hand, if the
-+    # command is not found, the shell will set an exit code of 127 and print
-+    # *an error message* to stdout. So we must check for both error code of
-+    # zero AND non-empty stdout, which explains the odd construction:
-+    func_convert_core_file_wine_to_w32_tmp=`winepath -w "$1" 2>/dev/null`
-+    if test "$?" -eq 0 && test -n "${func_convert_core_file_wine_to_w32_tmp}"; then
-+      func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" |
-+        $SED -e "$lt_sed_naive_backslashify"`
-+    else
-+      func_convert_core_file_wine_to_w32_result=
-+    fi
-+  fi
-+}
-+# end: func_convert_core_file_wine_to_w32
-+
-+
-+# func_convert_core_path_wine_to_w32 ARG
-+# Helper function used by path conversion functions when $build is *nix, and
-+# $host is mingw, cygwin, or some other w32 environment. Relies on a correctly
-+# configured wine environment available, with the winepath program in $build's
-+# $PATH. Assumes ARG has no leading or trailing path separator characters.
-+#
-+# ARG is path to be converted from $build format to win32.
-+# Result is available in $func_convert_core_path_wine_to_w32_result.
-+# Unconvertible file (directory) names in ARG are skipped; if no directory names
-+# are convertible, then the result may be empty.
-+func_convert_core_path_wine_to_w32 ()
-+{
-+  $opt_debug
-+  # unfortunately, winepath doesn't convert paths, only file names
-+  func_convert_core_path_wine_to_w32_result=""
-+  if test -n "$1"; then
-+    oldIFS=$IFS
-+    IFS=:
-+    for func_convert_core_path_wine_to_w32_f in $1; do
-+      IFS=$oldIFS
-+      func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f"
-+      if test -n "$func_convert_core_file_wine_to_w32_result" ; then
-+        if test -z "$func_convert_core_path_wine_to_w32_result"; then
-+          func_convert_core_path_wine_to_w32_result="$func_convert_core_file_wine_to_w32_result"
-+        else
-+          func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result"
-+        fi
-+      fi
-+    done
-+    IFS=$oldIFS
-+  fi
-+}
-+# end: func_convert_core_path_wine_to_w32
-+
-+
-+# func_cygpath ARGS...
-+# Wrapper around calling the cygpath program via LT_CYGPATH. This is used when
-+# when (1) $build is *nix and Cygwin is hosted via a wine environment; or (2)
-+# $build is MSYS and $host is Cygwin, or (3) $build is Cygwin. In case (1) or
-+# (2), returns the Cygwin file name or path in func_cygpath_result (input
-+# file name or path is assumed to be in w32 format, as previously converted
-+# from $build's *nix or MSYS format). In case (3), returns the w32 file name
-+# or path in func_cygpath_result (input file name or path is assumed to be in
-+# Cygwin format). Returns an empty string on error.
-+#
-+# ARGS are passed to cygpath, with the last one being the file name or path to
-+# be converted.
-+#
-+# Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH
-+# environment variable; do not put it in $PATH.
-+func_cygpath ()
-+{
-+  $opt_debug
-+  if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then
-+    func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null`
-+    if test "$?" -ne 0; then
-+      # on failure, ensure result is empty
-+      func_cygpath_result=
-+    fi
-+  else
-+    func_cygpath_result=
-+    func_error "LT_CYGPATH is empty or specifies non-existent file: \`$LT_CYGPATH'"
-+  fi
-+}
-+#end: func_cygpath
-+
-+
-+# func_convert_core_msys_to_w32 ARG
-+# Convert file name or path ARG from MSYS format to w32 format.  Return
-+# result in func_convert_core_msys_to_w32_result.
-+func_convert_core_msys_to_w32 ()
-+{
-+  $opt_debug
-+  # awkward: cmd appends spaces to result
-+  func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null |
-+    $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
-+}
-+#end: func_convert_core_msys_to_w32
-+
-+
-+# func_convert_file_check ARG1 ARG2
-+# Verify that ARG1 (a file name in $build format) was converted to $host
-+# format in ARG2. Otherwise, emit an error message, but continue (resetting
-+# func_to_host_file_result to ARG1).
-+func_convert_file_check ()
-+{
-+  $opt_debug
-+  if test -z "$2" && test -n "$1" ; then
-+    func_error "Could not determine host file name corresponding to"
-+    func_error "  \`$1'"
-+    func_error "Continuing, but uninstalled executables may not work."
-+    # Fallback:
-+    func_to_host_file_result="$1"
-+  fi
-+}
-+# end func_convert_file_check
-+
-+
-+# func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH
-+# Verify that FROM_PATH (a path in $build format) was converted to $host
-+# format in TO_PATH. Otherwise, emit an error message, but continue, resetting
-+# func_to_host_file_result to a simplistic fallback value (see below).
-+func_convert_path_check ()
-+{
-+  $opt_debug
-+  if test -z "$4" && test -n "$3"; then
-+    func_error "Could not determine the host path corresponding to"
-+    func_error "  \`$3'"
-+    func_error "Continuing, but uninstalled executables may not work."
-+    # Fallback.  This is a deliberately simplistic "conversion" and
-+    # should not be "improved".  See libtool.info.
-+    if test "x$1" != "x$2"; then
-+      lt_replace_pathsep_chars="s|$1|$2|g"
-+      func_to_host_path_result=`echo "$3" |
-+        $SED -e "$lt_replace_pathsep_chars"`
-+    else
-+      func_to_host_path_result="$3"
-+    fi
-+  fi
-+}
-+# end func_convert_path_check
-+
-+
-+# func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG
-+# Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT
-+# and appending REPL if ORIG matches BACKPAT.
-+func_convert_path_front_back_pathsep ()
-+{
-+  $opt_debug
-+  case $4 in
-+  $1 ) func_to_host_path_result="$3$func_to_host_path_result"
-+    ;;
-+  esac
-+  case $4 in
-+  $2 ) func_append func_to_host_path_result "$3"
-+    ;;
-+  esac
-+}
-+# end func_convert_path_front_back_pathsep
-+
-+
-+##################################################
-+# $build to $host FILE NAME CONVERSION FUNCTIONS #
-+##################################################
-+# invoked via `$to_host_file_cmd ARG'
-+#
-+# In each case, ARG is the path to be converted from $build to $host format.
-+# Result will be available in $func_to_host_file_result.
-+
-+
-+# func_to_host_file ARG
-+# Converts the file name ARG from $build format to $host format. Return result
-+# in func_to_host_file_result.
-+func_to_host_file ()
-+{
-+  $opt_debug
-+  $to_host_file_cmd "$1"
-+}
-+# end func_to_host_file
-+
-+
-+# func_to_tool_file ARG LAZY
-+# converts the file name ARG from $build format to toolchain format. Return
-+# result in func_to_tool_file_result.  If the conversion in use is listed
-+# in (the comma separated) LAZY, no conversion takes place.
-+func_to_tool_file ()
-+{
-+  $opt_debug
-+  case ,$2, in
-+    *,"$to_tool_file_cmd",*)
-+      func_to_tool_file_result=$1
-+      ;;
-+    *)
-+      $to_tool_file_cmd "$1"
-+      func_to_tool_file_result=$func_to_host_file_result
-+      ;;
-+  esac
-+}
-+# end func_to_tool_file
-+
-+
-+# func_convert_file_noop ARG
-+# Copy ARG to func_to_host_file_result.
-+func_convert_file_noop ()
-+{
-+  func_to_host_file_result="$1"
-+}
-+# end func_convert_file_noop
-+
-+
-+# func_convert_file_msys_to_w32 ARG
-+# Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic
-+# conversion to w32 is not available inside the cwrapper.  Returns result in
-+# func_to_host_file_result.
-+func_convert_file_msys_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    func_convert_core_msys_to_w32 "$1"
-+    func_to_host_file_result="$func_convert_core_msys_to_w32_result"
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_msys_to_w32
-+
-+
-+# func_convert_file_cygwin_to_w32 ARG
-+# Convert file name ARG from Cygwin to w32 format.  Returns result in
-+# func_to_host_file_result.
-+func_convert_file_cygwin_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    # because $build is cygwin, we call "the" cygpath in $PATH; no need to use
-+    # LT_CYGPATH in this case.
-+    func_to_host_file_result=`cygpath -m "$1"`
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_cygwin_to_w32
-+
-+
-+# func_convert_file_nix_to_w32 ARG
-+# Convert file name ARG from *nix to w32 format.  Requires a wine environment
-+# and a working winepath. Returns result in func_to_host_file_result.
-+func_convert_file_nix_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    func_convert_core_file_wine_to_w32 "$1"
-+    func_to_host_file_result="$func_convert_core_file_wine_to_w32_result"
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_nix_to_w32
-+
-+
-+# func_convert_file_msys_to_cygwin ARG
-+# Convert file name ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
-+# Returns result in func_to_host_file_result.
-+func_convert_file_msys_to_cygwin ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    func_convert_core_msys_to_w32 "$1"
-+    func_cygpath -u "$func_convert_core_msys_to_w32_result"
-+    func_to_host_file_result="$func_cygpath_result"
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_msys_to_cygwin
-+
-+
-+# func_convert_file_nix_to_cygwin ARG
-+# Convert file name ARG from *nix to Cygwin format.  Requires Cygwin installed
-+# in a wine environment, working winepath, and LT_CYGPATH set.  Returns result
-+# in func_to_host_file_result.
-+func_convert_file_nix_to_cygwin ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    # convert from *nix to w32, then use cygpath to convert from w32 to cygwin.
-+    func_convert_core_file_wine_to_w32 "$1"
-+    func_cygpath -u "$func_convert_core_file_wine_to_w32_result"
-+    func_to_host_file_result="$func_cygpath_result"
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_nix_to_cygwin
-+
-+
-+#############################################
-+# $build to $host PATH CONVERSION FUNCTIONS #
-+#############################################
-+# invoked via `$to_host_path_cmd ARG'
-+#
-+# In each case, ARG is the path to be converted from $build to $host format.
-+# The result will be available in $func_to_host_path_result.
-+#
-+# Path separators are also converted from $build format to $host format.  If
-+# ARG begins or ends with a path separator character, it is preserved (but
-+# converted to $host format) on output.
-+#
-+# All path conversion functions are named using the following convention:
-+#   file name conversion function    : func_convert_file_X_to_Y ()
-+#   path conversion function         : func_convert_path_X_to_Y ()
-+# where, for any given $build/$host combination the 'X_to_Y' value is the
-+# same.  If conversion functions are added for new $build/$host combinations,
-+# the two new functions must follow this pattern, or func_init_to_host_path_cmd
-+# will break.
-+
-+
-+# func_init_to_host_path_cmd
-+# Ensures that function "pointer" variable $to_host_path_cmd is set to the
-+# appropriate value, based on the value of $to_host_file_cmd.
-+to_host_path_cmd=
-+func_init_to_host_path_cmd ()
-+{
-+  $opt_debug
-+  if test -z "$to_host_path_cmd"; then
-+    func_stripname 'func_convert_file_' '' "$to_host_file_cmd"
-+    to_host_path_cmd="func_convert_path_${func_stripname_result}"
-+  fi
-+}
-+
-+
-+# func_to_host_path ARG
-+# Converts the path ARG from $build format to $host format. Return result
-+# in func_to_host_path_result.
-+func_to_host_path ()
-+{
-+  $opt_debug
-+  func_init_to_host_path_cmd
-+  $to_host_path_cmd "$1"
-+}
-+# end func_to_host_path
-+
-+
-+# func_convert_path_noop ARG
-+# Copy ARG to func_to_host_path_result.
-+func_convert_path_noop ()
-+{
-+  func_to_host_path_result="$1"
-+}
-+# end func_convert_path_noop
-+
-+
-+# func_convert_path_msys_to_w32 ARG
-+# Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic
-+# conversion to w32 is not available inside the cwrapper.  Returns result in
-+# func_to_host_path_result.
-+func_convert_path_msys_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # Remove leading and trailing path separator characters from ARG.  MSYS
-+    # behavior is inconsistent here; cygpath turns them into '.;' and ';.';
-+    # and winepath ignores them completely.
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
-+    func_to_host_path_result="$func_convert_core_msys_to_w32_result"
-+    func_convert_path_check : ";" \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
-+  fi
-+}
-+# end func_convert_path_msys_to_w32
-+
-+
-+# func_convert_path_cygwin_to_w32 ARG
-+# Convert path ARG from Cygwin to w32 format.  Returns result in
-+# func_to_host_file_result.
-+func_convert_path_cygwin_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # See func_convert_path_msys_to_w32:
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"`
-+    func_convert_path_check : ";" \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
-+  fi
-+}
-+# end func_convert_path_cygwin_to_w32
-+
-+
-+# func_convert_path_nix_to_w32 ARG
-+# Convert path ARG from *nix to w32 format.  Requires a wine environment and
-+# a working winepath.  Returns result in func_to_host_file_result.
-+func_convert_path_nix_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # See func_convert_path_msys_to_w32:
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
-+    func_to_host_path_result="$func_convert_core_path_wine_to_w32_result"
-+    func_convert_path_check : ";" \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
-+  fi
-+}
-+# end func_convert_path_nix_to_w32
-+
-+
-+# func_convert_path_msys_to_cygwin ARG
-+# Convert path ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
-+# Returns result in func_to_host_file_result.
-+func_convert_path_msys_to_cygwin ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # See func_convert_path_msys_to_w32:
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
-+    func_cygpath -u -p "$func_convert_core_msys_to_w32_result"
-+    func_to_host_path_result="$func_cygpath_result"
-+    func_convert_path_check : : \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
-+  fi
-+}
-+# end func_convert_path_msys_to_cygwin
-+
-+
-+# func_convert_path_nix_to_cygwin ARG
-+# Convert path ARG from *nix to Cygwin format.  Requires Cygwin installed in a
-+# a wine environment, working winepath, and LT_CYGPATH set.  Returns result in
-+# func_to_host_file_result.
-+func_convert_path_nix_to_cygwin ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # Remove leading and trailing path separator characters from
-+    # ARG. msys behavior is inconsistent here, cygpath turns them
-+    # into '.;' and ';.', and winepath ignores them completely.
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
-+    func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result"
-+    func_to_host_path_result="$func_cygpath_result"
-+    func_convert_path_check : : \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
-+  fi
-+}
-+# end func_convert_path_nix_to_cygwin
-+
-+
- # func_mode_compile arg...
- func_mode_compile ()
- {
-@@ -1314,12 +1985,12 @@ func_mode_compile ()
- 	  ;;
- 
- 	-pie | -fpie | -fPIE)
--          pie_flag="$pie_flag $arg"
-+          func_append pie_flag " $arg"
- 	  continue
- 	  ;;
- 
- 	-shared | -static | -prefer-pic | -prefer-non-pic)
--	  later="$later $arg"
-+	  func_append later " $arg"
- 	  continue
- 	  ;;
- 
-@@ -1340,15 +2011,14 @@ func_mode_compile ()
- 	  save_ifs="$IFS"; IFS=','
- 	  for arg in $args; do
- 	    IFS="$save_ifs"
--	    func_quote_for_eval "$arg"
--	    lastarg="$lastarg $func_quote_for_eval_result"
-+	    func_append_quoted lastarg "$arg"
- 	  done
- 	  IFS="$save_ifs"
- 	  func_stripname ' ' '' "$lastarg"
- 	  lastarg=$func_stripname_result
- 
- 	  # Add the arguments to base_compile.
--	  base_compile="$base_compile $lastarg"
-+	  func_append base_compile " $lastarg"
- 	  continue
- 	  ;;
- 
-@@ -1364,8 +2034,7 @@ func_mode_compile ()
-       esac    #  case $arg_mode
- 
-       # Aesthetically quote the previous argument.
--      func_quote_for_eval "$lastarg"
--      base_compile="$base_compile $func_quote_for_eval_result"
-+      func_append_quoted base_compile "$lastarg"
-     done # for arg
- 
-     case $arg_mode in
-@@ -1496,17 +2165,16 @@ compiler."
- 	$opt_dry_run || $RM $removelist
- 	exit $EXIT_FAILURE
-       fi
--      removelist="$removelist $output_obj"
-+      func_append removelist " $output_obj"
-       $ECHO "$srcfile" > "$lockfile"
-     fi
- 
-     $opt_dry_run || $RM $removelist
--    removelist="$removelist $lockfile"
-+    func_append removelist " $lockfile"
-     trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15
- 
--    if test -n "$fix_srcfile_path"; then
--      eval "srcfile=\"$fix_srcfile_path\""
--    fi
-+    func_to_tool_file "$srcfile" func_convert_file_msys_to_w32
-+    srcfile=$func_to_tool_file_result
-     func_quote_for_eval "$srcfile"
-     qsrcfile=$func_quote_for_eval_result
- 
-@@ -1526,7 +2194,7 @@ compiler."
- 
-       if test -z "$output_obj"; then
- 	# Place PIC objects in $objdir
--	command="$command -o $lobj"
-+	func_append command " -o $lobj"
-       fi
- 
-       func_show_eval_locale "$command"	\
-@@ -1573,11 +2241,11 @@ compiler."
- 	command="$base_compile $qsrcfile $pic_flag"
-       fi
-       if test "$compiler_c_o" = yes; then
--	command="$command -o $obj"
-+	func_append command " -o $obj"
-       fi
- 
-       # Suppress compiler output if we already did a PIC compilation.
--      command="$command$suppress_output"
-+      func_append command "$suppress_output"
-       func_show_eval_locale "$command" \
-         '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE'
- 
-@@ -1622,13 +2290,13 @@ compiler."
- }
- 
- $opt_help || {
--  test "$mode" = compile && func_mode_compile ${1+"$@"}
-+  test "$opt_mode" = compile && func_mode_compile ${1+"$@"}
- }
- 
- func_mode_help ()
- {
-     # We need to display help for each of the modes.
--    case $mode in
-+    case $opt_mode in
-       "")
-         # Generic help is extracted from the usage comments
-         # at the start of this file.
-@@ -1659,8 +2327,8 @@ This mode accepts the following additional options:
- 
-   -o OUTPUT-FILE    set the output file name to OUTPUT-FILE
-   -no-suppress      do not suppress compiler output for multiple passes
--  -prefer-pic       try to building PIC objects only
--  -prefer-non-pic   try to building non-PIC objects only
-+  -prefer-pic       try to build PIC objects only
-+  -prefer-non-pic   try to build non-PIC objects only
-   -shared           do not build a \`.o' file suitable for static linking
-   -static           only build a \`.o' file suitable for static linking
-   -Wc,FLAG          pass FLAG directly to the compiler
-@@ -1804,7 +2472,7 @@ Otherwise, only FILE itself is deleted using RM."
-         ;;
- 
-       *)
--        func_fatal_help "invalid operation mode \`$mode'"
-+        func_fatal_help "invalid operation mode \`$opt_mode'"
-         ;;
-     esac
- 
-@@ -1819,13 +2487,13 @@ if $opt_help; then
-   else
-     {
-       func_help noexit
--      for mode in compile link execute install finish uninstall clean; do
-+      for opt_mode in compile link execute install finish uninstall clean; do
- 	func_mode_help
-       done
-     } | sed -n '1p; 2,$s/^Usage:/  or: /p'
-     {
-       func_help noexit
--      for mode in compile link execute install finish uninstall clean; do
-+      for opt_mode in compile link execute install finish uninstall clean; do
- 	echo
- 	func_mode_help
-       done
-@@ -1854,13 +2522,16 @@ func_mode_execute ()
-       func_fatal_help "you must specify a COMMAND"
- 
-     # Handle -dlopen flags immediately.
--    for file in $execute_dlfiles; do
-+    for file in $opt_dlopen; do
-       test -f "$file" \
- 	|| func_fatal_help "\`$file' is not a file"
- 
-       dir=
-       case $file in
-       *.la)
-+	func_resolve_sysroot "$file"
-+	file=$func_resolve_sysroot_result
-+
- 	# Check to see that this really is a libtool archive.
- 	func_lalib_unsafe_p "$file" \
- 	  || func_fatal_help "\`$lib' is not a valid libtool archive"
-@@ -1882,7 +2553,7 @@ func_mode_execute ()
- 	dir="$func_dirname_result"
- 
- 	if test -f "$dir/$objdir/$dlname"; then
--	  dir="$dir/$objdir"
-+	  func_append dir "/$objdir"
- 	else
- 	  if test ! -f "$dir/$dlname"; then
- 	    func_fatal_error "cannot find \`$dlname' in \`$dir' or \`$dir/$objdir'"
-@@ -1907,10 +2578,10 @@ func_mode_execute ()
-       test -n "$absdir" && dir="$absdir"
- 
-       # Now add the directory to shlibpath_var.
--      if eval test -z \"\$$shlibpath_var\"; then
--	eval $shlibpath_var=\$dir
-+      if eval "test -z \"\$$shlibpath_var\""; then
-+	eval "$shlibpath_var=\"\$dir\""
-       else
--	eval $shlibpath_var=\$dir:\$$shlibpath_var
-+	eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\""
-       fi
-     done
- 
-@@ -1939,8 +2610,7 @@ func_mode_execute ()
- 	;;
-       esac
-       # Quote arguments (to preserve shell metacharacters).
--      func_quote_for_eval "$file"
--      args="$args $func_quote_for_eval_result"
-+      func_append_quoted args "$file"
-     done
- 
-     if test "X$opt_dry_run" = Xfalse; then
-@@ -1972,22 +2642,59 @@ func_mode_execute ()
-     fi
- }
- 
--test "$mode" = execute && func_mode_execute ${1+"$@"}
-+test "$opt_mode" = execute && func_mode_execute ${1+"$@"}
- 
- 
- # func_mode_finish arg...
- func_mode_finish ()
- {
-     $opt_debug
--    libdirs="$nonopt"
-+    libs=
-+    libdirs=
-     admincmds=
- 
--    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
--      for dir
--      do
--	libdirs="$libdirs $dir"
--      done
-+    for opt in "$nonopt" ${1+"$@"}
-+    do
-+      if test -d "$opt"; then
-+	func_append libdirs " $opt"
- 
-+      elif test -f "$opt"; then
-+	if func_lalib_unsafe_p "$opt"; then
-+	  func_append libs " $opt"
-+	else
-+	  func_warning "\`$opt' is not a valid libtool archive"
-+	fi
-+
-+      else
-+	func_fatal_error "invalid argument \`$opt'"
-+      fi
-+    done
-+
-+    if test -n "$libs"; then
-+      if test -n "$lt_sysroot"; then
-+        sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"`
-+        sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;"
-+      else
-+        sysroot_cmd=
-+      fi
-+
-+      # Remove sysroot references
-+      if $opt_dry_run; then
-+        for lib in $libs; do
-+          echo "removing references to $lt_sysroot and \`=' prefixes from $lib"
-+        done
-+      else
-+        tmpdir=`func_mktempdir`
-+        for lib in $libs; do
-+	  sed -e "${sysroot_cmd} s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \
-+	    > $tmpdir/tmp-la
-+	  mv -f $tmpdir/tmp-la $lib
-+	done
-+        ${RM}r "$tmpdir"
-+      fi
-+    fi
-+
-+    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
-       for libdir in $libdirs; do
- 	if test -n "$finish_cmds"; then
- 	  # Do each command in the finish commands.
-@@ -1997,7 +2704,7 @@ func_mode_finish ()
- 	if test -n "$finish_eval"; then
- 	  # Do the single finish_eval.
- 	  eval cmds=\"$finish_eval\"
--	  $opt_dry_run || eval "$cmds" || admincmds="$admincmds
-+	  $opt_dry_run || eval "$cmds" || func_append admincmds "
-        $cmds"
- 	fi
-       done
-@@ -2006,53 +2713,55 @@ func_mode_finish ()
-     # Exit here if they wanted silent mode.
-     $opt_silent && exit $EXIT_SUCCESS
- 
--    echo "----------------------------------------------------------------------"
--    echo "Libraries have been installed in:"
--    for libdir in $libdirs; do
--      $ECHO "   $libdir"
--    done
--    echo
--    echo "If you ever happen to want to link against installed libraries"
--    echo "in a given directory, LIBDIR, you must either use libtool, and"
--    echo "specify the full pathname of the library, or use the \`-LLIBDIR'"
--    echo "flag during linking and do at least one of the following:"
--    if test -n "$shlibpath_var"; then
--      echo "   - add LIBDIR to the \`$shlibpath_var' environment variable"
--      echo "     during execution"
--    fi
--    if test -n "$runpath_var"; then
--      echo "   - add LIBDIR to the \`$runpath_var' environment variable"
--      echo "     during linking"
--    fi
--    if test -n "$hardcode_libdir_flag_spec"; then
--      libdir=LIBDIR
--      eval "flag=\"$hardcode_libdir_flag_spec\""
-+    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
-+      echo "----------------------------------------------------------------------"
-+      echo "Libraries have been installed in:"
-+      for libdir in $libdirs; do
-+	$ECHO "   $libdir"
-+      done
-+      echo
-+      echo "If you ever happen to want to link against installed libraries"
-+      echo "in a given directory, LIBDIR, you must either use libtool, and"
-+      echo "specify the full pathname of the library, or use the \`-LLIBDIR'"
-+      echo "flag during linking and do at least one of the following:"
-+      if test -n "$shlibpath_var"; then
-+	echo "   - add LIBDIR to the \`$shlibpath_var' environment variable"
-+	echo "     during execution"
-+      fi
-+      if test -n "$runpath_var"; then
-+	echo "   - add LIBDIR to the \`$runpath_var' environment variable"
-+	echo "     during linking"
-+      fi
-+      if test -n "$hardcode_libdir_flag_spec"; then
-+	libdir=LIBDIR
-+	eval flag=\"$hardcode_libdir_flag_spec\"
- 
--      $ECHO "   - use the \`$flag' linker flag"
--    fi
--    if test -n "$admincmds"; then
--      $ECHO "   - have your system administrator run these commands:$admincmds"
--    fi
--    if test -f /etc/ld.so.conf; then
--      echo "   - have your system administrator add LIBDIR to \`/etc/ld.so.conf'"
--    fi
--    echo
-+	$ECHO "   - use the \`$flag' linker flag"
-+      fi
-+      if test -n "$admincmds"; then
-+	$ECHO "   - have your system administrator run these commands:$admincmds"
-+      fi
-+      if test -f /etc/ld.so.conf; then
-+	echo "   - have your system administrator add LIBDIR to \`/etc/ld.so.conf'"
-+      fi
-+      echo
- 
--    echo "See any operating system documentation about shared libraries for"
--    case $host in
--      solaris2.[6789]|solaris2.1[0-9])
--        echo "more information, such as the ld(1), crle(1) and ld.so(8) manual"
--	echo "pages."
--	;;
--      *)
--        echo "more information, such as the ld(1) and ld.so(8) manual pages."
--        ;;
--    esac
--    echo "----------------------------------------------------------------------"
-+      echo "See any operating system documentation about shared libraries for"
-+      case $host in
-+	solaris2.[6789]|solaris2.1[0-9])
-+	  echo "more information, such as the ld(1), crle(1) and ld.so(8) manual"
-+	  echo "pages."
-+	  ;;
-+	*)
-+	  echo "more information, such as the ld(1) and ld.so(8) manual pages."
-+	  ;;
-+      esac
-+      echo "----------------------------------------------------------------------"
-+    fi
-     exit $EXIT_SUCCESS
- }
- 
--test "$mode" = finish && func_mode_finish ${1+"$@"}
-+test "$opt_mode" = finish && func_mode_finish ${1+"$@"}
- 
- 
- # func_mode_install arg...
-@@ -2077,7 +2786,7 @@ func_mode_install ()
-     # The real first argument should be the name of the installation program.
-     # Aesthetically quote it.
-     func_quote_for_eval "$arg"
--    install_prog="$install_prog$func_quote_for_eval_result"
-+    func_append install_prog "$func_quote_for_eval_result"
-     install_shared_prog=$install_prog
-     case " $install_prog " in
-       *[\\\ /]cp\ *) install_cp=: ;;
-@@ -2097,7 +2806,7 @@ func_mode_install ()
-     do
-       arg2=
-       if test -n "$dest"; then
--	files="$files $dest"
-+	func_append files " $dest"
- 	dest=$arg
- 	continue
-       fi
-@@ -2135,11 +2844,11 @@ func_mode_install ()
- 
-       # Aesthetically quote the argument.
-       func_quote_for_eval "$arg"
--      install_prog="$install_prog $func_quote_for_eval_result"
-+      func_append install_prog " $func_quote_for_eval_result"
-       if test -n "$arg2"; then
- 	func_quote_for_eval "$arg2"
-       fi
--      install_shared_prog="$install_shared_prog $func_quote_for_eval_result"
-+      func_append install_shared_prog " $func_quote_for_eval_result"
-     done
- 
-     test -z "$install_prog" && \
-@@ -2151,7 +2860,7 @@ func_mode_install ()
-     if test -n "$install_override_mode" && $no_mode; then
-       if $install_cp; then :; else
- 	func_quote_for_eval "$install_override_mode"
--	install_shared_prog="$install_shared_prog -m $func_quote_for_eval_result"
-+	func_append install_shared_prog " -m $func_quote_for_eval_result"
-       fi
-     fi
- 
-@@ -2209,10 +2918,13 @@ func_mode_install ()
-       case $file in
-       *.$libext)
- 	# Do the static libraries later.
--	staticlibs="$staticlibs $file"
-+	func_append staticlibs " $file"
- 	;;
- 
-       *.la)
-+	func_resolve_sysroot "$file"
-+	file=$func_resolve_sysroot_result
-+
- 	# Check to see that this really is a libtool archive.
- 	func_lalib_unsafe_p "$file" \
- 	  || func_fatal_help "\`$file' is not a valid libtool archive"
-@@ -2226,23 +2938,30 @@ func_mode_install ()
- 	if test "X$destdir" = "X$libdir"; then
- 	  case "$current_libdirs " in
- 	  *" $libdir "*) ;;
--	  *) current_libdirs="$current_libdirs $libdir" ;;
-+	  *) func_append current_libdirs " $libdir" ;;
- 	  esac
- 	else
- 	  # Note the libdir as a future libdir.
- 	  case "$future_libdirs " in
- 	  *" $libdir "*) ;;
--	  *) future_libdirs="$future_libdirs $libdir" ;;
-+	  *) func_append future_libdirs " $libdir" ;;
- 	  esac
- 	fi
- 
- 	func_dirname "$file" "/" ""
- 	dir="$func_dirname_result"
--	dir="$dir$objdir"
-+	func_append dir "$objdir"
- 
- 	if test -n "$relink_command"; then
-+      # Strip any trailing slash from the destination.
-+      func_stripname '' '/' "$libdir"
-+      destlibdir=$func_stripname_result
-+
-+      func_stripname '' '/' "$destdir"
-+      s_destdir=$func_stripname_result
-+
- 	  # Determine the prefix the user has applied to our future dir.
--	  inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"`
-+	  inst_prefix_dir=`$ECHO "X$s_destdir" | $Xsed -e "s%$destlibdir\$%%"`
- 
- 	  # Don't allow the user to place us outside of our expected
- 	  # location b/c this prevents finding dependent libraries that
-@@ -2315,7 +3034,7 @@ func_mode_install ()
- 	func_show_eval "$install_prog $instname $destdir/$name" 'exit $?'
- 
- 	# Maybe install the static library, too.
--	test -n "$old_library" && staticlibs="$staticlibs $dir/$old_library"
-+	test -n "$old_library" && func_append staticlibs " $dir/$old_library"
- 	;;
- 
-       *.lo)
-@@ -2503,7 +3222,7 @@ func_mode_install ()
-     test -n "$future_libdirs" && \
-       func_warning "remember to run \`$progname --finish$future_libdirs'"
- 
--    if test -n "$current_libdirs" && $opt_finish; then
-+    if test -n "$current_libdirs"; then
-       # Maybe just do a dry run.
-       $opt_dry_run && current_libdirs=" -n$current_libdirs"
-       exec_cmd='$SHELL $progpath $preserve_args --finish$current_libdirs'
-@@ -2512,7 +3231,7 @@ func_mode_install ()
-     fi
- }
- 
--test "$mode" = install && func_mode_install ${1+"$@"}
-+test "$opt_mode" = install && func_mode_install ${1+"$@"}
- 
- 
- # func_generate_dlsyms outputname originator pic_p
-@@ -2559,6 +3278,18 @@ extern \"C\" {
- #pragma GCC diagnostic ignored \"-Wstrict-prototypes\"
- #endif
- 
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- /* External symbol declarations for the compiler. */\
- "
- 
-@@ -2570,21 +3301,22 @@ extern \"C\" {
- 	  # Add our own program objects to the symbol list.
- 	  progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP`
- 	  for progfile in $progfiles; do
--	    func_verbose "extracting global C symbols from \`$progfile'"
--	    $opt_dry_run || eval "$NM $progfile | $global_symbol_pipe >> '$nlist'"
-+	    func_to_tool_file "$progfile" func_convert_file_msys_to_w32
-+	    func_verbose "extracting global C symbols from \`$func_to_tool_file_result'"
-+	    $opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'"
- 	  done
- 
- 	  if test -n "$exclude_expsyms"; then
- 	    $opt_dry_run || {
--	      $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T
--	      $MV "$nlist"T "$nlist"
-+	      eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T'
-+	      eval '$MV "$nlist"T "$nlist"'
- 	    }
- 	  fi
- 
- 	  if test -n "$export_symbols_regex"; then
- 	    $opt_dry_run || {
--	      $EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T
--	      $MV "$nlist"T "$nlist"
-+	      eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T'
-+	      eval '$MV "$nlist"T "$nlist"'
- 	    }
- 	  fi
- 
-@@ -2593,23 +3325,23 @@ extern \"C\" {
- 	    export_symbols="$output_objdir/$outputname.exp"
- 	    $opt_dry_run || {
- 	      $RM $export_symbols
--	      ${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' < "$nlist" > "$export_symbols"
-+	      eval "${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"'
- 	      case $host in
- 	      *cygwin* | *mingw* | *cegcc* )
--                echo EXPORTS > "$output_objdir/$outputname.def"
--                cat "$export_symbols" >> "$output_objdir/$outputname.def"
-+                eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
-+                eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"'
- 	        ;;
- 	      esac
- 	    }
- 	  else
- 	    $opt_dry_run || {
--	      ${SED} -e 's/\([].[*^$]\)/\\\1/g' -e 's/^/ /' -e 's/$/$/' < "$export_symbols" > "$output_objdir/$outputname.exp"
--	      $GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T
--	      $MV "$nlist"T "$nlist"
-+	      eval "${SED} -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"'
-+	      eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T'
-+	      eval '$MV "$nlist"T "$nlist"'
- 	      case $host in
- 	        *cygwin* | *mingw* | *cegcc* )
--	          echo EXPORTS > "$output_objdir/$outputname.def"
--	          cat "$nlist" >> "$output_objdir/$outputname.def"
-+	          eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
-+	          eval 'cat "$nlist" >> "$output_objdir/$outputname.def"'
- 	          ;;
- 	      esac
- 	    }
-@@ -2620,10 +3352,52 @@ extern \"C\" {
- 	  func_verbose "extracting global C symbols from \`$dlprefile'"
- 	  func_basename "$dlprefile"
- 	  name="$func_basename_result"
--	  $opt_dry_run || {
--	    $ECHO ": $name " >> "$nlist"
--	    eval "$NM $dlprefile 2>/dev/null | $global_symbol_pipe >> '$nlist'"
--	  }
-+          case $host in
-+	    *cygwin* | *mingw* | *cegcc* )
-+	      # if an import library, we need to obtain dlname
-+	      if func_win32_import_lib_p "$dlprefile"; then
-+	        func_tr_sh "$dlprefile"
-+	        eval "curr_lafile=\$libfile_$func_tr_sh_result"
-+	        dlprefile_dlbasename=""
-+	        if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then
-+	          # Use subshell, to avoid clobbering current variable values
-+	          dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"`
-+	          if test -n "$dlprefile_dlname" ; then
-+	            func_basename "$dlprefile_dlname"
-+	            dlprefile_dlbasename="$func_basename_result"
-+	          else
-+	            # no lafile. user explicitly requested -dlpreopen <import library>.
-+	            $sharedlib_from_linklib_cmd "$dlprefile"
-+	            dlprefile_dlbasename=$sharedlib_from_linklib_result
-+	          fi
-+	        fi
-+	        $opt_dry_run || {
-+	          if test -n "$dlprefile_dlbasename" ; then
-+	            eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"'
-+	          else
-+	            func_warning "Could not compute DLL name from $name"
-+	            eval '$ECHO ": $name " >> "$nlist"'
-+	          fi
-+	          func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
-+	          eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe |
-+	            $SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'"
-+	        }
-+	      else # not an import lib
-+	        $opt_dry_run || {
-+	          eval '$ECHO ": $name " >> "$nlist"'
-+	          func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
-+	          eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
-+	        }
-+	      fi
-+	    ;;
-+	    *)
-+	      $opt_dry_run || {
-+	        eval '$ECHO ": $name " >> "$nlist"'
-+	        func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
-+	        eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
-+	      }
-+	    ;;
-+          esac
- 	done
- 
- 	$opt_dry_run || {
-@@ -2661,26 +3435,9 @@ typedef struct {
-   const char *name;
-   void *address;
- } lt_dlsymlist;
--"
--	  case $host in
--	  *cygwin* | *mingw* | *cegcc* )
--	    echo >> "$output_objdir/$my_dlsyms" "\
--/* DATA imports from DLLs on WIN32 con't be const, because
--   runtime relocations are performed -- see ld's documentation
--   on pseudo-relocs.  */"
--	    lt_dlsym_const= ;;
--	  *osf5*)
--	    echo >> "$output_objdir/$my_dlsyms" "\
--/* This system does not cope well with relocations in const data */"
--	    lt_dlsym_const= ;;
--	  *)
--	    lt_dlsym_const=const ;;
--	  esac
--
--	  echo >> "$output_objdir/$my_dlsyms" "\
--extern $lt_dlsym_const lt_dlsymlist
-+extern LT_DLSYM_CONST lt_dlsymlist
- lt_${my_prefix}_LTX_preloaded_symbols[];
--$lt_dlsym_const lt_dlsymlist
-+LT_DLSYM_CONST lt_dlsymlist
- lt_${my_prefix}_LTX_preloaded_symbols[] =
- {\
-   { \"$my_originator\", (void *) 0 },"
-@@ -2736,7 +3493,7 @@ static const void *lt_preloaded_setup() {
- 	for arg in $LTCFLAGS; do
- 	  case $arg in
- 	  -pie | -fpie | -fPIE) ;;
--	  *) symtab_cflags="$symtab_cflags $arg" ;;
-+	  *) func_append symtab_cflags " $arg" ;;
- 	  esac
- 	done
- 
-@@ -2796,9 +3553,11 @@ func_win32_libid ()
-     win32_libid_type="x86 archive import"
-     ;;
-   *ar\ archive*) # could be an import, or static
--    if $OBJDUMP -f "$1" | $SED -e '10q' 2>/dev/null |
--       $EGREP 'file format (pe-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
--      win32_nmres=`$NM -f posix -A "$1" |
-+    # Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD.
-+    if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null |
-+       $EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
-+      func_to_tool_file "$1" func_convert_file_msys_to_w32
-+      win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" |
- 	$SED -n -e '
- 	    1,100{
- 		/ I /{
-@@ -2827,6 +3586,131 @@ func_win32_libid ()
-   $ECHO "$win32_libid_type"
- }
- 
-+# func_cygming_dll_for_implib ARG
-+#
-+# Platform-specific function to extract the
-+# name of the DLL associated with the specified
-+# import library ARG.
-+# Invoked by eval'ing the libtool variable
-+#    $sharedlib_from_linklib_cmd
-+# Result is available in the variable
-+#    $sharedlib_from_linklib_result
-+func_cygming_dll_for_implib ()
-+{
-+  $opt_debug
-+  sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"`
-+}
-+
-+# func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs
-+#
-+# The is the core of a fallback implementation of a
-+# platform-specific function to extract the name of the
-+# DLL associated with the specified import library LIBNAME.
-+#
-+# SECTION_NAME is either .idata$6 or .idata$7, depending
-+# on the platform and compiler that created the implib.
-+#
-+# Echos the name of the DLL associated with the
-+# specified import library.
-+func_cygming_dll_for_implib_fallback_core ()
-+{
-+  $opt_debug
-+  match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"`
-+  $OBJDUMP -s --section "$1" "$2" 2>/dev/null |
-+    $SED '/^Contents of section '"$match_literal"':/{
-+      # Place marker at beginning of archive member dllname section
-+      s/.*/====MARK====/
-+      p
-+      d
-+    }
-+    # These lines can sometimes be longer than 43 characters, but
-+    # are always uninteresting
-+    /:[	 ]*file format pe[i]\{,1\}-/d
-+    /^In archive [^:]*:/d
-+    # Ensure marker is printed
-+    /^====MARK====/p
-+    # Remove all lines with less than 43 characters
-+    /^.\{43\}/!d
-+    # From remaining lines, remove first 43 characters
-+    s/^.\{43\}//' |
-+    $SED -n '
-+      # Join marker and all lines until next marker into a single line
-+      /^====MARK====/ b para
-+      H
-+      $ b para
-+      b
-+      :para
-+      x
-+      s/\n//g
-+      # Remove the marker
-+      s/^====MARK====//
-+      # Remove trailing dots and whitespace
-+      s/[\. \t]*$//
-+      # Print
-+      /./p' |
-+    # we now have a list, one entry per line, of the stringified
-+    # contents of the appropriate section of all members of the
-+    # archive which possess that section. Heuristic: eliminate
-+    # all those which have a first or second character that is
-+    # a '.' (that is, objdump's representation of an unprintable
-+    # character.) This should work for all archives with less than
-+    # 0x302f exports -- but will fail for DLLs whose name actually
-+    # begins with a literal '.' or a single character followed by
-+    # a '.'.
-+    #
-+    # Of those that remain, print the first one.
-+    $SED -e '/^\./d;/^.\./d;q'
-+}
-+
-+# func_cygming_gnu_implib_p ARG
-+# This predicate returns with zero status (TRUE) if
-+# ARG is a GNU/binutils-style import library. Returns
-+# with nonzero status (FALSE) otherwise.
-+func_cygming_gnu_implib_p ()
-+{
-+  $opt_debug
-+  func_to_tool_file "$1" func_convert_file_msys_to_w32
-+  func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'`
-+  test -n "$func_cygming_gnu_implib_tmp"
-+}
-+
-+# func_cygming_ms_implib_p ARG
-+# This predicate returns with zero status (TRUE) if
-+# ARG is an MS-style import library. Returns
-+# with nonzero status (FALSE) otherwise.
-+func_cygming_ms_implib_p ()
-+{
-+  $opt_debug
-+  func_to_tool_file "$1" func_convert_file_msys_to_w32
-+  func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'`
-+  test -n "$func_cygming_ms_implib_tmp"
-+}
-+
-+# func_cygming_dll_for_implib_fallback ARG
-+# Platform-specific function to extract the
-+# name of the DLL associated with the specified
-+# import library ARG.
-+#
-+# This fallback implementation is for use when $DLLTOOL
-+# does not support the --identify-strict option.
-+# Invoked by eval'ing the libtool variable
-+#    $sharedlib_from_linklib_cmd
-+# Result is available in the variable
-+#    $sharedlib_from_linklib_result
-+func_cygming_dll_for_implib_fallback ()
-+{
-+  $opt_debug
-+  if func_cygming_gnu_implib_p "$1" ; then
-+    # binutils import library
-+    sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"`
-+  elif func_cygming_ms_implib_p "$1" ; then
-+    # ms-generated import library
-+    sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"`
-+  else
-+    # unknown
-+    sharedlib_from_linklib_result=""
-+  fi
-+}
- 
- 
- # func_extract_an_archive dir oldlib
-@@ -2917,7 +3801,7 @@ func_extract_archives ()
- 	    darwin_file=
- 	    darwin_files=
- 	    for darwin_file in $darwin_filelist; do
--	      darwin_files=`find unfat-$$ -name $darwin_file -print | $NL2SP`
-+	      darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP`
- 	      $LIPO -create -output "$darwin_file" $darwin_files
- 	    done # $darwin_filelist
- 	    $RM -rf unfat-$$
-@@ -2932,7 +3816,7 @@ func_extract_archives ()
-         func_extract_an_archive "$my_xdir" "$my_xabs"
- 	;;
-       esac
--      my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | $NL2SP`
-+      my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP`
-     done
- 
-     func_extract_archives_result="$my_oldobjs"
-@@ -3014,7 +3898,110 @@ func_fallback_echo ()
- _LTECHO_EOF'
- }
-     ECHO=\"$qECHO\"
--  fi\
-+  fi
-+
-+# Very basic option parsing. These options are (a) specific to
-+# the libtool wrapper, (b) are identical between the wrapper
-+# /script/ and the wrapper /executable/ which is used only on
-+# windows platforms, and (c) all begin with the string "--lt-"
-+# (application programs are unlikely to have options which match
-+# this pattern).
-+#
-+# There are only two supported options: --lt-debug and
-+# --lt-dump-script. There is, deliberately, no --lt-help.
-+#
-+# The first argument to this parsing function should be the
-+# script's $0 value, followed by "$@".
-+lt_option_debug=
-+func_parse_lt_options ()
-+{
-+  lt_script_arg0=\$0
-+  shift
-+  for lt_opt
-+  do
-+    case \"\$lt_opt\" in
-+    --lt-debug) lt_option_debug=1 ;;
-+    --lt-dump-script)
-+        lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\`
-+        test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=.
-+        lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\`
-+        cat \"\$lt_dump_D/\$lt_dump_F\"
-+        exit 0
-+      ;;
-+    --lt-*)
-+        \$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2
-+        exit 1
-+      ;;
-+    esac
-+  done
-+
-+  # Print the debug banner immediately:
-+  if test -n \"\$lt_option_debug\"; then
-+    echo \"${outputname}:${output}:\${LINENO}: libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\" 1>&2
-+  fi
-+}
-+
-+# Used when --lt-debug. Prints its arguments to stdout
-+# (redirection is the responsibility of the caller)
-+func_lt_dump_args ()
-+{
-+  lt_dump_args_N=1;
-+  for lt_arg
-+  do
-+    \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[\$lt_dump_args_N]: \$lt_arg\"
-+    lt_dump_args_N=\`expr \$lt_dump_args_N + 1\`
-+  done
-+}
-+
-+# Core function for launching the target application
-+func_exec_program_core ()
-+{
-+"
-+  case $host in
-+  # Backslashes separate directories on plain windows
-+  *-*-mingw | *-*-os2* | *-cegcc*)
-+    $ECHO "\
-+      if test -n \"\$lt_option_debug\"; then
-+        \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir\\\\\$program\" 1>&2
-+        func_lt_dump_args \${1+\"\$@\"} 1>&2
-+      fi
-+      exec \"\$progdir\\\\\$program\" \${1+\"\$@\"}
-+"
-+    ;;
-+
-+  *)
-+    $ECHO "\
-+      if test -n \"\$lt_option_debug\"; then
-+        \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir/\$program\" 1>&2
-+        func_lt_dump_args \${1+\"\$@\"} 1>&2
-+      fi
-+      exec \"\$progdir/\$program\" \${1+\"\$@\"}
-+"
-+    ;;
-+  esac
-+  $ECHO "\
-+      \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2
-+      exit 1
-+}
-+
-+# A function to encapsulate launching the target application
-+# Strips options in the --lt-* namespace from \$@ and
-+# launches target application with the remaining arguments.
-+func_exec_program ()
-+{
-+  for lt_wr_arg
-+  do
-+    case \$lt_wr_arg in
-+    --lt-*) ;;
-+    *) set x \"\$@\" \"\$lt_wr_arg\"; shift;;
-+    esac
-+    shift
-+  done
-+  func_exec_program_core \${1+\"\$@\"}
-+}
-+
-+  # Parse options
-+  func_parse_lt_options \"\$0\" \${1+\"\$@\"}
- 
-   # Find the directory that this script lives in.
-   thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\`
-@@ -3078,7 +4065,7 @@ _LTECHO_EOF'
- 
-     # relink executable if necessary
-     if test -n \"\$relink_command\"; then
--      if relink_command_output=\`eval \"\$relink_command\" 2>&1\`; then :
-+      if relink_command_output=\`eval \$relink_command 2>&1\`; then :
-       else
- 	$ECHO \"\$relink_command_output\" >&2
- 	$RM \"\$progdir/\$file\"
-@@ -3102,6 +4089,18 @@ _LTECHO_EOF'
- 
-   if test -f \"\$progdir/\$program\"; then"
- 
-+	# fixup the dll searchpath if we need to.
-+	#
-+	# Fix the DLL searchpath if we need to.  Do this before prepending
-+	# to shlibpath, because on Windows, both are PATH and uninstalled
-+	# libraries must come first.
-+	if test -n "$dllsearchpath"; then
-+	  $ECHO "\
-+    # Add the dll search path components to the executable PATH
-+    PATH=$dllsearchpath:\$PATH
-+"
-+	fi
-+
- 	# Export our shlibpath_var if we have one.
- 	if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
- 	  $ECHO "\
-@@ -3116,35 +4115,10 @@ _LTECHO_EOF'
- "
- 	fi
- 
--	# fixup the dll searchpath if we need to.
--	if test -n "$dllsearchpath"; then
--	  $ECHO "\
--    # Add the dll search path components to the executable PATH
--    PATH=$dllsearchpath:\$PATH
--"
--	fi
--
- 	$ECHO "\
-     if test \"\$libtool_execute_magic\" != \"$magic\"; then
-       # Run the actual program with our arguments.
--"
--	case $host in
--	# Backslashes separate directories on plain windows
--	*-*-mingw | *-*-os2* | *-cegcc*)
--	  $ECHO "\
--      exec \"\$progdir\\\\\$program\" \${1+\"\$@\"}
--"
--	  ;;
--
--	*)
--	  $ECHO "\
--      exec \"\$progdir/\$program\" \${1+\"\$@\"}
--"
--	  ;;
--	esac
--	$ECHO "\
--      \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2
--      exit 1
-+      func_exec_program \${1+\"\$@\"}
-     fi
-   else
-     # The program doesn't exist.
-@@ -3158,166 +4132,6 @@ fi\
- }
- 
- 
--# func_to_host_path arg
--#
--# Convert paths to host format when used with build tools.
--# Intended for use with "native" mingw (where libtool itself
--# is running under the msys shell), or in the following cross-
--# build environments:
--#    $build          $host
--#    mingw (msys)    mingw  [e.g. native]
--#    cygwin          mingw
--#    *nix + wine     mingw
--# where wine is equipped with the `winepath' executable.
--# In the native mingw case, the (msys) shell automatically
--# converts paths for any non-msys applications it launches,
--# but that facility isn't available from inside the cwrapper.
--# Similar accommodations are necessary for $host mingw and
--# $build cygwin.  Calling this function does no harm for other
--# $host/$build combinations not listed above.
--#
--# ARG is the path (on $build) that should be converted to
--# the proper representation for $host. The result is stored
--# in $func_to_host_path_result.
--func_to_host_path ()
--{
--  func_to_host_path_result="$1"
--  if test -n "$1"; then
--    case $host in
--      *mingw* )
--        lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
--        case $build in
--          *mingw* ) # actually, msys
--            # awkward: cmd appends spaces to result
--            func_to_host_path_result=`( cmd //c echo "$1" ) 2>/dev/null |
--              $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
--            ;;
--          *cygwin* )
--            func_to_host_path_result=`cygpath -w "$1" |
--	      $SED -e "$lt_sed_naive_backslashify"`
--            ;;
--          * )
--            # Unfortunately, winepath does not exit with a non-zero
--            # error code, so we are forced to check the contents of
--            # stdout. On the other hand, if the command is not
--            # found, the shell will set an exit code of 127 and print
--            # *an error message* to stdout. So we must check for both
--            # error code of zero AND non-empty stdout, which explains
--            # the odd construction:
--            func_to_host_path_tmp1=`winepath -w "$1" 2>/dev/null`
--            if test "$?" -eq 0 && test -n "${func_to_host_path_tmp1}"; then
--              func_to_host_path_result=`$ECHO "$func_to_host_path_tmp1" |
--                $SED -e "$lt_sed_naive_backslashify"`
--            else
--              # Allow warning below.
--              func_to_host_path_result=
--            fi
--            ;;
--        esac
--        if test -z "$func_to_host_path_result" ; then
--          func_error "Could not determine host path corresponding to"
--          func_error "  \`$1'"
--          func_error "Continuing, but uninstalled executables may not work."
--          # Fallback:
--          func_to_host_path_result="$1"
--        fi
--        ;;
--    esac
--  fi
--}
--# end: func_to_host_path
--
--# func_to_host_pathlist arg
--#
--# Convert pathlists to host format when used with build tools.
--# See func_to_host_path(), above. This function supports the
--# following $build/$host combinations (but does no harm for
--# combinations not listed here):
--#    $build          $host
--#    mingw (msys)    mingw  [e.g. native]
--#    cygwin          mingw
--#    *nix + wine     mingw
--#
--# Path separators are also converted from $build format to
--# $host format. If ARG begins or ends with a path separator
--# character, it is preserved (but converted to $host format)
--# on output.
--#
--# ARG is a pathlist (on $build) that should be converted to
--# the proper representation on $host. The result is stored
--# in $func_to_host_pathlist_result.
--func_to_host_pathlist ()
--{
--  func_to_host_pathlist_result="$1"
--  if test -n "$1"; then
--    case $host in
--      *mingw* )
--        lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
--        # Remove leading and trailing path separator characters from
--        # ARG. msys behavior is inconsistent here, cygpath turns them
--        # into '.;' and ';.', and winepath ignores them completely.
--	func_stripname : : "$1"
--        func_to_host_pathlist_tmp1=$func_stripname_result
--        case $build in
--          *mingw* ) # Actually, msys.
--            # Awkward: cmd appends spaces to result.
--            func_to_host_pathlist_result=`
--	      ( cmd //c echo "$func_to_host_pathlist_tmp1" ) 2>/dev/null |
--	      $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
--            ;;
--          *cygwin* )
--            func_to_host_pathlist_result=`cygpath -w -p "$func_to_host_pathlist_tmp1" |
--              $SED -e "$lt_sed_naive_backslashify"`
--            ;;
--          * )
--            # unfortunately, winepath doesn't convert pathlists
--            func_to_host_pathlist_result=""
--            func_to_host_pathlist_oldIFS=$IFS
--            IFS=:
--            for func_to_host_pathlist_f in $func_to_host_pathlist_tmp1 ; do
--              IFS=$func_to_host_pathlist_oldIFS
--              if test -n "$func_to_host_pathlist_f" ; then
--                func_to_host_path "$func_to_host_pathlist_f"
--                if test -n "$func_to_host_path_result" ; then
--                  if test -z "$func_to_host_pathlist_result" ; then
--                    func_to_host_pathlist_result="$func_to_host_path_result"
--                  else
--                    func_append func_to_host_pathlist_result ";$func_to_host_path_result"
--                  fi
--                fi
--              fi
--            done
--            IFS=$func_to_host_pathlist_oldIFS
--            ;;
--        esac
--        if test -z "$func_to_host_pathlist_result"; then
--          func_error "Could not determine the host path(s) corresponding to"
--          func_error "  \`$1'"
--          func_error "Continuing, but uninstalled executables may not work."
--          # Fallback. This may break if $1 contains DOS-style drive
--          # specifications. The fix is not to complicate the expression
--          # below, but for the user to provide a working wine installation
--          # with winepath so that path translation in the cross-to-mingw
--          # case works properly.
--          lt_replace_pathsep_nix_to_dos="s|:|;|g"
--          func_to_host_pathlist_result=`echo "$func_to_host_pathlist_tmp1" |\
--            $SED -e "$lt_replace_pathsep_nix_to_dos"`
--        fi
--        # Now, add the leading and trailing path separators back
--        case "$1" in
--          :* ) func_to_host_pathlist_result=";$func_to_host_pathlist_result"
--            ;;
--        esac
--        case "$1" in
--          *: ) func_append func_to_host_pathlist_result ";"
--            ;;
--        esac
--        ;;
--    esac
--  fi
--}
--# end: func_to_host_pathlist
--
- # func_emit_cwrapperexe_src
- # emit the source code for a wrapper executable on stdout
- # Must ONLY be called from within func_mode_link because
-@@ -3334,10 +4148,6 @@ func_emit_cwrapperexe_src ()
- 
-    This wrapper executable should never be moved out of the build directory.
-    If it is, it will not operate correctly.
--
--   Currently, it simply execs the wrapper *script* "$SHELL $output",
--   but could eventually absorb all of the scripts functionality and
--   exec $objdir/$outputname directly.
- */
- EOF
- 	    cat <<"EOF"
-@@ -3462,22 +4272,13 @@ int setenv (const char *, const char *, int);
-   if (stale) { free ((void *) stale); stale = 0; } \
- } while (0)
- 
--#undef LTWRAPPER_DEBUGPRINTF
--#if defined LT_DEBUGWRAPPER
--# define LTWRAPPER_DEBUGPRINTF(args) ltwrapper_debugprintf args
--static void
--ltwrapper_debugprintf (const char *fmt, ...)
--{
--    va_list args;
--    va_start (args, fmt);
--    (void) vfprintf (stderr, fmt, args);
--    va_end (args);
--}
-+#if defined(LT_DEBUGWRAPPER)
-+static int lt_debug = 1;
- #else
--# define LTWRAPPER_DEBUGPRINTF(args)
-+static int lt_debug = 0;
- #endif
- 
--const char *program_name = NULL;
-+const char *program_name = "libtool-wrapper"; /* in case xstrdup fails */
- 
- void *xmalloc (size_t num);
- char *xstrdup (const char *string);
-@@ -3487,7 +4288,10 @@ char *chase_symlinks (const char *pathspec);
- int make_executable (const char *path);
- int check_executable (const char *path);
- char *strendzap (char *str, const char *pat);
--void lt_fatal (const char *message, ...);
-+void lt_debugprintf (const char *file, int line, const char *fmt, ...);
-+void lt_fatal (const char *file, int line, const char *message, ...);
-+static const char *nonnull (const char *s);
-+static const char *nonempty (const char *s);
- void lt_setenv (const char *name, const char *value);
- char *lt_extend_str (const char *orig_value, const char *add, int to_end);
- void lt_update_exe_path (const char *name, const char *value);
-@@ -3497,14 +4301,14 @@ void lt_dump_script (FILE *f);
- EOF
- 
- 	    cat <<EOF
--const char * MAGIC_EXE = "$magic_exe";
-+volatile const char * MAGIC_EXE = "$magic_exe";
- const char * LIB_PATH_VARNAME = "$shlibpath_var";
- EOF
- 
- 	    if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
--              func_to_host_pathlist "$temp_rpath"
-+              func_to_host_path "$temp_rpath"
- 	      cat <<EOF
--const char * LIB_PATH_VALUE   = "$func_to_host_pathlist_result";
-+const char * LIB_PATH_VALUE   = "$func_to_host_path_result";
- EOF
- 	    else
- 	      cat <<"EOF"
-@@ -3513,10 +4317,10 @@ EOF
- 	    fi
- 
- 	    if test -n "$dllsearchpath"; then
--              func_to_host_pathlist "$dllsearchpath:"
-+              func_to_host_path "$dllsearchpath:"
- 	      cat <<EOF
- const char * EXE_PATH_VARNAME = "PATH";
--const char * EXE_PATH_VALUE   = "$func_to_host_pathlist_result";
-+const char * EXE_PATH_VALUE   = "$func_to_host_path_result";
- EOF
- 	    else
- 	      cat <<"EOF"
-@@ -3539,12 +4343,10 @@ EOF
- 	    cat <<"EOF"
- 
- #define LTWRAPPER_OPTION_PREFIX         "--lt-"
--#define LTWRAPPER_OPTION_PREFIX_LENGTH  5
- 
--static const size_t opt_prefix_len         = LTWRAPPER_OPTION_PREFIX_LENGTH;
- static const char *ltwrapper_option_prefix = LTWRAPPER_OPTION_PREFIX;
--
- static const char *dumpscript_opt       = LTWRAPPER_OPTION_PREFIX "dump-script";
-+static const char *debug_opt            = LTWRAPPER_OPTION_PREFIX "debug";
- 
- int
- main (int argc, char *argv[])
-@@ -3561,10 +4363,13 @@ main (int argc, char *argv[])
-   int i;
- 
-   program_name = (char *) xstrdup (base_name (argv[0]));
--  LTWRAPPER_DEBUGPRINTF (("(main) argv[0]      : %s\n", argv[0]));
--  LTWRAPPER_DEBUGPRINTF (("(main) program_name : %s\n", program_name));
-+  newargz = XMALLOC (char *, argc + 1);
- 
--  /* very simple arg parsing; don't want to rely on getopt */
-+  /* very simple arg parsing; don't want to rely on getopt
-+   * also, copy all non cwrapper options to newargz, except
-+   * argz[0], which is handled differently
-+   */
-+  newargc=0;
-   for (i = 1; i < argc; i++)
-     {
-       if (strcmp (argv[i], dumpscript_opt) == 0)
-@@ -3581,21 +4386,54 @@ EOF
- 	  lt_dump_script (stdout);
- 	  return 0;
- 	}
-+      if (strcmp (argv[i], debug_opt) == 0)
-+	{
-+          lt_debug = 1;
-+          continue;
-+	}
-+      if (strcmp (argv[i], ltwrapper_option_prefix) == 0)
-+        {
-+          /* however, if there is an option in the LTWRAPPER_OPTION_PREFIX
-+             namespace, but it is not one of the ones we know about and
-+             have already dealt with, above (inluding dump-script), then
-+             report an error. Otherwise, targets might begin to believe
-+             they are allowed to use options in the LTWRAPPER_OPTION_PREFIX
-+             namespace. The first time any user complains about this, we'll
-+             need to make LTWRAPPER_OPTION_PREFIX a configure-time option
-+             or a configure.ac-settable value.
-+           */
-+          lt_fatal (__FILE__, __LINE__,
-+		    "unrecognized %s option: '%s'",
-+                    ltwrapper_option_prefix, argv[i]);
-+        }
-+      /* otherwise ... */
-+      newargz[++newargc] = xstrdup (argv[i]);
-     }
-+  newargz[++newargc] = NULL;
-+
-+EOF
-+	    cat <<EOF
-+  /* The GNU banner must be the first non-error debug message */
-+  lt_debugprintf (__FILE__, __LINE__, "libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\n");
-+EOF
-+	    cat <<"EOF"
-+  lt_debugprintf (__FILE__, __LINE__, "(main) argv[0]: %s\n", argv[0]);
-+  lt_debugprintf (__FILE__, __LINE__, "(main) program_name: %s\n", program_name);
- 
--  newargz = XMALLOC (char *, argc + 1);
-   tmp_pathspec = find_executable (argv[0]);
-   if (tmp_pathspec == NULL)
--    lt_fatal ("Couldn't find %s", argv[0]);
--  LTWRAPPER_DEBUGPRINTF (("(main) found exe (before symlink chase) at : %s\n",
--			  tmp_pathspec));
-+    lt_fatal (__FILE__, __LINE__, "couldn't find %s", argv[0]);
-+  lt_debugprintf (__FILE__, __LINE__,
-+                  "(main) found exe (before symlink chase) at: %s\n",
-+		  tmp_pathspec);
- 
-   actual_cwrapper_path = chase_symlinks (tmp_pathspec);
--  LTWRAPPER_DEBUGPRINTF (("(main) found exe (after symlink chase) at : %s\n",
--			  actual_cwrapper_path));
-+  lt_debugprintf (__FILE__, __LINE__,
-+                  "(main) found exe (after symlink chase) at: %s\n",
-+		  actual_cwrapper_path);
-   XFREE (tmp_pathspec);
- 
--  actual_cwrapper_name = xstrdup( base_name (actual_cwrapper_path));
-+  actual_cwrapper_name = xstrdup (base_name (actual_cwrapper_path));
-   strendzap (actual_cwrapper_path, actual_cwrapper_name);
- 
-   /* wrapper name transforms */
-@@ -3613,8 +4451,9 @@ EOF
-   target_name = tmp_pathspec;
-   tmp_pathspec = 0;
- 
--  LTWRAPPER_DEBUGPRINTF (("(main) libtool target name: %s\n",
--			  target_name));
-+  lt_debugprintf (__FILE__, __LINE__,
-+		  "(main) libtool target name: %s\n",
-+		  target_name);
- EOF
- 
- 	    cat <<EOF
-@@ -3664,35 +4503,19 @@ EOF
- 
-   lt_setenv ("BIN_SH", "xpg4"); /* for Tru64 */
-   lt_setenv ("DUALCASE", "1");  /* for MSK sh */
--  lt_update_lib_path (LIB_PATH_VARNAME, LIB_PATH_VALUE);
-+  /* Update the DLL searchpath.  EXE_PATH_VALUE ($dllsearchpath) must
-+     be prepended before (that is, appear after) LIB_PATH_VALUE ($temp_rpath)
-+     because on Windows, both *_VARNAMEs are PATH but uninstalled
-+     libraries must come first. */
-   lt_update_exe_path (EXE_PATH_VARNAME, EXE_PATH_VALUE);
-+  lt_update_lib_path (LIB_PATH_VARNAME, LIB_PATH_VALUE);
- 
--  newargc=0;
--  for (i = 1; i < argc; i++)
--    {
--      if (strncmp (argv[i], ltwrapper_option_prefix, opt_prefix_len) == 0)
--        {
--          /* however, if there is an option in the LTWRAPPER_OPTION_PREFIX
--             namespace, but it is not one of the ones we know about and
--             have already dealt with, above (inluding dump-script), then
--             report an error. Otherwise, targets might begin to believe
--             they are allowed to use options in the LTWRAPPER_OPTION_PREFIX
--             namespace. The first time any user complains about this, we'll
--             need to make LTWRAPPER_OPTION_PREFIX a configure-time option
--             or a configure.ac-settable value.
--           */
--          lt_fatal ("Unrecognized option in %s namespace: '%s'",
--                    ltwrapper_option_prefix, argv[i]);
--        }
--      /* otherwise ... */
--      newargz[++newargc] = xstrdup (argv[i]);
--    }
--  newargz[++newargc] = NULL;
--
--  LTWRAPPER_DEBUGPRINTF     (("(main) lt_argv_zero : %s\n", (lt_argv_zero ? lt_argv_zero : "<NULL>")));
-+  lt_debugprintf (__FILE__, __LINE__, "(main) lt_argv_zero: %s\n",
-+		  nonnull (lt_argv_zero));
-   for (i = 0; i < newargc; i++)
-     {
--      LTWRAPPER_DEBUGPRINTF (("(main) newargz[%d]   : %s\n", i, (newargz[i] ? newargz[i] : "<NULL>")));
-+      lt_debugprintf (__FILE__, __LINE__, "(main) newargz[%d]: %s\n",
-+		      i, nonnull (newargz[i]));
-     }
- 
- EOF
-@@ -3706,7 +4529,9 @@ EOF
-   if (rval == -1)
-     {
-       /* failed to start process */
--      LTWRAPPER_DEBUGPRINTF (("(main) failed to launch target \"%s\": errno = %d\n", lt_argv_zero, errno));
-+      lt_debugprintf (__FILE__, __LINE__,
-+		      "(main) failed to launch target \"%s\": %s\n",
-+		      lt_argv_zero, nonnull (strerror (errno)));
-       return 127;
-     }
-   return rval;
-@@ -3728,7 +4553,7 @@ xmalloc (size_t num)
- {
-   void *p = (void *) malloc (num);
-   if (!p)
--    lt_fatal ("Memory exhausted");
-+    lt_fatal (__FILE__, __LINE__, "memory exhausted");
- 
-   return p;
- }
-@@ -3762,8 +4587,8 @@ check_executable (const char *path)
- {
-   struct stat st;
- 
--  LTWRAPPER_DEBUGPRINTF (("(check_executable)  : %s\n",
--			  path ? (*path ? path : "EMPTY!") : "NULL!"));
-+  lt_debugprintf (__FILE__, __LINE__, "(check_executable): %s\n",
-+                  nonempty (path));
-   if ((!path) || (!*path))
-     return 0;
- 
-@@ -3780,8 +4605,8 @@ make_executable (const char *path)
-   int rval = 0;
-   struct stat st;
- 
--  LTWRAPPER_DEBUGPRINTF (("(make_executable)   : %s\n",
--			  path ? (*path ? path : "EMPTY!") : "NULL!"));
-+  lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n",
-+                  nonempty (path));
-   if ((!path) || (!*path))
-     return 0;
- 
-@@ -3807,8 +4632,8 @@ find_executable (const char *wrapper)
-   int tmp_len;
-   char *concat_name;
- 
--  LTWRAPPER_DEBUGPRINTF (("(find_executable)   : %s\n",
--			  wrapper ? (*wrapper ? wrapper : "EMPTY!") : "NULL!"));
-+  lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n",
-+                  nonempty (wrapper));
- 
-   if ((wrapper == NULL) || (*wrapper == '\0'))
-     return NULL;
-@@ -3861,7 +4686,8 @@ find_executable (const char *wrapper)
- 		{
- 		  /* empty path: current directory */
- 		  if (getcwd (tmp, LT_PATHMAX) == NULL)
--		    lt_fatal ("getcwd failed");
-+		    lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
-+                              nonnull (strerror (errno)));
- 		  tmp_len = strlen (tmp);
- 		  concat_name =
- 		    XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
-@@ -3886,7 +4712,8 @@ find_executable (const char *wrapper)
-     }
-   /* Relative path | not found in path: prepend cwd */
-   if (getcwd (tmp, LT_PATHMAX) == NULL)
--    lt_fatal ("getcwd failed");
-+    lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
-+              nonnull (strerror (errno)));
-   tmp_len = strlen (tmp);
-   concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
-   memcpy (concat_name, tmp, tmp_len);
-@@ -3912,8 +4739,9 @@ chase_symlinks (const char *pathspec)
-   int has_symlinks = 0;
-   while (strlen (tmp_pathspec) && !has_symlinks)
-     {
--      LTWRAPPER_DEBUGPRINTF (("checking path component for symlinks: %s\n",
--			      tmp_pathspec));
-+      lt_debugprintf (__FILE__, __LINE__,
-+		      "checking path component for symlinks: %s\n",
-+		      tmp_pathspec);
-       if (lstat (tmp_pathspec, &s) == 0)
- 	{
- 	  if (S_ISLNK (s.st_mode) != 0)
-@@ -3935,8 +4763,9 @@ chase_symlinks (const char *pathspec)
- 	}
-       else
- 	{
--	  char *errstr = strerror (errno);
--	  lt_fatal ("Error accessing file %s (%s)", tmp_pathspec, errstr);
-+	  lt_fatal (__FILE__, __LINE__,
-+		    "error accessing file \"%s\": %s",
-+		    tmp_pathspec, nonnull (strerror (errno)));
- 	}
-     }
-   XFREE (tmp_pathspec);
-@@ -3949,7 +4778,8 @@ chase_symlinks (const char *pathspec)
-   tmp_pathspec = realpath (pathspec, buf);
-   if (tmp_pathspec == 0)
-     {
--      lt_fatal ("Could not follow symlinks for %s", pathspec);
-+      lt_fatal (__FILE__, __LINE__,
-+		"could not follow symlinks for %s", pathspec);
-     }
-   return xstrdup (tmp_pathspec);
- #endif
-@@ -3975,11 +4805,25 @@ strendzap (char *str, const char *pat)
-   return str;
- }
- 
-+void
-+lt_debugprintf (const char *file, int line, const char *fmt, ...)
-+{
-+  va_list args;
-+  if (lt_debug)
-+    {
-+      (void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line);
-+      va_start (args, fmt);
-+      (void) vfprintf (stderr, fmt, args);
-+      va_end (args);
-+    }
-+}
-+
- static void
--lt_error_core (int exit_status, const char *mode,
-+lt_error_core (int exit_status, const char *file,
-+	       int line, const char *mode,
- 	       const char *message, va_list ap)
- {
--  fprintf (stderr, "%s: %s: ", program_name, mode);
-+  fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode);
-   vfprintf (stderr, message, ap);
-   fprintf (stderr, ".\n");
- 
-@@ -3988,20 +4832,32 @@ lt_error_core (int exit_status, const char *mode,
- }
- 
- void
--lt_fatal (const char *message, ...)
-+lt_fatal (const char *file, int line, const char *message, ...)
- {
-   va_list ap;
-   va_start (ap, message);
--  lt_error_core (EXIT_FAILURE, "FATAL", message, ap);
-+  lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap);
-   va_end (ap);
- }
- 
-+static const char *
-+nonnull (const char *s)
-+{
-+  return s ? s : "(null)";
-+}
-+
-+static const char *
-+nonempty (const char *s)
-+{
-+  return (s && !*s) ? "(empty)" : nonnull (s);
-+}
-+
- void
- lt_setenv (const char *name, const char *value)
- {
--  LTWRAPPER_DEBUGPRINTF (("(lt_setenv) setting '%s' to '%s'\n",
--                          (name ? name : "<NULL>"),
--                          (value ? value : "<NULL>")));
-+  lt_debugprintf (__FILE__, __LINE__,
-+		  "(lt_setenv) setting '%s' to '%s'\n",
-+                  nonnull (name), nonnull (value));
-   {
- #ifdef HAVE_SETENV
-     /* always make a copy, for consistency with !HAVE_SETENV */
-@@ -4049,9 +4905,9 @@ lt_extend_str (const char *orig_value, const char *add, int to_end)
- void
- lt_update_exe_path (const char *name, const char *value)
- {
--  LTWRAPPER_DEBUGPRINTF (("(lt_update_exe_path) modifying '%s' by prepending '%s'\n",
--                          (name ? name : "<NULL>"),
--                          (value ? value : "<NULL>")));
-+  lt_debugprintf (__FILE__, __LINE__,
-+		  "(lt_update_exe_path) modifying '%s' by prepending '%s'\n",
-+                  nonnull (name), nonnull (value));
- 
-   if (name && *name && value && *value)
-     {
-@@ -4070,9 +4926,9 @@ lt_update_exe_path (const char *name, const char *value)
- void
- lt_update_lib_path (const char *name, const char *value)
- {
--  LTWRAPPER_DEBUGPRINTF (("(lt_update_lib_path) modifying '%s' by prepending '%s'\n",
--                          (name ? name : "<NULL>"),
--                          (value ? value : "<NULL>")));
-+  lt_debugprintf (__FILE__, __LINE__,
-+		  "(lt_update_lib_path) modifying '%s' by prepending '%s'\n",
-+                  nonnull (name), nonnull (value));
- 
-   if (name && *name && value && *value)
-     {
-@@ -4222,7 +5078,7 @@ EOF
- func_win32_import_lib_p ()
- {
-     $opt_debug
--    case `eval "$file_magic_cmd \"\$1\" 2>/dev/null" | $SED -e 10q` in
-+    case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in
-     *import*) : ;;
-     *) false ;;
-     esac
-@@ -4401,9 +5257,9 @@ func_mode_link ()
- 	    ;;
- 	  *)
- 	    if test "$prev" = dlfiles; then
--	      dlfiles="$dlfiles $arg"
-+	      func_append dlfiles " $arg"
- 	    else
--	      dlprefiles="$dlprefiles $arg"
-+	      func_append dlprefiles " $arg"
- 	    fi
- 	    prev=
- 	    continue
-@@ -4427,7 +5283,7 @@ func_mode_link ()
- 	    *-*-darwin*)
- 	      case "$deplibs " in
- 		*" $qarg.ltframework "*) ;;
--		*) deplibs="$deplibs $qarg.ltframework" # this is fixed later
-+		*) func_append deplibs " $qarg.ltframework" # this is fixed later
- 		   ;;
- 	      esac
- 	      ;;
-@@ -4446,7 +5302,7 @@ func_mode_link ()
- 	    moreargs=
- 	    for fil in `cat "$save_arg"`
- 	    do
--#	      moreargs="$moreargs $fil"
-+#	      func_append moreargs " $fil"
- 	      arg=$fil
- 	      # A libtool-controlled object.
- 
-@@ -4475,7 +5331,7 @@ func_mode_link ()
- 
- 		  if test "$prev" = dlfiles; then
- 		    if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
--		      dlfiles="$dlfiles $pic_object"
-+		      func_append dlfiles " $pic_object"
- 		      prev=
- 		      continue
- 		    else
-@@ -4487,7 +5343,7 @@ func_mode_link ()
- 		  # CHECK ME:  I think I busted this.  -Ossama
- 		  if test "$prev" = dlprefiles; then
- 		    # Preload the old-style object.
--		    dlprefiles="$dlprefiles $pic_object"
-+		    func_append dlprefiles " $pic_object"
- 		    prev=
- 		  fi
- 
-@@ -4557,12 +5413,12 @@ func_mode_link ()
- 	  if test "$prev" = rpath; then
- 	    case "$rpath " in
- 	    *" $arg "*) ;;
--	    *) rpath="$rpath $arg" ;;
-+	    *) func_append rpath " $arg" ;;
- 	    esac
- 	  else
- 	    case "$xrpath " in
- 	    *" $arg "*) ;;
--	    *) xrpath="$xrpath $arg" ;;
-+	    *) func_append xrpath " $arg" ;;
- 	    esac
- 	  fi
- 	  prev=
-@@ -4574,28 +5430,28 @@ func_mode_link ()
- 	  continue
- 	  ;;
- 	weak)
--	  weak_libs="$weak_libs $arg"
-+	  func_append weak_libs " $arg"
- 	  prev=
- 	  continue
- 	  ;;
- 	xcclinker)
--	  linker_flags="$linker_flags $qarg"
--	  compiler_flags="$compiler_flags $qarg"
-+	  func_append linker_flags " $qarg"
-+	  func_append compiler_flags " $qarg"
- 	  prev=
- 	  func_append compile_command " $qarg"
- 	  func_append finalize_command " $qarg"
- 	  continue
- 	  ;;
- 	xcompiler)
--	  compiler_flags="$compiler_flags $qarg"
-+	  func_append compiler_flags " $qarg"
- 	  prev=
- 	  func_append compile_command " $qarg"
- 	  func_append finalize_command " $qarg"
- 	  continue
- 	  ;;
- 	xlinker)
--	  linker_flags="$linker_flags $qarg"
--	  compiler_flags="$compiler_flags $wl$qarg"
-+	  func_append linker_flags " $qarg"
-+	  func_append compiler_flags " $wl$qarg"
- 	  prev=
- 	  func_append compile_command " $wl$qarg"
- 	  func_append finalize_command " $wl$qarg"
-@@ -4686,15 +5542,16 @@ func_mode_link ()
- 	;;
- 
-       -L*)
--	func_stripname '-L' '' "$arg"
--	dir=$func_stripname_result
--	if test -z "$dir"; then
-+	func_stripname "-L" '' "$arg"
-+	if test -z "$func_stripname_result"; then
- 	  if test "$#" -gt 0; then
- 	    func_fatal_error "require no space between \`-L' and \`$1'"
- 	  else
- 	    func_fatal_error "need path for \`-L' option"
- 	  fi
- 	fi
-+	func_resolve_sysroot "$func_stripname_result"
-+	dir=$func_resolve_sysroot_result
- 	# We need an absolute path.
- 	case $dir in
- 	[\\/]* | [A-Za-z]:[\\/]*) ;;
-@@ -4706,10 +5563,16 @@ func_mode_link ()
- 	  ;;
- 	esac
- 	case "$deplibs " in
--	*" -L$dir "*) ;;
-+	*" -L$dir "* | *" $arg "*)
-+	  # Will only happen for absolute or sysroot arguments
-+	  ;;
- 	*)
--	  deplibs="$deplibs -L$dir"
--	  lib_search_path="$lib_search_path $dir"
-+	  # Preserve sysroot, but never include relative directories
-+	  case $dir in
-+	    [\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;;
-+	    *) func_append deplibs " -L$dir" ;;
-+	  esac
-+	  func_append lib_search_path " $dir"
- 	  ;;
- 	esac
- 	case $host in
-@@ -4718,12 +5581,12 @@ func_mode_link ()
- 	  case :$dllsearchpath: in
- 	  *":$dir:"*) ;;
- 	  ::) dllsearchpath=$dir;;
--	  *) dllsearchpath="$dllsearchpath:$dir";;
-+	  *) func_append dllsearchpath ":$dir";;
- 	  esac
- 	  case :$dllsearchpath: in
- 	  *":$testbindir:"*) ;;
- 	  ::) dllsearchpath=$testbindir;;
--	  *) dllsearchpath="$dllsearchpath:$testbindir";;
-+	  *) func_append dllsearchpath ":$testbindir";;
- 	  esac
- 	  ;;
- 	esac
-@@ -4747,7 +5610,7 @@ func_mode_link ()
- 	    ;;
- 	  *-*-rhapsody* | *-*-darwin1.[012])
- 	    # Rhapsody C and math libraries are in the System framework
--	    deplibs="$deplibs System.ltframework"
-+	    func_append deplibs " System.ltframework"
- 	    continue
- 	    ;;
- 	  *-*-sco3.2v5* | *-*-sco5v6*)
-@@ -4758,9 +5621,6 @@ func_mode_link ()
- 	    # Compiler inserts libc in the correct place for threads to work
- 	    test "X$arg" = "X-lc" && continue
- 	    ;;
--	  *-*-linux*)
--	    test "X$arg" = "X-lc" && continue
--	    ;;
- 	  esac
- 	elif test "X$arg" = "X-lc_r"; then
- 	 case $host in
-@@ -4770,7 +5630,7 @@ func_mode_link ()
- 	   ;;
- 	 esac
- 	fi
--	deplibs="$deplibs $arg"
-+	func_append deplibs " $arg"
- 	continue
- 	;;
- 
-@@ -4782,8 +5642,8 @@ func_mode_link ()
-       # Tru64 UNIX uses -model [arg] to determine the layout of C++
-       # classes, name mangling, and exception handling.
-       # Darwin uses the -arch flag to determine output architecture.
--      -model|-arch|-isysroot)
--	compiler_flags="$compiler_flags $arg"
-+      -model|-arch|-isysroot|--sysroot)
-+	func_append compiler_flags " $arg"
- 	func_append compile_command " $arg"
- 	func_append finalize_command " $arg"
- 	prev=xcompiler
-@@ -4791,12 +5651,12 @@ func_mode_link ()
- 	;;
- 
-       -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe|-threads)
--	compiler_flags="$compiler_flags $arg"
-+	func_append compiler_flags " $arg"
- 	func_append compile_command " $arg"
- 	func_append finalize_command " $arg"
- 	case "$new_inherited_linker_flags " in
- 	    *" $arg "*) ;;
--	    * ) new_inherited_linker_flags="$new_inherited_linker_flags $arg" ;;
-+	    * ) func_append new_inherited_linker_flags " $arg" ;;
- 	esac
- 	continue
- 	;;
-@@ -4863,13 +5723,17 @@ func_mode_link ()
- 	# We need an absolute path.
- 	case $dir in
- 	[\\/]* | [A-Za-z]:[\\/]*) ;;
-+	=*)
-+	  func_stripname '=' '' "$dir"
-+	  dir=$lt_sysroot$func_stripname_result
-+	  ;;
- 	*)
- 	  func_fatal_error "only absolute run-paths are allowed"
- 	  ;;
- 	esac
- 	case "$xrpath " in
- 	*" $dir "*) ;;
--	*) xrpath="$xrpath $dir" ;;
-+	*) func_append xrpath " $dir" ;;
- 	esac
- 	continue
- 	;;
-@@ -4922,8 +5786,8 @@ func_mode_link ()
- 	for flag in $args; do
- 	  IFS="$save_ifs"
-           func_quote_for_eval "$flag"
--	  arg="$arg $func_quote_for_eval_result"
--	  compiler_flags="$compiler_flags $func_quote_for_eval_result"
-+	  func_append arg " $func_quote_for_eval_result"
-+	  func_append compiler_flags " $func_quote_for_eval_result"
- 	done
- 	IFS="$save_ifs"
- 	func_stripname ' ' '' "$arg"
-@@ -4938,9 +5802,9 @@ func_mode_link ()
- 	for flag in $args; do
- 	  IFS="$save_ifs"
-           func_quote_for_eval "$flag"
--	  arg="$arg $wl$func_quote_for_eval_result"
--	  compiler_flags="$compiler_flags $wl$func_quote_for_eval_result"
--	  linker_flags="$linker_flags $func_quote_for_eval_result"
-+	  func_append arg " $wl$func_quote_for_eval_result"
-+	  func_append compiler_flags " $wl$func_quote_for_eval_result"
-+	  func_append linker_flags " $func_quote_for_eval_result"
- 	done
- 	IFS="$save_ifs"
- 	func_stripname ' ' '' "$arg"
-@@ -4968,24 +5832,27 @@ func_mode_link ()
- 	arg="$func_quote_for_eval_result"
- 	;;
- 
--      # -64, -mips[0-9] enable 64-bit mode on the SGI compiler
--      # -r[0-9][0-9]* specifies the processor on the SGI compiler
--      # -xarch=*, -xtarget=* enable 64-bit mode on the Sun compiler
--      # +DA*, +DD* enable 64-bit mode on the HP compiler
--      # -q* pass through compiler args for the IBM compiler
--      # -m*, -t[45]*, -txscale* pass through architecture-specific
--      # compiler args for GCC
--      # -F/path gives path to uninstalled frameworks, gcc on darwin
--      # -p, -pg, --coverage, -fprofile-* pass through profiling flag for GCC
--      # @file GCC response files
--      # -tp=* Portland pgcc target processor selection
-+      # Flags to be passed through unchanged, with rationale:
-+      # -64, -mips[0-9]      enable 64-bit mode for the SGI compiler
-+      # -r[0-9][0-9]*        specify processor for the SGI compiler
-+      # -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler
-+      # +DA*, +DD*           enable 64-bit mode for the HP compiler
-+      # -q*                  compiler args for the IBM compiler
-+      # -m*, -t[45]*, -txscale* architecture-specific flags for GCC
-+      # -F/path              path to uninstalled frameworks, gcc on darwin
-+      # -p, -pg, --coverage, -fprofile-*  profiling flags for GCC
-+      # @file                GCC response files
-+      # -tp=*                Portland pgcc target processor selection
-+      # --sysroot=*          for sysroot support
-+      # -O*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization
-       -64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \
--      -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*)
-+      -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \
-+      -O*|-flto*|-fwhopr*|-fuse-linker-plugin)
-         func_quote_for_eval "$arg"
- 	arg="$func_quote_for_eval_result"
-         func_append compile_command " $arg"
-         func_append finalize_command " $arg"
--        compiler_flags="$compiler_flags $arg"
-+        func_append compiler_flags " $arg"
-         continue
-         ;;
- 
-@@ -4997,7 +5864,7 @@ func_mode_link ()
- 
-       *.$objext)
- 	# A standard object.
--	objs="$objs $arg"
-+	func_append objs " $arg"
- 	;;
- 
-       *.lo)
-@@ -5028,7 +5895,7 @@ func_mode_link ()
- 
- 	    if test "$prev" = dlfiles; then
- 	      if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
--		dlfiles="$dlfiles $pic_object"
-+		func_append dlfiles " $pic_object"
- 		prev=
- 		continue
- 	      else
-@@ -5040,7 +5907,7 @@ func_mode_link ()
- 	    # CHECK ME:  I think I busted this.  -Ossama
- 	    if test "$prev" = dlprefiles; then
- 	      # Preload the old-style object.
--	      dlprefiles="$dlprefiles $pic_object"
-+	      func_append dlprefiles " $pic_object"
- 	      prev=
- 	    fi
- 
-@@ -5085,24 +5952,25 @@ func_mode_link ()
- 
-       *.$libext)
- 	# An archive.
--	deplibs="$deplibs $arg"
--	old_deplibs="$old_deplibs $arg"
-+	func_append deplibs " $arg"
-+	func_append old_deplibs " $arg"
- 	continue
- 	;;
- 
-       *.la)
- 	# A libtool-controlled library.
- 
-+	func_resolve_sysroot "$arg"
- 	if test "$prev" = dlfiles; then
- 	  # This library was specified with -dlopen.
--	  dlfiles="$dlfiles $arg"
-+	  func_append dlfiles " $func_resolve_sysroot_result"
- 	  prev=
- 	elif test "$prev" = dlprefiles; then
- 	  # The library was specified with -dlpreopen.
--	  dlprefiles="$dlprefiles $arg"
-+	  func_append dlprefiles " $func_resolve_sysroot_result"
- 	  prev=
- 	else
--	  deplibs="$deplibs $arg"
-+	  func_append deplibs " $func_resolve_sysroot_result"
- 	fi
- 	continue
- 	;;
-@@ -5127,7 +5995,7 @@ func_mode_link ()
-       func_fatal_help "the \`$prevarg' option requires an argument"
- 
-     if test "$export_dynamic" = yes && test -n "$export_dynamic_flag_spec"; then
--      eval "arg=\"$export_dynamic_flag_spec\""
-+      eval arg=\"$export_dynamic_flag_spec\"
-       func_append compile_command " $arg"
-       func_append finalize_command " $arg"
-     fi
-@@ -5144,11 +6012,13 @@ func_mode_link ()
-     else
-       shlib_search_path=
-     fi
--    eval "sys_lib_search_path=\"$sys_lib_search_path_spec\""
--    eval "sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\""
-+    eval sys_lib_search_path=\"$sys_lib_search_path_spec\"
-+    eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\"
- 
-     func_dirname "$output" "/" ""
-     output_objdir="$func_dirname_result$objdir"
-+    func_to_tool_file "$output_objdir/"
-+    tool_output_objdir=$func_to_tool_file_result
-     # Create the object directory.
-     func_mkdir_p "$output_objdir"
- 
-@@ -5169,12 +6039,12 @@ func_mode_link ()
-     # Find all interdependent deplibs by searching for libraries
-     # that are linked more than once (e.g. -la -lb -la)
-     for deplib in $deplibs; do
--      if $opt_duplicate_deps ; then
-+      if $opt_preserve_dup_deps ; then
- 	case "$libs " in
--	*" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
-+	*" $deplib "*) func_append specialdeplibs " $deplib" ;;
- 	esac
-       fi
--      libs="$libs $deplib"
-+      func_append libs " $deplib"
-     done
- 
-     if test "$linkmode" = lib; then
-@@ -5187,9 +6057,9 @@ func_mode_link ()
-       if $opt_duplicate_compiler_generated_deps; then
- 	for pre_post_dep in $predeps $postdeps; do
- 	  case "$pre_post_deps " in
--	  *" $pre_post_dep "*) specialdeplibs="$specialdeplibs $pre_post_deps" ;;
-+	  *" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;;
- 	  esac
--	  pre_post_deps="$pre_post_deps $pre_post_dep"
-+	  func_append pre_post_deps " $pre_post_dep"
- 	done
-       fi
-       pre_post_deps=
-@@ -5256,8 +6126,9 @@ func_mode_link ()
- 	for lib in $dlprefiles; do
- 	  # Ignore non-libtool-libs
- 	  dependency_libs=
-+	  func_resolve_sysroot "$lib"
- 	  case $lib in
--	  *.la)	func_source "$lib" ;;
-+	  *.la)	func_source "$func_resolve_sysroot_result" ;;
- 	  esac
- 
- 	  # Collect preopened libtool deplibs, except any this library
-@@ -5267,7 +6138,7 @@ func_mode_link ()
-             deplib_base=$func_basename_result
- 	    case " $weak_libs " in
- 	    *" $deplib_base "*) ;;
--	    *) deplibs="$deplibs $deplib" ;;
-+	    *) func_append deplibs " $deplib" ;;
- 	    esac
- 	  done
- 	done
-@@ -5288,11 +6159,11 @@ func_mode_link ()
- 	    compile_deplibs="$deplib $compile_deplibs"
- 	    finalize_deplibs="$deplib $finalize_deplibs"
- 	  else
--	    compiler_flags="$compiler_flags $deplib"
-+	    func_append compiler_flags " $deplib"
- 	    if test "$linkmode" = lib ; then
- 		case "$new_inherited_linker_flags " in
- 		    *" $deplib "*) ;;
--		    * ) new_inherited_linker_flags="$new_inherited_linker_flags $deplib" ;;
-+		    * ) func_append new_inherited_linker_flags " $deplib" ;;
- 		esac
- 	    fi
- 	  fi
-@@ -5377,7 +6248,7 @@ func_mode_link ()
- 	    if test "$linkmode" = lib ; then
- 		case "$new_inherited_linker_flags " in
- 		    *" $deplib "*) ;;
--		    * ) new_inherited_linker_flags="$new_inherited_linker_flags $deplib" ;;
-+		    * ) func_append new_inherited_linker_flags " $deplib" ;;
- 		esac
- 	    fi
- 	  fi
-@@ -5390,7 +6261,8 @@ func_mode_link ()
- 	    test "$pass" = conv && continue
- 	    newdependency_libs="$deplib $newdependency_libs"
- 	    func_stripname '-L' '' "$deplib"
--	    newlib_search_path="$newlib_search_path $func_stripname_result"
-+	    func_resolve_sysroot "$func_stripname_result"
-+	    func_append newlib_search_path " $func_resolve_sysroot_result"
- 	    ;;
- 	  prog)
- 	    if test "$pass" = conv; then
-@@ -5404,7 +6276,8 @@ func_mode_link ()
- 	      finalize_deplibs="$deplib $finalize_deplibs"
- 	    fi
- 	    func_stripname '-L' '' "$deplib"
--	    newlib_search_path="$newlib_search_path $func_stripname_result"
-+	    func_resolve_sysroot "$func_stripname_result"
-+	    func_append newlib_search_path " $func_resolve_sysroot_result"
- 	    ;;
- 	  *)
- 	    func_warning "\`-L' is ignored for archives/objects"
-@@ -5415,17 +6288,21 @@ func_mode_link ()
- 	-R*)
- 	  if test "$pass" = link; then
- 	    func_stripname '-R' '' "$deplib"
--	    dir=$func_stripname_result
-+	    func_resolve_sysroot "$func_stripname_result"
-+	    dir=$func_resolve_sysroot_result
- 	    # Make sure the xrpath contains only unique directories.
- 	    case "$xrpath " in
- 	    *" $dir "*) ;;
--	    *) xrpath="$xrpath $dir" ;;
-+	    *) func_append xrpath " $dir" ;;
- 	    esac
- 	  fi
- 	  deplibs="$deplib $deplibs"
- 	  continue
- 	  ;;
--	*.la) lib="$deplib" ;;
-+	*.la)
-+	  func_resolve_sysroot "$deplib"
-+	  lib=$func_resolve_sysroot_result
-+	  ;;
- 	*.$libext)
- 	  if test "$pass" = conv; then
- 	    deplibs="$deplib $deplibs"
-@@ -5488,11 +6365,11 @@ func_mode_link ()
- 	    if test "$pass" = dlpreopen || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then
- 	      # If there is no dlopen support or we're linking statically,
- 	      # we need to preload.
--	      newdlprefiles="$newdlprefiles $deplib"
-+	      func_append newdlprefiles " $deplib"
- 	      compile_deplibs="$deplib $compile_deplibs"
- 	      finalize_deplibs="$deplib $finalize_deplibs"
- 	    else
--	      newdlfiles="$newdlfiles $deplib"
-+	      func_append newdlfiles " $deplib"
- 	    fi
- 	  fi
- 	  continue
-@@ -5538,7 +6415,7 @@ func_mode_link ()
- 	  for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do
- 	    case " $new_inherited_linker_flags " in
- 	      *" $tmp_inherited_linker_flag "*) ;;
--	      *) new_inherited_linker_flags="$new_inherited_linker_flags $tmp_inherited_linker_flag";;
-+	      *) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";;
- 	    esac
- 	  done
- 	fi
-@@ -5546,8 +6423,8 @@ func_mode_link ()
- 	if test "$linkmode,$pass" = "lib,link" ||
- 	   test "$linkmode,$pass" = "prog,scan" ||
- 	   { test "$linkmode" != prog && test "$linkmode" != lib; }; then
--	  test -n "$dlopen" && dlfiles="$dlfiles $dlopen"
--	  test -n "$dlpreopen" && dlprefiles="$dlprefiles $dlpreopen"
-+	  test -n "$dlopen" && func_append dlfiles " $dlopen"
-+	  test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen"
- 	fi
- 
- 	if test "$pass" = conv; then
-@@ -5558,20 +6435,20 @@ func_mode_link ()
- 	      func_fatal_error "cannot find name of link library for \`$lib'"
- 	    fi
- 	    # It is a libtool convenience library, so add in its objects.
--	    convenience="$convenience $ladir/$objdir/$old_library"
--	    old_convenience="$old_convenience $ladir/$objdir/$old_library"
-+	    func_append convenience " $ladir/$objdir/$old_library"
-+	    func_append old_convenience " $ladir/$objdir/$old_library"
- 	  elif test "$linkmode" != prog && test "$linkmode" != lib; then
- 	    func_fatal_error "\`$lib' is not a convenience library"
- 	  fi
- 	  tmp_libs=
- 	  for deplib in $dependency_libs; do
- 	    deplibs="$deplib $deplibs"
--	    if $opt_duplicate_deps ; then
-+	    if $opt_preserve_dup_deps ; then
- 	      case "$tmp_libs " in
--	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
-+	      *" $deplib "*) func_append specialdeplibs " $deplib" ;;
- 	      esac
- 	    fi
--	    tmp_libs="$tmp_libs $deplib"
-+	    func_append tmp_libs " $deplib"
- 	  done
- 	  continue
- 	fi # $pass = conv
-@@ -5579,9 +6456,15 @@ func_mode_link ()
- 
- 	# Get the name of the library we link against.
- 	linklib=
--	for l in $old_library $library_names; do
--	  linklib="$l"
--	done
-+	if test -n "$old_library" &&
-+	   { test "$prefer_static_libs" = yes ||
-+	     test "$prefer_static_libs,$installed" = "built,no"; }; then
-+	  linklib=$old_library
-+	else
-+	  for l in $old_library $library_names; do
-+	    linklib="$l"
-+	  done
-+	fi
- 	if test -z "$linklib"; then
- 	  func_fatal_error "cannot find name of link library for \`$lib'"
- 	fi
-@@ -5598,9 +6481,9 @@ func_mode_link ()
- 	    # statically, we need to preload.  We also need to preload any
- 	    # dependent libraries so libltdl's deplib preloader doesn't
- 	    # bomb out in the load deplibs phase.
--	    dlprefiles="$dlprefiles $lib $dependency_libs"
-+	    func_append dlprefiles " $lib $dependency_libs"
- 	  else
--	    newdlfiles="$newdlfiles $lib"
-+	    func_append newdlfiles " $lib"
- 	  fi
- 	  continue
- 	fi # $pass = dlopen
-@@ -5622,14 +6505,14 @@ func_mode_link ()
- 
- 	# Find the relevant object directory and library name.
- 	if test "X$installed" = Xyes; then
--	  if test ! -f "$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
-+	  if test ! -f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
- 	    func_warning "library \`$lib' was moved."
- 	    dir="$ladir"
- 	    absdir="$abs_ladir"
- 	    libdir="$abs_ladir"
- 	  else
--	    dir="$libdir"
--	    absdir="$libdir"
-+	    dir="$lt_sysroot$libdir"
-+	    absdir="$lt_sysroot$libdir"
- 	  fi
- 	  test "X$hardcode_automatic" = Xyes && avoidtemprpath=yes
- 	else
-@@ -5637,12 +6520,12 @@ func_mode_link ()
- 	    dir="$ladir"
- 	    absdir="$abs_ladir"
- 	    # Remove this search path later
--	    notinst_path="$notinst_path $abs_ladir"
-+	    func_append notinst_path " $abs_ladir"
- 	  else
- 	    dir="$ladir/$objdir"
- 	    absdir="$abs_ladir/$objdir"
- 	    # Remove this search path later
--	    notinst_path="$notinst_path $abs_ladir"
-+	    func_append notinst_path " $abs_ladir"
- 	  fi
- 	fi # $installed = yes
- 	func_stripname 'lib' '.la' "$laname"
-@@ -5653,20 +6536,46 @@ func_mode_link ()
- 	  if test -z "$libdir" && test "$linkmode" = prog; then
- 	    func_fatal_error "only libraries may -dlpreopen a convenience library: \`$lib'"
- 	  fi
--	  # Prefer using a static library (so that no silly _DYNAMIC symbols
--	  # are required to link).
--	  if test -n "$old_library"; then
--	    newdlprefiles="$newdlprefiles $dir/$old_library"
--	    # Keep a list of preopened convenience libraries to check
--	    # that they are being used correctly in the link pass.
--	    test -z "$libdir" && \
--		dlpreconveniencelibs="$dlpreconveniencelibs $dir/$old_library"
--	  # Otherwise, use the dlname, so that lt_dlopen finds it.
--	  elif test -n "$dlname"; then
--	    newdlprefiles="$newdlprefiles $dir/$dlname"
--	  else
--	    newdlprefiles="$newdlprefiles $dir/$linklib"
--	  fi
-+	  case "$host" in
-+	    # special handling for platforms with PE-DLLs.
-+	    *cygwin* | *mingw* | *cegcc* )
-+	      # Linker will automatically link against shared library if both
-+	      # static and shared are present.  Therefore, ensure we extract
-+	      # symbols from the import library if a shared library is present
-+	      # (otherwise, the dlopen module name will be incorrect).  We do
-+	      # this by putting the import library name into $newdlprefiles.
-+	      # We recover the dlopen module name by 'saving' the la file
-+	      # name in a special purpose variable, and (later) extracting the
-+	      # dlname from the la file.
-+	      if test -n "$dlname"; then
-+	        func_tr_sh "$dir/$linklib"
-+	        eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname"
-+	        func_append newdlprefiles " $dir/$linklib"
-+	      else
-+	        func_append newdlprefiles " $dir/$old_library"
-+	        # Keep a list of preopened convenience libraries to check
-+	        # that they are being used correctly in the link pass.
-+	        test -z "$libdir" && \
-+	          func_append dlpreconveniencelibs " $dir/$old_library"
-+	      fi
-+	    ;;
-+	    * )
-+	      # Prefer using a static library (so that no silly _DYNAMIC symbols
-+	      # are required to link).
-+	      if test -n "$old_library"; then
-+	        func_append newdlprefiles " $dir/$old_library"
-+	        # Keep a list of preopened convenience libraries to check
-+	        # that they are being used correctly in the link pass.
-+	        test -z "$libdir" && \
-+	          func_append dlpreconveniencelibs " $dir/$old_library"
-+	      # Otherwise, use the dlname, so that lt_dlopen finds it.
-+	      elif test -n "$dlname"; then
-+	        func_append newdlprefiles " $dir/$dlname"
-+	      else
-+	        func_append newdlprefiles " $dir/$linklib"
-+	      fi
-+	    ;;
-+	  esac
- 	fi # $pass = dlpreopen
- 
- 	if test -z "$libdir"; then
-@@ -5684,7 +6593,7 @@ func_mode_link ()
- 
- 
- 	if test "$linkmode" = prog && test "$pass" != link; then
--	  newlib_search_path="$newlib_search_path $ladir"
-+	  func_append newlib_search_path " $ladir"
- 	  deplibs="$lib $deplibs"
- 
- 	  linkalldeplibs=no
-@@ -5697,7 +6606,8 @@ func_mode_link ()
- 	  for deplib in $dependency_libs; do
- 	    case $deplib in
- 	    -L*) func_stripname '-L' '' "$deplib"
--	         newlib_search_path="$newlib_search_path $func_stripname_result"
-+	         func_resolve_sysroot "$func_stripname_result"
-+	         func_append newlib_search_path " $func_resolve_sysroot_result"
- 		 ;;
- 	    esac
- 	    # Need to link against all dependency_libs?
-@@ -5708,12 +6618,12 @@ func_mode_link ()
- 	      # or/and link against static libraries
- 	      newdependency_libs="$deplib $newdependency_libs"
- 	    fi
--	    if $opt_duplicate_deps ; then
-+	    if $opt_preserve_dup_deps ; then
- 	      case "$tmp_libs " in
--	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
-+	      *" $deplib "*) func_append specialdeplibs " $deplib" ;;
- 	      esac
- 	    fi
--	    tmp_libs="$tmp_libs $deplib"
-+	    func_append tmp_libs " $deplib"
- 	  done # for deplib
- 	  continue
- 	fi # $linkmode = prog...
-@@ -5728,7 +6638,7 @@ func_mode_link ()
- 	      # Make sure the rpath contains only unique directories.
- 	      case "$temp_rpath:" in
- 	      *"$absdir:"*) ;;
--	      *) temp_rpath="$temp_rpath$absdir:" ;;
-+	      *) func_append temp_rpath "$absdir:" ;;
- 	      esac
- 	    fi
- 
-@@ -5740,7 +6650,7 @@ func_mode_link ()
- 	    *)
- 	      case "$compile_rpath " in
- 	      *" $absdir "*) ;;
--	      *) compile_rpath="$compile_rpath $absdir"
-+	      *) func_append compile_rpath " $absdir" ;;
- 	      esac
- 	      ;;
- 	    esac
-@@ -5749,7 +6659,7 @@ func_mode_link ()
- 	    *)
- 	      case "$finalize_rpath " in
- 	      *" $libdir "*) ;;
--	      *) finalize_rpath="$finalize_rpath $libdir"
-+	      *) func_append finalize_rpath " $libdir" ;;
- 	      esac
- 	      ;;
- 	    esac
-@@ -5774,12 +6684,12 @@ func_mode_link ()
- 	  case $host in
- 	  *cygwin* | *mingw* | *cegcc*)
- 	      # No point in relinking DLLs because paths are not encoded
--	      notinst_deplibs="$notinst_deplibs $lib"
-+	      func_append notinst_deplibs " $lib"
- 	      need_relink=no
- 	    ;;
- 	  *)
- 	    if test "$installed" = no; then
--	      notinst_deplibs="$notinst_deplibs $lib"
-+	      func_append notinst_deplibs " $lib"
- 	      need_relink=yes
- 	    fi
- 	    ;;
-@@ -5814,7 +6724,7 @@ func_mode_link ()
- 	    *)
- 	      case "$compile_rpath " in
- 	      *" $absdir "*) ;;
--	      *) compile_rpath="$compile_rpath $absdir"
-+	      *) func_append compile_rpath " $absdir" ;;
- 	      esac
- 	      ;;
- 	    esac
-@@ -5823,7 +6733,7 @@ func_mode_link ()
- 	    *)
- 	      case "$finalize_rpath " in
- 	      *" $libdir "*) ;;
--	      *) finalize_rpath="$finalize_rpath $libdir"
-+	      *) func_append finalize_rpath " $libdir" ;;
- 	      esac
- 	      ;;
- 	    esac
-@@ -5835,7 +6745,7 @@ func_mode_link ()
- 	    shift
- 	    realname="$1"
- 	    shift
--	    eval "libname=\"$libname_spec\""
-+	    libname=`eval "\\$ECHO \"$libname_spec\""`
- 	    # use dlname if we got it. it's perfectly good, no?
- 	    if test -n "$dlname"; then
- 	      soname="$dlname"
-@@ -5848,7 +6758,7 @@ func_mode_link ()
- 		versuffix="-$major"
- 		;;
- 	      esac
--	      eval "soname=\"$soname_spec\""
-+	      eval soname=\"$soname_spec\"
- 	    else
- 	      soname="$realname"
- 	    fi
-@@ -5877,7 +6787,7 @@ func_mode_link ()
- 	    linklib=$newlib
- 	  fi # test -n "$old_archive_from_expsyms_cmds"
- 
--	  if test "$linkmode" = prog || test "$mode" != relink; then
-+	  if test "$linkmode" = prog || test "$opt_mode" != relink; then
- 	    add_shlibpath=
- 	    add_dir=
- 	    add=
-@@ -5933,7 +6843,7 @@ func_mode_link ()
- 		if test -n "$inst_prefix_dir"; then
- 		  case $libdir in
- 		    [\\/]*)
--		      add_dir="$add_dir -L$inst_prefix_dir$libdir"
-+		      func_append add_dir " -L$inst_prefix_dir$libdir"
- 		      ;;
- 		  esac
- 		fi
-@@ -5955,7 +6865,7 @@ func_mode_link ()
- 	    if test -n "$add_shlibpath"; then
- 	      case :$compile_shlibpath: in
- 	      *":$add_shlibpath:"*) ;;
--	      *) compile_shlibpath="$compile_shlibpath$add_shlibpath:" ;;
-+	      *) func_append compile_shlibpath "$add_shlibpath:" ;;
- 	      esac
- 	    fi
- 	    if test "$linkmode" = prog; then
-@@ -5969,13 +6879,13 @@ func_mode_link ()
- 		 test "$hardcode_shlibpath_var" = yes; then
- 		case :$finalize_shlibpath: in
- 		*":$libdir:"*) ;;
--		*) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
-+		*) func_append finalize_shlibpath "$libdir:" ;;
- 		esac
- 	      fi
- 	    fi
- 	  fi
- 
--	  if test "$linkmode" = prog || test "$mode" = relink; then
-+	  if test "$linkmode" = prog || test "$opt_mode" = relink; then
- 	    add_shlibpath=
- 	    add_dir=
- 	    add=
-@@ -5989,7 +6899,7 @@ func_mode_link ()
- 	    elif test "$hardcode_shlibpath_var" = yes; then
- 	      case :$finalize_shlibpath: in
- 	      *":$libdir:"*) ;;
--	      *) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
-+	      *) func_append finalize_shlibpath "$libdir:" ;;
- 	      esac
- 	      add="-l$name"
- 	    elif test "$hardcode_automatic" = yes; then
-@@ -6001,12 +6911,12 @@ func_mode_link ()
- 	      fi
- 	    else
- 	      # We cannot seem to hardcode it, guess we'll fake it.
--	      add_dir="-L$libdir"
-+	      add_dir="-L$lt_sysroot$libdir"
- 	      # Try looking first in the location we're being installed to.
- 	      if test -n "$inst_prefix_dir"; then
- 		case $libdir in
- 		  [\\/]*)
--		    add_dir="$add_dir -L$inst_prefix_dir$libdir"
-+		    func_append add_dir " -L$inst_prefix_dir$libdir"
- 		    ;;
- 		esac
- 	      fi
-@@ -6083,27 +6993,33 @@ func_mode_link ()
- 	           temp_xrpath=$func_stripname_result
- 		   case " $xrpath " in
- 		   *" $temp_xrpath "*) ;;
--		   *) xrpath="$xrpath $temp_xrpath";;
-+		   *) func_append xrpath " $temp_xrpath";;
- 		   esac;;
--	      *) temp_deplibs="$temp_deplibs $libdir";;
-+	      *) func_append temp_deplibs " $libdir";;
- 	      esac
- 	    done
- 	    dependency_libs="$temp_deplibs"
- 	  fi
- 
--	  newlib_search_path="$newlib_search_path $absdir"
-+	  func_append newlib_search_path " $absdir"
- 	  # Link against this library
- 	  test "$link_static" = no && newdependency_libs="$abs_ladir/$laname $newdependency_libs"
- 	  # ... and its dependency_libs
- 	  tmp_libs=
- 	  for deplib in $dependency_libs; do
- 	    newdependency_libs="$deplib $newdependency_libs"
--	    if $opt_duplicate_deps ; then
-+	    case $deplib in
-+              -L*) func_stripname '-L' '' "$deplib"
-+                   func_resolve_sysroot "$func_stripname_result";;
-+              *) func_resolve_sysroot "$deplib" ;;
-+            esac
-+	    if $opt_preserve_dup_deps ; then
- 	      case "$tmp_libs " in
--	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
-+	      *" $func_resolve_sysroot_result "*)
-+                func_append specialdeplibs " $func_resolve_sysroot_result" ;;
- 	      esac
- 	    fi
--	    tmp_libs="$tmp_libs $deplib"
-+	    func_append tmp_libs " $func_resolve_sysroot_result"
- 	  done
- 
- 	  if test "$link_all_deplibs" != no; then
-@@ -6113,8 +7029,10 @@ func_mode_link ()
- 	      case $deplib in
- 	      -L*) path="$deplib" ;;
- 	      *.la)
-+	        func_resolve_sysroot "$deplib"
-+	        deplib=$func_resolve_sysroot_result
- 	        func_dirname "$deplib" "" "."
--		dir="$func_dirname_result"
-+		dir=$func_dirname_result
- 		# We need an absolute path.
- 		case $dir in
- 		[\\/]* | [A-Za-z]:[\\/]*) absdir="$dir" ;;
-@@ -6130,7 +7048,7 @@ func_mode_link ()
- 		case $host in
- 		*-*-darwin*)
- 		  depdepl=
--		  deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
-+		  eval deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
- 		  if test -n "$deplibrary_names" ; then
- 		    for tmp in $deplibrary_names ; do
- 		      depdepl=$tmp
-@@ -6141,8 +7059,8 @@ func_mode_link ()
-                       if test -z "$darwin_install_name"; then
-                           darwin_install_name=`${OTOOL64} -L $depdepl  | awk '{if (NR == 2) {print $1;exit}}'`
-                       fi
--		      compiler_flags="$compiler_flags ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}"
--		      linker_flags="$linker_flags -dylib_file ${darwin_install_name}:${depdepl}"
-+		      func_append compiler_flags " ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}"
-+		      func_append linker_flags " -dylib_file ${darwin_install_name}:${depdepl}"
- 		      path=
- 		    fi
- 		  fi
-@@ -6152,7 +7070,7 @@ func_mode_link ()
- 		  ;;
- 		esac
- 		else
--		  libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
-+		  eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
- 		  test -z "$libdir" && \
- 		    func_fatal_error "\`$deplib' is not a valid libtool archive"
- 		  test "$absdir" != "$libdir" && \
-@@ -6192,7 +7110,7 @@ func_mode_link ()
- 	  for dir in $newlib_search_path; do
- 	    case "$lib_search_path " in
- 	    *" $dir "*) ;;
--	    *) lib_search_path="$lib_search_path $dir" ;;
-+	    *) func_append lib_search_path " $dir" ;;
- 	    esac
- 	  done
- 	  newlib_search_path=
-@@ -6205,7 +7123,7 @@ func_mode_link ()
- 	fi
- 	for var in $vars dependency_libs; do
- 	  # Add libraries to $var in reverse order
--	  eval tmp_libs=\$$var
-+	  eval tmp_libs=\"\$$var\"
- 	  new_libs=
- 	  for deplib in $tmp_libs; do
- 	    # FIXME: Pedantically, this is the right thing to do, so
-@@ -6250,13 +7168,13 @@ func_mode_link ()
- 	    -L*)
- 	      case " $tmp_libs " in
- 	      *" $deplib "*) ;;
--	      *) tmp_libs="$tmp_libs $deplib" ;;
-+	      *) func_append tmp_libs " $deplib" ;;
- 	      esac
- 	      ;;
--	    *) tmp_libs="$tmp_libs $deplib" ;;
-+	    *) func_append tmp_libs " $deplib" ;;
- 	    esac
- 	  done
--	  eval $var=\$tmp_libs
-+	  eval $var=\"$tmp_libs\"
- 	done # for var
-       fi
-       # Last step: remove runtime libs from dependency_libs
-@@ -6269,7 +7187,7 @@ func_mode_link ()
- 	  ;;
- 	esac
- 	if test -n "$i" ; then
--	  tmp_libs="$tmp_libs $i"
-+	  func_append tmp_libs " $i"
- 	fi
-       done
-       dependency_libs=$tmp_libs
-@@ -6310,7 +7228,7 @@ func_mode_link ()
-       # Now set the variables for building old libraries.
-       build_libtool_libs=no
-       oldlibs="$output"
--      objs="$objs$old_deplibs"
-+      func_append objs "$old_deplibs"
-       ;;
- 
-     lib)
-@@ -6319,8 +7237,8 @@ func_mode_link ()
-       lib*)
- 	func_stripname 'lib' '.la' "$outputname"
- 	name=$func_stripname_result
--	eval "shared_ext=\"$shrext_cmds\""
--	eval "libname=\"$libname_spec\""
-+	eval shared_ext=\"$shrext_cmds\"
-+	eval libname=\"$libname_spec\"
- 	;;
-       *)
- 	test "$module" = no && \
-@@ -6330,8 +7248,8 @@ func_mode_link ()
- 	  # Add the "lib" prefix for modules if required
- 	  func_stripname '' '.la' "$outputname"
- 	  name=$func_stripname_result
--	  eval "shared_ext=\"$shrext_cmds\""
--	  eval "libname=\"$libname_spec\""
-+	  eval shared_ext=\"$shrext_cmds\"
-+	  eval libname=\"$libname_spec\"
- 	else
- 	  func_stripname '' '.la' "$outputname"
- 	  libname=$func_stripname_result
-@@ -6346,7 +7264,7 @@ func_mode_link ()
- 	  echo
- 	  $ECHO "*** Warning: Linking the shared library $output against the non-libtool"
- 	  $ECHO "*** objects $objs is not portable!"
--	  libobjs="$libobjs $objs"
-+	  func_append libobjs " $objs"
- 	fi
-       fi
- 
-@@ -6544,7 +7462,7 @@ func_mode_link ()
- 	  done
- 
- 	  # Make executables depend on our current version.
--	  verstring="$verstring:${current}.0"
-+	  func_append verstring ":${current}.0"
- 	  ;;
- 
- 	qnx)
-@@ -6612,10 +7530,10 @@ func_mode_link ()
-       fi
- 
-       func_generate_dlsyms "$libname" "$libname" "yes"
--      libobjs="$libobjs $symfileobj"
-+      func_append libobjs " $symfileobj"
-       test "X$libobjs" = "X " && libobjs=
- 
--      if test "$mode" != relink; then
-+      if test "$opt_mode" != relink; then
- 	# Remove our outputs, but don't remove object files since they
- 	# may have been created when compiling PIC objects.
- 	removelist=
-@@ -6631,7 +7549,7 @@ func_mode_link ()
- 		   continue
- 		 fi
- 	       fi
--	       removelist="$removelist $p"
-+	       func_append removelist " $p"
- 	       ;;
- 	    *) ;;
- 	  esac
-@@ -6642,7 +7560,7 @@ func_mode_link ()
- 
-       # Now set the variables for building old libraries.
-       if test "$build_old_libs" = yes && test "$build_libtool_libs" != convenience ; then
--	oldlibs="$oldlibs $output_objdir/$libname.$libext"
-+	func_append oldlibs " $output_objdir/$libname.$libext"
- 
- 	# Transform .lo files to .o files.
- 	oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; $lo2o" | $NL2SP`
-@@ -6659,10 +7577,11 @@ func_mode_link ()
- 	# If the user specified any rpath flags, then add them.
- 	temp_xrpath=
- 	for libdir in $xrpath; do
--	  temp_xrpath="$temp_xrpath -R$libdir"
-+	  func_replace_sysroot "$libdir"
-+	  func_append temp_xrpath " -R$func_replace_sysroot_result"
- 	  case "$finalize_rpath " in
- 	  *" $libdir "*) ;;
--	  *) finalize_rpath="$finalize_rpath $libdir" ;;
-+	  *) func_append finalize_rpath " $libdir" ;;
- 	  esac
- 	done
- 	if test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes; then
-@@ -6676,7 +7595,7 @@ func_mode_link ()
-       for lib in $old_dlfiles; do
- 	case " $dlprefiles $dlfiles " in
- 	*" $lib "*) ;;
--	*) dlfiles="$dlfiles $lib" ;;
-+	*) func_append dlfiles " $lib" ;;
- 	esac
-       done
- 
-@@ -6686,7 +7605,7 @@ func_mode_link ()
-       for lib in $old_dlprefiles; do
- 	case "$dlprefiles " in
- 	*" $lib "*) ;;
--	*) dlprefiles="$dlprefiles $lib" ;;
-+	*) func_append dlprefiles " $lib" ;;
- 	esac
-       done
- 
-@@ -6698,7 +7617,7 @@ func_mode_link ()
- 	    ;;
- 	  *-*-rhapsody* | *-*-darwin1.[012])
- 	    # Rhapsody C library is in the System framework
--	    deplibs="$deplibs System.ltframework"
-+	    func_append deplibs " System.ltframework"
- 	    ;;
- 	  *-*-netbsd*)
- 	    # Don't link with libc until the a.out ld.so is fixed.
-@@ -6715,7 +7634,7 @@ func_mode_link ()
- 	  *)
- 	    # Add libc to deplibs on all other systems if necessary.
- 	    if test "$build_libtool_need_lc" = "yes"; then
--	      deplibs="$deplibs -lc"
-+	      func_append deplibs " -lc"
- 	    fi
- 	    ;;
- 	  esac
-@@ -6764,18 +7683,18 @@ EOF
- 		if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
- 		  case " $predeps $postdeps " in
- 		  *" $i "*)
--		    newdeplibs="$newdeplibs $i"
-+		    func_append newdeplibs " $i"
- 		    i=""
- 		    ;;
- 		  esac
- 		fi
- 		if test -n "$i" ; then
--		  eval "libname=\"$libname_spec\""
--		  eval "deplib_matches=\"$library_names_spec\""
-+		  libname=`eval "\\$ECHO \"$libname_spec\""`
-+		  deplib_matches=`eval "\\$ECHO \"$library_names_spec\""`
- 		  set dummy $deplib_matches; shift
- 		  deplib_match=$1
- 		  if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
--		    newdeplibs="$newdeplibs $i"
-+		    func_append newdeplibs " $i"
- 		  else
- 		    droppeddeps=yes
- 		    echo
-@@ -6789,7 +7708,7 @@ EOF
- 		fi
- 		;;
- 	      *)
--		newdeplibs="$newdeplibs $i"
-+		func_append newdeplibs " $i"
- 		;;
- 	      esac
- 	    done
-@@ -6807,18 +7726,18 @@ EOF
- 		  if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
- 		    case " $predeps $postdeps " in
- 		    *" $i "*)
--		      newdeplibs="$newdeplibs $i"
-+		      func_append newdeplibs " $i"
- 		      i=""
- 		      ;;
- 		    esac
- 		  fi
- 		  if test -n "$i" ; then
--		    eval "libname=\"$libname_spec\""
--		    eval "deplib_matches=\"$library_names_spec\""
-+		    libname=`eval "\\$ECHO \"$libname_spec\""`
-+		    deplib_matches=`eval "\\$ECHO \"$library_names_spec\""`
- 		    set dummy $deplib_matches; shift
- 		    deplib_match=$1
- 		    if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
--		      newdeplibs="$newdeplibs $i"
-+		      func_append newdeplibs " $i"
- 		    else
- 		      droppeddeps=yes
- 		      echo
-@@ -6840,7 +7759,7 @@ EOF
- 		fi
- 		;;
- 	      *)
--		newdeplibs="$newdeplibs $i"
-+		func_append newdeplibs " $i"
- 		;;
- 	      esac
- 	    done
-@@ -6857,15 +7776,27 @@ EOF
- 	      if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
- 		case " $predeps $postdeps " in
- 		*" $a_deplib "*)
--		  newdeplibs="$newdeplibs $a_deplib"
-+		  func_append newdeplibs " $a_deplib"
- 		  a_deplib=""
- 		  ;;
- 		esac
- 	      fi
- 	      if test -n "$a_deplib" ; then
--		eval "libname=\"$libname_spec\""
-+		libname=`eval "\\$ECHO \"$libname_spec\""`
-+		if test -n "$file_magic_glob"; then
-+		  libnameglob=`func_echo_all "$libname" | $SED -e $file_magic_glob`
-+		else
-+		  libnameglob=$libname
-+		fi
-+		test "$want_nocaseglob" = yes && nocaseglob=`shopt -p nocaseglob`
- 		for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
--		  potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
-+		  if test "$want_nocaseglob" = yes; then
-+		    shopt -s nocaseglob
-+		    potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
-+		    $nocaseglob
-+		  else
-+		    potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
-+		  fi
- 		  for potent_lib in $potential_libs; do
- 		      # Follow soft links.
- 		      if ls -lLd "$potent_lib" 2>/dev/null |
-@@ -6885,10 +7816,10 @@ EOF
- 			*) potlib=`$ECHO "$potlib" | $SED 's,[^/]*$,,'`"$potliblink";;
- 			esac
- 		      done
--		      if eval "$file_magic_cmd \"\$potlib\"" 2>/dev/null |
-+		      if eval $file_magic_cmd \"\$potlib\" 2>/dev/null |
- 			 $SED -e 10q |
- 			 $EGREP "$file_magic_regex" > /dev/null; then
--			newdeplibs="$newdeplibs $a_deplib"
-+			func_append newdeplibs " $a_deplib"
- 			a_deplib=""
- 			break 2
- 		      fi
-@@ -6913,7 +7844,7 @@ EOF
- 	      ;;
- 	    *)
- 	      # Add a -L argument.
--	      newdeplibs="$newdeplibs $a_deplib"
-+	      func_append newdeplibs " $a_deplib"
- 	      ;;
- 	    esac
- 	  done # Gone through all deplibs.
-@@ -6929,20 +7860,20 @@ EOF
- 	      if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
- 		case " $predeps $postdeps " in
- 		*" $a_deplib "*)
--		  newdeplibs="$newdeplibs $a_deplib"
-+		  func_append newdeplibs " $a_deplib"
- 		  a_deplib=""
- 		  ;;
- 		esac
- 	      fi
- 	      if test -n "$a_deplib" ; then
--		eval "libname=\"$libname_spec\""
-+		libname=`eval "\\$ECHO \"$libname_spec\""`
- 		for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
- 		  potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
- 		  for potent_lib in $potential_libs; do
- 		    potlib="$potent_lib" # see symlink-check above in file_magic test
- 		    if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \
- 		       $EGREP "$match_pattern_regex" > /dev/null; then
--		      newdeplibs="$newdeplibs $a_deplib"
-+		      func_append newdeplibs " $a_deplib"
- 		      a_deplib=""
- 		      break 2
- 		    fi
-@@ -6967,7 +7898,7 @@ EOF
- 	      ;;
- 	    *)
- 	      # Add a -L argument.
--	      newdeplibs="$newdeplibs $a_deplib"
-+	      func_append newdeplibs " $a_deplib"
- 	      ;;
- 	    esac
- 	  done # Gone through all deplibs.
-@@ -7071,7 +8002,7 @@ EOF
- 	*)
- 	  case " $deplibs " in
- 	  *" -L$path/$objdir "*)
--	    new_libs="$new_libs -L$path/$objdir" ;;
-+	    func_append new_libs " -L$path/$objdir" ;;
- 	  esac
- 	  ;;
- 	esac
-@@ -7081,10 +8012,10 @@ EOF
- 	-L*)
- 	  case " $new_libs " in
- 	  *" $deplib "*) ;;
--	  *) new_libs="$new_libs $deplib" ;;
-+	  *) func_append new_libs " $deplib" ;;
- 	  esac
- 	  ;;
--	*) new_libs="$new_libs $deplib" ;;
-+	*) func_append new_libs " $deplib" ;;
- 	esac
-       done
-       deplibs="$new_libs"
-@@ -7101,10 +8032,12 @@ EOF
- 	  hardcode_libdirs=
- 	  dep_rpath=
- 	  rpath="$finalize_rpath"
--	  test "$mode" != relink && rpath="$compile_rpath$rpath"
-+	  test "$opt_mode" != relink && rpath="$compile_rpath$rpath"
- 	  for libdir in $rpath; do
- 	    if test -n "$hardcode_libdir_flag_spec"; then
- 	      if test -n "$hardcode_libdir_separator"; then
-+		func_replace_sysroot "$libdir"
-+		libdir=$func_replace_sysroot_result
- 		if test -z "$hardcode_libdirs"; then
- 		  hardcode_libdirs="$libdir"
- 		else
-@@ -7113,18 +8046,18 @@ EOF
- 		  *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
- 		    ;;
- 		  *)
--		    hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
-+		    func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
- 		    ;;
- 		  esac
- 		fi
- 	      else
--		eval "flag=\"$hardcode_libdir_flag_spec\""
--		dep_rpath="$dep_rpath $flag"
-+		eval flag=\"$hardcode_libdir_flag_spec\"
-+		func_append dep_rpath " $flag"
- 	      fi
- 	    elif test -n "$runpath_var"; then
- 	      case "$perm_rpath " in
- 	      *" $libdir "*) ;;
--	      *) perm_rpath="$perm_rpath $libdir" ;;
-+	      *) func_apped perm_rpath " $libdir" ;;
- 	      esac
- 	    fi
- 	  done
-@@ -7133,40 +8066,38 @@ EOF
- 	     test -n "$hardcode_libdirs"; then
- 	    libdir="$hardcode_libdirs"
- 	    if test -n "$hardcode_libdir_flag_spec_ld"; then
--	      eval "dep_rpath=\"$hardcode_libdir_flag_spec_ld\""
-+	      eval dep_rpath=\"$hardcode_libdir_flag_spec_ld\"
- 	    else
--	      eval "dep_rpath=\"$hardcode_libdir_flag_spec\""
-+	      eval dep_rpath=\"$hardcode_libdir_flag_spec\"
- 	    fi
- 	  fi
- 	  if test -n "$runpath_var" && test -n "$perm_rpath"; then
- 	    # We should set the runpath_var.
- 	    rpath=
- 	    for dir in $perm_rpath; do
--	      rpath="$rpath$dir:"
-+	      func_append rpath "$dir:"
- 	    done
--	    eval $runpath_var=\$rpath\$$runpath_var
--	    export $runpath_var
-+	    eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var"
- 	  fi
- 	  test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs"
- 	fi
- 
- 	shlibpath="$finalize_shlibpath"
--	test "$mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
-+	test "$opt_mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
- 	if test -n "$shlibpath"; then
--	  eval $shlibpath_var=\$shlibpath\$$shlibpath_var
--	  export $shlibpath_var
-+	  eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var"
- 	fi
- 
- 	# Get the real and link names of the library.
--	eval "shared_ext=\"$shrext_cmds\""
--	eval "library_names=\"$library_names_spec\""
-+	eval shared_ext=\"$shrext_cmds\"
-+	eval library_names=\"$library_names_spec\"
- 	set dummy $library_names
- 	shift
- 	realname="$1"
- 	shift
- 
- 	if test -n "$soname_spec"; then
--	  eval "soname=\"$soname_spec\""
-+	  eval soname=\"$soname_spec\"
- 	else
- 	  soname="$realname"
- 	fi
-@@ -7178,7 +8109,7 @@ EOF
- 	linknames=
- 	for link
- 	do
--	  linknames="$linknames $link"
-+	  func_append linknames " $link"
- 	done
- 
- 	# Use standard objects if they are pic
-@@ -7189,7 +8120,7 @@ EOF
- 	if test -n "$export_symbols" && test -n "$include_expsyms"; then
- 	  $opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp"
- 	  export_symbols="$output_objdir/$libname.uexp"
--	  delfiles="$delfiles $export_symbols"
-+	  func_append delfiles " $export_symbols"
- 	fi
- 
- 	orig_export_symbols=
-@@ -7220,13 +8151,45 @@ EOF
- 	    $opt_dry_run || $RM $export_symbols
- 	    cmds=$export_symbols_cmds
- 	    save_ifs="$IFS"; IFS='~'
--	    for cmd in $cmds; do
-+	    for cmd1 in $cmds; do
- 	      IFS="$save_ifs"
--	      eval "cmd=\"$cmd\""
--	      func_len " $cmd"
--	      len=$func_len_result
--	      if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
-+	      # Take the normal branch if the nm_file_list_spec branch
-+	      # doesn't work or if tool conversion is not needed.
-+	      case $nm_file_list_spec~$to_tool_file_cmd in
-+		*~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*)
-+		  try_normal_branch=yes
-+		  eval cmd=\"$cmd1\"
-+		  func_len " $cmd"
-+		  len=$func_len_result
-+		  ;;
-+		*)
-+		  try_normal_branch=no
-+		  ;;
-+	      esac
-+	      if test "$try_normal_branch" = yes \
-+		 && { test "$len" -lt "$max_cmd_len" \
-+		      || test "$max_cmd_len" -le -1; }
-+	      then
-+		func_show_eval "$cmd" 'exit $?'
-+		skipped_export=false
-+	      elif test -n "$nm_file_list_spec"; then
-+		func_basename "$output"
-+		output_la=$func_basename_result
-+		save_libobjs=$libobjs
-+		save_output=$output
-+		output=${output_objdir}/${output_la}.nm
-+		func_to_tool_file "$output"
-+		libobjs=$nm_file_list_spec$func_to_tool_file_result
-+		func_append delfiles " $output"
-+		func_verbose "creating $NM input file list: $output"
-+		for obj in $save_libobjs; do
-+		  func_to_tool_file "$obj"
-+		  $ECHO "$func_to_tool_file_result"
-+		done > "$output"
-+		eval cmd=\"$cmd1\"
- 		func_show_eval "$cmd" 'exit $?'
-+		output=$save_output
-+		libobjs=$save_libobjs
- 		skipped_export=false
- 	      else
- 		# The command line is too long to execute in one step.
-@@ -7248,7 +8211,7 @@ EOF
- 	if test -n "$export_symbols" && test -n "$include_expsyms"; then
- 	  tmp_export_symbols="$export_symbols"
- 	  test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
--	  $opt_dry_run || $ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"
-+	  $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
- 	fi
- 
- 	if test "X$skipped_export" != "X:" && test -n "$orig_export_symbols"; then
-@@ -7260,7 +8223,7 @@ EOF
- 	  # global variables. join(1) would be nice here, but unfortunately
- 	  # isn't a blessed tool.
- 	  $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
--	  delfiles="$delfiles $export_symbols $output_objdir/$libname.filter"
-+	  func_append delfiles " $export_symbols $output_objdir/$libname.filter"
- 	  export_symbols=$output_objdir/$libname.def
- 	  $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
- 	fi
-@@ -7270,7 +8233,7 @@ EOF
- 	  case " $convenience " in
- 	  *" $test_deplib "*) ;;
- 	  *)
--	    tmp_deplibs="$tmp_deplibs $test_deplib"
-+	    func_append tmp_deplibs " $test_deplib"
- 	    ;;
- 	  esac
- 	done
-@@ -7286,43 +8249,43 @@ EOF
- 	  fi
- 	  if test -n "$whole_archive_flag_spec"; then
- 	    save_libobjs=$libobjs
--	    eval "libobjs=\"\$libobjs $whole_archive_flag_spec\""
-+	    eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
- 	    test "X$libobjs" = "X " && libobjs=
- 	  else
- 	    gentop="$output_objdir/${outputname}x"
--	    generated="$generated $gentop"
-+	    func_append generated " $gentop"
- 
- 	    func_extract_archives $gentop $convenience
--	    libobjs="$libobjs $func_extract_archives_result"
-+	    func_append libobjs " $func_extract_archives_result"
- 	    test "X$libobjs" = "X " && libobjs=
- 	  fi
- 	fi
- 
- 	if test "$thread_safe" = yes && test -n "$thread_safe_flag_spec"; then
--	  eval "flag=\"$thread_safe_flag_spec\""
--	  linker_flags="$linker_flags $flag"
-+	  eval flag=\"$thread_safe_flag_spec\"
-+	  func_append linker_flags " $flag"
- 	fi
- 
- 	# Make a backup of the uninstalled library when relinking
--	if test "$mode" = relink; then
--	  $opt_dry_run || (cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U) || exit $?
-+	if test "$opt_mode" = relink; then
-+	  $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $?
- 	fi
- 
- 	# Do each of the archive commands.
- 	if test "$module" = yes && test -n "$module_cmds" ; then
- 	  if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
--	    eval "test_cmds=\"$module_expsym_cmds\""
-+	    eval test_cmds=\"$module_expsym_cmds\"
- 	    cmds=$module_expsym_cmds
- 	  else
--	    eval "test_cmds=\"$module_cmds\""
-+	    eval test_cmds=\"$module_cmds\"
- 	    cmds=$module_cmds
- 	  fi
- 	else
- 	  if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
--	    eval "test_cmds=\"$archive_expsym_cmds\""
-+	    eval test_cmds=\"$archive_expsym_cmds\"
- 	    cmds=$archive_expsym_cmds
- 	  else
--	    eval "test_cmds=\"$archive_cmds\""
-+	    eval test_cmds=\"$archive_cmds\"
- 	    cmds=$archive_cmds
- 	  fi
- 	fi
-@@ -7366,10 +8329,13 @@ EOF
- 	    echo 'INPUT (' > $output
- 	    for obj in $save_libobjs
- 	    do
--	      $ECHO "$obj" >> $output
-+	      func_to_tool_file "$obj"
-+	      $ECHO "$func_to_tool_file_result" >> $output
- 	    done
- 	    echo ')' >> $output
--	    delfiles="$delfiles $output"
-+	    func_append delfiles " $output"
-+	    func_to_tool_file "$output"
-+	    output=$func_to_tool_file_result
- 	  elif test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "X$file_list_spec" != X; then
- 	    output=${output_objdir}/${output_la}.lnk
- 	    func_verbose "creating linker input file list: $output"
-@@ -7383,15 +8349,17 @@ EOF
- 	    fi
- 	    for obj
- 	    do
--	      $ECHO "$obj" >> $output
-+	      func_to_tool_file "$obj"
-+	      $ECHO "$func_to_tool_file_result" >> $output
- 	    done
--	    delfiles="$delfiles $output"
--	    output=$firstobj\"$file_list_spec$output\"
-+	    func_append delfiles " $output"
-+	    func_to_tool_file "$output"
-+	    output=$firstobj\"$file_list_spec$func_to_tool_file_result\"
- 	  else
- 	    if test -n "$save_libobjs"; then
- 	      func_verbose "creating reloadable object files..."
- 	      output=$output_objdir/$output_la-${k}.$objext
--	      eval "test_cmds=\"$reload_cmds\""
-+	      eval test_cmds=\"$reload_cmds\"
- 	      func_len " $test_cmds"
- 	      len0=$func_len_result
- 	      len=$len0
-@@ -7411,12 +8379,12 @@ EOF
- 		  if test "$k" -eq 1 ; then
- 		    # The first file doesn't have a previous command to add.
- 		    reload_objs=$objlist
--		    eval "concat_cmds=\"$reload_cmds\""
-+		    eval concat_cmds=\"$reload_cmds\"
- 		  else
- 		    # All subsequent reloadable object files will link in
- 		    # the last one created.
- 		    reload_objs="$objlist $last_robj"
--		    eval "concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\""
-+		    eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\"
- 		  fi
- 		  last_robj=$output_objdir/$output_la-${k}.$objext
- 		  func_arith $k + 1
-@@ -7433,11 +8401,11 @@ EOF
- 	      # files will link in the last one created.
- 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
- 	      reload_objs="$objlist $last_robj"
--	      eval "concat_cmds=\"\${concat_cmds}$reload_cmds\""
-+	      eval concat_cmds=\"\${concat_cmds}$reload_cmds\"
- 	      if test -n "$last_robj"; then
--	        eval "concat_cmds=\"\${concat_cmds}~\$RM $last_robj\""
-+	        eval concat_cmds=\"\${concat_cmds}~\$RM $last_robj\"
- 	      fi
--	      delfiles="$delfiles $output"
-+	      func_append delfiles " $output"
- 
- 	    else
- 	      output=
-@@ -7450,9 +8418,9 @@ EOF
- 	      libobjs=$output
- 	      # Append the command to create the export file.
- 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
--	      eval "concat_cmds=\"\$concat_cmds$export_symbols_cmds\""
-+	      eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\"
- 	      if test -n "$last_robj"; then
--		eval "concat_cmds=\"\$concat_cmds~\$RM $last_robj\""
-+		eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\"
- 	      fi
- 	    fi
- 
-@@ -7471,7 +8439,7 @@ EOF
- 		lt_exit=$?
- 
- 		# Restore the uninstalled library and exit
--		if test "$mode" = relink; then
-+		if test "$opt_mode" = relink; then
- 		  ( cd "$output_objdir" && \
- 		    $RM "${realname}T" && \
- 		    $MV "${realname}U" "$realname" )
-@@ -7492,7 +8460,7 @@ EOF
- 	    if test -n "$export_symbols" && test -n "$include_expsyms"; then
- 	      tmp_export_symbols="$export_symbols"
- 	      test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
--	      $opt_dry_run || $ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"
-+	      $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
- 	    fi
- 
- 	    if test -n "$orig_export_symbols"; then
-@@ -7504,7 +8472,7 @@ EOF
- 	      # global variables. join(1) would be nice here, but unfortunately
- 	      # isn't a blessed tool.
- 	      $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
--	      delfiles="$delfiles $export_symbols $output_objdir/$libname.filter"
-+	      func_append delfiles " $export_symbols $output_objdir/$libname.filter"
- 	      export_symbols=$output_objdir/$libname.def
- 	      $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
- 	    fi
-@@ -7515,7 +8483,7 @@ EOF
- 	  output=$save_output
- 
- 	  if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then
--	    eval "libobjs=\"\$libobjs $whole_archive_flag_spec\""
-+	    eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
- 	    test "X$libobjs" = "X " && libobjs=
- 	  fi
- 	  # Expand the library linking commands again to reset the
-@@ -7539,23 +8507,23 @@ EOF
- 
- 	if test -n "$delfiles"; then
- 	  # Append the command to remove temporary files to $cmds.
--	  eval "cmds=\"\$cmds~\$RM $delfiles\""
-+	  eval cmds=\"\$cmds~\$RM $delfiles\"
- 	fi
- 
- 	# Add any objects from preloaded convenience libraries
- 	if test -n "$dlprefiles"; then
- 	  gentop="$output_objdir/${outputname}x"
--	  generated="$generated $gentop"
-+	  func_append generated " $gentop"
- 
- 	  func_extract_archives $gentop $dlprefiles
--	  libobjs="$libobjs $func_extract_archives_result"
-+	  func_append libobjs " $func_extract_archives_result"
- 	  test "X$libobjs" = "X " && libobjs=
- 	fi
- 
- 	save_ifs="$IFS"; IFS='~'
- 	for cmd in $cmds; do
- 	  IFS="$save_ifs"
--	  eval "cmd=\"$cmd\""
-+	  eval cmd=\"$cmd\"
- 	  $opt_silent || {
- 	    func_quote_for_expand "$cmd"
- 	    eval "func_echo $func_quote_for_expand_result"
-@@ -7564,7 +8532,7 @@ EOF
- 	    lt_exit=$?
- 
- 	    # Restore the uninstalled library and exit
--	    if test "$mode" = relink; then
-+	    if test "$opt_mode" = relink; then
- 	      ( cd "$output_objdir" && \
- 	        $RM "${realname}T" && \
- 		$MV "${realname}U" "$realname" )
-@@ -7576,8 +8544,8 @@ EOF
- 	IFS="$save_ifs"
- 
- 	# Restore the uninstalled library and exit
--	if test "$mode" = relink; then
--	  $opt_dry_run || (cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname) || exit $?
-+	if test "$opt_mode" = relink; then
-+	  $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $?
- 
- 	  if test -n "$convenience"; then
- 	    if test -z "$whole_archive_flag_spec"; then
-@@ -7656,17 +8624,20 @@ EOF
- 
-       if test -n "$convenience"; then
- 	if test -n "$whole_archive_flag_spec"; then
--	  eval "tmp_whole_archive_flags=\"$whole_archive_flag_spec\""
-+	  eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\"
- 	  reload_conv_objs=$reload_objs\ `$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'`
- 	else
- 	  gentop="$output_objdir/${obj}x"
--	  generated="$generated $gentop"
-+	  func_append generated " $gentop"
- 
- 	  func_extract_archives $gentop $convenience
- 	  reload_conv_objs="$reload_objs $func_extract_archives_result"
- 	fi
-       fi
- 
-+      # If we're not building shared, we need to use non_pic_objs
-+      test "$build_libtool_libs" != yes && libobjs="$non_pic_objects"
-+
-       # Create the old-style object.
-       reload_objs="$objs$old_deplibs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; /\.lib$/d; $lo2o" | $NL2SP`" $reload_conv_objs" ### testsuite: skip nested quoting test
- 
-@@ -7690,7 +8661,7 @@ EOF
- 	# Create an invalid libtool object if no PIC, so that we don't
- 	# accidentally link it into a program.
- 	# $show "echo timestamp > $libobj"
--	# $opt_dry_run || echo timestamp > $libobj || exit $?
-+	# $opt_dry_run || eval "echo timestamp > $libobj" || exit $?
- 	exit $EXIT_SUCCESS
-       fi
- 
-@@ -7740,8 +8711,8 @@ EOF
- 	if test "$tagname" = CXX ; then
- 	  case ${MACOSX_DEPLOYMENT_TARGET-10.0} in
- 	    10.[0123])
--	      compile_command="$compile_command ${wl}-bind_at_load"
--	      finalize_command="$finalize_command ${wl}-bind_at_load"
-+	      func_append compile_command " ${wl}-bind_at_load"
-+	      func_append finalize_command " ${wl}-bind_at_load"
- 	    ;;
- 	  esac
- 	fi
-@@ -7761,7 +8732,7 @@ EOF
- 	*)
- 	  case " $compile_deplibs " in
- 	  *" -L$path/$objdir "*)
--	    new_libs="$new_libs -L$path/$objdir" ;;
-+	    func_append new_libs " -L$path/$objdir" ;;
- 	  esac
- 	  ;;
- 	esac
-@@ -7771,17 +8742,17 @@ EOF
- 	-L*)
- 	  case " $new_libs " in
- 	  *" $deplib "*) ;;
--	  *) new_libs="$new_libs $deplib" ;;
-+	  *) func_append new_libs " $deplib" ;;
- 	  esac
- 	  ;;
--	*) new_libs="$new_libs $deplib" ;;
-+	*) func_append new_libs " $deplib" ;;
- 	esac
-       done
-       compile_deplibs="$new_libs"
- 
- 
--      compile_command="$compile_command $compile_deplibs"
--      finalize_command="$finalize_command $finalize_deplibs"
-+      func_append compile_command " $compile_deplibs"
-+      func_append finalize_command " $finalize_deplibs"
- 
-       if test -n "$rpath$xrpath"; then
- 	# If the user specified any rpath flags, then add them.
-@@ -7789,7 +8760,7 @@ EOF
- 	  # This is the magic to use -rpath.
- 	  case "$finalize_rpath " in
- 	  *" $libdir "*) ;;
--	  *) finalize_rpath="$finalize_rpath $libdir" ;;
-+	  *) func_append finalize_rpath " $libdir" ;;
- 	  esac
- 	done
-       fi
-@@ -7808,18 +8779,18 @@ EOF
- 	      *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
- 		;;
- 	      *)
--		hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
-+		func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
- 		;;
- 	      esac
- 	    fi
- 	  else
--	    eval "flag=\"$hardcode_libdir_flag_spec\""
--	    rpath="$rpath $flag"
-+	    eval flag=\"$hardcode_libdir_flag_spec\"
-+	    func_append rpath " $flag"
- 	  fi
- 	elif test -n "$runpath_var"; then
- 	  case "$perm_rpath " in
- 	  *" $libdir "*) ;;
--	  *) perm_rpath="$perm_rpath $libdir" ;;
-+	  *) func_append perm_rpath " $libdir" ;;
- 	  esac
- 	fi
- 	case $host in
-@@ -7828,12 +8799,12 @@ EOF
- 	  case :$dllsearchpath: in
- 	  *":$libdir:"*) ;;
- 	  ::) dllsearchpath=$libdir;;
--	  *) dllsearchpath="$dllsearchpath:$libdir";;
-+	  *) func_append dllsearchpath ":$libdir";;
- 	  esac
- 	  case :$dllsearchpath: in
- 	  *":$testbindir:"*) ;;
- 	  ::) dllsearchpath=$testbindir;;
--	  *) dllsearchpath="$dllsearchpath:$testbindir";;
-+	  *) func_append dllsearchpath ":$testbindir";;
- 	  esac
- 	  ;;
- 	esac
-@@ -7842,7 +8813,7 @@ EOF
-       if test -n "$hardcode_libdir_separator" &&
- 	 test -n "$hardcode_libdirs"; then
- 	libdir="$hardcode_libdirs"
--	eval "rpath=\" $hardcode_libdir_flag_spec\""
-+	eval rpath=\" $hardcode_libdir_flag_spec\"
-       fi
-       compile_rpath="$rpath"
- 
-@@ -7859,18 +8830,18 @@ EOF
- 	      *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
- 		;;
- 	      *)
--		hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
-+		func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
- 		;;
- 	      esac
- 	    fi
- 	  else
--	    eval "flag=\"$hardcode_libdir_flag_spec\""
--	    rpath="$rpath $flag"
-+	    eval flag=\"$hardcode_libdir_flag_spec\"
-+	    func_append rpath " $flag"
- 	  fi
- 	elif test -n "$runpath_var"; then
- 	  case "$finalize_perm_rpath " in
- 	  *" $libdir "*) ;;
--	  *) finalize_perm_rpath="$finalize_perm_rpath $libdir" ;;
-+	  *) func_append finalize_perm_rpath " $libdir" ;;
- 	  esac
- 	fi
-       done
-@@ -7878,7 +8849,7 @@ EOF
-       if test -n "$hardcode_libdir_separator" &&
- 	 test -n "$hardcode_libdirs"; then
- 	libdir="$hardcode_libdirs"
--	eval "rpath=\" $hardcode_libdir_flag_spec\""
-+	eval rpath=\" $hardcode_libdir_flag_spec\"
-       fi
-       finalize_rpath="$rpath"
- 
-@@ -7921,6 +8892,12 @@ EOF
- 	exit_status=0
- 	func_show_eval "$link_command" 'exit_status=$?'
- 
-+	if test -n "$postlink_cmds"; then
-+	  func_to_tool_file "$output"
-+	  postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
-+	  func_execute_cmds "$postlink_cmds" 'exit $?'
-+	fi
-+
- 	# Delete the generated files.
- 	if test -f "$output_objdir/${outputname}S.${objext}"; then
- 	  func_show_eval '$RM "$output_objdir/${outputname}S.${objext}"'
-@@ -7943,7 +8920,7 @@ EOF
- 	  # We should set the runpath_var.
- 	  rpath=
- 	  for dir in $perm_rpath; do
--	    rpath="$rpath$dir:"
-+	    func_append rpath "$dir:"
- 	  done
- 	  compile_var="$runpath_var=\"$rpath\$$runpath_var\" "
- 	fi
-@@ -7951,7 +8928,7 @@ EOF
- 	  # We should set the runpath_var.
- 	  rpath=
- 	  for dir in $finalize_perm_rpath; do
--	    rpath="$rpath$dir:"
-+	    func_append rpath "$dir:"
- 	  done
- 	  finalize_var="$runpath_var=\"$rpath\$$runpath_var\" "
- 	fi
-@@ -7966,6 +8943,13 @@ EOF
- 	$opt_dry_run || $RM $output
- 	# Link the executable and exit
- 	func_show_eval "$link_command" 'exit $?'
-+
-+	if test -n "$postlink_cmds"; then
-+	  func_to_tool_file "$output"
-+	  postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
-+	  func_execute_cmds "$postlink_cmds" 'exit $?'
-+	fi
-+
- 	exit $EXIT_SUCCESS
-       fi
- 
-@@ -7999,6 +8983,12 @@ EOF
- 
-       func_show_eval "$link_command" 'exit $?'
- 
-+      if test -n "$postlink_cmds"; then
-+	func_to_tool_file "$output_objdir/$outputname"
-+	postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
-+	func_execute_cmds "$postlink_cmds" 'exit $?'
-+      fi
-+
-       # Now create the wrapper script.
-       func_verbose "creating $output"
- 
-@@ -8096,7 +9086,7 @@ EOF
- 	else
- 	  oldobjs="$old_deplibs $non_pic_objects"
- 	  if test "$preload" = yes && test -f "$symfileobj"; then
--	    oldobjs="$oldobjs $symfileobj"
-+	    func_append oldobjs " $symfileobj"
- 	  fi
- 	fi
- 	addlibs="$old_convenience"
-@@ -8104,10 +9094,10 @@ EOF
- 
-       if test -n "$addlibs"; then
- 	gentop="$output_objdir/${outputname}x"
--	generated="$generated $gentop"
-+	func_append generated " $gentop"
- 
- 	func_extract_archives $gentop $addlibs
--	oldobjs="$oldobjs $func_extract_archives_result"
-+	func_append oldobjs " $func_extract_archives_result"
-       fi
- 
-       # Do each command in the archive commands.
-@@ -8118,10 +9108,10 @@ EOF
- 	# Add any objects from preloaded convenience libraries
- 	if test -n "$dlprefiles"; then
- 	  gentop="$output_objdir/${outputname}x"
--	  generated="$generated $gentop"
-+	  func_append generated " $gentop"
- 
- 	  func_extract_archives $gentop $dlprefiles
--	  oldobjs="$oldobjs $func_extract_archives_result"
-+	  func_append oldobjs " $func_extract_archives_result"
- 	fi
- 
- 	# POSIX demands no paths to be encoded in archives.  We have
-@@ -8139,7 +9129,7 @@ EOF
- 	else
- 	  echo "copying selected object files to avoid basename conflicts..."
- 	  gentop="$output_objdir/${outputname}x"
--	  generated="$generated $gentop"
-+	  func_append generated " $gentop"
- 	  func_mkdir_p "$gentop"
- 	  save_oldobjs=$oldobjs
- 	  oldobjs=
-@@ -8163,18 +9153,28 @@ EOF
- 		esac
- 	      done
- 	      func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj"
--	      oldobjs="$oldobjs $gentop/$newobj"
-+	      func_append oldobjs " $gentop/$newobj"
- 	      ;;
--	    *) oldobjs="$oldobjs $obj" ;;
-+	    *) func_append oldobjs " $obj" ;;
- 	    esac
- 	  done
- 	fi
--	eval "cmds=\"$old_archive_cmds\""
-+	eval cmds=\"$old_archive_cmds\"
- 
- 	func_len " $cmds"
- 	len=$func_len_result
- 	if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
- 	  cmds=$old_archive_cmds
-+	elif test -n "$archiver_list_spec"; then
-+	  func_verbose "using command file archive linking..."
-+	  for obj in $oldobjs
-+	  do
-+	    func_to_tool_file "$obj"
-+	    $ECHO "$func_to_tool_file_result"
-+	  done > $output_objdir/$libname.libcmd
-+	  func_to_tool_file "$output_objdir/$libname.libcmd"
-+	  oldobjs=" $archiver_list_spec$func_to_tool_file_result"
-+	  cmds=$old_archive_cmds
- 	else
- 	  # the command line is too long to link in one step, link in parts
- 	  func_verbose "using piecewise archive linking..."
-@@ -8189,7 +9189,7 @@ EOF
- 	  do
- 	    last_oldobj=$obj
- 	  done
--	  eval "test_cmds=\"$old_archive_cmds\""
-+	  eval test_cmds=\"$old_archive_cmds\"
- 	  func_len " $test_cmds"
- 	  len0=$func_len_result
- 	  len=$len0
-@@ -8208,7 +9208,7 @@ EOF
- 		RANLIB=$save_RANLIB
- 	      fi
- 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
--	      eval "concat_cmds=\"\${concat_cmds}$old_archive_cmds\""
-+	      eval concat_cmds=\"\${concat_cmds}$old_archive_cmds\"
- 	      objlist=
- 	      len=$len0
- 	    fi
-@@ -8216,9 +9216,9 @@ EOF
- 	  RANLIB=$save_RANLIB
- 	  oldobjs=$objlist
- 	  if test "X$oldobjs" = "X" ; then
--	    eval "cmds=\"\$concat_cmds\""
-+	    eval cmds=\"\$concat_cmds\"
- 	  else
--	    eval "cmds=\"\$concat_cmds~\$old_archive_cmds\""
-+	    eval cmds=\"\$concat_cmds~\$old_archive_cmds\"
- 	  fi
- 	fi
-       fi
-@@ -8268,12 +9268,23 @@ EOF
- 	      *.la)
- 		func_basename "$deplib"
- 		name="$func_basename_result"
--		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
-+		func_resolve_sysroot "$deplib"
-+		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
- 		test -z "$libdir" && \
- 		  func_fatal_error "\`$deplib' is not a valid libtool archive"
--		newdependency_libs="$newdependency_libs $libdir/$name"
-+		func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name"
-+		;;
-+	      -L*)
-+		func_stripname -L '' "$deplib"
-+		func_replace_sysroot "$func_stripname_result"
-+		func_append newdependency_libs " -L$func_replace_sysroot_result"
- 		;;
--	      *) newdependency_libs="$newdependency_libs $deplib" ;;
-+	      -R*)
-+		func_stripname -R '' "$deplib"
-+		func_replace_sysroot "$func_stripname_result"
-+		func_append newdependency_libs " -R$func_replace_sysroot_result"
-+		;;
-+	      *) func_append newdependency_libs " $deplib" ;;
- 	      esac
- 	    done
- 	    dependency_libs="$newdependency_libs"
-@@ -8284,12 +9295,14 @@ EOF
- 	      *.la)
- 	        func_basename "$lib"
- 		name="$func_basename_result"
--		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
-+		func_resolve_sysroot "$lib"
-+		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
-+
- 		test -z "$libdir" && \
- 		  func_fatal_error "\`$lib' is not a valid libtool archive"
--		newdlfiles="$newdlfiles $libdir/$name"
-+		func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name"
- 		;;
--	      *) newdlfiles="$newdlfiles $lib" ;;
-+	      *) func_append newdlfiles " $lib" ;;
- 	      esac
- 	    done
- 	    dlfiles="$newdlfiles"
-@@ -8303,10 +9316,11 @@ EOF
- 		# the library:
- 		func_basename "$lib"
- 		name="$func_basename_result"
--		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
-+		func_resolve_sysroot "$lib"
-+		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
- 		test -z "$libdir" && \
- 		  func_fatal_error "\`$lib' is not a valid libtool archive"
--		newdlprefiles="$newdlprefiles $libdir/$name"
-+		func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name"
- 		;;
- 	      esac
- 	    done
-@@ -8318,7 +9332,7 @@ EOF
- 		[\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
- 		*) abs=`pwd`"/$lib" ;;
- 	      esac
--	      newdlfiles="$newdlfiles $abs"
-+	      func_append newdlfiles " $abs"
- 	    done
- 	    dlfiles="$newdlfiles"
- 	    newdlprefiles=
-@@ -8327,7 +9341,7 @@ EOF
- 		[\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
- 		*) abs=`pwd`"/$lib" ;;
- 	      esac
--	      newdlprefiles="$newdlprefiles $abs"
-+	      func_append newdlprefiles " $abs"
- 	    done
- 	    dlprefiles="$newdlprefiles"
- 	  fi
-@@ -8412,7 +9426,7 @@ relink_command=\"$relink_command\""
-     exit $EXIT_SUCCESS
- }
- 
--{ test "$mode" = link || test "$mode" = relink; } &&
-+{ test "$opt_mode" = link || test "$opt_mode" = relink; } &&
-     func_mode_link ${1+"$@"}
- 
- 
-@@ -8432,9 +9446,9 @@ func_mode_uninstall ()
-     for arg
-     do
-       case $arg in
--      -f) RM="$RM $arg"; rmforce=yes ;;
--      -*) RM="$RM $arg" ;;
--      *) files="$files $arg" ;;
-+      -f) func_append RM " $arg"; rmforce=yes ;;
-+      -*) func_append RM " $arg" ;;
-+      *) func_append files " $arg" ;;
-       esac
-     done
- 
-@@ -8443,24 +9457,23 @@ func_mode_uninstall ()
- 
-     rmdirs=
- 
--    origobjdir="$objdir"
-     for file in $files; do
-       func_dirname "$file" "" "."
-       dir="$func_dirname_result"
-       if test "X$dir" = X.; then
--	objdir="$origobjdir"
-+	odir="$objdir"
-       else
--	objdir="$dir/$origobjdir"
-+	odir="$dir/$objdir"
-       fi
-       func_basename "$file"
-       name="$func_basename_result"
--      test "$mode" = uninstall && objdir="$dir"
-+      test "$opt_mode" = uninstall && odir="$dir"
- 
--      # Remember objdir for removal later, being careful to avoid duplicates
--      if test "$mode" = clean; then
-+      # Remember odir for removal later, being careful to avoid duplicates
-+      if test "$opt_mode" = clean; then
- 	case " $rmdirs " in
--	  *" $objdir "*) ;;
--	  *) rmdirs="$rmdirs $objdir" ;;
-+	  *" $odir "*) ;;
-+	  *) func_append rmdirs " $odir" ;;
- 	esac
-       fi
- 
-@@ -8486,18 +9499,17 @@ func_mode_uninstall ()
- 
- 	  # Delete the libtool libraries and symlinks.
- 	  for n in $library_names; do
--	    rmfiles="$rmfiles $objdir/$n"
-+	    func_append rmfiles " $odir/$n"
- 	  done
--	  test -n "$old_library" && rmfiles="$rmfiles $objdir/$old_library"
-+	  test -n "$old_library" && func_append rmfiles " $odir/$old_library"
- 
--	  case "$mode" in
-+	  case "$opt_mode" in
- 	  clean)
--	    case "  $library_names " in
--	    # "  " in the beginning catches empty $dlname
-+	    case " $library_names " in
- 	    *" $dlname "*) ;;
--	    *) rmfiles="$rmfiles $objdir/$dlname" ;;
-+	    *) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;;
- 	    esac
--	    test -n "$libdir" && rmfiles="$rmfiles $objdir/$name $objdir/${name}i"
-+	    test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i"
- 	    ;;
- 	  uninstall)
- 	    if test -n "$library_names"; then
-@@ -8525,19 +9537,19 @@ func_mode_uninstall ()
- 	  # Add PIC object to the list of files to remove.
- 	  if test -n "$pic_object" &&
- 	     test "$pic_object" != none; then
--	    rmfiles="$rmfiles $dir/$pic_object"
-+	    func_append rmfiles " $dir/$pic_object"
- 	  fi
- 
- 	  # Add non-PIC object to the list of files to remove.
- 	  if test -n "$non_pic_object" &&
- 	     test "$non_pic_object" != none; then
--	    rmfiles="$rmfiles $dir/$non_pic_object"
-+	    func_append rmfiles " $dir/$non_pic_object"
- 	  fi
- 	fi
- 	;;
- 
-       *)
--	if test "$mode" = clean ; then
-+	if test "$opt_mode" = clean ; then
- 	  noexename=$name
- 	  case $file in
- 	  *.exe)
-@@ -8547,7 +9559,7 @@ func_mode_uninstall ()
- 	    noexename=$func_stripname_result
- 	    # $file with .exe has already been added to rmfiles,
- 	    # add $file without .exe
--	    rmfiles="$rmfiles $file"
-+	    func_append rmfiles " $file"
- 	    ;;
- 	  esac
- 	  # Do a test to see if this is a libtool program.
-@@ -8556,7 +9568,7 @@ func_mode_uninstall ()
- 	      func_ltwrapper_scriptname "$file"
- 	      relink_command=
- 	      func_source $func_ltwrapper_scriptname_result
--	      rmfiles="$rmfiles $func_ltwrapper_scriptname_result"
-+	      func_append rmfiles " $func_ltwrapper_scriptname_result"
- 	    else
- 	      relink_command=
- 	      func_source $dir/$noexename
-@@ -8564,12 +9576,12 @@ func_mode_uninstall ()
- 
- 	    # note $name still contains .exe if it was in $file originally
- 	    # as does the version of $file that was added into $rmfiles
--	    rmfiles="$rmfiles $objdir/$name $objdir/${name}S.${objext}"
-+	    func_append rmfiles " $odir/$name $odir/${name}S.${objext}"
- 	    if test "$fast_install" = yes && test -n "$relink_command"; then
--	      rmfiles="$rmfiles $objdir/lt-$name"
-+	      func_append rmfiles " $odir/lt-$name"
- 	    fi
- 	    if test "X$noexename" != "X$name" ; then
--	      rmfiles="$rmfiles $objdir/lt-${noexename}.c"
-+	      func_append rmfiles " $odir/lt-${noexename}.c"
- 	    fi
- 	  fi
- 	fi
-@@ -8577,7 +9589,6 @@ func_mode_uninstall ()
-       esac
-       func_show_eval "$RM $rmfiles" 'exit_status=1'
-     done
--    objdir="$origobjdir"
- 
-     # Try to remove the ${objdir}s in the directories where we deleted files
-     for dir in $rmdirs; do
-@@ -8589,16 +9600,16 @@ func_mode_uninstall ()
-     exit $exit_status
- }
- 
--{ test "$mode" = uninstall || test "$mode" = clean; } &&
-+{ test "$opt_mode" = uninstall || test "$opt_mode" = clean; } &&
-     func_mode_uninstall ${1+"$@"}
- 
--test -z "$mode" && {
-+test -z "$opt_mode" && {
-   help="$generic_help"
-   func_fatal_help "you must specify a MODE"
- }
- 
- test -z "$exec_cmd" && \
--  func_fatal_help "invalid operation mode \`$mode'"
-+  func_fatal_help "invalid operation mode \`$opt_mode'"
- 
- if test -n "$exec_cmd"; then
-   eval exec "$exec_cmd"
-diff --git a/ltoptions.m4 b/ltoptions.m4
-index 5ef12ced2a8..17cfd51c0b3 100644
---- a/ltoptions.m4
-+++ b/ltoptions.m4
-@@ -8,7 +8,7 @@
- # unlimited permission to copy and/or distribute it, with or without
- # modifications, as long as this notice is preserved.
- 
--# serial 6 ltoptions.m4
-+# serial 7 ltoptions.m4
- 
- # This is to help aclocal find these macros, as it can't see m4_define.
- AC_DEFUN([LTOPTIONS_VERSION], [m4_if([1])])
-diff --git a/ltversion.m4 b/ltversion.m4
-index bf87f77132d..9c7b5d41185 100644
---- a/ltversion.m4
-+++ b/ltversion.m4
-@@ -7,17 +7,17 @@
- # unlimited permission to copy and/or distribute it, with or without
- # modifications, as long as this notice is preserved.
- 
--# Generated from ltversion.in.
-+# @configure_input@
- 
--# serial 3134 ltversion.m4
-+# serial 3293 ltversion.m4
- # This file is part of GNU Libtool
- 
--m4_define([LT_PACKAGE_VERSION], [2.2.7a])
--m4_define([LT_PACKAGE_REVISION], [1.3134])
-+m4_define([LT_PACKAGE_VERSION], [2.4])
-+m4_define([LT_PACKAGE_REVISION], [1.3293])
- 
- AC_DEFUN([LTVERSION_VERSION],
--[macro_version='2.2.7a'
--macro_revision='1.3134'
-+[macro_version='2.4'
-+macro_revision='1.3293'
- _LT_DECL(, macro_version, 0, [Which release of libtool.m4 was used?])
- _LT_DECL(, macro_revision, 0)
- ])
-diff --git a/lt~obsolete.m4 b/lt~obsolete.m4
-index bf92b5e0790..c573da90c5c 100644
---- a/lt~obsolete.m4
-+++ b/lt~obsolete.m4
-@@ -7,7 +7,7 @@
- # unlimited permission to copy and/or distribute it, with or without
- # modifications, as long as this notice is preserved.
- 
--# serial 4 lt~obsolete.m4
-+# serial 5 lt~obsolete.m4
- 
- # These exist entirely to fool aclocal when bootstrapping libtool.
- #
-diff --git a/opcodes/configure b/opcodes/configure
-index 6690a502b2f..badcc0776df 100755
---- a/opcodes/configure
-+++ b/opcodes/configure
-@@ -682,6 +682,9 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
-+ac_ct_AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -800,6 +803,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_checking
- enable_targets
-@@ -1468,6 +1472,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
- 
- Some influential environment variables:
-   CC          C compiler command
-@@ -4977,8 +4983,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -5018,7 +5024,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5705,8 +5711,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5755,6 +5761,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if ${lt_cv_to_host_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if ${lt_cv_to_tool_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if ${lt_cv_ld_reload_flag+:} false; then :
-@@ -5771,6 +5851,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -5939,7 +6024,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6093,6 +6179,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6108,6 +6209,157 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- plugin_option=
- plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
- for plugin in $plugin_names; do
-@@ -6122,8 +6374,10 @@ for plugin in $plugin_names; do
- done
- 
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_AR+:} false; then :
-@@ -6139,7 +6393,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6159,11 +6413,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_ac_ct_AR+:} false; then :
-@@ -6179,7 +6437,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6198,6 +6456,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6209,25 +6471,20 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--if test -n "$plugin_option"; then
--  if $AR --help 2>&1 | grep -q "\--plugin"; then
--    touch conftest.c
--    $AR $plugin_option rc conftest.a conftest.c
--    if test "$?" != 0; then
--      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
- $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
--    else
--      AR="$AR $plugin_option"
--    fi
--    rm -f conftest.*
-+  else
-+    AR="$AR $plugin_option"
-   fi
--fi
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+
- 
- 
- 
-@@ -6238,6 +6495,63 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if ${lt_cv_ar_at_file+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
-+
- 
- if test -n "$ac_tool_prefix"; then
-   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
-@@ -6578,8 +6892,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6615,6 +6929,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -6656,6 +6971,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6667,7 +6994,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6693,8 +7020,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6704,8 +7031,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6742,6 +7069,14 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
- 
- 
- 
-@@ -6760,6 +7095,47 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
- 
- 
- 
-@@ -6969,6 +7345,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if ${lt_cv_path_mainfest_tool+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7532,6 +8025,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -8084,8 +8579,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8251,6 +8744,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8313,7 +8812,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8370,13 +8869,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8437,6 +8940,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8787,7 +9295,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8886,12 +9395,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -8905,8 +9414,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -8924,8 +9433,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8971,8 +9480,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9102,7 +9611,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9115,22 +9630,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9142,7 +9664,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9155,22 +9683,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9215,20 +9750,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9289,7 +9867,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9297,7 +9875,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9313,7 +9891,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9337,10 +9915,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9419,23 +9997,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if ${lt_cv_irix_exported_symbol+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9520,7 +10111,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9539,9 +10130,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10117,8 +10708,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10151,13 +10743,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -11035,7 +11685,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11038 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11079,10 +11729,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11141,7 +11791,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 11144 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -11185,10 +11835,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -13390,13 +14040,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -13411,14 +14068,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -13451,12 +14111,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -13511,8 +14171,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -13522,12 +14187,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -13543,7 +14210,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -13579,6 +14245,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -14344,7 +15011,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -14447,19 +15115,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -14489,6 +15180,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -14498,6 +15195,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -14612,12 +15312,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -14704,9 +15404,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -14722,6 +15419,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -14754,210 +15454,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/zlib/configure b/zlib/configure
-index db7845c5d42..cd59daa39b5 100755
---- a/zlib/configure
-+++ b/zlib/configure
-@@ -646,8 +646,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -774,6 +777,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_host_shared
- '
-@@ -1428,6 +1432,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
- 
- Some influential environment variables:
-   CC          C compiler command
-@@ -4186,8 +4192,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -4227,7 +4233,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5044,8 +5050,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5094,6 +5100,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if ${lt_cv_to_host_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if ${lt_cv_to_tool_file_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if ${lt_cv_ld_reload_flag+:} false; then :
-@@ -5110,6 +5190,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -5278,7 +5363,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -5432,6 +5518,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -5447,6 +5548,158 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
-+
- plugin_option=
- plugin_names="liblto_plugin.so liblto_plugin-0.dll cyglto_plugin-0.dll"
- for plugin in $plugin_names; do
-@@ -5461,8 +5714,10 @@ for plugin in $plugin_names; do
- done
- 
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_AR+:} false; then :
-@@ -5478,7 +5733,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -5498,11 +5753,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if ${ac_cv_prog_ac_ct_AR+:} false; then :
-@@ -5518,7 +5777,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -5537,6 +5796,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -5548,25 +5811,19 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--if test -n "$plugin_option"; then
--  if $AR --help 2>&1 | grep -q "\--plugin"; then
--    touch conftest.c
--    $AR $plugin_option rc conftest.a conftest.c
--    if test "$?" != 0; then
--      { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
-+  touch conftest.c
-+  $AR $plugin_option rc conftest.a conftest.c
-+  if test "$?" != 0; then
-+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Failed: $AR $plugin_option rc" >&5
- $as_echo "$as_me: WARNING: Failed: $AR $plugin_option rc" >&2;}
--    else
--      AR="$AR $plugin_option"
--    fi
--    rm -f conftest.*
-+  else
-+    AR="$AR $plugin_option"
-   fi
--fi
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+  rm -f conftest.*
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
- 
- 
- 
-@@ -5578,6 +5835,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if ${lt_cv_ar_at_file+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
-   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
- set dummy ${ac_tool_prefix}strip; ac_word=$2
-@@ -5917,8 +6232,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -5954,6 +6269,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -5995,6 +6311,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6006,7 +6334,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6032,8 +6360,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6043,8 +6371,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6081,6 +6409,17 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -6098,6 +6437,43 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
- 
- 
- 
-@@ -6312,6 +6688,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if ${lt_cv_path_mainfest_tool+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -6878,6 +7371,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7727,8 +8222,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -7894,6 +8387,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -7956,7 +8455,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8013,13 +8512,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if ${lt_cv_prog_compiler_pic+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8080,6 +8583,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8430,7 +8938,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8529,12 +9038,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -8548,8 +9057,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -8567,8 +9076,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8614,8 +9123,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8745,7 +9254,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        if test x$gcc_no_link = xyes; then
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test x$gcc_no_link = xyes; then
-   as_fn_error $? "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
- fi
- cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-@@ -8761,22 +9276,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -8788,7 +9310,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 if test x$gcc_no_link = xyes; then
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if ${lt_cv_aix_libpath_+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test x$gcc_no_link = xyes; then
-   as_fn_error $? "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
- fi
- cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-@@ -8804,22 +9332,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -8864,20 +9399,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -8938,7 +9516,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -8946,7 +9524,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -8962,7 +9540,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -8986,10 +9564,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9068,26 +9646,39 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        if test x$gcc_no_link = xyes; then
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if ${lt_cv_irix_exported_symbol+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   if test x$gcc_no_link = xyes; then
-   as_fn_error $? "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
- fi
- cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9172,7 +9763,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9191,9 +9782,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -9769,8 +10360,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -9803,13 +10395,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10705,7 +11355,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 10708 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -10749,10 +11399,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -10811,7 +11461,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--#line 10814 "configure"
-+#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -10855,10 +11505,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -12328,13 +12978,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -12349,14 +13006,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -12389,12 +13049,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -12449,8 +13109,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -12460,12 +13125,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -12481,7 +13148,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -12517,6 +13183,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -13115,7 +13782,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -13218,19 +13886,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -13260,6 +13951,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -13269,6 +13966,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -13383,12 +14083,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -13475,9 +14175,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -13493,6 +14190,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -13525,210 +14225,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0010-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch b/poky/meta/recipes-devtools/binutils/binutils/0010-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch
deleted file mode 100644
index 217ba5d..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0010-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch
+++ /dev/null
@@ -1,49 +0,0 @@
-From 1c4581a059afe2799bb825b388ae92f8fa6f19a3 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 2 Mar 2015 01:42:38 +0000
-Subject: [PATCH] Fix rpath in libtool when sysroot is enabled
-
-Enabling sysroot support in libtool exposed a bug where the final
-library had an RPATH encoded into it which still pointed to the
-sysroot. This works around the issue until it gets sorted out
-upstream.
-
-Fix suggested by Richard Purdie <richard.purdie@linuxfoundation.org>
-
-Upstream-Status: Inappropriate [embedded specific]
-
-Signed-off-by: Scott Garman <scott.a.garman@intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- ltmain.sh | 10 ++++++++--
- 1 file changed, 8 insertions(+), 2 deletions(-)
-
-diff --git a/ltmain.sh b/ltmain.sh
-index 70e856e0659..11ee684cccf 100644
---- a/ltmain.sh
-+++ b/ltmain.sh
-@@ -8035,9 +8035,11 @@ EOF
- 	  test "$opt_mode" != relink && rpath="$compile_rpath$rpath"
- 	  for libdir in $rpath; do
- 	    if test -n "$hardcode_libdir_flag_spec"; then
-+		  func_replace_sysroot "$libdir"
-+		  libdir=$func_replace_sysroot_result
-+		  func_stripname '=' '' "$libdir"
-+		  libdir=$func_stripname_result
- 	      if test -n "$hardcode_libdir_separator"; then
--		func_replace_sysroot "$libdir"
--		libdir=$func_replace_sysroot_result
- 		if test -z "$hardcode_libdirs"; then
- 		  hardcode_libdirs="$libdir"
- 		else
-@@ -8770,6 +8772,10 @@ EOF
-       hardcode_libdirs=
-       for libdir in $compile_rpath $finalize_rpath; do
- 	if test -n "$hardcode_libdir_flag_spec"; then
-+	  func_replace_sysroot "$libdir"
-+	  libdir=$func_replace_sysroot_result
-+	  func_stripname '=' '' "$libdir"
-+	  libdir=$func_stripname_result
- 	  if test -n "$hardcode_libdir_separator"; then
- 	    if test -z "$hardcode_libdirs"; then
- 	      hardcode_libdirs="$libdir"
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0010-sync-with-OE-libtool-changes.patch b/poky/meta/recipes-devtools/binutils/binutils/0010-sync-with-OE-libtool-changes.patch
new file mode 100644
index 0000000..199aafc
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils/0010-sync-with-OE-libtool-changes.patch
@@ -0,0 +1,86 @@
+From 84fc4ceafcbfad4c6ddc9d65f6a425bd62dd062e Mon Sep 17 00:00:00 2001
+From: Ross Burton <ross.burton@intel.com>
+Date: Mon, 6 Mar 2017 23:33:27 -0800
+Subject: [PATCH] sync with OE libtool changes
+
+Apply these patches from our libtool patches as not only are redundant RPATHs a
+waste of space but they can cause incorrect linking when native packages are
+restored from sstate.
+
+fix-rpath.patch:
+We don't want to add RPATHS which match default linker
+search paths, they're a waste of space. This patch
+filters libtools list and removes the ones we don't need.
+
+norm-rpath.patch:
+Libtool may be passed link paths of the form "/usr/lib/../lib", which
+fool its detection code into thinking it should be included as an
+RPATH in the generated binary.  Normalize before comparision.
+
+Upstream-Status: Inappropriate
+
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ltmain.sh | 34 ++++++++++++++++++++++++++++------
+ 1 file changed, 28 insertions(+), 6 deletions(-)
+
+diff --git a/ltmain.sh b/ltmain.sh
+index 11ee684cccf..3b19ac15328 100644
+--- a/ltmain.sh
++++ b/ltmain.sh
+@@ -8053,8 +8053,16 @@ EOF
+ 		  esac
+ 		fi
+ 	      else
+-		eval flag=\"$hardcode_libdir_flag_spec\"
+-		func_append dep_rpath " $flag"
++                # We only want to hardcode in an rpath if it isn't in the
++                # default dlsearch path.
++                func_normal_abspath "$libdir"
++                libdir_norm=$func_normal_abspath_result
++	        case " $sys_lib_dlsearch_path " in
++	        *" $libdir_norm "*) ;;
++	        *) eval flag=\"$hardcode_libdir_flag_spec\"
++                   func_append dep_rpath " $flag"
++                   ;;
++	        esac
+ 	      fi
+ 	    elif test -n "$runpath_var"; then
+ 	      case "$perm_rpath " in
+@@ -8790,8 +8798,16 @@ EOF
+ 	      esac
+ 	    fi
+ 	  else
+-	    eval flag=\"$hardcode_libdir_flag_spec\"
+-	    func_append rpath " $flag"
++            # We only want to hardcode in an rpath if it isn't in the
++            # default dlsearch path.
++            func_normal_abspath "$libdir"
++            libdir_norm=$func_normal_abspath_result
++	    case " $sys_lib_dlsearch_path " in
++	    *" $libdir_norm "*) ;;
++	    *) eval flag=\"$hardcode_libdir_flag_spec\"
++               rpath+=" $flag"
++               ;;
++	    esac
+ 	  fi
+ 	elif test -n "$runpath_var"; then
+ 	  case "$perm_rpath " in
+@@ -8841,8 +8857,14 @@ EOF
+ 	      esac
+ 	    fi
+ 	  else
+-	    eval flag=\"$hardcode_libdir_flag_spec\"
+-	    func_append rpath " $flag"
++            # We only want to hardcode in an rpath if it isn't in the
++            # default dlsearch path.
++	    case " $sys_lib_dlsearch_path " in
++	    *" $libdir "*) ;;
++	    *) eval flag=\"$hardcode_libdir_flag_spec\"
++               func_append rpath " $flag"
++               ;;
++	    esac
+ 	  fi
+ 	elif test -n "$runpath_var"; then
+ 	  case "$finalize_perm_rpath " in
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0011-Check-for-clang-before-checking-gcc-version.patch b/poky/meta/recipes-devtools/binutils/binutils/0011-Check-for-clang-before-checking-gcc-version.patch
new file mode 100644
index 0000000..f75ec2e
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils/0011-Check-for-clang-before-checking-gcc-version.patch
@@ -0,0 +1,45 @@
+From 628c10087e6e11a7bc748437c5b695835b704aaf Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 15 Apr 2020 14:17:20 -0700
+Subject: [PATCH] Check for clang before checking gcc version
+
+Clang advertises itself to be gcc 4.2.1, so when compiling this test
+here fails since gcc < 4.4.5 did not support -static-libstdc++ but thats
+not true for clang, so its better to make an additional check for clang
+before resorting to gcc version check. This should let clang enable
+static libstdc++ linking
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure    | 2 +-
+ configure.ac | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/configure b/configure
+index be433ef6d5d..7494fbd2f06 100755
+--- a/configure
++++ b/configure
+@@ -5294,7 +5294,7 @@ ac_compiler_gnu=$ac_cv_cxx_compiler_gnu
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+-#if (__GNUC__ < 4) || (__GNUC__ == 4 && __GNUC_MINOR__ < 5)
++#if !defined(__clang__) && ((__GNUC__ < 4) || (__GNUC__ == 4 && __GNUC_MINOR__ < 5))
+ #error -static-libstdc++ not implemented
+ #endif
+ int main() {}
+diff --git a/configure.ac b/configure.ac
+index 1651cbf3b02..2e2ecc47542 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -1323,7 +1323,7 @@ if test "$GCC" = yes; then
+   AC_MSG_CHECKING([whether g++ accepts -static-libstdc++ -static-libgcc])
+   AC_LANG_PUSH(C++)
+   AC_LINK_IFELSE([AC_LANG_SOURCE([
+-#if (__GNUC__ < 4) || (__GNUC__ == 4 && __GNUC_MINOR__ < 5)
++#if !defined(__clang__) && ((__GNUC__ < 4) || (__GNUC__ == 4 && __GNUC_MINOR__ < 5))
+ #error -static-libstdc++ not implemented
+ #endif
+ int main() {}])],
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0011-sync-with-OE-libtool-changes.patch b/poky/meta/recipes-devtools/binutils/binutils/0011-sync-with-OE-libtool-changes.patch
deleted file mode 100644
index 3607e36..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0011-sync-with-OE-libtool-changes.patch
+++ /dev/null
@@ -1,86 +0,0 @@
-From d71c715554a054c534954b0aa357ca699ed68430 Mon Sep 17 00:00:00 2001
-From: Ross Burton <ross.burton@intel.com>
-Date: Mon, 6 Mar 2017 23:33:27 -0800
-Subject: [PATCH] sync with OE libtool changes
-
-Apply these patches from our libtool patches as not only are redundant RPATHs a
-waste of space but they can cause incorrect linking when native packages are
-restored from sstate.
-
-fix-rpath.patch:
-We don't want to add RPATHS which match default linker
-search paths, they're a waste of space. This patch
-filters libtools list and removes the ones we don't need.
-
-norm-rpath.patch:
-Libtool may be passed link paths of the form "/usr/lib/../lib", which
-fool its detection code into thinking it should be included as an
-RPATH in the generated binary.  Normalize before comparision.
-
-Upstream-Status: Inappropriate
-
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- ltmain.sh | 34 ++++++++++++++++++++++++++++------
- 1 file changed, 28 insertions(+), 6 deletions(-)
-
-diff --git a/ltmain.sh b/ltmain.sh
-index 11ee684cccf..3b19ac15328 100644
---- a/ltmain.sh
-+++ b/ltmain.sh
-@@ -8053,8 +8053,16 @@ EOF
- 		  esac
- 		fi
- 	      else
--		eval flag=\"$hardcode_libdir_flag_spec\"
--		func_append dep_rpath " $flag"
-+                # We only want to hardcode in an rpath if it isn't in the
-+                # default dlsearch path.
-+                func_normal_abspath "$libdir"
-+                libdir_norm=$func_normal_abspath_result
-+	        case " $sys_lib_dlsearch_path " in
-+	        *" $libdir_norm "*) ;;
-+	        *) eval flag=\"$hardcode_libdir_flag_spec\"
-+                   func_append dep_rpath " $flag"
-+                   ;;
-+	        esac
- 	      fi
- 	    elif test -n "$runpath_var"; then
- 	      case "$perm_rpath " in
-@@ -8790,8 +8798,16 @@ EOF
- 	      esac
- 	    fi
- 	  else
--	    eval flag=\"$hardcode_libdir_flag_spec\"
--	    func_append rpath " $flag"
-+            # We only want to hardcode in an rpath if it isn't in the
-+            # default dlsearch path.
-+            func_normal_abspath "$libdir"
-+            libdir_norm=$func_normal_abspath_result
-+	    case " $sys_lib_dlsearch_path " in
-+	    *" $libdir_norm "*) ;;
-+	    *) eval flag=\"$hardcode_libdir_flag_spec\"
-+               rpath+=" $flag"
-+               ;;
-+	    esac
- 	  fi
- 	elif test -n "$runpath_var"; then
- 	  case "$perm_rpath " in
-@@ -8841,8 +8857,14 @@ EOF
- 	      esac
- 	    fi
- 	  else
--	    eval flag=\"$hardcode_libdir_flag_spec\"
--	    func_append rpath " $flag"
-+            # We only want to hardcode in an rpath if it isn't in the
-+            # default dlsearch path.
-+	    case " $sys_lib_dlsearch_path " in
-+	    *" $libdir "*) ;;
-+	    *) eval flag=\"$hardcode_libdir_flag_spec\"
-+               func_append rpath " $flag"
-+               ;;
-+	    esac
- 	  fi
- 	elif test -n "$runpath_var"; then
- 	  case "$finalize_perm_rpath " in
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0012-Check-for-clang-before-checking-gcc-version.patch b/poky/meta/recipes-devtools/binutils/binutils/0012-Check-for-clang-before-checking-gcc-version.patch
deleted file mode 100644
index 8848c05..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0012-Check-for-clang-before-checking-gcc-version.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From 787d7cd71d7886d3193c0fd747101c54ad7c3cd8 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 15 Apr 2020 14:17:20 -0700
-Subject: [PATCH] Check for clang before checking gcc version
-
-Clang advertises itself to be gcc 4.2.1, so when compiling this test
-here fails since gcc < 4.4.5 did not support -static-libstdc++ but thats
-not true for clang, so its better to make an additional check for clang
-before resorting to gcc version check. This should let clang enable
-static libstdc++ linking
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- configure    | 2 +-
- configure.ac | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/configure b/configure
-index 6a1da1665d8..916656dc233 100755
---- a/configure
-+++ b/configure
-@@ -5287,7 +5287,7 @@ ac_compiler_gnu=$ac_cv_cxx_compiler_gnu
- cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
--#if (__GNUC__ < 4) || (__GNUC__ == 4 && __GNUC_MINOR__ < 5)
-+#if !defined(__clang__) && ((__GNUC__ < 4) || (__GNUC__ == 4 && __GNUC_MINOR__ < 5))
- #error -static-libstdc++ not implemented
- #endif
- int main() {}
-diff --git a/configure.ac b/configure.ac
-index 2b10e9a1b02..677a0196c2b 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -1309,7 +1309,7 @@ if test "$GCC" = yes; then
-   AC_MSG_CHECKING([whether g++ accepts -static-libstdc++ -static-libgcc])
-   AC_LANG_PUSH(C++)
-   AC_LINK_IFELSE([AC_LANG_SOURCE([
--#if (__GNUC__ < 4) || (__GNUC__ == 4 && __GNUC_MINOR__ < 5)
-+#if !defined(__clang__) && ((__GNUC__ < 4) || (__GNUC__ == 4 && __GNUC_MINOR__ < 5))
- #error -static-libstdc++ not implemented
- #endif
- int main() {}])],
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0012-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch b/poky/meta/recipes-devtools/binutils/binutils/0012-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch
new file mode 100644
index 0000000..c4b4198
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils/0012-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch
@@ -0,0 +1,38 @@
+From 63157cb403b6aa13147840c036a8555c4ea9c166 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 10 Mar 2022 21:21:33 -0800
+Subject: [PATCH] Only generate an RPATH entry if LD_RUN_PATH is not empty
+
+for cases where -rpath isn't specified. debian (#151024)
+
+Upstream-Status: Pending
+
+Signed-off-by: Chris Chimelis <chris@debian.org>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ld/ldelf.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/ld/ldelf.c b/ld/ldelf.c
+index 0d61a3209ec..cd0da2013e2 100644
+--- a/ld/ldelf.c
++++ b/ld/ldelf.c
+@@ -1127,6 +1127,9 @@ ldelf_handle_dt_needed (struct elf_link_hash_table *htab,
+ 		  && command_line.rpath == NULL)
+ 		{
+ 		  path = (const char *) getenv ("LD_RUN_PATH");
++		  if ((path) && (strlen (path) == 0))
++		    path = NULL;
++
+ 		  if (path
+ 		      && ldelf_search_needed (path, &n, force,
+ 					      is_linux, elfsize))
+@@ -1801,6 +1804,8 @@ ldelf_before_allocation (char *audit, char *depaudit,
+   rpath = command_line.rpath;
+   if (rpath == NULL)
+     rpath = (const char *) getenv ("LD_RUN_PATH");
++  if ((rpath) && (strlen (rpath) == 0))
++    rpath = NULL;
+ 
+   for (abfd = link_info.input_bfds; abfd; abfd = abfd->link.next)
+     if (bfd_get_flavour (abfd) == bfd_target_elf_flavour)
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0013-Avoid-as-info-race-condition.patch b/poky/meta/recipes-devtools/binutils/binutils/0013-Avoid-as-info-race-condition.patch
deleted file mode 100644
index 3b3d0bb..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0013-Avoid-as-info-race-condition.patch
+++ /dev/null
@@ -1,75 +0,0 @@
-From 9a84a44d5df4618dd616137fa755bd71b7eacc5f Mon Sep 17 00:00:00 2001
-From: Mike Frysinger <vapier@gentoo.org>
-Date: Sun, 23 Jan 2022 12:44:24 -0500
-Subject: [PATCH] gas: drop old cygnus install hack
-
-This was needed when gas was using the automake cygnus option, but
-this was removed years ago by Simon in d0ac1c44885daf68f631befa37e
-("Bump to autoconf 2.69 and automake 1.15.1").  So delete it here.
-The info pages are already & still installed by default w/out it.
-
-Upstream-Status: Backport [https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=9a84a44d5df4618dd616137fa755bd71b7eacc5f]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gas/Makefile.in  | 14 +++++---------
- gas/doc/local.mk |  4 ----
- 2 files changed, 5 insertions(+), 13 deletions(-)
-
-diff --git a/gas/Makefile.in b/gas/Makefile.in
-index 8f0a56fd8d6..67dac53f68c 100644
---- a/gas/Makefile.in
-+++ b/gas/Makefile.in
-@@ -1854,7 +1854,7 @@ info: info-recursive
- 
- info-am: $(INFO_DEPS) info-local
- 
--install-data-am: install-data-local install-info-am install-man
-+install-data-am: install-info-am install-man
- 
- install-dvi: install-dvi-recursive
- 
-@@ -2008,10 +2008,10 @@ uninstall-man: uninstall-man1
- 	distclean-DEJAGNU distclean-compile distclean-generic \
- 	distclean-hdr distclean-libtool distclean-tags dvi dvi-am html \
- 	html-am html-local info info-am info-local install install-am \
--	install-data install-data-am install-data-local install-dvi \
--	install-dvi-am install-exec install-exec-am install-exec-local \
--	install-html install-html-am install-info install-info-am \
--	install-man install-man1 install-pdf install-pdf-am install-ps \
-+	install-data install-data-am install-dvi install-dvi-am \
-+	install-exec install-exec-am install-exec-local install-html \
-+	install-html-am install-info install-info-am install-man \
-+	install-man1 install-pdf install-pdf-am install-ps \
- 	install-ps-am install-strip installcheck installcheck-am \
- 	installdirs installdirs-am maintainer-clean \
- 	maintainer-clean-aminfo maintainer-clean-generic mostlyclean \
-@@ -2211,10 +2211,6 @@ doc/asconfig.texi: doc/$(CONFIG).texi doc/$(am__dirstamp)
- 	$(AM_V_GEN)cp $(srcdir)/doc/$(CONFIG).texi doc/asconfig.texi
- 	$(AM_V_at)chmod u+w doc/asconfig.texi
- 
--# We want install to imply install-info as per GNU standards, despite the
--# cygnus option.
--install-data-local: install-info
--
- # Maintenance
- 
- # We need it for the taz target in ../Makefile.in.
-diff --git a/gas/doc/local.mk b/gas/doc/local.mk
-index c2de441257c..ac205cf08a2 100644
---- a/gas/doc/local.mk
-+++ b/gas/doc/local.mk
-@@ -101,10 +101,6 @@ CPU_DOCS = \
- 	%D%/c-z80.texi \
- 	%D%/c-z8k.texi
- 
--# We want install to imply install-info as per GNU standards, despite the
--# cygnus option.
--install-data-local: install-info
--
- # This one isn't ready for prime time yet.  Not even a little bit.
- 
- noinst_TEXINFOS = %D%/internals.texi
--- 
-2.27.0
-
diff --git a/poky/meta/recipes-devtools/binutils/binutils/0014-CVE-2019-1010204.patch b/poky/meta/recipes-devtools/binutils/binutils/0014-CVE-2019-1010204.patch
deleted file mode 100644
index dad4a62..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils/0014-CVE-2019-1010204.patch
+++ /dev/null
@@ -1,49 +0,0 @@
-From 2a4fc266dbf77ed7ab83da16468e9ba627b8bc2d Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Mon, 27 Jun 2022 13:07:40 +0100
-Subject: [PATCH] Have gold's File_read::do_read() function check the start
- parameter
-
-	PR 23765
-	* fileread.cc (File_read::do_read): Check start parameter before
-	computing number of bytes to read.
-
-Upstream-Status: Backport [https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=2a4fc266dbf77ed7ab83da16468e9ba627b8bc2d]
-
-Signed-off-by: Pgowda <pgowda.cve@gmail.com>
----
- gold/ChangeLog   | 6 ++++++
- gold/fileread.cc | 6 ++++++
- 2 files changed, 12 insertions(+)
-
-diff --git a/gold/ChangeLog b/gold/ChangeLog
-index 5103dab7b67..8557dc6db7f 100644
---- a/gold/ChangeLog
-+++ b/gold/ChangeLog
-@@ -1,3 +1,9 @@
-+2022-06-27  Nick Clifton  <nickc@redhat.com>
-+
-+	PR 23765
-+	* fileread.cc (File_read::do_read): Check start parameter before
-+	computing number of bytes to read.
-+
- 2022-02-17  Nick Clifton  <nickc@redhat.com>
- 
- 	* po/sr.po: Updated Serbian translation.
-diff --git a/gold/fileread.cc b/gold/fileread.cc
-index 2b653f78c2e..af2df215468 100644
---- a/gold/fileread.cc
-+++ b/gold/fileread.cc
-@@ -385,6 +385,12 @@ File_read::do_read(off_t start, section_
-   ssize_t bytes;
-   if (this->whole_file_view_ != NULL)
-     {
-+      // See PR 23765 for an example of a testcase that triggers this error.
-+      if (((ssize_t) start) < 0)
-+	gold_fatal(_("%s: read failed, starting offset (%#llx) less than zero"),
-+		   this->filename().c_str(),
-+		   static_cast<long long>(start));
-+	
-       bytes = this->size_ - start;
-       if (static_cast<section_size_type>(bytes) >= size)
- 	{
diff --git a/poky/meta/recipes-devtools/binutils/binutils_2.38.bb b/poky/meta/recipes-devtools/binutils/binutils_2.38.bb
deleted file mode 100644
index 12a6fb5..0000000
--- a/poky/meta/recipes-devtools/binutils/binutils_2.38.bb
+++ /dev/null
@@ -1,69 +0,0 @@
-require binutils.inc
-require binutils-${PV}.inc
-
-DEPENDS += "zlib"
-
-EXTRA_OECONF += "--with-sysroot=/ \
-                --enable-install-libbfd \
-                --enable-install-libiberty \
-                --enable-shared \
-                --with-system-zlib \
-                "
-
-EXTRA_OEMAKE:append:libc-musl = "\
-                                 gt_cv_func_gnugettext1_libc=yes \
-                                 gt_cv_func_gnugettext2_libc=yes \
-                                "
-EXTRA_OECONF:class-native = "--enable-targets=all \
-                             --enable-64-bit-bfd \
-                             --enable-install-libiberty \
-                             --enable-install-libbfd \
-                             --disable-gdb \
-                             --disable-gdbserver \
-                             --disable-libdecnumber \
-                             --disable-readline \
-                             --disable-sim \
-                             --disable-werror"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'debuginfod', d)}"
-PACKAGECONFIG[debuginfod] = "--with-debuginfod, --without-debuginfod, elfutils"
-
-do_install:class-native () {
-	autotools_do_install
-
-	# Install the libiberty header
-	install -d ${D}${includedir}
-	install -m 644 ${S}/include/ansidecl.h ${D}${includedir}
-	install -m 644 ${S}/include/libiberty.h ${D}${includedir}
-
-	# We only want libiberty, libbfd and libopcodes
-	rm -rf ${D}${bindir}
-	rm -rf ${D}${prefix}/${TARGET_SYS}
-	rm -rf ${D}${prefix}/lib/ldscripts
-	rm -rf ${D}${prefix}/share/info
-	rm -rf ${D}${prefix}/share/locale
-	rm -rf ${D}${prefix}/share/man
-	rmdir ${D}${prefix}/share || :
-	rmdir ${D}/${libdir}/gcc-lib || :
-	rmdir ${D}/${libdir}64/gcc-lib || :
-	rmdir ${D}/${libdir} || :
-	rmdir ${D}/${libdir}64 || :
-}
-
-# libctf races with libbfd
-PARALLEL_MAKEINST:class-target = ""
-PARALLEL_MAKEINST:class-nativesdk = ""
-
-# Split out libbfd-*.so and libopcodes-*.so so including perf doesn't include
-# extra stuff
-PACKAGE_BEFORE_PN += "libbfd libopcodes"
-FILES:libbfd = "${libdir}/libbfd-*.so.* ${libdir}/libbfd-*.so"
-FILES:libopcodes = "${libdir}/libopcodes-*.so.* ${libdir}/libopcodes-*.so"
-
-SRC_URI:append:class-nativesdk =  " file://0003-binutils-nativesdk-Search-for-alternative-ld.so.conf.patch "
-
-USE_ALTERNATIVES_FOR:class-nativesdk = ""
-FILES:${PN}:append:class-nativesdk = " ${bindir}"
-
-BBCLASSEXTEND = "native nativesdk"
-
diff --git a/poky/meta/recipes-devtools/binutils/binutils_2.39.bb b/poky/meta/recipes-devtools/binutils/binutils_2.39.bb
new file mode 100644
index 0000000..6724038
--- /dev/null
+++ b/poky/meta/recipes-devtools/binutils/binutils_2.39.bb
@@ -0,0 +1,75 @@
+require binutils.inc
+require binutils-${PV}.inc
+
+DEPENDS += "zlib"
+
+EXTRA_OECONF += "--with-sysroot=/ \
+                --enable-install-libbfd \
+                --enable-install-libiberty \
+                --enable-shared \
+                --with-system-zlib \
+                "
+
+EXTRA_OEMAKE:append:libc-musl = "\
+                                 gt_cv_func_gnugettext1_libc=yes \
+                                 gt_cv_func_gnugettext2_libc=yes \
+                                "
+# libcollector/collector.c:547:15: error: no member named '__fprintf_chk' in 'struct CollectorUtilFuncs'
+EXTRA_OECONF:append:toolchain-clang = " --disable-gprofng"
+# | ../../../gprofng/libcollector/../src/collector_module.h:78:13: error: duplicate member 'pwrite'
+# | ../../../gprofng/libcollector/dispatcher.c:578:8: error: 'struct sigevent' has no member named '_sigev_un'
+EXTRA_OECONF:append:libc-musl = " --disable-gprofng"
+
+EXTRA_OECONF:class-native = "--enable-targets=all \
+                             --enable-64-bit-bfd \
+                             --enable-install-libiberty \
+                             --enable-install-libbfd \
+                             --disable-gdb \
+                             --disable-gdbserver \
+                             --disable-libdecnumber \
+                             --disable-readline \
+                             --disable-sim \
+                             --disable-werror"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'debuginfod', d)}"
+PACKAGECONFIG[debuginfod] = "--with-debuginfod, --without-debuginfod, elfutils"
+
+do_install:class-native () {
+	autotools_do_install
+
+	# Install the libiberty header
+	install -d ${D}${includedir}
+	install -m 644 ${S}/include/ansidecl.h ${D}${includedir}
+	install -m 644 ${S}/include/libiberty.h ${D}${includedir}
+
+	# We only want libiberty, libbfd and libopcodes
+	rm -rf ${D}${bindir}
+	rm -rf ${D}${prefix}/${TARGET_SYS}
+	rm -rf ${D}${prefix}/lib/ldscripts
+	rm -rf ${D}${prefix}/share/info
+	rm -rf ${D}${prefix}/share/locale
+	rm -rf ${D}${prefix}/share/man
+	rmdir ${D}${prefix}/share || :
+	rmdir ${D}/${libdir}/gcc-lib || :
+	rmdir ${D}/${libdir}64/gcc-lib || :
+	rmdir ${D}/${libdir} || :
+	rmdir ${D}/${libdir}64 || :
+}
+
+# libctf races with libbfd
+PARALLEL_MAKEINST:class-target = ""
+PARALLEL_MAKEINST:class-nativesdk = ""
+
+# Split out libbfd-*.so and libopcodes-*.so so including perf doesn't include
+# extra stuff
+PACKAGE_BEFORE_PN += "libbfd libopcodes gprofng"
+FILES:libbfd = "${libdir}/libbfd-*.so.* ${libdir}/libbfd-*.so"
+FILES:libopcodes = "${libdir}/libopcodes-*.so.* ${libdir}/libopcodes-*.so"
+FILES:gprofng = "${sysconfdir}/gprofng.rc ${libdir}/gprofng/libgp-*.so ${libdir}/gprofng/libgprofng.so.* ${bindir}/gp-* ${bindir}/gprofng"
+FILES:${PN}-dev += "${libdir}/gprofng/libgprofng.so"
+SRC_URI:append:class-nativesdk =  " file://0003-binutils-nativesdk-Search-for-alternative-ld.so.conf.patch "
+
+USE_ALTERNATIVES_FOR:class-nativesdk = ""
+FILES:${PN}:append:class-nativesdk = " ${bindir}"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/bootchart2/bootchart2/0001-Do-not-include-linux-fs.h.patch b/poky/meta/recipes-devtools/bootchart2/bootchart2/0001-Do-not-include-linux-fs.h.patch
new file mode 100644
index 0000000..4e71e5c
--- /dev/null
+++ b/poky/meta/recipes-devtools/bootchart2/bootchart2/0001-Do-not-include-linux-fs.h.patch
@@ -0,0 +1,31 @@
+From 8591c1e3edaea8f17396e3d2819d9064b2818cfb Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 6 Aug 2022 20:39:01 -0700
+Subject: [PATCH] Do not include linux/fs.h
+
+This header is not needed to be included anymore, moreover it conflicts
+with sys/mount.h from glibc 2.36+ see [1]
+
+[1] https://sourceware.org/glibc/wiki/Release/2.36
+
+Upstream-Status: Submitted [https://github.com/xrmx/bootchart/pull/99]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ collector/collector.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/collector/collector.c b/collector/collector.c
+index 5055181..12738ff 100644
+--- a/collector/collector.c
++++ b/collector/collector.c
+@@ -34,7 +34,6 @@
+ 
+ #include <sys/mount.h>
+ #include <sys/sysmacros.h>
+-#include <linux/fs.h>
+ #include <linux/genetlink.h>
+ #include <linux/taskstats.h>
+ #include <linux/cgroupstats.h>
+-- 
+2.37.1
+
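
The bootchart2 patch above, and the btrfs-tools patch removed a little further down, address the same glibc 2.36 change: <sys/mount.h> now defines the new mount API types (struct mount_attr, enum fsconfig_command) that <linux/fs.h> also pulls in via <linux/mount.h>, so a translation unit including both no longer compiles. A minimal reproducer, illustrative only and assuming glibc 2.36 or newer headers:

    /* conflict.c -- illustrative sketch, not part of either package.
     * With glibc >= 2.36 headers this fails to compile because <sys/mount.h>
     * and <linux/fs.h> (via <linux/mount.h>) both define struct mount_attr and
     * enum fsconfig_command.  Dropping the <linux/fs.h> include, as the
     * patches do, avoids the clash. */
    #include <sys/mount.h>
    #include <linux/fs.h>

    int main(void)
    {
        return 0;
    }

Against glibc 2.35 or older the same file still builds, which is why this breakage only surfaced alongside the glibc upgrade elsewhere in this update.
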
diff --git a/poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.9.bb b/poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.9.bb
index b162807..b4d5b7c 100644
--- a/poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.9.bb
+++ b/poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.9.bb
@@ -95,6 +95,7 @@
            file://0001-collector-Allocate-space-on-heap-for-chunks.patch \
            file://0001-bootchart2-support-usrmerge.patch \
            file://0001-bootchartd.in-make-sure-only-one-bootchartd-process.patch \
+           file://0001-Do-not-include-linux-fs.h.patch \
           "
 
 S = "${WORKDIR}/git"
diff --git a/poky/meta/recipes-devtools/btrfs-tools/btrfs-tools/0001-device-utils.c-Use-linux-mount.h-instead-of-sys-moun.patch b/poky/meta/recipes-devtools/btrfs-tools/btrfs-tools/0001-device-utils.c-Use-linux-mount.h-instead-of-sys-moun.patch
deleted file mode 100644
index 1397e50..0000000
--- a/poky/meta/recipes-devtools/btrfs-tools/btrfs-tools/0001-device-utils.c-Use-linux-mount.h-instead-of-sys-moun.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From d9f118a3408a8a2530f0f60e8072f4323911530f Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 27 Jul 2022 01:08:20 +0000
-Subject: [PATCH] device-utils.c: Use linux mount.h instead of sys/mount.h
-
-This file includes linucx/fs.h which includes linux/mount.h and with
-glibc 2.36 linux/mount.h and glibc mount.h are not compatible [1]
-therefore try to avoid including both headers
-
-[1] https://sourceware.org/glibc/wiki/Release/2.36
-
-Upstream-Status: Submitted [https://www.spinics.net/lists/linux-btrfs/msg126918.html]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- common/device-utils.c | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/common/device-utils.c b/common/device-utils.c
-index 617b6746..25a4fb8c 100644
---- a/common/device-utils.c
-+++ b/common/device-utils.c
-@@ -15,7 +15,6 @@
-  */
- 
- #include <sys/ioctl.h>
--#include <sys/mount.h>
- #include <sys/statfs.h>
- #include <sys/types.h>
- #include <stdio.h>
--- 
-2.25.1
-
diff --git a/poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_5.18.1.bb b/poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_5.18.1.bb
deleted file mode 100644
index 5b24bef..0000000
--- a/poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_5.18.1.bb
+++ /dev/null
@@ -1,73 +0,0 @@
-SUMMARY = "Checksumming Copy on Write Filesystem utilities"
-DESCRIPTION = "Btrfs is a new copy on write filesystem for Linux aimed at \
-implementing advanced features while focusing on fault tolerance, repair and \
-easy administration. \
-This package contains utilities (mkfs, fsck, btrfsctl) used to work with \
-btrfs and an utility (btrfs-convert) to make a btrfs filesystem from an ext3."
-
-HOMEPAGE = "https://btrfs.wiki.kernel.org"
-
-LICENSE = "GPL-2.0-only & LGPL-2.1-or-later"
-LIC_FILES_CHKSUM = " \
-    file://COPYING;md5=fcb02dc552a041dee27e4b85c7396067 \
-    file://libbtrfsutil/COPYING;md5=4fbd65380cdd255951079008b364516c \
-"
-SECTION = "base"
-DEPENDS = "util-linux zlib"
-
-SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git;branch=master \
-           file://0001-Add-a-possibility-to-specify-where-python-modules-ar.patch \
-           file://0001-device-utils.c-Use-linux-mount.h-instead-of-sys-moun.patch \
-           "
-SRCREV = "47b5cf867fc37411ef51eb5c09893a95f7f6c3b7"
-S = "${WORKDIR}/git"
-
-PACKAGECONFIG ??= " \
-    programs \
-    convert \
-    python \
-    crypto-builtin \
-"
-PACKAGECONFIG[manpages] = "--enable-documentation, --disable-documentation, python3-sphinx-native"
-PACKAGECONFIG[programs] = "--enable-programs,--disable-programs"
-PACKAGECONFIG[convert] = "--enable-convert --with-convert=ext2,--disable-convert --without-convert,e2fsprogs"
-PACKAGECONFIG[zoned] = "--enable-zoned,--disable-zoned"
-PACKAGECONFIG[python] = "--enable-python,--disable-python,python3-setuptools-native"
-PACKAGECONFIG[lzo] = "--enable-lzo,--disable-lzo,lzo"
-PACKAGECONFIG[zstd] = "--enable-zstd,--disable-zstd,zstd"
-PACKAGECONFIG[udev] = "--enable-libudev,--disable-libudev,udev"
-
-# Pick only one crypto provider
-PACKAGECONFIG[crypto-builtin] = "--with-crypto=builtin"
-PACKAGECONFIG[crypto-libgcrypt] = "--with-crypto=libgcrypt,,libgcrypt"
-PACKAGECONFIG[crypto-libsodium] = "--with-crypto=libsodium,,libsodium"
-PACKAGECONFIG[crypto-libkcapi] = "--with-crypto=libkcapi,,libkcapi"
-
-inherit autotools-brokensep pkgconfig manpages
-inherit ${@bb.utils.contains('PACKAGECONFIG', 'python', 'setuptools3-base', '', d)}
-
-CLEANBROKEN = "1"
-
-EXTRA_OECONF = "--enable-largefile"
-EXTRA_OECONF:append:libc-musl = " --disable-backtrace "
-EXTRA_PYTHON_CFLAGS = "${DEBUG_PREFIX_MAP}"
-EXTRA_PYTHON_CFLAGS:class-native = ""
-EXTRA_PYTHON_LDFLAGS = "${LDFLAGS}"
-EXTRA_OEMAKE = "V=1 'EXTRA_PYTHON_CFLAGS=${EXTRA_PYTHON_CFLAGS}' 'EXTRA_PYTHON_LDFLAGS=${EXTRA_PYTHON_LDFLAGS}'"
-
-do_configure:prepend() {
-	# Upstream doesn't ship this and autoreconf won't install it as automake isn't used.
-	mkdir -p ${S}/config
-	cp -f $(automake --print-libdir)/install-sh ${S}/config/
-}
-
-
-do_install:append() {
-    if [ "${@bb.utils.filter('PACKAGECONFIG', 'python', d)}" ]; then
-        oe_runmake 'DESTDIR=${D}' 'PYTHON_SITEPACKAGES_DIR=${PYTHON_SITEPACKAGES_DIR}' install_python
-    fi
-}
-
-RDEPENDS:${PN} = "libgcc"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_5.19.bb b/poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_5.19.bb
new file mode 100644
index 0000000..4f116a8
--- /dev/null
+++ b/poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_5.19.bb
@@ -0,0 +1,72 @@
+SUMMARY = "Checksumming Copy on Write Filesystem utilities"
+DESCRIPTION = "Btrfs is a new copy on write filesystem for Linux aimed at \
+implementing advanced features while focusing on fault tolerance, repair and \
+easy administration. \
+This package contains utilities (mkfs, fsck, btrfsctl) used to work with \
+btrfs and an utility (btrfs-convert) to make a btrfs filesystem from an ext3."
+
+HOMEPAGE = "https://btrfs.wiki.kernel.org"
+
+LICENSE = "GPL-2.0-only & LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = " \
+    file://COPYING;md5=fcb02dc552a041dee27e4b85c7396067 \
+    file://libbtrfsutil/COPYING;md5=4fbd65380cdd255951079008b364516c \
+"
+SECTION = "base"
+DEPENDS = "util-linux zlib"
+
+SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git;branch=master \
+           file://0001-Add-a-possibility-to-specify-where-python-modules-ar.patch \
+           "
+SRCREV = "96b83b16158f3b87037085761bf212e958473767"
+S = "${WORKDIR}/git"
+
+PACKAGECONFIG ??= " \
+    programs \
+    convert \
+    python \
+    crypto-builtin \
+"
+PACKAGECONFIG[manpages] = "--enable-documentation, --disable-documentation, python3-sphinx-native"
+PACKAGECONFIG[programs] = "--enable-programs,--disable-programs"
+PACKAGECONFIG[convert] = "--enable-convert --with-convert=ext2,--disable-convert --without-convert,e2fsprogs"
+PACKAGECONFIG[zoned] = "--enable-zoned,--disable-zoned"
+PACKAGECONFIG[python] = "--enable-python,--disable-python,python3-setuptools-native"
+PACKAGECONFIG[lzo] = "--enable-lzo,--disable-lzo,lzo"
+PACKAGECONFIG[zstd] = "--enable-zstd,--disable-zstd,zstd"
+PACKAGECONFIG[udev] = "--enable-libudev,--disable-libudev,udev"
+
+# Pick only one crypto provider
+PACKAGECONFIG[crypto-builtin] = "--with-crypto=builtin"
+PACKAGECONFIG[crypto-libgcrypt] = "--with-crypto=libgcrypt,,libgcrypt"
+PACKAGECONFIG[crypto-libsodium] = "--with-crypto=libsodium,,libsodium"
+PACKAGECONFIG[crypto-libkcapi] = "--with-crypto=libkcapi,,libkcapi"
+
+inherit autotools-brokensep pkgconfig manpages
+inherit ${@bb.utils.contains('PACKAGECONFIG', 'python', 'setuptools3-base', '', d)}
+
+CLEANBROKEN = "1"
+
+EXTRA_OECONF = "--enable-largefile"
+EXTRA_OECONF:append:libc-musl = " --disable-backtrace "
+EXTRA_PYTHON_CFLAGS = "${DEBUG_PREFIX_MAP}"
+EXTRA_PYTHON_CFLAGS:class-native = ""
+EXTRA_PYTHON_LDFLAGS = "${LDFLAGS}"
+EXTRA_OEMAKE = "V=1 'EXTRA_PYTHON_CFLAGS=${EXTRA_PYTHON_CFLAGS}' 'EXTRA_PYTHON_LDFLAGS=${EXTRA_PYTHON_LDFLAGS}'"
+
+do_configure:prepend() {
+	# Upstream doesn't ship this and autoreconf won't install it as automake isn't used.
+	mkdir -p ${S}/config
+	cp -f $(automake --print-libdir)/install-sh ${S}/config/
+}
+
+
+do_install:append() {
+    if [ "${@bb.utils.filter('PACKAGECONFIG', 'python', d)}" ]; then
+        oe_runmake 'DESTDIR=${D}' 'PYTHON_SITEPACKAGES_DIR=${PYTHON_SITEPACKAGES_DIR}' install_python
+    fi
+}
+
+RDEPENDS:${PN} = "libgcc"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/cargo/cargo-cross-canadian.inc b/poky/meta/recipes-devtools/cargo/cargo-cross-canadian.inc
deleted file mode 100644
index a2fac92..0000000
--- a/poky/meta/recipes-devtools/cargo/cargo-cross-canadian.inc
+++ /dev/null
@@ -1,85 +0,0 @@
-SUMMARY = "Cargo, a package manager for Rust cross canadian flavor."
-
-RUST_ALTERNATE_EXE_PATH = "${STAGING_LIBDIR_NATIVE}/llvm-rust/bin/llvm-config"
-
-HOST_SYS = "${HOST_ARCH}-unknown-linux-gnu"
-CARGO_RUST_TARGET_CCLD = "${RUST_BUILD_CCLD}"
-
-inherit rust-target-config
-require cargo.inc
-
-CARGO = "${WORKDIR}/${CARGO_SNAPSHOT}/bin/cargo"
-BASEDEPENDS:remove = "cargo-native"
-
-export RUST_TARGET_PATH="${WORKDIR}/targets/"
-
-RUSTLIB = " \
-	-L ${STAGING_DIR_NATIVE}/${SDKPATHNATIVE}/usr/lib/${TARGET_SYS}/rustlib/${HOST_SYS}/lib \
-"
-
-DEPENDS += "rust-native \
-            rust-cross-canadian-${TRANSLATED_TARGET_ARCH} \
-            virtual/nativesdk-${HOST_PREFIX}compilerlibs \
-            nativesdk-openssl nativesdk-zlib \
-            virtual/nativesdk-libc \
-"
-
-inherit cross-canadian
-
-PN = "cargo-cross-canadian-${TRANSLATED_TARGET_ARCH}"
-
-RUST_TARGETGENS = "BUILD HOST"
-
-do_compile:prepend () {
-	PKG_CONFIG_PATH="${RECIPE_SYSROOT_NATIVE}/usr/lib/pkgconfig:${PKG_CONFIG_PATH}"
-}
-
-create_sdk_wrapper () {
-        file="$1"
-        shift
-
-        cat <<- EOF > "${file}"
-		#!/bin/sh
-		\$$1 \$@
-		EOF
-
-        chmod +x "$file"
-}
-
-do_install () {
-    SYS_BINDIR=$(dirname ${D}${bindir})
-    install -d "${SYS_BINDIR}"
-    install -m 755 "${B}/target/${CARGO_TARGET_SUBDIR}/cargo" "${SYS_BINDIR}"
-    for i in ${SYS_BINDIR}/*; do
-	chrpath -r "\$ORIGIN/../lib" ${i}
-    done
-
-    # Uses SDK's CC as linker so linked binaries works out of box.
-    create_sdk_wrapper "${SYS_BINDIR}/target-rust-ccld" "CC"
-
-    ENV_SETUP_DIR=${D}${base_prefix}/environment-setup.d
-    mkdir "${ENV_SETUP_DIR}"
-    ENV_SETUP_SH="${ENV_SETUP_DIR}/cargo.sh"
-    cat <<- EOF > "${ENV_SETUP_SH}"
-	export CARGO_HOME="\$OECORE_TARGET_SYSROOT/home/cargo"
-	mkdir -p "\$CARGO_HOME"
-        # Init the default target once, it might be otherwise user modified.
-	if [ ! -f "\$CARGO_HOME/config" ]; then
-		touch "\$CARGO_HOME/config"
-		echo "[build]" >> "\$CARGO_HOME/config"
-		echo 'target = "'${TARGET_SYS}'"' >> "\$CARGO_HOME/config"
-		echo '# TARGET_SYS' >> "\$CARGO_HOME/config"
-		echo '[target.'${TARGET_SYS}']' >> "\$CARGO_HOME/config"
-		echo 'linker = "target-rust-ccld"' >> "\$CARGO_HOME/config"
-    fi
-
-	# Keep the below off as long as HTTP/2 is disabled.
-	export CARGO_HTTP_MULTIPLEXING=false
-
-	export CARGO_HTTP_CAINFO="\$OECORE_NATIVE_SYSROOT/etc/ssl/certs/ca-certificates.crt"
-	EOF
-}
-
-PKG_SYS_BINDIR = "${SDKPATHNATIVE}/usr/bin"
-FILES:${PN} += "${base_prefix}/environment-setup.d ${PKG_SYS_BINDIR}"
-
diff --git a/poky/meta/recipes-devtools/cargo/cargo-cross-canadian_1.62.0.bb b/poky/meta/recipes-devtools/cargo/cargo-cross-canadian_1.62.0.bb
deleted file mode 100644
index 63fd691..0000000
--- a/poky/meta/recipes-devtools/cargo/cargo-cross-canadian_1.62.0.bb
+++ /dev/null
@@ -1,6 +0,0 @@
-require recipes-devtools/rust/rust-source.inc
-require recipes-devtools/rust/rust-snapshot.inc
-
-FILESEXTRAPATHS:prepend := "${THISDIR}/cargo-${PV}:"
-
-require cargo-cross-canadian.inc
diff --git a/poky/meta/recipes-devtools/cargo/cargo.inc b/poky/meta/recipes-devtools/cargo/cargo.inc
index 607c51f..40421df 100644
--- a/poky/meta/recipes-devtools/cargo/cargo.inc
+++ b/poky/meta/recipes-devtools/cargo/cargo.inc
@@ -41,6 +41,14 @@
 	install -m 755 "${B}/target/${CARGO_TARGET_SUBDIR}/cargo" "${D}${bindir}"
 }
 
+do_install:append:class-nativesdk() {
+	# To quote the cargo docs, "Cargo also sets the dynamic library path when compiling
+	# and running binaries with commands like `cargo run` and `cargo test`". Sadly it
+	# sets to libdir but not base_libdir leading to symbol mismatches depending on the
+	# host OS. Fully set LD_LIBRARY_PATH to contain both to avoid this.
+	create_wrapper ${D}/${bindir}/cargo LD_LIBRARY_PATH=${libdir}:${base_libdir}
+}
+
 # Disabled due to incompatibility with libgit2 0.28.x (https://github.com/rust-lang/git2-rs/issues/458, https://bugs.gentoo.org/707746#c1)
 # as shipped by Yocto Dunfell.
 # According to https://github.com/rust-lang/git2-rs/issues/458#issuecomment-522567539, there are no compatibility guarantees between
@@ -54,3 +62,8 @@
 # so we must use the locally set up snapshot to bootstrap the build.
 BASEDEPENDS:remove:class-native = "cargo-native"
 CARGO:class-native = "${WORKDIR}/${CARGO_SNAPSHOT}/bin/cargo"
+
+DEPENDS:append:class-nativesdk = " nativesdk-rust"
+RUSTLIB:append:class-nativesdk = " -L ${STAGING_DIR_HOST}/${SDKPATHNATIVE}/usr/lib/rustlib/${RUST_HOST_SYS}/lib"
+
+
diff --git a/poky/meta/recipes-devtools/cargo/cargo_1.62.0.bb b/poky/meta/recipes-devtools/cargo/cargo_1.62.0.bb
deleted file mode 100644
index eee58fc..0000000
--- a/poky/meta/recipes-devtools/cargo/cargo_1.62.0.bb
+++ /dev/null
@@ -1,4 +0,0 @@
-require recipes-devtools/rust/rust-source.inc
-require recipes-devtools/rust/rust-snapshot.inc
-require cargo.inc
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/cargo/cargo_1.63.0.bb b/poky/meta/recipes-devtools/cargo/cargo_1.63.0.bb
new file mode 100644
index 0000000..5c85277
--- /dev/null
+++ b/poky/meta/recipes-devtools/cargo/cargo_1.63.0.bb
@@ -0,0 +1,5 @@
+require recipes-devtools/rust/rust-source.inc
+require recipes-devtools/rust/rust-snapshot.inc
+require cargo.inc
+BBCLASSEXTEND = "native nativesdk"
+RUSTLIB_DEP:class-nativesdk = ""
\ No newline at end of file
diff --git a/poky/meta/recipes-devtools/ccache/ccache_4.6.1.bb b/poky/meta/recipes-devtools/ccache/ccache_4.6.1.bb
deleted file mode 100644
index 0d6471d..0000000
--- a/poky/meta/recipes-devtools/ccache/ccache_4.6.1.bb
+++ /dev/null
@@ -1,28 +0,0 @@
-SUMMARY = "a fast C/C++ compiler cache"
-DESCRIPTION = "ccache is a compiler cache. It speeds up recompilation \
-by caching the result of previous compilations and detecting when the \
-same compilation is being done again. Supported languages are C, C\+\+, \
-Objective-C and Objective-C++."
-HOMEPAGE = "http://ccache.samba.org"
-SECTION = "devel"
-
-LICENSE = "GPL-3.0-or-later"
-LIC_FILES_CHKSUM = "file://LICENSE.adoc;md5=7a19377a02749d8a1281ed608169b0ee"
-
-DEPENDS = "zstd"
-
-SRC_URI = "https://github.com/ccache/ccache/releases/download/v${PV}/${BP}.tar.gz \
-           file://0001-xxhash.h-Fix-build-with-gcc-12.patch \
-"
-SRC_URI[sha256sum] = "59b28a57c3a45e48d6049001999c9f94cd4d3e9b0196994bed9a6a7437ffa3bc"
-
-UPSTREAM_CHECK_URI = "https://github.com/ccache/ccache/releases/"
-
-inherit cmake
-
-PATCHTOOL = "patch"
-
-BBCLASSEXTEND = "native nativesdk"
-
-PACKAGECONFIG[docs] = "-DENABLE_DOCUMENTATION=ON,-DENABLE_DOCUMENTATION=OFF,asciidoc"
-PACKAGECONFIG[redis] = "-DREDIS_STORAGE_BACKEND=ON,-DREDIS_STORAGE_BACKEND=OFF,hiredis"
diff --git a/poky/meta/recipes-devtools/ccache/ccache_4.6.2.bb b/poky/meta/recipes-devtools/ccache/ccache_4.6.2.bb
new file mode 100644
index 0000000..dbac022
--- /dev/null
+++ b/poky/meta/recipes-devtools/ccache/ccache_4.6.2.bb
@@ -0,0 +1,30 @@
+SUMMARY = "a fast C/C++ compiler cache"
+DESCRIPTION = "ccache is a compiler cache. It speeds up recompilation \
+by caching the result of previous compilations and detecting when the \
+same compilation is being done again. Supported languages are C, C\+\+, \
+Objective-C and Objective-C++."
+HOMEPAGE = "http://ccache.samba.org"
+SECTION = "devel"
+
+LICENSE = "GPL-3.0-or-later"
+LIC_FILES_CHKSUM = "file://LICENSE.adoc;md5=7a19377a02749d8a1281ed608169b0ee"
+
+DEPENDS = "zstd"
+
+SRC_URI = "https://github.com/ccache/ccache/releases/download/v${PV}/${BP}.tar.gz \
+           file://0001-xxhash.h-Fix-build-with-gcc-12.patch \
+           file://0001-Include-time.h-for-time_t.patch \
+           file://0002-config-Include-sys-types.h-for-mode_t-defintion.patch \
+"
+SRC_URI[sha256sum] = "6a746a9bed01585388b68e2d58af2e77741fc8d66bc360b5a0b4c41fc284dafe"
+
+UPSTREAM_CHECK_URI = "https://github.com/ccache/ccache/releases/"
+
+inherit cmake
+
+PATCHTOOL = "patch"
+
+BBCLASSEXTEND = "native nativesdk"
+
+PACKAGECONFIG[docs] = "-DENABLE_DOCUMENTATION=ON,-DENABLE_DOCUMENTATION=OFF,asciidoc"
+PACKAGECONFIG[redis] = "-DREDIS_STORAGE_BACKEND=ON,-DREDIS_STORAGE_BACKEND=OFF,hiredis"
diff --git a/poky/meta/recipes-devtools/ccache/files/0001-Include-time.h-for-time_t.patch b/poky/meta/recipes-devtools/ccache/files/0001-Include-time.h-for-time_t.patch
new file mode 100644
index 0000000..d752eb0
--- /dev/null
+++ b/poky/meta/recipes-devtools/ccache/files/0001-Include-time.h-for-time_t.patch
@@ -0,0 +1,29 @@
+From 590c656838a9b3769a7a855fb1891bfa8d8878ad Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 23 Aug 2022 10:27:21 -0700
+Subject: [PATCH] Include time.h for time_t
+
+Fixes
+src/core/Statistics.hpp:41:37: error: 'time_t' has not been declared
+|    41 |                                     time_t last_updated,
+|       |                                     ^~~~~~
+
+Upstream-Status: Submitted [https://github.com/ccache/ccache/pull/1145]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+---
+ src/core/Statistics.hpp | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/core/Statistics.hpp b/src/core/Statistics.hpp
+index 54f32e9..eb80e1c 100644
+--- a/src/core/Statistics.hpp
++++ b/src/core/Statistics.hpp
+@@ -21,6 +21,7 @@
+ #include <core/StatisticsCounters.hpp>
+ 
+ #include <cstdint>
++#include <ctime>
+ #include <string>
+ #include <unordered_map>
+ #include <vector>
diff --git a/poky/meta/recipes-devtools/ccache/files/0001-xxhash.h-Fix-build-with-gcc-12.patch b/poky/meta/recipes-devtools/ccache/files/0001-xxhash.h-Fix-build-with-gcc-12.patch
index 4c2623e..67c74a1 100644
--- a/poky/meta/recipes-devtools/ccache/files/0001-xxhash.h-Fix-build-with-gcc-12.patch
+++ b/poky/meta/recipes-devtools/ccache/files/0001-xxhash.h-Fix-build-with-gcc-12.patch
@@ -1,4 +1,4 @@
-From cfde5ba7d10ae1e9d0c259dd1e7027e9bad8f83c Mon Sep 17 00:00:00 2001
+From 550834a3ec2e05e379be63b084e7fa06a1723f84 Mon Sep 17 00:00:00 2001
 From: Mingli Yu <mingli.yu@windriver.com>
 Date: Mon, 6 Jun 2022 17:53:20 +0800
 Subject: [PATCH] xxhash.h: Fix build with gcc-12
@@ -14,9 +14,10 @@
        |                  ~~~
   4371 |                  secret + n*XXH_SECRET_CONSUME_RATE);
 
-Upstream-Status: Submitted [https://github.com/ccache/ccache/pull/1089]
+Upstream-Status: Submitted [https://github.com/Cyan4973/xxHash/pull/720]
 
 Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
+
 ---
  src/third_party/xxhash.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
@@ -34,6 +35,3 @@
  #  define XXH_NO_INLINE static __attribute__((noinline))
  #elif defined(_MSC_VER)  /* Visual Studio */
  #  define XXH_FORCE_INLINE static __forceinline
--- 
-2.25.1
-
diff --git a/poky/meta/recipes-devtools/ccache/files/0002-config-Include-sys-types.h-for-mode_t-defintion.patch b/poky/meta/recipes-devtools/ccache/files/0002-config-Include-sys-types.h-for-mode_t-defintion.patch
new file mode 100644
index 0000000..0fd7760
--- /dev/null
+++ b/poky/meta/recipes-devtools/ccache/files/0002-config-Include-sys-types.h-for-mode_t-defintion.patch
@@ -0,0 +1,25 @@
+From f98b390a2d323f7f92fb0492b0943d201afe5b8f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 23 Aug 2022 10:40:53 -0700
+Subject: [PATCH] config: Include sys/types.h for mode_t defintion
+
+Upstream-Status: Submitted [https://github.com/ccache/ccache/pull/1145]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+---
+ src/Config.hpp | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/src/Config.hpp b/src/Config.hpp
+index a9e08ec..9e7af40 100644
+--- a/src/Config.hpp
++++ b/src/Config.hpp
+@@ -25,6 +25,8 @@
+ 
+ #include "third_party/nonstd/optional.hpp"
+ 
++#include <sys/types.h>
++
+ #include <cstdint>
+ #include <functional>
+ #include <limits>
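
Both new ccache patches are the same class of fix: a header used time_t or mode_t but relied on some other header to declare it transitively, which stops working when include chains are tightened. An illustrative sketch of the rule being applied (the struct below is hypothetical, not ccache source):

    /* Include what you use: declare time_t and mode_t explicitly instead of
     * relying on whatever the includer happened to pull in first.  Patch 0001
     * does this with <ctime> in Statistics.hpp, patch 0002 with <sys/types.h>
     * in Config.hpp. */
    #include <time.h>       /* time_t */
    #include <sys/types.h>  /* mode_t */

    struct cache_entry {            /* hypothetical example type */
        time_t last_updated;
        mode_t file_mode;
    };

    int main(void)
    {
        struct cache_entry e = { (time_t)0, (mode_t)0644 };
        return (int)e.last_updated;
    }
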
diff --git a/poky/meta/recipes-devtools/cmake/cmake-native_3.23.2.bb b/poky/meta/recipes-devtools/cmake/cmake-native_3.24.0.bb
similarity index 100%
rename from poky/meta/recipes-devtools/cmake/cmake-native_3.23.2.bb
rename to poky/meta/recipes-devtools/cmake/cmake-native_3.24.0.bb
diff --git a/poky/meta/recipes-devtools/cmake/cmake.inc b/poky/meta/recipes-devtools/cmake/cmake.inc
index 4a6884e..d64afff 100644
--- a/poky/meta/recipes-devtools/cmake/cmake.inc
+++ b/poky/meta/recipes-devtools/cmake/cmake.inc
@@ -10,7 +10,7 @@
 BUGTRACKER = "http://public.kitware.com/Bug/my_view_page.php"
 SECTION = "console/utils"
 LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://Copyright.txt;md5=f2102a52df7aa592cf072180e7ebc8c7 \
+LIC_FILES_CHKSUM = "file://Copyright.txt;md5=45025187a129339459b6f1a24f7fac6e \
                     file://Source/cmake.h;beginline=1;endline=2;md5=a5f70e1fef8614734eae0d62b4f5891b \
                     "
 
@@ -21,7 +21,7 @@
            file://0004-Fail-silently-if-system-Qt-installation-is-broken.patch \
 "
 
-SRC_URI[sha256sum] = "f316b40053466f9a416adf981efda41b160ca859e97f6a484b447ea299ff26aa"
+SRC_URI[sha256sum] = "c2b61f7cdecb1576cad25f918a8f42b8685d88a832fd4b62b9e0fa32e915a658"
 
 UPSTREAM_CHECK_REGEX = "cmake-(?P<pver>\d+(\.\d+)+)\.tar"
 
diff --git a/poky/meta/recipes-devtools/cmake/cmake_3.23.2.bb b/poky/meta/recipes-devtools/cmake/cmake_3.24.0.bb
similarity index 100%
rename from poky/meta/recipes-devtools/cmake/cmake_3.23.2.bb
rename to poky/meta/recipes-devtools/cmake/cmake_3.24.0.bb
diff --git a/poky/meta/recipes-devtools/expect/expect/0001-Add-prototype-to-function-definitions.patch b/poky/meta/recipes-devtools/expect/expect/0001-Add-prototype-to-function-definitions.patch
new file mode 100644
index 0000000..7d211b3
--- /dev/null
+++ b/poky/meta/recipes-devtools/expect/expect/0001-Add-prototype-to-function-definitions.patch
@@ -0,0 +1,113 @@
+From 904c7cf6647594939ce1e398468bca3c885f0622 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 15 Aug 2022 18:25:23 -0700
+Subject: [PATCH] Add prototype to function definitions
+
+Compilers like clang has started erroring out on implicit-function-declaration
+therefore arrange the relevant include files where needed.
+
+Upstream-Status: Submitted [https://sourceforge.net/p/expect/patches/24/]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ exp_chan.c     | 5 +++--
+ exp_clib.c     | 4 +++-
+ exp_main_sub.c | 5 +++++
+ pty_termios.c  | 4 ++++
+ 4 files changed, 15 insertions(+), 3 deletions(-)
+
+diff --git a/exp_chan.c b/exp_chan.c
+index 79f486c..50375d3 100644
+--- a/exp_chan.c
++++ b/exp_chan.c
+@@ -35,6 +35,7 @@
+ #include "exp_prog.h"
+ #include "exp_command.h"
+ #include "exp_log.h"
++#include "exp_event.h" /* exp_background_channelhandler */
+ #include "tcldbg.h" /* Dbg_StdinMode */
+ 
+ extern int		expSetBlockModeProc _ANSI_ARGS_((int fd, int mode));
+@@ -631,7 +632,7 @@ expWaitOnOne() {
+ }
+ 
+ void
+-exp_background_channelhandlers_run_all()
++exp_background_channelhandlers_run_all(void)
+ {
+     ThreadSpecificData *tsdPtr = TCL_TSD_INIT(&dataKey);
+     ExpState *esPtr;
+@@ -760,7 +761,7 @@ expCreateChannel(interp,fdin,fdout,pid)
+ }
+ 
+ void
+-expChannelInit() {
++expChannelInit(void) {
+     ThreadSpecificData *tsdPtr = TCL_TSD_INIT(&dataKey);
+ 
+     tsdPtr->channelCount = 0;
+diff --git a/exp_clib.c b/exp_clib.c
+index b21fb5d..8f31fc3 100644
+--- a/exp_clib.c
++++ b/exp_clib.c
+@@ -9,13 +9,14 @@ would appreciate credit if this program or parts of it are used.
+ 
+ #include "expect_cf.h"
+ #include <stdio.h>
++#include <unistd.h>
+ #include <setjmp.h>
+ #ifdef HAVE_INTTYPES_H
+ #  include <inttypes.h>
+ #endif
+ #include <sys/types.h>
+ #include <sys/ioctl.h>
+-
++#include <sys/wait.h>
+ #ifdef TIME_WITH_SYS_TIME
+ # include <sys/time.h>
+ # include <time.h>
+@@ -1738,6 +1739,7 @@ int exp_getptyslave();
+ #define sysreturn(x)	return(errno = x, -1)
+ 
+ void exp_init_pty();
++void exp_init_tty();
+ 
+ /*
+    The following functions are linked from the Tcl library.  They
+diff --git a/exp_main_sub.c b/exp_main_sub.c
+index bf6c4be..f53b89e 100644
+--- a/exp_main_sub.c
++++ b/exp_main_sub.c
+@@ -61,6 +61,11 @@ int exp_cmdlinecmds = FALSE;
+ int exp_interactive =  FALSE;
+ int exp_buffer_command_input = FALSE;/* read in entire cmdfile at once */
+ int exp_fgets();
++int exp_tty_cooked_echo(
++    Tcl_Interp *interp,
++    exp_tty *tty_old,
++    int *was_raw,
++    int *was_echo);
+ 
+ Tcl_Interp *exp_interp;	/* for use by signal handlers who can't figure out */
+ 			/* the interpreter directly */
+diff --git a/pty_termios.c b/pty_termios.c
+index c605b23..80ed5e7 100644
+--- a/pty_termios.c
++++ b/pty_termios.c
+@@ -7,6 +7,7 @@ would appreciate credit if you use this file or parts of it.
+ 
+ */
+ 
++#include <pty.h> /* openpty */
+ #include <stdio.h>
+ #include <signal.h>
+ 
+@@ -15,6 +16,9 @@ would appreciate credit if you use this file or parts of it.
+ #endif
+ 
+ #include "expect_cf.h"
++#include "tclInt.h"
++
++extern char * expErrnoMsg    _ANSI_ARGS_((int));
+ 
+ /*
+    The following functions are linked from the Tcl library.  They
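
The expect patch above is needed because recent clang releases treat implicit function declarations as a hard error (and gcc can be made to with -Werror=implicit-function-declaration), while pre-C99 code such as expect relies on calling functions with no visible prototype. A minimal illustration of the failure mode and the fix (the ready() function is hypothetical, not expect code):

    #include <stdio.h>

    /* Without this declaration, the call in main() is an "implicit function
     * declaration"; recent clang rejects that outright, which is what the
     * prototypes and extra #includes in the patch above address. */
    int ready(void);

    int main(void)
    {
        if (ready())
            puts("ready");
        return 0;
    }

    int ready(void)
    {
        return 1;
    }
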
diff --git a/poky/meta/recipes-devtools/expect/expect_5.45.4.bb b/poky/meta/recipes-devtools/expect/expect_5.45.4.bb
index e22fa14..6cb46f3 100644
--- a/poky/meta/recipes-devtools/expect/expect_5.45.4.bb
+++ b/poky/meta/recipes-devtools/expect/expect_5.45.4.bb
@@ -26,7 +26,8 @@
            file://0001-expect-Fix-segfaults-if-Tcl-is-built-with-stubs-and-.patch \
            file://0001-exp_main_sub.c-Use-PATH_MAX-for-path.patch \
            file://0001-fixline1-fix-line-1.patch \
-          "
+           file://0001-Add-prototype-to-function-definitions.patch \
+           "
 SRC_URI[md5sum] = "00fce8de158422f5ccd2666512329bd2"
 SRC_URI[sha256sum] = "49a7da83b0bdd9f46d04a04deec19c7767bb9a323e40c4781f89caf760b92c34"
 
diff --git a/poky/meta/recipes-devtools/gcc/gcc-12.1.inc b/poky/meta/recipes-devtools/gcc/gcc-12.1.inc
deleted file mode 100644
index 56678c7..0000000
--- a/poky/meta/recipes-devtools/gcc/gcc-12.1.inc
+++ /dev/null
@@ -1,116 +0,0 @@
-require gcc-common.inc
-
-# Third digit in PV should be incremented after a minor release
-
-PV = "12.1.0"
-
-# BINV should be incremented to a revision after a minor gcc release
-
-BINV = "12.1.0"
-
-FILESEXTRAPATHS =. "${FILE_DIRNAME}/gcc:${FILE_DIRNAME}/gcc/backport:"
-
-DEPENDS =+ "mpfr gmp libmpc zlib flex-native"
-NATIVEDEPS = "mpfr-native gmp-native libmpc-native zlib-native flex-native zstd-native"
-
-LICENSE = "GPL-3.0-with-GCC-exception & GPL-3.0-only"
-
-LIC_FILES_CHKSUM = "\
-    file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
-    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504 \
-    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6 \
-    file://COPYING.LIB;md5=2d5025d4aa3495befef8f17206a5b0a1 \
-    file://COPYING.RUNTIME;md5=fe60d87048567d4fe8c8a0ed2448bcc8 \
-"
-# from git
-#RELEASE ?= "7092b7aea122a91824d048aeb23834cf1d19b1a1"
-#BASEURI ?= "https://repo.or.cz/official-gcc.git/snapshot/${RELEASE}.tar.gz;downloadfilename=gcc-${PV}-${RELEASE}.tar.gz"
-#SOURCEDIR ?= "official-gcc-${@'${RELEASE}'[0:7]}"
-
-# from snapshot
-#RELEASE ?= "12.1.0-RC-20220429"
-#SOURCEDIR ?= "gcc-${RELEASE}"
-#BASEURI ?= "https://gcc.gnu.org/pub/gcc/snapshots/${RELEASE}/gcc-${RELEASE}.tar.xz"
-
-# official release
-RELEASE ?= "${PV}"
-BASEURI ?= "${GNU_MIRROR}/gcc/gcc-${PV}/gcc-${PV}.tar.xz"
-SOURCEDIR ?= "gcc-${PV}"
-
-SRC_URI = "${BASEURI} \
-           file://0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch \
-           file://0002-gcc-poison-system-directories.patch \
-           file://0003-64-bit-multilib-hack.patch \
-           file://0004-Pass-CXXFLAGS_FOR_BUILD-in-a-couple-of-places-to-avo.patch \
-           file://0005-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch \
-           file://0006-cpp-honor-sysroot.patch \
-           file://0007-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch \
-           file://0008-libtool.patch \
-           file://0009-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch \
-           file://0010-Use-the-multilib-config-files-from-B-instead-of-usin.patch \
-           file://0011-Avoid-using-libdir-from-.la-which-usually-points-to-.patch \
-           file://0012-export-CPP.patch \
-           file://0013-Ensure-target-gcc-headers-can-be-included.patch \
-           file://0014-Don-t-search-host-directory-during-relink-if-inst_pr.patch \
-           file://0015-libcc1-fix-libcc1-s-install-path-and-rpath.patch \
-           file://0016-handle-sysroot-support-for-nativesdk-gcc.patch \
-           file://0017-Search-target-sysroot-gcc-version-specific-dirs-with.patch \
-           file://0018-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch \
-           file://0019-Re-introduce-spe-commandline-options.patch \
-           file://0020-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch \
-           file://0021-gentypes-genmodes-Do-not-use-__LINE__-for-maintainin.patch \
-           file://0022-mingw32-Enable-operation_not_supported.patch \
-           file://0023-libatomic-Do-not-enforce-march-on-aarch64.patch \
-           file://0024-Fix-install-path-of-linux64.h.patch \
-           file://0025-Move-sched.h-include-ahead-of-user-headers.patch \
-           file://0026-rust-recursion-limit.patch \
-           file://0001-libsanitizer-cherry-pick-9cf13067cb5088626ba7-from-u.patch \
-"
-SRC_URI[sha256sum] = "62fd634889f31c02b64af2c468f064b47ad1ca78411c45abe6ac4b5f8dd19c7b"
-
-S = "${TMPDIR}/work-shared/gcc-${PV}-${PR}/${SOURCEDIR}"
-B = "${WORKDIR}/gcc-${PV}/build.${HOST_SYS}.${TARGET_SYS}"
-
-# Language Overrides
-FORTRAN = ""
-JAVA = ""
-
-SSP ?= "--disable-libssp"
-SSP:mingw32 = "--enable-libssp"
-
-EXTRA_OECONF_BASE = "\
-    ${SSP} \
-    --enable-libitm \
-    --enable-lto \
-    --disable-bootstrap \
-    --with-system-zlib \
-    ${@'--with-linker-hash-style=${LINKER_HASH_STYLE}' if '${LINKER_HASH_STYLE}' else ''} \
-    --enable-linker-build-id \
-    --with-ppl=no \
-    --with-cloog=no \
-    --enable-checking=release \
-    --enable-cheaders=c_global \
-    --without-isl \
-"
-
-EXTRA_OECONF_INITIAL = "\
-    --disable-libgomp \
-    --disable-libitm \
-    --disable-libquadmath \
-    --with-system-zlib \
-    --disable-lto \
-    --disable-plugin \
-    --enable-linker-build-id \
-    --enable-decimal-float=no \
-    --without-isl \
-    --disable-libssp \
-"
-
-EXTRA_OECONF_PATHS = "\
-    --with-gxx-include-dir=/not/exist{target_includedir}/c++/${BINV} \
-    --with-sysroot=/not/exist \
-    --with-build-sysroot=${STAGING_DIR_TARGET} \
-"
-
-# Is a binutils 2.26 issue, not gcc
-CVE_CHECK_IGNORE += "CVE-2021-37322"
diff --git a/poky/meta/recipes-devtools/gcc/gcc-12.2.inc b/poky/meta/recipes-devtools/gcc/gcc-12.2.inc
new file mode 100644
index 0000000..572fd8b
--- /dev/null
+++ b/poky/meta/recipes-devtools/gcc/gcc-12.2.inc
@@ -0,0 +1,117 @@
+require gcc-common.inc
+
+# Third digit in PV should be incremented after a minor release
+
+PV = "12.2.0"
+
+# BINV should be incremented to a revision after a minor gcc release
+
+BINV = "12.2.0"
+
+FILESEXTRAPATHS =. "${FILE_DIRNAME}/gcc:${FILE_DIRNAME}/gcc/backport:"
+
+DEPENDS =+ "mpfr gmp libmpc zlib flex-native"
+NATIVEDEPS = "mpfr-native gmp-native libmpc-native zlib-native flex-native zstd-native"
+
+LICENSE = "GPL-3.0-with-GCC-exception & GPL-3.0-only"
+
+LIC_FILES_CHKSUM = "\
+    file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
+    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504 \
+    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6 \
+    file://COPYING.LIB;md5=2d5025d4aa3495befef8f17206a5b0a1 \
+    file://COPYING.RUNTIME;md5=fe60d87048567d4fe8c8a0ed2448bcc8 \
+"
+# from git
+#RELEASE ?= "7092b7aea122a91824d048aeb23834cf1d19b1a1"
+#BASEURI ?= "https://repo.or.cz/official-gcc.git/snapshot/${RELEASE}.tar.gz;downloadfilename=gcc-${PV}-${RELEASE}.tar.gz"
+#SOURCEDIR ?= "official-gcc-${@'${RELEASE}'[0:7]}"
+
+# from snapshot
+#RELEASE ?= "12.1.0-RC-20220429"
+#SOURCEDIR ?= "gcc-${RELEASE}"
+#BASEURI ?= "https://gcc.gnu.org/pub/gcc/snapshots/${RELEASE}/gcc-${RELEASE}.tar.xz"
+
+# official release
+RELEASE ?= "${PV}"
+BASEURI ?= "${GNU_MIRROR}/gcc/gcc-${PV}/gcc-${PV}.tar.xz"
+SOURCEDIR ?= "gcc-${PV}"
+
+SRC_URI = "${BASEURI} \
+           file://0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch \
+           file://0002-gcc-poison-system-directories.patch \
+           file://0003-64-bit-multilib-hack.patch \
+           file://0004-Pass-CXXFLAGS_FOR_BUILD-in-a-couple-of-places-to-avo.patch \
+           file://0005-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch \
+           file://0006-cpp-honor-sysroot.patch \
+           file://0007-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch \
+           file://0008-libtool.patch \
+           file://0009-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch \
+           file://0010-Use-the-multilib-config-files-from-B-instead-of-usin.patch \
+           file://0011-Avoid-using-libdir-from-.la-which-usually-points-to-.patch \
+           file://0012-export-CPP.patch \
+           file://0013-Ensure-target-gcc-headers-can-be-included.patch \
+           file://0014-Don-t-search-host-directory-during-relink-if-inst_pr.patch \
+           file://0015-libcc1-fix-libcc1-s-install-path-and-rpath.patch \
+           file://0016-handle-sysroot-support-for-nativesdk-gcc.patch \
+           file://0017-Search-target-sysroot-gcc-version-specific-dirs-with.patch \
+           file://0018-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch \
+           file://0019-Re-introduce-spe-commandline-options.patch \
+           file://0020-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch \
+           file://0021-gentypes-genmodes-Do-not-use-__LINE__-for-maintainin.patch \
+           file://0022-mingw32-Enable-operation_not_supported.patch \
+           file://0023-libatomic-Do-not-enforce-march-on-aarch64.patch \
+           file://0024-Fix-install-path-of-linux64.h.patch \
+           file://0025-Move-sched.h-include-ahead-of-user-headers.patch \
+           file://0026-rust-recursion-limit.patch \
+           file://prefix-map-realpath.patch \
+           file://hardcoded-paths.patch \
+"
+SRC_URI[sha256sum] = "e549cf9cf3594a00e27b6589d4322d70e0720cdd213f39beb4181e06926230ff"
+
+S = "${TMPDIR}/work-shared/gcc-${PV}-${PR}/${SOURCEDIR}"
+B = "${WORKDIR}/gcc-${PV}/build.${HOST_SYS}.${TARGET_SYS}"
+
+# Language Overrides
+FORTRAN = ""
+JAVA = ""
+
+SSP ?= "--disable-libssp"
+SSP:mingw32 = "--enable-libssp"
+
+EXTRA_OECONF_BASE = "\
+    ${SSP} \
+    --enable-libitm \
+    --enable-lto \
+    --disable-bootstrap \
+    --with-system-zlib \
+    ${@'--with-linker-hash-style=${LINKER_HASH_STYLE}' if '${LINKER_HASH_STYLE}' else ''} \
+    --enable-linker-build-id \
+    --with-ppl=no \
+    --with-cloog=no \
+    --enable-checking=release \
+    --enable-cheaders=c_global \
+    --without-isl \
+"
+
+EXTRA_OECONF_INITIAL = "\
+    --disable-libgomp \
+    --disable-libitm \
+    --disable-libquadmath \
+    --with-system-zlib \
+    --disable-lto \
+    --disable-plugin \
+    --enable-linker-build-id \
+    --enable-decimal-float=no \
+    --without-isl \
+    --disable-libssp \
+"
+
+EXTRA_OECONF_PATHS = "\
+    --with-gxx-include-dir=/not/exist{target_includedir}/c++/${BINV} \
+    --with-sysroot=/not/exist \
+    --with-build-sysroot=${STAGING_DIR_TARGET} \
+"
+
+# Is a binutils 2.26 issue, not gcc
+CVE_CHECK_IGNORE += "CVE-2021-37322"
diff --git a/poky/meta/recipes-devtools/gcc/gcc-cross-canadian_12.1.bb b/poky/meta/recipes-devtools/gcc/gcc-cross-canadian_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/gcc-cross-canadian_12.1.bb
rename to poky/meta/recipes-devtools/gcc/gcc-cross-canadian_12.2.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-cross.inc b/poky/meta/recipes-devtools/gcc/gcc-cross.inc
index 3ffa1f0..a540fb2 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-cross.inc
+++ b/poky/meta/recipes-devtools/gcc/gcc-cross.inc
@@ -149,6 +149,7 @@
 	# Makefile does move-if-change which can end up with 'timestamp' as file contents so break links to those files
 	rm $dest/gcc/include/*.h
 	cp gcc/include/*.h $dest/gcc/include/
+	sysroot-relativelinks.py $dest
 }
 addtask do_gcc_stash_builddir after do_compile before do_install
 SSTATETASKS += "do_gcc_stash_builddir"
diff --git a/poky/meta/recipes-devtools/gcc/gcc-cross_12.1.bb b/poky/meta/recipes-devtools/gcc/gcc-cross_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/gcc-cross_12.1.bb
rename to poky/meta/recipes-devtools/gcc/gcc-cross_12.2.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-crosssdk_12.1.bb b/poky/meta/recipes-devtools/gcc/gcc-crosssdk_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/gcc-crosssdk_12.1.bb
rename to poky/meta/recipes-devtools/gcc/gcc-crosssdk_12.2.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-multilib-config.inc b/poky/meta/recipes-devtools/gcc/gcc-multilib-config.inc
index 26bfed9..2dbbc23 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-multilib-config.inc
+++ b/poky/meta/recipes-devtools/gcc/gcc-multilib-config.inc
@@ -154,7 +154,7 @@
     gcc_header_config_files = {
         'x86_64'    : ['gcc/config/linux.h', 'gcc/config/i386/linux.h', 'gcc/config/i386/linux64.h'],
         'i586'      : ['gcc/config/linux.h', 'gcc/config/i386/linux.h', 'gcc/config/i386/linux64.h'],
-        'i686'      : ['gcc/config/linux.h', 'gcc/config/i386/linux64.h'],
+        'i686'      : ['gcc/config/linux.h', 'gcc/config/i386/linux.h', 'gcc/config/i386/linux64.h'],
         'mips'      : ['gcc/config/linux.h', 'gcc/config/mips/linux.h', 'gcc/config/mips/linux64.h'],
         'mips64'    : ['gcc/config/linux.h', 'gcc/config/mips/linux.h', 'gcc/config/mips/linux64.h'],
         'powerpc'   : ['gcc/config/linux.h', 'gcc/config/rs6000/linux64.h'],
diff --git a/poky/meta/recipes-devtools/gcc/gcc-runtime.inc b/poky/meta/recipes-devtools/gcc/gcc-runtime.inc
index b8bfdce..fa5b048 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-runtime.inc
+++ b/poky/meta/recipes-devtools/gcc/gcc-runtime.inc
@@ -50,20 +50,6 @@
 # libiberty
 # libgfortran needs separate recipe due to libquadmath dependency
 
-# Relative path to be repaced into debug info
-REL_S = "/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}"
-
-DEBUG_PREFIX_MAP:class-target = " \
-   -fdebug-prefix-map=${WORKDIR}/${MLPREFIX}recipe-sysroot= \
-   -fdebug-prefix-map=${WORKDIR}/recipe-sysroot-native= \
-   -fdebug-prefix-map=${S}=${REL_S} \
-   -fdebug-prefix-map=${S}/include=${REL_S}/libstdc++-v3/../include \
-   -fdebug-prefix-map=${S}/libiberty=${REL_S}/libstdc++-v3/../libiberty \
-   -fdebug-prefix-map=${S}/libgcc=${REL_S}/libstdc++-v3/../libgcc \
-   -fdebug-prefix-map=${B}=${REL_S} \
-   -ffile-prefix-map=${B}/${HOST_SYS}/libstdc++-v3/include=${includedir}/c++/${BINV} \
-   "
-
 do_configure () {
 	export CXX="${CXX} -nostdinc++ -L${WORKDIR}/dummylib"
 	# libstdc++ isn't built yet so CXX would error not able to find it which breaks stdc++'s configure
@@ -77,8 +63,7 @@
 		mkdir -p ${B}/${TARGET_SYS}/$d/
 		cd ${B}/${TARGET_SYS}/$d/
 		chmod a+x ${S}/$d/configure
-		relpath=${@os.path.relpath("${S}/$d", "${B}/${TARGET_SYS}/$d")}
-		$relpath/configure ${CONFIGUREOPTS} ${EXTRA_OECONF}
+		${S}/$d/configure ${CONFIGUREOPTS} ${EXTRA_OECONF}
 		if [ "$d" = "libgcc" ]; then
 			(cd ${B}/${TARGET_SYS}/libgcc; oe_runmake enable-execute-stack.c unwind.h md-unwind-support.h sfp-machine.h gthr-default.h)
 		fi
diff --git a/poky/meta/recipes-devtools/gcc/gcc-runtime_12.1.bb b/poky/meta/recipes-devtools/gcc/gcc-runtime_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/gcc-runtime_12.1.bb
rename to poky/meta/recipes-devtools/gcc/gcc-runtime_12.2.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-sanitizers_12.1.bb b/poky/meta/recipes-devtools/gcc/gcc-sanitizers_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/gcc-sanitizers_12.1.bb
rename to poky/meta/recipes-devtools/gcc/gcc-sanitizers_12.2.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-source_12.1.bb b/poky/meta/recipes-devtools/gcc/gcc-source_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/gcc-source_12.1.bb
rename to poky/meta/recipes-devtools/gcc/gcc-source_12.2.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc/0001-libsanitizer-cherry-pick-9cf13067cb5088626ba7-from-u.patch b/poky/meta/recipes-devtools/gcc/gcc/0001-libsanitizer-cherry-pick-9cf13067cb5088626ba7-from-u.patch
deleted file mode 100644
index 6bbc95a..0000000
--- a/poky/meta/recipes-devtools/gcc/gcc/0001-libsanitizer-cherry-pick-9cf13067cb5088626ba7-from-u.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From 2701442d0cf6292f6624443c15813d6d1a3562fe Mon Sep 17 00:00:00 2001
-From: Martin Liska <mliska@suse.cz>
-Date: Mon, 11 Jul 2022 22:03:14 +0200
-Subject: [PATCH] libsanitizer: cherry-pick 9cf13067cb5088626ba7 from upstream
-
-9cf13067cb5088626ba7ee1ec4c42ec59c7995a0 [sanitizer] Remove #include <linux/fs.h> to resolve fsconfig_command/mount_attr conflict with glibc 2.36
-
-Upstream-Status: Backport [https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=2701442d0cf6292f6624443c15813d6d1a3562fe]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- .../sanitizer_platform_limits_posix.cpp                | 10 ++++++----
- 1 file changed, 6 insertions(+), 4 deletions(-)
-
-diff --git a/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cpp b/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cpp
-index 8ed3e92d270..97fd07acf9d 100644
---- a/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cpp
-+++ b/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cpp
-@@ -73,7 +73,9 @@
- #include <sys/vt.h>
- #include <linux/cdrom.h>
- #include <linux/fd.h>
-+#if SANITIZER_ANDROID
- #include <linux/fs.h>
-+#endif
- #include <linux/hdreg.h>
- #include <linux/input.h>
- #include <linux/ioctl.h>
-@@ -869,10 +871,10 @@ unsigned struct_ElfW_Phdr_sz = sizeof(Elf_Phdr);
-   unsigned IOCTL_EVIOCGPROP = IOCTL_NOT_PRESENT;
-   unsigned IOCTL_EVIOCSKEYCODE_V2 = IOCTL_NOT_PRESENT;
- #endif
--  unsigned IOCTL_FS_IOC_GETFLAGS = FS_IOC_GETFLAGS;
--  unsigned IOCTL_FS_IOC_GETVERSION = FS_IOC_GETVERSION;
--  unsigned IOCTL_FS_IOC_SETFLAGS = FS_IOC_SETFLAGS;
--  unsigned IOCTL_FS_IOC_SETVERSION = FS_IOC_SETVERSION;
-+  unsigned IOCTL_FS_IOC_GETFLAGS = _IOR('f', 1, long);
-+  unsigned IOCTL_FS_IOC_GETVERSION = _IOR('v', 1, long);
-+  unsigned IOCTL_FS_IOC_SETFLAGS = _IOW('f', 2, long);
-+  unsigned IOCTL_FS_IOC_SETVERSION = _IOW('v', 2, long);
-   unsigned IOCTL_GIO_CMAP = GIO_CMAP;
-   unsigned IOCTL_GIO_FONT = GIO_FONT;
-   unsigned IOCTL_GIO_UNIMAP = GIO_UNIMAP;
--- 
-2.37.1
-
diff --git a/poky/meta/recipes-devtools/gcc/gcc/hardcoded-paths.patch b/poky/meta/recipes-devtools/gcc/gcc/hardcoded-paths.patch
new file mode 100644
index 0000000..f348585
--- /dev/null
+++ b/poky/meta/recipes-devtools/gcc/gcc/hardcoded-paths.patch
@@ -0,0 +1,19 @@
+Avoid encoding build paths into sources used for floating point on powerpc.
+(MACHINE=qemuppc bitbake libgcc).
+
+Upstream-Status: Submitted [https://gcc.gnu.org/pipermail/gcc-patches/2022-August/599882.html]
+Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
+
+Index: gcc-12.1.0/libgcc/config/rs6000/t-float128
+===================================================================
+--- gcc-12.1.0.orig/libgcc/config/rs6000/t-float128
++++ gcc-12.1.0/libgcc/config/rs6000/t-float128
+@@ -103,7 +103,7 @@ $(ibm128_dec_objs)	: INTERNAL_CFLAGS +=
+ $(fp128_softfp_src) : $(srcdir)/soft-fp/$(subst -sw,,$(subst kf,tf,$@)) $(fp128_dep)
+ 	@src="$(srcdir)/soft-fp/$(subst -sw,,$(subst kf,tf,$@))"; \
+ 	echo "Create $@"; \
+-	(echo "/* file created from $$src */"; \
++	(echo "/* file created from `basename $$src` */"; \
+ 	 echo; \
+ 	 sed -f $(fp128_sed) < $$src) > $@
+ 
diff --git a/poky/meta/recipes-devtools/gcc/gcc/prefix-map-realpath.patch b/poky/meta/recipes-devtools/gcc/gcc/prefix-map-realpath.patch
new file mode 100644
index 0000000..7f1a2de
--- /dev/null
+++ b/poky/meta/recipes-devtools/gcc/gcc/prefix-map-realpath.patch
@@ -0,0 +1,63 @@
+Relative paths don't work with -fdebug-prefix-map and friends. This
+can lead to paths which the user wanted to be remapped being missed.
+Setting -fdebug-prefix-map to work with a relative path isn't practical
+either.
+
+Instead, call gcc's realpath function on the incomming path name before
+comparing it with the remapping. This means other issues like symlinks
+are also accounted for and leads to a more consistent remapping experience.
+
+Upstream-Status: Submitted [https://gcc.gnu.org/pipermail/gcc-patches/2022-August/599885.html]
+[Also https://gcc.gnu.org/pipermail/gcc-patches/2022-August/599884.html]
+Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
+
+
+Index: gcc-12.1.0/gcc/file-prefix-map.cc
+===================================================================
+--- gcc-12.1.0.orig/gcc/file-prefix-map.cc
++++ gcc-12.1.0/gcc/file-prefix-map.cc
+@@ -70,19 +70,28 @@ remap_filename (file_prefix_map *maps, c
+   file_prefix_map *map;
+   char *s;
+   const char *name;
++  char *realname;
+   size_t name_len;
+ 
++  if (lbasename (filename) == filename)
++    return filename;
++
++  realname = lrealpath (filename);
++
+   for (map = maps; map; map = map->next)
+-    if (filename_ncmp (filename, map->old_prefix, map->old_len) == 0)
++    if (filename_ncmp (realname, map->old_prefix, map->old_len) == 0)
+       break;
+-  if (!map)
++  if (!map) {
++    free (realname);
+     return filename;
+-  name = filename + map->old_len;
++  }
++  name = realname + map->old_len;
+   name_len = strlen (name) + 1;
+ 
+   s = (char *) ggc_alloc_atomic (name_len + map->new_len);
+   memcpy (s, map->new_prefix, map->new_len);
+   memcpy (s + map->new_len, name, name_len);
++  free (realname);
+   return s;
+ }
+ 
+Index: gcc-12.1.0/libcpp/macro.cc
+===================================================================
+--- gcc-12.1.0.orig/libcpp/macro.cc
++++ gcc-12.1.0/libcpp/macro.cc
+@@ -563,7 +563,7 @@ _cpp_builtin_macro_text (cpp_reader *pfi
+ 	    if (!name)
+ 	      abort ();
+ 	  }
+-	if (pfile->cb.remap_filename)
++	if (pfile->cb.remap_filename && !pfile->state.in_directive)
+ 	  name = pfile->cb.remap_filename (name);
+ 	len = strlen (name);
+ 	buf = _cpp_unaligned_alloc (pfile, len * 2 + 3);
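
prefix-map-realpath.patch changes remap_filename() to canonicalise the incoming name before comparing it against the -fdebug-prefix-map / -ffile-prefix-map entries, so relative and symlinked paths still match a mapping. A standalone sketch of that idea (illustrative only: remap(), the paths and the mapping below are hypothetical, and gcc itself uses lrealpath() plus its garbage-collected allocator rather than this):

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Canonicalise first, then test the prefix -- the core of the patch above. */
    static void remap(const char *filename, const char *old_prefix,
                      const char *new_prefix)
    {
        char real[PATH_MAX];
        size_t old_len = strlen(old_prefix);

        if (realpath(filename, real) == NULL) {
            puts(filename);                 /* unresolvable: leave it unmapped */
            return;
        }
        if (strncmp(real, old_prefix, old_len) != 0) {
            puts(filename);                 /* no mapping entry applies */
            return;
        }
        printf("%s%s\n", new_prefix, real + old_len);
    }

    int main(void)
    {
        /* hypothetical build-tree path and mapping */
        remap("./gcc/file-prefix-map.cc",
              "/home/builder/tmp/work-shared/gcc-12.2.0",
              "/usr/src/debug/gcc/12.2.0");
        return 0;
    }

This more robust remapping is presumably what lets the gcc-runtime.inc hunk earlier in this update drop its relative-path configure workaround and the recipe-local DEBUG_PREFIX_MAP.
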
diff --git a/poky/meta/recipes-devtools/gcc/gcc_12.1.bb b/poky/meta/recipes-devtools/gcc/gcc_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/gcc_12.1.bb
rename to poky/meta/recipes-devtools/gcc/gcc_12.2.bb
diff --git a/poky/meta/recipes-devtools/gcc/libgcc-common.inc b/poky/meta/recipes-devtools/gcc/libgcc-common.inc
index cf8d6b7..d9084af 100644
--- a/poky/meta/recipes-devtools/gcc/libgcc-common.inc
+++ b/poky/meta/recipes-devtools/gcc/libgcc-common.inc
@@ -10,8 +10,7 @@
 	mkdir -p ${B}/${TARGET_SYS}/${BPN}/
 	cd ${B}/${BPN}
 	chmod a+x ${S}/${BPN}/configure
-	relpath=${@os.path.relpath("${S}/${BPN}", "${B}/${BPN}")}
-	$relpath/configure ${CONFIGUREOPTS} ${EXTRA_OECONF}
+	${S}/${BPN}/configure ${CONFIGUREOPTS} ${EXTRA_OECONF}
 }
 EXTRACONFFUNCS += "extract_stashed_builddir"
 do_configure[depends] += "${COMPILERDEP}"
diff --git a/poky/meta/recipes-devtools/gcc/libgcc-initial_12.1.bb b/poky/meta/recipes-devtools/gcc/libgcc-initial_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/libgcc-initial_12.1.bb
rename to poky/meta/recipes-devtools/gcc/libgcc-initial_12.2.bb
diff --git a/poky/meta/recipes-devtools/gcc/libgcc_12.1.bb b/poky/meta/recipes-devtools/gcc/libgcc_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/libgcc_12.1.bb
rename to poky/meta/recipes-devtools/gcc/libgcc_12.2.bb
diff --git a/poky/meta/recipes-devtools/gcc/libgfortran_12.1.bb b/poky/meta/recipes-devtools/gcc/libgfortran_12.2.bb
similarity index 100%
rename from poky/meta/recipes-devtools/gcc/libgfortran_12.1.bb
rename to poky/meta/recipes-devtools/gcc/libgfortran_12.2.bb
diff --git a/poky/meta/recipes-devtools/git/git_2.37.1.bb b/poky/meta/recipes-devtools/git/git_2.37.1.bb
deleted file mode 100644
index 5d2524a..0000000
--- a/poky/meta/recipes-devtools/git/git_2.37.1.bb
+++ /dev/null
@@ -1,168 +0,0 @@
-SUMMARY = "Distributed version control system"
-HOMEPAGE = "http://git-scm.com"
-DESCRIPTION = "Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency."
-SECTION = "console/utils"
-LICENSE = "GPL-2.0-only & GPL-2.0-or-later & BSD-3-Clause & MIT & BSL-1.0 & LGPL-2.1-or-later"
-DEPENDS = "openssl zlib"
-
-PROVIDES:append:class-native = " git-replacement-native"
-
-SRC_URI = "${KERNELORG_MIRROR}/software/scm/git/git-${PV}.tar.gz;name=tarball \
-           file://fixsort.patch \
-           file://0001-config.mak.uname-do-not-force-RHEL-7-specific-build-.patch \
-           "
-
-S = "${WORKDIR}/git-${PV}"
-
-LIC_FILES_CHKSUM = "\
-	file://COPYING;md5=7c0d7ef03a7eb04ce795b0f60e68e7e1 \
-	file://reftable/LICENSE;md5=1a6424cafc4c9c88c689848e165af33b \
-	file://sha1dc/LICENSE.txt;md5=9bbe4c990a9e98ea4b98ef5d3bcb8a7a \
-	file://compat/nedmalloc/License.txt;md5=e4224ccaecb14d942c71d31bef20d78c \
-	file://compat/inet_ntop.c;md5=76593c6f74e8ced5b24520175688d59b;endline=16 \
-	file://compat/obstack.h;md5=08ad25fee5428cd879ceef451ce3a22e;endline=18 \
-	file://compat/poll/poll.h;md5=9fc00170a53b8e3e52157c91ac688dd1;endline=19 \
-	file://compat/regex/regex.h;md5=30cc8af0e6f0f8a25acec6d8783bb763;beginline=4;endline=22 \
-"
-
-CVE_PRODUCT = "git-scm:git"
-
-# This is about a manpage not mentioning --mirror may "leak" information
-# in mirrored git repos. Most OE users wouldn't build the docs and
-# we don't see this as a major issue for our general users/usecases.
-CVE_CHECK_IGNORE += "CVE-2022-24975"
-
-PACKAGECONFIG ??= "expat curl"
-PACKAGECONFIG[cvsserver] = ""
-PACKAGECONFIG[svn] = ""
-PACKAGECONFIG[manpages] = ",,asciidoc-native xmlto-native"
-PACKAGECONFIG[curl] = "--with-curl,--without-curl,curl"
-PACKAGECONFIG[expat] = "--with-expat,--without-expat,expat"
-
-EXTRA_OECONF = "--with-perl=${STAGING_BINDIR_NATIVE}/perl-native/perl \
-		--without-tcltk \
-		--without-iconv \
-"
-EXTRA_OECONF:append:class-nativesdk = " --with-gitconfig=/etc/gitconfig "
-
-# Needs brokensep as this doesn't use automake
-inherit autotools-brokensep perlnative bash-completion manpages
-
-EXTRA_OEMAKE = "NO_PYTHON=1 CFLAGS='${CFLAGS}' LDFLAGS='${LDFLAGS}'"
-EXTRA_OEMAKE += "'PERL_PATH=/usr/bin/env perl'"
-EXTRA_OEMAKE += "COMPUTE_HEADER_DEPENDENCIES=no"
-EXTRA_OEMAKE:append:class-native = " NO_CROSS_DIRECTORY_HARDLINKS=1"
-
-do_compile:prepend () {
-	# Remove perl/perl.mak to fix the out-of-date perl.mak error
-	# during rebuild
-	rm -f perl/perl.mak
-
-        if [ "${@bb.utils.filter('PACKAGECONFIG', 'manpages', d)}" ]; then
-            oe_runmake man
-        fi
-}
-
-do_install () {
-	oe_runmake install DESTDIR="${D}" bindir=${bindir} \
-		template_dir=${datadir}/git-core/templates
-
-	install -d ${D}/${datadir}/bash-completion/completions/
-	install -m 644 ${S}/contrib/completion/git-completion.bash ${D}/${datadir}/bash-completion/completions/git
-
-        if [ "${@bb.utils.filter('PACKAGECONFIG', 'manpages', d)}" ]; then
-            oe_runmake install-man DESTDIR="${D}"
-        fi
-}
-
-perl_native_fixup () {
-	sed -i -e 's#${STAGING_BINDIR_NATIVE}/perl-native/#${bindir}/#' \
-	       -e 's#${libdir}/perl-native/#${libdir}/#' \
-	    ${@d.getVar("PERLTOOLS").replace(' /',d.getVar('D') + '/')}
-
-	if [ ! "${@bb.utils.filter('PACKAGECONFIG', 'cvsserver', d)}" ]; then
-		# Only install the git cvsserver command if explicitly requested
-		# as it requires the DBI Perl module, which does not exist in
-		# OE-Core.
-		rm ${D}${libexecdir}/git-core/git-cvsserver \
-		   ${D}${bindir}/git-cvsserver
-	fi
-
-	if [ ! "${@bb.utils.filter('PACKAGECONFIG', 'svn', d)}" ]; then
-		# Only install the git svn command and all Git::SVN Perl modules
-		# if explicitly requested as they require the SVN::Core Perl
-		# module, which does not exist in OE-Core.
-		rm -r ${D}${libexecdir}/git-core/git-svn \
-		      ${D}${datadir}/perl5/Git/SVN*
-	fi
-}
-
-REL_GIT_EXEC_PATH = "${@os.path.relpath(libexecdir, bindir)}/git-core"
-REL_GIT_TEMPLATE_DIR = "${@os.path.relpath(datadir, bindir)}/git-core/templates"
-
-do_install:append:class-target () {
-	perl_native_fixup
-}
-
-do_install:append:class-native() {
-	create_wrapper ${D}${bindir}/git \
-		GIT_EXEC_PATH='`dirname $''realpath`'/${REL_GIT_EXEC_PATH} \
-		GIT_TEMPLATE_DIR='`dirname $''realpath`'/${REL_GIT_TEMPLATE_DIR}
-}
-
-do_install:append:class-nativesdk() {
-	create_wrapper ${D}${bindir}/git \
-		GIT_EXEC_PATH='`dirname $''realpath`'/${REL_GIT_EXEC_PATH} \
-		GIT_TEMPLATE_DIR='`dirname $''realpath`'/${REL_GIT_TEMPLATE_DIR}
-	perl_native_fixup
-}
-
-FILES:${PN} += "${datadir}/git-core ${libexecdir}/git-core/"
-
-PERLTOOLS = " \
-    ${bindir}/git-cvsserver \
-    ${libexecdir}/git-core/git-add--interactive \
-    ${libexecdir}/git-core/git-archimport \
-    ${libexecdir}/git-core/git-cvsexportcommit \
-    ${libexecdir}/git-core/git-cvsimport \
-    ${libexecdir}/git-core/git-cvsserver \
-    ${libexecdir}/git-core/git-send-email \
-    ${libexecdir}/git-core/git-svn \
-    ${libexecdir}/git-core/git-instaweb \
-    ${datadir}/gitweb/gitweb.cgi \
-    ${datadir}/git-core/templates/hooks/prepare-commit-msg.sample \
-    ${datadir}/git-core/templates/hooks/pre-rebase.sample \
-    ${datadir}/git-core/templates/hooks/fsmonitor-watchman.sample \
-"
-
-# Git tools requiring perl
-PACKAGES =+ "${PN}-perltools"
-FILES:${PN}-perltools += " \
-    ${PERLTOOLS} \
-    ${libdir}/perl \
-    ${datadir}/perl5 \
-"
-
-RDEPENDS:${PN}-perltools = "${PN} perl perl-module-file-path findutils"
-
-# git-tk package with gitk and git-gui
-PACKAGES =+ "${PN}-tk"
-#RDEPENDS:${PN}-tk = "${PN} tk tcl"
-#EXTRA_OEMAKE = "TCL_PATH=${STAGING_BINDIR_CROSS}/tclsh"
-FILES:${PN}-tk = " \
-    ${bindir}/gitk \
-    ${datadir}/gitk \
-"
-
-PACKAGES =+ "gitweb"
-FILES:gitweb = "${datadir}/gitweb/"
-RDEPENDS:gitweb = "perl"
-
-BBCLASSEXTEND = "native nativesdk"
-
-EXTRA_OECONF += "ac_cv_snprintf_returns_bogus=no \
-                 ac_cv_fread_reads_directories=${ac_cv_fread_reads_directories=yes} \
-                 "
-EXTRA_OEMAKE += "NO_GETTEXT=1"
-
-SRC_URI[tarball.sha256sum] = "7dded96a52e7996ce90dd74a187aec175737f680dc063f3f33c8932cf5c8d809"
diff --git a/poky/meta/recipes-devtools/git/git_2.37.2.bb b/poky/meta/recipes-devtools/git/git_2.37.2.bb
new file mode 100644
index 0000000..b7858e2
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/git_2.37.2.bb
@@ -0,0 +1,168 @@
+SUMMARY = "Distributed version control system"
+HOMEPAGE = "http://git-scm.com"
+DESCRIPTION = "Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency."
+SECTION = "console/utils"
+LICENSE = "GPL-2.0-only & GPL-2.0-or-later & BSD-3-Clause & MIT & BSL-1.0 & LGPL-2.1-or-later"
+DEPENDS = "openssl zlib"
+
+PROVIDES:append:class-native = " git-replacement-native"
+
+SRC_URI = "${KERNELORG_MIRROR}/software/scm/git/git-${PV}.tar.gz;name=tarball \
+           file://fixsort.patch \
+           file://0001-config.mak.uname-do-not-force-RHEL-7-specific-build-.patch \
+           "
+
+S = "${WORKDIR}/git-${PV}"
+
+LIC_FILES_CHKSUM = "\
+	file://COPYING;md5=7c0d7ef03a7eb04ce795b0f60e68e7e1 \
+	file://reftable/LICENSE;md5=1a6424cafc4c9c88c689848e165af33b \
+	file://sha1dc/LICENSE.txt;md5=9bbe4c990a9e98ea4b98ef5d3bcb8a7a \
+	file://compat/nedmalloc/License.txt;md5=e4224ccaecb14d942c71d31bef20d78c \
+	file://compat/inet_ntop.c;md5=76593c6f74e8ced5b24520175688d59b;endline=16 \
+	file://compat/obstack.h;md5=08ad25fee5428cd879ceef451ce3a22e;endline=18 \
+	file://compat/poll/poll.h;md5=9fc00170a53b8e3e52157c91ac688dd1;endline=19 \
+	file://compat/regex/regex.h;md5=30cc8af0e6f0f8a25acec6d8783bb763;beginline=4;endline=22 \
+"
+
+CVE_PRODUCT = "git-scm:git"
+
+# This is about a manpage not mentioning --mirror may "leak" information
+# in mirrored git repos. Most OE users wouldn't build the docs and
+# we don't see this as a major issue for our general users/usecases.
+CVE_CHECK_IGNORE += "CVE-2022-24975"
+
+PACKAGECONFIG ??= "expat curl"
+PACKAGECONFIG[cvsserver] = ""
+PACKAGECONFIG[svn] = ""
+PACKAGECONFIG[manpages] = ",,asciidoc-native xmlto-native"
+PACKAGECONFIG[curl] = "--with-curl,--without-curl,curl"
+PACKAGECONFIG[expat] = "--with-expat,--without-expat,expat"
+
+EXTRA_OECONF = "--with-perl=${STAGING_BINDIR_NATIVE}/perl-native/perl \
+		--without-tcltk \
+		--without-iconv \
+"
+EXTRA_OECONF:append:class-nativesdk = " --with-gitconfig=/etc/gitconfig "
+
+# Needs brokensep as this doesn't use automake
+inherit autotools-brokensep perlnative bash-completion manpages
+
+EXTRA_OEMAKE = "NO_PYTHON=1 CFLAGS='${CFLAGS}' LDFLAGS='${LDFLAGS}'"
+EXTRA_OEMAKE += "'PERL_PATH=/usr/bin/env perl'"
+EXTRA_OEMAKE += "COMPUTE_HEADER_DEPENDENCIES=no"
+EXTRA_OEMAKE:append:class-native = " NO_CROSS_DIRECTORY_HARDLINKS=1"
+
+do_compile:prepend () {
+	# Remove perl/perl.mak to fix the out-of-date perl.mak error
+	# during rebuild
+	rm -f perl/perl.mak
+
+        if [ "${@bb.utils.filter('PACKAGECONFIG', 'manpages', d)}" ]; then
+            oe_runmake man
+        fi
+}
+
+do_install () {
+	oe_runmake install DESTDIR="${D}" bindir=${bindir} \
+		template_dir=${datadir}/git-core/templates
+
+	install -d ${D}/${datadir}/bash-completion/completions/
+	install -m 644 ${S}/contrib/completion/git-completion.bash ${D}/${datadir}/bash-completion/completions/git
+
+        if [ "${@bb.utils.filter('PACKAGECONFIG', 'manpages', d)}" ]; then
+            oe_runmake install-man DESTDIR="${D}"
+        fi
+}
+
+perl_native_fixup () {
+	sed -i -e 's#${STAGING_BINDIR_NATIVE}/perl-native/#${bindir}/#' \
+	       -e 's#${libdir}/perl-native/#${libdir}/#' \
+	    ${@d.getVar("PERLTOOLS").replace(' /',d.getVar('D') + '/')}
+
+	if [ ! "${@bb.utils.filter('PACKAGECONFIG', 'cvsserver', d)}" ]; then
+		# Only install the git cvsserver command if explicitly requested
+		# as it requires the DBI Perl module, which does not exist in
+		# OE-Core.
+		rm ${D}${libexecdir}/git-core/git-cvsserver \
+		   ${D}${bindir}/git-cvsserver
+	fi
+
+	if [ ! "${@bb.utils.filter('PACKAGECONFIG', 'svn', d)}" ]; then
+		# Only install the git svn command and all Git::SVN Perl modules
+		# if explicitly requested as they require the SVN::Core Perl
+		# module, which does not exist in OE-Core.
+		rm -r ${D}${libexecdir}/git-core/git-svn \
+		      ${D}${datadir}/perl5/Git/SVN*
+	fi
+}
+
+REL_GIT_EXEC_PATH = "${@os.path.relpath(libexecdir, bindir)}/git-core"
+REL_GIT_TEMPLATE_DIR = "${@os.path.relpath(datadir, bindir)}/git-core/templates"
+
+do_install:append:class-target () {
+	perl_native_fixup
+}
+
+do_install:append:class-native() {
+	create_wrapper ${D}${bindir}/git \
+		GIT_EXEC_PATH='`dirname $''realpath`'/${REL_GIT_EXEC_PATH} \
+		GIT_TEMPLATE_DIR='`dirname $''realpath`'/${REL_GIT_TEMPLATE_DIR}
+}
+
+do_install:append:class-nativesdk() {
+	create_wrapper ${D}${bindir}/git \
+		GIT_EXEC_PATH='`dirname $''realpath`'/${REL_GIT_EXEC_PATH} \
+		GIT_TEMPLATE_DIR='`dirname $''realpath`'/${REL_GIT_TEMPLATE_DIR}
+	perl_native_fixup
+}
+
+FILES:${PN} += "${datadir}/git-core ${libexecdir}/git-core/"
+
+PERLTOOLS = " \
+    ${bindir}/git-cvsserver \
+    ${libexecdir}/git-core/git-add--interactive \
+    ${libexecdir}/git-core/git-archimport \
+    ${libexecdir}/git-core/git-cvsexportcommit \
+    ${libexecdir}/git-core/git-cvsimport \
+    ${libexecdir}/git-core/git-cvsserver \
+    ${libexecdir}/git-core/git-send-email \
+    ${libexecdir}/git-core/git-svn \
+    ${libexecdir}/git-core/git-instaweb \
+    ${datadir}/gitweb/gitweb.cgi \
+    ${datadir}/git-core/templates/hooks/prepare-commit-msg.sample \
+    ${datadir}/git-core/templates/hooks/pre-rebase.sample \
+    ${datadir}/git-core/templates/hooks/fsmonitor-watchman.sample \
+"
+
+# Git tools requiring perl
+PACKAGES =+ "${PN}-perltools"
+FILES:${PN}-perltools += " \
+    ${PERLTOOLS} \
+    ${libdir}/perl \
+    ${datadir}/perl5 \
+"
+
+RDEPENDS:${PN}-perltools = "${PN} perl perl-module-file-path findutils"
+
+# git-tk package with gitk and git-gui
+PACKAGES =+ "${PN}-tk"
+#RDEPENDS:${PN}-tk = "${PN} tk tcl"
+#EXTRA_OEMAKE = "TCL_PATH=${STAGING_BINDIR_CROSS}/tclsh"
+FILES:${PN}-tk = " \
+    ${bindir}/gitk \
+    ${datadir}/gitk \
+"
+
+PACKAGES =+ "gitweb"
+FILES:gitweb = "${datadir}/gitweb/"
+RDEPENDS:gitweb = "perl"
+
+BBCLASSEXTEND = "native nativesdk"
+
+EXTRA_OECONF += "ac_cv_snprintf_returns_bogus=no \
+                 ac_cv_fread_reads_directories=${ac_cv_fread_reads_directories=yes} \
+                 "
+EXTRA_OEMAKE += "NO_GETTEXT=1"
+
+SRC_URI[tarball.sha256sum] = "4c428908e3a2dca4174df6ef49acc995a4fdb1b45205a2c79794487a33bc06e5"
diff --git a/poky/meta/recipes-devtools/go/go-1.18.4.inc b/poky/meta/recipes-devtools/go/go-1.18.4.inc
deleted file mode 100644
index bfda15c..0000000
--- a/poky/meta/recipes-devtools/go/go-1.18.4.inc
+++ /dev/null
@@ -1,18 +0,0 @@
-require go-common.inc
-
-FILESEXTRAPATHS:prepend := "${FILE_DIRNAME}/go:"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=5d4950ecb7b26d2c5e4e7b4e0dd74707"
-
-SRC_URI += "\
-    file://0001-cmd-go-make-content-based-hash-generation-less-pedan.patch \
-    file://0003-allow-GOTOOLDIR-to-be-overridden-in-the-environment.patch \
-    file://0004-ld-add-soname-to-shareable-objects.patch \
-    file://0005-make.bash-override-CC-when-building-dist-and-go_boot.patch \
-    file://0006-cmd-dist-separate-host-and-target-builds.patch \
-    file://0007-cmd-go-make-GOROOT-precious-by-default.patch \
-    file://0001-exec.go-do-not-write-linker-flags-into-buildids.patch \
-    file://0001-src-cmd-dist-buildgo.go-do-not-hardcode-host-compile.patch \
-    file://filter-build-paths.patch \
-"
-SRC_URI[main.sha256sum] = "4525aa6b0e3cecb57845f4060a7075aafc9ab752bb7b6b4cf8a212d43078e1e4"
diff --git a/poky/meta/recipes-devtools/go/go-1.19.inc b/poky/meta/recipes-devtools/go/go-1.19.inc
new file mode 100644
index 0000000..f733a80
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.19.inc
@@ -0,0 +1,19 @@
+require go-common.inc
+
+FILESEXTRAPATHS:prepend := "${FILE_DIRNAME}/go:"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=5d4950ecb7b26d2c5e4e7b4e0dd74707"
+
+SRC_URI += "\
+    file://0001-cmd-go-make-content-based-hash-generation-less-pedan.patch \
+    file://0003-allow-GOTOOLDIR-to-be-overridden-in-the-environment.patch \
+    file://0004-ld-add-soname-to-shareable-objects.patch \
+    file://0005-make.bash-override-CC-when-building-dist-and-go_boot.patch \
+    file://0006-cmd-dist-separate-host-and-target-builds.patch \
+    file://0007-cmd-go-make-GOROOT-precious-by-default.patch \
+    file://0001-exec.go-do-not-write-linker-flags-into-buildids.patch \
+    file://0001-src-cmd-dist-buildgo.go-do-not-hardcode-host-compile.patch \
+    file://filter-build-paths.patch \
+    file://stack-protector.patch \
+"
+SRC_URI[main.sha256sum] = "9419cc70dc5a2523f29a77053cafff658ed21ef3561d9b6b020280ebceab28b9"
diff --git a/poky/meta/recipes-devtools/go/go-binary-native_1.18.4.bb b/poky/meta/recipes-devtools/go/go-binary-native_1.18.4.bb
deleted file mode 100644
index 252c467..0000000
--- a/poky/meta/recipes-devtools/go/go-binary-native_1.18.4.bb
+++ /dev/null
@@ -1,46 +0,0 @@
-# This recipe is for bootstrapping our go-cross from a prebuilt binary of Go from golang.org.
-
-SUMMARY = "Go programming language compiler (upstream binary for bootstrap)"
-HOMEPAGE = " http://golang.org/"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=5d4950ecb7b26d2c5e4e7b4e0dd74707"
-
-PROVIDES = "go-native"
-
-SRC_URI = "https://dl.google.com/go/go${PV}.${BUILD_GOOS}-${BUILD_GOARCH}.tar.gz;name=go_${BUILD_GOTUPLE}"
-SRC_URI[go_linux_amd64.sha256sum] = "c9b099b68d93f5c5c8a8844a89f8db07eaa58270e3a1e01804f17f4cf8df02f5"
-SRC_URI[go_linux_arm64.sha256sum] = "35014d92b50d97da41dade965df7ebeb9a715da600206aa59ce1b2d05527421f"
-
-UPSTREAM_CHECK_URI = "https://golang.org/dl/"
-UPSTREAM_CHECK_REGEX = "go(?P<pver>\d+(\.\d+)+)\.linux"
-
-S = "${WORKDIR}/go"
-
-inherit goarch native
-
-do_compile() {
-    :
-}
-
-make_wrapper() {
-	rm -f ${D}${bindir}/$1
-	cat <<END >${D}${bindir}/$1
-#!/bin/bash
-here=\`dirname \$0\`
-export GOROOT="${GOROOT:-\`readlink -f \$here/../lib/go\`}"
-\$here/../lib/go/bin/$1 "\$@"
-END
-	chmod +x ${D}${bindir}/$1
-}
-
-do_install() {
-    find ${S} -depth -type d -name testdata -exec rm -rf {} +
-
-	install -d ${D}${bindir} ${D}${libdir}/go
-	cp --preserve=mode,timestamps -R ${S}/ ${D}${libdir}/
-
-	for f in ${S}/bin/*
-	do
-	  	make_wrapper `basename $f`
-	done
-}
diff --git a/poky/meta/recipes-devtools/go/go-binary-native_1.19.bb b/poky/meta/recipes-devtools/go/go-binary-native_1.19.bb
new file mode 100644
index 0000000..ca424a6
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-binary-native_1.19.bb
@@ -0,0 +1,46 @@
+# This recipe is for bootstrapping our go-cross from a prebuilt binary of Go from golang.org.
+
+SUMMARY = "Go programming language compiler (upstream binary for bootstrap)"
+HOMEPAGE = " http://golang.org/"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=5d4950ecb7b26d2c5e4e7b4e0dd74707"
+
+PROVIDES = "go-native"
+
+SRC_URI = "https://dl.google.com/go/go${PV}.${BUILD_GOOS}-${BUILD_GOARCH}.tar.gz;name=go_${BUILD_GOTUPLE}"
+SRC_URI[go_linux_amd64.sha256sum] = "464b6b66591f6cf055bc5df90a9750bf5fbc9d038722bb84a9d56a2bea974be6"
+SRC_URI[go_linux_arm64.sha256sum] = "efa97fac9574fc6ef6c9ff3e3758fb85f1439b046573bf434cccb5e012bd00c8"
+
+UPSTREAM_CHECK_URI = "https://golang.org/dl/"
+UPSTREAM_CHECK_REGEX = "go(?P<pver>\d+(\.\d+)+)\.linux"
+
+S = "${WORKDIR}/go"
+
+inherit goarch native
+
+do_compile() {
+    :
+}
+
+make_wrapper() {
+	rm -f ${D}${bindir}/$1
+	cat <<END >${D}${bindir}/$1
+#!/bin/bash
+here=\`dirname \$0\`
+export GOROOT="${GOROOT:-\`readlink -f \$here/../lib/go\`}"
+\$here/../lib/go/bin/$1 "\$@"
+END
+	chmod +x ${D}${bindir}/$1
+}
+
+do_install() {
+    find ${S} -depth -type d -name testdata -exec rm -rf {} +
+
+	install -d ${D}${bindir} ${D}${libdir}/go
+	cp --preserve=mode,timestamps -R ${S}/ ${D}${libdir}/
+
+	for f in ${S}/bin/*
+	do
+	  	make_wrapper `basename $f`
+	done
+}
diff --git a/poky/meta/recipes-devtools/go/go-cross-canadian_1.18.4.bb b/poky/meta/recipes-devtools/go/go-cross-canadian_1.19.bb
similarity index 100%
rename from poky/meta/recipes-devtools/go/go-cross-canadian_1.18.4.bb
rename to poky/meta/recipes-devtools/go/go-cross-canadian_1.19.bb
diff --git a/poky/meta/recipes-devtools/go/go-cross_1.18.4.bb b/poky/meta/recipes-devtools/go/go-cross_1.19.bb
similarity index 100%
rename from poky/meta/recipes-devtools/go/go-cross_1.18.4.bb
rename to poky/meta/recipes-devtools/go/go-cross_1.19.bb
diff --git a/poky/meta/recipes-devtools/go/go-crosssdk_1.18.4.bb b/poky/meta/recipes-devtools/go/go-crosssdk_1.19.bb
similarity index 100%
rename from poky/meta/recipes-devtools/go/go-crosssdk_1.18.4.bb
rename to poky/meta/recipes-devtools/go/go-crosssdk_1.19.bb
diff --git a/poky/meta/recipes-devtools/go/go-native_1.18.4.bb b/poky/meta/recipes-devtools/go/go-native_1.19.bb
similarity index 100%
rename from poky/meta/recipes-devtools/go/go-native_1.18.4.bb
rename to poky/meta/recipes-devtools/go/go-native_1.19.bb
diff --git a/poky/meta/recipes-devtools/go/go-runtime_1.18.4.bb b/poky/meta/recipes-devtools/go/go-runtime_1.19.bb
similarity index 100%
rename from poky/meta/recipes-devtools/go/go-runtime_1.18.4.bb
rename to poky/meta/recipes-devtools/go/go-runtime_1.19.bb
diff --git a/poky/meta/recipes-devtools/go/go/0001-cmd-go-make-content-based-hash-generation-less-pedan.patch b/poky/meta/recipes-devtools/go/go/0001-cmd-go-make-content-based-hash-generation-less-pedan.patch
index f9db5df..8cbed93 100644
--- a/poky/meta/recipes-devtools/go/go/0001-cmd-go-make-content-based-hash-generation-less-pedan.patch
+++ b/poky/meta/recipes-devtools/go/go/0001-cmd-go-make-content-based-hash-generation-less-pedan.patch
@@ -1,4 +1,4 @@
-From 61de6067f5ad127d246543527947a357647f95e5 Mon Sep 17 00:00:00 2001
+From a3db4da51df37d163ff9e8c1e1057280c648c545 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 28 Mar 2022 10:59:03 -0700
 Subject: [PATCH] cmd/go: make content-based hash generation less pedantic
@@ -25,14 +25,17 @@
 Signed-off-by: Alex Kube <alexander.j.kube@gmail.com>
 Signed-off-by: Matt Madison <matt@madison.systems>
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
 ---
  src/cmd/go/internal/envcmd/env.go |  2 +-
- src/cmd/go/internal/work/exec.go  | 42 +++++++++++++++++++++++++------
- 2 files changed, 35 insertions(+), 9 deletions(-)
+ src/cmd/go/internal/work/exec.go  | 42 ++++++++++++++++++++++++-------
+ 2 files changed, 34 insertions(+), 10 deletions(-)
 
+diff --git a/src/cmd/go/internal/envcmd/env.go b/src/cmd/go/internal/envcmd/env.go
+index 529351d..df791b0 100644
 --- a/src/cmd/go/internal/envcmd/env.go
 +++ b/src/cmd/go/internal/envcmd/env.go
-@@ -169,7 +169,7 @@ func ExtraEnvVars() []cfg.EnvVar {
+@@ -176,7 +176,7 @@ func ExtraEnvVars() []cfg.EnvVar {
  func ExtraEnvVarsCostly() []cfg.EnvVar {
  	var b work.Builder
  	b.Init()
@@ -41,9 +44,11 @@
  	if err != nil {
  		// Should not happen - b.CFlags was given an empty package.
  		fmt.Fprintf(os.Stderr, "go: invalid cflags: %v\n", err)
+diff --git a/src/cmd/go/internal/work/exec.go b/src/cmd/go/internal/work/exec.go
+index c88b315..a06455c 100644
 --- a/src/cmd/go/internal/work/exec.go
 +++ b/src/cmd/go/internal/work/exec.go
-@@ -213,6 +213,8 @@ func (b *Builder) Do(ctx context.Context
+@@ -213,6 +213,8 @@ func (b *Builder) Do(ctx context.Context, root *Action) {
  	writeActionGraph()
  }
  
@@ -52,7 +57,7 @@
  // buildActionID computes the action ID for a build action.
  func (b *Builder) buildActionID(a *Action) cache.ActionID {
  	p := a.Package
-@@ -234,7 +236,7 @@ func (b *Builder) buildActionID(a *Actio
+@@ -234,7 +236,7 @@ func (b *Builder) buildActionID(a *Action) cache.ActionID {
  		if p.Module != nil {
  			fmt.Fprintf(h, "module %s@%s\n", p.Module.Path, p.Module.Version)
  		}
@@ -61,7 +66,7 @@
  		// The Go compiler always hides the exact value of $GOROOT
  		// when building things in GOROOT.
  		//
-@@ -266,9 +268,9 @@ func (b *Builder) buildActionID(a *Actio
+@@ -266,9 +268,9 @@ func (b *Builder) buildActionID(a *Action) cache.ActionID {
  	}
  	if len(p.CgoFiles)+len(p.SwigFiles)+len(p.SwigCXXFiles) > 0 {
  		fmt.Fprintf(h, "cgo %q\n", b.toolID("cgo"))
@@ -73,7 +78,7 @@
  		fmt.Fprintf(h, "CC=%q %q %q %q\n", ccExe, cppflags, cflags, ldflags)
  		// Include the C compiler tool ID so that if the C
  		// compiler changes we rebuild the package.
-@@ -281,14 +283,14 @@ func (b *Builder) buildActionID(a *Actio
+@@ -281,14 +283,14 @@ func (b *Builder) buildActionID(a *Action) cache.ActionID {
  			}
  		}
  		if len(p.CXXFiles)+len(p.SwigCXXFiles) > 0 {
@@ -90,16 +95,16 @@
  			fmt.Fprintf(h, "FC=%q %q\n", fcExe, fflags)
  			if fcID, err := b.gccToolID(fcExe[0], "f95"); err == nil {
  				fmt.Fprintf(h, "FC ID=%q\n", fcID)
-@@ -304,7 +306,7 @@ func (b *Builder) buildActionID(a *Actio
- 			fmt.Fprintf(h, "fuzz %q\n", fuzzFlags)
+@@ -305,7 +307,7 @@ func (b *Builder) buildActionID(a *Action) cache.ActionID {
  		}
  	}
--	fmt.Fprintf(h, "modinfo %q\n", p.Internal.BuildInfo)
-+	//fmt.Fprintf(h, "modinfo %q\n", p.Internal.BuildInfo)
+ 	if p.Internal.BuildInfo != "" {
+-		fmt.Fprintf(h, "modinfo %q\n", p.Internal.BuildInfo)
++		//fmt.Fprintf(h, "modinfo %q\n", p.Internal.BuildInfo)
+ 	}
  
  	// Configuration specific to compiler toolchain.
- 	switch cfg.BuildToolchainName {
-@@ -2679,8 +2681,23 @@ func envList(key, def string) []string {
+@@ -2705,8 +2707,23 @@ func envList(key, def string) []string {
  	return args
  }
  
@@ -124,7 +129,7 @@
  	defaults := "-g -O2"
  
  	if cppflags, err = buildFlags("CPPFLAGS", "", p.CgoCPPFLAGS, checkCompilerFlags); err != nil {
-@@ -2698,6 +2715,13 @@ func (b *Builder) CFlags(p *load.Package
+@@ -2724,6 +2741,13 @@ func (b *Builder) CFlags(p *load.Package) (cppflags, cflags, cxxflags, fflags, l
  	if ldflags, err = buildFlags("LDFLAGS", defaults, p.CgoLDFLAGS, checkLinkerFlags); err != nil {
  		return
  	}
@@ -138,7 +143,7 @@
  
  	return
  }
-@@ -2713,7 +2737,7 @@ var cgoRe = lazyregexp.New(`[/\\:]`)
+@@ -2739,7 +2763,7 @@ var cgoRe = lazyregexp.New(`[/\\:]`)
  
  func (b *Builder) cgo(a *Action, cgoExe, objdir string, pcCFLAGS, pcLDFLAGS, cgofiles, gccfiles, gxxfiles, mfiles, ffiles []string) (outGo, outObj []string, err error) {
  	p := a.Package
@@ -147,7 +152,7 @@
  	if err != nil {
  		return nil, nil, err
  	}
-@@ -3174,7 +3198,7 @@ func (b *Builder) swigIntSize(objdir str
+@@ -3246,7 +3270,7 @@ func (b *Builder) swigIntSize(objdir string) (intsize string, err error) {
  
  // Run SWIG on one SWIG input file.
  func (b *Builder) swigOne(a *Action, p *load.Package, file, objdir string, pcCFLAGS []string, cxx bool, intgosize string) (outGo, outC string, err error) {
diff --git a/poky/meta/recipes-devtools/go/go/0003-allow-GOTOOLDIR-to-be-overridden-in-the-environment.patch b/poky/meta/recipes-devtools/go/go/0003-allow-GOTOOLDIR-to-be-overridden-in-the-environment.patch
index c3ccffc..30068d8 100644
--- a/poky/meta/recipes-devtools/go/go/0003-allow-GOTOOLDIR-to-be-overridden-in-the-environment.patch
+++ b/poky/meta/recipes-devtools/go/go/0003-allow-GOTOOLDIR-to-be-overridden-in-the-environment.patch
@@ -1,4 +1,4 @@
-From 8512964c0bfdfc3c9c3805743ea7de551a1d476a Mon Sep 17 00:00:00 2001
+From 7e0136a882757da0a374ab8592209586eced0e1c Mon Sep 17 00:00:00 2001
 From: Alex Kube <alexander.j.kube@gmail.com>
 Date: Wed, 23 Oct 2019 21:15:37 +0430
 Subject: [PATCH] cmd/go: Allow GOTOOLDIR to be overridden in the environment
@@ -18,9 +18,11 @@
  src/cmd/go/internal/cfg/cfg.go | 6 +++++-
  2 files changed, 8 insertions(+), 2 deletions(-)
 
+diff --git a/src/cmd/dist/build.go b/src/cmd/dist/build.go
+index 7c44c4a..3024d0c 100644
 --- a/src/cmd/dist/build.go
 +++ b/src/cmd/dist/build.go
-@@ -251,7 +251,9 @@ func xinit() {
+@@ -264,7 +264,9 @@ func xinit() {
  	}
  	xatexit(rmworkdir)
  
@@ -31,18 +33,20 @@
  }
  
  // compilerEnv returns a map from "goos/goarch" to the
+diff --git a/src/cmd/go/internal/cfg/cfg.go b/src/cmd/go/internal/cfg/cfg.go
+index c6ddfe5..605adb1 100644
 --- a/src/cmd/go/internal/cfg/cfg.go
 +++ b/src/cmd/go/internal/cfg/cfg.go
-@@ -76,7 +76,11 @@ func defaultContext() build.Context {
+@@ -162,7 +162,11 @@ func SetGOROOT(goroot string) {
  		// variables. This matches the initialization of ToolDir in
- 		// go/build, except for using ctxt.GOROOT rather than
+ 		// go/build, except for using BuildContext.GOROOT rather than
  		// runtime.GOROOT.
--		build.ToolDir = filepath.Join(ctxt.GOROOT, "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
+-		build.ToolDir = filepath.Join(goroot, "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
 +		if s := os.Getenv("GOTOOLDIR"); s != "" {
 +			build.ToolDir = filepath.Clean(s)
 +		} else {
-+			build.ToolDir = filepath.Join(ctxt.GOROOT, "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
++			build.ToolDir = filepath.Join(goroot, "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
 +		}
  	}
+ }
  
- 	ctxt.GOPATH = envOr("GOPATH", gopath(ctxt))
diff --git a/poky/meta/recipes-devtools/go/go/0004-ld-add-soname-to-shareable-objects.patch b/poky/meta/recipes-devtools/go/go/0004-ld-add-soname-to-shareable-objects.patch
index 058fa64..b700634 100644
--- a/poky/meta/recipes-devtools/go/go/0004-ld-add-soname-to-shareable-objects.patch
+++ b/poky/meta/recipes-devtools/go/go/0004-ld-add-soname-to-shareable-objects.patch
@@ -1,7 +1,7 @@
-From bf5cf5301ae5914498454c87293d1df2e1d8489f Mon Sep 17 00:00:00 2001
+From 68867eae5d3a51f32b2a2e16374323338408781e Mon Sep 17 00:00:00 2001
 From: Alex Kube <alexander.j.kube@gmail.com>
 Date: Wed, 23 Oct 2019 21:16:32 +0430
-Subject: [PATCH 4/9] ld: add soname to shareable objects
+Subject: [PATCH] ld: add soname to shareable objects
 
 so that OE's shared library dependency handling
 can find them.
@@ -13,21 +13,24 @@
 Upstream-Status: Inappropriate [OE specific]
 
 Signed-off-by: Alexander J Kube <alexander.j.kube@gmail.com>
+
 ---
  src/cmd/link/internal/ld/lib.go | 3 +++
  1 file changed, 3 insertions(+)
 
+diff --git a/src/cmd/link/internal/ld/lib.go b/src/cmd/link/internal/ld/lib.go
+index 18910dd..b2e1d36 100644
 --- a/src/cmd/link/internal/ld/lib.go
 +++ b/src/cmd/link/internal/ld/lib.go
-@@ -1347,6 +1347,7 @@ func (ctxt *Link) hostlink() {
+@@ -1459,6 +1459,7 @@ func (ctxt *Link) hostlink() {
  				argv = append(argv, "-Wl,-z,relro")
  			}
  			argv = append(argv, "-shared")
 +			argv = append(argv, fmt.Sprintf("-Wl,-soname,%s", filepath.Base(*flagOutfile)))
  			if ctxt.HeadType == objabi.Hwindows {
- 				if *flagAslr {
- 					argv = addASLRargs(argv)
-@@ -1364,6 +1365,7 @@ func (ctxt *Link) hostlink() {
+ 				argv = addASLRargs(argv, *flagAslr)
+ 			} else {
+@@ -1474,6 +1475,7 @@ func (ctxt *Link) hostlink() {
  			argv = append(argv, "-Wl,-z,relro")
  		}
  		argv = append(argv, "-shared")
@@ -35,7 +38,7 @@
  	case BuildModePlugin:
  		if ctxt.HeadType == objabi.Hdarwin {
  			argv = append(argv, "-dynamiclib")
-@@ -1372,6 +1374,7 @@ func (ctxt *Link) hostlink() {
+@@ -1482,6 +1484,7 @@ func (ctxt *Link) hostlink() {
  				argv = append(argv, "-Wl,-z,relro")
  			}
  			argv = append(argv, "-shared")
diff --git a/poky/meta/recipes-devtools/go/go/0005-make.bash-override-CC-when-building-dist-and-go_boot.patch b/poky/meta/recipes-devtools/go/go/0005-make.bash-override-CC-when-building-dist-and-go_boot.patch
index a693767..608f1eb 100644
--- a/poky/meta/recipes-devtools/go/go/0005-make.bash-override-CC-when-building-dist-and-go_boot.patch
+++ b/poky/meta/recipes-devtools/go/go/0005-make.bash-override-CC-when-building-dist-and-go_boot.patch
@@ -1,4 +1,4 @@
-From 153e2dda6103fd9dd871be4bb495a8da5328301e Mon Sep 17 00:00:00 2001
+From 8f020921c464e95ded850950382115154448580a Mon Sep 17 00:00:00 2001
 From: Alex Kube <alexander.j.kube@gmail.com>
 Date: Wed, 23 Oct 2019 21:17:16 +0430
 Subject: [PATCH] make.bash: override CC when building dist and go_bootstrap
@@ -17,18 +17,20 @@
  src/make.bash | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)
 
+diff --git a/src/make.bash b/src/make.bash
+index ab2ce19..37ec1fb 100755
 --- a/src/make.bash
 +++ b/src/make.bash
-@@ -195,7 +195,7 @@ if [ "$GOROOT_BOOTSTRAP" = "$GOROOT" ];
+@@ -198,7 +198,7 @@ if [ "$GOROOT_BOOTSTRAP" = "$GOROOT" ]; then
  	exit 1
  fi
  rm -f cmd/dist/dist
--GOROOT="$GOROOT_BOOTSTRAP" GOOS="" GOARCH="" GO111MODULE=off "$GOROOT_BOOTSTRAP/bin/go" build -o cmd/dist/dist ./cmd/dist
-+CC="${BUILD_CC:-${CC}}" GOROOT="$GOROOT_BOOTSTRAP" GOOS="" GOARCH="" GO111MODULE=off "$GOROOT_BOOTSTRAP/bin/go" build -o cmd/dist/dist ./cmd/dist
+-GOROOT="$GOROOT_BOOTSTRAP" GOOS="" GOARCH="" GO111MODULE=off GOEXPERIMENT="" GOENV=off GOFLAGS="" "$GOROOT_BOOTSTRAP/bin/go" build -o cmd/dist/dist ./cmd/dist
++CC="${BUILD_CC:-${CC}}" GOROOT="$GOROOT_BOOTSTRAP" GOOS="" GOARCH="" GO111MODULE=off GOEXPERIMENT="" GOENV=off GOFLAGS="" "$GOROOT_BOOTSTRAP/bin/go" build -o cmd/dist/dist ./cmd/dist
  
  # -e doesn't propagate out of eval, so check success by hand.
  eval $(./cmd/dist/dist env -p || echo FAIL=true)
-@@ -220,7 +220,7 @@ fi
+@@ -223,7 +223,7 @@ fi
  # Run dist bootstrap to complete make.bash.
  # Bootstrap installs a proper cmd/dist, built with the new toolchain.
  # Throw ours, built with Go 1.4, away after bootstrap.
diff --git a/poky/meta/recipes-devtools/go/go/0006-cmd-dist-separate-host-and-target-builds.patch b/poky/meta/recipes-devtools/go/go/0006-cmd-dist-separate-host-and-target-builds.patch
index ee743ab..2c864ba 100644
--- a/poky/meta/recipes-devtools/go/go/0006-cmd-dist-separate-host-and-target-builds.patch
+++ b/poky/meta/recipes-devtools/go/go/0006-cmd-dist-separate-host-and-target-builds.patch
@@ -1,4 +1,4 @@
-From 7bc891e00be4263311d75aa2b2ee6a3b7b75355f Mon Sep 17 00:00:00 2001
+From ef5fddafdec78cab9963d21736e64d71ca520bcc Mon Sep 17 00:00:00 2001
 From: Alex Kube <alexander.j.kube@gmail.com>
 Date: Wed, 23 Oct 2019 21:18:12 +0430
 Subject: [PATCH] cmd/dist: separate host and target builds
@@ -36,12 +36,14 @@
 Signed-off-by: Alexander J Kube <alexander.j.kube@gmail.com>
 
 ---
- src/cmd/dist/build.go | 156 ++++++++++++++++++++++++++++++------------
- 1 file changed, 113 insertions(+), 43 deletions(-)
+ src/cmd/dist/build.go | 154 ++++++++++++++++++++++++++++++------------
+ 1 file changed, 112 insertions(+), 42 deletions(-)
 
+diff --git a/src/cmd/dist/build.go b/src/cmd/dist/build.go
+index 3024d0c..45ebee0 100644
 --- a/src/cmd/dist/build.go
 +++ b/src/cmd/dist/build.go
-@@ -44,6 +44,7 @@ var (
+@@ -45,6 +45,7 @@ var (
  	goexperiment     string
  	workdir          string
  	tooldir          string
@@ -49,7 +51,7 @@
  	oldgoos          string
  	oldgoarch        string
  	exe              string
-@@ -54,6 +55,7 @@ var (
+@@ -55,6 +56,7 @@ var (
  
  	rebuildall   bool
  	defaultclang bool
@@ -57,7 +59,7 @@
  
  	vflag int // verbosity
  )
-@@ -254,6 +256,8 @@ func xinit() {
+@@ -267,6 +269,8 @@ func xinit() {
  	if tooldir = os.Getenv("GOTOOLDIR"); tooldir == "" {
  		tooldir = pathf("%s/pkg/tool/%s_%s", goroot, gohostos, gohostarch)
  	}
@@ -66,7 +68,7 @@
  }
  
  // compilerEnv returns a map from "goos/goarch" to the
-@@ -499,8 +503,10 @@ func setup() {
+@@ -468,8 +472,10 @@ func setup() {
  	p := pathf("%s/pkg/%s_%s", goroot, gohostos, gohostarch)
  	if rebuildall {
  		xremoveall(p)
@@ -77,7 +79,7 @@
  
  	if goos != gohostos || goarch != gohostarch {
  		p := pathf("%s/pkg/%s_%s", goroot, goos, goarch)
-@@ -1252,17 +1258,35 @@ func cmdbootstrap() {
+@@ -1248,17 +1254,35 @@ func cmdbootstrap() {
  
  	var noBanner, noClean bool
  	var debug bool
@@ -114,7 +116,7 @@
  	// Set GOPATH to an internal directory. We shouldn't actually
  	// need to store files here, since the toolchain won't
  	// depend on modules outside of vendor directories, but if
-@@ -1330,8 +1354,13 @@ func cmdbootstrap() {
+@@ -1326,8 +1350,13 @@ func cmdbootstrap() {
  		xprintf("\n")
  	}
  
@@ -128,9 +130,9 @@
 +		goldflags = os.Getenv("GO_LDFLAGS") // we were using $BOOT_GO_LDFLAGS until now
 +	}
  	goBootstrap := pathf("%s/go_bootstrap", tooldir)
- 	cmdGo := pathf("%s/go", gobin)
+ 	cmdGo := pathf("%s/go", gorootBin)
  	if debug {
-@@ -1360,7 +1389,11 @@ func cmdbootstrap() {
+@@ -1356,7 +1385,11 @@ func cmdbootstrap() {
  		xprintf("\n")
  	}
  	xprintf("Building Go toolchain2 using go_bootstrap and Go toolchain1.\n")
@@ -143,7 +145,7 @@
  	// Now that cmd/go is in charge of the build process, enable GOEXPERIMENT.
  	os.Setenv("GOEXPERIMENT", goexperiment)
  	goInstall(goBootstrap, append([]string{"-i"}, toolchain...)...)
-@@ -1399,50 +1432,84 @@ func cmdbootstrap() {
+@@ -1395,50 +1428,84 @@ func cmdbootstrap() {
  	}
  	checkNotStale(goBootstrap, append(toolchain, "runtime/internal/sys")...)
  
@@ -235,7 +237,12 @@
 -		timelog("build", "target toolchain")
 -		if vflag > 0 {
 -			xprintf("\n")
--		}
++		if debug {
++			run("", ShowOutput|CheckExit, pathf("%s/compile", tooldir), "-V=full")
++			run("", ShowOutput|CheckExit, pathf("%s/buildid", tooldir), pathf("%s/pkg/%s_%s/runtime/internal/sys.a", goroot, goos, goarch))
++			checkNotStale(goBootstrap, append(toolchain, "runtime/internal/sys")...)
++			copyfile(pathf("%s/compile4", tooldir), pathf("%s/compile", tooldir), writeExec)
+ 		}
 -		goos = oldgoos
 -		goarch = oldgoarch
 -		os.Setenv("GOOS", goos)
@@ -256,16 +263,10 @@
 -		run("", ShowOutput|CheckExit, pathf("%s/buildid", tooldir), pathf("%s/pkg/%s_%s/runtime/internal/sys.a", goroot, goos, goarch))
 -		checkNotStale(goBootstrap, append(toolchain, "runtime/internal/sys")...)
 -		copyfile(pathf("%s/compile4", tooldir), pathf("%s/compile", tooldir), writeExec)
-+		if debug {
-+			run("", ShowOutput|CheckExit, pathf("%s/compile", tooldir), "-V=full")
-+			run("", ShowOutput|CheckExit, pathf("%s/buildid", tooldir), pathf("%s/pkg/%s_%s/runtime/internal/sys.a", goroot, goos, goarch))
-+			checkNotStale(goBootstrap, append(toolchain, "runtime/internal/sys")...)
-+			copyfile(pathf("%s/compile4", tooldir), pathf("%s/compile", tooldir), writeExec)
-+		}
  	}
  
  	// Check that there are no new files in $GOROOT/bin other than
-@@ -1459,8 +1526,11 @@ func cmdbootstrap() {
+@@ -1455,8 +1522,11 @@ func cmdbootstrap() {
  		}
  	}
  
diff --git a/poky/meta/recipes-devtools/go/go/filter-build-paths.patch b/poky/meta/recipes-devtools/go/go/filter-build-paths.patch
index caf7277..a1aa37c 100644
--- a/poky/meta/recipes-devtools/go/go/filter-build-paths.patch
+++ b/poky/meta/recipes-devtools/go/go/filter-build-paths.patch
@@ -1,3 +1,8 @@
+From 3bdbce685c688a27eece36ccc8be9b50b4849498 Mon Sep 17 00:00:00 2001
+From: Richard Purdie <richard.purdie@linuxfoundation.org>
+Date: Sat, 2 Jul 2022 23:08:13 +0100
+Subject: [PATCH] go: Filter build paths on staticly linked arches
+
 Filter out build time paths from ldflags and other flags variables when they're
 embedded in the go binary so that builds are reproducible regardless of build
 location. This codepath is hit for statically linked go binaries such as those
@@ -6,11 +11,15 @@
 Upstream-Status: Pending
 Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
 
-Index: go/src/cmd/go/internal/load/pkg.go
-===================================================================
---- go.orig/src/cmd/go/internal/load/pkg.go
-+++ go/src/cmd/go/internal/load/pkg.go
-@@ -2225,6 +2225,17 @@ func (p *Package) collectDeps() {
+---
+ src/cmd/go/internal/load/pkg.go | 15 +++++++++++++--
+ 1 file changed, 13 insertions(+), 2 deletions(-)
+
+diff --git a/src/cmd/go/internal/load/pkg.go b/src/cmd/go/internal/load/pkg.go
+index 046f508..353cbc4 100644
+--- a/src/cmd/go/internal/load/pkg.go
++++ b/src/cmd/go/internal/load/pkg.go
+@@ -2256,6 +2256,17 @@ func (p *Package) collectDeps() {
  // to their VCS information (vcsStatusError).
  var vcsStatusCache par.Cache
  
@@ -28,21 +37,21 @@
  // setBuildInfo gathers build information, formats it as a string to be
  // embedded in the binary, then sets p.Internal.BuildInfo to that string.
  // setBuildInfo should only be called on a main package with no errors.
-@@ -2329,7 +2340,7 @@ func (p *Package) setBuildInfo(includeVC
- 			appendSetting("-gcflags", BuildGcflags.String())
+@@ -2353,7 +2364,7 @@ func (p *Package) setBuildInfo(includeVCS bool) {
+ 	if gcflags := BuildGcflags.String(); gcflags != "" && cfg.BuildContext.Compiler == "gc" {
+ 		appendSetting("-gcflags", gcflags)
+ 	}
+-	if ldflags := BuildLdflags.String(); ldflags != "" {
++	if ldflags := filterCompilerFlags(BuildLdflags.String()); ldflags != "" {
+ 		// https://go.dev/issue/52372: only include ldflags if -trimpath is not set,
+ 		// since it can include system paths through various linker flags (notably
+ 		// -extar, -extld, and -extldflags).
+@@ -2392,7 +2403,7 @@ func (p *Package) setBuildInfo(includeVCS bool) {
+ 	// subset of flags that are known not to be paths?
+ 	if cfg.BuildContext.CgoEnabled && !cfg.BuildTrimpath {
+ 		for _, name := range []string{"CGO_CFLAGS", "CGO_CPPFLAGS", "CGO_CXXFLAGS", "CGO_LDFLAGS"} {
+-			appendSetting(name, cfg.Getenv(name))
++			appendSetting(name, filterCompilerFlags(cfg.Getenv(name)))
  		}
- 		if BuildLdflags.present {
--			appendSetting("-ldflags", BuildLdflags.String())
-+			appendSetting("-ldflags", filterCompilerFlags(BuildLdflags.String()))
- 		}
- 		if cfg.BuildMSan {
- 			appendSetting("-msan", "true")
-@@ -2347,7 +2358,7 @@ func (p *Package) setBuildInfo(includeVC
- 		appendSetting("CGO_ENABLED", cgo)
- 		if cfg.BuildContext.CgoEnabled {
- 			for _, name := range []string{"CGO_CFLAGS", "CGO_CPPFLAGS", "CGO_CXXFLAGS", "CGO_LDFLAGS"} {
--				appendSetting(name, cfg.Getenv(name))
-+				appendSetting(name, filterCompilerFlags(cfg.Getenv(name)))
- 			}
- 		}
- 		appendSetting("GOARCH", cfg.BuildContext.GOARCH)
+ 	}
+ 	appendSetting("GOARCH", cfg.BuildContext.GOARCH)
diff --git a/poky/meta/recipes-devtools/go/go/stack-protector.patch b/poky/meta/recipes-devtools/go/go/stack-protector.patch
new file mode 100644
index 0000000..cc92a44
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go/stack-protector.patch
@@ -0,0 +1,32 @@
+From c537b87782293fe222f2ef5eb1ae818092118e97 Mon Sep 17 00:00:00 2001
+From: Ian Lance Taylor <iant@golang.org>
+Date: Sun, 07 Aug 2022 19:21:15 -0700
+Subject: [PATCH] runtime/cgo: add -fno-stack-protector to CFLAGS
+
+Some compilers default to having -fstack-protector on, which breaks
+when using internal linking because the linker doesn't know how to
+find the support functions.
+
+Fixes #52919
+Fixes #54313
+
+Change-Id: I6f51d5e906503f61fc768ad8e30c163bad135087
+Upstream-Status: Submitted [https://github.com/golang/go/issues/54313]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+
+diff --git a/src/runtime/cgo/cgo.go b/src/runtime/cgo/cgo.go
+index 298aa63..4b7046e 100644
+--- a/src/runtime/cgo/cgo.go
++++ b/src/runtime/cgo/cgo.go
+@@ -23,7 +23,9 @@
+ #cgo solaris LDFLAGS: -lxnet
+ #cgo solaris LDFLAGS: -lsocket
+ 
+-#cgo CFLAGS: -Wall -Werror
++// We use -fno-stack-protector because internal linking won't find
++// the support functions. See issues #52919 and #54313.
++#cgo CFLAGS: -Wall -Werror -fno-stack-protector
+ 
+ #cgo solaris CPPFLAGS: -D_POSIX_PTHREAD_SEMANTICS
+ 
diff --git a/poky/meta/recipes-devtools/go/go_1.18.4.bb b/poky/meta/recipes-devtools/go/go_1.19.bb
similarity index 100%
rename from poky/meta/recipes-devtools/go/go_1.18.4.bb
rename to poky/meta/recipes-devtools/go/go_1.19.bb
diff --git a/poky/meta/recipes-devtools/json-c/json-c/0001-Fix-build-with-clang-15.patch b/poky/meta/recipes-devtools/json-c/json-c/0001-Fix-build-with-clang-15.patch
new file mode 100644
index 0000000..215f4d8
--- /dev/null
+++ b/poky/meta/recipes-devtools/json-c/json-c/0001-Fix-build-with-clang-15.patch
@@ -0,0 +1,34 @@
+From 0145b575ac1fe6a77e00d639864f26fc91ceb12f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 13 Aug 2022 20:37:03 -0700
+Subject: [PATCH] Fix build with clang-15+
+
+Fixes
+json_util.c:63:35: error: a function declaration without a prototype is deprecated in all versions of C [-We
+rror,-Wstrict-prototypes]
+const char *json_util_get_last_err()
+                                  ^
+                                   void
+
+Upstream-Status: Backport [https://github.com/json-c/json-c/pull/783]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ json_util.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/json_util.c b/json_util.c
+index 952770a..83d9c68 100644
+--- a/json_util.c
++++ b/json_util.c
+@@ -60,7 +60,7 @@ static int _json_object_to_fd(int fd, struct json_object *obj, int flags, const
+ 
+ static char _last_err[256] = "";
+ 
+-const char *json_util_get_last_err()
++const char *json_util_get_last_err(void)
+ {
+ 	if (_last_err[0] == '\0')
+ 		return NULL;
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-devtools/json-c/json-c/run-ptest b/poky/meta/recipes-devtools/json-c/json-c/run-ptest
new file mode 100644
index 0000000..9ee6095
--- /dev/null
+++ b/poky/meta/recipes-devtools/json-c/json-c/run-ptest
@@ -0,0 +1,20 @@
+#!/bin/sh
+
+# This script is used to run json-c test suites
+cd tests
+
+ret_val=0
+for i in test*.test; do
+    # test_basic is not an own testcase, just
+    # contains common code of other tests
+    if [ "$i" != "test_basic.test" ]; then
+        if ./$i > json-c_test.log 2>&1 ; then
+            echo PASS: $i
+        else
+            ret_val=1
+            echo FAIL: $i
+        fi
+    fi
+done
+
+exit $ret_val
diff --git a/poky/meta/recipes-devtools/json-c/json-c_0.16.bb b/poky/meta/recipes-devtools/json-c/json-c_0.16.bb
index fdec5ec..3aba41d 100644
--- a/poky/meta/recipes-devtools/json-c/json-c_0.16.bb
+++ b/poky/meta/recipes-devtools/json-c/json-c_0.16.bb
@@ -4,8 +4,11 @@
 LICENSE = "MIT"
 LIC_FILES_CHKSUM = "file://COPYING;md5=de54b60fbbc35123ba193fea8ee216f2"
 
-SRC_URI = "https://s3.amazonaws.com/json-c_releases/releases/${BP}.tar.gz"
-
+SRC_URI = " \
+    https://s3.amazonaws.com/json-c_releases/releases/${BP}.tar.gz \
+    file://0001-Fix-build-with-clang-15.patch \
+    file://run-ptest \
+"
 SRC_URI[sha256sum] = "8e45ac8f96ec7791eaf3bb7ee50e9c2100bbbc87b8d0f1d030c5ba8a0288d96b"
 
 UPSTREAM_CHECK_URI = "https://github.com/${BPN}/${BPN}/tags"
@@ -13,6 +16,15 @@
 
 RPROVIDES:${PN} = "libjson"
 
-inherit cmake
+inherit cmake ptest
+
+do_install_ptest() {
+    install -d ${D}/${PTEST_PATH}/tests
+    install ${B}/tests/test* ${D}/${PTEST_PATH}/tests
+    install ${S}/tests/*.test ${D}/${PTEST_PATH}/tests
+    install ${S}/tests/*.expected ${D}/${PTEST_PATH}/tests
+    install ${S}/tests/test-defs.sh ${D}/${PTEST_PATH}/tests
+    install ${S}/tests/valid*json ${D}/${PTEST_PATH}/tests
+}
 
 BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/libdnf/libdnf_0.67.0.bb b/poky/meta/recipes-devtools/libdnf/libdnf_0.67.0.bb
deleted file mode 100644
index 69255c5..0000000
--- a/poky/meta/recipes-devtools/libdnf/libdnf_0.67.0.bb
+++ /dev/null
@@ -1,36 +0,0 @@
-SUMMARY = "Library providing simplified C and Python API to libsolv"
-HOMEPAGE = "https://github.com/rpm-software-management/libdnf"
-DESCRIPTION = "This library provides a high level package-manager. It's core library of dnf, PackageKit and rpm-ostree. It's replacement for deprecated hawkey library which it contains inside and uses librepo under the hood."
-LICENSE = "LGPL-2.1-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-SRC_URI = "git://github.com/rpm-software-management/libdnf;branch=dnf-4-master;protocol=https \
-           file://0001-FindGtkDoc.cmake-drop-the-requirement-for-GTKDOC_SCA.patch \
-           file://0004-Set-libsolv-variables-with-pkg-config-cmake-s-own-mo.patch \
-           file://0001-Get-parameters-for-both-libsolv-and-libsolvext-libdn.patch \
-           file://enable_test_data_dir_set.patch \
-           file://0001-drop-FindPythonInstDir.cmake.patch \
-           file://0001-libdnf-dnf-context.cpp-do-not-try-to-access-BDB-data.patch \
-           "
-
-SRCREV = "1742be5225b3a4928707696db8c69391def55f5a"
-UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>(?!4\.90)\d+(\.\d+)+)"
-
-S = "${WORKDIR}/git"
-
-DEPENDS = "glib-2.0 libsolv libcheck librepo rpm gtk-doc libmodulemd json-c swig-native util-linux"
-
-inherit gtk-doc gobject-introspection cmake pkgconfig setuptools3-base
-
-EXTRA_OECMAKE = " -DPYTHON_INSTALL_DIR=${PYTHON_SITEPACKAGES_DIR} -DWITH_MAN=OFF -DPYTHON_DESIRED=3 \
-                  ${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DWITH_GIR=ON', '-DWITH_GIR=OFF', d)} \
-                  -DWITH_TESTS=OFF \
-                  -DWITH_ZCHUNK=OFF \
-                  -DWITH_HTML=OFF \
-                "
-EXTRA_OECMAKE:append:class-native = " -DWITH_GIR=OFF"
-EXTRA_OECMAKE:append:class-nativesdk = " -DWITH_GIR=OFF"
-
-BBCLASSEXTEND = "native nativesdk"
-SKIP_RECIPE[libdnf] ?= "${@bb.utils.contains('PACKAGE_CLASSES', 'package_rpm', '', 'Does not build without package_rpm in PACKAGE_CLASSES due disabled rpm support in libsolv', d)}"
-
diff --git a/poky/meta/recipes-devtools/libdnf/libdnf_0.68.0.bb b/poky/meta/recipes-devtools/libdnf/libdnf_0.68.0.bb
new file mode 100644
index 0000000..86cf41c
--- /dev/null
+++ b/poky/meta/recipes-devtools/libdnf/libdnf_0.68.0.bb
@@ -0,0 +1,36 @@
+SUMMARY = "Library providing simplified C and Python API to libsolv"
+HOMEPAGE = "https://github.com/rpm-software-management/libdnf"
+DESCRIPTION = "This library provides a high level package-manager. It's core library of dnf, PackageKit and rpm-ostree. It's replacement for deprecated hawkey library which it contains inside and uses librepo under the hood."
+LICENSE = "LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+SRC_URI = "git://github.com/rpm-software-management/libdnf;branch=dnf-4-master;protocol=https \
+           file://0001-FindGtkDoc.cmake-drop-the-requirement-for-GTKDOC_SCA.patch \
+           file://0004-Set-libsolv-variables-with-pkg-config-cmake-s-own-mo.patch \
+           file://0001-Get-parameters-for-both-libsolv-and-libsolvext-libdn.patch \
+           file://enable_test_data_dir_set.patch \
+           file://0001-drop-FindPythonInstDir.cmake.patch \
+           file://0001-libdnf-dnf-context.cpp-do-not-try-to-access-BDB-data.patch \
+           "
+
+SRCREV = "388e7699f8a75fa81aca05d09389acea7e489168"
+UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>(?!4\.90)\d+(\.\d+)+)"
+
+S = "${WORKDIR}/git"
+
+DEPENDS = "glib-2.0 libsolv libcheck librepo rpm gtk-doc libmodulemd json-c swig-native util-linux"
+
+inherit gtk-doc gobject-introspection cmake pkgconfig setuptools3-base
+
+EXTRA_OECMAKE = " -DPYTHON_INSTALL_DIR=${PYTHON_SITEPACKAGES_DIR} -DWITH_MAN=OFF -DPYTHON_DESIRED=3 \
+                  ${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DWITH_GIR=ON', '-DWITH_GIR=OFF', d)} \
+                  -DWITH_TESTS=OFF \
+                  -DWITH_ZCHUNK=OFF \
+                  -DWITH_HTML=OFF \
+                "
+EXTRA_OECMAKE:append:class-native = " -DWITH_GIR=OFF"
+EXTRA_OECMAKE:append:class-nativesdk = " -DWITH_GIR=OFF"
+
+BBCLASSEXTEND = "native nativesdk"
+SKIP_RECIPE[libdnf] ?= "${@bb.utils.contains('PACKAGE_CLASSES', 'package_rpm', '', 'Does not build without package_rpm in PACKAGE_CLASSES due disabled rpm support in libsolv', d)}"
+
diff --git a/poky/meta/recipes-devtools/librepo/librepo/0001-metadata_downloader-Include-unistd.h-for-lseek.patch b/poky/meta/recipes-devtools/librepo/librepo/0001-metadata_downloader-Include-unistd.h-for-lseek.patch
new file mode 100644
index 0000000..22b3110
--- /dev/null
+++ b/poky/meta/recipes-devtools/librepo/librepo/0001-metadata_downloader-Include-unistd.h-for-lseek.patch
@@ -0,0 +1,34 @@
+From 5c63ec2e2d4726268ace85e5c61727cbd811d982 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 27 Aug 2022 09:00:24 -0700
+Subject: [PATCH] metadata_downloader: Include unistd.h for lseek()
+
+This is found when compiling on musl systems
+
+Fixes
+
+metadata_downloader.c:331:9: error: call to undeclared function 'lseek'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
+        lseek(fd_value, SEEK_SET, 0);
+        ^
+
+Upstream-Status: Submitted [https://github.com/rpm-software-management/librepo/pull/263]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ librepo/metadata_downloader.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/librepo/metadata_downloader.c b/librepo/metadata_downloader.c
+index 123c77b..6604255 100644
+--- a/librepo/metadata_downloader.c
++++ b/librepo/metadata_downloader.c
+@@ -24,6 +24,7 @@
+ #include <assert.h>
+ #include <string.h>
+ #include <errno.h>
++#include <unistd.h>
+ #include <sys/stat.h>
+ 
+ #include "librepo/librepo.h"
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-devtools/librepo/librepo_1.14.3.bb b/poky/meta/recipes-devtools/librepo/librepo_1.14.3.bb
deleted file mode 100644
index 2c8e592..0000000
--- a/poky/meta/recipes-devtools/librepo/librepo_1.14.3.bb
+++ /dev/null
@@ -1,29 +0,0 @@
-SUMMARY = "A library providing C and Python (libcURL like) API \
-           for downloading linux repository metadata and packages."
-HOMEPAGE = "https://github.com/rpm-software-management/librepo"
-DESCRIPTION = "${SUMMARY}"
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-SRC_URI = "git://github.com/rpm-software-management/librepo.git;branch=master;protocol=https \
-           file://0002-Do-not-try-to-obtain-PYTHON_INSTALL_DIR-by-running-p.patch \
-           file://0004-Set-gpgme-variables-with-pkg-config-not-with-cmake-m.patch \
-           "
-
-SRCREV = "8fc7950795282d9c7c50071f45973006de5594ab"
-
-S = "${WORKDIR}/git"
-
-DEPENDS = "curl glib-2.0 openssl attr gpgme libxml2"
-
-inherit cmake setuptools3-base pkgconfig
-
-EXTRA_OECMAKE = " \
-    -DPYTHON_INSTALL_DIR=${PYTHON_SITEPACKAGES_DIR} \
-    -DPYTHON_DESIRED=3 \
-    -DENABLE_TESTS=OFF \
-    -DENABLE_DOCS=OFF \
-    -DWITH_ZCHUNK=OFF \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/librepo/librepo_1.14.4.bb b/poky/meta/recipes-devtools/librepo/librepo_1.14.4.bb
new file mode 100644
index 0000000..2b8bd13
--- /dev/null
+++ b/poky/meta/recipes-devtools/librepo/librepo_1.14.4.bb
@@ -0,0 +1,30 @@
+SUMMARY = "A library providing C and Python (libcURL like) API \
+           for downloading linux repository metadata and packages."
+HOMEPAGE = "https://github.com/rpm-software-management/librepo"
+DESCRIPTION = "${SUMMARY}"
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+SRC_URI = "git://github.com/rpm-software-management/librepo.git;branch=master;protocol=https \
+           file://0002-Do-not-try-to-obtain-PYTHON_INSTALL_DIR-by-running-p.patch \
+           file://0004-Set-gpgme-variables-with-pkg-config-not-with-cmake-m.patch \
+           file://0001-metadata_downloader-Include-unistd.h-for-lseek.patch \
+           "
+
+SRCREV = "2bd1041c741c85bc196ca01dcca1eae6099eb742"
+
+S = "${WORKDIR}/git"
+
+DEPENDS = "curl glib-2.0 openssl attr gpgme libxml2"
+
+inherit cmake setuptools3-base pkgconfig
+
+EXTRA_OECMAKE = " \
+    -DPYTHON_INSTALL_DIR=${PYTHON_SITEPACKAGES_DIR} \
+    -DPYTHON_DESIRED=3 \
+    -DENABLE_TESTS=OFF \
+    -DENABLE_DOCS=OFF \
+    -DWITH_ZCHUNK=OFF \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/llvm/llvm/0006-llvm-TargetLibraryInfo-Undefine-libc-functions-if-th.patch b/poky/meta/recipes-devtools/llvm/llvm/0006-llvm-TargetLibraryInfo-Undefine-libc-functions-if-th.patch
deleted file mode 100644
index d02b7ba..0000000
--- a/poky/meta/recipes-devtools/llvm/llvm/0006-llvm-TargetLibraryInfo-Undefine-libc-functions-if-th.patch
+++ /dev/null
@@ -1,90 +0,0 @@
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-From dbeecdb307be8b783b42cbc89dcb9c5e7f528989 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 21 May 2016 00:33:20 +0000
-Subject: [PATCH] llvm: TargetLibraryInfo: Undefine libc functions if they are macros
-
-musl defines some functions as macros and not inline functions
-if this is the case then make sure to undefine them
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- .../llvm/Analysis/TargetLibraryInfo.def       | 21 +++++++++++++++++++
- 1 file changed, 21 insertions(+)
-
-diff --git a/llvm/include/llvm/Analysis/TargetLibraryInfo.def b/llvm/include/llvm/Analysis/TargetLibraryInfo.def
-index afed404f04c..876888656f2 100644
---- a/llvm/include/llvm/Analysis/TargetLibraryInfo.def
-+++ b/llvm/include/llvm/Analysis/TargetLibraryInfo.def
-@@ -782,6 +782,9 @@ TLI_DEFINE_STRING_INTERNAL("fmodl")
- TLI_DEFINE_ENUM_INTERNAL(fopen)
- TLI_DEFINE_STRING_INTERNAL("fopen")
- /// FILE *fopen64(const char *filename, const char *opentype)
-+#ifdef fopen64
-+#undef fopen64
-+#endif
- TLI_DEFINE_ENUM_INTERNAL(fopen64)
- TLI_DEFINE_STRING_INTERNAL("fopen64")
- /// int fork();
-@@ -829,6 +832,9 @@ TLI_DEFINE_STRING_INTERNAL("fseek")
- /// int fseeko(FILE *stream, off_t offset, int whence);
- TLI_DEFINE_ENUM_INTERNAL(fseeko)
- TLI_DEFINE_STRING_INTERNAL("fseeko")
-+#ifdef fseeko64
-+#undef fseeko64
-+#endif
- /// int fseeko64(FILE *stream, off64_t offset, int whence)
- TLI_DEFINE_ENUM_INTERNAL(fseeko64)
- TLI_DEFINE_STRING_INTERNAL("fseeko64")
-@@ -839,6 +845,9 @@ TLI_DEFINE_STRING_INTERNAL("fsetpos")
- TLI_DEFINE_ENUM_INTERNAL(fstat)
- TLI_DEFINE_STRING_INTERNAL("fstat")
- /// int fstat64(int filedes, struct stat64 *buf)
-+#ifdef fstat64
-+#undef fstat64
-+#endif
- TLI_DEFINE_ENUM_INTERNAL(fstat64)
- TLI_DEFINE_STRING_INTERNAL("fstat64")
- /// int fstatvfs(int fildes, struct statvfs *buf);
-@@ -854,6 +863,9 @@ TLI_DEFINE_STRING_INTERNAL("ftell")
- TLI_DEFINE_ENUM_INTERNAL(ftello)
- TLI_DEFINE_STRING_INTERNAL("ftello")
- /// off64_t ftello64(FILE *stream)
-+#ifdef ftello64
-+#undef ftello64
-+#endif
- TLI_DEFINE_ENUM_INTERNAL(ftello64)
- TLI_DEFINE_STRING_INTERNAL("ftello64")
- /// int ftrylockfile(FILE *file);
-@@ -980,6 +992,9 @@ TLI_DEFINE_STRING_INTERNAL("logl")
- TLI_DEFINE_ENUM_INTERNAL(lstat)
- TLI_DEFINE_STRING_INTERNAL("lstat")
- /// int lstat64(const char *path, struct stat64 *buf);
-+#ifdef lstat64
-+#undef lstat64
-+#endif
- TLI_DEFINE_ENUM_INTERNAL(lstat64)
- TLI_DEFINE_STRING_INTERNAL("lstat64")
- /// void *malloc(size_t size);
-@@ -1205,6 +1220,9 @@ TLI_DEFINE_STRING_INTERNAL("sscanf")
- TLI_DEFINE_ENUM_INTERNAL(stat)
- TLI_DEFINE_STRING_INTERNAL("stat")
- /// int stat64(const char *path, struct stat64 *buf);
-+#ifdef stat64
-+#undef stat64
-+#endif
- TLI_DEFINE_ENUM_INTERNAL(stat64)
- TLI_DEFINE_STRING_INTERNAL("stat64")
- /// int statvfs(const char *path, struct statvfs *buf);
-@@ -1340,6 +1358,9 @@ TLI_DEFINE_STRING_INTERNAL("times")
- TLI_DEFINE_ENUM_INTERNAL(tmpfile)
- TLI_DEFINE_STRING_INTERNAL("tmpfile")
- /// FILE *tmpfile64(void)
-+#ifdef tmpfile64
-+#undef tmpfile64
-+#endif
- TLI_DEFINE_ENUM_INTERNAL(tmpfile64)
- TLI_DEFINE_STRING_INTERNAL("tmpfile64")
- /// int toascii(int c);
diff --git a/poky/meta/recipes-devtools/llvm/llvm/llvm-config b/poky/meta/recipes-devtools/llvm/llvm/llvm-config
new file mode 100644
index 0000000..a45f38c
--- /dev/null
+++ b/poky/meta/recipes-devtools/llvm/llvm/llvm-config
@@ -0,0 +1,42 @@
+#!/bin/bash
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+# Wrap llvm-config since the native llvm-config will remap some values correctly
+# if placed in the target sysroot but for flags, it would provide the native ones.
+# Provide ours from the environment instead.
+
+NEXT_LLVM_CONFIG="$(which -a llvm-config | sed -n 2p)"
+if [[ $# == 0 ]]; then
+  exec "$NEXT_LLVM_CONFIG"
+fi
+
+remain=""
+output=""
+for arg in "$@"; do
+  case "$arg" in
+    --cppflags)
+      output="${output} ${CPPFLAGS}"
+      ;;
+    --cflags)
+      output="${output} ${CFLAGS}"
+      ;;
+    --cxxflags)
+      output="${output} ${CXXFLAGS}"
+      ;;
+    --ldflags)
+      output="${output} ${LDFLAGS}"
+      ;;
+    *)
+      remain="${remain} ${arg}"
+      ;;
+  esac
+done
+
+if [ "${remain}" != "" ]; then
+      output="${output} "$("$NEXT_LLVM_CONFIG" ${remain})
+fi
+
+echo "${output}"
diff --git a/poky/meta/recipes-devtools/llvm/llvm_git.bb b/poky/meta/recipes-devtools/llvm/llvm_git.bb
index bdea95d..25c0a43 100644
--- a/poky/meta/recipes-devtools/llvm/llvm_git.bb
+++ b/poky/meta/recipes-devtools/llvm/llvm_git.bb
@@ -28,9 +28,9 @@
 BRANCH = "release/${MAJOR_VERSION}.x"
 SRCREV = "f28c006a5895fc0e329fe15fead81e37457cb1d1"
 SRC_URI = "git://github.com/llvm/llvm-project.git;branch=${BRANCH};protocol=https \
-           file://0006-llvm-TargetLibraryInfo-Undefine-libc-functions-if-th.patch;striplevel=2 \
            file://0007-llvm-allow-env-override-of-exe-path.patch;striplevel=2 \
            file://0001-AsmMatcherEmitter-sort-ClassInfo-lists-by-name-as-we.patch;striplevel=2 \
+           file://llvm-config \
            "
 
 UPSTREAM_CHECK_GITTAGREGEX = "llvmorg-(?P<pver>\d+(\.\d+)+)"
@@ -124,6 +124,15 @@
 do_install:class-native() {
 	install -D -m 0755 ${B}/bin/llvm-tblgen ${D}${bindir}/llvm-tblgen${PV}
 	install -D -m 0755 ${B}/bin/llvm-config ${D}${bindir}/llvm-config${PV}
+	ln -sf llvm-config${PV} ${D}${bindir}/llvm-config
+}
+
+SYSROOT_PREPROCESS_FUNCS:append:class-target = " llvm_sysroot_preprocess"
+
+llvm_sysroot_preprocess() {
+	install -d ${SYSROOT_DESTDIR}${bindir_crossscripts}/
+	install -m 0755 ${WORKDIR}/llvm-config ${SYSROOT_DESTDIR}${bindir_crossscripts}/
+	ln -sf llvm-config ${SYSROOT_DESTDIR}${bindir_crossscripts}/llvm-config${PV}
 }
 
 PACKAGES =+ "${PN}-bugpointpasses ${PN}-llvmhello ${PN}-libllvm ${PN}-liboptremarks ${PN}-liblto"
diff --git a/poky/meta/recipes-devtools/meson/meson_0.63.0.bb b/poky/meta/recipes-devtools/meson/meson_0.63.0.bb
deleted file mode 100644
index 890f475..0000000
--- a/poky/meta/recipes-devtools/meson/meson_0.63.0.bb
+++ /dev/null
@@ -1,158 +0,0 @@
-HOMEPAGE = "http://mesonbuild.com"
-SUMMARY = "A high performance build system"
-DESCRIPTION = "Meson is a build system designed to increase programmer \
-productivity. It does this by providing a fast, simple and easy to use \
-interface for modern software development tools and practices."
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://COPYING;md5=3b83ef96387f14655fc854ddc3c6bd57"
-
-SRC_URI = "https://github.com/mesonbuild/meson/releases/download/${PV}/meson-${PV}.tar.gz \
-           file://meson-setup.py \
-           file://meson-wrapper \
-           file://0001-python-module-do-not-manipulate-the-environment-when.patch \
-           file://disable-rpath-handling.patch \
-           file://0001-Make-CPU-family-warnings-fatal.patch \
-           file://0002-Support-building-allarch-recipes-again.patch \
-           file://0001-is_debianlike-always-return-False.patch \
-           file://0001-Check-for-clang-before-guessing-gcc-or-lcc.patch \
-           "
-SRC_URI[sha256sum] = "3b51d451744c2bc71838524ec8d96cd4f8c4793d5b8d5d0d0a9c8a4f7c94cd6f"
-
-UPSTREAM_CHECK_URI = "https://github.com/mesonbuild/meson/releases"
-UPSTREAM_CHECK_REGEX = "meson-(?P<pver>\d+(\.\d+)+)\.tar"
-
-inherit python_setuptools_build_meta
-
-RDEPENDS:${PN} = "ninja python3-modules python3-pkg-resources"
-
-FILES:${PN} += "${datadir}/polkit-1"
-
-do_install:append () {
-	# As per the same issue in the python recipe itself:
-	# Unfortunately the following pyc files are non-deterministc due to 'frozenset'
-	# being written without strict ordering, even with PYTHONHASHSEED = 0
-	# Upstream is discussing ways to solve the issue properly, until then let's
-	# just not install the problematic files.
-	# More info: http://benno.id.au/blog/2013/01/15/python-determinism
-	rm ${D}${libdir}/python*/site-packages/mesonbuild/dependencies/__pycache__/mpi.cpython*
-}
-
-BBCLASSEXTEND = "native nativesdk"
-
-inherit meson-routines
-
-# The cross file logic is similar but not identical to that in meson.bbclass,
-# since it's generating for an SDK rather than a cross-compile. Important
-# differences are:
-# - We can't set vars like CC, CXX, etc. yet because they will be filled in with
-#   real paths by meson-setup.sh when the SDK is extracted.
-# - Some overrides aren't needed, since the SDK injects paths that take care of
-#   them.
-def var_list2str(var, d):
-    items = d.getVar(var).split()
-    return items[0] if len(items) == 1 else ', '.join(repr(s) for s in items)
-
-def generate_native_link_template(d):
-    val = ['-L@{OECORE_NATIVE_SYSROOT}${libdir_native}',
-           '-L@{OECORE_NATIVE_SYSROOT}${base_libdir_native}',
-           '-Wl,-rpath-link,@{OECORE_NATIVE_SYSROOT}${libdir_native}',
-           '-Wl,-rpath-link,@{OECORE_NATIVE_SYSROOT}${base_libdir_native}',
-           '-Wl,--allow-shlib-undefined'
-        ]
-    build_arch = d.getVar('BUILD_ARCH')
-    if 'x86_64' in build_arch:
-        loader = 'ld-linux-x86-64.so.2'
-    elif 'i686' in build_arch:
-        loader = 'ld-linux.so.2'
-    elif 'aarch64' in build_arch:
-        loader = 'ld-linux-aarch64.so.1'
-    elif 'ppc64le' in build_arch:
-        loader = 'ld64.so.2'
-
-    if loader:
-        val += ['-Wl,--dynamic-linker=@{OECORE_NATIVE_SYSROOT}${base_libdir_native}/' + loader]
-
-    return repr(val)
-
-install_templates() {
-    install -d ${D}${datadir}/meson
-
-    cat >${D}${datadir}/meson/meson.native.template <<EOF
-[binaries]
-c = ${@meson_array('BUILD_CC', d)}
-cpp = ${@meson_array('BUILD_CXX', d)}
-ar = ${@meson_array('BUILD_AR', d)}
-nm = ${@meson_array('BUILD_NM', d)}
-strip = ${@meson_array('BUILD_STRIP', d)}
-readelf = ${@meson_array('BUILD_READELF', d)}
-pkgconfig = 'pkg-config-native'
-
-[built-in options]
-c_args = ['-isystem@{OECORE_NATIVE_SYSROOT}${includedir_native}' , ${@var_list2str('BUILD_OPTIMIZATION', d)}]
-c_link_args = ${@generate_native_link_template(d)}
-cpp_args = ['-isystem@{OECORE_NATIVE_SYSROOT}${includedir_native}' , ${@var_list2str('BUILD_OPTIMIZATION', d)}]
-cpp_link_args = ${@generate_native_link_template(d)}
-[properties]
-sys_root = '@OECORE_NATIVE_SYSROOT'
-EOF
-
-    cat >${D}${datadir}/meson/meson.cross.template <<EOF
-[binaries]
-c = @CC
-cpp = @CXX
-ar = @AR
-nm = @NM
-strip = @STRIP
-pkgconfig = 'pkg-config'
-
-[built-in options]
-c_args = @CFLAGS
-c_link_args = @LDFLAGS
-cpp_args = @CPPFLAGS
-cpp_link_args = @LDFLAGS
-
-[properties]
-needs_exe_wrapper = true
-sys_root = @OECORE_TARGET_SYSROOT
-
-[host_machine]
-system = '$host_system'
-cpu_family = '$host_cpu_family'
-cpu = '$host_cpu'
-endian = '$host_endian'
-EOF
-}
-
-do_install:append:class-nativesdk() {
-    host_system=${SDK_OS}
-    host_cpu_family=${@meson_cpu_family("SDK_ARCH", d)}
-    host_cpu=${SDK_ARCH}
-    host_endian=${@meson_endian("SDK", d)}
-    install_templates
-
-    install -d ${D}${SDKPATHNATIVE}/post-relocate-setup.d
-    install -m 0755 ${WORKDIR}/meson-setup.py ${D}${SDKPATHNATIVE}/post-relocate-setup.d/
-
-    # We need to wrap the real meson with a thin env setup wrapper.
-    mv ${D}${bindir}/meson ${D}${bindir}/meson.real
-    install -m 0755 ${WORKDIR}/meson-wrapper ${D}${bindir}/meson
-}
-
-FILES:${PN}:append:class-nativesdk = "${datadir}/meson ${SDKPATHNATIVE}"
-
-do_install:append:class-native() {
-    host_system=${HOST_OS}
-    host_cpu_family=${@meson_cpu_family("HOST_ARCH", d)}
-    host_cpu=${HOST_ARCH}
-    host_endian=${@meson_endian("HOST", d)}
-    install_templates
-
-    install -d ${D}${datadir}/post-relocate-setup.d
-    install -m 0755 ${WORKDIR}/meson-setup.py ${D}${datadir}/post-relocate-setup.d/
-
-    # We need to wrap the real meson with a thin wrapper that substitues native/cross files
-    # when running in a direct SDK environment.
-    mv ${D}${bindir}/meson ${D}${bindir}/meson.real
-    install -m 0755 ${WORKDIR}/meson-wrapper ${D}${bindir}/meson
-}
diff --git a/poky/meta/recipes-devtools/meson/meson_0.63.1.bb b/poky/meta/recipes-devtools/meson/meson_0.63.1.bb
new file mode 100644
index 0000000..7f77a7d
--- /dev/null
+++ b/poky/meta/recipes-devtools/meson/meson_0.63.1.bb
@@ -0,0 +1,158 @@
+HOMEPAGE = "http://mesonbuild.com"
+SUMMARY = "A high performance build system"
+DESCRIPTION = "Meson is a build system designed to increase programmer \
+productivity. It does this by providing a fast, simple and easy to use \
+interface for modern software development tools and practices."
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://COPYING;md5=3b83ef96387f14655fc854ddc3c6bd57"
+
+SRC_URI = "https://github.com/mesonbuild/meson/releases/download/${PV}/meson-${PV}.tar.gz \
+           file://meson-setup.py \
+           file://meson-wrapper \
+           file://0001-python-module-do-not-manipulate-the-environment-when.patch \
+           file://disable-rpath-handling.patch \
+           file://0001-Make-CPU-family-warnings-fatal.patch \
+           file://0002-Support-building-allarch-recipes-again.patch \
+           file://0001-is_debianlike-always-return-False.patch \
+           file://0001-Check-for-clang-before-guessing-gcc-or-lcc.patch \
+           "
+SRC_URI[sha256sum] = "06fe13297213d6ff0121c5d5aab25a56ef938ffec57414ed6086fda272cb65e9"
+
+UPSTREAM_CHECK_URI = "https://github.com/mesonbuild/meson/releases"
+UPSTREAM_CHECK_REGEX = "meson-(?P<pver>\d+(\.\d+)+)\.tar"
+
+inherit python_setuptools_build_meta
+
+RDEPENDS:${PN} = "ninja python3-modules python3-pkg-resources"
+
+FILES:${PN} += "${datadir}/polkit-1"
+
+do_install:append () {
+	# As per the same issue in the python recipe itself:
+	# Unfortunately the following pyc files are non-deterministic due to 'frozenset'
+	# being written without strict ordering, even with PYTHONHASHSEED = 0.
+	# Upstream is discussing ways to solve the issue properly; until then let's
+	# just not install the problematic files.
+	# More info: http://benno.id.au/blog/2013/01/15/python-determinism
+	rm ${D}${libdir}/python*/site-packages/mesonbuild/dependencies/__pycache__/mpi.cpython*
+}
+
+BBCLASSEXTEND = "native nativesdk"
+
+inherit meson-routines
+
+# The cross file logic is similar but not identical to that in meson.bbclass,
+# since it's generating for an SDK rather than a cross-compile. Important
+# differences are:
+# - We can't set vars like CC, CXX, etc. yet because they will be filled in with
+#   real paths by meson-setup.sh when the SDK is extracted.
+# - Some overrides aren't needed, since the SDK injects paths that take care of
+#   them.
+def var_list2str(var, d):
+    items = d.getVar(var).split()
+    return items[0] if len(items) == 1 else ', '.join(repr(s) for s in items)
+
+def generate_native_link_template(d):
+    val = ['-L@{OECORE_NATIVE_SYSROOT}${libdir_native}',
+           '-L@{OECORE_NATIVE_SYSROOT}${base_libdir_native}',
+           '-Wl,-rpath-link,@{OECORE_NATIVE_SYSROOT}${libdir_native}',
+           '-Wl,-rpath-link,@{OECORE_NATIVE_SYSROOT}${base_libdir_native}',
+           '-Wl,--allow-shlib-undefined'
+        ]
+    build_arch = d.getVar('BUILD_ARCH')
+    if 'x86_64' in build_arch:
+        loader = 'ld-linux-x86-64.so.2'
+    elif 'i686' in build_arch:
+        loader = 'ld-linux.so.2'
+    elif 'aarch64' in build_arch:
+        loader = 'ld-linux-aarch64.so.1'
+    elif 'ppc64le' in build_arch:
+        loader = 'ld64.so.2'
+
+    if loader:
+        val += ['-Wl,--dynamic-linker=@{OECORE_NATIVE_SYSROOT}${base_libdir_native}/' + loader]
+
+    return repr(val)
+
+install_templates() {
+    install -d ${D}${datadir}/meson
+
+    cat >${D}${datadir}/meson/meson.native.template <<EOF
+[binaries]
+c = ${@meson_array('BUILD_CC', d)}
+cpp = ${@meson_array('BUILD_CXX', d)}
+ar = ${@meson_array('BUILD_AR', d)}
+nm = ${@meson_array('BUILD_NM', d)}
+strip = ${@meson_array('BUILD_STRIP', d)}
+readelf = ${@meson_array('BUILD_READELF', d)}
+pkgconfig = 'pkg-config-native'
+
+[built-in options]
+c_args = ['-isystem@{OECORE_NATIVE_SYSROOT}${includedir_native}' , ${@var_list2str('BUILD_OPTIMIZATION', d)}]
+c_link_args = ${@generate_native_link_template(d)}
+cpp_args = ['-isystem@{OECORE_NATIVE_SYSROOT}${includedir_native}' , ${@var_list2str('BUILD_OPTIMIZATION', d)}]
+cpp_link_args = ${@generate_native_link_template(d)}
+[properties]
+sys_root = '@OECORE_NATIVE_SYSROOT'
+EOF
+
+    cat >${D}${datadir}/meson/meson.cross.template <<EOF
+[binaries]
+c = @CC
+cpp = @CXX
+ar = @AR
+nm = @NM
+strip = @STRIP
+pkgconfig = 'pkg-config'
+
+[built-in options]
+c_args = @CFLAGS
+c_link_args = @LDFLAGS
+cpp_args = @CPPFLAGS
+cpp_link_args = @LDFLAGS
+
+[properties]
+needs_exe_wrapper = true
+sys_root = @OECORE_TARGET_SYSROOT
+
+[host_machine]
+system = '$host_system'
+cpu_family = '$host_cpu_family'
+cpu = '$host_cpu'
+endian = '$host_endian'
+EOF
+}
+
+do_install:append:class-nativesdk() {
+    host_system=${SDK_OS}
+    host_cpu_family=${@meson_cpu_family("SDK_ARCH", d)}
+    host_cpu=${SDK_ARCH}
+    host_endian=${@meson_endian("SDK", d)}
+    install_templates
+
+    install -d ${D}${SDKPATHNATIVE}/post-relocate-setup.d
+    install -m 0755 ${WORKDIR}/meson-setup.py ${D}${SDKPATHNATIVE}/post-relocate-setup.d/
+
+    # We need to wrap the real meson with a thin env setup wrapper.
+    mv ${D}${bindir}/meson ${D}${bindir}/meson.real
+    install -m 0755 ${WORKDIR}/meson-wrapper ${D}${bindir}/meson
+}
+
+FILES:${PN}:append:class-nativesdk = "${datadir}/meson ${SDKPATHNATIVE}"
+
+do_install:append:class-native() {
+    host_system=${HOST_OS}
+    host_cpu_family=${@meson_cpu_family("HOST_ARCH", d)}
+    host_cpu=${HOST_ARCH}
+    host_endian=${@meson_endian("HOST", d)}
+    install_templates
+
+    install -d ${D}${datadir}/post-relocate-setup.d
+    install -m 0755 ${WORKDIR}/meson-setup.py ${D}${datadir}/post-relocate-setup.d/
+
+    # We need to wrap the real meson with a thin wrapper that substitutes native/cross files
+    # when running in a direct SDK environment.
+    mv ${D}${bindir}/meson ${D}${bindir}/meson.real
+    install -m 0755 ${WORKDIR}/meson-wrapper ${D}${bindir}/meson
+}
diff --git a/poky/meta/recipes-devtools/mtd/mtd-utils/0001-tests-Remove-unused-linux-fs.h-header-from-includes.patch b/poky/meta/recipes-devtools/mtd/mtd-utils/0001-tests-Remove-unused-linux-fs.h-header-from-includes.patch
new file mode 100644
index 0000000..73d4a84
--- /dev/null
+++ b/poky/meta/recipes-devtools/mtd/mtd-utils/0001-tests-Remove-unused-linux-fs.h-header-from-includes.patch
@@ -0,0 +1,31 @@
+From 6fb10bd18488ed84776675bc1b2982800a51d839 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 6 Aug 2022 20:14:38 -0700
+Subject: [mtd-utils][PATCH] tests: Remove unused linux/fs.h header from includes
+
+This header is not needed; moreover, it includes linux/mount.h, which is
+now in conflict[1] with the glibc-provided sys/mount.h from glibc 2.36 onwards.
+
+[1] https://sourceware.org/glibc/wiki/Release/2.36
+
+Upstream-Status: Submitted [https://lists.infradead.org/pipermail/linux-mtd/2022-August/094667.html]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ tests/fs-tests/lib/tests.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/tests/fs-tests/lib/tests.c b/tests/fs-tests/lib/tests.c
+index d1a2e0c..3db0426 100644
+--- a/tests/fs-tests/lib/tests.c
++++ b/tests/fs-tests/lib/tests.c
+@@ -35,7 +35,6 @@
+ #include <sys/vfs.h>
+ #include <sys/mount.h>
+ #include <sys/statvfs.h>
+-#include <linux/fs.h>
+ #include <linux/jffs2.h>
+ 
+ #include "tests.h"
+-- 
+2.37.1
+
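
For context on the mtd-utils patch above: from glibc 2.36 onwards <sys/mount.h> itself carries the new mount API definitions (struct mount_attr, the FSOPEN_*/FSMOUNT_* constants, and so on) that previously came only from <linux/mount.h>, and <linux/fs.h> pulls <linux/mount.h> in. The following sketch is illustrative only and not part of the patch (the file name is made up); it just mirrors the include pattern tests.c used before the fix.

/* glibc-2.36-mount-clash.c - illustrative only, mirrors the include pattern
 * that tests/fs-tests/lib/tests.c had before the patch above.
 *
 * Building against glibc >= 2.36 headers typically fails with redefinition
 * errors for struct mount_attr / the FSOPEN_* constants (the exact wording
 * depends on the glibc and kernel header versions):
 *
 *   $ cc -c glibc-2.36-mount-clash.c
 */
#include <sys/mount.h>   /* glibc 2.36+ defines the new mount API here */
#include <linux/fs.h>    /* pulls in <linux/mount.h>, which defines it again */

int main(void)
{
    return 0;
}
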
diff --git a/poky/meta/recipes-devtools/mtd/mtd-utils/add-exclusion-to-mkfs-jffs2-git-2.patch b/poky/meta/recipes-devtools/mtd/mtd-utils/add-exclusion-to-mkfs-jffs2-git-2.patch
deleted file mode 100644
index 5d874d9..0000000
--- a/poky/meta/recipes-devtools/mtd/mtd-utils/add-exclusion-to-mkfs-jffs2-git-2.patch
+++ /dev/null
@@ -1,105 +0,0 @@
-Upstream-Status: Pending
-
-Index: git/jffsX-utils/mkfs.jffs2.c
-===================================================================
---- git.orig/jffsX-utils/mkfs.jffs2.c
-+++ git/jffsX-utils/mkfs.jffs2.c
-@@ -100,6 +100,11 @@ struct filesystem_entry {
- 	struct rb_node hardlink_rb;
- };
- 
-+struct ignorepath_entry {
-+	struct ignorepath_entry* next;  /* Points to the next ignorepath element */
-+	char name[PATH_MAX];        /* Name of the entry */
-+};
-+static struct ignorepath_entry* ignorepath = 0;
- struct rb_root hardlinks;
- static int out_fd = -1;
- static int in_fd = -1;
-@@ -309,7 +314,7 @@ static struct filesystem_entry *recursiv
- 	char *hpath, *tpath;
- 	struct dirent *dp, **namelist;
- 	struct filesystem_entry *entry;
--
-+	struct ignorepath_entry* element = ignorepath;
- 
- 	if (lstat(hostpath, &sb)) {
- 		sys_errmsg_die("%s", hostpath);
-@@ -318,6 +323,15 @@ static struct filesystem_entry *recursiv
- 	entry = add_host_filesystem_entry(targetpath, hostpath,
- 			sb.st_uid, sb.st_gid, sb.st_mode, 0, parent);
- 
-+	while ( element ) {
-+		if ( strcmp( element->name, targetpath ) == 0 ) {
-+			printf( "Note: ignoring directories below '%s'\n", targetpath );
-+			return entry;
-+			break;
-+		}
-+		element = element->next;
-+	}
-+
- 	n = scandir(hostpath, &namelist, 0, alphasort);
- 	if (n < 0) {
- 		sys_errmsg_die("opening directory %s", hostpath);
-@@ -1359,6 +1373,7 @@ static struct option long_options[] = {
- 	{"root", 1, NULL, 'r'},
- 	{"pagesize", 1, NULL, 's'},
- 	{"eraseblock", 1, NULL, 'e'},
-+	{"ignore", 1, NULL, 'I'},
- 	{"output", 1, NULL, 'o'},
- 	{"help", 0, NULL, 'h'},
- 	{"verbose", 0, NULL, 'v'},
-@@ -1409,6 +1424,7 @@ static const char helptext[] =
- "  -L, --list-compressors  Show the list of the available compressors\n"
- "  -t, --test-compression  Call decompress and compare with the original (for test)\n"
- "  -n, --no-cleanmarkers   Don't add a cleanmarker to every eraseblock\n"
-+"  -I, --ignore=PATH       Ignore sub directory and file tree below PATH when recursing over the file system\n"
- "  -o, --output=FILE       Output to FILE (default: stdout)\n"
- "  -l, --little-endian     Create a little-endian filesystem\n"
- "  -b, --big-endian        Create a big-endian filesystem\n"
-@@ -1566,6 +1582,7 @@ int main(int argc, char **argv)
- 	char *compr_name = NULL;
- 	int compr_prior  = -1;
- 	int warn_page_size = 0;
-+	struct ignorepath_entry* element = ignorepath;
- 
- 	page_size = sysconf(_SC_PAGESIZE);
- 	if (page_size < 0) /* System doesn't know so ... */
-@@ -1576,7 +1593,7 @@ int main(int argc, char **argv)
- 	jffs2_compressors_init();
- 
- 	while ((opt = getopt_long(argc, argv,
--					"D:d:r:s:o:qUPfh?vVe:lbp::nc:m:x:X:Lty:i:", long_options, &c)) >= 0)
-+					"D:d:r:s:I:o:qUPfh?vVe:lbp::nc:m:x:X:Lty:i:", long_options, &c)) >= 0)
- 	{
- 		switch (opt) {
- 			case 'D':
-@@ -1600,6 +1617,28 @@ int main(int argc, char **argv)
- 				warn_page_size = 0; /* set by user, so don't need to warn */
- 				break;
- 
-+			case 'I':
-+				printf( "Note: Adding '%s' to ignore Path\n", optarg );
-+				element = ignorepath;
-+				if ( !ignorepath ) {
-+					ignorepath = xmalloc( sizeof( struct ignorepath_entry ) );
-+					ignorepath->next = 0;
-+					strcpy( &ignorepath->name[0], optarg );
-+				} else {
-+					while ( element->next ) element = element->next;
-+					element->next = xmalloc( sizeof( struct ignorepath_entry ) );
-+ 					element->next->next = 0;
-+					strcpy( &element->next->name[0], optarg );
-+				}
-+				printf( "--------- Dumping ignore path list ----------------\n" );
-+				element = ignorepath;
-+				while ( element ) {
-+					printf( "  * '%s'\n", &element->name[0] );
-+					element = element->next;
-+				}
-+				printf( "---------------------------------------------------\n" );
-+				break;
-+
- 			case 'o':
- 				if (out_fd != -1) {
- 					errmsg_die("output filename specified more than once");
diff --git a/poky/meta/recipes-devtools/mtd/mtd-utils_git.bb b/poky/meta/recipes-devtools/mtd/mtd-utils_git.bb
index 3318277..943666e 100644
--- a/poky/meta/recipes-devtools/mtd/mtd-utils_git.bb
+++ b/poky/meta/recipes-devtools/mtd/mtd-utils_git.bb
@@ -15,7 +15,7 @@
 
 SRCREV = "c7f1bfa44a84d02061787e2f6093df5cc40b9f5c"
 SRC_URI = "git://git.infradead.org/mtd-utils.git;branch=master \
-           file://add-exclusion-to-mkfs-jffs2-git-2.patch \
+           file://0001-tests-Remove-unused-linux-fs.h-header-from-includes.patch \
            "
 
 S = "${WORKDIR}/git"
diff --git a/poky/meta/recipes-devtools/patchelf/patchelf/handle-read-only-files.patch b/poky/meta/recipes-devtools/patchelf/patchelf/handle-read-only-files.patch
deleted file mode 100644
index b755a26..0000000
--- a/poky/meta/recipes-devtools/patchelf/patchelf/handle-read-only-files.patch
+++ /dev/null
@@ -1,65 +0,0 @@
-From 682fb48c137b687477008b68863c2a0b73ed47d1 Mon Sep 17 00:00:00 2001
-From: Fabio Berton <fabio.berton@ossystems.com.br>
-Date: Fri, 9 Sep 2016 16:00:42 -0300
-Subject: [PATCH] handle read-only files
-
-Patch from:
-https://github.com/darealshinji/patchelf/commit/40e66392bc4b96e9b4eda496827d26348a503509
-
-Upstream-Status: Denied [https://github.com/NixOS/patchelf/pull/89]
-
-Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
-
----
- src/patchelf.cc | 16 +++++++++++++++-
- 1 file changed, 15 insertions(+), 1 deletion(-)
-
-Index: git/src/patchelf.cc
-===================================================================
---- git.orig/src/patchelf.cc
-+++ git/src/patchelf.cc
-@@ -534,9 +534,19 @@ void ElfFile<ElfFileParamNames>::sortShd
- 
- static void writeFile(const std::string & fileName, const FileContents & contents)
- {
-+    struct stat st;
-+    int fd;
-+
-     debug("writing %s\n", fileName.c_str());
- 
--    int fd = open(fileName.c_str(), O_CREAT | O_TRUNC | O_WRONLY, 0777);
-+    if (stat(fileName.c_str(), &st) != 0)
-+        error("stat");
-+
-+    if (chmod(fileName.c_str(), 0600) != 0)
-+        error("chmod");
-+
-+    fd = open(fileName.c_str(), O_CREAT | O_TRUNC | O_WRONLY, 0777);
-+
-     if (fd == -1)
-         error("open");
- 
-@@ -551,8 +561,6 @@ static void writeFile(const std::string
-         bytesWritten += portion;
-     }
- 
--    if (close(fd) >= 0)
--        return;
-     /*
-      * Just ignore EINTR; a retry loop is the wrong thing to do.
-      *
-@@ -561,9 +569,11 @@ static void writeFile(const std::string
-      * http://utcc.utoronto.ca/~cks/space/blog/unix/CloseEINTR
-      * https://sites.google.com/site/michaelsafyan/software-engineering/checkforeintrwheninvokingclosethinkagain
-      */
--    if (errno == EINTR)
--        return;
--    error("close");
-+    if ((close(fd) < 0) && errno != EINTR)
-+        error("close");
-+
-+    if (chmod(fileName.c_str(), st.st_mode) != 0)
-+        error("chmod");
- }
- 
- 
diff --git a/poky/meta/recipes-devtools/patchelf/patchelf_0.14.5.bb b/poky/meta/recipes-devtools/patchelf/patchelf_0.14.5.bb
deleted file mode 100644
index 0fa2c00..0000000
--- a/poky/meta/recipes-devtools/patchelf/patchelf_0.14.5.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-SUMMARY = "Tool to allow editing of RPATH and interpreter fields in ELF binaries"
-DESCRIPTION = "PatchELF is a simple utility for modifying existing ELF executables and libraries."
-HOMEPAGE = "https://github.com/NixOS/patchelf"
-
-LICENSE = "GPL-3.0-only"
-
-SRC_URI = "git://github.com/NixOS/patchelf;protocol=https;branch=master \
-           file://handle-read-only-files.patch \
-           "
-SRCREV = "a35054504293f9ff64539850d1ed0bfd2f5399f2"
-
-S = "${WORKDIR}/git"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-
-inherit autotools
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/patchelf/patchelf_0.15.0.bb b/poky/meta/recipes-devtools/patchelf/patchelf_0.15.0.bb
new file mode 100644
index 0000000..e07775f
--- /dev/null
+++ b/poky/meta/recipes-devtools/patchelf/patchelf_0.15.0.bb
@@ -0,0 +1,17 @@
+SUMMARY = "Tool to allow editing of RPATH and interpreter fields in ELF binaries"
+DESCRIPTION = "PatchELF is a simple utility for modifying existing ELF executables and libraries."
+HOMEPAGE = "https://github.com/NixOS/patchelf"
+
+LICENSE = "GPL-3.0-only"
+
+SRC_URI = "git://github.com/NixOS/patchelf;protocol=https;branch=master \
+           "
+SRCREV = "49008002562355b0e35075cbc1c42c645ff04e28"
+
+S = "${WORKDIR}/git"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
+
+inherit autotools
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/perl-cross/files/0001-Makefile-correctly-list-modules-when-cleaning-them.patch b/poky/meta/recipes-devtools/perl-cross/files/0001-Makefile-correctly-list-modules-when-cleaning-them.patch
deleted file mode 100644
index 80388fa..0000000
--- a/poky/meta/recipes-devtools/perl-cross/files/0001-Makefile-correctly-list-modules-when-cleaning-them.patch
+++ /dev/null
@@ -1,24 +0,0 @@
-From 7b8d819e012c24df228a313beb86e1942611c904 Mon Sep 17 00:00:00 2001
-From: Alexander Kanavin <alex@linutronix.de>
-Date: Sat, 4 Jun 2022 13:00:12 +0200
-Subject: [PATCH] Makefile: correctly list modules when cleaning them
-
-Upstream-Status: Submitted [https://github.com/arsv/perl-cross/pull/133]
-Signed-off-by: Alexander Kanavin <alex@linutronix.de>
----
- Makefile | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/Makefile b/Makefile
-index 6b35fb0..9ef9324 100644
---- a/Makefile
-+++ b/Makefile
-@@ -462,7 +462,7 @@ clean-subdirs:
- 
- # assuming modules w/o Makefiles were never built and need no cleaning
- clean-modules: config.h
--	@for i in $(modules disabled); do \
-+	@for i in $(modules) $(disabled); do \
- 		test -f $$i/Makefile && \
- 		touch $$i/Makefile && \
- 		$(MAKE) -C $$i clean \
diff --git a/poky/meta/recipes-devtools/perl-cross/files/0001-Makefile-do-not-clean-config.h-xconfig.h.patch b/poky/meta/recipes-devtools/perl-cross/files/0001-Makefile-do-not-clean-config.h-xconfig.h.patch
deleted file mode 100644
index cbb935c..0000000
--- a/poky/meta/recipes-devtools/perl-cross/files/0001-Makefile-do-not-clean-config.h-xconfig.h.patch
+++ /dev/null
@@ -1,28 +0,0 @@
-From ade4a70308d3b9d79cc3db841c0f60385780fe1a Mon Sep 17 00:00:00 2001
-From: Alexander Kanavin <alex@linutronix.de>
-Date: Sat, 4 Jun 2022 13:45:20 +0200
-Subject: [PATCH] Makefile: do not clean config.h/xconfig.h
-
-These are generated by ./configure and not by make.
-
-Upstream-Status: Submitted [https://github.com/arsv/perl-cross/pull/134]
-Signed-off-by: Alexander Kanavin <alex@linutronix.de>
----
- Makefile | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/Makefile b/Makefile
-index 9ef9324..3de2c2e 100644
---- a/Makefile
-+++ b/Makefile
-@@ -473,7 +473,6 @@ clean-generated-files:
- 	-rm -f uudmap.h opmini.c generate_uudmap$X bitcount.h $(CONFIGPM)
- 	-rm -f git_version.h lib/re.pm lib/Config_git.pl
- 	-rm -f perlmini.c perlmain.c
--	-rm -f config.h xconfig.h
- 	-rm -f pod/perlmodlib.pod
- 	-rm -f ext.libs static.list
- 	-rm -f $(patsubst %,%/ppport.h,$(mkppport_lst))
--- 
-2.30.2
-
diff --git a/poky/meta/recipes-devtools/perl-cross/files/0001-configure_func.sh-Add-_GNU_SOURCE-define-and-functio.patch b/poky/meta/recipes-devtools/perl-cross/files/0001-configure_func.sh-Add-_GNU_SOURCE-define-and-functio.patch
new file mode 100644
index 0000000..893b55e
--- /dev/null
+++ b/poky/meta/recipes-devtools/perl-cross/files/0001-configure_func.sh-Add-_GNU_SOURCE-define-and-functio.patch
@@ -0,0 +1,485 @@
+From 65db86f0161c393fd5b082c10837b278adadbff2 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 7 Aug 2022 23:57:20 -0700
+Subject: [PATCH] configure_func.sh: Add _GNU_SOURCE define and function
+ signatures
+
+Modern compilers are getting stricter about include paths and function
+signatures being known during compilation, e.g. clang-15 now errors out if
+a function signature is not found:
+
+try.c:1:18: error: call to undeclared function 'getspnam'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
+
+This causes the function test to fail even though the function is
+available in libc. Therefore add the proper include headers which
+declare these functions, and also define _GNU_SOURCE in every test,
+since some GNU/Linux functions, e.g. accept4, are guarded by this define.
+
+Upstream-Status: Submitted [https://github.com/arsv/perl-cross/pull/137]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ cnf/configure_func.sh | 41 +++++++++++++++++++++--------------------
+ 1 file changed, 21 insertions(+), 20 deletions(-)
+
+--- a/cnf/configure_func.sh
++++ b/cnf/configure_func.sh
+@@ -5,6 +5,7 @@ checkfunc() {
+ 	mstart "Checking for $2"
+ 	if not hinted $1 'found' 'missing'; then
+ 		try_start
++    try_add '#define _GNU_SOURCE'
+ 		funcincludes "$3" "$4" "$includes"
+ 		try_add "int main(void) { $2($3); return 0; }"
+ 		try_link -O0 -fno-builtin
+@@ -42,115 +43,115 @@ checkfunc d_chroot 'chroot' "NULL" 'unis
+ checkfunc d_chsize 'chsize' "0,0"
+ checkfunc d_class 'class'
+ checkfunc d_clearenv 'clearenv' "" 'stdlib.h'
+-checkfunc d_closedir 'closedir' "NULL"
+-checkfunc d_crypt 'crypt'
++checkfunc d_closedir 'closedir' "NULL" 'dirent.h sys/types.h'
++checkfunc d_crypt 'crypt' "NULL,NULL" 'crypt.h'
+ checkfunc d_ctermid 'ctermid'
+ checkfunc d_ctime64 'ctime64'
+ checkfunc d_cuserid 'cuserid'
+-checkfunc d_difftime 'difftime' "0,0"
++checkfunc d_difftime 'difftime' "0,0" 'time.h'
+ checkfunc d_difftime64 'difftime64'
+-checkfunc d_dirfd 'dirfd'
++checkfunc d_dirfd 'dirfd' "NULL" 'dirent.h sys/types.h'
+ checkfunc d_dladdr 'dladdr' 'NULL, NULL' 'dlfcn.h'
+-checkfunc d_dlerror 'dlerror'
+-checkfunc d_dlopen 'dlopen'
+-checkfunc d_drand48 'drand48'
++checkfunc d_dlerror 'dlerror' "" 'dlfcn.h'
++checkfunc d_dlopen 'dlopen' "NULL,0" "dlfcn.h"
++checkfunc d_drand48 'drand48' "" 'stdlib.h'
+ checkfunc d_dup2 'dup2' "0,0" 'unistd.h'
+ checkfunc d_dup3 'dup3' "0,0,0" 'fcntl.h unistd.h'
+ checkfunc d_duplocale 'duplocale' '0' 'locale.h'
+-checkfunc d_eaccess 'eaccess'
+-checkfunc d_endgrent 'endgrent'
+-checkfunc d_endhent 'endhostent'
+-checkfunc d_endnent 'endnetent'
+-checkfunc d_endpent 'endprotoent'
+-checkfunc d_endpwent 'endpwent'
+-checkfunc d_endservent 'endservent'
++checkfunc d_eaccess 'eaccess' "NULL,0" 'unistd.h'
++checkfunc d_endgrent 'endgrent' '' 'grp.h sys/types.h'
++checkfunc d_endhent 'endhostent' "" 'netdb.h'
++checkfunc d_endnent 'endnetent' "" 'netdb.h'
++checkfunc d_endpent 'endprotoent' "" 'netdb.h'
++checkfunc d_endpwent 'endpwent' "" 'sys/types.h pwd.h'
++checkfunc d_endservent 'endservent' "" 'netdb.h'
+ checkfunc d_fchdir 'fchdir' "0" 'unistd.h'
+-checkfunc d_fchmod 'fchmod' "0,0" 'unistd.h'
+-checkfunc d_fchmodat 'fchmodat' "0,NULL,0,0" 'unistd.h'
++checkfunc d_fchmod 'fchmod' "0,0" 'unistd.h sys/stat.h'
++checkfunc d_fchmodat 'fchmodat' "0,NULL,0,0" 'unistd.h sys/stat.h'
+ checkfunc d_fchown 'fchown' "0,0,0" 'unistd.h'
+ checkfunc d_fcntl 'fcntl' "0,0" 'unistd.h fcntl.h'
+ checkfunc d_fdclose 'fdclose'
+-checkfunc d_ffs 'ffs' 'strings.h'
+-checkfunc d_ffsl 'ffsl' 'strings.h'
++checkfunc d_ffs 'ffs' "0" 'strings.h'
++checkfunc d_ffsl 'ffsl' "0" 'strings.h'
+ checkfunc d_fgetpos 'fgetpos' "NULL, 0" 'stdio.h'
+-checkfunc d_flock 'flock' "0,0" 'unistd.h'
++checkfunc d_flock 'flock' "0,0" 'sys/file.h'
+ checkfunc d_fork 'fork' "" 'unistd.h'
+ checkfunc d_fp_class 'fp_class'
+ checkfunc d_fpathconf 'fpathconf' "0,0" 'unistd.h'
+-checkfunc d_freelocale 'freelocale' '0' 'locale.h'
+-checkfunc d_fseeko 'fseeko' 'NULL,0,0'
+-checkfunc d_fsetpos 'fsetpos' 'NULL,0'
+-checkfunc d_fstatfs 'fstatfs'
+-checkfunc d_fstatvfs 'fstatvfs'
+-checkfunc d_fsync 'fsync'
+-checkfunc d_ftello 'ftello'
+-checkfunc d_futimes 'futimes'
++checkfunc d_freelocale 'freelocale' "0" 'locale.h'
++checkfunc d_fseeko 'fseeko' "NULL,0,0" 'stdio.h'
++checkfunc d_fsetpos 'fsetpos' "NULL,0" 'stdio.h'
++checkfunc d_fstatfs 'fstatfs' "0,NULL" 'sys/vfs.h'
++checkfunc d_fstatvfs 'fstatvfs' "0,NULL" 'sys/statvfs.h'
++checkfunc d_fsync 'fsync' "0" 'unistd.h'
++checkfunc d_ftello 'ftello' "NULL" 'stdio.h'
++checkfunc d_futimes 'futimes' '0,0' 'sys/time.h'
+ checkfunc d_gai_strerror 'gai_strerror' '0' 'sys/types.h sys/socket.h netdb.h'
+-checkfunc d_getaddrinfo 'getaddrinfo'
+-checkfunc d_get_current_dir_name 'get_current_dir_name'
+-checkfunc d_getcwd 'getcwd' 'NULL,0'
++checkfunc d_getaddrinfo 'getaddrinfo' "NULL,NULL,NULL,NULL" 'sys/types.h sys/socket.h netdb.h'
++checkfunc d_get_current_dir_name 'get_current_dir_name' "" 'unistd.h'
++checkfunc d_getcwd 'getcwd' 'NULL,0' 'unistd.h'
+ checkfunc d_getespwnam 'getespwnam'
+ checkfunc d_getfsstat 'getfsstat'
+-checkfunc d_getgrent 'getgrent'
+-checkfunc d_getgrps 'getgroups'
+-checkfunc d_gethbyaddr 'gethostbyaddr'
+-checkfunc d_gethbyname 'gethostbyname'
++checkfunc d_getgrent 'getgrent' "" 'sys/types.h grp.h'
++checkfunc d_getgrps 'getgroups' "0,NULL" 'unistd.h'
++checkfunc d_gethbyaddr 'gethostbyaddr' "NULL,0,0" 'netdb.h'
++checkfunc d_gethbyname 'gethostbyname' "NULL" 'netdb.h'
+ checkfunc d_getnbyaddr 'getnetbyaddr' '0,0' 'netdb.h'
+ checkfunc d_getnbyname 'getnetbyname' 'NULL' 'netdb.h'
+-checkfunc d_gethent 'gethostent'
+-checkfunc d_gethname 'gethostname'
+-checkfunc d_getitimer 'getitimer'
+-checkfunc d_getlogin 'getlogin'
++checkfunc d_gethent 'gethostent' "" 'netdb.h'
++checkfunc d_gethname 'gethostname' "NULL,0" 'unistd.h'
++checkfunc d_getitimer 'getitimer' "0,NULL" 'sys/time.h'
++checkfunc d_getlogin 'getlogin' "" 'unistd.h'
+ checkfunc d_getmnt 'getmnt'
+-checkfunc d_getmntent 'getmntent'
+-checkfunc d_getnameinfo 'getnameinfo'
+-checkfunc d_getnent 'getnetent'
+-checkfunc d_getnetbyaddr 'getnetbyaddr'
+-checkfunc d_getnetbyname 'getnetbyname'
+-checkfunc d_getpagsz 'getpagesize'
++checkfunc d_getmntent 'getmntent' "NULL" 'stdio.h mntent.h'
++checkfunc d_getnameinfo 'getnameinfo' "NULL,0,NULL,0,NULL,0,0" 'sys/socket.h netdb.h'
++checkfunc d_getnent 'getnetent' "" 'netdb.h'
++checkfunc d_getnetbyaddr 'getnetbyaddr' "0,0" 'netdb.h'
++checkfunc d_getnetbyname 'getnetbyname' "NULL" 'netdb.h'
++checkfunc d_getpagsz 'getpagesize' "" 'unistd.h'
+ checkfunc d_getpbyaddr 'getprotobyaddr'
+-checkfunc d_getpbyname 'getprotobyname'
+-checkfunc d_getpbynumber 'getprotobynumber'
+-checkfunc d_getpent 'getprotoent'
+-checkfunc d_getpgid 'getpgid'
++checkfunc d_getpbyname 'getprotobyname' "NULL" 'netdb.h'
++checkfunc d_getpbynumber 'getprotobynumber' "0" 'netdb.h'
++checkfunc d_getpent 'getprotoent' "" 'netdb.h'
++checkfunc d_getpgid 'getpgid' "0" 'unistd.h'
+ checkfunc d_getpgrp 'getpgrp' "" 'unistd.h'
+ checkfunc d_getpgrp2 'getpgrp2'
+-checkfunc d_getppid 'getppid'
++checkfunc d_getppid 'getppid' "" 'unistd.h'
+ checkfunc d_getprior 'getpriority' "0,0" 'sys/time.h sys/resource.h'
+ checkfunc d_getprpwnam 'getprpwnam'
+-checkfunc d_getpwent 'getpwent'
++checkfunc d_getpwent 'getpwent' "" 'sys/types.h pwd.h'
+ checkfunc d_getsbyaddr 'getservbyaddr'
+-checkfunc d_getsbyname 'getservbyname'
+-checkfunc d_getsbyport 'getservbyport'
+-checkfunc d_getsent 'getservent'
+-checkfunc d_setsent 'setservent'
+-checkfunc d_endsent 'endservent'
+-checkfunc d_getspnam 'getspnam'
+-checkfunc d_gettimeod 'gettimeofday' 'NULL,NULL'
++checkfunc d_getsbyname 'getservbyname' "NULL,NULL" 'netdb.h'
++checkfunc d_getsbyport 'getservbyport' "0,NULL" 'netdb.h'
++checkfunc d_getsent 'getservent' "" 'netdb.h'
++checkfunc d_setsent 'setservent' "0" 'netdb.h'
++checkfunc d_endsent 'endservent' "" 'netdb.h'
++checkfunc d_getspnam 'getspnam' "NULL" 'shadow.h'
++checkfunc d_gettimeod 'gettimeofday' 'NULL,NULL' 'sys/time.h'
+ checkfunc d_gmtime64 'gmtime64'
+-checkfunc d_hasmntopt 'hasmntopt'
++checkfunc d_hasmntopt 'hasmntopt' "NULL,NULL" 'stdio.h mntent.h'
+ checkfunc d_htonl 'htonl' "0" 'stdio.h sys/types.h netinet/in.h arpa/inet.h'
+-checkfunc d_ilogbl 'ilogbl'
++checkfunc d_ilogbl 'ilogbl' "0.0" 'math.h'
+ checkfunc d_index 'index' "NULL,0" 'string.h strings.h'
+-checkfunc d_inetaton 'inet_aton'
+-checkfunc d_inetntop 'inet_ntop'
+-checkfunc d_inetpton 'inet_pton'
++checkfunc d_inetaton 'inet_aton' "NULL,NULL" 'sys/socket.h netinet/in.h arpa/inet.h'
++checkfunc d_inetntop 'inet_ntop' "0,NULL,NULL,0" 'arpa/inet.h'
++checkfunc d_inetpton 'inet_pton' "0,NULL,NULL" 'arpa/inet.h'
+ checkfunc d_isascii 'isascii' "'A'" 'stdio.h ctype.h'
+ checkfunc d_isblank 'isblank' "' '" 'stdio.h ctype.h'
+-checkfunc d_killpg 'killpg'
++checkfunc d_killpg 'killpg' "0,0" 'signal.h'
+ checkfunc d_lchown 'lchown' "NULL, 0, 0" 'unistd.h'
+-checkfunc d_link 'link' 'NULL,NULL'
+-checkfunc d_linkat 'linkat' '0,NULL,0,NULL,0'
++checkfunc d_link 'link' 'NULL,NULL' 'unistd.h'
++checkfunc d_linkat 'linkat' '0,NULL,0,NULL,0' 'unistd.h'
+ checkfunc d_localtime64 'localtime64'
+ checkfunc d_localeconv_l 'localeconv_l' 'NULL' 'locale.h'
+-checkfunc d_locconv 'localeconv'
+-checkfunc d_lockf 'lockf'
+-checkfunc d_lstat 'lstat'
+-checkfunc d_madvise 'madvise'
++checkfunc d_locconv 'localeconv' "" 'locale.h'
++checkfunc d_lockf 'lockf' "0,0,0" 'unistd.h'
++checkfunc d_lstat 'lstat' "NULL, NULL" 'sys/stat.h'
++checkfunc d_madvise 'madvise' "NULL,0,0" 'sys/mman.h'
+ checkfunc d_malloc_good_size 'malloc_good_size'
+ checkfunc d_malloc_size 'malloc_size'
+ checkfunc d_mblen 'mblen' '"", 0' 'stdlib.h'
+-checkfunc d_mbstowcs 'mbstowcs'
++checkfunc d_mbstowcs 'mbstowcs' "NULL,NULL,0"
+ checkfunc d_mbtowc 'mbtowc' 'NULL, NULL, 0' 'stdlib.h'
+ checkfunc d_mbrlen 'mbrlen' 'NULL, 0, NULL' 'wchar.h'
+ checkfunc d_mbrtowc 'mbrtowc' 'NULL, NULL, 0, NULL' 'wchar.h'
+@@ -161,152 +162,152 @@ checkfunc d_memmem 'memmem' "NULL, 0, NU
+ checkfunc d_memmove 'memmove' "NULL, NULL, 0" 'string.h'
+ checkfunc d_memrchr 'memrchr' "NULL, 0, 0" 'string.h'
+ checkfunc d_memset 'memset' "NULL, 0, 0" 'string.h'
+-checkfunc d_mkdir 'mkdir' 'NULL, 0'
+-checkfunc d_mkdtemp 'mkdtemp'
+-checkfunc d_mkfifo 'mkfifo'
++checkfunc d_mkdir 'mkdir' 'NULL, 0' 'sys/stat.h'
++checkfunc d_mkdtemp 'mkdtemp' 'NULL'
++checkfunc d_mkfifo 'mkfifo' 'NULL,0' 'sys/types.h sys/stat.h'
+ checkfunc d_mkostemp 'mkostemp' 'NULL,0' 'stdlib.h'
+ checkfunc d_mkstemp 'mkstemp' 'NULL'
+-checkfunc d_mkstemps 'mkstemps'
+-checkfunc d_mktime 'mktime' 'NULL'
++checkfunc d_mkstemps 'mkstemps' 'NULL,0'
++checkfunc d_mktime 'mktime' 'NULL' 'time.h'
+ checkfunc d_mktime64 'mktime64'
+-checkfunc d_mmap 'mmap'
+-checkfunc d_mprotect 'mprotect'
+-checkfunc d_msgctl 'msgctl'
+-checkfunc d_msgget 'msgget'
+-checkfunc d_msgrcv 'msgrcv'
+-checkfunc d_msgsnd 'msgsnd'
+-checkfunc d_msync 'msync'
+-checkfunc d_munmap 'munmap'
++checkfunc d_mmap 'mmap' 'NULL,0,0,0,0,0' 'sys/mman.h'
++checkfunc d_mprotect 'mprotect' 'NULL,0,0' 'sys/mman.h'
++checkfunc d_msgctl 'msgctl' '0,0,NULL' 'sys/msg.h'
++checkfunc d_msgget 'msgget' '0,0' 'sys/msg.h'
++checkfunc d_msgrcv 'msgrcv' '0,NULL,0,0,0' 'sys/msg.h'
++checkfunc d_msgsnd 'msgsnd' '0,NULL,0,0' 'sys/msg.h'
++checkfunc d_msync 'msync' 'NULL,0,0' 'sys/mman.h'
++checkfunc d_munmap 'munmap' 'NULL,0' 'sys/mman.h'
+ checkfunc d_newlocale 'newlocale' '0,NULL,0' 'locale.h'
+-checkfunc d_nice 'nice' '0'
+-checkfunc d_nl_langinfo 'nl_langinfo'
+-checkfunc d_nl_langinfo_l 'nl_langinfo_l'
++checkfunc d_nice 'nice' '0' 'unistd.h'
++checkfunc d_nl_langinfo 'nl_langinfo' '0' 'langinfo.h'
++checkfunc d_nl_langinfo_l 'nl_langinfo_l' '0,0' 'langinfo.h'
+ checkfunc d_open 'open' "NULL,0,0" 'sys/types.h sys/stat.h fcntl.h'
+ checkfunc d_openat 'openat' "0,NULL,0,0" 'sys/types.h sys/stat.h fcntl.h'
+-checkfunc d_pathconf 'pathconf'
+-checkfunc d_pause 'pause'
++checkfunc d_pathconf 'pathconf' 'NULL,0' 'unistd.h'
++checkfunc d_pause 'pause' '' 'unistd.h'
+ checkfunc d_pipe 'pipe' 'NULL' 'fcntl.h unistd.h'
+ checkfunc d_pipe2 'pipe' 'NULL,0' 'fcntl.h unistd.h'
+-checkfunc d_poll 'poll'
+-checkfunc d_prctl 'prctl'
+-checkfunc d_pthread_atfork 'pthread_atfork'
+-checkfunc d_pthread_attr_setscope 'pthread_attr_setscope'
+-checkfunc d_pthread_yield 'pthread_yield'
++checkfunc d_poll 'poll' 'NULL,0,0' 'poll.h'
++checkfunc d_prctl 'prctl' '0,0,0,0,0' 'sys/prctl.h'
++checkfunc d_pthread_atfork 'pthread_atfork' 'NULL,NULL,NULL' 'pthread.h'
++checkfunc d_pthread_attr_setscope 'pthread_attr_setscope' 'NULL,0' 'pthread.h'
++checkfunc d_pthread_yield 'pthread_yield' '' 'pthread.h'
+ checkfunc d_querylocale 'querylocale'
+ checkfunc d_qgcvt 'qgcvt' '1.0,1,NULL'
+-checkfunc d_rand 'rand'
+-checkfunc d_random 'random'
+-checkfunc d_re_comp 're_comp'
+-checkfunc d_readdir 'readdir' 'NULL'
+-checkfunc d_readlink 'readlink'
+-checkfunc d_realpath 'realpath'
+-checkfunc d_readv 'readv'
+-checkfunc d_recvmsg 'recvmsg'
++checkfunc d_rand 'rand' '' 'stdlib.h'
++checkfunc d_random 'random' '' 'stdlib.h'
++checkfunc d_re_comp 're_comp' 'NULL' 'sys/types.h regex.h'
++checkfunc d_readdir 'readdir' 'NULL' 'dirent.h'
++checkfunc d_readlink 'readlink' 'NULL,NULL,0' 'unistd.h'
++checkfunc d_realpath 'realpath' 'NULL,NULL' 'limits.h stdlib.h'
++checkfunc d_readv 'readv' '0,NULL,0' 'sys/uio.h'
++checkfunc d_recvmsg 'recvmsg' '0,NULL,0' 'sys/socket.h'
+ checkfunc d_regcmp 'regcmp'
+-checkfunc d_regcomp 'regcomp'
+-checkfunc d_rename 'rename' 'NULL,NULL'
+-checkfunc d_renameat 'renameat' '0,NULL,0,NULL'
+-checkfunc d_rewinddir 'rewinddir'
+-checkfunc d_rmdir 'rmdir' 'NULL'
+-checkfunc d_sched_yield 'sched_yield'
+-checkfunc d_seekdir 'seekdir'
+-checkfunc d_select 'select' '0,NULL,NULL,NULL,NULL'
+-checkfunc d_semctl 'semctl'
+-checkfunc d_semget 'semget'
+-checkfunc d_semop 'semop'
+-checkfunc d_sendmsg 'sendmsg'
+-checkfunc d_setegid 'setegid'
+-checkfunc d_setent 'setservent'
+-checkfunc d_setenv 'setenv'
+-checkfunc d_seteuid 'seteuid'
+-checkfunc d_setgrent 'setgrent'
+-checkfunc d_setgrps 'setgroups'
+-checkfunc d_sethent 'sethostent'
+-checkfunc d_setitimer 'setitimer'
+-checkfunc d_setlinebuf 'setlinebuf'
++checkfunc d_regcomp 'regcomp' 'NULL,NULL,0' 'regex.h'
++checkfunc d_rename 'rename' 'NULL,NULL' 'stdio.h'
++checkfunc d_renameat 'renameat' '0,NULL,0,NULL' 'fcntl.h stdio.h'
++checkfunc d_rewinddir 'rewinddir' 'NULL' 'sys/types.h dirent.h'
++checkfunc d_rmdir 'rmdir' 'NULL' 'unistd.h'
++checkfunc d_sched_yield 'sched_yield' '' 'sched.h'
++checkfunc d_seekdir 'seekdir' 'NULL,0' 'dirent.h'
++checkfunc d_select 'select' '0,NULL,NULL,NULL,NULL' 'sys/select.h'
++checkfunc d_semctl 'semctl' '0,0,0, NULL' 'sys/sem.h'
++checkfunc d_semget 'semget' '0,0,0' 'sys/sem.h'
++checkfunc d_semop 'semop' '0,NULL,0' 'sys/sem.h'
++checkfunc d_sendmsg 'sendmsg' '0,NULL,0' 'sys/socket.h'
++checkfunc d_setegid 'setegid' '0' 'unistd.h'
++checkfunc d_setent 'setservent' '0' 'netdb.h'
++checkfunc d_setenv 'setenv' 'NULL,NULL,0'
++checkfunc d_seteuid 'seteuid' '0' 'unistd.h'
++checkfunc d_setgrent 'setgrent' '' 'sys/types.h grp.h'
++checkfunc d_setgrps 'setgroups' '0,NULL' 'unistd.h grp.h'
++checkfunc d_sethent 'sethostent' '0' 'netdb.h'
++checkfunc d_setitimer 'setitimer' '0,NULL,NULL' 'sys/time.h'
++checkfunc d_setlinebuf 'setlinebuf' 'NULL' 'stdio.h'
+ checkfunc d_setlocale 'setlocale' "0,NULL" 'locale.h'
+-checkfunc d_setnent 'setnetent'
+-checkfunc d_setpent 'setprotoent'
+-checkfunc d_setpgid 'setpgid'
+-checkfunc d_setpgrp 'setpgrp'
++checkfunc d_setnent 'setnetent' '0' 'netdb.h'
++checkfunc d_setpent 'setprotoent' '0' 'netdb.h'
++checkfunc d_setpgid 'setpgid' '0,0' 'unistd.h'
++checkfunc d_setpgrp 'setpgrp' '' 'unistd.h'
+ checkfunc d_setpgrp2 'setpgrp2'
+-checkfunc d_setprior 'setpriority'
+-checkfunc d_setproctitle 'setproctitle'
+-checkfunc d_setpwent 'setpwent'
+-checkfunc d_setregid 'setregid'
+-checkfunc d_setresgid 'setresgid'
+-checkfunc d_setresuid 'setresuid'
+-checkfunc d_setreuid 'setreuid'
+-checkfunc d_setrgid 'setrgid'
++checkfunc d_setprior 'setpriority' '0,0,0' 'sys/resource.h'
++checkfunc d_setproctitle 'setproctitle' 'NULL,NULL' 'sys/types.h unistd.h'
++checkfunc d_setpwent 'setpwent' '' 'sys/types.h pwd.h'
++checkfunc d_setregid 'setregid' '0,0' 'unistd.h'
++checkfunc d_setresgid 'setresgid' '0,0,0' 'unistd.h'
++checkfunc d_setresuid 'setresuid' '0,0,0' 'unistd.h'
++checkfunc d_setreuid 'setreuid' '0,0' 'unistd.h'
++checkfunc d_setrgid 'setrgid' ''
+ checkfunc d_setruid 'setruid'
+-checkfunc d_setsid 'setsid'
+-checkfunc d_setvbuf 'setvbuf' 'NULL,NULL,0,0'
++checkfunc d_setsid 'setsid' '' 'unistd.h'
++checkfunc d_setvbuf 'setvbuf' 'NULL,NULL,0,0' 'stdio.h'
+ checkfunc d_sfreserve 'sfreserve' "" 'sfio.h'
+-checkfunc d_shmat 'shmat'
+-checkfunc d_shmctl 'shmctl'
+-checkfunc d_shmdt 'shmdt'
+-checkfunc d_shmget 'shmget'
+-checkfunc d_sigaction 'sigaction'
+-checkfunc d_sigprocmask 'sigprocmask'
++checkfunc d_shmat 'shmat' '0,NULL,0' 'sys/shm.h'
++checkfunc d_shmctl 'shmctl' '0,0,NULL' 'sys/shm.h'
++checkfunc d_shmdt 'shmdt' 'NULL' 'sys/shm.h'
++checkfunc d_shmget 'shmget' '0,0,0' 'sys/shm.h'
++checkfunc d_sigaction 'sigaction' '0,NULL,NULL' 'signal.h'
++checkfunc d_sigprocmask 'sigprocmask' '0,NULL,NULL' 'signal.h'
+ checkfunc d_sigsetjmp 'sigsetjmp' "NULL,0" 'setjmp.h'
+-checkfunc d_snprintf 'snprintf'
+-checkfunc d_sockatmark 'sockatmark'
++checkfunc d_snprintf 'snprintf' 'NULL,0,NULL' 'stdio.h'
++checkfunc d_sockatmark 'sockatmark' '0' 'sys/socket.h'
+ checkfunc d_socket 'socket' "0,0,0" 'sys/types.h sys/socket.h'
+-checkfunc d_sockpair 'socketpair'
++checkfunc d_sockpair 'socketpair' '0,0,0,NULL' 'sys/socket.h'
+ checkfunc d_socks5_init 'socks5_init'
+-checkfunc d_stat 'stat'
+-checkfunc d_statvfs 'statvfs'
++checkfunc d_stat 'stat' 'NULL,NULL' 'sys/stat.h'
++checkfunc d_statvfs 'statvfs' 'NULL,NULL' 'sys/statvfs.h'
+ checkfunc d_strchr 'strchr' "NULL,0" 'string.h strings.h'
+ checkfunc d_strcoll 'strcoll' "NULL,NULL" 'string.h'
+ checkfunc d_strerror 'strerror' "0" 'string.h stdlib.h'
+-checkfunc d_strerror_l 'strerror_l'
++checkfunc d_strerror_l 'strerror_l' '0,NULL' 'string.h'
+ checkfunc d_strftime 'strftime' "NULL,0,NULL,NULL" 'time.h'
+-checkfunc d_strlcat 'strlcat'
+-checkfunc d_strlcpy 'strlcpy'
++checkfunc d_strlcat 'strlcat' 'NULL,NULL,0' 'string.h'
++checkfunc d_strlcpy 'strlcpy' 'NULL,NULL,0' 'string.h'
+ checkfunc d_strnlen 'strnlen' '"",0' 'string.h'
+ checkfunc d_strtod 'strtod' 'NULL,NULL'
+ checkfunc d_strtod_l 'strtod_l'
+ checkfunc d_strtol 'strtol' 'NULL,NULL,0'
+-checkfunc d_strtold 'strtold'
++checkfunc d_strtold 'strtold' 'NULL,NULL'
+ checkfunc d_strtold_l 'strtold_l'
+-checkfunc d_strtoll 'strtoll'
+-checkfunc d_strtoq 'strtoq'
++checkfunc d_strtoll 'strtoll' 'NULL,NULL,0'
++checkfunc d_strtoq 'strtoq' 'NULL,NULL,0'
+ checkfunc d_strtoul 'strtoul' 'NULL,NULL,0'
+ checkfunc d_strtoull 'strtoull' 'NULL,NULL,0'
+-checkfunc d_strtouq 'strtouq'
+-checkfunc d_strxfrm 'strxfrm'
+-checkfunc d_strxfrm_l 'strxfrm_l'
+-checkfunc d_symlink 'symlink'
+-checkfunc d_syscall 'syscall'
+-checkfunc d_sysconf 'sysconf' '0'
++checkfunc d_strtouq 'strtouq' 'NULL,NULL,0'
++checkfunc d_strxfrm 'strxfrm' 'NULL,NULL,0' 'string.h'
++checkfunc d_strxfrm_l 'strxfrm_l' 'NULL,NULL,0,NULL' 'string.h'
++checkfunc d_symlink 'symlink' 'NULL,NULL' 'unistd.h'
++checkfunc d_syscall 'syscall' '0,NULL' 'sys/syscall.h unistd.h'
++checkfunc d_sysconf 'sysconf' '0' 'unistd.h'
+ checkfunc d_system 'system' 'NULL'
+-checkfunc d_tcgetpgrp 'tcgetpgrp'
+-checkfunc d_tcsetpgrp 'tcsetpgrp'
+-checkfunc d_telldir 'telldir'
+-checkfunc d_time 'time' 'NULL'
+-checkfunc d_timegm 'timegm'
+-checkfunc d_times 'times' 'NULL'
++checkfunc d_tcgetpgrp 'tcgetpgrp' '0' 'unistd.h'
++checkfunc d_tcsetpgrp 'tcsetpgrp' '0,0' 'unistd.h'
++checkfunc d_telldir 'telldir' 'NULL' 'dirent.h'
++checkfunc d_time 'time' 'NULL' 'time.h'
++checkfunc d_timegm 'timegm' 'NULL' 'time.h'
++checkfunc d_times 'times' 'NULL' 'sys/times.h'
+ checkfunc d_towlower 'towlower' '0' 'wctype.h'
+ checkfunc d_towupper 'towupper' '0' 'wctype.h'
+-checkfunc d_truncate 'truncate' 'NULL,0'
+-checkfunc d_ualarm 'ualarm'
+-checkfunc d_umask 'umask' '0'
+-checkfunc d_uname 'uname'
+-checkfunc d_unlinkat 'unlinkat' '0,NULL,0'
++checkfunc d_truncate 'truncate' 'NULL,0' 'unistd.h'
++checkfunc d_ualarm 'ualarm' 'NULL,NULL' 'unistd.h'
++checkfunc d_umask 'umask' '0' 'sys/stat.h'
++checkfunc d_uname 'uname' 'NULL' 'sys/utsname.h'
++checkfunc d_unlinkat 'unlinkat' '0,NULL,0' 'unistd.h fcntl.h'
+ checkfunc d_unordered 'unordered'
+-checkfunc d_unsetenv 'unsetenv'
++checkfunc d_unsetenv 'unsetenv' 'NULL'
+ checkfunc d_uselocale 'uselocale' '0' 'locale.h'
+-checkfunc d_usleep 'usleep'
+-checkfunc d_ustat 'ustat'
++checkfunc d_usleep 'usleep' '0' 'unistd.h'
++checkfunc d_ustat 'ustat' '0,NULL' 'sys/types.h unistd.h'
+ define d_vfork 'undef' # unnecessary
+-checkfunc d_vprintf 'vprintf' 'NULL,0'
+-checkfunc d_vsnprintf 'vsnprintf'
+-checkfunc d_wait4 'wait4'
+-checkfunc d_waitpid 'waitpid' '0,NULL,0'
+-checkfunc d_wcrtomb 'wcrtomb'
+-checkfunc d_wcscmp 'wcscmp'
+-checkfunc d_wcstombs 'wcstombs' 'NULL,NULL,0'
+-checkfunc d_wcsxfrm 'wcsxfrm'
+-checkfunc d_wctomb 'wctomb'
+-checkfunc d_writev 'writev'
++checkfunc d_vprintf 'vprintf' 'NULL,0' 'stdio.h'
++checkfunc d_vsnprintf 'vsnprintf' 'NULL,0,NULL,NULL' 'stdio.h'
++checkfunc d_wait4 'wait4' '0,NULL,0,NULL' 'sys/wait.h'
++checkfunc d_waitpid 'waitpid' '0,NULL,0' 'sys/wait.h'
++checkfunc d_wcrtomb 'wcrtomb' 'NULL,0,NULL' 'wchar.h'
++checkfunc d_wcscmp 'wcscmp' 'NULL,NULL' 'wchar.h'
++checkfunc d_wcstombs 'wcstombs' 'NULL,NULL,0' 'wchar.h'
++checkfunc d_wcsxfrm 'wcsxfrm' 'NULL,NULL,0' 'wchar.h'
++checkfunc d_wctomb 'wctomb' 'NULL,NULL' 'wchar.h'
++checkfunc d_writev 'writev' '0,NULL,0' 'sys/uio.h'
+ unset includes
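
To illustrate why the perl-cross change above matters, here is a hedged sketch of the kind of probe Configure generates; it is not the actual generated try.c and the file name is invented. Each checkfunc probe compiles a tiny program that calls the function with dummy arguments. Without the right header, and without _GNU_SOURCE for GNU extensions such as accept4(), no prototype is in scope, clang 15 turns the implicit declaration into a hard error, and the probe reports the function as missing even though libc provides it.

/* probe-sketch.c - illustrative stand-in for a perl-cross configure probe.
 *
 * Without the two lines below, clang >= 15 rejects the call:
 *   error: call to undeclared function 'accept4'; ISO C99 and later do not
 *   support implicit function declarations
 * With them, the probe compiles and the link test can succeed when libc
 * actually provides the symbol.
 */
#define _GNU_SOURCE          /* accept4() is a GNU extension */
#include <sys/socket.h>      /* declares accept4() */

int main(void)
{
    /* dummy arguments, in the spirit of the checkfunc probes above */
    accept4(0, (struct sockaddr *)0, (socklen_t *)0, 0);
    return 0;
}
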
diff --git a/poky/meta/recipes-devtools/perl-cross/perlcross_1.4.bb b/poky/meta/recipes-devtools/perl-cross/perlcross_1.4.bb
index 2704976..17ce901 100644
--- a/poky/meta/recipes-devtools/perl-cross/perlcross_1.4.bb
+++ b/poky/meta/recipes-devtools/perl-cross/perlcross_1.4.bb
@@ -15,8 +15,7 @@
            file://0001-perl-cross-add-LDFLAGS-when-linking-libperl.patch \
            file://determinism.patch \
            file://0001-Makefile-check-the-file-if-patched-or-not.patch \
-           file://0001-Makefile-correctly-list-modules-when-cleaning-them.patch \
-           file://0001-Makefile-do-not-clean-config.h-xconfig.h.patch \
+           file://0001-configure_func.sh-Add-_GNU_SOURCE-define-and-functio.patch \
            "
 UPSTREAM_CHECK_URI = "https://github.com/arsv/perl-cross/releases/"
 
diff --git a/poky/meta/recipes-devtools/perl/perl_5.36.0.bb b/poky/meta/recipes-devtools/perl/perl_5.36.0.bb
index 4456cdb..2dc558a 100644
--- a/poky/meta/recipes-devtools/perl/perl_5.36.0.bb
+++ b/poky/meta/recipes-devtools/perl/perl_5.36.0.bb
@@ -28,7 +28,7 @@
 
 SRC_URI[perl.sha256sum] = "e26085af8ac396f62add8a533c3a0ea8c8497d836f0689347ac5abd7b7a4e00a"
 
-S = "${WORKDIR}/perl-${PV}"
+B = "${WORKDIR}/perl-${PV}-build"
 
 inherit upstream-version-is-even update-alternatives
 
@@ -45,8 +45,13 @@
 # Don't generate comments in enc2xs output files. They are not reproducible
 export ENC2XS_NO_COMMENTS = "1"
 
+CFLAGS += "-D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64"
+
 do_configure:prepend() {
-    cp -rfp ${STAGING_DATADIR_NATIVE}/perl-cross/* ${S}
+    rm -rf ${B}
+    cp -rfp ${S} ${B}
+    cp -rfp ${STAGING_DATADIR_NATIVE}/perl-cross/* ${B}
+    cd ${B}
 }
 
 do_configure:class-target() {
@@ -114,7 +119,6 @@
             ")"
         echo "#define PERL_BUILD_DATE \"$PERL_BUILD_DATE\"" >> config.h
     fi
-    oe_runmake clean
 }
 
 do_compile() {
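
Regarding the CFLAGS added to the perl recipe above, a small sketch (illustrative only, not taken from the recipe; the file name is made up) of what -D_FILE_OFFSET_BITS=64 buys on a 32-bit glibc target: off_t and the stat family silently switch to their 64-bit variants, so the interpreter and its configure probes agree on a single large-file ABI instead of mixing stat and stat64.

/* lfs-sketch.c - illustrative only.
 *
 * On a typical 32-bit glibc target:
 *   cc -c lfs-sketch.c                          -> off_t is 32-bit, the
 *                                                  static assert fires
 *   cc -D_FILE_OFFSET_BITS=64 -c lfs-sketch.c   -> off_t is 64-bit and
 *                                                  stat() maps to stat64()
 */
#include <sys/types.h>
#include <sys/stat.h>

_Static_assert(sizeof(off_t) == 8,
               "build with -D_FILE_OFFSET_BITS=64 to get a 64-bit off_t");

int main(void)
{
    struct stat st;   /* the 64-bit layout when large-file support is on */
    return stat("/", &st);
}
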
diff --git a/poky/meta/recipes-devtools/pkgconf/pkgconf_1.8.0.bb b/poky/meta/recipes-devtools/pkgconf/pkgconf_1.8.0.bb
deleted file mode 100644
index 887e15e..0000000
--- a/poky/meta/recipes-devtools/pkgconf/pkgconf_1.8.0.bb
+++ /dev/null
@@ -1,67 +0,0 @@
-SUMMARY = "pkgconf provides compiler and linker configuration for development frameworks."
-DESCRIPTION = "pkgconf is a program which helps to configure compiler and linker \
-flags for development frameworks. It is similar to pkg-config from \
-freedesktop.org, providing additional functionality while also maintaining \
-compatibility."
-HOMEPAGE = "http://pkgconf.org"
-BUGTRACKER = "https://github.com/pkgconf/pkgconf/issues"
-SECTION = "devel"
-PROVIDES += "pkgconfig"
-RPROVIDES:${PN} += "pkgconfig"
-
-# The pkgconf license seems to be functionally equivalent to BSD-2-Clause or
-# ISC, but has different wording, so needs its own name.
-LICENSE = "pkgconf"
-LIC_FILES_CHKSUM = "file://COPYING;md5=2214222ec1a820bd6cc75167a56925e0"
-
-SRC_URI = "\
-    https://distfiles.dereferenced.org/pkgconf/pkgconf-${PV}.tar.xz \
-    file://pkg-config-wrapper \
-    file://pkg-config-native.in \
-    file://pkg-config-esdk.in \
-"
-SRC_URI[sha256sum] = "ef9c7e61822b7cb8356e6e9e1dca58d9556f3200d78acab35e4347e9d4c2bbaf"
-
-inherit autotools
-
-EXTRA_OECONF += "--with-pkg-config-dir='${libdir}/pkgconfig:${datadir}/pkgconfig'"
-
-do_install:append () {
-    # Install a wrapper which deals, as much as possible with pkgconf vs
-    # pkg-config compatibility issues.
-    install -m 0755 "${WORKDIR}/pkg-config-wrapper" "${D}${bindir}/pkg-config"
-}
-
-do_install:append:class-native () {
-    # Install a pkg-config-native wrapper that will use the native sysroot instead
-    # of the MACHINE sysroot, for using pkg-config when building native tools.
-    sed -e "s|@PATH_NATIVE@|${PKG_CONFIG_PATH}|" \
-        < ${WORKDIR}/pkg-config-native.in > ${B}/pkg-config-native
-    install -m755 ${B}/pkg-config-native ${D}${bindir}/pkg-config-native
-    sed -e "s|@PATH_NATIVE@|${PKG_CONFIG_PATH}|" \
-        -e "s|@LIBDIR_NATIVE@|${PKG_CONFIG_LIBDIR}|" \
-        < ${WORKDIR}/pkg-config-esdk.in > ${B}/pkg-config-esdk
-    install -m755 ${B}/pkg-config-esdk ${D}${bindir}/pkg-config-esdk
-}
-
-# When using the RPM generated automatic package dependencies, some packages
-# will end up requiring 'pkgconfig(pkg-config)'.  Allow this behavior by
-# specifying an appropriate provide.
-RPROVIDES:${PN} += "pkgconfig(pkg-config)"
-
-# Include pkg.m4 in the main package, leaving libpkgconf dev files in -dev
-FILES:${PN}-dev:remove = "${datadir}/aclocal"
-FILES:${PN} += "${datadir}/aclocal"
-
-BBCLASSEXTEND += "native nativesdk"
-
-pkgconf_sstate_fixup_esdk () {
-   if [ "${BB_CURRENTTASK}" = "populate_sysroot_setscene" -a "${WITHIN_EXT_SDK}" = "1" ] ; then
-       pkgconfdir="${SSTATE_INSTDIR}/recipe-sysroot-native/${bindir_native}"
-       mv $pkgconfdir/pkg-config $pkgconfdir/pkg-config.real
-       ln -rs $pkgconfdir/pkg-config-esdk $pkgconfdir/pkg-config
-       sed -i -e "s|^pkg-config|pkg-config.real|" $pkgconfdir/pkg-config-native
-   fi
-}
-
-SSTATEPOSTUNPACKFUNCS:append:class-native = " pkgconf_sstate_fixup_esdk"
diff --git a/poky/meta/recipes-devtools/pkgconf/pkgconf_1.9.3.bb b/poky/meta/recipes-devtools/pkgconf/pkgconf_1.9.3.bb
new file mode 100644
index 0000000..453da89
--- /dev/null
+++ b/poky/meta/recipes-devtools/pkgconf/pkgconf_1.9.3.bb
@@ -0,0 +1,67 @@
+SUMMARY = "pkgconf provides compiler and linker configuration for development frameworks."
+DESCRIPTION = "pkgconf is a program which helps to configure compiler and linker \
+flags for development frameworks. It is similar to pkg-config from \
+freedesktop.org, providing additional functionality while also maintaining \
+compatibility."
+HOMEPAGE = "http://pkgconf.org"
+BUGTRACKER = "https://github.com/pkgconf/pkgconf/issues"
+SECTION = "devel"
+PROVIDES += "pkgconfig"
+RPROVIDES:${PN} += "pkgconfig"
+
+# The pkgconf license seems to be functionally equivalent to BSD-2-Clause or
+# ISC, but has different wording, so needs its own name.
+LICENSE = "pkgconf"
+LIC_FILES_CHKSUM = "file://COPYING;md5=2214222ec1a820bd6cc75167a56925e0"
+
+SRC_URI = "\
+    https://distfiles.dereferenced.org/pkgconf/pkgconf-${PV}.tar.xz \
+    file://pkg-config-wrapper \
+    file://pkg-config-native.in \
+    file://pkg-config-esdk.in \
+"
+SRC_URI[sha256sum] = "5fb355b487d54fb6d341e4f18d4e2f7e813a6622cf03a9e87affa6a40565699d"
+
+inherit autotools
+
+EXTRA_OECONF += "--with-pkg-config-dir='${libdir}/pkgconfig:${datadir}/pkgconfig'"
+
+do_install:append () {
+    # Install a wrapper which deals, as much as possible, with pkgconf vs
+    # pkg-config compatibility issues.
+    install -m 0755 "${WORKDIR}/pkg-config-wrapper" "${D}${bindir}/pkg-config"
+}
+
+do_install:append:class-native () {
+    # Install a pkg-config-native wrapper that will use the native sysroot instead
+    # of the MACHINE sysroot, for using pkg-config when building native tools.
+    sed -e "s|@PATH_NATIVE@|${PKG_CONFIG_PATH}|" \
+        < ${WORKDIR}/pkg-config-native.in > ${B}/pkg-config-native
+    install -m755 ${B}/pkg-config-native ${D}${bindir}/pkg-config-native
+    sed -e "s|@PATH_NATIVE@|${PKG_CONFIG_PATH}|" \
+        -e "s|@LIBDIR_NATIVE@|${PKG_CONFIG_LIBDIR}|" \
+        < ${WORKDIR}/pkg-config-esdk.in > ${B}/pkg-config-esdk
+    install -m755 ${B}/pkg-config-esdk ${D}${bindir}/pkg-config-esdk
+}
+
+# When using the RPM generated automatic package dependencies, some packages
+# will end up requiring 'pkgconfig(pkg-config)'.  Allow this behavior by
+# specifying an appropriate provide.
+RPROVIDES:${PN} += "pkgconfig(pkg-config)"
+
+# Include pkg.m4 in the main package, leaving libpkgconf dev files in -dev
+FILES:${PN}-dev:remove = "${datadir}/aclocal"
+FILES:${PN} += "${datadir}/aclocal"
+
+BBCLASSEXTEND += "native nativesdk"
+
+pkgconf_sstate_fixup_esdk () {
+   if [ "${BB_CURRENTTASK}" = "populate_sysroot_setscene" -a "${WITHIN_EXT_SDK}" = "1" ] ; then
+       pkgconfdir="${SSTATE_INSTDIR}/recipe-sysroot-native/${bindir_native}"
+       mv $pkgconfdir/pkg-config $pkgconfdir/pkg-config.real
+       ln -rs $pkgconfdir/pkg-config-esdk $pkgconfdir/pkg-config
+       sed -i -e "s|^pkg-config|pkg-config.real|" $pkgconfdir/pkg-config-native
+   fi
+}
+
+SSTATEPOSTUNPACKFUNCS:append:class-native = " pkgconf_sstate_fixup_esdk"
diff --git a/poky/meta/recipes-devtools/pseudo/pseudo_git.bb b/poky/meta/recipes-devtools/pseudo/pseudo_git.bb
index e7ef6a7..c34580b 100644
--- a/poky/meta/recipes-devtools/pseudo/pseudo_git.bb
+++ b/poky/meta/recipes-devtools/pseudo/pseudo_git.bb
@@ -13,7 +13,7 @@
     file://older-glibc-symbols.patch"
 SRC_URI[prebuilt.sha256sum] = "ed9f456856e9d86359f169f46a70ad7be4190d6040282b84c8d97b99072485aa"
 
-SRCREV = "2b4b88eb513335b0ece55fe51854693d9b20de35"
+SRCREV = "c9670c27ff67ab899007ce749254b16091577e55"
 S = "${WORKDIR}/git"
 PV = "1.9.0+git${SRCPV}"
 
diff --git a/poky/meta/recipes-devtools/python/python3-cython_0.29.32.bb b/poky/meta/recipes-devtools/python/python3-cython_0.29.32.bb
index 26333cb..8fed1cf 100644
--- a/poky/meta/recipes-devtools/python/python3-cython_0.29.32.bb
+++ b/poky/meta/recipes-devtools/python/python3-cython_0.29.32.bb
@@ -20,17 +20,17 @@
 PACKAGEBUILDPKGD += "cython_fix_sources"
 
 cython_fix_sources () {
-	for f in ${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython-${PV}/Cython/Compiler/FlowControl.c \
-		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython-${PV}/Cython/Compiler/FusedNode.c \
-		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython-${PV}/Cython/Compiler/Scanning.c \
-		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython-${PV}/Cython/Compiler/Visitor.c \
-		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython-${PV}/Cython/Plex/Actions.c \
-		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython-${PV}/Cython/Plex/Scanners.c \
-		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython-${PV}/Cython/Runtime/refnanny.c \
-		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython-${PV}/Cython/Tempita/_tempita.c \
+	for f in ${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython/Compiler/FlowControl.c \
+		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython/Compiler/FusedNode.c \
+		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython/Compiler/Scanning.c \
+		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython/Compiler/Visitor.c \
+		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython/Plex/Actions.c \
+		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython/Plex/Scanners.c \
+		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython/Runtime/refnanny.c \
+		${PKGD}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/Cython/Tempita/_tempita.c \
 		${PKGD}${libdir}/${PYTHON_DIR}/site-packages/Cython*/SOURCES.txt; do
 		if [ -e $f ]; then
-			sed -i -e 's#${WORKDIR}#/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}#g' $f
+			sed -i -e 's#${WORKDIR}/Cython-${PV}#/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}#g' $f
 		fi
 	done
 }
diff --git a/poky/meta/recipes-devtools/python/python3-dtschema_2022.7.bb b/poky/meta/recipes-devtools/python/python3-dtschema_2022.7.bb
deleted file mode 100644
index dd9092f..0000000
--- a/poky/meta/recipes-devtools/python/python3-dtschema_2022.7.bb
+++ /dev/null
@@ -1,15 +0,0 @@
-DESCRIPTION = "Tooling for devicetree validation using YAML and jsonschema"
-HOMEPAGE = "https://github.com/devicetree-org/dt-schema"
-LICENSE = "BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=457495c8fa03540db4a576bf7869e811"
-
-inherit pypi setuptools3
-
-PYPI_PACKAGE = "dtschema"
-
-SRC_URI[sha256sum] = "2238753fa16bee7b26841cced75d745af777b896190f40aff5802992335201bb"
-
-DEPENDS += "python3-setuptools-scm-native"
-RDEPENDS:${PN} += "python3-ruamel-yaml python3-jsonschema python3-rfc3987"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-dtschema_2022.8.1.bb b/poky/meta/recipes-devtools/python/python3-dtschema_2022.8.1.bb
new file mode 100644
index 0000000..38f646e
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-dtschema_2022.8.1.bb
@@ -0,0 +1,15 @@
+DESCRIPTION = "Tooling for devicetree validation using YAML and jsonschema"
+HOMEPAGE = "https://github.com/devicetree-org/dt-schema"
+LICENSE = "BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=457495c8fa03540db4a576bf7869e811"
+
+inherit pypi setuptools3
+
+PYPI_PACKAGE = "dtschema"
+
+SRC_URI[sha256sum] = "3e56a9920944223d6f93fd51ada19dd8db554ac9182ef52c1c5c9d4966ab30aa"
+
+DEPENDS += "python3-setuptools-scm-native"
+RDEPENDS:${PN} += "python3-ruamel-yaml python3-jsonschema python3-rfc3987"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-hatchling_1.6.0.bb b/poky/meta/recipes-devtools/python/python3-hatchling_1.6.0.bb
deleted file mode 100644
index e06bdf0..0000000
--- a/poky/meta/recipes-devtools/python/python3-hatchling_1.6.0.bb
+++ /dev/null
@@ -1,17 +0,0 @@
-SUMMARY = "The extensible, standards compliant build backend used by Hatch"
-HOMEPAGE = "https://hatch.pypa.io/"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=cbe2fd33fc9297692812fc94b7d27fd9"
-
-inherit pypi python_hatchling
-
-DEPENDS += "python3-pluggy-native python3-tomli-native python3-pathspec-native python3-packaging-native python3-editables-native"
-DEPENDS:remove:class-native = "python3-hatchling-native"
-
-SRC_URI[sha256sum] = "bd6e8505de511ac4217ff50927f6d1845494608e401e63a62b830c31fb613544"
-
-do_compile:prepend() {
-    export PYTHONPATH=src
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-hatchling_1.8.1.bb b/poky/meta/recipes-devtools/python/python3-hatchling_1.8.1.bb
new file mode 100644
index 0000000..bfdb664
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-hatchling_1.8.1.bb
@@ -0,0 +1,17 @@
+SUMMARY = "The extensible, standards compliant build backend used by Hatch"
+HOMEPAGE = "https://hatch.pypa.io/"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=cbe2fd33fc9297692812fc94b7d27fd9"
+
+inherit pypi python_hatchling
+
+DEPENDS += "python3-pluggy-native python3-tomli-native python3-pathspec-native python3-packaging-native python3-editables-native"
+DEPENDS:remove:class-native = "python3-hatchling-native"
+
+SRC_URI[sha256sum] = "448b04b23faed669b2b565b998ac955af4feea66c5deed3a1212ac9399d2e1cd"
+
+do_compile:prepend() {
+    export PYTHONPATH=src
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-hypothesis_6.46.11.bb b/poky/meta/recipes-devtools/python/python3-hypothesis_6.46.11.bb
new file mode 100644
index 0000000..1d9772d
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-hypothesis_6.46.11.bb
@@ -0,0 +1,38 @@
+SUMMARY = "A library for property-based testing"
+HOMEPAGE = "https://github.com/HypothesisWorks/hypothesis/tree/master/hypothesis-python"
+LICENSE = "MPL-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=4ee62c16ebd0f4f99d906f36b7de8c3c"
+
+PYPI_PACKAGE = "hypothesis"
+
+inherit pypi setuptools3 ptest
+
+SRC_URI += " \
+    file://run-ptest \
+    file://test_binary_search.py \
+    file://test_rle.py \
+    "
+
+SRC_URI[sha256sum] = "f5c1cf61b24b094355577a6b8fbbb8eb54c1b0216fbc0519af97c46bddf43c42"
+
+RDEPENDS:${PN} += " \
+    python3-attrs \
+    python3-compression \
+    python3-core \
+    python3-json \
+    python3-sortedcontainers \
+    python3-statistics \
+    python3-unittest \
+    "
+
+RDEPENDS:${PN}-ptest += " \
+    ${PYTHON_PN}-pytest \
+    "
+
+do_install_ptest() {
+    install -d ${D}${PTEST_PATH}/examples
+    install -m 0755 ${WORKDIR}/test_binary_search.py ${D}${PTEST_PATH}/examples/
+    install -m 0755 ${WORKDIR}/test_rle.py ${D}${PTEST_PATH}/examples/
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-hypothesis_6.50.1.bb b/poky/meta/recipes-devtools/python/python3-hypothesis_6.50.1.bb
deleted file mode 100644
index 0c93c12..0000000
--- a/poky/meta/recipes-devtools/python/python3-hypothesis_6.50.1.bb
+++ /dev/null
@@ -1,38 +0,0 @@
-SUMMARY = "A library for property-based testing"
-HOMEPAGE = "https://github.com/HypothesisWorks/hypothesis/tree/master/hypothesis-python"
-LICENSE = "MPL-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=4ee62c16ebd0f4f99d906f36b7de8c3c"
-
-PYPI_PACKAGE = "hypothesis"
-
-inherit pypi setuptools3 ptest
-
-SRC_URI += " \
-    file://run-ptest \
-    file://test_binary_search.py \
-    file://test_rle.py \
-    "
-
-SRC_URI[sha256sum] = "1a19ade3b27825cab622c95fcf25182a27a42f97589c163173fcbdafb8621d1e"
-
-RDEPENDS:${PN} += " \
-    python3-attrs \
-    python3-compression \
-    python3-core \
-    python3-json \
-    python3-sortedcontainers \
-    python3-statistics \
-    python3-unittest \
-    "
-
-RDEPENDS:${PN}-ptest += " \
-    ${PYTHON_PN}-pytest \
-    "
-
-do_install_ptest() {
-    install -d ${D}${PTEST_PATH}/examples
-    install -m 0755 ${WORKDIR}/test_binary_search.py ${D}${PTEST_PATH}/examples/
-    install -m 0755 ${WORKDIR}/test_rle.py ${D}${PTEST_PATH}/examples/
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-jsonschema_4.9.0.bb b/poky/meta/recipes-devtools/python/python3-jsonschema_4.9.0.bb
deleted file mode 100644
index 66e0aff..0000000
--- a/poky/meta/recipes-devtools/python/python3-jsonschema_4.9.0.bb
+++ /dev/null
@@ -1,48 +0,0 @@
-SUMMARY = "An implementation of JSON Schema validation for Python"
-HOMEPAGE = "https://github.com/Julian/jsonschema"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=7a60a81c146ec25599a3e1dabb8610a8 \
-                    file://json/LICENSE;md5=9d4de43111d33570c8fe49b4cb0e01af"
-
-SRC_URI[sha256sum] = "df10e65c8f3687a48e93d0d348ce0ce5f897b5a28e9bbcbbe8f7c7eaf019e850"
-
-inherit pypi python_hatchling
-
-PACKAGES =+ "${PN}-tests"
-FILES:${PN}-tests = "${libdir}/${PYTHON_DIR}/site-packages/jsonschema/tests"
-
-DEPENDS += "${PYTHON_PN}-hatch-vcs-native"
-
-PACKAGECONFIG ??= "format"
-PACKAGECONFIG[format] = ",,,\
-    ${PYTHON_PN}-idna \
-    ${PYTHON_PN}-jsonpointer \
-    ${PYTHON_PN}-webcolors \
-    ${PYTHON_PN}-rfc3987 \
-    ${PYTHON_PN}-strict-rfc3339 \
-"
-PACKAGECONFIG[nongpl] = ",,,\
-    ${PYTHON_PN}-idna \
-    ${PYTHON_PN}-jsonpointer \
-    ${PYTHON_PN}-webcolors \
-    ${PYTHON_PN}-rfc3986-validator \
-    ${PYTHON_PN}-rfc3339-validator \
-"
-
-RDEPENDS:${PN} += " \
-    ${PYTHON_PN}-attrs \
-    ${PYTHON_PN}-core \
-    ${PYTHON_PN}-datetime \
-    ${PYTHON_PN}-importlib-metadata \
-    ${PYTHON_PN}-io \
-    ${PYTHON_PN}-json \
-    ${PYTHON_PN}-netclient \
-    ${PYTHON_PN}-numbers \
-    ${PYTHON_PN}-pprint \
-    ${PYTHON_PN}-pyrsistent \
-    ${PYTHON_PN}-zipp \
-"
-
-RDEPENDS:${PN}-tests = "${PN}"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-jsonschema_4.9.1.bb b/poky/meta/recipes-devtools/python/python3-jsonschema_4.9.1.bb
new file mode 100644
index 0000000..125bc6b
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-jsonschema_4.9.1.bb
@@ -0,0 +1,48 @@
+SUMMARY = "An implementation of JSON Schema validation for Python"
+HOMEPAGE = "https://github.com/Julian/jsonschema"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=7a60a81c146ec25599a3e1dabb8610a8 \
+                    file://json/LICENSE;md5=9d4de43111d33570c8fe49b4cb0e01af"
+
+SRC_URI[sha256sum] = "408c4c8ed0dede3b268f7a441784f74206380b04f93eb2d537c7befb3df3099f"
+
+inherit pypi python_hatchling
+
+PACKAGES =+ "${PN}-tests"
+FILES:${PN}-tests = "${libdir}/${PYTHON_DIR}/site-packages/jsonschema/tests"
+
+DEPENDS += "${PYTHON_PN}-hatch-vcs-native"
+
+PACKAGECONFIG ??= "format"
+PACKAGECONFIG[format] = ",,,\
+    ${PYTHON_PN}-idna \
+    ${PYTHON_PN}-jsonpointer \
+    ${PYTHON_PN}-webcolors \
+    ${PYTHON_PN}-rfc3987 \
+    ${PYTHON_PN}-strict-rfc3339 \
+"
+PACKAGECONFIG[nongpl] = ",,,\
+    ${PYTHON_PN}-idna \
+    ${PYTHON_PN}-jsonpointer \
+    ${PYTHON_PN}-webcolors \
+    ${PYTHON_PN}-rfc3986-validator \
+    ${PYTHON_PN}-rfc3339-validator \
+"
+
+RDEPENDS:${PN} += " \
+    ${PYTHON_PN}-attrs \
+    ${PYTHON_PN}-core \
+    ${PYTHON_PN}-datetime \
+    ${PYTHON_PN}-importlib-metadata \
+    ${PYTHON_PN}-io \
+    ${PYTHON_PN}-json \
+    ${PYTHON_PN}-netclient \
+    ${PYTHON_PN}-numbers \
+    ${PYTHON_PN}-pprint \
+    ${PYTHON_PN}-pyrsistent \
+    ${PYTHON_PN}-zipp \
+"
+
+RDEPENDS:${PN}-tests = "${PN}"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-markdown_3.3.7.bb b/poky/meta/recipes-devtools/python/python3-markdown_3.3.7.bb
deleted file mode 100644
index c456cec..0000000
--- a/poky/meta/recipes-devtools/python/python3-markdown_3.3.7.bb
+++ /dev/null
@@ -1,13 +0,0 @@
-SUMMARY = "A Python implementation of John Gruber's Markdown."
-HOMEPAGE = "https://python-markdown.github.io/"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE.md;md5=745aaad0c69c60039e638bff9ffc59ed"
-
-inherit pypi python_setuptools_build_meta
-
-PYPI_PACKAGE = "Markdown"
-SRC_URI[sha256sum] = "cbb516f16218e643d8e0a95b309f77eb118cb138d39a4f27851e6a63581db874"
-
-BBCLASSEXTEND = "native"
-
-RDEPENDS:${PN} += "${PYTHON_PN}-logging ${PYTHON_PN}-setuptools"
diff --git a/poky/meta/recipes-devtools/python/python3-markdown_3.4.1.bb b/poky/meta/recipes-devtools/python/python3-markdown_3.4.1.bb
new file mode 100644
index 0000000..e99c331
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-markdown_3.4.1.bb
@@ -0,0 +1,13 @@
+SUMMARY = "A Python implementation of John Gruber's Markdown."
+HOMEPAGE = "https://python-markdown.github.io/"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.md;md5=745aaad0c69c60039e638bff9ffc59ed"
+
+inherit pypi python_setuptools_build_meta
+
+PYPI_PACKAGE = "Markdown"
+SRC_URI[sha256sum] = "3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"
+
+BBCLASSEXTEND = "native"
+
+RDEPENDS:${PN} += "${PYTHON_PN}-logging ${PYTHON_PN}-setuptools"
diff --git a/poky/meta/recipes-devtools/python/python3-more-itertools_8.13.0.bb b/poky/meta/recipes-devtools/python/python3-more-itertools_8.13.0.bb
deleted file mode 100644
index 94f6f9e..0000000
--- a/poky/meta/recipes-devtools/python/python3-more-itertools_8.13.0.bb
+++ /dev/null
@@ -1,27 +0,0 @@
-DESCRIPTION = "More routines for operating on iterables, beyond itertools"
-HOMEPAGE = "https://github.com/erikrose/more-itertools"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=3396ea30f9d21389d7857719816f83b5"
-
-SRC_URI[sha256sum] = "a42901a0a5b169d925f6f217cd5a190e32ef54360905b9c39ee7db5313bfec0f"
-
-inherit pypi python_flit_core ptest
-
-SRC_URI += " \
-	file://run-ptest \
-"
-
-RDEPENDS:${PN} += " \
-        ${PYTHON_PN}-asyncio \
-        "
-
-RDEPENDS:${PN}-ptest += " \
-	${PYTHON_PN}-pytest \
-        "
-
-do_install_ptest() {
-	install -d ${D}${PTEST_PATH}/tests
-	cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-more-itertools_8.14.0.bb b/poky/meta/recipes-devtools/python/python3-more-itertools_8.14.0.bb
new file mode 100644
index 0000000..c955f93
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-more-itertools_8.14.0.bb
@@ -0,0 +1,27 @@
+DESCRIPTION = "More routines for operating on iterables, beyond itertools"
+HOMEPAGE = "https://github.com/erikrose/more-itertools"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=3396ea30f9d21389d7857719816f83b5"
+
+SRC_URI[sha256sum] = "c09443cd3d5438b8dafccd867a6bc1cb0894389e90cb53d227456b0b0bccb750"
+
+inherit pypi python_flit_core ptest
+
+SRC_URI += " \
+	file://run-ptest \
+"
+
+RDEPENDS:${PN} += " \
+        ${PYTHON_PN}-asyncio \
+        "
+
+RDEPENDS:${PN}-ptest += " \
+	${PYTHON_PN}-pytest \
+        "
+
+do_install_ptest() {
+	install -d ${D}${PTEST_PATH}/tests
+	cp -rf ${S}/tests/* ${D}${PTEST_PATH}/tests/
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-numpy_1.23.1.bb b/poky/meta/recipes-devtools/python/python3-numpy_1.23.1.bb
deleted file mode 100644
index 67ab1d9..0000000
--- a/poky/meta/recipes-devtools/python/python3-numpy_1.23.1.bb
+++ /dev/null
@@ -1,73 +0,0 @@
-SUMMARY = "A sophisticated Numeric Processing Package for Python"
-HOMEPAGE = "https://numpy.org/"
-DESCRIPTION = "NumPy is the fundamental package needed for scientific computing with Python."
-SECTION = "devel/python"
-LICENSE = "BSD-3-Clause & BSD-2-Clause & PSF-2.0 & Apache-2.0 & MIT"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=8026691468924fb6ec155dadfe2a1a7f"
-
-SRCNAME = "numpy"
-
-SRC_URI = "https://github.com/${SRCNAME}/${SRCNAME}/releases/download/v${PV}/${SRCNAME}-${PV}.tar.gz \
-           file://0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch \
-           file://0001-numpy-core-Define-RISCV-32-support.patch \
-           file://run-ptest \
-           file://0001-generate_umath.py-do-not-write-full-path-to-output-f.patch \
-           "
-SRC_URI[sha256sum] = "d748ef349bfef2e1194b59da37ed5a29c19ea8d7e6342019921ba2ba4fd8b624"
-
-UPSTREAM_CHECK_URI = "https://github.com/numpy/numpy/releases"
-UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)\.tar"
-
-DEPENDS += "python3-cython-native"
-
-inherit ptest setuptools3
-
-S = "${WORKDIR}/numpy-${PV}"
-
-CLEANBROKEN = "1"
-
-do_compile:prepend() {
-    export NPY_DISABLE_SVML=1
-}
-
-# Unfortunately the following pyc files are non-deterministic due to 'frozenset'
-# being written without strict ordering, even with PYTHONHASHSEED = 0
-# Upstream is discussing ways to solve the issue properly, until then let's
-# just not install the problematic files.
-# More info: http://benno.id.au/blog/2013/01/15/python-determinism
-do_install:append() {
-	rm ${D}${PYTHON_SITEPACKAGES_DIR}/numpy/typing/tests/data/pass/__pycache__/literal.cpython*
-}
-
-FILES:${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/numpy/core/lib/*.a ${PYTHON_SITEPACKAGES_DIR}/numpy/random/lib/*.a"
-
-# install what is needed for numpy.test()
-RDEPENDS:${PN} = "${PYTHON_PN}-unittest \
-                  ${PYTHON_PN}-difflib \
-                  ${PYTHON_PN}-pprint \
-                  ${PYTHON_PN}-pickle \
-                  ${PYTHON_PN}-shell \
-                  ${PYTHON_PN}-doctest \
-                  ${PYTHON_PN}-datetime \
-                  ${PYTHON_PN}-distutils \
-                  ${PYTHON_PN}-misc \
-                  ${PYTHON_PN}-mmap \
-                  ${PYTHON_PN}-netclient \
-                  ${PYTHON_PN}-numbers \
-                  ${PYTHON_PN}-pydoc \
-                  ${PYTHON_PN}-pkgutil \
-                  ${PYTHON_PN}-email \
-                  ${PYTHON_PN}-compression \
-                  ${PYTHON_PN}-ctypes \
-                  ${PYTHON_PN}-threading \
-                  ${PYTHON_PN}-multiprocessing \
-                  ${PYTHON_PN}-json \
-"
-RDEPENDS:${PN}-ptest += "${PYTHON_PN}-pytest \
-                         ${PYTHON_PN}-hypothesis \
-                         ${PYTHON_PN}-sortedcontainers \
-                         ${PYTHON_PN}-resource \
-                         ldd \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-numpy_1.23.2.bb b/poky/meta/recipes-devtools/python/python3-numpy_1.23.2.bb
new file mode 100644
index 0000000..960dcf9
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-numpy_1.23.2.bb
@@ -0,0 +1,73 @@
+SUMMARY = "A sophisticated Numeric Processing Package for Python"
+HOMEPAGE = "https://numpy.org/"
+DESCRIPTION = "NumPy is the fundamental package needed for scientific computing with Python."
+SECTION = "devel/python"
+LICENSE = "BSD-3-Clause & BSD-2-Clause & PSF-2.0 & Apache-2.0 & MIT"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=8026691468924fb6ec155dadfe2a1a7f"
+
+SRCNAME = "numpy"
+
+SRC_URI = "https://github.com/${SRCNAME}/${SRCNAME}/releases/download/v${PV}/${SRCNAME}-${PV}.tar.gz \
+           file://0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch \
+           file://0001-numpy-core-Define-RISCV-32-support.patch \
+           file://run-ptest \
+           file://0001-generate_umath.py-do-not-write-full-path-to-output-f.patch \
+           "
+SRC_URI[sha256sum] = "b78d00e48261fbbd04aa0d7427cf78d18401ee0abd89c7559bbf422e5b1c7d01"
+
+UPSTREAM_CHECK_URI = "https://github.com/numpy/numpy/releases"
+UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)\.tar"
+
+DEPENDS += "python3-cython-native"
+
+inherit ptest setuptools3
+
+S = "${WORKDIR}/numpy-${PV}"
+
+CLEANBROKEN = "1"
+
+do_compile:prepend() {
+    export NPY_DISABLE_SVML=1
+}
+
+# Unfortunately the following pyc files are non-deterministic due to 'frozenset'
+# being written without strict ordering, even with PYTHONHASHSEED = 0
+# Upstream is discussing ways to solve the issue properly, until then let's
+# just not install the problematic files.
+# More info: http://benno.id.au/blog/2013/01/15/python-determinism
+do_install:append() {
+	rm ${D}${PYTHON_SITEPACKAGES_DIR}/numpy/typing/tests/data/pass/__pycache__/literal.cpython*
+}
+
+FILES:${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/numpy/core/lib/*.a ${PYTHON_SITEPACKAGES_DIR}/numpy/random/lib/*.a"
+
+# install what is needed for numpy.test()
+RDEPENDS:${PN} = "${PYTHON_PN}-unittest \
+                  ${PYTHON_PN}-difflib \
+                  ${PYTHON_PN}-pprint \
+                  ${PYTHON_PN}-pickle \
+                  ${PYTHON_PN}-shell \
+                  ${PYTHON_PN}-doctest \
+                  ${PYTHON_PN}-datetime \
+                  ${PYTHON_PN}-distutils \
+                  ${PYTHON_PN}-misc \
+                  ${PYTHON_PN}-mmap \
+                  ${PYTHON_PN}-netclient \
+                  ${PYTHON_PN}-numbers \
+                  ${PYTHON_PN}-pydoc \
+                  ${PYTHON_PN}-pkgutil \
+                  ${PYTHON_PN}-email \
+                  ${PYTHON_PN}-compression \
+                  ${PYTHON_PN}-ctypes \
+                  ${PYTHON_PN}-threading \
+                  ${PYTHON_PN}-multiprocessing \
+                  ${PYTHON_PN}-json \
+"
+RDEPENDS:${PN}-ptest += "${PYTHON_PN}-pytest \
+                         ${PYTHON_PN}-hypothesis \
+                         ${PYTHON_PN}-sortedcontainers \
+                         ${PYTHON_PN}-resource \
+                         ldd \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-pbr_5.10.0.bb b/poky/meta/recipes-devtools/python/python3-pbr_5.10.0.bb
new file mode 100644
index 0000000..022f834
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-pbr_5.10.0.bb
@@ -0,0 +1,4 @@
+inherit setuptools3
+require python-pbr.inc
+
+SRC_URI[sha256sum] = "cfcc4ff8e698256fc17ea3ff796478b050852585aa5bae79ecd05b2ab7b39b9a"
diff --git a/poky/meta/recipes-devtools/python/python3-pbr_5.9.0.bb b/poky/meta/recipes-devtools/python/python3-pbr_5.9.0.bb
deleted file mode 100644
index c93b71d..0000000
--- a/poky/meta/recipes-devtools/python/python3-pbr_5.9.0.bb
+++ /dev/null
@@ -1,4 +0,0 @@
-inherit setuptools3
-require python-pbr.inc
-
-SRC_URI[sha256sum] = "e8dca2f4b43560edef58813969f52a56cef023146cbb8931626db80e6c1c4308"
diff --git a/poky/meta/recipes-devtools/python/python3-pip/reproducible.patch b/poky/meta/recipes-devtools/python/python3-pip/reproducible.patch
deleted file mode 100644
index 0ed0c91..0000000
--- a/poky/meta/recipes-devtools/python/python3-pip/reproducible.patch
+++ /dev/null
@@ -1,83 +0,0 @@
-Pip installed wheels are not reproducible currently. The direct_url
-files encode an installation path and the installed wheels compile
-the python files at their location, not their final install location
-which is incorrect.
-
-To fix this, simply disable the direct_urls and pass the "root" to
-the python compile function to strip that path out of the compiled
-files.
-
-A version of this patch, perhaps stripping root from the direct_urls
-may be something that could be considered by upstream.
-
-Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
-
-Upstream-Status: Pending
-
-Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
----
- src/pip/_internal/operations/install/wheel.py | 5 ++++-
- src/pip/_internal/req/req_install.py          | 5 ++++-
- 2 files changed, 8 insertions(+), 2 deletions(-)
-
-diff --git a/src/pip/_internal/operations/install/wheel.py b/src/pip/_internal/operations/install/wheel.py
-index 1af8978..3e48f9b 100644
---- a/src/pip/_internal/operations/install/wheel.py
-+++ b/src/pip/_internal/operations/install/wheel.py
-@@ -434,6 +434,7 @@ def _install_wheel(
-     warn_script_location: bool = True,
-     direct_url: Optional[DirectUrl] = None,
-     requested: bool = False,
-+    root: str = None,
- ) -> None:
-     """Install a wheel.
- 
-@@ -610,7 +611,7 @@ def _install_wheel(
-             with warnings.catch_warnings():
-                 warnings.filterwarnings("ignore")
-                 for path in pyc_source_file_paths():
--                    success = compileall.compile_file(path, force=True, quiet=True)
-+                    success = compileall.compile_file(path, force=True, quiet=True, stripdir=root)
-                     if success:
-                         pyc_path = pyc_output_path(path)
-                         assert os.path.exists(pyc_path)
-@@ -721,6 +722,7 @@ def install_wheel(
-     warn_script_location: bool = True,
-     direct_url: Optional[DirectUrl] = None,
-     requested: bool = False,
-+    root: str = None,
- ) -> None:
-     with ZipFile(wheel_path, allowZip64=True) as z:
-         with req_error_context(req_description):
-@@ -733,4 +735,5 @@ def install_wheel(
-                 warn_script_location=warn_script_location,
-                 direct_url=direct_url,
-                 requested=requested,
-+                root=root,
-             )
-diff --git a/src/pip/_internal/req/req_install.py b/src/pip/_internal/req/req_install.py
-index a1e376c..4c3f1bb 100644
---- a/src/pip/_internal/req/req_install.py
-+++ b/src/pip/_internal/req/req_install.py
-@@ -779,7 +779,9 @@ class InstallRequirement:
-             assert self.local_file_path
-             direct_url = None
-             # TODO this can be refactored to direct_url = self.download_info
--            if self.editable:
-+            if '_PYTHON_SYSCONFIGDATA_NAME' in os.environ:
-+                direct_url = None
-+            elif self.editable:
-                 direct_url = direct_url_for_editable(self.unpacked_source_directory)
-             elif self.original_link:
-                 direct_url = direct_url_from_link(
-@@ -796,6 +798,7 @@ class InstallRequirement:
-                 warn_script_location=warn_script_location,
-                 direct_url=direct_url,
-                 requested=self.user_supplied,
-+                root=root,
-             )
-             self.install_succeeded = True
-             return
--- 
-2.25.1
-
diff --git a/poky/meta/recipes-devtools/python/python3-pip_22.2.1.bb b/poky/meta/recipes-devtools/python/python3-pip_22.2.1.bb
deleted file mode 100644
index 39349b6..0000000
--- a/poky/meta/recipes-devtools/python/python3-pip_22.2.1.bb
+++ /dev/null
@@ -1,63 +0,0 @@
-SUMMARY = "The PyPA recommended tool for installing Python packages"
-HOMEPAGE = "https://pypi.org/project/pip"
-SECTION = "devel/python"
-LICENSE = "MIT & Apache-2.0 & MPL-2.0 & LGPL-2.1-only & BSD-3-Clause & PSF-2.0 & BSD-2-Clause"
-LIC_FILES_CHKSUM = "\
-  file://LICENSE.txt;md5=63ec52baf95163b597008bb46db68030 \
-  file://src/pip/_vendor/cachecontrol/LICENSE.txt;md5=6572692148079ebbbd800be4b9f36c6d \
-  file://src/pip/_vendor/certifi/LICENSE;md5=67da0714c3f9471067b729eca6c9fbe8 \
-  file://src/pip/_vendor/chardet/LICENSE;md5=4fbd65380cdd255951079008b364516c \
-  file://src/pip/_vendor/colorama/LICENSE.txt;md5=b4936429a56a652b84c5c01280dcaa26 \
-  file://src/pip/_vendor/distlib/LICENSE.txt;md5=f6a11430d5cd6e2cd3832ee94f22ddfc \
-  file://src/pip/_vendor/distro/LICENSE;md5=d2794c0df5b907fdace235a619d80314 \
-  file://src/pip/_vendor/idna/LICENSE.md;md5=239668a7c6066d9e0c5382e9c8c6c0e1 \
-  file://src/pip/_vendor/msgpack/COPYING;md5=cd9523181d9d4fbf7ffca52eaa2a5751 \
-  file://src/pip/_vendor/packaging/LICENSE;md5=faadaedca9251a90b205c9167578ce91 \
-  file://src/pip/_vendor/packaging/LICENSE.APACHE;md5=2ee41112a44fe7014dce33e26468ba93 \
-  file://src/pip/_vendor/pep517/LICENSE;md5=aad69c93f605003e3342b174d9b0708c \
-  file://src/pip/_vendor/pkg_resources/LICENSE;md5=9a33897f1bca1160d7aad3835152e158 \
-  file://src/pip/_vendor/platformdirs/LICENSE.txt;md5=282c970bb844954c8535dd6e9733db7f \
-  file://src/pip/_vendor/pygments/LICENSE;md5=36a13c90514e2899f1eba7f41c3ee592 \
-  file://src/pip/_vendor/pyparsing/LICENSE;md5=657a566233888513e1f07ba13e2f47f1 \
-  file://src/pip/_vendor/requests/LICENSE;md5=34400b68072d710fecd0a2940a0d1658 \
-  file://src/pip/_vendor/resolvelib/LICENSE;md5=78e1c0248051c32a38a7f820c30bd7a5 \
-  file://src/pip/_vendor/rich/LICENSE;md5=b5f0b94fbc94f5ad9ae4efcf8a778303 \
-  file://src/pip/_vendor/six.LICENSE;md5=43cfc9e4ac0e377acfb9b76f56b8415d \
-  file://src/pip/_vendor/tenacity/LICENSE;md5=175792518e4ac015ab6696d16c4f607e \
-  file://src/pip/_vendor/tomli/LICENSE;md5=aaaaf0879d17df0110d1aa8c8c9f46f5 \
-  file://src/pip/_vendor/typing_extensions.LICENSE;md5=64fc2b30b67d0a8423c250e0386ed72f \
-  file://src/pip/_vendor/urllib3/LICENSE.txt;md5=c2823cb995439c984fd62a973d79815c \
-  file://src/pip/_vendor/webencodings/LICENSE;md5=81fb24cd7823cce23b69f721993dce4d \
-"
-
-inherit pypi python_setuptools_build_meta
-
-SRC_URI += "file://no_shebang_mangling.patch"
-SRC_URI += "file://reproducible.patch"
-
-SRC_URI[sha256sum] = "50516e47a2b79e77446f0d05649f0d53772c192571486236b1905492bfc24bac"
-
-do_install:append() {
-    rm -f ${D}/${bindir}/pip
-}
-
-RDEPENDS:${PN} = "\
-  python3-compile \
-  python3-io \
-  python3-html \
-  python3-json \
-  python3-multiprocessing \
-  python3-netserver \
-  python3-setuptools \
-  python3-unixadmin \
-  python3-xmlrpc \
-  python3-pickle \
-  python3-distutils \
-  python3-image \
-"
-
-BBCLASSEXTEND = "native nativesdk"
-
-# This used to use the bootstrap install which didn't compile. Until we bump the
-# tmpdir version we can't compile the native otherwise the sysroot unpack fails
-INSTALL_WHEEL_COMPILE_BYTECODE:class-native = "--no-compile-bytecode"
diff --git a/poky/meta/recipes-devtools/python/python3-pip_22.2.2.bb b/poky/meta/recipes-devtools/python/python3-pip_22.2.2.bb
new file mode 100644
index 0000000..5b6cccf
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-pip_22.2.2.bb
@@ -0,0 +1,62 @@
+SUMMARY = "The PyPA recommended tool for installing Python packages"
+HOMEPAGE = "https://pypi.org/project/pip"
+SECTION = "devel/python"
+LICENSE = "MIT & Apache-2.0 & MPL-2.0 & LGPL-2.1-only & BSD-3-Clause & PSF-2.0 & BSD-2-Clause"
+LIC_FILES_CHKSUM = "\
+  file://LICENSE.txt;md5=63ec52baf95163b597008bb46db68030 \
+  file://src/pip/_vendor/cachecontrol/LICENSE.txt;md5=6572692148079ebbbd800be4b9f36c6d \
+  file://src/pip/_vendor/certifi/LICENSE;md5=67da0714c3f9471067b729eca6c9fbe8 \
+  file://src/pip/_vendor/chardet/LICENSE;md5=4fbd65380cdd255951079008b364516c \
+  file://src/pip/_vendor/colorama/LICENSE.txt;md5=b4936429a56a652b84c5c01280dcaa26 \
+  file://src/pip/_vendor/distlib/LICENSE.txt;md5=f6a11430d5cd6e2cd3832ee94f22ddfc \
+  file://src/pip/_vendor/distro/LICENSE;md5=d2794c0df5b907fdace235a619d80314 \
+  file://src/pip/_vendor/idna/LICENSE.md;md5=239668a7c6066d9e0c5382e9c8c6c0e1 \
+  file://src/pip/_vendor/msgpack/COPYING;md5=cd9523181d9d4fbf7ffca52eaa2a5751 \
+  file://src/pip/_vendor/packaging/LICENSE;md5=faadaedca9251a90b205c9167578ce91 \
+  file://src/pip/_vendor/packaging/LICENSE.APACHE;md5=2ee41112a44fe7014dce33e26468ba93 \
+  file://src/pip/_vendor/pep517/LICENSE;md5=aad69c93f605003e3342b174d9b0708c \
+  file://src/pip/_vendor/pkg_resources/LICENSE;md5=9a33897f1bca1160d7aad3835152e158 \
+  file://src/pip/_vendor/platformdirs/LICENSE.txt;md5=282c970bb844954c8535dd6e9733db7f \
+  file://src/pip/_vendor/pygments/LICENSE;md5=36a13c90514e2899f1eba7f41c3ee592 \
+  file://src/pip/_vendor/pyparsing/LICENSE;md5=657a566233888513e1f07ba13e2f47f1 \
+  file://src/pip/_vendor/requests/LICENSE;md5=34400b68072d710fecd0a2940a0d1658 \
+  file://src/pip/_vendor/resolvelib/LICENSE;md5=78e1c0248051c32a38a7f820c30bd7a5 \
+  file://src/pip/_vendor/rich/LICENSE;md5=b5f0b94fbc94f5ad9ae4efcf8a778303 \
+  file://src/pip/_vendor/six.LICENSE;md5=43cfc9e4ac0e377acfb9b76f56b8415d \
+  file://src/pip/_vendor/tenacity/LICENSE;md5=175792518e4ac015ab6696d16c4f607e \
+  file://src/pip/_vendor/tomli/LICENSE;md5=aaaaf0879d17df0110d1aa8c8c9f46f5 \
+  file://src/pip/_vendor/typing_extensions.LICENSE;md5=64fc2b30b67d0a8423c250e0386ed72f \
+  file://src/pip/_vendor/urllib3/LICENSE.txt;md5=c2823cb995439c984fd62a973d79815c \
+  file://src/pip/_vendor/webencodings/LICENSE;md5=81fb24cd7823cce23b69f721993dce4d \
+"
+
+inherit pypi python_setuptools_build_meta
+
+SRC_URI += "file://no_shebang_mangling.patch"
+
+SRC_URI[sha256sum] = "3fd1929db052f056d7a998439176d3333fa1b3f6c1ad881de1885c0717608a4b"
+
+do_install:append() {
+    rm -f ${D}/${bindir}/pip
+}
+
+RDEPENDS:${PN} = "\
+  python3-compile \
+  python3-io \
+  python3-html \
+  python3-json \
+  python3-multiprocessing \
+  python3-netserver \
+  python3-setuptools \
+  python3-unixadmin \
+  python3-xmlrpc \
+  python3-pickle \
+  python3-distutils \
+  python3-image \
+"
+
+BBCLASSEXTEND = "native nativesdk"
+
+# This used to use the bootstrap install which didn't compile. Until we bump the
+# tmpdir version we can't compile the native otherwise the sysroot unpack fails
+INSTALL_WHEEL_COMPILE_BYTECODE:class-native = "--no-compile-bytecode"
diff --git a/poky/meta/recipes-devtools/python/python3-pyelftools_0.28.bb b/poky/meta/recipes-devtools/python/python3-pyelftools_0.28.bb
deleted file mode 100644
index 0ceddcb..0000000
--- a/poky/meta/recipes-devtools/python/python3-pyelftools_0.28.bb
+++ /dev/null
@@ -1,15 +0,0 @@
-DESCRIPTION = "pyelftools is a pure-Python library for parsing and analyzing ELF files and DWARF debugging information"
-HOMEPAGE = "https://github.com/eliben/pyelftools"
-SECTION = "devel/python"
-LICENSE = "PD"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=5ce2a2b07fca326bc7c146d10105ccfc"
-
-SRC_URI[sha256sum] = "53e5609cac016471d40bd88dc410cd90755942c25e58a61021cfdf7abdfeacff"
-
-PYPI_PACKAGE = "pyelftools"
-
-inherit pypi setuptools3
-
-BBCLASSEXTEND = "native"
-
-RDEPENDS:${PN} += "${PYTHON_PN}-debugger ${PYTHON_PN}-pprint"
diff --git a/poky/meta/recipes-devtools/python/python3-pyelftools_0.29.bb b/poky/meta/recipes-devtools/python/python3-pyelftools_0.29.bb
new file mode 100644
index 0000000..2199e9f
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-pyelftools_0.29.bb
@@ -0,0 +1,15 @@
+DESCRIPTION = "pyelftools is a pure-Python library for parsing and analyzing ELF files and DWARF debugging information"
+HOMEPAGE = "https://github.com/eliben/pyelftools"
+SECTION = "devel/python"
+LICENSE = "PD"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=5ce2a2b07fca326bc7c146d10105ccfc"
+
+SRC_URI[sha256sum] = "ec761596aafa16e282a31de188737e5485552469ac63b60cfcccf22263fd24ff"
+
+PYPI_PACKAGE = "pyelftools"
+
+inherit pypi setuptools3
+
+BBCLASSEXTEND = "native"
+
+RDEPENDS:${PN} += "${PYTHON_PN}-debugger ${PYTHON_PN}-pprint"
diff --git a/poky/meta/recipes-devtools/python/python3-pygments_2.12.0.bb b/poky/meta/recipes-devtools/python/python3-pygments_2.12.0.bb
deleted file mode 100644
index b47e0af..0000000
--- a/poky/meta/recipes-devtools/python/python3-pygments_2.12.0.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-SUMMARY = "Pygments is a syntax highlighting package written in Python."
-DESCRIPTION = "Pygments is a syntax highlighting package written in Python."
-HOMEPAGE = "http://pygments.org/"
-LICENSE = "BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=36a13c90514e2899f1eba7f41c3ee592"
-
-inherit setuptools3
-SRC_URI[sha256sum] = "5eb116118f9612ff1ee89ac96437bb6b49e8f04d8a13b514ba26f620208e26eb"
-
-DEPENDS += "\
-            ${PYTHON_PN} \
-            "
-
-PYPI_PACKAGE = "Pygments"
-
-inherit pypi
-
-BBCLASSEXTEND = "native nativesdk"
-
diff --git a/poky/meta/recipes-devtools/python/python3-pygments_2.13.0.bb b/poky/meta/recipes-devtools/python/python3-pygments_2.13.0.bb
new file mode 100644
index 0000000..59706cc
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-pygments_2.13.0.bb
@@ -0,0 +1,19 @@
+SUMMARY = "Pygments is a syntax highlighting package written in Python."
+DESCRIPTION = "Pygments is a syntax highlighting package written in Python."
+HOMEPAGE = "http://pygments.org/"
+LICENSE = "BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=36a13c90514e2899f1eba7f41c3ee592"
+
+inherit setuptools3
+SRC_URI[sha256sum] = "56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"
+
+DEPENDS += "\
+            ${PYTHON_PN} \
+            "
+
+PYPI_PACKAGE = "Pygments"
+
+inherit pypi
+
+BBCLASSEXTEND = "native nativesdk"
+
diff --git a/poky/meta/recipes-devtools/python/python3-pytz_2022.1.bb b/poky/meta/recipes-devtools/python/python3-pytz_2022.1.bb
deleted file mode 100644
index a4bb4f5..0000000
--- a/poky/meta/recipes-devtools/python/python3-pytz_2022.1.bb
+++ /dev/null
@@ -1,35 +0,0 @@
-SUMMARY = "World timezone definitions, modern and historical"
-HOMEPAGE = "http://pythonhosted.org/pytz"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=1a67fc46c1b596cce5d21209bbe75999"
-
-inherit pypi setuptools3 ptest
-
-SRC_URI[sha256sum] = "1e760e2fe6a8163bc0b3d9a19c4f84342afa0a2affebfaa84b01b978a02ecaa7"
-
-RDEPENDS:${PN}:class-target += "\
-    ${PYTHON_PN}-datetime \
-    ${PYTHON_PN}-doctest \
-    ${PYTHON_PN}-io \
-    ${PYTHON_PN}-pickle \
-    ${PYTHON_PN}-pprint \
-    ${PYTHON_PN}-threading \
-"
-
-BBCLASSEXTEND = "native nativesdk"
-
-SRC_URI += " \
-	file://run-ptest \
-"
-
-RDEPENDS:${PN}-ptest += " \
-	${PYTHON_PN}-pytest \
-"
-
-do_install_ptest() {
-	install -d ${D}${PTEST_PATH}/pytz
-	install -d ${D}${PTEST_PATH}/pytz/tests
-	cp -rf ${S}/pytz/tests/* ${D}${PTEST_PATH}/pytz/tests/
-	cp -f ${S}/README.rst ${D}${PTEST_PATH}/
-
-}
diff --git a/poky/meta/recipes-devtools/python/python3-pytz_2022.2.1.bb b/poky/meta/recipes-devtools/python/python3-pytz_2022.2.1.bb
new file mode 100644
index 0000000..bb7aeee
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-pytz_2022.2.1.bb
@@ -0,0 +1,35 @@
+SUMMARY = "World timezone definitions, modern and historical"
+HOMEPAGE = "http://pythonhosted.org/pytz"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=1a67fc46c1b596cce5d21209bbe75999"
+
+inherit pypi setuptools3 ptest
+
+SRC_URI[sha256sum] = "cea221417204f2d1a2aa03ddae3e867921971d0d76f14d87abb4414415bbdcf5"
+
+RDEPENDS:${PN}:class-target += "\
+    ${PYTHON_PN}-datetime \
+    ${PYTHON_PN}-doctest \
+    ${PYTHON_PN}-io \
+    ${PYTHON_PN}-pickle \
+    ${PYTHON_PN}-pprint \
+    ${PYTHON_PN}-threading \
+"
+
+BBCLASSEXTEND = "native nativesdk"
+
+SRC_URI += " \
+	file://run-ptest \
+"
+
+RDEPENDS:${PN}-ptest += " \
+	${PYTHON_PN}-pytest \
+"
+
+do_install_ptest() {
+	install -d ${D}${PTEST_PATH}/pytz
+	install -d ${D}${PTEST_PATH}/pytz/tests
+	cp -rf ${S}/pytz/tests/* ${D}${PTEST_PATH}/pytz/tests/
+	cp -f ${S}/README.rst ${D}${PTEST_PATH}/
+
+}
diff --git a/poky/meta/recipes-devtools/python/python3-requests_2.28.1.bb b/poky/meta/recipes-devtools/python/python3-requests_2.28.1.bb
index ac8a570..8de08d2 100644
--- a/poky/meta/recipes-devtools/python/python3-requests_2.28.1.bb
+++ b/poky/meta/recipes-devtools/python/python3-requests_2.28.1.bb
@@ -18,6 +18,7 @@
     ${PYTHON_PN}-urllib3 \
     ${PYTHON_PN}-chardet \
     ${PYTHON_PN}-idna \
+    ${PYTHON_PN}-compression \
 "
 
 CVE_PRODUCT = "requests"
diff --git a/poky/meta/recipes-devtools/python/python3-setuptools-rust/8e9892f08b1248dc03862da86915c2745e0ff7ec.patch b/poky/meta/recipes-devtools/python/python3-setuptools-rust/8e9892f08b1248dc03862da86915c2745e0ff7ec.patch
deleted file mode 100644
index 2a531e1..0000000
--- a/poky/meta/recipes-devtools/python/python3-setuptools-rust/8e9892f08b1248dc03862da86915c2745e0ff7ec.patch
+++ /dev/null
@@ -1,221 +0,0 @@
-From 8e9892f08b1248dc03862da86915c2745e0ff7ec Mon Sep 17 00:00:00 2001
-From: "Andrew J. Hesford" <ajh@sideband.org>
-Date: Fri, 15 Jul 2022 10:33:02 -0400
-Subject: [PATCH] build_rust: remove linker handling that broke cross
- compilation
-
-Upstream-Status: Submitted [https://github.com/PyO3/setuptools-rust/pull/269]
-Signed-off-by: Alexander Kanavin <alex@linutronix.de>
----
- setuptools_rust/build.py | 151 ++-------------------------------------
- 1 file changed, 7 insertions(+), 144 deletions(-)
-
-diff --git a/setuptools_rust/build.py b/setuptools_rust/build.py
-index 4fe594b..e81ed8f 100644
---- a/setuptools_rust/build.py
-+++ b/setuptools_rust/build.py
-@@ -113,23 +113,10 @@ def build_extension(
-         self, ext: RustExtension, forced_target_triple: Optional[str] = None
-     ) -> List["_BuiltModule"]:
- 
--        target_info = self._detect_rust_target(forced_target_triple)
--        if target_info is not None:
--            target_triple = target_info.triple
--            cross_lib = target_info.cross_lib
--            linker = target_info.linker
--            # We're ignoring target_info.linker_args for now because we're not
--            # sure if they will always do the right thing. Might help with some
--            # of the OS-specific logic if it does.
--
--        else:
--            target_triple = None
--            cross_lib = None
--            linker = None
--
-+        target_triple = self._detect_rust_target(forced_target_triple)
-         rustc_cfgs = get_rustc_cfgs(target_triple)
- 
--        env = _prepare_build_environment(cross_lib)
-+        env = _prepare_build_environment()
- 
-         if not os.path.exists(ext.path):
-             raise DistutilsFileError(
-@@ -150,9 +137,6 @@ def build_extension(
- 
-         rustflags = []
- 
--        if linker is not None:
--            rustflags.extend(["-C", "linker=" + linker])
--
-         if ext._uses_exec_binding():
-             command = [self.cargo, "build", "--manifest-path", ext.path, *cargo_args]
- 
-@@ -407,45 +391,12 @@ def _py_limited_api(self) -> _PyLimitedApi:
- 
-     def _detect_rust_target(
-         self, forced_target_triple: Optional[str] = None
--    ) -> Optional["_TargetInfo"]:
-+    ) -> Optional[str]:
-         assert self.plat_name is not None
--        cross_compile_info = _detect_unix_cross_compile_info()
--        if cross_compile_info is not None:
--            cross_target_info = cross_compile_info.to_target_info()
--            if forced_target_triple is not None:
--                if (
--                    cross_target_info is not None
--                    and not cross_target_info.is_compatible_with(forced_target_triple)
--                ):
--                    self.warn(
--                        f"Forced Rust target `{forced_target_triple}` is not "
--                        f"compatible with deduced Rust target "
--                        f"`{cross_target_info.triple}` - the built package "
--                        f" may not import successfully once installed."
--                    )
--
--                # Forcing the target in a cross-compile environment; use
--                # the cross-compile information in combination with the
--                # forced target
--                return _TargetInfo(
--                    forced_target_triple,
--                    cross_compile_info.cross_lib,
--                    cross_compile_info.linker,
--                    cross_compile_info.linker_args,
--                )
--            elif cross_target_info is not None:
--                return cross_target_info
--            else:
--                raise DistutilsPlatformError(
--                    "Don't know the correct rust target for system type "
--                    f"{cross_compile_info.host_type}. Please set the "
--                    "CARGO_BUILD_TARGET environment variable."
--                )
--
--        elif forced_target_triple is not None:
-+        if forced_target_triple is not None:
-             # Automatic target detection can be overridden via the CARGO_BUILD_TARGET
-             # environment variable or --target command line option
--            return _TargetInfo.for_triple(forced_target_triple)
-+            return forced_target_triple
- 
-         # Determine local rust target which needs to be "forced" if necessary
-         local_rust_target = _adjusted_local_rust_target(self.plat_name)
-@@ -457,7 +408,7 @@ def _detect_rust_target(
-             # check for None first to avoid calling to rustc if not needed
-             and local_rust_target != get_rust_host()
-         ):
--            return _TargetInfo.for_triple(local_rust_target)
-+            return local_rust_target
- 
-         return None
- 
-@@ -547,91 +498,6 @@ class _BuiltModule(NamedTuple):
-     path: str
- 
- 
--class _TargetInfo(NamedTuple):
--    triple: str
--    cross_lib: Optional[str]
--    linker: Optional[str]
--    linker_args: Optional[str]
--
--    @staticmethod
--    def for_triple(triple: str) -> "_TargetInfo":
--        return _TargetInfo(triple, None, None, None)
--
--    def is_compatible_with(self, target: str) -> bool:
--        if self.triple == target:
--            return True
--
--        # the vendor field can be ignored, so x86_64-pc-linux-gnu is compatible
--        # with x86_64-unknown-linux-gnu
--        if _replace_vendor_with_unknown(self.triple) == target:
--            return True
--
--        return False
--
--
--class _CrossCompileInfo(NamedTuple):
--    host_type: str
--    cross_lib: Optional[str]
--    linker: Optional[str]
--    linker_args: Optional[str]
--
--    def to_target_info(self) -> Optional[_TargetInfo]:
--        """Maps this cross compile info to target info.
--
--        Returns None if the corresponding target information could not be
--        deduced.
--        """
--        # hopefully an exact match
--        targets = get_rust_target_list()
--        if self.host_type in targets:
--            return _TargetInfo(
--                self.host_type, self.cross_lib, self.linker, self.linker_args
--            )
--
--        # the vendor field can be ignored, so x86_64-pc-linux-gnu is compatible
--        # with x86_64-unknown-linux-gnu
--        without_vendor = _replace_vendor_with_unknown(self.host_type)
--        if without_vendor is not None and without_vendor in targets:
--            return _TargetInfo(
--                without_vendor, self.cross_lib, self.linker, self.linker_args
--            )
--
--        return None
--
--
--def _detect_unix_cross_compile_info() -> Optional["_CrossCompileInfo"]:
--    # See https://github.com/PyO3/setuptools-rust/issues/138
--    # This is to support cross compiling on *NIX, where plat_name isn't
--    # necessarily the same as the system we are running on.  *NIX systems
--    # have more detailed information available in sysconfig. We need that
--    # because plat_name doesn't give us information on e.g., glibc vs musl.
--    host_type = sysconfig.get_config_var("HOST_GNU_TYPE")
--    build_type = sysconfig.get_config_var("BUILD_GNU_TYPE")
--
--    if not host_type or host_type == build_type:
--        # not *NIX, or not cross compiling
--        return None
--
--    if "apple-darwin" in host_type and (build_type and "apple-darwin" in build_type):
--        # On macos and the build and host differ. This is probably an arm
--        # Python which was built on x86_64. Don't try to handle this for now.
--        # (See https://github.com/PyO3/setuptools-rust/issues/192)
--        return None
--
--    stdlib = sysconfig.get_path("stdlib")
--    assert stdlib is not None
--    cross_lib = os.path.dirname(stdlib)
--
--    bldshared = sysconfig.get_config_var("BLDSHARED")
--    if not bldshared:
--        linker = None
--        linker_args = None
--    else:
--        [linker, _, linker_args] = bldshared.partition(" ")
--
--    return _CrossCompileInfo(host_type, cross_lib, linker, linker_args)
--
--
- def _replace_vendor_with_unknown(target: str) -> Optional[str]:
-     """Replaces vendor in the target triple with unknown.
- 
-@@ -644,7 +510,7 @@ def _replace_vendor_with_unknown(target: str) -> Optional[str]:
-     return "-".join(components)
- 
- 
--def _prepare_build_environment(cross_lib: Optional[str]) -> Dict[str, str]:
-+def _prepare_build_environment() -> Dict[str, str]:
-     """Prepares environment variables to use when executing cargo build."""
- 
-     # Make sure that if pythonXX-sys is used, it builds against the current
-@@ -665,9 +531,6 @@ def _prepare_build_environment(cross_lib: Optional[str]) -> Dict[str, str]:
-         }
-     )
- 
--    if cross_lib:
--        env.setdefault("PYO3_CROSS_LIB_DIR", cross_lib)
--
-     env.pop("CARGO", None)
-     return env
- 
diff --git a/poky/meta/recipes-devtools/python/python3-setuptools-rust_1.4.1.bb b/poky/meta/recipes-devtools/python/python3-setuptools-rust_1.4.1.bb
deleted file mode 100644
index c63a3f2..0000000
--- a/poky/meta/recipes-devtools/python/python3-setuptools-rust_1.4.1.bb
+++ /dev/null
@@ -1,29 +0,0 @@
-SUMMARY = "Setuptools Rust extension plugin"
-DESCRIPTION = "setuptools-rust is a plugin for setuptools to build Rust \
-Python extensions implemented with PyO3 or rust-cpython.\
-\
-Compile and distribute Python extensions written in Rust as easily as if they were written in C."
-HOMEPAGE = "https://github.com/PyO3/setuptools-rust"
-BUGTRACKER = "https://github.com/PyO3/setuptools-rust/issues"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=011cd92e702dd9e6b1a26157b6fd53f5"
-
-SRC_URI = "https://files.pythonhosted.org/packages/67/08/e1aa2c582c62ac76e4d60f8e454bd3bba933781a06a88b4e38797445822a/setuptools-rust-${PV}.tar.gz \
-           file://8e9892f08b1248dc03862da86915c2745e0ff7ec.patch"
-SRC_URI[sha256sum] = "18ff850831f58ee21d5783825c99fad632da21e47645e9427fd7dec048029e76"
-
-inherit cargo pypi python_setuptools_build_meta
-
-DEPENDS += "python3-setuptools-scm-native python3-wheel-native"
-
-RDEPENDS:${PN}:class-native += " \
-    python3-semantic-version-native \
-    python3-setuptools-native \
-    python3-setuptools-scm-native \
-    python3-toml-native \
-    python3-typing-extensions-native \
-    python3-wheel-native \
-"
-
-BBCLASSEXTEND = "native"
diff --git a/poky/meta/recipes-devtools/python/python3-setuptools-rust_1.5.1.bb b/poky/meta/recipes-devtools/python/python3-setuptools-rust_1.5.1.bb
new file mode 100644
index 0000000..24a4f4a
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-setuptools-rust_1.5.1.bb
@@ -0,0 +1,30 @@
+SUMMARY = "Setuptools Rust extension plugin"
+DESCRIPTION = "setuptools-rust is a plugin for setuptools to build Rust \
+Python extensions implemented with PyO3 or rust-cpython.\
+\
+Compile and distribute Python extensions written in Rust as easily as if they were written in C."
+HOMEPAGE = "https://github.com/PyO3/setuptools-rust"
+BUGTRACKER = "https://github.com/PyO3/setuptools-rust/issues"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=011cd92e702dd9e6b1a26157b6fd53f5"
+
+SRC_URI = "${PYPI_SRC_URI} \
+           https://files.pythonhosted.org/packages/67/08/e1aa2c582c62ac76e4d60f8e454bd3bba933781a06a88b4e38797445822a/setuptools-rust-${PV}.tar.gz \
+           "
+SRC_URI[sha256sum] = "0e05e456645d59429cb1021370aede73c0760e9360bbfdaaefb5bced530eb9d7"
+
+inherit cargo pypi python_setuptools_build_meta
+
+DEPENDS += "python3-setuptools-scm-native python3-wheel-native"
+
+RDEPENDS:${PN}:class-native += " \
+    python3-semantic-version-native \
+    python3-setuptools-native \
+    python3-setuptools-scm-native \
+    python3-toml-native \
+    python3-typing-extensions-native \
+    python3-wheel-native \
+"
+
+BBCLASSEXTEND = "native"
diff --git a/poky/meta/recipes-devtools/python/python3-setuptools/0001-conditionally-do-not-fetch-code-by-easy_install.patch b/poky/meta/recipes-devtools/python/python3-setuptools/0001-conditionally-do-not-fetch-code-by-easy_install.patch
index 9c5ff6a..c8c713c 100644
--- a/poky/meta/recipes-devtools/python/python3-setuptools/0001-conditionally-do-not-fetch-code-by-easy_install.patch
+++ b/poky/meta/recipes-devtools/python/python3-setuptools/0001-conditionally-do-not-fetch-code-by-easy_install.patch
@@ -1,4 +1,4 @@
-From 3a5ae454c0738510daf5df68b7968cab66cceb7f Mon Sep 17 00:00:00 2001
+From 42d349031cd952c12620fcf02cbab70a371f4b19 Mon Sep 17 00:00:00 2001
 From: Hongxu Jia <hongxu.jia@windriver.com>
 Date: Tue, 17 Jul 2018 10:13:38 +0800
 Subject: [PATCH] conditionally do not fetch code by easy_install
diff --git a/poky/meta/recipes-devtools/python/python3-setuptools_63.3.0.bb b/poky/meta/recipes-devtools/python/python3-setuptools_63.3.0.bb
deleted file mode 100644
index da7e789..0000000
--- a/poky/meta/recipes-devtools/python/python3-setuptools_63.3.0.bb
+++ /dev/null
@@ -1,55 +0,0 @@
-SUMMARY = "Download, build, install, upgrade, and uninstall Python packages"
-HOMEPAGE = "https://pypi.org/project/setuptools"
-SECTION = "devel/python"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;beginline=1;endline=19;md5=7a7126e068206290f3fe9f8d6c713ea6"
-
-inherit pypi python_setuptools_build_meta
-
-SRC_URI:append:class-native = " file://0001-conditionally-do-not-fetch-code-by-easy_install.patch"
-
-SRC_URI += "file://0001-change-shebang-to-python3.patch \
-            file://0001-_distutils-sysconfig.py-make-it-possible-to-substite.patch"
-
-SRC_URI[sha256sum] = "273b6847ae61f7829c1affcdd9a32f67aa65233be508f4fbaab866c5faa4e408"
-
-DEPENDS += "${PYTHON_PN}"
-
-RDEPENDS:${PN} = "\
-    ${PYTHON_PN}-2to3 \
-    ${PYTHON_PN}-compile \
-    ${PYTHON_PN}-compression \
-    ${PYTHON_PN}-ctypes \
-    ${PYTHON_PN}-email \
-    ${PYTHON_PN}-html \
-    ${PYTHON_PN}-json \
-    ${PYTHON_PN}-netserver \
-    ${PYTHON_PN}-numbers \
-    ${PYTHON_PN}-pickle \
-    ${PYTHON_PN}-pkg-resources \
-    ${PYTHON_PN}-pkgutil \
-    ${PYTHON_PN}-plistlib \
-    ${PYTHON_PN}-shell \
-    ${PYTHON_PN}-stringold \
-    ${PYTHON_PN}-threading \
-    ${PYTHON_PN}-unittest \
-    ${PYTHON_PN}-xml \
-"
-
-BBCLASSEXTEND = "native nativesdk"
-
-# The pkg-resources module can be used by itself, without the package downloader
-# and easy_install. Ship it in a separate package so that it can be used by
-# minimal distributions.
-PACKAGES =+ "${PYTHON_PN}-pkg-resources "
-FILES:${PYTHON_PN}-pkg-resources = "${PYTHON_SITEPACKAGES_DIR}/pkg_resources/*"
-RDEPENDS:${PYTHON_PN}-pkg-resources = "\
-    ${PYTHON_PN}-compression \
-    ${PYTHON_PN}-email \
-    ${PYTHON_PN}-plistlib \
-    ${PYTHON_PN}-pprint \
-"
-
-# This used to use the bootstrap install which didn't compile. Until we bump the
-# tmpdir version we can't compile the native otherwise the sysroot unpack fails
-INSTALL_WHEEL_COMPILE_BYTECODE:class-native = "--no-compile-bytecode"
diff --git a/poky/meta/recipes-devtools/python/python3-setuptools_65.0.2.bb b/poky/meta/recipes-devtools/python/python3-setuptools_65.0.2.bb
new file mode 100644
index 0000000..1a639ea
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-setuptools_65.0.2.bb
@@ -0,0 +1,55 @@
+SUMMARY = "Download, build, install, upgrade, and uninstall Python packages"
+HOMEPAGE = "https://pypi.org/project/setuptools"
+SECTION = "devel/python"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;beginline=1;endline=19;md5=7a7126e068206290f3fe9f8d6c713ea6"
+
+inherit pypi python_setuptools_build_meta
+
+SRC_URI:append:class-native = " file://0001-conditionally-do-not-fetch-code-by-easy_install.patch"
+
+SRC_URI += "file://0001-change-shebang-to-python3.patch \
+            file://0001-_distutils-sysconfig.py-make-it-possible-to-substite.patch"
+
+SRC_URI[sha256sum] = "101bf15ca723beef42c8db91a761f3748d4d697e17fae904db60c0b619d8d094"
+
+DEPENDS += "${PYTHON_PN}"
+
+RDEPENDS:${PN} = "\
+    ${PYTHON_PN}-2to3 \
+    ${PYTHON_PN}-compile \
+    ${PYTHON_PN}-compression \
+    ${PYTHON_PN}-ctypes \
+    ${PYTHON_PN}-email \
+    ${PYTHON_PN}-html \
+    ${PYTHON_PN}-json \
+    ${PYTHON_PN}-netserver \
+    ${PYTHON_PN}-numbers \
+    ${PYTHON_PN}-pickle \
+    ${PYTHON_PN}-pkg-resources \
+    ${PYTHON_PN}-pkgutil \
+    ${PYTHON_PN}-plistlib \
+    ${PYTHON_PN}-shell \
+    ${PYTHON_PN}-stringold \
+    ${PYTHON_PN}-threading \
+    ${PYTHON_PN}-unittest \
+    ${PYTHON_PN}-xml \
+"
+
+BBCLASSEXTEND = "native nativesdk"
+
+# The pkg-resources module can be used by itself, without the package downloader
+# and easy_install. Ship it in a separate package so that it can be used by
+# minimal distributions.
+PACKAGES =+ "${PYTHON_PN}-pkg-resources "
+FILES:${PYTHON_PN}-pkg-resources = "${PYTHON_SITEPACKAGES_DIR}/pkg_resources/*"
+RDEPENDS:${PYTHON_PN}-pkg-resources = "\
+    ${PYTHON_PN}-compression \
+    ${PYTHON_PN}-email \
+    ${PYTHON_PN}-plistlib \
+    ${PYTHON_PN}-pprint \
+"
+
+# This used to use the bootstrap install which didn't compile. Until we bump the
+# tmpdir version we can't compile the native otherwise the sysroot unpack fails
+INSTALL_WHEEL_COMPILE_BYTECODE:class-native = "--no-compile-bytecode"
diff --git a/poky/meta/recipes-devtools/python/python3-sphinx_5.0.2.bb b/poky/meta/recipes-devtools/python/python3-sphinx_5.0.2.bb
deleted file mode 100644
index 46cc26e..0000000
--- a/poky/meta/recipes-devtools/python/python3-sphinx_5.0.2.bb
+++ /dev/null
@@ -1,28 +0,0 @@
-DESCRIPTION = "Python documentation generator"
-HOMEPAGE = "http://sphinx-doc.org/"
-SECTION = "devel/python"
-LICENSE = "BSD-2-Clause & MIT & BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=72c536e78c21c567311b193fe00cd253"
-
-PYPI_PACKAGE = "Sphinx"
-
-SRC_URI[sha256sum] = "b18e978ea7565720f26019c702cd85c84376e948370f1cd43d60265010e1c7b0"
-
-inherit setuptools3 pypi
-
- 
-do_install:append () {
-	# The cache format of "{None, 'en', 'ja'}" doesn't seem to be consistent (dict ordering?)
-	rm ${D}${libdir}/${PYTHON_DIR}/site-packages/sphinx/writers/__pycache__/*latex*
-}
-
-RDEPENDS:${PN} = "\
-    python3-packaging python3-docutils python3-requests \
-    python3-imagesize python3-alabaster python3-jinja2 \
-    python3-babel python3-pygments python3-snowballstemmer \
-    python3-sphinxcontrib-applehelp python3-sphinxcontrib-devhelp \
-    python3-sphinxcontrib-jsmath python3-sphinxcontrib-htmlhelp \
-    python3-sphinxcontrib-serializinghtml python3-sphinxcontrib-qthelp \
-    "
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-sphinx_5.1.1.bb b/poky/meta/recipes-devtools/python/python3-sphinx_5.1.1.bb
new file mode 100644
index 0000000..1bef20c
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-sphinx_5.1.1.bb
@@ -0,0 +1,28 @@
+DESCRIPTION = "Python documentation generator"
+HOMEPAGE = "http://sphinx-doc.org/"
+SECTION = "devel/python"
+LICENSE = "BSD-2-Clause & MIT & BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=72c536e78c21c567311b193fe00cd253"
+
+PYPI_PACKAGE = "Sphinx"
+
+SRC_URI[sha256sum] = "ba3224a4e206e1fbdecf98a4fae4992ef9b24b85ebf7b584bb340156eaf08d89"
+
+inherit setuptools3 pypi
+
+ 
+do_install:append () {
+	# The cache format of "{None, 'en', 'ja'}" doesn't seem to be consistent (dict ordering?)
+	rm ${D}${libdir}/${PYTHON_DIR}/site-packages/sphinx/writers/__pycache__/*latex*
+}
+
+RDEPENDS:${PN} = "\
+    python3-packaging python3-docutils python3-requests \
+    python3-imagesize python3-alabaster python3-jinja2 \
+    python3-babel python3-pygments python3-snowballstemmer \
+    python3-sphinxcontrib-applehelp python3-sphinxcontrib-devhelp \
+    python3-sphinxcontrib-jsmath python3-sphinxcontrib-htmlhelp \
+    python3-sphinxcontrib-serializinghtml python3-sphinxcontrib-qthelp \
+    "
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-urllib3_1.26.10.bb b/poky/meta/recipes-devtools/python/python3-urllib3_1.26.10.bb
deleted file mode 100644
index a8e2073..0000000
--- a/poky/meta/recipes-devtools/python/python3-urllib3_1.26.10.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "Python HTTP library with thread-safe connection pooling, file post support, sanity friendly, and more"
-HOMEPAGE = "https://github.com/shazow/urllib3"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=c2823cb995439c984fd62a973d79815c"
-
-SRC_URI[sha256sum] = "879ba4d1e89654d9769ce13121e0f94310ea32e8d2f8cf587b77c08bbcdb30d6"
-
-inherit pypi setuptools3
-
-RDEPENDS:${PN} += "\
-    ${PYTHON_PN}-certifi \
-    ${PYTHON_PN}-cryptography \
-    ${PYTHON_PN}-email \
-    ${PYTHON_PN}-idna \
-    ${PYTHON_PN}-netclient \
-    ${PYTHON_PN}-pyopenssl \
-    ${PYTHON_PN}-threading \
-    ${PYTHON_PN}-logging \
-"
-
-CVE_PRODUCT = "urllib3"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3-urllib3_1.26.12.bb b/poky/meta/recipes-devtools/python/python3-urllib3_1.26.12.bb
new file mode 100644
index 0000000..1cd69bc
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3-urllib3_1.26.12.bb
@@ -0,0 +1,23 @@
+SUMMARY = "Python HTTP library with thread-safe connection pooling, file post support, sanity friendly, and more"
+HOMEPAGE = "https://github.com/shazow/urllib3"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=c2823cb995439c984fd62a973d79815c"
+
+SRC_URI[sha256sum] = "3fa96cf423e6987997fc326ae8df396db2a8b7c667747d47ddd8ecba91f4a74e"
+
+inherit pypi setuptools3
+
+RDEPENDS:${PN} += "\
+    ${PYTHON_PN}-certifi \
+    ${PYTHON_PN}-cryptography \
+    ${PYTHON_PN}-email \
+    ${PYTHON_PN}-idna \
+    ${PYTHON_PN}-netclient \
+    ${PYTHON_PN}-pyopenssl \
+    ${PYTHON_PN}-threading \
+    ${PYTHON_PN}-logging \
+"
+
+CVE_PRODUCT = "urllib3"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/python/python3/0001-Don-t-search-system-for-headers-libraries.patch b/poky/meta/recipes-devtools/python/python3/0001-Don-t-search-system-for-headers-libraries.patch
index 62dce4b..c790c7b 100644
--- a/poky/meta/recipes-devtools/python/python3/0001-Don-t-search-system-for-headers-libraries.patch
+++ b/poky/meta/recipes-devtools/python/python3/0001-Don-t-search-system-for-headers-libraries.patch
@@ -1,4 +1,4 @@
-From c83256e40d3057ac6325d649f9ce4c4da2c00874 Mon Sep 17 00:00:00 2001
+From 7589ab03ad3f7cb4bb092c31273ff22371ac77e4 Mon Sep 17 00:00:00 2001
 From: Jeremy Puhlman <jpuhlman@mvista.com>
 Date: Wed, 4 Mar 2020 00:06:42 +0000
 Subject: [PATCH] Don't search system for headers/libraries
@@ -11,10 +11,10 @@
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/setup.py b/setup.py
-index f7a3d39..9d2273d 100644
+index c3a6b5e..c892537 100644
 --- a/setup.py
 +++ b/setup.py
-@@ -857,8 +857,8 @@ class PyBuildExt(build_ext):
+@@ -856,8 +856,8 @@ class PyBuildExt(build_ext):
              add_dir_to_list(self.compiler.include_dirs,
                              sysconfig.get_config_var("INCLUDEDIR"))
  
diff --git a/poky/meta/recipes-devtools/python/python3/0001-Lib-sysconfig.py-use-prefix-value-from-build-configu.patch b/poky/meta/recipes-devtools/python/python3/0001-Lib-sysconfig.py-use-prefix-value-from-build-configu.patch
index a9240b3..641017e 100644
--- a/poky/meta/recipes-devtools/python/python3/0001-Lib-sysconfig.py-use-prefix-value-from-build-configu.patch
+++ b/poky/meta/recipes-devtools/python/python3/0001-Lib-sysconfig.py-use-prefix-value-from-build-configu.patch
@@ -1,4 +1,4 @@
-From 01d209277e145072e478d8b9acfea3638ee16cdc Mon Sep 17 00:00:00 2001
+From d82cb96eed1098920ad3cdcb36feb32137618066 Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex@linutronix.de>
 Date: Fri, 10 Sep 2021 12:28:31 +0200
 Subject: [PATCH] Lib/sysconfig.py: use prefix value from build configuration
diff --git a/poky/meta/recipes-devtools/python/python3/0001-distutils-sysconfig-append-STAGING_LIBDIR-python-sys.patch b/poky/meta/recipes-devtools/python/python3/0001-distutils-sysconfig-append-STAGING_LIBDIR-python-sys.patch
index 3c62c2a..368a725 100644
--- a/poky/meta/recipes-devtools/python/python3/0001-distutils-sysconfig-append-STAGING_LIBDIR-python-sys.patch
+++ b/poky/meta/recipes-devtools/python/python3/0001-distutils-sysconfig-append-STAGING_LIBDIR-python-sys.patch
@@ -1,4 +1,4 @@
-From 78dd1def953e18e7cda0325bb26d27c051bb6890 Mon Sep 17 00:00:00 2001
+From c24674e0a52367359a1a3d950bab8bc3d282279b Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Thu, 31 Jan 2019 16:46:30 +0100
 Subject: [PATCH] distutils/sysconfig: append
diff --git a/poky/meta/recipes-devtools/python/python3/0001-python3-use-cc_basename-to-replace-CC-for-checking-c.patch b/poky/meta/recipes-devtools/python/python3/0001-python3-use-cc_basename-to-replace-CC-for-checking-c.patch
index 6bb85fc..2c7d264 100644
--- a/poky/meta/recipes-devtools/python/python3/0001-python3-use-cc_basename-to-replace-CC-for-checking-c.patch
+++ b/poky/meta/recipes-devtools/python/python3/0001-python3-use-cc_basename-to-replace-CC-for-checking-c.patch
@@ -14,7 +14,7 @@
 Here use cc_basename to replace CC for checking compiler to avoid such
 kind of issue.
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/python/cpython/pull/96399]
 
 Signed-off-by: Li Zhou <li.zhou@windriver.com>
 
diff --git a/poky/meta/recipes-devtools/python/python3/0017-setup.py-do-not-report-missing-dependencies-for-disa.patch b/poky/meta/recipes-devtools/python/python3/0017-setup.py-do-not-report-missing-dependencies-for-disa.patch
index 0ead57e..041a03b 100644
--- a/poky/meta/recipes-devtools/python/python3/0017-setup.py-do-not-report-missing-dependencies-for-disa.patch
+++ b/poky/meta/recipes-devtools/python/python3/0017-setup.py-do-not-report-missing-dependencies-for-disa.patch
@@ -1,4 +1,4 @@
-From 246c5ffe75a2d494e415d8a7522df9fe22056d41 Mon Sep 17 00:00:00 2001
+From 311cf9abc213fcd76795cc3a25814a15fb552065 Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Mon, 7 Oct 2019 13:22:14 +0200
 Subject: [PATCH] setup.py: do not report missing dependencies for disabled
@@ -18,7 +18,7 @@
  1 file changed, 8 insertions(+)
 
 diff --git a/setup.py b/setup.py
-index 2be4738..62f0e18 100644
+index 934cf2e..ccf83b4 100644
 --- a/setup.py
 +++ b/setup.py
 @@ -517,6 +517,14 @@ class PyBuildExt(build_ext):
@@ -35,4 +35,4 @@
 +
          if self.missing:
              print()
-             print("Python build finished successfully!")
+             print("The necessary bits to build these optional modules were not "
diff --git a/poky/meta/recipes-devtools/python/python3/12-distutils-prefix-is-inside-staging-area.patch b/poky/meta/recipes-devtools/python/python3/12-distutils-prefix-is-inside-staging-area.patch
index de4c6c4..a06e9b5 100644
--- a/poky/meta/recipes-devtools/python/python3/12-distutils-prefix-is-inside-staging-area.patch
+++ b/poky/meta/recipes-devtools/python/python3/12-distutils-prefix-is-inside-staging-area.patch
@@ -1,4 +1,4 @@
-From 33b5a31df6050110f4481a24f5a0a0bf7fe80096 Mon Sep 17 00:00:00 2001
+From 1cc4cab8d579bbccb8a4fc13a28158a58c603cb4 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Tue, 14 May 2013 15:00:26 -0700
 Subject: [PATCH] python3: Add target and native recipes
diff --git a/poky/meta/recipes-devtools/python/python3_3.10.5.bb b/poky/meta/recipes-devtools/python/python3_3.10.5.bb
deleted file mode 100644
index b237c48..0000000
--- a/poky/meta/recipes-devtools/python/python3_3.10.5.bb
+++ /dev/null
@@ -1,429 +0,0 @@
-SUMMARY = "The Python Programming Language"
-HOMEPAGE = "http://www.python.org"
-DESCRIPTION = "Python is a programming language that lets you work more quickly and integrate your systems more effectively."
-LICENSE = "PSF-2.0"
-SECTION = "devel/python"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=4b8801e752a2c70ac41a5f9aa243f766"
-
-SRC_URI = "http://www.python.org/ftp/python/${PV}/Python-${PV}.tar.xz \
-           file://run-ptest \
-           file://create_manifest3.py \
-           file://get_module_deps3.py \
-           file://python3-manifest.json \
-           file://check_build_completeness.py \
-           file://reformat_sysconfig.py \
-           file://cgi_py.patch \
-           file://0001-Do-not-add-usr-lib-termcap-to-linker-flags-to-avoid-.patch \
-           ${@bb.utils.contains('PACKAGECONFIG', 'tk', '', 'file://avoid_warning_about_tkinter.patch', d)} \
-           file://0001-Do-not-use-the-shell-version-of-python-config-that-w.patch \
-           file://python-config.patch \
-           file://0001-Makefile.pre-use-qemu-wrapper-when-gathering-profile.patch \
-           file://0001-python3-use-cc_basename-to-replace-CC-for-checking-c.patch \
-           file://0001-bpo-36852-proper-detection-of-mips-architecture-for-.patch \
-           file://crosspythonpath.patch \
-           file://0001-Use-FLAG_REF-always-for-interned-strings.patch \
-           file://0001-test_locale.py-correct-the-test-output-format.patch \
-           file://0017-setup.py-do-not-report-missing-dependencies-for-disa.patch \
-           file://0001-Makefile-do-not-compile-.pyc-in-parallel.patch \
-           file://0020-configure.ac-setup.py-do-not-add-a-curses-include-pa.patch \
-           file://0001-Skip-failing-tests-due-to-load-variability-on-YP-AB.patch \
-           file://0001-test_ctypes.test_find-skip-without-tools-sdk.patch \
-           file://makerace.patch \
-           file://0001-sysconfig.py-use-platlibdir-also-for-purelib.patch \
-           file://0001-Lib-pty.py-handle-stdin-I-O-errors-same-way-as-maste.patch \
-           file://0001-setup.py-Do-not-detect-multiarch-paths-when-cross-co.patch \
-           file://deterministic_imports.patch \
-           file://0001-Avoid-shebang-overflow-on-python-config.py.patch \
-           file://0001-Mitigate-the-race-condition-in-testSockName.patch \
-           "
-
-SRC_URI:append:class-native = " \
-           file://0001-Lib-sysconfig.py-use-prefix-value-from-build-configu.patch \
-           file://0001-distutils-sysconfig-append-STAGING_LIBDIR-python-sys.patch \
-           file://12-distutils-prefix-is-inside-staging-area.patch \
-           file://0001-Don-t-search-system-for-headers-libraries.patch \
-           "
-SRC_URI[sha256sum] = "8437efd5b106ef0a75aabfbf23d880625120a73a86a22ade4d2e2e68d7b74486"
-
-# exclude pre-releases for both python 2.x and 3.x
-UPSTREAM_CHECK_REGEX = "[Pp]ython-(?P<pver>\d+(\.\d+)+).tar"
-UPSTREAM_CHECK_URI = "https://www.python.org/downloads/source/"
-
-CVE_PRODUCT = "python"
-
-# Upstream consider this expected behaviour
-CVE_CHECK_IGNORE += "CVE-2007-4559"
-# This is not exploitable when glibc has CVE-2016-10739 fixed.
-CVE_CHECK_IGNORE += "CVE-2019-18348"
-# These are specific to Microsoft Windows
-CVE_CHECK_IGNORE += "CVE-2020-15523 CVE-2022-26488"
-# The mailcap module is insecure by design, so this can't be fixed in a meaningful way.
-# The module will be removed in the future and flaws documented.
-CVE_CHECK_IGNORE += "CVE-2015-20107"
-
-PYTHON_MAJMIN = "3.10"
-
-S = "${WORKDIR}/Python-${PV}"
-
-BBCLASSEXTEND = "native nativesdk"
-
-inherit autotools pkgconfig qemu ptest multilib_header update-alternatives
-
-MULTILIB_SUFFIX = "${@d.getVar('base_libdir',1).split('/')[-1]}"
-
-ALTERNATIVE:${PN}-dev = "python3-config"
-ALTERNATIVE_LINK_NAME[python3-config] = "${bindir}/python${PYTHON_MAJMIN}-config"
-ALTERNATIVE_TARGET[python3-config] = "${bindir}/python${PYTHON_MAJMIN}-config-${MULTILIB_SUFFIX}"
-
-
-DEPENDS = "bzip2-replacement-native libffi bzip2 openssl sqlite3 zlib virtual/libintl xz virtual/crypt util-linux-libuuid libtirpc libnsl2 autoconf-archive-native ncurses"
-DEPENDS:append:class-target = " python3-native"
-DEPENDS:append:class-nativesdk = " python3-native"
-
-# force to use the mutex+cond implementation (https://bugs.python.org/issue41710)
-CFLAGS += "-DHAVE_BROKEN_POSIX_SEMAPHORES"
-
-EXTRA_OECONF = " --without-ensurepip --enable-shared --with-platlibdir=${baselib}"
-EXTRA_OECONF:append:class-native = " --bindir=${bindir}/${PN}"
-
-export CROSSPYTHONPATH="${STAGING_LIBDIR_NATIVE}/python${PYTHON_MAJMIN}/lib-dynload/"
-
-EXTRANATIVEPATH += "python3-native"
-
-# LTO will be enabled via packageconfig depending upong distro features
-LTO:class-target = ""
-
-CACHED_CONFIGUREVARS = " \
-                ac_cv_file__dev_ptmx=yes \
-                ac_cv_file__dev_ptc=no \
-                ac_cv_working_tzset=yes \
-"
-
-# PGO currently causes builds to not be reproducible so disable by default, see YOCTO #13407
-PACKAGECONFIG:class-target ??= "readline gdbm ${@bb.utils.filter('DISTRO_FEATURES', 'lto', d)}"
-PACKAGECONFIG:class-native ??= "readline gdbm"
-PACKAGECONFIG:class-nativesdk ??= "readline gdbm"
-PACKAGECONFIG[readline] = ",,readline"
-# Use profile guided optimisation by running PyBench inside qemu-user
-PACKAGECONFIG[pgo] = "--enable-optimizations,,qemu-native"
-PACKAGECONFIG[tk] = ",,tk"
-PACKAGECONFIG[gdbm] = ",,gdbm"
-PACKAGECONFIG[lto] = "--with-lto,,"
-
-do_configure:prepend () {
-    mkdir -p ${B}/Modules
-    cat > ${B}/Modules/Setup.local << EOF
-*disabled*
-${@bb.utils.contains('PACKAGECONFIG', 'gdbm', '', '_gdbm _dbm', d)}
-${@bb.utils.contains('PACKAGECONFIG', 'readline', '', 'readline', d)}
-EOF
-}
-
-CPPFLAGS:append = " -I${STAGING_INCDIR}/ncursesw -I${STAGING_INCDIR}/uuid"
-
-EXTRA_OEMAKE = '\
-  STAGING_LIBDIR=${STAGING_LIBDIR} \
-  STAGING_INCDIR=${STAGING_INCDIR} \
-  LIB=${baselib} \
-'
-
-do_compile:prepend:class-target() {
-       if ${@bb.utils.contains('PACKAGECONFIG', 'pgo', 'true', 'false', d)}; then
-                qemu_binary="${@qemu_wrapper_cmdline(d, '${STAGING_DIR_TARGET}', ['${B}', '${STAGING_DIR_TARGET}/${base_libdir}'])}"
-                cat >pgo-wrapper <<EOF
-#!/bin/sh
-cd ${B}
-$qemu_binary "\$@"
-EOF
-                chmod +x pgo-wrapper
-        fi
-}
-
-do_install:prepend() {
-        ${WORKDIR}/check_build_completeness.py ${T}/log.do_compile
-}
-
-do_install:append:class-target() {
-        oe_multilib_header python${PYTHON_MAJMIN}/pyconfig.h
-}
-
-do_install:append:class-native() {
-        # Make sure we use /usr/bin/env python
-        for PYTHSCRIPT in `grep -rIl ${bindir}/${PN}/python ${D}${bindir}/${PN}`; do
-                sed -i -e '1s|^#!.*|#!/usr/bin/env python3|' $PYTHSCRIPT
-        done
-        # Add a symlink to the native Python so that scripts can just invoke
-        # "nativepython" and get the right one without needing absolute paths
-        # (these often end up too long for the #! parser in the kernel as the
-        # buffer is 128 bytes long).
-        ln -s python3-native/python3 ${D}${bindir}/nativepython3
-
-        # Remove the opt-1.pyc and opt-2.pyc files. There are over 3,000 of them
-        # and the overhead in each recipe-sysroot-native isn't worth it, particularly
-        # when they're only used for python called with -O or -OO.
-        #find ${D} -name *opt-*.pyc -delete
-        # Remove all pyc files. There are a ton of them and it is probably faster to let
-        # python create the ones it wants at runtime rather than manage in the sstate 
-        # tarballs and sysroot creation.
-        find ${D} -name *.pyc -delete
-
-        # Nothing should be looking into ${B} for python3-native
-        sed -i -e 's:${B}:/build/path/unavailable/:g' \
-                ${D}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}*/Makefile
-}
-
-do_install:append() {
-        for c in ${D}/${libdir}/python${PYTHON_MAJMIN}/_sysconfigdata*.py; do
-            python3 ${WORKDIR}/reformat_sysconfig.py $c
-        done
-        rm -f ${D}${libdir}/python${PYTHON_MAJMIN}/__pycache__/_sysconfigdata*.cpython*
-
-        mkdir -p ${D}${libdir}/python-sysconfigdata
-        sysconfigfile=`find ${D} -name _sysconfig*.py`
-        sed -i  \
-                -e "s,^ 'LIBDIR'.*, 'LIBDIR': '${STAGING_LIBDIR}'\,,g" \
-                -e "s,^ 'INCLUDEDIR'.*, 'INCLUDEDIR': '${STAGING_INCDIR}'\,,g" \
-                -e "s,^ 'CONFINCLUDEDIR'.*, 'CONFINCLUDEDIR': '${STAGING_INCDIR}'\,,g" \
-                -e "/^ 'INCLDIRSTOMAKE'/{N; s,/usr/include,${STAGING_INCDIR},g}" \
-                -e "/^ 'INCLUDEPY'/s,/usr/include,${STAGING_INCDIR},g" \
-                -e "s,${B},/build/path/unavailable/,g" \
-                $sysconfigfile
-        cp $sysconfigfile ${D}${libdir}/python-sysconfigdata/_sysconfigdata.py
-
-
-        # Unfortunately the following pyc files are non-deterministc due to 'frozenset'
-        # being written without strict ordering, even with PYTHONHASHSEED = 0
-        # Upstream is discussing ways to solve the issue properly, until then let's
-        # just not install the problematic files.
-        # More info: http://benno.id.au/blog/2013/01/15/python-determinism
-        rm -f ${D}${libdir}/python${PYTHON_MAJMIN}/test/__pycache__/test_range.cpython*
-        rm -f ${D}${libdir}/python${PYTHON_MAJMIN}/test/__pycache__/test_xml_etree.cpython*
-
-        # Similar to the above, we're getting reproducibility issues with 
-        # /usr/lib/python3.10/__pycache__/traceback.cpython-310.pyc
-        # so remove it too
-        rm -f ${D}${libdir}/python${PYTHON_MAJMIN}/__pycache__/traceback.cpython*
-
-        # Remove the opt-1.pyc and opt-2.pyc files. They effectively waste space on embedded
-        # style targets as they're only used when python is called with the -O or -OO options
-        # which is rare.
-        find ${D} -name *opt-*.pyc -delete
-}
-
-do_install:append:class-nativesdk () {
-    # Make sure we use /usr/bin/env python
-    for PYTHSCRIPT in `grep -rIl ${bindir}/python ${D}${bindir}`; do
-         sed -i -e '1s|^#!.*|#!/usr/bin/env python3|' $PYTHSCRIPT
-    done
-    create_wrapper ${D}${bindir}/python${PYTHON_MAJMIN} TERMINFO_DIRS='${sysconfdir}/terminfo:/etc/terminfo:/usr/share/terminfo:/usr/share/misc/terminfo:/lib/terminfo' PYTHONNOUSERSITE='1'
-}
-
-SSTATE_SCAN_FILES += "Makefile _sysconfigdata.py"
-SSTATE_HASHEQUIV_FILEMAP = " \
-    populate_sysroot:*/lib*/python3*/_sysconfigdata*.py:${TMPDIR} \
-    populate_sysroot:*/lib*/python3*/_sysconfigdata*.py:${COREBASE} \
-    populate_sysroot:*/lib*/python3*/config-*/Makefile:${TMPDIR} \
-    populate_sysroot:*/lib*/python3*/config-*/Makefile:${COREBASE} \
-    populate_sysroot:*/lib*/python-sysconfigdata/_sysconfigdata.py:${TMPDIR} \
-    populate_sysroot:*/lib*/python-sysconfigdata/_sysconfigdata.py:${COREBASE} \
-    "
-PACKAGE_PREPROCESS_FUNCS += "py_package_preprocess"
-
-py_package_preprocess () {
-        # Remove references to buildmachine paths in target Makefile and _sysconfigdata
-        sed -i -e 's:--sysroot=${STAGING_DIR_TARGET}::g' -e s:'--with-libtool-sysroot=${STAGING_DIR_TARGET}'::g \
-                -e 's|${DEBUG_PREFIX_MAP}||g' \
-                -e 's:${HOSTTOOLS_DIR}/::g' \
-                -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
-                -e 's:${RECIPE_SYSROOT}::g' \
-                -e 's:${BASE_WORKDIR}/${MULTIMACH_TARGET_SYS}::g' \
-                ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}*/Makefile \
-                ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/_sysconfigdata*.py \
-                ${PKGD}/${bindir}/python${PYTHON_MAJMIN}-config
-
-        # Reformat _sysconfigdata after modifying it so that it remains
-        # reproducible
-        for c in ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/_sysconfigdata*.py; do
-            python3 ${WORKDIR}/reformat_sysconfig.py $c
-        done
-
-        # Recompile _sysconfigdata after modifying it
-        cd ${PKGD}
-        sysconfigfile=`find . -name _sysconfigdata_*.py`
-        ${STAGING_BINDIR_NATIVE}/python3-native/python3 \
-             -c "from py_compile import compile; compile('$sysconfigfile')"
-        ${STAGING_BINDIR_NATIVE}/python3-native/python3 \
-             -c "from py_compile import compile; compile('$sysconfigfile', optimize=1)"
-        ${STAGING_BINDIR_NATIVE}/python3-native/python3 \
-             -c "from py_compile import compile; compile('$sysconfigfile', optimize=2)"
-        cd -
-
-        mv ${PKGD}/${bindir}/python${PYTHON_MAJMIN}-config ${PKGD}/${bindir}/python${PYTHON_MAJMIN}-config-${MULTILIB_SUFFIX}
-        
-        #Remove the unneeded copy of target sysconfig data
-        rm -rf ${PKGD}/${libdir}/python-sysconfigdata
-}
-
-# We want bytecode precompiled .py files (.pyc's) by default
-# but the user may set it on their own conf
-INCLUDE_PYCS ?= "1"
-
-python(){
-    import collections, json
-
-    filename = os.path.join(d.getVar('THISDIR'), 'python3', 'python3-manifest.json')
-    # This python changes the datastore based on the contents of a file, so mark
-    # that dependency.
-    bb.parse.mark_dependency(d, filename)
-
-    with open(filename) as manifest_file:
-        manifest_str =  manifest_file.read()
-        json_start = manifest_str.find('# EOC') + 6
-        manifest_file.seek(json_start)
-        manifest_str = manifest_file.read()
-        python_manifest = json.loads(manifest_str, object_pairs_hook=collections.OrderedDict)
-
-    # First set RPROVIDES for -native case
-    # Hardcoded since it cant be python3-native-foo, should be python3-foo-native
-    pn = 'python3'
-    rprovides = (d.getVar('RPROVIDES') or "").split()
-
-    # ${PN}-misc-native is not in the manifest
-    rprovides.append(pn + '-misc-native')
-
-    for key in python_manifest:
-        pypackage = pn + '-' + key + '-native'
-        if pypackage not in rprovides:
-              rprovides.append(pypackage)
-
-    d.setVar('RPROVIDES:class-native', ' '.join(rprovides))
-
-    # Then work on the target
-    include_pycs = d.getVar('INCLUDE_PYCS')
-
-    packages = d.getVar('PACKAGES').split()
-    pn = d.getVar('PN')
-
-    newpackages=[]
-    for key in python_manifest:
-        pypackage = pn + '-' + key
-
-        if pypackage not in packages:
-            # We need to prepend, otherwise python-misc gets everything
-            # so we use a new variable
-            newpackages.append(pypackage)
-
-        # "Build" python's manifest FILES, RDEPENDS and SUMMARY
-        d.setVar('FILES:' + pypackage, '')
-        for value in python_manifest[key]['files']:
-            d.appendVar('FILES:' + pypackage, ' ' + value)
-
-        # Add cached files
-        if include_pycs == '1':
-            for value in python_manifest[key]['cached']:
-                    d.appendVar('FILES:' + pypackage, ' ' + value)
-
-        for value in python_manifest[key]['rdepends']:
-            # Make it work with or without $PN
-            if '${PN}' in value:
-                value=value.split('-', 1)[1]
-            d.appendVar('RDEPENDS:' + pypackage, ' ' + pn + '-' + value)
-
-        for value in python_manifest[key].get('rrecommends', ()):
-            if '${PN}' in value:
-                value=value.split('-', 1)[1]
-            d.appendVar('RRECOMMENDS:' + pypackage, ' ' + pn + '-' + value)
-
-        d.setVar('SUMMARY:' + pypackage, python_manifest[key]['summary'])
-
-    # Prepending so to avoid python-misc getting everything
-    packages = newpackages + packages
-    d.setVar('PACKAGES', ' '.join(packages))
-    d.setVar('ALLOW_EMPTY:${PN}-modules', '1')
-    d.setVar('ALLOW_EMPTY:${PN}-pkgutil', '1')
-
-    if "pgo" in d.getVar("PACKAGECONFIG").split() and not bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', True, False, d):
-        bb.fatal("pgo cannot be enabled as there is no qemu-usermode support for this architecture/machine")
-}
-
-# Files needed to create a new manifest
-
-do_create_manifest() {
-    # This task should be run with every new release of Python.
-    # We must ensure that PACKAGECONFIG enables everything when creating
-    # a new manifest, this is to base our new manifest on a complete
-    # native python build, containing all dependencies, otherwise the task
-    # wont be able to find the required files.
-    # e.g. BerkeleyDB is an optional build dependency so it may or may not
-    # be present, we must ensure it is.
-
-    cd ${WORKDIR}
-    # This needs to be executed by python-native and NOT by HOST's python
-    nativepython3 create_manifest3.py ${PYTHON_MAJMIN}
-    cp python3-manifest.json.new ${THISDIR}/python3/python3-manifest.json
-}
-
-# bitbake python -c create_manifest
-# Make sure we have native python ready when we create a new manifest
-addtask do_create_manifest after do_patch do_prepare_recipe_sysroot
-
-# manual dependency additions
-RRECOMMENDS:${PN}-core:append:class-nativesdk = " nativesdk-python3-modules"
-RRECOMMENDS:${PN}-crypt:append:class-target = " ${MLPREFIX}openssl ${MLPREFIX}ca-certificates"
-RRECOMMENDS:${PN}-crypt:append:class-nativesdk = " ${MLPREFIX}openssl ${MLPREFIX}ca-certificates"
-
-# For historical reasons PN is empty and provided by python3-modules
-FILES:${PN} = ""
-RPROVIDES:${PN}-modules = "${PN}"
-
-FILES:${PN}-pydoc += "${bindir}/pydoc${PYTHON_MAJMIN} ${bindir}/pydoc3"
-FILES:${PN}-idle += "${bindir}/idle3 ${bindir}/idle${PYTHON_MAJMIN}"
-
-# provide python-pyvenv from python3-venv
-RPROVIDES:${PN}-venv += "${MLPREFIX}python3-pyvenv"
-
-# package libpython3
-PACKAGES =+ "libpython3 libpython3-staticdev"
-FILES:libpython3 = "${libdir}/libpython*.so.*"
-FILES:libpython3-staticdev += "${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}-*/libpython${PYTHON_MAJMIN}.a"
-INSANE_SKIP:${PN}-dev += "dev-elf"
-INSANE_SKIP:${PN}-ptest = "dev-deps"
-
-# catch all the rest (unsorted)
-PACKAGES += "${PN}-misc"
-RDEPENDS:${PN}-misc += "\
-  ${PN}-core \
-  ${PN}-email \
-  ${PN}-codecs \
-  ${PN}-pydoc \
-  ${PN}-pickle \
-  ${PN}-audio \
-  ${PN}-numbers \
-"
-RDEPENDS:${PN}-modules:append:class-target = " ${MLPREFIX}python3-misc"
-RDEPENDS:${PN}-modules:append:class-nativesdk = " ${MLPREFIX}python3-misc"
-FILES:${PN}-misc = "${libdir}/python${PYTHON_MAJMIN} ${libdir}/python${PYTHON_MAJMIN}/lib-dynload"
-
-# catch manpage
-PACKAGES += "${PN}-man"
-FILES:${PN}-man = "${datadir}/man"
-
-# See https://bugs.python.org/issue18748 and https://bugs.python.org/issue37395
-RDEPENDS:libpython3:append:libc-glibc = " libgcc"
-RDEPENDS:${PN}-ctypes:append:libc-glibc = " ${MLPREFIX}ldconfig"
-RDEPENDS:${PN}-ptest = "${PN}-modules ${PN}-tests ${PN}-dev unzip bzip2 libgcc tzdata-europe coreutils sed"
-RDEPENDS:${PN}-ptest:append:libc-glibc = " locale-base-tr-tr.iso-8859-9"
-RDEPENDS:${PN}-tkinter += "${@bb.utils.contains('PACKAGECONFIG', 'tk', '${MLPREFIX}tk ${MLPREFIX}tk-lib', '', d)}"
-RDEPENDS:${PN}-idle += "${@bb.utils.contains('PACKAGECONFIG', 'tk', '${PN}-tkinter ${MLPREFIX}tcl', '', d)}"
-DEV_PKG_DEPENDENCY = ""
-RDEPENDS:${PN}-pydoc += "${PN}-io"
-
-RDEPENDS:${PN}-tests:append:class-target = " ${MLPREFIX}bash"
-RDEPENDS:${PN}-tests:append:class-nativesdk = " ${MLPREFIX}bash"
-
-# Python's tests contain large numbers of files we don't need in the recipe sysroots
-SYSROOT_PREPROCESS_FUNCS += " py3_sysroot_cleanup"
-py3_sysroot_cleanup () {
-	rm -rf ${SYSROOT_DESTDIR}${libdir}/python${PYTHON_MAJMIN}/test
-}
diff --git a/poky/meta/recipes-devtools/python/python3_3.10.6.bb b/poky/meta/recipes-devtools/python/python3_3.10.6.bb
new file mode 100644
index 0000000..1b28728
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3_3.10.6.bb
@@ -0,0 +1,432 @@
+SUMMARY = "The Python Programming Language"
+HOMEPAGE = "http://www.python.org"
+DESCRIPTION = "Python is a programming language that lets you work more quickly and integrate your systems more effectively."
+LICENSE = "PSF-2.0"
+SECTION = "devel/python"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=4b8801e752a2c70ac41a5f9aa243f766"
+
+SRC_URI = "http://www.python.org/ftp/python/${PV}/Python-${PV}.tar.xz \
+           file://run-ptest \
+           file://create_manifest3.py \
+           file://get_module_deps3.py \
+           file://python3-manifest.json \
+           file://check_build_completeness.py \
+           file://reformat_sysconfig.py \
+           file://cgi_py.patch \
+           file://0001-Do-not-add-usr-lib-termcap-to-linker-flags-to-avoid-.patch \
+           ${@bb.utils.contains('PACKAGECONFIG', 'tk', '', 'file://avoid_warning_about_tkinter.patch', d)} \
+           file://0001-Do-not-use-the-shell-version-of-python-config-that-w.patch \
+           file://python-config.patch \
+           file://0001-Makefile.pre-use-qemu-wrapper-when-gathering-profile.patch \
+           file://0001-python3-use-cc_basename-to-replace-CC-for-checking-c.patch \
+           file://0001-bpo-36852-proper-detection-of-mips-architecture-for-.patch \
+           file://crosspythonpath.patch \
+           file://0001-Use-FLAG_REF-always-for-interned-strings.patch \
+           file://0001-test_locale.py-correct-the-test-output-format.patch \
+           file://0017-setup.py-do-not-report-missing-dependencies-for-disa.patch \
+           file://0001-Makefile-do-not-compile-.pyc-in-parallel.patch \
+           file://0020-configure.ac-setup.py-do-not-add-a-curses-include-pa.patch \
+           file://0001-Skip-failing-tests-due-to-load-variability-on-YP-AB.patch \
+           file://0001-test_ctypes.test_find-skip-without-tools-sdk.patch \
+           file://makerace.patch \
+           file://0001-sysconfig.py-use-platlibdir-also-for-purelib.patch \
+           file://0001-Lib-pty.py-handle-stdin-I-O-errors-same-way-as-maste.patch \
+           file://0001-setup.py-Do-not-detect-multiarch-paths-when-cross-co.patch \
+           file://deterministic_imports.patch \
+           file://0001-Avoid-shebang-overflow-on-python-config.py.patch \
+           file://0001-Mitigate-the-race-condition-in-testSockName.patch \
+           "
+
+SRC_URI:append:class-native = " \
+           file://0001-Lib-sysconfig.py-use-prefix-value-from-build-configu.patch \
+           file://0001-distutils-sysconfig-append-STAGING_LIBDIR-python-sys.patch \
+           file://12-distutils-prefix-is-inside-staging-area.patch \
+           file://0001-Don-t-search-system-for-headers-libraries.patch \
+           "
+SRC_URI[sha256sum] = "f795ff87d11d4b0c7c33bc8851b0c28648d8a4583aa2100a98c22b4326b6d3f3"
+
+# exclude pre-releases for both python 2.x and 3.x
+UPSTREAM_CHECK_REGEX = "[Pp]ython-(?P<pver>\d+(\.\d+)+).tar"
+UPSTREAM_CHECK_URI = "https://www.python.org/downloads/source/"
+
+CVE_PRODUCT = "python"
+
+# Upstream consider this expected behaviour
+CVE_CHECK_IGNORE += "CVE-2007-4559"
+# This is not exploitable when glibc has CVE-2016-10739 fixed.
+CVE_CHECK_IGNORE += "CVE-2019-18348"
+# These are specific to Microsoft Windows
+CVE_CHECK_IGNORE += "CVE-2020-15523 CVE-2022-26488"
+# The mailcap module is insecure by design, so this can't be fixed in a meaningful way.
+# The module will be removed in the future and flaws documented.
+CVE_CHECK_IGNORE += "CVE-2015-20107"
+
+PYTHON_MAJMIN = "3.10"
+
+S = "${WORKDIR}/Python-${PV}"
+
+BBCLASSEXTEND = "native nativesdk"
+
+inherit autotools pkgconfig qemu ptest multilib_header update-alternatives
+
+MULTILIB_SUFFIX = "${@d.getVar('base_libdir',1).split('/')[-1]}"
+
+ALTERNATIVE:${PN}-dev = "python3-config"
+ALTERNATIVE_LINK_NAME[python3-config] = "${bindir}/python${PYTHON_MAJMIN}-config"
+ALTERNATIVE_TARGET[python3-config] = "${bindir}/python${PYTHON_MAJMIN}-config-${MULTILIB_SUFFIX}"
+
+
+DEPENDS = "bzip2-replacement-native libffi bzip2 openssl sqlite3 zlib virtual/libintl xz virtual/crypt util-linux-libuuid libtirpc libnsl2 autoconf-archive-native ncurses"
+DEPENDS:append:class-target = " python3-native"
+DEPENDS:append:class-nativesdk = " python3-native"
+
+# force to use the mutex+cond implementation (https://bugs.python.org/issue41710)
+CFLAGS += "-DHAVE_BROKEN_POSIX_SEMAPHORES"
+
+EXTRA_OECONF = " --without-ensurepip --enable-shared --with-platlibdir=${baselib}"
+EXTRA_OECONF:append:class-native = " --bindir=${bindir}/${PN}"
+
+export CROSSPYTHONPATH="${STAGING_LIBDIR_NATIVE}/python${PYTHON_MAJMIN}/lib-dynload/"
+
+EXTRANATIVEPATH += "python3-native"
+
+# LTO will be enabled via packageconfig depending upon distro features
+LTO:class-target = ""
+
+CACHED_CONFIGUREVARS = " \
+                ac_cv_file__dev_ptmx=yes \
+                ac_cv_file__dev_ptc=no \
+                ac_cv_working_tzset=yes \
+"
+
+# PGO currently causes builds to not be reproducible so disable by default, see YOCTO #13407
+PACKAGECONFIG:class-target ??= "readline gdbm ${@bb.utils.filter('DISTRO_FEATURES', 'lto', d)}"
+PACKAGECONFIG:class-native ??= "readline gdbm"
+PACKAGECONFIG:class-nativesdk ??= "readline gdbm"
+PACKAGECONFIG[readline] = ",,readline"
+# Use profile guided optimisation by running PyBench inside qemu-user
+PACKAGECONFIG[pgo] = "--enable-optimizations,,qemu-native"
+PACKAGECONFIG[tk] = ",,tk"
+PACKAGECONFIG[gdbm] = ",,gdbm"
+PACKAGECONFIG[lto] = "--with-lto,,"
+
+do_configure:prepend () {
+    mkdir -p ${B}/Modules
+    cat > ${B}/Modules/Setup.local << EOF
+*disabled*
+${@bb.utils.contains('PACKAGECONFIG', 'gdbm', '', '_gdbm _dbm', d)}
+${@bb.utils.contains('PACKAGECONFIG', 'readline', '', 'readline', d)}
+EOF
+}
+
+CPPFLAGS:append = " -I${STAGING_INCDIR}/ncursesw -I${STAGING_INCDIR}/uuid"
+
+EXTRA_OEMAKE = '\
+  STAGING_LIBDIR=${STAGING_LIBDIR} \
+  STAGING_INCDIR=${STAGING_INCDIR} \
+  LIB=${baselib} \
+'
+
+do_compile:prepend:class-target() {
+       if ${@bb.utils.contains('PACKAGECONFIG', 'pgo', 'true', 'false', d)}; then
+                qemu_binary="${@qemu_wrapper_cmdline(d, '${STAGING_DIR_TARGET}', ['${B}', '${STAGING_DIR_TARGET}/${base_libdir}'])}"
+                cat >pgo-wrapper <<EOF
+#!/bin/sh
+cd ${B}
+$qemu_binary "\$@"
+EOF
+                chmod +x pgo-wrapper
+        fi
+}
+
+do_install:prepend() {
+        ${WORKDIR}/check_build_completeness.py ${T}/log.do_compile
+}
+
+do_install:append:class-target() {
+        oe_multilib_header python${PYTHON_MAJMIN}/pyconfig.h
+}
+
+do_install:append:class-native() {
+        # Make sure we use /usr/bin/env python
+        for PYTHSCRIPT in `grep -rIl ${bindir}/${PN}/python ${D}${bindir}/${PN}`; do
+                sed -i -e '1s|^#!.*|#!/usr/bin/env python3|' $PYTHSCRIPT
+        done
+        # Add a symlink to the native Python so that scripts can just invoke
+        # "nativepython" and get the right one without needing absolute paths
+        # (these often end up too long for the #! parser in the kernel as the
+        # buffer is 128 bytes long).
+        ln -s python3-native/python3 ${D}${bindir}/nativepython3
+
+        # Remove the opt-1.pyc and opt-2.pyc files. There are over 3,000 of them
+        # and the overhead in each recipe-sysroot-native isn't worth it, particularly
+        # when they're only used for python called with -O or -OO.
+        #find ${D} -name *opt-*.pyc -delete
+        # Remove all pyc files. There are a ton of them and it is probably faster to let
+        # python create the ones it wants at runtime rather than manage in the sstate 
+        # tarballs and sysroot creation.
+        find ${D} -name *.pyc -delete
+
+        # Nothing should be looking into ${B} for python3-native
+        sed -i -e 's:${B}:/build/path/unavailable/:g' \
+                ${D}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}*/Makefile
+        
+        # disable the lookup in user's site-packages globally
+        sed -i 's#ENABLE_USER_SITE = None#ENABLE_USER_SITE = False#' ${D}${libdir}/python${PYTHON_MAJMIN}/site.py
+}
+
+do_install:append() {
+        for c in ${D}/${libdir}/python${PYTHON_MAJMIN}/_sysconfigdata*.py; do
+            python3 ${WORKDIR}/reformat_sysconfig.py $c
+        done
+        rm -f ${D}${libdir}/python${PYTHON_MAJMIN}/__pycache__/_sysconfigdata*.cpython*
+
+        mkdir -p ${D}${libdir}/python-sysconfigdata
+        sysconfigfile=`find ${D} -name _sysconfig*.py`
+        sed -i  \
+                -e "s,^ 'LIBDIR'.*, 'LIBDIR': '${STAGING_LIBDIR}'\,,g" \
+                -e "s,^ 'INCLUDEDIR'.*, 'INCLUDEDIR': '${STAGING_INCDIR}'\,,g" \
+                -e "s,^ 'CONFINCLUDEDIR'.*, 'CONFINCLUDEDIR': '${STAGING_INCDIR}'\,,g" \
+                -e "/^ 'INCLDIRSTOMAKE'/{N; s,/usr/include,${STAGING_INCDIR},g}" \
+                -e "/^ 'INCLUDEPY'/s,/usr/include,${STAGING_INCDIR},g" \
+                -e "s,${B},/build/path/unavailable/,g" \
+                $sysconfigfile
+        cp $sysconfigfile ${D}${libdir}/python-sysconfigdata/_sysconfigdata.py
+
+
+        # Unfortunately the following pyc files are non-deterministic due to 'frozenset'
+        # being written without strict ordering, even with PYTHONHASHSEED = 0
+        # Upstream is discussing ways to solve the issue properly, until then let's
+        # just not install the problematic files.
+        # More info: http://benno.id.au/blog/2013/01/15/python-determinism
+        rm -f ${D}${libdir}/python${PYTHON_MAJMIN}/test/__pycache__/test_range.cpython*
+        rm -f ${D}${libdir}/python${PYTHON_MAJMIN}/test/__pycache__/test_xml_etree.cpython*
+
+        # Similar to the above, we're getting reproducibility issues with 
+        # /usr/lib/python3.10/__pycache__/traceback.cpython-310.pyc
+        # so remove it too
+        rm -f ${D}${libdir}/python${PYTHON_MAJMIN}/__pycache__/traceback.cpython*
+
+        # Remove the opt-1.pyc and opt-2.pyc files. They effectively waste space on embedded
+        # style targets as they're only used when python is called with the -O or -OO options
+        # which is rare.
+        find ${D} -name *opt-*.pyc -delete
+}
+
+do_install:append:class-nativesdk () {
+    # Make sure we use /usr/bin/env python
+    for PYTHSCRIPT in `grep -rIl ${bindir}/python ${D}${bindir}`; do
+         sed -i -e '1s|^#!.*|#!/usr/bin/env python3|' $PYTHSCRIPT
+    done
+    create_wrapper ${D}${bindir}/python${PYTHON_MAJMIN} TERMINFO_DIRS='${sysconfdir}/terminfo:/etc/terminfo:/usr/share/terminfo:/usr/share/misc/terminfo:/lib/terminfo' PYTHONNOUSERSITE='1'
+}
+
+SSTATE_SCAN_FILES += "Makefile _sysconfigdata.py"
+SSTATE_HASHEQUIV_FILEMAP = " \
+    populate_sysroot:*/lib*/python3*/_sysconfigdata*.py:${TMPDIR} \
+    populate_sysroot:*/lib*/python3*/_sysconfigdata*.py:${COREBASE} \
+    populate_sysroot:*/lib*/python3*/config-*/Makefile:${TMPDIR} \
+    populate_sysroot:*/lib*/python3*/config-*/Makefile:${COREBASE} \
+    populate_sysroot:*/lib*/python-sysconfigdata/_sysconfigdata.py:${TMPDIR} \
+    populate_sysroot:*/lib*/python-sysconfigdata/_sysconfigdata.py:${COREBASE} \
+    "
+PACKAGE_PREPROCESS_FUNCS += "py_package_preprocess"
+
+py_package_preprocess () {
+        # Remove references to buildmachine paths in target Makefile and _sysconfigdata
+        sed -i -e 's:--sysroot=${STAGING_DIR_TARGET}::g' -e s:'--with-libtool-sysroot=${STAGING_DIR_TARGET}'::g \
+                -e 's|${DEBUG_PREFIX_MAP}||g' \
+                -e 's:${HOSTTOOLS_DIR}/::g' \
+                -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+                -e 's:${RECIPE_SYSROOT}::g' \
+                -e 's:${BASE_WORKDIR}/${MULTIMACH_TARGET_SYS}::g' \
+                ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}*/Makefile \
+                ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/_sysconfigdata*.py \
+                ${PKGD}/${bindir}/python${PYTHON_MAJMIN}-config
+
+        # Reformat _sysconfigdata after modifying it so that it remains
+        # reproducible
+        for c in ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/_sysconfigdata*.py; do
+            python3 ${WORKDIR}/reformat_sysconfig.py $c
+        done
+
+        # Recompile _sysconfigdata after modifying it
+        cd ${PKGD}
+        sysconfigfile=`find . -name _sysconfigdata_*.py`
+        ${STAGING_BINDIR_NATIVE}/python3-native/python3 \
+             -c "from py_compile import compile; compile('$sysconfigfile')"
+        ${STAGING_BINDIR_NATIVE}/python3-native/python3 \
+             -c "from py_compile import compile; compile('$sysconfigfile', optimize=1)"
+        ${STAGING_BINDIR_NATIVE}/python3-native/python3 \
+             -c "from py_compile import compile; compile('$sysconfigfile', optimize=2)"
+        cd -
+
+        mv ${PKGD}/${bindir}/python${PYTHON_MAJMIN}-config ${PKGD}/${bindir}/python${PYTHON_MAJMIN}-config-${MULTILIB_SUFFIX}
+        
+        #Remove the unneeded copy of target sysconfig data
+        rm -rf ${PKGD}/${libdir}/python-sysconfigdata
+}
+
+# We want bytecode precompiled .py files (.pyc's) by default
+# but the user may set it on their own conf
+INCLUDE_PYCS ?= "1"
+
+python(){
+    import collections, json
+
+    filename = os.path.join(d.getVar('THISDIR'), 'python3', 'python3-manifest.json')
+    # This python changes the datastore based on the contents of a file, so mark
+    # that dependency.
+    bb.parse.mark_dependency(d, filename)
+
+    with open(filename) as manifest_file:
+        manifest_str =  manifest_file.read()
+        json_start = manifest_str.find('# EOC') + 6
+        manifest_file.seek(json_start)
+        manifest_str = manifest_file.read()
+        python_manifest = json.loads(manifest_str, object_pairs_hook=collections.OrderedDict)
+
+    # First set RPROVIDES for -native case
+    # Hardcoded since it can't be python3-native-foo, should be python3-foo-native
+    pn = 'python3'
+    rprovides = (d.getVar('RPROVIDES') or "").split()
+
+    # ${PN}-misc-native is not in the manifest
+    rprovides.append(pn + '-misc-native')
+
+    for key in python_manifest:
+        pypackage = pn + '-' + key + '-native'
+        if pypackage not in rprovides:
+              rprovides.append(pypackage)
+
+    d.setVar('RPROVIDES:class-native', ' '.join(rprovides))
+
+    # Then work on the target
+    include_pycs = d.getVar('INCLUDE_PYCS')
+
+    packages = d.getVar('PACKAGES').split()
+    pn = d.getVar('PN')
+
+    newpackages=[]
+    for key in python_manifest:
+        pypackage = pn + '-' + key
+
+        if pypackage not in packages:
+            # We need to prepend, otherwise python-misc gets everything
+            # so we use a new variable
+            newpackages.append(pypackage)
+
+        # "Build" python's manifest FILES, RDEPENDS and SUMMARY
+        d.setVar('FILES:' + pypackage, '')
+        for value in python_manifest[key]['files']:
+            d.appendVar('FILES:' + pypackage, ' ' + value)
+
+        # Add cached files
+        if include_pycs == '1':
+            for value in python_manifest[key]['cached']:
+                    d.appendVar('FILES:' + pypackage, ' ' + value)
+
+        for value in python_manifest[key]['rdepends']:
+            # Make it work with or without $PN
+            if '${PN}' in value:
+                value=value.split('-', 1)[1]
+            d.appendVar('RDEPENDS:' + pypackage, ' ' + pn + '-' + value)
+
+        for value in python_manifest[key].get('rrecommends', ()):
+            if '${PN}' in value:
+                value=value.split('-', 1)[1]
+            d.appendVar('RRECOMMENDS:' + pypackage, ' ' + pn + '-' + value)
+
+        d.setVar('SUMMARY:' + pypackage, python_manifest[key]['summary'])
+
+    # Prepending so to avoid python-misc getting everything
+    packages = newpackages + packages
+    d.setVar('PACKAGES', ' '.join(packages))
+    d.setVar('ALLOW_EMPTY:${PN}-modules', '1')
+    d.setVar('ALLOW_EMPTY:${PN}-pkgutil', '1')
+
+    if "pgo" in d.getVar("PACKAGECONFIG").split() and not bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', True, False, d):
+        bb.fatal("pgo cannot be enabled as there is no qemu-usermode support for this architecture/machine")
+}
+
+# Files needed to create a new manifest
+
+do_create_manifest() {
+    # This task should be run with every new release of Python.
+    # We must ensure that PACKAGECONFIG enables everything when creating
+    # a new manifest, this is to base our new manifest on a complete
+    # native python build, containing all dependencies, otherwise the task
+    # won't be able to find the required files.
+    # e.g. BerkeleyDB is an optional build dependency so it may or may not
+    # be present, we must ensure it is.
+
+    cd ${WORKDIR}
+    # This needs to be executed by python-native and NOT by HOST's python
+    nativepython3 create_manifest3.py ${PYTHON_MAJMIN}
+    cp python3-manifest.json.new ${THISDIR}/python3/python3-manifest.json
+}
+
+# bitbake python -c create_manifest
+# Make sure we have native python ready when we create a new manifest
+addtask do_create_manifest after do_patch do_prepare_recipe_sysroot
+
+# manual dependency additions
+RRECOMMENDS:${PN}-core:append:class-nativesdk = " nativesdk-python3-modules"
+RRECOMMENDS:${PN}-crypt:append:class-target = " ${MLPREFIX}openssl ${MLPREFIX}ca-certificates"
+RRECOMMENDS:${PN}-crypt:append:class-nativesdk = " ${MLPREFIX}openssl ${MLPREFIX}ca-certificates"
+
+# For historical reasons PN is empty and provided by python3-modules
+FILES:${PN} = ""
+RPROVIDES:${PN}-modules = "${PN}"
+
+FILES:${PN}-pydoc += "${bindir}/pydoc${PYTHON_MAJMIN} ${bindir}/pydoc3"
+FILES:${PN}-idle += "${bindir}/idle3 ${bindir}/idle${PYTHON_MAJMIN}"
+
+# provide python-pyvenv from python3-venv
+RPROVIDES:${PN}-venv += "${MLPREFIX}python3-pyvenv"
+
+# package libpython3
+PACKAGES =+ "libpython3 libpython3-staticdev"
+FILES:libpython3 = "${libdir}/libpython*.so.*"
+FILES:libpython3-staticdev += "${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}-*/libpython${PYTHON_MAJMIN}.a"
+INSANE_SKIP:${PN}-dev += "dev-elf"
+INSANE_SKIP:${PN}-ptest = "dev-deps"
+
+# catch all the rest (unsorted)
+PACKAGES += "${PN}-misc"
+RDEPENDS:${PN}-misc += "\
+  ${PN}-core \
+  ${PN}-email \
+  ${PN}-codecs \
+  ${PN}-pydoc \
+  ${PN}-pickle \
+  ${PN}-audio \
+  ${PN}-numbers \
+"
+RDEPENDS:${PN}-modules:append:class-target = " ${MLPREFIX}python3-misc"
+RDEPENDS:${PN}-modules:append:class-nativesdk = " ${MLPREFIX}python3-misc"
+FILES:${PN}-misc = "${libdir}/python${PYTHON_MAJMIN} ${libdir}/python${PYTHON_MAJMIN}/lib-dynload"
+
+# catch manpage
+PACKAGES += "${PN}-man"
+FILES:${PN}-man = "${datadir}/man"
+
+# See https://bugs.python.org/issue18748 and https://bugs.python.org/issue37395
+RDEPENDS:libpython3:append:libc-glibc = " libgcc"
+RDEPENDS:${PN}-ctypes:append:libc-glibc = " ${MLPREFIX}ldconfig"
+RDEPENDS:${PN}-ptest = "${PN}-modules ${PN}-tests ${PN}-dev unzip bzip2 libgcc tzdata-europe coreutils sed"
+RDEPENDS:${PN}-ptest:append:libc-glibc = " locale-base-tr-tr.iso-8859-9"
+RDEPENDS:${PN}-tkinter += "${@bb.utils.contains('PACKAGECONFIG', 'tk', '${MLPREFIX}tk ${MLPREFIX}tk-lib', '', d)}"
+RDEPENDS:${PN}-idle += "${@bb.utils.contains('PACKAGECONFIG', 'tk', '${PN}-tkinter ${MLPREFIX}tcl', '', d)}"
+DEV_PKG_DEPENDENCY = ""
+RDEPENDS:${PN}-pydoc += "${PN}-io"
+
+RDEPENDS:${PN}-tests:append:class-target = " ${MLPREFIX}bash"
+RDEPENDS:${PN}-tests:append:class-nativesdk = " ${MLPREFIX}bash"
+
+# Python's tests contain large numbers of files we don't need in the recipe sysroots
+SYSROOT_PREPROCESS_FUNCS += " py3_sysroot_cleanup"
+py3_sysroot_cleanup () {
+	rm -rf ${SYSROOT_DESTDIR}${libdir}/python${PYTHON_MAJMIN}/test
+}
diff --git a/poky/meta/recipes-devtools/qemu/qemu.inc b/poky/meta/recipes-devtools/qemu/qemu.inc
index 0db6701..56fc7aa 100644
--- a/poky/meta/recipes-devtools/qemu/qemu.inc
+++ b/poky/meta/recipes-devtools/qemu/qemu.inc
@@ -27,7 +27,12 @@
            file://0008-tests-meson.build-use-relative-path-to-refer-to-file.patch \
            file://0009-Define-MAP_SYNC-and-MAP_SHARED_VALIDATE-on-needed-li.patch \
            file://0010-hw-pvrdma-Protect-against-buggy-or-malicious-guest-d.patch \
+           file://qemu-7.0.0-glibc-2.36.patch \
            file://CVE-2022-35414.patch \
+           file://CVE-2021-3507_1.patch \
+           file://CVE-2021-3507_2.patch \
+           file://CVE-2022-0216_1.patch \
+           file://CVE-2022-0216_2.patch \
            "
 UPSTREAM_CHECK_REGEX = "qemu-(?P<pver>\d+(\.\d+)+)\.tar"
 
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507_1.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507_1.patch
new file mode 100644
index 0000000..24fd2c5
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507_1.patch
@@ -0,0 +1,92 @@
+From 57a89cc36ead7234e540d0ecbe1a792ab6b04cb7 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
+Date: Thu, 18 Nov 2021 12:57:32 +0100
+Subject: [PATCH 1/2] hw/block/fdc: Prevent end-of-track overrun
+ (CVE-2021-3507)
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Per the 82078 datasheet, if the end-of-track (EOT byte in
+the FIFO) is more than the number of sectors per side, the
+command is terminated unsuccessfully:
+
+* 5.2.5 DATA TRANSFER TERMINATION
+
+  The 82078 supports terminal count explicitly through
+  the TC pin and implicitly through the underrun/over-
+  run and end-of-track (EOT) functions. For full sector
+  transfers, the EOT parameter can define the last
+  sector to be transferred in a single or multisector
+  transfer. If the last sector to be transferred is a par-
+  tial sector, the host can stop transferring the data in
+  mid-sector, and the 82078 will continue to complete
+  the sector as if a hardware TC was received. The
+  only difference between these implicit functions and
+  TC is that they return "abnormal termination" result
+  status. Such status indications can be ignored if they
+  were expected.
+
+* 6.1.3 READ TRACK
+
+  This command terminates when the EOT specified
+  number of sectors have been read. If the 82078
+  does not find an I D Address Mark on the diskette
+  after the second· occurrence of a pulse on the
+  INDX# pin, then it sets the IC code in Status Regis-
+  ter 0 to "01" (Abnormal termination), sets the MA bit
+  in Status Register 1 to "1", and terminates the com-
+  mand.
+
+* 6.1.6 VERIFY
+
+  Refer to Table 6-6 and Table 6-7 for information
+  concerning the values of MT and EC versus SC and
+  EOT value.
+
+* Table 6·6. Result Phase Table
+
+* Table 6-7. Verify Command Result Phase Table
+
+Fix by aborting the transfer when EOT > # Sectors Per Side.
+
+Cc: qemu-stable@nongnu.org
+Cc: Hervé Poussineau <hpoussin@reactos.org>
+Fixes: baca51faff0 ("floppy driver: disk geometry auto detect")
+Reported-by: Alexander Bulekov <alxndr@bu.edu>
+Resolves: https://gitlab.com/qemu-project/qemu/-/issues/339
+Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
+Message-Id: <20211118115733.4038610-2-philmd@redhat.com>
+Reviewed-by: Hanna Reitz <hreitz@redhat.com>
+Signed-off-by: Kevin Wolf <kwolf@redhat.com>
+
+Upstream-Status: Backport [defac5e2fbddf8423a354ff0454283a2115e1367]
+CVE: CVE-2021-3507
+
+Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
+---
+ hw/block/fdc.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/hw/block/fdc.c b/hw/block/fdc.c
+index 347875a0c..57bb35579 100644
+--- a/hw/block/fdc.c
++++ b/hw/block/fdc.c
+@@ -1530,6 +1530,14 @@ static void fdctrl_start_transfer(FDCtrl *fdctrl, int direction)
+         int tmp;
+         fdctrl->data_len = 128 << (fdctrl->fifo[5] > 7 ? 7 : fdctrl->fifo[5]);
+         tmp = (fdctrl->fifo[6] - ks + 1);
++        if (tmp < 0) {
++            FLOPPY_DPRINTF("invalid EOT: %d\n", tmp);
++            fdctrl_stop_transfer(fdctrl, FD_SR0_ABNTERM, FD_SR1_MA, 0x00);
++            fdctrl->fifo[3] = kt;
++            fdctrl->fifo[4] = kh;
++            fdctrl->fifo[5] = ks;
++            return;
++        }
+         if (fdctrl->fifo[0] & 0x80)
+             tmp += fdctrl->fifo[6];
+         fdctrl->data_len *= tmp;
+-- 
+2.33.0
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507_2.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507_2.patch
new file mode 100644
index 0000000..acc93e8
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507_2.patch
@@ -0,0 +1,115 @@
+From 3e8601ec707dcbc3c768f7733d016dc70c947e4a Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
+Date: Thu, 18 Nov 2021 12:57:33 +0100
+Subject: [PATCH 2/2] tests/qtest/fdc-test: Add a regression test for
+ CVE-2021-3507
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Add the reproducer from https://gitlab.com/qemu-project/qemu/-/issues/339
+
+Without the previous commit, when running 'make check-qtest-i386'
+with QEMU configured with '--enable-sanitizers' we get:
+
+  ==4028352==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x619000062a00 at pc 0x5626d03c491a bp 0x7ffdb4199410 sp 0x7ffdb4198bc0
+  READ of size 786432 at 0x619000062a00 thread T0
+      #0 0x5626d03c4919 in __asan_memcpy (qemu-system-i386+0x1e65919)
+      #1 0x5626d1c023cc in flatview_write_continue softmmu/physmem.c:2787:13
+      #2 0x5626d1bf0c0f in flatview_write softmmu/physmem.c:2822:14
+      #3 0x5626d1bf0798 in address_space_write softmmu/physmem.c:2914:18
+      #4 0x5626d1bf0f37 in address_space_rw softmmu/physmem.c:2924:16
+      #5 0x5626d1bf14c8 in cpu_physical_memory_rw softmmu/physmem.c:2933:5
+      #6 0x5626d0bd5649 in cpu_physical_memory_write include/exec/cpu-common.h:82:5
+      #7 0x5626d0bd0a07 in i8257_dma_write_memory hw/dma/i8257.c:452:9
+      #8 0x5626d09f825d in fdctrl_transfer_handler hw/block/fdc.c:1616:13
+      #9 0x5626d0a048b4 in fdctrl_start_transfer hw/block/fdc.c:1539:13
+      #10 0x5626d09f4c3e in fdctrl_write_data hw/block/fdc.c:2266:13
+      #11 0x5626d09f22f7 in fdctrl_write hw/block/fdc.c:829:9
+      #12 0x5626d1c20bc5 in portio_write softmmu/ioport.c:207:17
+
+  0x619000062a00 is located 0 bytes to the right of 512-byte region [0x619000062800,0x619000062a00)
+  allocated by thread T0 here:
+      #0 0x5626d03c66ec in posix_memalign (qemu-system-i386+0x1e676ec)
+      #1 0x5626d2b988d4 in qemu_try_memalign util/oslib-posix.c:210:11
+      #2 0x5626d2b98b0c in qemu_memalign util/oslib-posix.c:226:27
+      #3 0x5626d09fbaf0 in fdctrl_realize_common hw/block/fdc.c:2341:20
+      #4 0x5626d0a150ed in isabus_fdc_realize hw/block/fdc-isa.c:113:5
+      #5 0x5626d2367935 in device_set_realized hw/core/qdev.c:531:13
+
+  SUMMARY: AddressSanitizer: heap-buffer-overflow (qemu-system-i386+0x1e65919) in __asan_memcpy
+  Shadow bytes around the buggy address:
+    0x0c32800044f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+    0x0c3280004500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+    0x0c3280004510: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+    0x0c3280004520: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+    0x0c3280004530: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+  =>0x0c3280004540:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+    0x0c3280004550: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+    0x0c3280004560: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+    0x0c3280004570: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+    0x0c3280004580: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+    0x0c3280004590: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+  Shadow byte legend (one shadow byte represents 8 application bytes):
+    Addressable:           00
+    Heap left redzone:       fa
+    Freed heap region:       fd
+  ==4028352==ABORTING
+
+[ kwolf: Added snapshot=on to prevent write file lock failure ]
+
+Reported-by: Alexander Bulekov <alxndr@bu.edu>
+Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
+Reviewed-by: Alexander Bulekov <alxndr@bu.edu>
+Signed-off-by: Kevin Wolf <kwolf@redhat.com>
+
+Upstream-Status: Backport [46609b90d9e3a6304def11038a76b58ff43f77bc]
+CVE: CVE-2021-3507
+
+Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
+---
+ tests/qtest/fdc-test.c | 21 +++++++++++++++++++++
+ 1 file changed, 21 insertions(+)
+
+diff --git a/tests/qtest/fdc-test.c b/tests/qtest/fdc-test.c
+index b0d40012e..1d4f85212 100644
+--- a/tests/qtest/fdc-test.c
++++ b/tests/qtest/fdc-test.c
+@@ -583,6 +583,26 @@ static void test_cve_2021_20196(void)
+     qtest_quit(s);
+ }
+ 
++static void test_cve_2021_3507(void)
++{
++    QTestState *s;
++
++    s = qtest_initf("-nographic -m 32M -nodefaults "
++                    "-drive file=%s,format=raw,if=floppy,snapshot=on",
++                    test_image);
++    qtest_outl(s, 0x9, 0x0a0206);
++    qtest_outw(s, 0x3f4, 0x1600);
++    qtest_outw(s, 0x3f4, 0x0000);
++    qtest_outw(s, 0x3f4, 0x0000);
++    qtest_outw(s, 0x3f4, 0x0000);
++    qtest_outw(s, 0x3f4, 0x0200);
++    qtest_outw(s, 0x3f4, 0x0200);
++    qtest_outw(s, 0x3f4, 0x0000);
++    qtest_outw(s, 0x3f4, 0x0000);
++    qtest_outw(s, 0x3f4, 0x0000);
++    qtest_quit(s);
++}
++
+ int main(int argc, char **argv)
+ {
+     int fd;
+@@ -614,6 +634,7 @@ int main(int argc, char **argv)
+     qtest_add_func("/fdc/read_no_dma_19", test_read_no_dma_19);
+     qtest_add_func("/fdc/fuzz-registers", fuzz_registers);
+     qtest_add_func("/fdc/fuzz/cve_2021_20196", test_cve_2021_20196);
++    qtest_add_func("/fdc/fuzz/cve_2021_3507", test_cve_2021_3507);
+ 
+     ret = g_test_run();
+ 
+-- 
+2.33.0
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-0216_1.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-0216_1.patch
new file mode 100644
index 0000000..56fc34c
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-0216_1.patch
@@ -0,0 +1,42 @@
+From f37ac8619a39498edd225c4a0b3039b28814833d Mon Sep 17 00:00:00 2001
+From: Mauro Matteo Cascella <mcascell@redhat.com>
+Date: Tue, 5 Jul 2022 22:05:43 +0200
+Subject: [PATCH 1/2] scsi/lsi53c895a: fix use-after-free in lsi_do_msgout
+ (CVE-2022-0216)
+
+Set current_req->req to NULL to prevent reusing a free'd buffer in case of
+repeated SCSI cancel requests. Thanks to Thomas Huth for suggesting the patch.
+
+Fixes: CVE-2022-0216
+Resolves: https://gitlab.com/qemu-project/qemu/-/issues/972
+Signed-off-by: Mauro Matteo Cascella <mcascell@redhat.com>
+Reviewed-by: Thomas Huth <thuth@redhat.com>
+Message-Id: <20220705200543.2366809-1-mcascell@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+Upstream-Status: Backport [6c8fa961da5e60f574bb52fd3ad44b1e9e8ad4b8]
+CVE: CVE-2022-0216
+
+Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
+---
+ hw/scsi/lsi53c895a.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/hw/scsi/lsi53c895a.c b/hw/scsi/lsi53c895a.c
+index c8773f73f..99ea42d49 100644
+--- a/hw/scsi/lsi53c895a.c
++++ b/hw/scsi/lsi53c895a.c
+@@ -1028,8 +1028,9 @@ static void lsi_do_msgout(LSIState *s)
+         case 0x0d:
+             /* The ABORT TAG message clears the current I/O process only. */
+             trace_lsi_do_msgout_abort(current_tag);
+-            if (current_req) {
++            if (current_req && current_req->req) {
+                 scsi_req_cancel(current_req->req);
++                current_req->req = NULL;
+             }
+             lsi_disconnect(s);
+             break;
+-- 
+2.33.0
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-0216_2.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-0216_2.patch
new file mode 100644
index 0000000..f332154
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-0216_2.patch
@@ -0,0 +1,146 @@
+From 5451bf6db85ce3da1238e9154d051ebccec8f171 Mon Sep 17 00:00:00 2001
+From: Mauro Matteo Cascella <mcascell@redhat.com>
+Date: Mon, 11 Jul 2022 14:33:16 +0200
+Subject: [PATCH 2/2] scsi/lsi53c895a: really fix use-after-free in
+ lsi_do_msgout (CVE-2022-0216)
+
+Set current_req to NULL, not current_req->req, to prevent reusing a free'd
+buffer in case of repeated SCSI cancel requests.  Also apply the fix to
+CLEAR QUEUE and BUS DEVICE RESET messages as well, since they also cancel
+the request.
+
+Thanks to Alexander Bulekov for providing a reproducer.
+
+Fixes: CVE-2022-0216
+Resolves: https://gitlab.com/qemu-project/qemu/-/issues/972
+Signed-off-by: Mauro Matteo Cascella <mcascell@redhat.com>
+Tested-by: Alexander Bulekov <alxndr@bu.edu>
+Message-Id: <20220711123316.421279-1-mcascell@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+Upstream-Status: Backport [4367a20cc442c56b05611b4224de9a61908f9eac]
+CVE: CVE-2022-0216
+
+Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
+---
+ hw/scsi/lsi53c895a.c               |  3 +-
+ tests/qtest/fuzz-lsi53c895a-test.c | 76 ++++++++++++++++++++++++++++++
+ 2 files changed, 78 insertions(+), 1 deletion(-)
+
+diff --git a/hw/scsi/lsi53c895a.c b/hw/scsi/lsi53c895a.c
+index 99ea42d49..ad5f5e5f3 100644
+--- a/hw/scsi/lsi53c895a.c
++++ b/hw/scsi/lsi53c895a.c
+@@ -1030,7 +1030,7 @@ static void lsi_do_msgout(LSIState *s)
+             trace_lsi_do_msgout_abort(current_tag);
+             if (current_req && current_req->req) {
+                 scsi_req_cancel(current_req->req);
+-                current_req->req = NULL;
++                current_req = NULL;
+             }
+             lsi_disconnect(s);
+             break;
+@@ -1056,6 +1056,7 @@ static void lsi_do_msgout(LSIState *s)
+             /* clear the current I/O process */
+             if (s->current) {
+                 scsi_req_cancel(s->current->req);
++                current_req = NULL;
+             }
+ 
+             /* As the current implemented devices scsi_disk and scsi_generic
+diff --git a/tests/qtest/fuzz-lsi53c895a-test.c b/tests/qtest/fuzz-lsi53c895a-test.c
+index ba5d46897..c1af0ab1c 100644
+--- a/tests/qtest/fuzz-lsi53c895a-test.c
++++ b/tests/qtest/fuzz-lsi53c895a-test.c
+@@ -8,6 +8,79 @@
+ #include "qemu/osdep.h"
+ #include "libqos/libqtest.h"
+ 
++/*
++ * This used to trigger a UAF in lsi_do_msgout()
++ * https://gitlab.com/qemu-project/qemu/-/issues/972
++ */
++static void test_lsi_do_msgout_cancel_req(void)
++{
++    QTestState *s;
++
++    if (sizeof(void *) == 4) {
++        g_test_skip("memory size too big for 32-bit build");
++        return;
++    }
++
++    s = qtest_init("-M q35 -m 4G -display none -nodefaults "
++                   "-device lsi53c895a,id=scsi "
++                   "-device scsi-hd,drive=disk0 "
++                   "-drive file=null-co://,id=disk0,if=none,format=raw");
++
++    qtest_outl(s, 0xcf8, 0x80000810);
++    qtest_outl(s, 0xcf8, 0xc000);
++    qtest_outl(s, 0xcf8, 0x80000810);
++    qtest_outw(s, 0xcfc, 0x7);
++    qtest_outl(s, 0xcf8, 0x80000810);
++    qtest_outl(s, 0xcfc, 0xc000);
++    qtest_outl(s, 0xcf8, 0x80000804);
++    qtest_outw(s, 0xcfc, 0x05);
++    qtest_writeb(s, 0x69736c10, 0x08);
++    qtest_writeb(s, 0x69736c13, 0x58);
++    qtest_writeb(s, 0x69736c1a, 0x01);
++    qtest_writeb(s, 0x69736c1b, 0x06);
++    qtest_writeb(s, 0x69736c22, 0x01);
++    qtest_writeb(s, 0x69736c23, 0x07);
++    qtest_writeb(s, 0x69736c2b, 0x02);
++    qtest_writeb(s, 0x69736c48, 0x08);
++    qtest_writeb(s, 0x69736c4b, 0x58);
++    qtest_writeb(s, 0x69736c52, 0x04);
++    qtest_writeb(s, 0x69736c53, 0x06);
++    qtest_writeb(s, 0x69736c5b, 0x02);
++    qtest_outl(s, 0xc02d, 0x697300);
++    qtest_writeb(s, 0x5a554662, 0x01);
++    qtest_writeb(s, 0x5a554663, 0x07);
++    qtest_writeb(s, 0x5a55466a, 0x10);
++    qtest_writeb(s, 0x5a55466b, 0x22);
++    qtest_writeb(s, 0x5a55466c, 0x5a);
++    qtest_writeb(s, 0x5a55466d, 0x5a);
++    qtest_writeb(s, 0x5a55466e, 0x34);
++    qtest_writeb(s, 0x5a55466f, 0x5a);
++    qtest_writeb(s, 0x5a345a5a, 0x77);
++    qtest_writeb(s, 0x5a345a5b, 0x55);
++    qtest_writeb(s, 0x5a345a5c, 0x51);
++    qtest_writeb(s, 0x5a345a5d, 0x27);
++    qtest_writeb(s, 0x27515577, 0x41);
++    qtest_outl(s, 0xc02d, 0x5a5500);
++    qtest_writeb(s, 0x364001d0, 0x08);
++    qtest_writeb(s, 0x364001d3, 0x58);
++    qtest_writeb(s, 0x364001da, 0x01);
++    qtest_writeb(s, 0x364001db, 0x26);
++    qtest_writeb(s, 0x364001dc, 0x0d);
++    qtest_writeb(s, 0x364001dd, 0xae);
++    qtest_writeb(s, 0x364001de, 0x41);
++    qtest_writeb(s, 0x364001df, 0x5a);
++    qtest_writeb(s, 0x5a41ae0d, 0xf8);
++    qtest_writeb(s, 0x5a41ae0e, 0x36);
++    qtest_writeb(s, 0x5a41ae0f, 0xd7);
++    qtest_writeb(s, 0x5a41ae10, 0x36);
++    qtest_writeb(s, 0x36d736f8, 0x0c);
++    qtest_writeb(s, 0x36d736f9, 0x80);
++    qtest_writeb(s, 0x36d736fa, 0x0d);
++    qtest_outl(s, 0xc02d, 0x364000);
++
++    qtest_quit(s);
++}
++
+ /*
+  * This used to trigger the assert in lsi_do_dma()
+  * https://bugs.launchpad.net/qemu/+bug/697510
+@@ -48,5 +121,8 @@ int main(int argc, char **argv)
+                        test_lsi_do_dma_empty_queue);
+     }
+ 
++    qtest_add_func("fuzz/lsi53c895a/lsi_do_msgout_cancel_req",
++                   test_lsi_do_msgout_cancel_req);
++
+     return g_test_run();
+ }
+-- 
+2.33.0
+
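Both hunks above close the same hole: a cached pointer into a SCSI request outlives the request once the guest issues repeated cancel messages. A minimal stand-alone sketch of that dangling-pointer pattern and the clear-after-cancel defence (invented names, not the QEMU lsi53c895a code):

    #include <stdio.h>
    #include <stdlib.h>

    struct req { int id; };

    struct dev {
        struct req *current_req;          /* cached pointer, like the LSI device state */
    };

    static void cancel_current(struct dev *d)
    {
        if (d->current_req) {
            printf("cancelling req %d\n", d->current_req->id);
            free(d->current_req);         /* the request is gone ...            */
            d->current_req = NULL;        /* ... so drop the cached pointer too */
        }
        /* Without the NULL assignment, a second ABORT TAG / CLEAR QUEUE message
         * dereferences and frees the same allocation again: the use-after-free
         * tracked as CVE-2022-0216. */
    }

    int main(void)
    {
        struct dev d;
        d.current_req = malloc(sizeof(*d.current_req));
        if (!d.current_req)
            return 1;
        d.current_req->id = 1;
        cancel_current(&d);   /* first cancel frees and clears */
        cancel_current(&d);   /* a repeated cancel is now a harmless no-op */
        return 0;
    }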
diff --git a/poky/meta/recipes-devtools/qemu/qemu/qemu-7.0.0-glibc-2.36.patch b/poky/meta/recipes-devtools/qemu/qemu/qemu-7.0.0-glibc-2.36.patch
new file mode 100644
index 0000000..abad1cf
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/qemu-7.0.0-glibc-2.36.patch
@@ -0,0 +1,46 @@
+Avoid conflicts between sys/mount.h and linux/mount.h that are seen
+with glibc 2.36
+
+Source: https://github.com/archlinux/svntogit-packages/blob/packages/qemu/trunk/qemu-7.0.0-glibc-2.36.patch
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+--- a/linux-user/syscall.c
++++ b/linux-user/syscall.c
+@@ -95,7 +95,25 @@
+ #include <linux/soundcard.h>
+ #include <linux/kd.h>
+ #include <linux/mtio.h>
++
++#ifdef HAVE_SYS_MOUNT_FSCONFIG
++/*
++ * glibc >= 2.36 linux/mount.h conflicts with sys/mount.h,
++ * which in turn prevents use of linux/fs.h. So we have to
++ * define the constants ourselves for now.
++ */
++#define FS_IOC_GETFLAGS                _IOR('f', 1, long)
++#define FS_IOC_SETFLAGS                _IOW('f', 2, long)
++#define FS_IOC_GETVERSION              _IOR('v', 1, long)
++#define FS_IOC_SETVERSION              _IOW('v', 2, long)
++#define FS_IOC_FIEMAP                  _IOWR('f', 11, struct fiemap)
++#define FS_IOC32_GETFLAGS              _IOR('f', 1, int)
++#define FS_IOC32_SETFLAGS              _IOW('f', 2, int)
++#define FS_IOC32_GETVERSION            _IOR('v', 1, int)
++#define FS_IOC32_SETVERSION            _IOW('v', 2, int)
++#else
+ #include <linux/fs.h>
++#endif
+ #include <linux/fd.h>
+ #if defined(CONFIG_FIEMAP)
+ #include <linux/fiemap.h>
+--- a/meson.build
++++ b/meson.build
+@@ -1686,6 +1686,8 @@ config_host_data.set('HAVE_OPTRESET',
+                      cc.has_header_symbol('getopt.h', 'optreset'))
+ config_host_data.set('HAVE_IPPROTO_MPTCP',
+                      cc.has_header_symbol('netinet/in.h', 'IPPROTO_MPTCP'))
++config_host_data.set('HAVE_SYS_MOUNT_FSCONFIG',
++                     cc.has_header_symbol('sys/mount.h', 'FSCONFIG_SET_FLAG'))
+ 
+ # has_member
+ config_host_data.set('HAVE_SIGEV_NOTIFY_THREAD_ID',
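The meson.build hunk above keys the whole workaround on whether sys/mount.h already exposes the new fsconfig API. Roughly, cc.has_header_symbol() try-compiles a translation unit along these lines (an illustrative reconstruction, not the exact probe meson generates):

    #include <sys/mount.h>

    int main(void)
    {
        (void)FSCONFIG_SET_FLAG;   /* declared by glibc >= 2.36; a compile error on older glibc */
        return 0;
    }

When the probe succeeds, HAVE_SYS_MOUNT_FSCONFIG is defined and syscall.c supplies the few FS_IOC_* constants itself instead of including linux/fs.h, sidestepping the duplicate definitions between the kernel and glibc 2.36 headers.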
diff --git a/poky/meta/recipes-devtools/repo/repo_2.28.bb b/poky/meta/recipes-devtools/repo/repo_2.28.bb
deleted file mode 100644
index 052108e..0000000
--- a/poky/meta/recipes-devtools/repo/repo_2.28.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-# SPDX-License-Identifier: MIT
-# Copyright (C) 2021 iris-GmbH infrared & intelligent sensors
-
-SUMMARY = "Tool for managing many Git repositories"
-DESCRIPTION = "Repo is a tool built on top of Git. Repo helps manage many Git repositories, does the uploads to revision control systems, and automates parts of the development workflow."
-HOMEPAGE = "https://android.googlesource.com/tools/repo"
-SECTION = "console/utils"
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
-
-SRC_URI = "git://gerrit.googlesource.com/git-repo.git;protocol=https;branch=main \
-           file://0001-python3-shebang.patch \
-           "
-SRCREV = "a8cf575d68e7e211292d967f4a12cf609a028b20"
-
-MIRRORS += "git://gerrit.googlesource.com/git-repo.git git://github.com/GerritCodeReview/git-repo.git"
-
-S = "${WORKDIR}/git"
-
-do_configure:prepend() {
-	sed -Ei "s/REPO_REV\s*=\s*('|\")stable('|\")/REPO_REV = '${SRCREV}'/g" ${S}/repo
-}
-
-do_install() {
-	install -D ${WORKDIR}/git/repo ${D}${bindir}/repo
-}
-
-RDEPENDS:${PN} = "python3 git"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/repo/repo_2.29.1.bb b/poky/meta/recipes-devtools/repo/repo_2.29.1.bb
new file mode 100644
index 0000000..740132c
--- /dev/null
+++ b/poky/meta/recipes-devtools/repo/repo_2.29.1.bb
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: MIT
+# Copyright (C) 2021 iris-GmbH infrared & intelligent sensors
+
+SUMMARY = "Tool for managing many Git repositories"
+DESCRIPTION = "Repo is a tool built on top of Git. Repo helps manage many Git repositories, does the uploads to revision control systems, and automates parts of the development workflow."
+HOMEPAGE = "https://android.googlesource.com/tools/repo"
+SECTION = "console/utils"
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
+
+SRC_URI = "git://gerrit.googlesource.com/git-repo.git;protocol=https;branch=main \
+           file://0001-python3-shebang.patch \
+           "
+SRCREV = "4112c07688d0e0e568478e9f42be349bdd511d45"
+
+MIRRORS += "git://gerrit.googlesource.com/git-repo.git git://github.com/GerritCodeReview/git-repo.git"
+
+S = "${WORKDIR}/git"
+
+do_configure:prepend() {
+	sed -Ei "s/REPO_REV\s*=\s*('|\")stable('|\")/REPO_REV = '${SRCREV}'/g" ${S}/repo
+}
+
+do_install() {
+	install -D ${WORKDIR}/git/repo ${D}${bindir}/repo
+}
+
+RDEPENDS:${PN} = "python3 git"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/rpm/files/0001-CVE-2021-3521.patch b/poky/meta/recipes-devtools/rpm/files/0001-CVE-2021-3521.patch
deleted file mode 100644
index 044b4dd..0000000
--- a/poky/meta/recipes-devtools/rpm/files/0001-CVE-2021-3521.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-From 9a6871126f472feea057d5f803505ec8cc78f083 Mon Sep 17 00:00:00 2001
-From: Panu Matilainen <pmatilai@redhat.com>
-Date: Thu, 30 Sep 2021 09:56:20 +0300
-Subject: [PATCH 1/3] Refactor pgpDigParams construction to helper function
-
-No functional changes, just to reduce code duplication and needed by
-the following commits.
-
-CVE: CVE-2021-3521
-Upstream-Status: Backport [https://github.com/rpm-software-management/rpm/commit/9f03f42e2]
-
-Signed-off-by: Changqing Li <changqing.li@windriver.com>
----
- rpmio/rpmpgp.c | 13 +++++++++----
- 1 file changed, 9 insertions(+), 4 deletions(-)
-
-diff --git a/rpmio/rpmpgp.c b/rpmio/rpmpgp.c
-index d0688ebe9a..e472b5320f 100644
---- a/rpmio/rpmpgp.c
-+++ b/rpmio/rpmpgp.c
-@@ -1041,6 +1041,13 @@ unsigned int pgpDigParamsAlgo(pgpDigParams digp, unsigned int algotype)
-     return algo;
- }
- 
-+static pgpDigParams pgpDigParamsNew(uint8_t tag)
-+{
-+    pgpDigParams digp = xcalloc(1, sizeof(*digp));
-+    digp->tag = tag;
-+    return digp;
-+}
-+
- int pgpPrtParams(const uint8_t * pkts, size_t pktlen, unsigned int pkttype,
- 		 pgpDigParams * ret)
- {
-@@ -1058,8 +1065,7 @@ int pgpPrtParams(const uint8_t * pkts, size_t pktlen, unsigned int pkttype,
- 	    if (pkttype && pkt.tag != pkttype) {
- 		break;
- 	    } else {
--		digp = xcalloc(1, sizeof(*digp));
--		digp->tag = pkt.tag;
-+		digp = pgpDigParamsNew(pkt.tag);
- 	    }
- 	}
- 
-@@ -1105,8 +1111,7 @@ int pgpPrtParamsSubkeys(const uint8_t *pkts, size_t pktlen,
- 		digps = xrealloc(digps, alloced * sizeof(*digps));
- 	    }
- 
--	    digps[count] = xcalloc(1, sizeof(**digps));
--	    digps[count]->tag = PGPTAG_PUBLIC_SUBKEY;
-+	    digps[count] = pgpDigParamsNew(PGPTAG_PUBLIC_SUBKEY);
- 	    /* Copy UID from main key to subkey */
- 	    digps[count]->userid = xstrdup(mainkey->userid);
- 
--- 
-2.17.1
-
diff --git a/poky/meta/recipes-devtools/rpm/files/0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch b/poky/meta/recipes-devtools/rpm/files/0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch
index 6d236ac..c6cf9d4 100644
--- a/poky/meta/recipes-devtools/rpm/files/0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch
+++ b/poky/meta/recipes-devtools/rpm/files/0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch
@@ -1,4 +1,4 @@
-From 8d013fe154a162305f76141151baf767dd04b598 Mon Sep 17 00:00:00 2001
+From 4ab6a4c5bbad65c3401016bb26b87214cdd0c59b Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Mon, 27 Feb 2017 09:43:30 +0200
 Subject: [PATCH] Do not hardcode "lib/rpm" as the installation path for
@@ -14,10 +14,10 @@
  3 files changed, 4 insertions(+), 4 deletions(-)
 
 diff --git a/configure.ac b/configure.ac
-index eb7d6941b..10a889b5d 100644
+index 372875fc4..1b7add9ee 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -871,7 +871,7 @@ else
+@@ -884,7 +884,7 @@ else
      usrprefix=$prefix
  fi
  
@@ -27,10 +27,10 @@
  
  AC_SUBST(OBJDUMP)
 diff --git a/macros.in b/macros.in
-index a1f795e5f..689e784ef 100644
+index d53ab5ed5..9d10441c8 100644
 --- a/macros.in
 +++ b/macros.in
-@@ -933,7 +933,7 @@ package or when debugging this package.\
+@@ -911,7 +911,7 @@ package or when debugging this package.\
  %_sharedstatedir	%{_prefix}/com
  %_localstatedir		%{_prefix}/var
  %_lib			lib
@@ -40,7 +40,7 @@
  %_infodir		%{_datadir}/info
  %_mandir		%{_datadir}/man
 diff --git a/rpm.am b/rpm.am
-index 7b57f433b..9bbb9ee96 100644
+index ebe4e40d1..e6920e258 100644
 --- a/rpm.am
 +++ b/rpm.am
 @@ -1,10 +1,10 @@
@@ -55,4 +55,4 @@
 +rpmconfigdir = $(libdir)/rpm
  
  # Libtool version (current-revision-age) for all our libraries
- rpm_version_info = 11:0:2
+ rpm_version_info = 12:0:3
diff --git a/poky/meta/recipes-devtools/rpm/files/0001-When-cross-installing-execute-package-scriptlets-wit.patch b/poky/meta/recipes-devtools/rpm/files/0001-When-cross-installing-execute-package-scriptlets-wit.patch
index 4020a31..2a0069c 100644
--- a/poky/meta/recipes-devtools/rpm/files/0001-When-cross-installing-execute-package-scriptlets-wit.patch
+++ b/poky/meta/recipes-devtools/rpm/files/0001-When-cross-installing-execute-package-scriptlets-wit.patch
@@ -28,11 +28,18 @@
  lib/rpmscript.c | 11 ++++++++---
  1 file changed, 8 insertions(+), 3 deletions(-)
 
-diff --git a/lib/rpmscript.c b/lib/rpmscript.c
-index cc98c4885..f8bd3df04 100644
 --- a/lib/rpmscript.c
 +++ b/lib/rpmscript.c
-@@ -394,8 +394,7 @@ exit:
+@@ -17,7 +17,7 @@
+ #include "rpmio/rpmio_internal.h"
+ 
+ #include "lib/rpmplugins.h"     /* rpm plugins hooks */
+-
++#include "lib/rpmchroot.h"      /* rpmChrootOut */
+ #include "debug.h"
+ 
+ struct scriptNextFileFunc_s {
+@@ -391,8 +391,7 @@ exit:
  	Fclose(out);	/* XXX dup'd STDOUT_FILENO */
  
      if (fn) {
@@ -42,7 +49,7 @@
  	free(fn);
      }
      free(mline);
-@@ -428,7 +427,13 @@ rpmRC rpmScriptRun(rpmScript script, int arg1, int arg2, FD_t scriptFd,
+@@ -426,7 +425,13 @@ rpmRC rpmScriptRun(rpmScript script, int
  
      if (rc != RPMRC_FAIL) {
  	if (script_type & RPMSCRIPTLET_EXEC) {
@@ -57,6 +64,3 @@
  	} else {
  	    rc = runLuaScript(plugins, prefixes, script->descr, lvl, scriptFd, &args, script->body, arg1, arg2, &script->nextFileFunc);
  	}
--- 
-2.11.0
-
diff --git a/poky/meta/recipes-devtools/rpm/files/0001-configure.ac-add-linux-gnux32-variant-to-triplet-han.patch b/poky/meta/recipes-devtools/rpm/files/0001-configure.ac-add-linux-gnux32-variant-to-triplet-han.patch
new file mode 100644
index 0000000..2174a79
--- /dev/null
+++ b/poky/meta/recipes-devtools/rpm/files/0001-configure.ac-add-linux-gnux32-variant-to-triplet-han.patch
@@ -0,0 +1,31 @@
+From 8f51462d41d8fe942d5d0a06f08d47f625141995 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex@linutronix.de>
+Date: Thu, 4 Aug 2022 12:15:08 +0200
+Subject: [PATCH] configure.ac: add linux-gnux32 variant to triplet handling
+
+x32 is a 64 bit x86 ABI with 32 bit pointers.
+
+Upstream-Status: Submitted [https://github.com/rpm-software-management/rpm/pull/2143]
+Signed-off-by: Alexander Kanavin <alex@linutronix.de>
+---
+ configure.ac | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/configure.ac b/configure.ac
+index 372875fc49..7d6a3d274e 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -845,6 +845,10 @@ if echo "$host_os" | grep '.*-gnuabi64$' > /dev/null ; then
+ 	host_os=`echo "${host_os}" | sed 's/-gnuabi64$//'`
+ 	host_os_gnu=-gnuabi64
+ fi
++if echo "$host_os" | grep '.*-gnux32$' > /dev/null ; then
++	host_os=`echo "${host_os}" | sed 's/-gnux32$//'`
++	host_os_gnu=-gnux32
++fi
+ if echo "$host_os" | grep '.*-gnu$' > /dev/null ; then
+ 	host_os=`echo "${host_os}" | sed 's/-gnu$//'`
+ fi
+-- 
+2.30.2
+
diff --git a/poky/meta/recipes-devtools/rpm/files/0002-CVE-2021-3521.patch b/poky/meta/recipes-devtools/rpm/files/0002-CVE-2021-3521.patch
deleted file mode 100644
index 683b57d..0000000
--- a/poky/meta/recipes-devtools/rpm/files/0002-CVE-2021-3521.patch
+++ /dev/null
@@ -1,64 +0,0 @@
-From c4b1bee51bbdd732b94b431a951481af99117703 Mon Sep 17 00:00:00 2001
-From: Panu Matilainen <pmatilai@redhat.com>
-Date: Thu, 30 Sep 2021 09:51:10 +0300
-Subject: [PATCH 2/3] Process MPI's from all kinds of signatures
-
-No immediate effect but needed by the following commits.
-
-CVE: CVE-2021-3521
-Upstream-Status: Backport [https://github.com/rpm-software-management/rpm/commit/b5e8bc74b]
-
-Signed-off-by: Changqing Li <changqing.li@windriver.com>
-
----
- rpmio/rpmpgp.c | 13 +++++--------
- 1 file changed, 5 insertions(+), 8 deletions(-)
-
-diff --git a/rpmio/rpmpgp.c b/rpmio/rpmpgp.c
-index 25f67048fd..509e777e6d 100644
---- a/rpmio/rpmpgp.c
-+++ b/rpmio/rpmpgp.c
-@@ -543,7 +543,7 @@ pgpDigAlg pgpDigAlgFree(pgpDigAlg alg)
-     return NULL;
- }
- 
--static int pgpPrtSigParams(pgpTag tag, uint8_t pubkey_algo, uint8_t sigtype,
-+static int pgpPrtSigParams(pgpTag tag, uint8_t pubkey_algo,
- 		const uint8_t *p, const uint8_t *h, size_t hlen,
- 		pgpDigParams sigp)
- {
-@@ -556,10 +556,8 @@ static int pgpPrtSigParams(pgpTag tag, uint8_t pubkey_algo, uint8_t sigtype,
- 	int mpil = pgpMpiLen(p);
- 	if (pend - p < mpil)
- 	    break;
--	if (sigtype == PGPSIGTYPE_BINARY || sigtype == PGPSIGTYPE_TEXT) {
--	    if (sigalg->setmpi(sigalg, i, p))
--		break;
--	}
-+        if (sigalg->setmpi(sigalg, i, p))
-+            break;
- 	p += mpil;
-     }
- 
-@@ -619,7 +617,7 @@ static int pgpPrtSig(pgpTag tag, const uint8_t *h, size_t hlen,
- 	}
- 
- 	p = ((uint8_t *)v) + sizeof(*v);
--	rc = pgpPrtSigParams(tag, v->pubkey_algo, v->sigtype, p, h, hlen, _digp);
-+	rc = pgpPrtSigParams(tag, v->pubkey_algo, p, h, hlen, _digp);
-     }	break;
-     case 4:
-     {   pgpPktSigV4 v = (pgpPktSigV4)h;
-@@ -677,8 +675,7 @@ static int pgpPrtSig(pgpTag tag, const uint8_t *h, size_t hlen,
- 	p += 2;
- 	if (p > hend)
- 	    return 1;
--
--	rc = pgpPrtSigParams(tag, v->pubkey_algo, v->sigtype, p, h, hlen, _digp);
-+	rc = pgpPrtSigParams(tag, v->pubkey_algo, p, h, hlen, _digp);
-     }	break;
-     default:
- 	rpmlog(RPMLOG_WARNING, _("Unsupported version of signature: V%d\n"), version);
--- 
-2.17.1
-
diff --git a/poky/meta/recipes-devtools/rpm/files/0003-CVE-2021-3521.patch b/poky/meta/recipes-devtools/rpm/files/0003-CVE-2021-3521.patch
deleted file mode 100644
index a5ec802..0000000
--- a/poky/meta/recipes-devtools/rpm/files/0003-CVE-2021-3521.patch
+++ /dev/null
@@ -1,329 +0,0 @@
-From 07676ca03ad8afcf1ca95a2353c83fbb1d970b9b Mon Sep 17 00:00:00 2001
-From: Panu Matilainen <pmatilai@redhat.com>
-Date: Thu, 30 Sep 2021 09:59:30 +0300
-Subject: [PATCH 3/3] Validate and require subkey binding signatures on PGP
- public keys
-
-All subkeys must be followed by a binding signature by the primary key
-as per the OpenPGP RFC, enforce the presence and validity in the parser.
-
-The implementation is as kludgey as they come to work around our
-simple-minded parser structure without touching API, to maximise
-backportability. Store all the raw packets internally as we decode them
-to be able to access previous elements at will, needed to validate ordering
-and access the actual data. Add testcases for manipulated keys whose
-import previously would succeed.
-
-Depends on the two previous commits:
-7b399fcb8f52566e6f3b4327197a85facd08db91 and
-236b802a4aa48711823a191d1b7f753c82a89ec5
-
-Fixes CVE-2021-3521.
-
-Upstream-Status: Backport [https://github.com/rpm-software-management/rpm/commit/bd36c5dc9]
-CVE:CVE-2021-3521
-
-Signed-off-by: Changqing Li <changqing.li@windriver.com>
-
----
- rpmio/rpmpgp.c                                | 99 +++++++++++++++++--
- tests/Makefile.am                             |  3 +
- tests/data/keys/CVE-2021-3521-badbind.asc     | 25 +++++
- .../data/keys/CVE-2021-3521-nosubsig-last.asc | 25 +++++
- tests/data/keys/CVE-2021-3521-nosubsig.asc    | 37 +++++++
- tests/rpmsigdig.at                            | 28 ++++++
- 6 files changed, 209 insertions(+), 8 deletions(-)
- create mode 100644 tests/data/keys/CVE-2021-3521-badbind.asc
- create mode 100644 tests/data/keys/CVE-2021-3521-nosubsig-last.asc
- create mode 100644 tests/data/keys/CVE-2021-3521-nosubsig.asc
-
-diff --git a/rpmio/rpmpgp.c b/rpmio/rpmpgp.c
-index 509e777e6d..371ad4d9b6 100644
---- a/rpmio/rpmpgp.c
-+++ b/rpmio/rpmpgp.c
-@@ -1061,33 +1061,116 @@ static pgpDigParams pgpDigParamsNew(uint8_t tag)
-     return digp;
- }
- 
-+static int hashKey(DIGEST_CTX hash, const struct pgpPkt *pkt, int exptag)
-+{
-+    int rc = -1;
-+    if (pkt->tag == exptag) {
-+       uint8_t head[] = {
-+           0x99,
-+           (pkt->blen >> 8),
-+           (pkt->blen     ),
-+       };
-+
-+       rpmDigestUpdate(hash, head, 3);
-+       rpmDigestUpdate(hash, pkt->body, pkt->blen);
-+       rc = 0;
-+    }
-+    return rc;
-+}
-+
-+static int pgpVerifySelf(pgpDigParams key, pgpDigParams selfsig,
-+                       const struct pgpPkt *all, int i)
-+{
-+    int rc = -1;
-+    DIGEST_CTX hash = NULL;
-+
-+    switch (selfsig->sigtype) {
-+    case PGPSIGTYPE_SUBKEY_BINDING:
-+       hash = rpmDigestInit(selfsig->hash_algo, 0);
-+       if (hash) {
-+           rc = hashKey(hash, &all[0], PGPTAG_PUBLIC_KEY);
-+           if (!rc)
-+               rc = hashKey(hash, &all[i-1], PGPTAG_PUBLIC_SUBKEY);
-+       }
-+       break;
-+    default:
-+       /* ignore types we can't handle */
-+       rc = 0;
-+       break;
-+    }
-+
-+    if (hash && rc == 0)
-+       rc = pgpVerifySignature(key, selfsig, hash);
-+
-+    rpmDigestFinal(hash, NULL, NULL, 0);
-+
-+    return rc;
-+}
-+
- int pgpPrtParams(const uint8_t * pkts, size_t pktlen, unsigned int pkttype,
- 		 pgpDigParams * ret)
- {
-     const uint8_t *p = pkts;
-     const uint8_t *pend = pkts + pktlen;
-     pgpDigParams digp = NULL;
--    struct pgpPkt pkt;
-+    pgpDigParams selfsig = NULL;
-+    int i = 0;
-+    int alloced = 16; /* plenty for normal cases */
-+    struct pgpPkt *all = xmalloc(alloced * sizeof(*all));
-     int rc = -1; /* assume failure */
-+    int expect = 0;
-+    int prevtag = 0;
- 
-     while (p < pend) {
--	if (decodePkt(p, (pend - p), &pkt))
-+	struct pgpPkt *pkt = &all[i];
-+	if (decodePkt(p, (pend - p), pkt))
- 	    break;
- 
- 	if (digp == NULL) {
--	    if (pkttype && pkt.tag != pkttype) {
-+               if (pkttype && pkt->tag != pkttype) {
- 		break;
- 	    } else {
--		digp = pgpDigParamsNew(pkt.tag);
-+		digp = pgpDigParamsNew(pkt->tag);
- 	    }
- 	}
- 
--	if (pgpPrtPkt(&pkt, digp))
-+        if (expect) {
-+            if (pkt->tag != expect)
-+                break;
-+            selfsig = pgpDigParamsNew(pkt->tag);
-+        }
-+	if (pgpPrtPkt(pkt, selfsig ? selfsig : digp))
- 	    break;
- 
--	p += (pkt.body - pkt.head) + pkt.blen;
--	if (pkttype == PGPTAG_SIGNATURE)
--	    break;
-+	if (selfsig) {
-+           /* subkeys must be followed by binding signature */
-+           if (prevtag == PGPTAG_PUBLIC_SUBKEY) {
-+               if (selfsig->sigtype != PGPSIGTYPE_SUBKEY_BINDING)
-+                   break;
-+           }
-+
-+           int xx = pgpVerifySelf(digp, selfsig, all, i);
-+
-+           selfsig = pgpDigParamsFree(selfsig);
-+           if (xx)
-+               break;
-+           expect = 0;
-+       }
-+
-+       if (pkt->tag == PGPTAG_PUBLIC_SUBKEY)
-+           expect = PGPTAG_SIGNATURE;
-+       prevtag = pkt->tag;
-+
-+       i++;
-+       p += (pkt->body - pkt->head) + pkt->blen;
-+       if (pkttype == PGPTAG_SIGNATURE)
-+           break;
-+
-+       if (alloced <= i) {
-+           alloced *= 2;
-+           all = xrealloc(all, alloced * sizeof(*all));
-+       }
-+
-     }
- 
-     rc = (digp && (p == pend)) ? 0 : -1;
-diff --git a/tests/Makefile.am b/tests/Makefile.am
-index a41ce10de8..7bb23247f1 100644
---- a/tests/Makefile.am
-+++ b/tests/Makefile.am
-@@ -107,6 +107,9 @@ EXTRA_DIST += data/SPECS/hello-config-buildid.spec
- EXTRA_DIST += data/SPECS/hello-cd.spec
- EXTRA_DIST += data/keys/rpm.org-rsa-2048-test.pub
- EXTRA_DIST += data/keys/rpm.org-rsa-2048-test.secret
-+EXTRA_DIST += data/keys/CVE-2021-3521-badbind.asc
-+EXTRA_DIST += data/keys/CVE-2022-3521-nosubsig.asc
-+EXTRA_DIST += data/keys/CVE-2022-3521-nosubsig-last.asc
- EXTRA_DIST += data/macros.testfile
- EXTRA_DIST += data/macros.debug
- EXTRA_DIST += data/SOURCES/foo.c
-diff --git a/tests/data/keys/CVE-2021-3521-badbind.asc b/tests/data/keys/CVE-2021-3521-badbind.asc
-new file mode 100644
-index 0000000000..aea00f9d7a
---- /dev/null
-+++ b/tests/data/keys/CVE-2021-3521-badbind.asc
-@@ -0,0 +1,25 @@
-+-----BEGIN PGP PUBLIC KEY BLOCK-----
-+Version: rpm-4.17.90 (NSS-3)
-+
-+mQENBFjmORgBCAC7TMEk6wnjSs8Dr4yqSScWdU2pjcqrkTxuzdWvowcIUPZI0w/g
-+HkRqGd4apjvY2V15kjL10gk3QhFP3pZ/9p7zh8o8NHX7aGdSGDK7NOq1eFaErPRY
-+91LW9RiZ0lbOjXEzIL0KHxUiTQEmdXJT43DJMFPyW9fkCWg0OltiX618FUdWWfI8
-+eySdLur1utnqBvdEbCUvWK2RX3vQZQdvEBODnNk2pxqTyV0w6VPQ96W++lF/5Aas
-+7rUv3HIyIXxIggc8FRrnH+y9XvvHDonhTIlGnYZN4ubm9i4y3gOkrZlGTrEw7elQ
-+1QeMyG2QQEbze8YjpTm4iLABCBrRfPRaQpwrABEBAAG0IXJwbS5vcmcgUlNBIHRl
-+c3RrZXkgPHJzYUBycG0ub3JnPokBNwQTAQgAIQUCWOY5GAIbAwULCQgHAgYVCAkK
-+CwIEFgIDAQIeAQIXgAAKCRBDRFkeGWTF/MxxCACnjqFL+MmPh9W9JQKT2DcLbBzf
-+Cqo6wcEBoCOcwgRSk8dSikhARoteoa55JRJhuMyeKhhEAogE9HRmCPFdjezFTwgB
-+BDVBpO2dZ023mLXDVCYX3S8pShOgCP6Tn4wqCnYeAdLcGg106N4xcmgtcssJE+Pr
-+XzTZksbZsrTVEmL/Ym+R5w5jBfFnGk7Yw7ndwfQsfNXQb5AZynClFxnX546lcyZX
-+fEx3/e6ezw57WNOUK6WT+8b+EGovPkbetK/rGxNXuWaP6X4A/QUm8O98nCuHYFQq
-++mvNdsCBqGf7mhaRGtpHk/JgCn5rFvArMDqLVrR9hX0LdCSsH7EGE+bR3r7wuQEN
-+BFjmORgBCACk+vDZrIXQuFXEYToZVwb2attzbbJJCqD71vmZTLsW0QxuPKRgbcYY
-+zp4K4lVBnHhFrF8MOUOxJ7kQWIJZMZFt+BDcptCYurbD2H4W2xvnWViiC+LzCMzz
-+iMJT6165uefL4JHTDPxC2fFiM9yrc72LmylJNkM/vepT128J5Qv0gRUaQbHiQuS6
-+Dm/+WRnUfx3i89SV4mnBxb/Ta93GVqoOciWwzWSnwEnWYAvOb95JL4U7c5J5f/+c
-+KnQDHsW7sIiIdscsWzvgf6qs2Ra1Zrt7Fdk4+ZS2f/adagLhDO1C24sXf5XfMk5m
-+L0OGwZSr9m5s17VXxfspgU5ugc8kBJfzABEBAAE=
-+=WCfs
-+-----END PGP PUBLIC KEY BLOCK-----
-+
-diff --git a/tests/data/keys/CVE-2021-3521-nosubsig-last.asc b/tests/data/keys/CVE-2021-3521-nosubsig-last.asc
-new file mode 100644
-index 0000000000..aea00f9d7a
---- /dev/null
-+++ b/tests/data/keys/CVE-2021-3521-nosubsig-last.asc
-@@ -0,0 +1,25 @@
-+-----BEGIN PGP PUBLIC KEY BLOCK-----
-+Version: rpm-4.17.90 (NSS-3)
-+
-+mQENBFjmORgBCAC7TMEk6wnjSs8Dr4yqSScWdU2pjcqrkTxuzdWvowcIUPZI0w/g
-+HkRqGd4apjvY2V15kjL10gk3QhFP3pZ/9p7zh8o8NHX7aGdSGDK7NOq1eFaErPRY
-+91LW9RiZ0lbOjXEzIL0KHxUiTQEmdXJT43DJMFPyW9fkCWg0OltiX618FUdWWfI8
-+eySdLur1utnqBvdEbCUvWK2RX3vQZQdvEBODnNk2pxqTyV0w6VPQ96W++lF/5Aas
-+7rUv3HIyIXxIggc8FRrnH+y9XvvHDonhTIlGnYZN4ubm9i4y3gOkrZlGTrEw7elQ
-+1QeMyG2QQEbze8YjpTm4iLABCBrRfPRaQpwrABEBAAG0IXJwbS5vcmcgUlNBIHRl
-+c3RrZXkgPHJzYUBycG0ub3JnPokBNwQTAQgAIQUCWOY5GAIbAwULCQgHAgYVCAkK
-+CwIEFgIDAQIeAQIXgAAKCRBDRFkeGWTF/MxxCACnjqFL+MmPh9W9JQKT2DcLbBzf
-+Cqo6wcEBoCOcwgRSk8dSikhARoteoa55JRJhuMyeKhhEAogE9HRmCPFdjezFTwgB
-+BDVBpO2dZ023mLXDVCYX3S8pShOgCP6Tn4wqCnYeAdLcGg106N4xcmgtcssJE+Pr
-+XzTZksbZsrTVEmL/Ym+R5w5jBfFnGk7Yw7ndwfQsfNXQb5AZynClFxnX546lcyZX
-+fEx3/e6ezw57WNOUK6WT+8b+EGovPkbetK/rGxNXuWaP6X4A/QUm8O98nCuHYFQq
-++mvNdsCBqGf7mhaRGtpHk/JgCn5rFvArMDqLVrR9hX0LdCSsH7EGE+bR3r7wuQEN
-+BFjmORgBCACk+vDZrIXQuFXEYToZVwb2attzbbJJCqD71vmZTLsW0QxuPKRgbcYY
-+zp4K4lVBnHhFrF8MOUOxJ7kQWIJZMZFt+BDcptCYurbD2H4W2xvnWViiC+LzCMzz
-+iMJT6165uefL4JHTDPxC2fFiM9yrc72LmylJNkM/vepT128J5Qv0gRUaQbHiQuS6
-+Dm/+WRnUfx3i89SV4mnBxb/Ta93GVqoOciWwzWSnwEnWYAvOb95JL4U7c5J5f/+c
-+KnQDHsW7sIiIdscsWzvgf6qs2Ra1Zrt7Fdk4+ZS2f/adagLhDO1C24sXf5XfMk5m
-+L0OGwZSr9m5s17VXxfspgU5ugc8kBJfzABEBAAE=
-+=WCfs
-+-----END PGP PUBLIC KEY BLOCK-----
-+
-diff --git a/tests/data/keys/CVE-2021-3521-nosubsig.asc b/tests/data/keys/CVE-2021-3521-nosubsig.asc
-new file mode 100644
-index 0000000000..3a2e7417f8
---- /dev/null
-+++ b/tests/data/keys/CVE-2021-3521-nosubsig.asc
-@@ -0,0 +1,37 @@
-+-----BEGIN PGP PUBLIC KEY BLOCK-----
-+Version: rpm-4.17.90 (NSS-3)
-+
-+mQENBFjmORgBCAC7TMEk6wnjSs8Dr4yqSScWdU2pjcqrkTxuzdWvowcIUPZI0w/g
-+HkRqGd4apjvY2V15kjL10gk3QhFP3pZ/9p7zh8o8NHX7aGdSGDK7NOq1eFaErPRY
-+91LW9RiZ0lbOjXEzIL0KHxUiTQEmdXJT43DJMFPyW9fkCWg0OltiX618FUdWWfI8
-+eySdLur1utnqBvdEbCUvWK2RX3vQZQdvEBODnNk2pxqTyV0w6VPQ96W++lF/5Aas
-+7rUv3HIyIXxIggc8FRrnH+y9XvvHDonhTIlGnYZN4ubm9i4y3gOkrZlGTrEw7elQ
-+1QeMyG2QQEbze8YjpTm4iLABCBrRfPRaQpwrABEBAAG0IXJwbS5vcmcgUlNBIHRl
-+c3RrZXkgPHJzYUBycG0ub3JnPokBNwQTAQgAIQUCWOY5GAIbAwULCQgHAgYVCAkK
-+CwIEFgIDAQIeAQIXgAAKCRBDRFkeGWTF/MxxCACnjqFL+MmPh9W9JQKT2DcLbBzf
-+Cqo6wcEBoCOcwgRSk8dSikhARoteoa55JRJhuMyeKhhEAogE9HRmCPFdjezFTwgB
-+BDVBpO2dZ023mLXDVCYX3S8pShOgCP6Tn4wqCnYeAdLcGg106N4xcmgtcssJE+Pr
-+XzTZksbZsrTVEmL/Ym+R5w5jBfFnGk7Yw7ndwfQsfNXQb5AZynClFxnX546lcyZX
-+fEx3/e6ezw57WNOUK6WT+8b+EGovPkbetK/rGxNXuWaP6X4A/QUm8O98nCuHYFQq
-++mvNdsCBqGf7mhaRGtpHk/JgCn5rFvArMDqLVrR9hX0LdCSsH7EGE+bR3r7wuQEN
-+BFjmORgBCACk+vDZrIXQuFXEYToZVwb2attzbbJJCqD71vmZTLsW0QxuPKRgbcYY
-+zp4K4lVBnHhFrF8MOUOxJ7kQWIJZMZFt+BDcptCYurbD2H4W2xvnWViiC+LzCMzz
-+iMJT6165uefL4JHTDPxC2fFiM9yrc72LmylJNkM/vepT128J5Qv0gRUaQbHiQuS6
-+Dm/+WRnUfx3i89SV4mnBxb/Ta93GVqoOciWwzWSnwEnWYAvOb95JL4U7c5J5f/+c
-+KnQDHsW7sIiIdscsWzvgf6qs2Ra1Zrt7Fdk4+ZS2f/adagLhDO1C24sXf5XfMk5m
-+L0OGwZSr9m5s17VXxfspgU5ugc8kBJfzABEBAAG5AQ0EWOY5GAEIAKT68NmshdC4
-+VcRhOhlXBvZq23NtskkKoPvW+ZlMuxbRDG48pGBtxhjOngriVUGceEWsXww5Q7En
-+uRBYglkxkW34ENym0Ji6tsPYfhbbG+dZWKIL4vMIzPOIwlPrXrm558vgkdMM/ELZ
-+8WIz3KtzvYubKUk2Qz+96lPXbwnlC/SBFRpBseJC5LoOb/5ZGdR/HeLz1JXiacHF
-+v9Nr3cZWqg5yJbDNZKfASdZgC85v3kkvhTtzknl//5wqdAMexbuwiIh2xyxbO+B/
-+qqzZFrVmu3sV2Tj5lLZ/9p1qAuEM7ULbixd/ld8yTmYvQ4bBlKv2bmzXtVfF+ymB
-+Tm6BzyQEl/MAEQEAAYkBHwQYAQgACQUCWOY5GAIbDAAKCRBDRFkeGWTF/PANB/9j
-+mifmj6z/EPe0PJFhrpISt9PjiUQCt0IPtiL5zKAkWjHePIzyi+0kCTBF6DDLFxos
-+3vN4bWnVKT1kBhZAQlPqpJTg+m74JUYeDGCdNx9SK7oRllATqyu+5rncgxjWVPnQ
-+zu/HRPlWJwcVFYEVXYL8xzfantwQTqefjmcRmBRdA2XJITK+hGWwAmrqAWx+q5xX
-+Pa8wkNMxVzNS2rUKO9SoVuJ/wlUvfoShkJ/VJ5HDp3qzUqncADfdGN35TDzscngQ
-+gHvnMwVBfYfSCABV1hNByoZcc/kxkrWMmsd/EnIyLd1Q1baKqc3cEDuC6E6/o4yJ
-+E4XX4jtDmdZPreZALsiB
-+=rRop
-+-----END PGP PUBLIC KEY BLOCK-----
-+
-diff --git a/tests/rpmsigdig.at b/tests/rpmsigdig.at
-index 8e7c759b8f..e2d30a7f1b 100644
---- a/tests/rpmsigdig.at
-+++ b/tests/rpmsigdig.at
-@@ -2,6 +2,34 @@
- 
- AT_BANNER([RPM signatures and digests])
- 
-+AT_SETUP([rpmkeys --import invalid keys])
-+AT_KEYWORDS([rpmkeys import])
-+RPMDB_INIT
-+
-+AT_CHECK([
-+runroot rpmkeys --import /data/keys/CVE-2021-3521-badbind.asc
-+],
-+[1],
-+[],
-+[error: /data/keys/CVE-2021-3521-badbind.asc: key 1 import failed.]
-+)
-+AT_CHECK([
-+runroot rpmkeys --import /data/keys/CVE-2021-3521-nosubsig.asc
-+],
-+[1],
-+[],
-+[error: /data/keys/CVE-2021-3521-nosubsig.asc: key 1 import failed.]
-+)
-+
-+AT_CHECK([
-+runroot rpmkeys --import /data/keys/CVE-2021-3521-nosubsig-last.asc
-+],
-+[1],
-+[],
-+[error: /data/keys/CVE-2021-3521-nosubsig-last.asc: key 1 import failed.]
-+)
-+AT_CLEANUP
-+
- # ------------------------------
- # Test pre-built package verification
- AT_SETUP([rpmkeys -Kv <unsigned> 1])
--- 
-2.17.1
-
diff --git a/poky/meta/recipes-devtools/rpm/rpm_4.17.0.bb b/poky/meta/recipes-devtools/rpm/rpm_4.17.0.bb
deleted file mode 100644
index c392ac0..0000000
--- a/poky/meta/recipes-devtools/rpm/rpm_4.17.0.bb
+++ /dev/null
@@ -1,208 +0,0 @@
-SUMMARY = "The RPM package management system"
-DESCRIPTION = "The RPM Package Manager (RPM) is a powerful command line driven \
-package management system capable of installing, uninstalling, \
-verifying, querying, and updating software packages. Each software \
-package consists of an archive of files along with information about \
-the package like its version, a description, etc."
-
-SUMMARY:${PN}-dev = "Development files for manipulating RPM packages"
-DESCRIPTION:${PN}-dev = "This package contains the RPM C library and header files. These \
-development files will simplify the process of writing programs that \
-manipulate RPM packages and databases. These files are intended to \
-simplify the process of creating graphical package managers or any \
-other tools that need an intimate knowledge of RPM packages in order \
-to function."
-
-SUMMARY:python3-rpm = "Python bindings for apps which will manupulate RPM packages"
-DESCRIPTION:python3-rpm = "The python3-rpm package contains a module that permits applications \
-written in the Python programming language to use the interface \
-supplied by the RPM Package Manager libraries."
-
-HOMEPAGE = "http://www.rpm.org"
-
-# libraries are also LGPL - how to express this?
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=c4eec0c20c6034b9407a09945b48a43f"
-
-SRC_URI = "git://github.com/rpm-software-management/rpm;branch=rpm-4.17.x;protocol=https \
-           file://environment.d-rpm.sh \
-           file://0001-Do-not-add-an-unsatisfiable-dependency-when-building.patch \
-           file://0001-Do-not-read-config-files-from-HOME.patch \
-           file://0001-When-cross-installing-execute-package-scriptlets-wit.patch \
-           file://0001-Do-not-reset-the-PATH-environment-variable-before-ru.patch \
-           file://0002-Add-support-for-prefixing-etc-from-RPM_ETCCONFIGDIR-.patch \
-           file://0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch \
-           file://0001-Add-a-color-setting-for-mips64_n32-binaries.patch \
-           file://0001-perl-disable-auto-reqs.patch \
-           file://0016-rpmscript.c-change-logging-level-around-scriptlets-t.patch \
-           file://0001-lib-transaction.c-fix-file-conflicts-for-MIPS64-N32.patch \
-           file://0001-tools-Add-error.h-for-non-glibc-case.patch \
-           file://0001-docs-do-not-build-manpages-requires-pandoc.patch \
-           file://0001-build-pack.c-do-not-insert-payloadflags-into-.rpm-me.patch \
-           file://0001-CVE-2021-3521.patch \
-           file://0002-CVE-2021-3521.patch \
-           file://0003-CVE-2021-3521.patch \
-           "
-
-PE = "1"
-SRCREV = "3e74e8ba2dd5e76a5353d238dc7fc38651ce27b3"
-
-S = "${WORKDIR}/git"
-
-DEPENDS = "lua libgcrypt file popt xz bzip2 elfutils python3"
-DEPENDS:append:class-native = " file-replacement-native bzip2-replacement-native"
-
-inherit autotools gettext pkgconfig python3native
-export PYTHON_ABI
-
-AUTOTOOLS_AUXDIR = "${S}/build-aux"
-
-# OE-core patches autoreconf to additionally run gnu-configize, which fails with this recipe
-EXTRA_AUTORECONF:append = " --exclude=gnu-configize"
-
-# Vendor is detected differently on x86 and aarch64 hosts and can feed into target packages
-EXTRA_OECONF:append = " --enable-python --with-crypto=libgcrypt --with-vendor=pc"
-EXTRA_OECONF:append:libc-musl = " --disable-nls --disable-openmp"
-
-# --sysconfdir prevents rpm from attempting to access machine-specific configuration in sysroot/etc; we need to have it in rootfs
-# --localstatedir prevents rpm from writing its database to native sysroot when building images
-# Forcibly disable plugins for native/nativesdk, as the inhibit and prioreset
-# plugins both behave badly inside builds.
-EXTRA_OECONF:append:class-native = " --sysconfdir=/etc --localstatedir=/var --disable-plugins"
-EXTRA_OECONF:append:class-nativesdk = " --sysconfdir=/etc --disable-plugins"
-
-BBCLASSEXTEND = "native nativesdk"
-
-PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'inhibit', '', d)} sqlite zstd"
-# The inhibit plugin serves no purpose outside of the target
-PACKAGECONFIG:remove:class-native = "inhibit"
-PACKAGECONFIG:remove:class-nativesdk = "inhibit"
-
-PACKAGECONFIG[imaevm] = "--with-imaevm,,ima-evm-utils"
-PACKAGECONFIG[inhibit] = "--enable-inhibit-plugin,--disable-inhibit-plugin,dbus"
-PACKAGECONFIG[rpm2archive] = "--with-archive,--without-archive,libarchive"
-PACKAGECONFIG[sqlite] = "--enable-sqlite=yes,--enable-sqlite=no,sqlite3"
-PACKAGECONFIG[ndb] = "--enable-ndb,--disable-ndb"
-PACKAGECONFIG[bdb-ro] = "--enable-bdb-ro,--disable-bdb-ro"
-PACKAGECONFIG[zstd] = "--enable-zstd=yes,--enable-zstd=no,zstd"
-
-ASNEEDED = ""
-
-# Direct rpm-native to read configuration from our sysroot, not the one it was compiled in
-# libmagic also has sysroot path contamination, so override it
-
-WRAPPER_TOOLS = " \
-   ${bindir}/rpm \
-   ${bindir}/rpm2archive \
-   ${bindir}/rpm2cpio \
-   ${bindir}/rpmbuild \
-   ${bindir}/rpmdb \
-   ${bindir}/rpmgraph \
-   ${bindir}/rpmkeys \
-   ${bindir}/rpmsign \
-   ${bindir}/rpmspec \
-   ${libdir}/rpm/rpmdeps \
-"
-
-do_configure:prepend() {
-        mkdir -p ${S}/build-aux
-}
-
-do_install:append:class-native() {
-        for tool in ${WRAPPER_TOOLS}; do
-                test -x ${D}$tool && create_wrapper ${D}$tool \
-                        RPM_CONFIGDIR=${STAGING_LIBDIR_NATIVE}/rpm \
-                        RPM_ETCCONFIGDIR=${STAGING_DIR_NATIVE} \
-                        MAGIC=${STAGING_DIR_NATIVE}${datadir_native}/misc/magic.mgc \
-                        RPM_NO_CHROOT_FOR_SCRIPTS=1
-        done
-}
-
-do_install:append:class-nativesdk() {
-        for tool in ${WRAPPER_TOOLS}; do
-                test -x ${D}$tool && create_wrapper ${D}$tool \
-                        RPM_CONFIGDIR='`dirname $''realpath`'/${@os.path.relpath(d.getVar('libdir'), d.getVar('bindir'))}/rpm \
-                        RPM_ETCCONFIGDIR='$'{RPM_ETCCONFIGDIR-'`dirname $''realpath`'/${@os.path.relpath(d.getVar('sysconfdir'), d.getVar('bindir'))}/..} \
-                        MAGIC='`dirname $''realpath`'/${@os.path.relpath(d.getVar('datadir'), d.getVar('bindir'))}/misc/magic.mgc \
-                        RPM_NO_CHROOT_FOR_SCRIPTS=1
-        done
-
-        rm -rf ${D}/var
-
-        mkdir -p ${D}${SDKPATHNATIVE}/environment-setup.d
-        install -m 644 ${WORKDIR}/environment.d-rpm.sh ${D}${SDKPATHNATIVE}/environment-setup.d/rpm.sh
-}
-
-# Rpm's make install creates var/tmp which clashes with base-files packaging
-do_install:append:class-target() {
-    rm -rf ${D}/var
-}
-do_install:append:class-nativesdk() {
-    rm -rf ${D}${SDKPATHNATIVE}/var
-}
-
-do_install:append () {
-	sed -i -e 's:${HOSTTOOLS_DIR}/::g' \
-	    ${D}/${libdir}/rpm/macros
-
-}
-
-FILES:${PN} += "${libdir}/rpm-plugins/*.so \
-               "
-FILES:${PN}:append:class-nativesdk = " ${SDKPATHNATIVE}/environment-setup.d/rpm.sh"
-
-FILES:${PN}-dev += "${libdir}/rpm-plugins/*.la \
-                    "
-PACKAGE_BEFORE_PN += "${PN}-build ${PN}-sign ${PN}-archive"
-
-RRECOMMENDS:${PN} += "rpm-sign rpm-archive"
-
-FILES:${PN}-build = "\
-    ${bindir}/rpmbuild \
-    ${bindir}/gendiff \
-    ${bindir}/rpmspec \
-    ${libdir}/librpmbuild.so.* \
-    ${libdir}/rpm/brp-* \
-    ${libdir}/rpm/check-* \
-    ${libdir}/rpm/debugedit \
-    ${libdir}/rpm/sepdebugcrcfix \
-    ${libdir}/rpm/find-debuginfo.sh \
-    ${libdir}/rpm/find-lang.sh \
-    ${libdir}/rpm/*provides* \
-    ${libdir}/rpm/*requires* \
-    ${libdir}/rpm/*deps* \
-    ${libdir}/rpm/*.prov \
-    ${libdir}/rpm/*.req \
-    ${libdir}/rpm/config.* \
-    ${libdir}/rpm/mkinstalldirs \
-    ${libdir}/rpm/macros.p* \
-    ${libdir}/rpm/fileattrs/* \
-"
-
-FILES:${PN}-sign = "\
-    ${bindir}/rpmsign \
-    ${libdir}/librpmsign.so.* \
-"
-
-FILES:${PN}-archive = "\
-    ${bindir}/rpm2archive \
-"
-
-PACKAGES += "python3-rpm"
-PROVIDES += "python3-rpm"
-FILES:python3-rpm = "${PYTHON_SITEPACKAGES_DIR}/rpm/*"
-
-RDEPENDS:${PN}-build = "bash perl python3-core"
-
-PACKAGE_PREPROCESS_FUNCS += "rpm_package_preprocess"
-
-# Do not specify a sysroot when compiling on a target.
-rpm_package_preprocess () {
-	sed -i -e 's:--sysroot[^ ]*::g' \
-	    ${PKGD}/${libdir}/rpm/macros
-}
-
-SSTATE_HASHEQUIV_FILEMAP = " \
-    populate_sysroot:*/rpm/macros:${TMPDIR} \
-    populate_sysroot:*/rpm/macros:${COREBASE} \
-    "
diff --git a/poky/meta/recipes-devtools/rpm/rpm_4.17.1.bb b/poky/meta/recipes-devtools/rpm/rpm_4.17.1.bb
new file mode 100644
index 0000000..9b6446f
--- /dev/null
+++ b/poky/meta/recipes-devtools/rpm/rpm_4.17.1.bb
@@ -0,0 +1,206 @@
+SUMMARY = "The RPM package management system"
+DESCRIPTION = "The RPM Package Manager (RPM) is a powerful command line driven \
+package management system capable of installing, uninstalling, \
+verifying, querying, and updating software packages. Each software \
+package consists of an archive of files along with information about \
+the package like its version, a description, etc."
+
+SUMMARY:${PN}-dev = "Development files for manipulating RPM packages"
+DESCRIPTION:${PN}-dev = "This package contains the RPM C library and header files. These \
+development files will simplify the process of writing programs that \
+manipulate RPM packages and databases. These files are intended to \
+simplify the process of creating graphical package managers or any \
+other tools that need an intimate knowledge of RPM packages in order \
+to function."
+
+SUMMARY:python3-rpm = "Python bindings for apps which will manupulate RPM packages"
+DESCRIPTION:python3-rpm = "The python3-rpm package contains a module that permits applications \
+written in the Python programming language to use the interface \
+supplied by the RPM Package Manager libraries."
+
+HOMEPAGE = "http://www.rpm.org"
+
+# libraries are also LGPL - how to express this?
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=c4eec0c20c6034b9407a09945b48a43f"
+
+SRC_URI = "git://github.com/rpm-software-management/rpm;branch=rpm-4.17.x;protocol=https \
+           file://environment.d-rpm.sh \
+           file://0001-Do-not-add-an-unsatisfiable-dependency-when-building.patch \
+           file://0001-Do-not-read-config-files-from-HOME.patch \
+           file://0001-When-cross-installing-execute-package-scriptlets-wit.patch \
+           file://0001-Do-not-reset-the-PATH-environment-variable-before-ru.patch \
+           file://0002-Add-support-for-prefixing-etc-from-RPM_ETCCONFIGDIR-.patch \
+           file://0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch \
+           file://0001-Add-a-color-setting-for-mips64_n32-binaries.patch \
+           file://0001-perl-disable-auto-reqs.patch \
+           file://0016-rpmscript.c-change-logging-level-around-scriptlets-t.patch \
+           file://0001-lib-transaction.c-fix-file-conflicts-for-MIPS64-N32.patch \
+           file://0001-tools-Add-error.h-for-non-glibc-case.patch \
+           file://0001-docs-do-not-build-manpages-requires-pandoc.patch \
+           file://0001-build-pack.c-do-not-insert-payloadflags-into-.rpm-me.patch \
+           file://0001-configure.ac-add-linux-gnux32-variant-to-triplet-han.patch \
+           "
+
+PE = "1"
+SRCREV = "5bef402da334595ed9302b8bca1acdf5e88bfe11"
+
+S = "${WORKDIR}/git"
+
+DEPENDS = "lua libgcrypt file popt xz bzip2 elfutils python3"
+DEPENDS:append:class-native = " file-replacement-native bzip2-replacement-native"
+
+inherit autotools gettext pkgconfig python3native
+export PYTHON_ABI
+
+AUTOTOOLS_AUXDIR = "${S}/build-aux"
+
+# OE-core patches autoreconf to additionally run gnu-configize, which fails with this recipe
+EXTRA_AUTORECONF:append = " --exclude=gnu-configize"
+
+# Vendor is detected differently on x86 and aarch64 hosts and can feed into target packages
+EXTRA_OECONF:append = " --enable-python --with-crypto=libgcrypt --with-vendor=pc"
+EXTRA_OECONF:append:libc-musl = " --disable-nls --disable-openmp"
+
+# --sysconfdir prevents rpm from attempting to access machine-specific configuration in sysroot/etc; we need to have it in rootfs
+# --localstatedir prevents rpm from writing its database to native sysroot when building images
+# Forcibly disable plugins for native/nativesdk, as the inhibit and prioreset
+# plugins both behave badly inside builds.
+EXTRA_OECONF:append:class-native = " --sysconfdir=/etc --localstatedir=/var --disable-plugins"
+EXTRA_OECONF:append:class-nativesdk = " --sysconfdir=/etc --disable-plugins"
+
+BBCLASSEXTEND = "native nativesdk"
+
+PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'inhibit', '', d)} sqlite zstd"
+# The inhibit plugin serves no purpose outside of the target
+PACKAGECONFIG:remove:class-native = "inhibit"
+PACKAGECONFIG:remove:class-nativesdk = "inhibit"
+
+PACKAGECONFIG[imaevm] = "--with-imaevm,,ima-evm-utils"
+PACKAGECONFIG[inhibit] = "--enable-inhibit-plugin,--disable-inhibit-plugin,dbus"
+PACKAGECONFIG[rpm2archive] = "--with-archive,--without-archive,libarchive"
+PACKAGECONFIG[sqlite] = "--enable-sqlite=yes,--enable-sqlite=no,sqlite3"
+PACKAGECONFIG[ndb] = "--enable-ndb,--disable-ndb"
+PACKAGECONFIG[bdb-ro] = "--enable-bdb-ro,--disable-bdb-ro"
+PACKAGECONFIG[zstd] = "--enable-zstd=yes,--enable-zstd=no,zstd"
+
+ASNEEDED = ""
+
+# Direct rpm-native to read configuration from our sysroot, not the one it was compiled in
+# libmagic also has sysroot path contamination, so override it
+
+WRAPPER_TOOLS = " \
+   ${bindir}/rpm \
+   ${bindir}/rpm2archive \
+   ${bindir}/rpm2cpio \
+   ${bindir}/rpmbuild \
+   ${bindir}/rpmdb \
+   ${bindir}/rpmgraph \
+   ${bindir}/rpmkeys \
+   ${bindir}/rpmsign \
+   ${bindir}/rpmspec \
+   ${libdir}/rpm/rpmdeps \
+"
+
+do_configure:prepend() {
+        mkdir -p ${S}/build-aux
+}
+
+do_install:append:class-native() {
+        for tool in ${WRAPPER_TOOLS}; do
+                test -x ${D}$tool && create_wrapper ${D}$tool \
+                        RPM_CONFIGDIR=${STAGING_LIBDIR_NATIVE}/rpm \
+                        RPM_ETCCONFIGDIR=${STAGING_DIR_NATIVE} \
+                        MAGIC=${STAGING_DIR_NATIVE}${datadir_native}/misc/magic.mgc \
+                        RPM_NO_CHROOT_FOR_SCRIPTS=1
+        done
+}
+
+do_install:append:class-nativesdk() {
+        for tool in ${WRAPPER_TOOLS}; do
+                test -x ${D}$tool && create_wrapper ${D}$tool \
+                        RPM_CONFIGDIR='`dirname $''realpath`'/${@os.path.relpath(d.getVar('libdir'), d.getVar('bindir'))}/rpm \
+                        RPM_ETCCONFIGDIR='$'{RPM_ETCCONFIGDIR-'`dirname $''realpath`'/${@os.path.relpath(d.getVar('sysconfdir'), d.getVar('bindir'))}/..} \
+                        MAGIC='`dirname $''realpath`'/${@os.path.relpath(d.getVar('datadir'), d.getVar('bindir'))}/misc/magic.mgc \
+                        RPM_NO_CHROOT_FOR_SCRIPTS=1
+        done
+
+        rm -rf ${D}/var
+
+        mkdir -p ${D}${SDKPATHNATIVE}/environment-setup.d
+        install -m 644 ${WORKDIR}/environment.d-rpm.sh ${D}${SDKPATHNATIVE}/environment-setup.d/rpm.sh
+}
+
+# Rpm's make install creates var/tmp which clashes with base-files packaging
+do_install:append:class-target() {
+    rm -rf ${D}/var
+}
+do_install:append:class-nativesdk() {
+    rm -rf ${D}${SDKPATHNATIVE}/var
+}
+
+do_install:append () {
+	sed -i -e 's:${HOSTTOOLS_DIR}/::g' \
+	    ${D}/${libdir}/rpm/macros
+
+}
+
+FILES:${PN} += "${libdir}/rpm-plugins/*.so \
+               "
+FILES:${PN}:append:class-nativesdk = " ${SDKPATHNATIVE}/environment-setup.d/rpm.sh"
+
+FILES:${PN}-dev += "${libdir}/rpm-plugins/*.la \
+                    "
+PACKAGE_BEFORE_PN += "${PN}-build ${PN}-sign ${PN}-archive"
+
+RRECOMMENDS:${PN} += "rpm-sign rpm-archive"
+
+FILES:${PN}-build = "\
+    ${bindir}/rpmbuild \
+    ${bindir}/gendiff \
+    ${bindir}/rpmspec \
+    ${libdir}/librpmbuild.so.* \
+    ${libdir}/rpm/brp-* \
+    ${libdir}/rpm/check-* \
+    ${libdir}/rpm/debugedit \
+    ${libdir}/rpm/sepdebugcrcfix \
+    ${libdir}/rpm/find-debuginfo.sh \
+    ${libdir}/rpm/find-lang.sh \
+    ${libdir}/rpm/*provides* \
+    ${libdir}/rpm/*requires* \
+    ${libdir}/rpm/*deps* \
+    ${libdir}/rpm/*.prov \
+    ${libdir}/rpm/*.req \
+    ${libdir}/rpm/config.* \
+    ${libdir}/rpm/mkinstalldirs \
+    ${libdir}/rpm/macros.p* \
+    ${libdir}/rpm/fileattrs/* \
+"
+
+FILES:${PN}-sign = "\
+    ${bindir}/rpmsign \
+    ${libdir}/librpmsign.so.* \
+"
+
+FILES:${PN}-archive = "\
+    ${bindir}/rpm2archive \
+"
+
+PACKAGES += "python3-rpm"
+PROVIDES += "python3-rpm"
+FILES:python3-rpm = "${PYTHON_SITEPACKAGES_DIR}/rpm/*"
+
+RDEPENDS:${PN}-build = "bash perl python3-core"
+
+PACKAGE_PREPROCESS_FUNCS += "rpm_package_preprocess"
+
+# Do not specify a sysroot when compiling on a target.
+rpm_package_preprocess () {
+	sed -i -e 's:--sysroot[^ ]*::g' \
+	    ${PKGD}/${libdir}/rpm/macros
+}
+
+SSTATE_HASHEQUIV_FILEMAP = " \
+    populate_sysroot:*/rpm/macros:${TMPDIR} \
+    populate_sysroot:*/rpm/macros:${COREBASE} \
+    "
diff --git a/poky/meta/recipes-devtools/rsync/files/0001-Add-missing-prototypes-to-function-declarations.patch b/poky/meta/recipes-devtools/rsync/files/0001-Add-missing-prototypes-to-function-declarations.patch
new file mode 100644
index 0000000..474d82d
--- /dev/null
+++ b/poky/meta/recipes-devtools/rsync/files/0001-Add-missing-prototypes-to-function-declarations.patch
@@ -0,0 +1,173 @@
+From 785c0072c80c2f6e0839478453cf65fdeac15da0 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 29 Aug 2022 19:53:28 -0700
+Subject: [PATCH] Add missing prototypes to function declarations
+
+With Clang 15+ compiler -Wstrict-prototypes is triggering warnings which
+are turned into errors with -Werror, this fixes the problem by adding
+missing prototypes
+
+Fixes errors like
+| log.c:134:24: error: a function declaration without a prototype is deprecated in all versions of C [-Werror,-Wstrict-prototypes]
+| static void syslog_init()
+|                        ^
+|                         void
+
+Upstream-Status: Submitted [https://lists.samba.org/archive/rsync/2022-August/032858.html]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ checksum.c       | 2 +-
+ exclude.c        | 2 +-
+ hlink.c          | 3 +--
+ lib/pool_alloc.c | 2 +-
+ log.c            | 2 +-
+ main.c           | 2 +-
+ syscall.c        | 4 ++--
+ zlib/crc32.c     | 2 +-
+ zlib/trees.c     | 2 +-
+ zlib/zutil.c     | 4 ++--
+ 10 files changed, 12 insertions(+), 13 deletions(-)
+
+diff --git a/checksum.c b/checksum.c
+index fb8c0a0..174c28c 100644
+--- a/checksum.c
++++ b/checksum.c
+@@ -629,7 +629,7 @@ int sum_end(char *sum)
+ 	return csum_len_for_type(cursum_type, 0);
+ }
+ 
+-void init_checksum_choices()
++void init_checksum_choices(void)
+ {
+ #ifdef SUPPORT_XXH3
+ 	char buf[32816];
+diff --git a/exclude.c b/exclude.c
+index adc82e2..79f5a82 100644
+--- a/exclude.c
++++ b/exclude.c
+@@ -358,7 +358,7 @@ void implied_include_partial_string(const char *s_start, const char *s_end)
+ 	memcpy(partial_string_buf, s_start, partial_string_len);
+ }
+ 
+-void free_implied_include_partial_string()
++void free_implied_include_partial_string(void)
+ {
+ 	if (partial_string_buf) {
+ 		free(partial_string_buf);
+diff --git a/hlink.c b/hlink.c
+index 66810a3..6511dfb 100644
+--- a/hlink.c
++++ b/hlink.c
+@@ -117,8 +117,7 @@ static void match_gnums(int32 *ndx_list, int ndx_count)
+ 	struct ht_int32_node *node = NULL;
+ 	int32 gnum, gnum_next;
+ 
+-	qsort(ndx_list, ndx_count, sizeof ndx_list[0], (int (*)()) hlink_compare_gnum);
+-
++	qsort(ndx_list, ndx_count, sizeof ndx_list[0], (int (*)(const void *, const void *)) hlink_compare_gnum);
+ 	for (from = 0; from < ndx_count; from++) {
+ 		file = hlink_flist->sorted[ndx_list[from]];
+ 		gnum = F_HL_GNUM(file);
+diff --git a/lib/pool_alloc.c b/lib/pool_alloc.c
+index a1a7245..4eae062 100644
+--- a/lib/pool_alloc.c
++++ b/lib/pool_alloc.c
+@@ -9,7 +9,7 @@ struct alloc_pool
+ 	size_t			size;		/* extent size		*/
+ 	size_t			quantum;	/* allocation quantum	*/
+ 	struct pool_extent	*extents;	/* top extent is "live" */
+-	void			(*bomb)();	/* called if malloc fails */
++	void			(*bomb)(const char *, const char *, int);	/* called if malloc fails */
+ 	int			flags;
+ 
+ 	/* statistical data */
+diff --git a/log.c b/log.c
+index 44344e2..991e359 100644
+--- a/log.c
++++ b/log.c
+@@ -131,7 +131,7 @@ static void logit(int priority, const char *buf)
+ 	}
+ }
+ 
+-static void syslog_init()
++static void syslog_init(void)
+ {
+ 	int options = LOG_PID;
+ 
+diff --git a/main.c b/main.c
+index 9ebfbea..affa244 100644
+--- a/main.c
++++ b/main.c
+@@ -244,7 +244,7 @@ void read_del_stats(int f)
+ 	stats.deleted_files += stats.deleted_specials = read_varint(f);
+ }
+ 
+-static void become_copy_as_user()
++static void become_copy_as_user(void)
+ {
+ 	char *gname;
+ 	uid_t uid;
+diff --git a/syscall.c b/syscall.c
+index d92074a..92ca86d 100644
+--- a/syscall.c
++++ b/syscall.c
+@@ -389,9 +389,9 @@ OFF_T do_lseek(int fd, OFF_T offset, int whence)
+ {
+ #ifdef HAVE_LSEEK64
+ #if !SIZEOF_OFF64_T
+-	OFF_T lseek64();
++	OFF_T lseek64(int fd, OFF_T offset, int whence);
+ #else
+-	off64_t lseek64();
++	off64_t lseek64(int fd, off64_t offset, int whence);
+ #endif
+ 	return lseek64(fd, offset, whence);
+ #else
+diff --git a/zlib/crc32.c b/zlib/crc32.c
+index 05733f4..50c6c02 100644
+--- a/zlib/crc32.c
++++ b/zlib/crc32.c
+@@ -187,7 +187,7 @@ local void write_table(out, table)
+ /* =========================================================================
+  * This function can be used by asm versions of crc32()
+  */
+-const z_crc_t FAR * ZEXPORT get_crc_table()
++const z_crc_t FAR * ZEXPORT get_crc_table(void)
+ {
+ #ifdef DYNAMIC_CRC_TABLE
+     if (crc_table_empty)
+diff --git a/zlib/trees.c b/zlib/trees.c
+index 9c66770..0d9047e 100644
+--- a/zlib/trees.c
++++ b/zlib/trees.c
+@@ -231,7 +231,7 @@ local void send_bits(s, value, length)
+ /* ===========================================================================
+  * Initialize the various 'constant' tables.
+  */
+-local void tr_static_init()
++local void tr_static_init(void)
+ {
+ #if defined(GEN_TREES_H) || !defined(STDC)
+     static int static_init_done = 0;
+diff --git a/zlib/zutil.c b/zlib/zutil.c
+index bbba7b2..61f8dc9 100644
+--- a/zlib/zutil.c
++++ b/zlib/zutil.c
+@@ -27,12 +27,12 @@ z_const char * const z_errmsg[10] = {
+ ""};
+ 
+ 
+-const char * ZEXPORT zlibVersion()
++const char * ZEXPORT zlibVersion(void)
+ {
+     return ZLIB_VERSION;
+ }
+ 
+-uLong ZEXPORT zlibCompileFlags()
++uLong ZEXPORT zlibCompileFlags(void)
+ {
+     uLong flags;
+ 
+-- 
+2.37.2
+
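The class of diagnostic this patch addresses is easy to reproduce outside rsync. The following minimal C translation unit is an illustrative sketch (not part of the patch): the empty parameter list on the first definition is what Clang 15's -Wstrict-prototypes flags, while the (void) prototype form is accepted.

    /* strict-proto-demo.c
     * clang -Wstrict-prototypes -Werror -c strict-proto-demo.c
     * errors on bump_counter(); passes once every declaration uses (void). */
    static int counter;

    static void bump_counter()        /* unspecified parameter list: warning, error with -Werror */
    {
        counter++;
    }

    static void reset_counter(void)   /* proper prototype: no diagnostic */
    {
        counter = 0;
    }

    int main(void)
    {
        bump_counter();
        reset_counter();
        return counter;
    }
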
diff --git a/poky/meta/recipes-devtools/rsync/files/0001-Turn-on-pedantic-errors-at-the-end-of-configure.patch b/poky/meta/recipes-devtools/rsync/files/0001-Turn-on-pedantic-errors-at-the-end-of-configure.patch
new file mode 100644
index 0000000..1d9c4bf
--- /dev/null
+++ b/poky/meta/recipes-devtools/rsync/files/0001-Turn-on-pedantic-errors-at-the-end-of-configure.patch
@@ -0,0 +1,68 @@
+From e64a58387db46239902b610871a0eb81626e99ff Mon Sep 17 00:00:00 2001
+From: Paul Eggert <eggert@cs.ucla.edu>
+Date: Thu, 18 Aug 2022 07:46:28 -0700
+Subject: [PATCH] Turn on -pedantic-errors at the end of 'configure'
+
+Problem reported by Khem Raj in:
+https://lists.gnu.org/r/autoconf-patches/2022-08/msg00009.html
+Upstream-Status: Submitted [https://lists.samba.org/archive/rsync/2022-August/032862.html]
+---
+ configure.ac | 35 ++++++++++++++++++++---------------
+ 1 file changed, 20 insertions(+), 15 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index d185b2d3..7e9514f7 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -1071,21 +1071,6 @@ elif test x"$ac_cv_header_popt_h" != x"yes"; then
+     with_included_popt=yes
+ fi
+ 
+-if test x"$GCC" = x"yes"; then
+-    if test x"$with_included_popt" != x"yes"; then
+-	# Turn pedantic warnings into errors to ensure an array-init overflow is an error.
+-	CFLAGS="$CFLAGS -pedantic-errors"
+-    else
+-	# Our internal popt code cannot be compiled with pedantic warnings as errors, so try to
+-	# turn off pedantic warnings (which will not lose the error for array-init overflow).
+-	# Older gcc versions don't understand -Wno-pedantic, so check if --help=warnings lists
+-	# -Wpedantic and use that as a flag.
+-	case `$CC --help=warnings 2>/dev/null | grep Wpedantic` in
+-	    *-Wpedantic*) CFLAGS="$CFLAGS -pedantic-errors -Wno-pedantic" ;;
+-	esac
+-    fi
+-fi
+-
+ AC_MSG_CHECKING([whether to use included libpopt])
+ if test x"$with_included_popt" = x"yes"; then
+     AC_MSG_RESULT($srcdir/popt)
+@@ -1444,6 +1429,26 @@ case "$CC" in
+     ;;
+ esac
+ 
++# Enable -pedantic-errors last, so that it doesn't mess up other
++# 'configure' tests.  For example, Autoconf uses empty function
++# prototypes like 'int main () {}' which Clang 15's -pedantic-errors
++# would reject.  Generally it's not a good idea to try to run
++# 'configure' itself with strict compiler checking.
++if test x"$GCC" = x"yes"; then
++    if test x"$with_included_popt" != x"yes"; then
++	# Turn pedantic warnings into errors to ensure an array-init overflow is an error.
++	CFLAGS="$CFLAGS -pedantic-errors"
++    else
++	# Our internal popt code cannot be compiled with pedantic warnings as errors, so try to
++	# turn off pedantic warnings (which will not lose the error for array-init overflow).
++	# Older gcc versions don't understand -Wno-pedantic, so check if --help=warnings lists
++	# -Wpedantic and use that as a flag.
++	case `$CC --help=warnings 2>/dev/null | grep Wpedantic` in
++	    *-Wpedantic*) CFLAGS="$CFLAGS -pedantic-errors -Wno-pedantic" ;;
++	esac
++    fi
++fi
++
+ AC_CONFIG_FILES([Makefile lib/dummy zlib/dummy popt/dummy shconfig])
+ AC_OUTPUT
+ 
+-- 
+2.37.1
+
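As the comment added to configure.ac explains, the ordering matters because Autoconf's own probe programs use old-style definitions. A minimal sketch of such a probe (illustrative, not taken from rsync or Autoconf output) shows why -pedantic-errors is only appended after the other configure checks have run:

    /* probe.c: the shape of program Autoconf generates for feature tests.
     * Per the comment above, with Clang 15 "clang -pedantic-errors -c probe.c"
     * rejects the empty () parameter list, while plain "clang -c probe.c"
     * accepts it, so the flag must not be active during those tests. */
    int main () { return 0; }
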
diff --git a/poky/meta/recipes-devtools/rsync/rsync_3.2.4.bb b/poky/meta/recipes-devtools/rsync/rsync_3.2.4.bb
deleted file mode 100644
index e6f917b..0000000
--- a/poky/meta/recipes-devtools/rsync/rsync_3.2.4.bb
+++ /dev/null
@@ -1,70 +0,0 @@
-SUMMARY = "File synchronization tool"
-HOMEPAGE = "http://rsync.samba.org/"
-DESCRIPTION = "rsync is an open source utility that provides fast incremental file transfer."
-BUGTRACKER = "http://rsync.samba.org/bugzilla.html"
-SECTION = "console/network"
-# GPL-2.0-or-later (<< 3.0.0), GPL-3.0-or-later (>= 3.0.0)
-# Includes opennsh and xxhash dynamic link exception
-LICENSE = "GPL-3.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=24423708fe159c9d12be1ea29fcb18c7"
-
-DEPENDS = "popt"
-
-SRC_URI = "https://download.samba.org/pub/${BPN}/src/${BP}.tar.gz \
-           file://rsyncd.conf \
-           file://makefile-no-rebuild.patch \
-           file://determism.patch \
-           "
-
-SRC_URI[sha256sum] = "6f761838d08052b0b6579cf7f6737d93e47f01f4da04c5d24d3447b7f2a5fad1"
-
-# -16548 required for v3.1.3pre1. Already in v3.1.3.
-CVE_CHECK_IGNORE += " CVE-2017-16548 "
-
-inherit autotools-brokensep
-
-PACKAGECONFIG ??= "acl attr \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
-"
-
-PACKAGECONFIG[acl] = "--enable-acl-support,--disable-acl-support,acl,"
-PACKAGECONFIG[attr] = "--enable-xattr-support,--disable-xattr-support,attr,"
-PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
-PACKAGECONFIG[lz4] = "--enable-lz4,--disable-lz4,lz4"
-PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
-PACKAGECONFIG[xxhash] = "--enable-xxhash,--disable-xxhash,xxhash"
-PACKAGECONFIG[zstd] = "--enable-zstd,--disable-zstd,zstd"
-
-# By default, if crosscompiling, rsync disables a number of
-# capabilities, hardlinking symlinks and special files (i.e. devices)
-CACHED_CONFIGUREVARS += "rsync_cv_can_hardlink_special=yes rsync_cv_can_hardlink_symlink=yes"
-
-EXTRA_OEMAKE = 'STRIP=""'
-EXTRA_OECONF = "--disable-md2man --with-nobody-group=nogroup"
-
-#| ./simd-checksum-x86_64.cpp: In function 'uint32_t get_checksum1_cpp(char*, int32_t)':
-#| ./simd-checksum-x86_64.cpp:89:52: error: multiversioning needs 'ifunc' which is not supported on this target
-#|    89 | __attribute__ ((target("default"))) MVSTATIC int32 get_checksum1_avx2_64(schar* buf, int32 len, int32 i, uint32* ps1, uint32* ps2) { return i; }
-#|       |                                                    ^~~~~~~~~~~~~~~~~~~~~
-#| ./simd-checksum-x86_64.cpp:480:1: error: use of multiversioned function without a default
-#|   480 | }
-#|       | ^
-#| If you can't fix the issue, re-run ./configure with --disable-roll-simd.
-EXTRA_OECONF:append:libc-musl = " --disable-roll-simd"
-
-# rsync 3.0 uses configure.sh instead of configure, and
-# makefile checks the existence of configure.sh
-do_configure:prepend () {
-	rm -f ${S}/configure ${S}/configure.sh
-}
-
-do_configure:append () {
-	cp -f ${S}/configure ${S}/configure.sh
-}
-
-do_install:append() {
-	install -d ${D}${sysconfdir}
-	install -m 0644 ${WORKDIR}/rsyncd.conf ${D}${sysconfdir}
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/rsync/rsync_3.2.5.bb b/poky/meta/recipes-devtools/rsync/rsync_3.2.5.bb
new file mode 100644
index 0000000..0bbbac7
--- /dev/null
+++ b/poky/meta/recipes-devtools/rsync/rsync_3.2.5.bb
@@ -0,0 +1,71 @@
+SUMMARY = "File synchronization tool"
+HOMEPAGE = "http://rsync.samba.org/"
+DESCRIPTION = "rsync is an open source utility that provides fast incremental file transfer."
+BUGTRACKER = "http://rsync.samba.org/bugzilla.html"
+SECTION = "console/network"
+# GPL-2.0-or-later (<< 3.0.0), GPL-3.0-or-later (>= 3.0.0)
+# Includes openssh and xxhash dynamic link exception
+LICENSE = "GPL-3.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=24423708fe159c9d12be1ea29fcb18c7"
+
+DEPENDS = "popt"
+
+SRC_URI = "https://download.samba.org/pub/${BPN}/src/${BP}.tar.gz \
+           file://rsyncd.conf \
+           file://makefile-no-rebuild.patch \
+           file://determism.patch \
+           file://0001-Add-missing-prototypes-to-function-declarations.patch \
+           file://0001-Turn-on-pedantic-errors-at-the-end-of-configure.patch \
+           "
+SRC_URI[sha256sum] = "2ac4d21635cdf791867bc377c35ca6dda7f50d919a58be45057fd51600c69aba"
+
+# -16548 required for v3.1.3pre1. Already in v3.1.3.
+CVE_CHECK_IGNORE += " CVE-2017-16548 "
+
+inherit autotools-brokensep
+
+PACKAGECONFIG ??= "acl attr \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
+"
+
+PACKAGECONFIG[acl] = "--enable-acl-support,--disable-acl-support,acl,"
+PACKAGECONFIG[attr] = "--enable-xattr-support,--disable-xattr-support,attr,"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
+PACKAGECONFIG[lz4] = "--enable-lz4,--disable-lz4,lz4"
+PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
+PACKAGECONFIG[xxhash] = "--enable-xxhash,--disable-xxhash,xxhash"
+PACKAGECONFIG[zstd] = "--enable-zstd,--disable-zstd,zstd"
+
+# By default, if crosscompiling, rsync disables a number of
+# capabilities, hardlinking symlinks and special files (i.e. devices)
+CACHED_CONFIGUREVARS += "rsync_cv_can_hardlink_special=yes rsync_cv_can_hardlink_symlink=yes"
+
+EXTRA_OEMAKE = 'STRIP=""'
+EXTRA_OECONF = "--disable-md2man --with-nobody-group=nogroup"
+
+#| ./simd-checksum-x86_64.cpp: In function 'uint32_t get_checksum1_cpp(char*, int32_t)':
+#| ./simd-checksum-x86_64.cpp:89:52: error: multiversioning needs 'ifunc' which is not supported on this target
+#|    89 | __attribute__ ((target("default"))) MVSTATIC int32 get_checksum1_avx2_64(schar* buf, int32 len, int32 i, uint32* ps1, uint32* ps2) { return i; }
+#|       |                                                    ^~~~~~~~~~~~~~~~~~~~~
+#| ./simd-checksum-x86_64.cpp:480:1: error: use of multiversioned function without a default
+#|   480 | }
+#|       | ^
+#| If you can't fix the issue, re-run ./configure with --disable-roll-simd.
+EXTRA_OECONF:append:libc-musl = " --disable-roll-simd"
+
+# rsync 3.0 uses configure.sh instead of configure, and
+# makefile checks the existence of configure.sh
+do_configure:prepend () {
+	rm -f ${S}/configure ${S}/configure.sh
+}
+
+do_configure:append () {
+	cp -f ${S}/configure ${S}/configure.sh
+}
+
+do_install:append() {
+	install -d ${D}${sysconfdir}
+	install -m 0644 ${WORKDIR}/rsyncd.conf ${D}${sysconfdir}
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/ruby/ruby/0001-Remove-dependency-on-libcapstone.patch b/poky/meta/recipes-devtools/ruby/ruby/0001-Remove-dependency-on-libcapstone.patch
new file mode 100644
index 0000000..5d0f8fc
--- /dev/null
+++ b/poky/meta/recipes-devtools/ruby/ruby/0001-Remove-dependency-on-libcapstone.patch
@@ -0,0 +1,36 @@
+From 222203297966f312109e8eaa2520f2cf2f59c09d Mon Sep 17 00:00:00 2001
+From: Alan Wu <XrXr@users.noreply.github.com>
+Date: Thu, 31 Mar 2022 17:26:28 -0400
+Subject: [PATCH] Remove dependency on libcapstone
+
+We have received reports of build failures due to this configuration
+check modifying compile flags. Since only YJIT devs use this library
+we can remove it to make Ruby easier to build for users.
+
+See: https://github.com/rbenv/ruby-build/discussions/1933
+
+Upstream-Status: Backport
+---
+ configure.ac | 9 ---------
+ 1 file changed, 9 deletions(-)
+
+Index: ruby-3.1.2/configure.ac
+===================================================================
+--- ruby-3.1.2.orig/configure.ac
++++ ruby-3.1.2/configure.ac
+@@ -1244,15 +1244,6 @@ AC_CHECK_LIB(dl, dlopen)	# Dynamic linki
+ AC_CHECK_LIB(dld, shl_load)	# Dynamic linking for HP-UX
+ AC_CHECK_LIB(socket, shutdown)  # SunOS/Solaris
+ 
+-if pkg-config --exists capstone; then
+-   CAPSTONE_CFLAGS=`pkg-config --cflags capstone`
+-   CAPSTONE_LIB_L=`pkg-config --libs-only-L capstone`
+-   LDFLAGS="$LDFLAGS $CAPSTONE_LIB_L"
+-   CFLAGS="$CFLAGS $CAPSTONE_CFLAGS"
+-fi
+-
+-AC_CHECK_LIB(capstone, cs_open) # Capstone disassembler for debugging YJIT
+-
+ dnl Checks for header files.
+ AC_HEADER_DIRENT
+ dnl AC_HEADER_STDC has been checked in AC_USE_SYSTEM_EXTENSIONS
diff --git a/poky/meta/recipes-devtools/ruby/ruby_3.1.2.bb b/poky/meta/recipes-devtools/ruby/ruby_3.1.2.bb
index 6fc1f53..387bfa9 100644
--- a/poky/meta/recipes-devtools/ruby/ruby_3.1.2.bb
+++ b/poky/meta/recipes-devtools/ruby/ruby_3.1.2.bb
@@ -12,6 +12,7 @@
            file://0005-Mark-Gemspec-reproducible-change-fixing-784225-too.patch \
            file://0006-Make-gemspecs-reproducible.patch \
            file://0001-vm_dump.c-Define-REG_S1-and-REG_S2-for-musl-riscv.patch \
+           file://0001-Remove-dependency-on-libcapstone.patch \
            "
 
 SRC_URI[sha256sum] = "61843112389f02b735428b53bb64cf988ad9fb81858b8248e22e57336f24a83e"
@@ -25,7 +26,6 @@
 # rdoc is off by default due to non-reproducibility reported in
 # https://bugs.ruby-lang.org/issues/18456
 PACKAGECONFIG[rdoc] = "--enable-install-rdoc,--disable-install-rdoc,"
-PACKAGECONFIG[capstone] = "--with-capstone=yes, --with-capstone=no"
 
 EXTRA_OECONF = "\
     --disable-versioned-paths \
diff --git a/poky/meta/recipes-devtools/rust/README-rust.md b/poky/meta/recipes-devtools/rust/README-rust.md
index b87637c..209836a 100644
--- a/poky/meta/recipes-devtools/rust/README-rust.md
+++ b/poky/meta/recipes-devtools/rust/README-rust.md
@@ -3,22 +3,6 @@
 This provides the Rust compiler, tools for building packages (cargo), and 
 a few example projects.
 
-## What works:
-
- - Building `rust-native` and `cargo-native`
- - Building Rust based projects with Cargo for the TARGET
-   - e.g. `rustfmt` which is used by the CI system
- - `-buildsdk` and `-crosssdk` packages
-
-## What doesn't:
-
- - Using anything but x86_64 or arm64 as the build environment
- - rust (built for target) [issue #81](https://github.com/meta-rust/meta-rust/issues/81)
-
-## What's untested:
-
- - cargo (built for target)
-
 ## Building a rust package
 
 When building a rust package in bitbake, it's usually easiest to build with
@@ -36,11 +20,11 @@
 NOTE: You will have to edit the generated recipe based on the comments
 contained within it
 
-## TODO
-
 ## Pitfalls
 
- - TARGET_SYS _must_ be different from BUILD_SYS. This is due to the way Rust configuration options are tracked for different targets. This is the reason we use the Yocto triples instead of the native Rust triples. See rust-lang/cargo#3349.
+ - TARGET_SYS _must_ be different from BUILD_SYS. This is due to the way Rust
+ configuration options are tracked for different targets. This is the reason 
+ we use the Yocto triples instead of the native Rust triples. See rust-lang/cargo#3349.
 
 ## Dependencies
 
@@ -52,7 +36,3 @@
  - Any `-sys` packages your project might need must have RDEPENDs for
  the native library.
 
-## Copyright
-
-MIT OR Apache-2.0 - Same as rust
-
diff --git a/poky/meta/recipes-devtools/rust/libstd-rs.inc b/poky/meta/recipes-devtools/rust/libstd-rs.inc
index 9879563..d49383c 100644
--- a/poky/meta/recipes-devtools/rust/libstd-rs.inc
+++ b/poky/meta/recipes-devtools/rust/libstd-rs.inc
@@ -35,6 +35,6 @@
     # With the incremental build support added in 1.24, the libstd deps directory also includes dependency
     # files that get installed. Those are really only needed to incrementally rebuild the libstd library
     # itself and don't need to be installed.
-    rm -f ${B}/${TARGET_SYS}/${BUILD_DIR}/deps/*.d
-    cp ${B}/${TARGET_SYS}/${BUILD_DIR}/deps/* ${D}${rustlibdir}
+    rm -f ${B}/${RUST_TARGET_SYS}/${BUILD_DIR}/deps/*.d
+    cp ${B}/${RUST_TARGET_SYS}/${BUILD_DIR}/deps/* ${D}${rustlibdir}
 }
diff --git a/poky/meta/recipes-devtools/rust/libstd-rs_1.62.0.bb b/poky/meta/recipes-devtools/rust/libstd-rs_1.63.0.bb
similarity index 100%
rename from poky/meta/recipes-devtools/rust/libstd-rs_1.62.0.bb
rename to poky/meta/recipes-devtools/rust/libstd-rs_1.63.0.bb
diff --git a/poky/meta/recipes-devtools/rust/rust-cross-canadian-common.inc b/poky/meta/recipes-devtools/rust/rust-cross-canadian-common.inc
deleted file mode 100644
index 34020ff..0000000
--- a/poky/meta/recipes-devtools/rust/rust-cross-canadian-common.inc
+++ /dev/null
@@ -1,49 +0,0 @@
-
-RUST_ALTERNATE_EXE_PATH = "${STAGING_LIBDIR_NATIVE}/llvm-rust/bin/llvm-config"
-
-require rust.inc
-
-DEPENDS += "rust-llvm (=${PV})"
-
-inherit cross-canadian
-
-DEPENDS += "  \
-            virtual/${HOST_PREFIX}gcc-crosssdk \
-            virtual/nativesdk-libc rust-llvm-native \
-            virtual/${TARGET_PREFIX}compilerlibs \
-            virtual/nativesdk-${HOST_PREFIX}compilerlibs \
-            gcc-cross-${TARGET_ARCH} \
-           "
-
-# The host tools are likely not to be able to do the necessary operation on
-# the target architecturea. Alternatively one could check compatibility
-# between host/target.
-EXCLUDE_FROM_SHLIBS_${RUSTLIB_TARGET_PN} = "1"
-
-DEBUG_PREFIX_MAP = "-fdebug-prefix-map=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
-                    -fdebug-prefix-map=${STAGING_DIR_HOST}= \
-                    -fdebug-prefix-map=${STAGING_DIR_NATIVE}= \
-                    "
-
-RUST_TARGETGENS = "BUILD HOST TARGET"
-
-INHIBIT_DEFAULT_RUST_DEPS = "1"
-
-export WRAPPER_TARGET_CC = "${CCACHE}${TARGET_PREFIX}gcc --sysroot=${STAGING_DIR_TARGET} ${TARGET_CC_ARCH} ${SECURITY_NOPIE_CFLAGS}"
-export WRAPPER_TARGET_CXX = "${CCACHE}${TARGET_PREFIX}g++ --sysroot=${STAGING_DIR_TARGET} ${TARGET_CC_ARCH} ${SECURITY_NOPIE_CFLAGS}"
-export WRAPPER_TARGET_CCLD = "${TARGET_PREFIX}gcc --sysroot=${STAGING_DIR_TARGET} ${TARGET_CC_ARCH} ${SECURITY_NOPIE_CFLAGS}"
-export WRAPPER_TARGET_LDFLAGS = "${TARGET_LDFLAGS}"
-export WRAPPER_TARGET_AR = "${TARGET_PREFIX}ar"
-
-python do_configure:prepend() {
-    targets = [d.getVar("TARGET_SYS", True), "{}-unknown-linux-gnu".format(d.getVar("HOST_ARCH", True))]
-    hosts = ["{}-unknown-linux-gnu".format(d.getVar("HOST_ARCH", True))]
-}
-
-INSANE_SKIP:${RUSTLIB_TARGET_PN} = "file-rdeps arch ldflags"
-SKIP_FILEDEPS:${RUSTLIB_TARGET_PN} = "1"
-
-INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
-INHIBIT_PACKAGE_STRIP = "1"
-INHIBIT_SYSROOT_STRIP = "1"
-
diff --git a/poky/meta/recipes-devtools/rust/rust-cross-canadian.inc b/poky/meta/recipes-devtools/rust/rust-cross-canadian.inc
index 8bbbd61..7bf75a4 100644
--- a/poky/meta/recipes-devtools/rust/rust-cross-canadian.inc
+++ b/poky/meta/recipes-devtools/rust/rust-cross-canadian.inc
@@ -1,78 +1,71 @@
-
-require rust-cross-canadian-common.inc
-
-RUSTLIB_TARGET_PN = "rust-cross-canadian-rustlib-target-${TRANSLATED_TARGET_ARCH}"
-RUSTLIB_HOST_PN = "rust-cross-canadian-rustlib-host-${TRANSLATED_TARGET_ARCH}"
-RUSTLIB_SRC_PN = "rust-cross-canadian-src"
-RUSTLIB_PKGS = "${RUSTLIB_SRC_PN} ${RUSTLIB_TARGET_PN} ${RUSTLIB_HOST_PN}"
 PN = "rust-cross-canadian-${TRANSLATED_TARGET_ARCH}"
 
-PACKAGES = "${RUSTLIB_PKGS} ${PN}"
-RDEPENDS:${PN} += "${RUSTLIB_PKGS}"
+inherit rust-target-config
+inherit rust-common
 
-# The default behaviour of x.py changed in 1.47+ so now we need to
-# explicitly ask for the stage 2 compiler to be assembled.
-do_compile () {
-    rust_runx build --stage 2
+LICENSE = "MIT"
+
+MODIFYTOS = "0"
+
+# Need to use our SDK's sh here, see #14878
+create_sdk_wrapper () {
+        file="$1"
+        shift
+
+        cat <<- EOF > "${file}"
+		#!${base_prefix}/bin/sh
+		\$$1 \$@
+		EOF
+
+        chmod +x "$file"
 }
 
 do_install () {
     # Rust requires /usr/lib to contain the libs.
-    # Similar story is with /usr/bin ruquiring  `lib` to be at the same level.
     # The required structure is retained for simplicity.
     SYS_LIBDIR=$(dirname ${D}${libdir})
     SYS_BINDIR=$(dirname ${D}${bindir})
     RUSTLIB_DIR=${SYS_LIBDIR}/${TARGET_SYS}/rustlib
 
-    install -d "${SYS_BINDIR}"
-    cp build/${SNAPSHOT_BUILD_SYS}/stage2/bin/* ${SYS_BINDIR}
-    for i in ${SYS_BINDIR}/*; do
-	chrpath -r "\$ORIGIN/../lib" ${i}
-    done
+    install -d ${RUSTLIB_DIR}
+    install -m 0644 "${RUST_TARGETS_DIR}/${RUST_HOST_SYS}.json" "${RUSTLIB_DIR}"
+    install -m 0644 "${RUST_TARGETS_DIR}/${RUST_TARGET_SYS}.json" "${RUSTLIB_DIR}"
 
-    install -d "${D}${libdir}"
-    cp -pRd build/${SNAPSHOT_BUILD_SYS}/stage2/lib/${TARGET_SYS}/*.so ${SYS_LIBDIR}
-    cp -pRd build/${SNAPSHOT_BUILD_SYS}/stage2/lib/${TARGET_SYS}/rustlib ${RUSTLIB_DIR}
-
-    for i in ${SYS_LIBDIR}/*.so; do
-	chrpath -r "\$ORIGIN/../lib" ${i}
-    done
-    for i in ${RUSTLIB_DIR}/*/lib/*.so; do
-	chrpath -d ${i}
-    done
-
-    install -m 0644 "${WORKDIR}/targets/${TARGET_SYS}.json" "${RUSTLIB_DIR}"
-
-    SRC_DIR=${RUSTLIB_DIR}/src/rust
-    install -d ${SRC_DIR}/src/llvm-project
-    cp -R --no-dereference build/${SNAPSHOT_BUILD_SYS}/stage2/lib/rustlib/src/rust/src/llvm-project/libunwind ${SRC_DIR}/src/llvm-project
-    cp -R --no-dereference build/${SNAPSHOT_BUILD_SYS}/stage2/lib/rustlib/src/rust/library ${SRC_DIR}
-    cp --no-dereference build/${SNAPSHOT_BUILD_SYS}/stage2/lib/rustlib/src/rust/Cargo.lock ${SRC_DIR}
-    # Remove executable bit from any files so then SDK doesn't try to relocate.
-    chmod -R -x+X ${SRC_DIR}
+    # Uses the SDK's CC as the linker so linked binaries work out of the box.
+    install -d ${SYS_BINDIR}
+    create_sdk_wrapper "${SYS_BINDIR}/target-rust-ccld" "CC"
 
     ENV_SETUP_DIR=${D}${base_prefix}/environment-setup.d
     mkdir "${ENV_SETUP_DIR}"
-    ENV_SETUP_SH="${ENV_SETUP_DIR}/rust.sh"
+    RUST_ENV_SETUP_SH="${ENV_SETUP_DIR}/rust.sh"
 
-    cat <<- EOF > "${ENV_SETUP_SH}"
-	export RUSTFLAGS="--sysroot=\$OECORE_NATIVE_SYSROOT/usr -C link-arg=--sysroot=\$OECORE_TARGET_SYSROOT -L\$OECORE_NATIVE_SYSROOT/usr/lib/${TARGET_SYS}/rustlib/${TARGET_SYS}/lib"
+    cat <<- EOF > "${RUST_ENV_SETUP_SH}"
+	export RUSTFLAGS="--sysroot=\$OECORE_TARGET_SYSROOT/usr -C link-arg=--sysroot=\$OECORE_TARGET_SYSROOT"
 	export RUST_TARGET_PATH="\$OECORE_NATIVE_SYSROOT/usr/lib/${TARGET_SYS}/rustlib"
 	EOF
 
     chown -R root.root ${D}
+
+    CARGO_ENV_SETUP_SH="${ENV_SETUP_DIR}/cargo.sh"
+    cat <<- EOF > "${CARGO_ENV_SETUP_SH}"
+	export CARGO_HOME="\$OECORE_TARGET_SYSROOT/home/cargo"
+	mkdir -p "\$CARGO_HOME"
+        # Init the default target once, it might be otherwise user modified.
+	if [ ! -f "\$CARGO_HOME/config" ]; then
+		touch "\$CARGO_HOME/config"
+		echo "[build]" >> "\$CARGO_HOME/config"
+		echo 'target = "'${RUST_TARGET_SYS}'"' >> "\$CARGO_HOME/config"
+		echo '# TARGET_SYS' >> "\$CARGO_HOME/config"
+		echo '[target.'${RUST_TARGET_SYS}']' >> "\$CARGO_HOME/config"
+		echo 'linker = "target-rust-ccld"' >> "\$CARGO_HOME/config"
+    fi
+
+	# Keep the below off as long as HTTP/2 is disabled.
+	export CARGO_HTTP_MULTIPLEXING=false
+
+	export CARGO_HTTP_CAINFO="\$OECORE_NATIVE_SYSROOT/etc/ssl/certs/ca-certificates.crt"
+	EOF
 }
 
-PKG_SYS_LIBDIR = "${SDKPATHNATIVE}/usr/lib"
-PKG_SYS_BINDIR = "${SDKPATHNATIVE}/usr/bin"
-PKG_RUSTLIB_DIR = "${PKG_SYS_LIBDIR}/${TARGET_SYS}/rustlib"
-FILES:${PN} = "${PKG_SYS_LIBDIR}/*.so ${PKG_SYS_BINDIR} ${base_prefix}/environment-setup.d"
-FILES:${RUSTLIB_TARGET_PN} = "${PKG_RUSTLIB_DIR}/${TARGET_SYS} ${PKG_RUSTLIB_DIR}/${TARGET_SYS}.json"
-FILES:${RUSTLIB_HOST_PN} = "${PKG_RUSTLIB_DIR}/${BUILD_ARCH}-unknown-linux-gnu"
-FILES:${RUSTLIB_SRC_PN} = "${PKG_RUSTLIB_DIR}/src"
-
-SUMMARY:${RUSTLIB_TARGET_PN} = "Rust cross canadian libaries for ${TARGET_SYS}"
-SUMMARY:${RUSTLIB_HOST_PN} = "Rust cross canadian libaries for ${HOST_SYS}"
-SUMMARY:${RUSTLIB_SRC_PN} = "Rust standard library sources for cross canadian toolchain"
-SUMMARY:${PN} = "Rust crost canadian compiler"
+FILES:${PN} += "${base_prefix}/environment-setup.d"
 
diff --git a/poky/meta/recipes-devtools/rust/rust-cross-canadian_1.62.0.bb b/poky/meta/recipes-devtools/rust/rust-cross-canadian_1.62.0.bb
deleted file mode 100644
index 766912c..0000000
--- a/poky/meta/recipes-devtools/rust/rust-cross-canadian_1.62.0.bb
+++ /dev/null
@@ -1,6 +0,0 @@
-require rust-cross-canadian.inc
-require rust-source.inc
-require rust-snapshot.inc
-
-FILESEXTRAPATHS:prepend := "${THISDIR}/rust:"
-
diff --git a/poky/meta/recipes-devtools/rust/rust-cross-canadian_1.63.0.bb b/poky/meta/recipes-devtools/rust/rust-cross-canadian_1.63.0.bb
new file mode 100644
index 0000000..5586523
--- /dev/null
+++ b/poky/meta/recipes-devtools/rust/rust-cross-canadian_1.63.0.bb
@@ -0,0 +1,2 @@
+inherit cross-canadian
+require rust-cross-canadian.inc
\ No newline at end of file
diff --git a/poky/meta/recipes-devtools/rust/rust-cross.inc b/poky/meta/recipes-devtools/rust/rust-cross.inc
deleted file mode 100644
index ab538e6..0000000
--- a/poky/meta/recipes-devtools/rust/rust-cross.inc
+++ /dev/null
@@ -1,47 +0,0 @@
-RUST_TARGETGENS = "BUILD HOST TARGET"
-
-# Otherwise we'll depend on what we provide
-INHIBIT_DEFAULT_RUST_DEPS = "1"
-
-# Unlike native (which nicely maps it's DEPENDS) cross wipes them out completely.
-# Generally, we (and cross in general) need the same things that native needs,
-# so it might make sense to take it's mapping. For now, though, we just mention
-# the bits we need explicitly.
-DEPENDS += "rust-llvm-native"
-DEPENDS += "rust-native"
-
-# In the cross compilation case, rustc doesn't seem to get the rpath quite
-# right. It manages to include '../../lib/${TARGET_PREFIX}', but doesn't
-# include the '../../lib' (ie: relative path from cross_bindir to normal
-# libdir. As a result, we end up not being able to properly reference files in normal ${libdir}.
-# Most of the time this happens to work fine as the systems libraries are
-# subsituted, but sometimes a host system will lack a library, or the right
-# version of a library (libtinfo was how I noticed this).
-#
-# FIXME: this should really be fixed in rust itself.
-# FIXME: using hard-coded relative paths is wrong, we should ask bitbake for
-#        the relative path between 2 of it's vars.
-HOST_POST_LINK_ARGS:append = " -Wl,-rpath=../../lib"
-BUILD_POST_LINK_ARGS:append = " -Wl,-rpath=../../lib"
-
-# We need the same thing for the calls to the compiler when building the runtime crap
-TARGET_CC_ARCH:append = " --sysroot=${STAGING_DIR_TARGET}"
-
-do_rust_setup_snapshot () {
-}
-
-do_configure () {
-}
-
-do_compile () {
-}
-
-do_install () {
-	mkdir -p ${D}${prefix}/${base_libdir_native}/rustlib
-	cp ${WORKDIR}/targets/${TARGET_SYS}.json ${D}${prefix}/${base_libdir_native}/rustlib
-}
-
-rust_cross_sysroot_preprocess() {
-    sysroot_stage_dir ${D}${prefix}/${base_libdir_native}/rustlib ${SYSROOT_DESTDIR}${prefix}/${base_libdir_native}/rustlib
-}
-SYSROOT_PREPROCESS_FUNCS += "rust_cross_sysroot_preprocess"
diff --git a/poky/meta/recipes-devtools/rust/rust-cross_1.62.0.bb b/poky/meta/recipes-devtools/rust/rust-cross_1.62.0.bb
deleted file mode 100644
index 5358d98..0000000
--- a/poky/meta/recipes-devtools/rust/rust-cross_1.62.0.bb
+++ /dev/null
@@ -1,8 +0,0 @@
-require rust.inc
-inherit cross
-require rust-cross.inc
-require rust-source.inc
-
-DEPENDS += "virtual/${TARGET_PREFIX}gcc virtual/${TARGET_PREFIX}compilerlibs virtual/libc"
-PROVIDES = "virtual/${TARGET_PREFIX}rust"
-PN = "rust-cross-${TUNE_PKGARCH}-${TCLIBC}"
diff --git a/poky/meta/recipes-devtools/rust/rust-crosssdk_1.62.0.bb b/poky/meta/recipes-devtools/rust/rust-crosssdk_1.62.0.bb
deleted file mode 100644
index 6ea8cb0..0000000
--- a/poky/meta/recipes-devtools/rust/rust-crosssdk_1.62.0.bb
+++ /dev/null
@@ -1,8 +0,0 @@
-require rust.inc
-inherit crosssdk
-require rust-cross.inc
-require rust-source.inc
-
-DEPENDS += "virtual/${TARGET_PREFIX}gcc-crosssdk virtual/nativesdk-${TARGET_PREFIX}compilerlibs virtual/nativesdk-libc"
-PROVIDES = "virtual/nativesdk-${TARGET_PREFIX}rust"
-PN = "rust-crosssdk-${TUNE_PKGARCH}-${RUST_LIBC}"
diff --git a/poky/meta/recipes-devtools/rust/rust-llvm.inc b/poky/meta/recipes-devtools/rust/rust-llvm.inc
index 9baad12..625eb57 100644
--- a/poky/meta/recipes-devtools/rust/rust-llvm.inc
+++ b/poky/meta/recipes-devtools/rust/rust-llvm.inc
@@ -47,6 +47,13 @@
     -DLLVM_CONFIG_PATH=${STAGING_LIBDIR_NATIVE}/llvm-rust/bin/llvm-config \
 "
 
+EXTRA_OECMAKE:append:class-nativesdk = "\
+    -DCMAKE_CROSSCOMPILING:BOOL=ON \
+    -DLLVM_BUILD_TOOLS=OFF \
+    -DLLVM_TABLEGEN=${STAGING_LIBDIR_NATIVE}/llvm-rust/bin/llvm-tblgen \
+    -DLLVM_CONFIG_PATH=${STAGING_LIBDIR_NATIVE}/llvm-rust/bin/llvm-config \
+"
+
 # The debug symbols are huge here (>2GB) so suppress them since they
 # provide almost no value. If you really need them then override this
 INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
@@ -68,4 +75,4 @@
 FILES:${PN} += "${libdir}/libLLVM*.so.* ${libdir}/llvm-rust/lib/*.so.* ${libdir}/llvm-rust/bin"
 FILES:${PN}-dev += "${datadir}/llvm ${libdir}/llvm-rust/lib/*.so ${libdir}/llvm-rust/include ${libdir}/llvm-rust/share ${libdir}/llvm-rust/lib/cmake"
 
-BBCLASSEXTEND = "native"
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/rust/rust-llvm_1.62.0.bb b/poky/meta/recipes-devtools/rust/rust-llvm_1.62.0.bb
deleted file mode 100644
index 5b94e22..0000000
--- a/poky/meta/recipes-devtools/rust/rust-llvm_1.62.0.bb
+++ /dev/null
@@ -1,6 +0,0 @@
-# check src/llvm-project/llvm/CMakeLists.txt for llvm version in use
-#
-LLVM_RELEASE = "13.0.0"
-require rust-source.inc
-require rust-llvm.inc
-
diff --git a/poky/meta/recipes-devtools/rust/rust-llvm_1.63.0.bb b/poky/meta/recipes-devtools/rust/rust-llvm_1.63.0.bb
new file mode 100644
index 0000000..396f741
--- /dev/null
+++ b/poky/meta/recipes-devtools/rust/rust-llvm_1.63.0.bb
@@ -0,0 +1,6 @@
+# check src/llvm-project/llvm/CMakeLists.txt for llvm version in use
+#
+LLVM_RELEASE = "14.0.5"
+require rust-source.inc
+require rust-llvm.inc
+
diff --git a/poky/meta/recipes-devtools/rust/rust-snapshot.inc b/poky/meta/recipes-devtools/rust/rust-snapshot.inc
index 3bd7b07..b9d7edd 100644
--- a/poky/meta/recipes-devtools/rust/rust-snapshot.inc
+++ b/poky/meta/recipes-devtools/rust/rust-snapshot.inc
@@ -1,25 +1,25 @@
 ## This is information on the rust-snapshot (binary) used to build our current release.
-## snapshot info is taken from rust/src/stage0.txt
+## snapshot info is taken from rust/src/stage0.json
 ## Rust is self-hosting and bootstraps itself with a pre-built previous version of itself.
 ## The exact (previous) version that has been used is specified in the source tarball.
 ## The version is replicated here.
 ## TODO: find a way to add additional SRC_URIs based on the contents of an
 ##       earlier SRC_URI.
-RS_VERSION = "1.61.0"
-CARGO_VERSION = "1.61.0"
+RS_VERSION = "1.62.0"
+CARGO_VERSION = "1.62.0"
 
 # TODO: Add hashes for other architecture toolchains as well. Make a script?
-SRC_URI[rust-std-snapshot-x86_64.sha256sum] = "270b07aa5f2de52255a117e1e587138d77375ce0d09a1d7fead085f29b3977e9"
-SRC_URI[rustc-snapshot-x86_64.sha256sum] = "21c4613f389ed130fbaaf88f1e984319f72b5fc10734569a5ba19e22ebb03abd"
-SRC_URI[cargo-snapshot-x86_64.sha256sum] = "9461727d754f865ef2a87479d40bbe4c5176f80963b7c50b7797bc8940d7a0a0"
+SRC_URI[rust-std-snapshot-x86_64.sha256sum] = "addfae87b6b1b521d98a50fdc5120990888a51bb397100062e9c558267c67c77"
+SRC_URI[rustc-snapshot-x86_64.sha256sum] = "e7f71f4ef09334ddc9ec8cbf2f958d654e36f580c95f8fec6d5c816ce256dbd6"
+SRC_URI[cargo-snapshot-x86_64.sha256sum] = "815c63119a9cf0282ff240c6444b6f867238763ee3dea182f10837ae7dbbb1d4"
 
-SRC_URI[rust-std-snapshot-aarch64.sha256sum] = "57d60a519dbce12146849f7e72d55f3cffe9cdcbff8d58e90bb62d3c016bb5c0"
-SRC_URI[rustc-snapshot-aarch64.sha256sum] = "c996de6391e3ea94629fbc09b03bce186fcde345159f43ec95a82c500adb5e94"
-SRC_URI[cargo-snapshot-aarch64.sha256sum] = "a055e6cfd9b5f8938780db6179d2ef92990c714ce64278337d7edf3d29c8ab62"
+SRC_URI[rust-std-snapshot-aarch64.sha256sum] = "dd5df8a92af3e5d49a1122b9561821ebd72a9317884a37ecddae041e652a7563"
+SRC_URI[rustc-snapshot-aarch64.sha256sum] = "0fa320a19d41dcfc592bc006f5e9eda8e3b972598a26c96ad64eedd868516df3"
+SRC_URI[cargo-snapshot-aarch64.sha256sum] = "475038ecacca9ff586cad2082d5d950544b0d581a2a287facc7d899aae488813"
 
-SRC_URI[rust-std-snapshot-powerpc64le.sha256sum] = "36c0ccff14c80419507561db050f9533f0abd43fc50f3ddb859c10df74b1c351"
-SRC_URI[rustc-snapshot-powerpc64le.sha256sum] = "dc54893d747e4f3330515caa75e404f78c6c5475a1216d1428f5e7ce1c2e9602"
-SRC_URI[cargo-snapshot-powerpc64le.sha256sum] = "09817011ff1ef4b7006387c7cabb6a059731792a9372533dec7d87e7f014444b"
+SRC_URI[rust-std-snapshot-powerpc64le.sha256sum] = "d6678b7c971f3adbe7f820adae669d03a314468441e2907747c76eca98e0be92"
+SRC_URI[rustc-snapshot-powerpc64le.sha256sum] = "b66d0bc6dbfdc0d4b826f787ec4e772dea8e3d2015cecbe2105632d468c28dcb"
+SRC_URI[cargo-snapshot-powerpc64le.sha256sum] = "016257f1641693008068bd086fec66d68550d1778f6aea9d06c9b263fca392d5"
 
 SRC_URI += " \
     https://static.rust-lang.org/dist/${RUST_STD_SNAPSHOT}.tar.xz;name=rust-std-snapshot-${RUST_BUILD_ARCH};subdir=rust-snapshot-components \
diff --git a/poky/meta/recipes-devtools/rust/rust-source.inc b/poky/meta/recipes-devtools/rust/rust-source.inc
index fda2653..ce6c983 100644
--- a/poky/meta/recipes-devtools/rust/rust-source.inc
+++ b/poky/meta/recipes-devtools/rust/rust-source.inc
@@ -1,5 +1,11 @@
 SRC_URI += "https://static.rust-lang.org/dist/rustc-${PV}-src.tar.xz;name=rust"
-SRC_URI[rust.sha256sum] = "6c00ef115c894c2645e60b5049a4f5dacf1dc0c993f3074f7ae4fdf4c755dd5e"
+SRC_URI[rust.sha256sum] = "8f44af6dc44cc4146634a4dd5e4cc5470b3052a2337019b870c0e025e8987e0c"
+
+SRC_URI:append:class-target:pn-rust = " \
+    file://hardcodepaths.patch \
+    file://crossbeam_atomic.patch \
+    file://0001-Add-ENOTSUP-constant-for-riscv32-musl.patch"
+SRC_URI:append:class-nativesdk:pn-nativesdk-rust = " file://hardcodepaths.patch"
 
 RUSTSRC = "${WORKDIR}/rustc-${PV}-src"
 
diff --git a/poky/meta/recipes-devtools/rust/rust-target.inc b/poky/meta/recipes-devtools/rust/rust-target.inc
index 3f637b3..dce2b47 100644
--- a/poky/meta/recipes-devtools/rust/rust-target.inc
+++ b/poky/meta/recipes-devtools/rust/rust-target.inc
@@ -7,4 +7,4 @@
 # We don't need to depend on gcc-native because yocto assumes it exists
 PROVIDES:class-native = "virtual/${TARGET_PREFIX}rust"
 
-BBCLASSEXTEND = "native"
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-devtools/rust/rust-tools-cross-canadian.inc b/poky/meta/recipes-devtools/rust/rust-tools-cross-canadian.inc
deleted file mode 100644
index f035855..0000000
--- a/poky/meta/recipes-devtools/rust/rust-tools-cross-canadian.inc
+++ /dev/null
@@ -1,38 +0,0 @@
-
-require rust-cross-canadian-common.inc
-
-RUST_TOOLS_CLIPPY_PN = "rust-tools-clippy-cross-canadian-${TRANSLATED_TARGET_ARCH}"
-RUST_TOOLS_RUSTFMT_PN = "rust-tools-rustfmt-cross-canadian-${TRANSLATED_TARGET_ARCH}"
-RUST_TOOLS_PKGS = "${RUST_TOOLS_CLIPPY_PN} ${RUST_TOOLS_RUSTFMT_PN}"
-PN = "rust-tools-cross-canadian-${TRANSLATED_TARGET_ARCH}"
-
-PACKAGES = "${RUST_TOOLS_CLIPPY_PN} ${RUST_TOOLS_RUSTFMT_PN} ${PN}"
-RDEPENDS:${PN} += "${RUST_TOOLS_PKGS}"
-
-do_compile () {
-    rust_runx build --stage 2 src/tools/clippy
-    rust_runx build --stage 2 src/tools/rustfmt
-}
-
-do_install () {
-    SYS_BINDIR=$(dirname ${D}${bindir})
-
-    install -d "${SYS_BINDIR}"
-    cp build/${SNAPSHOT_BUILD_SYS}/stage2-tools-bin/* ${SYS_BINDIR}
-    for i in ${SYS_BINDIR}/*; do
-	chrpath -r "\$ORIGIN/../lib" ${i}
-    done
-
-    chown -R root.root ${D}
-}
-
-ALLOW_EMPTY:${PN} = "1"
-
-PKG_SYS_BINDIR = "${SDKPATHNATIVE}/usr/bin"
-FILES:${RUST_TOOLS_CLIPPY_PN} = "${PKG_SYS_BINDIR}/cargo-clippy ${PKG_SYS_BINDIR}/clippy-driver"
-FILES:${RUST_TOOLS_RUSTFMT_PN} = "${PKG_SYS_BINDIR}/rustfmt"
-
-SUMMARY:${PN} = "Rust helper tools"
-SUMMARY:${RUST_TOOLS_CLIPPY_PN} = "A collection of lints to catch common mistakes and improve your Rust code"
-SUMMARY:${RUST_TOOLS_RUSTFMT_PN} = "A tool for formatting Rust code according to style guidelines"
-
diff --git a/poky/meta/recipes-devtools/rust/rust-tools-cross-canadian_1.62.0.bb b/poky/meta/recipes-devtools/rust/rust-tools-cross-canadian_1.62.0.bb
deleted file mode 100644
index 2d809d6..0000000
--- a/poky/meta/recipes-devtools/rust/rust-tools-cross-canadian_1.62.0.bb
+++ /dev/null
@@ -1,6 +0,0 @@
-require rust-tools-cross-canadian.inc
-require rust-source.inc
-require rust-snapshot.inc
-
-FILESEXTRAPATHS:prepend := "${THISDIR}/rust:"
-
diff --git a/poky/meta/recipes-devtools/rust/rust.inc b/poky/meta/recipes-devtools/rust/rust.inc
index ecb057a..284347d 100644
--- a/poky/meta/recipes-devtools/rust/rust.inc
+++ b/poky/meta/recipes-devtools/rust/rust.inc
@@ -9,17 +9,14 @@
 
 DEPENDS += "file-native python3-native"
 DEPENDS:append:class-native = " rust-llvm-native"
+DEPENDS:append:class-nativesdk = " nativesdk-rust-llvm"
 
 S = "${RUSTSRC}"
 
-# We generate local targets, and need to be able to locate them
-export RUST_TARGET_PATH="${WORKDIR}/targets/"
-
 export FORCE_CRATE_HASH="${BB_TASKHASH}"
 
 RUST_ALTERNATE_EXE_PATH ?= "${STAGING_LIBDIR}/llvm-rust/bin/llvm-config"
-export YOCTO_ALTERNATE_EXE_PATH = "${RUST_ALTERNATE_EXE_PATH}"
-export YOCTO_ALTERNATE_MULTILIB_NAME = "/${BASELIB}"
+RUST_ALTERNATE_EXE_PATH_NATIVE = "${STAGING_LIBDIR_NATIVE}/llvm-rust/bin/llvm-config"
 
 # We don't want to use bitbakes vendoring because the rust sources do their
 # own vendoring.
@@ -27,16 +24,12 @@
 
 # We can't use RUST_BUILD_SYS here because that may be "musl" if
 # TCLIBC="musl". Snapshots are always -unknown-linux-gnu
-SNAPSHOT_BUILD_SYS = "${RUST_BUILD_ARCH}-unknown-linux-gnu"
 setup_cargo_environment () {
     # The first step is to build bootstrap and some early stage tools,
     # these are build for the same target as the snapshot, e.g.
     # x86_64-unknown-linux-gnu.
     # Later stages are build for the native target (i.e. target.x86_64-linux)
     cargo_common_do_configure
-
-    printf '[target.%s]\n' "${SNAPSHOT_BUILD_SYS}" >> ${CARGO_HOME}/config
-    printf "linker = '%s'\n" "${RUST_BUILD_CCLD}" >> ${CARGO_HOME}/config
 }
 
 inherit rust-target-config
@@ -79,24 +72,37 @@
     config = configparser.RawConfigParser()
 
     # [target.ARCH-poky-linux]
-    target_section = "target.{}".format(d.getVar('TARGET_SYS', True))
-    config.add_section(target_section)
+    host_section = "target.{}".format(d.getVar('RUST_HOST_SYS', True))
+    config.add_section(host_section)
 
-    llvm_config = d.expand("${YOCTO_ALTERNATE_EXE_PATH}")
-    config.set(target_section, "llvm-config", e(llvm_config))
+    llvm_config_target = d.expand("${RUST_ALTERNATE_EXE_PATH}")
+    llvm_config_build = d.expand("${RUST_ALTERNATE_EXE_PATH_NATIVE}")
+    config.set(host_section, "llvm-config", e(llvm_config_target))
 
-    config.set(target_section, "cxx", e(d.expand("${RUST_TARGET_CXX}")))
-    config.set(target_section, "cc", e(d.expand("${RUST_TARGET_CC}")))
+    config.set(host_section, "cxx", e(d.expand("${RUST_TARGET_CXX}")))
+    config.set(host_section, "cc", e(d.expand("${RUST_TARGET_CC}")))
+    if "musl" in host_section:
+        config.set(host_section, "musl-root", e(d.expand("${STAGING_DIR_HOST}${exec_prefix}")))
 
     # If we don't do this rust-native will compile it's own llvm for BUILD.
     # [target.${BUILD_ARCH}-unknown-linux-gnu]
-    target_section = "target.{}".format(d.getVar('SNAPSHOT_BUILD_SYS', True))
-    config.add_section(target_section)
+    build_section = "target.{}".format(d.getVar('RUST_BUILD_SYS', True))
+    if build_section != host_section:
+        config.add_section(build_section)
 
-    config.set(target_section, "llvm-config", e(llvm_config))
+        config.set(build_section, "llvm-config", e(llvm_config_build))
 
-    config.set(target_section, "cxx", e(d.expand("${RUST_BUILD_CXX}")))
-    config.set(target_section, "cc", e(d.expand("${RUST_BUILD_CC}")))
+        config.set(build_section, "cxx", e(d.expand("${RUST_BUILD_CXX}")))
+        config.set(build_section, "cc", e(d.expand("${RUST_BUILD_CC}")))
+
+    target_section = "target.{}".format(d.getVar('RUST_TARGET_SYS', True))
+    if target_section != host_section and target_section != build_section:
+        config.add_section(target_section)
+
+        config.set(target_section, "llvm-config", e(llvm_config_target))
+
+        config.set(target_section, "cxx", e(d.expand("${RUST_TARGET_CXX}")))
+        config.set(target_section, "cc", e(d.expand("${RUST_TARGET_CC}")))
 
     # [llvm]
     config.add_section("llvm")
@@ -128,16 +134,16 @@
     config.set("build", "vendor", e(True))
 
     if not "targets" in locals():
-        targets = [d.getVar("TARGET_SYS", True)]
+        targets = [d.getVar("RUST_TARGET_SYS", True)]
     config.set("build", "target", e(targets))
 
     if not "hosts" in locals():
-        hosts = [d.getVar("HOST_SYS", True)]
+        hosts = [d.getVar("RUST_HOST_SYS", True)]
     config.set("build", "host", e(hosts))
 
     # We can't use BUILD_SYS since that is something the rust snapshot knows
     # nothing about when trying to build some stage0 tools (like fabricate)
-    config.set("build", "build", e(d.getVar("SNAPSHOT_BUILD_SYS", True)))
+    config.set("build", "build", e(d.getVar("RUST_BUILD_SYS", True)))
 
     # [install]
     config.add_section("install")
@@ -169,6 +175,16 @@
     unset CXXFLAGS
     unset CPPFLAGS
 
+    export RUSTFLAGS="${RUST_DEBUG_REMAP}"
+
+    # Copy the natively built llvm-config into the target so we can run it. Horrible,
+    # but works!
+    if [ ${RUST_ALTERNATE_EXE_PATH_NATIVE} != ${RUST_ALTERNATE_EXE_PATH} ]; then
+        mkdir -p `dirname ${RUST_ALTERNATE_EXE_PATH}`
+        cp ${RUST_ALTERNATE_EXE_PATH_NATIVE} ${RUST_ALTERNATE_EXE_PATH}
+        chrpath -d ${RUST_ALTERNATE_EXE_PATH}
+    fi
+
     oe_cargo_fix_env
 
     python3 src/bootstrap/bootstrap.py ${@oe.utils.parallel_make_argument(d, '-j %d')} "$@" --verbose
@@ -181,26 +197,14 @@
 
 rust_do_install () {
     mkdir -p ${D}${bindir}
-    cp build/${HOST_SYS}/stage2/bin/* ${D}${bindir}
+    cp build/${RUST_HOST_SYS}/stage2/bin/* ${D}${bindir}
 
     mkdir -p ${D}${libdir}/rustlib
-    cp -pRd build/${HOST_SYS}/stage2/lib/* ${D}${libdir}
+    cp -pRd build/${RUST_HOST_SYS}/stage2/lib/* ${D}${libdir}
     # Remove absolute symlink so bitbake doesn't complain
     rm -f ${D}${libdir}/rustlib/src/rust
 }
 
-rust_install_targets() {
-    # Install our custom target.json files
-    local td="${D}${libdir}/rustlib/"
-    install -d "$td"
-    for tgt in "${WORKDIR}/targets/"* ; do
-        install -m 0644 "$tgt" "$td"
-    done
-}
-
-
 do_install () {
     rust_do_install
-    rust_install_targets
 }
-# ex: sts=4 et sw=4 ts=8
diff --git a/poky/meta/recipes-devtools/rust/rust/0001-Add-ENOTSUP-constant-for-riscv32-musl.patch b/poky/meta/recipes-devtools/rust/rust/0001-Add-ENOTSUP-constant-for-riscv32-musl.patch
new file mode 100644
index 0000000..c885a09
--- /dev/null
+++ b/poky/meta/recipes-devtools/rust/rust/0001-Add-ENOTSUP-constant-for-riscv32-musl.patch
@@ -0,0 +1,27 @@
+From e9fb036eeffaaa5cb7b1e6fc1dabb92ed6263f0f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 19 Aug 2022 10:26:43 -0700
+Subject: [PATCH] Add ENOTSUP constant for riscv32/musl
+
+Upstream-Status: Backport [https://github.com/rust-lang/libc/commit/e9fb036eeffaaa5cb7b1e6fc1dabb92ed6263f0f]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ vendor/libc/src/unix/linux_like/linux/musl/b32/riscv32/mod.rs | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/vendor/libc/src/unix/linux_like/linux/musl/b32/riscv32/mod.rs
++++ b/vendor/libc/src/unix/linux_like/linux/musl/b32/riscv32/mod.rs
+@@ -271,6 +271,7 @@ pub const ENOPROTOOPT: ::c_int = 92;
+ pub const EPROTONOSUPPORT: ::c_int = 93;
+ pub const ESOCKTNOSUPPORT: ::c_int = 94;
+ pub const EOPNOTSUPP: ::c_int = 95;
++pub const ENOTSUP: ::c_int = EOPNOTSUPP;
+ pub const EPFNOSUPPORT: ::c_int = 96;
+ pub const EAFNOSUPPORT: ::c_int = 97;
+ pub const EADDRINUSE: ::c_int = 98;
+--- a/vendor/libc/.cargo-checksum.json
++++ b/vendor/libc/.cargo-checksum.json
+@@ -1 +1 @@
+-{"files":{"CONTRIBUTING.md":"752eea5a703d11b485c6b5f195f51bd2c79aa5159b619ce09555c779e1fb586b","Cargo.toml":"6a0abcfcbc1d9fb00a356343043a161f5b84b3f780cb0f02df4627d022b14b7f","LICENSE-APACHE":"62c7a1e35f56406896d7aa7ca52d0cc0d272ac022b5d2796e7d6905db8a3636a","LICENSE-MIT":"a8d47ff51ca256f56a8932dba07660672dbfe3004257ca8de708aac1415937a1","README.md":"8228847944f1332882fbb00275b6f30e4a8aad08a13569c25d52cac012cc2a47","build.rs":"41f9743021d9e5ed74ede55c94057cf6867a322a13b25a5d524b966656a08e38","rustfmt.toml":"eaa2ea84fc1ba0359b77680804903e07bb38d257ab11986b95b158e460f787b2","src/fixed_width_ints.rs":"7f986e5f5e68d25ef04d386fd2f640e8be8f15427a8d4a458ea01d26b8dca0ca","src/fuchsia/aarch64.rs":"378776a9e40766154a54c94c2a7b4675b5c302a38e6e42da99e67bfbaee60e56","src/fuchsia/align.rs":"ae1cf8f011a99737eabeb14ffff768e60f13b13363d7646744dbb0f443dab3d6","src/fuchsia/mod.rs":"bc8c46531bd1a2429f36aaf2bc137b50e42505b798de83f34eecfa94ad89179b","src/fuchsia/no_align.rs":"303f3f1b255e0088b5715094353cf00476131d8e94e6aebb3f469557771c8b8a","src/fuchsia/x86_64.rs":"93a3632b5cf67d2a6bcb7dc0a558605252d5fe689e0f38d8aa2ec5852255ac87","src/hermit/aarch64.rs":"86048676e335944c37a63d0083d0f368ae10ceccefeed9debb3bbe08777fc682","src/hermit/mod.rs":"d3bfce41e4463d4be8020a2d063c9bfa8b665f45f1cc6cbf3163f5d01e7cb21f","src/hermit/x86_64.rs":"ab832b7524e5fb15c49ff7431165ab1a37dc4667ae0b58e8306f4c539bfa110c","src/lib.rs":"3f13d5f9b29d8969dde507661f1637524e21fe3bd34957fc778868f2f3713c46","src/macros.rs":"d7437c2573c4915768ff96b34710a61ee9f63622b23526cddeaeaf6bfb4751a2","src/psp.rs":"dd31aabd46171d474ec5828372e28588935120e7355c90c105360d8fa9264c1c","src/sgx.rs":"16a95cdefc81c5ee00d8353a60db363c4cc3e0f75abcd5d0144723f2a306ed1b","src/solid/aarch64.rs":"a726e47f324adf73a4a0b67a2c183408d0cad105ae66acf36db37a42ab7f8707","src/solid/arm.rs":"e39a4f74ebbef3b97b8c95758ad741123d84ed3eb48d9cf4f1f4872097fc27fe","src/solid/mod.rs":"5f4151dca5132e4b4e4c23ab9737e12856dddbdc0ca3f7dbc004328ef3c8acde","src/switch.rs":"9da3dd39b3de45a7928789926e8572d00e1e11a39e6f7289a1349aadce90edba","src/unix/align.rs":"2cdc7c826ef7ae61f5171c5ae8c445a743d86f1a7f2d9d7e4ceeec56d6874f65","src/unix/bsd/apple/b32/align.rs":"ec833a747866fe19ca2d9b4d3c9ff0385faba5edf4bd0d15fa68884c40b0e26c","src/unix/bsd/apple/b32/mod.rs":"2546ad3eb6aecb95f916648bc63264117c92b4b4859532b34cb011e4c75a5a72","src/unix/bsd/apple/b64/aarch64/align.rs":"2eaf0f561a32bdcbf4e0477c8895d5e7bcb5cdebd5fef7b4df2ca8e38e144d94","src/unix/bsd/apple/b64/aarch64/mod.rs":"a3f0dfff62d0f7f4f1b5f9a4e2b662acf233a46badbc5419d3cc2d735629a710","src/unix/bsd/apple/b64/align.rs":"ec833a747866fe19ca2d9b4d3c9ff0385faba5edf4bd0d15fa68884c40b0e26c","src/unix/bsd/apple/b64/mod.rs":"f5e278a1af7fb358891d1c9be4eb7e815aaca0c5cb738d0c3604ba2208a856f7","src/unix/bsd/apple/b64/x86_64/align.rs":"ec833a747866fe19ca2d9b4d3c9ff0385faba5edf4bd0d15fa68884c40b0e26c","src/unix/bsd/apple/b64/x86_64/mod.rs":"8c87c5855038aae5d433c8f5eb3b29b0a175879a0245342b3bfd83bdf4cfd936","src/unix/bsd/apple/mod.rs":"d7155927fbd1af6dd984a4c6c1a735a5932af828c96818209487eb6ae72d7d7e","src/unix/bsd/freebsdlike/dragonfly/errno.rs":"8295b8bb0dfd38d2cdb4d9192cdeeb534cc6c3b208170e64615fa3e0edb3e578","src/unix/bsd/freebsdlike/dragonfly/mod.rs":"379302e12d30807a1f973c4e2dd2205179d95343ee6fae05f33c9ed45a342799","src/unix/bsd/freebsdlike/freebsd/aarch64.rs":"2a215bd6136b8617aacedf9be738ccee94da9d29b418e9a78101d6291c182352","src/unix/bsd/freebsdlike/freebsd/arm.rs":"59d6a670eea562fb87686e243e0a84603d29a2028a3d4b3f99ccc01bd04d2f47","src/unix/bsd/freebsdlike/freebsd
/freebsd11/b64.rs":"9808d152c1196aa647f1b0f0cf84dac8c930da7d7f897a44975545e3d9d17681","src/unix/bsd/freebsdlike/freebsd/freebsd11/mod.rs":"a6eee615e6ca5a6e04b526bb6b22d13b9356e87e51825cda33476c37a46cb0ef","src/unix/bsd/freebsdlike/freebsd/freebsd12/b64.rs":"61cbe45f8499bedb168106b686d4f8239472f25c7553b069eec2afe197ff2df6","src/unix/bsd/freebsdlike/freebsd/freebsd12/mod.rs":"755dafaf3f0945e798c34ea94c48e8552804ce60e2a15a4f0649f9d1aceaf422","src/unix/bsd/freebsdlike/freebsd/freebsd12/x86_64.rs":"2df36a7f122f6d6e5753cfb4d22e915cc80f6bc91c0161b3daae55a481bfd052","src/unix/bsd/freebsdlike/freebsd/freebsd13/b64.rs":"61cbe45f8499bedb168106b686d4f8239472f25c7553b069eec2afe197ff2df6","src/unix/bsd/freebsdlike/freebsd/freebsd13/mod.rs":"cc65a73b0fa95a77044a4b3ee76d6eceb9773b55aea7d73bdf070e6f66e9ea38","src/unix/bsd/freebsdlike/freebsd/freebsd13/x86_64.rs":"2df36a7f122f6d6e5753cfb4d22e915cc80f6bc91c0161b3daae55a481bfd052","src/unix/bsd/freebsdlike/freebsd/freebsd14/b64.rs":"61cbe45f8499bedb168106b686d4f8239472f25c7553b069eec2afe197ff2df6","src/unix/bsd/freebsdlike/freebsd/freebsd14/mod.rs":"0ed92eb93e78299cd7de0ae9daebb04a53b3c2d5e6a078e1fcd977f2a86bffc3","src/unix/bsd/freebsdlike/freebsd/freebsd14/x86_64.rs":"2df36a7f122f6d6e5753cfb4d22e915cc80f6bc91c0161b3daae55a481bfd052","src/unix/bsd/freebsdlike/freebsd/mod.rs":"467b66843ab8c1a54b01ae9e90aaf0295b32f54decbdfb64caca84523b488925","src/unix/bsd/freebsdlike/freebsd/powerpc.rs":"9ca3f82f88974e6db5569f2d76a5a3749b248a31747a6c0da5820492bdfeca42","src/unix/bsd/freebsdlike/freebsd/powerpc64.rs":"2dae3ecc87eac3b11657aa98915def55fc4b5c0de11fe26aae23329a54628a9a","src/unix/bsd/freebsdlike/freebsd/riscv64.rs":"8f591bd273464d684c4f64365f8ed56a8138175daa70d96008541393057a0dae","src/unix/bsd/freebsdlike/freebsd/x86.rs":"c5005e3249eb7c93cfbac72a9e9272320d80ce7983da990ceb05a447f59a02c5","src/unix/bsd/freebsdlike/freebsd/x86_64/align.rs":"0e1f69a88fca1c32874b1daf5db3d446fefbe518dca497f096cc9168c39dde70","src/unix/bsd/freebsdlike/freebsd/x86_64/mod.rs":"51e4dd0c8ae247bb652feda5adad9333ea3bb30c750c3a3935e0b0e47d7803eb","src/unix/bsd/freebsdlike/mod.rs":"a7345cc3fb7372572efe06848feb2cc246dfc2c2a0cc9ccf434f4c55041a59fa","src/unix/bsd/mod.rs":"7720ec82c9334f988ec4b271784768a017c3dc2e6dfae4d02418eef753388aa7","src/unix/bsd/netbsdlike/mod.rs":"594a0f9e23c4d7702ba38afdba5a0e866be25d692ec0afd66e04d939aa6b3f04","src/unix/bsd/netbsdlike/netbsd/aarch64.rs":"65dcb58d11e8d8028401a9d07ca3eb4cb4f053e04249cc877353449d84ccc4cb","src/unix/bsd/netbsdlike/netbsd/arm.rs":"58cdbb70b0d6f536551f0f3bb3725d2d75c4690db12c26c034e7d6ec4a924452","src/unix/bsd/netbsdlike/netbsd/mod.rs":"cb1560bf8ffcc7b2726a27b433efac90e726292960626f3064bd2c6b7f861a55","src/unix/bsd/netbsdlike/netbsd/powerpc.rs":"ee7ff5d89d0ed22f531237b5059aa669df93a3b5c489fa641465ace8d405bf41","src/unix/bsd/netbsdlike/netbsd/sparc64.rs":"9489f4b3e4566f43bb12dfb92238960613dac7f6a45cc13068a8d152b902d7d9","src/unix/bsd/netbsdlike/netbsd/x86.rs":"20692320e36bfe028d1a34d16fe12ca77aa909cb02bda167376f98f1a09aefe7","src/unix/bsd/netbsdlike/netbsd/x86_64.rs":"1afe5ef46b14397cdd68664b5b232e4f5b035b6db1d4cf411c899d51ebca9f30","src/unix/bsd/netbsdlike/openbsd/aarch64.rs":"dd91931d373b7ecaf6e2de25adadee10d16fa9b12c2cbacdff3eb291e1ba36af","src/unix/bsd/netbsdlike/openbsd/arm.rs":"01580d261bc6447bb327a0d982181b7bdabfa066cee65a30373d3ced729ad307","src/unix/bsd/netbsdlike/openbsd/mips64.rs":"8532a189ae10c7d668d9d4065da8b05d124e09bd39442c9f74a7f231c43eca48","src/unix/bsd/netbsdlike/openbsd/mod.rs":"816a8ef47df60a752a91967627eeccb9ca776dc718ecc53ae9
02d8edaee0bce9","src/unix/bsd/netbsdlike/openbsd/powerpc.rs":"01580d261bc6447bb327a0d982181b7bdabfa066cee65a30373d3ced729ad307","src/unix/bsd/netbsdlike/openbsd/powerpc64.rs":"1dd5449dd1fd3d51e30ffdeeaece91d0aaf05c710e0ac699fecc5461cfa2c28e","src/unix/bsd/netbsdlike/openbsd/riscv64.rs":"1dd5449dd1fd3d51e30ffdeeaece91d0aaf05c710e0ac699fecc5461cfa2c28e","src/unix/bsd/netbsdlike/openbsd/sparc64.rs":"d04fd287afbaa2c5df9d48c94e8374a532a3ba491b424ddf018270c7312f4085","src/unix/bsd/netbsdlike/openbsd/x86.rs":"6f7f5c4fde2a2259eb547890cbd86570cea04ef85347d7569e94e679448bec87","src/unix/bsd/netbsdlike/openbsd/x86_64.rs":"d31db31630289c85af3339dbe357998a21ca584cbae31607448fe2cf7675a4e1","src/unix/haiku/b32.rs":"a2efdbf7158a6da341e1db9176b0ab193ba88b449616239ed95dced11f54d87b","src/unix/haiku/b64.rs":"ff8115367d3d7d354f792d6176dfaaa26353f57056197b563bf4681f91ff7985","src/unix/haiku/mod.rs":"d5833ff9b94daa81d2470df544453212af17530d78c5a7fb912eac915d00f329","src/unix/haiku/native.rs":"dbfcbf4954a79d1df2ff58e0590bbcb8c57dfc7a32392aa73ee4726b66bd6cc8","src/unix/haiku/x86_64.rs":"3ec3aeeb7ed208b8916f3e32d42bfd085ff5e16936a1a35d9a52789f043b7237","src/unix/hermit/aarch64.rs":"86048676e335944c37a63d0083d0f368ae10ceccefeed9debb3bbe08777fc682","src/unix/hermit/mod.rs":"859814f5df89e28fd4b345db399d181e11e7ed413841b6ff703a1fcbdbf013ae","src/unix/hermit/x86_64.rs":"ab832b7524e5fb15c49ff7431165ab1a37dc4667ae0b58e8306f4c539bfa110c","src/unix/linux_like/android/b32/arm.rs":"433c1530f602cc5ed26610c58055dde0c4ceea5e00150063b24ddc60768332a4","src/unix/linux_like/android/b32/mod.rs":"7c173e0375119bf06a3081652faede95e5bcd6858e7576b7533d037978737c8f","src/unix/linux_like/android/b32/x86/align.rs":"812914e4241df82e32b12375ca3374615dc3a4bdd4cf31f0423c5815320c0dab","src/unix/linux_like/android/b32/x86/mod.rs":"8388bd3a0fcb5636bf965eee6dc95ae6860b85a2b555b387c868aa4d4e01ec89","src/unix/linux_like/android/b64/aarch64/align.rs":"2179c3b1608fa4bf68840482bfc2b2fa3ee2faf6fcae3770f9e505cddca35c7b","src/unix/linux_like/android/b64/aarch64/int128.rs":"1735f6f5c56770d20dd426442f09724d9b2052b46a7cd82f23f3288a4a7276de","src/unix/linux_like/android/b64/aarch64/mod.rs":"ef230d49fd0d182adf2dae6f8e10babf18d72259d65980bf1c4c2dc8a4f84501","src/unix/linux_like/android/b64/mod.rs":"d7bbbadafdb2cb2ff8e9cde3d89a03b9facaabb6b2d45705225d3ece1c5cce37","src/unix/linux_like/android/b64/x86_64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/unix/linux_like/android/b64/x86_64/mod.rs":"6454948ea98be86243229f99d67cdc7ca460e16b2a6445663ff4b5b6907c358d","src/unix/linux_like/android/mod.rs":"1247397a7e1b6269e0d2d6df88c6396bc18eb477e3ff293ff57d66a6b1380d94","src/unix/linux_like/emscripten/align.rs":"86c95cbed7a7161b1f23ee06843e7b0e2340ad92b2cb86fe2a8ef3e0e8c36216","src/unix/linux_like/emscripten/mod.rs":"b71d37106750f57bc2dae4e9bcb473ff098ef48235827e41a1687a39825f0aa4","src/unix/linux_like/emscripten/no_align.rs":"0128e4aa721a9902754828b61b5ec7d8a86619983ed1e0544a85d35b1051fad6","src/unix/linux_like/linux/align.rs":"d6c259942c8e843373accd180fc8f4f45f03544dfd21b93a8d02641ead3ef63e","src/unix/linux_like/linux/arch/generic/mod.rs":"e20013ed91edcfb7f84f3f9f5a9ef827fd5c406e24b65989d8438da332236ef6","src/unix/linux_like/linux/arch/mips/mod.rs":"2d166054a586bb4bf6e4a4ba35f7574907b217225eff8f1a43adc4277e142460","src/unix/linux_like/linux/arch/mod.rs":"466a29622e47c6c7f1500682b2eb17f5566dd81b322cd6348f0fdd355cec593a","src/unix/linux_like/linux/arch/powerpc/mod.rs":"3f6da7b0fa7b394c7d4eea2bb3caa7a7729ab0d6c1491fef02206a912c41b815","src/u
nix/linux_like/linux/arch/sparc/mod.rs":"91593ec0440f1dd8f8e612028f432c44c14089286e2aca50e10511ab942db8c3","src/unix/linux_like/linux/gnu/align.rs":"e4a3c27fe20a57b8d612c34cb05bc70646edb5cec7251957315afa53a7b9f936","src/unix/linux_like/linux/gnu/b32/arm/align.rs":"6ec0eb3ee93f7ae99fd714b4deabfb5e97fbcefd8c26f5a45fb8e7150899cdeb","src/unix/linux_like/linux/gnu/b32/arm/mod.rs":"92ea7edc0e24f79dfbf5e3efc2d7509bed230562036e6aa85ef4f2c8088ecc8f","src/unix/linux_like/linux/gnu/b32/m68k/align.rs":"8faa92f77a9232c035418d45331774e64a9a841d99c91791570a203bf2b45bcb","src/unix/linux_like/linux/gnu/b32/m68k/mod.rs":"a2a0a9400dae44086ebf579e0448e0676d4a3214d1ae7d13a024857251e23b6b","src/unix/linux_like/linux/gnu/b32/mips/align.rs":"429fb5e005cb7143602d430098b6ebfb7d360685b194f333dfd587472ae954ee","src/unix/linux_like/linux/gnu/b32/mips/mod.rs":"0d7849eb2435ec1f49b6774872a0518f0129c50f37c9d38b37b1535722777a22","src/unix/linux_like/linux/gnu/b32/mod.rs":"8da281da578cdee972e952b118b903b370320897a7e335342a15e1359864bef2","src/unix/linux_like/linux/gnu/b32/powerpc.rs":"049d6211ba4a9304bd4497c160bc21ae847c24e0528dd9d76263f16192e6aff5","src/unix/linux_like/linux/gnu/b32/riscv32/align.rs":"d321491612be8d5c61b6ec2dc0111beb3a22e58803f99cd37543efe86621b119","src/unix/linux_like/linux/gnu/b32/riscv32/mod.rs":"a4256148cec0bb672c8dfa605866930d9761af9655721de72ae41eeeb8fdbf6d","src/unix/linux_like/linux/gnu/b32/sparc/align.rs":"21adbed27df73e2d1ed934aaf733a643003d7baf2bde9c48ea440895bcca6d41","src/unix/linux_like/linux/gnu/b32/sparc/mod.rs":"525618615aa0cb80c6c90860bf579dfed8db307fffd56b97dc235fb945419434","src/unix/linux_like/linux/gnu/b32/x86/align.rs":"e4bafdc4a519a7922a81b37a62bbfd1177a2f620890eef8f1fbc47162e9eb413","src/unix/linux_like/linux/gnu/b32/x86/mod.rs":"78b4038852986436888c63be9258037cf642124daee9d5fa5cef2bf8e412bf54","src/unix/linux_like/linux/gnu/b64/aarch64/align.rs":"2179c3b1608fa4bf68840482bfc2b2fa3ee2faf6fcae3770f9e505cddca35c7b","src/unix/linux_like/linux/gnu/b64/aarch64/ilp32.rs":"21a21503ef2e095f4371044915d4bfb07a8578011cb5c713cd9f45947b0b5730","src/unix/linux_like/linux/gnu/b64/aarch64/int128.rs":"1735f6f5c56770d20dd426442f09724d9b2052b46a7cd82f23f3288a4a7276de","src/unix/linux_like/linux/gnu/b64/aarch64/lp64.rs":"e78c3cd197f44832338b414d1a9bc0d194f44c74db77bd7bf830c1fff62b2690","src/unix/linux_like/linux/gnu/b64/aarch64/mod.rs":"c4e20b1c63d7a03a6e22aef2046689ef95cc4651011ade7cb94176fcea1dc252","src/unix/linux_like/linux/gnu/b64/loongarch64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/unix/linux_like/linux/gnu/b64/loongarch64/mod.rs":"387808d5398b24339e7e2bf7591150735011befc5b421fa713d7017c04a7b1da","src/unix/linux_like/linux/gnu/b64/mips64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/unix/linux_like/linux/gnu/b64/mips64/mod.rs":"17aad16329431d83e1909e3a08022f6e28f4bcba7dec4a967fe1a321a6a43b99","src/unix/linux_like/linux/gnu/b64/mod.rs":"3c6555f30a7a8852757b31a542ea73fb6a16a6e27e838397e819278ad56e57a4","src/unix/linux_like/linux/gnu/b64/powerpc64/align.rs":"e29c4868bbecfa4a6cd8a2ad06193f3bbc78a468cc1dc9df83f002f1268130d9","src/unix/linux_like/linux/gnu/b64/powerpc64/mod.rs":"97e0ecf11ecce793a13fec39654fb513c5479edf7faa7a276fa714b61993d0fc","src/unix/linux_like/linux/gnu/b64/riscv64/align.rs":"d321491612be8d5c61b6ec2dc0111beb3a22e58803f99cd37543efe86621b119","src/unix/linux_like/linux/gnu/b64/riscv64/mod.rs":"b3fe290afe63d2d6e315d0cf1f775464e9c1f2a1906d243c1af74a137a4031cb","src/unix/linux_like/linux/gnu/b64/s390x.rs":"254f
00266ecf9644a4b469457cb37c4dd6c055820926c1de0fb9035b6048e75c","src/unix/linux_like/linux/gnu/b64/sparc64/align.rs":"e29c4868bbecfa4a6cd8a2ad06193f3bbc78a468cc1dc9df83f002f1268130d9","src/unix/linux_like/linux/gnu/b64/sparc64/mod.rs":"87dd7f3d5bf3c09f4064ec738e306cc9cc41ad49b4a5df62c5983301c3bbf99a","src/unix/linux_like/linux/gnu/b64/x86_64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/unix/linux_like/linux/gnu/b64/x86_64/mod.rs":"1b939aaf3cdf3efc7d3bd53e83e80235530d3ba0c24bb7611d4730d35d457ec1","src/unix/linux_like/linux/gnu/b64/x86_64/not_x32.rs":"b88ef8a1eaa9ed73bf2acb8192afb73af987a92abb94140c6376fc83f2fa5553","src/unix/linux_like/linux/gnu/b64/x86_64/x32.rs":"79305936a60d342efdc10519ba89507d6b48e65f13f33090d3b04dc9655ceed0","src/unix/linux_like/linux/gnu/mod.rs":"84ad4a663b5fa2498179be8dca96fef5f0446ec1619215ac674424ee394e307d","src/unix/linux_like/linux/gnu/no_align.rs":"9cd223135de75315840ff9c3fd5441ba1cb632b96b5c85a76f8316c86653db25","src/unix/linux_like/linux/mod.rs":"cff23db1e0bac8c30052dfed5e89e65bc207471e4bfb3a6fa0a4df62ed9e413e","src/unix/linux_like/linux/musl/b32/arm/align.rs":"3e8ac052c1043764776b54c93ba4260e061df998631737a897d9d47d54f7b80c","src/unix/linux_like/linux/musl/b32/arm/mod.rs":"e5faee8efda8a225ea0b17d4d6f9e893a678e73773fa62c549a8e19c106b9f04","src/unix/linux_like/linux/musl/b32/hexagon.rs":"226a8b64ce9c75abbbee6d2dceb0b44f7b6c750c4102ebd4d015194afee6666e","src/unix/linux_like/linux/musl/b32/mips/align.rs":"429fb5e005cb7143602d430098b6ebfb7d360685b194f333dfd587472ae954ee","src/unix/linux_like/linux/musl/b32/mips/mod.rs":"df8f8b529a6cc6b8a7326639e83303cf1320c6c50d76517c17d42bcf45f6240a","src/unix/linux_like/linux/musl/b32/mod.rs":"7b3d9dfd8605b00bb9b5fa1439abe5ebf60199c7fa033eee555e8d181e93ffa2","src/unix/linux_like/linux/musl/b32/powerpc.rs":"c957d99a4d4371d2411a5769be8cf344516bf9ddc1011f977501a4eb57cb4e82","src/unix/linux_like/linux/musl/b32/riscv32/align.rs":"efd2accf33b87de7c7547903359a5da896edc33cd6c719552c7474b60d4a5d48","src/unix/linux_like/linux/musl/b32/riscv32/mod.rs":"698f77bfcc838f82126c54f7387881fe3e89490117e5a4f333d1b4433823a672","src/unix/linux_like/linux/musl/b32/x86/align.rs":"08e77fbd7435d7dec2ff56932433bece3f02e47ce810f89004a275a86d39cbe1","src/unix/linux_like/linux/musl/b32/x86/mod.rs":"199a91e90b454f9dc32770d5204cc4f6e5b8f144e0e34a1c91829949d6e804b3","src/unix/linux_like/linux/musl/b64/aarch64/align.rs":"798a9229d70ce235394f2dd625f6c4c1e10519a94382dc5b091952b638ae2928","src/unix/linux_like/linux/musl/b64/aarch64/int128.rs":"1735f6f5c56770d20dd426442f09724d9b2052b46a7cd82f23f3288a4a7276de","src/unix/linux_like/linux/musl/b64/aarch64/mod.rs":"9c4878df0fea0e0affd85346e0bc191abdc5e41e74dc9199b5644ec33d29c300","src/unix/linux_like/linux/musl/b64/mips64.rs":"3686fc8cb2e311cda8e6b96f6dfe90b65a366714bd480312b692b1a6ca1241b6","src/unix/linux_like/linux/musl/b64/mod.rs":"8c10627bd582cb272514e7350ae4743a65d489356eae039d2e7e55cd533fbbc8","src/unix/linux_like/linux/musl/b64/powerpc64.rs":"36694cbdcdc33879e00502d55cb95eaa0096d213538993dd39c3da800cdd06d1","src/unix/linux_like/linux/musl/b64/riscv64/align.rs":"d321491612be8d5c61b6ec2dc0111beb3a22e58803f99cd37543efe86621b119","src/unix/linux_like/linux/musl/b64/riscv64/mod.rs":"36621aca8ecf714f8dd42662dc2997833d95b9f129ef8c220503362e21efd695","src/unix/linux_like/linux/musl/b64/s390x.rs":"9b05b1fae6bcb7cb6d909b9973977fde01684175f3e26c27dcb44223cc3933d9","src/unix/linux_like/linux/musl/b64/x86_64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/u
nix/linux_like/linux/musl/b64/x86_64/mod.rs":"238789097a26abc8b7cd578ed1a8e6cb8672083054303902781983902cd66854","src/unix/linux_like/linux/musl/mod.rs":"2efe98b8166270be1624888953fa07da06f08313f15bf4c7dc2eba63f5a790a5","src/unix/linux_like/linux/no_align.rs":"da2a8721becaaaa528781f97f5d9aae6a982ae5d4f5f6d2ffc0150bed72319b3","src/unix/linux_like/linux/non_exhaustive.rs":"181a05bf94fdb911db83ce793b993bd6548a4115b306a7ef3c10f745a8fea3e9","src/unix/linux_like/linux/uclibc/align.rs":"9ed16138d8e439bd90930845a65eafa7ebd67366e6bf633936d44014f6e4c959","src/unix/linux_like/linux/uclibc/arm/align.rs":"e4a3c27fe20a57b8d612c34cb05bc70646edb5cec7251957315afa53a7b9f936","src/unix/linux_like/linux/uclibc/arm/mod.rs":"a056bbf718ddd775519058706bdb4909b56e6256985869e3c3132aa8ec5faca0","src/unix/linux_like/linux/uclibc/arm/no_align.rs":"9cd223135de75315840ff9c3fd5441ba1cb632b96b5c85a76f8316c86653db25","src/unix/linux_like/linux/uclibc/mips/mips32/align.rs":"e4a3c27fe20a57b8d612c34cb05bc70646edb5cec7251957315afa53a7b9f936","src/unix/linux_like/linux/uclibc/mips/mips32/mod.rs":"b84def53a49587e87f884c2bc28b21b290463b00b52e1d0309f2ba233a5b4a99","src/unix/linux_like/linux/uclibc/mips/mips32/no_align.rs":"9cd223135de75315840ff9c3fd5441ba1cb632b96b5c85a76f8316c86653db25","src/unix/linux_like/linux/uclibc/mips/mips64/align.rs":"a7bdcb18a37a2d91e64d5fad83ea3edc78f5412adb28f77ab077dbb26dd08b2d","src/unix/linux_like/linux/uclibc/mips/mips64/mod.rs":"256a428290a560163ef7dc7d18b27bd3c6ce9748a0f28d5dc7f82203ee228220","src/unix/linux_like/linux/uclibc/mips/mips64/no_align.rs":"4a18e3875698c85229599225ac3401a2a40da87e77b2ad4ef47c6fcd5a24ed30","src/unix/linux_like/linux/uclibc/mips/mod.rs":"367ec5483ad317e6ccba1ac0888da6cf088a8d32689214cc8d16129aa692260c","src/unix/linux_like/linux/uclibc/mod.rs":"ddd223d4f574b2b92072bbdab091f12d8d89d5e05f41076ddfa6bc6af379698f","src/unix/linux_like/linux/uclibc/no_align.rs":"3f28637046524618adaa1012e26cb7ffe94b9396e6b518cccdc69d59f274d709","src/unix/linux_like/linux/uclibc/x86_64/l4re.rs":"024eba5753e852dbdd212427351affe7e83f9916c1864bce414d7aa2618f192e","src/unix/linux_like/linux/uclibc/x86_64/mod.rs":"bf6985e901041a61e90ccee1296b35a4c62ef90aa528d31989e1d647f072e79a","src/unix/linux_like/linux/uclibc/x86_64/other.rs":"42c3f71e58cabba373f6a55a623f3c31b85049eb64824c09c2b082b3b2d6a0a8","src/unix/linux_like/mod.rs":"dd4f7a1d66d8501b4a2c4e75e6e9305ed69f1002ae99e410596a6c636878595a","src/unix/mod.rs":"d1b4ba41f9b9c106f6ba8c661b5808824b774a7a7caac7d8938bf50979ba5195","src/unix/newlib/aarch64/mod.rs":"bac93836a9a57b2c710f32f852e92a4d11ad6759ab0fb6ad33e71d60e53278af","src/unix/newlib/align.rs":"28aaf87fafbc6b312622719d472d8cf65f9e5467d15339df5f73e66d8502b28a","src/unix/newlib/arm/mod.rs":"cbba6b3e957eceb496806e60de8725a23ff3fa0015983b4b4fa27b233732b526","src/unix/newlib/espidf/mod.rs":"ff9c13e99d84912f5ebe75b7a7ea9c1e9d8f35a268716081e09899c7ea822bc6","src/unix/newlib/generic.rs":"eab066d9f0a0f3eb53cc1073d01496bba0110989e1f6a59838afd19f870cd599","src/unix/newlib/horizon/mod.rs":"7cc5cc120437421db139bfa6a90b18168cd3070bdd0f5be96d40fe4c996f3ca1","src/unix/newlib/mod.rs":"54633d606e4e0413274af0b5beb5e697e6c061b63feaa0704b026554cc9d9c3e","src/unix/newlib/no_align.rs":"e0743b2179495a9514bc3a4d1781e492878c4ec834ee0085d0891dd1712e82fb","src/unix/newlib/powerpc/mod.rs":"0202ffd57caf75b6afa2c9717750ffb96e375ac33df0ae9609a3f831be393b67","src/unix/no_align.rs":"c06e95373b9088266e0b14bba0954eef95f93fb2b01d951855e382d22de78e53","src/unix/redox/mod.rs":"033768cb273daf2c8090d97252c2de9dba6809e6a5d2457f5727d72480769
5db","src/unix/solarish/compat.rs":"b07a5bfac925eb012003a459ba6bddbd3bfa9c44b3394da2ac5a602e54beae9c","src/unix/solarish/illumos.rs":"29387916ee7dc58f07478746024003215e631cd30953e8fa2a5c415f81839007","src/unix/solarish/mod.rs":"976b07a13e195840b67c166a62318abfa9ffc8d5ebbb0358f199dd213ec98d1b","src/unix/solarish/solaris.rs":"65b005453aefa9b9d4fc860fe77cfec80d8c97a51342b15daf55fc3e808bb384","src/unix/solarish/x86.rs":"e86e806df0caed72765040eaa2f3c883198d1aa91508540adf9b7008c77f522e","src/unix/solarish/x86_64.rs":"9074e813949f3c613afeac39d4118fb942c0b3c476232fc536489357cff5790f","src/unix/solarish/x86_common.rs":"ac869d9c3c95645c22460468391eb1982023c3a8e02b9e06a72e3aef3d5f1eac","src/vxworks/aarch64.rs":"98f0afdc511cd02557e506c21fed6737585490a1dce7a9d4941d08c437762b99","src/vxworks/arm.rs":"acb7968ce99fe3f4abdf39d98f8133d21a4fba435b8ef7084777cb181d788e88","src/vxworks/mod.rs":"aea3da66f2140f2a82dfc9c58f6e6531d2dd9c15ea696e0f95a0d4a2a187b5b6","src/vxworks/powerpc.rs":"acb7968ce99fe3f4abdf39d98f8133d21a4fba435b8ef7084777cb181d788e88","src/vxworks/powerpc64.rs":"98f0afdc511cd02557e506c21fed6737585490a1dce7a9d4941d08c437762b99","src/vxworks/x86.rs":"552f007f38317620b23889cb7c49d1d115841252439060122f52f434fbc6e5ba","src/vxworks/x86_64.rs":"018d92be3ad628a129eff9f2f5dfbc0883d8b8e5f2fa917b900a7f98ed6b514a","src/wasi.rs":"4fae202af0327d768ed9e1b586b75816cce14fe2dc16947d2f3d381f209a54c1","src/windows/gnu/align.rs":"b2c13ec1b9f3b39a75c452c80c951dff9d0215e31d77e883b4502afb31794647","src/windows/gnu/mod.rs":"3c8c7edb7cdf5d0c44af936db2a94869585c69dfabeef30571b4f4e38375767a","src/windows/mod.rs":"e3ad95ba54f76e74c301611fe868d3d94f6b8939b03be672f568b06b10ae71c7","src/windows/msvc/mod.rs":"c068271e00fca6b62bc4bf44bcf142cfc38caeded9b6c4e01d1ceef3ccf986f4","tests/const_fn.rs":"cb75a1f0864f926aebe79118fc34d51a0d1ade2c20a394e7774c7e545f21f1f4"},"package":"349d5a591cd28b49e1d1037471617a32ddcda5731b99419008085f72d5a53836"}
+\ No newline at end of file
++{"files":{"CONTRIBUTING.md":"752eea5a703d11b485c6b5f195f51bd2c79aa5159b619ce09555c779e1fb586b","Cargo.toml":"6a0abcfcbc1d9fb00a356343043a161f5b84b3f780cb0f02df4627d022b14b7f","LICENSE-APACHE":"62c7a1e35f56406896d7aa7ca52d0cc0d272ac022b5d2796e7d6905db8a3636a","LICENSE-MIT":"a8d47ff51ca256f56a8932dba07660672dbfe3004257ca8de708aac1415937a1","README.md":"8228847944f1332882fbb00275b6f30e4a8aad08a13569c25d52cac012cc2a47","build.rs":"41f9743021d9e5ed74ede55c94057cf6867a322a13b25a5d524b966656a08e38","rustfmt.toml":"eaa2ea84fc1ba0359b77680804903e07bb38d257ab11986b95b158e460f787b2","src/fixed_width_ints.rs":"7f986e5f5e68d25ef04d386fd2f640e8be8f15427a8d4a458ea01d26b8dca0ca","src/fuchsia/aarch64.rs":"378776a9e40766154a54c94c2a7b4675b5c302a38e6e42da99e67bfbaee60e56","src/fuchsia/align.rs":"ae1cf8f011a99737eabeb14ffff768e60f13b13363d7646744dbb0f443dab3d6","src/fuchsia/mod.rs":"bc8c46531bd1a2429f36aaf2bc137b50e42505b798de83f34eecfa94ad89179b","src/fuchsia/no_align.rs":"303f3f1b255e0088b5715094353cf00476131d8e94e6aebb3f469557771c8b8a","src/fuchsia/x86_64.rs":"93a3632b5cf67d2a6bcb7dc0a558605252d5fe689e0f38d8aa2ec5852255ac87","src/hermit/aarch64.rs":"86048676e335944c37a63d0083d0f368ae10ceccefeed9debb3bbe08777fc682","src/hermit/mod.rs":"d3bfce41e4463d4be8020a2d063c9bfa8b665f45f1cc6cbf3163f5d01e7cb21f","src/hermit/x86_64.rs":"ab832b7524e5fb15c49ff7431165ab1a37dc4667ae0b58e8306f4c539bfa110c","src/lib.rs":"3f13d5f9b29d8969dde507661f1637524e21fe3bd34957fc778868f2f3713c46","src/macros.rs":"d7437c2573c4915768ff96b34710a61ee9f63622b23526cddeaeaf6bfb4751a2","src/psp.rs":"dd31aabd46171d474ec5828372e28588935120e7355c90c105360d8fa9264c1c","src/sgx.rs":"16a95cdefc81c5ee00d8353a60db363c4cc3e0f75abcd5d0144723f2a306ed1b","src/solid/aarch64.rs":"a726e47f324adf73a4a0b67a2c183408d0cad105ae66acf36db37a42ab7f8707","src/solid/arm.rs":"e39a4f74ebbef3b97b8c95758ad741123d84ed3eb48d9cf4f1f4872097fc27fe","src/solid/mod.rs":"5f4151dca5132e4b4e4c23ab9737e12856dddbdc0ca3f7dbc004328ef3c8acde","src/switch.rs":"9da3dd39b3de45a7928789926e8572d00e1e11a39e6f7289a1349aadce90edba","src/unix/align.rs":"2cdc7c826ef7ae61f5171c5ae8c445a743d86f1a7f2d9d7e4ceeec56d6874f65","src/unix/bsd/apple/b32/align.rs":"ec833a747866fe19ca2d9b4d3c9ff0385faba5edf4bd0d15fa68884c40b0e26c","src/unix/bsd/apple/b32/mod.rs":"2546ad3eb6aecb95f916648bc63264117c92b4b4859532b34cb011e4c75a5a72","src/unix/bsd/apple/b64/aarch64/align.rs":"2eaf0f561a32bdcbf4e0477c8895d5e7bcb5cdebd5fef7b4df2ca8e38e144d94","src/unix/bsd/apple/b64/aarch64/mod.rs":"a3f0dfff62d0f7f4f1b5f9a4e2b662acf233a46badbc5419d3cc2d735629a710","src/unix/bsd/apple/b64/align.rs":"ec833a747866fe19ca2d9b4d3c9ff0385faba5edf4bd0d15fa68884c40b0e26c","src/unix/bsd/apple/b64/mod.rs":"f5e278a1af7fb358891d1c9be4eb7e815aaca0c5cb738d0c3604ba2208a856f7","src/unix/bsd/apple/b64/x86_64/align.rs":"ec833a747866fe19ca2d9b4d3c9ff0385faba5edf4bd0d15fa68884c40b0e26c","src/unix/bsd/apple/b64/x86_64/mod.rs":"8c87c5855038aae5d433c8f5eb3b29b0a175879a0245342b3bfd83bdf4cfd936","src/unix/bsd/apple/mod.rs":"d7155927fbd1af6dd984a4c6c1a735a5932af828c96818209487eb6ae72d7d7e","src/unix/bsd/freebsdlike/dragonfly/errno.rs":"8295b8bb0dfd38d2cdb4d9192cdeeb534cc6c3b208170e64615fa3e0edb3e578","src/unix/bsd/freebsdlike/dragonfly/mod.rs":"379302e12d30807a1f973c4e2dd2205179d95343ee6fae05f33c9ed45a342799","src/unix/bsd/freebsdlike/freebsd/aarch64.rs":"2a215bd6136b8617aacedf9be738ccee94da9d29b418e9a78101d6291c182352","src/unix/bsd/freebsdlike/freebsd/arm.rs":"59d6a670eea562fb87686e243e0a84603d29a2028a3d4b3f99ccc01bd04d2f47","src/unix/bsd/freebsdlike/freebsd
/freebsd11/b64.rs":"9808d152c1196aa647f1b0f0cf84dac8c930da7d7f897a44975545e3d9d17681","src/unix/bsd/freebsdlike/freebsd/freebsd11/mod.rs":"a6eee615e6ca5a6e04b526bb6b22d13b9356e87e51825cda33476c37a46cb0ef","src/unix/bsd/freebsdlike/freebsd/freebsd12/b64.rs":"61cbe45f8499bedb168106b686d4f8239472f25c7553b069eec2afe197ff2df6","src/unix/bsd/freebsdlike/freebsd/freebsd12/mod.rs":"755dafaf3f0945e798c34ea94c48e8552804ce60e2a15a4f0649f9d1aceaf422","src/unix/bsd/freebsdlike/freebsd/freebsd12/x86_64.rs":"2df36a7f122f6d6e5753cfb4d22e915cc80f6bc91c0161b3daae55a481bfd052","src/unix/bsd/freebsdlike/freebsd/freebsd13/b64.rs":"61cbe45f8499bedb168106b686d4f8239472f25c7553b069eec2afe197ff2df6","src/unix/bsd/freebsdlike/freebsd/freebsd13/mod.rs":"cc65a73b0fa95a77044a4b3ee76d6eceb9773b55aea7d73bdf070e6f66e9ea38","src/unix/bsd/freebsdlike/freebsd/freebsd13/x86_64.rs":"2df36a7f122f6d6e5753cfb4d22e915cc80f6bc91c0161b3daae55a481bfd052","src/unix/bsd/freebsdlike/freebsd/freebsd14/b64.rs":"61cbe45f8499bedb168106b686d4f8239472f25c7553b069eec2afe197ff2df6","src/unix/bsd/freebsdlike/freebsd/freebsd14/mod.rs":"0ed92eb93e78299cd7de0ae9daebb04a53b3c2d5e6a078e1fcd977f2a86bffc3","src/unix/bsd/freebsdlike/freebsd/freebsd14/x86_64.rs":"2df36a7f122f6d6e5753cfb4d22e915cc80f6bc91c0161b3daae55a481bfd052","src/unix/bsd/freebsdlike/freebsd/mod.rs":"467b66843ab8c1a54b01ae9e90aaf0295b32f54decbdfb64caca84523b488925","src/unix/bsd/freebsdlike/freebsd/powerpc.rs":"9ca3f82f88974e6db5569f2d76a5a3749b248a31747a6c0da5820492bdfeca42","src/unix/bsd/freebsdlike/freebsd/powerpc64.rs":"2dae3ecc87eac3b11657aa98915def55fc4b5c0de11fe26aae23329a54628a9a","src/unix/bsd/freebsdlike/freebsd/riscv64.rs":"8f591bd273464d684c4f64365f8ed56a8138175daa70d96008541393057a0dae","src/unix/bsd/freebsdlike/freebsd/x86.rs":"c5005e3249eb7c93cfbac72a9e9272320d80ce7983da990ceb05a447f59a02c5","src/unix/bsd/freebsdlike/freebsd/x86_64/align.rs":"0e1f69a88fca1c32874b1daf5db3d446fefbe518dca497f096cc9168c39dde70","src/unix/bsd/freebsdlike/freebsd/x86_64/mod.rs":"51e4dd0c8ae247bb652feda5adad9333ea3bb30c750c3a3935e0b0e47d7803eb","src/unix/bsd/freebsdlike/mod.rs":"a7345cc3fb7372572efe06848feb2cc246dfc2c2a0cc9ccf434f4c55041a59fa","src/unix/bsd/mod.rs":"7720ec82c9334f988ec4b271784768a017c3dc2e6dfae4d02418eef753388aa7","src/unix/bsd/netbsdlike/mod.rs":"594a0f9e23c4d7702ba38afdba5a0e866be25d692ec0afd66e04d939aa6b3f04","src/unix/bsd/netbsdlike/netbsd/aarch64.rs":"65dcb58d11e8d8028401a9d07ca3eb4cb4f053e04249cc877353449d84ccc4cb","src/unix/bsd/netbsdlike/netbsd/arm.rs":"58cdbb70b0d6f536551f0f3bb3725d2d75c4690db12c26c034e7d6ec4a924452","src/unix/bsd/netbsdlike/netbsd/mod.rs":"cb1560bf8ffcc7b2726a27b433efac90e726292960626f3064bd2c6b7f861a55","src/unix/bsd/netbsdlike/netbsd/powerpc.rs":"ee7ff5d89d0ed22f531237b5059aa669df93a3b5c489fa641465ace8d405bf41","src/unix/bsd/netbsdlike/netbsd/sparc64.rs":"9489f4b3e4566f43bb12dfb92238960613dac7f6a45cc13068a8d152b902d7d9","src/unix/bsd/netbsdlike/netbsd/x86.rs":"20692320e36bfe028d1a34d16fe12ca77aa909cb02bda167376f98f1a09aefe7","src/unix/bsd/netbsdlike/netbsd/x86_64.rs":"1afe5ef46b14397cdd68664b5b232e4f5b035b6db1d4cf411c899d51ebca9f30","src/unix/bsd/netbsdlike/openbsd/aarch64.rs":"dd91931d373b7ecaf6e2de25adadee10d16fa9b12c2cbacdff3eb291e1ba36af","src/unix/bsd/netbsdlike/openbsd/arm.rs":"01580d261bc6447bb327a0d982181b7bdabfa066cee65a30373d3ced729ad307","src/unix/bsd/netbsdlike/openbsd/mips64.rs":"8532a189ae10c7d668d9d4065da8b05d124e09bd39442c9f74a7f231c43eca48","src/unix/bsd/netbsdlike/openbsd/mod.rs":"816a8ef47df60a752a91967627eeccb9ca776dc718ecc53ae9
02d8edaee0bce9","src/unix/bsd/netbsdlike/openbsd/powerpc.rs":"01580d261bc6447bb327a0d982181b7bdabfa066cee65a30373d3ced729ad307","src/unix/bsd/netbsdlike/openbsd/powerpc64.rs":"1dd5449dd1fd3d51e30ffdeeaece91d0aaf05c710e0ac699fecc5461cfa2c28e","src/unix/bsd/netbsdlike/openbsd/riscv64.rs":"1dd5449dd1fd3d51e30ffdeeaece91d0aaf05c710e0ac699fecc5461cfa2c28e","src/unix/bsd/netbsdlike/openbsd/sparc64.rs":"d04fd287afbaa2c5df9d48c94e8374a532a3ba491b424ddf018270c7312f4085","src/unix/bsd/netbsdlike/openbsd/x86.rs":"6f7f5c4fde2a2259eb547890cbd86570cea04ef85347d7569e94e679448bec87","src/unix/bsd/netbsdlike/openbsd/x86_64.rs":"d31db31630289c85af3339dbe357998a21ca584cbae31607448fe2cf7675a4e1","src/unix/haiku/b32.rs":"a2efdbf7158a6da341e1db9176b0ab193ba88b449616239ed95dced11f54d87b","src/unix/haiku/b64.rs":"ff8115367d3d7d354f792d6176dfaaa26353f57056197b563bf4681f91ff7985","src/unix/haiku/mod.rs":"d5833ff9b94daa81d2470df544453212af17530d78c5a7fb912eac915d00f329","src/unix/haiku/native.rs":"dbfcbf4954a79d1df2ff58e0590bbcb8c57dfc7a32392aa73ee4726b66bd6cc8","src/unix/haiku/x86_64.rs":"3ec3aeeb7ed208b8916f3e32d42bfd085ff5e16936a1a35d9a52789f043b7237","src/unix/hermit/aarch64.rs":"86048676e335944c37a63d0083d0f368ae10ceccefeed9debb3bbe08777fc682","src/unix/hermit/mod.rs":"859814f5df89e28fd4b345db399d181e11e7ed413841b6ff703a1fcbdbf013ae","src/unix/hermit/x86_64.rs":"ab832b7524e5fb15c49ff7431165ab1a37dc4667ae0b58e8306f4c539bfa110c","src/unix/linux_like/android/b32/arm.rs":"433c1530f602cc5ed26610c58055dde0c4ceea5e00150063b24ddc60768332a4","src/unix/linux_like/android/b32/mod.rs":"7c173e0375119bf06a3081652faede95e5bcd6858e7576b7533d037978737c8f","src/unix/linux_like/android/b32/x86/align.rs":"812914e4241df82e32b12375ca3374615dc3a4bdd4cf31f0423c5815320c0dab","src/unix/linux_like/android/b32/x86/mod.rs":"8388bd3a0fcb5636bf965eee6dc95ae6860b85a2b555b387c868aa4d4e01ec89","src/unix/linux_like/android/b64/aarch64/align.rs":"2179c3b1608fa4bf68840482bfc2b2fa3ee2faf6fcae3770f9e505cddca35c7b","src/unix/linux_like/android/b64/aarch64/int128.rs":"1735f6f5c56770d20dd426442f09724d9b2052b46a7cd82f23f3288a4a7276de","src/unix/linux_like/android/b64/aarch64/mod.rs":"ef230d49fd0d182adf2dae6f8e10babf18d72259d65980bf1c4c2dc8a4f84501","src/unix/linux_like/android/b64/mod.rs":"d7bbbadafdb2cb2ff8e9cde3d89a03b9facaabb6b2d45705225d3ece1c5cce37","src/unix/linux_like/android/b64/x86_64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/unix/linux_like/android/b64/x86_64/mod.rs":"6454948ea98be86243229f99d67cdc7ca460e16b2a6445663ff4b5b6907c358d","src/unix/linux_like/android/mod.rs":"1247397a7e1b6269e0d2d6df88c6396bc18eb477e3ff293ff57d66a6b1380d94","src/unix/linux_like/emscripten/align.rs":"86c95cbed7a7161b1f23ee06843e7b0e2340ad92b2cb86fe2a8ef3e0e8c36216","src/unix/linux_like/emscripten/mod.rs":"b71d37106750f57bc2dae4e9bcb473ff098ef48235827e41a1687a39825f0aa4","src/unix/linux_like/emscripten/no_align.rs":"0128e4aa721a9902754828b61b5ec7d8a86619983ed1e0544a85d35b1051fad6","src/unix/linux_like/linux/align.rs":"d6c259942c8e843373accd180fc8f4f45f03544dfd21b93a8d02641ead3ef63e","src/unix/linux_like/linux/arch/generic/mod.rs":"e20013ed91edcfb7f84f3f9f5a9ef827fd5c406e24b65989d8438da332236ef6","src/unix/linux_like/linux/arch/mips/mod.rs":"2d166054a586bb4bf6e4a4ba35f7574907b217225eff8f1a43adc4277e142460","src/unix/linux_like/linux/arch/mod.rs":"466a29622e47c6c7f1500682b2eb17f5566dd81b322cd6348f0fdd355cec593a","src/unix/linux_like/linux/arch/powerpc/mod.rs":"3f6da7b0fa7b394c7d4eea2bb3caa7a7729ab0d6c1491fef02206a912c41b815","src/u
nix/linux_like/linux/arch/sparc/mod.rs":"91593ec0440f1dd8f8e612028f432c44c14089286e2aca50e10511ab942db8c3","src/unix/linux_like/linux/gnu/align.rs":"e4a3c27fe20a57b8d612c34cb05bc70646edb5cec7251957315afa53a7b9f936","src/unix/linux_like/linux/gnu/b32/arm/align.rs":"6ec0eb3ee93f7ae99fd714b4deabfb5e97fbcefd8c26f5a45fb8e7150899cdeb","src/unix/linux_like/linux/gnu/b32/arm/mod.rs":"92ea7edc0e24f79dfbf5e3efc2d7509bed230562036e6aa85ef4f2c8088ecc8f","src/unix/linux_like/linux/gnu/b32/m68k/align.rs":"8faa92f77a9232c035418d45331774e64a9a841d99c91791570a203bf2b45bcb","src/unix/linux_like/linux/gnu/b32/m68k/mod.rs":"a2a0a9400dae44086ebf579e0448e0676d4a3214d1ae7d13a024857251e23b6b","src/unix/linux_like/linux/gnu/b32/mips/align.rs":"429fb5e005cb7143602d430098b6ebfb7d360685b194f333dfd587472ae954ee","src/unix/linux_like/linux/gnu/b32/mips/mod.rs":"0d7849eb2435ec1f49b6774872a0518f0129c50f37c9d38b37b1535722777a22","src/unix/linux_like/linux/gnu/b32/mod.rs":"8da281da578cdee972e952b118b903b370320897a7e335342a15e1359864bef2","src/unix/linux_like/linux/gnu/b32/powerpc.rs":"049d6211ba4a9304bd4497c160bc21ae847c24e0528dd9d76263f16192e6aff5","src/unix/linux_like/linux/gnu/b32/riscv32/align.rs":"d321491612be8d5c61b6ec2dc0111beb3a22e58803f99cd37543efe86621b119","src/unix/linux_like/linux/gnu/b32/riscv32/mod.rs":"a4256148cec0bb672c8dfa605866930d9761af9655721de72ae41eeeb8fdbf6d","src/unix/linux_like/linux/gnu/b32/sparc/align.rs":"21adbed27df73e2d1ed934aaf733a643003d7baf2bde9c48ea440895bcca6d41","src/unix/linux_like/linux/gnu/b32/sparc/mod.rs":"525618615aa0cb80c6c90860bf579dfed8db307fffd56b97dc235fb945419434","src/unix/linux_like/linux/gnu/b32/x86/align.rs":"e4bafdc4a519a7922a81b37a62bbfd1177a2f620890eef8f1fbc47162e9eb413","src/unix/linux_like/linux/gnu/b32/x86/mod.rs":"78b4038852986436888c63be9258037cf642124daee9d5fa5cef2bf8e412bf54","src/unix/linux_like/linux/gnu/b64/aarch64/align.rs":"2179c3b1608fa4bf68840482bfc2b2fa3ee2faf6fcae3770f9e505cddca35c7b","src/unix/linux_like/linux/gnu/b64/aarch64/ilp32.rs":"21a21503ef2e095f4371044915d4bfb07a8578011cb5c713cd9f45947b0b5730","src/unix/linux_like/linux/gnu/b64/aarch64/int128.rs":"1735f6f5c56770d20dd426442f09724d9b2052b46a7cd82f23f3288a4a7276de","src/unix/linux_like/linux/gnu/b64/aarch64/lp64.rs":"e78c3cd197f44832338b414d1a9bc0d194f44c74db77bd7bf830c1fff62b2690","src/unix/linux_like/linux/gnu/b64/aarch64/mod.rs":"c4e20b1c63d7a03a6e22aef2046689ef95cc4651011ade7cb94176fcea1dc252","src/unix/linux_like/linux/gnu/b64/loongarch64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/unix/linux_like/linux/gnu/b64/loongarch64/mod.rs":"387808d5398b24339e7e2bf7591150735011befc5b421fa713d7017c04a7b1da","src/unix/linux_like/linux/gnu/b64/mips64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/unix/linux_like/linux/gnu/b64/mips64/mod.rs":"17aad16329431d83e1909e3a08022f6e28f4bcba7dec4a967fe1a321a6a43b99","src/unix/linux_like/linux/gnu/b64/mod.rs":"3c6555f30a7a8852757b31a542ea73fb6a16a6e27e838397e819278ad56e57a4","src/unix/linux_like/linux/gnu/b64/powerpc64/align.rs":"e29c4868bbecfa4a6cd8a2ad06193f3bbc78a468cc1dc9df83f002f1268130d9","src/unix/linux_like/linux/gnu/b64/powerpc64/mod.rs":"97e0ecf11ecce793a13fec39654fb513c5479edf7faa7a276fa714b61993d0fc","src/unix/linux_like/linux/gnu/b64/riscv64/align.rs":"d321491612be8d5c61b6ec2dc0111beb3a22e58803f99cd37543efe86621b119","src/unix/linux_like/linux/gnu/b64/riscv64/mod.rs":"b3fe290afe63d2d6e315d0cf1f775464e9c1f2a1906d243c1af74a137a4031cb","src/unix/linux_like/linux/gnu/b64/s390x.rs":"254f
00266ecf9644a4b469457cb37c4dd6c055820926c1de0fb9035b6048e75c","src/unix/linux_like/linux/gnu/b64/sparc64/align.rs":"e29c4868bbecfa4a6cd8a2ad06193f3bbc78a468cc1dc9df83f002f1268130d9","src/unix/linux_like/linux/gnu/b64/sparc64/mod.rs":"87dd7f3d5bf3c09f4064ec738e306cc9cc41ad49b4a5df62c5983301c3bbf99a","src/unix/linux_like/linux/gnu/b64/x86_64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/unix/linux_like/linux/gnu/b64/x86_64/mod.rs":"1b939aaf3cdf3efc7d3bd53e83e80235530d3ba0c24bb7611d4730d35d457ec1","src/unix/linux_like/linux/gnu/b64/x86_64/not_x32.rs":"b88ef8a1eaa9ed73bf2acb8192afb73af987a92abb94140c6376fc83f2fa5553","src/unix/linux_like/linux/gnu/b64/x86_64/x32.rs":"79305936a60d342efdc10519ba89507d6b48e65f13f33090d3b04dc9655ceed0","src/unix/linux_like/linux/gnu/mod.rs":"84ad4a663b5fa2498179be8dca96fef5f0446ec1619215ac674424ee394e307d","src/unix/linux_like/linux/gnu/no_align.rs":"9cd223135de75315840ff9c3fd5441ba1cb632b96b5c85a76f8316c86653db25","src/unix/linux_like/linux/mod.rs":"cff23db1e0bac8c30052dfed5e89e65bc207471e4bfb3a6fa0a4df62ed9e413e","src/unix/linux_like/linux/musl/b32/arm/align.rs":"3e8ac052c1043764776b54c93ba4260e061df998631737a897d9d47d54f7b80c","src/unix/linux_like/linux/musl/b32/arm/mod.rs":"e5faee8efda8a225ea0b17d4d6f9e893a678e73773fa62c549a8e19c106b9f04","src/unix/linux_like/linux/musl/b32/hexagon.rs":"226a8b64ce9c75abbbee6d2dceb0b44f7b6c750c4102ebd4d015194afee6666e","src/unix/linux_like/linux/musl/b32/mips/align.rs":"429fb5e005cb7143602d430098b6ebfb7d360685b194f333dfd587472ae954ee","src/unix/linux_like/linux/musl/b32/mips/mod.rs":"df8f8b529a6cc6b8a7326639e83303cf1320c6c50d76517c17d42bcf45f6240a","src/unix/linux_like/linux/musl/b32/mod.rs":"7b3d9dfd8605b00bb9b5fa1439abe5ebf60199c7fa033eee555e8d181e93ffa2","src/unix/linux_like/linux/musl/b32/powerpc.rs":"c957d99a4d4371d2411a5769be8cf344516bf9ddc1011f977501a4eb57cb4e82","src/unix/linux_like/linux/musl/b32/riscv32/align.rs":"efd2accf33b87de7c7547903359a5da896edc33cd6c719552c7474b60d4a5d48","src/unix/linux_like/linux/musl/b32/riscv32/mod.rs":"e57dc5562553aab6d0765e0ec266254aa52975f8757bfe97e0c6028fa7d5d37c","src/unix/linux_like/linux/musl/b32/x86/align.rs":"08e77fbd7435d7dec2ff56932433bece3f02e47ce810f89004a275a86d39cbe1","src/unix/linux_like/linux/musl/b32/x86/mod.rs":"199a91e90b454f9dc32770d5204cc4f6e5b8f144e0e34a1c91829949d6e804b3","src/unix/linux_like/linux/musl/b64/aarch64/align.rs":"798a9229d70ce235394f2dd625f6c4c1e10519a94382dc5b091952b638ae2928","src/unix/linux_like/linux/musl/b64/aarch64/int128.rs":"1735f6f5c56770d20dd426442f09724d9b2052b46a7cd82f23f3288a4a7276de","src/unix/linux_like/linux/musl/b64/aarch64/mod.rs":"9c4878df0fea0e0affd85346e0bc191abdc5e41e74dc9199b5644ec33d29c300","src/unix/linux_like/linux/musl/b64/mips64.rs":"3686fc8cb2e311cda8e6b96f6dfe90b65a366714bd480312b692b1a6ca1241b6","src/unix/linux_like/linux/musl/b64/mod.rs":"8c10627bd582cb272514e7350ae4743a65d489356eae039d2e7e55cd533fbbc8","src/unix/linux_like/linux/musl/b64/powerpc64.rs":"36694cbdcdc33879e00502d55cb95eaa0096d213538993dd39c3da800cdd06d1","src/unix/linux_like/linux/musl/b64/riscv64/align.rs":"d321491612be8d5c61b6ec2dc0111beb3a22e58803f99cd37543efe86621b119","src/unix/linux_like/linux/musl/b64/riscv64/mod.rs":"36621aca8ecf714f8dd42662dc2997833d95b9f129ef8c220503362e21efd695","src/unix/linux_like/linux/musl/b64/s390x.rs":"9b05b1fae6bcb7cb6d909b9973977fde01684175f3e26c27dcb44223cc3933d9","src/unix/linux_like/linux/musl/b64/x86_64/align.rs":"7169d07a9fd4716f7512719aec9fda5d8bed306dc0720ffc1b21696c9951e3c6","src/u
nix/linux_like/linux/musl/b64/x86_64/mod.rs":"238789097a26abc8b7cd578ed1a8e6cb8672083054303902781983902cd66854","src/unix/linux_like/linux/musl/mod.rs":"2efe98b8166270be1624888953fa07da06f08313f15bf4c7dc2eba63f5a790a5","src/unix/linux_like/linux/no_align.rs":"da2a8721becaaaa528781f97f5d9aae6a982ae5d4f5f6d2ffc0150bed72319b3","src/unix/linux_like/linux/non_exhaustive.rs":"181a05bf94fdb911db83ce793b993bd6548a4115b306a7ef3c10f745a8fea3e9","src/unix/linux_like/linux/uclibc/align.rs":"9ed16138d8e439bd90930845a65eafa7ebd67366e6bf633936d44014f6e4c959","src/unix/linux_like/linux/uclibc/arm/align.rs":"e4a3c27fe20a57b8d612c34cb05bc70646edb5cec7251957315afa53a7b9f936","src/unix/linux_like/linux/uclibc/arm/mod.rs":"a056bbf718ddd775519058706bdb4909b56e6256985869e3c3132aa8ec5faca0","src/unix/linux_like/linux/uclibc/arm/no_align.rs":"9cd223135de75315840ff9c3fd5441ba1cb632b96b5c85a76f8316c86653db25","src/unix/linux_like/linux/uclibc/mips/mips32/align.rs":"e4a3c27fe20a57b8d612c34cb05bc70646edb5cec7251957315afa53a7b9f936","src/unix/linux_like/linux/uclibc/mips/mips32/mod.rs":"b84def53a49587e87f884c2bc28b21b290463b00b52e1d0309f2ba233a5b4a99","src/unix/linux_like/linux/uclibc/mips/mips32/no_align.rs":"9cd223135de75315840ff9c3fd5441ba1cb632b96b5c85a76f8316c86653db25","src/unix/linux_like/linux/uclibc/mips/mips64/align.rs":"a7bdcb18a37a2d91e64d5fad83ea3edc78f5412adb28f77ab077dbb26dd08b2d","src/unix/linux_like/linux/uclibc/mips/mips64/mod.rs":"256a428290a560163ef7dc7d18b27bd3c6ce9748a0f28d5dc7f82203ee228220","src/unix/linux_like/linux/uclibc/mips/mips64/no_align.rs":"4a18e3875698c85229599225ac3401a2a40da87e77b2ad4ef47c6fcd5a24ed30","src/unix/linux_like/linux/uclibc/mips/mod.rs":"367ec5483ad317e6ccba1ac0888da6cf088a8d32689214cc8d16129aa692260c","src/unix/linux_like/linux/uclibc/mod.rs":"ddd223d4f574b2b92072bbdab091f12d8d89d5e05f41076ddfa6bc6af379698f","src/unix/linux_like/linux/uclibc/no_align.rs":"3f28637046524618adaa1012e26cb7ffe94b9396e6b518cccdc69d59f274d709","src/unix/linux_like/linux/uclibc/x86_64/l4re.rs":"024eba5753e852dbdd212427351affe7e83f9916c1864bce414d7aa2618f192e","src/unix/linux_like/linux/uclibc/x86_64/mod.rs":"bf6985e901041a61e90ccee1296b35a4c62ef90aa528d31989e1d647f072e79a","src/unix/linux_like/linux/uclibc/x86_64/other.rs":"42c3f71e58cabba373f6a55a623f3c31b85049eb64824c09c2b082b3b2d6a0a8","src/unix/linux_like/mod.rs":"dd4f7a1d66d8501b4a2c4e75e6e9305ed69f1002ae99e410596a6c636878595a","src/unix/mod.rs":"d1b4ba41f9b9c106f6ba8c661b5808824b774a7a7caac7d8938bf50979ba5195","src/unix/newlib/aarch64/mod.rs":"bac93836a9a57b2c710f32f852e92a4d11ad6759ab0fb6ad33e71d60e53278af","src/unix/newlib/align.rs":"28aaf87fafbc6b312622719d472d8cf65f9e5467d15339df5f73e66d8502b28a","src/unix/newlib/arm/mod.rs":"cbba6b3e957eceb496806e60de8725a23ff3fa0015983b4b4fa27b233732b526","src/unix/newlib/espidf/mod.rs":"ff9c13e99d84912f5ebe75b7a7ea9c1e9d8f35a268716081e09899c7ea822bc6","src/unix/newlib/generic.rs":"eab066d9f0a0f3eb53cc1073d01496bba0110989e1f6a59838afd19f870cd599","src/unix/newlib/horizon/mod.rs":"7cc5cc120437421db139bfa6a90b18168cd3070bdd0f5be96d40fe4c996f3ca1","src/unix/newlib/mod.rs":"54633d606e4e0413274af0b5beb5e697e6c061b63feaa0704b026554cc9d9c3e","src/unix/newlib/no_align.rs":"e0743b2179495a9514bc3a4d1781e492878c4ec834ee0085d0891dd1712e82fb","src/unix/newlib/powerpc/mod.rs":"0202ffd57caf75b6afa2c9717750ffb96e375ac33df0ae9609a3f831be393b67","src/unix/no_align.rs":"c06e95373b9088266e0b14bba0954eef95f93fb2b01d951855e382d22de78e53","src/unix/redox/mod.rs":"033768cb273daf2c8090d97252c2de9dba6809e6a5d2457f5727d72480769
5db","src/unix/solarish/compat.rs":"b07a5bfac925eb012003a459ba6bddbd3bfa9c44b3394da2ac5a602e54beae9c","src/unix/solarish/illumos.rs":"29387916ee7dc58f07478746024003215e631cd30953e8fa2a5c415f81839007","src/unix/solarish/mod.rs":"976b07a13e195840b67c166a62318abfa9ffc8d5ebbb0358f199dd213ec98d1b","src/unix/solarish/solaris.rs":"65b005453aefa9b9d4fc860fe77cfec80d8c97a51342b15daf55fc3e808bb384","src/unix/solarish/x86.rs":"e86e806df0caed72765040eaa2f3c883198d1aa91508540adf9b7008c77f522e","src/unix/solarish/x86_64.rs":"9074e813949f3c613afeac39d4118fb942c0b3c476232fc536489357cff5790f","src/unix/solarish/x86_common.rs":"ac869d9c3c95645c22460468391eb1982023c3a8e02b9e06a72e3aef3d5f1eac","src/vxworks/aarch64.rs":"98f0afdc511cd02557e506c21fed6737585490a1dce7a9d4941d08c437762b99","src/vxworks/arm.rs":"acb7968ce99fe3f4abdf39d98f8133d21a4fba435b8ef7084777cb181d788e88","src/vxworks/mod.rs":"aea3da66f2140f2a82dfc9c58f6e6531d2dd9c15ea696e0f95a0d4a2a187b5b6","src/vxworks/powerpc.rs":"acb7968ce99fe3f4abdf39d98f8133d21a4fba435b8ef7084777cb181d788e88","src/vxworks/powerpc64.rs":"98f0afdc511cd02557e506c21fed6737585490a1dce7a9d4941d08c437762b99","src/vxworks/x86.rs":"552f007f38317620b23889cb7c49d1d115841252439060122f52f434fbc6e5ba","src/vxworks/x86_64.rs":"018d92be3ad628a129eff9f2f5dfbc0883d8b8e5f2fa917b900a7f98ed6b514a","src/wasi.rs":"4fae202af0327d768ed9e1b586b75816cce14fe2dc16947d2f3d381f209a54c1","src/windows/gnu/align.rs":"b2c13ec1b9f3b39a75c452c80c951dff9d0215e31d77e883b4502afb31794647","src/windows/gnu/mod.rs":"3c8c7edb7cdf5d0c44af936db2a94869585c69dfabeef30571b4f4e38375767a","src/windows/mod.rs":"e3ad95ba54f76e74c301611fe868d3d94f6b8939b03be672f568b06b10ae71c7","src/windows/msvc/mod.rs":"c068271e00fca6b62bc4bf44bcf142cfc38caeded9b6c4e01d1ceef3ccf986f4","tests/const_fn.rs":"cb75a1f0864f926aebe79118fc34d51a0d1ade2c20a394e7774c7e545f21f1f4"},"package":"349d5a591cd28b49e1d1037471617a32ddcda5731b99419008085f72d5a53836"}
diff --git a/poky/meta/recipes-devtools/rust/rust/crossbeam_atomic.patch b/poky/meta/recipes-devtools/rust/rust/crossbeam_atomic.patch
new file mode 100644
index 0000000..7097bb9
--- /dev/null
+++ b/poky/meta/recipes-devtools/rust/rust/crossbeam_atomic.patch
@@ -0,0 +1,50 @@
+crossbeam-utils is taking the target triplet and comparing it against a
+known list of platforms that have issues either with any atomics or with
+64-bit atomics. Since OE encodes TARGET_VENDOR into the Rust triplet (to
+differentiate host vs. target), this means that platforms that should match,
+don't.
+
+We could make a list of platforms and pass in configuration values, but
+having one list in Rust and another in our recipes is likely to cause
+problems in the future. We already have this issue in the librsvg recipe.
+Instead, switch out the value of TARGET_VENDOR for "-unknown", which
+then makes the list in no_atomic.rs work correctly.
+
+Someone with more Rust knowledge could split up the triplets in no_atomic.rs
+and compare against the architecture/processor, or replace -unknown with a glob
+to create a patch that upstream might accept.
+
+Upstream-Status: Inappropriate [OE Specific tweak but could be rewritten]
+Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
+
+Index: rustc-1.63.0-src/vendor/crossbeam-utils/build.rs
+===================================================================
+--- rustc-1.63.0-src.orig/vendor/crossbeam-utils/build.rs
++++ rustc-1.63.0-src/vendor/crossbeam-utils/build.rs
+@@ -29,7 +29,7 @@ use std::env;
+ include!("no_atomic.rs");
+ 
+ fn main() {
+-    let target = match env::var("TARGET") {
++    let mut target = match env::var("TARGET") {
+         Ok(target) => target,
+         Err(e) => {
+             println!(
+@@ -40,6 +40,8 @@ fn main() {
+             return;
+         }
+     };
++    let vendor = env::var("TARGET_VENDOR").unwrap();
++    target = target.replace(&vendor, "-unknown");
+ 
+     // Note that this is `no_*`, not `has_*`. This allows treating
+     // `cfg(target_has_atomic = "ptr")` as true when the build script doesn't
+Index: rustc-1.63.0-src/vendor/crossbeam-utils/.cargo-checksum.json
+===================================================================
+--- rustc-1.63.0-src.orig/vendor/crossbeam-utils/.cargo-checksum.json
++++ rustc-1.63.0-src/vendor/crossbeam-utils/.cargo-checksum.json
+@@ -1 +1 @@
+-{"files":{"CHANGELOG.md":"665a9f2c5fd37c98bef7c1b6eda753b58bb925d87e5b42d7298df973d7590631","Cargo.toml":"fe22292acd6a868e65baf225f90d5678678971642814d2d8e92a03954b8bdb40","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"5734ed989dfca1f625b40281ee9f4530f91b2411ec01cb748223e7eb87e201ab","README.md":"dfa9fbed47c344c134a63c84b7c0e4651baeac1554b7b3266d0e38643743fc33","benches/atomic_cell.rs":"c927eb3cd1e5ecc4b91adbc3bde98af15ffab4086190792ba64d5cde0e24df3d","build.rs":"7e74dc72343ff57e83d0a84a9fbdd9ff1645894165909999b4c3d2fba94bc96c","no_atomic.rs":"71b5f78fd701ce604aa766dd3d825fa5bed774282aae4d6c31d7acb01b1b242f","src/atomic/atomic_cell.rs":"01185588e0e16ba81425677966d0c11887dedc4ac0d4a65991a34057c418adc4","src/atomic/consume.rs":"7a7736fcd64f6473dfea7653559ffc5e1a2a234df43835f8aa8734862145ac15","src/atomic/mod.rs":"94193895fa03cece415e8d7be700b73a9a8a7015774ca821253438607f9b0736","src/atomic/seq_lock.rs":"27182e6b87a9db73c5f6831759f8625f9fcdec3c2828204c444aef04f427735a","src/atomic/seq_lock_wide.rs":"9888dd03116bb89ca36d4ab8d5a0b5032107a2983a7eb8024454263b09080088","src/backoff.rs":"7cc7754e15f69b52e92a70d4f49d1bc274693455a0933a2d7eb0605806566af3","src/cache_padded.rs":"6a512698115ad0d5a5b163dbd7a83247e1f1c146c4a30f3fc74b952e3b767b59","src/lib.rs":"6f1bcf157abe06ad8458a53e865bf8efab9fad4a9424790147cee8fefb3795d8","src/sync/mod.rs":"59986f559a8f170a4b3247ab2eea2460b09809d87c8110ed88e4e7103d3519dc","src/sync/parker.rs":"3f997f5b41fec286ccedcf3d36f801d741387badb574820b8e3456117ecd9154","src/sync/sharded_lock.rs":"14be659744918d0b27db24c56b41c618b0f0484b6761da46561023d96c4c120f","src/sync/wait_group.rs":"32e946a7581c55f8aa9904527b92b177c538fa0cf7cbcfa1d1f25990582cb6ea","src/thread.rs":"6a7676fd4e50af63aec6f655121a10cd6e8c704f4677125388186ba58dc5842d","tests/atomic_cell.rs":"d64faa1ca8896373468308031220940d988aa3a1679ea25d2291a7a7d22bc51a","tests/cache_padded.rs":"1bfaff8354c8184e1ee1f902881ca9400b60effb273b0d3f752801a483d2b66d","tests/parker.rs":"6def4721287d9d70b1cfd63ebb34e1c83fbb3376edbad2bc8aac6ef69dd99d20","tests/sharded_lock.rs":"eb6c5b59f007e0d290dd0f58758e8ccb5cacd38af34e3341368ced815f0c41be","tests/thread.rs":"9a7d7d3028c552fd834c68598b04a1cc252a816bc20ab62cec060d6cd09cab10","tests/wait_group.rs":"ad8f0cdfed31f9594a2e0737234d418f8b924d784a4db8d7e469deab8c95f5f8"},"package":"0bf124c720b7686e3c2663cf54062ab0f68a88af2fb6a030e87e30bf721fcb38"}
+\ No newline at end of file
++{"files":{"CHANGELOG.md":"665a9f2c5fd37c98bef7c1b6eda753b58bb925d87e5b42d7298df973d7590631","Cargo.toml":"fe22292acd6a868e65baf225f90d5678678971642814d2d8e92a03954b8bdb40","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"5734ed989dfca1f625b40281ee9f4530f91b2411ec01cb748223e7eb87e201ab","README.md":"dfa9fbed47c344c134a63c84b7c0e4651baeac1554b7b3266d0e38643743fc33","benches/atomic_cell.rs":"c927eb3cd1e5ecc4b91adbc3bde98af15ffab4086190792ba64d5cde0e24df3d","build.rs":"d983d511c89607ce89473779d1ee195e3eb509cc4d3043b9efe6aa2f94c98158","no_atomic.rs":"71b5f78fd701ce604aa766dd3d825fa5bed774282aae4d6c31d7acb01b1b242f","src/atomic/atomic_cell.rs":"01185588e0e16ba81425677966d0c11887dedc4ac0d4a65991a34057c418adc4","src/atomic/consume.rs":"7a7736fcd64f6473dfea7653559ffc5e1a2a234df43835f8aa8734862145ac15","src/atomic/mod.rs":"94193895fa03cece415e8d7be700b73a9a8a7015774ca821253438607f9b0736","src/atomic/seq_lock.rs":"27182e6b87a9db73c5f6831759f8625f9fcdec3c2828204c444aef04f427735a","src/atomic/seq_lock_wide.rs":"9888dd03116bb89ca36d4ab8d5a0b5032107a2983a7eb8024454263b09080088","src/backoff.rs":"7cc7754e15f69b52e92a70d4f49d1bc274693455a0933a2d7eb0605806566af3","src/cache_padded.rs":"6a512698115ad0d5a5b163dbd7a83247e1f1c146c4a30f3fc74b952e3b767b59","src/lib.rs":"6f1bcf157abe06ad8458a53e865bf8efab9fad4a9424790147cee8fefb3795d8","src/sync/mod.rs":"59986f559a8f170a4b3247ab2eea2460b09809d87c8110ed88e4e7103d3519dc","src/sync/parker.rs":"3f997f5b41fec286ccedcf3d36f801d741387badb574820b8e3456117ecd9154","src/sync/sharded_lock.rs":"14be659744918d0b27db24c56b41c618b0f0484b6761da46561023d96c4c120f","src/sync/wait_group.rs":"32e946a7581c55f8aa9904527b92b177c538fa0cf7cbcfa1d1f25990582cb6ea","src/thread.rs":"6a7676fd4e50af63aec6f655121a10cd6e8c704f4677125388186ba58dc5842d","tests/atomic_cell.rs":"d64faa1ca8896373468308031220940d988aa3a1679ea25d2291a7a7d22bc51a","tests/cache_padded.rs":"1bfaff8354c8184e1ee1f902881ca9400b60effb273b0d3f752801a483d2b66d","tests/parker.rs":"6def4721287d9d70b1cfd63ebb34e1c83fbb3376edbad2bc8aac6ef69dd99d20","tests/sharded_lock.rs":"eb6c5b59f007e0d290dd0f58758e8ccb5cacd38af34e3341368ced815f0c41be","tests/thread.rs":"9a7d7d3028c552fd834c68598b04a1cc252a816bc20ab62cec060d6cd09cab10","tests/wait_group.rs":"ad8f0cdfed31f9594a2e0737234d418f8b924d784a4db8d7e469deab8c95f5f8"},"package":"0bf124c720b7686e3c2663cf54062ab0f68a88af2fb6a030e87e30bf721fcb38"}
+\ No newline at end of file
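For illustration only (not part of the patch above), here is a minimal standalone Rust sketch of the substitution the modified build script performs, assuming an OE-style triplet such as x86_64-poky-linux-gnu with TARGET_VENDOR set to "-poky":

    // Hypothetical sketch of the TARGET_VENDOR rewrite; the variable names
    // mirror what OE exports for the build, not crossbeam-utils' own API.
    use std::env;

    fn normalized_target() -> String {
        let target = env::var("TARGET").unwrap_or_default();
        match env::var("TARGET_VENDOR") {
            // "x86_64-poky-linux-gnu" becomes "x86_64-unknown-linux-gnu",
            // which matches the triplets listed in no_atomic.rs.
            Ok(vendor) if !vendor.is_empty() => target.replace(&vendor, "-unknown"),
            _ => target,
        }
    }

    fn main() {
        println!("{}", normalized_target());
    }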
diff --git a/poky/meta/recipes-devtools/rust/rust/hardcodepaths.patch b/poky/meta/recipes-devtools/rust/rust/hardcodepaths.patch
new file mode 100644
index 0000000..2fdfe6d
--- /dev/null
+++ b/poky/meta/recipes-devtools/rust/rust/hardcodepaths.patch
@@ -0,0 +1,70 @@
+When building for the target, some build paths end up embedded in the binaries.
+These changes remove that. Further investigation is needed to work out how
+to resolve these issues properly upstream.
+
+Upstream-Status: Inappropriate [patches need rework]
+Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
+
+Index: rustc-1.62.0-src/src/tools/clippy/src/driver.rs
+===================================================================
+--- rustc-1.62.0-src.orig/src/tools/clippy/src/driver.rs
++++ rustc-1.62.0-src/src/tools/clippy/src/driver.rs
+@@ -255,7 +255,6 @@ pub fn main() {
+                     .and_then(|out| String::from_utf8(out.stdout).ok())
+                     .map(|s| PathBuf::from(s.trim()))
+             })
+-            .or_else(|| option_env!("SYSROOT").map(PathBuf::from))
+             .or_else(|| {
+                 let home = option_env!("RUSTUP_HOME")
+                     .or(option_env!("MULTIRUST_HOME"))
+Index: rustc-1.62.0-src/compiler/rustc_codegen_llvm/src/context.rs
+===================================================================
+--- rustc-1.62.0-src.orig/compiler/rustc_codegen_llvm/src/context.rs
++++ rustc-1.62.0-src/compiler/rustc_codegen_llvm/src/context.rs
+@@ -167,46 +167,6 @@ pub unsafe fn create_module<'ll>(
+         }
+     }
+ 
+-    // Ensure the data-layout values hardcoded remain the defaults.
+-    if sess.target.is_builtin {
+-        let tm = crate::back::write::create_informational_target_machine(tcx.sess);
+-        llvm::LLVMRustSetDataLayoutFromTargetMachine(llmod, tm);
+-        llvm::LLVMRustDisposeTargetMachine(tm);
+-
+-        let llvm_data_layout = llvm::LLVMGetDataLayoutStr(llmod);
+-        let llvm_data_layout = str::from_utf8(CStr::from_ptr(llvm_data_layout).to_bytes())
+-            .expect("got a non-UTF8 data-layout from LLVM");
+-
+-        // Unfortunately LLVM target specs change over time, and right now we
+-        // don't have proper support to work with any more than one
+-        // `data_layout` than the one that is in the rust-lang/rust repo. If
+-        // this compiler is configured against a custom LLVM, we may have a
+-        // differing data layout, even though we should update our own to use
+-        // that one.
+-        //
+-        // As an interim hack, if CFG_LLVM_ROOT is not an empty string then we
+-        // disable this check entirely as we may be configured with something
+-        // that has a different target layout.
+-        //
+-        // Unsure if this will actually cause breakage when rustc is configured
+-        // as such.
+-        //
+-        // FIXME(#34960)
+-        let cfg_llvm_root = option_env!("CFG_LLVM_ROOT").unwrap_or("");
+-        let custom_llvm_used = cfg_llvm_root.trim() != "";
+-
+-        if !custom_llvm_used && target_data_layout != llvm_data_layout {
+-            bug!(
+-                "data-layout for target `{rustc_target}`, `{rustc_layout}`, \
+-                  differs from LLVM target's `{llvm_target}` default layout, `{llvm_layout}`",
+-                rustc_target = sess.opts.target_triple,
+-                rustc_layout = target_data_layout,
+-                llvm_target = sess.target.llvm_target,
+-                llvm_layout = llvm_data_layout
+-            );
+-        }
+-    }
+-
+     let data_layout = SmallCStr::new(&target_data_layout);
+     llvm::LLVMSetDataLayout(llmod, data_layout.as_ptr());
+ 
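As background for the clippy driver hunk above: option_env! is expanded at compile time, so whatever value the variable has on the build host is baked into the resulting binary, which is how a build-tree path such as SYSROOT can leak into target binaries. A minimal sketch, not taken from the rustc sources, using a hypothetical BUILD_DIR variable:

    // Hypothetical example of how option_env! embeds build-host values.
    fn main() {
        // Captured when this crate is compiled, not when it runs; a build-host
        // path here would end up in the binary and show up in strings output.
        let embedded: Option<&'static str> = option_env!("BUILD_DIR");
        println!("{:?}", embedded);
    }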
diff --git a/poky/meta/recipes-devtools/rust/rust_1.62.0.bb b/poky/meta/recipes-devtools/rust/rust_1.62.0.bb
deleted file mode 100644
index b505ad4..0000000
--- a/poky/meta/recipes-devtools/rust/rust_1.62.0.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-require rust-target.inc
-require rust-source.inc
-require rust-snapshot.inc
-
-INSANE_SKIP:${PN}:class-native = "already-stripped"
-
-do_compile () {
-    rust_runx build --stage 2
-}
-
-rust_do_install() {
-    rust_runx install
-}
-
-python () {
-    pn = d.getVar('PN')
-
-    if not pn.endswith("-native"):
-        raise bb.parse.SkipRecipe("Rust recipe doesn't work for target builds at this time. Fixes welcome.")
-}
-
diff --git a/poky/meta/recipes-devtools/rust/rust_1.63.0.bb b/poky/meta/recipes-devtools/rust/rust_1.63.0.bb
new file mode 100644
index 0000000..401d510
--- /dev/null
+++ b/poky/meta/recipes-devtools/rust/rust_1.63.0.bb
@@ -0,0 +1,85 @@
+require rust-target.inc
+require rust-source.inc
+require rust-snapshot.inc
+
+INSANE_SKIP:${PN}:class-native = "already-stripped"
+FILES:${PN} += "${libdir}/rustlib"
+FILES:${PN} += "${libdir}/*.so"
+FILES:${PN}-dev = ""
+
+# Used by crossbeam_atomic.patch
+export TARGET_VENDOR
+
+do_compile () {
+    rust_runx build --stage 2
+}
+
+do_compile:append:class-target () {
+    rust_runx build --stage 2 src/tools/clippy
+    rust_runx build --stage 2 src/tools/rustfmt
+}
+
+do_compile:append:class-nativesdk () {
+    rust_runx build --stage 2 src/tools/clippy
+    rust_runx build --stage 2 src/tools/rustfmt
+}
+
+ALLOW_EMPTY:${PN} = "1"
+
+PACKAGES =+ "${PN}-tools-clippy ${PN}-tools-rustfmt"
+FILES:${PN}-tools-clippy = "${bindir}/cargo-clippy ${bindir}/clippy-driver"
+FILES:${PN}-tools-rustfmt = "${bindir}/rustfmt"
+RDEPENDS:${PN}-tools-clippy = "${PN}"
+RDEPENDS:${PN}-tools-rustfmt = "${PN}"
+
+SUMMARY:${PN}-tools-clippy = "A collection of lints to catch common mistakes and improve your Rust code"
+SUMMARY:${PN}-tools-rustfmt = "A tool for formatting Rust code according to style guidelines"
+
+rust_do_install() {
+    rust_runx install
+}
+
+rust_do_install:class-nativesdk() {
+    export PSEUDO_UNLOAD=1
+    rust_runx install
+    unset PSEUDO_UNLOAD
+
+    install -d ${D}${bindir}
+    for i in cargo-clippy clippy-driver rustfmt; do
+        cp build/${RUST_BUILD_SYS}/stage2-tools/${RUST_HOST_SYS}/release/$i ${D}${bindir}
+        chrpath -r "\$ORIGIN/../lib" ${D}${bindir}/$i
+    done
+
+    chown root:root ${D}/ -R
+    rm ${D}${libdir}/rustlib/uninstall.sh
+    rm ${D}${libdir}/rustlib/install.log
+    rm ${D}${libdir}/rustlib/manifest*
+}
+
+rust_do_install:class-target() {
+    export PSEUDO_UNLOAD=1
+    rust_runx install
+    unset PSEUDO_UNLOAD
+
+    install -d ${D}${bindir}
+    for i in cargo-clippy clippy-driver rustfmt; do
+        cp build/${RUST_BUILD_SYS}/stage2-tools/${RUST_HOST_SYS}/release/$i ${D}${bindir}
+        chrpath -r "\$ORIGIN/../lib" ${D}${bindir}/$i
+    done
+
+    chown root:root ${D}/ -R
+    rm ${D}${libdir}/rustlib/uninstall.sh
+    rm ${D}${libdir}/rustlib/install.log
+    rm ${D}${libdir}/rustlib/manifest*
+}
+
+# see recipes-devtools/gcc/gcc/0018-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch
+# we need to link with ssp_nonshared on musl to avoid "undefined reference to `__stack_chk_fail_local'"
+# when building MACHINE=qemux86 for musl
+WRAPPER_TARGET_EXTRALD:libc-musl = "-lssp_nonshared"
+
+RUSTLIB_DEP:class-nativesdk = ""
+
+# musl builds include libunwind.a
+INSANE_SKIP:${PN} = "staticdev"
+
diff --git a/poky/meta/recipes-devtools/strace/strace_5.18.bb b/poky/meta/recipes-devtools/strace/strace_5.18.bb
deleted file mode 100644
index 75ff58b..0000000
--- a/poky/meta/recipes-devtools/strace/strace_5.18.bb
+++ /dev/null
@@ -1,57 +0,0 @@
-SUMMARY = "System call tracing tool"
-HOMEPAGE = "http://strace.io"
-DESCRIPTION = "strace is a diagnostic, debugging and instructional userspace utility for Linux. It is used to monitor and tamper with interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state."
-SECTION = "console/utils"
-LICENSE = "LGPL-2.1-or-later & GPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=59a33f0a3e6122d67c0b3befccbdaa6b"
-
-SRC_URI = "https://strace.io/files/${PV}/strace-${PV}.tar.xz \
-           file://update-gawk-paths.patch \
-           file://Makefile-ptest.patch \
-           file://run-ptest \
-           file://0001-caps-abbrev.awk-fix-gawk-s-path.patch \
-           file://ptest-spacesave.patch \
-           file://0001-strace-fix-reproducibilty-issues.patch \
-           file://skip-load.patch \
-           "
-SRC_URI[sha256sum] = "60293ea79ac9253d600cdc9be077ad2988ca22284a439c9e66be5150db3d1187"
-
-inherit autotools ptest
-
-# Not yet ported to rv32
-COMPATIBLE_HOST:riscv32 = "null"
-
-PACKAGECONFIG:class-target ??= "\
-    ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', 'bluez', '', d)} \
-"
-
-PACKAGECONFIG[bluez] = "ac_cv_header_bluetooth_bluetooth_h=yes,ac_cv_header_bluetooth_bluetooth_h=no,bluez5"
-PACKAGECONFIG[libunwind] = "--with-libunwind,--without-libunwind,libunwind"
-
-EXTRA_OECONF += "--enable-mpers=no --disable-gcc-Werror"
-
-CFLAGS:append:libc-musl = " -Dsigcontext_struct=sigcontext"
-
-TESTDIR = "tests"
-PTEST_BUILD_HOST_PATTERN = "^(DEB_CHANGELOGTIME|RPM_CHANGELOGTIME|WARN_CFLAGS_FOR_BUILD|LDFLAGS_FOR_BUILD)"
-
-do_compile_ptest() {
-	oe_runmake -C ${TESTDIR} buildtest-TESTS
-}
-
-do_install_ptest() {
-	oe_runmake -C ${TESTDIR} install-ptest BUILDDIR=${B} DESTDIR=${D}${PTEST_PATH} TESTDIR=${TESTDIR}
-	mkdir -p ${D}${PTEST_PATH}/build-aux
-	mkdir -p ${D}${PTEST_PATH}/src
-	install -m 755 ${S}/build-aux/test-driver ${D}${PTEST_PATH}/build-aux/
-	install -m 644 ${B}/src/config.h ${D}${PTEST_PATH}/src/
-        sed -i -e '/^src/s/strace.*[0-9]/ptest/' ${D}/${PTEST_PATH}/${TESTDIR}/Makefile
-}
-
-RDEPENDS:${PN}-ptest += "make coreutils grep gawk sed"
-
-RDEPENDS:${PN}-ptest:append:libc-glibc = "\
-     locale-base-en-us.iso-8859-1 \
-"
-
-BBCLASSEXTEND = "native"
diff --git a/poky/meta/recipes-devtools/strace/strace_5.19.bb b/poky/meta/recipes-devtools/strace/strace_5.19.bb
new file mode 100644
index 0000000..5e69cfd
--- /dev/null
+++ b/poky/meta/recipes-devtools/strace/strace_5.19.bb
@@ -0,0 +1,57 @@
+SUMMARY = "System call tracing tool"
+HOMEPAGE = "http://strace.io"
+DESCRIPTION = "strace is a diagnostic, debugging and instructional userspace utility for Linux. It is used to monitor and tamper with interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state."
+SECTION = "console/utils"
+LICENSE = "LGPL-2.1-or-later & GPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=59a33f0a3e6122d67c0b3befccbdaa6b"
+
+SRC_URI = "https://strace.io/files/${PV}/strace-${PV}.tar.xz \
+           file://update-gawk-paths.patch \
+           file://Makefile-ptest.patch \
+           file://run-ptest \
+           file://0001-caps-abbrev.awk-fix-gawk-s-path.patch \
+           file://ptest-spacesave.patch \
+           file://0001-strace-fix-reproducibilty-issues.patch \
+           file://skip-load.patch \
+           "
+SRC_URI[sha256sum] = "aa3dc1c8e60e4f6ff3d396514aa247f3c7bf719d8a8dc4dd4fa793be786beca3"
+
+inherit autotools ptest
+
+# Not yet ported to rv32
+COMPATIBLE_HOST:riscv32 = "null"
+
+PACKAGECONFIG:class-target ??= "\
+    ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', 'bluez', '', d)} \
+"
+
+PACKAGECONFIG[bluez] = "ac_cv_header_bluetooth_bluetooth_h=yes,ac_cv_header_bluetooth_bluetooth_h=no,bluez5"
+PACKAGECONFIG[libunwind] = "--with-libunwind,--without-libunwind,libunwind"
+
+EXTRA_OECONF += "--enable-mpers=no --disable-gcc-Werror"
+
+CFLAGS:append:libc-musl = " -Dsigcontext_struct=sigcontext"
+
+TESTDIR = "tests"
+PTEST_BUILD_HOST_PATTERN = "^(DEB_CHANGELOGTIME|RPM_CHANGELOGTIME|WARN_CFLAGS_FOR_BUILD|LDFLAGS_FOR_BUILD)"
+
+do_compile_ptest() {
+	oe_runmake -C ${TESTDIR} buildtest-TESTS
+}
+
+do_install_ptest() {
+	oe_runmake -C ${TESTDIR} install-ptest BUILDDIR=${B} DESTDIR=${D}${PTEST_PATH} TESTDIR=${TESTDIR}
+	mkdir -p ${D}${PTEST_PATH}/build-aux
+	mkdir -p ${D}${PTEST_PATH}/src
+	install -m 755 ${S}/build-aux/test-driver ${D}${PTEST_PATH}/build-aux/
+	install -m 644 ${B}/src/config.h ${D}${PTEST_PATH}/src/
+        sed -i -e '/^src/s/strace.*[0-9]/ptest/' ${D}/${PTEST_PATH}/${TESTDIR}/Makefile
+}
+
+RDEPENDS:${PN}-ptest += "make coreutils grep gawk sed"
+
+RDEPENDS:${PN}-ptest:append:libc-glibc = "\
+     locale-base-en-us.iso-8859-1 \
+"
+
+BBCLASSEXTEND = "native"
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0001-install-don-t-install-obsolete-file-com32.ld.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0001-install-don-t-install-obsolete-file-com32.ld.patch
deleted file mode 100644
index bfd7f41..0000000
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0001-install-don-t-install-obsolete-file-com32.ld.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From bf6db5b48ec25f83939f1fdebb59028bc3c40b00 Mon Sep 17 00:00:00 2001
-From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
-Date: Wed, 6 Feb 2019 11:30:51 -0800
-Subject: [PATCH] install: don't install obsolete file com32.ld
-
-com32.ld has been obsolete for a long time, and has been removed now;
-don't install it either.
-
-Reported-by: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
-Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
-
-Upstream-Status: Backport
-Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
----
- com32/lib/Makefile | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/com32/lib/Makefile b/com32/lib/Makefile
-index 74fff149..6a931492 100644
---- a/com32/lib/Makefile
-+++ b/com32/lib/Makefile
-@@ -113,7 +113,6 @@ spotless: clean
- 
- install: all
- 	mkdir -m 755 -p $(INSTALLROOT)$(COM32DIR)
--	install -m 644 $(SRC)/com32.ld $(INSTALLROOT)$(COM32DIR)
- 	-rm -rf $(INSTALLROOT)$(COM32DIR)/include
- 	cp -r $(SRC)/../include $(INSTALLROOT)$(COM32DIR)
- 
--- 
-2.17.1
-
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0001-linux-syslinux-support-ext2-3-4-device.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0001-linux-syslinux-support-ext2-3-4-device.patch
index 47a8dac..1a4a4e3 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0001-linux-syslinux-support-ext2-3-4-device.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0001-linux-syslinux-support-ext2-3-4-device.patch
@@ -1,7 +1,7 @@
-From 60f3833ab2b5899771b4eab654e88f9888b99501 Mon Sep 17 00:00:00 2001
+From a469ce05055c44fdca1ca094ff3a735cc059480d Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Wed, 31 Dec 2014 16:01:55 +0800
-Subject: [PATCH 1/9] linux/syslinux: support ext2/3/4 device
+Subject: [PATCH] linux/syslinux: support ext2/3/4 device
 
 * Support ext2/3/4 deivce.
 * The open_ext2_fs() checks whether it is an ext2/3/4 device,
@@ -19,10 +19,10 @@
  1 file changed, 36 insertions(+)
 
 diff --git a/linux/syslinux.c b/linux/syslinux.c
-index 912de71..36fc202 100755
+index 46d5624..1cc276b 100755
 --- a/linux/syslinux.c
 +++ b/linux/syslinux.c
-@@ -256,6 +256,23 @@ int do_open_file(char *name)
+@@ -257,6 +257,23 @@ int do_open_file(char *name)
      return fd;
  }
  
@@ -46,7 +46,7 @@
  int main(int argc, char *argv[])
  {
      static unsigned char sectbuf[SECTOR_SIZE];
-@@ -313,6 +330,24 @@ int main(int argc, char *argv[])
+@@ -314,6 +331,24 @@ int main(int argc, char *argv[])
  	die("can't combine an offset with a block device");
      }
  
@@ -71,7 +71,7 @@
      xpread(dev_fd, sectbuf, SECTOR_SIZE, opt.offset);
      fsync(dev_fd);
  
-@@ -322,6 +357,7 @@ int main(int argc, char *argv[])
+@@ -323,6 +358,7 @@ int main(int argc, char *argv[])
       */
      if ((errmsg = syslinux_check_bootsect(sectbuf, &fs_type))) {
  	fprintf(stderr, "%s: %s\n", opt.device, errmsg);
@@ -79,6 +79,3 @@
  	exit(1);
      }
  
--- 
-1.9.1
-
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0002-linux-syslinux-implement-open_ext2_fs.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0002-linux-syslinux-implement-open_ext2_fs.patch
index 77cf060..1acd9b0 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0002-linux-syslinux-implement-open_ext2_fs.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0002-linux-syslinux-implement-open_ext2_fs.patch
@@ -1,7 +1,7 @@
-From 07fb737fb60c08eaaa41989d531fc23009523546 Mon Sep 17 00:00:00 2001
+From c6ddb179577dd4c4ea4d1d154f979e90e53d6bf1 Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Wed, 31 Dec 2014 16:09:18 +0800
-Subject: [PATCH 2/9] linux/syslinux: implement open_ext2_fs()
+Subject: [PATCH] linux/syslinux: implement open_ext2_fs()
 
 The open_ext2_fs() checks whether it is an ext2/ext3/ext4 device, and
 return:
@@ -15,14 +15,14 @@
 Tested-by: Du Dolpher <dolpher.du@intel.com>
 ---
  linux/Makefile   |  2 +-
- linux/syslinux.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ linux/syslinux.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++
  2 files changed, 81 insertions(+), 1 deletion(-)
 
 diff --git a/linux/Makefile b/linux/Makefile
-index 11667e1..ac1ac58 100644
+index 5a49d81..67cbbb4 100644
 --- a/linux/Makefile
 +++ b/linux/Makefile
-@@ -51,7 +51,7 @@ spotless: clean
+@@ -52,7 +52,7 @@ spotless: clean
  installer: syslinux syslinux-nomtools
  
  syslinux: $(OBJS)
@@ -32,10 +32,10 @@
  syslinux-nomtools: syslinux
  	ln -f $< $@
 diff --git a/linux/syslinux.c b/linux/syslinux.c
-index 36fc202..cc4e7da 100755
+index 1cc276b..f3727ea 100755
 --- a/linux/syslinux.c
 +++ b/linux/syslinux.c
-@@ -72,6 +72,7 @@
+@@ -73,6 +73,7 @@
  #include "syslxfs.h"
  #include "setadv.h"
  #include "syslxopt.h" /* unified options */
@@ -43,7 +43,7 @@
  
  extern const char *program;	/* Name of program */
  
-@@ -82,6 +83,9 @@ char *mntpath = NULL;		/* Path on which to mount */
+@@ -83,6 +84,9 @@ char *mntpath = NULL;		/* Path on which to mount */
  int loop_fd = -1;		/* Loop device */
  #endif
  
@@ -53,7 +53,7 @@
  void __attribute__ ((noreturn)) die(const char *msg)
  {
      fprintf(stderr, "%s: %s\n", program, msg);
-@@ -266,6 +270,82 @@ int do_open_file(char *name)
+@@ -267,6 +271,82 @@ int do_open_file(char *name)
   */
  static int open_ext2_fs(const char *device, const char *subdir)
  {
@@ -136,6 +136,3 @@
  }
  
  /* The install func for ext2, ext3 and ext4 */
--- 
-1.9.1
-
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0003-linux-syslinux-implement-install_to_ext2.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0003-linux-syslinux-implement-install_to_ext2.patch
index 84ba105..8d2fef2 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0003-linux-syslinux-implement-install_to_ext2.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0003-linux-syslinux-implement-install_to_ext2.patch
@@ -1,7 +1,7 @@
-From 64d856b243812907068776b204a003a3a8fa122a Mon Sep 17 00:00:00 2001
+From 9110cf47d04ca1958d14228908a5c57a23769e7d Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Wed, 31 Dec 2014 16:17:42 +0800
-Subject: [PATCH 3/9] linux/syslinux: implement install_to_ext2()
+Subject: [PATCH] linux/syslinux: implement install_to_ext2()
 
 * The handle_adv_on_ext() checks whether we only need update adv.
 * The write_to_ext() installs files (ldlinux.sys or ldlinux.c32) to the
@@ -13,14 +13,14 @@
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
 Tested-by: Du Dolpher <dolpher.du@intel.com>
 ---
- linux/syslinux.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ linux/syslinux.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 79 insertions(+)
 
 diff --git a/linux/syslinux.c b/linux/syslinux.c
-index cc4e7da..45f080d 100755
+index f3727ea..fc5edb1 100755
 --- a/linux/syslinux.c
 +++ b/linux/syslinux.c
-@@ -346,11 +346,90 @@ static int open_ext2_fs(const char *device, const char *subdir)
+@@ -347,11 +347,90 @@ static int open_ext2_fs(const char *device, const char *subdir)
  fail:
      (void) ext2fs_close(e2fs);
      return -1;
@@ -111,6 +111,3 @@
  }
  
  int main(int argc, char *argv[])
--- 
-1.9.1
-
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0004-linux-syslinux-add-ext_file_read-and-ext_file_write.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0004-linux-syslinux-add-ext_file_read-and-ext_file_write.patch
index 64b56d9..0a32969 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0004-linux-syslinux-add-ext_file_read-and-ext_file_write.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0004-linux-syslinux-add-ext_file_read-and-ext_file_write.patch
@@ -1,7 +1,7 @@
-From 35d3842cc4b930c5102eed2921e0189b7f4fd069 Mon Sep 17 00:00:00 2001
+From 1957fc6c069493c6789557936adb675f5e7e51ba Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Wed, 31 Dec 2014 16:43:37 +0800
-Subject: [PATCH 4/9] linux/syslinux: add ext_file_read() and ext_file_write()
+Subject: [PATCH] linux/syslinux: add ext_file_read() and ext_file_write()
 
 Will use them to read and write on the extX device.
 
@@ -10,14 +10,14 @@
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
 Tested-by: Du Dolpher <dolpher.du@intel.com>
 ---
- linux/syslinux.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ linux/syslinux.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 62 insertions(+)
 
 diff --git a/linux/syslinux.c b/linux/syslinux.c
-index 45f080d..247c86a 100755
+index fc5edb1..c7c1994 100755
 --- a/linux/syslinux.c
 +++ b/linux/syslinux.c
-@@ -349,6 +349,68 @@ fail:
+@@ -350,6 +350,68 @@ fail:
  
  }
  
@@ -86,6 +86,3 @@
  /*
   * Install the boot block on the specified device.
   * Must be run AFTER file installed.
--- 
-1.9.1
-
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0005-linux-syslinux-implement-handle_adv_on_ext.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0005-linux-syslinux-implement-handle_adv_on_ext.patch
index 829e7c4..76885f7 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0005-linux-syslinux-implement-handle_adv_on_ext.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0005-linux-syslinux-implement-handle_adv_on_ext.patch
@@ -1,7 +1,7 @@
-From cdb980b37f40dc2c41891434c7736e49da53756e Mon Sep 17 00:00:00 2001
+From ee3a60829edc9d3344dc872fb0158e7b006f02be Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Wed, 31 Dec 2014 16:47:52 +0800
-Subject: [PATCH 5/9] linux/syslinux: implement handle_adv_on_ext()
+Subject: [PATCH] linux/syslinux: implement handle_adv_on_ext()
 
 It reads adv if found on the device, or resets syslinux_adv, or update
 the adv if update adv only.
@@ -11,14 +11,14 @@
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
 Tested-by: Du Dolpher <dolpher.du@intel.com>
 ---
- linux/syslinux.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ linux/syslinux.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 97 insertions(+)
 
 diff --git a/linux/syslinux.c b/linux/syslinux.c
-index 247c86a..de5d272 100755
+index c7c1994..90b8edd 100755
 --- a/linux/syslinux.c
 +++ b/linux/syslinux.c
-@@ -421,6 +421,103 @@ int install_bootblock(int fd, const char *device)
+@@ -422,6 +422,103 @@ int install_bootblock(int fd, const char *device)
  
  static int handle_adv_on_ext(void)
  {
@@ -122,6 +122,3 @@
  }
  
  /* Write files, adv, boot sector */
--- 
-1.9.1
-
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0006-linux-syslinux-implement-write_to_ext-and-add-syslin.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0006-linux-syslinux-implement-write_to_ext-and-add-syslin.patch
index cba8725..ba6d29d 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0006-linux-syslinux-implement-write_to_ext-and-add-syslin.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0006-linux-syslinux-implement-write_to_ext-and-add-syslin.patch
@@ -1,7 +1,7 @@
-From 922e56c10e36d876777580c84daef9a66bea6525 Mon Sep 17 00:00:00 2001
+From 758731ce2432ab29a73505bbeb99a960996ab686 Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Wed, 31 Dec 2014 17:20:43 +0800
-Subject: [PATCH 6/9] linux/syslinux: implement write_to_ext() and add
+Subject: [PATCH] linux/syslinux: implement write_to_ext() and add
  syslinuxext.c
 
 * The write_to_ext() write file to the extX device, and handle the boot
@@ -17,7 +17,7 @@
  libinstaller/syslinuxext.c |   7 +++
  libinstaller/syslinuxext.h |   5 ++
  linux/Makefile             |   3 +-
- linux/syslinux.c           | 118 +++++++++++++++++++++++++++++++++++++++++++++
+ linux/syslinux.c           | 118 +++++++++++++++++++++++++++++++++++++
  4 files changed, 132 insertions(+), 1 deletion(-)
  create mode 100644 libinstaller/syslinuxext.c
  create mode 100644 libinstaller/syslinuxext.h
@@ -47,10 +47,10 @@
 +
 +void syslinux_patch_bootsect(int dev_fd);
 diff --git a/linux/Makefile b/linux/Makefile
-index ac1ac58..3b23867 100644
+index 67cbbb4..567134c 100644
 --- a/linux/Makefile
 +++ b/linux/Makefile
-@@ -30,7 +30,8 @@ SRCS     = syslinux.c \
+@@ -31,7 +31,8 @@ SRCS     = syslinux.c \
             ../libinstaller/syslxmod.c \
  	   ../libinstaller/bootsect_bin.c \
  	   ../libinstaller/ldlinuxc32_bin.c \
@@ -61,7 +61,7 @@
  
  .SUFFIXES: .c .o .i .s .S
 diff --git a/linux/syslinux.c b/linux/syslinux.c
-index de5d272..f0c97a8 100755
+index 90b8edd..7a20fe6 100755
 --- a/linux/syslinux.c
 +++ b/linux/syslinux.c
 @@ -46,6 +46,7 @@
@@ -72,7 +72,7 @@
  
  #include "linuxioctl.h"
  
-@@ -72,6 +73,7 @@
+@@ -73,6 +74,7 @@
  #include "syslxfs.h"
  #include "setadv.h"
  #include "syslxopt.h" /* unified options */
@@ -80,7 +80,7 @@
  #include <ext2fs/ext2fs.h>
  
  extern const char *program;	/* Name of program */
-@@ -419,6 +421,12 @@ int install_bootblock(int fd, const char *device)
+@@ -420,6 +422,12 @@ int install_bootblock(int fd, const char *device)
  {
  }
  
@@ -93,7 +93,7 @@
  static int handle_adv_on_ext(void)
  {
      int                 i, retval, found_file;
-@@ -524,6 +532,116 @@ fail:
+@@ -525,6 +533,116 @@ fail:
  static int write_to_ext(const char *filename, const char *str, int length,
                          int i_flags, int dev_fd, const char *subdir)
  {
@@ -210,6 +210,3 @@
  }
  
  /* The install func for ext2, ext3 and ext4 */
--- 
-1.9.1
-
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0007-linux-syslinux-implement-ext_construct_sectmap_fs.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0007-linux-syslinux-implement-ext_construct_sectmap_fs.patch
index 3913811..57cdaf4 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0007-linux-syslinux-implement-ext_construct_sectmap_fs.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0007-linux-syslinux-implement-ext_construct_sectmap_fs.patch
@@ -1,7 +1,7 @@
-From a95b831e18dd123f859bc5e6c4cecdcc0184ee37 Mon Sep 17 00:00:00 2001
+From 906205015601d5d1190e7326f51ea4316a74a479 Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Fri, 2 Jan 2015 12:18:02 +0800
-Subject: [PATCH 7/9] linux/syslinux: implement ext_construct_sectmap_fs()
+Subject: [PATCH] linux/syslinux: implement ext_construct_sectmap_fs()
 
 The ext_construct_sectmap_fs() constucts the sector according to the
 bmap.
@@ -11,14 +11,14 @@
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
 Tested-by: Du Dolpher <dolpher.du@intel.com>
 ---
- linux/syslinux.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
+ linux/syslinux.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 50 insertions(+)
 
 diff --git a/linux/syslinux.c b/linux/syslinux.c
-index f0c97a8..c741750 100755
+index 7a20fe6..4e43921 100755
 --- a/linux/syslinux.c
 +++ b/linux/syslinux.c
-@@ -421,10 +421,60 @@ int install_bootblock(int fd, const char *device)
+@@ -422,10 +422,60 @@ int install_bootblock(int fd, const char *device)
  {
  }
  
@@ -79,6 +79,3 @@
  }
  
  static int handle_adv_on_ext(void)
--- 
-1.9.1
-
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0008-libinstaller-syslinuxext-implement-syslinux_patch_bo.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0008-libinstaller-syslinuxext-implement-syslinux_patch_bo.patch
index f1d01fa..b026eba 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0008-libinstaller-syslinuxext-implement-syslinux_patch_bo.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0008-libinstaller-syslinuxext-implement-syslinux_patch_bo.patch
@@ -1,4 +1,4 @@
-From efce87e5ab98664c57e5f4e3955a2f3747df5737 Mon Sep 17 00:00:00 2001
+From acfc8214d3d60b7e251ae66a59b81cdd1ff7a6dc Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Fri, 2 Jan 2015 12:26:46 +0800
 Subject: [PATCH] libinstaller/syslinuxext: implement syslinux_patch_bootsect()
@@ -22,7 +22,7 @@
  3 files changed, 176 insertions(+), 165 deletions(-)
 
 diff --git a/extlinux/Makefile b/extlinux/Makefile
-index 1721ee54..62a49728 100644
+index 1721ee5..62a4972 100644
 --- a/extlinux/Makefile
 +++ b/extlinux/Makefile
 @@ -32,7 +32,8 @@ SRCS     = main.c \
@@ -36,7 +36,7 @@
  
  .SUFFIXES: .c .o .i .s .S
 diff --git a/extlinux/main.c b/extlinux/main.c
-index ebff7eae..9add50fb 100644
+index ebff7ea..9add50f 100644
 --- a/extlinux/main.c
 +++ b/extlinux/main.c
 @@ -62,6 +62,7 @@
@@ -244,7 +244,7 @@
      /* Construct the boot file map */
  
 diff --git a/libinstaller/syslinuxext.c b/libinstaller/syslinuxext.c
-index bb54cefc..9ae82884 100644
+index bb54cef..9ae8288 100644
 --- a/libinstaller/syslinuxext.c
 +++ b/libinstaller/syslinuxext.c
 @@ -1,7 +1,178 @@
@@ -426,6 +426,3 @@
 +    set_32(&sbs->bsHiddenSecs, geo.start);
  }
  
--- 
-2.17.1
-
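For context on the hunk above: syslinux_patch_bootsect() ends by writing the partition's start sector into the boot sector (set_32(&sbs->bsHiddenSecs, geo.start)). A minimal, illustrative sketch of how that start sector is typically obtained on Linux via the HDIO_GETGEO ioctl follows; it is not part of the patch, and the device path is only an example.

    /* Illustrative sketch only: reading a partition's start sector with
     * HDIO_GETGEO, the value the hunk above stores into bsHiddenSecs. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/hdreg.h>        /* struct hd_geometry, HDIO_GETGEO */

    int main(int argc, char *argv[])
    {
        struct hd_geometry geo;
        int fd;

        if (argc < 2)
            return 1;
        fd = open(argv[1], O_RDONLY);        /* e.g. /dev/sda1 */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (ioctl(fd, HDIO_GETGEO, &geo) < 0) {
            perror("HDIO_GETGEO");
            return 1;
        }
        /* geo.start: offset of the partition from the start of the disk,
         * in 512-byte sectors */
        printf("partition starts at sector %lu\n", geo.start);
        close(fd);
        return 0;
    }
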
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0009-linux-syslinux-implement-install_bootblock.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0009-linux-syslinux-implement-install_bootblock.patch
index cd89d92..1c875e8 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0009-linux-syslinux-implement-install_bootblock.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0009-linux-syslinux-implement-install_bootblock.patch
@@ -1,7 +1,7 @@
-From 76c465e87312dbc6cffd05427f1f4d2ebdee4f13 Mon Sep 17 00:00:00 2001
+From c28aae8bd381f77e66e6bac79761df7a484b054c Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Fri, 2 Jan 2015 12:28:35 +0800
-Subject: [PATCH 9/9] linux/syslinux: implement install_bootblock()
+Subject: [PATCH] linux/syslinux: implement install_bootblock()
 
 Refer to the install_bootblock() in extlinux/main.c to make
 linux/syslinux.c's install_bootblock() which only supports ext2/3/4.
@@ -15,10 +15,10 @@
  1 file changed, 20 insertions(+)
 
 diff --git a/linux/syslinux.c b/linux/syslinux.c
-index c741750..917f83a 100755
+index 4e43921..93ed880 100755
 --- a/linux/syslinux.c
 +++ b/linux/syslinux.c
-@@ -419,6 +419,26 @@ static int ext_file_write(ext2_file_t e2_file, const void *buf, size_t count,
+@@ -420,6 +420,26 @@ static int ext_file_write(ext2_file_t e2_file, const void *buf, size_t count,
   */
  int install_bootblock(int fd, const char *device)
  {
@@ -45,6 +45,3 @@
  }
  
  /* The file's block count */
--- 
-1.9.1
-
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0010-Workaround-multiple-definition-of-symbol-errors.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0010-Workaround-multiple-definition-of-symbol-errors.patch
index 44cb153..813d10b 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux/0010-Workaround-multiple-definition-of-symbol-errors.patch
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0010-Workaround-multiple-definition-of-symbol-errors.patch
@@ -1,13 +1,12 @@
-From 951928f2cad5682c2844e6bd0f7201236c5d9b66 Mon Sep 17 00:00:00 2001
+From f2a5b64785958226c022cac9931b059b98f4e896 Mon Sep 17 00:00:00 2001
 From: Merlin Mathesius <mmathesi@redhat.com>
 Date: Wed, 13 May 2020 08:02:27 -0500
 Subject: [PATCH] Workaround multiple definition of symbol errors
 
 Lifted from Fedora https://src.fedoraproject.org/rpms/syslinux/blob/master/f/0005-Workaround-multiple-definition-of-symbol-errors.patch
 
-Upstream-Status: Pending
+Upstream-Status: Inactive-Upstream
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
 ---
  com32/cmenu/Makefile           | 2 +-
  com32/elflink/ldlinux/Makefile | 2 +-
@@ -18,6 +17,8 @@
  efi/Makefile                   | 2 +-
  7 files changed, 7 insertions(+), 7 deletions(-)
 
+diff --git a/com32/cmenu/Makefile b/com32/cmenu/Makefile
+index b81b68e..2ae989c 100644
 --- a/com32/cmenu/Makefile
 +++ b/com32/cmenu/Makefile
 @@ -49,7 +49,7 @@ makeoutputdirs:
@@ -29,6 +30,8 @@
  		-o $@ $^
  
  tidy dist:
+diff --git a/com32/elflink/ldlinux/Makefile b/com32/elflink/ldlinux/Makefile
+index 87c0d36..2be2a01 100644
 --- a/com32/elflink/ldlinux/Makefile
 +++ b/com32/elflink/ldlinux/Makefile
 @@ -33,7 +33,7 @@ endif
@@ -40,6 +43,8 @@
  
  LNXCFLAGS += -D__export='__attribute__((visibility("default")))'
  LNXLIBOBJS = get_key.lo
+diff --git a/com32/gpllib/Makefile b/com32/gpllib/Makefile
+index 1fec914..2d764d0 100644
 --- a/com32/gpllib/Makefile
 +++ b/com32/gpllib/Makefile
 @@ -24,7 +24,7 @@ makeoutputdirs:
@@ -51,6 +56,8 @@
  
  tidy dist clean:
  	find . \( -name \*.o -o -name .\*.d -o -name \*.tmp \) -print0 | \
+diff --git a/com32/hdt/Makefile b/com32/hdt/Makefile
+index 61736d0..1d94785 100644
 --- a/com32/hdt/Makefile
 +++ b/com32/hdt/Makefile
 @@ -52,7 +52,7 @@ QEMU			?= qemu-kvm
@@ -62,6 +69,8 @@
  
  memtest:
  	-[ ! -f $(FLOPPY_DIR)/$(MEMTEST) ] && $(WGET) $(MEMTEST_URL) -O $(FLOPPY_DIR)/$(MEMTEST)
+diff --git a/core/Makefile b/core/Makefile
+index 50ff35a..f0a5562 100644
 --- a/core/Makefile
 +++ b/core/Makefile
 @@ -156,7 +156,7 @@ LDSCRIPT = $(SRC)/$(ARCH)/syslinux.ld
@@ -73,6 +82,8 @@
  		-T $(LDSCRIPT) \
  		--unresolved-symbols=report-all \
  		-E --hash-style=gnu -M -o $@ $< \
+diff --git a/dos/Makefile b/dos/Makefile
+index 4c930d1..5d1c72c 100644
 --- a/dos/Makefile
 +++ b/dos/Makefile
 @@ -19,7 +19,7 @@ include $(MAKEDIR)/embedded.mk
@@ -84,6 +95,8 @@
  OPTFLAGS = -g
  INCLUDES = -include code16.h -nostdinc -iwithprefix include \
  	   -I$(SRC) -I$(SRC)/.. -I$(SRC)/../libfat \
+diff --git a/efi/Makefile b/efi/Makefile
+index f4501e7..72e081e 100644
 --- a/efi/Makefile
 +++ b/efi/Makefile
 @@ -71,7 +71,7 @@ $(OBJS): | $(OBJ)/$(ARCH)
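The workaround above touches linker invocations in several Makefiles. The hunks do not show the root cause, but the usual source of "multiple definition of symbol" link errors with current toolchains is GCC 10's switch from -fcommon to -fno-common. A hypothetical illustration (these files are not from syslinux) is sketched below.

    /* Illustrative only: how "multiple definition of `counter'" appears once
     * GCC 10+ defaults to -fno-common, so tentative definitions repeated
     * through a header are no longer merged at link time. */

    /* shared.h */
    int counter;                     /* a definition; every includer defines it */

    /* a.c */
    #include "shared.h"
    void bump(void) { counter++; }

    /* b.c */
    #include "shared.h"
    void bump(void);
    int main(void) { bump(); return counter; }

    /* $ gcc a.c b.c          -> multiple definition of `counter' (GCC >= 10)
     * $ gcc -fcommon a.c b.c -> links again (workaround style); the proper
     *   fix is `extern int counter;` in the header plus one definition in a
     *   single .c file. */
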
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0011-install-don-t-install-obsolete-file-com32.ld.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0011-install-don-t-install-obsolete-file-com32.ld.patch
new file mode 100644
index 0000000..4bc423a
--- /dev/null
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0011-install-don-t-install-obsolete-file-com32.ld.patch
@@ -0,0 +1,29 @@
+From 66447f7c5c6996481ebd68ce8224d3de7525aad8 Mon Sep 17 00:00:00 2001
+From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
+Date: Wed, 6 Feb 2019 11:30:51 -0800
+Subject: [PATCH] install: don't install obsolete file com32.ld
+
+com32.ld has been obsolete for a long time, and has been removed now;
+don't install it either.
+
+Reported-by: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
+Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
+
+Upstream-Status: Backport
+Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
+---
+ com32/lib/Makefile | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/com32/lib/Makefile b/com32/lib/Makefile
+index 74fff14..6a93149 100644
+--- a/com32/lib/Makefile
++++ b/com32/lib/Makefile
+@@ -113,7 +113,6 @@ spotless: clean
+ 
+ install: all
+ 	mkdir -m 755 -p $(INSTALLROOT)$(COM32DIR)
+-	install -m 644 $(SRC)/com32.ld $(INSTALLROOT)$(COM32DIR)
+ 	-rm -rf $(INSTALLROOT)$(COM32DIR)/include
+ 	cp -r $(SRC)/../include $(INSTALLROOT)$(COM32DIR)
+ 
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0012-libinstaller-Fix-build-with-glibc-2.36.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0012-libinstaller-Fix-build-with-glibc-2.36.patch
new file mode 100644
index 0000000..21b83e4
--- /dev/null
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0012-libinstaller-Fix-build-with-glibc-2.36.patch
@@ -0,0 +1,56 @@
+From 821d31148c07a8318277be32bc6a943c7fd2ba3f Mon Sep 17 00:00:00 2001
+From: Martin Jansa <Martin.Jansa@gmail.com>
+Date: Sat, 6 Aug 2022 11:53:55 +0000
+Subject: [PATCH] libinstaller: Fix build with glibc-2.36
+
+* add only necessary definitions from linux/fs.h, because including whole
+  causes conflicts with sys/mount.h:
+  http://errors.yoctoproject.org/Errors/Details/664535/
+
+In file included from TOPDIR/tmp-glibc/work/core2-64-oe-linux/syslinux/6.04-pre2-r1/recipe-sysroot/usr/include/linux/fs.h:19,
+                 from TOPDIR/tmp-glibc/work/core2-64-oe-linux/syslinux/6.04-pre2-r1/syslinux-6.04-pre2/linux/../libinstaller/linuxioctl.h:19,
+                 from TOPDIR/tmp-glibc/work/core2-64-oe-linux/syslinux/6.04-pre2-r1/syslinux-6.04-pre2/linux/../libinstaller/syslxcom.c:34:
+TOPDIR/tmp-glibc/work/core2-64-oe-linux/syslinux/6.04-pre2-r1/recipe-sysroot/usr/include/linux/mount.h:95:6: error: redeclaration of 'enum fsconfig_command'
+   95 | enum fsconfig_command {
+      |      ^~~~~~~~~~~~~~~~
+In file included from TOPDIR/tmp-glibc/work/core2-64-oe-linux/syslinux/6.04-pre2-r1/syslinux-6.04-pre2/linux/../libinstaller/syslxcom.c:31:
+TOPDIR/tmp-glibc/work/core2-64-oe-linux/syslinux/6.04-pre2-r1/recipe-sysroot/usr/include/sys/mount.h:189:6: note: originally defined here
+  189 | enum fsconfig_command
+      |      ^~~~~~~~~~~~~~~~
+TOPDIR/tmp-glibc/work/core2-64-oe-linux/syslinux/6.04-pre2-r1/recipe-sysroot/usr/include/linux/mount.h:96:9: error: redeclaration of enumerator 'FSCONFIG_SET_FLAG'
+   96 |         FSCONFIG_SET_FLAG       = 0,    /* Set parameter, supplying no value */
+      |         ^~~~~~~~~~~~~~~~~
+...
+
+Upstream-Status: Inactive-Upstream
+Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
+---
+ libinstaller/linuxioctl.h | 15 ++++++++++++++-
+ 1 file changed, 14 insertions(+), 1 deletion(-)
+
+diff --git a/libinstaller/linuxioctl.h b/libinstaller/linuxioctl.h
+index e2731c7..f4a6703 100644
+--- a/libinstaller/linuxioctl.h
++++ b/libinstaller/linuxioctl.h
+@@ -16,7 +16,20 @@
+ #include <linux/fd.h>		/* Floppy geometry */
+ #include <linux/hdreg.h>	/* Hard disk geometry */
+ 
+-#include <linux/fs.h>		/* FIGETBSZ, FIBMAP, FS_IOC_* */
++// #include <linux/fs.h>		/* FIGETBSZ, FIBMAP, FS_IOC_* */
++// linux/fs.h unfortunately causes conflict with sys/mount.h since glibc-2.36
++// https://sourceware.org/glibc/wiki/Release/2.36#Usage_of_.3Clinux.2Fmount.h.3E_and_.3Csys.2Fmount.h.3E
++// add the necessary definitions
++
++#define FS_IOC_GETFLAGS                 _IOR('f', 1, long)
++#define FS_IOC_SETFLAGS                 _IOW('f', 2, long)
++#define FIBMAP	   _IO(0x00,1)	/* bmap access */
++#define FIGETBSZ   _IO(0x00,2)	/* get the block size used for bmap */
++#define FS_IMMUTABLE_FL			0x00000010 /* Immutable file */
++#define BLKGETSIZE _IO(0x12,96)	/* return device size /512 (long *arg) */
++
++// for musl we also need limits.h for PATH_MAX
++#include <linux/limits.h>
+ 
+ #undef SECTOR_SIZE		/* Defined in msdos_fs.h for no good reason */
+ #undef SECTOR_BITS
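The constants the new linuxioctl.h defines by hand (FIGETBSZ, FIBMAP, FS_IOC_*, BLKGETSIZE) are the block-mapping ioctls the installer relies on. For reference only, a minimal sketch of typical FIGETBSZ/FIBMAP usage is shown below; FIBMAP needs root, and including <linux/fs.h> is fine here because <sys/mount.h> is not also included, which is exactly the conflict the patch avoids.

    /* Illustrative sketch only: the FIGETBSZ/FIBMAP ioctls that the patch
     * above re-defines by hand instead of pulling in all of <linux/fs.h>. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>            /* FIGETBSZ, FIBMAP */

    int main(int argc, char *argv[])
    {
        int fd, blocksize = 0, block = 0;   /* logical block 0 in, physical block out */

        if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
            return 1;
        if (ioctl(fd, FIGETBSZ, &blocksize) < 0 ||
            ioctl(fd, FIBMAP, &block) < 0) {        /* FIBMAP needs CAP_SYS_RAWIO */
            perror("ioctl");
            return 1;
        }
        printf("block size %d, logical block 0 is physical block %d\n",
               blocksize, block);
        close(fd);
        return 0;
    }
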
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0013-remove-clean-script.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0013-remove-clean-script.patch
new file mode 100644
index 0000000..c0af7ef
--- /dev/null
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0013-remove-clean-script.patch
@@ -0,0 +1,27 @@
+From a11c8f88de6b6c42c805ba76e70532977bfd24bf Mon Sep 17 00:00:00 2001
+From: Saul Wold <sgw@linux.intel.com>
+Date: Wed, 10 Dec 2014 10:26:33 -0800
+Subject: [PATCH] remove clean script
+
+This script try to call git submodule, since we are downloading
+the tarball it seems in-correct to do this.
+
+Upstream-Status: Inappropriate [OE-Specific]
+Signed-off-by: Saul Wold <sgw@linux.intel.com>
+Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
+---
+ efi/Makefile | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/efi/Makefile b/efi/Makefile
+index 72e081e..3cfb3f6 100644
+--- a/efi/Makefile
++++ b/efi/Makefile
+@@ -102,7 +102,6 @@ tidy dist:
+ 	rm -f *.so *.o wrapper
+ 	find . \( -name \*.o -o -name \*.a -o -name .\*.d -o -name \*.tmp \) -print0 | \
+ 		xargs -0r rm -f
+-	$(topdir)/efi/clean-gnu-efi.sh $(EFI_SUBARCH) $(objdir)
+ 
+ clean: tidy
+ 
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/0014-Fix-reproducibility-issues.patch b/poky/meta/recipes-devtools/syslinux/syslinux/0014-Fix-reproducibility-issues.patch
new file mode 100644
index 0000000..bc48160
--- /dev/null
+++ b/poky/meta/recipes-devtools/syslinux/syslinux/0014-Fix-reproducibility-issues.patch
@@ -0,0 +1,32 @@
+From e49e86bd3199f51ada8a4a1d51aa8d627645279e Mon Sep 17 00:00:00 2001
+From: Richard Purdie <richard.purdie@linuxfoundation.org>
+Date: Sat, 27 Feb 2021 23:42:03 +0000
+Subject: [PATCH] Fix reproducibility issues
+
+In order to build deterministic binaries, we need to sort the wildcard expansion
+so the libraries are linked in the same order each time. This fixes reproducibility
+issues within syslinux builds.
+
+Upstream-Status: Inactive-Upstream
+RP 2021/3/1
+
+Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
+---
+ mk/lib.mk | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/mk/lib.mk b/mk/lib.mk
+index f3fb07c..815698c 100644
+--- a/mk/lib.mk
++++ b/mk/lib.mk
+@@ -130,8 +130,8 @@ LIBENTRY_OBJS = \
+ 	exit.o
+ 
+ LIBGCC_OBJS = \
+-	  $(patsubst $(com32)/lib/%.c,%.o,$(wildcard $(com32)/lib/$(ARCH)/libgcc/*.c)) \
+-	  $(patsubst $(com32)/lib/%.S,%.o,$(wildcard $(com32)/lib/$(ARCH)/libgcc/*.S))
++	  $(sort $(patsubst $(com32)/lib/%.c,%.o,$(wildcard $(com32)/lib/$(ARCH)/libgcc/*.c))) \
++	  $(sort $(patsubst $(com32)/lib/%.S,%.o,$(wildcard $(com32)/lib/$(ARCH)/libgcc/*.S)))
+ 
+ LIBCONSOLE_OBJS = \
+ 	\
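The new 0014 patch sorts the $(wildcard ...) expansion so the libgcc objects always link in the same order. The underlying issue is that directory enumeration order is filesystem-dependent; purely as an aside (not related to the syslinux sources), the same idea in C is to sort scandir() output rather than trust readdir() order.

    /* Illustrative sketch only: deterministic directory listing by sorting,
     * mirroring the $(sort $(wildcard ...)) change in mk/lib.mk. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <dirent.h>

    int main(void)
    {
        struct dirent **entries;
        int n = scandir(".", &entries, NULL, alphasort);  /* sorted, reproducible */

        if (n < 0)
            return 1;
        for (int i = 0; i < n; i++) {
            printf("%s\n", entries[i]->d_name);
            free(entries[i]);
        }
        free(entries);
        return 0;
    }
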
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/determinism.patch b/poky/meta/recipes-devtools/syslinux/syslinux/determinism.patch
deleted file mode 100644
index 2fb8c64..0000000
--- a/poky/meta/recipes-devtools/syslinux/syslinux/determinism.patch
+++ /dev/null
@@ -1,22 +0,0 @@
-In order to build deterministic binaries, we need to sort the wildcard expansion
-so the libraries are linked in the same order each time. This fixes reproducibility
-issues within syslinux builds.
-
-Upstream-Status: Pending
-RP 2021/3/1
-
-Index: syslinux-6.04-pre2/mk/lib.mk
-===================================================================
---- syslinux-6.04-pre2.orig/mk/lib.mk
-+++ syslinux-6.04-pre2/mk/lib.mk
-@@ -130,8 +130,8 @@ LIBENTRY_OBJS = \
- 	exit.o
- 
- LIBGCC_OBJS = \
--	  $(patsubst $(com32)/lib/%.c,%.o,$(wildcard $(com32)/lib/$(ARCH)/libgcc/*.c)) \
--	  $(patsubst $(com32)/lib/%.S,%.o,$(wildcard $(com32)/lib/$(ARCH)/libgcc/*.S))
-+	  $(sort $(patsubst $(com32)/lib/%.c,%.o,$(wildcard $(com32)/lib/$(ARCH)/libgcc/*.c))) \
-+	  $(sort $(patsubst $(com32)/lib/%.S,%.o,$(wildcard $(com32)/lib/$(ARCH)/libgcc/*.S)))
- 
- LIBCONSOLE_OBJS = \
- 	\
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux/syslinux-remove-clean-script.patch b/poky/meta/recipes-devtools/syslinux/syslinux/syslinux-remove-clean-script.patch
deleted file mode 100644
index 7c003e1..0000000
--- a/poky/meta/recipes-devtools/syslinux/syslinux/syslinux-remove-clean-script.patch
+++ /dev/null
@@ -1,17 +0,0 @@
-This script try to call git submodule, since we are downloading
-the tarball it seems in-correct to do this.
-
-Upstream-Status: Inappropriate [OE-Specific]
-Signed-off-by: Saul Wold <sgw@linux.intel.com>
-Index: syslinux-6.03/efi/Makefile
-===================================================================
---- syslinux-6.03.orig/efi/Makefile
-+++ syslinux-6.03/efi/Makefile
-@@ -101,7 +101,6 @@ tidy dist:
- 	rm -f *.so *.o wrapper
- 	find . \( -name \*.o -o -name \*.a -o -name .\*.d -o -name \*.tmp \) -print0 | \
- 		xargs -0r rm -f
--	$(topdir)/efi/clean-gnu-efi.sh $(EFI_SUBARCH) $(objdir)
- 
- clean: tidy
- 
diff --git a/poky/meta/recipes-devtools/syslinux/syslinux_6.04-pre2.bb b/poky/meta/recipes-devtools/syslinux/syslinux_6.04-pre2.bb
index 0e4a23c..5604901 100644
--- a/poky/meta/recipes-devtools/syslinux/syslinux_6.04-pre2.bb
+++ b/poky/meta/recipes-devtools/syslinux/syslinux_6.04-pre2.bb
@@ -8,7 +8,6 @@
 DEPENDS = "nasm-native util-linux e2fsprogs"
 
 SRC_URI = "https://www.zytor.com/pub/syslinux/Testing/6.04/syslinux-${PV}.tar.xz \
-           file://syslinux-remove-clean-script.patch \
            file://0001-linux-syslinux-support-ext2-3-4-device.patch \
            file://0002-linux-syslinux-implement-open_ext2_fs.patch \
            file://0003-linux-syslinux-implement-install_to_ext2.patch \
@@ -19,9 +18,11 @@
            file://0008-libinstaller-syslinuxext-implement-syslinux_patch_bo.patch \
            file://0009-linux-syslinux-implement-install_bootblock.patch \
            file://0010-Workaround-multiple-definition-of-symbol-errors.patch \
-           file://0001-install-don-t-install-obsolete-file-com32.ld.patch \
-           file://determinism.patch \
-           "
+           file://0011-install-don-t-install-obsolete-file-com32.ld.patch \
+           file://0012-libinstaller-Fix-build-with-glibc-2.36.patch \
+           file://0013-remove-clean-script.patch \
+           file://0014-Fix-reproducibility-issues.patch \
+"
 
 SRC_URI[md5sum] = "2b31c78f087f99179feb357da312d7ec"
 SRC_URI[sha256sum] = "4441a5d593f85bb6e8d578cf6653fb4ec30f9e8f4a2315a3d8f2d0a8b3fadf94"
diff --git a/poky/meta/recipes-devtools/valgrind/valgrind/Added-support-for-PPC-instructions-mfatbu-mfatbl.patch b/poky/meta/recipes-devtools/valgrind/valgrind/Added-support-for-PPC-instructions-mfatbu-mfatbl.patch
index 07774f3..51cd353 100644
--- a/poky/meta/recipes-devtools/valgrind/valgrind/Added-support-for-PPC-instructions-mfatbu-mfatbl.patch
+++ b/poky/meta/recipes-devtools/valgrind/valgrind/Added-support-for-PPC-instructions-mfatbu-mfatbl.patch
@@ -3,14 +3,14 @@
 Date: Mon, 21 Nov 2011 17:31:39 +0530
 Subject: [PATCH] Added support for PPC instructions mfatbu, mfatbl.
 
-Upstream-Status: Pending
-
-Signed-off-by: Aneesh Bansal <aneesh.bansal@freescale.com>
----
 Currently Valgrind 3.7.0 does not have support for PPC instructions mfatbu and mfatbl. When we run a USDPAA application with VALGRIND, the following error is given by valgrind :
 dis_proc_ctl(ppc)(mfspr,SPR)(0x20F)
 disInstr(ppc): unhandled instruction: 0x7C0F82A6
 
+Upstream-Status: Submitted [https://bugs.kde.org/show_bug.cgi?id=289836]
+
+Signed-off-by: Aneesh Bansal <aneesh.bansal@freescale.com>
+---
 
  VEX/priv/guest_ppc_defs.h    |    2 ++
  VEX/priv/guest_ppc_helpers.c |   18 ++++++++++++++++++
diff --git a/poky/meta/recipes-devtools/valgrind/valgrind/remove-for-all b/poky/meta/recipes-devtools/valgrind/valgrind/remove-for-all
index cb8d10b..a26837d 100644
--- a/poky/meta/recipes-devtools/valgrind/valgrind/remove-for-all
+++ b/poky/meta/recipes-devtools/valgrind/valgrind/remove-for-all
@@ -4,5 +4,6 @@
 helgrind/tests/tls_threads
 drd/tests/bar_bad_xml
 drd/tests/pth_barrier_thr_cr
+drd/tests/std_thread2
 drd/tests/thread_name_xml
 massif/tests/deep-D
diff --git a/poky/meta/recipes-devtools/valgrind/valgrind_3.19.0.bb b/poky/meta/recipes-devtools/valgrind/valgrind_3.19.0.bb
index 69915de..4b21b74 100644
--- a/poky/meta/recipes-devtools/valgrind/valgrind_3.19.0.bb
+++ b/poky/meta/recipes-devtools/valgrind/valgrind_3.19.0.bb
@@ -239,8 +239,8 @@
 
     # As the binary isn't stripped or debug-splitted, the source file isn't fetched
     # via dwarfsrcfiles either, so it needs to be installed manually.
-    mkdir -p ${D}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/${BP}/none/tests/
-    install ${S}/none/tests/tls.c ${D}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/${BP}/none/tests/
+    mkdir -p ${D}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/none/tests/
+    install ${S}/none/tests/tls.c ${D}/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}/none/tests/
 }
 
 # avoid stripping some generated binaries otherwise some of the tests will fail
diff --git a/poky/meta/recipes-devtools/xmlto/xmlto-0.0.28/configure.in-drop-the-test-of-xmllint-and-xsltproc.patch b/poky/meta/recipes-devtools/xmlto/xmlto-0.0.28/configure.in-drop-the-test-of-xmllint-and-xsltproc.patch
deleted file mode 100644
index 6d547a6..0000000
--- a/poky/meta/recipes-devtools/xmlto/xmlto-0.0.28/configure.in-drop-the-test-of-xmllint-and-xsltproc.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-configure.in: drop the test of xmllint and xsltproc
-
-The test is unnecessary, the xmllint and xsltproc were explicitly
-added to RDEPENDS.
-
-Upstream-Status: Inappropriate
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- configure.in | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/configure.in b/configure.in
---- a/configure.in
-+++ b/configure.in
-@@ -42,10 +42,10 @@ AC_ARG_VAR([LOCALE], [Name and path of the `locale' program.])
- AC_PATH_PROG([LOCALE], [locale], [locale])
- 
- AC_ARG_VAR([XMLLINT], [Name and path of the `xmllint' program.])
--AC_PATH_PROG([XMLLINT], [xmllint], [xmllint])
-+dnl AC_PATH_PROG([XMLLINT], [xmllint], [xmllint])
- 
- AC_ARG_VAR([XSLTPROC], [Name and path of the `xsltproc' program.])
--AC_PATH_PROG([XSLTPROC], [xsltproc], [xsltproc])
-+dnl AC_PATH_PROG([XSLTPROC], [xsltproc], [xsltproc])
- 
- dnl
- dnl toolchains
--- 
-1.8.1.2
-
diff --git a/poky/meta/recipes-devtools/xmlto/xmlto/0001-Skip-validating-xmlto-output.patch b/poky/meta/recipes-devtools/xmlto/xmlto/0001-Skip-validating-xmlto-output.patch
new file mode 100644
index 0000000..c6857a9
--- /dev/null
+++ b/poky/meta/recipes-devtools/xmlto/xmlto/0001-Skip-validating-xmlto-output.patch
@@ -0,0 +1,29 @@
+From 3deb7a0eded04ab08a9cb2d88526cb1c7b440061 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 14 Aug 2022 00:23:29 -0700
+Subject: [PATCH] Skip validating xmlto output
+
+Avoids network access
+
+Upstream-Status: Submitted [https://pagure.io/xmlto/pull-request/11]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makefile.am | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index 50fa279..6a2da62 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -68,7 +68,7 @@ EXTRA_DIST = xmlto.spec \
+ 	doc/xmlif.xml \
+ 	xmlto.mak
+ 
+-GEN_MANPAGE = FORMAT_DIR=$(top_srcdir)/format $(BASH) ./xmlto -o $(@D) man $<
++GEN_MANPAGE = FORMAT_DIR=$(top_srcdir)/format $(BASH) ./xmlto --skip-validation -o $(@D) man $<
+ man/man1/xmlto.1: doc/xmlto.xml ; $(GEN_MANPAGE)
+ man/man1/xmlif.1: doc/xmlif.xml ; $(GEN_MANPAGE)
+ 
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-devtools/xmlto/xmlto/configure.in-drop-the-test-of-xmllint-and-xsltproc.patch b/poky/meta/recipes-devtools/xmlto/xmlto/configure.in-drop-the-test-of-xmllint-and-xsltproc.patch
new file mode 100644
index 0000000..7cc3cbe
--- /dev/null
+++ b/poky/meta/recipes-devtools/xmlto/xmlto/configure.in-drop-the-test-of-xmllint-and-xsltproc.patch
@@ -0,0 +1,30 @@
+configure.in: drop the test of xmllint and xsltproc
+
+The test is unnecessary, the xmllint and xsltproc were explicitly
+added to RDEPENDS.
+
+Upstream-Status: Inappropriate
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ configure.in | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+--- a/configure.ac
++++ b/configure.ac
+@@ -42,10 +42,10 @@ AC_ARG_VAR([LOCALE], [Name and path of the `locale' program.])
+ AC_PATH_PROG([LOCALE], [locale], [locale])
+ 
+ AC_ARG_VAR([XMLLINT], [Name and path of the `xmllint' program.])
+-AC_PATH_PROG([XMLLINT], [xmllint], [xmllint])
++dnl AC_PATH_PROG([XMLLINT], [xmllint], [xmllint])
+ 
+ AC_ARG_VAR([XSLTPROC], [Name and path of the `xsltproc' program.])
+-AC_PATH_PROG([XSLTPROC], [xsltproc], [xsltproc])
++dnl AC_PATH_PROG([XSLTPROC], [xsltproc], [xsltproc])
+ 
+ dnl
+ dnl toolchains
+-- 
+1.8.1.2
+
diff --git a/poky/meta/recipes-devtools/xmlto/xmlto_0.0.28.bb b/poky/meta/recipes-devtools/xmlto/xmlto_0.0.28.bb
index 5cb9a4c..373eca2 100644
--- a/poky/meta/recipes-devtools/xmlto/xmlto_0.0.28.bb
+++ b/poky/meta/recipes-devtools/xmlto/xmlto_0.0.28.bb
@@ -6,17 +6,21 @@
 
 LIC_FILES_CHKSUM = "file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552"
 
-SRC_URI = "https://releases.pagure.org/xmlto/xmlto-${PV}.tar.gz \
+SRCREV = "6fa6a0e07644f20abf2596f78a60112713e11cbe"
+UPSTREAM_CHECK_COMMITS = "1"
+SRC_URI = "git://pagure.io/xmlto.git;protocol=https;branch=master \
            file://configure.in-drop-the-test-of-xmllint-and-xsltproc.patch \
+           file://0001-Skip-validating-xmlto-output.patch \
 "
-SRC_URI[md5sum] = "a1fefad9d83499a15576768f60f847c6"
-SRC_URI[sha256sum] = "2f986b7c9a0e9ac6728147668e776d405465284e13c74d4146c9cbc51fd8aad3"
+S = "${WORKDIR}/git"
+
+PV .= "+0.0.29+git${SRCPV}"
 
 inherit autotools
 
 CLEANBROKEN = "1"
 
-DEPENDS = "libxml2-native"
+DEPENDS = "libxml2-native libxslt-native flex-native docbook-xml-dtd4-native docbook-xsl-stylesheets-native"
 
 RDEPENDS:${PN} = "docbook-xml-dtd4 \
                   docbook-xsl-stylesheets \
@@ -36,6 +40,10 @@
 
 EXTRA_OECONF:append = " BASH=/bin/bash GCP=/bin/cp XMLLINT=xmllint XSLTPROC=xsltproc"
 
+do_configure:prepend() {
+    (cd ${S} && flex -o xmlif/xmlif.c xmlif/xmlif.l)
+}
+
 do_install:append:class-native() {
     create_wrapper ${D}${bindir}/xmlto XML_CATALOG_FILES=${sysconfdir}/xml/catalog
 }
diff --git a/poky/meta/recipes-extended/cracklib/cracklib/0001-rules-Drop-using-register-keyword.patch b/poky/meta/recipes-extended/cracklib/cracklib/0001-rules-Drop-using-register-keyword.patch
new file mode 100644
index 0000000..a844665
--- /dev/null
+++ b/poky/meta/recipes-extended/cracklib/cracklib/0001-rules-Drop-using-register-keyword.patch
@@ -0,0 +1,278 @@
+From fe49471cfa7fe0618615c065f4c0ad04e888bf92 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 7 Aug 2022 12:24:39 -0700
+Subject: [PATCH 1/2] rules: Drop using register keyword
+
+This is a deprecated keyword
+
+Upstream-Status: Submitted [https://github.com/cracklib/cracklib/pull/48]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/lib/rules.c | 94 ++++++++++++++++++++++++-------------------------
+ 1 file changed, 47 insertions(+), 47 deletions(-)
+
+diff --git a/lib/rules.c b/lib/rules.c
+index 3a2aa46..6e7a12a 100644
+--- a/lib/rules.c
++++ b/lib/rules.c
+@@ -67,8 +67,8 @@ Suffix(myword, suffix)
+     char *myword;
+     char *suffix;
+ {
+-    register int i;
+-    register int j;
++    int i;
++    int j;
+     i = strlen(myword);
+     j = strlen(suffix);
+ 
+@@ -83,10 +83,10 @@ Suffix(myword, suffix)
+ 
+ char *
+ Reverse(str)			/* return a pointer to a reversal */
+-    register char *str;
++    char *str;
+ {
+-    register int i;
+-    register int j;
++    int i;
++    int j;
+     static char area[STRINGSIZE];
+     j = i = strlen(str);
+     while (*str)
+@@ -99,9 +99,9 @@ Reverse(str)			/* return a pointer to a reversal */
+ 
+ char *
+ Uppercase(str)			/* return a pointer to an uppercase */
+-    register char *str;
++    char *str;
+ {
+-    register char *ptr;
++    char *ptr;
+     static char area[STRINGSIZE];
+     ptr = area;
+     while (*str)
+@@ -116,9 +116,9 @@ Uppercase(str)			/* return a pointer to an uppercase */
+ 
+ char *
+ Lowercase(str)			/* return a pointer to an lowercase */
+-    register char *str;
++    char *str;
+ {
+-    register char *ptr;
++    char *ptr;
+     static char area[STRINGSIZE];
+     ptr = area;
+     while (*str)
+@@ -133,9 +133,9 @@ Lowercase(str)			/* return a pointer to an lowercase */
+ 
+ char *
+ Capitalise(str)			/* return a pointer to an capitalised */
+-    register char *str;
++    char *str;
+ {
+-    register char *ptr;
++    char *ptr;
+     static char area[STRINGSIZE];
+     ptr = area;
+ 
+@@ -152,9 +152,9 @@ Capitalise(str)			/* return a pointer to an capitalised */
+ 
+ char *
+ Pluralise(string)		/* returns a pointer to a plural */
+-    register char *string;
++    char *string;
+ {
+-    register int length;
++    int length;
+     static char area[STRINGSIZE];
+     length = strlen(string);
+     strcpy(area, string);
+@@ -193,11 +193,11 @@ Pluralise(string)		/* returns a pointer to a plural */
+ 
+ char *
+ Substitute(string, old, new)	/* returns pointer to a swapped about copy */
+-    register char *string;
+-    register char old;
+-    register char new;
++    char *string;
++    char old;
++    char new;
+ {
+-    register char *ptr;
++    char *ptr;
+     static char area[STRINGSIZE];
+     ptr = area;
+     while (*string)
+@@ -211,11 +211,11 @@ Substitute(string, old, new)	/* returns pointer to a swapped about copy */
+ 
+ char *
+ Purge(string, target)		/* returns pointer to a purged copy */
+-    register char *string;
+-    register char target;
++    char *string;
++    char target;
+ {
+-    register char *ptr;
+-    static char area[STRINGSIZE];
++    char *ptr;
++    char area[STRINGSIZE];
+     ptr = area;
+     while (*string)
+     {
+@@ -238,11 +238,11 @@ Purge(string, target)		/* returns pointer to a purged copy */
+ 
+ int
+ MatchClass(class, input)
+-    register char class;
+-    register char input;
++    char class;
++    char input;
+ {
+-    register char c;
+-    register int retval;
++    char c;
++    int retval;
+     retval = 0;
+ 
+     switch (class)
+@@ -357,8 +357,8 @@ MatchClass(class, input)
+ 
+ char *
+ PolyStrchr(string, class)
+-    register char *string;
+-    register char class;
++    char *string;
++    char class;
+ {
+     while (*string)
+     {
+@@ -373,11 +373,11 @@ PolyStrchr(string, class)
+ 
+ char *
+ PolySubst(string, class, new)	/* returns pointer to a swapped about copy */
+-    register char *string;
+-    register char class;
+-    register char new;
++    char *string;
++    char class;
++    char new;
+ {
+-    register char *ptr;
++    char *ptr;
+     static char area[STRINGSIZE];
+     ptr = area;
+     while (*string)
+@@ -391,10 +391,10 @@ PolySubst(string, class, new)	/* returns pointer to a swapped about copy */
+ 
+ char *
+ PolyPurge(string, class)	/* returns pointer to a purged copy */
+-    register char *string;
+-    register char class;
++    char *string;
++    char class;
+ {
+-    register char *ptr;
++    char *ptr;
+     static char area[STRINGSIZE];
+     ptr = area;
+     while (*string)
+@@ -433,7 +433,7 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+     char *control;
+ {
+     int limit;
+-    register char *ptr;
++    char *ptr;
+     static char area[STRINGSIZE * 2] = {0};
+     char area2[STRINGSIZE * 2] = {0};
+     strcpy(area, input);
+@@ -523,7 +523,7 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 		return NULL;
+ 	    } else
+ 	    {
+-		register char *string;
++		char *string;
+ 		string = area;
+ 		while (*(string++));
+ 		string[-1] = *(++ptr);
+@@ -537,7 +537,7 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 		return NULL;
+ 	    } else
+ 	    {
+-		register int i;
++		int i;
+ 		int start;
+ 		int length;
+ 		start = Char2Int(*(++ptr));
+@@ -563,7 +563,7 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 		return NULL;
+ 	    } else
+ 	    {
+-		register int i;
++		int i;
+ 		i = Char2Int(*(++ptr));
+ 		if (i < 0)
+ 		{
+@@ -587,9 +587,9 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 		return NULL;
+ 	    } else
+ 	    {
+-		register int i;
+-		register char *p1;
+-		register char *p2;
++		int i;
++		char *p1;
++		char *p2;
+ 		i = Char2Int(*(++ptr));
+ 		if (i < 0)
+ 		{
+@@ -696,7 +696,7 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 		return NULL;
+ 	    } else
+ 	    {
+-		register int i;
++		int i;
+ 		if ((i = Char2Int(ptr[1])) < 0)
+ 		{
+ 		    Debug(1, "Mangle: '=' weird argument in '%s'\n", control);
+@@ -723,7 +723,7 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 	case RULE_DFIRST:
+ 	    if (area[0])
+ 	    {
+-		register int i;
++		int i;
+ 		for (i = 1; area[i]; i++)
+ 		{
+ 		    area[i - 1] = area[i];
+@@ -735,7 +735,7 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 	case RULE_DLAST:
+ 	    if (area[0])
+ 	    {
+-		register int i;
++		int i;
+ 		for (i = 1; area[i]; i++);
+ 		area[i - 1] = '\0';
+ 	    }
+@@ -771,7 +771,7 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 		return NULL;
+ 	    } else
+ 	    {
+-		register int i;
++		int i;
+ 
+ 		for (i = 0; area[i]; i++);
+ 
+@@ -815,8 +815,8 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 
+ int
+ PMatch(control, string)
+-register char *control;
+-register char *string;
++char *control;
++char *string;
+ {
+     while (*string && *control)
+     {
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-extended/cracklib/cracklib/0002-rules-Correct-parameter-types-to-Debug-calls.patch b/poky/meta/recipes-extended/cracklib/cracklib/0002-rules-Correct-parameter-types-to-Debug-calls.patch
new file mode 100644
index 0000000..a8692b0
--- /dev/null
+++ b/poky/meta/recipes-extended/cracklib/cracklib/0002-rules-Correct-parameter-types-to-Debug-calls.patch
@@ -0,0 +1,40 @@
+From 793921a8ee4ae7f20e1fd2bbec5196bc83176b01 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 7 Aug 2022 12:25:24 -0700
+Subject: [PATCH 2/2] rules: Correct parameter types to Debug() calls
+
+Fixes
+src/lib/rules.c:346:45: error: incompatible integer to pointer conversion passing 'char' to parameter of type 'char *'; take the address with & [-Wint-conversion]
+src/lib/rules.c:804:53: error: incompatible integer to pointer conversion passing 'char' to parameter of type 'char *'; remove * [-Wint-conversion]                                           Debug(1, "Mangle: unknown command %c in %s\n", *ptr, control);
+                                                           ^~~~
+Upstream-Status: Submitted [https://github.com/cracklib/cracklib/pull/48]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/lib/rules.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/lib/rules.c b/lib/rules.c
+index 6e7a12a..4a34f91 100644
+--- a/lib/rules.c
++++ b/lib/rules.c
+@@ -343,7 +343,7 @@ MatchClass(class, input)
+ 	break;
+ 
+     default:
+-	Debug(1, "MatchClass: unknown class %c\n", class);
++	Debug(1, "MatchClass: unknown class %c\n", &class);
+ 	return (0);
+ 	break;
+     }
+@@ -801,7 +801,7 @@ Mangle(input, control)		/* returns a pointer to a controlled Mangle */
+ 	    }
+ 
+ 	default:
+-	    Debug(1, "Mangle: unknown command %c in %s\n", *ptr, control);
++	    Debug(1, "Mangle: unknown command %c in %s\n", ptr, control);
+ 	    return NULL;
+ 	    break;
+ 	}
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-extended/cracklib/cracklib_2.9.7.bb b/poky/meta/recipes-extended/cracklib/cracklib_2.9.7.bb
index 629069e..ffed88e 100644
--- a/poky/meta/recipes-extended/cracklib/cracklib_2.9.7.bb
+++ b/poky/meta/recipes-extended/cracklib/cracklib_2.9.7.bb
@@ -11,7 +11,10 @@
 
 SRC_URI = "git://github.com/cracklib/cracklib;protocol=https;branch=master \
            file://0001-packlib.c-support-dictionary-byte-order-dependent.patch \
-           file://0002-craklib-fix-testnum-and-teststr-failed.patch"
+           file://0002-craklib-fix-testnum-and-teststr-failed.patch \
+           file://0001-rules-Drop-using-register-keyword.patch \
+           file://0002-rules-Correct-parameter-types-to-Debug-calls.patch \
+           "
 
 SRCREV = "f83934cf3cced0c9600c7d81332f4169f122a2cf"
 S = "${WORKDIR}/git/src"
diff --git a/poky/meta/recipes-extended/ethtool/ethtool/avoid_parallel_tests.patch b/poky/meta/recipes-extended/ethtool/ethtool/avoid_parallel_tests.patch
index 4391a91..f8703d8 100644
--- a/poky/meta/recipes-extended/ethtool/ethtool/avoid_parallel_tests.patch
+++ b/poky/meta/recipes-extended/ethtool/ethtool/avoid_parallel_tests.patch
@@ -1,4 +1,4 @@
-From b0270486eca02e1b70eff7517936fad77bb1fb8e Mon Sep 17 00:00:00 2001
+From 2ceea729810475ca8988e2955dc33b5843520e6b Mon Sep 17 00:00:00 2001
 From: Tudor Florea <tudor.florea@enea.com>
 Date: Wed, 28 May 2014 18:59:54 +0200
 Subject: [PATCH] ethtool: use serial-tests config needed by ptest.
@@ -15,11 +15,11 @@
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/configure.ac b/configure.ac
-index c11f2b1..2bd2b83 100644
+index cc3525c..873af9b 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -2,7 +2,7 @@ dnl Process this file with autoconf to produce a configure script.
- AC_INIT(ethtool, 5.18, netdev@vger.kernel.org)
+ AC_INIT(ethtool, 5.19, netdev@vger.kernel.org)
  AC_PREREQ(2.52)
  AC_CONFIG_SRCDIR([ethtool.c])
 -AM_INIT_AUTOMAKE([gnu subdir-objects])
diff --git a/poky/meta/recipes-extended/ethtool/ethtool_5.18.bb b/poky/meta/recipes-extended/ethtool/ethtool_5.18.bb
deleted file mode 100644
index b4f74e7..0000000
--- a/poky/meta/recipes-extended/ethtool/ethtool_5.18.bb
+++ /dev/null
@@ -1,37 +0,0 @@
-SUMMARY = "Display or change ethernet card settings"
-DESCRIPTION = "A small utility for examining and tuning the settings of your ethernet-based network interfaces."
-HOMEPAGE = "http://www.kernel.org/pub/software/network/ethtool/"
-SECTION = "console/network"
-LICENSE = "GPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://ethtool.c;beginline=4;endline=17;md5=c19b30548c582577fc6b443626fc1216"
-
-SRC_URI = "${KERNELORG_MIRROR}/software/network/ethtool/ethtool-${PV}.tar.gz \
-           file://run-ptest \
-           file://avoid_parallel_tests.patch \
-           "
-
-SRC_URI[sha256sum] = "70091f95448eba851f9f9804059d0e237f1d909064046f9e3c6dc892aa1e8afe"
-
-UPSTREAM_CHECK_URI = "https://www.kernel.org/pub/software/network/ethtool/"
-
-inherit autotools ptest bash-completion pkgconfig
-
-RDEPENDS:${PN}-ptest += "make"
-
-PACKAGECONFIG ?= "netlink"
-PACKAGECONFIG[netlink] = "--enable-netlink,--disable-netlink,libmnl,"
-
-do_compile_ptest() {
-   oe_runmake buildtest-TESTS
-}
-
-do_install_ptest () {
-   cp ${B}/Makefile                 ${D}${PTEST_PATH}
-   install ${B}/test-cmdline        ${D}${PTEST_PATH}
-   if ${@bb.utils.contains('PACKAGECONFIG', 'netlink', 'false', 'true', d)}; then
-       install ${B}/test-features       ${D}${PTEST_PATH}
-   fi
-   install ${B}/ethtool             ${D}${PTEST_PATH}/ethtool
-   sed -i 's/^Makefile/_Makefile/'  ${D}${PTEST_PATH}/Makefile
-}
diff --git a/poky/meta/recipes-extended/ethtool/ethtool_5.19.bb b/poky/meta/recipes-extended/ethtool/ethtool_5.19.bb
new file mode 100644
index 0000000..8c995b2
--- /dev/null
+++ b/poky/meta/recipes-extended/ethtool/ethtool_5.19.bb
@@ -0,0 +1,37 @@
+SUMMARY = "Display or change ethernet card settings"
+DESCRIPTION = "A small utility for examining and tuning the settings of your ethernet-based network interfaces."
+HOMEPAGE = "http://www.kernel.org/pub/software/network/ethtool/"
+SECTION = "console/network"
+LICENSE = "GPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://ethtool.c;beginline=4;endline=17;md5=c19b30548c582577fc6b443626fc1216"
+
+SRC_URI = "${KERNELORG_MIRROR}/software/network/ethtool/ethtool-${PV}.tar.gz \
+           file://run-ptest \
+           file://avoid_parallel_tests.patch \
+           "
+
+SRC_URI[sha256sum] = "24412dcd4ac886177abd68282efa98914a4dd2497218e298e7049e9cb72b2336"
+
+UPSTREAM_CHECK_URI = "https://www.kernel.org/pub/software/network/ethtool/"
+
+inherit autotools ptest bash-completion pkgconfig
+
+RDEPENDS:${PN}-ptest += "make"
+
+PACKAGECONFIG ?= "netlink"
+PACKAGECONFIG[netlink] = "--enable-netlink,--disable-netlink,libmnl,"
+
+do_compile_ptest() {
+   oe_runmake buildtest-TESTS
+}
+
+do_install_ptest () {
+   cp ${B}/Makefile                 ${D}${PTEST_PATH}
+   install ${B}/test-cmdline        ${D}${PTEST_PATH}
+   if ${@bb.utils.contains('PACKAGECONFIG', 'netlink', 'false', 'true', d)}; then
+       install ${B}/test-features       ${D}${PTEST_PATH}
+   fi
+   install ${B}/ethtool             ${D}${PTEST_PATH}/ethtool
+   sed -i 's/^Makefile/_Makefile/'  ${D}${PTEST_PATH}/Makefile
+}
diff --git a/poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.21-prevent_recompiling.patch b/poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.21-prevent_recompiling.patch
deleted file mode 100644
index c76915f..0000000
--- a/poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.21-prevent_recompiling.patch
+++ /dev/null
@@ -1,78 +0,0 @@
-From 239d681306a8d97ed10954788d32ba2f4b55f77c Mon Sep 17 00:00:00 2001
-From: Kang Kai <kai.kang@windriver.com>
-Date: Thu, 29 Mar 2018 16:10:16 +0800
-Subject: [PATCH 06/10] prevent recompiling
-
-Just use commands provided by ghostscript-native, preventing recompile
-them when compile ghostscript. Way to enable cross compile.
-
-Upstream-Status: Pending
-
-Signed-off-by: Kang Kai <kai.kang@windriver.com>
-Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
-
-Rebase to 9.25
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
-Signed-off-by: Jagadeesh Krishnanjanappa <jkrishnanjanappa@mvista.com>
----
- base/unix-aux.mak | 44 --------------------------------------------
- 1 file changed, 44 deletions(-)
-
-diff --git a/base/unix-aux.mak b/base/unix-aux.mak
-index 5bf72e9..9cb39d7 100644
---- a/base/unix-aux.mak
-+++ b/base/unix-aux.mak
-@@ -54,50 +54,6 @@ $(AUX)gp_stdia.$(OBJ): $(GLSRC)gp_stdia.
-   $(stdio__h) $(time__h) $(unistd__h) $(gx_h) $(gp_h) $(UNIX_AUX_MAK) $(MAKEDIRS)
- 	$(GLCCAUX) $(AUXO_)gp_stdia.$(OBJ) $(C_) $(GLSRC)gp_stdia.c
- 
--# -------------------------- Auxiliary programs --------------------------- #
--
--$(ECHOGS_XE): $(GLSRC)echogs.c $(AK) $(stdpre_h) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(ECHOGS_XE) $(GLSRC)echogs.c $(AUXEXTRALIBS)
--
--$(PACKPS_XE): $(GLSRC)pack_ps.c $(stdpre_h) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(PACKPS_XE) $(GLSRC)pack_ps.c $(AUXEXTRALIBS)
--
--# On the RS/6000 (at least), compiling genarch.c with gcc with -O
--# produces a buggy executable.
--$(GENARCH_XE): $(GLSRC)genarch.c $(AK) $(GENARCH_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENARCH_XE) $(GLSRC)genarch.c $(AUXEXTRALIBS)
--
--$(GENCONF_XE): $(GLSRC)genconf.c $(AK) $(GENCONF_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENCONF_XE) $(GLSRC)genconf.c $(AUXEXTRALIBS)
--
--$(GENDEV_XE): $(GLSRC)gendev.c $(AK) $(GENDEV_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENDEV_XE) $(GLSRC)gendev.c $(AUXEXTRALIBS)
--
--$(GENHT_XE): $(GLSRC)genht.c $(AK) $(GENHT_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(GENHT_CFLAGS) $(O_)$(GENHT_XE) $(GLSRC)genht.c $(AUXEXTRALIBS)
--
--# To get GS to use the system zlib, you remove/hide the gs/zlib directory
--# which means that the mkromfs build can't find the zlib source it needs.
--# So it's split into two targets, one using the zlib source directly.....
--MKROMFS_OBJS_0=$(MKROMFS_ZLIB_OBJS) $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
-- $(AUX)gscdefs.$(OBJ) $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
-- $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ) $(AUX)memento.$(OBJ)
--
--$(MKROMFS_XE)_0: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_0) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(GENOPTAUX) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_0 $(MKROMFS_OBJS_0) $(AUXEXTRALIBS)
--
--# .... and one using the zlib library linked via the command line
--MKROMFS_OBJS_1=$(AUX)gscdefs.$(OBJ) \
-- $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
-- $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
-- $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ)
--
--$(MKROMFS_XE)_1: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_1) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(GENOPTAUX) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_1 $(MKROMFS_OBJS_1) $(AUXEXTRALIBS)
--
--$(MKROMFS_XE): $(MKROMFS_XE)_$(SHARE_ZLIB) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CP_) $(MKROMFS_XE)_$(SHARE_ZLIB) $(MKROMFS_XE)
--
- # Query the environment to construct gconfig_.h.
- # These are all defined conditionally (except the JasPER one), so that
- # they can be overridden by settings from the configure script.
--- 
-1.8.3.1
-
diff --git a/poky/meta/recipes-extended/ghostscript/ghostscript_9.56.1.bb b/poky/meta/recipes-extended/ghostscript/ghostscript_9.56.1.bb
index b2e741b..e71a6cc 100644
--- a/poky/meta/recipes-extended/ghostscript/ghostscript_9.56.1.bb
+++ b/poky/meta/recipes-extended/ghostscript/ghostscript_9.56.1.bb
@@ -36,7 +36,6 @@
 "
 
 SRC_URI = "${SRC_URI_BASE} \
-           file://ghostscript-9.21-prevent_recompiling.patch \
            file://cups-no-gcrypt.patch \
            "
 
diff --git a/poky/meta/recipes-extended/libmnl/libmnl_1.0.5.bb b/poky/meta/recipes-extended/libmnl/libmnl_1.0.5.bb
index 3c5bde3..748326c 100644
--- a/poky/meta/recipes-extended/libmnl/libmnl_1.0.5.bb
+++ b/poky/meta/recipes-extended/libmnl/libmnl_1.0.5.bb
@@ -6,8 +6,8 @@
 LICENSE = "LGPL-2.1-or-later"
 LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
 
-SRC_URI = "https://netfilter.org/projects/libmnl/files/libmnl-${PV}.tar.bz2;name=tar"
-SRC_URI[tar.sha256sum] = "274b9b919ef3152bfb3da3a13c950dd60d6e2bcd54230ffeca298d03b40d0525"
+SRC_URI = "https://netfilter.org/projects/libmnl/files/libmnl-${PV}.tar.bz2"
+SRC_URI[sha256sum] = "274b9b919ef3152bfb3da3a13c950dd60d6e2bcd54230ffeca298d03b40d0525"
 
 inherit autotools pkgconfig
 
diff --git a/poky/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb b/poky/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
deleted file mode 100644
index 45b3d2b..0000000
--- a/poky/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-SUMMARY = "Transport-Independent RPC library"
-DESCRIPTION = "Libtirpc is a port of Suns Transport-Independent RPC library to Linux"
-SECTION = "libs/network"
-HOMEPAGE = "http://sourceforge.net/projects/libtirpc/"
-BUGTRACKER = "http://sourceforge.net/tracker/?group_id=183075&atid=903784"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://COPYING;md5=f835cce8852481e4b2bbbdd23b5e47f3 \
-                    file://src/netname.c;beginline=1;endline=27;md5=f8a8cd2cb25ac5aa16767364fb0e3c24"
-
-PROVIDES = "virtual/librpc"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2"
-UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/libtirpc/files/libtirpc/"
-UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)/"
-SRC_URI[sha256sum] = "e24eb88b8ce7db3b7ca6eb80115dd1284abc5ec32a8deccfed2224fc2532b9fd"
-
-inherit autotools pkgconfig
-
-EXTRA_OECONF = "--disable-gssapi"
-
-do_install:append() {
-	chown root:root ${D}${sysconfdir}/netconfig
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-extended/libtirpc/libtirpc_1.3.3.bb b/poky/meta/recipes-extended/libtirpc/libtirpc_1.3.3.bb
new file mode 100644
index 0000000..8c6c207
--- /dev/null
+++ b/poky/meta/recipes-extended/libtirpc/libtirpc_1.3.3.bb
@@ -0,0 +1,28 @@
+SUMMARY = "Transport-Independent RPC library"
+DESCRIPTION = "Libtirpc is a port of Suns Transport-Independent RPC library to Linux"
+SECTION = "libs/network"
+HOMEPAGE = "http://sourceforge.net/projects/libtirpc/"
+BUGTRACKER = "http://sourceforge.net/tracker/?group_id=183075&atid=903784"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://COPYING;md5=f835cce8852481e4b2bbbdd23b5e47f3 \
+                    file://src/netname.c;beginline=1;endline=27;md5=f8a8cd2cb25ac5aa16767364fb0e3c24"
+
+PROVIDES = "virtual/librpc"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2"
+UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/libtirpc/files/libtirpc/"
+UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)/"
+SRC_URI[sha256sum] = "6474e98851d9f6f33871957ddee9714fdcd9d8a5ee9abb5a98d63ea2e60e12f3"
+
+# Was fixed in 1.3.3rc1 so not present in 1.3.3
+CVE_CHECK_IGNORE += "CVE-2021-46828"
+
+inherit autotools pkgconfig
+
+EXTRA_OECONF = "--disable-gssapi"
+
+do_install:append() {
+	chown root:root ${D}${sysconfdir}/netconfig
+}
+
+BBCLASSEXTEND = "native nativesdk"
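Both the removed 1.3.2 recipe and the new 1.3.3 recipe above carry the same upstream version-check hints. As a short, illustrative sketch of how they work together (the directory entry shown is an example, not copied from the SourceForge listing):

    UPSTREAM_CHECK_URI   = "https://sourceforge.net/projects/libtirpc/files/libtirpc/"
    UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)/"
    # Against a listing entry such as "1.3.3/" the named group pver captures "1.3.3".
    # Tools such as `devtool check-upgrade-status` compare that capture with PV,
    # which is how an available bump like the 1.3.2 -> 1.3.3 upgrade done here is flagged.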
diff --git a/poky/meta/recipes-extended/lighttpd/lighttpd_1.4.65.bb b/poky/meta/recipes-extended/lighttpd/lighttpd_1.4.65.bb
deleted file mode 100644
index 10aa27f..0000000
--- a/poky/meta/recipes-extended/lighttpd/lighttpd_1.4.65.bb
+++ /dev/null
@@ -1,79 +0,0 @@
-SUMMARY = "Lightweight high-performance web server"
-HOMEPAGE = "http://www.lighttpd.net/"
-DESCRIPTION = "Lightweight high-performance web server is designed and optimized for high performance environments. With a small memory footprint compared to other web-servers, effective management of the cpu-load, and advanced feature set (FastCGI, SCGI, Auth, Output-Compression, URL-Rewriting and many more)"
-BUGTRACKER = "http://redmine.lighttpd.net/projects/lighttpd/issues"
-
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://COPYING;md5=e4dac5c6ab169aa212feb5028853a579"
-
-SECTION = "net"
-RDEPENDS:${PN} = "lighttpd-module-dirlisting \
-                  lighttpd-module-indexfile \
-                  lighttpd-module-staticfile"
-RRECOMMENDS:${PN} = "lighttpd-module-access \
-                     lighttpd-module-accesslog"
-
-SRC_URI = "http://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-${PV}.tar.xz \
-           file://index.html.lighttpd \
-           file://lighttpd.conf \
-           file://lighttpd \
-           "
-
-SRC_URI[sha256sum] = "bf0fa68a629fbc404023a912b377e70049331d6797bcbb4b3e8df4c3b42328be"
-
-DEPENDS = "virtual/crypt"
-
-PACKAGECONFIG ??= "openssl pcre zlib \
-    ${@bb.utils.contains('DISTRO_FEATURES', 'xattr', 'attr', '', d)} \
-"
-
-PACKAGECONFIG[libev] = "-Dwith_libev=true,-Dwith_libev=false,libev"
-PACKAGECONFIG[mysql] = "-Dwith_mysql=true,-Dwith_mysql=false,mariadb"
-PACKAGECONFIG[ldap] = "-Dwith_ldap=true,-Dwith_ldap=false,openldap"
-PACKAGECONFIG[attr] = "-Dwith_xattr=true,-Dwith_xattr=false,attr"
-PACKAGECONFIG[openssl] = "-Dwith_openssl=true,-Dwith_openssl=false,openssl"
-PACKAGECONFIG[krb5] = "-Dwith_krb5=true,-Dwith_krb5=false,krb5"
-PACKAGECONFIG[pcre] = "-Dwith_pcre=true,-Dwith_pcre=false,libpcre"
-PACKAGECONFIG[zlib] = "-Dwith_zlib=true,-Dwith_zlib=false,zlib"
-PACKAGECONFIG[bzip2] = "-Dwith_bzip=true,-Dwith_bzip=false,bzip2"
-PACKAGECONFIG[webdav-props] = "-Dwith_webdav_props=true,-Dwith_webdav_props=false,libxml2 sqlite3"
-PACKAGECONFIG[webdav-locks] = "-Dwith_webdav_locks=true,-Dwith_webdav_locks=false,util-linux"
-PACKAGECONFIG[lua] = "-Dwith_lua=true,-Dwith_lua=false,lua"
-PACKAGECONFIG[zstd] = "-Dwith_zstd=true,-Dwith_zstd=false,zstd"
-
-inherit meson pkgconfig update-rc.d gettext systemd
-
-INITSCRIPT_NAME = "lighttpd"
-INITSCRIPT_PARAMS = "defaults 70"
-
-SYSTEMD_SERVICE:${PN} = "lighttpd.service"
-
-do_install:append() {
-	install -d ${D}${sysconfdir}/init.d ${D}${sysconfdir}/lighttpd ${D}${sysconfdir}/lighttpd.d ${D}/www/pages/dav
-	install -m 0755 ${WORKDIR}/lighttpd ${D}${sysconfdir}/init.d
-	install -m 0644 ${WORKDIR}/lighttpd.conf ${D}${sysconfdir}/lighttpd
-	install -m 0644 ${WORKDIR}/index.html.lighttpd ${D}/www/pages/index.html
-
-	install -d ${D}${systemd_system_unitdir}
-	install -m 0644 ${S}/doc/systemd/lighttpd.service ${D}${systemd_system_unitdir}
-	sed -i -e 's,@SBINDIR@,${sbindir},g' \
-		-e 's,@SYSCONFDIR@,${sysconfdir},g' \
-		-e 's,@BASE_BINDIR@,${base_bindir},g' \
-		${D}${systemd_system_unitdir}/lighttpd.service
-	#For FHS compliance, create symbolic links to /var/log and /var/tmp for logs and temporary data
-	ln -sf ${localstatedir}/log ${D}/www/logs
-	ln -sf ${localstatedir}/tmp ${D}/www/var
-}
-
-# bitbake.conf sets ${libdir}/${BPN}/* in FILES, which messes up the module split.
-# So we re-do the variable.
-FILES:${PN} = "${sysconfdir} /www ${sbindir}"
-
-CONFFILES:${PN} = "${sysconfdir}/lighttpd/lighttpd.conf"
-
-PACKAGES_DYNAMIC += "^lighttpd-module-.*"
-
-python populate_packages:prepend () {
-    lighttpd_libdir = d.expand('${prefix}/lib/lighttpd')
-    do_split_packages(d, lighttpd_libdir, r'^mod_(.*)\.so$', 'lighttpd-module-%s', 'Lighttpd module for %s', extra_depends='')
-}
diff --git a/poky/meta/recipes-extended/lighttpd/lighttpd_1.4.66.bb b/poky/meta/recipes-extended/lighttpd/lighttpd_1.4.66.bb
new file mode 100644
index 0000000..8011628
--- /dev/null
+++ b/poky/meta/recipes-extended/lighttpd/lighttpd_1.4.66.bb
@@ -0,0 +1,79 @@
+SUMMARY = "Lightweight high-performance web server"
+HOMEPAGE = "http://www.lighttpd.net/"
+DESCRIPTION = "Lightweight high-performance web server is designed and optimized for high performance environments. With a small memory footprint compared to other web-servers, effective management of the cpu-load, and advanced feature set (FastCGI, SCGI, Auth, Output-Compression, URL-Rewriting and many more)"
+BUGTRACKER = "http://redmine.lighttpd.net/projects/lighttpd/issues"
+
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://COPYING;md5=e4dac5c6ab169aa212feb5028853a579"
+
+SECTION = "net"
+RDEPENDS:${PN} = "lighttpd-module-dirlisting \
+                  lighttpd-module-indexfile \
+                  lighttpd-module-staticfile"
+RRECOMMENDS:${PN} = "lighttpd-module-access \
+                     lighttpd-module-accesslog"
+
+SRC_URI = "http://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-${PV}.tar.xz \
+           file://index.html.lighttpd \
+           file://lighttpd.conf \
+           file://lighttpd \
+           "
+
+SRC_URI[sha256sum] = "47ac6e60271aa0196e65472d02d019556dc7c6d09df3b65df2c1ab6866348e3b"
+
+DEPENDS = "virtual/crypt"
+
+PACKAGECONFIG ??= "openssl pcre zlib \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'xattr', 'attr', '', d)} \
+"
+
+PACKAGECONFIG[libev] = "-Dwith_libev=true,-Dwith_libev=false,libev"
+PACKAGECONFIG[mysql] = "-Dwith_mysql=true,-Dwith_mysql=false,mariadb"
+PACKAGECONFIG[ldap] = "-Dwith_ldap=true,-Dwith_ldap=false,openldap"
+PACKAGECONFIG[attr] = "-Dwith_xattr=true,-Dwith_xattr=false,attr"
+PACKAGECONFIG[openssl] = "-Dwith_openssl=true,-Dwith_openssl=false,openssl"
+PACKAGECONFIG[krb5] = "-Dwith_krb5=true,-Dwith_krb5=false,krb5"
+PACKAGECONFIG[pcre] = "-Dwith_pcre=true,-Dwith_pcre=false,libpcre"
+PACKAGECONFIG[zlib] = "-Dwith_zlib=true,-Dwith_zlib=false,zlib"
+PACKAGECONFIG[bzip2] = "-Dwith_bzip=true,-Dwith_bzip=false,bzip2"
+PACKAGECONFIG[webdav-props] = "-Dwith_webdav_props=true,-Dwith_webdav_props=false,libxml2 sqlite3"
+PACKAGECONFIG[webdav-locks] = "-Dwith_webdav_locks=true,-Dwith_webdav_locks=false,util-linux"
+PACKAGECONFIG[lua] = "-Dwith_lua=true,-Dwith_lua=false,lua"
+PACKAGECONFIG[zstd] = "-Dwith_zstd=true,-Dwith_zstd=false,zstd"
+
+inherit meson pkgconfig update-rc.d gettext systemd
+
+INITSCRIPT_NAME = "lighttpd"
+INITSCRIPT_PARAMS = "defaults 70"
+
+SYSTEMD_SERVICE:${PN} = "lighttpd.service"
+
+do_install:append() {
+	install -d ${D}${sysconfdir}/init.d ${D}${sysconfdir}/lighttpd ${D}${sysconfdir}/lighttpd.d ${D}/www/pages/dav
+	install -m 0755 ${WORKDIR}/lighttpd ${D}${sysconfdir}/init.d
+	install -m 0644 ${WORKDIR}/lighttpd.conf ${D}${sysconfdir}/lighttpd
+	install -m 0644 ${WORKDIR}/index.html.lighttpd ${D}/www/pages/index.html
+
+	install -d ${D}${systemd_system_unitdir}
+	install -m 0644 ${S}/doc/systemd/lighttpd.service ${D}${systemd_system_unitdir}
+	sed -i -e 's,@SBINDIR@,${sbindir},g' \
+		-e 's,@SYSCONFDIR@,${sysconfdir},g' \
+		-e 's,@BASE_BINDIR@,${base_bindir},g' \
+		${D}${systemd_system_unitdir}/lighttpd.service
+	#For FHS compliance, create symbolic links to /var/log and /var/tmp for logs and temporary data
+	ln -sf ${localstatedir}/log ${D}/www/logs
+	ln -sf ${localstatedir}/tmp ${D}/www/var
+}
+
+# bitbake.conf sets ${libdir}/${BPN}/* in FILES, which messes up the module split.
+# So we re-do the variable.
+FILES:${PN} = "${sysconfdir} /www ${sbindir}"
+
+CONFFILES:${PN} = "${sysconfdir}/lighttpd/lighttpd.conf"
+
+PACKAGES_DYNAMIC += "^lighttpd-module-.*"
+
+python populate_packages:prepend () {
+    lighttpd_libdir = d.expand('${prefix}/lib/lighttpd')
+    do_split_packages(d, lighttpd_libdir, r'^mod_(.*)\.so$', 'lighttpd-module-%s', 'Lighttpd module for %s', extra_depends='')
+}
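A brief illustration of what the populate_packages hook above produces; the module names are examples of typical lighttpd plugins, not an exhaustive list:

    # With lighttpd_libdir = ${prefix}/lib/lighttpd and the regex r'^mod_(.*)\.so$',
    # do_split_packages() creates one package per installed module, e.g.
    #   mod_access.so    -> lighttpd-module-access     ("Lighttpd module for access")
    #   mod_accesslog.so -> lighttpd-module-accesslog  ("Lighttpd module for accesslog")
    # PACKAGES_DYNAMIC += "^lighttpd-module-.*" tells BitBake that such package names
    # may exist even though they are not listed statically, which is what allows the
    # RDEPENDS:${PN} entries near the top of the recipe to reference them.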
diff --git a/poky/meta/recipes-extended/ltp/ltp/0001-lapi-fsmount-resolve-conflict-in-different-header-fi.patch b/poky/meta/recipes-extended/ltp/ltp/0001-lapi-fsmount-resolve-conflict-in-different-header-fi.patch
new file mode 100644
index 0000000..cdbcf6b
--- /dev/null
+++ b/poky/meta/recipes-extended/ltp/ltp/0001-lapi-fsmount-resolve-conflict-in-different-header-fi.patch
@@ -0,0 +1,71 @@
+From b857f8723f30a4b9554bf6b0ff8fa52fd07e8b60 Mon Sep 17 00:00:00 2001
+From: Li Wang <liwang@redhat.com>
+Date: Fri, 5 Aug 2022 14:34:01 +0800
+Subject: [PATCH] lapi/fsmount: resolve conflict in different header files
+
+The latest glibc added new wrappers (e.g. mount_setattr, fsopen) support
+in sys/mount.h, which partly conflicts with linux/mount.h at the same time.
+
+We need to make adjustments to header files to fix compiling error on
+different platforms.
+
+Upstream-Status: Backport [https://github.com/linux-test-project/ltp/commit/b857f8723f30a4b9554bf6b0ff8fa52fd07e8b60]
+Signed-off-by: Li Wang <liwang@redhat.com>
+Reviewed-by: Petr Vorel <pvorel@suse.cz>
+---
+ configure.ac           | 1 +
+ include/lapi/fs.h      | 6 ++++--
+ include/lapi/fsmount.h | 7 +++++--
+ 3 files changed, 10 insertions(+), 4 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index d50ec1ea7..dbd53cab6 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -113,6 +113,7 @@ AC_CHECK_FUNCS_ONCE([ \
+     mkdirat \
+     mknodat \
+     modify_ldt \
++    mount_setattr \
+     move_mount \
+     name_to_handle_at \
+     open_tree \
+diff --git a/include/lapi/fs.h b/include/lapi/fs.h
+index 27b3a183c..84a168a67 100644
+--- a/include/lapi/fs.h
++++ b/include/lapi/fs.h
+@@ -6,8 +6,10 @@
+  * Email: code@zilogic.com
+  */
+ 
+-#ifdef HAVE_LINUX_FS_H
+-# include <linux/fs.h>
++#ifndef HAVE_MOUNT_SETATTR
++# ifdef HAVE_LINUX_FS_H
++#  include <linux/fs.h>
++# endif
+ #endif
+ 
+ #include <sys/user.h>
+diff --git a/include/lapi/fsmount.h b/include/lapi/fsmount.h
+index b11e7a7bd..07eb42ffa 100644
+--- a/include/lapi/fsmount.h
++++ b/include/lapi/fsmount.h
+@@ -11,9 +11,12 @@
+ #include "config.h"
+ #include <sys/syscall.h>
+ #include <sys/types.h>
++#include <sys/mount.h>
+ 
+-#ifdef HAVE_LINUX_MOUNT_H
+-# include <linux/mount.h>
++#ifndef HAVE_FSOPEN
++# ifdef HAVE_LINUX_MOUNT_H
++#  include <linux/mount.h>
++# endif
+ #endif
+ 
+ #include "lapi/fcntl.h"
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-extended/ltp/ltp/0001-lapi-pidfd-adding-pidfd-header-file.patch b/poky/meta/recipes-extended/ltp/ltp/0001-lapi-pidfd-adding-pidfd-header-file.patch
new file mode 100644
index 0000000..184c4264
--- /dev/null
+++ b/poky/meta/recipes-extended/ltp/ltp/0001-lapi-pidfd-adding-pidfd-header-file.patch
@@ -0,0 +1,60 @@
+From dbc9c14c92a5acf450d07868a735ac8cd6ec5b90 Mon Sep 17 00:00:00 2001
+From: Li Wang <liwang@redhat.com>
+Date: Fri, 5 Aug 2022 14:34:00 +0800
+Subject: [PATCH] lapi/pidfd: adding pidfd header file
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+The newer Glibc already provided wrapper for the series pidfd syscall,
+so let's include the header file conditionally.
+
+  # rpm -q glibc-devel
+  glibc-devel-2.35.9000-31.fc37.ppc64le
+  # rpm -ql glibc-devel | grep pidfd
+  /usr/include/sys/pidfd.h
+
+To get rid of compiling error from fedora-rawhide:
+
+  tst_safe_macros.c: In function ‘safe_pidfd_open’:
+  tst_safe_macros.c:135:16: error: implicit declaration of function ‘pidfd_open’ [-Werror=implicit-function-declaration]
+  135 |         rval = pidfd_open(pid, flags);
+      |                ^~~~~~~~~~
+
+Upstream-Status: Backport [https://github.com/linux-test-project/ltp/commit/dbc9c14c92a5acf450d07868a735ac8cd6ec5b90]
+Signed-off-by: Li Wang <liwang@redhat.com>
+Reviewed-by: Petr Vorel <pvorel@suse.cz>
+---
+ configure.ac         | 1 +
+ include/lapi/pidfd.h | 3 +++
+ 2 files changed, 4 insertions(+)
+
+diff --git a/configure.ac b/configure.ac
+index 69b145b5f..d50ec1ea7 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -71,6 +71,7 @@ AC_CHECK_HEADERS_ONCE([ \
+     sys/epoll.h \
+     sys/fanotify.h \
+     sys/inotify.h \
++    sys/pidfd.h
+     sys/prctl.h \
+     sys/shm.h \
+     sys/timerfd.h \
+diff --git a/include/lapi/pidfd.h b/include/lapi/pidfd.h
+index 244d3acaf..9ca8e5aa2 100644
+--- a/include/lapi/pidfd.h
++++ b/include/lapi/pidfd.h
+@@ -8,6 +8,9 @@
+ #define LAPI_PIDFD_H__
+ 
+ #include <fcntl.h>
++#ifdef HAVE_SYS_PIDFD_H
++# include <sys/pidfd.h>
++#endif
+ #include "config.h"
+ #include "lapi/syscalls.h"
+ 
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-extended/ltp/ltp/0001-rt-migrate-Use-int-instead-of-pthread_t-for-thread-I.patch b/poky/meta/recipes-extended/ltp/ltp/0001-rt-migrate-Use-int-instead-of-pthread_t-for-thread-I.patch
new file mode 100644
index 0000000..e49f53a
--- /dev/null
+++ b/poky/meta/recipes-extended/ltp/ltp/0001-rt-migrate-Use-int-instead-of-pthread_t-for-thread-I.patch
@@ -0,0 +1,36 @@
+From 11e503344c36c1c7df3e455d81736dc4a5b43775 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 23 Aug 2022 23:20:53 -0700
+Subject: [PATCH] rt-migrate: Use int instead of pthread_t for thread IDs
+
+pthread_t is opaque, but create_fifo_thread() returns integer therefore
+on musl where thread_t is not integer, this fails to compile e.g.
+
+| rt-migrate.c:450:14: error: incompatible integer to pointer conversion assigning to 'pthread_t' (aka 'struct __pthread *') from 'int' [-Wint-conversion]
+|                 threads[i] = create_fifo_thread(start_task, (void *)i,
+|                            ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Align the types used to fix the problems.
+
+Upstream-Status: Submitted [https://lists.linux.it/pipermail/ltp/2022-August/030239.html]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ testcases/realtime/func/rt-migrate/rt-migrate.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/testcases/realtime/func/rt-migrate/rt-migrate.c b/testcases/realtime/func/rt-migrate/rt-migrate.c
+index 3e6c82a2fb..97ab604c7f 100644
+--- a/testcases/realtime/func/rt-migrate/rt-migrate.c
++++ b/testcases/realtime/func/rt-migrate/rt-migrate.c
+@@ -394,7 +394,7 @@ static void stop_log(int sig)
+ 
+ int main(int argc, char **argv)
+ {
+-	pthread_t *threads;
++	int *threads;
+ 	long i;
+ 	int ret;
+ 	struct timespec intv;
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-extended/ltp/ltp_20220527.bb b/poky/meta/recipes-extended/ltp/ltp_20220527.bb
index b0f4ea6..00ff906 100644
--- a/poky/meta/recipes-extended/ltp/ltp_20220527.bb
+++ b/poky/meta/recipes-extended/ltp/ltp_20220527.bb
@@ -20,6 +20,7 @@
 # earlier in CFLAGS, etc.
 CFLAGS:append:x86-64 = " -fomit-frame-pointer"
 TUNE_CCARGS:remove:x86 = "-mfpmath=sse"
+TUNE_CCARGS:remove:x86-64 = "-mfpmath=sse"
 
 CFLAGS:append:powerpc64 = " -D__SANE_USERSPACE_TYPES__"
 CFLAGS:append:mipsarchn64 = " -D__SANE_USERSPACE_TYPES__"
@@ -37,6 +38,9 @@
            file://0001-netstress-Restore-runtime-to-5m.patch \
            file://0001-net_stress-Fix-usage-of-variables-from-tst_net.sh.patch \
            file://0001-memcg-functional-Fix-usage-of-PAGESIZE-from-memcg_li.patch \
+           file://0001-lapi-pidfd-adding-pidfd-header-file.patch \
+           file://0001-lapi-fsmount-resolve-conflict-in-different-header-fi.patch \
+           file://0001-rt-migrate-Use-int-instead-of-pthread_t-for-thread-I.patch \
            "
 
 S = "${WORKDIR}/git"
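The one-line TUNE_CCARGS change in the ltp hunk above mirrors the existing x86 rule for 64-bit targets. A minimal sketch of the :remove override semantics involved, with a hypothetical TUNE_CCARGS value:

    TUNE_CCARGS = "-m64 -mfpmath=sse -msse4.2"
    TUNE_CCARGS:remove:x86-64 = "-mfpmath=sse"
    # When the x86-64 override applies, BitBake strips every occurrence of the listed
    # word at expansion time, yielding "-m64 -msse4.2"; on other machines the value
    # is left untouched.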
diff --git a/poky/meta/recipes-extended/mc/files/0001-mc-replace-perl-w-with-use-warnings.patch b/poky/meta/recipes-extended/mc/files/0001-mc-replace-perl-w-with-use-warnings.patch
index bf8037c..012a499 100644
--- a/poky/meta/recipes-extended/mc/files/0001-mc-replace-perl-w-with-use-warnings.patch
+++ b/poky/meta/recipes-extended/mc/files/0001-mc-replace-perl-w-with-use-warnings.patch
@@ -17,7 +17,7 @@
 
 The man2hlp.in already has "use warnings;", so just remove '-w' is OK.
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/MidnightCommander/mc/pull/174]
 
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
 ---
diff --git a/poky/meta/recipes-extended/msmtp/msmtp_1.8.20.bb b/poky/meta/recipes-extended/msmtp/msmtp_1.8.20.bb
deleted file mode 100644
index da3f70a..0000000
--- a/poky/meta/recipes-extended/msmtp/msmtp_1.8.20.bb
+++ /dev/null
@@ -1,27 +0,0 @@
-SUMMARY = "msmtp is an SMTP client"
-DESCRIPTION = "A sendmail replacement for use in MTAs like mutt"
-HOMEPAGE = "https://marlam.de/msmtp/"
-SECTION = "console/network"
-
-LICENSE = "GPL-3.0-only"
-DEPENDS = "zlib gnutls"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-
-UPSTREAM_CHECK_URI = "https://marlam.de/msmtp/download/"
-
-SRC_URI = "https://marlam.de/${BPN}/releases/${BP}.tar.xz"
-SRC_URI[sha256sum] = "d93ae2aafc0f48af7dc9d0b394df1bb800588b8b4e8d096d8b3cf225344eb111"
-
-inherit gettext autotools update-alternatives pkgconfig
-
-EXTRA_OECONF += "--without-libsecret --without-libgsasl --without-libidn"
-
-ALTERNATIVE:${PN} = "sendmail"
-# /usr/lib/sendmial is required by LSB core test
-ALTERNATIVE:${PN}:linuxstdbase = "sendmail usr-lib-sendmail"
-ALTERNATIVE_TARGET[sendmail] = "${bindir}/msmtp"
-ALTERNATIVE_LINK_NAME[sendmail] = "${sbindir}/sendmail"
-ALTERNATIVE_TARGET[usr-lib-sendmail] = "${bindir}/msmtp"
-ALTERNATIVE_LINK_NAME[usr-lib-sendmail] = "/usr/lib/sendmail"
-ALTERNATIVE_PRIORITY = "100"
diff --git a/poky/meta/recipes-extended/msmtp/msmtp_1.8.22.bb b/poky/meta/recipes-extended/msmtp/msmtp_1.8.22.bb
new file mode 100644
index 0000000..d56af73
--- /dev/null
+++ b/poky/meta/recipes-extended/msmtp/msmtp_1.8.22.bb
@@ -0,0 +1,27 @@
+SUMMARY = "msmtp is an SMTP client"
+DESCRIPTION = "A sendmail replacement for use in MTAs like mutt"
+HOMEPAGE = "https://marlam.de/msmtp/"
+SECTION = "console/network"
+
+LICENSE = "GPL-3.0-only"
+DEPENDS = "zlib gnutls"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
+
+UPSTREAM_CHECK_URI = "https://marlam.de/msmtp/download/"
+
+SRC_URI = "https://marlam.de/${BPN}/releases/${BP}.tar.xz"
+SRC_URI[sha256sum] = "1b04206286a5b82622335e4eb09e17074368b7288e53d134543cbbc6b79ea3e7"
+
+inherit gettext autotools update-alternatives pkgconfig
+
+EXTRA_OECONF += "--without-libsecret --without-libgsasl --without-libidn"
+
+ALTERNATIVE:${PN} = "sendmail"
+# /usr/lib/sendmail is required by LSB core test
+ALTERNATIVE:${PN}:linuxstdbase = "sendmail usr-lib-sendmail"
+ALTERNATIVE_TARGET[sendmail] = "${bindir}/msmtp"
+ALTERNATIVE_LINK_NAME[sendmail] = "${sbindir}/sendmail"
+ALTERNATIVE_TARGET[usr-lib-sendmail] = "${bindir}/msmtp"
+ALTERNATIVE_LINK_NAME[usr-lib-sendmail] = "/usr/lib/sendmail"
+ALTERNATIVE_PRIORITY = "100"
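For context on the alternatives block carried over into the 1.8.22 recipe above, a rough sketch of what the update-alternatives class registers at package install time, assuming the default /usr/bin and /usr/sbin layout:

    # ALTERNATIVE:${PN} = "sendmail" declares one alternative; its link and target
    # come from the LINK_NAME/TARGET flags, roughly equivalent to running:
    #   update-alternatives --install /usr/sbin/sendmail sendmail /usr/bin/msmtp 100
    # The linuxstdbase override registers /usr/lib/sendmail instead, so LSB tests
    # find a sendmail at that legacy path; a provider with a higher priority (for
    # example a full MTA) would win the link.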
diff --git a/poky/meta/recipes-extended/pam/libpam/99_pam b/poky/meta/recipes-extended/pam/libpam/99_pam
index 97e990d..a88247b 100644
--- a/poky/meta/recipes-extended/pam/libpam/99_pam
+++ b/poky/meta/recipes-extended/pam/libpam/99_pam
@@ -1 +1 @@
-d root root 0755 /var/run/sepermit none
+d root root 0755 /run/sepermit none
diff --git a/poky/meta/recipes-extended/screen/screen/0001-configure-Add-needed-system-headers-in-checks.patch b/poky/meta/recipes-extended/screen/screen/0001-configure-Add-needed-system-headers-in-checks.patch
new file mode 100644
index 0000000..8065994
--- /dev/null
+++ b/poky/meta/recipes-extended/screen/screen/0001-configure-Add-needed-system-headers-in-checks.patch
@@ -0,0 +1,151 @@
+From 4e102de2e6204c1d8e8be00bb5ffd4587e70350c Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 15 Aug 2022 10:35:53 -0700
+Subject: [PATCH] configure: Add needed system headers in checks
+
+Newer compilers throw warnings when a function is used with implicit
+declaration and enabling -Werror can silently fail these tests and
+result in wrong configure results. Therefore add the needed headers in
+the AC_TRY_LINK macros
+
+	* configure.ac: Add missing system headers in AC_TRY_LINK.
+
+Upstream-Status: Submitted [https://lists.gnu.org/archive/html/screen-devel/2022-08/msg00000.html]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure.ac | 57 +++++++++++++++++++++++++++++++++++++++-------------
+ 1 file changed, 43 insertions(+), 14 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index c0f02df..d308079 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -233,6 +233,7 @@ AC_CHECKING(BSD job jontrol)
+ AC_TRY_LINK(
+ [#include <sys/types.h>
+ #include <sys/ioctl.h>
++#include <unistd.h>
+ ], [
+ #ifdef POSIX
+ tcsetpgrp(0, 0);
+@@ -250,12 +251,16 @@ dnl
+ dnl    ****  setresuid(), setreuid(), seteuid()  ****
+ dnl
+ AC_CHECKING(setresuid)
+-AC_TRY_LINK(,[
+-setresuid(0, 0, 0);
++AC_TRY_LINK([
++#include <unistd.h>
++],[
++return setresuid(0, 0, 0);
+ ], AC_DEFINE(HAVE_SETRESUID))
+ AC_CHECKING(setreuid)
+-AC_TRY_LINK(,[
+-setreuid(0, 0);
++AC_TRY_LINK([
++#include <unistd.h>
++],[
++return setreuid(0, 0);
+ ], AC_DEFINE(HAVE_SETREUID))
+ dnl
+ dnl seteuid() check:
+@@ -274,7 +279,9 @@ seteuid(0);
+ 
+ dnl execvpe
+ AC_CHECKING(execvpe)
+-AC_TRY_LINK(,[
++AC_TRY_LINK([
++    #include <unistd.h>
++],[
+     execvpe(0, 0, 0);
+ ], AC_DEFINE(HAVE_EXECVPE)
+ CFLAGS="$CFLAGS -D_GNU_SOURCE")
+@@ -284,10 +291,18 @@ dnl    ****  select()  ****
+ dnl
+ 
+ AC_CHECKING(select)
+-AC_TRY_LINK(,[select(0, 0, 0, 0, 0);],, 
++AC_TRY_LINK([
++    #include <sys/select.h>
++],[
++    select(0, 0, 0, 0, 0);
++],, 
+ LIBS="$LIBS -lnet -lnsl"
+ AC_CHECKING(select with $LIBS)
+-AC_TRY_LINK(,[select(0, 0, 0, 0, 0);],, 
++AC_TRY_LINK([
++    #include <sys/select.h>
++],[
++    select(0, 0, 0, 0, 0);
++],, 
+ AC_MSG_ERROR(!!! no select - no screen))
+ )
+ dnl
+@@ -624,11 +639,19 @@ dnl
+ dnl    ****  termcap or terminfo  ****
+ dnl
+ AC_CHECKING(for tgetent)
+-AC_TRY_LINK(,tgetent((char *)0, (char *)0);,,
++AC_TRY_LINK([
++    #include <curses.h>
++    #include <term.h>
++],[
++    tgetent((char *)0, (char *)0);
++],,
+ olibs="$LIBS"
+ LIBS="-lcurses $olibs"
+ AC_CHECKING(libcurses)
+-AC_TRY_LINK(,[
++AC_TRY_LINK([
++    #include <curses.h>
++    #include <term.h>
++],[
+ #ifdef __hpux
+ __sorry_hpux_libcurses_is_totally_broken_in_10_10();
+ #else
+@@ -871,7 +894,7 @@ test -f /usr/lib/libutil.a && LIBS="$LIBS -lutil"
+ fi
+ 
+ AC_CHECKING(getloadavg)
+-AC_TRY_LINK(,[getloadavg((double *)0, 0);],
++AC_TRY_LINK([#include <stdlib.h>],[getloadavg((double *)0, 0);],
+ AC_DEFINE(LOADAV_GETLOADAVG) load=1,
+ if test "$cross_compiling" = no && test -f /usr/lib/libkvm.a ; then
+ olibs="$LIBS"
+@@ -1109,10 +1132,10 @@ AC_CHECKING(IRIX sun library)
+ AC_TRY_LINK(,,,LIBS="$oldlibs")
+ 
+ AC_CHECKING(syslog)
+-AC_TRY_LINK(,[closelog();], , [oldlibs="$LIBS"
++AC_TRY_LINK([#include <syslog.h>],[closelog();], , [oldlibs="$LIBS"
+ LIBS="$LIBS -lbsd"
+ AC_CHECKING(syslog in libbsd.a)
+-AC_TRY_LINK(, [closelog();], AC_NOTE(- found.), [LIBS="$oldlibs"
++AC_TRY_LINK([#include <syslog.h>], [closelog();], AC_NOTE(- found.), [LIBS="$oldlibs"
+ AC_NOTE(- bad news: syslog missing.) AC_DEFINE(NOSYSLOG)])])
+ 
+ AC_EGREP_CPP(YES_IS_DEFINED,
+@@ -1149,7 +1172,7 @@ AC_CHECKING(getspnam)
+ AC_TRY_LINK([#include <shadow.h>], [getspnam("x");],AC_DEFINE(SHADOWPW))
+ 
+ AC_CHECKING(getttyent)
+-AC_TRY_LINK(,[getttyent();], AC_DEFINE(GETTTYENT))
++AC_TRY_LINK([#include <ttyent.h>],[getttyent();], AC_DEFINE(GETTTYENT))
+ 
+ AC_CHECKING(fdwalk)
+ AC_TRY_LINK([#include <stdlib.h>], [fdwalk(NULL, NULL);],AC_DEFINE(HAVE_FDWALK))
+@@ -1204,7 +1227,13 @@ main() {
+ AC_SYS_LONG_FILE_NAMES
+ 
+ AC_MSG_CHECKING(for vsprintf)
+-AC_TRY_LINK([#include <stdarg.h>],[va_list valist; vsprintf(0,0,valist);], AC_MSG_RESULT(yes);AC_DEFINE(USEVARARGS), AC_MSG_RESULT(no))
++AC_TRY_LINK([
++    #include <stdarg.h>
++    #include <stdio.h>
++],[
++   va_list valist;
++   vsprintf(0,0,valist);
++], AC_MSG_RESULT(yes);AC_DEFINE(USEVARARGS), AC_MSG_RESULT(no))
+ 
+ AC_HEADER_DIRENT
+ 
diff --git a/poky/meta/recipes-extended/screen/screen_4.9.0.bb b/poky/meta/recipes-extended/screen/screen_4.9.0.bb
index b36173b..77e8000 100644
--- a/poky/meta/recipes-extended/screen/screen_4.9.0.bb
+++ b/poky/meta/recipes-extended/screen/screen_4.9.0.bb
@@ -21,7 +21,8 @@
            file://0002-comm.h-now-depends-on-term.h.patch \
            file://0001-fix-for-multijob-build.patch \
            file://0001-Remove-more-compatibility-stuff.patch \
-          "
+           file://0001-configure-Add-needed-system-headers-in-checks.patch \
+           "
 
 SRC_URI[sha256sum] = "f9335281bb4d1538ed078df78a20c2f39d3af9a4e91c57d084271e0289c730f4"
 
diff --git a/poky/meta/recipes-extended/shadow/files/0001-Drop-nsswitch.conf-message-when-not-in-place-eg.-musl.patch b/poky/meta/recipes-extended/shadow/files/0001-Drop-nsswitch.conf-message-when-not-in-place-eg.-musl.patch
new file mode 100644
index 0000000..21c9a14
--- /dev/null
+++ b/poky/meta/recipes-extended/shadow/files/0001-Drop-nsswitch.conf-message-when-not-in-place-eg.-musl.patch
@@ -0,0 +1,27 @@
+From 11290e897a49adddee215833944a518443d9b0d6 Mon Sep 17 00:00:00 2001
+From: Andrei Gherzan <andrei.gherzan@huawei.com>
+Date: Wed, 24 Aug 2022 00:54:47 +0200
+Subject: [PATCH] Drop nsswitch.conf message when not in place - eg. musl
+
+Upstream-Status: Inappropriate [issue reported at https://github.com/shadow-maint/shadow/issues/557]
+Signed-off-by: Andrei Gherzan <andrei.gherzan@huawei.com>
+---
+ lib/nss.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/lib/nss.c b/lib/nss.c
+index 06fa48e..44245da 100644
+--- a/lib/nss.c
++++ b/lib/nss.c
+@@ -59,7 +59,7 @@ void nss_init(const char *nsswitch_path) {
+ 	//   subid:	files
+ 	nssfp = fopen(nsswitch_path, "r");
+ 	if (!nssfp) {
+-		fprintf(shadow_logfd, "Failed opening %s: %m\n", nsswitch_path);
++		//fprintf(shadow_logfd, "Failed opening %s: %m\n", nsswitch_path);
+ 		atomic_store(&nss_init_completed, true);
+ 		return;
+ 	}
+-- 
+2.25.1
+
diff --git a/poky/meta/recipes-extended/shadow/files/0001-shadow-use-relaxed-usernames.patch b/poky/meta/recipes-extended/shadow/files/0001-shadow-use-relaxed-usernames.patch
new file mode 100644
index 0000000..6c7abce
--- /dev/null
+++ b/poky/meta/recipes-extended/shadow/files/0001-shadow-use-relaxed-usernames.patch
@@ -0,0 +1,104 @@
+From b182c52d63bea0f08e1befcec5c3797dd97cdef5 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex@linutronix.de>
+Date: Tue, 16 Aug 2022 13:46:22 +0200
+Subject: [PATCH] shadow: use relaxed usernames
+
+The groupadd from shadow does not allow upper case group names, the
+same is true for the upstream shadow. But distributions like
+Debian/Ubuntu/CentOS has their own way to cope with this problem,
+this patch is picked up from CentOS release 7.0 to relax the usernames
+restrictions to allow the upper case group names, and the relaxation is
+POSIX compliant because POSIX indicate that usernames are composed of
+characters from the portable filename character set [A-Za-z0-9._-].
+
+Upstream-Status: Submitted [https://github.com/shadow-maint/shadow/pull/551]
+
+Signed-off-by: Shan Hai <shan.hai@windriver.com>
+Signed-off-by: Alexander Kanavin <alex@linutronix.de>
+---
+ libmisc/chkname.c  | 29 ++++++++++++++++++-----------
+ man/groupadd.8.xml |  6 ------
+ man/useradd.8.xml  |  6 ------
+ 3 files changed, 18 insertions(+), 23 deletions(-)
+
+diff --git a/libmisc/chkname.c b/libmisc/chkname.c
+index cb002a14..c0306c5a 100644
+--- a/libmisc/chkname.c
++++ b/libmisc/chkname.c
+@@ -32,21 +32,28 @@ static bool is_valid_name (const char *name)
+ 	}
+ 
+ 	/*
+-	 * User/group names must match [a-z_][a-z0-9_-]*[$]
+-	 */
+-
+-	if (('\0' == *name) ||
+-	    !((('a' <= *name) && ('z' >= *name)) || ('_' == *name))) {
++         * User/group names must match gnu e-regex:
++         *    [a-zA-Z0-9_.][a-zA-Z0-9_.-]{0,30}[a-zA-Z0-9_.$-]?
++         *
++         * as a non-POSIX, extension, allow "$" as the last char for
++         * sake of Samba 3.x "add machine script"
++         */
++	if ( ('\0' == *name) ||
++             !((*name >= 'a' && *name <= 'z') ||
++               (*name >= 'A' && *name <= 'Z') ||
++               (*name >= '0' && *name <= '9') ||
++               (*name == '_') || (*name == '.') 
++	      )) {
+ 		return false;
+ 	}
+ 
+ 	while ('\0' != *++name) {
+-		if (!(( ('a' <= *name) && ('z' >= *name) ) ||
+-		      ( ('0' <= *name) && ('9' >= *name) ) ||
+-		      ('_' == *name) ||
+-		      ('-' == *name) ||
+-		      ( ('$' == *name) && ('\0' == *(name + 1)) )
+-		     )) {
++                if (!(  (*name >= 'a' && *name <= 'z') ||
++                        (*name >= 'A' && *name <= 'Z') ||
++                        (*name >= '0' && *name <= '9') ||
++                        (*name == '_') || (*name == '.') || (*name == '-') ||
++                        (*name == '$' && *(name + 1) == '\0') 
++                     )) {
+ 			return false;
+ 		}
+ 	}
+diff --git a/man/groupadd.8.xml b/man/groupadd.8.xml
+index 26671f92..3eacaa09 100644
+--- a/man/groupadd.8.xml
++++ b/man/groupadd.8.xml
+@@ -63,12 +63,6 @@
+       values from the system. The new group will be entered into the system
+       files as needed.
+     </para>
+-     <para>
+-       Groupnames must start with a lower case letter or an underscore,
+-       followed by lower case letters, digits, underscores, or dashes.
+-       They can end with a dollar sign.
+-       In regular expression terms: [a-z_][a-z0-9_-]*[$]?
+-     </para>
+      <para>
+        Groupnames may only be up to &GROUP_NAME_MAX_LENGTH; characters long.
+      </para>
+diff --git a/man/useradd.8.xml b/man/useradd.8.xml
+index c7f95b47..e056d141 100644
+--- a/man/useradd.8.xml
++++ b/man/useradd.8.xml
+@@ -691,12 +691,6 @@
+       the user account creation request.
+     </para>
+ 
+-    <para>
+-      Usernames must start with a lower case letter or an underscore,
+-      followed by lower case letters, digits, underscores, or dashes.
+-      They can end with a dollar sign.
+-      In regular expression terms: [a-z_][a-z0-9_-]*[$]?
+-    </para>
+     <para>
+       Usernames may only be up to 32 characters long.
+     </para>
+-- 
+2.30.2
+
diff --git a/poky/meta/recipes-extended/shadow/files/shadow-4.1.3-dots-in-usernames.patch b/poky/meta/recipes-extended/shadow/files/shadow-4.1.3-dots-in-usernames.patch
deleted file mode 100644
index a7bb0a9..0000000
--- a/poky/meta/recipes-extended/shadow/files/shadow-4.1.3-dots-in-usernames.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-# commit message copied from openembedded:
-#    commit 246c80637b135f3a113d319b163422f98174ee6c
-#    Author: Khem Raj <raj.khem@gmail.com>
-#    Date:   Wed Jun 9 13:37:03 2010 -0700
-#
-#    shadow-4.1.4.2: Add patches to support dots in login id.
-#    
-#    Signed-off-by: Khem Raj <raj.khem@gmail.com>
-#
-# comment added by Kevin Tian <kevin.tian@intel.com>, 2010-08-11
-
-Upstream-Status: Pending
-
-Signed-off-by: Scott Garman <scott.a.garman@intel.com>
-
-Index: shadow-4.1.4.2/libmisc/chkname.c
-===================================================================
---- shadow-4.1.4.2.orig/libmisc/chkname.c	2009-04-28 12:14:04.000000000 -0700
-+++ shadow-4.1.4.2/libmisc/chkname.c	2010-06-03 17:43:20.638973857 -0700
-@@ -61,6 +61,7 @@ static bool is_valid_name (const char *n
- 		      ( ('0' <= *name) && ('9' >= *name) ) ||
- 		      ('_' == *name) ||
- 		      ('-' == *name) ||
-+		      ('.' == *name) ||
- 		      ( ('$' == *name) && ('\0' == *(name + 1)) )
- 		     )) {
- 			return false;
diff --git a/poky/meta/recipes-extended/shadow/files/shadow-relaxed-usernames.patch b/poky/meta/recipes-extended/shadow/files/shadow-relaxed-usernames.patch
deleted file mode 100644
index cc83336..0000000
--- a/poky/meta/recipes-extended/shadow/files/shadow-relaxed-usernames.patch
+++ /dev/null
@@ -1,111 +0,0 @@
-From ca472d6866e545aaa70a70020e3226f236a8aafc Mon Sep 17 00:00:00 2001
-From: Shan Hai <shan.hai@windriver.com>
-Date: Tue, 13 Sep 2016 13:45:46 +0800
-Subject: [PATCH] shadow: use relaxed usernames
-
-The groupadd from shadow does not allow upper case group names, the
-same is true for the upstream shadow. But distributions like
-Debian/Ubuntu/CentOS has their own way to cope with this problem,
-this patch is picked up from CentOS release 7.0 to relax the usernames
-restrictions to allow the upper case group names, and the relaxation is
-POSIX compliant because POSIX indicate that usernames are composed of
-characters from the portable filename character set [A-Za-z0-9._-].
-
-Upstream-Status: Pending
-
-Signed-off-by: Shan Hai <shan.hai@windriver.com>
-
----
- libmisc/chkname.c  | 30 ++++++++++++++++++------------
- man/groupadd.8.xml |  6 ------
- man/useradd.8.xml  |  8 +-------
- 3 files changed, 19 insertions(+), 25 deletions(-)
-
-diff --git a/libmisc/chkname.c b/libmisc/chkname.c
-index 90f185c..65762b4 100644
---- a/libmisc/chkname.c
-+++ b/libmisc/chkname.c
-@@ -55,22 +55,28 @@ static bool is_valid_name (const char *name)
- 	}
- 
- 	/*
--	 * User/group names must match [a-z_][a-z0-9_-]*[$]
--	 */
--
--	if (('\0' == *name) ||
--	    !((('a' <= *name) && ('z' >= *name)) || ('_' == *name))) {
-+         * User/group names must match gnu e-regex:
-+         *    [a-zA-Z0-9_.][a-zA-Z0-9_.-]{0,30}[a-zA-Z0-9_.$-]?
-+         *
-+         * as a non-POSIX, extension, allow "$" as the last char for
-+         * sake of Samba 3.x "add machine script"
-+         */
-+	if ( ('\0' == *name) ||
-+             !((*name >= 'a' && *name <= 'z') ||
-+               (*name >= 'A' && *name <= 'Z') ||
-+               (*name >= '0' && *name <= '9') ||
-+               (*name == '_') || (*name == '.') 
-+	      )) {
- 		return false;
- 	}
- 
- 	while ('\0' != *++name) {
--		if (!(( ('a' <= *name) && ('z' >= *name) ) ||
--		      ( ('0' <= *name) && ('9' >= *name) ) ||
--		      ('_' == *name) ||
--		      ('-' == *name) ||
--		      ('.' == *name) ||
--		      ( ('$' == *name) && ('\0' == *(name + 1)) )
--		     )) {
-+                if (!(  (*name >= 'a' && *name <= 'z') ||
-+                        (*name >= 'A' && *name <= 'Z') ||
-+                        (*name >= '0' && *name <= '9') ||
-+                        (*name == '_') || (*name == '.') || (*name == '-') ||
-+                        (*name == '$' && *(name + 1) == '\0') 
-+                     )) {
- 			return false;
- 		}
- 	}
-diff --git a/man/groupadd.8.xml b/man/groupadd.8.xml
-index 1e58f09..d804b61 100644
---- a/man/groupadd.8.xml
-+++ b/man/groupadd.8.xml
-@@ -272,12 +272,6 @@
- 
-    <refsect1 id='caveats'>
-      <title>CAVEATS</title>
--     <para>
--       Groupnames must start with a lower case letter or an underscore,
--       followed by lower case letters, digits, underscores, or dashes.
--       They can end with a dollar sign.
--       In regular expression terms: [a-z_][a-z0-9_-]*[$]?
--     </para>
-      <para>
-        Groupnames may only be up to &GROUP_NAME_MAX_LENGTH; characters long.
-      </para>
-diff --git a/man/useradd.8.xml b/man/useradd.8.xml
-index a16d730..c0bd777 100644
---- a/man/useradd.8.xml
-+++ b/man/useradd.8.xml
-@@ -366,7 +366,7 @@
- 	</term>
- 	<listitem>
- 	  <para>
--	    Do no create the user's home directory, even if the system
-+	    Do not create the user's home directory, even if the system
- 	    wide setting from <filename>/etc/login.defs</filename>
- 	    (<option>CREATE_HOME</option>) is set to
- 	    <replaceable>yes</replaceable>.
-@@ -660,12 +660,6 @@
-       the user account creation request.
-     </para>
- 
--    <para>
--      Usernames must start with a lower case letter or an underscore,
--      followed by lower case letters, digits, underscores, or dashes.
--      They can end with a dollar sign.
--      In regular expression terms: [a-z_][a-z0-9_-]*[$]?
--    </para>
-     <para>
-       Usernames may only be up to 32 characters long.
-     </para>
diff --git a/poky/meta/recipes-extended/shadow/files/shadow-update-pam-conf.patch b/poky/meta/recipes-extended/shadow/files/shadow-update-pam-conf.patch
index 15f8044..3b61b75 100644
--- a/poky/meta/recipes-extended/shadow/files/shadow-update-pam-conf.patch
+++ b/poky/meta/recipes-extended/shadow/files/shadow-update-pam-conf.patch
@@ -4,7 +4,9 @@
 common-password and common-session.
 So update them with oe way.
 
-Upstream-Status: Pending
+See meta/recipes-extended/pam/libpam/pam.d/common-password
+
+Upstream-Status: Inappropriate [oe-core specific]
 
 Signed-off-by: Kang Kai <kai.kang@windriver.com>
 
diff --git a/poky/meta/recipes-extended/shadow/shadow.inc b/poky/meta/recipes-extended/shadow/shadow.inc
index f5fdf43..414bf46 100644
--- a/poky/meta/recipes-extended/shadow/shadow.inc
+++ b/poky/meta/recipes-extended/shadow/shadow.inc
@@ -11,10 +11,9 @@
 DEPENDS = "virtual/crypt"
 
 UPSTREAM_CHECK_URI = "https://github.com/shadow-maint/shadow/releases"
-SRC_URI = "https://github.com/shadow-maint/shadow/releases/download/v${PV}/${BP}.tar.gz \
-           file://shadow-4.1.3-dots-in-usernames.patch \
+SRC_URI = "https://github.com/shadow-maint/shadow/releases/download/${PV}/${BP}.tar.gz \
+           file://0001-shadow-use-relaxed-usernames.patch \
            ${@bb.utils.contains('PACKAGECONFIG', 'pam', '${PAM_SRC_URI}', '', d)} \
-           file://shadow-relaxed-usernames.patch \
            file://useradd \
            "
 
@@ -26,12 +25,13 @@
 SRC_URI:append:class-native = " \
            file://0001-Disable-use-of-syslog-for-sysroot.patch \
            file://commonio.c-fix-unexpected-open-failure-in-chroot-env.patch \
+           file://0001-Drop-nsswitch.conf-message-when-not-in-place-eg.-musl.patch \
            "
 SRC_URI:append:class-nativesdk = " \
            file://0001-Disable-use-of-syslog-for-sysroot.patch \
            "
+SRC_URI[sha256sum] = "9fdb73b5d2b44e8ba9fcee1b4493ac75dd5040bda35b9ac8b06570cd192e7ee3"
 
-SRC_URI[sha256sum] = "f262089be6a1011d50ec7849e14571b7b2e788334368f3dccb718513f17935ed"
 
 # Additional Policy files for PAM
 PAM_SRC_URI = "file://pam.d/chfn \
@@ -149,6 +149,13 @@
 	# Handle link properly after rename, otherwise missing files would
 	# lead rpm failed dependencies.
 	ln -sf newgrp.${BPN} ${D}${bindir}/sg
+
+	# usermod requires the subuid/subgid files to be in place before being
+	# able to use the -v/-V flags otherwise it fails:
+	# usermod: /etc/subuid does not exist, you cannot use the flags -v or -V
+	install -d ${D}${sysconfdir}
+	touch ${D}${sysconfdir}/subuid
+	touch ${D}${sysconfdir}/subgid
 }
 
 PACKAGES =+ "${PN}-base"
diff --git a/poky/meta/recipes-extended/shadow/shadow_4.11.1.bb b/poky/meta/recipes-extended/shadow/shadow_4.12.1.bb
similarity index 100%
rename from poky/meta/recipes-extended/shadow/shadow_4.11.1.bb
rename to poky/meta/recipes-extended/shadow/shadow_4.12.1.bb
diff --git a/poky/meta/recipes-extended/slang/slang/dont-link-to-host.patch b/poky/meta/recipes-extended/slang/slang/dont-link-to-host.patch
index 42dba0f..4b02068 100644
--- a/poky/meta/recipes-extended/slang/slang/dont-link-to-host.patch
+++ b/poky/meta/recipes-extended/slang/slang/dont-link-to-host.patch
@@ -1,3 +1,8 @@
+From b4a6e3c8309cff0f2311cd959c5091213b633851 Mon Sep 17 00:00:00 2001
+From: Ross Burton <ross.burton@intel.com>
+Date: Tue, 7 Feb 2017 14:35:43 +0000
+Subject: [PATCH] slang: rewrite recipe to run autoconf
+
 SLANG_INST_LIB is the location of where slang will end up, but when building for
 packaging this doesn't have DESTDIR appended so can potentially link to the host
 for cross builds and will trigger QA errors.
@@ -7,10 +12,20 @@
 Upstream-Status: Pending
 Signed-off-by: Ross Burton <ross.burton@intel.com>
 
+---
+ slsh/Makefile.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
 diff --git a/slsh/Makefile.in b/slsh/Makefile.in
-index cba9d81..4c1c370 100644
+index addd343..63a5c9b 100644
 --- a/slsh/Makefile.in
 +++ b/slsh/Makefile.in
-@@ -80 +80 @@ SHELL = /bin/sh
--INST_LIBS = $(DEST_LIB_DIR) $(RPATH) $(SLANG_INST_LIB) -lslang $(READLINE_LIB) $(DYNAMIC_LIBS)
-+INST_LIBS = $(DEST_LIB_DIR) $(RPATH) -lslang $(READLINE_LIB) $(DYNAMIC_LIBS)
+@@ -77,7 +77,7 @@ SLSYSWRAP_LIB = @LIB_SLSYSWRAP@
+ #----------------------------------------------------------------------------
+ @SET_MAKE@
+ SHELL = /bin/sh
+-INST_LIBS = $(DEST_LIB_DIR) $(RPATH) $(SLANG_INST_LIB) -lslang $(LDFLAGS) $(READLINE_LIB) $(DYNAMIC_LIBS)
++INST_LIBS = $(DEST_LIB_DIR) $(RPATH) -lslang $(LDFLAGS) $(READLINE_LIB) $(DYNAMIC_LIBS)
+ DEFS = -DSLSH_CONF_DIR='"$(SLSH_CONF_DIR)"' -DSLSH_PATH='"$(SLSH_LIB_DIR)"' \
+  -DSLSH_CONF_DIR_ENV='$(SLSH_CONF_DIR_ENV)' -DSLSH_LIB_DIR_ENV='$(SLSH_LIB_DIR_ENV)' \
+  -DSLSH_PATH_ENV='$(SLSH_PATH_ENV)' $(SLSYSWRAP_DEF)
diff --git a/poky/meta/recipes-extended/slang/slang/terminfo_fixes.patch b/poky/meta/recipes-extended/slang/slang/terminfo_fixes.patch
index 3ca20a8..331b7f0 100644
--- a/poky/meta/recipes-extended/slang/slang/terminfo_fixes.patch
+++ b/poky/meta/recipes-extended/slang/slang/terminfo_fixes.patch
@@ -1,3 +1,8 @@
+From 2a75095638002d37a2f9c7aeb0ec54f271b0a1c4 Mon Sep 17 00:00:00 2001
+From: Joe Slater <joe.slater@windriver.com>
+Date: Tue, 1 Aug 2017 12:36:53 -0700
+Subject: [PATCH] slang: fix terminfo related problems
+
 Do not use the JD_TERMCAP macro since we cannot get the terminfo from
 ncurses pkg-config, but fix the macro to not reference host directories.
 Also add src/test/Makefile.in so that we can use -ltermcap if we want to.
@@ -8,10 +13,18 @@
 
 Signed-off-by: Joe Slater <joe.slater@windriver.com>
 
+---
+ autoconf/aclocal.m4   |  8 +---
+ autoconf/configure.ac | 11 +++++-
+ src/test/Makefile.in  | 90 +++++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 100 insertions(+), 9 deletions(-)
+ create mode 100644 src/test/Makefile.in
 
+diff --git a/autoconf/aclocal.m4 b/autoconf/aclocal.m4
+index b2dfcd3..5f94ed3 100644
 --- a/autoconf/aclocal.m4
 +++ b/autoconf/aclocal.m4
-@@ -506,14 +506,10 @@ then
+@@ -509,15 +509,9 @@ then
  else
     MISC_TERMINFO_DIRS=""
  fi
@@ -19,8 +32,8 @@
 -                  /usr/lib/terminfo \
 -                  /usr/share/terminfo \
 -                  /usr/share/lib/terminfo \
--		  /usr/local/lib/terminfo"
-+
+-		  /usr/local/lib/terminfo \
+-                  /etc/terminfo /lib/terminfo"
  TERMCAP=-ltermcap
  
 -for terminfo_dir in $JD_Terminfo_Dirs
@@ -28,9 +41,11 @@
  do
     if test -d $terminfo_dir
     then
+diff --git a/autoconf/configure.ac b/autoconf/configure.ac
+index 8e11e13..9e6402c 100644
 --- a/autoconf/configure.ac
 +++ b/autoconf/configure.ac
-@@ -249,7 +249,14 @@ AC_CHECK_SIZEOF(size_t)
+@@ -250,7 +250,14 @@ AC_CHECK_SIZEOF(size_t)
  JD_CHECK_LONG_LONG
  JD_LARGE_FILE_SUPPORT
  
@@ -46,7 +61,7 @@
  JD_GCC_WARNINGS
  
  JD_SET_OBJ_SRC_DIR(src)
-@@ -364,7 +371,7 @@ AC_CONFIG_HEADER(src/sysconf.h:src/confi
+@@ -365,7 +372,7 @@ AC_CONFIG_HEADER(src/sysconf.h:src/config.hin)
  dnl AC_CONFIG_SUBDIRS(demo)
  
  AC_OUTPUT(Makefile:autoconf/Makefile.in \
@@ -55,6 +70,9 @@
    slang.pc:autoconf/slangpc.in \
  )
  
+diff --git a/src/test/Makefile.in b/src/test/Makefile.in
+new file mode 100644
+index 0000000..4b7307f
 --- /dev/null
 +++ b/src/test/Makefile.in
 @@ -0,0 +1,90 @@
diff --git a/poky/meta/recipes-extended/slang/slang_2.3.2.bb b/poky/meta/recipes-extended/slang/slang_2.3.2.bb
deleted file mode 100644
index 08cc967..0000000
--- a/poky/meta/recipes-extended/slang/slang_2.3.2.bb
+++ /dev/null
@@ -1,84 +0,0 @@
-SUMMARY = "The shared library for the S-Lang extension language"
-
-DESCRIPTION = "S-Lang is an interpreted language and a programming library.  The \
-S-Lang language was designed so that it can be easily embedded into \
-a program to provide the program with a powerful extension language. \
-The S-Lang library, provided in this package, provides the S-Lang \
-extension language.  S-Lang's syntax resembles C, which makes it easy \
-to recode S-Lang procedures in C if you need to."
-
-HOMEPAGE = "http://www.jedsoft.org/slang/"
-SECTION = "libs"
-DEPENDS = "ncurses virtual/libiconv"
-
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=a52a18a472d4f7e45479b06563717c02"
-
-SRC_URI = "http://www.jedsoft.org/releases/${BPN}/${BP}.tar.bz2 \
-           file://no-x.patch \
-           file://dont-link-to-host.patch \
-           file://test-add-output-in-the-format-result-testname.patch \
-           file://terminfo_fixes.patch \
-           file://array_test.patch \
-           file://run-ptest \
-          "
-
-SRC_URI[md5sum] = "c2d5a7aa0246627da490be4e399c87cb"
-SRC_URI[sha256sum] = "fc9e3b0fc4f67c3c1f6d43c90c16a5c42d117b8e28457c5b46831b8b5d3ae31a"
-
-UPSTREAM_CHECK_URI = "http://www.jedsoft.org/releases/slang/"
-PREMIRRORS:append = " http://www.jedsoft.org/releases/slang/.* http://www.jedsoft.org/releases/slang/old/"
-
-inherit autotools-brokensep ptest
-CLEANBROKEN = "1"
-
-EXTRA_OECONF = "--without-onig"
-# There's no way to turn off rpaths and slang will -rpath to the default search
-# path. Unset RPATH to stop this.
-EXTRA_OEMAKE = "RPATH=''"
-
-PACKAGECONFIG ??= "pcre"
-PACKAGECONFIG[pcre] = "--with-pcre=${STAGING_DIR_HOST}${prefix},--without-pcre,pcre"
-PACKAGECONFIG[png] = "--with-png=${STAGING_DIR_HOST}${prefix},--without-png,libpng"
-PACKAGECONFIG[zlib] = "--with-z=${STAGING_DIR_HOST}${prefix},--without-z,zlib"
-
-do_configure:prepend() {
-    cd ${S}/autoconf
-    # slang keeps configure.ac and rest of autoconf files in autoconf/ directory
-    # we have to go there to be able to run gnu-configize cause it expects configure.{in,ac}
-    # to be present. Resulting files land in autoconf/autoconf/ so we need to move them.
-    gnu-configize --force && mv autoconf/config.* .
-    # For the same reason we also need to run autoconf manually.
-    autoconf && mv configure ..
-    cd ${B}
-}
-
-do_compile_ptest() {
-	oe_runmake -C src static
-	oe_runmake -C src/test sltest
-}
-
-do_install_ptest() {
-	mkdir ${D}${PTEST_PATH}/test
-	for f in Makefile sltest runtests.sh *.sl *.inc; do
-		cp ${S}/src/test/$f ${D}${PTEST_PATH}/test/
-	done
-	sed -e 's/\ \$(TEST_PGM)\.c\ assoc\.c\ list\.c\ \$(SLANGLIB)\/libslang\.a//' \
-	    -e '/\$(CC).*(TEST_PGM)/d' \
-	    -i ${D}${PTEST_PATH}/test/Makefile
-
-	cp ${S}/slsh/lib/require.sl ${D}${PTEST_PATH}/test/
-	sed -i 's/\.\.\/\.\.\/slsh\/lib\/require\.sl/require\.sl/' ${D}${PTEST_PATH}/test/req.sl
-
-	cp ${S}/doc/text/slangfun.txt ${D}${PTEST_PATH}/test/
-	sed -i 's/\.\.\/\.\.\/doc\/text\/slangfun\.txt/slangfun\.txt/' ${D}${PTEST_PATH}/test/docfun.sl
-}
-
-FILES:${PN} += "${libdir}/${BPN}/v2/modules/ ${datadir}/slsh/"
-
-RDEPENDS:${PN}-ptest += "make"
-
-PARALLEL_MAKE = ""
-PARALLEL_MAKEINST = ""
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-extended/slang/slang_2.3.3.bb b/poky/meta/recipes-extended/slang/slang_2.3.3.bb
new file mode 100644
index 0000000..05b8aff
--- /dev/null
+++ b/poky/meta/recipes-extended/slang/slang_2.3.3.bb
@@ -0,0 +1,83 @@
+SUMMARY = "The shared library for the S-Lang extension language"
+
+DESCRIPTION = "S-Lang is an interpreted language and a programming library.  The \
+S-Lang language was designed so that it can be easily embedded into \
+a program to provide the program with a powerful extension language. \
+The S-Lang library, provided in this package, provides the S-Lang \
+extension language.  S-Lang's syntax resembles C, which makes it easy \
+to recode S-Lang procedures in C if you need to."
+
+HOMEPAGE = "http://www.jedsoft.org/slang/"
+SECTION = "libs"
+DEPENDS = "ncurses virtual/libiconv"
+
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=a52a18a472d4f7e45479b06563717c02"
+
+SRC_URI = "http://www.jedsoft.org/releases/${BPN}/${BP}.tar.bz2 \
+           file://no-x.patch \
+           file://dont-link-to-host.patch \
+           file://test-add-output-in-the-format-result-testname.patch \
+           file://terminfo_fixes.patch \
+           file://array_test.patch \
+           file://run-ptest \
+          "
+
+SRC_URI[sha256sum] = "f9145054ae131973c61208ea82486d5dd10e3c5cdad23b7c4a0617743c8f5a18"
+
+UPSTREAM_CHECK_URI = "http://www.jedsoft.org/releases/slang/"
+PREMIRRORS:append = " http://www.jedsoft.org/releases/slang/.* http://www.jedsoft.org/releases/slang/old/"
+
+inherit autotools-brokensep ptest
+CLEANBROKEN = "1"
+
+EXTRA_OECONF = "--without-onig"
+# There's no way to turn off rpaths and slang will -rpath to the default search
+# path. Unset RPATH to stop this.
+EXTRA_OEMAKE = "RPATH=''"
+
+PACKAGECONFIG ??= "pcre"
+PACKAGECONFIG[pcre] = "--with-pcre=${STAGING_DIR_HOST}${prefix},--without-pcre,pcre"
+PACKAGECONFIG[png] = "--with-png=${STAGING_DIR_HOST}${prefix},--without-png,libpng"
+PACKAGECONFIG[zlib] = "--with-z=${STAGING_DIR_HOST}${prefix},--without-z,zlib"
+
+do_configure:prepend() {
+    cd ${S}/autoconf
+    # slang keeps configure.ac and rest of autoconf files in autoconf/ directory
+    # we have to go there to be able to run gnu-configize cause it expects configure.{in,ac}
+    # to be present. Resulting files land in autoconf/autoconf/ so we need to move them.
+    gnu-configize --force && mv autoconf/config.* .
+    # For the same reason we also need to run autoconf manually.
+    autoconf && mv configure ..
+    cd ${B}
+}
+
+do_compile_ptest() {
+	oe_runmake -C src static
+	oe_runmake -C src/test sltest
+}
+
+do_install_ptest() {
+	mkdir ${D}${PTEST_PATH}/test
+	for f in Makefile sltest runtests.sh *.sl *.inc; do
+		cp ${S}/src/test/$f ${D}${PTEST_PATH}/test/
+	done
+	sed -e 's/\ \$(TEST_PGM)\.c\ assoc\.c\ list\.c\ \$(SLANGLIB)\/libslang\.a//' \
+	    -e '/\$(CC).*(TEST_PGM)/d' \
+	    -i ${D}${PTEST_PATH}/test/Makefile
+
+	cp ${S}/slsh/lib/require.sl ${D}${PTEST_PATH}/test/
+	sed -i 's/\.\.\/\.\.\/slsh\/lib\/require\.sl/require\.sl/' ${D}${PTEST_PATH}/test/req.sl
+
+	cp ${S}/doc/text/slangfun.txt ${D}${PTEST_PATH}/test/
+	sed -i 's/\.\.\/\.\.\/doc\/text\/slangfun\.txt/slangfun\.txt/' ${D}${PTEST_PATH}/test/docfun.sl
+}
+
+FILES:${PN} += "${libdir}/${BPN}/v2/modules/ ${datadir}/slsh/"
+
+RDEPENDS:${PN}-ptest += "make"
+
+PARALLEL_MAKE = ""
+PARALLEL_MAKEINST = ""
+
+BBCLASSEXTEND = "native nativesdk"
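A note on the PACKAGECONFIG entries that the 2.3.3 recipe above keeps from 2.3.2: each flag's value is a comma-separated, positional list. Sketch, using the pcre entry from the recipe (fields beyond the third are unused here):

    # PACKAGECONFIG[feature] = "<args if enabled>,<args if disabled>,<build-time DEPENDS>,<runtime RDEPENDS>,..."
    PACKAGECONFIG[pcre] = "--with-pcre=${STAGING_DIR_HOST}${prefix},--without-pcre,pcre"
    # Listing "pcre" in PACKAGECONFIG therefore appends --with-pcre=... to the
    # configure arguments and adds pcre to DEPENDS; leaving it out passes --without-pcre.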
diff --git a/poky/meta/recipes-extended/stress-ng/stress-ng/0001-core-helper-remove-include-of-sys-mount.h.patch b/poky/meta/recipes-extended/stress-ng/stress-ng/0001-core-helper-remove-include-of-sys-mount.h.patch
new file mode 100644
index 0000000..52b2e61
--- /dev/null
+++ b/poky/meta/recipes-extended/stress-ng/stress-ng/0001-core-helper-remove-include-of-sys-mount.h.patch
@@ -0,0 +1,34 @@
+From 627e5227783ff2a0c3b11adee57ef7f0684a367e Mon Sep 17 00:00:00 2001
+From: Colin Ian King <colin.i.king@gmail.com>
+Date: Mon, 1 Aug 2022 21:39:39 +0100
+Subject: [PATCH 1/2] core-helper: remove include of sys/mount.h
+
+This is not required in the shim core and it fixes a build issue
+with newer glibc 2.36
+
+Fixes: https://github.com/ColinIanKing/stress-ng/issues/216
+
+Upstream-Status: Backport [https://github.com/ColinIanKing/stress-ng/commit/69f4f4d629c5f4304b5388b6a7fa8616de23f50e]
+Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
+---
+ core-helper.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/core-helper.c b/core-helper.c
+index 6795410d..9e4533f2 100644
+--- a/core-helper.c
++++ b/core-helper.c
+@@ -39,10 +39,6 @@
+ #include <sys/loadavg.h>
+ #endif
+ 
+-#if defined(HAVE_SYS_MOUNT_H)
+-#include <sys/mount.h>
+-#endif
+-
+ #if defined(HAVE_SYS_PRCTL_H)
+ #include <sys/prctl.h>
+ #endif
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-extended/stress-ng/stress-ng/0002-core-shim-remove-include-of-sys-mount.h.patch b/poky/meta/recipes-extended/stress-ng/stress-ng/0002-core-shim-remove-include-of-sys-mount.h.patch
new file mode 100644
index 0000000..5cb95f1
--- /dev/null
+++ b/poky/meta/recipes-extended/stress-ng/stress-ng/0002-core-shim-remove-include-of-sys-mount.h.patch
@@ -0,0 +1,34 @@
+From 0503ec88e9187c0152b7b2840a1ad5bfb022bbfe Mon Sep 17 00:00:00 2001
+From: Colin Ian King <colin.i.king@gmail.com>
+Date: Mon, 1 Aug 2022 21:28:49 +0100
+Subject: [PATCH 2/2] core-shim: remove include of sys/mount.h
+
+This is not required in the shim core and it fixes a build issue
+with newer glibc 2.36
+
+Fixes: https://github.com/ColinIanKing/stress-ng/issues/216
+
+Upstream-Status: Backport [https://github.com/ColinIanKing/stress-ng/commit/0c9a711f213b5734729ab0c5ed90669e9fd11ca2]
+Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
+---
+ core-shim.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/core-shim.c b/core-shim.c
+index 0343402a..324eba7d 100644
+--- a/core-shim.c
++++ b/core-shim.c
+@@ -52,10 +52,6 @@
+ #include <asm/ldt.h>
+ #endif
+ 
+-#if defined(HAVE_SYS_MOUNT_H)
+-#include <sys/mount.h>
+-#endif
+-
+ #if defined(HAVE_SYS_PRCTL_H)
+ #include <sys/prctl.h>
+ #endif
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-extended/stress-ng/stress-ng_0.14.02.bb b/poky/meta/recipes-extended/stress-ng/stress-ng_0.14.02.bb
deleted file mode 100644
index 000a640..0000000
--- a/poky/meta/recipes-extended/stress-ng/stress-ng_0.14.02.bb
+++ /dev/null
@@ -1,29 +0,0 @@
-SUMMARY = "System load testing utility"
-DESCRIPTION = "Deliberately simple workload generator for POSIX systems. It \
-imposes a configurable amount of CPU, memory, I/O, and disk stress on the system."
-HOMEPAGE = "https://github.com/ColinIanKing/stress-ng#readme"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-
-SRC_URI = "git://github.com/ColinIanKing/stress-ng.git;protocol=https;branch=master"
-SRCREV = "5239ae6c82bfb239637b5a66cd39a035a158e641"
-S = "${WORKDIR}/git"
-
-DEPENDS = "coreutils-native"
-
-PROVIDES = "stress"
-RPROVIDES:${PN} = "stress"
-RREPLACES:${PN} = "stress"
-RCONFLICTS:${PN} = "stress"
-
-inherit bash-completion
-
-do_compile:prepend() {
-    mkdir -p configs
-    touch configs/HAVE_APPARMOR
-}
-
-do_install() {
-    oe_runmake DESTDIR=${D} install
-    ln -s stress-ng ${D}${bindir}/stress
-}
diff --git a/poky/meta/recipes-extended/stress-ng/stress-ng_0.14.03.bb b/poky/meta/recipes-extended/stress-ng/stress-ng_0.14.03.bb
new file mode 100644
index 0000000..370662b
--- /dev/null
+++ b/poky/meta/recipes-extended/stress-ng/stress-ng_0.14.03.bb
@@ -0,0 +1,32 @@
+SUMMARY = "System load testing utility"
+DESCRIPTION = "Deliberately simple workload generator for POSIX systems. It \
+imposes a configurable amount of CPU, memory, I/O, and disk stress on the system."
+HOMEPAGE = "https://github.com/ColinIanKing/stress-ng#readme"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+SRC_URI = "git://github.com/ColinIanKing/stress-ng.git;protocol=https;branch=master \
+           file://0001-core-helper-remove-include-of-sys-mount.h.patch \
+           file://0002-core-shim-remove-include-of-sys-mount.h.patch \
+"
+SRCREV = "346518caffe5302f9a6d36860459c297c6968aaa"
+S = "${WORKDIR}/git"
+
+DEPENDS = "coreutils-native"
+
+PROVIDES = "stress"
+RPROVIDES:${PN} = "stress"
+RREPLACES:${PN} = "stress"
+RCONFLICTS:${PN} = "stress"
+
+inherit bash-completion
+
+do_compile:prepend() {
+    mkdir -p configs
+    touch configs/HAVE_APPARMOR
+}
+
+do_install() {
+    oe_runmake DESTDIR=${D} install
+    ln -s stress-ng ${D}${bindir}/stress
+}
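The provider/replacement variables retained in the new stress-ng recipe above are what let it stand in for the retired standalone "stress" package; in outline:

    # PROVIDES = "stress"          - recipes may DEPENDS on "stress" at build time
    # RPROVIDES:${PN} = "stress"   - images may install or RDEPENDS on "stress"
    # RREPLACES / RCONFLICTS       - package managers upgrade an installed "stress"
    #                                to stress-ng and refuse co-installation
    # Combined with the do_install() symlink, invoking `stress` on the target runs
    # stress-ng.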
diff --git a/poky/meta/recipes-extended/sysklogd/sysklogd_2.4.0.bb b/poky/meta/recipes-extended/sysklogd/sysklogd_2.4.0.bb
deleted file mode 100644
index 4e6da70..0000000
--- a/poky/meta/recipes-extended/sysklogd/sysklogd_2.4.0.bb
+++ /dev/null
@@ -1,56 +0,0 @@
-SUMMARY = "System Log Daemons"
-DESCRIPTION = "The sysklogd package implements system log daemons: syslogd"
-HOMEPAGE = "http://www.infodrom.org/projects/sysklogd/"
-SECTION = "base"
-
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=5b4be4b2549338526758ef479c040943 \
-                    file://src/syslogd.c;beginline=2;endline=15;md5=a880fecbc04503f071c494a9c0dd4f97 \
-                   "
-
-inherit update-rc.d update-alternatives systemd autotools
-
-SRC_URI = "git://github.com/troglobit/sysklogd.git;branch=master;protocol=https \
-           file://sysklogd \
-           "
-
-SRCREV = "7dc4783af8e8e0d200d38a87b6a4bc9bbec5ce92"
-
-S = "${WORKDIR}/git"
-
-EXTRA_OECONF = "--with-systemd=${systemd_system_unitdir} --without-logger"
-
-do_install:append () {
-       install -d ${D}${sysconfdir}
-       install -m 644 ${S}/syslog.conf ${D}${sysconfdir}/syslog.conf
-       install -d ${D}${sysconfdir}/init.d
-       install -m 755 ${WORKDIR}/sysklogd ${D}${sysconfdir}/init.d/syslog
-}
-
-SYSTEMD_PACKAGES = "${PN}"
-SYSTEMD_SERVICE:${PN} = "syslogd.service"
-SYSTEMD_AUTO_ENABLE = "enable"
-
-INITSCRIPT_NAME = "syslog"
-CONFFILES:${PN} = "${sysconfdir}/syslog.conf"
-RCONFLICTS:${PN} = "rsyslog busybox-syslog syslog-ng"
-
-FILES:${PN} += "${@bb.utils.contains('DISTRO_FEATURES','systemd','${exec_prefix}/lib/tmpfiles.d/sysklogd.conf', '', d)}"
-
-ALTERNATIVE_PRIORITY = "100"
-
-ALTERNATIVE:${PN}-doc = "syslogd.8"
-ALTERNATIVE_LINK_NAME[syslogd.8] = "${mandir}/man8/syslogd.8"
-
-pkg_prerm:${PN} () {
-	if test "x$D" = "x"; then
-	if test "$1" = "upgrade" -o "$1" = "remove"; then
-		/etc/init.d/syslog stop || :
-	fi
-	fi
-}
-
-python () {
-    if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
-        d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
-}
diff --git a/poky/meta/recipes-extended/sysklogd/sysklogd_2.4.4.bb b/poky/meta/recipes-extended/sysklogd/sysklogd_2.4.4.bb
new file mode 100644
index 0000000..a19b4f5
--- /dev/null
+++ b/poky/meta/recipes-extended/sysklogd/sysklogd_2.4.4.bb
@@ -0,0 +1,56 @@
+SUMMARY = "System Log Daemons"
+DESCRIPTION = "The sysklogd package implements system log daemons: syslogd"
+HOMEPAGE = "http://www.infodrom.org/projects/sysklogd/"
+SECTION = "base"
+
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=5b4be4b2549338526758ef479c040943 \
+                    file://src/syslogd.c;beginline=2;endline=15;md5=a880fecbc04503f071c494a9c0dd4f97 \
+                   "
+
+inherit update-rc.d update-alternatives systemd autotools
+
+SRC_URI = "git://github.com/troglobit/sysklogd.git;branch=master;protocol=https \
+           file://sysklogd \
+           "
+
+SRCREV = "51d471543ce59eace6df6da0e42658911f1fb8c0"
+
+S = "${WORKDIR}/git"
+
+EXTRA_OECONF = "--with-systemd=${systemd_system_unitdir} --without-logger"
+
+do_install:append () {
+       install -d ${D}${sysconfdir}
+       install -m 644 ${S}/syslog.conf ${D}${sysconfdir}/syslog.conf
+       install -d ${D}${sysconfdir}/init.d
+       install -m 755 ${WORKDIR}/sysklogd ${D}${sysconfdir}/init.d/syslog
+}
+
+SYSTEMD_PACKAGES = "${PN}"
+SYSTEMD_SERVICE:${PN} = "syslogd.service"
+SYSTEMD_AUTO_ENABLE = "enable"
+
+INITSCRIPT_NAME = "syslog"
+CONFFILES:${PN} = "${sysconfdir}/syslog.conf"
+RCONFLICTS:${PN} = "rsyslog busybox-syslog syslog-ng"
+
+FILES:${PN} += "${@bb.utils.contains('DISTRO_FEATURES','systemd','${exec_prefix}/lib/tmpfiles.d/sysklogd.conf', '', d)}"
+
+ALTERNATIVE_PRIORITY = "100"
+
+ALTERNATIVE:${PN}-doc = "syslogd.8"
+ALTERNATIVE_LINK_NAME[syslogd.8] = "${mandir}/man8/syslogd.8"
+
+pkg_prerm:${PN} () {
+	if test "x$D" = "x"; then
+	if test "$1" = "upgrade" -o "$1" = "remove"; then
+		/etc/init.d/syslog stop || :
+	fi
+	fi
+}
+
+python () {
+    if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
+        d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
+}
diff --git a/poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers-7.6/0001-Fix-implicit-function-declaration-warnings.patch b/poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers-7.6/0001-Fix-implicit-function-declaration-warnings.patch
new file mode 100644
index 0000000..ec793ac
--- /dev/null
+++ b/poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers-7.6/0001-Fix-implicit-function-declaration-warnings.patch
@@ -0,0 +1,109 @@
+From 9c97b5db237a793e0d1b6b0241570bdc6e35ee24 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 7 Aug 2022 17:42:24 -0700
+Subject: [PATCH] Fix implicit-function-declaration warnings
+
+These are seen with clang-15+
+
+Upstream-Status: Inappropriate [upstream is dead]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ hosts_access.c | 3 +++
+ safe_finger.c  | 1 +
+ shell_cmd.c    | 3 +++
+ tcpd.c         | 2 +-
+ tcpdchk.c      | 1 +
+ workarounds.c  | 1 +
+ 6 files changed, 10 insertions(+), 1 deletion(-)
+
+diff --git a/hosts_access.c b/hosts_access.c
+index 0133e5e..58697ea 100644
+--- a/hosts_access.c
++++ b/hosts_access.c
+@@ -33,6 +33,7 @@ static char sccsid[] = "@(#) hosts_access.c 1.21 97/02/12 02:13:22";
+ #endif
+ #include <netinet/in.h>
+ #include <arpa/inet.h>
++#include <rpcsvc/ypclnt.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <syslog.h>
+@@ -45,6 +46,8 @@ static char sccsid[] = "@(#) hosts_access.c 1.21 97/02/12 02:13:22";
+ #endif
+ 
+ extern int errno;
++extern int match_pattern_ylo(const char *s, const char *pattern);
++extern unsigned long cidr_mask_addr(char* str);
+ 
+ #ifndef	INADDR_NONE
+ #define	INADDR_NONE	(-1)		/* XXX should be 0xffffffff */
+diff --git a/safe_finger.c b/safe_finger.c
+index 23afab1..a6458fb 100644
+--- a/safe_finger.c
++++ b/safe_finger.c
+@@ -34,6 +34,7 @@ static char sccsid[] = "@(#) safe_finger.c 1.4 94/12/28 17:42:41";
+ #include <syslog.h>
+ 
+ extern void exit();
++extern int pipe_stdin(char  **argv);
+ 
+ /* Local stuff */
+ 
+diff --git a/shell_cmd.c b/shell_cmd.c
+index 62d31bc..a566092 100644
+--- a/shell_cmd.c
++++ b/shell_cmd.c
+@@ -16,10 +16,13 @@ static char sccsid[] = "@(#) shell_cmd.c 1.5 94/12/28 17:42:44";
+ 
+ #include <sys/types.h>
+ #include <sys/param.h>
++#include <sys/wait.h>
++#include <fcntl.h>
+ #include <signal.h>
+ #include <stdio.h>
+ #include <syslog.h>
+ #include <string.h>
++#include <unistd.h>
+ 
+ extern void exit();
+ 
+diff --git a/tcpd.c b/tcpd.c
+index dc9ff17..4353caa 100644
+--- a/tcpd.c
++++ b/tcpd.c
+@@ -46,7 +46,7 @@ void fix_options(struct request_info *);
+ int     allow_severity = SEVERITY;	/* run-time adjustable */
+ int     deny_severity = LOG_WARNING;	/* ditto */
+ 
+-main(argc, argv)
++void main(argc, argv)
+ int     argc;
+ char  **argv;
+ {
+diff --git a/tcpdchk.c b/tcpdchk.c
+index 5dca8bd..67c12ce 100644
+--- a/tcpdchk.c
++++ b/tcpdchk.c
+@@ -38,6 +38,7 @@ static char sccsid[] = "@(#) tcpdchk.c 1.8 97/02/12 02:13:25";
+ 
+ extern int errno;
+ extern void exit();
++extern unsigned long cidr_mask_addr(char* str);
+ extern int optind;
+ extern char *optarg;
+ 
+diff --git a/workarounds.c b/workarounds.c
+index b22b378..6335049 100644
+--- a/workarounds.c
++++ b/workarounds.c
+@@ -21,6 +21,7 @@ char    sccsid[] = "@(#) workarounds.c 1.6 96/03/19 16:22:25";
+ #include <stdio.h>
+ #include <syslog.h>
+ #include <string.h>
++#include <unistd.h>
+ 
+ extern int errno;
+ 
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers_7.6.bb b/poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers_7.6.bb
index 814d7fd..8137d25 100644
--- a/poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers_7.6.bb
+++ b/poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers_7.6.bb
@@ -50,6 +50,7 @@
            file://fix_warnings.patch \
            file://fix_warnings2.patch \
            file://0001-Remove-fgets-extern-declaration.patch \
+           file://0001-Fix-implicit-function-declaration-warnings.patch \
            "
 
 SRC_URI[md5sum] = "e6fa25f71226d090f34de3f6b122fb5a"
diff --git a/poky/meta/recipes-extended/time/time/0001-include-string.h-for-memset.patch b/poky/meta/recipes-extended/time/time/0001-include-string.h-for-memset.patch
new file mode 100644
index 0000000..f6ea212
--- /dev/null
+++ b/poky/meta/recipes-extended/time/time/0001-include-string.h-for-memset.patch
@@ -0,0 +1,27 @@
+From c8deae54f92d636878097063b411af9fb5262ad3 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 15 Aug 2022 07:24:24 -0700
+Subject: [PATCH] include string.h for memset()
+
+Fixes implicit function declaration warning e.g.
+
+resuse.c:103:3: error: call to undeclared library function 'memset' with type 'void *(void *, int, unsigned long)'
+
+Upstream-Status: Submitted [https://lists.gnu.org/archive/html/bug-time/2022-08/msg00001.html]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/resuse.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/resuse.c b/src/resuse.c
+index cf5a08c..9d3d18a 100644
+--- a/src/resuse.c
++++ b/src/resuse.c
+@@ -22,6 +22,7 @@
+ */ 
+ 
+ #include "config.h"
++#include <string.h>
+ #include <sys/time.h>
+ #include <sys/wait.h>
+ #include <sys/resource.h>
diff --git a/poky/meta/recipes-extended/time/time_1.9.bb b/poky/meta/recipes-extended/time/time_1.9.bb
index 706605f..8364210 100644
--- a/poky/meta/recipes-extended/time/time_1.9.bb
+++ b/poky/meta/recipes-extended/time/time_1.9.bb
@@ -13,7 +13,9 @@
 
 BBCLASSEXTEND = "native nativesdk"
 
-SRC_URI = "${GNU_MIRROR}/time/time-${PV}.tar.gz"
+SRC_URI = "${GNU_MIRROR}/time/time-${PV}.tar.gz \
+           file://0001-include-string.h-for-memset.patch \
+           "
 
 SRC_URI[md5sum] = "d2356e0fe1c0b85285d83c6b2ad51b5f"
 SRC_URI[sha256sum] = "fbacf0c81e62429df3e33bda4cee38756604f18e01d977338e23306a3e3b521e"
diff --git a/poky/meta/recipes-extended/timezone/timezone.inc b/poky/meta/recipes-extended/timezone/timezone.inc
index cdd1a2a..2b956cf 100644
--- a/poky/meta/recipes-extended/timezone/timezone.inc
+++ b/poky/meta/recipes-extended/timezone/timezone.inc
@@ -6,7 +6,7 @@
 LICENSE = "PD & BSD-3-Clause"
 LIC_FILES_CHKSUM = "file://LICENSE;md5=c679c9d6b02bc2757b3eaf8f53c43fba"
 
-PV = "2022a"
+PV = "2022b"
 
 SRC_URI =" http://www.iana.org/time-zones/repository/releases/tzcode${PV}.tar.gz;name=tzcode \
            http://www.iana.org/time-zones/repository/releases/tzdata${PV}.tar.gz;name=tzdata \
@@ -14,6 +14,6 @@
 
 UPSTREAM_CHECK_URI = "http://www.iana.org/time-zones"
 
-SRC_URI[tzcode.sha256sum] = "f8575e7e33be9ee265df2081092526b81c80abac3f4a04399ae9d4d91cdadac7"
-SRC_URI[tzdata.sha256sum] = "ef7fffd9f4f50f4f58328b35022a32a5a056b245c5cb3d6791dddb342f871664"
+SRC_URI[tzcode.sha256sum] = "bab20d943e59a3218435f48d868a4e552f18d6d7f3dd128660c5660c80b8a05f"
+SRC_URI[tzdata.sha256sum] = "f590eaf04a395245426c2be4fae71c143aea5cebc11088b7a0a5704461df397d"
 
diff --git a/poky/meta/recipes-extended/unzip/unzip/0001-configure-Add-correct-system-headers-and-prototypes-.patch b/poky/meta/recipes-extended/unzip/unzip/0001-configure-Add-correct-system-headers-and-prototypes-.patch
new file mode 100644
index 0000000..f7e0854
--- /dev/null
+++ b/poky/meta/recipes-extended/unzip/unzip/0001-configure-Add-correct-system-headers-and-prototypes-.patch
@@ -0,0 +1,112 @@
+From 5ac5885d35257888d0e4a9dda903405314f9fc84 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 10 Aug 2022 17:53:13 -0700
+Subject: [PATCH] configure: Add correct system headers and prototypes to tests
+
+Newer compilers e.g. clang-15+ have turned stricter towards these
+warnings and turned them into errors which results in subtle failures
+during build, therefore make the testcases use the needed headers and
+modern C
+
+Upstream-Status: Inactive-Upstream
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ unix/configure | 51 +++++++++++++++++++++++++++++++++++++++-----------
+ 1 file changed, 40 insertions(+), 11 deletions(-)
+
+diff --git a/unix/configure b/unix/configure
+index 49579f3..8fd82dd 100755
+--- a/unix/configure
++++ b/unix/configure
+@@ -379,14 +379,37 @@ $CC $CFLAGS -c conftest.c >/dev/null 2>/dev/null
+ 
+ # Check for missing functions
+ # add NO_'function_name' to flags if missing
+-for func in fchmod fchown lchown nl_langinfo
+-do
+-  echo Check for $func
+-  echo "int main(){ $func(); return 0; }" > conftest.c
+-  $CC $BFLAG $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
+-  [ $? -ne 0 ] && CFLAGSR="${CFLAGSR} -DNO_`echo $func | tr '[a-z]' '[A-Z]'`"
+-done
++echo Check for fchmod
++cat > conftest.c << _EOF_
++#include <sys/stat.h>
++int main(){ fchmod(0,0); return 0; }
++_EOF_
++$CC $BFLAG $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGSR="${CFLAGSR} -DNO_FCHMOD"
+ 
++echo Check for fchown
++cat > conftest.c << _EOF_
++#include <unistd.h>
++int main(){ fchown(0,0,0); return 0; }
++_EOF_
++$CC $BFLAG $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGSR="${CFLAGSR} -DNO_FCHOWN"
++
++echo Check for lchown
++cat > conftest.c << _EOF_
++#include <unistd.h>
++int main(){ lchown(NULL,0,0); return 0; }
++_EOF_
++$CC $BFLAG $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGSR="${CFLAGSR} -DNO_LCHOWN"
++
++echo Check for nl_langinfo
++cat > conftest.c << _EOF_
++#include <langinfo.h>
++int main(){ nl_langinfo(0); return 0; }
++_EOF_
++$CC $BFLAG $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGSR="${CFLAGSR} -DNO_NL_LANGINFO"
+ # Check (seriously) for a working lchmod.
+ echo 'Check for lchmod'
+ temp_file="/tmp/unzip_test_$$"
+@@ -401,14 +424,17 @@ ln -s "${temp_link}" "${temp_file}" && \
+ rm -f "${temp_file}"
+ 
+ echo Check for memset
+-echo "int main(){ char k; memset(&k,0,0); return 0; }" > conftest.c
++cat > conftest.c << _EOF_
++#include <string.h>
++int main(){ char k; memset(&k,0,0); return 0; }
++_EOF_
+ $CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
+ [ $? -ne 0 ] && CFLAGSR="${CFLAGSR} -DZMEM"
+ 
+ echo Check for errno declaration
+ cat > conftest.c << _EOF_
+ #include <errno.h>
+-main()
++int main()
+ {
+   errno = 0;
+   return 0;
+@@ -419,6 +445,8 @@ $CC $CFLAGS -c conftest.c >/dev/null 2>/dev/null
+ 
+ echo Check for directory libraries
+ cat > conftest.c << _EOF_
++#include <sys/types.h>
++#include <dirent.h>
+ int main() { return closedir(opendir(".")); }
+ _EOF_
+ 
+@@ -523,10 +551,11 @@ fi
+ # needed for AIX (and others ?) when mmap is used
+ echo Check for valloc
+ cat > conftest.c << _EOF_
+-main()
++#include <stdlib.h>
++int main()
+ {
+ #ifdef MMAP
+-    valloc();
++    valloc(0);
+ #endif
+ }
+ _EOF_
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-extended/unzip/unzip_6.0.bb b/poky/meta/recipes-extended/unzip/unzip_6.0.bb
index f35856c..a4d10c3 100644
--- a/poky/meta/recipes-extended/unzip/unzip_6.0.bb
+++ b/poky/meta/recipes-extended/unzip/unzip_6.0.bb
@@ -31,6 +31,7 @@
         file://CVE-2021-4217.patch \
         file://CVE-2022-0529.patch \
         file://CVE-2022-0530.patch \
+        file://0001-configure-Add-correct-system-headers-and-prototypes-.patch \
 "
 UPSTREAM_VERSION_UNKNOWN = "1"
 
@@ -45,6 +46,9 @@
 
 S = "${WORKDIR}/unzip60"
 
+# Enable largefile support
+CFLAGS += "-DLARGE_FILE_SUPPORT"
+
 # Makefile uses CF_NOOPT instead of CFLAGS.  We lifted the values from
 # Makefile and add CFLAGS.  Optimization will be overriden by unzip
 # configure to be -O3.
diff --git a/poky/meta/recipes-extended/watchdog/watchdog/0001-shutdown-Do-not-guard-sys-quota.h-sys-swap.h-and-sys.patch b/poky/meta/recipes-extended/watchdog/watchdog/0001-shutdown-Do-not-guard-sys-quota.h-sys-swap.h-and-sys.patch
new file mode 100644
index 0000000..8c419e1
--- /dev/null
+++ b/poky/meta/recipes-extended/watchdog/watchdog/0001-shutdown-Do-not-guard-sys-quota.h-sys-swap.h-and-sys.patch
@@ -0,0 +1,37 @@
+From ca1d379fa13c4055d42d2ff3a647b4397768efcd Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 23 Aug 2022 19:23:26 -0700
+Subject: [PATCH] shutdown: Do not guard sys/quota.h sys/swap.h and
+ sys/reboot.h with __GLIBC__
+
+These headers are provided by uclibc/musl/glibc and bionic so we can
+assume they are not needed to be glibc specific includes. This also
+ensures that we get proper declaration of reboot() API
+
+Upstream-Status: Submitted [https://sourceforge.net/p/watchdog/patches/12/]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/shutdown.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/src/shutdown.c b/src/shutdown.c
+index 1d9a857..6aea0d0 100644
+--- a/src/shutdown.c
++++ b/src/shutdown.c
+@@ -29,13 +29,9 @@
+ #include "extern.h"
+ #include "ext2_mnt.h"
+ 
+-#if defined __GLIBC__
+ #include <sys/quota.h>
+ #include <sys/swap.h>
+ #include <sys/reboot.h>
+-#else				/* __GLIBC__ */
+-#include <linux/quota.h>
+-#endif				/* __GLIBC__ */
+ 
+ #include <unistd.h>
+ 
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-extended/watchdog/watchdog_5.16.bb b/poky/meta/recipes-extended/watchdog/watchdog_5.16.bb
index 1163846..26fcc10 100644
--- a/poky/meta/recipes-extended/watchdog/watchdog_5.16.bb
+++ b/poky/meta/recipes-extended/watchdog/watchdog_5.16.bb
@@ -13,6 +13,7 @@
            file://watchdog.init \
            file://wd_keepalive.init \
            file://0001-wd_keepalive.service-use-run-instead-of-var-run.patch \
+           file://0001-shutdown-Do-not-guard-sys-quota.h-sys-swap.h-and-sys.patch \
 "
 
 SRC_URI[md5sum] = "1b4f51cabc64d1bee2fce7cdd626831f"
diff --git a/poky/meta/recipes-extended/xinetd/xinetd_2.3.15.4.bb b/poky/meta/recipes-extended/xinetd/xinetd_2.3.15.4.bb
index 62ee70d..8974173 100644
--- a/poky/meta/recipes-extended/xinetd/xinetd_2.3.15.4.bb
+++ b/poky/meta/recipes-extended/xinetd/xinetd_2.3.15.4.bb
@@ -30,6 +30,8 @@
 PACKAGECONFIG ??= "tcp-wrappers"
 PACKAGECONFIG[tcp-wrappers] = "--with-libwrap,,tcp-wrappers"
 
+CFLAGS += "-D_GNU_SOURCE"
+
 CONFFILES:${PN} = "${sysconfdir}/xinetd.conf"
 
 do_install:append() {
diff --git a/poky/meta/recipes-extended/xz/xz/CVE-2022-1271.patch b/poky/meta/recipes-extended/xz/xz/CVE-2022-1271.patch
deleted file mode 100644
index e43e73c..0000000
--- a/poky/meta/recipes-extended/xz/xz/CVE-2022-1271.patch
+++ /dev/null
@@ -1,96 +0,0 @@
-From dc932a1e9c0d9f1db71be11a9b82496e3a72f112 Mon Sep 17 00:00:00 2001
-From: Lasse Collin <lasse.collin@tukaani.org>
-Date: Tue, 29 Mar 2022 19:19:12 +0300
-Subject: [PATCH] xzgrep: Fix escaping of malicious filenames (ZDI-CAN-16587).
-
-Malicious filenames can make xzgrep to write to arbitrary files
-or (with a GNU sed extension) lead to arbitrary code execution.
-
-xzgrep from XZ Utils versions up to and including 5.2.5 are
-affected. 5.3.1alpha and 5.3.2alpha are affected as well.
-This patch works for all of them.
-
-This bug was inherited from gzip's zgrep. gzip 1.12 includes
-a fix for zgrep.
-
-The issue with the old sed script is that with multiple newlines,
-the N-command will read the second line of input, then the
-s-commands will be skipped because it's not the end of the
-file yet, then a new sed cycle starts and the pattern space
-is printed and emptied. So only the last line or two get escaped.
-
-One way to fix this would be to read all lines into the pattern
-space first. However, the included fix is even simpler: All lines
-except the last line get a backslash appended at the end. To ensure
-that shell command substitution doesn't eat a possible trailing
-newline, a colon is appended to the filename before escaping.
-The colon is later used to separate the filename from the grep
-output so it is fine to add it here instead of a few lines later.
-
-The old code also wasn't POSIX compliant as it used \n in the
-replacement section of the s-command. Using \<newline> is the
-POSIX compatible method.
-
-LC_ALL=C was added to the two critical sed commands. POSIX sed
-manual recommends it when using sed to manipulate pathnames
-because in other locales invalid multibyte sequences might
-cause issues with some sed implementations. In case of GNU sed,
-these particular sed scripts wouldn't have such problems but some
-other scripts could have, see:
-
-    info '(sed)Locale Considerations'
-
-This vulnerability was discovered by:
-cleemy desu wayo working with Trend Micro Zero Day Initiative
-
-Thanks to Jim Meyering and Paul Eggert discussing the different
-ways to fix this and for coordinating the patch release schedule
-with gzip.
-
-Upstream-Status: Backport [https://tukaani.org/xz/xzgrep-ZDI-CAN-16587.patch]
-CVE: CVE-2022-1271
-
-Signed-off-by: Ralph Siemsen <ralph.siemsen@linaro.org>
----
- src/scripts/xzgrep.in | 20 ++++++++++++--------
- 1 file changed, 12 insertions(+), 8 deletions(-)
-
-diff --git a/src/scripts/xzgrep.in b/src/scripts/xzgrep.in
-index 9db5c3a..f64dddb 100644
---- a/src/scripts/xzgrep.in
-+++ b/src/scripts/xzgrep.in
-@@ -179,22 +179,26 @@ for i; do
-          { test $# -eq 1 || test $no_filename -eq 1; }; then
-       eval "$grep"
-     else
-+      # Append a colon so that the last character will never be a newline
-+      # which would otherwise get lost in shell command substitution.
-+      i="$i:"
-+
-+      # Escape & \ | and newlines only if such characters are present
-+      # (speed optimization).
-       case $i in
-       (*'
- '* | *'&'* | *'\'* | *'|'*)
--        i=$(printf '%s\n' "$i" |
--            sed '
--              $!N
--              $s/[&\|]/\\&/g
--              $s/\n/\\n/g
--            ');;
-+        i=$(printf '%s\n' "$i" | LC_ALL=C sed 's/[&\|]/\\&/g; $!s/$/\\/');;
-       esac
--      sed_script="s|^|$i:|"
-+
-+      # $i already ends with a colon so don't add it here.
-+      sed_script="s|^|$i|"
- 
-       # Fail if grep or sed fails.
-       r=$(
-         exec 4>&1
--        (eval "$grep" 4>&-; echo $? >&4) 3>&- | sed "$sed_script" >&3 4>&-
-+        (eval "$grep" 4>&-; echo $? >&4) 3>&- |
-+            LC_ALL=C sed "$sed_script" >&3 4>&-
-       ) || r=2
-       exit $r
-     fi >&3 5>&-
diff --git a/poky/meta/recipes-extended/xz/xz_5.2.5.bb b/poky/meta/recipes-extended/xz/xz_5.2.5.bb
deleted file mode 100644
index 720e070..0000000
--- a/poky/meta/recipes-extended/xz/xz_5.2.5.bb
+++ /dev/null
@@ -1,47 +0,0 @@
-SUMMARY = "Utilities for managing LZMA compressed files"
-HOMEPAGE = "https://tukaani.org/xz/"
-DESCRIPTION = "XZ Utils is free general-purpose data compression software with a high compression ratio. XZ Utils were written for POSIX-like systems, but also work on some not-so-POSIX systems. XZ Utils are the successor to LZMA Utils."
-SECTION = "base"
-
-# The source includes bits of PD, GPL-2.0, GPL-3.0, LGPL-2.1-or-later, but the
-# only file which is GPL-3.0 is an m4 macro which isn't shipped in any of our
-# packages, and the LGPL bits are under lib/, which appears to be used for
-# libgnu, which appears to be used for DOS builds. So we're left with
-# GPL-2.0-or-later and PD.
-LICENSE = "GPL-2.0-or-later & GPL-3.0-with-autoconf-exception & LGPL-2.1-or-later & PD"
-LICENSE:${PN} = "GPL-2.0-or-later"
-LICENSE:${PN}-dev = "GPL-2.0-or-later"
-LICENSE:${PN}-staticdev = "GPL-2.0-or-later"
-LICENSE:${PN}-doc = "GPL-2.0-or-later"
-LICENSE:${PN}-dbg = "GPL-2.0-or-later"
-LICENSE:${PN}-locale = "GPL-2.0-or-later"
-LICENSE:liblzma = "PD"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=97d554a32881fee0aa283d96e47cb24a \
-                    file://COPYING.GPLv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://COPYING.GPLv3;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://COPYING.LGPLv2.1;md5=4fbd65380cdd255951079008b364516c \
-                    file://lib/getopt.c;endline=23;md5=2069b0ee710572c03bb3114e4532cd84 \
-                    "
-
-SRC_URI = "https://tukaani.org/xz/xz-${PV}.tar.gz \
-           file://CVE-2022-1271.patch \
-           "
-SRC_URI[md5sum] = "0d270c997aff29708c74d53f599ef717"
-SRC_URI[sha256sum] = "f6f4910fd033078738bd82bfba4f49219d03b17eb0794eb91efbae419f4aba10"
-UPSTREAM_CHECK_REGEX = "xz-(?P<pver>\d+(\.\d+)+)\.tar"
-
-CACHED_CONFIGUREVARS += "gl_cv_posix_shell=/bin/sh"
-
-inherit autotools gettext
-
-PACKAGES =+ "liblzma"
-
-FILES:liblzma = "${libdir}/liblzma*${SOLIBS}"
-
-inherit update-alternatives
-ALTERNATIVE_PRIORITY = "100"
-ALTERNATIVE:${PN} = "xz xzcat unxz \
-                     lzma lzcat unlzma"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-extended/xz/xz_5.2.6.bb b/poky/meta/recipes-extended/xz/xz_5.2.6.bb
new file mode 100644
index 0000000..3482622
--- /dev/null
+++ b/poky/meta/recipes-extended/xz/xz_5.2.6.bb
@@ -0,0 +1,44 @@
+SUMMARY = "Utilities for managing LZMA compressed files"
+HOMEPAGE = "https://tukaani.org/xz/"
+DESCRIPTION = "XZ Utils is free general-purpose data compression software with a high compression ratio. XZ Utils were written for POSIX-like systems, but also work on some not-so-POSIX systems. XZ Utils are the successor to LZMA Utils."
+SECTION = "base"
+
+# The source includes bits of PD, GPL-2.0, GPL-3.0, LGPL-2.1-or-later, but the
+# only file which is GPL-3.0 is an m4 macro which isn't shipped in any of our
+# packages, and the LGPL bits are under lib/, which appears to be used for
+# libgnu, which appears to be used for DOS builds. So we're left with
+# GPL-2.0-or-later and PD.
+LICENSE = "GPL-2.0-or-later & GPL-3.0-with-autoconf-exception & LGPL-2.1-or-later & PD"
+LICENSE:${PN} = "GPL-2.0-or-later"
+LICENSE:${PN}-dev = "GPL-2.0-or-later"
+LICENSE:${PN}-staticdev = "GPL-2.0-or-later"
+LICENSE:${PN}-doc = "GPL-2.0-or-later"
+LICENSE:${PN}-dbg = "GPL-2.0-or-later"
+LICENSE:${PN}-locale = "GPL-2.0-or-later"
+LICENSE:liblzma = "PD"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=97d554a32881fee0aa283d96e47cb24a \
+                    file://COPYING.GPLv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://COPYING.GPLv3;md5=d32239bcb673463ab874e80d47fae504 \
+                    file://COPYING.LGPLv2.1;md5=4fbd65380cdd255951079008b364516c \
+                    file://lib/getopt.c;endline=23;md5=2069b0ee710572c03bb3114e4532cd84 \
+                    "
+
+SRC_URI = "https://tukaani.org/xz/xz-${PV}.tar.gz"
+SRC_URI[sha256sum] = "a2105abee17bcd2ebd15ced31b4f5eda6e17efd6b10f921a01cda4a44c91b3a0"
+UPSTREAM_CHECK_REGEX = "xz-(?P<pver>\d+(\.\d+)+)\.tar"
+
+CACHED_CONFIGUREVARS += "gl_cv_posix_shell=/bin/sh"
+
+inherit autotools gettext
+
+PACKAGES =+ "liblzma"
+
+FILES:liblzma = "${libdir}/liblzma*${SOLIBS}"
+
+inherit update-alternatives
+ALTERNATIVE_PRIORITY = "100"
+ALTERNATIVE:${PN} = "xz xzcat unxz \
+                     lzma lzcat unlzma"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-extended/zip/zip-3.0/0001-configure-Specify-correct-function-signatures-and-de.patch b/poky/meta/recipes-extended/zip/zip-3.0/0001-configure-Specify-correct-function-signatures-and-de.patch
new file mode 100644
index 0000000..a4f8382
--- /dev/null
+++ b/poky/meta/recipes-extended/zip/zip-3.0/0001-configure-Specify-correct-function-signatures-and-de.patch
@@ -0,0 +1,134 @@
+From 8810f2643c9372a8083272dc1fc157427646d961 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 10 Aug 2022 17:16:23 -0700
+Subject: [PATCH 1/2] configure: Specify correct function signatures and
+ declarations
+
+Include needed system headers in configure tests, this is needed because
+newer compilers are getting stricter about the C99 specs and turning
+-Wimplicit-function-declaration into hard error e.g. clang-15+
+
+Upstream-Status: Inactive-Upstream
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ unix/configure | 79 +++++++++++++++++++++++++++++++++++++++++---------
+ 1 file changed, 66 insertions(+), 13 deletions(-)
+
+diff --git a/unix/configure b/unix/configure
+index 1d9a9bb..f2b3d02 100644
+--- a/unix/configure
++++ b/unix/configure
+@@ -513,21 +513,70 @@ $CC $CFLAGS -c conftest.c >/dev/null 2>/dev/null
+ # Check for missing functions
+ # add NO_'function_name' to flags if missing
+ 
+-for func in rmdir strchr strrchr rename mktemp mktime mkstemp
+-do
+-  echo Check for $func
+-  echo "int main(){ $func(); return 0; }" > conftest.c
+-  $CC $CFLAGS $LDFLAGS $BFLAG -o conftest conftest.c >/dev/null 2>/dev/null
+-  [ $? -ne 0 ] && CFLAGS="${CFLAGS} -DNO_`echo $func | tr '[a-z]' '[A-Z]'`"
+-done
++echo Check for rmdir
++cat > conftest.c << _EOF_
++#include <unistd.h>
++int main(){ rmdir(NULL); return 0; }
++_EOF_
++$CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGS="${CFLAGS} -DNO_RMDIR"
++
++echo Check for strchr
++cat > conftest.c << _EOF_
++#include <string.h>
++int main(){ strchr(NULL,0); return 0; }
++_EOF_
++$CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGS="${CFLAGS} -DNO_STRCHR"
+ 
++echo Check for strrchr
++cat > conftest.c << _EOF_
++#include <string.h>
++int main(){ strrchr(NULL,0); return 0; }
++_EOF_
++$CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGS="${CFLAGS} -DNO_STRRCHR"
++
++echo Check for rename
++cat > conftest.c << _EOF_
++#include <stdio.h>
++int main(){ rename(NULL,NULL); return 0; }
++_EOF_
++$CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGS="${CFLAGS} -DNO_RENAME"
++
++echo Check for mktemp
++cat > conftest.c << _EOF_
++#include <stdlib.h>
++int main(){ mktemp(NULL); return 0; }
++_EOF_
++$CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGS="${CFLAGS} -DNO_MKTEMP"
++
++echo Check for mktime
++cat > conftest.c << _EOF_
++#include <time.h>
++int main(){ mktime(NULL); return 0; }
++_EOF_
++$CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGS="${CFLAGS} -DNO_MKTIME"
++
++echo Check for mkstemp
++cat > conftest.c << _EOF_
++#include <stdlib.h>
++int main(){ return mkstemp(NULL); }
++_EOF_
++$CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
++[ $? -ne 0 ] && CFLAGS="${CFLAGS} -DNO_MKSTEMP"
+ 
+ echo Check for memset
+-echo "int main(){ char k; memset(&k,0,0); return 0; }" > conftest.c
++cat > conftest.c << _EOF_
++#include <string.h>
++int main(){ char k; memset(&k,0,0); return 0; }
++_EOF_
+ $CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
+ [ $? -ne 0 ] && CFLAGS="${CFLAGS} -DZMEM"
+ 
+-
+ echo Check for memmove
+ cat > conftest.c << _EOF_
+ #include <string.h>
+@@ -548,7 +597,7 @@ $CC $CFLAGS $LDFLAGS -o conftest conftest.c >/dev/null 2>/dev/null
+ echo Check for errno declaration
+ cat > conftest.c << _EOF_
+ #include <errno.h>
+-main()
++int main()
+ {
+   errno = 0;
+   return 0;
+@@ -625,14 +674,18 @@ CFLAGS="${CFLAGS} ${OPT}"
+ 
+ echo Check for valloc
+ cat > conftest.c << _EOF_
+-main()
++#include <stdlib.h>
++int main()
+ {
+ #ifdef MMAP
+-    valloc();
++    valloc(0);
+ #endif
++    return 0;
+ }
+ _EOF_
+-$CC ${CFLAGS} -c conftest.c > /dev/null 2>/dev/null
++#$CC ${CFLAGS} -c conftest.c > /dev/null 2>/dev/null
++$CC ${CFLAGS} -c conftest.c
++echo "==========================================="
+ [ $? -ne 0 ] && CFLAGS="${CFLAGS} -DNO_VALLOC"
+ 
+ 
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-extended/zip/zip-3.0/0002-unix.c-Do-not-redefine-DIR-as-FILE.patch b/poky/meta/recipes-extended/zip/zip-3.0/0002-unix.c-Do-not-redefine-DIR-as-FILE.patch
new file mode 100644
index 0000000..a86e03e
--- /dev/null
+++ b/poky/meta/recipes-extended/zip/zip-3.0/0002-unix.c-Do-not-redefine-DIR-as-FILE.patch
@@ -0,0 +1,35 @@
+From 76f5bf3546d826dcbc03acbefcf0b10b972bf136 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 10 Aug 2022 17:19:38 -0700
+Subject: [PATCH 2/2] unix.c: Do not redefine DIR as FILE
+
+DIR is already provided on Linux via
+/usr/include/dirent.h system header
+
+Upstream-Status: Inactive-Upstream
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ unix/unix.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/unix/unix.c b/unix/unix.c
+index ba87614..6e6f4d2 100644
+--- a/unix/unix.c
++++ b/unix/unix.c
+@@ -61,13 +61,11 @@ local time_t label_utim = 0;
+ /* Local functions */
+ local char *readd OF((DIR *));
+ 
+-
+ #ifdef NO_DIR                    /* for AT&T 3B1 */
+ #include <sys/dir.h>
+ #ifndef dirent
+ #  define dirent direct
+ #endif
+-typedef FILE DIR;
+ /*
+ **  Apparently originally by Rich Salz.
+ **  Cleaned up and modified by James W. Birdsall.
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-extended/zip/zip_3.0.bb b/poky/meta/recipes-extended/zip/zip_3.0.bb
index 07a67b9..1930a40 100644
--- a/poky/meta/recipes-extended/zip/zip_3.0.bb
+++ b/poky/meta/recipes-extended/zip/zip_3.0.bb
@@ -17,6 +17,8 @@
            file://0001-configure-use-correct-CPP.patch \
            file://0002-configure-support-PIC-code-build.patch \
            file://0001-configure-Use-CFLAGS-and-LDFLAGS-when-doing-link-tes.patch \
+           file://0001-configure-Specify-correct-function-signatures-and-de.patch \
+           file://0002-unix.c-Do-not-redefine-DIR-as-FILE.patch \
            "
 UPSTREAM_VERSION_UNKNOWN = "1"
 
@@ -29,6 +31,9 @@
 # Not for zip but for smart contract implementation for it
 CVE_CHECK_IGNORE += "CVE-2018-13684"
 
+# Enable largefile support
+CFLAGS += "-DLARGE_FILE_SUPPORT"
+
 # zip.inc sets CFLAGS, but what Makefile actually uses is
 # CFLAGS_NOOPT.  It will also force -O3 optimization, overriding
 # whatever we set.
diff --git a/poky/meta/recipes-gnome/epiphany/epiphany_42.3.bb b/poky/meta/recipes-gnome/epiphany/epiphany_42.3.bb
deleted file mode 100644
index f9d60ff..0000000
--- a/poky/meta/recipes-gnome/epiphany/epiphany_42.3.bb
+++ /dev/null
@@ -1,43 +0,0 @@
-SUMMARY = "WebKit based web browser for GNOME"
-DESCRIPTION = "Epiphany is an open source web browser for the Linux desktop environment. \
-It provides a simple and easy-to-use internet browsing experience."
-HOMEPAGE = "https://wiki.gnome.org/Apps/Web"
-BUGTRACKER = "https://gitlab.gnome.org/GNOME/epiphany"
-LICENSE = "GPL-3.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-
-DEPENDS = " \
-          webkitgtk \
-          gcr \
-          gsettings-desktop-schemas \
-          nettle \
-          json-glib \
-          libarchive \
-          libdazzle \
-          libhandy \
-          glib-2.0-native \
-          coreutils-native \
-          "
-
-GNOMEBASEBUILDCLASS = "meson"
-inherit gnomebase gsettings features_check gettext mime-xdg
-REQUIRED_DISTRO_FEATURES = "x11 opengl"
-
-SRC_URI = "${GNOME_MIRROR}/${GNOMEBN}/${@oe.utils.trim_version("${PV}", 1)}/${GNOMEBN}-${PV}.tar.${GNOME_COMPRESS_TYPE};name=archive \
-           file://0002-help-meson.build-disable-the-use-of-yelp.patch \
-           file://migrator.patch \
-           file://distributor.patch \
-           "
-SRC_URI[archive.sha256sum] = "7316d3c6500e825d8e57293fa58047c56727bee16cd6b6ac804ffe5d9b229560"
-
-PACKAGECONFIG_SOUP ?= "soup2"
-PACKAGECONFIG ??= "${PACKAGECONFIG_SOUP}"
-
-# Developer mode enables debugging
-PACKAGECONFIG[developer-mode] = "-Ddeveloper_mode=true,-Ddeveloper_mode=false"
-PACKAGECONFIG[soup2] = "-Dsoup2=enabled,-Dsoup2=disabled,libsoup-2.4,,,soup3"
-PACKAGECONFIG[soup3] = ",,libsoup,,,soup2"
-PACKAGECONFIG[libportal] = "-Dlibportal=enabled,-Dlibportal=disabled,libportal"
-
-FILES:${PN} += "${datadir}/dbus-1 ${datadir}/gnome-shell/search-providers ${datadir}/metainfo"
-RDEPENDS:${PN} = "iso-codes adwaita-icon-theme gsettings-desktop-schemas"
diff --git a/poky/meta/recipes-gnome/epiphany/epiphany_42.4.bb b/poky/meta/recipes-gnome/epiphany/epiphany_42.4.bb
new file mode 100644
index 0000000..9efd280
--- /dev/null
+++ b/poky/meta/recipes-gnome/epiphany/epiphany_42.4.bb
@@ -0,0 +1,43 @@
+SUMMARY = "WebKit based web browser for GNOME"
+DESCRIPTION = "Epiphany is an open source web browser for the Linux desktop environment. \
+It provides a simple and easy-to-use internet browsing experience."
+HOMEPAGE = "https://wiki.gnome.org/Apps/Web"
+BUGTRACKER = "https://gitlab.gnome.org/GNOME/epiphany"
+LICENSE = "GPL-3.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
+
+DEPENDS = " \
+          webkitgtk \
+          gcr \
+          gsettings-desktop-schemas \
+          nettle \
+          json-glib \
+          libarchive \
+          libdazzle \
+          libhandy \
+          glib-2.0-native \
+          coreutils-native \
+          "
+
+GNOMEBASEBUILDCLASS = "meson"
+inherit gnomebase gsettings features_check gettext mime-xdg
+REQUIRED_DISTRO_FEATURES = "x11 opengl"
+
+SRC_URI = "${GNOME_MIRROR}/${GNOMEBN}/${@oe.utils.trim_version("${PV}", 1)}/${GNOMEBN}-${PV}.tar.${GNOME_COMPRESS_TYPE};name=archive \
+           file://0002-help-meson.build-disable-the-use-of-yelp.patch \
+           file://migrator.patch \
+           file://distributor.patch \
+           "
+SRC_URI[archive.sha256sum] = "370938ad2920eeb28bc2435944776b7ba55a0e2ede65836f79818cfb7e8f0860"
+
+PACKAGECONFIG_SOUP ?= "soup2"
+PACKAGECONFIG ??= "${PACKAGECONFIG_SOUP}"
+
+# Developer mode enables debugging
+PACKAGECONFIG[developer-mode] = "-Ddeveloper_mode=true,-Ddeveloper_mode=false"
+PACKAGECONFIG[soup2] = "-Dsoup2=enabled,-Dsoup2=disabled,libsoup-2.4,,,soup3"
+PACKAGECONFIG[soup3] = ",,libsoup,,,soup2"
+PACKAGECONFIG[libportal] = "-Dlibportal=enabled,-Dlibportal=disabled,libportal"
+
+FILES:${PN} += "${datadir}/dbus-1 ${datadir}/gnome-shell/search-providers ${datadir}/metainfo"
+RDEPENDS:${PN} = "iso-codes adwaita-icon-theme gsettings-desktop-schemas"
diff --git a/poky/meta/recipes-gnome/gcr/gcr_3.40.0.bb b/poky/meta/recipes-gnome/gcr/gcr_3.40.0.bb
index 0c2af42..917be59 100644
--- a/poky/meta/recipes-gnome/gcr/gcr_3.40.0.bb
+++ b/poky/meta/recipes-gnome/gcr/gcr_3.40.0.bb
@@ -13,6 +13,8 @@
 
 CACHED_CONFIGUREVARS += "ac_cv_path_GPG='gpg2'"
 
+CFLAGS += "-D_GNU_SOURCE"
+
 GNOMEBASEBUILDCLASS = "meson"
 GTKDOC_MESON_OPTION = "gtk_doc"
 inherit gnomebase gtk-icon-cache gtk-doc features_check upstream-version-is-even vala gobject-introspection gettext mime mime-xdg
diff --git a/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/0001-Add-use_prebuilt_tools-option.patch b/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/0001-Add-use_prebuilt_tools-option.patch
index a8206a4..02cc9a2 100644
--- a/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/0001-Add-use_prebuilt_tools-option.patch
+++ b/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/0001-Add-use_prebuilt_tools-option.patch
@@ -1,4 +1,4 @@
-From ba73bb0f3d2023839bc3b681c49b7ec1192cceb4 Mon Sep 17 00:00:00 2001
+From f81b60ebcbbfd9548c8aa1e388662c429068d1e3 Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Sat, 8 May 2021 21:58:54 +0200
 Subject: [PATCH] Add use_prebuilt_tools option
@@ -18,7 +18,7 @@
  5 files changed, 42 insertions(+), 19 deletions(-)
 
 diff --git a/gdk-pixbuf/meson.build b/gdk-pixbuf/meson.build
-index 8b0590b..7331491 100644
+index 54ff9dd..2e321cf 100644
 --- a/gdk-pixbuf/meson.build
 +++ b/gdk-pixbuf/meson.build
 @@ -342,13 +342,20 @@ foreach bin: gdkpixbuf_bin
@@ -45,16 +45,18 @@
    # load the installed cache; we always build it by default
    loaders_cache = custom_target('loaders.cache',
 diff --git a/meson.build b/meson.build
-index 7a1409b..0bc73eb 100644
+index 813bd43..a93e6f7 100644
 --- a/meson.build
 +++ b/meson.build
-@@ -403,16 +403,16 @@ subdir('gdk-pixbuf')
+@@ -369,18 +369,18 @@ subdir('gdk-pixbuf')
  # i18n
  subdir('po')
  
 -if not meson.is_cross_build()
 +if not meson.is_cross_build() or get_option('use_prebuilt_tools')
-   subdir('tests')
+   if get_option('tests')
+     subdir('tests')
+   endif
 -  subdir('thumbnailer')
  endif
 +subdir('thumbnailer')
@@ -69,10 +71,10 @@
      gdk_pixbuf_bindir,
      gdk_pixbuf_libdir,
 diff --git a/meson_options.txt b/meson_options.txt
-index 0ee6718..cc29855 100644
+index d198d99..1c899e9 100644
 --- a/meson_options.txt
 +++ b/meson_options.txt
-@@ -49,4 +49,8 @@ option('gio_sniffing',
+@@ -53,4 +53,8 @@ option('gio_sniffing',
         description: 'Perform file type detection using GIO (Unused on MacOS and Windows)',
         type: 'boolean',
         value: true)
@@ -82,7 +84,7 @@
 +       value: false)
  
 diff --git a/tests/meson.build b/tests/meson.build
-index 7c6cb11..1029e6a 100644
+index 28c2525..d97c02d 100644
 --- a/tests/meson.build
 +++ b/tests/meson.build
 @@ -5,6 +5,12 @@
diff --git a/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/fatal-loader.patch b/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/fatal-loader.patch
index 25410b1..23c68a0 100644
--- a/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/fatal-loader.patch
+++ b/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/fatal-loader.patch
@@ -1,4 +1,4 @@
-From f00603d58d844422363b896ea7d07aaf48ddaa66 Mon Sep 17 00:00:00 2001
+From b511bd1efb43ffc49c753e309717a242ec686ef1 Mon Sep 17 00:00:00 2001
 From: Ross Burton <ross.burton@intel.com>
 Date: Tue, 1 Apr 2014 17:23:36 +0100
 Subject: [PATCH] gdk-pixbuf: add an option so that loader errors are fatal
@@ -6,7 +6,7 @@
 If an environment variable is specified set the return value from main() to
 non-zero if the loader had errors (missing libraries, generally).
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://gitlab.gnome.org/GNOME/gdk-pixbuf/-/merge_requests/144]
 Signed-off-by: Ross Burton <ross.burton@intel.com>
 
 ---
@@ -14,10 +14,10 @@
  1 file changed, 15 insertions(+), 4 deletions(-)
 
 diff --git a/gdk-pixbuf/queryloaders.c b/gdk-pixbuf/queryloaders.c
-index 312aa78..b813d99 100644
+index 1d39b44..2b00815 100644
 --- a/gdk-pixbuf/queryloaders.c
 +++ b/gdk-pixbuf/queryloaders.c
-@@ -212,7 +212,7 @@ write_loader_info (GString *contents, const char *path, GdkPixbufFormat *info)
+@@ -216,7 +216,7 @@ write_loader_info (GString *contents, const char *path, GdkPixbufFormat *info)
          g_string_append_c (contents, '\n');
  }
  
@@ -26,7 +26,7 @@
  query_module (GString *contents, const char *dir, const char *file)
  {
          char *path;
-@@ -221,6 +221,7 @@ query_module (GString *contents, const char *dir, const char *file)
+@@ -225,6 +225,7 @@ query_module (GString *contents, const char *dir, const char *file)
          void                    (*fill_vtable)   (GdkPixbufModule *module);
          gpointer fill_info_ptr;
          gpointer fill_vtable_ptr;
@@ -34,7 +34,7 @@
  
          if (g_path_is_absolute (file))
                  path = g_strdup (file);
-@@ -270,10 +271,13 @@ query_module (GString *contents, const char *dir, const char *file)
+@@ -274,10 +275,13 @@ query_module (GString *contents, const char *dir, const char *file)
                                     g_module_error());
                  else
                          g_fprintf (stderr, "Cannot load loader %s\n", path);
@@ -47,8 +47,8 @@
 +        return ret;
  }
  
- #ifdef G_OS_WIN32
-@@ -314,6 +318,7 @@ int main (int argc, char **argv)
+ #if defined(G_OS_WIN32) && defined(GDK_PIXBUF_RELOCATABLE)
+@@ -318,6 +322,7 @@ int main (int argc, char **argv)
          gint first_file = 1;
          GFile *pixbuf_libdir_file;
          gchar *pixbuf_libdir;
@@ -56,7 +56,7 @@
  
  #ifdef G_OS_WIN32
          gchar *libdir;
-@@ -452,7 +457,9 @@ int main (int argc, char **argv)
+@@ -456,7 +461,9 @@ int main (int argc, char **argv)
                  }
                  modules = g_list_sort (modules, (GCompareFunc)strcmp);
                  for (l = modules; l != NULL; l = l->next)
@@ -67,7 +67,7 @@
                  g_list_free_full (modules, g_free);
                  g_free (moduledir);
  #else
-@@ -468,7 +475,8 @@ int main (int argc, char **argv)
+@@ -472,7 +479,8 @@ int main (int argc, char **argv)
                          infilename = g_locale_to_utf8 (infilename,
                                                         -1, NULL, NULL, NULL);
  #endif
@@ -77,7 +77,7 @@
                  }
                  g_free (cwd);
          }
-@@ -486,5 +494,8 @@ int main (int argc, char **argv)
+@@ -490,5 +498,8 @@ int main (int argc, char **argv)
  
          g_free (pixbuf_libdir);
  
diff --git a/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.8.bb b/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.8.bb
deleted file mode 100644
index fb6829a..0000000
--- a/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.8.bb
+++ /dev/null
@@ -1,128 +0,0 @@
-SUMMARY = "Image loading library for GTK+"
-DESCRIPTION = "The GDK Pixbuf library provides: Image loading and saving \
-facilities, fast scaling and compositing of pixbufs and Simple animation \
-loading (ie. animated GIFs)"
-HOMEPAGE = "https://wiki.gnome.org/Projects/GdkPixbuf"
-BUGTRACKER = "https://gitlab.gnome.org/GNOME/gdk-pixbuf/issues"
-
-LICENSE = "LGPL-2.1-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c \
-                    file://gdk-pixbuf/gdk-pixbuf.h;endline=26;md5=72b39da7cbdde2e665329fef618e1d6b \
-                    "
-
-SECTION = "libs"
-
-DEPENDS = "glib-2.0 gdk-pixbuf-native shared-mime-info"
-DEPENDS:remove:class-native = "gdk-pixbuf-native"
-
-MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-
-SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
-           file://run-ptest \
-           file://fatal-loader.patch \
-           file://0001-Add-use_prebuilt_tools-option.patch \
-           "
-
-SRC_URI[sha256sum] = "84acea3acb2411b29134b32015a5b1aaa62844b19c4b1ef8b8971c6b0759f4c6"
-
-inherit meson pkgconfig gettext pixbufcache ptest-gnome upstream-version-is-even gobject-introspection gi-docgen lib_package
-
-GIR_MESON_OPTION = 'introspection'
-GIR_MESON_ENABLE_FLAG = "enabled"
-GIR_MESON_DISABLE_FLAG = "disabled"
-
-LIBV = "2.10.0"
-
-GDK_PIXBUF_LOADERS ?= "png jpeg"
-
-PACKAGECONFIG = "${GDK_PIXBUF_LOADERS} \
-                 ${@bb.utils.contains('PTEST_ENABLED', '1', 'tests', '', d)}"
-PACKAGECONFIG:class-native = "${GDK_PIXBUF_LOADERS}"
-
-PACKAGECONFIG[png] = "-Dpng=enabled,-Dpng=disabled,libpng"
-PACKAGECONFIG[jpeg] = "-Djpeg=enabled,-Djpeg=disabled,jpeg"
-PACKAGECONFIG[tiff] = "-Dtiff=enabled,-Dtiff=disabled,tiff"
-PACKAGECONFIG[tests] = "-Dinstalled_tests=true,-Dinstalled_tests=false"
-
-EXTRA_OEMESON:class-target = " \
-    -Duse_prebuilt_tools=true \
-"
-
-EXTRA_OEMESON:class-nativesdk = " \
-    -Duse_prebuilt_tools=true \
-"
-
-PACKAGES =+ "${PN}-xlib"
-
-# For GIO image type sniffing
-RDEPENDS:${PN} = "shared-mime-info"
-
-FILES:${PN}-xlib = "${libdir}/*pixbuf_xlib*${SOLIBS}"
-ALLOW_EMPTY:${PN}-xlib = "1"
-
-FILES:${PN} += "${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders"
-
-FILES:${PN}-bin += "${datadir}/thumbnailers/gdk-pixbuf-thumbnailer.thumbnailer"
-
-FILES:${PN}-dev += " \
-	${bindir}/gdk-pixbuf-csource \
-	${bindir}/gdk-pixbuf-pixdata \
-        ${bindir}/gdk-pixbuf-print-mime-types \
-	${includedir}/* \
-	${libdir}/gdk-pixbuf-2.0/${LIBV}/loaders/*.la \
-"
-
-PACKAGES_DYNAMIC += "^gdk-pixbuf-loader-.*"
-PACKAGES_DYNAMIC:class-native = ""
-
-python populate_packages:prepend () {
-    postinst_pixbufloader = d.getVar("postinst_pixbufloader")
-
-    loaders_root = d.expand('${libdir}/gdk-pixbuf-2.0/${LIBV}/loaders')
-
-    packages = ' '.join(do_split_packages(d, loaders_root, r'^libpixbufloader-(.*)\.so$', 'gdk-pixbuf-loader-%s', 'GDK pixbuf loader for %s'))
-    d.setVar('PIXBUF_PACKAGES', packages)
-
-    # The test suite exercises all the loaders, so ensure they are all
-    # dependencies of the ptest package.
-    d.appendVar("RDEPENDS:%s-ptest" % d.getVar('PN'), " " + packages)
-}
-
-do_install:append() {
-	# Copy gdk-pixbuf-query-loaders into libdir so it is always available
-	# in multilib builds.
-	cp ${D}/${bindir}/gdk-pixbuf-query-loaders ${D}/${libdir}/gdk-pixbuf-2.0/
-
-}
-
-# Remove a bad fuzzing attempt that sporadically fails without a way to reproduce
-do_install_ptest() {
-	rm ${D}/${datadir}/installed-tests/gdk-pixbuf/pixbuf-randomly-modified.test
-}
-
-do_install:append:class-native() {
-	find ${D}${libdir} -name "libpixbufloader-*.la" -exec rm \{\} \;
-
-	create_wrapper ${D}/${bindir}/gdk-pixbuf-csource \
-		XDG_DATA_DIRS=${STAGING_DATADIR} \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
-
-	create_wrapper ${D}/${bindir}/gdk-pixbuf-pixdata \
-		XDG_DATA_DIRS=${STAGING_DATADIR} \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
-
-	create_wrapper ${D}/${bindir}/gdk-pixbuf-print-mime-types \
-		XDG_DATA_DIRS=${STAGING_DATADIR} \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
-
-	create_wrapper ${D}/${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders \
-		XDG_DATA_DIRS=${STAGING_DATADIR} \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache \
-		GDK_PIXBUF_MODULEDIR=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders
-
-	create_wrapper ${D}/${bindir}/gdk-pixbuf-query-loaders \
-		XDG_DATA_DIRS=${STAGING_DATADIR} \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache \
-		GDK_PIXBUF_MODULEDIR=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders
-}
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.9.bb b/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.9.bb
new file mode 100644
index 0000000..d33718e
--- /dev/null
+++ b/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.9.bb
@@ -0,0 +1,132 @@
+SUMMARY = "Image loading library for GTK+"
+DESCRIPTION = "The GDK Pixbuf library provides: Image loading and saving \
+facilities, fast scaling and compositing of pixbufs and Simple animation \
+loading (ie. animated GIFs)"
+HOMEPAGE = "https://wiki.gnome.org/Projects/GdkPixbuf"
+BUGTRACKER = "https://gitlab.gnome.org/GNOME/gdk-pixbuf/issues"
+
+LICENSE = "LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c \
+                    file://gdk-pixbuf/gdk-pixbuf.h;endline=26;md5=72b39da7cbdde2e665329fef618e1d6b \
+                    "
+
+SECTION = "libs"
+
+DEPENDS = "glib-2.0 gdk-pixbuf-native shared-mime-info"
+DEPENDS:remove:class-native = "gdk-pixbuf-native"
+
+MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+
+SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
+           file://run-ptest \
+           file://fatal-loader.patch \
+           file://0001-Add-use_prebuilt_tools-option.patch \
+           "
+
+SRC_URI[sha256sum] = "28f7958e7bf29a32d4e963556d241d0a41a6786582ff6a5ad11665e0347fc962"
+
+inherit meson pkgconfig gettext pixbufcache ptest-gnome upstream-version-is-even gobject-introspection gi-docgen lib_package
+
+GIR_MESON_OPTION = 'introspection'
+GIR_MESON_ENABLE_FLAG = "enabled"
+GIR_MESON_DISABLE_FLAG = "disabled"
+
+LIBV = "2.10.0"
+
+GDK_PIXBUF_LOADERS ?= "png jpeg"
+
+PACKAGECONFIG = "${GDK_PIXBUF_LOADERS} \
+                 ${@bb.utils.contains('PTEST_ENABLED', '1', 'tests', '', d)}"
+PACKAGECONFIG:class-native = "${GDK_PIXBUF_LOADERS}"
+
+PACKAGECONFIG[png] = "-Dpng=enabled,-Dpng=disabled,libpng"
+PACKAGECONFIG[jpeg] = "-Djpeg=enabled,-Djpeg=disabled,jpeg"
+PACKAGECONFIG[tiff] = "-Dtiff=enabled,-Dtiff=disabled,tiff"
+PACKAGECONFIG[tests] = "-Dinstalled_tests=true,-Dinstalled_tests=false"
+
+EXTRA_OEMESON = "-Dman=false"
+
+EXTRA_OEMESON:append:class-target = " \
+    -Duse_prebuilt_tools=true \
+"
+
+EXTRA_OEMESON:append:class-nativesdk = " \
+    -Duse_prebuilt_tools=true \
+"
+
+PACKAGES =+ "${PN}-xlib"
+
+# For GIO image type sniffing
+RDEPENDS:${PN} = "shared-mime-info"
+
+FILES:${PN}-xlib = "${libdir}/*pixbuf_xlib*${SOLIBS}"
+ALLOW_EMPTY:${PN}-xlib = "1"
+
+FILES:${PN} += "${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders"
+
+FILES:${PN}-bin += "${datadir}/thumbnailers/gdk-pixbuf-thumbnailer.thumbnailer"
+
+FILES:${PN}-dev += " \
+	${bindir}/gdk-pixbuf-csource \
+	${bindir}/gdk-pixbuf-pixdata \
+        ${bindir}/gdk-pixbuf-print-mime-types \
+	${includedir}/* \
+	${libdir}/gdk-pixbuf-2.0/${LIBV}/loaders/*.la \
+"
+
+PACKAGES_DYNAMIC += "^gdk-pixbuf-loader-.*"
+PACKAGES_DYNAMIC:class-native = ""
+
+python populate_packages:prepend () {
+    postinst_pixbufloader = d.getVar("postinst_pixbufloader")
+
+    loaders_root = d.expand('${libdir}/gdk-pixbuf-2.0/${LIBV}/loaders')
+
+    packages = ' '.join(do_split_packages(d, loaders_root, r'^libpixbufloader-(.*)\.so$', 'gdk-pixbuf-loader-%s', 'GDK pixbuf loader for %s'))
+    d.setVar('PIXBUF_PACKAGES', packages)
+
+    # The test suite exercises all the loaders, so ensure they are all
+    # dependencies of the ptest package.
+    d.appendVar("RDEPENDS:%s-ptest" % d.getVar('PN'), " " + packages)
+}
+
+do_install:append() {
+	# Copy gdk-pixbuf-query-loaders into libdir so it is always available
+	# in multilib builds.
+	cp ${D}/${bindir}/gdk-pixbuf-query-loaders ${D}/${libdir}/gdk-pixbuf-2.0/
+
+}
+
+do_install_ptest() {
+        # Remove a bad fuzzing attempt that sporadically fails without a way to reproduce
+	rm ${D}/${datadir}/installed-tests/gdk-pixbuf/pixbuf-randomly-modified.test
+        # https://gitlab.gnome.org/GNOME/gdk-pixbuf/-/issues/215
+	rm ${D}/${datadir}/installed-tests/gdk-pixbuf/pixbuf-jpeg.test
+}
+
+do_install:append:class-native() {
+	find ${D}${libdir} -name "libpixbufloader-*.la" -exec rm \{\} \;
+
+	create_wrapper ${D}/${bindir}/gdk-pixbuf-csource \
+		XDG_DATA_DIRS=${STAGING_DATADIR} \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
+
+	create_wrapper ${D}/${bindir}/gdk-pixbuf-pixdata \
+		XDG_DATA_DIRS=${STAGING_DATADIR} \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
+
+	create_wrapper ${D}/${bindir}/gdk-pixbuf-print-mime-types \
+		XDG_DATA_DIRS=${STAGING_DATADIR} \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
+
+	create_wrapper ${D}/${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders \
+		XDG_DATA_DIRS=${STAGING_DATADIR} \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache \
+		GDK_PIXBUF_MODULEDIR=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders
+
+	create_wrapper ${D}/${bindir}/gdk-pixbuf-query-loaders \
+		XDG_DATA_DIRS=${STAGING_DATADIR} \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache \
+		GDK_PIXBUF_MODULEDIR=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders
+}
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-gnome/libnotify/libnotify_0.8.0.bb b/poky/meta/recipes-gnome/libnotify/libnotify_0.8.0.bb
deleted file mode 100644
index 9c9af87..0000000
--- a/poky/meta/recipes-gnome/libnotify/libnotify_0.8.0.bb
+++ /dev/null
@@ -1,37 +0,0 @@
-SUMMARY = "Library for sending desktop notifications to a notification daemon"
-DESCRIPTION = "It sends desktop notifications to a notification daemon, as defined \
-in the Desktop Notifications spec. These notifications can be used to inform \
-the user about an event or display some form of information without getting \
-in the user's way."
-HOMEPAGE = "https://gitlab.gnome.org/GNOME/libnotify"
-BUGTRACKER = "https://gitlab.gnome.org/GNOME/libnotify/issues"
-SECTION = "libs"
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34"
-
-DEPENDS = "dbus glib-2.0 gdk-pixbuf"
-
-PACKAGECONFIG ?= ""
-PACKAGECONFIG[tests] = "-Dtests=true,-Dtests=false,gtk+3"
-
-GNOMEBASEBUILDCLASS = "meson"
-GTKDOC_MESON_OPTION = "gtk_doc"
-GIR_MESON_ENABLE_FLAG = "enabled"
-GIR_MESON_DISABLE_FLAG = "disabled"
-inherit gnomebase gtk-doc features_check gobject-introspection
-# depends on gtk+3 if tests are enabled
-ANY_OF_DISTRO_FEATURES = "${@bb.utils.contains('PACKAGECONFIG', 'tests', '${GTK3DISTROFEATURES}', '', d)}"
-
-SRC_URI[archive.sha256sum] = "46a26f0db4e64cf24016291eb1579ed9f0ba7611fe6cd9e1afec8f42780d3924"
-
-EXTRA_OEMESON = "-Dman=false"
-
-# there were times, we had two versions of libnotify (oe-core libnotify:0.6.x /
-# meta-gnome libnotify3: 0.7.x)
-PROVIDES += "libnotify3"
-RPROVIDES:${PN} += "libnotify3"
-RCONFLICTS:${PN} += "libnotify3"
-RREPLACES:${PN} += "libnotify3"
-
-# -7381 is specific to the NodeJS bindings
-CVE_CHECK_IGNORE += "CVE-2013-7381"
diff --git a/poky/meta/recipes-gnome/libnotify/libnotify_0.8.1.bb b/poky/meta/recipes-gnome/libnotify/libnotify_0.8.1.bb
new file mode 100644
index 0000000..3bdc70d
--- /dev/null
+++ b/poky/meta/recipes-gnome/libnotify/libnotify_0.8.1.bb
@@ -0,0 +1,37 @@
+SUMMARY = "Library for sending desktop notifications to a notification daemon"
+DESCRIPTION = "It sends desktop notifications to a notification daemon, as defined \
+in the Desktop Notifications spec. These notifications can be used to inform \
+the user about an event or display some form of information without getting \
+in the user's way."
+HOMEPAGE = "https://gitlab.gnome.org/GNOME/libnotify"
+BUGTRACKER = "https://gitlab.gnome.org/GNOME/libnotify/issues"
+SECTION = "libs"
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34"
+
+DEPENDS = "dbus glib-2.0 gdk-pixbuf"
+
+PACKAGECONFIG ?= ""
+PACKAGECONFIG[tests] = "-Dtests=true,-Dtests=false,gtk+3"
+
+GNOMEBASEBUILDCLASS = "meson"
+GTKDOC_MESON_OPTION = "gtk_doc"
+GIR_MESON_ENABLE_FLAG = "enabled"
+GIR_MESON_DISABLE_FLAG = "disabled"
+inherit gnomebase gtk-doc features_check gobject-introspection
+# depends on gtk+3 if tests are enabled
+ANY_OF_DISTRO_FEATURES = "${@bb.utils.contains('PACKAGECONFIG', 'tests', '${GTK3DISTROFEATURES}', '', d)}"
+
+SRC_URI[archive.sha256sum] = "d033e6d4d6ccbf46a436c31628a4b661b36dca1f5d4174fe0173e274f4e62557"
+
+EXTRA_OEMESON = "-Dman=false"
+
+# At one time we had two versions of libnotify (oe-core libnotify: 0.6.x /
+# meta-gnome libnotify3: 0.7.x)
+PROVIDES += "libnotify3"
+RPROVIDES:${PN} += "libnotify3"
+RCONFLICTS:${PN} += "libnotify3"
+RREPLACES:${PN} += "libnotify3"
+
+# -7381 is specific to the NodeJS bindings
+CVE_CHECK_IGNORE += "CVE-2013-7381"
diff --git a/poky/meta/recipes-gnome/librsvg/librsvg_2.54.4.bb b/poky/meta/recipes-gnome/librsvg/librsvg_2.54.4.bb
deleted file mode 100644
index cc7fb9b..0000000
--- a/poky/meta/recipes-gnome/librsvg/librsvg_2.54.4.bb
+++ /dev/null
@@ -1,75 +0,0 @@
-SUMMARY = "Library for rendering SVG files"
-DESCRIPTION = "A small library to render Scalable Vector Graphics (SVG), \
-associated with the GNOME Project. It renders SVG files to Cairo surfaces. \
-Cairo is the 2D, antialiased drawing library that GNOME uses to draw things to \
-the screen or to generate output for printing."
-HOMEPAGE = "https://gitlab.gnome.org/GNOME/librsvg"
-BUGTRACKER = "https://gitlab.gnome.org/GNOME/librsvg/issues"
-
-LICENSE = "LGPL-2.1-or-later"
-LIC_FILES_CHKSUM = "file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c \
-                   "
-
-SECTION = "x11/utils"
-DEPENDS = "cairo gdk-pixbuf glib-2.0 libxml2 pango python3-docutils-native"
-BBCLASSEXTEND = "native nativesdk"
-
-inherit gnomebase pixbufcache upstream-version-is-even gobject-introspection rust vala gi-docgen
-
-SRC_URI += "file://0001-Makefile.am-pass-rust-target-to-cargo-also-when-not-.patch \
-           file://0001-system-deps-src-lib.rs-do-not-probe-into-harcoded-li.patch \
-           "
-
-SRC_URI[archive.sha256sum] = "ea152a243f6a43c0e036a28c70de3fcbcdea5664c6811c78592bc229ecc24833"
-
-# librsvg is still autotools-based, but is calling cargo from its automake-driven makefiles
-# so we cannot use cargo class directly, but still need bits and pieces from it 
-# for cargo to be happy
-BASEDEPENDS:append = " cargo-native"
-
-export RUST_BACKTRACE = "full"
-export RUSTFLAGS
-export RUST_TARGET_PATH
-
-export RUST_TARGET = "${HOST_SYS}"
-
-RUSTFLAGS:append:mips = " --cfg crossbeam_no_atomic_64"
-RUSTFLAGS:append:mipsel = " --cfg crossbeam_no_atomic_64"
-RUSTFLAGS:append:powerpc = " --cfg crossbeam_no_atomic_64"
-RUSTFLAGS:append:riscv32 = " --cfg crossbeam_no_atomic_64"
-
-# rust-cross writes the target linker binary into target json definition without any flags.
-# This breaks here because the linker isn't going to work without at least knowing where
-# the sysroot is. So copy the json to workdir, and patch in the path to wrapper from rust class
-# which supplies the needed flags.
-do_compile:prepend() {
-    cp ${STAGING_LIBDIR_NATIVE}/rustlib/${HOST_SYS}.json ${WORKDIR}
-    cp ${STAGING_LIBDIR_NATIVE}/rustlib/${BUILD_SYS}.json ${WORKDIR}
-    sed -ie 's,"linker": ".*","linker": "${RUST_TARGET_CC}",g' ${WORKDIR}/${HOST_SYS}.json
-    RUST_TARGET_PATH="${WORKDIR}"
-    export RUST_TARGET_PATH
-}
-
-# Issue only on windows
-CVE_CHECK_IGNORE += "CVE-2018-1000041"
-
-CACHED_CONFIGUREVARS = "ac_cv_path_GDK_PIXBUF_QUERYLOADERS=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders"
-
-PACKAGECONFIG ??= "gdkpixbuf"
-# The gdk-pixbuf loader
-PACKAGECONFIG[gdkpixbuf] = "--enable-pixbuf-loader,--disable-pixbuf-loader,gdk-pixbuf-native"
-
-do_install:append() {
-	# Loadable modules don't need .a or .la on Linux
-	rm -f ${D}${libdir}/gdk-pixbuf-2.0/*/loaders/*.a ${D}${libdir}/gdk-pixbuf-2.0/*/loaders/*.la
-}
-
-PACKAGES =+ "librsvg-gtk rsvg"
-FILES:rsvg = "${bindir}/rsvg* \
-	      ${datadir}/pixmaps/svg-viewer.svg \
-	      ${datadir}/themes"
-FILES:librsvg-gtk = "${libdir}/gdk-pixbuf-2.0/*/*/*.so \
-                     ${datadir}/thumbnailers/librsvg.thumbnailer"
-RRECOMMENDS:librsvg-gtk = "gdk-pixbuf-bin"
-
-PIXBUF_PACKAGES = "librsvg-gtk"
diff --git a/poky/meta/recipes-gnome/librsvg/librsvg_2.54.5.bb b/poky/meta/recipes-gnome/librsvg/librsvg_2.54.5.bb
new file mode 100644
index 0000000..fc52ae6
--- /dev/null
+++ b/poky/meta/recipes-gnome/librsvg/librsvg_2.54.5.bb
@@ -0,0 +1,75 @@
+SUMMARY = "Library for rendering SVG files"
+DESCRIPTION = "A small library to render Scalable Vector Graphics (SVG), \
+associated with the GNOME Project. It renders SVG files to Cairo surfaces. \
+Cairo is the 2D, antialiased drawing library that GNOME uses to draw things to \
+the screen or to generate output for printing."
+HOMEPAGE = "https://gitlab.gnome.org/GNOME/librsvg"
+BUGTRACKER = "https://gitlab.gnome.org/GNOME/librsvg/issues"
+
+LICENSE = "LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = "file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c \
+                   "
+
+SECTION = "x11/utils"
+DEPENDS = "cairo gdk-pixbuf glib-2.0 libxml2 pango python3-docutils-native"
+BBCLASSEXTEND = "native nativesdk"
+
+inherit cargo_common gnomebase pixbufcache upstream-version-is-even gobject-introspection rust vala gi-docgen
+
+SRC_URI += "file://0001-Makefile.am-pass-rust-target-to-cargo-also-when-not-.patch \
+           file://0001-system-deps-src-lib.rs-do-not-probe-into-harcoded-li.patch \
+           "
+
+SRC_URI[archive.sha256sum] = "4f03190f45324d1fa1f52a79dfcded1f64eaf49b3ae2f88eedab0c07617cae6e"
+
+# librsvg is still autotools-based, but is calling cargo from its automake-driven makefiles
+# so we cannot use cargo class directly, but still need bits and pieces from it 
+# for cargo to be happy
+BASEDEPENDS:append = " cargo-native"
+
+export RUST_BACKTRACE = "full"
+export RUSTFLAGS
+
+export RUST_TARGET = "${RUST_HOST_SYS}"
+
+RUSTFLAGS:append:mips = " --cfg crossbeam_no_atomic_64"
+RUSTFLAGS:append:mipsel = " --cfg crossbeam_no_atomic_64"
+RUSTFLAGS:append:powerpc = " --cfg crossbeam_no_atomic_64"
+RUSTFLAGS:append:riscv32 = " --cfg crossbeam_no_atomic_64"
+
+CARGO_DISABLE_BITBAKE_VENDORING = "1"
+do_configure[postfuncs] += "cargo_common_do_configure"
+
+inherit rust-target-config
+
+# rust-cross writes the target linker binary into target json definition without any flags.
+# This breaks here because the linker isn't going to work without at least knowing where
+# the sysroot is. So copy the json to workdir, and patch in the path to wrapper from rust class
+# which supplies the needed flags.
+do_compile:prepend() {
+    sed -ie 's,"linker": ".*","linker": "${RUST_TARGET_CC}",g' ${RUST_TARGETS_DIR}/${RUST_HOST_SYS}.json
+}
+
+# Issue only on windows
+CVE_CHECK_IGNORE += "CVE-2018-1000041"
+
+CACHED_CONFIGUREVARS = "ac_cv_path_GDK_PIXBUF_QUERYLOADERS=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders"
+
+PACKAGECONFIG ??= "gdkpixbuf"
+# The gdk-pixbuf loader
+PACKAGECONFIG[gdkpixbuf] = "--enable-pixbuf-loader,--disable-pixbuf-loader,gdk-pixbuf-native"
+
+do_install:append() {
+	# Loadable modules don't need .a or .la on Linux
+	rm -f ${D}${libdir}/gdk-pixbuf-2.0/*/loaders/*.a ${D}${libdir}/gdk-pixbuf-2.0/*/loaders/*.la
+}
+
+PACKAGES =+ "librsvg-gtk rsvg"
+FILES:rsvg = "${bindir}/rsvg* \
+	      ${datadir}/pixmaps/svg-viewer.svg \
+	      ${datadir}/themes"
+FILES:librsvg-gtk = "${libdir}/gdk-pixbuf-2.0/*/*/*.so \
+                     ${datadir}/thumbnailers/librsvg.thumbnailer"
+RRECOMMENDS:librsvg-gtk = "gdk-pixbuf-bin"
+
+PIXBUF_PACKAGES = "librsvg-gtk"
diff --git a/poky/meta/recipes-graphics/harfbuzz/harfbuzz/0001-fix-signedness-of-char-in-tests.patch b/poky/meta/recipes-graphics/harfbuzz/harfbuzz/0001-fix-signedness-of-char-in-tests.patch
new file mode 100644
index 0000000..029ca2b
--- /dev/null
+++ b/poky/meta/recipes-graphics/harfbuzz/harfbuzz/0001-fix-signedness-of-char-in-tests.patch
@@ -0,0 +1,27 @@
+From 1bd3884bc0544ffbb6545ed2391f0932bb8d7d91 Mon Sep 17 00:00:00 2001
+From: psykose <alice@ayaya.dev>
+Date: Mon, 1 Aug 2022 07:45:25 +0000
+Subject: [PATCH] fix signedness of char in tests
+
+Upstream-Status: Backport
+Signed-off-by: Alexander Kanavin <alex@linutronix.de>
+---
+ src/test-repacker.cc | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/src/test-repacker.cc b/src/test-repacker.cc
+index 053c0c6..1b7e1f0 100644
+--- a/src/test-repacker.cc
++++ b/src/test-repacker.cc
+@@ -112,9 +112,9 @@ static void start_lookup (int8_t type,
+                           hb_serialize_context_t* c)
+ {
+   char lookup[] = {
+-    0, type, // type
++    0, (char)type, // type
+     0, 0, // flag
+-    0, num_subtables, // num subtables
++    0, (char)num_subtables, // num subtables
+   };
+ 
+   start_object (lookup, 6, c);
diff --git a/poky/meta/recipes-graphics/harfbuzz/harfbuzz_4.4.1.bb b/poky/meta/recipes-graphics/harfbuzz/harfbuzz_4.4.1.bb
deleted file mode 100644
index c2867d2..0000000
--- a/poky/meta/recipes-graphics/harfbuzz/harfbuzz_4.4.1.bb
+++ /dev/null
@@ -1,48 +0,0 @@
-SUMMARY = "Text shaping library"
-DESCRIPTION = "HarfBuzz is an OpenType text shaping engine."
-HOMEPAGE = "http://www.freedesktop.org/wiki/Software/HarfBuzz"
-BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=HarfBuzz"
-SECTION = "libs"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=6ee0f16281694fb6aa689cca1e0fb3da \
-                    file://src/hb-ucd.cc;beginline=1;endline=15;md5=29d4dcb6410429195df67efe3382d8bc \
-                    "
-
-UPSTREAM_CHECK_URI = "https://github.com/${BPN}/${BPN}/releases"
-UPSTREAM_CHECK_REGEX = "harfbuzz-(?P<pver>\d+(\.\d+)+).tar"
-
-SRC_URI = "https://github.com/${BPN}/${BPN}/releases/download/${PV}/${BPN}-${PV}.tar.xz"
-SRC_URI[sha256sum] = "c5bc33ac099b2e52f01d27cde21cee4281b9d5bfec7684135e268512478bc9ee"
-
-inherit meson pkgconfig lib_package gtk-doc gobject-introspection
-
-GIR_MESON_ENABLE_FLAG = 'enabled'
-GIR_MESON_DISABLE_FLAG = 'disabled'
-GTKDOC_MESON_ENABLE_FLAG = 'enabled'
-GTKDOC_MESON_DISABLE_FLAG = 'disabled'
-
-PACKAGECONFIG ??= "cairo freetype glib icu"
-PACKAGECONFIG[cairo] = "-Dcairo=enabled,-Dcairo=disabled,cairo"
-PACKAGECONFIG[freetype] = "-Dfreetype=enabled,-Dfreetype=disabled,freetype"
-PACKAGECONFIG[glib] = "-Dglib=enabled,-Dglib=disabled,glib-2.0"
-PACKAGECONFIG[graphite] = "-Dgraphite=enabled,-Dgraphite=disabled,graphite2"
-PACKAGECONFIG[icu] = "-Dicu=enabled,-Dicu=disabled,icu"
-
-PACKAGES =+ "${PN}-icu ${PN}-icu-dev ${PN}-subset"
-
-LEAD_SONAME = "libharfbuzz.so"
-
-do_install:append() {
-    # If no tools are installed due to PACKAGECONFIG then this directory is
-    #still installed, so remove it to stop packaging wanings.
-    rmdir --ignore-fail-on-non-empty ${D}${bindir}
-}
-
-FILES:${PN}-icu = "${libdir}/libharfbuzz-icu.so.*"
-FILES:${PN}-icu-dev = "${libdir}/libharfbuzz-icu.la \
-                       ${libdir}/libharfbuzz-icu.so \
-                       ${libdir}/pkgconfig/harfbuzz-icu.pc \
-"
-FILES:${PN}-subset = "${libdir}/libharfbuzz-subset.so.*"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-graphics/harfbuzz/harfbuzz_5.1.0.bb b/poky/meta/recipes-graphics/harfbuzz/harfbuzz_5.1.0.bb
new file mode 100644
index 0000000..4c2d774
--- /dev/null
+++ b/poky/meta/recipes-graphics/harfbuzz/harfbuzz_5.1.0.bb
@@ -0,0 +1,50 @@
+SUMMARY = "Text shaping library"
+DESCRIPTION = "HarfBuzz is an OpenType text shaping engine."
+HOMEPAGE = "http://www.freedesktop.org/wiki/Software/HarfBuzz"
+BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=HarfBuzz"
+SECTION = "libs"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=6ee0f16281694fb6aa689cca1e0fb3da \
+                    file://src/hb-ucd.cc;beginline=1;endline=15;md5=29d4dcb6410429195df67efe3382d8bc \
+                    "
+
+UPSTREAM_CHECK_URI = "https://github.com/${BPN}/${BPN}/releases"
+UPSTREAM_CHECK_REGEX = "harfbuzz-(?P<pver>\d+(\.\d+)+).tar"
+
+SRC_URI = "https://github.com/${BPN}/${BPN}/releases/download/${PV}/${BPN}-${PV}.tar.xz \
+           file://0001-fix-signedness-of-char-in-tests.patch \
+           "
+SRC_URI[sha256sum] = "2edb95db668781aaa8d60959d21be2ff80085f31b12053cdd660d9a50ce84f05"
+
+inherit meson pkgconfig lib_package gtk-doc gobject-introspection
+
+GIR_MESON_ENABLE_FLAG = 'enabled'
+GIR_MESON_DISABLE_FLAG = 'disabled'
+GTKDOC_MESON_ENABLE_FLAG = 'enabled'
+GTKDOC_MESON_DISABLE_FLAG = 'disabled'
+
+PACKAGECONFIG ??= "cairo freetype glib icu"
+PACKAGECONFIG[cairo] = "-Dcairo=enabled,-Dcairo=disabled,cairo"
+PACKAGECONFIG[freetype] = "-Dfreetype=enabled,-Dfreetype=disabled,freetype"
+PACKAGECONFIG[glib] = "-Dglib=enabled,-Dglib=disabled,glib-2.0"
+PACKAGECONFIG[graphite] = "-Dgraphite=enabled,-Dgraphite=disabled,graphite2"
+PACKAGECONFIG[icu] = "-Dicu=enabled,-Dicu=disabled,icu"
+
+PACKAGES =+ "${PN}-icu ${PN}-icu-dev ${PN}-subset"
+
+LEAD_SONAME = "libharfbuzz.so"
+
+do_install:append() {
+    # If no tools are installed due to PACKAGECONFIG then this directory is
+    # still installed, so remove it to stop packaging warnings.
+    rmdir --ignore-fail-on-non-empty ${D}${bindir}
+}
+
+FILES:${PN}-icu = "${libdir}/libharfbuzz-icu.so.*"
+FILES:${PN}-icu-dev = "${libdir}/libharfbuzz-icu.la \
+                       ${libdir}/libharfbuzz-icu.so \
+                       ${libdir}/pkgconfig/harfbuzz-icu.pc \
+"
+FILES:${PN}-subset = "${libdir}/libharfbuzz-subset.so.*"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-graphics/jpeg/libjpeg-turbo_2.1.3.bb b/poky/meta/recipes-graphics/jpeg/libjpeg-turbo_2.1.3.bb
deleted file mode 100644
index fdc035d..0000000
--- a/poky/meta/recipes-graphics/jpeg/libjpeg-turbo_2.1.3.bb
+++ /dev/null
@@ -1,62 +0,0 @@
-SUMMARY = "Hardware accelerated JPEG compression/decompression library"
-DESCRIPTION = "libjpeg-turbo is a derivative of libjpeg that uses SIMD instructions (MMX, SSE2, NEON) to accelerate baseline JPEG compression and decompression"
-HOMEPAGE = "http://libjpeg-turbo.org/"
-
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://cdjpeg.h;endline=13;md5=8a61af33cc1c681cd5cc297150bbb5bd \
-                    file://jpeglib.h;endline=16;md5=52b5eaade8d5b6a452a7693dfe52c084 \
-                    file://djpeg.c;endline=11;md5=510b386442ab6a27ee241fc5669bc5ea \
-                    "
-DEPENDS:append:x86-64:class-target = " nasm-native"
-DEPENDS:append:x86:class-target = " nasm-native"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BPN}-${PV}.tar.gz \
-           file://0001-libjpeg-turbo-fix-package_qa-error.patch \
-           "
-
-SRC_URI[sha256sum] = "467b310903832b033fe56cd37720d1b73a6a3bd0171dbf6ff0b620385f4f76d0"
-UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/libjpeg-turbo/files/"
-UPSTREAM_CHECK_REGEX = "/libjpeg-turbo/files/(?P<pver>(\d+[\.\-_]*)+)/"
-
-PE = "1"
-
-# Drop-in replacement for jpeg
-PROVIDES = "jpeg"
-RPROVIDES:${PN} += "jpeg"
-RREPLACES:${PN} += "jpeg"
-RCONFLICTS:${PN} += "jpeg"
-
-inherit cmake pkgconfig
-
-export NASMENV = "--reproducible --debug-prefix-map=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}"
-
-# Add nasm-native dependency consistently for all build arches is hard
-EXTRA_OECMAKE:append:class-native = " -DWITH_SIMD=False"
-EXTRA_OECMAKE:append:class-nativesdk = " -DWITH_SIMD=False"
-
-# Work around missing x32 ABI support
-EXTRA_OECMAKE:append:class-target = " ${@bb.utils.contains("TUNE_FEATURES", "mx32", "-DWITH_SIMD=False", "", d)}"
-
-# Work around missing non-floating point ABI support in MIPS
-EXTRA_OECMAKE:append:class-target = " ${@bb.utils.contains("MIPSPKGSFX_FPU", "-nf", "-DWITH_SIMD=False", "", d)}"
-
-EXTRA_OECMAKE:append:class-target:arm = " ${@bb.utils.contains("TUNE_FEATURES", "neon", "", "-DWITH_SIMD=False", d)}"
-EXTRA_OECMAKE:append:class-target:armeb = " ${@bb.utils.contains("TUNE_FEATURES", "neon", "", "-DWITH_SIMD=False", d)}"
-
-# Provide a workaround if Altivec unit is not present in PPC
-EXTRA_OECMAKE:append:class-target:powerpc = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "-DWITH_SIMD=False", d)}"
-EXTRA_OECMAKE:append:class-target:powerpc64 = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "-DWITH_SIMD=False", d)}"
-EXTRA_OECMAKE:append:class-target:powerpc64le = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "-DWITH_SIMD=False", d)}"
-
-DEBUG_OPTIMIZATION:append:armv4 = " ${@bb.utils.contains('TUNE_CCARGS', '-mthumb', '-fomit-frame-pointer', '', d)}"
-DEBUG_OPTIMIZATION:append:armv5 = " ${@bb.utils.contains('TUNE_CCARGS', '-mthumb', '-fomit-frame-pointer', '', d)}"
-
-PACKAGES =+ "jpeg-tools libturbojpeg"
-
-DESCRIPTION:jpeg-tools = "The jpeg-tools package includes client programs to access libjpeg functionality.  These tools allow for the compression, decompression, transformation and display of JPEG files and benchmarking of the libjpeg library."
-FILES:jpeg-tools = "${bindir}/*"
-
-DESCRIPTION:libturbojpeg = "A SIMD-accelerated JPEG codec which provides only TurboJPEG APIs"
-FILES:libturbojpeg = "${libdir}/libturbojpeg.so.*"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-graphics/jpeg/libjpeg-turbo_2.1.4.bb b/poky/meta/recipes-graphics/jpeg/libjpeg-turbo_2.1.4.bb
new file mode 100644
index 0000000..1708fa9
--- /dev/null
+++ b/poky/meta/recipes-graphics/jpeg/libjpeg-turbo_2.1.4.bb
@@ -0,0 +1,62 @@
+SUMMARY = "Hardware accelerated JPEG compression/decompression library"
+DESCRIPTION = "libjpeg-turbo is a derivative of libjpeg that uses SIMD instructions (MMX, SSE2, NEON) to accelerate baseline JPEG compression and decompression"
+HOMEPAGE = "http://libjpeg-turbo.org/"
+
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://cdjpeg.h;endline=13;md5=8a61af33cc1c681cd5cc297150bbb5bd \
+                    file://jpeglib.h;endline=16;md5=52b5eaade8d5b6a452a7693dfe52c084 \
+                    file://djpeg.c;endline=11;md5=510b386442ab6a27ee241fc5669bc5ea \
+                    "
+DEPENDS:append:x86-64:class-target = " nasm-native"
+DEPENDS:append:x86:class-target = " nasm-native"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BPN}-${PV}.tar.gz \
+           file://0001-libjpeg-turbo-fix-package_qa-error.patch \
+           "
+
+SRC_URI[sha256sum] = "d3ed26a1131a13686dfca4935e520eb7c90ae76fbc45d98bb50a8dc86230342b"
+UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/libjpeg-turbo/files/"
+UPSTREAM_CHECK_REGEX = "/libjpeg-turbo/files/(?P<pver>(\d+[\.\-_]*)+)/"
+
+PE = "1"
+
+# Drop-in replacement for jpeg
+PROVIDES = "jpeg"
+RPROVIDES:${PN} += "jpeg"
+RREPLACES:${PN} += "jpeg"
+RCONFLICTS:${PN} += "jpeg"
+
+inherit cmake pkgconfig
+
+export NASMENV = "--reproducible --debug-prefix-map=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}"
+
+# Adding nasm-native dependency consistently for all build arches is hard
+EXTRA_OECMAKE:append:class-native = " -DWITH_SIMD=False"
+EXTRA_OECMAKE:append:class-nativesdk = " -DWITH_SIMD=False"
+
+# Work around missing x32 ABI support
+EXTRA_OECMAKE:append:class-target = " ${@bb.utils.contains("TUNE_FEATURES", "mx32", "-DWITH_SIMD=False", "", d)}"
+
+# Work around missing non-floating point ABI support in MIPS
+EXTRA_OECMAKE:append:class-target = " ${@bb.utils.contains("MIPSPKGSFX_FPU", "-nf", "-DWITH_SIMD=False", "", d)}"
+
+EXTRA_OECMAKE:append:class-target:arm = " ${@bb.utils.contains("TUNE_FEATURES", "neon", "", "-DWITH_SIMD=False", d)}"
+EXTRA_OECMAKE:append:class-target:armeb = " ${@bb.utils.contains("TUNE_FEATURES", "neon", "", "-DWITH_SIMD=False", d)}"
+
+# Provide a workaround if Altivec unit is not present in PPC
+EXTRA_OECMAKE:append:class-target:powerpc = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "-DWITH_SIMD=False", d)}"
+EXTRA_OECMAKE:append:class-target:powerpc64 = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "-DWITH_SIMD=False", d)}"
+EXTRA_OECMAKE:append:class-target:powerpc64le = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "-DWITH_SIMD=False", d)}"
+
+DEBUG_OPTIMIZATION:append:armv4 = " ${@bb.utils.contains('TUNE_CCARGS', '-mthumb', '-fomit-frame-pointer', '', d)}"
+DEBUG_OPTIMIZATION:append:armv5 = " ${@bb.utils.contains('TUNE_CCARGS', '-mthumb', '-fomit-frame-pointer', '', d)}"
+
+PACKAGES =+ "jpeg-tools libturbojpeg"
+
+DESCRIPTION:jpeg-tools = "The jpeg-tools package includes client programs to access libjpeg functionality.  These tools allow for the compression, decompression, transformation and display of JPEG files and benchmarking of the libjpeg library."
+FILES:jpeg-tools = "${bindir}/*"
+
+DESCRIPTION:libturbojpeg = "A SIMD-accelerated JPEG codec which provides only TurboJPEG APIs"
+FILES:libturbojpeg = "${libdir}/libturbojpeg.so.*"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-graphics/kmscube/kmscube/0001-drm-common.c-do-not-use-invalid-modifier.patch b/poky/meta/recipes-graphics/kmscube/kmscube/0001-drm-common.c-do-not-use-invalid-modifier.patch
new file mode 100644
index 0000000..58ff3ba
--- /dev/null
+++ b/poky/meta/recipes-graphics/kmscube/kmscube/0001-drm-common.c-do-not-use-invalid-modifier.patch
@@ -0,0 +1,27 @@
+From bdde833c254092a47df6c7109a9751653c82aaae Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex@linutronix.de>
+Date: Mon, 8 Aug 2022 20:22:39 +0200
+Subject: [PATCH] drm-common.c: do not use invalid modifier
+
+Prior to kernel 5.19 this was a soft failure, but 5.19
+adds checks that result in a hard syscall fail.
+
+Upstream-Status: Submitted [https://gitlab.freedesktop.org/mesa/kmscube/-/merge_requests/33]
+Signed-off-by: Alexander Kanavin <alex@linutronix.de>
+---
+ drm-common.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drm-common.c b/drm-common.c
+index 5c9cca2..964e1c3 100644
+--- a/drm-common.c
++++ b/drm-common.c
+@@ -92,7 +92,7 @@ struct drm_fb * drm_fb_get_from_bo(struct gbm_bo *bo)
+ 			modifiers[i] = modifiers[0];
+ 		}
+ 
+-		if (modifiers[0]) {
++		if (modifiers[0] && modifiers[0] != DRM_FORMAT_MOD_INVALID) {
+ 			flags = DRM_MODE_FB_MODIFIERS;
+ 			printf("Using modifier %" PRIx64 "\n", modifiers[0]);
+ 		}
diff --git a/poky/meta/recipes-graphics/kmscube/kmscube_git.bb b/poky/meta/recipes-graphics/kmscube/kmscube_git.bb
index 58ce26a..f7ee6e4 100644
--- a/poky/meta/recipes-graphics/kmscube/kmscube_git.bb
+++ b/poky/meta/recipes-graphics/kmscube/kmscube_git.bb
@@ -11,8 +11,10 @@
 LIC_FILES_CHKSUM = "file://kmscube.c;beginline=1;endline=23;md5=8b309d4ee67b7315ff7381270dd631fb"
 
 SRCREV = "9f63f359fab1b5d8e862508e4e51c9dfe339ccb0"
-SRC_URI = "git://gitlab.freedesktop.org/mesa/kmscube;branch=master;protocol=https"
-SRC_URI += "file://0001-texturator-Use-correct-GL-extension-header.patch"
+SRC_URI = "git://gitlab.freedesktop.org/mesa/kmscube;branch=master;protocol=https \
+           file://0001-texturator-Use-correct-GL-extension-header.patch \
+           file://0001-drm-common.c-do-not-use-invalid-modifier.patch \
+           "
 UPSTREAM_CHECK_COMMITS = "1"
 
 S = "${WORKDIR}/git"
diff --git a/poky/meta/recipes-graphics/libsdl2/libsdl2_2.0.22.bb b/poky/meta/recipes-graphics/libsdl2/libsdl2_2.0.22.bb
index 8519e7f..ff3e162 100644
--- a/poky/meta/recipes-graphics/libsdl2/libsdl2_2.0.22.bb
+++ b/poky/meta/recipes-graphics/libsdl2/libsdl2_2.0.22.bb
@@ -30,7 +30,7 @@
 
 SRC_URI[sha256sum] = "fe7cbf3127882e3fc7259a75a0cb585620272c51745d3852ab9dd87960697f2e"
 
-inherit cmake lib_package binconfig-disabled pkgconfig
+inherit cmake lib_package binconfig-disabled pkgconfig upstream-version-is-even
 
 BINCONFIG = "${bindir}/sdl2-config"
 
diff --git a/poky/meta/recipes-graphics/mesa/mesa-gl_22.1.3.bb b/poky/meta/recipes-graphics/mesa/mesa-gl_22.1.6.bb
similarity index 100%
rename from poky/meta/recipes-graphics/mesa/mesa-gl_22.1.3.bb
rename to poky/meta/recipes-graphics/mesa/mesa-gl_22.1.6.bb
diff --git a/poky/meta/recipes-graphics/mesa/mesa.inc b/poky/meta/recipes-graphics/mesa/mesa.inc
index f02ef2d..ea7ed4f 100644
--- a/poky/meta/recipes-graphics/mesa/mesa.inc
+++ b/poky/meta/recipes-graphics/mesa/mesa.inc
@@ -25,7 +25,7 @@
            file://0001-nir-nir_opt_move-fix-ALWAYS_INLINE-compiler-error.patch \
            "
 
-SRC_URI[sha256sum] = "b98f32ba7aa2a1ff5725fb36eb999c693079f0ca16f70aa2f103e2b6c3f093e3"
+SRC_URI[sha256sum] = "22ced061eb9adab8ea35368246c1995c09723f3f71653cd5050c5cec376e671a"
 
 UPSTREAM_CHECK_GITTAGREGEX = "mesa-(?P<pver>\d+(\.\d+)+)"
 
@@ -55,12 +55,17 @@
 
 PLATFORMS ??= "${@bb.utils.filter('PACKAGECONFIG', 'x11 wayland', d)}"
 
-export YOCTO_ALTERNATE_EXE_PATH = "${STAGING_LIBDIR}/llvm-config"
-export YOCTO_ALTERNATE_MULTILIB_NAME = "${base_libdir}"
-export LLVM_CONFIG = "${STAGING_BINDIR_NATIVE}/llvm-config${MESA_LLVM_RELEASE}"
-export WANT_LLVM_RELEASE = "${MESA_LLVM_RELEASE}"
-
+# By placing llvm-config in the target sysroot bindir, it will then map values
+# to the target libdir magically. We can safely add to path as there are no other binaries
+# there.
+PATH:prepend = "${STAGING_BINDIR_CROSS}:${STAGING_BINDIR}:"
 MESA_LLVM_RELEASE ?= "${LLVMVERSION}"
+do_configure:prepend () {
+	if [ -e ${STAGING_BINDIR_NATIVE}/llvm-config${MESA_LLVM_RELEASE} ]; then
+		cp ${STAGING_BINDIR_NATIVE}/llvm-config${MESA_LLVM_RELEASE} ${STAGING_BINDIR}
+		cp ${STAGING_BINDIR_NATIVE}/llvm-config ${STAGING_BINDIR}
+	fi
+}
 
 # set the MESA_BUILD_TYPE to either 'release' (default) or 'debug'
 # by default the upstream mesa sources build a debug release
@@ -123,7 +128,8 @@
 PACKAGECONFIG[egl] = "-Degl=enabled, -Degl=disabled"
 
 # "opencl" requires libclc from meta-clang and spirv-tools from OE-Core
-PACKAGECONFIG[opencl] = "-Dgallium-opencl=icd -Dopencl-spirv=true,-Dgallium-opencl=disabled -Dopencl-spirv=false,libclc spirv-tools"
+OPENCL_NATIVE = "${@bb.utils.contains('PACKAGECONFIG', 'freedreno', '-Dopencl-native=true', '', d)}"
+PACKAGECONFIG[opencl] = "-Dgallium-opencl=icd -Dopencl-spirv=true ${OPENCL_NATIVE},-Dgallium-opencl=disabled -Dopencl-spirv=false,libclc spirv-tools"
 
 PACKAGECONFIG[broadcom] = ""
 PACKAGECONFIG[etnaviv] = ""
@@ -297,7 +303,7 @@
 FILES:libgles1-mesa = "${libdir}/libGLESv1*.so.*"
 FILES:libgles2-mesa = "${libdir}/libGLESv2.so.*"
 FILES:libgl-mesa = "${libdir}/libGL.so.*"
-FILES:libopencl-mesa = "${libdir}/libMesaOpenCL.so.* ${sysconfdir}/OpenCL/vendors/mesa.icd"
+FILES:libopencl-mesa = "${libdir}/libMesaOpenCL.so.* ${libdir}/gallium-pipe/*.so ${sysconfdir}/OpenCL/vendors/mesa.icd"
 FILES:libglapi = "${libdir}/libglapi.so.*"
 FILES:libosmesa = "${libdir}/libOSMesa.so.*"
 FILES:libxatracker = "${libdir}/libxatracker.so.*"
diff --git a/poky/meta/recipes-graphics/mesa/mesa_22.1.3.bb b/poky/meta/recipes-graphics/mesa/mesa_22.1.6.bb
similarity index 100%
rename from poky/meta/recipes-graphics/mesa/mesa_22.1.3.bb
rename to poky/meta/recipes-graphics/mesa/mesa_22.1.6.bb
diff --git a/poky/meta/recipes-graphics/pango/pango_1.50.8.bb b/poky/meta/recipes-graphics/pango/pango_1.50.8.bb
deleted file mode 100644
index 9365267..0000000
--- a/poky/meta/recipes-graphics/pango/pango_1.50.8.bb
+++ /dev/null
@@ -1,54 +0,0 @@
-SUMMARY = "Framework for layout and rendering of internationalized text"
-DESCRIPTION = "Pango is a library for laying out and rendering of text, \
-with an emphasis on internationalization. Pango can be used anywhere \
-that text layout is needed, though most of the work on Pango so far has \
-been done in the context of the GTK+ widget toolkit. Pango forms the \
-core of text and font handling for GTK+-2.x."
-HOMEPAGE = "http://www.pango.org/"
-BUGTRACKER = "http://bugzilla.gnome.org"
-SECTION = "libs"
-LICENSE = "LGPL-2.0-or-later"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=3bf50002aefd002f49e7bb854063f7e7"
-
-GNOMEBASEBUILDCLASS = "meson"
-
-inherit gnomebase gi-docgen ptest-gnome upstream-version-is-even gobject-introspection
-
-UPSTREAM_CHECK_REGEX = "pango-(?P<pver>\d+\.(?!9\d+)\d+\.\d+)"
-
-GIR_MESON_ENABLE_FLAG = "enabled"
-GIR_MESON_DISABLE_FLAG = "disabled"
-
-SRC_URI += "file://run-ptest \
-            file://0001-Skip-running-test-layout-test.patch \
-"
-
-SRC_URI[archive.sha256sum] = "cf626f59dd146c023174c4034920e9667f1d25ac2c1569516d63136c311255fa"
-
-DEPENDS = "glib-2.0 glib-2.0-native fontconfig freetype virtual/libiconv cairo harfbuzz fribidi"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)} \
-                   ${@bb.utils.contains('PTEST_ENABLED', '1', 'tests', '', d)}"
-
-PACKAGECONFIG[x11] = ",,virtual/libx11 libxft"
-PACKAGECONFIG[tests] = "-Dinstall-tests=true, -Dinstall-tests=false"
-PACKAGECONFIG[thai] = ",,libthai"
-
-GIR_MESON_OPTION = 'introspection'
-
-do_configure:prepend() {
-    chmod +x ${S}/tests/*.py
-}
-
-LEAD_SONAME = "libpango-1.0*"
-
-FILES:${PN} = "${bindir}/* ${libdir}/libpango*${SOLIBS}"
-
-RDEPENDS:${PN}-ptest += "cantarell-fonts"
-RDEPENDS:${PN}-ptest:append:libc-glibc = " locale-base-en-us"
-
-RPROVIDES:${PN} += "pango-modules pango-module-indic-lang \
-                    pango-module-basic-fc pango-module-arabic-lang"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-graphics/pango/pango_1.50.9.bb b/poky/meta/recipes-graphics/pango/pango_1.50.9.bb
new file mode 100644
index 0000000..03e2ca6
--- /dev/null
+++ b/poky/meta/recipes-graphics/pango/pango_1.50.9.bb
@@ -0,0 +1,54 @@
+SUMMARY = "Framework for layout and rendering of internationalized text"
+DESCRIPTION = "Pango is a library for laying out and rendering of text, \
+with an emphasis on internationalization. Pango can be used anywhere \
+that text layout is needed, though most of the work on Pango so far has \
+been done in the context of the GTK+ widget toolkit. Pango forms the \
+core of text and font handling for GTK+-2.x."
+HOMEPAGE = "http://www.pango.org/"
+BUGTRACKER = "http://bugzilla.gnome.org"
+SECTION = "libs"
+LICENSE = "LGPL-2.0-or-later"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=3bf50002aefd002f49e7bb854063f7e7"
+
+GNOMEBASEBUILDCLASS = "meson"
+
+inherit gnomebase gi-docgen ptest-gnome upstream-version-is-even gobject-introspection
+
+UPSTREAM_CHECK_REGEX = "pango-(?P<pver>\d+\.(?!9\d+)\d+\.\d+)"
+
+GIR_MESON_ENABLE_FLAG = "enabled"
+GIR_MESON_DISABLE_FLAG = "disabled"
+
+SRC_URI += "file://run-ptest \
+            file://0001-Skip-running-test-layout-test.patch \
+"
+
+SRC_URI[archive.sha256sum] = "1b636aabf905130d806372136f5e137b6a27f26d47defd9240bf444f6a4fe610"
+
+DEPENDS = "glib-2.0 glib-2.0-native fontconfig freetype virtual/libiconv cairo harfbuzz fribidi"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)} \
+                   ${@bb.utils.contains('PTEST_ENABLED', '1', 'tests', '', d)}"
+
+PACKAGECONFIG[x11] = ",,virtual/libx11 libxft"
+PACKAGECONFIG[tests] = "-Dinstall-tests=true, -Dinstall-tests=false"
+PACKAGECONFIG[thai] = ",,libthai"
+
+GIR_MESON_OPTION = 'introspection'
+
+do_configure:prepend() {
+    chmod +x ${S}/tests/*.py
+}
+
+LEAD_SONAME = "libpango-1.0*"
+
+FILES:${PN} = "${bindir}/* ${libdir}/libpango*${SOLIBS}"
+
+RDEPENDS:${PN}-ptest += "cantarell-fonts"
+RDEPENDS:${PN}-ptest:append:libc-glibc = " locale-base-en-us"
+
+RPROVIDES:${PN} += "pango-modules pango-module-indic-lang \
+                    pango-module-basic-fc pango-module-arabic-lang"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-graphics/piglit/piglit/0001-cmake-use-proper-WAYLAND_INCLUDE_DIRS-variable.patch b/poky/meta/recipes-graphics/piglit/piglit/0002-cmake-use-proper-WAYLAND_INCLUDE_DIRS-variable.patch
similarity index 100%
rename from poky/meta/recipes-graphics/piglit/piglit/0001-cmake-use-proper-WAYLAND_INCLUDE_DIRS-variable.patch
rename to poky/meta/recipes-graphics/piglit/piglit/0002-cmake-use-proper-WAYLAND_INCLUDE_DIRS-variable.patch
diff --git a/poky/meta/recipes-graphics/piglit/piglit/0002-tests-util-piglit-shader.c-do-not-hardcode-build-pat.patch b/poky/meta/recipes-graphics/piglit/piglit/0003-tests-util-piglit-shader.c-do-not-hardcode-build-pat.patch
similarity index 100%
rename from poky/meta/recipes-graphics/piglit/piglit/0002-tests-util-piglit-shader.c-do-not-hardcode-build-pat.patch
rename to poky/meta/recipes-graphics/piglit/piglit/0003-tests-util-piglit-shader.c-do-not-hardcode-build-pat.patch
diff --git a/poky/meta/recipes-graphics/piglit/piglit/0001-CMakeLists.txt-add-missing-endian.h-check.patch b/poky/meta/recipes-graphics/piglit/piglit/0004-CMakeLists.txt-add-missing-endian.h-check.patch
similarity index 100%
rename from poky/meta/recipes-graphics/piglit/piglit/0001-CMakeLists.txt-add-missing-endian.h-check.patch
rename to poky/meta/recipes-graphics/piglit/piglit/0004-CMakeLists.txt-add-missing-endian.h-check.patch
diff --git a/poky/meta/recipes-graphics/piglit/piglit/0005-cmake-Don-t-enable-GLX-if-tests-are-disabled.patch b/poky/meta/recipes-graphics/piglit/piglit/0005-cmake-Don-t-enable-GLX-if-tests-are-disabled.patch
new file mode 100644
index 0000000..ef6fda0
--- /dev/null
+++ b/poky/meta/recipes-graphics/piglit/piglit/0005-cmake-Don-t-enable-GLX-if-tests-are-disabled.patch
@@ -0,0 +1,32 @@
+From 13ff43fe760ac343b33d8e8c84b89886aac07116 Mon Sep 17 00:00:00 2001
+From: Tom Hochstein <tom.hochstein@nxp.com>
+Date: Fri, 3 Jun 2022 10:44:29 -0500
+Subject: [PATCH] cmake: Don't enable GLX if tests are disabled
+
+Allow building for systems that don't support GLX.
+
+Upstream-Status: Submitted [https://gitlab.freedesktop.org/mesa/piglit/-/merge_requests/720]
+Signed-off-by: Tom Hochstein <tom.hochstein@nxp.com>
+---
+ CMakeLists.txt | 5 +----
+ 1 file changed, 1 insertion(+), 4 deletions(-)
+
+diff --git a/CMakeLists.txt b/CMakeLists.txt
+index e1aeb5ddf..85e171aba 100644
+--- a/CMakeLists.txt
++++ b/CMakeLists.txt
+@@ -134,10 +134,7 @@ if(PIGLIT_BUILD_CL_TESTS)
+ endif(PIGLIT_BUILD_CL_TESTS)
+ 
+ IF(${CMAKE_SYSTEM_NAME} MATCHES "Linux")
+-	if(X11_FOUND AND OPENGL_gl_LIBRARY)
+-		# Assume the system has GLX. In the future, systems may exist
+-		# with libGL and libX11 but no GLX, but that world hasn't
+-		# arrived yet.
++	if(X11_FOUND AND OPENGL_gl_LIBRARY AND PIGLIT_BUILD_GLX_TESTS)
+ 		set(PIGLIT_HAS_GLX True)
+ 		add_definitions(-DPIGLIT_HAS_GLX)
+ 	endif()
+-- 
+2.17.1
+
diff --git a/poky/meta/recipes-graphics/piglit/piglit_git.bb b/poky/meta/recipes-graphics/piglit/piglit_git.bb
index 932f292..29360a2 100644
--- a/poky/meta/recipes-graphics/piglit/piglit_git.bb
+++ b/poky/meta/recipes-graphics/piglit/piglit_git.bb
@@ -8,13 +8,13 @@
 
 SRC_URI = "git://gitlab.freedesktop.org/mesa/piglit.git;protocol=https;branch=main \
            file://0001-cmake-install-bash-completions-in-the-right-place.patch \
-           file://0001-cmake-use-proper-WAYLAND_INCLUDE_DIRS-variable.patch \
-           file://0002-tests-util-piglit-shader.c-do-not-hardcode-build-pat.patch \
-           file://0001-CMakeLists.txt-add-missing-endian.h-check.patch \
-           "
+           file://0002-cmake-use-proper-WAYLAND_INCLUDE_DIRS-variable.patch \
+           file://0003-tests-util-piglit-shader.c-do-not-hardcode-build-pat.patch \
+           file://0004-CMakeLists.txt-add-missing-endian.h-check.patch \
+           file://0005-cmake-Don-t-enable-GLX-if-tests-are-disabled.patch"
 UPSTREAM_CHECK_COMMITS = "1"
 
-SRCREV = "c107462e4e429fa1cf6daac39168674c1c0c9fd4"
+SRCREV = "6403e90dc7da02d486906cddab8d02c2552a8d46"
 # (when PV goes above 1.0 remove the trailing r)
 PV = "1.0+gitr${SRCPV}"
 
@@ -36,8 +36,10 @@
 export TEMP = "${B}/temp/"
 do_compile[dirs] =+ "${B}/temp/"
 
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
+PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'x11 glx', '', d)}"
 PACKAGECONFIG[freeglut] = "-DPIGLIT_USE_GLUT=1,-DPIGLIT_USE_GLUT=0,freeglut,"
+PACKAGECONFIG[glx] = "-DPIGLIT_BUILD_GLX_TESTS=ON,-DPIGLIT_BUILD_GLX_TESTS=OFF"
+PACKAGECONFIG[opencl] = "-DPIGLIT_BUILD_CL_TESTS=ON,-DPIGLIT_BUILD_CL_TESTS=OFF,opencl-icd-loader"
 PACKAGECONFIG[x11] = "-DPIGLIT_BUILD_GL_TESTS=ON,-DPIGLIT_BUILD_GL_TESTS=OFF,${X11_DEPS}, ${X11_RDEPS}"
 PACKAGECONFIG[vulkan] = "-DPIGLIT_BUILD_VK_TESTS=ON,-DPIGLIT_BUILD_VK_TESTS=OFF,vulkan-loader"
 
diff --git a/poky/meta/recipes-graphics/shaderc/shaderc_2022.1.bb b/poky/meta/recipes-graphics/shaderc/shaderc_2022.1.bb
deleted file mode 100644
index fe9d940..0000000
--- a/poky/meta/recipes-graphics/shaderc/shaderc_2022.1.bb
+++ /dev/null
@@ -1,29 +0,0 @@
-SUMMARY  = "A collection of tools, libraries and tests for shader compilation"
-DESCRIPTION = "The Shaderc library provides an API for compiling GLSL/HLSL \
-source code to SPIRV modules. It has been shipping in the Android NDK since version r12b."
-SECTION = "graphics"
-HOMEPAGE = "https://github.com/google/shaderc"
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
-
-SRCREV = "e4722b0ad49ee60c143d43baae8390f75ba27d2d"
-SRC_URI = "git://github.com/google/shaderc.git;protocol=https;branch=main \
-           file://0001-cmake-disable-building-external-dependencies.patch \
-           file://0002-libshaderc_util-fix-glslang-header-file-location.patch \
-           "
-UPSTREAM_CHECK_GITTAGREGEX = "^v(?P<pver>\d+(\.\d+)+)$"
-S = "${WORKDIR}/git"
-
-inherit cmake python3native pkgconfig
-
-DEPENDS = "spirv-headers spirv-tools glslang"
-
-EXTRA_OECMAKE = " \
-    -DCMAKE_BUILD_TYPE=Release \
-    -DBUILD_EXTERNAL=OFF \
-    -DSHADERC_SKIP_TESTS=ON \
-    -DSHADERC_SKIP_EXAMPLES=ON \
-    -DSHADERC_SKIP_COPYRIGHT_CHECK=ON \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-graphics/shaderc/shaderc_2022.2.bb b/poky/meta/recipes-graphics/shaderc/shaderc_2022.2.bb
new file mode 100644
index 0000000..df0fe8e
--- /dev/null
+++ b/poky/meta/recipes-graphics/shaderc/shaderc_2022.2.bb
@@ -0,0 +1,29 @@
+SUMMARY  = "A collection of tools, libraries and tests for shader compilation"
+DESCRIPTION = "The Shaderc library provides an API for compiling GLSL/HLSL \
+source code to SPIRV modules. It has been shipping in the Android NDK since version r12b."
+SECTION = "graphics"
+HOMEPAGE = "https://github.com/google/shaderc"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=86d3f3a95c324c9479bd8986968f4327"
+
+SRCREV = "3f1635df7774a90f691773e0093bc6ad8005de5a"
+SRC_URI = "git://github.com/google/shaderc.git;protocol=https;branch=main \
+           file://0001-cmake-disable-building-external-dependencies.patch \
+           file://0002-libshaderc_util-fix-glslang-header-file-location.patch \
+           "
+UPSTREAM_CHECK_GITTAGREGEX = "^v(?P<pver>\d+(\.\d+)+)$"
+S = "${WORKDIR}/git"
+
+inherit cmake python3native pkgconfig
+
+DEPENDS = "spirv-headers spirv-tools glslang"
+
+EXTRA_OECMAKE = " \
+    -DCMAKE_BUILD_TYPE=Release \
+    -DBUILD_EXTERNAL=OFF \
+    -DSHADERC_SKIP_TESTS=ON \
+    -DSHADERC_SKIP_EXAMPLES=ON \
+    -DSHADERC_SKIP_COPYRIGHT_CHECK=ON \
+"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-graphics/spir/spirv-tools/0001-Remove-default-copy-constructor-in-header.-4879.patch b/poky/meta/recipes-graphics/spir/spirv-tools/0001-Remove-default-copy-constructor-in-header.-4879.patch
new file mode 100644
index 0000000..044c366
--- /dev/null
+++ b/poky/meta/recipes-graphics/spir/spirv-tools/0001-Remove-default-copy-constructor-in-header.-4879.patch
@@ -0,0 +1,34 @@
+From a90ccc240501bf3362b23f67771f65b7dec2ccf9 Mon Sep 17 00:00:00 2001
+From: Jamie Madill <jmadill@chromium.org>
+Date: Fri, 29 Jul 2022 14:26:37 -0400
+Subject: [PATCH] Remove default copy constructor in header. (#4879)
+
+A recent libc++ roll in Chrome warned of a deprecated copy. We're
+still looking if this is a bug in libc++ or a valid warning, but
+removing the redundant line is a safe workaround or fix in either
+case.
+
+See discussion in https://crrev.com/c/3791771
+
+Upstream-Status: Backport [https://github.com/KhronosGroup/SPIRV-Tools/pull/4879]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ source/opt/merge_return_pass.h | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/source/opt/merge_return_pass.h b/source/opt/merge_return_pass.h
+index a35cf269..d15db2f6 100644
+--- a/source/opt/merge_return_pass.h
++++ b/source/opt/merge_return_pass.h
+@@ -118,8 +118,6 @@ class MergeReturnPass : public MemPass {
+     StructuredControlState(Instruction* break_merge, Instruction* merge)
+         : break_merge_(break_merge), current_merge_(merge) {}
+ 
+-    StructuredControlState(const StructuredControlState&) = default;
+-
+     bool InBreakable() const { return break_merge_; }
+     bool InStructuredFlow() const { return CurrentMergeId() != 0; }
+ 
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-graphics/spir/spirv-tools_1.3.216.0.bb b/poky/meta/recipes-graphics/spir/spirv-tools_1.3.216.0.bb
index eb90732..fc1074d 100644
--- a/poky/meta/recipes-graphics/spir/spirv-tools_1.3.216.0.bb
+++ b/poky/meta/recipes-graphics/spir/spirv-tools_1.3.216.0.bb
@@ -8,7 +8,9 @@
 LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
 
 SRCREV = "c94501352d545e84c821ce031399e76d1af32d18"
-SRC_URI = "git://github.com/KhronosGroup/SPIRV-Tools.git;branch=master;protocol=https"
+SRC_URI = "git://github.com/KhronosGroup/SPIRV-Tools.git;branch=master;protocol=https \
+           file://0001-Remove-default-copy-constructor-in-header.-4879.patch \
+          "
 PE = "1"
 UPSTREAM_CHECK_GITTAGREGEX = "sdk-(?P<pver>\d+(\.\d+)+)"
 S = "${WORKDIR}/git"
diff --git a/poky/meta/recipes-graphics/vulkan/vulkan-samples/0001-Qualify-move-as-std-move.patch b/poky/meta/recipes-graphics/vulkan/vulkan-samples/0001-Qualify-move-as-std-move.patch
new file mode 100644
index 0000000..80ee521
--- /dev/null
+++ b/poky/meta/recipes-graphics/vulkan/vulkan-samples/0001-Qualify-move-as-std-move.patch
@@ -0,0 +1,405 @@
+From 9e061b12b9305eee96a4d3129b646c244cc02347 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 14 Aug 2022 19:16:45 -0700
+Subject: [PATCH] Qualify move as std::move
+
+Seeing errors like below with clang-15
+
+spirv_common.hpp:692:19: error: unqualified call to std::move
+
+squashed Backport of following upstream commits
+
+https://github.com/KhronosGroup/SPIRV-Cross/commit/44c3333a1c315ead00c24f7aef5fa8a7ccf49299
+https://github.com/KhronosGroup/SPIRV-Cross/commit/0eda71c40936586ea812d0d20d93c11a3e9af192
+
+Upstream-Status: Backport [https://github.com/KhronosGroup/SPIRV-Cross/commit/0eda71c40936586ea812d0d20d93c11a3e9af192]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ main.cpp          | 38 +++++++++++++++++++-------------------
+ spirv_common.hpp  |  2 +-
+ spirv_cpp.hpp     |  2 +-
+ spirv_cross.cpp   | 18 +++++++++---------
+ spirv_glsl.cpp    |  2 +-
+ spirv_glsl.hpp    |  2 +-
+ spirv_hlsl.cpp    |  4 ++--
+ spirv_hlsl.hpp    |  2 +-
+ spirv_msl.cpp     |  2 +-
+ spirv_parser.cpp  |  4 ++--
+ spirv_reflect.hpp |  2 +-
+ 11 files changed, 39 insertions(+), 39 deletions(-)
+
+diff --git a/main.cpp b/main.cpp
+index 78991094..581c5e3b 100644
+--- a/main.cpp
++++ b/main.cpp
+@@ -65,7 +65,7 @@ struct CLICallbacks
+ struct CLIParser
+ {
+ 	CLIParser(CLICallbacks cbs_, int argc_, char *argv_[])
+-	    : cbs(move(cbs_))
++	    : cbs(std::move(cbs_))
+ 	    , argc(argc_)
+ 	    , argv(argv_)
+ 	{
+@@ -722,7 +722,7 @@ static int main_inner(int argc, char *argv[])
+ 		auto old_name = parser.next_string();
+ 		auto new_name = parser.next_string();
+ 		auto model = stage_to_execution_model(parser.next_string());
+-		args.entry_point_rename.push_back({ old_name, new_name, move(model) });
++		args.entry_point_rename.push_back({ old_name, new_name, std::move(model) });
+ 	});
+ 	cbs.add("--entry", [&args](CLIParser &parser) { args.entry = parser.next_string(); });
+ 	cbs.add("--stage", [&args](CLIParser &parser) { args.entry_stage = parser.next_string(); });
+@@ -731,20 +731,20 @@ static int main_inner(int argc, char *argv[])
+ 		HLSLVertexAttributeRemap remap;
+ 		remap.location = parser.next_uint();
+ 		remap.semantic = parser.next_string();
+-		args.hlsl_attr_remap.push_back(move(remap));
++		args.hlsl_attr_remap.push_back(std::move(remap));
+ 	});
+ 
+ 	cbs.add("--remap", [&args](CLIParser &parser) {
+ 		string src = parser.next_string();
+ 		string dst = parser.next_string();
+ 		uint32_t components = parser.next_uint();
+-		args.remaps.push_back({ move(src), move(dst), components });
++		args.remaps.push_back({ std::move(src), std::move(dst), components });
+ 	});
+ 
+ 	cbs.add("--remap-variable-type", [&args](CLIParser &parser) {
+ 		string var_name = parser.next_string();
+ 		string new_type = parser.next_string();
+-		args.variable_type_remaps.push_back({ move(var_name), move(new_type) });
++		args.variable_type_remaps.push_back({ std::move(var_name), std::move(new_type) });
+ 	});
+ 
+ 	cbs.add("--rename-interface-variable", [&args](CLIParser &parser) {
+@@ -757,18 +757,18 @@ static int main_inner(int argc, char *argv[])
+ 
+ 		uint32_t loc = parser.next_uint();
+ 		string var_name = parser.next_string();
+-		args.interface_variable_renames.push_back({ cls, loc, move(var_name) });
++		args.interface_variable_renames.push_back({ cls, loc, std::move(var_name) });
+ 	});
+ 
+ 	cbs.add("--pls-in", [&args](CLIParser &parser) {
+ 		auto fmt = pls_format(parser.next_string());
+ 		auto name = parser.next_string();
+-		args.pls_in.push_back({ move(fmt), move(name) });
++		args.pls_in.push_back({ std::move(fmt), std::move(name) });
+ 	});
+ 	cbs.add("--pls-out", [&args](CLIParser &parser) {
+ 		auto fmt = pls_format(parser.next_string());
+ 		auto name = parser.next_string();
+-		args.pls_out.push_back({ move(fmt), move(name) });
++		args.pls_out.push_back({ std::move(fmt), std::move(name) });
+ 	});
+ 	cbs.add("--shader-model", [&args](CLIParser &parser) {
+ 		args.shader_model = parser.next_uint();
+@@ -788,7 +788,7 @@ static int main_inner(int argc, char *argv[])
+ 	cbs.default_handler = [&args](const char *value) { args.input = value; };
+ 	cbs.error_handler = [] { print_help(); };
+ 
+-	CLIParser parser{ move(cbs), argc - 1, argv + 1 };
++	CLIParser parser{ std::move(cbs), argc - 1, argv + 1 };
+ 	if (!parser.parse())
+ 	{
+ 		return EXIT_FAILURE;
+@@ -808,14 +808,14 @@ static int main_inner(int argc, char *argv[])
+ 	auto spirv_file = read_spirv_file(args.input);
+ 	if (spirv_file.empty())
+ 		return EXIT_FAILURE;
+-	Parser spirv_parser(move(spirv_file));
++	Parser spirv_parser(std::move(spirv_file));
+ 
+ 	spirv_parser.parse();
+ 
+ 	// Special case reflection because it has little to do with the path followed by code-outputting compilers
+ 	if (!args.reflect.empty())
+ 	{
+-		CompilerReflection compiler(move(spirv_parser.get_parsed_ir()));
++		CompilerReflection compiler(std::move(spirv_parser.get_parsed_ir()));
+ 		compiler.set_format(args.reflect);
+ 		auto json = compiler.compile();
+ 		if (args.output)
+@@ -831,13 +831,13 @@ static int main_inner(int argc, char *argv[])
+ 
+ 	if (args.cpp)
+ 	{
+-		compiler.reset(new CompilerCPP(move(spirv_parser.get_parsed_ir())));
++		compiler.reset(new CompilerCPP(std::move(spirv_parser.get_parsed_ir())));
+ 		if (args.cpp_interface_name)
+ 			static_cast<CompilerCPP *>(compiler.get())->set_interface_name(args.cpp_interface_name);
+ 	}
+ 	else if (args.msl)
+ 	{
+-		compiler.reset(new CompilerMSL(move(spirv_parser.get_parsed_ir())));
++		compiler.reset(new CompilerMSL(std::move(spirv_parser.get_parsed_ir())));
+ 
+ 		auto *msl_comp = static_cast<CompilerMSL *>(compiler.get());
+ 		auto msl_opts = msl_comp->get_msl_options();
+@@ -850,13 +850,13 @@ static int main_inner(int argc, char *argv[])
+ 		msl_comp->set_msl_options(msl_opts);
+ 	}
+ 	else if (args.hlsl)
+-		compiler.reset(new CompilerHLSL(move(spirv_parser.get_parsed_ir())));
++		compiler.reset(new CompilerHLSL(std::move(spirv_parser.get_parsed_ir())));
+ 	else
+ 	{
+ 		combined_image_samplers = !args.vulkan_semantics;
+ 		if (!args.vulkan_semantics)
+ 			build_dummy_sampler = true;
+-		compiler.reset(new CompilerGLSL(move(spirv_parser.get_parsed_ir())));
++		compiler.reset(new CompilerGLSL(std::move(spirv_parser.get_parsed_ir())));
+ 	}
+ 
+ 	if (!args.variable_type_remaps.empty())
+@@ -867,7 +867,7 @@ static int main_inner(int argc, char *argv[])
+ 					out = remap.new_variable_type;
+ 		};
+ 
+-		compiler->set_variable_type_remap_callback(move(remap_cb));
++		compiler->set_variable_type_remap_callback(std::move(remap_cb));
+ 	}
+ 
+ 	for (auto &rename : args.entry_point_rename)
+@@ -1019,7 +1019,7 @@ static int main_inner(int argc, char *argv[])
+ 	{
+ 		auto active = compiler->get_active_interface_variables();
+ 		res = compiler->get_shader_resources(active);
+-		compiler->set_enabled_interface_variables(move(active));
++		compiler->set_enabled_interface_variables(std::move(active));
+ 	}
+ 	else
+ 		res = compiler->get_shader_resources();
+@@ -1034,7 +1034,7 @@ static int main_inner(int argc, char *argv[])
+ 
+ 	auto pls_inputs = remap_pls(args.pls_in, res.stage_inputs, &res.subpass_inputs);
+ 	auto pls_outputs = remap_pls(args.pls_out, res.stage_outputs, nullptr);
+-	compiler->remap_pixel_local_storage(move(pls_inputs), move(pls_outputs));
++	compiler->remap_pixel_local_storage(std::move(pls_inputs), std::move(pls_outputs));
+ 
+ 	for (auto &ext : args.extensions)
+ 		compiler->require_extension(ext);
+@@ -1101,7 +1101,7 @@ static int main_inner(int argc, char *argv[])
+ 	for (uint32_t i = 0; i < args.iterations; i++)
+ 	{
+ 		if (args.hlsl)
+-			glsl = static_cast<CompilerHLSL *>(compiler.get())->compile(move(args.hlsl_attr_remap));
++			glsl = static_cast<CompilerHLSL *>(compiler.get())->compile(std::move(args.hlsl_attr_remap));
+ 		else
+ 			glsl = compiler->compile();
+ 	}
+diff --git a/spirv_common.hpp b/spirv_common.hpp
+index aa32142d..0daf5ad5 100644
+--- a/spirv_common.hpp
++++ b/spirv_common.hpp
+@@ -536,7 +536,7 @@ struct SPIRExpression : IVariant
+ 
+ 	// Only created by the backend target to avoid creating tons of temporaries.
+ 	SPIRExpression(std::string expr, uint32_t expression_type_, bool immutable_)
+-	    : expression(move(expr))
++	    : expression(std::move(expr))
+ 	    , expression_type(expression_type_)
+ 	    , immutable(immutable_)
+ 	{
+diff --git a/spirv_cpp.hpp b/spirv_cpp.hpp
+index bcdb669b..b4da84b9 100644
+--- a/spirv_cpp.hpp
++++ b/spirv_cpp.hpp
+@@ -27,7 +27,7 @@ class CompilerCPP : public CompilerGLSL
+ {
+ public:
+ 	explicit CompilerCPP(std::vector<uint32_t> spirv_)
+-	    : CompilerGLSL(move(spirv_))
++	    : CompilerGLSL(std::move(spirv_))
+ 	{
+ 	}
+ 
+diff --git a/spirv_cross.cpp b/spirv_cross.cpp
+index 19bac301..d21dbd41 100644
+--- a/spirv_cross.cpp
++++ b/spirv_cross.cpp
+@@ -28,16 +28,16 @@ using namespace spirv_cross;
+ 
+ Compiler::Compiler(vector<uint32_t> ir_)
+ {
+-	Parser parser(move(ir_));
++	Parser parser(std::move(ir_));
+ 	parser.parse();
+-	set_ir(move(parser.get_parsed_ir()));
++	set_ir(std::move(parser.get_parsed_ir()));
+ }
+ 
+ Compiler::Compiler(const uint32_t *ir_, size_t word_count)
+ {
+ 	Parser parser(ir_, word_count);
+ 	parser.parse();
+-	set_ir(move(parser.get_parsed_ir()));
++	set_ir(std::move(parser.get_parsed_ir()));
+ }
+ 
+ Compiler::Compiler(const ParsedIR &ir_)
+@@ -47,12 +47,12 @@ Compiler::Compiler(const ParsedIR &ir_)
+ 
+ Compiler::Compiler(ParsedIR &&ir_)
+ {
+-	set_ir(move(ir_));
++	set_ir(std::move(ir_));
+ }
+ 
+ void Compiler::set_ir(ParsedIR &&ir_)
+ {
+-	ir = move(ir_);
++	ir = std::move(ir_);
+ 	parse_fixup();
+ }
+ 
+@@ -716,7 +716,7 @@ unordered_set<uint32_t> Compiler::get_active_interface_variables() const
+ 
+ void Compiler::set_enabled_interface_variables(std::unordered_set<uint32_t> active_variables)
+ {
+-	active_interface_variables = move(active_variables);
++	active_interface_variables = std::move(active_variables);
+ 	check_active_interface_variables = true;
+ }
+ 
+@@ -2163,7 +2163,7 @@ void Compiler::CombinedImageSamplerHandler::push_remap_parameters(const SPIRFunc
+ 	unordered_map<uint32_t, uint32_t> remapping;
+ 	for (uint32_t i = 0; i < length; i++)
+ 		remapping[func.arguments[i].id] = remap_parameter(args[i]);
+-	parameter_remapping.push(move(remapping));
++	parameter_remapping.push(std::move(remapping));
+ }
+ 
+ void Compiler::CombinedImageSamplerHandler::pop_remap_parameters()
+@@ -3611,7 +3611,7 @@ void Compiler::analyze_image_and_sampler_usage()
+ 
+ 	CombinedImageSamplerUsageHandler handler(*this, dref_handler.dref_combined_samplers);
+ 	traverse_all_reachable_opcodes(get<SPIRFunction>(ir.default_entry_point), handler);
+-	comparison_ids = move(handler.comparison_ids);
++	comparison_ids = std::move(handler.comparison_ids);
+ 	need_subpass_input = handler.need_subpass_input;
+ 
+ 	// Forward information from separate images and samplers into combined image samplers.
+@@ -3650,7 +3650,7 @@ void Compiler::build_function_control_flow_graphs_and_analyze()
+ 	CFGBuilder handler(*this);
+ 	handler.function_cfgs[ir.default_entry_point].reset(new CFG(*this, get<SPIRFunction>(ir.default_entry_point)));
+ 	traverse_all_reachable_opcodes(get<SPIRFunction>(ir.default_entry_point), handler);
+-	function_cfgs = move(handler.function_cfgs);
++	function_cfgs = std::move(handler.function_cfgs);
+ 
+ 	for (auto &f : function_cfgs)
+ 	{
+diff --git a/spirv_glsl.cpp b/spirv_glsl.cpp
+index a8855987..fabbb105 100644
+--- a/spirv_glsl.cpp
++++ b/spirv_glsl.cpp
+@@ -6876,7 +6876,7 @@ void CompilerGLSL::emit_instruction(const Instruction &instruction)
+ 		bool ptr_chain = opcode == OpPtrAccessChain;
+ 		auto e = access_chain(ops[2], &ops[3], length - 3, get<SPIRType>(ops[0]), &meta, ptr_chain);
+ 
+-		auto &expr = set<SPIRExpression>(ops[1], move(e), ops[0], should_forward(ops[2]));
++		auto &expr = set<SPIRExpression>(ops[1], std::move(e), ops[0], should_forward(ops[2]));
+ 
+ 		auto *backing_variable = maybe_get_backing_variable(ops[2]);
+ 		expr.loaded_from = backing_variable ? backing_variable->self : ops[2];
+diff --git a/spirv_glsl.hpp b/spirv_glsl.hpp
+index caf0ad3b..aac1196e 100644
+--- a/spirv_glsl.hpp
++++ b/spirv_glsl.hpp
+@@ -138,7 +138,7 @@ public:
+ 	}
+ 
+ 	explicit CompilerGLSL(std::vector<uint32_t> spirv_)
+-	    : Compiler(move(spirv_))
++	    : Compiler(std::move(spirv_))
+ 	{
+ 		init();
+ 	}
+diff --git a/spirv_hlsl.cpp b/spirv_hlsl.cpp
+index a5b6d2dc..0933017e 100644
+--- a/spirv_hlsl.cpp
++++ b/spirv_hlsl.cpp
+@@ -2028,7 +2028,7 @@ void CompilerHLSL::emit_function_prototype(SPIRFunction &func, const Bitset &ret
+ 		out_argument += " ";
+ 		out_argument += "SPIRV_Cross_return_value";
+ 		out_argument += type_to_array_glsl(type);
+-		arglist.push_back(move(out_argument));
++		arglist.push_back(std::move(out_argument));
+ 	}
+ 
+ 	for (auto &arg : func.arguments)
+@@ -4553,7 +4553,7 @@ void CompilerHLSL::require_texture_query_variant(const SPIRType &type)
+ 
+ string CompilerHLSL::compile(std::vector<HLSLVertexAttributeRemap> vertex_attributes)
+ {
+-	remap_vertex_attributes = move(vertex_attributes);
++	remap_vertex_attributes = std::move(vertex_attributes);
+ 	return compile();
+ }
+ 
+diff --git a/spirv_hlsl.hpp b/spirv_hlsl.hpp
+index b2b60fca..3503c674 100644
+--- a/spirv_hlsl.hpp
++++ b/spirv_hlsl.hpp
+@@ -63,7 +63,7 @@ public:
+ 	};
+ 
+ 	explicit CompilerHLSL(std::vector<uint32_t> spirv_)
+-	    : CompilerGLSL(move(spirv_))
++	    : CompilerGLSL(std::move(spirv_))
+ 	{
+ 	}
+ 
+diff --git a/spirv_msl.cpp b/spirv_msl.cpp
+index 0e1e733d..fbfcdba5 100644
+--- a/spirv_msl.cpp
++++ b/spirv_msl.cpp
+@@ -32,7 +32,7 @@ static const uint32_t k_aux_mbr_idx_swizzle_const = 0u;
+ 
+ CompilerMSL::CompilerMSL(vector<uint32_t> spirv_, vector<MSLVertexAttr> *p_vtx_attrs,
+                          vector<MSLResourceBinding> *p_res_bindings)
+-    : CompilerGLSL(move(spirv_))
++    : CompilerGLSL(std::move(spirv_))
+ {
+ 	if (p_vtx_attrs)
+ 		for (auto &va : *p_vtx_attrs)
+diff --git a/spirv_parser.cpp b/spirv_parser.cpp
+index 1725b4ca..0fcd7691 100644
+--- a/spirv_parser.cpp
++++ b/spirv_parser.cpp
+@@ -24,7 +24,7 @@ namespace spirv_cross
+ {
+ Parser::Parser(std::vector<uint32_t> spirv)
+ {
+-	ir.spirv = move(spirv);
++	ir.spirv = std::move(spirv);
+ }
+ 
+ Parser::Parser(const uint32_t *spirv_data, size_t word_count)
+@@ -223,7 +223,7 @@ void Parser::parse(const Instruction &instruction)
+ 	case OpExtension:
+ 	{
+ 		auto ext = extract_string(ir.spirv, instruction.offset);
+-		ir.declared_extensions.push_back(move(ext));
++		ir.declared_extensions.push_back(std::move(ext));
+ 		break;
+ 	}
+ 
+diff --git a/spirv_reflect.hpp b/spirv_reflect.hpp
+index 13b5b431..f5c6a6ff 100644
+--- a/spirv_reflect.hpp
++++ b/spirv_reflect.hpp
+@@ -34,7 +34,7 @@ class CompilerReflection : public CompilerGLSL
+ 
+ public:
+ 	explicit CompilerReflection(std::vector<uint32_t> spirv_)
+-	    : Parent(move(spirv_))
++	    : Parent(std::move(spirv_))
+ 	{
+ 		options.vulkan_semantics = true;
+ 	}
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-graphics/vulkan/vulkan-samples/0001-framework-core-hpp_vulkan_resource.h-add-header-incl.patch b/poky/meta/recipes-graphics/vulkan/vulkan-samples/0001-framework-core-hpp_vulkan_resource.h-add-header-incl.patch
deleted file mode 100644
index c2bb43f..0000000
--- a/poky/meta/recipes-graphics/vulkan/vulkan-samples/0001-framework-core-hpp_vulkan_resource.h-add-header-incl.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-From 8c069a1c4452f626e5bafc547463507d86111319 Mon Sep 17 00:00:00 2001
-From: Alexander Kanavin <alex@linutronix.de>
-Date: Tue, 19 Jul 2022 16:55:40 +0200
-Subject: [PATCH] framework/core/hpp_vulkan_resource.h: add header include that
- defines std::exchange
-
-Upstream-Status: Submitted [https://github.com/KhronosGroup/Vulkan-Samples/pull/501]
-Signed-off-by: Alexander Kanavin <alex@linutronix.de>
----
- framework/core/hpp_vulkan_resource.h | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/framework/core/hpp_vulkan_resource.h b/framework/core/hpp_vulkan_resource.h
-index 0e30f5a..8d83019 100644
---- a/framework/core/hpp_vulkan_resource.h
-+++ b/framework/core/hpp_vulkan_resource.h
-@@ -18,6 +18,7 @@
- #pragma once
- 
- #include <vulkan/vulkan.hpp>
-+#include <utility>
- 
- namespace vkb
- {
--- 
-2.30.2
-
diff --git a/poky/meta/recipes-graphics/vulkan/vulkan-samples_git.bb b/poky/meta/recipes-graphics/vulkan/vulkan-samples_git.bb
index 41cb4a4..332411b 100644
--- a/poky/meta/recipes-graphics/vulkan/vulkan-samples_git.bb
+++ b/poky/meta/recipes-graphics/vulkan/vulkan-samples_git.bb
@@ -8,11 +8,11 @@
 SRC_URI = "gitsm://github.com/KhronosGroup/Vulkan-Samples.git;branch=master;protocol=https \
            file://0001-CMakeLists.txt-do-not-hardcode-lib-as-installation-t.patch \
            file://debugfix.patch \
-           file://0001-framework-core-hpp_vulkan_resource.h-add-header-incl.patch \
+           file://0001-Qualify-move-as-std-move.patch;patchdir=third_party/spirv-cross \
            "
 
 UPSTREAM_CHECK_COMMITS = "1"
-SRCREV = "ee6c1522af8ba9b56f8416c2eedafb440f681085"
+SRCREV = "74d45aace02d99d766126711a8aaa0978276ca00"
 
 UPSTREAM_CHECK_GITTAGREGEX = "These are not the releases you're looking for"
 S = "${WORKDIR}/git"
diff --git a/poky/meta/recipes-graphics/wayland/weston/dont-use-plane-add-prop.patch b/poky/meta/recipes-graphics/wayland/weston/dont-use-plane-add-prop.patch
deleted file mode 100644
index 1ac0695..0000000
--- a/poky/meta/recipes-graphics/wayland/weston/dont-use-plane-add-prop.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From ece4c3d261aeec230869c0304ed1011ff6837c16 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 12 Sep 2020 14:04:04 -0700
-Subject: [PATCH] Fix atomic modesetting with musl
-
-atomic modesetting seems to fail with drm weston backend and this patch fixes
-it, below errors are seen before weston exits
-
-atomic: couldn't commit new state: Invalid argument
-
-Upstream-Status: Submitted [https://gitlab.freedesktop.org/wayland/weston/-/issues/158]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
----
- libweston/backend-drm/kms.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/libweston/backend-drm/kms.c b/libweston/backend-drm/kms.c
-index 780d007..9994da1 100644
---- a/libweston/backend-drm/kms.c
-+++ b/libweston/backend-drm/kms.c
-@@ -1142,8 +1142,8 @@ drm_pending_state_apply_atomic(struct drm_pending_state *pending_state,
- 		wl_list_for_each(plane, &b->plane_list, link) {
- 			drm_debug(b, "\t\t[atomic] starting with plane %lu disabled\n",
- 				  (unsigned long) plane->plane_id);
--			plane_add_prop(req, plane, WDRM_PLANE_CRTC_ID, 0);
--			plane_add_prop(req, plane, WDRM_PLANE_FB_ID, 0);
-+			//plane_add_prop(req, plane, WDRM_PLANE_CRTC_ID, 0);
-+			//plane_add_prop(req, plane, WDRM_PLANE_FB_ID, 0);
- 		}
- 
- 		flags |= DRM_MODE_ATOMIC_ALLOW_MODESET;
diff --git a/poky/meta/recipes-graphics/wayland/weston_10.0.1.bb b/poky/meta/recipes-graphics/wayland/weston_10.0.1.bb
deleted file mode 100644
index e27dac1..0000000
--- a/poky/meta/recipes-graphics/wayland/weston_10.0.1.bb
+++ /dev/null
@@ -1,144 +0,0 @@
-SUMMARY = "Weston, a Wayland compositor"
-DESCRIPTION = "Weston is the reference implementation of a Wayland compositor"
-HOMEPAGE = "http://wayland.freedesktop.org"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d79ee9e66bb0f95d3386a7acae780b70 \
-                    file://libweston/compositor.c;endline=27;md5=eb6d5297798cabe2ddc65e2af519bcf0 \
-                    "
-
-SRC_URI = "https://gitlab.freedesktop.org/wayland/weston/-/releases/${PV}/downloads/${BPN}-${PV}.tar.xz \
-           file://weston.png \
-           file://weston.desktop \
-           file://xwayland.weston-start \
-           file://systemd-notify.weston-start \
-           "
-
-SRC_URI:append:libc-musl = " file://dont-use-plane-add-prop.patch "
-
-SRC_URI[sha256sum] = "8a9e52506a865a7410981b04f8341b89b84106db8531ab1f9fdd37b5dc034115"
-
-UPSTREAM_CHECK_URI = "https://wayland.freedesktop.org/releases.html"
-
-inherit meson pkgconfig useradd
-
-# depends on virtual/egl
-#
-require ${THISDIR}/required-distro-features.inc
-
-DEPENDS = "libxkbcommon gdk-pixbuf pixman cairo glib-2.0"
-DEPENDS += "wayland wayland-protocols libinput virtual/egl pango wayland-native"
-
-LDFLAGS += "${@bb.utils.contains('DISTRO_FEATURES', 'lto', '-Wl,-z,undefs', '', d)}"
-
-WESTON_MAJOR_VERSION = "${@'.'.join(d.getVar('PV').split('.')[0:1])}"
-
-EXTRA_OEMESON += "-Dpipewire=false"
-
-PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'wayland', 'kms wayland egl clients', '', d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11 wayland', 'xwayland', '', d)} \
-                   ${@bb.utils.filter('DISTRO_FEATURES', 'systemd x11', d)} \
-                   ${@bb.utils.contains_any('DISTRO_FEATURES', 'wayland x11', '', 'headless', d)} \
-                   ${@oe.utils.conditional('VIRTUAL-RUNTIME_init_manager', 'sysvinit', 'launcher-libseat', '', d)} \
-                   image-jpeg \
-                   screenshare \
-                   shell-desktop \
-                   shell-fullscreen \
-                   shell-ivi"
-
-# Can be 'damage', 'im', 'egl', 'shm', 'touch', 'dmabuf-feedback', 'dmabuf-v4l', 'dmabuf-egl' or 'all'
-SIMPLECLIENTS ?= "all"
-
-#
-# Compositor choices
-#
-# Weston on KMS
-PACKAGECONFIG[kms] = "-Dbackend-drm=true,-Dbackend-drm=false,drm udev virtual/egl virtual/libgles2 virtual/libgbm mtdev"
-# Weston on Wayland (nested Weston)
-PACKAGECONFIG[wayland] = "-Dbackend-wayland=true,-Dbackend-wayland=false,virtual/egl virtual/libgles2"
-# Weston on X11
-PACKAGECONFIG[x11] = "-Dbackend-x11=true,-Dbackend-x11=false,virtual/libx11 libxcb libxcb libxcursor cairo"
-# Headless Weston
-PACKAGECONFIG[headless] = "-Dbackend-headless=true,-Dbackend-headless=false"
-# Weston on framebuffer
-PACKAGECONFIG[fbdev] = "-Ddeprecated-backend-fbdev=true,-Ddeprecated-backend-fbdev=false,udev mtdev"
-# Weston on RDP
-PACKAGECONFIG[rdp] = "-Dbackend-rdp=true,-Dbackend-rdp=false,freerdp"
-# weston-launch
-PACKAGECONFIG[launch] = "-Ddeprecated-weston-launch=true,-Ddeprecated-weston-launch=false,drm"
-# VA-API desktop recorder
-PACKAGECONFIG[vaapi] = "-Dbackend-drm-screencast-vaapi=true,-Dbackend-drm-screencast-vaapi=false,libva"
-# Weston with EGL support
-PACKAGECONFIG[egl] = "-Drenderer-gl=true,-Drenderer-gl=false,virtual/egl"
-# Weston with lcms support
-PACKAGECONFIG[lcms] = "-Dcolor-management-lcms=true,-Dcolor-management-lcms=false,lcms"
-# Weston with webp support
-PACKAGECONFIG[webp] = "-Dimage-webp=true,-Dimage-webp=false,libwebp"
-# Weston with systemd-login support
-PACKAGECONFIG[systemd] = "-Dsystemd=true -Dlauncher-logind=true,-Dsystemd=false -Dlauncher-logind=false,systemd dbus"
-# Weston with Xwayland support (requires X11 and Wayland)
-PACKAGECONFIG[xwayland] = "-Dxwayland=true,-Dxwayland=false"
-# colord CMS support
-PACKAGECONFIG[colord] = "-Dcolor-management-colord=true,-Dcolor-management-colord=false,colord"
-# Clients support
-PACKAGECONFIG[clients] = "-Dsimple-clients=${SIMPLECLIENTS} -Ddemo-clients=true,-Dsimple-clients= -Ddemo-clients=false"
-# Virtual remote output with GStreamer on DRM backend
-PACKAGECONFIG[remoting] = "-Dremoting=true,-Dremoting=false,gstreamer1.0 gstreamer1.0-plugins-base"
-# Weston with screen-share support
-PACKAGECONFIG[screenshare] = "-Dscreenshare=true,-Dscreenshare=false"
-# Traditional desktop shell
-PACKAGECONFIG[shell-desktop] = "-Dshell-desktop=true,-Dshell-desktop=false"
-# Fullscreen shell
-PACKAGECONFIG[shell-fullscreen] = "-Dshell-fullscreen=true,-Dshell-fullscreen=false"
-# In-Vehicle Infotainment (IVI) shell
-PACKAGECONFIG[shell-ivi] = "-Dshell-ivi=true,-Dshell-ivi=false"
-# JPEG image loading support
-PACKAGECONFIG[image-jpeg] = "-Dimage-jpeg=true,-Dimage-jpeg=false, jpeg"
-# support libseat based launch
-PACKAGECONFIG[launcher-libseat] = "-Dlauncher-libseat=true,-Dlauncher-libseat=false,seatd"
-
-do_install:append() {
-	# Weston doesn't need the .la files to load modules, so wipe them
-	rm -f ${D}/${libdir}/libweston-${WESTON_MAJOR_VERSION}/*.la
-
-	# If X11, ship a desktop file to launch it
-	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}" ]; then
-		install -d ${D}${datadir}/applications
-		install ${WORKDIR}/weston.desktop ${D}${datadir}/applications
-
-		install -d ${D}${datadir}/icons/hicolor/48x48/apps
-		install ${WORKDIR}/weston.png ${D}${datadir}/icons/hicolor/48x48/apps
-	fi
-
-	if [ "${@bb.utils.contains('PACKAGECONFIG', 'xwayland', 'yes', 'no', d)}" = "yes" ]; then
-		install -Dm 644 ${WORKDIR}/xwayland.weston-start ${D}${datadir}/weston-start/xwayland
-	fi
-
-	if [ "${@bb.utils.contains('PACKAGECONFIG', 'systemd', 'yes', 'no', d)}" = "yes" ]; then
-		install -Dm 644 ${WORKDIR}/systemd-notify.weston-start ${D}${datadir}/weston-start/systemd-notify
-	fi
-
-	if [ "${@bb.utils.contains('PACKAGECONFIG', 'launch', 'yes', 'no', d)}" = "yes" ]; then
-		chmod u+s ${D}${bindir}/weston-launch
-	fi
-}
-
-PACKAGES += "${@bb.utils.contains('PACKAGECONFIG', 'xwayland', '${PN}-xwayland', '', d)} \
-             libweston-${WESTON_MAJOR_VERSION} ${PN}-examples"
-
-FILES:${PN}-dev += "${libdir}/${BPN}/libexec_weston.so"
-FILES:${PN} = "${bindir}/weston ${bindir}/weston-terminal ${bindir}/weston-info ${bindir}/weston-launch ${bindir}/wcap-decode ${libexecdir} ${libdir}/${BPN}/*.so* ${datadir}"
-
-FILES:libweston-${WESTON_MAJOR_VERSION} = "${libdir}/lib*${SOLIBS} ${libdir}/libweston-${WESTON_MAJOR_VERSION}/*.so"
-SUMMARY:libweston-${WESTON_MAJOR_VERSION} = "Helper library for implementing 'wayland window managers'."
-
-FILES:${PN}-examples = "${bindir}/*"
-
-FILES:${PN}-xwayland = "${libdir}/libweston-${WESTON_MAJOR_VERSION}/xwayland.so"
-RDEPENDS:${PN}-xwayland += "xwayland"
-
-RDEPENDS:${PN} += "xkeyboard-config"
-RRECOMMENDS:${PN} = "weston-init liberation-fonts"
-RRECOMMENDS:${PN}-dev += "wayland-protocols"
-
-USERADD_PACKAGES = "${PN}"
-GROUPADD_PARAM:${PN} = "--system weston-launch"
diff --git a/poky/meta/recipes-graphics/wayland/weston_10.0.2.bb b/poky/meta/recipes-graphics/wayland/weston_10.0.2.bb
new file mode 100644
index 0000000..786d12b
--- /dev/null
+++ b/poky/meta/recipes-graphics/wayland/weston_10.0.2.bb
@@ -0,0 +1,143 @@
+SUMMARY = "Weston, a Wayland compositor"
+DESCRIPTION = "Weston is the reference implementation of a Wayland compositor"
+HOMEPAGE = "http://wayland.freedesktop.org"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d79ee9e66bb0f95d3386a7acae780b70 \
+                    file://libweston/compositor.c;endline=27;md5=eb6d5297798cabe2ddc65e2af519bcf0 \
+                    "
+
+SRC_URI = "https://gitlab.freedesktop.org/wayland/weston/-/releases/${PV}/downloads/${BPN}-${PV}.tar.xz \
+           file://weston.png \
+           file://weston.desktop \
+           file://xwayland.weston-start \
+           file://systemd-notify.weston-start \
+           "
+
+SRC_URI[sha256sum] = "89646ca0d9f8d413c2767e5c3828eaa3fa149c2a105b3729a6894fa7cf1549e7"
+
+UPSTREAM_CHECK_URI = "https://wayland.freedesktop.org/releases.html"
+UPSTREAM_CHECK_REGEX = "weston-(?P<pver>\d+\.\d+\.(?!9\d+)\d+)"
+
+inherit meson pkgconfig useradd
+
+# depends on virtual/egl
+#
+require ${THISDIR}/required-distro-features.inc
+
+DEPENDS = "libxkbcommon gdk-pixbuf pixman cairo glib-2.0"
+DEPENDS += "wayland wayland-protocols libinput virtual/egl pango wayland-native"
+
+LDFLAGS += "${@bb.utils.contains('DISTRO_FEATURES', 'lto', '-Wl,-z,undefs', '', d)}"
+
+WESTON_MAJOR_VERSION = "${@'.'.join(d.getVar('PV').split('.')[0:1])}"
+
+EXTRA_OEMESON += "-Dpipewire=false"
+
+PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'wayland', 'kms wayland egl clients', '', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11 wayland', 'xwayland', '', d)} \
+                   ${@bb.utils.filter('DISTRO_FEATURES', 'systemd x11', d)} \
+                   ${@bb.utils.contains_any('DISTRO_FEATURES', 'wayland x11', '', 'headless', d)} \
+                   ${@oe.utils.conditional('VIRTUAL-RUNTIME_init_manager', 'sysvinit', 'launcher-libseat', '', d)} \
+                   image-jpeg \
+                   screenshare \
+                   shell-desktop \
+                   shell-fullscreen \
+                   shell-ivi"
+
+# Can be 'damage', 'im', 'egl', 'shm', 'touch', 'dmabuf-feedback', 'dmabuf-v4l', 'dmabuf-egl' or 'all'
+SIMPLECLIENTS ?= "all"
+
+#
+# Compositor choices
+#
+# Weston on KMS
+PACKAGECONFIG[kms] = "-Dbackend-drm=true,-Dbackend-drm=false,drm udev virtual/egl virtual/libgles2 virtual/libgbm mtdev"
+# Weston on Wayland (nested Weston)
+PACKAGECONFIG[wayland] = "-Dbackend-wayland=true,-Dbackend-wayland=false,virtual/egl virtual/libgles2"
+# Weston on X11
+PACKAGECONFIG[x11] = "-Dbackend-x11=true,-Dbackend-x11=false,virtual/libx11 libxcb libxcb libxcursor cairo"
+# Headless Weston
+PACKAGECONFIG[headless] = "-Dbackend-headless=true,-Dbackend-headless=false"
+# Weston on framebuffer
+PACKAGECONFIG[fbdev] = "-Ddeprecated-backend-fbdev=true,-Ddeprecated-backend-fbdev=false,udev mtdev"
+# Weston on RDP
+PACKAGECONFIG[rdp] = "-Dbackend-rdp=true,-Dbackend-rdp=false,freerdp"
+# weston-launch
+PACKAGECONFIG[launch] = "-Ddeprecated-weston-launch=true,-Ddeprecated-weston-launch=false,drm"
+# VA-API desktop recorder
+PACKAGECONFIG[vaapi] = "-Dbackend-drm-screencast-vaapi=true,-Dbackend-drm-screencast-vaapi=false,libva"
+# Weston with EGL support
+PACKAGECONFIG[egl] = "-Drenderer-gl=true,-Drenderer-gl=false,virtual/egl"
+# Weston with lcms support
+PACKAGECONFIG[lcms] = "-Dcolor-management-lcms=true,-Dcolor-management-lcms=false,lcms"
+# Weston with webp support
+PACKAGECONFIG[webp] = "-Dimage-webp=true,-Dimage-webp=false,libwebp"
+# Weston with systemd-login support
+PACKAGECONFIG[systemd] = "-Dsystemd=true -Dlauncher-logind=true,-Dsystemd=false -Dlauncher-logind=false,systemd dbus"
+# Weston with Xwayland support (requires X11 and Wayland)
+PACKAGECONFIG[xwayland] = "-Dxwayland=true,-Dxwayland=false"
+# colord CMS support
+PACKAGECONFIG[colord] = "-Dcolor-management-colord=true,-Dcolor-management-colord=false,colord"
+# Clients support
+PACKAGECONFIG[clients] = "-Dsimple-clients=${SIMPLECLIENTS} -Ddemo-clients=true,-Dsimple-clients= -Ddemo-clients=false"
+# Virtual remote output with GStreamer on DRM backend
+PACKAGECONFIG[remoting] = "-Dremoting=true,-Dremoting=false,gstreamer1.0 gstreamer1.0-plugins-base"
+# Weston with screen-share support
+PACKAGECONFIG[screenshare] = "-Dscreenshare=true,-Dscreenshare=false"
+# Traditional desktop shell
+PACKAGECONFIG[shell-desktop] = "-Dshell-desktop=true,-Dshell-desktop=false"
+# Fullscreen shell
+PACKAGECONFIG[shell-fullscreen] = "-Dshell-fullscreen=true,-Dshell-fullscreen=false"
+# In-Vehicle Infotainment (IVI) shell
+PACKAGECONFIG[shell-ivi] = "-Dshell-ivi=true,-Dshell-ivi=false"
+# JPEG image loading support
+PACKAGECONFIG[image-jpeg] = "-Dimage-jpeg=true,-Dimage-jpeg=false, jpeg"
+# support libseat based launch
+PACKAGECONFIG[launcher-libseat] = "-Dlauncher-libseat=true,-Dlauncher-libseat=false,seatd"
+
+do_install:append() {
+	# Weston doesn't need the .la files to load modules, so wipe them
+	rm -f ${D}/${libdir}/libweston-${WESTON_MAJOR_VERSION}/*.la
+
+	# If X11, ship a desktop file to launch it
+	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}" ]; then
+		install -d ${D}${datadir}/applications
+		install ${WORKDIR}/weston.desktop ${D}${datadir}/applications
+
+		install -d ${D}${datadir}/icons/hicolor/48x48/apps
+		install ${WORKDIR}/weston.png ${D}${datadir}/icons/hicolor/48x48/apps
+	fi
+
+	if [ "${@bb.utils.contains('PACKAGECONFIG', 'xwayland', 'yes', 'no', d)}" = "yes" ]; then
+		install -Dm 644 ${WORKDIR}/xwayland.weston-start ${D}${datadir}/weston-start/xwayland
+	fi
+
+	if [ "${@bb.utils.contains('PACKAGECONFIG', 'systemd', 'yes', 'no', d)}" = "yes" ]; then
+		install -Dm 644 ${WORKDIR}/systemd-notify.weston-start ${D}${datadir}/weston-start/systemd-notify
+	fi
+
+	if [ "${@bb.utils.contains('PACKAGECONFIG', 'launch', 'yes', 'no', d)}" = "yes" ]; then
+		chmod u+s ${D}${bindir}/weston-launch
+	fi
+}
+
+PACKAGES += "${@bb.utils.contains('PACKAGECONFIG', 'xwayland', '${PN}-xwayland', '', d)} \
+             libweston-${WESTON_MAJOR_VERSION} ${PN}-examples"
+
+FILES:${PN}-dev += "${libdir}/${BPN}/libexec_weston.so"
+FILES:${PN} = "${bindir}/weston ${bindir}/weston-terminal ${bindir}/weston-info ${bindir}/weston-launch ${bindir}/wcap-decode ${libexecdir} ${libdir}/${BPN}/*.so* ${datadir}"
+
+FILES:libweston-${WESTON_MAJOR_VERSION} = "${libdir}/lib*${SOLIBS} ${libdir}/libweston-${WESTON_MAJOR_VERSION}/*.so"
+SUMMARY:libweston-${WESTON_MAJOR_VERSION} = "Helper library for implementing 'wayland window managers'."
+
+FILES:${PN}-examples = "${bindir}/*"
+
+FILES:${PN}-xwayland = "${libdir}/libweston-${WESTON_MAJOR_VERSION}/xwayland.so"
+RDEPENDS:${PN}-xwayland += "xwayland"
+
+RDEPENDS:${PN} += "xkeyboard-config"
+RRECOMMENDS:${PN} = "weston-init liberation-fonts"
+RRECOMMENDS:${PN}-dev += "wayland-protocols"
+
+USERADD_PACKAGES = "${PN}"
+GROUPADD_PARAM:${PN} = "--system weston-launch"
diff --git a/poky/meta/recipes-graphics/xorg-lib/libxcvt_0.1.1.bb b/poky/meta/recipes-graphics/xorg-lib/libxcvt_0.1.1.bb
deleted file mode 100644
index 134c40a..0000000
--- a/poky/meta/recipes-graphics/xorg-lib/libxcvt_0.1.1.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-SUMMARY = "Library providing a standalone version of the X server \
-implementation of the VESA CVT standard timing modelines generator"
-HOMEPAGE = "https://gitlab.freedesktop.org/xorg/lib/libxcvt"
-BUGTRACKER = "https://gitlab.freedesktop.org/xorg/lib/libxcvt/-/issues"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=129947a06984d6faa6f9a9788fa2a03f"
-SECTION = "x11/libs"
-
-SRC_URI = "git://gitlab.freedesktop.org/xorg/lib/libxcvt.git;protocol=https;branch=master"
-SRCREV = "6fe840b9295cfdc41bd734586c5b8756f6af6f9b"
-
-S = "${WORKDIR}/git"
-
-inherit meson
-
-FILES:${PN} = " \
-    ${libdir}/libxcvt.so.0* \
-    ${bindir}/cvt \
-"
diff --git a/poky/meta/recipes-graphics/xorg-lib/libxcvt_0.1.2.bb b/poky/meta/recipes-graphics/xorg-lib/libxcvt_0.1.2.bb
new file mode 100644
index 0000000..e62fabd
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-lib/libxcvt_0.1.2.bb
@@ -0,0 +1,19 @@
+SUMMARY = "Library providing a standalone version of the X server \
+implementation of the VESA CVT standard timing modelines generator"
+HOMEPAGE = "https://gitlab.freedesktop.org/xorg/lib/libxcvt"
+BUGTRACKER = "https://gitlab.freedesktop.org/xorg/lib/libxcvt/-/issues"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=129947a06984d6faa6f9a9788fa2a03f"
+SECTION = "x11/libs"
+
+SRC_URI = "git://gitlab.freedesktop.org/xorg/lib/libxcvt.git;protocol=https;branch=master"
+SRCREV = "d9ca87eea9eecddaccc3a77227bcb3acf84e89df"
+
+S = "${WORKDIR}/git"
+
+inherit meson
+
+FILES:${PN} = " \
+    ${libdir}/libxcvt.so.0* \
+    ${bindir}/cvt \
+"
diff --git a/poky/meta/recipes-graphics/xorg-proto/xorgproto_2022.1.bb b/poky/meta/recipes-graphics/xorg-proto/xorgproto_2022.1.bb
deleted file mode 100644
index a1e852b..0000000
--- a/poky/meta/recipes-graphics/xorg-proto/xorgproto_2022.1.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-SUMMARY = "X Window System unified protocol definitions"
-DESCRIPTION = "This package provides the headers and specification documents defining \
-the core protocol and (many) extensions for the X Window System"
-HOMEPAGE = "http://www.x.org"
-BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=xorg"
-
-SECTION = "x11/libs"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING-x11proto;md5=dfc4bd2b0568b31725b85b0604e69b56"
-
-SRC_URI = "${XORG_MIRROR}/individual/proto/${BP}.tar.bz2"
-SRC_URI[sha256sum] = "1d2dcc66963f234d2c1e1f8d98a0d3e8725149cdac0a263df4097593c48bc2a6"
-
-inherit meson
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[legacy] = "-Dlegacy=true,-Dlegacy=false"
-
-# Datadir only used to install pc files, $datadir/pkgconfig
-datadir="${libdir}"
-# ${PN} is empty so we need to tweak -dev and -dbg package dependencies
-DEV_PKG_DEPENDENCY = ""
-RRECOMMENDS:${PN}-dbg = "${PN}-dev (= ${EXTENDPKGV})"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-graphics/xorg-proto/xorgproto_2022.2.bb b/poky/meta/recipes-graphics/xorg-proto/xorgproto_2022.2.bb
new file mode 100644
index 0000000..a1cd66c
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-proto/xorgproto_2022.2.bb
@@ -0,0 +1,25 @@
+SUMMARY = "X Window System unified protocol definitions"
+DESCRIPTION = "This package provides the headers and specification documents defining \
+the core protocol and (many) extensions for the X Window System"
+HOMEPAGE = "http://www.x.org"
+BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=xorg"
+
+SECTION = "x11/libs"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING-x11proto;md5=dfc4bd2b0568b31725b85b0604e69b56"
+
+SRC_URI = "${XORG_MIRROR}/individual/proto/${BP}.tar.xz"
+SRC_URI[sha256sum] = "5d13dbf2be08f95323985de53352c4f352713860457b95ccaf894a647ac06b9e"
+
+inherit meson
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[legacy] = "-Dlegacy=true,-Dlegacy=false"
+
+# Datadir only used to install pc files, $datadir/pkgconfig
+datadir="${libdir}"
+# ${PN} is empty so we need to tweak -dev and -dbg package dependencies
+DEV_PKG_DEPENDENCY = ""
+RRECOMMENDS:${PN}-dbg = "${PN}-dev (= ${EXTENDPKGV})"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-kernel/kexec/kexec-tools_2.0.24.bb b/poky/meta/recipes-kernel/kexec/kexec-tools_2.0.24.bb
deleted file mode 100644
index fdad40e..0000000
--- a/poky/meta/recipes-kernel/kexec/kexec-tools_2.0.24.bb
+++ /dev/null
@@ -1,86 +0,0 @@
-
-SUMMARY = "Kexec fast reboot tools"
-DESCRIPTION = "Kexec is a fast reboot feature that lets you reboot to a new Linux kernel"
-AUTHOR = "Eric Biederman"
-HOMEPAGE = "http://kernel.org/pub/linux/utils/kernel/kexec/"
-SECTION = "kernel/userland"
-LICENSE = "GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=ea5bed2f60d357618ca161ad539f7c0a \
-                    file://kexec/kexec.c;beginline=1;endline=20;md5=af10f6ae4a8715965e648aa687ad3e09"
-DEPENDS = "zlib xz"
-
-SRC_URI = "${KERNELORG_MIRROR}/linux/utils/kernel/kexec/kexec-tools-${PV}.tar.gz \
-           file://kdump \
-           file://kdump.conf \
-           file://kdump.service \
-           file://0001-powerpc-change-the-memory-size-limit.patch \
-           file://0002-purgatory-Pass-r-directly-to-linker.patch \
-           file://0003-kexec-ARM-Fix-add_buffer_phys_virt-align-issue.patch \
-           file://0005-Disable-PIE-during-link.patch \
-           file://0001-arm64-kexec-disabled-check-if-kaslr-seed-dtb-propert.patch \
-           "
-
-SRC_URI[sha256sum] = "1ff9137327aeac3b2ab922a71bc6eb4655571a0ff77c071cb92783a9a59d4d26"
-
-inherit autotools update-rc.d systemd
-
-export LDFLAGS = "-L${STAGING_LIBDIR}"
-EXTRA_OECONF = " --with-zlib=yes"
-
-do_compile:prepend() {
-    # Remove the prepackaged config.h from the source tree as it overrides
-    # the same file generated by configure and placed in the build tree
-    rm -f ${S}/include/config.h
-
-    # Remove the '*.d' file to make sure the recompile is OK
-    for dep in `find ${B} -type f -name '*.d'`; do
-        dep_no_d="`echo $dep | sed 's#.d$##'`"
-        # Remove file.d when there is a file.o
-        if [ -f "$dep_no_d.o" ]; then
-            rm -f $dep
-        fi
-    done
-}
-
-do_install:append () {
-        install -d ${D}${sysconfdir}/sysconfig
-        install -m 0644 ${WORKDIR}/kdump.conf ${D}${sysconfdir}/sysconfig
-
-        if ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'true', 'false', d)}; then
-                install -D -m 0755 ${WORKDIR}/kdump ${D}${sysconfdir}/init.d/kdump
-        fi
-
-        if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
-                install -D -m 0755 ${WORKDIR}/kdump ${D}${libexecdir}/kdump-helper
-                install -D -m 0644 ${WORKDIR}/kdump.service ${D}${systemd_system_unitdir}/kdump.service
-                sed -i -e 's,@LIBEXECDIR@,${libexecdir},g' ${D}${systemd_system_unitdir}/kdump.service
-        fi
-}
-
-PACKAGES =+ "kexec kdump vmcore-dmesg"
-
-ALLOW_EMPTY:${PN} = "1"
-RRECOMMENDS:${PN} = "kexec kdump vmcore-dmesg"
-
-FILES:kexec = "${sbindir}/kexec"
-FILES:kdump = "${sbindir}/kdump \
-               ${sysconfdir}/sysconfig/kdump.conf \
-               ${sysconfdir}/init.d/kdump \
-               ${libexecdir}/kdump-helper \
-               ${systemd_system_unitdir}/kdump.service \
-"
-
-FILES:vmcore-dmesg = "${sbindir}/vmcore-dmesg"
-
-INITSCRIPT_PACKAGES = "kdump"
-INITSCRIPT_NAME:kdump = "kdump"
-INITSCRIPT_PARAMS:kdump = "start 56 2 3 4 5 . stop 56 0 1 6 ."
-
-SYSTEMD_PACKAGES = "kdump"
-SYSTEMD_SERVICE:kdump = "kdump.service"
-
-SECURITY_PIE_CFLAGS:remove = "-fPIE -pie"
-
-COMPATIBLE_HOST = '(x86_64.*|i.86.*|arm.*|aarch64.*|powerpc.*|mips.*)-(linux|freebsd.*)'
-
-INSANE_SKIP:${PN} = "arch"
diff --git a/poky/meta/recipes-kernel/kexec/kexec-tools_2.0.25.bb b/poky/meta/recipes-kernel/kexec/kexec-tools_2.0.25.bb
new file mode 100644
index 0000000..9784404
--- /dev/null
+++ b/poky/meta/recipes-kernel/kexec/kexec-tools_2.0.25.bb
@@ -0,0 +1,86 @@
+
+SUMMARY = "Kexec fast reboot tools"
+DESCRIPTION = "Kexec is a fast reboot feature that lets you reboot to a new Linux kernel"
+AUTHOR = "Eric Biederman"
+HOMEPAGE = "http://kernel.org/pub/linux/utils/kernel/kexec/"
+SECTION = "kernel/userland"
+LICENSE = "GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=ea5bed2f60d357618ca161ad539f7c0a \
+                    file://kexec/kexec.c;beginline=1;endline=20;md5=af10f6ae4a8715965e648aa687ad3e09"
+DEPENDS = "zlib xz"
+
+SRC_URI = "${KERNELORG_MIRROR}/linux/utils/kernel/kexec/kexec-tools-${PV}.tar.gz \
+           file://kdump \
+           file://kdump.conf \
+           file://kdump.service \
+           file://0001-powerpc-change-the-memory-size-limit.patch \
+           file://0002-purgatory-Pass-r-directly-to-linker.patch \
+           file://0003-kexec-ARM-Fix-add_buffer_phys_virt-align-issue.patch \
+           file://0005-Disable-PIE-during-link.patch \
+           file://0001-arm64-kexec-disabled-check-if-kaslr-seed-dtb-propert.patch \
+           "
+
+SRC_URI[sha256sum] = "4eea6f8641ea3e349653a52c7fd33be1132322a5f3dc9f1a6ade886019767320"
+
+inherit autotools update-rc.d systemd
+
+export LDFLAGS = "-L${STAGING_LIBDIR}"
+EXTRA_OECONF = " --with-zlib=yes"
+
+do_compile:prepend() {
+    # Remove the prepackaged config.h from the source tree as it overrides
+    # the same file generated by configure and placed in the build tree
+    rm -f ${S}/include/config.h
+
+    # Remove the '*.d' file to make sure the recompile is OK
+    for dep in `find ${B} -type f -name '*.d'`; do
+        dep_no_d="`echo $dep | sed 's#.d$##'`"
+        # Remove file.d when there is a file.o
+        if [ -f "$dep_no_d.o" ]; then
+            rm -f $dep
+        fi
+    done
+}
+
+do_install:append () {
+        install -d ${D}${sysconfdir}/sysconfig
+        install -m 0644 ${WORKDIR}/kdump.conf ${D}${sysconfdir}/sysconfig
+
+        if ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'true', 'false', d)}; then
+                install -D -m 0755 ${WORKDIR}/kdump ${D}${sysconfdir}/init.d/kdump
+        fi
+
+        if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
+                install -D -m 0755 ${WORKDIR}/kdump ${D}${libexecdir}/kdump-helper
+                install -D -m 0644 ${WORKDIR}/kdump.service ${D}${systemd_system_unitdir}/kdump.service
+                sed -i -e 's,@LIBEXECDIR@,${libexecdir},g' ${D}${systemd_system_unitdir}/kdump.service
+        fi
+}
+
+PACKAGES =+ "kexec kdump vmcore-dmesg"
+
+ALLOW_EMPTY:${PN} = "1"
+RRECOMMENDS:${PN} = "kexec kdump vmcore-dmesg"
+
+FILES:kexec = "${sbindir}/kexec"
+FILES:kdump = "${sbindir}/kdump \
+               ${sysconfdir}/sysconfig/kdump.conf \
+               ${sysconfdir}/init.d/kdump \
+               ${libexecdir}/kdump-helper \
+               ${systemd_system_unitdir}/kdump.service \
+"
+
+FILES:vmcore-dmesg = "${sbindir}/vmcore-dmesg"
+
+INITSCRIPT_PACKAGES = "kdump"
+INITSCRIPT_NAME:kdump = "kdump"
+INITSCRIPT_PARAMS:kdump = "start 56 2 3 4 5 . stop 56 0 1 6 ."
+
+SYSTEMD_PACKAGES = "kdump"
+SYSTEMD_SERVICE:kdump = "kdump.service"
+
+SECURITY_PIE_CFLAGS:remove = "-fPIE -pie"
+
+COMPATIBLE_HOST = '(x86_64.*|i.86.*|arm.*|aarch64.*|powerpc.*|mips.*)-(linux|freebsd.*)'
+
+INSANE_SKIP:${PN} = "arch"
diff --git a/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_5.16.bb b/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_5.16.bb
deleted file mode 100644
index c64629d..0000000
--- a/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_5.16.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-require linux-libc-headers.inc
-
-SRC_URI:append:libc-musl = "\
-    file://0001-libc-compat.h-fix-some-issues-arising-from-in6.h.patch \
-    file://0003-remove-inclusion-of-sysinfo.h-in-kernel.h.patch \
-    file://0001-libc-compat.h-musl-_does_-define-IFF_LOWER_UP-DORMAN.patch \
-    file://0001-include-linux-stddef.h-in-swab.h-uapi-header.patch \
-   "
-
-SRC_URI:append = "\
-    file://0001-scripts-Use-fixed-input-and-output-files-instead-of-.patch \
-    file://0001-kbuild-install_headers.sh-Strip-_UAPI-from-if-define.patch \
-"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
-
-SRC_URI[md5sum] = "e6680ce7c989a3efe58b51e3f3f0bf93"
-SRC_URI[sha256sum] = "027d7e8988bb69ac12ee92406c3be1fe13f990b1ca2249e226225cd1573308bb"
-
-
diff --git a/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_5.19.bb b/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_5.19.bb
new file mode 100644
index 0000000..528e1d3
--- /dev/null
+++ b/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_5.19.bb
@@ -0,0 +1,20 @@
+require linux-libc-headers.inc
+
+SRC_URI:append:libc-musl = "\
+    file://0001-libc-compat.h-fix-some-issues-arising-from-in6.h.patch \
+    file://0003-remove-inclusion-of-sysinfo.h-in-kernel.h.patch \
+    file://0001-libc-compat.h-musl-_does_-define-IFF_LOWER_UP-DORMAN.patch \
+    file://0001-include-linux-stddef.h-in-swab.h-uapi-header.patch \
+   "
+
+SRC_URI:append = "\
+    file://0001-scripts-Use-fixed-input-and-output-files-instead-of-.patch \
+    file://0001-kbuild-install_headers.sh-Strip-_UAPI-from-if-define.patch \
+"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
+
+SRC_URI[md5sum] = "f91bfe133d2cb1692f705947282e123a"
+SRC_URI[sha256sum] = "ff240c579b9ee1affc318917de07394fc1c3bb49dac25ec1287370c2e15005a8"
+
+
diff --git a/poky/meta/recipes-kernel/linux/kernel-devsrc.bb b/poky/meta/recipes-kernel/linux/kernel-devsrc.bb
index f8f7171..46d706b 100644
--- a/poky/meta/recipes-kernel/linux/kernel-devsrc.bb
+++ b/poky/meta/recipes-kernel/linux/kernel-devsrc.bb
@@ -119,6 +119,8 @@
 	if [ "${ARCH}" = "powerpc" ]; then
 	    cp -a --parents arch/powerpc/kernel/vdso32/vdso32.lds $kerneldir/build 2>/dev/null || :
 	    cp -a --parents arch/powerpc/kernel/vdso64/vdso64.lds $kerneldir/build 2>/dev/null || :
+	    # v5.19+
+	    cp -a --parents arch/powerpc/kernel/vdso/vdso*.lds $kerneldir/build 2>/dev/null || :
 	fi
 
 	cp -a include $kerneldir/build/include
@@ -180,9 +182,16 @@
             cp -a --parents arch/arm64/tools/gen-cpucaps.awk $kerneldir/build/ 2>/dev/null || :
             cp -a --parents arch/arm64/tools/cpucaps $kerneldir/build/ 2>/dev/null || :
 
+            # 5.19+
+            cp -a --parents arch/arm64/tools/gen-sysreg.awk $kerneldir/build/   2>/dev/null || :
+            cp -a --parents arch/arm64/tools/sysreg $kerneldir/build/   2>/dev/null || :
+
             if [ -e $kerneldir/build/arch/arm64/tools/gen-cpucaps.awk ]; then
                  sed -i -e "s,#!.*awk.*,#!${USRBINPATH}/env awk," $kerneldir/build/arch/arm64/tools/gen-cpucaps.awk
             fi
+            if [ -e $kerneldir/build/arch/arm64/tools/gen-sysreg.awk ]; then
+                 sed -i -e "s,#!.*awk.*,#!${USRBINPATH}/env awk," $kerneldir/build/arch/arm64/tools/gen-sysreg.awk
+            fi
 	fi
 
 	if [ "${ARCH}" = "powerpc" ]; then
@@ -192,6 +201,11 @@
 	    cp -a --parents arch/${ARCH}/kernel/syscalls/syscallhdr.sh $kerneldir/build/ 2>/dev/null || :
 	    cp -a --parents arch/${ARCH}/kernel/vdso32/* $kerneldir/build/ 2>/dev/null || :
 	    cp -a --parents arch/${ARCH}/kernel/vdso64/* $kerneldir/build/ 2>/dev/null || :
+
+	    # v5.19+
+	    cp -a --parents arch/powerpc/kernel/vdso/*.S $kerneldir/build 2>/dev/null || :
+	    cp -a --parents arch/powerpc/kernel/vdso/*gettimeofday.* $kerneldir/build 2>/dev/null || :
+	    cp -a --parents arch/powerpc/kernel/vdso/gen_vdso*_offsets.sh $kerneldir/build/ 2>/dev/null || :
 	fi
 	if [ "${ARCH}" = "riscv" ]; then
             cp -a --parents arch/riscv/kernel/vdso/*gettimeofday.* $kerneldir/build/
@@ -210,6 +224,9 @@
 	    cp -a --parents arch/arm/tools/gen-mach-types $kerneldir/build/
 	    cp -a --parents arch/arm/tools/mach-types $kerneldir/build/
 
+	    # 5.19+
+	    cp -a --parents arch/arm/tools/gen-sysreg.awk $kerneldir/build/	2>/dev/null || :
+
 	    # ARM syscall table tools only exist for kernels v4.10 or later
             SYSCALL_TOOLS=$(find arch/arm/tools -name "syscall*")
             if [ -n "$SYSCALL_TOOLS" ] ; then
@@ -291,9 +308,6 @@
     # external modules can be built
     touch -r $kerneldir/build/Makefile $kerneldir/build/include/generated/uapi/linux/version.h
 
-    # Copy .config to include/config/auto.conf so "make prepare" is unnecessary.
-    cp $kerneldir/build/.config $kerneldir/build/include/config/auto.conf
-
     # make sure these are at least as old as the .config, or rebuilds will trigger
     touch -r $kerneldir/build/.config $kerneldir/build/include/generated/autoconf.h 2>/dev/null || :
     touch -r $kerneldir/build/.config $kerneldir/build/include/config/auto.conf* 2>/dev/null || :
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-dev.bb b/poky/meta/recipes-kernel/linux/linux-yocto-dev.bb
index 5a420b7..b1b57be 100644
--- a/poky/meta/recipes-kernel/linux/linux-yocto-dev.bb
+++ b/poky/meta/recipes-kernel/linux/linux-yocto-dev.bb
@@ -50,7 +50,7 @@
 # we need the wrappers if validation isn't in the packageconfig
 DEPENDS += "${@bb.utils.contains('PACKAGECONFIG', 'dt-validation', '', 'python3-dtschema-wrapper-native', d)}"
 
-COMPATIBLE_MACHINE = "(qemuarm|qemux86|qemuppc|qemumips|qemumips64|qemux86-64|qemuriscv32|qemuriscv64)"
+COMPATIBLE_MACHINE = "^(qemuarm|qemux86|qemuppc|qemumips|qemumips64|qemux86-64|qemuriscv32|qemuriscv64)$"
 
 KERNEL_DEVICETREE:qemuarmv5 = "versatile-pb.dtb"
 
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.10.bb b/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.10.bb
deleted file mode 100644
index 53ccd41..0000000
--- a/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.10.bb
+++ /dev/null
@@ -1,45 +0,0 @@
-KBRANCH ?= "v5.10/standard/preempt-rt/base"
-
-require recipes-kernel/linux/linux-yocto.inc
-
-# Skip processing of this recipe if it is not explicitly specified as the
-# PREFERRED_PROVIDER for virtual/kernel. This avoids errors when trying
-# to build multiple virtual/kernel providers, e.g. as dependency of
-# core-image-rt-sdk, core-image-rt.
-python () {
-    if d.getVar("KERNEL_PACKAGE_NAME") == "kernel" and d.getVar("PREFERRED_PROVIDER_virtual/kernel") != "linux-yocto-rt":
-        raise bb.parse.SkipRecipe("Set PREFERRED_PROVIDER_virtual/kernel to linux-yocto-rt to enable it")
-}
-
-SRCREV_machine ?= "63771123b1eea439bea2cf80f9f5682667528d9f"
-SRCREV_meta ?= "96ea2660bb97e15f48f4885b9e436f24c3606bd9"
-
-SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;branch=${KBRANCH};name=machine \
-           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.10;destsuffix=${KMETA}"
-
-LINUX_VERSION ?= "5.10.130"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
-
-DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
-DEPENDS += "openssl-native util-linux-native"
-
-PV = "${LINUX_VERSION}+git${SRCPV}"
-
-KMETA = "kernel-meta"
-KCONF_BSP_AUDIT_LEVEL = "1"
-
-LINUX_KERNEL_TYPE = "preempt-rt"
-
-COMPATIBLE_MACHINE = "(qemux86|qemux86-64|qemuarm|qemuarmv5|qemuarm64|qemuppc|qemumips)"
-
-KERNEL_DEVICETREE:qemuarmv5 = "versatile-pb.dtb"
-
-# Functionality flags
-KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc features/taskstats/taskstats.scc"
-KERNEL_FEATURES:append = " ${KERNEL_EXTRA_FEATURES}"
-KERNEL_FEATURES:append:qemuall=" cfg/virtio.scc features/drm-bochs/drm-bochs.scc"
-KERNEL_FEATURES:append:qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
-KERNEL_FEATURES:append:qemux86-64=" cfg/sound.scc cfg/paravirt_kvm.scc"
-KERNEL_FEATURES:append = "${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/scsi/scsi-debug.scc", "", d)}"
-KERNEL_FEATURES:append = "${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/gpio/mockup.scc", "", d)}"
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.15.bb b/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.15.bb
index b6e443d..9e37494 100644
--- a/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.15.bb
+++ b/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.15.bb
@@ -11,13 +11,13 @@
         raise bb.parse.SkipRecipe("Set PREFERRED_PROVIDER_virtual/kernel to linux-yocto-rt to enable it")
 }
 
-SRCREV_machine ?= "0222cbb8d40318cf5377875017e32eebefa59ab8"
-SRCREV_meta ?= "0e3a81a5aefbea03388b1235fbcc3dec278425d0"
+SRCREV_machine ?= "cb561ee4438e5961e5c471eee8094737ca873135"
+SRCREV_meta ?= "59c8898d450152a0875af340e6f0e72d05aafdfa"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;branch=${KBRANCH};name=machine \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.15;destsuffix=${KMETA}"
 
-LINUX_VERSION ?= "5.15.54"
+LINUX_VERSION ?= "5.15.62"
 
 LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
 
@@ -31,7 +31,7 @@
 
 LINUX_KERNEL_TYPE = "preempt-rt"
 
-COMPATIBLE_MACHINE = "(qemux86|qemux86-64|qemuarm|qemuarmv5|qemuarm64|qemuppc|qemumips)"
+COMPATIBLE_MACHINE = "^(qemux86|qemux86-64|qemuarm|qemuarmv5|qemuarm64|qemuppc|qemumips)$"
 
 KERNEL_DEVICETREE:qemuarmv5 = "versatile-pb.dtb"
 
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.19.bb b/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.19.bb
new file mode 100644
index 0000000..c12bec3
--- /dev/null
+++ b/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.19.bb
@@ -0,0 +1,45 @@
+KBRANCH ?= "v5.19/standard/preempt-rt/base"
+
+require recipes-kernel/linux/linux-yocto.inc
+
+# Skip processing of this recipe if it is not explicitly specified as the
+# PREFERRED_PROVIDER for virtual/kernel. This avoids errors when trying
+# to build multiple virtual/kernel providers, e.g. as dependency of
+# core-image-rt-sdk, core-image-rt.
+python () {
+    if d.getVar("KERNEL_PACKAGE_NAME") == "kernel" and d.getVar("PREFERRED_PROVIDER_virtual/kernel") != "linux-yocto-rt":
+        raise bb.parse.SkipRecipe("Set PREFERRED_PROVIDER_virtual/kernel to linux-yocto-rt to enable it")
+}
+
+SRCREV_machine ?= "df2290e83a50563688e5ea0be34e091f1c623069"
+SRCREV_meta ?= "5eb0fa93f8490a962ff0c36c14d8def271d75128"
+
+SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;branch=${KBRANCH};name=machine \
+           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.19;destsuffix=${KMETA}"
+
+LINUX_VERSION ?= "5.19.3"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
+
+DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
+DEPENDS += "openssl-native util-linux-native"
+
+PV = "${LINUX_VERSION}+git${SRCPV}"
+
+KMETA = "kernel-meta"
+KCONF_BSP_AUDIT_LEVEL = "1"
+
+LINUX_KERNEL_TYPE = "preempt-rt"
+
+COMPATIBLE_MACHINE = "^(qemux86|qemux86-64|qemuarm|qemuarmv5|qemuarm64|qemuppc|qemumips)$"
+
+KERNEL_DEVICETREE:qemuarmv5 = "versatile-pb.dtb"
+
+# Functionality flags
+KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc features/taskstats/taskstats.scc"
+KERNEL_FEATURES:append = " ${KERNEL_EXTRA_FEATURES}"
+KERNEL_FEATURES:append:qemuall=" cfg/virtio.scc features/drm-bochs/drm-bochs.scc"
+KERNEL_FEATURES:append:qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
+KERNEL_FEATURES:append:qemux86-64=" cfg/sound.scc cfg/paravirt_kvm.scc"
+KERNEL_FEATURES:append = "${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/scsi/scsi-debug.scc", "", d)}"
+KERNEL_FEATURES:append = "${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/gpio/mockup.scc", "", d)}"
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.10.bb b/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.10.bb
deleted file mode 100644
index 7b3aaa7..0000000
--- a/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.10.bb
+++ /dev/null
@@ -1,32 +0,0 @@
-KBRANCH ?= "v5.10/standard/tiny/base"
-KBRANCH:qemuarm  ?= "v5.10/standard/tiny/arm-versatile-926ejs"
-
-LINUX_KERNEL_TYPE = "tiny"
-KCONFIG_MODE = "--allnoconfig"
-
-require recipes-kernel/linux/linux-yocto.inc
-
-LINUX_VERSION ?= "5.10.130"
-LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
-
-DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
-DEPENDS += "openssl-native util-linux-native"
-
-KMETA = "kernel-meta"
-KCONF_BSP_AUDIT_LEVEL = "2"
-
-SRCREV_machine:qemuarm ?= "bff12aa9748d83efc518e524858913c028f0707a"
-SRCREV_machine ?= "5bdf36bd73803640ee495fc6f36b0207993bf62a"
-SRCREV_meta ?= "96ea2660bb97e15f48f4885b9e436f24c3606bd9"
-
-PV = "${LINUX_VERSION}+git${SRCPV}"
-
-SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;branch=${KBRANCH};name=machine \
-           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.10;destsuffix=${KMETA}"
-
-COMPATIBLE_MACHINE = "qemux86|qemux86-64|qemuarm|qemuarmv5"
-
-# Functionality flags
-KERNEL_FEATURES = ""
-
-KERNEL_DEVICETREE:qemuarmv5 = "versatile-pb.dtb"
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.15.bb b/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.15.bb
index aadf014..2de32ff 100644
--- a/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.15.bb
+++ b/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.15.bb
@@ -5,7 +5,7 @@
 
 require recipes-kernel/linux/linux-yocto.inc
 
-LINUX_VERSION ?= "5.15.54"
+LINUX_VERSION ?= "5.15.62"
 LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
 
 DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
@@ -14,15 +14,15 @@
 KMETA = "kernel-meta"
 KCONF_BSP_AUDIT_LEVEL = "2"
 
-SRCREV_machine ?= "9b1d0e5eb8b08323577f5e2b21cbb2065aba0aa1"
-SRCREV_meta ?= "0e3a81a5aefbea03388b1235fbcc3dec278425d0"
+SRCREV_machine ?= "b708cb8412758a382516bdc46f26a0b43c50fb82"
+SRCREV_meta ?= "59c8898d450152a0875af340e6f0e72d05aafdfa"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;branch=${KBRANCH};name=machine \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.15;destsuffix=${KMETA}"
 
-COMPATIBLE_MACHINE = "qemux86|qemux86-64|qemuarm64|qemuarm|qemuarmv5"
+COMPATIBLE_MACHINE = "^(qemux86|qemux86-64|qemuarm64|qemuarm|qemuarmv5)$"
 
 # Functionality flags
 KERNEL_FEATURES = ""
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.19.bb b/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.19.bb
new file mode 100644
index 0000000..339f7f6
--- /dev/null
+++ b/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.19.bb
@@ -0,0 +1,30 @@
+KBRANCH ?= "v5.19/standard/tiny/base"
+
+LINUX_KERNEL_TYPE = "tiny"
+KCONFIG_MODE = "--allnoconfig"
+
+require recipes-kernel/linux/linux-yocto.inc
+
+LINUX_VERSION ?= "5.19.3"
+LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
+
+DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
+DEPENDS += "openssl-native util-linux-native"
+
+KMETA = "kernel-meta"
+KCONF_BSP_AUDIT_LEVEL = "2"
+
+SRCREV_machine ?= "4d933456709d664a55fdda85304c08567265ad4d"
+SRCREV_meta ?= "5eb0fa93f8490a962ff0c36c14d8def271d75128"
+
+PV = "${LINUX_VERSION}+git${SRCPV}"
+
+SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;branch=${KBRANCH};name=machine \
+           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.19;destsuffix=${KMETA}"
+
+COMPATIBLE_MACHINE = "^(qemux86|qemux86-64|qemuarm64|qemuarm|qemuarmv5)$"
+
+# Functionality flags
+KERNEL_FEATURES = ""
+
+KERNEL_DEVICETREE:qemuarmv5 = "versatile-pb.dtb"
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto.inc b/poky/meta/recipes-kernel/linux/linux-yocto.inc
index cabc8f4..7ea661e 100644
--- a/poky/meta/recipes-kernel/linux/linux-yocto.inc
+++ b/poky/meta/recipes-kernel/linux/linux-yocto.inc
@@ -60,7 +60,7 @@
 KERNEL_FEATURES:append:qemuall=" features/kernel-sample/kernel-sample.scc"
 
 KERNEL_DEBUG_OPTIONS ?= "stack"
-KERNEL_EXTRA_ARGS:append:x86-64 = "${@bb.utils.contains('KERNEL_DEBUG_OPTIONS', 'stack', 'HOST_LIBELF_LIBS="-L${RECIPE_SYSROOT_NATIVE}/usr/lib/pkgconfig/../../../usr/lib/ -lelf"', '', d)}"
+KERNEL_EXTRA_ARGS:append:x86-64 = " ${@bb.utils.contains('KERNEL_DEBUG_OPTIONS', 'stack', 'HOST_LIBELF_LIBS="-L${RECIPE_SYSROOT_NATIVE}/usr/lib/pkgconfig/../../../usr/lib/ -lelf"', '', d)}"
 
 do_devshell:prepend() {
     # setup native pkg-config variables (kconfig scripts call pkg-config directly, cannot generically be overriden to pkg-config-native)
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto_5.10.bb b/poky/meta/recipes-kernel/linux/linux-yocto_5.10.bb
deleted file mode 100644
index d5bf2c9..0000000
--- a/poky/meta/recipes-kernel/linux/linux-yocto_5.10.bb
+++ /dev/null
@@ -1,58 +0,0 @@
-KBRANCH ?= "v5.10/standard/base"
-
-require recipes-kernel/linux/linux-yocto.inc
-
-# board specific branches
-KBRANCH:qemuarm  ?= "v5.10/standard/arm-versatile-926ejs"
-KBRANCH:qemuarm64 ?= "v5.10/standard/qemuarm64"
-KBRANCH:qemumips ?= "v5.10/standard/mti-malta32"
-KBRANCH:qemuppc  ?= "v5.10/standard/qemuppc"
-KBRANCH:qemuriscv64  ?= "v5.10/standard/base"
-KBRANCH:qemuriscv32  ?= "v5.10/standard/base"
-KBRANCH:qemux86  ?= "v5.10/standard/base"
-KBRANCH:qemux86-64 ?= "v5.10/standard/base"
-KBRANCH:qemumips64 ?= "v5.10/standard/mti-malta64"
-
-SRCREV_machine:qemuarm ?= "8d513bf2294b60cbfa7bfbfab43f7ec458e88de0"
-SRCREV_machine:qemuarm64 ?= "f86e70ec0a39fa6cfd5b19a013703345cf9e8d4c"
-SRCREV_machine:qemumips ?= "a5c1977699a2733ed4ddd08f1bcc1cbcc1fa8862"
-SRCREV_machine:qemuppc ?= "2e52a4c55beaea77e6b99720de58624c416e7569"
-SRCREV_machine:qemuriscv64 ?= "2883e69e202dc7948c99a7828e192b2b42c2d90a"
-SRCREV_machine:qemuriscv32 ?= "2883e69e202dc7948c99a7828e192b2b42c2d90a"
-SRCREV_machine:qemux86 ?= "2883e69e202dc7948c99a7828e192b2b42c2d90a"
-SRCREV_machine:qemux86-64 ?= "2883e69e202dc7948c99a7828e192b2b42c2d90a"
-SRCREV_machine:qemumips64 ?= "37c7c3e8979a2b0eb75bf8ceab7f2b7f12565ceb"
-SRCREV_machine ?= "2883e69e202dc7948c99a7828e192b2b42c2d90a"
-SRCREV_meta ?= "96ea2660bb97e15f48f4885b9e436f24c3606bd9"
-
-SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;name=machine;branch=${KBRANCH}; \
-           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.10;destsuffix=${KMETA}"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
-LINUX_VERSION ?= "5.10.130"
-
-DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
-DEPENDS += "openssl-native util-linux-native"
-DEPENDS += "gmp-native libmpc-native"
-
-PV = "${LINUX_VERSION}+git${SRCPV}"
-
-KMETA = "kernel-meta"
-KCONF_BSP_AUDIT_LEVEL = "1"
-
-KERNEL_DEVICETREE:qemuarmv5 = "versatile-pb.dtb"
-
-COMPATIBLE_MACHINE = "qemuarm|qemuarmv5|qemuarm64|qemux86|qemuppc|qemuppc64|qemumips|qemumips64|qemux86-64|qemuriscv64|qemuriscv32"
-
-# Functionality flags
-KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc"
-KERNEL_FEATURES:append = " ${KERNEL_EXTRA_FEATURES}"
-KERNEL_FEATURES:append:qemuall=" cfg/virtio.scc features/drm-bochs/drm-bochs.scc"
-KERNEL_FEATURES:append:qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
-KERNEL_FEATURES:append:qemux86-64=" cfg/sound.scc cfg/paravirt_kvm.scc"
-KERNEL_FEATURES:append:powerpc =" arch/powerpc/powerpc-debug.scc"
-KERNEL_FEATURES:append:powerpc64 =" arch/powerpc/powerpc-debug.scc"
-KERNEL_FEATURES:append:powerpc64le =" arch/powerpc/powerpc-debug.scc"
-KERNEL_FEATURES:append = " ${@bb.utils.contains("TUNE_FEATURES", "mx32", " cfg/x32.scc", "", d)}"
-KERNEL_FEATURES:append = " ${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/scsi/scsi-debug.scc", "", d)}"
-KERNEL_FEATURES:append = " ${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/gpio/mockup.scc", "", d)}"
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto_5.15.bb b/poky/meta/recipes-kernel/linux/linux-yocto_5.15.bb
index 4c1d163..40c430a 100644
--- a/poky/meta/recipes-kernel/linux/linux-yocto_5.15.bb
+++ b/poky/meta/recipes-kernel/linux/linux-yocto_5.15.bb
@@ -13,24 +13,24 @@
 KBRANCH:qemux86-64 ?= "v5.15/standard/base"
 KBRANCH:qemumips64 ?= "v5.15/standard/mti-malta64"
 
-SRCREV_machine:qemuarm ?= "c284142affccb534122ad93bdcd4774af161d767"
-SRCREV_machine:qemuarm64 ?= "c4c194a34c568c17389120608b2ee8a7a988150a"
-SRCREV_machine:qemumips ?= "7b446965d9659d312952ef4dedf5b50a493e60c2"
-SRCREV_machine:qemuppc ?= "0c2a4ad856c8f0c1b3ca8a38c17e1194f47e4643"
-SRCREV_machine:qemuriscv64 ?= "a40d2daf2795d89e3ef8af0413b25190558831ec"
-SRCREV_machine:qemuriscv32 ?= "a40d2daf2795d89e3ef8af0413b25190558831ec"
-SRCREV_machine:qemux86 ?= "a40d2daf2795d89e3ef8af0413b25190558831ec"
-SRCREV_machine:qemux86-64 ?= "a40d2daf2795d89e3ef8af0413b25190558831ec"
-SRCREV_machine:qemumips64 ?= "9a8d4e00df67daf224ae62b238c151a3f3f70ae7"
-SRCREV_machine ?= "a40d2daf2795d89e3ef8af0413b25190558831ec"
-SRCREV_meta ?= "0e3a81a5aefbea03388b1235fbcc3dec278425d0"
+SRCREV_machine:qemuarm ?= "9b096ff3914926ac68501bf156c2d1368f3ebe6c"
+SRCREV_machine:qemuarm64 ?= "7cb30c5e95067ad12b7c4d371c048c7f5d5c922c"
+SRCREV_machine:qemumips ?= "3210fe826ade54d891cf2120c964d2a0dc3e7393"
+SRCREV_machine:qemuppc ?= "7bfdc3608327b9c471008af370dbffe053f5bed9"
+SRCREV_machine:qemuriscv64 ?= "14879dcc3ca7b24d8650cf117c380a94bb865f40"
+SRCREV_machine:qemuriscv32 ?= "14879dcc3ca7b24d8650cf117c380a94bb865f40"
+SRCREV_machine:qemux86 ?= "14879dcc3ca7b24d8650cf117c380a94bb865f40"
+SRCREV_machine:qemux86-64 ?= "14879dcc3ca7b24d8650cf117c380a94bb865f40"
+SRCREV_machine:qemumips64 ?= "ef125626d718771f11fab19a3f91cca5ec27f887"
+SRCREV_machine ?= "14879dcc3ca7b24d8650cf117c380a94bb865f40"
+SRCREV_meta ?= "59c8898d450152a0875af340e6f0e72d05aafdfa"
 
 # set your preferred provider of linux-yocto to 'linux-yocto-upstream', and you'll
 # get the <version>/base branch, which is pure upstream -stable, and the same
 # meta SRCREV as the linux-yocto-standard builds. Select your version using the
 # normal PREFERRED_VERSION settings.
 BBCLASSEXTEND = "devupstream:target"
-SRCREV_machine:class-devupstream ?= "843dae1756d9bddee21a96827784791fd97d484e"
+SRCREV_machine:class-devupstream ?= "a0a7e0b2b8b22901945ea2aef1b65871d718accf"
 PN:class-devupstream = "linux-yocto-upstream"
 KBRANCH:class-devupstream = "v5.15/base"
 
@@ -38,7 +38,7 @@
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.15;destsuffix=${KMETA}"
 
 LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
-LINUX_VERSION ?= "5.15.54"
+LINUX_VERSION ?= "5.15.62"
 
 DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
 DEPENDS += "openssl-native util-linux-native"
@@ -51,7 +51,7 @@
 
 KERNEL_DEVICETREE:qemuarmv5 = "versatile-pb.dtb"
 
-COMPATIBLE_MACHINE = "qemuarm|qemuarmv5|qemuarm64|qemux86|qemuppc|qemuppc64|qemumips|qemumips64|qemux86-64|qemuriscv64|qemuriscv32"
+COMPATIBLE_MACHINE = "^(qemuarm|qemuarmv5|qemuarm64|qemux86|qemuppc|qemuppc64|qemumips|qemumips64|qemux86-64|qemuriscv64|qemuriscv32)$"
 
 # Functionality flags
 KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc"
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto_5.19.bb b/poky/meta/recipes-kernel/linux/linux-yocto_5.19.bb
new file mode 100644
index 0000000..0ff28aa
--- /dev/null
+++ b/poky/meta/recipes-kernel/linux/linux-yocto_5.19.bb
@@ -0,0 +1,70 @@
+KBRANCH ?= "v5.19/standard/base"
+
+require recipes-kernel/linux/linux-yocto.inc
+
+# board specific branches
+KBRANCH:qemuarm  ?= "v5.19/standard/arm-versatile-926ejs"
+KBRANCH:qemuarm64 ?= "v5.19/standard/qemuarm64"
+KBRANCH:qemumips ?= "v5.19/standard/mti-malta32"
+KBRANCH:qemuppc  ?= "v5.19/standard/qemuppc"
+KBRANCH:qemuriscv64  ?= "v5.19/standard/base"
+KBRANCH:qemuriscv32  ?= "v5.19/standard/base"
+KBRANCH:qemux86  ?= "v5.19/standard/base"
+KBRANCH:qemux86-64 ?= "v5.19/standard/base"
+KBRANCH:qemumips64 ?= "v5.19/standard/mti-malta64"
+
+SRCREV_machine:qemuarm ?= "2cbb2d5097fc44a23da635d2ebbccb33df20a34d"
+SRCREV_machine:qemuarm64 ?= "4d933456709d664a55fdda85304c08567265ad4d"
+SRCREV_machine:qemumips ?= "7741c5b2f536b99815329849cca09799cdb82e62"
+SRCREV_machine:qemuppc ?= "4d933456709d664a55fdda85304c08567265ad4d"
+SRCREV_machine:qemuriscv64 ?= "4d933456709d664a55fdda85304c08567265ad4d"
+SRCREV_machine:qemuriscv32 ?= "4d933456709d664a55fdda85304c08567265ad4d"
+SRCREV_machine:qemux86 ?= "4d933456709d664a55fdda85304c08567265ad4d"
+SRCREV_machine:qemux86-64 ?= "4d933456709d664a55fdda85304c08567265ad4d"
+SRCREV_machine:qemumips64 ?= "4ced38bbd45f6cb623728bd755894928a719edac"
+SRCREV_machine ?= "4d933456709d664a55fdda85304c08567265ad4d"
+SRCREV_meta ?= "5eb0fa93f8490a962ff0c36c14d8def271d75128"
+
+# set your preferred provider of linux-yocto to 'linux-yocto-upstream', and you'll
+# get the <version>/base branch, which is pure upstream -stable, and the same
+# meta SRCREV as the linux-yocto-standard builds. Select your version using the
+# normal PREFERRED_VERSION settings.
+BBCLASSEXTEND = "devupstream:target"
+SRCREV_machine:class-devupstream ?= "bf44eed7f2fc9af74eb72f4bc415bdd3d11c4bed"
+PN:class-devupstream = "linux-yocto-upstream"
+KBRANCH:class-devupstream = "v5.19/base"
+
+SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;name=machine;branch=${KBRANCH}; \
+           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.19;destsuffix=${KMETA}"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=6bc538ed5bd9a7fc9398086aedcd7e46"
+LINUX_VERSION ?= "5.19.3"
+
+DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
+DEPENDS += "openssl-native util-linux-native"
+DEPENDS += "gmp-native libmpc-native"
+
+PV = "${LINUX_VERSION}+git${SRCPV}"
+
+KMETA = "kernel-meta"
+KCONF_BSP_AUDIT_LEVEL = "1"
+
+KERNEL_DEVICETREE:qemuarmv5 = "versatile-pb.dtb"
+
+COMPATIBLE_MACHINE = "^(qemuarm|qemuarmv5|qemuarm64|qemux86|qemuppc|qemuppc64|qemumips|qemumips64|qemux86-64|qemuriscv64|qemuriscv32)$"
+
+# Functionality flags
+KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc"
+KERNEL_FEATURES:append = " ${KERNEL_EXTRA_FEATURES}"
+KERNEL_FEATURES:append:qemuall=" cfg/virtio.scc features/drm-bochs/drm-bochs.scc"
+KERNEL_FEATURES:append:qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
+KERNEL_FEATURES:append:qemux86-64=" cfg/sound.scc cfg/paravirt_kvm.scc"
+KERNEL_FEATURES:append = " ${@bb.utils.contains("TUNE_FEATURES", "mx32", " cfg/x32.scc", "", d)}"
+KERNEL_FEATURES:append = " ${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/scsi/scsi-debug.scc", "", d)}"
+KERNEL_FEATURES:append = " ${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/gpio/mockup.scc", "", d)}"
+KERNEL_FEATURES:append:powerpc =" arch/powerpc/powerpc-debug.scc"
+KERNEL_FEATURES:append:powerpc64 =" arch/powerpc/powerpc-debug.scc"
+KERNEL_FEATURES:append:powerpc64le =" arch/powerpc/powerpc-debug.scc"
+
+INSANE_SKIP:kernel-vmlinux:qemuppc64 = "textrel"
+
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0001-fix-compaction.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0001-fix-compaction.patch
new file mode 100644
index 0000000..21e27ff
--- /dev/null
+++ b/poky/meta/recipes-kernel/lttng/lttng-modules/0001-fix-compaction.patch
@@ -0,0 +1,68 @@
+From 8e42c4821fb5f5cb816b6ddf73d9a13ba3298a63 Mon Sep 17 00:00:00 2001
+From: Michael Jeanson <mjeanson@efficios.com>
+Date: Wed, 10 Aug 2022 11:07:14 -0400
+Subject: [PATCH] fix: tie compaction probe build to CONFIG_COMPACTION
+
+The definition of 'struct compact_control' in 'mm/internal.h' depends on
+CONFIG_COMPACTION being defined. Only build the compaction probe when
+this configuration option is enabled.
+
+Thanks to Bruce Ashfield <bruce.ashfield@gmail.com> for reporting this
+issue.
+
+Upstream-Status: Backport [https://review.lttng.org/c/lttng-modules/+/8660]
+
+Change-Id: I81e77aa9c1bf10452c152d432fe5224df0db42c9
+Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
+---
+ src/probes/Kbuild | 34 ++++++++++++++++++----------------
+ 1 file changed, 18 insertions(+), 16 deletions(-)
+
+diff --git a/src/probes/Kbuild b/src/probes/Kbuild
+index 2908cf75..3e556b8e 100644
+--- a/src/probes/Kbuild
++++ b/src/probes/Kbuild
+@@ -167,22 +167,24 @@ ifneq ($(CONFIG_BTRFS_FS),)
+   endif # $(wildcard $(btrfs_dep))
+ endif # CONFIG_BTRFS_FS
+ 
+-# A dependency on internal header 'mm/internal.h' was introduced in v5.18
+-compaction_dep = $(srctree)/mm/internal.h
+-compaction_dep_wildcard = $(wildcard $(compaction_dep))
+-compaction_dep_check = $(shell \
+-if [ \( $(VERSION) -ge 6 \
+-   -o \( $(VERSION) -eq 5 -a $(PATCHLEVEL) -ge 18 \) \) -a \
+-   -z "$(compaction_dep_wildcard)" ] ; then \
+-  echo "warn" ; \
+-else \
+-  echo "ok" ; \
+-fi ;)
+-ifeq ($(compaction_dep_check),ok)
+-  obj-$(CONFIG_LTTNG) += lttng-probe-compaction.o
+-else
+-  $(warning Files $(compaction_dep) not found. Probe "compaction" is disabled. Use full kernel source tree to enable it.)
+-endif # $(wildcard $(compaction_dep))
++ifneq ($(CONFIG_COMPACTION),)
++  # A dependency on internal header 'mm/internal.h' was introduced in v5.18
++  compaction_dep = $(srctree)/mm/internal.h
++  compaction_dep_wildcard = $(wildcard $(compaction_dep))
++  compaction_dep_check = $(shell \
++  if [ \( $(VERSION) -ge 6 \
++     -o \( $(VERSION) -eq 5 -a $(PATCHLEVEL) -ge 18 \) \) -a \
++     -z "$(compaction_dep_wildcard)" ] ; then \
++    echo "warn" ; \
++  else \
++    echo "ok" ; \
++  fi ;)
++  ifeq ($(compaction_dep_check),ok)
++    obj-$(CONFIG_LTTNG) += lttng-probe-compaction.o
++  else
++    $(warning Files $(compaction_dep) not found. Probe "compaction" is disabled. Use full kernel source tree to enable it.)
++  endif # $(wildcard $(compaction_dep))
++endif # CONFIG_COMPACTION
+ 
+ ifneq ($(CONFIG_EXT4_FS),)
+   ext4_dep = $(srctree)/fs/ext4/*.h
+-- 
+2.34.1
+
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules_2.13.4.bb b/poky/meta/recipes-kernel/lttng/lttng-modules_2.13.4.bb
index fea0e38..f60ab3b 100644
--- a/poky/meta/recipes-kernel/lttng/lttng-modules_2.13.4.bb
+++ b/poky/meta/recipes-kernel/lttng/lttng-modules_2.13.4.bb
@@ -15,6 +15,7 @@
            file://0002-fix-fs-Remove-flags-parameter-from-aops-write_begin-.patch \
            file://0003-fix-workqueue-Fix-type-of-cpu-in-trace-event-v5.19.patch \
            file://0001-fix-net-skb-introduce-kfree_skb_reason-v5.15.58.v5.1.patch \
+           file://0001-fix-compaction.patch \
            "
 
 # Use :append here so that the patch is applied also when using devupstream
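The ':append' called out in that comment matters because the devupstream variant replaces the base SRC_URI through a class-devupstream override; ':append' is applied after override resolution, so the patch reaches both builds, whereas a plain '+=' would be lost with the overridden value. A sketch with a hypothetical extra patch:

    # picked up by both lttng-modules and its devupstream variant
    SRC_URI:append = " file://my-extra-fix.patch"
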
diff --git a/poky/meta/recipes-kernel/lttng/lttng-tools_2.13.7.bb b/poky/meta/recipes-kernel/lttng/lttng-tools_2.13.7.bb
deleted file mode 100644
index 1a972ec..0000000
--- a/poky/meta/recipes-kernel/lttng/lttng-tools_2.13.7.bb
+++ /dev/null
@@ -1,195 +0,0 @@
-SECTION = "devel"
-SUMMARY = "Linux Trace Toolkit Control"
-DESCRIPTION = "The Linux trace toolkit is a suite of tools designed \
-to extract program execution details from the Linux operating system \
-and interpret them."
-HOMEPAGE = "https://github.com/lttng/lttng-tools"
-
-LICENSE = "GPL-2.0-only & LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=40ef17463fbd6f377db3c47b1cbaded8 \
-                    file://LICENSES/GPL-2.0;md5=e68f69a54b44ba526ad7cb963e18fbce \
-                    file://LICENSES/LGPL-2.1;md5=9920968d0f2ff585ce61fae30344dd95"
-
-include lttng-platforms.inc
-
-DEPENDS = "liburcu popt libxml2 util-linux bison-native"
-RDEPENDS:${PN} = "libgcc"
-RRECOMMENDS:${PN} += "${LTTNGMODULES}"
-RDEPENDS:${PN}-ptest += "make perl bash gawk babeltrace procps perl-module-overloading coreutils util-linux kmod ${LTTNGMODULES} sed python3-core grep"
-RDEPENDS:${PN}-ptest:append:libc-glibc = " glibc-utils"
-RDEPENDS:${PN}-ptest:append:libc-musl = " musl-utils"
-# babelstats.pl wants getopt-long
-RDEPENDS:${PN}-ptest += "perl-module-getopt-long"
-
-PYTHON_OPTION = "am_cv_python_pyexecdir='${PYTHON_SITEPACKAGES_DIR}' \
-                 am_cv_python_pythondir='${PYTHON_SITEPACKAGES_DIR}' \
-                 PYTHON_INCLUDE='-I${STAGING_INCDIR}/python${PYTHON_BASEVERSION}${PYTHON_ABI}' \
-"
-PACKAGECONFIG ??= "${LTTNGUST} kmod"
-PACKAGECONFIG[python] = "--enable-python-bindings ${PYTHON_OPTION},,python3 swig-native"
-PACKAGECONFIG[lttng-ust] = "--with-lttng-ust, --without-lttng-ust, lttng-ust"
-PACKAGECONFIG[kmod] = "--with-kmod, --without-kmod, kmod"
-PACKAGECONFIG[manpages] = "--enable-man-pages, --disable-man-pages, asciidoc-native xmlto-native libxslt-native"
-
-SRC_URI = "https://lttng.org/files/lttng-tools/lttng-tools-${PV}.tar.bz2 \
-           file://0001-tests-do-not-strip-a-helper-library.patch \
-           file://run-ptest \
-           file://lttng-sessiond.service \
-           file://determinism.patch \
-           file://disable-tests.patch \
-           "
-
-SRC_URI[sha256sum] = "d17a02e8f178a7cf3403e3c9edfb90ad3a1628e20aa0b5131408ae47f722f08d"
-
-inherit autotools ptest pkgconfig useradd python3-dir manpages systemd
-
-CACHED_CONFIGUREVARS = "PGREP=/usr/bin/pgrep"
-
-SYSTEMD_SERVICE:${PN} = "lttng-sessiond.service"
-SYSTEMD_AUTO_ENABLE = "disable"
-
-USERADD_PACKAGES = "${PN}"
-GROUPADD_PARAM:${PN} = "tracing"
-
-FILES:${PN} += "${libdir}/lttng/libexec/* ${datadir}/xml/lttng \
-                ${PYTHON_SITEPACKAGES_DIR}/*"
-FILES:${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/*.a"
-FILES:${PN}-dev += "${PYTHON_SITEPACKAGES_DIR}/*.la"
-
-# Since files are installed into ${libdir}/lttng/libexec we match 
-# the libexec insane test so skip it.
-# Python module needs to keep _lttng.so
-INSANE_SKIP:${PN} = "libexec dev-so"
-INSANE_SKIP:${PN}-dbg = "libexec"
-
-PRIVATE_LIBS:${PN}-ptest = "libfoo.so"
-
-do_install:append () {
-    # install systemd unit file
-    install -d ${D}${systemd_system_unitdir}
-    install -m 0644 ${WORKDIR}/lttng-sessiond.service ${D}${systemd_system_unitdir}
-}
-
-do_install_ptest () {
-    for f in Makefile tests/Makefile tests/utils/utils.sh tests/regression/tools/save-load/*.lttng \
-            tests/regression/tools/save-load/configuration/load-42*.lttng tests/regression/tools/health/test_health.sh \
-            tests/regression/tools/metadata/utils.sh tests/regression/tools/rotation/rotate_utils.sh \
-            tests/regression/tools/notification/util_event_generator.sh \
-            tests/regression/tools/base-path/*.lttng; do
-        install -D "${B}/$f" "${D}${PTEST_PATH}/$f"
-    done
-
-    for f in tests/utils/tap-driver.sh config/test-driver src/common/config/session.xsd src/common/mi-lttng-4.1.xsd; do
-        install -D "${S}/$f" "${D}${PTEST_PATH}/$f"
-    done
-
-    # Patch in the correct path for the custom libraries a helper executable needs
-    sed -i -e 's!FIXMEPTESTPATH!${PTEST_PATH}!' "${D}${PTEST_PATH}/run-ptest"
-
-    # Prevent 'make check' from recursing into non-test subdirectories.
-    sed -i -e 's!^SUBDIRS = .*!SUBDIRS = tests!' "${D}${PTEST_PATH}/Makefile"
-
-    # We don't need these
-    sed -i -e '/dist_noinst_SCRIPTS = /,/^$/d' "${D}${PTEST_PATH}/tests/Makefile"
-
-    # We shouldn't need to build anything in tests/utils
-    sed -i -e 's!am__append_1 = . utils!am__append_1 = . !' \
-        "${D}${PTEST_PATH}/tests/Makefile"
-
-    # Copy the tests directory tree and the executables and
-    # Makefiles found within.
-    for d in $(find "${B}/tests" -type d -not -name .libs -printf '%P ') ; do
-        install -d "${D}${PTEST_PATH}/tests/$d"
-        find "${B}/tests/$d" -maxdepth 1 -executable -type f \
-            -exec install -t "${D}${PTEST_PATH}/tests/$d" {} +
-        # Take all .py scripts for tests using the python bindings.
-        find "${B}/tests/$d" -maxdepth 1 -type f -name "*.py" \
-            -exec install -t "${D}${PTEST_PATH}/tests/$d" {} +
-        test -r "${B}/tests/$d/Makefile" && \
-            install -t "${D}${PTEST_PATH}/tests/$d" "${B}/tests/$d/Makefile"
-    done
-
-    for d in $(find "${B}/tests" -type d -name .libs -printf '%P ') ; do
-        for f in $(find "${B}/tests/$d" -maxdepth 1 -executable -type f -printf '%P ') ; do
-            cp ${B}/tests/$d/$f ${D}${PTEST_PATH}/tests/`dirname $d`/$f
-            case $f in
-                *.so|userspace-probe-elf-*)
-                    install -d ${D}${PTEST_PATH}/tests/$d/
-                    ln -s  ../$f ${D}${PTEST_PATH}/tests/$d/$f
-                    # Remove any rpath/runpath to pass QA check.
-                    chrpath --delete ${D}${PTEST_PATH}/tests/$d/$f
-                    ;;
-            esac
-        done
-    done
-
-    chrpath --delete ${D}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-binary/userspace-probe-elf-binary
-    chrpath --delete ${D}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-cxx-binary/userspace-probe-elf-cxx-binary
-    chrpath --delete ${D}${PTEST_PATH}/tests/regression/ust/ust-dl/libbar.so
-    chrpath --delete ${D}${PTEST_PATH}/tests/regression/ust/ust-dl/libfoo.so
-
-    #
-    # Use the versioned libs of liblttng-ust-dl.
-    #
-    ustdl="${D}${PTEST_PATH}/tests/regression/ust/ust-dl/test_ust-dl.py"
-    if [ -e $ustdl ]; then
-        sed -i -e 's!:liblttng-ust-dl.so!:liblttng-ust-dl.so.0!' $ustdl
-    fi
-
-    install ${B}/tests/unit/ini_config/sample.ini ${D}${PTEST_PATH}/tests/unit/ini_config/
-
-    # We shouldn't need to build anything in tests/regression/tools
-    sed -i -e 's!^SUBDIRS = tools !SUBDIRS = !' \
-        "${D}${PTEST_PATH}/tests/regression/Makefile"
-
-    # Prevent attempts to update Makefiles during test runs, and
-    # silence "Making check in $SUBDIR" messages.
-    find "${D}${PTEST_PATH}" -name Makefile -type f -exec \
-        sed -i -e '/Makefile:/,/^$/d' -e '/%: %.in/,/^$/d' \
-        -e '/echo "Making $$target in $$subdir"; \\/d' \
-        -e 's/^srcdir = \(.*\)/srcdir = ./' \
-        -e 's/^builddir = \(.*\)/builddir = ./' \
-        -e 's/^all-am:.*/all-am:/' \
-        {} +
-
-    find "${D}${PTEST_PATH}" -name Makefile -type f -exec \
-        touch -r "${B}/Makefile" {} +
-
-    #
-    # Need to stop generated binaries from rebuilding by removing their source dependencies
-    #
-    sed -e 's#\(^test.*OBJECTS.=\)#disable\1#g' \
-        -e 's#\(^test.*DEPENDENCIES.=\)#disable\1#g' \
-        -e 's#\(^test.*SOURCES.=\)#disable\1#g' \
-        -e 's#\(^test.*LDADD.=\)#disable\1#g' \
-        -i ${D}${PTEST_PATH}/tests/unit/Makefile
-
-    # Fix hardcoded build path
-    sed -e 's#TESTAPP_PATH=.*/tests/regression/#TESTAPP_PATH="${PTEST_PATH}/tests/regression/#' \
-        -i ${D}${PTEST_PATH}/tests/regression/ust/python-logging/test_python_logging
-
-    # Substitute links to installed binaries.
-    for prog in lttng lttng-relayd lttng-sessiond lttng-consumerd lttng-crash; do
-        exedir="${D}${PTEST_PATH}/src/bin/${prog}"
-        install -d "$exedir"
-        case "$prog" in
-            lttng-consumerd)
-                ln -s "${libdir}/lttng/libexec/$prog" "$exedir"
-                ;;
-            *)
-                ln -s "${bindir}/$prog" "$exedir"
-                ;;
-        esac
-    done
-}
-
-INHIBIT_PACKAGE_STRIP_FILES = "\
-    ${PKGD}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-binary/userspace-probe-elf-binary \
-    ${PKGD}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-binary/.libs/userspace-probe-elf-binary \
-    ${PKGD}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-cxx-binary/userspace-probe-elf-cxx-binary \
-    ${PKGD}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-cxx-binary/.libs/userspace-probe-elf-cxx-binary \
-    ${PKGD}${PTEST_PATH}/tests/utils/testapp/gen-syscall-events/gen-syscall-events \
-    ${PKGD}${PTEST_PATH}/tests/utils/testapp/gen-syscall-events/.libs/gen-syscall-events \
-    ${PKGD}${PTEST_PATH}/tests/utils/testapp/gen-syscall-events-callstack/gen-syscall-events-callstack \
-    ${PKGD}${PTEST_PATH}/tests/utils/testapp/gen-syscall-events-callstack/.libs/gen-syscall-events-callstack \
-    "
diff --git a/poky/meta/recipes-kernel/lttng/lttng-tools_2.13.8.bb b/poky/meta/recipes-kernel/lttng/lttng-tools_2.13.8.bb
new file mode 100644
index 0000000..a814eb7
--- /dev/null
+++ b/poky/meta/recipes-kernel/lttng/lttng-tools_2.13.8.bb
@@ -0,0 +1,195 @@
+SECTION = "devel"
+SUMMARY = "Linux Trace Toolkit Control"
+DESCRIPTION = "The Linux trace toolkit is a suite of tools designed \
+to extract program execution details from the Linux operating system \
+and interpret them."
+HOMEPAGE = "https://github.com/lttng/lttng-tools"
+
+LICENSE = "GPL-2.0-only & LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=40ef17463fbd6f377db3c47b1cbaded8 \
+                    file://LICENSES/GPL-2.0;md5=e68f69a54b44ba526ad7cb963e18fbce \
+                    file://LICENSES/LGPL-2.1;md5=9920968d0f2ff585ce61fae30344dd95"
+
+include lttng-platforms.inc
+
+DEPENDS = "liburcu popt libxml2 util-linux bison-native"
+RDEPENDS:${PN} = "libgcc"
+RRECOMMENDS:${PN} += "${LTTNGMODULES}"
+RDEPENDS:${PN}-ptest += "make perl bash gawk babeltrace procps perl-module-overloading coreutils util-linux kmod ${LTTNGMODULES} sed python3-core grep"
+RDEPENDS:${PN}-ptest:append:libc-glibc = " glibc-utils"
+RDEPENDS:${PN}-ptest:append:libc-musl = " musl-utils"
+# babelstats.pl wants getopt-long
+RDEPENDS:${PN}-ptest += "perl-module-getopt-long"
+
+PYTHON_OPTION = "am_cv_python_pyexecdir='${PYTHON_SITEPACKAGES_DIR}' \
+                 am_cv_python_pythondir='${PYTHON_SITEPACKAGES_DIR}' \
+                 PYTHON_INCLUDE='-I${STAGING_INCDIR}/python${PYTHON_BASEVERSION}${PYTHON_ABI}' \
+"
+PACKAGECONFIG ??= "${LTTNGUST} kmod"
+PACKAGECONFIG[python] = "--enable-python-bindings ${PYTHON_OPTION},,python3 swig-native"
+PACKAGECONFIG[lttng-ust] = "--with-lttng-ust, --without-lttng-ust, lttng-ust"
+PACKAGECONFIG[kmod] = "--with-kmod, --without-kmod, kmod"
+PACKAGECONFIG[manpages] = "--enable-man-pages, --disable-man-pages, asciidoc-native xmlto-native libxslt-native"
+
+SRC_URI = "https://lttng.org/files/lttng-tools/lttng-tools-${PV}.tar.bz2 \
+           file://0001-tests-do-not-strip-a-helper-library.patch \
+           file://run-ptest \
+           file://lttng-sessiond.service \
+           file://determinism.patch \
+           file://disable-tests.patch \
+           "
+
+SRC_URI[sha256sum] = "b1e959579b260790930b20f3c7aa7cefb8a40e0de80d4a777c2bf78c6b353dc1"
+
+inherit autotools ptest pkgconfig useradd python3-dir manpages systemd
+
+CACHED_CONFIGUREVARS = "PGREP=/usr/bin/pgrep"
+
+SYSTEMD_SERVICE:${PN} = "lttng-sessiond.service"
+SYSTEMD_AUTO_ENABLE = "disable"
+
+USERADD_PACKAGES = "${PN}"
+GROUPADD_PARAM:${PN} = "tracing"
+
+FILES:${PN} += "${libdir}/lttng/libexec/* ${datadir}/xml/lttng \
+                ${PYTHON_SITEPACKAGES_DIR}/*"
+FILES:${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/*.a"
+FILES:${PN}-dev += "${PYTHON_SITEPACKAGES_DIR}/*.la"
+
+# Since files are installed into ${libdir}/lttng/libexec we match 
+# the libexec insane test so skip it.
+# Python module needs to keep _lttng.so
+INSANE_SKIP:${PN} = "libexec dev-so"
+INSANE_SKIP:${PN}-dbg = "libexec"
+
+PRIVATE_LIBS:${PN}-ptest = "libfoo.so"
+
+do_install:append () {
+    # install systemd unit file
+    install -d ${D}${systemd_system_unitdir}
+    install -m 0644 ${WORKDIR}/lttng-sessiond.service ${D}${systemd_system_unitdir}
+}
+
+do_install_ptest () {
+    for f in Makefile tests/Makefile tests/utils/utils.sh tests/regression/tools/save-load/*.lttng \
+            tests/regression/tools/save-load/configuration/load-42*.lttng tests/regression/tools/health/test_health.sh \
+            tests/regression/tools/metadata/utils.sh tests/regression/tools/rotation/rotate_utils.sh \
+            tests/regression/tools/notification/util_event_generator.sh \
+            tests/regression/tools/base-path/*.lttng; do
+        install -D "${B}/$f" "${D}${PTEST_PATH}/$f"
+    done
+
+    for f in tests/utils/tap-driver.sh config/test-driver src/common/config/session.xsd src/common/mi-lttng-4.1.xsd; do
+        install -D "${S}/$f" "${D}${PTEST_PATH}/$f"
+    done
+
+    # Patch in the correct path for the custom libraries a helper executable needs
+    sed -i -e 's!FIXMEPTESTPATH!${PTEST_PATH}!' "${D}${PTEST_PATH}/run-ptest"
+
+    # Prevent 'make check' from recursing into non-test subdirectories.
+    sed -i -e 's!^SUBDIRS = .*!SUBDIRS = tests!' "${D}${PTEST_PATH}/Makefile"
+
+    # We don't need these
+    sed -i -e '/dist_noinst_SCRIPTS = /,/^$/d' "${D}${PTEST_PATH}/tests/Makefile"
+
+    # We shouldn't need to build anything in tests/utils
+    sed -i -e 's!am__append_1 = . utils!am__append_1 = . !' \
+        "${D}${PTEST_PATH}/tests/Makefile"
+
+    # Copy the tests directory tree and the executables and
+    # Makefiles found within.
+    for d in $(find "${B}/tests" -type d -not -name .libs -printf '%P ') ; do
+        install -d "${D}${PTEST_PATH}/tests/$d"
+        find "${B}/tests/$d" -maxdepth 1 -executable -type f \
+            -exec install -t "${D}${PTEST_PATH}/tests/$d" {} +
+        # Take all .py scripts for tests using the python bindings.
+        find "${B}/tests/$d" -maxdepth 1 -type f -name "*.py" \
+            -exec install -t "${D}${PTEST_PATH}/tests/$d" {} +
+        test -r "${B}/tests/$d/Makefile" && \
+            install -t "${D}${PTEST_PATH}/tests/$d" "${B}/tests/$d/Makefile"
+    done
+
+    for d in $(find "${B}/tests" -type d -name .libs -printf '%P ') ; do
+        for f in $(find "${B}/tests/$d" -maxdepth 1 -executable -type f -printf '%P ') ; do
+            cp ${B}/tests/$d/$f ${D}${PTEST_PATH}/tests/`dirname $d`/$f
+            case $f in
+                *.so|userspace-probe-elf-*)
+                    install -d ${D}${PTEST_PATH}/tests/$d/
+                    ln -s  ../$f ${D}${PTEST_PATH}/tests/$d/$f
+                    # Remove any rpath/runpath to pass QA check.
+                    chrpath --delete ${D}${PTEST_PATH}/tests/$d/$f
+                    ;;
+            esac
+        done
+    done
+
+    chrpath --delete ${D}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-binary/userspace-probe-elf-binary
+    chrpath --delete ${D}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-cxx-binary/userspace-probe-elf-cxx-binary
+    chrpath --delete ${D}${PTEST_PATH}/tests/regression/ust/ust-dl/libbar.so
+    chrpath --delete ${D}${PTEST_PATH}/tests/regression/ust/ust-dl/libfoo.so
+
+    #
+    # Use the versioned libs of liblttng-ust-dl.
+    #
+    ustdl="${D}${PTEST_PATH}/tests/regression/ust/ust-dl/test_ust-dl.py"
+    if [ -e $ustdl ]; then
+        sed -i -e 's!:liblttng-ust-dl.so!:liblttng-ust-dl.so.0!' $ustdl
+    fi
+
+    install ${B}/tests/unit/ini_config/sample.ini ${D}${PTEST_PATH}/tests/unit/ini_config/
+
+    # We shouldn't need to build anything in tests/regression/tools
+    sed -i -e 's!^SUBDIRS = tools !SUBDIRS = !' \
+        "${D}${PTEST_PATH}/tests/regression/Makefile"
+
+    # Prevent attempts to update Makefiles during test runs, and
+    # silence "Making check in $SUBDIR" messages.
+    find "${D}${PTEST_PATH}" -name Makefile -type f -exec \
+        sed -i -e '/Makefile:/,/^$/d' -e '/%: %.in/,/^$/d' \
+        -e '/echo "Making $$target in $$subdir"; \\/d' \
+        -e 's/^srcdir = \(.*\)/srcdir = ./' \
+        -e 's/^builddir = \(.*\)/builddir = ./' \
+        -e 's/^all-am:.*/all-am:/' \
+        {} +
+
+    find "${D}${PTEST_PATH}" -name Makefile -type f -exec \
+        touch -r "${B}/Makefile" {} +
+
+    #
+    # Need to stop generated binaries from rebuilding by removing their source dependencies
+    #
+    sed -e 's#\(^test.*OBJECTS.=\)#disable\1#g' \
+        -e 's#\(^test.*DEPENDENCIES.=\)#disable\1#g' \
+        -e 's#\(^test.*SOURCES.=\)#disable\1#g' \
+        -e 's#\(^test.*LDADD.=\)#disable\1#g' \
+        -i ${D}${PTEST_PATH}/tests/unit/Makefile
+
+    # Fix hardcoded build path
+    sed -e 's#TESTAPP_PATH=.*/tests/regression/#TESTAPP_PATH="${PTEST_PATH}/tests/regression/#' \
+        -i ${D}${PTEST_PATH}/tests/regression/ust/python-logging/test_python_logging
+
+    # Substitute links to installed binaries.
+    for prog in lttng lttng-relayd lttng-sessiond lttng-consumerd lttng-crash; do
+        exedir="${D}${PTEST_PATH}/src/bin/${prog}"
+        install -d "$exedir"
+        case "$prog" in
+            lttng-consumerd)
+                ln -s "${libdir}/lttng/libexec/$prog" "$exedir"
+                ;;
+            *)
+                ln -s "${bindir}/$prog" "$exedir"
+                ;;
+        esac
+    done
+}
+
+INHIBIT_PACKAGE_STRIP_FILES = "\
+    ${PKGD}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-binary/userspace-probe-elf-binary \
+    ${PKGD}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-binary/.libs/userspace-probe-elf-binary \
+    ${PKGD}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-cxx-binary/userspace-probe-elf-cxx-binary \
+    ${PKGD}${PTEST_PATH}/tests/utils/testapp/userspace-probe-elf-cxx-binary/.libs/userspace-probe-elf-cxx-binary \
+    ${PKGD}${PTEST_PATH}/tests/utils/testapp/gen-syscall-events/gen-syscall-events \
+    ${PKGD}${PTEST_PATH}/tests/utils/testapp/gen-syscall-events/.libs/gen-syscall-events \
+    ${PKGD}${PTEST_PATH}/tests/utils/testapp/gen-syscall-events-callstack/gen-syscall-events-callstack \
+    ${PKGD}${PTEST_PATH}/tests/utils/testapp/gen-syscall-events-callstack/.libs/gen-syscall-events-callstack \
+    "
diff --git a/poky/meta/recipes-kernel/lttng/lttng-ust_2.13.3.bb b/poky/meta/recipes-kernel/lttng/lttng-ust_2.13.3.bb
deleted file mode 100644
index cc88bf5..0000000
--- a/poky/meta/recipes-kernel/lttng/lttng-ust_2.13.3.bb
+++ /dev/null
@@ -1,53 +0,0 @@
-SUMMARY = "Linux Trace Toolkit Userspace Tracer 2.x"
-DESCRIPTION = "The LTTng UST 2.x package contains the userspace tracer library to trace userspace codes."
-HOMEPAGE = "http://lttng.org/ust"
-BUGTRACKER = "https://bugs.lttng.org/projects/lttng-ust"
-
-LICENSE = "LGPL-2.1-or-later & MIT & GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=a46577a38ad0c36ff6ff43ccf40c480f"
-
-PYTHON_OPTION = "am_cv_python_pyexecdir='${PYTHON_SITEPACKAGES_DIR}' \
-                 am_cv_python_pythondir='${PYTHON_SITEPACKAGES_DIR}' \
-                 PYTHON_INCLUDE='-I${STAGING_INCDIR}/python${PYTHON_BASEVERSION}${PYTHON_ABI}' \
-"
-
-inherit autotools lib_package manpages python3native pkgconfig
-
-include lttng-platforms.inc
-
-EXTRA_OECONF = "--disable-numa"
-CPPFLAGS:append:arm = "${@oe.utils.vartrue('DEBUG_BUILD', '-DUATOMIC_NO_LINK_ERROR', '', d)}"
-
-DEPENDS = "liburcu util-linux"
-RDEPENDS:${PN}-bin = "python3-core"
-
-# For backwards compatibility after rename
-RPROVIDES:${PN} = "lttng2-ust"
-RREPLACES:${PN} = "lttng2-ust"
-RCONFLICTS:${PN} = "lttng2-ust"
-
-PE = "2"
-
-SRC_URI = "https://lttng.org/files/lttng-ust/lttng-ust-${PV}.tar.bz2 \
-           file://0001-python-lttngust-Makefile.am-Add-install-lib-to-setup.patch \
-           file://0001-lttng-ust-common-link-with-liburcu-explicitly.patch \
-           file://0001-Makefile.am-update-rpath-link.patch \
-           "
-
-SRC_URI[sha256sum] = "2cc42f51145050430ac4ab72b32d95fd78d5566ccbe44e14a8fcdd23c0ed8f6f"
-
-CVE_PRODUCT = "ust"
-
-PACKAGECONFIG[examples] = "--enable-examples, --disable-examples,"
-PACKAGECONFIG[manpages] = "--enable-man-pages, --disable-man-pages, asciidoc-native xmlto-native libxslt-native"
-PACKAGECONFIG[python3-agent] = "--enable-python-agent ${PYTHON_OPTION}, --disable-python-agent, python3, python3"
-
-FILES:${PN} += " ${PYTHON_SITEPACKAGES_DIR}/*"
-FILES:${PN}-staticdev += " ${PYTHON_SITEPACKAGES_DIR}/*.a"
-FILES:${PN}-dev += " ${PYTHON_SITEPACKAGES_DIR}/*.la"
-
-do_install:append() {
-        # Patch python tools to use Python 3; they should be source compatible, but
-        # still refer to Python 2 in the shebang
-        sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${bindir}/lttng-gen-tp
-}
diff --git a/poky/meta/recipes-kernel/lttng/lttng-ust_2.13.4.bb b/poky/meta/recipes-kernel/lttng/lttng-ust_2.13.4.bb
new file mode 100644
index 0000000..56200ac
--- /dev/null
+++ b/poky/meta/recipes-kernel/lttng/lttng-ust_2.13.4.bb
@@ -0,0 +1,53 @@
+SUMMARY = "Linux Trace Toolkit Userspace Tracer 2.x"
+DESCRIPTION = "The LTTng UST 2.x package contains the userspace tracer library to trace userspace codes."
+HOMEPAGE = "http://lttng.org/ust"
+BUGTRACKER = "https://bugs.lttng.org/projects/lttng-ust"
+
+LICENSE = "LGPL-2.1-or-later & MIT & GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=a46577a38ad0c36ff6ff43ccf40c480f"
+
+PYTHON_OPTION = "am_cv_python_pyexecdir='${PYTHON_SITEPACKAGES_DIR}' \
+                 am_cv_python_pythondir='${PYTHON_SITEPACKAGES_DIR}' \
+                 PYTHON_INCLUDE='-I${STAGING_INCDIR}/python${PYTHON_BASEVERSION}${PYTHON_ABI}' \
+"
+
+inherit autotools lib_package manpages python3native pkgconfig
+
+include lttng-platforms.inc
+
+EXTRA_OECONF = "--disable-numa"
+CPPFLAGS:append:arm = "${@oe.utils.vartrue('DEBUG_BUILD', '-DUATOMIC_NO_LINK_ERROR', '', d)}"
+
+DEPENDS = "liburcu util-linux"
+RDEPENDS:${PN}-bin = "python3-core"
+
+# For backwards compatibility after rename
+RPROVIDES:${PN} = "lttng2-ust"
+RREPLACES:${PN} = "lttng2-ust"
+RCONFLICTS:${PN} = "lttng2-ust"
+
+PE = "2"
+
+SRC_URI = "https://lttng.org/files/lttng-ust/lttng-ust-${PV}.tar.bz2 \
+           file://0001-python-lttngust-Makefile.am-Add-install-lib-to-setup.patch \
+           file://0001-lttng-ust-common-link-with-liburcu-explicitly.patch \
+           file://0001-Makefile.am-update-rpath-link.patch \
+           "
+
+SRC_URI[sha256sum] = "698f82ec5dc56e981c0bb08c46ebabaf31c60e877c2e365b9fd6d3a9fff8b398"
+
+CVE_PRODUCT = "ust"
+
+PACKAGECONFIG[examples] = "--enable-examples, --disable-examples,"
+PACKAGECONFIG[manpages] = "--enable-man-pages, --disable-man-pages, asciidoc-native xmlto-native libxslt-native"
+PACKAGECONFIG[python3-agent] = "--enable-python-agent ${PYTHON_OPTION}, --disable-python-agent, python3, python3"
+
+FILES:${PN} += " ${PYTHON_SITEPACKAGES_DIR}/*"
+FILES:${PN}-staticdev += " ${PYTHON_SITEPACKAGES_DIR}/*.a"
+FILES:${PN}-dev += " ${PYTHON_SITEPACKAGES_DIR}/*.la"
+
+do_install:append() {
+        # Patch python tools to use Python 3; they should be source compatible, but
+        # still refer to Python 2 in the shebang
+        sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${bindir}/lttng-gen-tp
+}
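The optional bindings above stay off by default. A one-line bbappend sketch enabling the Python agent (the examples and manpages knobs work the same way):

    PACKAGECONFIG:append = " python3-agent"
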
diff --git a/poky/meta/recipes-kernel/perf/perf.bb b/poky/meta/recipes-kernel/perf/perf.bb
index 95e7eae..9f7c300 100644
--- a/poky/meta/recipes-kernel/perf/perf.bb
+++ b/poky/meta/recipes-kernel/perf/perf.bb
@@ -20,6 +20,7 @@
 PACKAGECONFIG[tui] = ",NO_NEWT=1,libnewt slang"
 PACKAGECONFIG[libunwind] = ",NO_LIBUNWIND=1 NO_LIBDW_DWARF_UNWIND=1,libunwind"
 PACKAGECONFIG[libnuma] = ",NO_LIBNUMA=1"
+PACKAGECONFIG[bfd] = ",NO_LIBBFD=1"
 PACKAGECONFIG[systemtap] = ",NO_SDT=1,systemtap"
 PACKAGECONFIG[jvmti] = ",NO_JVMTI=1"
 # libaudit support would need scripting to be enabled
@@ -203,7 +204,7 @@
     if [ -e "${S}/tools/perf/Makefile.perf" ]; then
         sed -i -e 's,\ .config-detected, $(OUTPUT)/config-detected,g' \
             ${S}/tools/perf/Makefile.perf
-        sed -i -e "s,prefix='\$(DESTDIR_SQ)/usr'$,prefix='\$(DESTDIR_SQ)/usr' --install-lib='\$(DESTDIR)\$(PYTHON_SITEPACKAGES_DIR)',g" \
+        sed -i -e "s,prefix='\$(DESTDIR_SQ)/usr'$,prefix='\$(DESTDIR_SQ)/usr' --install-lib='\$(PYTHON_SITEPACKAGES_DIR)' --root='\$(DESTDIR)',g" \
             ${S}/tools/perf/Makefile.perf
         # backport https://github.com/torvalds/linux/commit/e4ffd066ff440a57097e9140fa9e16ceef905de8
         sed -i -e 's,\($(Q)$(SHELL) .$(arch_errno_tbl).\) $(CC) $(arch_errno_hdr_dir),\1 $(firstword $(CC)) $(arch_errno_hdr_dir),g' \
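The new 'bfd' knob defaults to off, preserving the existing NO_LIBBFD=1 behaviour. A local.conf sketch that opts back into libbfd-based symbol resolution (assuming binutils supplies libbfd in the target sysroot, since the PACKAGECONFIG declares no dependency):

    PACKAGECONFIG:append:pn-perf = " bfd"
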
diff --git a/poky/meta/recipes-kernel/systemtap/systemtap/python-3.11.patch b/poky/meta/recipes-kernel/systemtap/systemtap/python-3.11.patch
new file mode 100644
index 0000000..6e0c97b
--- /dev/null
+++ b/poky/meta/recipes-kernel/systemtap/systemtap/python-3.11.patch
@@ -0,0 +1,37 @@
+From 069e109c95d1afca17cd3781b39f220cf8b39978 Mon Sep 17 00:00:00 2001
+From: Stan Cox <scox@redhat.com>
+Date: Wed, 13 Jul 2022 09:49:51 -0400
+Subject: [PATCH 1/1] python 3.11 removed direct access to PyFrameObject
+ members
+
+Take into account the change in PyFrameObject definition to allow
+building systemtap with python 3.11.  Additional support for python
+3.11 is forthcoming.
+
+Upstream-Status: Backport [https://sourceware.org/git/?p=systemtap.git;a=commit;h=069e109c95d1afca17cd3781b39f220cf8b39978]
+Signed-off-by: Alexander Kanavin <alex@linutronix.de>
+---
+ python/HelperSDT/_HelperSDT.c | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/python/HelperSDT/_HelperSDT.c b/python/HelperSDT/_HelperSDT.c
+index 967cb6077..4d287132e 100644
+--- a/python/HelperSDT/_HelperSDT.c
++++ b/python/HelperSDT/_HelperSDT.c
+@@ -14,7 +14,13 @@
+ // PR25841: ensure that the libHelperSDT.so file contains debuginfo
+ // for the tapset helper functions, so they don't have to look into libpython*
+ #include <frameobject.h>
++// python 3.11 removed direct access to PyFrameObject members
++// https://docs.python.org/3.11/whatsnew/3.11.html#c-api-changes
++#if PY_MAJOR_VERSION <= 3 && PY_MINOR_VERSION < 11
+ PyFrameObject _dummy_frame;
++#else
++//PyFrameObject *_dummy_frame;
++#endif
+ #include <object.h>
+ PyVarObject _dummy_var;
+ #include <dictobject.h>
+-- 
+2.31.1
+
diff --git a/poky/meta/recipes-kernel/systemtap/systemtap_git.inc b/poky/meta/recipes-kernel/systemtap/systemtap_git.inc
index 2b79aa8..b05a5a2 100644
--- a/poky/meta/recipes-kernel/systemtap/systemtap_git.inc
+++ b/poky/meta/recipes-kernel/systemtap/systemtap_git.inc
@@ -7,6 +7,7 @@
            file://0001-Do-not-let-configure-write-a-python-location-into-th.patch \
            file://0001-Install-python-modules-to-correct-library-dir.patch \
            file://0001-staprun-stapbpf-don-t-support-installing-a-non-root.patch \
+           file://python-3.11.patch \
            "
 
 COMPATIBLE_HOST = '(x86_64|i.86|powerpc|arm|aarch64|microblazeel|mips|riscv64).*-linux'
diff --git a/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2022.06.06.bb b/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2022.06.06.bb
deleted file mode 100644
index 2eba4f8..0000000
--- a/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2022.06.06.bb
+++ /dev/null
@@ -1,43 +0,0 @@
-SUMMARY = "Wireless Central Regulatory Domain Database"
-HOMEPAGE = "https://wireless.wiki.kernel.org/en/developers/regulatory/crda"
-SECTION = "net"
-LICENSE = "ISC"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=07c4f6dea3845b02a18dc00c8c87699c"
-
-SRC_URI = "https://www.kernel.org/pub/software/network/${BPN}/${BP}.tar.xz"
-SRC_URI[sha256sum] = "ac00f97efecce5046ed069d1d93f3365fdf994c7c7854a8fc50831e959537230"
-
-inherit bin_package allarch
-
-do_install() {
-    install -d -m0755 ${D}${nonarch_libdir}/crda
-    install -d -m0755 ${D}${sysconfdir}/wireless-regdb/pubkeys
-    install -m 0644 regulatory.bin ${D}${nonarch_libdir}/crda/regulatory.bin
-    install -m 0644 sforshee.key.pub.pem ${D}${sysconfdir}/wireless-regdb/pubkeys/sforshee.key.pub.pem
-
-    install -m 0644 -D regulatory.db ${D}${nonarch_base_libdir}/firmware/regulatory.db
-    install -m 0644 regulatory.db.p7s ${D}${nonarch_base_libdir}/firmware/regulatory.db.p7s
-}
-
-# Install static regulatory DB in /lib/firmware for kernel to load.
-# This requires Linux kernel >= v4.15.
-# For kernel <= v4.14, inherit the kernel_wireless_regdb.bbclass
-# (in meta-networking) in kernel's recipe.
-PACKAGES = "${PN}-static ${PN}"
-RCONFLICTS:${PN} = "${PN}-static"
-
-FILES:${PN}-static = " \
-    ${nonarch_base_libdir}/firmware/regulatory.db \
-    ${nonarch_base_libdir}/firmware/regulatory.db.p7s \
-"
-
-# Native users might want to use the source of regulatory DB.
-# This is for example used by Linux kernel <= v4.14 and
-# kernel_wireless_regdb.bbclass in meta-networking.
-do_install:append:class-native() {
-    install -m 0644 -D db.txt ${D}${libdir}/crda/db.txt
-}
-
-RSUGGESTS:${PN} = "crda"
-
-BBCLASSEXTEND = "native"
diff --git a/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2022.08.12.bb b/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2022.08.12.bb
new file mode 100644
index 0000000..357e79d
--- /dev/null
+++ b/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2022.08.12.bb
@@ -0,0 +1,43 @@
+SUMMARY = "Wireless Central Regulatory Domain Database"
+HOMEPAGE = "https://wireless.wiki.kernel.org/en/developers/regulatory/crda"
+SECTION = "net"
+LICENSE = "ISC"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=07c4f6dea3845b02a18dc00c8c87699c"
+
+SRC_URI = "https://www.kernel.org/pub/software/network/${BPN}/${BP}.tar.xz"
+SRC_URI[sha256sum] = "59c8f7d17966db71b27f90e735ee8f5b42ca3527694a8c5e6e9b56bd379c3b84"
+
+inherit bin_package allarch
+
+do_install() {
+    install -d -m0755 ${D}${nonarch_libdir}/crda
+    install -d -m0755 ${D}${sysconfdir}/wireless-regdb/pubkeys
+    install -m 0644 regulatory.bin ${D}${nonarch_libdir}/crda/regulatory.bin
+    install -m 0644 sforshee.key.pub.pem ${D}${sysconfdir}/wireless-regdb/pubkeys/sforshee.key.pub.pem
+
+    install -m 0644 -D regulatory.db ${D}${nonarch_base_libdir}/firmware/regulatory.db
+    install -m 0644 regulatory.db.p7s ${D}${nonarch_base_libdir}/firmware/regulatory.db.p7s
+}
+
+# Install static regulatory DB in /lib/firmware for kernel to load.
+# This requires Linux kernel >= v4.15.
+# For kernel <= v4.14, inherit the kernel_wireless_regdb.bbclass
+# (in meta-networking) in kernel's recipe.
+PACKAGES = "${PN}-static ${PN}"
+RCONFLICTS:${PN} = "${PN}-static"
+
+FILES:${PN}-static = " \
+    ${nonarch_base_libdir}/firmware/regulatory.db \
+    ${nonarch_base_libdir}/firmware/regulatory.db.p7s \
+"
+
+# Native users might want to use the source of regulatory DB.
+# This is for example used by Linux kernel <= v4.14 and
+# kernel_wireless_regdb.bbclass in meta-networking.
+do_install:append:class-native() {
+    install -m 0644 -D db.txt ${D}${libdir}/crda/db.txt
+}
+
+RSUGGESTS:${PN} = "crda"
+
+BBCLASSEXTEND = "native"
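For kernels new enough to load regulatory.db straight from /lib/firmware (>= v4.15, per the comment above), the split ${PN}-static package is the one to ship. A sketch for an image recipe or local.conf:

    IMAGE_INSTALL:append = " wireless-regdb-static"
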
diff --git a/poky/meta/recipes-multimedia/alsa/alsa-plugins/0001-arcam_av.c-Include-missing-string.h.patch b/poky/meta/recipes-multimedia/alsa/alsa-plugins/0001-arcam_av.c-Include-missing-string.h.patch
new file mode 100644
index 0000000..ff7745d
--- /dev/null
+++ b/poky/meta/recipes-multimedia/alsa/alsa-plugins/0001-arcam_av.c-Include-missing-string.h.patch
@@ -0,0 +1,25 @@
+From b01b176a665ba65979d74922955f51dc4888a713 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 23 Aug 2022 15:21:16 -0700
+Subject: [PATCH] arcam_av.c: Include missing string.h
+
+bzero() function needs this header to be included
+
+Upstream-Status: Submitted [https://github.com/alsa-project/alsa-plugins/pull/47]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ arcam-av/arcam_av.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arcam-av/arcam_av.c b/arcam-av/arcam_av.c
+index 63f9b4e..29fc537 100644
+--- a/arcam-av/arcam_av.c
++++ b/arcam-av/arcam_av.c
+@@ -27,6 +27,7 @@
+ #include <signal.h>
+ #include <stddef.h>
+ #include <stdio.h>
++#include <string.h>
+ #include <termios.h>
+ #include <unistd.h>
+ 
diff --git a/poky/meta/recipes-multimedia/alsa/alsa-plugins_1.2.7.1.bb b/poky/meta/recipes-multimedia/alsa/alsa-plugins_1.2.7.1.bb
index 62505fe..9500462 100644
--- a/poky/meta/recipes-multimedia/alsa/alsa-plugins_1.2.7.1.bb
+++ b/poky/meta/recipes-multimedia/alsa/alsa-plugins_1.2.7.1.bb
@@ -22,7 +22,9 @@
                     file://rate/rate_samplerate.c;endline=35;md5=fd77bce85f4a338c0e8ab18430b69fae \
                     "
 
-SRC_URI = "https://www.alsa-project.org/files/pub/plugins/${BP}.tar.bz2"
+SRC_URI = "https://www.alsa-project.org/files/pub/plugins/${BP}.tar.bz2 \
+           file://0001-arcam_av.c-Include-missing-string.h.patch \
+           "
 SRC_URI[sha256sum] = "8c337814954bb7c167456733a6046142a2931f12eccba3ec2a4ae618a3432511"
 
 DEPENDS += "alsa-lib"
diff --git a/poky/meta/recipes-multimedia/ffmpeg/ffmpeg/0001-libavutil-include-assembly-with-full-path-from-sourc.patch b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg/0001-libavutil-include-assembly-with-full-path-from-sourc.patch
deleted file mode 100644
index 7d0a06f..0000000
--- a/poky/meta/recipes-multimedia/ffmpeg/ffmpeg/0001-libavutil-include-assembly-with-full-path-from-sourc.patch
+++ /dev/null
@@ -1,112 +0,0 @@
-From 4a891e1eddbf63f32fe769b5bff289f6748abf45 Mon Sep 17 00:00:00 2001
-From: Alexander Kanavin <alex.kanavin@gmail.com>
-Date: Tue, 10 Nov 2020 15:32:14 +0000
-Subject: [PATCH] libavutil: include assembly with full path from source root
-
-Otherwise nasm writes the full host-specific paths into .o
-output, which breaks binary reproducibility.
-
-Upstream-Status: Submitted [http://ffmpeg.org/pipermail/ffmpeg-devel/2022-January/291781.html]
-Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
-
----
- libavutil/x86/cpuid.asm      | 2 +-
- libavutil/x86/emms.asm       | 2 +-
- libavutil/x86/fixed_dsp.asm  | 2 +-
- libavutil/x86/float_dsp.asm  | 2 +-
- libavutil/x86/lls.asm        | 2 +-
- libavutil/x86/pixelutils.asm | 2 +-
- libavutil/x86/tx_float.asm   | 2 +-
- 7 files changed, 7 insertions(+), 7 deletions(-)
-
-diff --git a/libavutil/x86/cpuid.asm b/libavutil/x86/cpuid.asm
-index c3f7866..766f77f 100644
---- a/libavutil/x86/cpuid.asm
-+++ b/libavutil/x86/cpuid.asm
-@@ -21,7 +21,7 @@
- ;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- ;******************************************************************************
- 
--%include "x86util.asm"
-+%include "libavutil/x86/x86util.asm"
- 
- SECTION .text
- 
-diff --git a/libavutil/x86/emms.asm b/libavutil/x86/emms.asm
-index 8611762..df84f22 100644
---- a/libavutil/x86/emms.asm
-+++ b/libavutil/x86/emms.asm
-@@ -18,7 +18,7 @@
- ;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- ;******************************************************************************
- 
--%include "x86util.asm"
-+%include "libavutil/x86/x86util.asm"
- 
- SECTION .text
- 
-diff --git a/libavutil/x86/fixed_dsp.asm b/libavutil/x86/fixed_dsp.asm
-index 979dd5c..2f41185 100644
---- a/libavutil/x86/fixed_dsp.asm
-+++ b/libavutil/x86/fixed_dsp.asm
-@@ -20,7 +20,7 @@
- ;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- ;******************************************************************************
- 
--%include "x86util.asm"
-+%include "libavutil/x86/x86util.asm"
- 
- SECTION .text
- 
-diff --git a/libavutil/x86/float_dsp.asm b/libavutil/x86/float_dsp.asm
-index 517fd63..b773e61 100644
---- a/libavutil/x86/float_dsp.asm
-+++ b/libavutil/x86/float_dsp.asm
-@@ -20,7 +20,7 @@
- ;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- ;******************************************************************************
- 
--%include "x86util.asm"
-+%include "libavutil/x86/x86util.asm"
- 
- SECTION_RODATA 32
- pd_reverse: dd 7, 6, 5, 4, 3, 2, 1, 0
-diff --git a/libavutil/x86/lls.asm b/libavutil/x86/lls.asm
-index 317fba6..d2526d1 100644
---- a/libavutil/x86/lls.asm
-+++ b/libavutil/x86/lls.asm
-@@ -20,7 +20,7 @@
- ;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- ;******************************************************************************
- 
--%include "x86util.asm"
-+%include "libavutil/x86/x86util.asm"
- 
- SECTION .text
- 
-diff --git a/libavutil/x86/pixelutils.asm b/libavutil/x86/pixelutils.asm
-index 36c57c5..8b45ead 100644
---- a/libavutil/x86/pixelutils.asm
-+++ b/libavutil/x86/pixelutils.asm
-@@ -21,7 +21,7 @@
- ;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- ;******************************************************************************
- 
--%include "x86util.asm"
-+%include "libavutil/x86/x86util.asm"
- 
- SECTION .text
- 
-diff --git a/libavutil/x86/tx_float.asm b/libavutil/x86/tx_float.asm
-index 4d2283f..ea39f21 100644
---- a/libavutil/x86/tx_float.asm
-+++ b/libavutil/x86/tx_float.asm
-@@ -29,7 +29,7 @@
- ;       replace some shuffles with vblends?
- ;       avx512 split-radix
- 
--%include "x86util.asm"
-+%include "libavutil/x86/x86util.asm"
- 
- %if ARCH_X86_64
- %define ptr resq
diff --git a/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_5.0.1.bb b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_5.0.1.bb
deleted file mode 100644
index dd14f8d..0000000
--- a/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_5.0.1.bb
+++ /dev/null
@@ -1,176 +0,0 @@
-SUMMARY = "A complete, cross-platform solution to record, convert and stream audio and video."
-DESCRIPTION = "FFmpeg is the leading multimedia framework, able to decode, encode, transcode, \
-               mux, demux, stream, filter and play pretty much anything that humans and machines \
-               have created. It supports the most obscure ancient formats up to the cutting edge."
-HOMEPAGE = "https://www.ffmpeg.org/"
-SECTION = "libs"
-
-LICENSE = "GPL-2.0-or-later & LGPL-2.1-or-later & ISC & MIT & BSD-2-Clause & BSD-3-Clause & IJG"
-LICENSE:${PN} = "GPL-2.0-or-later"
-LICENSE:libavcodec = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
-LICENSE:libavdevice = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
-LICENSE:libavfilter = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
-LICENSE:libavformat = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
-LICENSE:libavutil = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
-LICENSE:libpostproc = "GPL-2.0-or-later"
-LICENSE:libswresample = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
-LICENSE:libswscale = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
-LICENSE_FLAGS = "commercial"
-
-LIC_FILES_CHKSUM = "file://COPYING.GPLv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://COPYING.GPLv3;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://COPYING.LGPLv2.1;md5=bd7a443320af8c812e4c18d1b79df004 \
-                    file://COPYING.LGPLv3;md5=e6a600fd5e1d9cbde2d983680233ad02"
-
-SRC_URI = "https://www.ffmpeg.org/releases/${BP}.tar.xz \
-           file://0001-libavutil-include-assembly-with-full-path-from-sourc.patch \
-           "
-SRC_URI[sha256sum] = "ef2efae259ce80a240de48ec85ecb062cecca26e4352ffb3fda562c21a93007b"
-
-# Build fails when thumb is enabled: https://bugzilla.yoctoproject.org/show_bug.cgi?id=7717
-ARM_INSTRUCTION_SET:armv4 = "arm"
-ARM_INSTRUCTION_SET:armv5 = "arm"
-ARM_INSTRUCTION_SET:armv6 = "arm"
-
-# Should be API compatible with libav (which was a fork of ffmpeg)
-# libpostproc was previously packaged from a separate recipe
-PROVIDES = "libav libpostproc"
-
-DEPENDS = "nasm-native"
-
-inherit autotools pkgconfig
-
-PACKAGECONFIG ??= "avdevice avfilter avcodec avformat swresample swscale postproc \
-                   alsa bzlib lzma pic pthreads shared theora zlib \
-                   ${@bb.utils.contains('AVAILTUNES', 'mips32r2', 'mips32r2', '', d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'xv xcb', '', d)}"
-
-# libraries to build in addition to avutil
-PACKAGECONFIG[avdevice] = "--enable-avdevice,--disable-avdevice"
-PACKAGECONFIG[avfilter] = "--enable-avfilter,--disable-avfilter"
-PACKAGECONFIG[avcodec] = "--enable-avcodec,--disable-avcodec"
-PACKAGECONFIG[avformat] = "--enable-avformat,--disable-avformat"
-PACKAGECONFIG[swresample] = "--enable-swresample,--disable-swresample"
-PACKAGECONFIG[swscale] = "--enable-swscale,--disable-swscale"
-PACKAGECONFIG[postproc] = "--enable-postproc,--disable-postproc"
-
-# features to support
-PACKAGECONFIG[alsa] = "--enable-alsa,--disable-alsa,alsa-lib"
-PACKAGECONFIG[altivec] = "--enable-altivec,--disable-altivec,"
-PACKAGECONFIG[bzlib] = "--enable-bzlib,--disable-bzlib,bzip2"
-PACKAGECONFIG[fdk-aac] = "--enable-libfdk-aac --enable-nonfree,--disable-libfdk-aac,fdk-aac"
-PACKAGECONFIG[gpl] = "--enable-gpl,--disable-gpl"
-PACKAGECONFIG[gsm] = "--enable-libgsm,--disable-libgsm,libgsm"
-PACKAGECONFIG[jack] = "--enable-indev=jack,--disable-indev=jack,jack"
-PACKAGECONFIG[libopus] = "--enable-libopus,--disable-libopus,libopus"
-PACKAGECONFIG[libvorbis] = "--enable-libvorbis,--disable-libvorbis,libvorbis"
-PACKAGECONFIG[lzma] = "--enable-lzma,--disable-lzma,xz"
-PACKAGECONFIG[mfx] = "--enable-libmfx,--disable-libmfx,intel-mediasdk"
-PACKAGECONFIG[mp3lame] = "--enable-libmp3lame,--disable-libmp3lame,lame"
-PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
-PACKAGECONFIG[sdl2] = "--enable-sdl2,--disable-sdl2,virtual/libsdl2"
-PACKAGECONFIG[speex] = "--enable-libspeex,--disable-libspeex,speex"
-PACKAGECONFIG[srt] = "--enable-libsrt,--disable-libsrt,srt"
-PACKAGECONFIG[theora] = "--enable-libtheora,--disable-libtheora,libtheora libogg"
-PACKAGECONFIG[vaapi] = "--enable-vaapi,--disable-vaapi,libva"
-PACKAGECONFIG[vdpau] = "--enable-vdpau,--disable-vdpau,libvdpau"
-PACKAGECONFIG[vpx] = "--enable-libvpx,--disable-libvpx,libvpx"
-PACKAGECONFIG[x264] = "--enable-libx264,--disable-libx264,x264"
-PACKAGECONFIG[x265] = "--enable-libx265,--disable-libx265,x265"
-PACKAGECONFIG[xcb] = "--enable-libxcb,--disable-libxcb,libxcb"
-PACKAGECONFIG[xv] = "--enable-outdev=xv,--disable-outdev=xv,libxv"
-PACKAGECONFIG[zlib] = "--enable-zlib,--disable-zlib,zlib"
-
-# other configuration options
-PACKAGECONFIG[mips32r2] = ",--disable-mipsdsp --disable-mipsdspr2"
-PACKAGECONFIG[pic] = "--enable-pic"
-PACKAGECONFIG[pthreads] = "--enable-pthreads,--disable-pthreads"
-PACKAGECONFIG[shared] = "--enable-shared"
-PACKAGECONFIG[strip] = ",--disable-stripping"
-
-# Check codecs that require --enable-nonfree
-USE_NONFREE = "${@bb.utils.contains_any('PACKAGECONFIG', [ 'openssl' ], 'yes', '', d)}"
-
-def cpu(d):
-    for arg in (d.getVar('TUNE_CCARGS') or '').split():
-        if arg.startswith('-mcpu='):
-            return arg[6:]
-    return 'generic'
-
-EXTRA_OECONF = " \
-    ${@bb.utils.contains('USE_NONFREE', 'yes', '--enable-nonfree', '', d)} \
-    \
-    --cross-prefix=${TARGET_PREFIX} \
-    \
-    --ld='${CCLD}' \
-    --cc='${CC}' \
-    --cxx='${CXX}' \
-    --arch=${TARGET_ARCH} \
-    --target-os='linux' \
-    --enable-cross-compile \
-    --extra-cflags='${CFLAGS} ${HOST_CC_ARCH}${TOOLCHAIN_OPTIONS}' \
-    --extra-ldflags='${LDFLAGS}' \
-    --sysroot='${STAGING_DIR_TARGET}' \
-    ${EXTRA_FFCONF} \
-    --libdir=${libdir} \
-    --shlibdir=${libdir} \
-    --datadir=${datadir}/ffmpeg \
-    --cpu=${@cpu(d)} \
-    --pkg-config=pkg-config \
-"
-
-EXTRA_OECONF:append:linux-gnux32 = " --disable-asm"
-
-EXTRA_OECONF += "${@bb.utils.contains('TUNE_FEATURES', 'mipsisa64r6', '--disable-mips64r2 --disable-mips32r2', '', d)}"
-EXTRA_OECONF += "${@bb.utils.contains('TUNE_FEATURES', 'mipsisa64r2', '--disable-mips64r6 --disable-mips32r6', '', d)}"
-EXTRA_OECONF += "${@bb.utils.contains('TUNE_FEATURES', 'mips32r2', '--disable-mips64r6 --disable-mips32r6', '', d)}"
-EXTRA_OECONF += "${@bb.utils.contains('TUNE_FEATURES', 'mips32r6', '--disable-mips64r2 --disable-mips32r2', '', d)}"
-EXTRA_OECONF:append:mips = " --extra-libs=-latomic --disable-mips32r5 --disable-mipsdsp --disable-mipsdspr2 \
-                             --disable-loongson2 --disable-loongson3 --disable-mmi --disable-msa"
-EXTRA_OECONF:append:riscv32 = " --extra-libs=-latomic"
-EXTRA_OECONF:append:armv5 = " --extra-libs=-latomic"
-EXTRA_OECONF:append:powerpc = " --extra-libs=-latomic"
-
-# gold crashes on x86, another solution is to --disable-asm but thats more hacky
-# ld.gold: internal error in relocate_section, at ../../gold/i386.cc:3684
-
-LDFLAGS:append:x86 = "${@bb.utils.contains('DISTRO_FEATURES', 'ld-is-gold', ' -fuse-ld=bfd ', '', d)}"
-
-EXTRA_OEMAKE = "V=1"
-
-do_configure() {
-    ${S}/configure ${EXTRA_OECONF}
-}
-
-# patch out build host paths for reproducibility
-do_compile:prepend:class-target() {
-        sed -i -e "s,${WORKDIR},,g" ${B}/config.h
-}
-
-PACKAGES =+ "libavcodec \
-             libavdevice \
-             libavfilter \
-             libavformat \
-             libavutil \
-             libpostproc \
-             libswresample \
-             libswscale"
-
-FILES:libavcodec = "${libdir}/libavcodec${SOLIBS}"
-FILES:libavdevice = "${libdir}/libavdevice${SOLIBS}"
-FILES:libavfilter = "${libdir}/libavfilter${SOLIBS}"
-FILES:libavformat = "${libdir}/libavformat${SOLIBS}"
-FILES:libavutil = "${libdir}/libavutil${SOLIBS}"
-FILES:libpostproc = "${libdir}/libpostproc${SOLIBS}"
-FILES:libswresample = "${libdir}/libswresample${SOLIBS}"
-FILES:libswscale = "${libdir}/libswscale${SOLIBS}"
-
-# ffmpeg disables PIC on some platforms (e.g. x86-32)
-INSANE_SKIP:${MLPREFIX}libavcodec = "textrel"
-INSANE_SKIP:${MLPREFIX}libavdevice = "textrel"
-INSANE_SKIP:${MLPREFIX}libavfilter = "textrel"
-INSANE_SKIP:${MLPREFIX}libavformat = "textrel"
-INSANE_SKIP:${MLPREFIX}libavutil = "textrel"
-INSANE_SKIP:${MLPREFIX}libswscale = "textrel"
-INSANE_SKIP:${MLPREFIX}libswresample = "textrel"
-INSANE_SKIP:${MLPREFIX}libpostproc = "textrel"
diff --git a/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_5.1.bb b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_5.1.bb
new file mode 100644
index 0000000..bb507b4
--- /dev/null
+++ b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_5.1.bb
@@ -0,0 +1,174 @@
+SUMMARY = "A complete, cross-platform solution to record, convert and stream audio and video."
+DESCRIPTION = "FFmpeg is the leading multimedia framework, able to decode, encode, transcode, \
+               mux, demux, stream, filter and play pretty much anything that humans and machines \
+               have created. It supports the most obscure ancient formats up to the cutting edge."
+HOMEPAGE = "https://www.ffmpeg.org/"
+SECTION = "libs"
+
+LICENSE = "GPL-2.0-or-later & LGPL-2.1-or-later & ISC & MIT & BSD-2-Clause & BSD-3-Clause & IJG"
+LICENSE:${PN} = "GPL-2.0-or-later"
+LICENSE:libavcodec = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
+LICENSE:libavdevice = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
+LICENSE:libavfilter = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
+LICENSE:libavformat = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
+LICENSE:libavutil = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
+LICENSE:libpostproc = "GPL-2.0-or-later"
+LICENSE:libswresample = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
+LICENSE:libswscale = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPL-2.0-or-later', 'LGPL-2.1-or-later', d)}"
+LICENSE_FLAGS = "commercial"
+
+LIC_FILES_CHKSUM = "file://COPYING.GPLv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://COPYING.GPLv3;md5=d32239bcb673463ab874e80d47fae504 \
+                    file://COPYING.LGPLv2.1;md5=bd7a443320af8c812e4c18d1b79df004 \
+                    file://COPYING.LGPLv3;md5=e6a600fd5e1d9cbde2d983680233ad02"
+
+SRC_URI = "https://www.ffmpeg.org/releases/${BP}.tar.xz"
+SRC_URI[sha256sum] = "55eb6aab5ee235550fa54a33eaf8bf1b4ec66c01453182b12f6a993d75698b03"
+
+# Build fails when thumb is enabled: https://bugzilla.yoctoproject.org/show_bug.cgi?id=7717
+ARM_INSTRUCTION_SET:armv4 = "arm"
+ARM_INSTRUCTION_SET:armv5 = "arm"
+ARM_INSTRUCTION_SET:armv6 = "arm"
+
+# Should be API compatible with libav (which was a fork of ffmpeg)
+# libpostproc was previously packaged from a separate recipe
+PROVIDES = "libav libpostproc"
+
+DEPENDS = "nasm-native"
+
+inherit autotools pkgconfig
+
+PACKAGECONFIG ??= "avdevice avfilter avcodec avformat swresample swscale postproc \
+                   alsa bzlib lzma pic pthreads shared theora zlib \
+                   ${@bb.utils.contains('AVAILTUNES', 'mips32r2', 'mips32r2', '', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'xv xcb', '', d)}"
+
+# libraries to build in addition to avutil
+PACKAGECONFIG[avdevice] = "--enable-avdevice,--disable-avdevice"
+PACKAGECONFIG[avfilter] = "--enable-avfilter,--disable-avfilter"
+PACKAGECONFIG[avcodec] = "--enable-avcodec,--disable-avcodec"
+PACKAGECONFIG[avformat] = "--enable-avformat,--disable-avformat"
+PACKAGECONFIG[swresample] = "--enable-swresample,--disable-swresample"
+PACKAGECONFIG[swscale] = "--enable-swscale,--disable-swscale"
+PACKAGECONFIG[postproc] = "--enable-postproc,--disable-postproc"
+
+# features to support
+PACKAGECONFIG[alsa] = "--enable-alsa,--disable-alsa,alsa-lib"
+PACKAGECONFIG[altivec] = "--enable-altivec,--disable-altivec,"
+PACKAGECONFIG[bzlib] = "--enable-bzlib,--disable-bzlib,bzip2"
+PACKAGECONFIG[fdk-aac] = "--enable-libfdk-aac --enable-nonfree,--disable-libfdk-aac,fdk-aac"
+PACKAGECONFIG[gpl] = "--enable-gpl,--disable-gpl"
+PACKAGECONFIG[gsm] = "--enable-libgsm,--disable-libgsm,libgsm"
+PACKAGECONFIG[jack] = "--enable-indev=jack,--disable-indev=jack,jack"
+PACKAGECONFIG[libopus] = "--enable-libopus,--disable-libopus,libopus"
+PACKAGECONFIG[libvorbis] = "--enable-libvorbis,--disable-libvorbis,libvorbis"
+PACKAGECONFIG[lzma] = "--enable-lzma,--disable-lzma,xz"
+PACKAGECONFIG[mfx] = "--enable-libmfx,--disable-libmfx,intel-mediasdk"
+PACKAGECONFIG[mp3lame] = "--enable-libmp3lame,--disable-libmp3lame,lame"
+PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
+PACKAGECONFIG[sdl2] = "--enable-sdl2,--disable-sdl2,virtual/libsdl2"
+PACKAGECONFIG[speex] = "--enable-libspeex,--disable-libspeex,speex"
+PACKAGECONFIG[srt] = "--enable-libsrt,--disable-libsrt,srt"
+PACKAGECONFIG[theora] = "--enable-libtheora,--disable-libtheora,libtheora libogg"
+PACKAGECONFIG[vaapi] = "--enable-vaapi,--disable-vaapi,libva"
+PACKAGECONFIG[vdpau] = "--enable-vdpau,--disable-vdpau,libvdpau"
+PACKAGECONFIG[vpx] = "--enable-libvpx,--disable-libvpx,libvpx"
+PACKAGECONFIG[x264] = "--enable-libx264,--disable-libx264,x264"
+PACKAGECONFIG[x265] = "--enable-libx265,--disable-libx265,x265"
+PACKAGECONFIG[xcb] = "--enable-libxcb,--disable-libxcb,libxcb"
+PACKAGECONFIG[xv] = "--enable-outdev=xv,--disable-outdev=xv,libxv"
+PACKAGECONFIG[zlib] = "--enable-zlib,--disable-zlib,zlib"
+
+# other configuration options
+PACKAGECONFIG[mips32r2] = ",--disable-mipsdsp --disable-mipsdspr2"
+PACKAGECONFIG[pic] = "--enable-pic"
+PACKAGECONFIG[pthreads] = "--enable-pthreads,--disable-pthreads"
+PACKAGECONFIG[shared] = "--enable-shared"
+PACKAGECONFIG[strip] = ",--disable-stripping"
+
+# Check codecs that require --enable-nonfree
+USE_NONFREE = "${@bb.utils.contains_any('PACKAGECONFIG', [ 'openssl' ], 'yes', '', d)}"
+
+def cpu(d):
+    for arg in (d.getVar('TUNE_CCARGS') or '').split():
+        if arg.startswith('-mcpu='):
+            return arg[6:]
+    return 'generic'
+
+EXTRA_OECONF = " \
+    ${@bb.utils.contains('USE_NONFREE', 'yes', '--enable-nonfree', '', d)} \
+    \
+    --cross-prefix=${TARGET_PREFIX} \
+    \
+    --ld='${CCLD}' \
+    --cc='${CC}' \
+    --cxx='${CXX}' \
+    --arch=${TARGET_ARCH} \
+    --target-os='linux' \
+    --enable-cross-compile \
+    --extra-cflags='${CFLAGS} ${HOST_CC_ARCH}${TOOLCHAIN_OPTIONS}' \
+    --extra-ldflags='${LDFLAGS}' \
+    --sysroot='${STAGING_DIR_TARGET}' \
+    ${EXTRA_FFCONF} \
+    --libdir=${libdir} \
+    --shlibdir=${libdir} \
+    --datadir=${datadir}/ffmpeg \
+    --cpu=${@cpu(d)} \
+    --pkg-config=pkg-config \
+"
+
+EXTRA_OECONF:append:linux-gnux32 = " --disable-asm"
+
+EXTRA_OECONF += "${@bb.utils.contains('TUNE_FEATURES', 'mipsisa64r6', '--disable-mips64r2 --disable-mips32r2', '', d)}"
+EXTRA_OECONF += "${@bb.utils.contains('TUNE_FEATURES', 'mipsisa64r2', '--disable-mips64r6 --disable-mips32r6', '', d)}"
+EXTRA_OECONF += "${@bb.utils.contains('TUNE_FEATURES', 'mips32r2', '--disable-mips64r6 --disable-mips32r6', '', d)}"
+EXTRA_OECONF += "${@bb.utils.contains('TUNE_FEATURES', 'mips32r6', '--disable-mips64r2 --disable-mips32r2', '', d)}"
+EXTRA_OECONF:append:mips = " --extra-libs=-latomic --disable-mips32r5 --disable-mipsdsp --disable-mipsdspr2 \
+                             --disable-loongson2 --disable-loongson3 --disable-mmi --disable-msa"
+EXTRA_OECONF:append:riscv32 = " --extra-libs=-latomic"
+EXTRA_OECONF:append:armv5 = " --extra-libs=-latomic"
+EXTRA_OECONF:append:powerpc = " --extra-libs=-latomic"
+
+# gold crashes on x86, another solution is to --disable-asm but thats more hacky
+# ld.gold: internal error in relocate_section, at ../../gold/i386.cc:3684
+
+LDFLAGS:append:x86 = "${@bb.utils.contains('DISTRO_FEATURES', 'ld-is-gold', ' -fuse-ld=bfd ', '', d)}"
+
+EXTRA_OEMAKE = "V=1"
+
+do_configure() {
+    ${S}/configure ${EXTRA_OECONF}
+}
+
+# patch out build host paths for reproducibility
+do_compile:prepend:class-target() {
+        sed -i -e "s,${WORKDIR},,g" ${B}/config.h
+}
+
+PACKAGES =+ "libavcodec \
+             libavdevice \
+             libavfilter \
+             libavformat \
+             libavutil \
+             libpostproc \
+             libswresample \
+             libswscale"
+
+FILES:libavcodec = "${libdir}/libavcodec${SOLIBS}"
+FILES:libavdevice = "${libdir}/libavdevice${SOLIBS}"
+FILES:libavfilter = "${libdir}/libavfilter${SOLIBS}"
+FILES:libavformat = "${libdir}/libavformat${SOLIBS}"
+FILES:libavutil = "${libdir}/libavutil${SOLIBS}"
+FILES:libpostproc = "${libdir}/libpostproc${SOLIBS}"
+FILES:libswresample = "${libdir}/libswresample${SOLIBS}"
+FILES:libswscale = "${libdir}/libswscale${SOLIBS}"
+
+# ffmpeg disables PIC on some platforms (e.g. x86-32)
+INSANE_SKIP:${MLPREFIX}libavcodec = "textrel"
+INSANE_SKIP:${MLPREFIX}libavdevice = "textrel"
+INSANE_SKIP:${MLPREFIX}libavfilter = "textrel"
+INSANE_SKIP:${MLPREFIX}libavformat = "textrel"
+INSANE_SKIP:${MLPREFIX}libavutil = "textrel"
+INSANE_SKIP:${MLPREFIX}libswscale = "textrel"
+INSANE_SKIP:${MLPREFIX}libswresample = "textrel"
+INSANE_SKIP:${MLPREFIX}libpostproc = "textrel"
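
For reference, the cpu() helper in the ffmpeg recipe above simply scans TUNE_CCARGS for a
-mcpu= option and falls back to "generic", so --cpu tracks whatever the machine tune sets.
A minimal standalone sketch of that lookup, with made-up flag values for illustration:

    def pick_cpu(tune_ccargs):
        # mirrors the recipe's cpu(d): take the value of the first -mcpu= flag, if any
        for arg in (tune_ccargs or '').split():
            if arg.startswith('-mcpu='):
                return arg[6:]
        return 'generic'

    print(pick_cpu('-mcpu=cortex-a72 -mfloat-abi=hard'))  # -> cortex-a72
    print(pick_cpu(''))                                   # -> generic
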
diff --git a/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0003-ensure-valid-sentinals-for-gst_structure_get-etc.patch b/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0003-ensure-valid-sentinals-for-gst_structure_get-etc.patch
deleted file mode 100644
index 280cbf9..0000000
--- a/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0003-ensure-valid-sentinals-for-gst_structure_get-etc.patch
+++ /dev/null
@@ -1,86 +0,0 @@
-From 001fa08542dd5fc79571f7c803b2d3dd59c04a06 Mon Sep 17 00:00:00 2001
-From: Andre McCurdy <armccurdy@gmail.com>
-Date: Tue, 9 Feb 2016 14:00:00 -0800
-Subject: [PATCH] ensure valid sentinals for gst_structure_get() etc
-
-For GStreamer functions declared with G_GNUC_NULL_TERMINATED,
-ie __attribute__((__sentinel__)), gcc will generate a warning if the
-last parameter passed to the function is not NULL (where a valid NULL
-in this context is defined as zero with any pointer type).
-
-The C callers to such functions within gst-plugins-bad use the C NULL
-definition (ie ((void*)0)), which is a valid sentinel.
-
-However the C++ NULL definition (ie 0L), is not a valid sentinel
-without an explicit cast to a pointer type.
-
-Upstream-Status: Pending
-
-Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
-
----
- sys/decklink/gstdecklink.cpp          | 10 +++++-----
- sys/decklink/gstdecklinkaudiosrc.cpp  |  2 +-
- sys/decklink/gstdecklinkvideosink.cpp |  2 +-
- 3 files changed, 7 insertions(+), 7 deletions(-)
-
-diff --git a/sys/decklink/gstdecklink.cpp b/sys/decklink/gstdecklink.cpp
-index 3f79deb..96600c6 100644
---- a/sys/decklink/gstdecklink.cpp
-+++ b/sys/decklink/gstdecklink.cpp
-@@ -680,7 +680,7 @@ gst_decklink_mode_get_generic_structure (GstDecklinkModeEnum e)
-       "pixel-aspect-ratio", GST_TYPE_FRACTION, mode->par_n, mode->par_d,
-       "interlace-mode", G_TYPE_STRING,
-       mode->interlaced ? "interleaved" : "progressive",
--      "framerate", GST_TYPE_FRACTION, mode->fps_n, mode->fps_d, NULL);
-+      "framerate", GST_TYPE_FRACTION, mode->fps_n, mode->fps_d, (void*)NULL);
- 
-   return s;
- }
-@@ -705,16 +705,16 @@ gst_decklink_mode_get_structure (GstDecklinkModeEnum e, BMDPixelFormat f,
-     case bmdFormat8BitYUV:     /* '2vuy' */
-       gst_structure_set (s, "format", G_TYPE_STRING, "UYVY",
-           "colorimetry", G_TYPE_STRING, mode->colorimetry,
--          "chroma-site", G_TYPE_STRING, "mpeg2", NULL);
-+          "chroma-site", G_TYPE_STRING, "mpeg2", (void*)NULL);
-       break;
-     case bmdFormat10BitYUV:    /* 'v210' */
--      gst_structure_set (s, "format", G_TYPE_STRING, "v210", NULL);
-+      gst_structure_set (s, "format", G_TYPE_STRING, "v210", (void*)NULL);
-       break;
-     case bmdFormat8BitARGB:    /* 'ARGB' */
--      gst_structure_set (s, "format", G_TYPE_STRING, "ARGB", NULL);
-+      gst_structure_set (s, "format", G_TYPE_STRING, "ARGB", (void*)NULL);
-       break;
-     case bmdFormat8BitBGRA:    /* 'BGRA' */
--      gst_structure_set (s, "format", G_TYPE_STRING, "BGRA", NULL);
-+      gst_structure_set (s, "format", G_TYPE_STRING, "BGRA", (void*)NULL);
-       break;
-     case bmdFormat10BitRGB:    /* 'r210' Big-endian RGB 10-bit per component with SMPTE video levels (64-960). Packed as 2:10:10:10 */
-     case bmdFormat12BitRGB:    /* 'R12B' Big-endian RGB 12-bit per component with full range (0-4095). Packed as 12-bit per component */
-diff --git a/sys/decklink/gstdecklinkaudiosrc.cpp b/sys/decklink/gstdecklinkaudiosrc.cpp
-index 50ad5cc..d209180 100644
---- a/sys/decklink/gstdecklinkaudiosrc.cpp
-+++ b/sys/decklink/gstdecklinkaudiosrc.cpp
-@@ -388,7 +388,7 @@ gst_decklink_audio_src_start (GstDecklinkAudioSrc * self)
-       g_mutex_unlock (&self->input->lock);
- 
-       if (videosrc) {
--        g_object_get (videosrc, "connection", &vconn, NULL);
-+        g_object_get (videosrc, "connection", &vconn, (void *) NULL);
-         gst_object_unref (videosrc);
- 
-         switch (vconn) {
-diff --git a/sys/decklink/gstdecklinkvideosink.cpp b/sys/decklink/gstdecklinkvideosink.cpp
-index a64c046..07a09e8 100644
---- a/sys/decklink/gstdecklinkvideosink.cpp
-+++ b/sys/decklink/gstdecklinkvideosink.cpp
-@@ -288,7 +288,7 @@ reset_framerate (GstCapsFeatures * features, GstStructure * structure,
-     gpointer user_data)
- {
-   gst_structure_set (structure, "framerate", GST_TYPE_FRACTION_RANGE, 0, 1,
--      G_MAXINT, 1, NULL);
-+      G_MAXINT, 1, (void *) NULL);
- 
-   return TRUE;
- }
diff --git a/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.20.3.bb b/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.20.3.bb
index 2dd4239..39d5e08 100644
--- a/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.20.3.bb
+++ b/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.20.3.bb
@@ -8,7 +8,6 @@
 SRC_URI = "https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-${PV}.tar.xz \
            file://0001-fix-maybe-uninitialized-warnings-when-compiling-with.patch \
            file://0002-avoid-including-sys-poll.h-directly.patch \
-           file://0003-ensure-valid-sentinals-for-gst_structure_get-etc.patch \
            file://0004-opencv-resolve-missing-opencv-data-dir-in-yocto-buil.patch \
            "
 SRC_URI[sha256sum] = "7a11c13b55dd1d2386dd902219e41cbfcdda8e1e0aa3e738186c95074b35da4f"
diff --git a/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base/0001-include-required-system-headers-for-isspace-and-ssca.patch b/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base/0001-include-required-system-headers-for-isspace-and-ssca.patch
new file mode 100644
index 0000000..23c1048
--- /dev/null
+++ b/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base/0001-include-required-system-headers-for-isspace-and-ssca.patch
@@ -0,0 +1,35 @@
+From c85a53a41d4e6bfc49c377217ece12a1f330a690 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 12 Aug 2022 22:50:06 -0700
+Subject: [PATCH] include required system headers for isspace() and sscanf()
+ functions
+
+Newer compilers (clang 15) have turned stricter and error out instead
+of warning on implicit function declarations
+Fixes
+gstssaparse.c:297:12: error: call to undeclared library function 'isspace' with type 'int (int)'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
+while (isspace(*t))
+
+Upstream-Status: Submitted [https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2879]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ gst/subparse/gstssaparse.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/gst/subparse/gstssaparse.c b/gst/subparse/gstssaparse.c
+index ff802fa..5ebe678 100755
+--- a/gst/subparse/gstssaparse.c
++++ b/gst/subparse/gstssaparse.c
+@@ -24,6 +24,8 @@
+ #include "config.h"
+ #endif
+ 
++#include <ctype.h>              /* isspace() */
++#include <stdio.h>              /* sscanf() */
+ #include <stdlib.h>             /* atoi() */
+ #include <string.h>
+ 
+-- 
+2.37.1
+
diff --git a/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.20.3.bb b/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.20.3.bb
index 7eebbba..e5e346e 100644
--- a/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.20.3.bb
+++ b/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.20.3.bb
@@ -10,6 +10,7 @@
            file://0001-ENGR00312515-get-caps-from-src-pad-when-query-caps.patch \
            file://0003-viv-fb-Make-sure-config.h-is-included.patch \
            file://0002-ssaparse-enhance-SSA-text-lines-parsing.patch \
+           file://0001-include-required-system-headers-for-isspace-and-ssca.patch \
            "
 SRC_URI[sha256sum] = "7e30b3dd81a70380ff7554f998471d6996ff76bbe6fc5447096f851e24473c9f"
 
diff --git a/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-34526.patch b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-34526.patch
new file mode 100644
index 0000000..54c3345
--- /dev/null
+++ b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-34526.patch
@@ -0,0 +1,32 @@
+From 275735d0354e39c0ac1dc3c0db2120d6f31d1990 Mon Sep 17 00:00:00 2001
+From: Even Rouault <even.rouault@spatialys.com>
+Date: Mon, 27 Jun 2022 16:09:43 +0200
+Subject: [PATCH] _TIFFCheckFieldIsValidForCodec(): return FALSE when passed a
+ codec-specific tag and the codec is not configured (fixes #433)
+
+This avoids crashes when querying such tags
+
+CVE: CVE-2022-34526
+Upstream-Status: Backport [https://gitlab.com/libtiff/libtiff/-/commit/275735d0354e39c0ac1dc3c0db2120d6f31d1990]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ libtiff/tif_dirinfo.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/libtiff/tif_dirinfo.c b/libtiff/tif_dirinfo.c
+index c30f569b..3371cb5c 100644
+--- a/libtiff/tif_dirinfo.c
++++ b/libtiff/tif_dirinfo.c
+@@ -1191,6 +1191,9 @@ _TIFFCheckFieldIsValidForCodec(TIFF *tif, ttag_t tag)
+ 	    default:
+ 		return 1;
+ 	}
++	if( !TIFFIsCODECConfigured(tif->tif_dir.td_compression) ) {
++		return 0;
++	}
+ 	/* Check if codec specific tags are allowed for the current
+ 	 * compression scheme (codec) */
+ 	switch (tif->tif_dir.td_compression) {
+-- 
+GitLab
+
diff --git a/poky/meta/recipes-multimedia/libtiff/tiff_4.4.0.bb b/poky/meta/recipes-multimedia/libtiff/tiff_4.4.0.bb
index 0af956a..e30df0b 100644
--- a/poky/meta/recipes-multimedia/libtiff/tiff_4.4.0.bb
+++ b/poky/meta/recipes-multimedia/libtiff/tiff_4.4.0.bb
@@ -9,7 +9,9 @@
 CVE_PRODUCT = "libtiff"
 
 SRC_URI = "http://download.osgeo.org/libtiff/tiff-${PV}.tar.gz \
-           file://0001-fix-the-FPE-in-tiffcrop-415-427-and-428.patch"
+           file://0001-fix-the-FPE-in-tiffcrop-415-427-and-428.patch \
+           file://CVE-2022-34526.patch \
+           "
 
 SRC_URI[sha256sum] = "917223b37538959aca3b790d2d73aa6e626b688e02dcda272aec24c2f498abed"
 
diff --git a/poky/meta/recipes-multimedia/mpg123/mpg123_1.30.1.bb b/poky/meta/recipes-multimedia/mpg123/mpg123_1.30.1.bb
deleted file mode 100644
index 502d9d1..0000000
--- a/poky/meta/recipes-multimedia/mpg123/mpg123_1.30.1.bb
+++ /dev/null
@@ -1,52 +0,0 @@
-SUMMARY = "Audio decoder for MPEG-1 Layer 1/2/3"
-DESCRIPTION = "The core of mpg123 is an MPEG-1 Layer 1/2/3 decoding library, which can be used by other programs. \
-mpg123 also comes with a command-line tool which can playback using ALSA, PulseAudio, OSS, and several other APIs, \
-and also can write the decoded audio to WAV."
-HOMEPAGE = "http://mpg123.de/"
-BUGTRACKER = "http://sourceforge.net/p/mpg123/bugs/"
-SECTION = "multimedia"
-
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=e7b9c15fcfb986abb4cc5e8400a24169"
-
-SRC_URI = "https://www.mpg123.de/download/${BP}.tar.bz2"
-SRC_URI[sha256sum] = "1b20c9c751bea9be556749bd7f97cf580f52ed11f2540756e9af26ae036e4c59"
-
-UPSTREAM_CHECK_REGEX = "mpg123-(?P<pver>\d+(\.\d+)+)\.tar"
-
-inherit autotools pkgconfig
-
-# The options should be mutually exclusive for configuration script.
-# If both alsa and pulseaudio are specified (as in the default distro features)
-# pulseaudio takes precedence.
-PACKAGECONFIG_ALSA = "${@bb.utils.filter('DISTRO_FEATURES', 'alsa', d)}"
-PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'pulseaudio', 'pulseaudio', '${PACKAGECONFIG_ALSA}', d)}"
-
-PACKAGECONFIG[alsa] = "--with-default-audio=alsa,,alsa-lib"
-PACKAGECONFIG[esd] = ",,esound"
-PACKAGECONFIG[jack] = ",,jack"
-PACKAGECONFIG[openal] = ",,openal-soft"
-PACKAGECONFIG[portaudio] = ",,portaudio-v19"
-PACKAGECONFIG[pulseaudio] = "--with-default-audio=pulse,,pulseaudio"
-PACKAGECONFIG[sdl] = ",,libsdl2"
-
-# Following are possible sound output modules:
-# alsa arts coreaudio dummy esd jack nas openal os2 oss portaudio pulse sdl sndio sun tinyalsa win32 win32_wasapi
-AUDIOMODS += "${@bb.utils.filter('PACKAGECONFIG', 'alsa esd jack openal portaudio sdl', d)}"
-AUDIOMODS += "${@bb.utils.contains('PACKAGECONFIG', 'pulseaudio', 'pulse', '', d)}"
-
-EXTRA_OECONF = " \
-    --enable-shared \
-    --with-audio='${AUDIOMODS}' \
-    ${@bb.utils.contains('TUNE_FEATURES', 'neon', '--with-cpu=neon', '', d)} \
-    ${@bb.utils.contains('TUNE_FEATURES', 'altivec', '--with-cpu=altivec', '', d)} \
-    ${@bb.utils.contains('TARGET_FPU', 'soft', '--with-cpu=generic_nofpu', '', d)} \
-"
-# Fails to build with thumb-1 (qemuarm)
-#| {standard input}: Assembler messages:
-#| {standard input}:47: Error: selected processor does not support Thumb mode `smull r5,r6,r7,r4'
-#| {standard input}:48: Error: shifts in CMP/MOV instructions are only supported in unified syntax -- `mov r5,r5,lsr#24'
-#...
-#| make[3]: *** [equalizer.lo] Error 1
-ARM_INSTRUCTION_SET:armv4 = "arm"
-ARM_INSTRUCTION_SET:armv5 = "arm"
diff --git a/poky/meta/recipes-multimedia/mpg123/mpg123_1.30.2.bb b/poky/meta/recipes-multimedia/mpg123/mpg123_1.30.2.bb
new file mode 100644
index 0000000..f40bff1
--- /dev/null
+++ b/poky/meta/recipes-multimedia/mpg123/mpg123_1.30.2.bb
@@ -0,0 +1,52 @@
+SUMMARY = "Audio decoder for MPEG-1 Layer 1/2/3"
+DESCRIPTION = "The core of mpg123 is an MPEG-1 Layer 1/2/3 decoding library, which can be used by other programs. \
+mpg123 also comes with a command-line tool which can playback using ALSA, PulseAudio, OSS, and several other APIs, \
+and also can write the decoded audio to WAV."
+HOMEPAGE = "http://mpg123.de/"
+BUGTRACKER = "http://sourceforge.net/p/mpg123/bugs/"
+SECTION = "multimedia"
+
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=e7b9c15fcfb986abb4cc5e8400a24169"
+
+SRC_URI = "https://www.mpg123.de/download/${BP}.tar.bz2"
+SRC_URI[sha256sum] = "c7ea863756bb79daed7cba2942ad3b267a410f26d2dfbd9aaf84451ff28a05d7"
+
+UPSTREAM_CHECK_REGEX = "mpg123-(?P<pver>\d+(\.\d+)+)\.tar"
+
+inherit autotools pkgconfig
+
+# The options should be mutually exclusive for the configuration script.
+# If both alsa and pulseaudio are specified (as in the default distro features)
+# pulseaudio takes precedence.
+PACKAGECONFIG_ALSA = "${@bb.utils.filter('DISTRO_FEATURES', 'alsa', d)}"
+PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'pulseaudio', 'pulseaudio', '${PACKAGECONFIG_ALSA}', d)}"
+
+PACKAGECONFIG[alsa] = "--with-default-audio=alsa,,alsa-lib"
+PACKAGECONFIG[esd] = ",,esound"
+PACKAGECONFIG[jack] = ",,jack"
+PACKAGECONFIG[openal] = ",,openal-soft"
+PACKAGECONFIG[portaudio] = ",,portaudio-v19"
+PACKAGECONFIG[pulseaudio] = "--with-default-audio=pulse,,pulseaudio"
+PACKAGECONFIG[sdl] = ",,libsdl2"
+
+# Following are possible sound output modules:
+# alsa arts coreaudio dummy esd jack nas openal os2 oss portaudio pulse sdl sndio sun tinyalsa win32 win32_wasapi
+AUDIOMODS += "${@bb.utils.filter('PACKAGECONFIG', 'alsa esd jack openal portaudio sdl', d)}"
+AUDIOMODS += "${@bb.utils.contains('PACKAGECONFIG', 'pulseaudio', 'pulse', '', d)}"
+
+EXTRA_OECONF = " \
+    --enable-shared \
+    --with-audio='${AUDIOMODS}' \
+    ${@bb.utils.contains('TUNE_FEATURES', 'neon', '--with-cpu=neon', '', d)} \
+    ${@bb.utils.contains('TUNE_FEATURES', 'altivec', '--with-cpu=altivec', '', d)} \
+    ${@bb.utils.contains('TARGET_FPU', 'soft', '--with-cpu=generic_nofpu', '', d)} \
+"
+# Fails to build with thumb-1 (qemuarm)
+#| {standard input}: Assembler messages:
+#| {standard input}:47: Error: selected processor does not support Thumb mode `smull r5,r6,r7,r4'
+#| {standard input}:48: Error: shifts in CMP/MOV instructions are only supported in unified syntax -- `mov r5,r5,lsr#24'
+#...
+#| make[3]: *** [equalizer.lo] Error 1
+ARM_INSTRUCTION_SET:armv4 = "arm"
+ARM_INSTRUCTION_SET:armv5 = "arm"
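
For reference, the AUDIOMODS lines above use bb.utils.filter, which returns the words common
to the variable's value and the given list. A rough standalone approximation (the
PACKAGECONFIG value here is hypothetical):

    def filter_words(variable_value, checkvalues):
        # approximates bb.utils.filter(): sorted intersection of the two word lists
        return ' '.join(sorted(set(variable_value.split()) & set(checkvalues.split())))

    packageconfig = 'alsa pulseaudio'  # hypothetical value
    print(filter_words(packageconfig, 'alsa esd jack openal portaudio sdl'))  # -> alsa
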
diff --git a/poky/meta/recipes-multimedia/webp/libwebp_1.2.2.bb b/poky/meta/recipes-multimedia/webp/libwebp_1.2.2.bb
deleted file mode 100644
index 281cff1..0000000
--- a/poky/meta/recipes-multimedia/webp/libwebp_1.2.2.bb
+++ /dev/null
@@ -1,55 +0,0 @@
-SUMMARY = "WebP is an image format designed for the Web"
-DESCRIPTION = "WebP is a method of lossy and lossless compression that can be \
-               used on a large variety of photographic, translucent and \
-               graphical images found on the web. The degree of lossy \
-               compression is adjustable so a user can choose the trade-off \
-               between file size and image quality. WebP typically achieves \
-               an average of 30% more compression than JPEG and JPEG 2000, \
-               without loss of image quality."
-HOMEPAGE = "https://developers.google.com/speed/webp/"
-SECTION = "libs"
-
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://COPYING;md5=6e8dee932c26f2dab503abf70c96d8bb \
-                    file://PATENTS;md5=c6926d0cb07d296f886ab6e0cc5a85b7"
-
-SRC_URI = "http://downloads.webmproject.org/releases/webp/${BP}.tar.gz"
-SRC_URI[sha256sum] = "7656532f837af5f4cec3ff6bafe552c044dc39bf453587bd5b77450802f4aee6"
-
-UPSTREAM_CHECK_URI = "http://downloads.webmproject.org/releases/webp/index.html"
-
-EXTRA_OECONF = " \
-    --disable-wic \
-    --enable-libwebpmux \
-    --enable-libwebpdemux \
-    --enable-threading \
-"
-
-# Do not trust configure to determine if neon is available.
-#
-EXTRA_OECONF_ARM = " \
-    ${@bb.utils.contains("TUNE_FEATURES","neon","--enable-neon","--disable-neon",d)} \
-"
-EXTRA_OECONF:append:arm = " ${EXTRA_OECONF_ARM}"
-EXTRA_OECONF:append:armeb = " ${EXTRA_OECONF_ARM}"
-
-inherit autotools lib_package
-
-PACKAGECONFIG ??= ""
-
-# libwebpdecoder is a subset of libwebp, don't build it unless requested
-PACKAGECONFIG[decoder] = "--enable-libwebpdecoder,--disable-libwebpdecoder"
-
-# Apply for examples programs: cwebp and dwebp
-PACKAGECONFIG[gif] = "--enable-gif,--disable-gif,giflib"
-PACKAGECONFIG[jpeg] = "--enable-jpeg,--disable-jpeg,jpeg"
-PACKAGECONFIG[png] = "--enable-png,--disable-png,,libpng"
-PACKAGECONFIG[tiff] = "--enable-tiff,--disable-tiff,tiff"
-
-# Apply only for example program vwebp
-PACKAGECONFIG[gl] = "--enable-gl,--disable-gl,mesa-glut"
-
-PACKAGES =+ "${PN}-gif2webp"
-
-DESCRIPTION:${PN}-gif2webp = "Simple tool to convert animated GIFs to WebP"
-FILES:${PN}-gif2webp = "${bindir}/gif2webp"
diff --git a/poky/meta/recipes-multimedia/webp/libwebp_1.2.4.bb b/poky/meta/recipes-multimedia/webp/libwebp_1.2.4.bb
new file mode 100644
index 0000000..2635898
--- /dev/null
+++ b/poky/meta/recipes-multimedia/webp/libwebp_1.2.4.bb
@@ -0,0 +1,55 @@
+SUMMARY = "WebP is an image format designed for the Web"
+DESCRIPTION = "WebP is a method of lossy and lossless compression that can be \
+               used on a large variety of photographic, translucent and \
+               graphical images found on the web. The degree of lossy \
+               compression is adjustable so a user can choose the trade-off \
+               between file size and image quality. WebP typically achieves \
+               an average of 30% more compression than JPEG and JPEG 2000, \
+               without loss of image quality."
+HOMEPAGE = "https://developers.google.com/speed/webp/"
+SECTION = "libs"
+
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://COPYING;md5=6e8dee932c26f2dab503abf70c96d8bb \
+                    file://PATENTS;md5=c6926d0cb07d296f886ab6e0cc5a85b7"
+
+SRC_URI = "http://downloads.webmproject.org/releases/webp/${BP}.tar.gz"
+SRC_URI[sha256sum] = "7bf5a8a28cc69bcfa8cb214f2c3095703c6b73ac5fba4d5480c205331d9494df"
+
+UPSTREAM_CHECK_URI = "http://downloads.webmproject.org/releases/webp/index.html"
+
+EXTRA_OECONF = " \
+    --disable-wic \
+    --enable-libwebpmux \
+    --enable-libwebpdemux \
+    --enable-threading \
+"
+
+# Do not trust configure to determine if neon is available.
+#
+EXTRA_OECONF_ARM = " \
+    ${@bb.utils.contains("TUNE_FEATURES","neon","--enable-neon","--disable-neon",d)} \
+"
+EXTRA_OECONF:append:arm = " ${EXTRA_OECONF_ARM}"
+EXTRA_OECONF:append:armeb = " ${EXTRA_OECONF_ARM}"
+
+inherit autotools lib_package
+
+PACKAGECONFIG ??= ""
+
+# libwebpdecoder is a subset of libwebp, don't build it unless requested
+PACKAGECONFIG[decoder] = "--enable-libwebpdecoder,--disable-libwebpdecoder"
+
+# Apply to the example programs: cwebp and dwebp
+PACKAGECONFIG[gif] = "--enable-gif,--disable-gif,giflib"
+PACKAGECONFIG[jpeg] = "--enable-jpeg,--disable-jpeg,jpeg"
+PACKAGECONFIG[png] = "--enable-png,--disable-png,,libpng"
+PACKAGECONFIG[tiff] = "--enable-tiff,--disable-tiff,tiff"
+
+# Apply only for example program vwebp
+PACKAGECONFIG[gl] = "--enable-gl,--disable-gl,mesa-glut"
+
+PACKAGES =+ "${PN}-gif2webp"
+
+DESCRIPTION:${PN}-gif2webp = "Simple tool to convert animated GIFs to WebP"
+FILES:${PN}-gif2webp = "${bindir}/gif2webp"
diff --git a/poky/meta/recipes-sato/puzzles/puzzles_git.bb b/poky/meta/recipes-sato/puzzles/puzzles_git.bb
index 68bdee8..9e9bcd2 100644
--- a/poky/meta/recipes-sato/puzzles/puzzles_git.bb
+++ b/poky/meta/recipes-sato/puzzles/puzzles_git.bb
@@ -10,7 +10,7 @@
 SRC_URI = "git://git.tartarus.org/simon/puzzles.git;branch=main"
 
 UPSTREAM_CHECK_COMMITS = "1"
-SRCREV = "387d323dd8d579db2c90b499b3b19f746cbdbce5"
+SRCREV = "8399cff6a3b9bf15c6d1d9e0c905d1411f25f9b8"
 PE = "2"
 PV = "0.0+git${SRCPV}"
 
diff --git a/poky/meta/recipes-sato/webkit/libwpe_1.12.0.bb b/poky/meta/recipes-sato/webkit/libwpe_1.12.0.bb
deleted file mode 100644
index ac4ee3e..0000000
--- a/poky/meta/recipes-sato/webkit/libwpe_1.12.0.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-SUMMARY = "General-purpose library specifically developed for the WPE-flavored port of WebKit."
-HOMEPAGE = "https://github.com/WebPlatformForEmbedded/libwpe"
-BUGTRACKER = "https://github.com/WebPlatformForEmbedded/libwpe/issues"
-
-LICENSE = "BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://COPYING;md5=371a616eb4903c6cb79e9893a5f615cc"
-DEPENDS = "virtual/egl libxkbcommon"
-
-inherit cmake features_check pkgconfig
-
-REQUIRED_DISTRO_FEATURES = "opengl"
-
-SRC_URI = "https://wpewebkit.org/releases/${BPN}-${PV}.tar.xz"
-SRC_URI[sha256sum] = "e8eeca228a6b4c36294cfb63f7d3ba9ada47a430904a5a973b3c99c96a44c18c"
-
-# This is a tweak of upstream-version-is-even needed because
-# ipstream directory contains tarballs for other components as well.
-UPSTREAM_CHECK_REGEX = "libwpe-(?P<pver>\d+\.(\d*[02468])+(\.\d+)+)\.tar"
diff --git a/poky/meta/recipes-sato/webkit/libwpe_1.12.3.bb b/poky/meta/recipes-sato/webkit/libwpe_1.12.3.bb
new file mode 100644
index 0000000..77ca517
--- /dev/null
+++ b/poky/meta/recipes-sato/webkit/libwpe_1.12.3.bb
@@ -0,0 +1,18 @@
+SUMMARY = "General-purpose library specifically developed for the WPE-flavored port of WebKit."
+HOMEPAGE = "https://github.com/WebPlatformForEmbedded/libwpe"
+BUGTRACKER = "https://github.com/WebPlatformForEmbedded/libwpe/issues"
+
+LICENSE = "BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://COPYING;md5=371a616eb4903c6cb79e9893a5f615cc"
+DEPENDS = "virtual/egl libxkbcommon"
+
+inherit cmake features_check pkgconfig
+
+REQUIRED_DISTRO_FEATURES = "opengl"
+
+SRC_URI = "https://wpewebkit.org/releases/${BPN}-${PV}.tar.xz"
+SRC_URI[sha256sum] = "b84fdbfbc849ce4fdf084bb28b58e5463b1b4b6cc8f200dc77b41f8545d5329d"
+
+# This is a tweak of upstream-version-is-even needed because
+# the upstream directory contains tarballs for other components as well.
+UPSTREAM_CHECK_REGEX = "libwpe-(?P<pver>\d+\.(\d*[02468])+(\.\d+)+)\.tar"
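
The UPSTREAM_CHECK_REGEX above encodes the even-minor-version policy directly, since the
stock upstream-version-is-even class cannot be used here. A quick check of the pattern
(the second file name below is hypothetical, included only to show a rejected case):

    import re

    pattern = re.compile(r"libwpe-(?P<pver>\d+\.(\d*[02468])+(\.\d+)+)\.tar")
    for name in ("libwpe-1.12.3.tar.xz", "libwpe-1.13.0.tar.xz"):
        m = pattern.search(name)
        print(name, "->", m.group("pver") if m else "no match (odd minor version)")
    # libwpe-1.12.3.tar.xz -> 1.12.3
    # libwpe-1.13.0.tar.xz -> no match (odd minor version)
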
diff --git a/poky/meta/recipes-sato/webkit/webkitgtk_2.36.4.bb b/poky/meta/recipes-sato/webkit/webkitgtk_2.36.4.bb
deleted file mode 100644
index df4ff63..0000000
--- a/poky/meta/recipes-sato/webkit/webkitgtk_2.36.4.bb
+++ /dev/null
@@ -1,167 +0,0 @@
-SUMMARY = "WebKit web rendering engine for the GTK+ platform"
-HOMEPAGE = "https://www.webkitgtk.org/"
-BUGTRACKER = "https://bugs.webkit.org/"
-
-LICENSE = "BSD-2-Clause & LGPL-2.0-or-later"
-LIC_FILES_CHKSUM = "file://Source/JavaScriptCore/COPYING.LIB;md5=d0c6d6397a5d84286dda758da57bd691 \
-                    file://Source/WebCore/LICENSE-APPLE;md5=4646f90082c40bcf298c285f8bab0b12 \
-                    file://Source/WebCore/LICENSE-LGPL-2;md5=36357ffde2b64ae177b2494445b79d21 \
-                    file://Source/WebCore/LICENSE-LGPL-2.1;md5=a778a33ef338abbaf8b8a7c36b6eec80 \
-                    "
-
-SRC_URI = "https://www.webkitgtk.org/releases/${BPN}-${PV}.tar.xz \
-           file://0001-FindGObjectIntrospection.cmake-prefix-variables-obta.patch \
-           file://0001-Tweak-gtkdoc-settings-so-that-gtkdoc-generation-work.patch \
-           file://0001-Fix-build-without-opengl-or-es.patch \
-           file://reproducibility.patch \
-           file://0001-When-building-introspection-files-do-not-quote-CFLAG.patch \
-           "
-
-SRC_URI[sha256sum] = "b6bebe1f85a479d968c19e44a4704622ef8cef61636ad1b2406b77d16ae2e2a8"
-
-inherit cmake pkgconfig gobject-introspection perlnative features_check upstream-version-is-even gtk-doc
-
-ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
-REQUIRED_DISTRO_FEATURES = "${@bb.utils.contains('DISTRO_FEATURES', 'wayland', 'opengl', '', d)}"
-
-CVE_PRODUCT = "webkitgtk webkitgtk\+"
-
-DEPENDS = " \
-          ruby-native \
-          gperf-native \
-          cairo \
-          harfbuzz \
-          jpeg \
-          atk \
-          libwebp \
-          gtk+3 \
-          libxslt \
-          libtasn1 \
-          libnotify \
-          gstreamer1.0 \
-          gstreamer1.0-plugins-base \
-          "
-
-PACKAGECONFIG_SOUP ?= "soup2"
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd wayland x11', d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11 opengl', 'webgl opengl', '', d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11', '', 'webgl gles2', d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'opengl', 'opengl-or-es', '', d)} \
-                   enchant \
-                   libsecret \
-                   ${PACKAGECONFIG_SOUP} \
-                  "
-
-PACKAGECONFIG[wayland] = "-DENABLE_WAYLAND_TARGET=ON,-DENABLE_WAYLAND_TARGET=OFF,wayland libwpe wpebackend-fdo wayland-native"
-PACKAGECONFIG[angle] = "-DUSE_ANGLE_WEBGL=ON,-DUSE_ANGLE_WEBGL=OFF"
-PACKAGECONFIG[x11] = "-DENABLE_X11_TARGET=ON,-DENABLE_X11_TARGET=OFF,virtual/libx11 libxcomposite libxdamage libxrender libxt"
-PACKAGECONFIG[geoclue] = "-DENABLE_GEOLOCATION=ON,-DENABLE_GEOLOCATION=OFF,geoclue"
-PACKAGECONFIG[enchant] = "-DENABLE_SPELLCHECK=ON,-DENABLE_SPELLCHECK=OFF,enchant2"
-PACKAGECONFIG[gles2] = "-DENABLE_GLES2=ON,-DENABLE_GLES2=OFF,virtual/libgles2"
-PACKAGECONFIG[webgl] = "-DENABLE_WEBGL=ON,-DENABLE_WEBGL=OFF,virtual/egl"
-PACKAGECONFIG[opengl] = "-DENABLE_GRAPHICS_CONTEXT_GL=ON,-DENABLE_GRAPHICS_CONTEXT_GL=OFF,virtual/egl"
-PACKAGECONFIG[opengl-or-es] = "-DUSE_OPENGL_OR_ES=ON,-DUSE_OPENGL_OR_ES=OFF"
-PACKAGECONFIG[libsecret] = "-DUSE_LIBSECRET=ON,-DUSE_LIBSECRET=OFF,libsecret"
-PACKAGECONFIG[libhyphen] = "-DUSE_LIBHYPHEN=ON,-DUSE_LIBHYPHEN=OFF,libhyphen"
-PACKAGECONFIG[woff2] = "-DUSE_WOFF2=ON,-DUSE_WOFF2=OFF,woff2"
-PACKAGECONFIG[openjpeg] = "-DUSE_OPENJPEG=ON,-DUSE_OPENJPEG=OFF,openjpeg"
-PACKAGECONFIG[systemd] = "-DUSE_SYSTEMD=ON,-DUSE_SYSTEMD=off,systemd"
-PACKAGECONFIG[reduce-size] = "-DCMAKE_BUILD_TYPE=MinSizeRel,-DCMAKE_BUILD_TYPE=Release,,"
-PACKAGECONFIG[lcms] = "-DUSE_LCMS=ON,-DUSE_LCMS=OFF,lcms"
-PACKAGECONFIG[soup2] = "-DUSE_SOUP2=ON,-DUSE_SOUP2=OFF,libsoup-2.4,,,soup3"
-PACKAGECONFIG[soup3] = ",,libsoup,,,soup2"
-PACKAGECONFIG[journald] = "-DENABLE_JOURNALD_LOG=ON,-DENABLE_JOURNALD_LOG=OFF,systemd"
-
-# webkitgtk is full of /usr/bin/env python, particular for generating docs
-do_configure[postfuncs] += "setup_python_link"
-setup_python_link() {
-	if [ ! -e ${STAGING_BINDIR_NATIVE}/python ]; then
-		ln -s `which python3` ${STAGING_BINDIR_NATIVE}/python
-	fi
-}
-
-EXTRA_OECMAKE = " \
-		-DPORT=GTK \
-		${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DENABLE_INTROSPECTION=ON', '-DENABLE_INTROSPECTION=OFF', d)} \
-		${@bb.utils.contains('GTKDOC_ENABLED', 'True', '-DENABLE_GTKDOC=ON', '-DENABLE_GTKDOC=OFF', d)} \
-		-DENABLE_MINIBROWSER=ON \
-                -DPYTHON_EXECUTABLE=`which python3` \
-                -DENABLE_BUBBLEWRAP_SANDBOX=OFF \
-                -DENABLE_GAMEPAD=OFF \
-		"
-
-# Javascript JIT is not supported on ARC
-EXTRA_OECMAKE:append:arc = " -DENABLE_JIT=OFF "
-# By default 25-bit "medium" calls are used on ARC
-# which is not enough for binaries larger than 32 MiB
-CFLAGS:append:arc = " -mlong-calls"
-CXXFLAGS:append:arc = " -mlong-calls"
-
-# Needed for non-mesa graphics stacks when x11 is disabled
-CXXFLAGS += "${@bb.utils.contains('DISTRO_FEATURES', 'x11', '', '-DEGL_NO_X11=1', d)}"
-
-# Javascript JIT is not supported on powerpc
-EXTRA_OECMAKE:append:powerpc = " -DENABLE_JIT=OFF "
-EXTRA_OECMAKE:append:powerpc64 = " -DENABLE_JIT=OFF "
-
-# ARM JIT code does not build on ARMv4/5/6 anymore
-EXTRA_OECMAKE:append:armv5 = " -DENABLE_JIT=OFF "
-EXTRA_OECMAKE:append:armv6 = " -DENABLE_JIT=OFF "
-EXTRA_OECMAKE:append:armv4 = " -DENABLE_JIT=OFF "
-
-EXTRA_OECMAKE:append:mipsarch = " -DUSE_LD_GOLD=OFF "
-EXTRA_OECMAKE:append:powerpc = " -DUSE_LD_GOLD=OFF "
-
-# JIT and gold linker does not work on RISCV
-EXTRA_OECMAKE:append:riscv32 = " -DUSE_LD_GOLD=OFF -DENABLE_JIT=OFF"
-EXTRA_OECMAKE:append:riscv64 = " -DUSE_LD_GOLD=OFF -DENABLE_JIT=OFF"
-
-# JIT not supported on MIPS either
-EXTRA_OECMAKE:append:mipsarch = " -DENABLE_JIT=OFF -DENABLE_C_LOOP=ON "
-
-# JIT not supported on X32
-# An attempt was made to upstream JIT support for x32 in
-# https://bugs.webkit.org/show_bug.cgi?id=100450, but this was closed as
-# unresolved due to limited X32 adoption.
-EXTRA_OECMAKE:append:x86-x32 = " -DENABLE_JIT=OFF "
-
-SECURITY_CFLAGS:remove:aarch64 = "-fpie"
-SECURITY_CFLAGS:append:aarch64 = " -fPIE"
-
-FILES:${PN} += "${libdir}/webkit2gtk-4.*/injected-bundle/libwebkit2gtkinjectedbundle.so"
-
-RRECOMMENDS:${PN} += "ca-certificates shared-mime-info"
-
-# http://errors.yoctoproject.org/Errors/Details/20370/
-ARM_INSTRUCTION_SET:armv4 = "arm"
-ARM_INSTRUCTION_SET:armv5 = "arm"
-ARM_INSTRUCTION_SET:armv6 = "arm"
-
-# https://bugzilla.yoctoproject.org/show_bug.cgi?id=9474
-# https://bugs.webkit.org/show_bug.cgi?id=159880
-# JSC JIT can build on ARMv7 with -marm, but doesn't work on runtime.
-# Upstream only tests regularly the JSC JIT on ARMv7 with Thumb2 (-mthumb).
-ARM_INSTRUCTION_SET:armv7a = "thumb"
-ARM_INSTRUCTION_SET:armv7r = "thumb"
-ARM_INSTRUCTION_SET:armv7ve = "thumb"
-
-# introspection inside qemu-arm hangs forever on musl/arm builds
-# therefore disable GI_DATA
-GI_DATA_ENABLED:libc-musl:armv7a = "False"
-GI_DATA_ENABLED:libc-musl:armv7ve = "False"
-
-# Can't be built with ccache
-CCACHE_DISABLE = "1"
-
-PACKAGE_PREPROCESS_FUNCS += "src_package_preprocess"
-src_package_preprocess () {
-        # Trim build paths from comments in generated sources to ensure reproducibility
-        sed -i -e "s,${WORKDIR},,g" \
-            ${B}/JavaScriptCore/DerivedSources/*.h \
-            ${B}/JavaScriptCore/DerivedSources/yarr/*.h \
-            ${B}/JavaScriptCore/PrivateHeaders/JavaScriptCore/*.h \
-            ${B}/WebKit2Gtk/DerivedSources/webkit2/*.cpp \
-            ${B}/WebKit2Gtk/DerivedSources/webkit2/*.h
-
-}
-
diff --git a/poky/meta/recipes-sato/webkit/webkitgtk_2.36.6.bb b/poky/meta/recipes-sato/webkit/webkitgtk_2.36.6.bb
new file mode 100644
index 0000000..37b977f
--- /dev/null
+++ b/poky/meta/recipes-sato/webkit/webkitgtk_2.36.6.bb
@@ -0,0 +1,167 @@
+SUMMARY = "WebKit web rendering engine for the GTK+ platform"
+HOMEPAGE = "https://www.webkitgtk.org/"
+BUGTRACKER = "https://bugs.webkit.org/"
+
+LICENSE = "BSD-2-Clause & LGPL-2.0-or-later"
+LIC_FILES_CHKSUM = "file://Source/JavaScriptCore/COPYING.LIB;md5=d0c6d6397a5d84286dda758da57bd691 \
+                    file://Source/WebCore/LICENSE-APPLE;md5=4646f90082c40bcf298c285f8bab0b12 \
+                    file://Source/WebCore/LICENSE-LGPL-2;md5=36357ffde2b64ae177b2494445b79d21 \
+                    file://Source/WebCore/LICENSE-LGPL-2.1;md5=a778a33ef338abbaf8b8a7c36b6eec80 \
+                    "
+
+SRC_URI = "https://www.webkitgtk.org/releases/${BPN}-${PV}.tar.xz \
+           file://0001-FindGObjectIntrospection.cmake-prefix-variables-obta.patch \
+           file://0001-Tweak-gtkdoc-settings-so-that-gtkdoc-generation-work.patch \
+           file://0001-Fix-build-without-opengl-or-es.patch \
+           file://reproducibility.patch \
+           file://0001-When-building-introspection-files-do-not-quote-CFLAG.patch \
+           "
+
+SRC_URI[sha256sum] = "1193bc821946336776f0dfa5e0dca5651f1e57157eda12da4721d2441f24a61a"
+
+inherit cmake pkgconfig gobject-introspection perlnative features_check upstream-version-is-even gtk-doc
+
+ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
+REQUIRED_DISTRO_FEATURES = "${@bb.utils.contains('DISTRO_FEATURES', 'wayland', 'opengl', '', d)}"
+
+CVE_PRODUCT = "webkitgtk webkitgtk\+"
+
+DEPENDS = " \
+          ruby-native \
+          gperf-native \
+          cairo \
+          harfbuzz \
+          jpeg \
+          atk \
+          libwebp \
+          gtk+3 \
+          libxslt \
+          libtasn1 \
+          libnotify \
+          gstreamer1.0 \
+          gstreamer1.0-plugins-base \
+          "
+
+PACKAGECONFIG_SOUP ?= "soup2"
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd wayland x11', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11 opengl', 'webgl opengl', '', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11', '', 'webgl gles2', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'opengl', 'opengl-or-es', '', d)} \
+                   enchant \
+                   libsecret \
+                   ${PACKAGECONFIG_SOUP} \
+                  "
+
+PACKAGECONFIG[wayland] = "-DENABLE_WAYLAND_TARGET=ON,-DENABLE_WAYLAND_TARGET=OFF,wayland libwpe wpebackend-fdo wayland-native"
+PACKAGECONFIG[angle] = "-DUSE_ANGLE_WEBGL=ON,-DUSE_ANGLE_WEBGL=OFF"
+PACKAGECONFIG[x11] = "-DENABLE_X11_TARGET=ON,-DENABLE_X11_TARGET=OFF,virtual/libx11 libxcomposite libxdamage libxrender libxt"
+PACKAGECONFIG[geoclue] = "-DENABLE_GEOLOCATION=ON,-DENABLE_GEOLOCATION=OFF,geoclue"
+PACKAGECONFIG[enchant] = "-DENABLE_SPELLCHECK=ON,-DENABLE_SPELLCHECK=OFF,enchant2"
+PACKAGECONFIG[gles2] = "-DENABLE_GLES2=ON,-DENABLE_GLES2=OFF,virtual/libgles2"
+PACKAGECONFIG[webgl] = "-DENABLE_WEBGL=ON,-DENABLE_WEBGL=OFF,virtual/egl"
+PACKAGECONFIG[opengl] = "-DENABLE_GRAPHICS_CONTEXT_GL=ON,-DENABLE_GRAPHICS_CONTEXT_GL=OFF,virtual/egl"
+PACKAGECONFIG[opengl-or-es] = "-DUSE_OPENGL_OR_ES=ON,-DUSE_OPENGL_OR_ES=OFF"
+PACKAGECONFIG[libsecret] = "-DUSE_LIBSECRET=ON,-DUSE_LIBSECRET=OFF,libsecret"
+PACKAGECONFIG[libhyphen] = "-DUSE_LIBHYPHEN=ON,-DUSE_LIBHYPHEN=OFF,libhyphen"
+PACKAGECONFIG[woff2] = "-DUSE_WOFF2=ON,-DUSE_WOFF2=OFF,woff2"
+PACKAGECONFIG[openjpeg] = "-DUSE_OPENJPEG=ON,-DUSE_OPENJPEG=OFF,openjpeg"
+PACKAGECONFIG[systemd] = "-DUSE_SYSTEMD=ON,-DUSE_SYSTEMD=off,systemd"
+PACKAGECONFIG[reduce-size] = "-DCMAKE_BUILD_TYPE=MinSizeRel,-DCMAKE_BUILD_TYPE=Release,,"
+PACKAGECONFIG[lcms] = "-DUSE_LCMS=ON,-DUSE_LCMS=OFF,lcms"
+PACKAGECONFIG[soup2] = "-DUSE_SOUP2=ON,-DUSE_SOUP2=OFF,libsoup-2.4,,,soup3"
+PACKAGECONFIG[soup3] = ",,libsoup,,,soup2"
+PACKAGECONFIG[journald] = "-DENABLE_JOURNALD_LOG=ON,-DENABLE_JOURNALD_LOG=OFF,systemd"
+
+# webkitgtk is full of /usr/bin/env python, particularly for generating docs
+do_configure[postfuncs] += "setup_python_link"
+setup_python_link() {
+	if [ ! -e ${STAGING_BINDIR_NATIVE}/python ]; then
+		ln -s `which python3` ${STAGING_BINDIR_NATIVE}/python
+	fi
+}
+
+EXTRA_OECMAKE = " \
+		-DPORT=GTK \
+		${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DENABLE_INTROSPECTION=ON', '-DENABLE_INTROSPECTION=OFF', d)} \
+		${@bb.utils.contains('GTKDOC_ENABLED', 'True', '-DENABLE_GTKDOC=ON', '-DENABLE_GTKDOC=OFF', d)} \
+		-DENABLE_MINIBROWSER=ON \
+                -DPYTHON_EXECUTABLE=`which python3` \
+                -DENABLE_BUBBLEWRAP_SANDBOX=OFF \
+                -DENABLE_GAMEPAD=OFF \
+		"
+
+# Javascript JIT is not supported on ARC
+EXTRA_OECMAKE:append:arc = " -DENABLE_JIT=OFF "
+# By default 25-bit "medium" calls are used on ARC
+# which is not enough for binaries larger than 32 MiB
+CFLAGS:append:arc = " -mlong-calls"
+CXXFLAGS:append:arc = " -mlong-calls"
+
+# Needed for non-mesa graphics stacks when x11 is disabled
+CXXFLAGS += "${@bb.utils.contains('DISTRO_FEATURES', 'x11', '', '-DEGL_NO_X11=1', d)}"
+
+# Javascript JIT is not supported on powerpc
+EXTRA_OECMAKE:append:powerpc = " -DENABLE_JIT=OFF "
+EXTRA_OECMAKE:append:powerpc64 = " -DENABLE_JIT=OFF "
+
+# ARM JIT code does not build on ARMv4/5/6 anymore
+EXTRA_OECMAKE:append:armv5 = " -DENABLE_JIT=OFF "
+EXTRA_OECMAKE:append:armv6 = " -DENABLE_JIT=OFF "
+EXTRA_OECMAKE:append:armv4 = " -DENABLE_JIT=OFF "
+
+EXTRA_OECMAKE:append:mipsarch = " -DUSE_LD_GOLD=OFF "
+EXTRA_OECMAKE:append:powerpc = " -DUSE_LD_GOLD=OFF "
+
+# JIT and the gold linker do not work on RISCV
+EXTRA_OECMAKE:append:riscv32 = " -DUSE_LD_GOLD=OFF -DENABLE_JIT=OFF"
+EXTRA_OECMAKE:append:riscv64 = " -DUSE_LD_GOLD=OFF -DENABLE_JIT=OFF"
+
+# JIT not supported on MIPS either
+EXTRA_OECMAKE:append:mipsarch = " -DENABLE_JIT=OFF -DENABLE_C_LOOP=ON "
+
+# JIT not supported on X32
+# An attempt was made to upstream JIT support for x32 in
+# https://bugs.webkit.org/show_bug.cgi?id=100450, but this was closed as
+# unresolved due to limited X32 adoption.
+EXTRA_OECMAKE:append:x86-x32 = " -DENABLE_JIT=OFF "
+
+SECURITY_CFLAGS:remove:aarch64 = "-fpie"
+SECURITY_CFLAGS:append:aarch64 = " -fPIE"
+
+FILES:${PN} += "${libdir}/webkit2gtk-4.*/injected-bundle/libwebkit2gtkinjectedbundle.so"
+
+RRECOMMENDS:${PN} += "ca-certificates shared-mime-info"
+
+# http://errors.yoctoproject.org/Errors/Details/20370/
+ARM_INSTRUCTION_SET:armv4 = "arm"
+ARM_INSTRUCTION_SET:armv5 = "arm"
+ARM_INSTRUCTION_SET:armv6 = "arm"
+
+# https://bugzilla.yoctoproject.org/show_bug.cgi?id=9474
+# https://bugs.webkit.org/show_bug.cgi?id=159880
+# JSC JIT can build on ARMv7 with -marm, but doesn't work at runtime.
+# Upstream only regularly tests the JSC JIT on ARMv7 with Thumb2 (-mthumb).
+ARM_INSTRUCTION_SET:armv7a = "thumb"
+ARM_INSTRUCTION_SET:armv7r = "thumb"
+ARM_INSTRUCTION_SET:armv7ve = "thumb"
+
+# introspection inside qemu-arm hangs forever on musl/arm builds
+# therefore disable GI_DATA
+GI_DATA_ENABLED:libc-musl:armv7a = "False"
+GI_DATA_ENABLED:libc-musl:armv7ve = "False"
+
+# Can't be built with ccache
+CCACHE_DISABLE = "1"
+
+PACKAGE_PREPROCESS_FUNCS += "src_package_preprocess"
+src_package_preprocess () {
+        # Trim build paths from comments in generated sources to ensure reproducibility
+        sed -i -e "s,${WORKDIR},,g" \
+            ${B}/JavaScriptCore/DerivedSources/*.h \
+            ${B}/JavaScriptCore/DerivedSources/yarr/*.h \
+            ${B}/JavaScriptCore/PrivateHeaders/JavaScriptCore/*.h \
+            ${B}/WebKit2Gtk/DerivedSources/webkit2/*.cpp \
+            ${B}/WebKit2Gtk/DerivedSources/webkit2/*.h
+
+}
+
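
A note on the soup2/soup3 entries above: they stay mutually exclusive through the sixth
comma-separated PACKAGECONFIG field, which lists conflicting PACKAGECONFIGs. An annotated
reading of the soup2 value (field names follow the usual OE-Core ordering; this is just a
sketch, not recipe code):

    value = "-DUSE_SOUP2=ON,-DUSE_SOUP2=OFF,libsoup-2.4,,,soup3"
    names = ["enabled args", "disabled args", "DEPENDS", "RDEPENDS", "RRECOMMENDS", "conflicts"]
    for name, field in zip(names, value.split(",")):
        print(f"{name:13} {field or '(empty)'}")
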
diff --git a/poky/meta/recipes-sato/webkit/wpebackend-fdo_1.12.0.bb b/poky/meta/recipes-sato/webkit/wpebackend-fdo_1.12.0.bb
deleted file mode 100644
index 4a18467..0000000
--- a/poky/meta/recipes-sato/webkit/wpebackend-fdo_1.12.0.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-SUMMARY = "WPE's backend based on a freedesktop.org stack."
-HOMEPAGE = "https://github.com/Igalia/WPEBackend-fdo"
-BUGTRACKER = "https://github.com/Igalia/WPEBackend-fdo/issues"
-
-LICENSE = "BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://COPYING;md5=1f62cef2e3645e3e74eb05fd389d7a66"
-DEPENDS = "glib-2.0 libxkbcommon wayland virtual/egl libwpe libepoxy"
-
-DEPENDS:append:class-target = " wayland-native"
-
-inherit meson features_check pkgconfig
-
-REQUIRED_DISTRO_FEATURES = "opengl"
-
-SRC_URI = "https://wpewebkit.org/releases/${BPN}-${PV}.tar.xz"
-SRC_URI[sha256sum] = "6239c9c15523410798d66315de6b491712ab30009ba180f3e0dd076d9b0074ac"
-
-# Especially helps compiling with clang which enable this as error when
-# using c++11
-CXXFLAGS += "-Wno-c++11-narrowing"
-
-# This is a tweak of upstream-version-is-even needed because
-# ipstream directory contains tarballs for other components as well.
-UPSTREAM_CHECK_REGEX = "wpebackend-fdo-(?P<pver>\d+\.(\d*[02468])+(\.\d+)+)\.tar"
diff --git a/poky/meta/recipes-sato/webkit/wpebackend-fdo_1.12.1.bb b/poky/meta/recipes-sato/webkit/wpebackend-fdo_1.12.1.bb
new file mode 100644
index 0000000..5f776c1
--- /dev/null
+++ b/poky/meta/recipes-sato/webkit/wpebackend-fdo_1.12.1.bb
@@ -0,0 +1,24 @@
+SUMMARY = "WPE's backend based on a freedesktop.org stack."
+HOMEPAGE = "https://github.com/Igalia/WPEBackend-fdo"
+BUGTRACKER = "https://github.com/Igalia/WPEBackend-fdo/issues"
+
+LICENSE = "BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://COPYING;md5=1f62cef2e3645e3e74eb05fd389d7a66"
+DEPENDS = "glib-2.0 libxkbcommon wayland virtual/egl libwpe libepoxy"
+
+DEPENDS:append:class-target = " wayland-native"
+
+inherit meson features_check pkgconfig
+
+REQUIRED_DISTRO_FEATURES = "opengl"
+
+SRC_URI = "https://wpewebkit.org/releases/${BPN}-${PV}.tar.xz"
+SRC_URI[sha256sum] = "45aa833c44ec292f31fa943b01b8cc75e54eb623ad7ba6a66fc2f118fe69e629"
+
+# Especially helps compiling with clang, which enables this as an error when
+# using c++11
+CXXFLAGS += "-Wno-c++11-narrowing"
+
+# This is a tweak of upstream-version-is-even needed because
+# the upstream directory contains tarballs for other components as well.
+UPSTREAM_CHECK_REGEX = "wpebackend-fdo-(?P<pver>\d+\.(\d*[02468])+(\.\d+)+)\.tar"
diff --git a/poky/meta/recipes-support/apr/apr/0001-add-AC_CACHE_CHECK-for-strerror_r-return-type.patch b/poky/meta/recipes-support/apr/apr/0001-add-AC_CACHE_CHECK-for-strerror_r-return-type.patch
new file mode 100644
index 0000000..d0a9bd9
--- /dev/null
+++ b/poky/meta/recipes-support/apr/apr/0001-add-AC_CACHE_CHECK-for-strerror_r-return-type.patch
@@ -0,0 +1,52 @@
+From 8ca3c3306f1a149e51a3be6a4b1e47e9aee88262 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 23 Aug 2022 22:42:03 -0700
+Subject: [PATCH] add AC_CACHE_CHECK for strerror_r return type
+
+APR's configure script uses AC_TRY_RUN to detect whether the return type
+of strerror_r is int. When cross-compiling this defaults to no.
+
+This commit adds an AC_CACHE_CHECK so users who cross-compile APR may
+influence the outcome with a configure variable.
+
+Upstream-Status: Backport [https://svn.apache.org/viewvc?view=revision&revision=1875065]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ build/apr_common.m4 | 11 ++++-------
+ 1 file changed, 4 insertions(+), 7 deletions(-)
+
+diff --git a/build/apr_common.m4 b/build/apr_common.m4
+index cbf2a4c..42e75cf 100644
+--- a/build/apr_common.m4
++++ b/build/apr_common.m4
+@@ -525,8 +525,9 @@ dnl  string.
+ dnl
+ dnl
+ AC_DEFUN([APR_CHECK_STRERROR_R_RC], [
+-AC_MSG_CHECKING(for type of return code from strerror_r)
+-AC_TRY_RUN([
++AC_CACHE_CHECK([whether return code from strerror_r has type int],
++[ac_cv_strerror_r_rc_int],
++[AC_TRY_RUN([
+ #include <errno.h>
+ #include <string.h>
+ #include <stdio.h>
+@@ -542,14 +543,10 @@ main()
+ }], [
+     ac_cv_strerror_r_rc_int=yes ], [
+     ac_cv_strerror_r_rc_int=no ], [
+-    ac_cv_strerror_r_rc_int=no ] )
++    ac_cv_strerror_r_rc_int=no ] ) ] )
+ if test "x$ac_cv_strerror_r_rc_int" = xyes; then
+   AC_DEFINE(STRERROR_R_RC_INT, 1, [Define if strerror returns int])
+-  msg="int"
+-else
+-  msg="pointer"
+ fi
+-AC_MSG_RESULT([$msg])
+ ] )
+ 
+ dnl
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-support/apr/apr/0001-configure-Remove-runtime-test-for-mmap-that-can-map-.patch b/poky/meta/recipes-support/apr/apr/0001-configure-Remove-runtime-test-for-mmap-that-can-map-.patch
new file mode 100644
index 0000000..fa6202d
--- /dev/null
+++ b/poky/meta/recipes-support/apr/apr/0001-configure-Remove-runtime-test-for-mmap-that-can-map-.patch
@@ -0,0 +1,62 @@
+From ee728971fd9d2da39356f1574d58d5daa3b24520 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 26 Aug 2022 00:28:08 -0700
+Subject: [PATCH] configure: Remove runtime test for mmap that can map
+ /dev/zero
+
+This never works when cross-compiling; moreover, it ends up disabling
+ac_cv_file__dev_zero, which then results in compiler errors in shared
+mutexes.
+
+Upstream-Status: Inappropriate [Cross-compile specific]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure.in | 32 --------------------------------
+ 1 file changed, 32 deletions(-)
+
+diff --git a/configure.in b/configure.in
+index a99049d..f1f55c7 100644
+--- a/configure.in
++++ b/configure.in
+@@ -1182,38 +1182,6 @@ AC_CHECK_FUNCS([mmap munmap shm_open shm_unlink shmget shmat shmdt shmctl \
+ APR_CHECK_DEFINE(MAP_ANON, sys/mman.h)
+ AC_CHECK_FILE(/dev/zero)
+ 
+-# Not all systems can mmap /dev/zero (such as HP-UX).  Check for that.
+-if test "$ac_cv_func_mmap" = "yes" &&
+-   test "$ac_cv_file__dev_zero" = "yes"; then
+-    AC_MSG_CHECKING(for mmap that can map /dev/zero)
+-    AC_TRY_RUN([
+-#include <sys/types.h>
+-#include <sys/stat.h>
+-#include <fcntl.h>
+-#ifdef HAVE_SYS_MMAN_H
+-#include <sys/mman.h>
+-#endif
+-    int main()
+-    {
+-        int fd;
+-        void *m;
+-        fd = open("/dev/zero", O_RDWR);
+-        if (fd < 0) {
+-            return 1;
+-        }
+-        m = mmap(0, sizeof(void*), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
+-        if (m == (void *)-1) {  /* aka MAP_FAILED */
+-            return 2;
+-        }
+-        if (munmap(m, sizeof(void*)) < 0) {
+-            return 3;
+-        }
+-        return 0;
+-    }], [], [ac_cv_file__dev_zero=no], [ac_cv_file__dev_zero=no])
+-
+-    AC_MSG_RESULT($ac_cv_file__dev_zero)
+-fi
+-
+ # Now we determine which one is our anonymous shmem preference.
+ haveshmgetanon="0"
+ havemmapzero="0"
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-support/apr/apr_1.7.0.bb b/poky/meta/recipes-support/apr/apr_1.7.0.bb
index 9c826d4..cb4bb93 100644
--- a/poky/meta/recipes-support/apr/apr_1.7.0.bb
+++ b/poky/meta/recipes-support/apr/apr_1.7.0.bb
@@ -24,6 +24,8 @@
            file://libtoolize_check.patch \
            file://0001-Add-option-to-disable-timed-dependant-tests.patch \
            file://autoconf270.patch \
+           file://0001-add-AC_CACHE_CHECK-for-strerror_r-return-type.patch \
+           file://0001-configure-Remove-runtime-test-for-mmap-that-can-map-.patch \
            file://CVE-2021-35940.patch \
            "
 
@@ -36,17 +38,30 @@
 
 # Added to fix some issues with cmake. Refer to https://github.com/bmwcarit/meta-ros/issues/68#issuecomment-19896928
 CACHED_CONFIGUREVARS += "apr_cv_mutex_recursive=yes"
-
+# Enable largefile
+CACHED_CONFIGUREVARS += "apr_cv_use_lfs64=yes"
+# Additional AC_TRY_RUN tests which will need to be cached for cross compile
+CACHED_CONFIGUREVARS += "apr_cv_epoll=yes epoll_create1=yes apr_cv_sock_cloexec=yes \
+    ac_cv_struct_rlimit=yes \
+    ac_cv_func_sem_open=yes \
+    apr_cv_process_shared_works=yes \
+    apr_cv_mutex_robust_shared=yes \
+    "
 # Also suppress trying to use sctp.
 #
 CACHED_CONFIGUREVARS += "ac_cv_header_netinet_sctp_h=no ac_cv_header_netinet_sctp_uio_h=no"
 
-CACHED_CONFIGUREVARS += "ac_cv_sizeof_struct_iovec=yes"
+# ac_cv_sizeof_struct_iovec is deduced using a runtime check, which will fail during cross-compile
+CACHED_CONFIGUREVARS += "${@['ac_cv_sizeof_struct_iovec=16','ac_cv_sizeof_struct_iovec=8'][d.getVar('SITEINFO_BITS') != '32']}"
+
 CACHED_CONFIGUREVARS += "ac_cv_file__dev_zero=yes"
 
+CACHED_CONFIGUREVARS:append:libc-musl = " ac_cv_strerror_r_rc_int=yes"
 PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"
+PACKAGECONFIG:append:libc-musl = " xsi-strerror"
 PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
 PACKAGECONFIG[timed-tests] = "--enable-timed-tests,--disable-timed-tests,"
+PACKAGECONFIG[xsi-strerror] = "ac_cv_strerror_r_rc_int=yes,ac_cv_strerror_r_rc_int=no,"
 
 do_configure:prepend() {
 	# Avoid absolute paths for grep since it causes failures
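
One idiom in the apr changes above is worth spelling out: the ac_cv_sizeof_struct_iovec line
indexes a two-element list with a boolean, so element 0 is used when SITEINFO_BITS is "32"
and element 1 otherwise. A standalone illustration of the mechanics (it only demonstrates
the selection, not which size is right for which target):

    options = ['ac_cv_sizeof_struct_iovec=16', 'ac_cv_sizeof_struct_iovec=8']
    for bits in ('32', '64'):
        picked = options[bits != '32']  # False -> index 0, True -> index 1
        print(bits, '->', picked)
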
diff --git a/poky/meta/recipes-support/boost/boost-1.79.0.inc b/poky/meta/recipes-support/boost/boost-1.79.0.inc
deleted file mode 100644
index f90c463..0000000
--- a/poky/meta/recipes-support/boost/boost-1.79.0.inc
+++ /dev/null
@@ -1,20 +0,0 @@
-# The Boost web site provides free peer-reviewed portable
-# C++ source libraries. The emphasis is on libraries which
-# work well with the C++ Standard Library. The libraries are
-# intended to be widely useful, and are in regular use by
-# thousands of programmers across a broad spectrum of applications.
-HOMEPAGE = "http://www.boost.org/"
-LICENSE = "BSL-1.0 & MIT & Python-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE_1_0.txt;md5=e4224ccaecb14d942c71d31bef20d78c"
-
-BOOST_VER = "${@"_".join(d.getVar("PV").split("."))}"
-BOOST_MAJ = "${@"_".join(d.getVar("PV").split(".")[0:2])}"
-BOOST_P = "boost_${BOOST_VER}"
-
-SRC_URI = "https://boostorg.jfrog.io/artifactory/main/release/${PV}/source/${BOOST_P}.tar.bz2"
-SRC_URI[sha256sum] = "475d589d51a7f8b3ba2ba4eda022b170e562ca3b760ee922c146b6c65856ef39"
-
-UPSTREAM_CHECK_URI = "http://www.boost.org/users/download/"
-UPSTREAM_CHECK_REGEX = "release/(?P<pver>.*)/source/"
-
-S = "${WORKDIR}/${BOOST_P}"
diff --git a/poky/meta/recipes-support/boost/boost-1.80.0.inc b/poky/meta/recipes-support/boost/boost-1.80.0.inc
new file mode 100644
index 0000000..3ee82eb
--- /dev/null
+++ b/poky/meta/recipes-support/boost/boost-1.80.0.inc
@@ -0,0 +1,20 @@
+# The Boost web site provides free peer-reviewed portable
+# C++ source libraries. The emphasis is on libraries which
+# work well with the C++ Standard Library. The libraries are
+# intended to be widely useful, and are in regular use by
+# thousands of programmers across a broad spectrum of applications.
+HOMEPAGE = "http://www.boost.org/"
+LICENSE = "BSL-1.0 & MIT & Python-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE_1_0.txt;md5=e4224ccaecb14d942c71d31bef20d78c"
+
+BOOST_VER = "${@"_".join(d.getVar("PV").split("."))}"
+BOOST_MAJ = "${@"_".join(d.getVar("PV").split(".")[0:2])}"
+BOOST_P = "boost_${BOOST_VER}"
+
+SRC_URI = "https://boostorg.jfrog.io/artifactory/main/release/${PV}/source/${BOOST_P}.tar.bz2"
+SRC_URI[sha256sum] = "1e19565d82e43bc59209a168f5ac899d3ba471d55c7610c677d4ccf2c9c500c0"
+
+UPSTREAM_CHECK_URI = "http://www.boost.org/users/download/"
+UPSTREAM_CHECK_REGEX = "release/(?P<pver>.*)/source/"
+
+S = "${WORKDIR}/${BOOST_P}"
diff --git a/poky/meta/recipes-support/boost/boost-build-native_1.80.0.bb b/poky/meta/recipes-support/boost/boost-build-native_1.80.0.bb
new file mode 100644
index 0000000..54c0b20
--- /dev/null
+++ b/poky/meta/recipes-support/boost/boost-build-native_1.80.0.bb
@@ -0,0 +1,28 @@
+SUMMARY = "Boost.Build"
+DESCRIPTION = "B2 makes it easy to build C++ projects, everywhere."
+HOMEPAGE = "https://github.com/boostorg/build"
+SECTION = "devel"
+
+LICENSE = "BSL-1.0"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=e4224ccaecb14d942c71d31bef20d78c"
+
+SRC_URI = "git://github.com/boostorg/build;protocol=https;branch=master"
+SRCREV = "405d34a04d29519625c5edfe1f3bac3bc3dc3534"
+PE = "1"
+
+UPSTREAM_CHECK_GITTAGREGEX = "boost-(?P<pver>(\d+(\.\d+)+))"
+
+inherit native
+
+S = "${WORKDIR}/git"
+
+do_compile() {
+    ./bootstrap.sh
+}
+
+do_install() {
+    HOME=/var/run ./b2 install --prefix=${prefix} staging-prefix=${D}${prefix}
+}
+
+# The build is either release mode (pre-stripped) or debug (-O0).
+INSANE_SKIP:${PN} = "already-stripped"
diff --git a/poky/meta/recipes-support/boost/boost-build-native_4.4.1.bb b/poky/meta/recipes-support/boost/boost-build-native_4.4.1.bb
deleted file mode 100644
index de566ee..0000000
--- a/poky/meta/recipes-support/boost/boost-build-native_4.4.1.bb
+++ /dev/null
@@ -1,27 +0,0 @@
-SUMMARY = "Boost.Build"
-DESCRIPTION = "B2 makes it easy to build C++ projects, everywhere."
-HOMEPAGE = "https://github.com/boostorg/build"
-SECTION = "devel"
-
-LICENSE = "BSL-1.0"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=e4224ccaecb14d942c71d31bef20d78c"
-
-SRC_URI = "git://github.com/boostorg/build;protocol=https;branch=master"
-SRCREV = "76da80f33187a3d9e5336157cdfae12ce82e37eb"
-
-UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>(\d+(\.\d+){2,}))"
-
-inherit native
-
-S = "${WORKDIR}/git"
-
-do_compile() {
-    ./bootstrap.sh
-}
-
-do_install() {
-    HOME=/var/run ./b2 install --prefix=${prefix} staging-prefix=${D}${prefix}
-}
-
-# The build is either release mode (pre-stripped) or debug (-O0).
-INSANE_SKIP:${PN} = "already-stripped"
diff --git a/poky/meta/recipes-support/boost/boost/0001-Don-t-set-up-arch-instruction-set-flags-we-do-that-o.patch b/poky/meta/recipes-support/boost/boost/0001-Don-t-set-up-arch-instruction-set-flags-we-do-that-o.patch
index 67d5dff..4fe1574 100644
--- a/poky/meta/recipes-support/boost/boost/0001-Don-t-set-up-arch-instruction-set-flags-we-do-that-o.patch
+++ b/poky/meta/recipes-support/boost/boost/0001-Don-t-set-up-arch-instruction-set-flags-we-do-that-o.patch
@@ -1,4 +1,4 @@
-From 4d2a8fc8117e56bc283349e5f7f889ebbfc55c71 Mon Sep 17 00:00:00 2001
+From 21ba558abe074e7d49bdc931018ce2138e6e8eb5 Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Tue, 18 Dec 2018 15:42:57 +0100
 Subject: [PATCH] Don't set up arch/instruction-set flags, we do that
@@ -10,14 +10,14 @@
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 
 ---
- tools/build/src/tools/gcc.jam | 144 ----------------------------------
- 1 file changed, 144 deletions(-)
+ tools/build/src/tools/gcc.jam | 153 ----------------------------------
+ 1 file changed, 153 deletions(-)
 
 diff --git a/tools/build/src/tools/gcc.jam b/tools/build/src/tools/gcc.jam
-index 47a113223..d77525724 100644
+index 726555369..5c5f8ba91 100644
 --- a/tools/build/src/tools/gcc.jam
 +++ b/tools/build/src/tools/gcc.jam
-@@ -1122,147 +1122,3 @@ local rule cpu-flags ( toolset variable : architecture : instruction-set + :
+@@ -1124,156 +1124,3 @@ local rule cpu-flags ( toolset variable : architecture : instruction-set + :
          <architecture>$(architecture)/<instruction-set>$(instruction-set)
          : $(values) ;
  }
@@ -72,6 +72,9 @@
 -cpu-flags gcc OPTIONS : x86 : cascadelake : -march=skylake-avx512 -mavx512vnni ;
 -cpu-flags gcc OPTIONS : x86 : cooperlake : -march=cooperlake ;
 -cpu-flags gcc OPTIONS : x86 : tigerlake : -march=tigerlake ;
+-cpu-flags gcc OPTIONS : x86 : rocketlake : -march=rocketlake ;
+-cpu-flags gcc OPTIONS : x86 : alderlake : -march=alderlake ;
+-cpu-flags gcc OPTIONS : x86 : sapphirerapids : -march=sapphirerapids ;
 -cpu-flags gcc OPTIONS : x86 : k6 : -march=k6 ;
 -cpu-flags gcc OPTIONS : x86 : k6-2 : -march=k6-2 ;
 -cpu-flags gcc OPTIONS : x86 : k6-3 : -march=k6-3 ;
@@ -98,6 +101,7 @@
 -cpu-flags gcc OPTIONS : x86 : btver2 : -march=btver2 ;
 -cpu-flags gcc OPTIONS : x86 : znver1 : -march=znver1 ;
 -cpu-flags gcc OPTIONS : x86 : znver2 : -march=znver2 ;
+-cpu-flags gcc OPTIONS : x86 : znver3 : -march=znver3 ;
 -cpu-flags gcc OPTIONS : x86 : winchip-c6 : -march=winchip-c6 ;
 -cpu-flags gcc OPTIONS : x86 : winchip2 : -march=winchip2 ;
 -cpu-flags gcc OPTIONS : x86 : c3 : -march=c3 ;
@@ -165,3 +169,8 @@
 -cpu-flags gcc OPTIONS : arm : cortex-r5+vfpv3-d16 : -mcpu=cortex-r5 -mfpu=vfpv3-d16 -mfloat-abi=hard ;
 -# AIX variant of RS/6000 & PowerPC
 -toolset.flags gcc AROPTIONS <address-model>64/<target-os>aix : "-X64" ;
+-
+-# Enable response file control
+-toolset.flags gcc RESPONSE_FILE_SUB <response-file>auto : a ;
+-toolset.flags gcc RESPONSE_FILE_SUB <response-file>file : f ;
+-toolset.flags gcc RESPONSE_FILE_SUB <response-file>contents : c ;
diff --git a/poky/meta/recipes-support/boost/boost/0001-The-std-lib-unary-binary_function-base-classes-are-d.patch b/poky/meta/recipes-support/boost/boost/0001-The-std-lib-unary-binary_function-base-classes-are-d.patch
new file mode 100644
index 0000000..4960334
--- /dev/null
+++ b/poky/meta/recipes-support/boost/boost/0001-The-std-lib-unary-binary_function-base-classes-are-d.patch
@@ -0,0 +1,34 @@
+From f9b55f5a1fab85bf73c95e6372779d6f50f75e84 Mon Sep 17 00:00:00 2001
+From: jzmaddock <john@johnmaddock.co.uk>
+Date: Mon, 11 Jul 2022 18:26:07 +0100
+Subject: [PATCH] The std lib unary/binary_function base classes are
+ deprecated/removed from libcpp15. Fixes
+ https://github.com/boostorg/container_hash/issues/24.
+
+Upstream-Status: Backport [https://github.com/boostorg/config/pull/440/commits/f0af4a9184457939b89110795ae2d293582c5f66]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ boost/config/stdlib/libcpp.hpp | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+diff --git a/boost/config/stdlib/libcpp.hpp b/boost/config/stdlib/libcpp.hpp
+index bc8536ead..0e9f2445e 100644
+--- a/boost/config/stdlib/libcpp.hpp
++++ b/boost/config/stdlib/libcpp.hpp
+@@ -168,4 +168,13 @@
+ #  define BOOST_NO_CXX14_HDR_SHARED_MUTEX
+ #endif
+ 
++#if _LIBCPP_VERSION >= 15000
++//
++// Unary function is now deprecated in C++11 and later:
++//
++#if __cplusplus >= 201103L
++#define BOOST_NO_CXX98_FUNCTION_BASE
++#endif
++#endif
++
+ //  --- end ---
+-- 
+2.37.2
+
diff --git a/poky/meta/recipes-support/boost/boost/boost-CVE-2012-2677.patch b/poky/meta/recipes-support/boost/boost/boost-CVE-2012-2677.patch
deleted file mode 100644
index 917617a..0000000
--- a/poky/meta/recipes-support/boost/boost/boost-CVE-2012-2677.patch
+++ /dev/null
@@ -1,112 +0,0 @@
-Reference
-
-https://svn.boost.org/trac/boost/changeset/78326
-
-Upstream-Status: Backport
-CVE: CVE-2012-2677
-Signed-off-by: Yue Tao <yue.tao@windriver.com>
-
-diff --git a/boost/pool/pool.hpp.old b/boost/pool/pool.hpp
-index c47b11f..417a1e0 100644
---- a/boost/pool/pool.hpp.old
-+++ b/boost/pool/pool.hpp
-@@ -26,6 +26,8 @@
- 
- #include <boost/pool/poolfwd.hpp>
- 
-+// std::numeric_limits
-+#include <boost/limits.hpp>
- // boost::integer::static_lcm
- #include <boost/integer/common_factor_ct.hpp>
- // boost::simple_segregated_storage
-@@ -355,6 +357,15 @@ class pool: protected simple_segregated_storage < typename UserAllocator::size_t
-       return s;
-     }
- 
-+    size_type max_chunks() const
-+    { //! Calculated maximum number of memory chunks that can be allocated in a single call by this Pool.
-+      size_type partition_size = alloc_size();
-+      size_type POD_size = integer::static_lcm<sizeof(size_type), sizeof(void *)>::value + sizeof(size_type);
-+      size_type max_chunks = (std::numeric_limits<size_type>::max() - POD_size) / alloc_size();
-+
-+      return max_chunks;
-+    }
-+
-     static void * & nextof(void * const ptr)
-     { //! \returns Pointer dereferenced.
-       //! (Provided and used for the sake of code readability :)
-@@ -375,6 +386,8 @@ class pool: protected simple_segregated_storage < typename UserAllocator::size_t
-       //!   the first time that object needs to allocate system memory.
-       //!   The default is 32. This parameter may not be 0.
-       //! \param nmax_size is the maximum number of chunks to allocate in one block.
-+      set_next_size(nnext_size);
-+      set_max_size(nmax_size);
-     }
- 
-     ~pool()
-@@ -398,8 +411,8 @@ class pool: protected simple_segregated_storage < typename UserAllocator::size_t
-     }
-     void set_next_size(const size_type nnext_size)
-     { //! Set number of chunks to request from the system the next time that object needs to allocate system memory. This value should never be set to 0.
--      //! \returns nnext_size.
--      next_size = start_size = nnext_size;
-+      BOOST_USING_STD_MIN();
-+      next_size = start_size = min BOOST_PREVENT_MACRO_SUBSTITUTION(nnext_size, max_chunks());
-     }
-     size_type get_max_size() const
-     { //! \returns max_size.
-@@ -407,7 +420,8 @@ class pool: protected simple_segregated_storage < typename UserAllocator::size_t
-     }
-     void set_max_size(const size_type nmax_size)
-     { //! Set max_size.
--      max_size = nmax_size;
-+      BOOST_USING_STD_MIN();
-+      max_size = min BOOST_PREVENT_MACRO_SUBSTITUTION(nmax_size, max_chunks());
-     }
-     size_type get_requested_size() const
-     { //!   \returns the requested size passed into the constructor.
-@@ -708,9 +722,9 @@ void * pool<UserAllocator>::malloc_need_resize()
- 
-   BOOST_USING_STD_MIN();
-   if(!max_size)
--    next_size <<= 1;
-+    set_next_size(next_size << 1);
-   else if( next_size*partition_size/requested_size < max_size)
--    next_size = min BOOST_PREVENT_MACRO_SUBSTITUTION(next_size << 1, max_size*requested_size/ partition_size);
-+    set_next_size(min BOOST_PREVENT_MACRO_SUBSTITUTION(next_size << 1, max_size * requested_size / partition_size));
- 
-   //  initialize it,
-   store().add_block(node.begin(), node.element_size(), partition_size);
-@@ -748,9 +762,9 @@ void * pool<UserAllocator>::ordered_malloc_need_resize()
- 
-   BOOST_USING_STD_MIN();
-   if(!max_size)
--    next_size <<= 1;
-+    set_next_size(next_size << 1);
-   else if( next_size*partition_size/requested_size < max_size)
--    next_size = min BOOST_PREVENT_MACRO_SUBSTITUTION(next_size << 1, max_size*requested_size/ partition_size);
-+    set_next_size(min BOOST_PREVENT_MACRO_SUBSTITUTION(next_size << 1, max_size * requested_size / partition_size));
- 
-   //  initialize it,
-   //  (we can use "add_block" here because we know that
-@@ -792,6 +806,8 @@ void * pool<UserAllocator>::ordered_malloc(const size_type n)
- { //! Gets address of a chunk n, allocating new memory if not already available.
-   //! \returns Address of chunk n if allocated ok.
-   //! \returns 0 if not enough memory for n chunks.
-+  if (n > max_chunks())
-+    return 0;
- 
-   const size_type partition_size = alloc_size();
-   const size_type total_req_size = n * requested_size;
-@@ -840,9 +856,9 @@ void * pool<UserAllocator>::ordered_malloc(const size_type n)
- 
-   BOOST_USING_STD_MIN();
-   if(!max_size)
--    next_size <<= 1;
-+    set_next_size(next_size << 1);
-   else if( next_size*partition_size/requested_size < max_size)
--    next_size = min BOOST_PREVENT_MACRO_SUBSTITUTION(next_size << 1, max_size*requested_size/ partition_size);
-+    set_next_size(min BOOST_PREVENT_MACRO_SUBSTITUTION(next_size << 1, max_size * requested_size / partition_size));
- 
-   //  insert it into the list,
-   //   handle border case.
diff --git a/poky/meta/recipes-support/boost/boost_1.79.0.bb b/poky/meta/recipes-support/boost/boost_1.79.0.bb
deleted file mode 100644
index dd5d6ea..0000000
--- a/poky/meta/recipes-support/boost/boost_1.79.0.bb
+++ /dev/null
@@ -1,8 +0,0 @@
-require boost-${PV}.inc
-require boost.inc
-
-SRC_URI += "file://boost-CVE-2012-2677.patch \
-           file://boost-math-disable-pch-for-gcc.patch \
-           file://0001-Don-t-set-up-arch-instruction-set-flags-we-do-that-o.patch \
-           file://0001-dont-setup-compiler-flags-m32-m64.patch \
-           "
diff --git a/poky/meta/recipes-support/boost/boost_1.80.0.bb b/poky/meta/recipes-support/boost/boost_1.80.0.bb
new file mode 100644
index 0000000..c34ab7d
--- /dev/null
+++ b/poky/meta/recipes-support/boost/boost_1.80.0.bb
@@ -0,0 +1,8 @@
+require boost-${PV}.inc
+require boost.inc
+
+SRC_URI += "file://boost-math-disable-pch-for-gcc.patch \
+           file://0001-Don-t-set-up-arch-instruction-set-flags-we-do-that-o.patch \
+           file://0001-dont-setup-compiler-flags-m32-m64.patch \
+           file://0001-The-std-lib-unary-binary_function-base-classes-are-d.patch \
+           "
diff --git a/poky/meta/recipes-support/curl/curl_7.84.0.bb b/poky/meta/recipes-support/curl/curl_7.84.0.bb
deleted file mode 100644
index 75417cd..0000000
--- a/poky/meta/recipes-support/curl/curl_7.84.0.bb
+++ /dev/null
@@ -1,117 +0,0 @@
-SUMMARY = "Command line tool and library for client-side URL transfers"
-DESCRIPTION = "It uses URL syntax to transfer data to and from servers. \
-curl is a widely used because of its ability to be flexible and complete \
-complex tasks. For example, you can use curl for things like user authentication, \
-HTTP post, SSL connections, proxy support, FTP uploads, and more!"
-HOMEPAGE = "https://curl.se/"
-BUGTRACKER = "https://github.com/curl/curl/issues"
-SECTION = "console/network"
-LICENSE = "MIT-open-group"
-LIC_FILES_CHKSUM = "file://COPYING;md5=190c514872597083303371684954f238"
-
-SRC_URI = " \
-    https://curl.se/download/${BP}.tar.xz \
-    file://0001-easy_lock.h-include-sched.h-if-available-to-fix-buil.patch \
-    file://0001-easy_lock-switch-to-using-atomic_int-instead-of-bool.patch \
-    file://run-ptest \
-    file://disable-tests \
-"
-SRC_URI[sha256sum] = "2d118b43f547bfe5bae806d8d47b4e596ea5b25a6c1f080aef49fbcd817c5db8"
-
-# Curl has used many names over the years...
-CVE_PRODUCT = "haxx:curl haxx:libcurl curl:curl curl:libcurl libcurl:libcurl daniel_stenberg:curl"
-
-inherit autotools pkgconfig binconfig multilib_header ptest
-
-# Entropy source for random PACKAGECONFIG option
-RANDOM ?= "/dev/urandom"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} libidn openssl proxy random threaded-resolver verbose zlib"
-PACKAGECONFIG:class-native = "ipv6 openssl proxy random threaded-resolver verbose zlib"
-PACKAGECONFIG:class-nativesdk = "ipv6 openssl proxy random threaded-resolver verbose zlib"
-
-# 'ares' and 'threaded-resolver' are mutually exclusive
-PACKAGECONFIG[ares] = "--enable-ares,--disable-ares,c-ares,,,threaded-resolver"
-PACKAGECONFIG[brotli] = "--with-brotli,--without-brotli,brotli"
-PACKAGECONFIG[builtinmanual] = "--enable-manual,--disable-manual"
-PACKAGECONFIG[dict] = "--enable-dict,--disable-dict,"
-PACKAGECONFIG[gnutls] = "--with-gnutls,--without-gnutls,gnutls"
-PACKAGECONFIG[gopher] = "--enable-gopher,--disable-gopher,"
-PACKAGECONFIG[imap] = "--enable-imap,--disable-imap,"
-PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
-PACKAGECONFIG[krb5] = "--with-gssapi,--without-gssapi,krb5"
-PACKAGECONFIG[ldap] = "--enable-ldap,--disable-ldap,"
-PACKAGECONFIG[ldaps] = "--enable-ldaps,--disable-ldaps,"
-PACKAGECONFIG[libgsasl] = "--with-libgsasl,--without-libgsasl,libgsasl"
-PACKAGECONFIG[libidn] = "--with-libidn2,--without-libidn2,libidn2"
-PACKAGECONFIG[libssh2] = "--with-libssh2,--without-libssh2,libssh2"
-PACKAGECONFIG[mbedtls] = "--with-mbedtls=${STAGING_DIR_TARGET},--without-mbedtls,mbedtls"
-PACKAGECONFIG[mqtt] = "--enable-mqtt,--disable-mqtt,"
-PACKAGECONFIG[nghttp2] = "--with-nghttp2,--without-nghttp2,nghttp2"
-PACKAGECONFIG[openssl] = "--with-openssl,--without-openssl,openssl"
-PACKAGECONFIG[pop3] = "--enable-pop3,--disable-pop3,"
-PACKAGECONFIG[proxy] = "--enable-proxy,--disable-proxy,"
-PACKAGECONFIG[random] = "--with-random=${RANDOM},--without-random"
-PACKAGECONFIG[rtmpdump] = "--with-librtmp,--without-librtmp,rtmpdump"
-PACKAGECONFIG[rtsp] = "--enable-rtsp,--disable-rtsp,"
-PACKAGECONFIG[smb] = "--enable-smb,--disable-smb,"
-PACKAGECONFIG[smtp] = "--enable-smtp,--disable-smtp,"
-PACKAGECONFIG[nss] = "--with-nss,--without-nss,nss"
-PACKAGECONFIG[telnet] = "--enable-telnet,--disable-telnet,"
-PACKAGECONFIG[tftp] = "--enable-tftp,--disable-tftp,"
-PACKAGECONFIG[threaded-resolver] = "--enable-threaded-resolver,--disable-threaded-resolver,,,,ares"
-PACKAGECONFIG[verbose] = "--enable-verbose,--disable-verbose"
-PACKAGECONFIG[zlib] = "--with-zlib=${STAGING_LIBDIR}/../,--without-zlib,zlib"
-PACKAGECONFIG[zstd] = "--with-zstd,--without-zstd,zstd"
-
-EXTRA_OECONF = " \
-    --disable-libcurl-option \
-    --disable-ntlm-wb \
-    --enable-crypto-auth \
-    --with-ca-bundle=${sysconfdir}/ssl/certs/ca-certificates.crt \
-    --without-libpsl \
-    --enable-debug \
-    --enable-optimize \
-    --disable-curldebug \
-    ${@'--without-ssl' if (bb.utils.filter('PACKAGECONFIG', 'gnutls mbedtls nss openssl', d) == '') else ''} \
-"
-
-do_install:append:class-target() {
-	# cleanup buildpaths from curl-config
-	sed -i \
-	    -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
-	    -e 's,--with-libtool-sysroot=${STAGING_DIR_TARGET},,g' \
-	    -e 's|${DEBUG_PREFIX_MAP}||g' \
-	    -e 's|${@" ".join(d.getVar("DEBUG_PREFIX_MAP").split())}||g' \
-	    ${D}${bindir}/curl-config
-}
-
-do_compile_ptest() {
-	oe_runmake test
-	oe_runmake -C ${B}/tests/server
-}
-
-do_install_ptest() {
-	cat  ${WORKDIR}/disable-tests >> ${S}/tests/data/DISABLED
-	rm -f ${B}/tests/configurehelp.pm
-	cp -rf ${B}/tests ${D}${PTEST_PATH}
-	cp -rf ${S}/tests ${D}${PTEST_PATH}
-	find ${D}${PTEST_PATH}/ -type f -name Makefile.am -o -name Makefile.in -o -name Makefile -delete
-	install -d ${D}${PTEST_PATH}/src
-	ln -sf ${bindir}/curl   ${D}${PTEST_PATH}/src/curl
-	cp -rf ${D}${bindir}/curl-config ${D}${PTEST_PATH}
-}
-
-RDEPENDS:${PN}-ptest += "bash perl-modules"
-
-PACKAGES =+ "lib${BPN}"
-
-FILES:lib${BPN} = "${libdir}/lib*.so.*"
-RRECOMMENDS:lib${BPN} += "ca-certificates"
-
-FILES:${PN} += "${datadir}/zsh"
-
-inherit multilib_script
-MULTILIB_SCRIPTS = "${PN}-dev:${bindir}/curl-config"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/curl/curl_7.85.0.bb b/poky/meta/recipes-support/curl/curl_7.85.0.bb
new file mode 100644
index 0000000..3b55830
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl_7.85.0.bb
@@ -0,0 +1,115 @@
+SUMMARY = "Command line tool and library for client-side URL transfers"
+DESCRIPTION = "It uses URL syntax to transfer data to and from servers. \
+curl is a widely used because of its ability to be flexible and complete \
+complex tasks. For example, you can use curl for things like user authentication, \
+HTTP post, SSL connections, proxy support, FTP uploads, and more!"
+HOMEPAGE = "https://curl.se/"
+BUGTRACKER = "https://github.com/curl/curl/issues"
+SECTION = "console/network"
+LICENSE = "MIT-open-group"
+LIC_FILES_CHKSUM = "file://COPYING;md5=190c514872597083303371684954f238"
+
+SRC_URI = " \
+    https://curl.se/download/${BP}.tar.xz \
+    file://run-ptest \
+    file://disable-tests \
+"
+SRC_URI[sha256sum] = "88b54a6d4b9a48cb4d873c7056dcba997ddd5b7be5a2d537a4acb55c20b04be6"
+
+# Curl has used many names over the years...
+CVE_PRODUCT = "haxx:curl haxx:libcurl curl:curl curl:libcurl libcurl:libcurl daniel_stenberg:curl"
+
+inherit autotools pkgconfig binconfig multilib_header ptest
+
+# Entropy source for random PACKAGECONFIG option
+RANDOM ?= "/dev/urandom"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} libidn openssl proxy random threaded-resolver verbose zlib"
+PACKAGECONFIG:class-native = "ipv6 openssl proxy random threaded-resolver verbose zlib"
+PACKAGECONFIG:class-nativesdk = "ipv6 openssl proxy random threaded-resolver verbose zlib"
+
+# 'ares' and 'threaded-resolver' are mutually exclusive
+PACKAGECONFIG[ares] = "--enable-ares,--disable-ares,c-ares,,,threaded-resolver"
+PACKAGECONFIG[brotli] = "--with-brotli,--without-brotli,brotli"
+PACKAGECONFIG[builtinmanual] = "--enable-manual,--disable-manual"
+PACKAGECONFIG[dict] = "--enable-dict,--disable-dict,"
+PACKAGECONFIG[gnutls] = "--with-gnutls,--without-gnutls,gnutls"
+PACKAGECONFIG[gopher] = "--enable-gopher,--disable-gopher,"
+PACKAGECONFIG[imap] = "--enable-imap,--disable-imap,"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
+PACKAGECONFIG[krb5] = "--with-gssapi,--without-gssapi,krb5"
+PACKAGECONFIG[ldap] = "--enable-ldap,--disable-ldap,"
+PACKAGECONFIG[ldaps] = "--enable-ldaps,--disable-ldaps,"
+PACKAGECONFIG[libgsasl] = "--with-libgsasl,--without-libgsasl,libgsasl"
+PACKAGECONFIG[libidn] = "--with-libidn2,--without-libidn2,libidn2"
+PACKAGECONFIG[libssh2] = "--with-libssh2,--without-libssh2,libssh2"
+PACKAGECONFIG[mbedtls] = "--with-mbedtls=${STAGING_DIR_TARGET},--without-mbedtls,mbedtls"
+PACKAGECONFIG[mqtt] = "--enable-mqtt,--disable-mqtt,"
+PACKAGECONFIG[nghttp2] = "--with-nghttp2,--without-nghttp2,nghttp2"
+PACKAGECONFIG[openssl] = "--with-openssl,--without-openssl,openssl"
+PACKAGECONFIG[pop3] = "--enable-pop3,--disable-pop3,"
+PACKAGECONFIG[proxy] = "--enable-proxy,--disable-proxy,"
+PACKAGECONFIG[random] = "--with-random=${RANDOM},--without-random"
+PACKAGECONFIG[rtmpdump] = "--with-librtmp,--without-librtmp,rtmpdump"
+PACKAGECONFIG[rtsp] = "--enable-rtsp,--disable-rtsp,"
+PACKAGECONFIG[smb] = "--enable-smb,--disable-smb,"
+PACKAGECONFIG[smtp] = "--enable-smtp,--disable-smtp,"
+PACKAGECONFIG[nss] = "--with-nss,--without-nss,nss"
+PACKAGECONFIG[telnet] = "--enable-telnet,--disable-telnet,"
+PACKAGECONFIG[tftp] = "--enable-tftp,--disable-tftp,"
+PACKAGECONFIG[threaded-resolver] = "--enable-threaded-resolver,--disable-threaded-resolver,,,,ares"
+PACKAGECONFIG[verbose] = "--enable-verbose,--disable-verbose"
+PACKAGECONFIG[zlib] = "--with-zlib=${STAGING_LIBDIR}/../,--without-zlib,zlib"
+PACKAGECONFIG[zstd] = "--with-zstd,--without-zstd,zstd"
+
+EXTRA_OECONF = " \
+    --disable-libcurl-option \
+    --disable-ntlm-wb \
+    --enable-crypto-auth \
+    --with-ca-bundle=${sysconfdir}/ssl/certs/ca-certificates.crt \
+    --without-libpsl \
+    --enable-debug \
+    --enable-optimize \
+    --disable-curldebug \
+    ${@'--without-ssl' if (bb.utils.filter('PACKAGECONFIG', 'gnutls mbedtls nss openssl', d) == '') else ''} \
+"
+
+do_install:append:class-target() {
+	# cleanup buildpaths from curl-config
+	sed -i \
+	    -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
+	    -e 's,--with-libtool-sysroot=${STAGING_DIR_TARGET},,g' \
+	    -e 's|${DEBUG_PREFIX_MAP}||g' \
+	    -e 's|${@" ".join(d.getVar("DEBUG_PREFIX_MAP").split())}||g' \
+	    ${D}${bindir}/curl-config
+}
+
+do_compile_ptest() {
+	oe_runmake test
+	oe_runmake -C ${B}/tests/server
+}
+
+do_install_ptest() {
+	cat  ${WORKDIR}/disable-tests >> ${S}/tests/data/DISABLED
+	rm -f ${B}/tests/configurehelp.pm
+	cp -rf ${B}/tests ${D}${PTEST_PATH}
+	cp -rf ${S}/tests ${D}${PTEST_PATH}
+	find ${D}${PTEST_PATH}/ -type f -name Makefile.am -o -name Makefile.in -o -name Makefile -delete
+	install -d ${D}${PTEST_PATH}/src
+	ln -sf ${bindir}/curl   ${D}${PTEST_PATH}/src/curl
+	cp -rf ${D}${bindir}/curl-config ${D}${PTEST_PATH}
+}
+
+RDEPENDS:${PN}-ptest += "bash perl-modules"
+
+PACKAGES =+ "lib${BPN}"
+
+FILES:lib${BPN} = "${libdir}/lib*.so.*"
+RRECOMMENDS:lib${BPN} += "ca-certificates"
+
+FILES:${PN} += "${datadir}/zsh"
+
+inherit multilib_script
+MULTILIB_SCRIPTS = "${PN}-dev:${bindir}/curl-config"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/curl/files/0001-easy_lock-switch-to-using-atomic_int-instead-of-bool.patch b/poky/meta/recipes-support/curl/files/0001-easy_lock-switch-to-using-atomic_int-instead-of-bool.patch
deleted file mode 100644
index 878839a..0000000
--- a/poky/meta/recipes-support/curl/files/0001-easy_lock-switch-to-using-atomic_int-instead-of-bool.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From 50efb0822aa0e0ab165158dd0a26e65a2290e6d2 Mon Sep 17 00:00:00 2001
-From: Daniel Stenberg <daniel@haxx.se>
-Date: Tue, 28 Jun 2022 09:00:25 +0200
-Subject: [PATCH] easy_lock: switch to using atomic_int instead of bool
-
-To work with more compilers without requiring separate libs to
-link. Like with gcc-12 for RISC-V on Linux.
-
-Reported-by: Adam Sampson
-Fixes #9055
-Closes #9061
-
-Upstream-Status: Backport [50efb0822aa0e0ab165158dd0a26e65a2290e6d2]
-
-Signed-off-by: He Zhe <zhe.he@windriver.com>
----
- lib/easy_lock.h | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/lib/easy_lock.h b/lib/easy_lock.h
-index 07c85c5ff..9c11bc50c 100644
---- a/lib/easy_lock.h
-+++ b/lib/easy_lock.h
-@@ -40,8 +40,8 @@
- #include <sched.h>
- #endif
- 
--#define curl_simple_lock atomic_bool
--#define CURL_SIMPLE_LOCK_INIT false
-+#define curl_simple_lock atomic_int
-+#define CURL_SIMPLE_LOCK_INIT 0
- 
- static inline void curl_simple_lock_lock(curl_simple_lock *lock)
- {
--- 
-2.25.1
-
diff --git a/poky/meta/recipes-support/curl/files/0001-easy_lock.h-include-sched.h-if-available-to-fix-buil.patch b/poky/meta/recipes-support/curl/files/0001-easy_lock.h-include-sched.h-if-available-to-fix-buil.patch
deleted file mode 100644
index 771bdb2..0000000
--- a/poky/meta/recipes-support/curl/files/0001-easy_lock.h-include-sched.h-if-available-to-fix-buil.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-From e2e7f54b7bea521fa8373095d0f43261a720cda0 Mon Sep 17 00:00:00 2001
-From: Daniel Stenberg <daniel@haxx.se>
-Date: Mon, 27 Jun 2022 08:46:21 +0200
-Subject: [PATCH] easy_lock.h: include sched.h if available to fix build
-
-Patched-by: Harry Sintonen
-
-Closes #9054
-
-Upstream-Status: Backport [e2e7f54b7bea521fa8373095d0f43261a720cda0]
-
-Signed-off-by: Robert Joslyn <robert.joslyn@redrectangle.org>
----
- lib/easy_lock.h | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/lib/easy_lock.h b/lib/easy_lock.h
-index 819f50ce8..1f54289ce 100644
---- a/lib/easy_lock.h
-+++ b/lib/easy_lock.h
-@@ -36,6 +36,9 @@
- 
- #elif defined (HAVE_ATOMIC)
- #include <stdatomic.h>
-+#if defined(HAVE_SCHED_YIELD)
-+#include <sched.h>
-+#endif
- 
- #define curl_simple_lock atomic_bool
- #define CURL_SIMPLE_LOCK_INIT false
--- 
-2.35.1
-
diff --git a/poky/meta/recipes-support/diffoscope/diffoscope_218.bb b/poky/meta/recipes-support/diffoscope/diffoscope_218.bb
deleted file mode 100644
index eceba47..0000000
--- a/poky/meta/recipes-support/diffoscope/diffoscope_218.bb
+++ /dev/null
@@ -1,30 +0,0 @@
-SUMMARY = "in-depth comparison of files, archives, and directories"
-DESCRIPTION = "Tries to get to the bottom of what makes files or directories \
-different. It will recursively unpack archives of many kinds and transform \
-various binary formats into more human-readable form to compare them. \
-It can compare two tarballs, ISO images, or PDF just as easily."
-HOMEPAGE = "https://diffoscope.org/"
-BUGTRACKER = "https://salsa.debian.org/reproducible-builds/diffoscope/-/issues"
-LICENSE = "GPL-3.0-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-
-PYPI_PACKAGE = "diffoscope"
-
-inherit pypi setuptools3
-
-SRC_URI[sha256sum] = "68056e6d5382bfe16662c60d47bf710aa7b0ef43bfde1172fad694bc487279e3"
-
-RDEPENDS:${PN} += "binutils vim squashfs-tools python3-libarchive-c python3-magic python3-rpm"
-
-# Dependencies don't build for musl
-COMPATIBLE_HOST:libc-musl = 'null'
-
-do_install:append:class-native() {
-	create_wrapper ${D}${bindir}/diffoscope \
-		MAGIC=${STAGING_DIR_NATIVE}${datadir_native}/misc/magic.mgc \
-		RPM_CONFIGDIR=${STAGING_LIBDIR_NATIVE}/rpm \
-		LD_LIBRARY_PATH=${STAGING_LIBDIR_NATIVE} \
-		RPM_ETCCONFIGDIR=${STAGING_DIR_NATIVE}
-}
-
-BBCLASSEXTEND = "native"
diff --git a/poky/meta/recipes-support/diffoscope/diffoscope_220.bb b/poky/meta/recipes-support/diffoscope/diffoscope_220.bb
new file mode 100644
index 0000000..dc55647
--- /dev/null
+++ b/poky/meta/recipes-support/diffoscope/diffoscope_220.bb
@@ -0,0 +1,30 @@
+SUMMARY = "in-depth comparison of files, archives, and directories"
+DESCRIPTION = "Tries to get to the bottom of what makes files or directories \
+different. It will recursively unpack archives of many kinds and transform \
+various binary formats into more human-readable form to compare them. \
+It can compare two tarballs, ISO images, or PDF just as easily."
+HOMEPAGE = "https://diffoscope.org/"
+BUGTRACKER = "https://salsa.debian.org/reproducible-builds/diffoscope/-/issues"
+LICENSE = "GPL-3.0-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
+
+PYPI_PACKAGE = "diffoscope"
+
+inherit pypi setuptools3
+
+SRC_URI[sha256sum] = "7873e13ac8b11b634ee3490b70b056c6a6bae9cfb794d6ba7cb43e7797b2a829"
+
+RDEPENDS:${PN} += "binutils vim squashfs-tools python3-libarchive-c python3-magic python3-rpm"
+
+# Dependencies don't build for musl
+COMPATIBLE_HOST:libc-musl = 'null'
+
+do_install:append:class-native() {
+	create_wrapper ${D}${bindir}/diffoscope \
+		MAGIC=${STAGING_DIR_NATIVE}${datadir_native}/misc/magic.mgc \
+		RPM_CONFIGDIR=${STAGING_LIBDIR_NATIVE}/rpm \
+		LD_LIBRARY_PATH=${STAGING_LIBDIR_NATIVE} \
+		RPM_ETCCONFIGDIR=${STAGING_DIR_NATIVE}
+}
+
+BBCLASSEXTEND = "native"
diff --git a/poky/meta/recipes-support/gnutls/gnutls_3.7.6.bb b/poky/meta/recipes-support/gnutls/gnutls_3.7.6.bb
deleted file mode 100644
index 6a0eb97..0000000
--- a/poky/meta/recipes-support/gnutls/gnutls_3.7.6.bb
+++ /dev/null
@@ -1,90 +0,0 @@
-SUMMARY = "GNU Transport Layer Security Library"
-DESCRIPTION = "a secure communications library implementing the SSL, \
-TLS and DTLS protocols and technologies around them."
-HOMEPAGE = "https://gnutls.org/"
-BUGTRACKER = "https://savannah.gnu.org/support/?group=gnutls"
-
-LICENSE = "GPL-3.0-or-later & LGPL-2.1-or-later"
-LICENSE:${PN} = "LGPL-2.1-or-later"
-LICENSE:${PN}-xx = "LGPL-2.1-or-later"
-LICENSE:${PN}-bin = "GPL-3.0-or-later"
-LICENSE:${PN}-OpenSSL = "GPL-3.0-or-later"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=71391c8e0c1cfe68077e7fce3b586283 \
-                    file://doc/COPYING;md5=c678957b0c8e964aa6c70fd77641a71e \
-                    file://doc/COPYING.LESSER;md5=a6f89e2100d9b6cdffcea4f398e37343"
-
-DEPENDS = "nettle gmp virtual/libiconv libunistring"
-DEPENDS:append:libc-musl = " argp-standalone"
-
-SHRT_VER = "${@d.getVar('PV').split('.')[0]}.${@d.getVar('PV').split('.')[1]}"
-
-SRC_URI = "https://www.gnupg.org/ftp/gcrypt/gnutls/v${SHRT_VER}/gnutls-${PV}.tar.xz \
-           file://arm_eabi.patch \
-           file://0001-Creating-.hmac-file-should-be-excuted-in-target-envi.patch \
-           "
-
-SRC_URI[sha256sum] = "77065719a345bfb18faa250134be4c53bef70c1bd61f6c0c23ceb8b44f0262ff"
-
-inherit autotools texinfo pkgconfig gettext lib_package gtk-doc
-
-PACKAGECONFIG ??= "libidn  ${@bb.utils.filter('DISTRO_FEATURES', 'seccomp', d)}"
-
-# You must also have CONFIG_SECCOMP enabled in the kernel for
-# seccomp to work.
-PACKAGECONFIG[seccomp] = "--with-libseccomp-prefix=${STAGING_EXECPREFIXDIR},ac_cv_libseccomp=no,libseccomp"
-PACKAGECONFIG[libidn] = "--with-idn,--without-idn,libidn2"
-PACKAGECONFIG[libtasn1] = "--with-included-libtasn1=no,--with-included-libtasn1,libtasn1"
-PACKAGECONFIG[p11-kit] = "--with-p11-kit,--without-p11-kit,p11-kit"
-PACKAGECONFIG[tpm] = "--with-tpm,--without-tpm,trousers"
-PACKAGECONFIG[fips] = "--enable-fips140-mode --with-libdl-prefix=${STAGING_BASELIBDIR}"
-
-EXTRA_OECONF = " \
-    --enable-doc \
-    --disable-libdane \
-    --disable-guile \
-    --disable-rpath \
-    --enable-openssl-compatibility \
-    --with-libpthread-prefix=${STAGING_DIR_HOST}${prefix} \
-    --with-librt-prefix=${STAGING_DIR_HOST}${prefix} \
-    --with-default-trust-store-file=${sysconfdir}/ssl/certs/ca-certificates.crt \
-"
-
-# Otherwise the tools try and use HOSTTOOLS_DIR/bash as a shell.
-export POSIX_SHELL="${base_bindir}/sh"
-
-LDFLAGS:append:libc-musl = " -largp"
-
-do_configure:prepend() {
-	for dir in . lib; do
-		rm -f ${dir}/aclocal.m4 ${dir}/m4/libtool.m4 ${dir}/m4/lt*.m4
-	done
-}
-
-do_install:append:class-target() {
-        if ${@bb.utils.contains('PACKAGECONFIG', 'fips', 'true', 'false', d)}; then
-          install -d ${D}${bindir}/bin
-          install -m 0755 ${B}/lib/.libs/fipshmac ${D}/${bindir}/
-        fi
-}
-
-PACKAGES =+ "${PN}-openssl ${PN}-xx ${PN}-fips"
-
-FILES:${PN}-dev += "${bindir}/gnutls-cli-debug"
-FILES:${PN}-openssl = "${libdir}/libgnutls-openssl.so.*"
-FILES:${PN}-xx = "${libdir}/libgnutlsxx.so.*"
-FILES:${PN}-fips = "${bindir}/fipshmac"
-
-BBCLASSEXTEND = "native nativesdk"
-
-pkg_postinst_ontarget:${PN}-fips () {
-    if test -x ${bindir}/fipshmac
-    then
-        mkdir ${sysconfdir}/gnutls
-        touch ${sysconfdir}/gnutls/config
-        ${bindir}/fipshmac ${libdir}/libgnutls.so.30.*.* > ${libdir}/.libgnutls.so.30.hmac
-        ${bindir}/fipshmac ${libdir}/libnettle.so.8.* > ${libdir}/.libnettle.so.8.hmac
-        ${bindir}/fipshmac ${libdir}/libgmp.so.10.*.* > ${libdir}/.libgmp.so.10.hmac
-        ${bindir}/fipshmac ${libdir}/libhogweed.so.6.* > ${libdir}/.libhogweed.so.6.hmac
-    fi
-}
diff --git a/poky/meta/recipes-support/gnutls/gnutls_3.7.7.bb b/poky/meta/recipes-support/gnutls/gnutls_3.7.7.bb
new file mode 100644
index 0000000..01fd4db
--- /dev/null
+++ b/poky/meta/recipes-support/gnutls/gnutls_3.7.7.bb
@@ -0,0 +1,90 @@
+SUMMARY = "GNU Transport Layer Security Library"
+DESCRIPTION = "a secure communications library implementing the SSL, \
+TLS and DTLS protocols and technologies around them."
+HOMEPAGE = "https://gnutls.org/"
+BUGTRACKER = "https://savannah.gnu.org/support/?group=gnutls"
+
+LICENSE = "GPL-3.0-or-later & LGPL-2.1-or-later"
+LICENSE:${PN} = "LGPL-2.1-or-later"
+LICENSE:${PN}-xx = "LGPL-2.1-or-later"
+LICENSE:${PN}-bin = "GPL-3.0-or-later"
+LICENSE:${PN}-OpenSSL = "GPL-3.0-or-later"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=71391c8e0c1cfe68077e7fce3b586283 \
+                    file://doc/COPYING;md5=c678957b0c8e964aa6c70fd77641a71e \
+                    file://doc/COPYING.LESSER;md5=a6f89e2100d9b6cdffcea4f398e37343"
+
+DEPENDS = "nettle gmp virtual/libiconv libunistring"
+DEPENDS:append:libc-musl = " argp-standalone"
+
+SHRT_VER = "${@d.getVar('PV').split('.')[0]}.${@d.getVar('PV').split('.')[1]}"
+
+SRC_URI = "https://www.gnupg.org/ftp/gcrypt/gnutls/v${SHRT_VER}/gnutls-${PV}.tar.xz \
+           file://arm_eabi.patch \
+           file://0001-Creating-.hmac-file-should-be-excuted-in-target-envi.patch \
+           "
+
+SRC_URI[sha256sum] = "be9143d0d58eab64dba9b77114aaafac529b6c0d7e81de6bdf1c9b59027d2106"
+
+inherit autotools texinfo pkgconfig gettext lib_package gtk-doc
+
+PACKAGECONFIG ??= "libidn  ${@bb.utils.filter('DISTRO_FEATURES', 'seccomp', d)}"
+
+# You must also have CONFIG_SECCOMP enabled in the kernel for
+# seccomp to work.
+PACKAGECONFIG[seccomp] = "--with-libseccomp-prefix=${STAGING_EXECPREFIXDIR},ac_cv_libseccomp=no,libseccomp"
+PACKAGECONFIG[libidn] = "--with-idn,--without-idn,libidn2"
+PACKAGECONFIG[libtasn1] = "--with-included-libtasn1=no,--with-included-libtasn1,libtasn1"
+PACKAGECONFIG[p11-kit] = "--with-p11-kit,--without-p11-kit,p11-kit"
+PACKAGECONFIG[tpm] = "--with-tpm,--without-tpm,trousers"
+PACKAGECONFIG[fips] = "--enable-fips140-mode --with-libdl-prefix=${STAGING_BASELIBDIR}"
+
+EXTRA_OECONF = " \
+    --enable-doc \
+    --disable-libdane \
+    --disable-guile \
+    --disable-rpath \
+    --enable-openssl-compatibility \
+    --with-libpthread-prefix=${STAGING_DIR_HOST}${prefix} \
+    --with-librt-prefix=${STAGING_DIR_HOST}${prefix} \
+    --with-default-trust-store-file=${sysconfdir}/ssl/certs/ca-certificates.crt \
+"
+
+# Otherwise the tools try and use HOSTTOOLS_DIR/bash as a shell.
+export POSIX_SHELL="${base_bindir}/sh"
+
+LDFLAGS:append:libc-musl = " -largp"
+
+do_configure:prepend() {
+	for dir in . lib; do
+		rm -f ${dir}/aclocal.m4 ${dir}/m4/libtool.m4 ${dir}/m4/lt*.m4
+	done
+}
+
+do_install:append:class-target() {
+        if ${@bb.utils.contains('PACKAGECONFIG', 'fips', 'true', 'false', d)}; then
+          install -d ${D}${bindir}/bin
+          install -m 0755 ${B}/lib/.libs/fipshmac ${D}/${bindir}/
+        fi
+}
+
+PACKAGES =+ "${PN}-openssl ${PN}-xx ${PN}-fips"
+
+FILES:${PN}-dev += "${bindir}/gnutls-cli-debug"
+FILES:${PN}-openssl = "${libdir}/libgnutls-openssl.so.*"
+FILES:${PN}-xx = "${libdir}/libgnutlsxx.so.*"
+FILES:${PN}-fips = "${bindir}/fipshmac"
+
+BBCLASSEXTEND = "native nativesdk"
+
+pkg_postinst_ontarget:${PN}-fips () {
+    if test -x ${bindir}/fipshmac
+    then
+        mkdir ${sysconfdir}/gnutls
+        touch ${sysconfdir}/gnutls/config
+        ${bindir}/fipshmac ${libdir}/libgnutls.so.30.*.* > ${libdir}/.libgnutls.so.30.hmac
+        ${bindir}/fipshmac ${libdir}/libnettle.so.8.* > ${libdir}/.libnettle.so.8.hmac
+        ${bindir}/fipshmac ${libdir}/libgmp.so.10.*.* > ${libdir}/.libgmp.so.10.hmac
+        ${bindir}/fipshmac ${libdir}/libhogweed.so.6.* > ${libdir}/.libhogweed.so.6.hmac
+    fi
+}
diff --git a/poky/meta/recipes-support/gnutls/libtasn1_4.18.0.bb b/poky/meta/recipes-support/gnutls/libtasn1_4.18.0.bb
deleted file mode 100644
index db49adc..0000000
--- a/poky/meta/recipes-support/gnutls/libtasn1_4.18.0.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "Library for ASN.1 and DER manipulation"
-DESCRIPTION = "A highly portable C library that encodes and decodes \
-DER/BER data following an ASN.1 schema. "
-HOMEPAGE = "http://www.gnu.org/software/libtasn1/"
-
-LICENSE = "GPL-3.0-or-later & LGPL-2.1-or-later"
-LICENSE:${PN}-bin = "GPL-3.0-or-later"
-LICENSE:${PN} = "LGPL-2.1-or-later"
-LIC_FILES_CHKSUM = "file://doc/COPYING;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://doc/COPYING.LESSER;md5=4fbd65380cdd255951079008b364516c \
-                    file://COPYING;md5=75ac100ec923f959898182307970c360"
-
-SRC_URI = "${GNU_MIRROR}/libtasn1/libtasn1-${PV}.tar.gz \
-           file://dont-depend-on-help2man.patch \
-           "
-
-DEPENDS = "bison-native"
-
-SRC_URI[sha256sum] = "4365c154953563d64c67a024b607d1ee75c6db76e0d0f65709ea80a334cd1898"
-
-inherit autotools texinfo lib_package gtk-doc
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/gnutls/libtasn1_4.19.0.bb b/poky/meta/recipes-support/gnutls/libtasn1_4.19.0.bb
new file mode 100644
index 0000000..5fb8b54
--- /dev/null
+++ b/poky/meta/recipes-support/gnutls/libtasn1_4.19.0.bb
@@ -0,0 +1,23 @@
+SUMMARY = "Library for ASN.1 and DER manipulation"
+DESCRIPTION = "A highly portable C library that encodes and decodes \
+DER/BER data following an ASN.1 schema. "
+HOMEPAGE = "http://www.gnu.org/software/libtasn1/"
+
+LICENSE = "GPL-3.0-or-later & LGPL-2.1-or-later"
+LICENSE:${PN}-bin = "GPL-3.0-or-later"
+LICENSE:${PN} = "LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = "file://doc/COPYING;md5=d32239bcb673463ab874e80d47fae504 \
+                    file://doc/COPYING.LESSER;md5=4fbd65380cdd255951079008b364516c \
+                    file://COPYING;md5=75ac100ec923f959898182307970c360"
+
+SRC_URI = "${GNU_MIRROR}/libtasn1/libtasn1-${PV}.tar.gz \
+           file://dont-depend-on-help2man.patch \
+           "
+
+DEPENDS = "bison-native"
+
+SRC_URI[sha256sum] = "1613f0ac1cf484d6ec0ce3b8c06d56263cc7242f1c23b30d82d23de345a63f7a"
+
+inherit autotools texinfo lib_package gtk-doc
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/gpgme/gpgme/0001-pkgconfig.patch b/poky/meta/recipes-support/gpgme/gpgme/0001-pkgconfig.patch
index 35c6b40..3b2f59c 100644
--- a/poky/meta/recipes-support/gpgme/gpgme/0001-pkgconfig.patch
+++ b/poky/meta/recipes-support/gpgme/gpgme/0001-pkgconfig.patch
@@ -1,4 +1,4 @@
-From 98ce65902b197faa8f660564613ca2e504c2f8f8 Mon Sep 17 00:00:00 2001
+From 0d7ec5b98dc6cbd35f56deaecec5ecfdaa944aee Mon Sep 17 00:00:00 2001
 From: Richard Purdie <richard.purdie@linuxfoundation.org>
 Date: Fri, 10 May 2019 14:23:55 +0800
 Subject: [PATCH] pkgconfig
@@ -15,6 +15,7 @@
 Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
 Rebase to 1.17.0
 Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
+
 ---
  configure.ac            |   1 +
  src/Makefile.am         |   4 +-
@@ -25,10 +26,10 @@
  create mode 100644 src/gpgme-pthread.pc.in
 
 diff --git a/configure.ac b/configure.ac
-index 80ce79c..d7c0ac1 100644
+index 9d696b9..5b4e730 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -905,6 +905,7 @@ AC_CONFIG_FILES(Makefile src/Makefile
+@@ -926,6 +926,7 @@ AC_CONFIG_FILES(Makefile src/Makefile
                  src/gpgme-glib.pc
                  src/gpgme.h)
  AC_CONFIG_FILES(src/gpgme-config, chmod +x src/gpgme-config)
@@ -37,7 +38,7 @@
  AC_CONFIG_FILES(lang/cpp/tests/Makefile)
  AC_CONFIG_FILES(lang/cpp/src/GpgmeppConfig-w32.cmake.in)
 diff --git a/src/Makefile.am b/src/Makefile.am
-index 39c341f..3aca716 100644
+index 67805a8..d38ba02 100644
 --- a/src/Makefile.am
 +++ b/src/Makefile.am
 @@ -20,11 +20,11 @@
@@ -52,8 +53,8 @@
 -	     gpgme.pc.in gpgme-glib.pc.in
 +	     gpgme.pc.in gpgme-glib.pc.in gpgme-pthread.pc.in
  
- bin_SCRIPTS = gpgme-config
- m4datadir = $(datadir)/aclocal
+ if USE_GPGRT_CONFIG
+ noinst_SCRIPTS = gpgme-config
 diff --git a/src/gpgme-pthread.pc.in b/src/gpgme-pthread.pc.in
 new file mode 100644
 index 0000000..074bbf6
@@ -76,7 +77,7 @@
 +Cflags: -I${includedir}
 +Requires: libassuan gpg-error
 diff --git a/src/gpgme.m4 b/src/gpgme.m4
-index 71b0010..30ec151 100644
+index 71b0010..5821895 100644
 --- a/src/gpgme.m4
 +++ b/src/gpgme.m4
 @@ -79,7 +79,7 @@ dnl config script does not match the host specification the script
@@ -287,6 +288,3 @@
 +Cflags: -I${includedir}
 +Libs: -L${libdir} -lgpgme
  URL: https://www.gnupg.org/software/gpgme/index.html
--- 
-2.25.1
-
diff --git a/poky/meta/recipes-support/gpgme/gpgme_1.17.1.bb b/poky/meta/recipes-support/gpgme/gpgme_1.17.1.bb
deleted file mode 100644
index d95ed6c..0000000
--- a/poky/meta/recipes-support/gpgme/gpgme_1.17.1.bb
+++ /dev/null
@@ -1,87 +0,0 @@
-SUMMARY = "High-level GnuPG encryption/signing API"
-DESCRIPTION = "GnuPG Made Easy (GPGME) is a library designed to make access to GnuPG easier for applications. It provides a High-Level Crypto API for encryption, decryption, signing, signature verification and key management"
-HOMEPAGE = "http://www.gnupg.org/gpgme.html"
-BUGTRACKER = "https://bugs.g10code.com/gnupg/index"
-
-LICENSE = "GPL-2.0-or-later & LGPL-2.1-or-later"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-                    file://COPYING.LESSER;md5=bbb461211a33b134d42ed5ee802b37ff \
-                    file://src/gpgme.h.in;endline=23;md5=2f0bf06d1c7dcb28532a9d0f94a7ca1d \
-                    file://src/engine.h;endline=22;md5=4b6d8ba313d9b564cc4d4cfb1640af9d"
-
-UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
-SRC_URI = "${GNUPG_MIRROR}/gpgme/${BP}.tar.bz2 \
-           file://0001-Revert-build-Make-gpgme.m4-use-gpgrt-config-with-.pc.patch \
-           file://0001-pkgconfig.patch \
-           file://0002-gpgme-lang-python-gpg-error-config-should-not-be-use.patch \
-           file://0003-Correctly-install-python-modules.patch \
-           file://0004-python-import.patch \
-           file://0005-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch \
-           file://0006-fix-build-path-issue.patch \
-           file://0007-python-Add-variables-to-tests.patch \
-           file://0008-do-not-auto-check-var-PYTHON.patch \
-           file://0001-use-closefrom-on-linux-and-glibc-2.34.patch \
-           "
-
-SRC_URI[sha256sum] = "711eabf5dd661b9b04be9edc9ace2a7bc031f6bd9d37a768d02d0efdef108f5f"
-
-DEPENDS = "libgpg-error libassuan"
-RDEPENDS:${PN}-cpp += "libstdc++"
-
-RDEPENDS:python2-gpg += "python-unixadmin"
-RDEPENDS:python3-gpg += "python3-unixadmin"
-
-BINCONFIG = "${bindir}/gpgme-config"
-
-# Note select python2 or python3, but you can't select both at the same time
-PACKAGECONFIG ??= "python3"
-PACKAGECONFIG[python2] = ",,python swig-native,"
-PACKAGECONFIG[python3] = ",,python3 swig-native,"
-
-# Default in configure.ac: "cl cpp python qt"
-# Supported: "cl cpp python python2 python3 qt"
-# python says 'search and find python2 or python3'
-
-# Building the C++ bindings for native requires a C++ compiler with C++11
-# support. Since these bindings are currently not needed, we can disable them.
-DEFAULT_LANGUAGES = ""
-DEFAULT_LANGUAGES:class-target = "cpp"
-LANGUAGES ?= "${DEFAULT_LANGUAGES} python"
-
-PYTHON_INHERIT = "${@bb.utils.contains('PACKAGECONFIG', 'python2', 'pythonnative', '', d)}"
-PYTHON_INHERIT .= "${@bb.utils.contains('PACKAGECONFIG', 'python3', 'python3native python3targetconfig', '', d)}"
-
-EXTRA_OECONF += '--enable-languages="${LANGUAGES}" \
-                 --disable-gpgconf-test \
-                 --disable-gpg-test \
-                 --disable-gpgsm-test \
-                 --disable-g13-test \
-                 --disable-lang-python-test \
-'
-
-inherit autotools texinfo binconfig-disabled pkgconfig setuptools3-base ${PYTHON_INHERIT} multilib_header
-
-export PKG_CONFIG='pkg-config'
-
-BBCLASSEXTEND = "native nativesdk"
-
-PACKAGES =+ "${PN}-cpp"
-PACKAGES =. "${@bb.utils.contains('PACKAGECONFIG', 'python2', 'python2-gpg ', '', d)}"
-PACKAGES =. "${@bb.utils.contains('PACKAGECONFIG', 'python3', 'python3-gpg ', '', d)}"
-
-FILES:${PN}-cpp = "${libdir}/libgpgmepp.so.*"
-FILES:python2-gpg = "${PYTHON_SITEPACKAGES_DIR}/*"
-FILES:python3-gpg = "${PYTHON_SITEPACKAGES_DIR}/*"
-FILES:${PN}-dev += "${datadir}/common-lisp/source/gpgme/*"
-
-CFLAGS:append:libc-musl = " -D__error_t_defined "
-do_configure:prepend () {
-	# Else these could be used in preference to those in aclocal-copy
-	rm -f ${S}/m4/gpg-error.m4
-	rm -f ${S}/m4/libassuan.m4
-	rm -f ${S}/m4/python.m4
-}
-
-do_install:append() {
-       oe_multilib_header gpgme.h
-}
diff --git a/poky/meta/recipes-support/gpgme/gpgme_1.18.0.bb b/poky/meta/recipes-support/gpgme/gpgme_1.18.0.bb
new file mode 100644
index 0000000..ca9c6ca
--- /dev/null
+++ b/poky/meta/recipes-support/gpgme/gpgme_1.18.0.bb
@@ -0,0 +1,87 @@
+SUMMARY = "High-level GnuPG encryption/signing API"
+DESCRIPTION = "GnuPG Made Easy (GPGME) is a library designed to make access to GnuPG easier for applications. It provides a High-Level Crypto API for encryption, decryption, signing, signature verification and key management"
+HOMEPAGE = "http://www.gnupg.org/gpgme.html"
+BUGTRACKER = "https://bugs.g10code.com/gnupg/index"
+
+LICENSE = "GPL-2.0-or-later & LGPL-2.1-or-later"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+                    file://COPYING.LESSER;md5=bbb461211a33b134d42ed5ee802b37ff \
+                    file://src/gpgme.h.in;endline=23;md5=2f0bf06d1c7dcb28532a9d0f94a7ca1d \
+                    file://src/engine.h;endline=22;md5=4b6d8ba313d9b564cc4d4cfb1640af9d"
+
+UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
+SRC_URI = "${GNUPG_MIRROR}/gpgme/${BP}.tar.bz2 \
+           file://0001-Revert-build-Make-gpgme.m4-use-gpgrt-config-with-.pc.patch \
+           file://0001-pkgconfig.patch \
+           file://0002-gpgme-lang-python-gpg-error-config-should-not-be-use.patch \
+           file://0003-Correctly-install-python-modules.patch \
+           file://0004-python-import.patch \
+           file://0005-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch \
+           file://0006-fix-build-path-issue.patch \
+           file://0007-python-Add-variables-to-tests.patch \
+           file://0008-do-not-auto-check-var-PYTHON.patch \
+           file://0001-use-closefrom-on-linux-and-glibc-2.34.patch \
+           "
+
+SRC_URI[sha256sum] = "361d4eae47ce925dba0ea569af40e7b52c645c4ae2e65e5621bf1b6cdd8b0e9e"
+
+DEPENDS = "libgpg-error libassuan"
+RDEPENDS:${PN}-cpp += "libstdc++"
+
+RDEPENDS:python2-gpg += "python-unixadmin"
+RDEPENDS:python3-gpg += "python3-unixadmin"
+
+BINCONFIG = "${bindir}/gpgme-config"
+
+# Note select python2 or python3, but you can't select both at the same time
+PACKAGECONFIG ??= "python3"
+PACKAGECONFIG[python2] = ",,python swig-native,"
+PACKAGECONFIG[python3] = ",,python3 swig-native,"
+
+# Default in configure.ac: "cl cpp python qt"
+# Supported: "cl cpp python python2 python3 qt"
+# python says 'search and find python2 or python3'
+
+# Building the C++ bindings for native requires a C++ compiler with C++11
+# support. Since these bindings are currently not needed, we can disable them.
+DEFAULT_LANGUAGES = ""
+DEFAULT_LANGUAGES:class-target = "cpp"
+LANGUAGES ?= "${DEFAULT_LANGUAGES} python"
+
+PYTHON_INHERIT = "${@bb.utils.contains('PACKAGECONFIG', 'python2', 'pythonnative', '', d)}"
+PYTHON_INHERIT .= "${@bb.utils.contains('PACKAGECONFIG', 'python3', 'python3native python3targetconfig', '', d)}"
+
+EXTRA_OECONF += '--enable-languages="${LANGUAGES}" \
+                 --disable-gpgconf-test \
+                 --disable-gpg-test \
+                 --disable-gpgsm-test \
+                 --disable-g13-test \
+                 --disable-lang-python-test \
+'
+
+inherit autotools texinfo binconfig-disabled pkgconfig setuptools3-base ${PYTHON_INHERIT} multilib_header
+
+export PKG_CONFIG='pkg-config'
+
+BBCLASSEXTEND = "native nativesdk"
+
+PACKAGES =+ "${PN}-cpp"
+PACKAGES =. "${@bb.utils.contains('PACKAGECONFIG', 'python2', 'python2-gpg ', '', d)}"
+PACKAGES =. "${@bb.utils.contains('PACKAGECONFIG', 'python3', 'python3-gpg ', '', d)}"
+
+FILES:${PN}-cpp = "${libdir}/libgpgmepp.so.*"
+FILES:python2-gpg = "${PYTHON_SITEPACKAGES_DIR}/*"
+FILES:python3-gpg = "${PYTHON_SITEPACKAGES_DIR}/*"
+FILES:${PN}-dev += "${datadir}/common-lisp/source/gpgme/*"
+
+CFLAGS:append:libc-musl = " -D__error_t_defined "
+do_configure:prepend () {
+	# Else these could be used in preference to those in aclocal-copy
+	rm -f ${S}/m4/gpg-error.m4
+	rm -f ${S}/m4/libassuan.m4
+	rm -f ${S}/m4/python.m4
+}
+
+do_install:append() {
+       oe_multilib_header gpgme.h
+}
diff --git a/poky/meta/recipes-support/icu/icu_71.1.bb b/poky/meta/recipes-support/icu/icu_71.1.bb
index d8ef2a3..b39633c 100644
--- a/poky/meta/recipes-support/icu/icu_71.1.bb
+++ b/poky/meta/recipes-support/icu/icu_71.1.bb
@@ -15,20 +15,16 @@
 SPDX_S = "${WORKDIR}/icu"
 STAGING_ICU_DIR_NATIVE = "${STAGING_DATADIR_NATIVE}/${BPN}/${PV}"
 
-BINCONFIG = "${bindir}/icu-config"
-
 ICU_MAJOR_VER = "${@d.getVar('PV').split('.')[0]}"
 
-inherit autotools pkgconfig binconfig multilib_script
-
-MULTILIB_SCRIPTS = "${PN}-dev:${bindir}/icu-config"
+inherit autotools pkgconfig
 
 # ICU needs the native build directory as an argument to its --with-cross-build option when
 # cross-compiling. Taken the situation that different builds may share a common sstate-cache
 # into consideration, the native build directory needs to be staged.
-EXTRA_OECONF = "--with-cross-build=${STAGING_ICU_DIR_NATIVE}"
-EXTRA_OECONF:class-native = ""
-EXTRA_OECONF:class-nativesdk = "--with-cross-build=${STAGING_ICU_DIR_NATIVE}"
+EXTRA_OECONF = "--with-cross-build=${STAGING_ICU_DIR_NATIVE} --disable-icu-config"
+EXTRA_OECONF:class-native = "--disable-icu-config"
+EXTRA_OECONF:class-nativesdk = "--with-cross-build=${STAGING_ICU_DIR_NATIVE} --disable-icu-config"
 
 EXTRA_OECONF:append:class-target = "${@oe.utils.conditional('SITEINFO_ENDIANNESS', 'be', ' --with-data-packaging=archive', '', d)}"
 TARGET_CXXFLAGS:append = "${@oe.utils.conditional('SITEINFO_ENDIANNESS', 'be', ' -DICU_DATA_DIR=\\""${datadir}/${BPN}/${PV}\\""', '', d)}"
@@ -67,7 +63,7 @@
 	    -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
 	    -e 's|${DEBUG_PREFIX_MAP}||g' \
 	    -e 's:${HOSTTOOLS_DIR}/::g' \
-	    ${D}/${bindir}/icu-config ${D}/${libdir}/${BPN}/${PV}/Makefile.inc \
+	    ${D}/${libdir}/${BPN}/${PV}/Makefile.inc \
 	    ${D}/${libdir}/${BPN}/${PV}/pkgdata.inc
 }
 
diff --git a/poky/meta/recipes-support/iso-codes/iso-codes_4.10.0.bb b/poky/meta/recipes-support/iso-codes/iso-codes_4.10.0.bb
deleted file mode 100644
index 857fe46..0000000
--- a/poky/meta/recipes-support/iso-codes/iso-codes_4.10.0.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "ISO language, territory, currency, script codes and their translations"
-DESCRIPTION = "Provides lists of various ISO standards (e.g. country, \
-language, language scripts, and currency names) in one place, rather \
-than repeated in many programs throughout the system."
-HOMEPAGE = "https://salsa.debian.org/iso-codes-team/iso-codes"
-BUGTRACKER = "https://salsa.debian.org/iso-codes-team/iso-codes/issues"
-
-LICENSE = "LGPL-2.1-only"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-SRC_URI = "git://salsa.debian.org/iso-codes-team/iso-codes.git;protocol=https;branch=main;"
-SRCREV = "9a6c24ee40e737ab34273c1af13a8dabcae888dd"
-
-# inherit gettext cannot be used, because it adds gettext-native to BASEDEPENDS which
-# are inhibited by allarch
-DEPENDS = "gettext-native"
-
-S = "${WORKDIR}/git"
-
-inherit allarch autotools
-
-FILES:${PN} += "${datadir}/xml/"
diff --git a/poky/meta/recipes-support/iso-codes/iso-codes_4.11.0.bb b/poky/meta/recipes-support/iso-codes/iso-codes_4.11.0.bb
new file mode 100644
index 0000000..be57398
--- /dev/null
+++ b/poky/meta/recipes-support/iso-codes/iso-codes_4.11.0.bb
@@ -0,0 +1,22 @@
+SUMMARY = "ISO language, territory, currency, script codes and their translations"
+DESCRIPTION = "Provides lists of various ISO standards (e.g. country, \
+language, language scripts, and currency names) in one place, rather \
+than repeated in many programs throughout the system."
+HOMEPAGE = "https://salsa.debian.org/iso-codes-team/iso-codes"
+BUGTRACKER = "https://salsa.debian.org/iso-codes-team/iso-codes/issues"
+
+LICENSE = "LGPL-2.1-only"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+SRC_URI = "git://salsa.debian.org/iso-codes-team/iso-codes.git;protocol=https;branch=main;"
+SRCREV = "2651d7fe65582263c57385a852b0c6d8a49f6985"
+
+# inherit gettext cannot be used, because it adds gettext-native to BASEDEPENDS which
+# are inhibited by allarch
+DEPENDS = "gettext-native"
+
+S = "${WORKDIR}/git"
+
+inherit allarch autotools
+
+FILES:${PN} += "${datadir}/xml/"
diff --git a/poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.6.12.bb b/poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.6.12.bb
deleted file mode 100644
index 8ea84369..0000000
--- a/poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.6.12.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "A library for atomic integer operations"
-DESCRIPTION = "Package provides semi-portable access to hardware-provided atomic memory update operations on a number of architectures."
-HOMEPAGE = "https://github.com/ivmai/libatomic_ops/"
-SECTION = "optional"
-PROVIDES += "libatomics-ops"
-LICENSE = "GPL-2.0-only & MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://doc/LICENSING.txt;md5=e00dd5c8ac03a14c5ae5225a4525fa2d \
-                    "
-
-SRC_URI = "https://github.com/ivmai/libatomic_ops/releases/download/v${PV}/libatomic_ops-${PV}.tar.gz"
-UPSTREAM_CHECK_URI = "https://github.com/ivmai/libatomic_ops/releases"
-
-SRC_URI[sha256sum] = "f0ab566e25fce08b560e1feab6a3db01db4a38e5bc687804334ef3920c549f3e"
-
-S = "${WORKDIR}/libatomic_ops-${PV}"
-
-ALLOW_EMPTY:${PN} = "1"
-
-inherit autotools pkgconfig
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.6.14.bb b/poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.6.14.bb
new file mode 100644
index 0000000..fad92df
--- /dev/null
+++ b/poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.6.14.bb
@@ -0,0 +1,22 @@
+SUMMARY = "A library for atomic integer operations"
+DESCRIPTION = "Package provides semi-portable access to hardware-provided atomic memory update operations on a number of architectures."
+HOMEPAGE = "https://github.com/ivmai/libatomic_ops/"
+SECTION = "optional"
+PROVIDES += "libatomics-ops"
+LICENSE = "GPL-2.0-only & MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://doc/LICENSING.txt;md5=dfc50c7cea7b66935844587a0f7389e7 \
+                    "
+
+SRC_URI = "https://github.com/ivmai/libatomic_ops/releases/download/v${PV}/libatomic_ops-${PV}.tar.gz"
+UPSTREAM_CHECK_URI = "https://github.com/ivmai/libatomic_ops/releases"
+
+SRC_URI[sha256sum] = "390f244d424714735b7050d056567615b3b8f29008a663c262fb548f1802d292"
+
+S = "${WORKDIR}/libatomic_ops-${PV}"
+
+ALLOW_EMPTY:${PN} = "1"
+
+inherit autotools pkgconfig
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/libcap/files/0001-nativesdk-libcap-Raise-the-size-of-arrays-containing.patch b/poky/meta/recipes-support/libcap/files/0001-nativesdk-libcap-Raise-the-size-of-arrays-containing.patch
index 9884fb5..3f4c7e5 100644
--- a/poky/meta/recipes-support/libcap/files/0001-nativesdk-libcap-Raise-the-size-of-arrays-containing.patch
+++ b/poky/meta/recipes-support/libcap/files/0001-nativesdk-libcap-Raise-the-size-of-arrays-containing.patch
@@ -1,4 +1,4 @@
-From fc60e000169618a4adced845b9462d36ced1efdd Mon Sep 17 00:00:00 2001
+From 1c234bc39446eb9b23896e85dd67b02976d46c3d Mon Sep 17 00:00:00 2001
 From: Hongxu Jia <hongxu.jia@windriver.com>
 Date: Thu, 14 Oct 2021 15:57:36 +0800
 Subject: [PATCH] nativesdk-libcap: Raise the size of arrays containing dl
diff --git a/poky/meta/recipes-support/libcap/libcap_2.64.bb b/poky/meta/recipes-support/libcap/libcap_2.64.bb
deleted file mode 100644
index 7690d3e..0000000
--- a/poky/meta/recipes-support/libcap/libcap_2.64.bb
+++ /dev/null
@@ -1,80 +0,0 @@
-SUMMARY = "Library for getting/setting POSIX.1e capabilities"
-DESCRIPTION = "A library providing the API to access POSIX capabilities. \
-These allow giving various kinds of specific privileges to individual \
-users, without giving them full root permissions."
-HOMEPAGE = "http://sites.google.com/site/fullycapable/"
-# no specific GPL version required
-LICENSE = "BSD-3-Clause | GPL-2.0-only"
-LIC_FILES_CHKSUM_PAM = "file://pam_cap/License;md5=0ad4c9c052b9719ee4fce1bfc7c7dee4"
-LIC_FILES_CHKSUM = "\
-    file://License;md5=e2370ba375efe9e1a095c26d37e483b8 \
-    ${@bb.utils.contains('PACKAGECONFIG', 'pam', '${LIC_FILES_CHKSUM_PAM}', '', d)} \
-"
-
-DEPENDS = "hostperl-runtime-native gperf-native"
-
-SRC_URI = "${KERNELORG_MIRROR}/linux/libs/security/linux-privs/${BPN}2/${BPN}-${PV}.tar.xz \
-           file://0001-ensure-the-XATTR_NAME_CAPS-is-defined-when-it-is-use.patch \
-           file://0002-tests-do-not-run-target-executables.patch \
-           "
-SRC_URI:append:class-nativesdk = " \
-           file://0001-nativesdk-libcap-Raise-the-size-of-arrays-containing.patch \
-           "
-SRC_URI[sha256sum] = "c8465e1f0b068d5fc06199231135ccac7adb56d662b1de93589252e8cd071e13"
-
-UPSTREAM_CHECK_URI = "https://www.kernel.org/pub/linux/libs/security/linux-privs/${BPN}2/"
-
-inherit lib_package
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}"
-PACKAGECONFIG:class-native ??= ""
-
-PACKAGECONFIG[pam] = "PAM_CAP=yes,PAM_CAP=no,libpam"
-
-EXTRA_OEMAKE = " \
-  INDENT=  \
-  lib='${baselib}' \
-  RAISE_SETFCAP=no \
-  DYNAMIC=yes \
-  USE_GPERF=yes \
-"
-
-EXTRA_OEMAKE:append:class-target = " SYSTEM_HEADERS=${STAGING_INCDIR}"
-
-do_compile() {
-	unset CFLAGS BUILD_CFLAGS
-	oe_runmake \
-		${PACKAGECONFIG_CONFARGS} \
-		AR="${AR}" \
-		CC="${CC}" \
-		RANLIB="${RANLIB}" \
-                OBJCOPY="${OBJCOPY}" \
-		COPTS="${CFLAGS}" \
-		BUILD_COPTS="${BUILD_CFLAGS}"
-}
-
-do_install() {
-	oe_runmake install \
-		${PACKAGECONFIG_CONFARGS} \
-		DESTDIR="${D}" \
-		prefix="${prefix}" \
-		SBINDIR="${sbindir}"
-}
-
-do_install:append() {
-	# Move the library to base_libdir
-	install -d ${D}${base_libdir}
-	if [ ! ${D}${libdir} -ef ${D}${base_libdir} ]; then
-		mv ${D}${libdir}/libcap* ${D}${base_libdir}
-                if [ -d ${D}${libdir}/security ]; then
-			mv ${D}${libdir}/security ${D}${base_libdir}
-		fi
-	fi
-}
-
-FILES:${PN}-dev += "${base_libdir}/*.so"
-
-# pam files
-FILES:${PN} += "${base_libdir}/security/*.so"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/libcap/libcap_2.65.bb b/poky/meta/recipes-support/libcap/libcap_2.65.bb
new file mode 100644
index 0000000..8013d40
--- /dev/null
+++ b/poky/meta/recipes-support/libcap/libcap_2.65.bb
@@ -0,0 +1,80 @@
+SUMMARY = "Library for getting/setting POSIX.1e capabilities"
+DESCRIPTION = "A library providing the API to access POSIX capabilities. \
+These allow giving various kinds of specific privileges to individual \
+users, without giving them full root permissions."
+HOMEPAGE = "http://sites.google.com/site/fullycapable/"
+# no specific GPL version required
+LICENSE = "BSD-3-Clause | GPL-2.0-only"
+LIC_FILES_CHKSUM_PAM = "file://pam_cap/License;md5=0ad4c9c052b9719ee4fce1bfc7c7dee4"
+LIC_FILES_CHKSUM = "\
+    file://License;md5=e2370ba375efe9e1a095c26d37e483b8 \
+    ${@bb.utils.contains('PACKAGECONFIG', 'pam', '${LIC_FILES_CHKSUM_PAM}', '', d)} \
+"
+
+DEPENDS = "hostperl-runtime-native gperf-native"
+
+SRC_URI = "${KERNELORG_MIRROR}/linux/libs/security/linux-privs/${BPN}2/${BPN}-${PV}.tar.xz \
+           file://0001-ensure-the-XATTR_NAME_CAPS-is-defined-when-it-is-use.patch \
+           file://0002-tests-do-not-run-target-executables.patch \
+           "
+SRC_URI:append:class-nativesdk = " \
+           file://0001-nativesdk-libcap-Raise-the-size-of-arrays-containing.patch \
+           "
+SRC_URI[sha256sum] = "73e350020cc31fe15360879d19384ffa3395a825f065fcf6bda3a5cdf965bebd"
+
+UPSTREAM_CHECK_URI = "https://www.kernel.org/pub/linux/libs/security/linux-privs/${BPN}2/"
+
+inherit lib_package
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}"
+PACKAGECONFIG:class-native ??= ""
+
+PACKAGECONFIG[pam] = "PAM_CAP=yes,PAM_CAP=no,libpam"
+
+EXTRA_OEMAKE = " \
+  INDENT=  \
+  lib='${baselib}' \
+  RAISE_SETFCAP=no \
+  DYNAMIC=yes \
+  USE_GPERF=yes \
+"
+
+EXTRA_OEMAKE:append:class-target = " SYSTEM_HEADERS=${STAGING_INCDIR}"
+
+do_compile() {
+	unset CFLAGS BUILD_CFLAGS
+	oe_runmake \
+		${PACKAGECONFIG_CONFARGS} \
+		AR="${AR}" \
+		CC="${CC}" \
+		RANLIB="${RANLIB}" \
+                OBJCOPY="${OBJCOPY}" \
+		COPTS="${CFLAGS}" \
+		BUILD_COPTS="${BUILD_CFLAGS}"
+}
+
+do_install() {
+	oe_runmake install \
+		${PACKAGECONFIG_CONFARGS} \
+		DESTDIR="${D}" \
+		prefix="${prefix}" \
+		SBINDIR="${sbindir}"
+}
+
+do_install:append() {
+	# Move the library to base_libdir
+	install -d ${D}${base_libdir}
+	if [ ! ${D}${libdir} -ef ${D}${base_libdir} ]; then
+		mv ${D}${libdir}/libcap* ${D}${base_libdir}
+                if [ -d ${D}${libdir}/security ]; then
+			mv ${D}${libdir}/security ${D}${base_libdir}
+		fi
+	fi
+}
+
+FILES:${PN}-dev += "${base_libdir}/*.so"
+
+# pam files
+FILES:${PN} += "${base_libdir}/security/*.so"
+
+BBCLASSEXTEND = "native nativesdk"
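
For context, the PACKAGECONFIG default in the new recipe above is computed with bb.utils.filter(), which keeps only the DISTRO_FEATURES words the recipe acts on ('pam' here); whichever side wins then feeds PAM_CAP=yes or PAM_CAP=no into oe_runmake via PACKAGECONFIG_CONFARGS. A minimal standalone sketch of that selection, assuming a simplified re-implementation rather than the real bb.utils API:

    # Simplified stand-in for bb.utils.filter('DISTRO_FEATURES', 'pam', d): keep only
    # the distro features that also appear in the check list.
    def filter_features(checkvalues, distro_features):
        wanted = set(checkvalues.split())
        return " ".join(f for f in distro_features.split() if f in wanted)

    print(filter_features("pam", "acl ipv6 pam xattr"))   # -> "pam"  (PAM_CAP=yes passed to make)
    print(filter_features("pam", "acl ipv6 xattr"))       # -> ""     (PAM_CAP=no passed instead)
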
diff --git a/poky/meta/recipes-support/libevdev/libevdev_1.12.1.bb b/poky/meta/recipes-support/libevdev/libevdev_1.12.1.bb
deleted file mode 100644
index bdca64f..0000000
--- a/poky/meta/recipes-support/libevdev/libevdev_1.12.1.bb
+++ /dev/null
@@ -1,17 +0,0 @@
-SUMMARY = "Wrapper library for evdev devices"
-DESCRIPTION = "A library for handling evdev kernel devices. It abstracts \
-the evdev ioctls through type-safe interfaces and provides functions \
-to change the appearance of the device."
-HOMEPAGE = "http://www.freedesktop.org/wiki/Software/libevdev/"
-SECTION = "libs"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=80c550b3197bcb8da7d7557ebcc3fc46 \
-                    "
-
-SRC_URI = "http://www.freedesktop.org/software/libevdev/${BP}.tar.xz"
-SRC_URI[sha256sum] = "1dbba41bc516d3ca7abc0da5b862efe3ea8a7018fa6e9b97ce9d39401b22426c"
-
-inherit autotools pkgconfig
-
-UPSTREAM_CHECK_REGEX = "libevdev-(?P<pver>(\d+\.)+(?!90\d+)\d+)"
diff --git a/poky/meta/recipes-support/libevdev/libevdev_1.13.0.bb b/poky/meta/recipes-support/libevdev/libevdev_1.13.0.bb
new file mode 100644
index 0000000..ddcc4b6
--- /dev/null
+++ b/poky/meta/recipes-support/libevdev/libevdev_1.13.0.bb
@@ -0,0 +1,17 @@
+SUMMARY = "Wrapper library for evdev devices"
+DESCRIPTION = "A library for handling evdev kernel devices. It abstracts \
+the evdev ioctls through type-safe interfaces and provides functions \
+to change the appearance of the device."
+HOMEPAGE = "http://www.freedesktop.org/wiki/Software/libevdev/"
+SECTION = "libs"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=80c550b3197bcb8da7d7557ebcc3fc46 \
+                    "
+
+SRC_URI = "http://www.freedesktop.org/software/libevdev/${BP}.tar.xz"
+SRC_URI[sha256sum] = "9edf2006cc86a5055279647c38ec923d11a821ee4dc2c3033e8d20e8ee237cd9"
+
+inherit autotools pkgconfig
+
+UPSTREAM_CHECK_REGEX = "libevdev-(?P<pver>(\d+\.)+(?!90\d+)\d+)"
diff --git a/poky/meta/recipes-support/libgcrypt/files/0003-tests-bench-slope.c-workaround-ICE-failure-on-mips-w.patch b/poky/meta/recipes-support/libgcrypt/files/0003-tests-bench-slope.c-workaround-ICE-failure-on-mips-w.patch
deleted file mode 100644
index 105df29..0000000
--- a/poky/meta/recipes-support/libgcrypt/files/0003-tests-bench-slope.c-workaround-ICE-failure-on-mips-w.patch
+++ /dev/null
@@ -1,79 +0,0 @@
-From 7cc702c7b5a1ccc2b0091f3effa1391b6c3030fd Mon Sep 17 00:00:00 2001
-From: Hongxu Jia <hongxu.jia@windriver.com>
-Date: Wed, 16 Aug 2017 10:46:28 +0800
-Subject: [PATCH 3/4] tests/bench-slope.c: workaround ICE failure on mips with
- '-O -g'
-
-Hit an ICE and could reduce it to the following minimal example:
-
-1. Only an array size of 2 caused the issue:
-$ cat > mipgcc-test.c << END
-
-int main (int argc, char **argv)
-{
-        char *pStrArry[ARRAY_SIZE_MAX] = {"hello"};
-        int i = 0;
-
-        while(pStrArry[i] && i<ARRAY_SIZE_MAX)
-        {
-                printf("%s\n", pStrArry[i]);
-                i++;
-        }
-
-        return 0;
-}
-
-END
-
-2. Only -O1 and -g on mips caused the issue:
-$ mips-poky-linux-gcc -O1 -g -o mipgcc-test mipgcc-test.c
-mipgcc-test.c: In function 'main':
-mipgcc-test.c:18:1: internal compiler error: in dwarf2out_var_location,
-at dwarf2out.c:20810
- }
- ^
-Please submit a full bug report,
-with preprocessed source if appropriate.
-See <http://gcc.gnu.org/bugs.html> for instructions
-
-3. The quick workaround is to enlarge the size of the array to a value
-larger
-than 2.
-
-4. Filed a bug with GNU, but it could not be reproduced in their
-environment.
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=60643
-
-Upstream-Status: Inappropriate [oe specific]
-
-Rebase to 1.8.0
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- tests/bench-slope.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/tests/bench-slope.c b/tests/bench-slope.c
-index 75e6e43..4e70842 100644
---- a/tests/bench-slope.c
-+++ b/tests/bench-slope.c
-@@ -1463,7 +1463,7 @@ static struct bench_ops hash_ops = {
- };
- 
- 
--static struct bench_hash_mode hash_modes[] = {
-+static struct bench_hash_mode hash_modes[3] = {
-   {"", &hash_ops},
-   {0},
- };
-@@ -1629,7 +1629,7 @@ static struct bench_ops mac_ops = {
- };
- 
- 
--static struct bench_mac_mode mac_modes[] = {
-+static struct bench_mac_mode mac_modes[3] = {
-   {"", &mac_ops},
-   {0},
- };
--- 
-1.8.3.1
-
diff --git a/poky/meta/recipes-support/libgcrypt/files/no-native-gpg-error.patch b/poky/meta/recipes-support/libgcrypt/files/no-native-gpg-error.patch
new file mode 100644
index 0000000..b9a6078
--- /dev/null
+++ b/poky/meta/recipes-support/libgcrypt/files/no-native-gpg-error.patch
@@ -0,0 +1,18 @@
+Don't depend on a native libgpg-error to build the test driver, as it's
+an optional dependency for some C annotations.
+
+Upstream-Status: Inappropriate
+Signed-off-by: Ross Burton <ross.burton@arm.com>
+
+diff --git a/tests/testdrv.c b/tests/testdrv.c
+index 0ccde326..6d6abd57 100644
+--- a/tests/testdrv.c
++++ b/tests/testdrv.c
+@@ -32,7 +32,6 @@
+ # include <fcntl.h>
+ # include <sys/wait.h>
+ #endif
+-#include <gpg-error.h> /* For some macros.  */
+ 
+ #include "stopwatch.h"
+ 
diff --git a/poky/meta/recipes-support/libgcrypt/files/run-ptest b/poky/meta/recipes-support/libgcrypt/files/run-ptest
index 4818a06..c349ae1 100644
--- a/poky/meta/recipes-support/libgcrypt/files/run-ptest
+++ b/poky/meta/recipes-support/libgcrypt/files/run-ptest
@@ -1,3 +1,9 @@
 #!/bin/sh
 
-make -C build/tests runtest-TESTS
+# Run the tests in regression mode so they are quicker
+export GCRYPT_IN_REGRESSION_TEST=1
+# The 'random' test invokes itself, so we need to be sure that the test
+# directory is on PATH.
+export PATH=$PATH:.
+
+./testdrv --verbose
diff --git a/poky/meta/recipes-support/libgcrypt/libgcrypt_1.10.1.bb b/poky/meta/recipes-support/libgcrypt/libgcrypt_1.10.1.bb
index f710801..b0d88de 100644
--- a/poky/meta/recipes-support/libgcrypt/libgcrypt_1.10.1.bb
+++ b/poky/meta/recipes-support/libgcrypt/libgcrypt_1.10.1.bb
@@ -17,14 +17,13 @@
                     "
 
 DEPENDS = "libgpg-error"
-RDEPENDS:${PN}-ptest = "bash make"
 
 UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
 SRC_URI = "${GNUPG_MIRROR}/libgcrypt/libgcrypt-${PV}.tar.bz2 \
            file://0001-libgcrypt-fix-m4-file-for-oe-core.patch \
-           file://0003-tests-bench-slope.c-workaround-ICE-failure-on-mips-w.patch \
            file://0002-libgcrypt-fix-building-error-with-O2-in-sysroot-path.patch \
            file://0004-tests-Makefile.am-fix-undefined-reference-to-pthread.patch \
+           file://no-native-gpg-error.patch \
            file://run-ptest \
            "
 SRC_URI[sha256sum] = "ef14ae546b0084cd84259f61a55e07a38c3b53afc0f546bffcef2f01baffe9de"
@@ -39,8 +38,6 @@
 EXTRA_OECONF = "--disable-asm"
 EXTRA_OEMAKE:class-target = "LIBTOOLFLAGS='--tag=CC'"
 
-PRIVATE_LIBS:${PN}-ptest:append = " libgcrypt.so.20"
-
 PACKAGECONFIG ??= "capabilities"
 PACKAGECONFIG[capabilities] = "--with-capabilities,--without-capabilities,libcap"
 
@@ -49,25 +46,10 @@
 	rm -f ${S}/m4/gpg-error.m4
 }
 
-# libgcrypt.pc is added locally and thus installed here
-do_install:append() {
-	install -d ${D}/${libdir}/pkgconfig
-	install -m 0644 ${B}/src/libgcrypt.pc ${D}/${libdir}/pkgconfig/
-}
-
 do_install_ptest() {
-    cp -r --preserve=mode,links -v ${S} ${D}${PTEST_PATH}
-    cp -r --preserve=mode,links -v ${B} ${D}${PTEST_PATH}
-    rm ${D}${PTEST_PATH}/build/cipher/gost-s-box
-    rm ${D}${PTEST_PATH}/build/doc/yat2m
-    rm ${D}${PTEST_PATH}/build/libtool
-    rm ${D}${PTEST_PATH}/build/config.status
-    rm ${D}${PTEST_PATH}/build/config.log
-    rm ${D}${PTEST_PATH}/build/src/mpicalc
-    rm ${D}${PTEST_PATH}/${BP}/autom4te* -rf
-    sed -i -e 's/Makefile:.*/Makefile-disabled:/' ${D}${PTEST_PATH}/build/Makefile
-    find ${D}/${PTEST_PATH}/build -name "*.cmake" -or -name "Makefile" \
-    | xargs sed -e "s|${WORKDIR}|${PTEST_PATH}|g" -e "s|${WORKDIR}/recipe-sysroot-native||g" -i
+    cd tests
+    oe_runmake testdrv-build testdrv
+    install testdrv $(srcdir=${S}/tests ./testdrv-build --files | sort | uniq) ${D}${PTEST_PATH}
 }
 
 FILES:${PN}-dev += "${bindir}/hmac256 ${bindir}/dumpsexp"
diff --git a/poky/meta/recipes-support/libmicrohttpd/libmicrohttpd_0.9.75.bb b/poky/meta/recipes-support/libmicrohttpd/libmicrohttpd_0.9.75.bb
index 9c99af7..043fed3 100644
--- a/poky/meta/recipes-support/libmicrohttpd/libmicrohttpd_0.9.75.bb
+++ b/poky/meta/recipes-support/libmicrohttpd/libmicrohttpd_0.9.75.bb
@@ -13,13 +13,10 @@
 
 CFLAGS += "-pthread -D_REENTRANT"
 
-EXTRA_OECONF += "--disable-static --with-gnutls=${STAGING_LIBDIR}/../"
+EXTRA_OECONF += "--disable-static --with-gnutls=${STAGING_LIBDIR}/../ --enable-largefile"
 
 PACKAGECONFIG ?= "curl https"
-PACKAGECONFIG:append:class-target = "\
-        ${@bb.utils.filter('DISTRO_FEATURES', 'largefile', d)} \
-"
-PACKAGECONFIG[largefile] = "--enable-largefile,--disable-largefile,,"
+
 PACKAGECONFIG[curl] = "--enable-curl,--disable-curl,curl,"
 PACKAGECONFIG[https] = "--enable-https,--disable-https,libgcrypt gnutls,"
 
diff --git a/poky/meta/recipes-support/liburcu/liburcu_0.13.1.bb b/poky/meta/recipes-support/liburcu/liburcu_0.13.1.bb
deleted file mode 100644
index 6676334..0000000
--- a/poky/meta/recipes-support/liburcu/liburcu_0.13.1.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-SUMMARY = "Userspace RCU (read-copy-update) library"
-DESCRIPTION = "A userspace RCU (read-copy-update) library. This data \
-synchronization library provides read-side access which scales linearly \
-with the number of cores. "
-HOMEPAGE = "http://lttng.org/urcu"
-BUGTRACKER = "http://lttng.org/project/issues"
-
-LICENSE = "LGPL-2.1-or-later & MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=e548d28737289d75a8f1e01ba2fd7825 \
-                    file://include/urcu/urcu.h;beginline=4;endline=32;md5=4de0d68d3a997643715036d2209ae1d9 \
-                    file://include/urcu/uatomic/x86.h;beginline=4;endline=21;md5=58e50bbd8a2f073bb5500e6554af0d0b"
-
-SRC_URI = "http://lttng.org/files/urcu/userspace-rcu-${PV}.tar.bz2"
-
-SRC_URI[sha256sum] = "3213f33d2b8f710eb920eb1abb279ec04bf8ae6361f44f2513c28c20d3363083"
-
-S = "${WORKDIR}/userspace-rcu-${PV}"
-inherit autotools multilib_header
-
-CPPFLAGS:append:riscv64  = " -pthread -D_REENTRANT"
-
-do_install:append() {
-    oe_multilib_header urcu/config.h
-}
diff --git a/poky/meta/recipes-support/liburcu/liburcu_0.13.2.bb b/poky/meta/recipes-support/liburcu/liburcu_0.13.2.bb
new file mode 100644
index 0000000..6ecf2e2
--- /dev/null
+++ b/poky/meta/recipes-support/liburcu/liburcu_0.13.2.bb
@@ -0,0 +1,24 @@
+SUMMARY = "Userspace RCU (read-copy-update) library"
+DESCRIPTION = "A userspace RCU (read-copy-update) library. This data \
+synchronization library provides read-side access which scales linearly \
+with the number of cores. "
+HOMEPAGE = "http://lttng.org/urcu"
+BUGTRACKER = "http://lttng.org/project/issues"
+
+LICENSE = "LGPL-2.1-or-later & MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=e548d28737289d75a8f1e01ba2fd7825 \
+                    file://include/urcu/urcu.h;beginline=4;endline=32;md5=4de0d68d3a997643715036d2209ae1d9 \
+                    file://include/urcu/uatomic/x86.h;beginline=4;endline=21;md5=58e50bbd8a2f073bb5500e6554af0d0b"
+
+SRC_URI = "http://lttng.org/files/urcu/userspace-rcu-${PV}.tar.bz2"
+
+SRC_URI[sha256sum] = "1213fd9f1b0b74da7de2bb74335b76098db9738fec5d3cdc07c0c524f34fc032"
+
+S = "${WORKDIR}/userspace-rcu-${PV}"
+inherit autotools multilib_header
+
+CPPFLAGS:append:riscv64  = " -pthread -D_REENTRANT"
+
+do_install:append() {
+    oe_multilib_header urcu/config.h
+}
diff --git a/poky/meta/recipes-support/lz4/files/CVE-2021-3520.patch b/poky/meta/recipes-support/lz4/files/CVE-2021-3520.patch
deleted file mode 100644
index 5ac8f66..0000000
--- a/poky/meta/recipes-support/lz4/files/CVE-2021-3520.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-From 8301a21773ef61656225e264f4f06ae14462bca7 Mon Sep 17 00:00:00 2001
-From: Jasper Lievisse Adriaanse <j@jasper.la>
-Date: Fri, 26 Feb 2021 15:21:20 +0100
-Subject: [PATCH] Fix potential memory corruption with negative memmove() size
-
-Upstream-Status: Backport
-https://github.com/lz4/lz4/commit/8301a21773ef61656225e264f4f06ae14462bca7#diff-7055e9cf14c488aea9837aaf9f528b58ee3c22988d7d0d81d172ec62d94a88a7
-CVE: CVE-2021-3520
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- lib/lz4.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-Index: git/lib/lz4.c
-===================================================================
---- git.orig/lib/lz4.c
-+++ git/lib/lz4.c
-@@ -1665,7 +1665,7 @@ LZ4_decompress_generic(
-                  const size_t dictSize         /* note : = 0 if noDict */
-                  )
- {
--    if (src == NULL) { return -1; }
-+    if ((src == NULL) || (outputSize < 0)) { return -1; }
- 
-     {   const BYTE* ip = (const BYTE*) src;
-         const BYTE* const iend = ip + srcSize;
diff --git a/poky/meta/recipes-support/lz4/lz4_1.9.3.bb b/poky/meta/recipes-support/lz4/lz4_1.9.3.bb
deleted file mode 100644
index 129a86b..0000000
--- a/poky/meta/recipes-support/lz4/lz4_1.9.3.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-SUMMARY = "Extremely Fast Compression algorithm"
-DESCRIPTION = "LZ4 is a very fast lossless compression algorithm, providing compression speed at 400 MB/s per core, scalable with multi-cores CPU. It also features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems."
-HOMEPAGE = "https://github.com/lz4/lz4"
-
-LICENSE = "BSD-2-Clause | GPL-2.0-only"
-LIC_FILES_CHKSUM = "file://lib/LICENSE;md5=ebc2ea4814a64de7708f1571904b32cc \
-                    file://programs/COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://LICENSE;md5=d57c0d21cb917fb4e0af2454aa48b956 \
-                    "
-
-PE = "1"
-
-SRCREV = "d44371841a2f1728a3f36839fd4b7e872d0927d3"
-
-SRC_URI = "git://github.com/lz4/lz4.git;branch=release;protocol=https \
-           file://CVE-2021-3520.patch \
-           "
-UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>.*)"
-
-S = "${WORKDIR}/git"
-
-# Fixed in r118, which is larger than the current version.
-CVE_CHECK_IGNORE += "CVE-2014-4715"
-
-EXTRA_OEMAKE = "PREFIX=${prefix} CC='${CC}' CFLAGS='${CFLAGS}' DESTDIR=${D} LIBDIR=${libdir} INCLUDEDIR=${includedir} BUILD_STATIC=no"
-
-do_install() {
-	oe_runmake install
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/lz4/lz4_1.9.4.bb b/poky/meta/recipes-support/lz4/lz4_1.9.4.bb
new file mode 100644
index 0000000..a2a178b
--- /dev/null
+++ b/poky/meta/recipes-support/lz4/lz4_1.9.4.bb
@@ -0,0 +1,29 @@
+SUMMARY = "Extremely Fast Compression algorithm"
+DESCRIPTION = "LZ4 is a very fast lossless compression algorithm, providing compression speed at 400 MB/s per core, scalable with multi-cores CPU. It also features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems."
+HOMEPAGE = "https://github.com/lz4/lz4"
+
+LICENSE = "BSD-2-Clause | GPL-2.0-only"
+LIC_FILES_CHKSUM = "file://lib/LICENSE;md5=5cd5f851b52ec832b10eedb3f01f885a \
+                    file://programs/COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://LICENSE;md5=c5cc3cd6f9274b4d32988096df9c3ec3 \
+                    "
+
+PE = "1"
+
+SRCREV = "5ff839680134437dbf4678f3d0c7b371d84f4964"
+
+SRC_URI = "git://github.com/lz4/lz4.git;branch=release;protocol=https"
+UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>.*)"
+
+S = "${WORKDIR}/git"
+
+# Fixed in r118, which is larger than the current version.
+CVE_CHECK_IGNORE += "CVE-2014-4715"
+
+EXTRA_OEMAKE = "PREFIX=${prefix} CC='${CC}' CFLAGS='${CFLAGS}' DESTDIR=${D} LIBDIR=${libdir} INCLUDEDIR=${includedir} BUILD_STATIC=no"
+
+do_install() {
+	oe_runmake install
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/nettle/nettle_3.8.1.bb b/poky/meta/recipes-support/nettle/nettle_3.8.1.bb
new file mode 100644
index 0000000..bf49132
--- /dev/null
+++ b/poky/meta/recipes-support/nettle/nettle_3.8.1.bb
@@ -0,0 +1,57 @@
+SUMMARY = "A low level cryptographic library"
+DESCRIPTION = "Nettle is a cryptographic library that is designed to fit easily in more or less any context: In crypto toolkits for object-oriented languages (C++, Python, Pike, ...), in applications like LSH or GNUPG, or even in kernel space."
+HOMEPAGE = "http://www.lysator.liu.se/~nisse/nettle/"
+DESCRIPTION = "It tries to solve a problem of providing a common set of \
+cryptographic algorithms for higher-level applications by implementing a \
+context-independent set of cryptographic algorithms"
+SECTION = "libs"
+LICENSE = "LGPL-3.0-or-later | GPL-2.0-or-later"
+
+LIC_FILES_CHKSUM = "file://COPYING.LESSERv3;md5=6a6a8e020838b23406c81b19c1d46df6 \
+                    file://COPYINGv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://serpent-decrypt.c;beginline=14;endline=36;md5=ca0d220bc413e1842ecc507690ce416e \
+                    file://serpent-set-key.c;beginline=14;endline=36;md5=ca0d220bc413e1842ecc507690ce416e"
+
+DEPENDS += "gmp"
+
+SRC_URI = "${GNU_MIRROR}/${BPN}/${BP}.tar.gz \
+           file://Add-target-to-only-build-tests-not-run-them.patch \
+           file://run-ptest \
+           file://check-header-files-of-openssl-only-if-enable_.patch \
+           "
+
+SRC_URI:append:class-target = "\
+            file://dlopen-test.patch \
+            "
+
+SRC_URI[sha256sum] = "364f3e2b77cd7dcde83fd7c45219c834e54b0c75e428b6f894a23d12dd41cbfe"
+
+UPSTREAM_CHECK_REGEX = "nettle-(?P<pver>\d+(\.\d+)+)\.tar"
+
+inherit autotools ptest multilib_header
+
+EXTRA_AUTORECONF += "--exclude=aclocal"
+
+EXTRA_OECONF = "--disable-openssl"
+
+do_compile_ptest() {
+        oe_runmake buildtest
+}
+
+do_install:append() {
+    oe_multilib_header nettle/version.h
+}
+
+do_install_ptest() {
+        install -d ${D}${PTEST_PATH}/testsuite/
+        install ${S}/testsuite/gold-bug.txt ${D}${PTEST_PATH}/testsuite/
+        install ${S}/testsuite/*-test ${D}${PTEST_PATH}/testsuite/
+        # tools can be found in PATH, not in ../tools/
+        sed -i -e 's|../tools/||' ${D}${PTEST_PATH}/testsuite/*-test
+        install ${B}/testsuite/*-test ${D}${PTEST_PATH}/testsuite/
+}
+
+RDEPENDS:${PN}-ptest += "${PN}-dev"
+INSANE_SKIP:${PN}-ptest += "dev-deps"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/nettle/nettle_3.8.bb b/poky/meta/recipes-support/nettle/nettle_3.8.bb
deleted file mode 100644
index 0d6562d..0000000
--- a/poky/meta/recipes-support/nettle/nettle_3.8.bb
+++ /dev/null
@@ -1,57 +0,0 @@
-SUMMARY = "A low level cryptographic library"
-DESCRIPTION = "Nettle is a cryptographic library that is designed to fit easily in more or less any context: In crypto toolkits for object-oriented languages (C++, Python, Pike, ...), in applications like LSH or GNUPG, or even in kernel space."
-HOMEPAGE = "http://www.lysator.liu.se/~nisse/nettle/"
-DESCRIPTION = "It tries to solve a problem of providing a common set of \
-cryptographic algorithms for higher-level applications by implementing a \
-context-independent set of cryptographic algorithms"
-SECTION = "libs"
-LICENSE = "LGPL-3.0-or-later | GPL-2.0-or-later"
-
-LIC_FILES_CHKSUM = "file://COPYING.LESSERv3;md5=6a6a8e020838b23406c81b19c1d46df6 \
-                    file://COPYINGv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://serpent-decrypt.c;beginline=14;endline=36;md5=ca0d220bc413e1842ecc507690ce416e \
-                    file://serpent-set-key.c;beginline=14;endline=36;md5=ca0d220bc413e1842ecc507690ce416e"
-
-DEPENDS += "gmp"
-
-SRC_URI = "${GNU_MIRROR}/${BPN}/${BP}.tar.gz \
-           file://Add-target-to-only-build-tests-not-run-them.patch \
-           file://run-ptest \
-           file://check-header-files-of-openssl-only-if-enable_.patch \
-           "
-
-SRC_URI:append:class-target = "\
-            file://dlopen-test.patch \
-            "
-
-SRC_URI[sha256sum] = "7576c68481c198f644b08c160d1a4850ba9449e308069455b5213319f234e8e6"
-
-UPSTREAM_CHECK_REGEX = "nettle-(?P<pver>\d+(\.\d+)+)\.tar"
-
-inherit autotools ptest multilib_header
-
-EXTRA_AUTORECONF += "--exclude=aclocal"
-
-EXTRA_OECONF = "--disable-openssl"
-
-do_compile_ptest() {
-        oe_runmake buildtest
-}
-
-do_install:append() {
-    oe_multilib_header nettle/version.h
-}
-
-do_install_ptest() {
-        install -d ${D}${PTEST_PATH}/testsuite/
-        install ${S}/testsuite/gold-bug.txt ${D}${PTEST_PATH}/testsuite/
-        install ${S}/testsuite/*-test ${D}${PTEST_PATH}/testsuite/
-        # tools can be found in PATH, not in ../tools/
-        sed -i -e 's|../tools/||' ${D}${PTEST_PATH}/testsuite/*-test
-        install ${B}/testsuite/*-test ${D}${PTEST_PATH}/testsuite/
-}
-
-RDEPENDS:${PN}-ptest += "${PN}-dev"
-INSANE_SKIP:${PN}-ptest += "dev-deps"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/pinentry/pinentry_1.2.0.bb b/poky/meta/recipes-support/pinentry/pinentry_1.2.0.bb
index 169cac8..e6cc71a 100644
--- a/poky/meta/recipes-support/pinentry/pinentry_1.2.0.bb
+++ b/poky/meta/recipes-support/pinentry/pinentry_1.2.0.bb
@@ -32,5 +32,8 @@
 EXTRA_OECONF = " \
     --disable-rpath \
 "
+EXTRA_OECONF:append:libc-musl = " \
+    ac_cv_should_define__xopen_source=yes \
+"
 
 BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-support/rng-tools/rng-tools/rng-tools.service b/poky/meta/recipes-support/rng-tools/rng-tools/rng-tools.service
index 568686e..5ae2fba 100644
--- a/poky/meta/recipes-support/rng-tools/rng-tools/rng-tools.service
+++ b/poky/meta/recipes-support/rng-tools/rng-tools/rng-tools.service
@@ -1,10 +1,9 @@
 [Unit]
 Description=Hardware RNG Entropy Gatherer Daemon
 DefaultDependencies=no
-After=systemd-udev-settle.service
-Before=sysinit.target shutdown.target
-Wants=systemd-udev-settle.service
 Conflicts=shutdown.target
+Before=sysinit.target shutdown.target
+ConditionVirtualization=!container
 
 [Service]
 EnvironmentFile=-@SYSCONFDIR@/default/rng-tools
diff --git a/poky/meta/recipes-support/sqlite/sqlite3_3.39.1.bb b/poky/meta/recipes-support/sqlite/sqlite3_3.39.1.bb
deleted file mode 100644
index 39ddbfb..0000000
--- a/poky/meta/recipes-support/sqlite/sqlite3_3.39.1.bb
+++ /dev/null
@@ -1,14 +0,0 @@
-require sqlite3.inc
-
-LICENSE = "PD"
-LIC_FILES_CHKSUM = "file://sqlite3.h;endline=11;md5=786d3dc581eff03f4fd9e4a77ed00c66"
-
-SRC_URI = "http://www.sqlite.org/2022/sqlite-autoconf-${SQLITE_PV}.tar.gz"
-SRC_URI[sha256sum] = "87c8e7a7fa0c68ab28e208ba49f3a22a56000dbf53a6f90206e2bc5843931cc4"
-
-# -19242 is only an issue in specific development branch commits
-CVE_CHECK_IGNORE += "CVE-2019-19242"
-# This is believed to be iOS specific (https://groups.google.com/g/sqlite-dev/c/U7OjAbZO6LA)
-CVE_CHECK_IGNORE += "CVE-2015-3717"
-# Issue in an experimental extension we don't have/use. Fixed by https://sqlite.org/src/info/b1e0c22ec981cf5f
-CVE_CHECK_IGNORE += "CVE-2021-36690"
diff --git a/poky/meta/recipes-support/sqlite/sqlite3_3.39.2.bb b/poky/meta/recipes-support/sqlite/sqlite3_3.39.2.bb
new file mode 100644
index 0000000..dfef480
--- /dev/null
+++ b/poky/meta/recipes-support/sqlite/sqlite3_3.39.2.bb
@@ -0,0 +1,14 @@
+require sqlite3.inc
+
+LICENSE = "PD"
+LIC_FILES_CHKSUM = "file://sqlite3.h;endline=11;md5=786d3dc581eff03f4fd9e4a77ed00c66"
+
+SRC_URI = "http://www.sqlite.org/2022/sqlite-autoconf-${SQLITE_PV}.tar.gz"
+SRC_URI[sha256sum] = "852be8a6183a17ba47cee0bbff7400b7aa5affd283bf3beefc34fcd088a239de"
+
+# -19242 is only an issue in specific development branch commits
+CVE_CHECK_IGNORE += "CVE-2019-19242"
+# This is believed to be iOS specific (https://groups.google.com/g/sqlite-dev/c/U7OjAbZO6LA)
+CVE_CHECK_IGNORE += "CVE-2015-3717"
+# Issue in an experimental extension we don't have/use. Fixed by https://sqlite.org/src/info/b1e0c22ec981cf5f
+CVE_CHECK_IGNORE += "CVE-2021-36690"
diff --git a/poky/meta/recipes-support/vim/vim.inc b/poky/meta/recipes-support/vim/vim.inc
index 4889646..33a8299 100644
--- a/poky/meta/recipes-support/vim/vim.inc
+++ b/poky/meta/recipes-support/vim/vim.inc
@@ -20,8 +20,8 @@
            file://no-path-adjust.patch \
            "
 
-PV .= ".0115"
-SRCREV = "6747cf1671bd41cddee77c65b3f9a70509f968db"
+PV .= ".0341"
+SRCREV = "92a3d20682d46359bb50a452b4f831659e799155"
 
 # Remove when 8.3 is out
 UPSTREAM_VERSION_UNKNOWN = "1"
diff --git a/poky/scripts/autobuilder-worker-prereq-tests b/poky/scripts/autobuilder-worker-prereq-tests
index 572227d..54fd3c1 100755
--- a/poky/scripts/autobuilder-worker-prereq-tests
+++ b/poky/scripts/autobuilder-worker-prereq-tests
@@ -1,5 +1,7 @@
 #!/bin/bash
 #
+# Copyright OpenEmbedded Contributors
+#
 # Script which can be run on new autobuilder workers to check all needed configuration is present.
 # Designed to be run in a repo where bitbake/oe-core are already present.
 #
diff --git a/poky/scripts/bitbake-prserv-tool b/poky/scripts/bitbake-prserv-tool
index e55d98c..bed97bd 100755
--- a/poky/scripts/bitbake-prserv-tool
+++ b/poky/scripts/bitbake-prserv-tool
@@ -1,5 +1,7 @@
 #!/usr/bin/env bash
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/scripts/bitbake-whatchanged b/poky/scripts/bitbake-whatchanged
index 6f4b268..cdb730d 100755
--- a/poky/scripts/bitbake-whatchanged
+++ b/poky/scripts/bitbake-whatchanged
@@ -1,7 +1,5 @@
 #!/usr/bin/env python3
-# ex:ts=4:sw=4:sts=4:et
-# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
-
+#
 # Copyright (c) 2013 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: GPL-2.0-only
diff --git a/poky/scripts/combo-layer-hook-default.sh b/poky/scripts/combo-layer-hook-default.sh
index 11547a9..fb9651b 100755
--- a/poky/scripts/combo-layer-hook-default.sh
+++ b/poky/scripts/combo-layer-hook-default.sh
@@ -1,5 +1,7 @@
 #!/bin/sh
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Hook to add source component/revision info to commit message
diff --git a/poky/scripts/contrib/ddimage b/poky/scripts/contrib/ddimage
index 7f2ad11..70eee8e 100755
--- a/poky/scripts/contrib/ddimage
+++ b/poky/scripts/contrib/ddimage
@@ -1,5 +1,7 @@
 #!/bin/sh
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/scripts/contrib/dialog-power-control b/poky/scripts/contrib/dialog-power-control
index ad6070c..82c84ba 100755
--- a/poky/scripts/contrib/dialog-power-control
+++ b/poky/scripts/contrib/dialog-power-control
@@ -1,5 +1,7 @@
 #!/bin/sh
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Simple script to show a manual power prompt for when you want to use
diff --git a/poky/scripts/contrib/documentation-audit.sh b/poky/scripts/contrib/documentation-audit.sh
index 36f7f32..7197a2f 100755
--- a/poky/scripts/contrib/documentation-audit.sh
+++ b/poky/scripts/contrib/documentation-audit.sh
@@ -1,5 +1,7 @@
 #!/bin/bash
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Perform an audit of which packages provide documentation and which
diff --git a/poky/scripts/contrib/patchreview.py b/poky/scripts/contrib/patchreview.py
index 85d2169..b22cc07 100755
--- a/poky/scripts/contrib/patchreview.py
+++ b/poky/scripts/contrib/patchreview.py
@@ -1,5 +1,7 @@
 #! /usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/scripts/contrib/test_build_time_worker.sh b/poky/scripts/contrib/test_build_time_worker.sh
index 478e8b0..a2879d2 100755
--- a/poky/scripts/contrib/test_build_time_worker.sh
+++ b/poky/scripts/contrib/test_build_time_worker.sh
@@ -1,5 +1,7 @@
 #!/bin/bash
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # This is an example script to be used in conjunction with test_build_time.sh
diff --git a/poky/scripts/contrib/verify-homepage.py b/poky/scripts/contrib/verify-homepage.py
index 7bffa78..a90b501 100755
--- a/poky/scripts/contrib/verify-homepage.py
+++ b/poky/scripts/contrib/verify-homepage.py
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # This script can be used to verify HOMEPAGE values for all recipes in
diff --git a/poky/scripts/cp-noerror b/poky/scripts/cp-noerror
index ab617c5..13a098e 100755
--- a/poky/scripts/cp-noerror
+++ b/poky/scripts/cp-noerror
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Allow copying of $1 to $2 but if files in $1 disappear during the copy operation,
diff --git a/poky/scripts/gen-lockedsig-cache b/poky/scripts/gen-lockedsig-cache
index cc674f9..023015e 100755
--- a/poky/scripts/gen-lockedsig-cache
+++ b/poky/scripts/gen-lockedsig-cache
@@ -1,5 +1,8 @@
 #!/usr/bin/env python3
 #
+#
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/scripts/git b/poky/scripts/git
index 644055e..689adbf 100755
--- a/poky/scripts/git
+++ b/poky/scripts/git
@@ -1,5 +1,9 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 # Wrapper around 'git' that doesn't think we are root
 
 import os
diff --git a/poky/scripts/lib/argparse_oe.py b/poky/scripts/lib/argparse_oe.py
index 94a4ac5..176b732 100644
--- a/poky/scripts/lib/argparse_oe.py
+++ b/poky/scripts/lib/argparse_oe.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/scripts/lib/devtool/menuconfig.py b/poky/scripts/lib/devtool/menuconfig.py
index 95384c5..d87a01e 100644
--- a/poky/scripts/lib/devtool/menuconfig.py
+++ b/poky/scripts/lib/devtool/menuconfig.py
@@ -3,6 +3,8 @@
 # Copyright (C) 2018 Xilinx
 # Written by: Chandana Kalluri <ckalluri@xilinx.com>
 #
+# SPDX-License-Identifier: MIT
+#
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
 # published by the Free Software Foundation.
diff --git a/poky/scripts/lib/devtool/standard.py b/poky/scripts/lib/devtool/standard.py
index c98bfe8..e3b74ab 100644
--- a/poky/scripts/lib/devtool/standard.py
+++ b/poky/scripts/lib/devtool/standard.py
@@ -1975,9 +1975,19 @@
                         shutil.rmtree(srctreebase)
                     else:
                         # We don't want to risk wiping out any work in progress
-                        logger.info('Leaving source tree %s as-is; if you no '
-                                    'longer need it then please delete it manually'
-                                    % srctreebase)
+                        if srctreebase.startswith(os.path.join(config.workspace_path, 'sources')):
+                            from datetime import datetime
+                            preservesrc = os.path.join(config.workspace_path, 'attic', 'sources', "{}.{}".format(pn,datetime.now().strftime("%Y%m%d%H%M%S")))
+                            logger.info('Preserving source tree in %s\nIf you no '
+                                        'longer need it then please delete it manually.\n'
+                                        'It is also possible to reuse it via devtool source tree argument.'
+                                        % preservesrc)
+                            bb.utils.mkdirhier(os.path.dirname(preservesrc))
+                            shutil.move(srctreebase, preservesrc)
+                        else:
+                            logger.info('Leaving source tree %s as-is; if you no '
+                                        'longer need it then please delete it manually'
+                                        % srctreebase)
             else:
                 # This is unlikely, but if it's empty we can just remove it
                 os.rmdir(srctreebase)
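
The devtool change above stops discarding workspace-managed source trees on reset: instead of only logging a note, trees under the workspace "sources" directory are moved into a timestamped "attic/sources" directory so work in progress can be recovered or reused. A minimal sketch of the preserved path it builds, with a hypothetical workspace location and recipe name:

    import os
    from datetime import datetime

    workspace_path = "build/workspace"    # hypothetical workspace location
    pn = "example-recipe"                 # hypothetical recipe name

    preservesrc = os.path.join(workspace_path, "attic", "sources",
                               "{}.{}".format(pn, datetime.now().strftime("%Y%m%d%H%M%S")))
    print(preservesrc)   # e.g. build/workspace/attic/sources/example-recipe.20220801093000
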
diff --git a/poky/scripts/lib/recipetool/create_buildsys_python.py b/poky/scripts/lib/recipetool/create_buildsys_python.py
index 5686a62..4675cc6 100644
--- a/poky/scripts/lib/recipetool/create_buildsys_python.py
+++ b/poky/scripts/lib/recipetool/create_buildsys_python.py
@@ -565,7 +565,7 @@
         pkgdata_dir = tinfoil.config_data.getVar('PKGDATA_DIR')
 
         ldata = tinfoil.config_data.createCopy()
-        bb.parse.handle('classes/python3-dir.bbclass', ldata, True)
+        bb.parse.handle('classes-recipe/python3-dir.bbclass', ldata, True)
         python_sitedir = ldata.getVar('PYTHON_SITEPACKAGES_DIR')
 
         dynload_dir = os.path.join(os.path.dirname(python_sitedir), 'lib-dynload')
diff --git a/poky/scripts/lib/wic/ksparser.py b/poky/scripts/lib/wic/ksparser.py
index a49b7b9..d1e546b 100644
--- a/poky/scripts/lib/wic/ksparser.py
+++ b/poky/scripts/lib/wic/ksparser.py
@@ -159,7 +159,7 @@
         part.add_argument('--fstype', default='vfat',
                           choices=('ext2', 'ext3', 'ext4', 'btrfs',
                                    'squashfs', 'vfat', 'msdos', 'erofs',
-                                   'swap'))
+                                   'swap', 'none'))
         part.add_argument('--mkfs-extraopts', default='')
         part.add_argument('--label')
         part.add_argument('--use-label', action='store_true')
diff --git a/poky/scripts/lib/wic/plugins/source/bootimg-partition.py b/poky/scripts/lib/wic/plugins/source/bootimg-partition.py
index 5dbe255..8956162 100644
--- a/poky/scripts/lib/wic/plugins/source/bootimg-partition.py
+++ b/poky/scripts/lib/wic/plugins/source/bootimg-partition.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # DESCRIPTION
diff --git a/poky/scripts/lib/wic/plugins/source/empty.py b/poky/scripts/lib/wic/plugins/source/empty.py
index 041617d..9c492ca 100644
--- a/poky/scripts/lib/wic/plugins/source/empty.py
+++ b/poky/scripts/lib/wic/plugins/source/empty.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/scripts/lib/wic/plugins/source/isoimage-isohybrid.py b/poky/scripts/lib/wic/plugins/source/isoimage-isohybrid.py
index afc9ea0..607356a 100644
--- a/poky/scripts/lib/wic/plugins/source/isoimage-isohybrid.py
+++ b/poky/scripts/lib/wic/plugins/source/isoimage-isohybrid.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # DESCRIPTION
diff --git a/poky/scripts/lib/wic/plugins/source/rawcopy.py b/poky/scripts/lib/wic/plugins/source/rawcopy.py
index 7c90cd3..ccf3325 100644
--- a/poky/scripts/lib/wic/plugins/source/rawcopy.py
+++ b/poky/scripts/lib/wic/plugins/source/rawcopy.py
@@ -1,4 +1,6 @@
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
@@ -23,6 +25,10 @@
 
     @staticmethod
     def do_image_label(fstype, dst, label):
+        # don't create label when fstype is none
+        if fstype == 'none':
+            return
+
         if fstype.startswith('ext'):
             cmd = 'tune2fs -L %s %s' % (label, dst)
         elif fstype in ('msdos', 'vfat'):
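
Together with the ksparser change above, this allows a .wks 'part' entry that uses the rawcopy source plugin to declare --fstype=none, in which case wic skips the labelling step for that partition. A small standalone sketch of the dispatch (not the real plugin; only the branches visible in this hunk are reproduced):

    def label_command(fstype, dst, label):
        # new behaviour: partitions declared with --fstype=none get no labelling step
        if fstype == 'none':
            return None
        if fstype.startswith('ext'):
            return 'tune2fs -L %s %s' % (label, dst)
        raise NotImplementedError("other fstypes are handled by the real plugin")

    print(label_command('none', 'part.img', 'data'))    # -> None (skipped)
    print(label_command('ext4', 'part.img', 'root'))    # -> tune2fs -L root part.img
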
diff --git a/poky/scripts/oe-debuginfod b/poky/scripts/oe-debuginfod
index 9e5482d..b525310 100755
--- a/poky/scripts/oe-debuginfod
+++ b/poky/scripts/oe-debuginfod
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: MIT
 #
 
diff --git a/poky/scripts/oe-gnome-terminal-phonehome b/poky/scripts/oe-gnome-terminal-phonehome
index b6b9a38..1352a98 100755
--- a/poky/scripts/oe-gnome-terminal-phonehome
+++ b/poky/scripts/oe-gnome-terminal-phonehome
@@ -1,5 +1,7 @@
 #!/bin/sh
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Gnome terminal won't tell us which PID a given command is run as 
diff --git a/poky/scripts/oe-pkgdata-browser b/poky/scripts/oe-pkgdata-browser
index a3a3819..c152c82 100755
--- a/poky/scripts/oe-pkgdata-browser
+++ b/poky/scripts/oe-pkgdata-browser
@@ -1,4 +1,9 @@
 #! /usr/bin/env python3
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 
 import os, sys, enum, ast
 
diff --git a/poky/scripts/oe-pylint b/poky/scripts/oe-pylint
index 7cc1ccb..5ad7283 100755
--- a/poky/scripts/oe-pylint
+++ b/poky/scripts/oe-pylint
@@ -1,5 +1,7 @@
 #!/bin/bash
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Run the pylint3 against our common python module spaces and print a report of potential issues
diff --git a/poky/scripts/oe-setup-builddir b/poky/scripts/oe-setup-builddir
index 5d64416..d3c7f94 100755
--- a/poky/scripts/oe-setup-builddir
+++ b/poky/scripts/oe-setup-builddir
@@ -38,16 +38,18 @@
 
 cd "$BUILDDIR"
 
-if [ -f "$BUILDDIR/conf/templateconf.cfg" ]; then
+if [ -f "$BUILDDIR/conf/templateconf.cfg" -a -z "$TEMPLATECONF" ]; then
     TEMPLATECONF=$(cat "$BUILDDIR/conf/templateconf.cfg")
+    # The following two are no longer valid; unsetting them will automatically get them replaced
+    # with correct ones.
+    if [ $TEMPLATECONF = "meta/conf" -o $TEMPLATECONF = "meta-poky/conf" ]; then
+        unset TEMPLATECONF
+        rm $BUILDDIR/conf/templateconf.cfg
+    fi
 fi
 
 . "$OEROOT"/.templateconf
 
-if [ ! -f "$BUILDDIR/conf/templateconf.cfg" ]; then
-    echo "$TEMPLATECONF" >"$BUILDDIR/conf/templateconf.cfg"
-fi
-
 # 
 # $TEMPLATECONF can point to a directory for the template local.conf & bblayers.conf
 #
@@ -61,6 +63,11 @@
             echo >&2 "Error: TEMPLATECONF value points to nonexistent directory '$TEMPLATECONF'"
             exit 1
         fi
+        templatesdir=$(python3 -c "import sys; print(sys.argv[1].strip('/').split('/')[-2])" $TEMPLATECONF)
+        if [ ! -f "$TEMPLATECONF/../../layer.conf" -o $templatesdir != "templates" ]; then
+            echo >&2 "Error: TEMPLATECONF value (which is $TEMPLATECONF) must point to meta-some-layer/conf/templates/template-name"
+            exit 1
+        fi
     fi
     OECORELAYERCONF="$TEMPLATECONF/bblayers.conf.sample"
     OECORELOCALCONF="$TEMPLATECONF/local.conf.sample"
@@ -129,3 +136,7 @@
 fi
 [ ! -r "$OECORENOTESCONF" ] || cat "$OECORENOTESCONF"
 unset OECORENOTESCONF
+
+if [ ! -f "$BUILDDIR/conf/templateconf.cfg" ]; then
+    echo "$TEMPLATECONF" >"$BUILDDIR/conf/templateconf.cfg"
+fi
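
The new check above validates TEMPLATECONF by requiring a layer.conf two levels up and a parent directory literally named "templates"; the parent name is extracted with a python3 one-liner. A small sketch of what that expression evaluates to, with hypothetical layer paths:

    def templates_parent(templateconf):
        # mirrors: python3 -c "import sys; print(sys.argv[1].strip('/').split('/')[-2])"
        return templateconf.strip('/').split('/')[-2]

    print(templates_parent("meta-example/conf/templates/default"))   # -> "templates"    (accepted)
    print(templates_parent("meta-example/conf"))                      # -> "meta-example" (rejected)
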
diff --git a/poky/scripts/oe-setup-layers b/poky/scripts/oe-setup-layers
new file mode 100755
index 0000000..6ecaffe
--- /dev/null
+++ b/poky/scripts/oe-setup-layers
@@ -0,0 +1,78 @@
+#!/usr/bin/env python3
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
+
+# This file was copied from poky(or oe-core)/scripts/oe-setup-layers by running
+#
+# bitbake-layers create-layers-setup destdir
+#
+# It is recommended that you do not modify this file directly, but rather re-run the above command to get the freshest upstream copy.
+
+import argparse
+import json
+import os
+import subprocess
+
+def _do_checkout(args, json):
+    layers = json['sources']
+    for l_name in layers:
+        l_data = layers[l_name]
+        layerdir = os.path.abspath(os.path.join(args['destdir'], l_data['path']))
+
+        if 'contains_this_file' in l_data.keys():
+            force_arg = 'force_bootstraplayer_checkout'
+            if not args[force_arg]:
+                print('Note: not checking out source {layer}, use {layerflag} to override.'.format(layer=l_name, layerflag='--force-bootstraplayer-checkout'))
+                continue
+        l_remote = l_data['git-remote']
+        rev = l_remote['rev']
+        desc = l_remote['describe']
+        if not desc:
+            desc = rev[:10]
+        branch = l_remote['branch']
+        remotes = l_remote['remotes']
+
+        print('\nSetting up source {}, revision {}, branch {}'.format(l_name, desc, branch))
+        cmd = 'git init -q {}'.format(layerdir)
+        print("Running '{}'".format(cmd))
+        subprocess.check_output(cmd, shell=True)
+
+        for remote in remotes:
+            cmd = "git remote remove {} > /dev/null 2>&1; git remote add {} {}".format(remote, remote, remotes[remote]['uri'])
+            print("Running '{}' in {}".format(cmd, layerdir))
+            subprocess.check_output(cmd, shell=True, cwd=layerdir)
+
+            cmd = "git fetch -q {} || true".format(remote)
+            print("Running '{}' in {}".format(cmd, layerdir))
+            subprocess.check_output(cmd, shell=True, cwd=layerdir)
+
+        cmd = 'git checkout -q {}'.format(rev)
+        print("Running '{}' in {}".format(cmd, layerdir))
+        subprocess.check_output(cmd, shell=True, cwd=layerdir)
+
+parser = argparse.ArgumentParser(description="A self contained python script that fetches all the needed layers and sets them to correct revisions using data in a json format from a separate file. The json data can be created from an active build directory with 'bitbake-layers create-layers-setup destdir' and there's a sample file and a schema in meta/files/")
+
+parser.add_argument('--force-bootstraplayer-checkout', action='store_true',
+        help='Force the checkout of the layer containing this file (by default it is presumed that as this script is in it, the layer is already in place).')
+
+try:
+    defaultdest = os.path.dirname(subprocess.check_output('git rev-parse --show-toplevel', universal_newlines=True, shell=True, cwd=os.path.dirname(__file__)))
+except subprocess.CalledProcessError as e:
+    defaultdest = os.path.abspath(".")
+
+parser.add_argument('--destdir', default=defaultdest, help='Where to check out the layers (default is {defaultdest}).'.format(defaultdest=defaultdest))
+parser.add_argument('--jsondata', default=__file__+".json", help='File containing the layer data in json format (default is {defaultjson}).'.format(defaultjson=__file__+".json"))
+
+args = parser.parse_args()
+
+with open(args.jsondata) as f:
+    json = json.load(f)
+
+supported_versions = ["1.0"]
+if json["version"] not in supported_versions:
+    raise Exception("File {} has version {}, which is not in supported versions: {}".format(args.jsondata, json["version"], supported_versions))
+
+_do_checkout(vars(args), json)
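
Everything _do_checkout() consumes comes from the companion json file. Inferred from the code above, a minimal example of that layout, written here as a Python literal with hypothetical layer name, path, revision and URI (the real sample file and schema ship in meta/files/):

    layers_setup = {
        "version": "1.0",                    # checked against supported_versions
        "sources": {
            "meta-example": {                # hypothetical layer name
                "path": "meta-example",      # checked out relative to --destdir
                "git-remote": {
                    "rev": "0123456789abcdef0123456789abcdef01234567",
                    "describe": "",          # empty -> the script falls back to rev[:10]
                    "branch": "master",
                    "remotes": {
                        "origin": {"uri": "https://git.example.com/meta-example"},
                    },
                },
                # "contains_this_file": True # only on the layer that ships this script
            },
        },
    }
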
diff --git a/poky/scripts/oe-time-dd-test.sh b/poky/scripts/oe-time-dd-test.sh
index 386de83..81748b8 100755
--- a/poky/scripts/oe-time-dd-test.sh
+++ b/poky/scripts/oe-time-dd-test.sh
@@ -1,5 +1,9 @@
 #!/bin/bash
 #
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: MIT
+#
 # oe-time-dd-test records how much time it takes to 
 # write <count> number of kilobytes to the filesystem.
 # It also records the number of processes that are in
diff --git a/poky/scripts/oe-trim-schemas b/poky/scripts/oe-trim-schemas
index bf77c8c..e3b26e2 100755
--- a/poky/scripts/oe-trim-schemas
+++ b/poky/scripts/oe-trim-schemas
@@ -1,5 +1,7 @@
 #! /usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/scripts/oepydevshell-internal.py b/poky/scripts/oepydevshell-internal.py
index e3c35bb..3bf7df1 100755
--- a/poky/scripts/oepydevshell-internal.py
+++ b/poky/scripts/oepydevshell-internal.py
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/scripts/pythondeps b/poky/scripts/pythondeps
index be21dd8..48277ec 100755
--- a/poky/scripts/pythondeps
+++ b/poky/scripts/pythondeps
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Determine dependencies of python scripts or available python modules in a search path.
diff --git a/poky/scripts/relocate_sdk.py b/poky/scripts/relocate_sdk.py
index 4ed8bfc..8a72872 100755
--- a/poky/scripts/relocate_sdk.py
+++ b/poky/scripts/relocate_sdk.py
@@ -104,11 +104,12 @@
             if (len(new_dl_path) >= p_filesz):
                 print("ERROR: could not relocate %s, interp size = %i and %i is needed." \
                     % (elf_file_name, p_memsz, len(new_dl_path) + 1))
-                break
+                return False
             dl_path = new_dl_path + b("\0") * (p_filesz - len(new_dl_path))
             f.seek(p_offset)
             f.write(dl_path)
             break
+    return True
 
 def change_dl_sysdirs(elf_file_name):
     if arch == 32:
@@ -222,6 +223,7 @@
 
 executables_list = sys.argv[3:]
 
+errors = False
 for e in executables_list:
     perms = os.stat(e)[stat.ST_MODE]
     if os.access(e, os.W_OK|os.R_OK):
@@ -247,7 +249,8 @@
         arch = get_arch()
         if arch:
             parse_elf_header()
-            change_interpreter(e)
+            if not change_interpreter(e):
+                errors = True
             change_dl_sysdirs(e)
 
     """ change permissions back """
@@ -260,3 +263,6 @@
         print("New file size for %s is different. Looks like a relocation error!", e)
         sys.exit(-1)
 
+if errors:
+    print("Relocation of one or more executables failed.")
+    sys.exit(-1)
diff --git a/poky/scripts/runqemu.README b/poky/scripts/runqemu.README
index da9abd7..e5f4b46 100644
--- a/poky/scripts/runqemu.README
+++ b/poky/scripts/runqemu.README
@@ -1,12 +1,12 @@
 Using OE images with QEMU
 =========================
 
-OE-Core can generate qemu bootable kernels and images with can be used 
+OE-Core can generate qemu bootable kernels and images which can be used
 on a desktop system. The scripts currently support booting ARM, MIPS, PowerPC
-and x86 (32 and 64 bit) images. The scripts can be used within the OE build 
-system or externaly.
+and x86 (32 and 64 bit) images. The scripts can be used within the OE build
+system or externally.
 
-The runqemu script is run as: 
+The runqemu script is run as:
 
    runqemu <machine> <zimage> <filesystem>
 
@@ -15,13 +15,13 @@
    <machine> is the machine/architecture to use (qemuarm/qemumips/qemuppc/qemux86/qemux86-64)
    <zimage> is the path to a kernel (e.g. zimage-qemuarm.bin)
    <filesystem> is the path to an ext2 image (e.g. filesystem-qemuarm.ext2) or an nfs directory
-   
-If <machine> isn't specified, the script will try to detect the machine name 
+
+If <machine> isn't specified, the script will try to detect the machine name
 from the name of the <zimage> file.
 
 If <filesystem> isn't specified, nfs booting will be assumed.
 
-When used within the build system, it will default to qemuarm, ext2 and the last kernel and 
+When used within the build system, it will default to qemuarm, ext2 and the last kernel and
 core-image-sato-sdk image built by the build system. If an sdk image isn't present it will look
 for sato and minimal images.
 
@@ -31,7 +31,7 @@
 Notes
 =====
 
- - The scripts run qemu using sudo. Change perms on /dev/net/tun to 
+ - The scripts run qemu using sudo. Change perms on /dev/net/tun to
    run as non root. The runqemu-gen-tapdevs script can also be used by
    root to prepopulate the appropriate network devices.
  - You can access the host computer at 192.168.7.1 within the image.
diff --git a/poky/scripts/sstate-diff-machines.sh b/poky/scripts/sstate-diff-machines.sh
index 8b64e11..5ed413b 100755
--- a/poky/scripts/sstate-diff-machines.sh
+++ b/poky/scripts/sstate-diff-machines.sh
@@ -1,5 +1,7 @@
 #!/bin/bash
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Used to compare sstate checksums between MACHINES.
diff --git a/poky/scripts/sstate-sysroot-cruft.sh b/poky/scripts/sstate-sysroot-cruft.sh
index 9c948e9..b2002ba 100755
--- a/poky/scripts/sstate-sysroot-cruft.sh
+++ b/poky/scripts/sstate-sysroot-cruft.sh
@@ -1,5 +1,7 @@
 #!/bin/sh
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Used to find files installed in sysroot which are not tracked by sstate manifest
diff --git a/poky/scripts/sysroot-relativelinks.py b/poky/scripts/sysroot-relativelinks.py
index 56e36f3..ccb3c86 100755
--- a/poky/scripts/sysroot-relativelinks.py
+++ b/poky/scripts/sysroot-relativelinks.py
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/scripts/task-time b/poky/scripts/task-time
index bcd1e25..8f71b29 100755
--- a/poky/scripts/task-time
+++ b/poky/scripts/task-time
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
diff --git a/poky/scripts/verify-bashisms b/poky/scripts/verify-bashisms
index ec2374f..fc3677c 100755
--- a/poky/scripts/verify-bashisms
+++ b/poky/scripts/verify-bashisms
@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
 #
+# Copyright OpenEmbedded Contributors
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #